diff --git "a/results_retrieval/emb_potion_r32M/retrieval_recursivecharacterchunker_docling.json" "b/results_retrieval/emb_potion_r32M/retrieval_recursivecharacterchunker_docling.json" deleted file mode 100644--- "a/results_retrieval/emb_potion_r32M/retrieval_recursivecharacterchunker_docling.json" +++ /dev/null @@ -1,22210 +0,0 @@ -[ - { - "top_k": 10, - "mrr": 0.4573161375661376, - "recall": 0.7, - "count_empty_strings": 37 - }, - [ - { - "references": { - "source_file": "uksi_20200438_en.pdf", - "query": "What does \"new account\" mean according to the international tax compliance from 2020 ?", - "target_page": 2, - "target_passage": "“new account” means a financial account maintained by a reporting financial institution opened on or after 13th May 2020", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## S T A T U T O R Y I N S T R U M E N T S\n\n## 2020 No. 438\n\n## TAXES\n\n## The International Tax Compliance (Amendment) Regulations 2020\n\nMade\n\n-\n\n-\n\n-\n\n-\n\n20th April 2020\n\nLaid before the House of Commons\n\n21st April 2020\n\nComing into force\n\n- -\n\n13th May 2020\n\nThe Treasury make these Regulations in exercise of the powers conferred by section 222 of the Finance Act 2013( a ):\n\n## Citation and commencement\n\n- 1. These Regulations may be cited as the International Tax Compliance (Amendment) Regulations 2020 and come into force on 13th May 2020.\n\n## Amendments to the International Tax Compliance Regulations 2015\n\n- 2. 
-(1) The International Tax Compliance Regulations 2015( b ) are amended as follows.\n- (2) In regulation 1(3)(b)(i), for '16th May 2019' substitute '19th April 2020'( c ).\n- (3) In regulation 3(4A)(a), at the beginning insert 'subject to regulation 24(3)'.\n- (4) In regulation 24-\n- (a) in the table in paragraph (2), in the column headed 'the CRS'-\n- (i) at the beginning of the entry for 'new account' insert 'subject to paragraph (3)', and\n- (ii) at the beginning of the entry for 'pre-existing account' insert 'subject to regulation 3(4A)(a) and paragraph (3)', and\n- (b) after paragraph (2) insert-\n- '(3) In respect of the accounts listed in paragraph (4)-", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20200438_en.pdf" - }, - { - "text": "- (a) 'new account' means a financial account maintained by a reporting financial institution( a ) opened on or after 13th May 2020;\n - (b) 'pre-existing account' means-\n - (i) a financial account maintained by a reporting financial institution as of 12th May 2020, or\n - (ii) a financial account within Section VIII(C)(9)(b) of Annex 1 of the DAC( b ), but in the application of that provision the references to 'subparagraph C(9)(a)' are to be read as references to paragraph (i) of this sub-paragraph.\n - (4) The accounts are-\n - (a) non-registered pension arrangements where the annual contributions are limited to £50,000 and funds contributed cannot be accessed before the age of 55 except in circumstances of serious ill health;\n - (b) Premium Bonds issued by the UK National Savings and Investments;\n - (c) Fixed Interest Savings Certificates issued by the UK National Savings and Investments; and\n - (d) Index Linked Savings Certificates issued by the UK National Savings and Investments.'.\n - (5) In Schedule 2, omit paragraphs 2, 6, 8 and 9.\n\n## Transitional provision\n\n - 3. 
-(1) For the purposes of the International Tax Compliance Regulations 2015, in relation to an account that by virtue of regulation 2(5) ceases to be an excluded account, the calendar year 2020 is treated as beginning on 13th May 2020 and ending on 31st December 2020.\n - (2) Where in consequence of paragraph (1) it is necessary to apportion an amount for the calendar year 2020 to the period ending immediately before 13th May 2020 and the period beginning with that date, it is to be apportioned-\n - (a) on a time basis according to the respective length of the periods, or\n - (b) if that method would produce a result that is unjust or unreasonable, on a just and reasonable basis.\n\nDavid Rutley Maggie Throup\n\n20th April 2020\n\nTwo of the Lords Commissioners of Her Majesty's Treasury\n\n## EXPLANATORY NOTE\n\n(This note is not part of the Regulations)\n\nThe Regulations amend the International Tax Compliance Regulations 2015 ('the principal Regulations') which give effect to agreements and arrangements reached between the United Kingdom and other jurisdictions to improve international tax compliance.\n\nRegulation 2(2) extends the application of the principal Regulations to arrangements entered into by the United Kingdom for the exchange of financial account information with other jurisdictions up to 19th April 2020, the date before the Regulations are made.\n\nRegulation 2(5) omits various accounts from the category of excluded accounts. Regulation 2(4)(b) amends the definitions of 'new account' and 'pre-existing account' in relation to those", - "page_start": 1, - "page_end": 1, - "source_file": "uksi_20200438_en.pdf" - }, - { - "text": "At December 31, 2004, we had $93 million of deferred tax assets and $1.9 billion of deferred tax liabilities. 
Except for certain New Jersey state net operating losses and certain other New Jersey state deferred tax assets, we believe that it is more likely than not that our deferred tax assets are fully realizable because of the future reversal of existing taxable temporary differences and future projected taxable income. The valuation allowance at December 31, 2004 related to the New Jersey deferred tax assets was $6 million.\n\nOur income tax returns are subject to examination by the Internal Revenue Service ('IRS') and other tax authorities. While positions taken in tax returns are sometimes subject to uncertainty in the tax laws, we do not take such positions unless we have 'substantial authority' to do so under the Internal Revenue Code and applicable regulations. We may take positions on our tax returns based on substantial authority that are not ultimately accepted by the IRS.\n\nWe assess such potential unfavorable outcomes based on the criteria of Statement of Financial Accounting Standards No. 5, 'Accounting for Contingencies' ('SFAS 5'). We establish a tax reserve if an unfavorable outcome is probable and the amount of the unfavorable outcome can be reasonably estimated. We assess the potential outcomes of tax uncertainties on a quarterly basis. In determining whether the probable criterion of SFAS 5 is met, we presume that the taxing authority will focus on the exposure and we assess the probable outcome of a particular issue based upon the relevant legal and technical merits. We also apply our judgment regarding the potential actions by the tax authorities and resolution through the settlement process.\n\nWe maintain required tax reserves until such time as the underlying issue is resolved. When actual results differ from reserve estimates, we adjust the income tax provision and our tax reserves in the period resolved. For tax years that are examined by taxing authorities, we adjust tax reserves in the year the tax examinations are settled. 
For tax years that are not examined by taxing authorities, we adjust tax reserves in the year that the statute of limitations expires. Our estimate of the\n\npotential outcome for any uncertain tax issue is highly judgmental, and we believe we have adequately provided for any reasonable and foreseeable outcomes related to uncertain tax matters.\n\nIn December 2002, we settled the IRS audit of the Company's 1995 and 1996 tax returns, which did not result in a material impact on our results of operations or financial position. During 2003, we filed amended returns for tax years subsequent to 1996 to reflect the impact of the IRS audits of the 1993 through 1996 tax years on those subsequent years. In the fourth quarter of 2003, the statutes of limitations expired for the 1997 through 1999 tax years, resulting in a reduction of our tax reserves of $13 million and a corresponding reduction in our provision for income taxes. In the third quarter of 2004, the statute of limitations expired for our 2000 tax return, resulting in a reduction of our tax reserves of $6 million and a corresponding reduction in our provision for income taxes. Subsequent to December 31, 2004, we received notice that the IRS will audit our 2001 and 2002 tax returns, and the tax returns for years after 2002 are subject to possible future examination.\n\nWe classify reserves for tax uncertainties within 'other accrued liabilities' in the accompanying consolidated balance sheets, separate from any related income tax payable or deferred income taxes. 
Reserve amounts may relate to the deductibility of an item, as well as potential interest associated with those items.", - "page_start": 44, - "page_end": 44, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## Income Tax and Other Taxes\n\nWe collect, pay and accrue significant amounts of income and other taxes such as federal and provincial sales tax, employment taxes and property taxes, for and to various taxation authorities.\n\nWe have recorded significant amounts of deferred income tax liabilities and current income tax expense, and calculated these amounts based on substantively enacted income tax rates in effect at the relevant time. A legislative change in these rates could have a material impact on the amounts recorded and payable in the future.\n\nWe have also recorded the benefit of income and other tax positions that are more likely than not of being sustained on examination and are measured at the amount expected to be realized when we have an ultimate settlement with taxation authorities.\n\nWhile we believe we have paid and provided for adequate amounts of tax, our business is complex and significant judgement is required in interpreting tax legislation and regulations. Our tax filings are subject to audit by the relevant government revenue authorities and the results of the government audit could materially change the amount of our actual income tax expense, income taxes payable or receivable, other taxes payable or receivable and deferred income tax assets or liabilities and could, in certain circumstances, result in an assessment of interest and penalties.", - "page_start": 80, - "page_end": 80, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Employee Share Accumulation Plan\n\nEmployees voluntarily participate in the share accumulation plan by contributing a specified percentage of their regular earnings. 
We match employee contributions up to a certain amount, and record our contributions as a compensation expense in the year we make them.\n\nSee note 24 for more information about our stock-based compensation and other stock-based payments.\n\n## Income Taxes\n\nIncome tax expense includes both current and deferred taxes. We use judgment to interpret tax rules and regulations to calculate the expense recorded each period. We recognize income tax expense in net income unless it relates to an item recognized directly in equity or other comprehensive income.\n\nCurrent tax expense is tax we expect to pay or receive based on our taxable income or loss during the year. We calculate the current tax expense using tax rates enacted or substantively enacted at the reporting date, and including any adjustment to taxes payable or receivable related to previous years.\n\nDeferred tax assets and liabilities arise from temporary differences between the carrying amounts of the assets and liabilities we record in our consolidated statements of financial position and their respective tax bases. We calculate deferred tax assets and liabilities using enacted or substantively enacted tax rates that will apply in the years the temporary differences are expected to reverse.\n\nDeferred tax assets and liabilities are offset if there is a legally enforceable right to offset current tax liabilities and assets and they relate to income taxes levied by the same authority on:\n\n - GLYPH<129> the same taxable entity, or\n - GLYPH<129> different tax entities where these entities intend to settle current tax liabilities and assets on a net basis or the tax assets and liabilities will be realized simultaneously.\n\nWe recognize a deferred tax asset for unused losses, tax credits and deductible temporary differences to the extent that it is probable that future taxable income will be available to use the asset. 
We use judgement to evaluate whether we can recover a deferred tax asset based on our assessment on existing tax laws, estimates of future profitability and tax planning strategies.\n\nWe rely on estimates and assumptions when determining the amount of current and deferred tax, and take into account the impact of uncertain tax positions and whether additional taxes and interest may be due. If new information becomes available and changes our judgment on the adequacy of existing tax liabilities, these changes would affect the income tax expense in the period that we make this determination.\n\nSee note 9 for more information about our income taxes.\n\n## Foreign Currency Translation\n\nWe translate amounts denominated in foreign currencies into Canadian dollars as follows:", - "page_start": 99, - "page_end": 99, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Energy Generation and Storage Segment\n\n## Energy Generation and Storage Sales\n\nWe record as deferred revenue any non-refundable amounts that are collected from customers related to prepayments, which is recognized as revenue ratably over the respective customer contract term. As of September 30, 2024 and December 31, 2023, deferred revenue related to such customer payments amounted to $1.73 billion and $1.60 billion, respectively, mainly due to contractual payment terms. Revenue recognized from the deferred revenue balances as of December 31, 2023 and 2022 was $1.09 billion and $511 million for the nine months ended September 30, 2024 and 2023, respectively. As of September 30, 2024, total transaction price allocated to performance obligations that were unsatisfied or partially unsatisfied for contracts with an original expected length of more than one year was $6.61 billion. 
Of this amount, we expect to recognize $4.23 billion in the next 12 months and the rest over the remaining performance obligation period.\n\nWe have financing receivables on our consolidated balance sheets related to loans we provide for financing our energy products. As of September 30, 2024 and December 31, 2023, we had current net financing receivables of $32 million and $31 million, respectively, in Accounts receivable, net, and $641 million and $578 million, respectively, in Other non-current assets for the long-term portion.\n\n## Income Taxes\n\nWe are subject to income taxes in the U.S. and in many foreign jurisdictions. Significant judgment is required in determining our provision for income taxes, our deferred tax assets and liabilities and any valuation allowance recorded against our net deferred tax assets that are not more likely than not to be realized. We monitor the realizability of our deferred tax assets taking into account all relevant factors at each reporting period. In completing our assessment of realizability of our deferred tax assets, we consider our history of income (loss) measured at pre-tax income (loss) adjusted for permanent book-tax differences on a jurisdictional basis, volatility in actual earnings, excess tax benefits related to stock-based compensation in recent prior years and impacts of the timing of reversal of existing temporary differences. We also rely on our assessment of the Company's projected future results of business operations, including uncertainty in future operating results relative to historical results, volatility in the market price of our common stock and its performance over time, variable macroeconomic conditions impacting our ability to forecast future taxable income, and changes in business that may affect the existence and magnitude of future taxable income. 
Our valuation allowance assessment is based on our best estimate of future results considering all available information.\n\nOur provision for or benefit from income taxes for interim periods is determined using an estimate of our annual effective tax rate, adjusted for discrete items, if any, that are taken into account in the relevant period. Each quarter, we update our estimate of the annual effective tax rate, and if our estimated tax rate changes, we make a cumulative adjustment.\n\n## Net Income per Share of Common Stock Attributable to Common Stockholders\n\nThe following table presents the reconciliation of net income attributable to common stockholders to net income used in computing basic and diluted net income per share of common stock (in millions):\n\nTable of Contents", - "page_start": 15, - "page_end": 15, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "## NOTES TO THE CONSOLIDATED FINANCIAL STATEMENTS\n\n## NOTE 7 - INCOME TAX EXPENSE continued\n\n| Year ended 31 December | Year ended 31 December | 2014 US$'000 | 2013 US$'000 |\n|---------------------------------------------|---------------------------------------------------------------------------------------------------------|----------------|----------------|\n| c) | Unused tax losses and temporary differences for which no deferred tax asset has been recognised at 30% | 2,685 | 170 |\n| d) Deferred tax charged directly to equity: | - Equity raising costs | 1,147 | 665 |\n| - Currency translation adjustment | | (268) | - |\n\n - 1) The Oklahoma US state tax jurisdiction computes income taxes on a direct accounting basis. 
A significant portion of the 2014 impairment related to this jurisdiction resulting in a deferred tax benefit of $3,044 creating deferred tax assets, of which $2,064 were unrecognized.\n - 2) The change in apportioned state tax rates in US controlled entities is a result of the Company disposing of its property in Colorado (income tax rate of 4.63%) (2013: North Dakota with income tax rate of 4.53%) through a tax deferred sale and reinvesting the property in Texas (margin tax rate of 1%). As the Texas margin tax computation is similar in nature to an income tax computation, it is treated as an income tax for financial reporting purposes.\n - 3) This income tax benefit results from the election to consolidate certain Australian subsidiaries for income tax purposes effective 1 January 2014, making previously unrecognized deferred tax assets of one of these Australian subsidiaries available for utilization against future income of the consolidated Australian entities. These deferred tax assets were previously unrecognized due to the lack of evidence of future taxable income for these Australian subsidiaries on a stand-alone basis.\n\n## NOTE 8 - KEY MANAGEMENT PERSONNEL COMPENSATION\n\n - a) Names and positions held of Consolidated Group key management personnel in office at any time during\n\n## the financial period are:\n\nMr M Hannell\n\nChairman Non-executive\n\nMr E McCrady\n\nManaging Director and Chief Executive Officer\n\nMr D Hannes\n\nDirector - Non-executive\n\nMr N Martin\n\nDirector - Non-executive\n\nMr W Holcombe Director - Non-executive\n\nMs C Anderson Chief Financial Officer\n\nMs G Ford\n\nVice President of Exploration and Development\n\nBased on her increased responsibilities due to the Company's growth, Ms. Ford was deemed to be a KMP during the 2014 fiscal year. Prior to that time, Ms. 
Ford was not considered to be KMP\n\nOther than Directors and Officers of the Company listed above, there are no additional key management personnel.", - "page_start": 78, - "page_end": 78, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "Included in the balance of unrecognized income tax benefits at June 30, 2012, 2011 and 2010 are $1,221, $659 and $988, respectively, of income tax benefits that, if recognized, would affect the effective income tax rate.\n\nDuring 2012, 2011 and 2010, the Company recognized $(95), $(22) and $22, respectively, for interest and penalties related to unrecognized income tax benefits in its statements of consolidated income. The Company had a liability for penalties and interest of $430 and $525 as of June 30, 2012 and 2011, respectively. The Company does not anticipate a significant change to the total amount of unrecognized income tax benefits within the next twelve months.\n\nThe Company is subject to U.S. federal income tax examinations for the tax years 2009 through 2012 and to state and local income tax examinations for the tax years 2008 through 2012. 
In addition, the Company is subject to foreign income tax examinations for the tax years 2005 through 2012.\n\nThe Company's unrecognized income tax benefits are included in other liabilities in the consolidated balance sheets since payment of cash is not expected within one year.\n\n## NOTE 8: SHAREHOLDERS' EQUITY\n\n## Treasury Shares\n\nAt June 30, 2012, 596 shares of the Company's common stock held as treasury shares were restricted as collateral under escrow arrangements relating to change in control and director and officer indemnification agreements.\n\n## Accumulated Other Comprehensive Income (Loss)\n\nAccumulated other comprehensive income (loss) is comprised of the following:\n\n| June 30, | 2012 | 2011 |\n|----------------------------------------------------------------------------------------------------|-------------|--------------|\n| Postemployment liability, net of income taxes of $(3,899) and $(6,990) | $ (6,229 ) | $ (11,212 ) |\n| Foreign currency translation | 1,718 | 16,189 |\n| Unrealized gains on investment securities available for sale, net of income taxes of $(32) and $48 | (58 ) | 82 |\n| Total accumulated other comprehensive income (loss) | $ (4,569 ) | $ 5,059 |", - "page_start": 28, - "page_end": 28, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "Current tax is the expected tax payable or receivable on the taxable income or loss for the year, using tax rates enacted or substantively enacted at the reporting date, and any adjustment to tax payable in respect of previous years. Deferred tax is provided using the liability method, providing for temporary differences between the carrying amounts of assets and liabilities for financial reporting purposes and the amounts used for taxation purposes. 
The amount of deferred tax provided is based on the expected manner of realisation or settlement of the carrying amount of assets and liabilities, using tax rates enacted or substantively enacted at the reporting date.\n\nA deferred tax asset is recognised for unused tax losses, tax credits and deductible temporary differences, to the extent that it is probable that future taxable profits will be available against which they can be utilised. Deferred tax assets are reviewed at each reporting date and are reduced to the extent that it is no longer probable that the related tax benefit will be realised.\n\nDeferred tax is not recognised for:\n\n - 〉 temporary differences on the initial recognition of assets or liabilities in a transaction that is not a business combination and that affects neither accounting nor taxable profit or loss;\n - 〉 temporary differences related to investments in subsidiaries where the Company is able to control the timing of the reversal of the temporary differences and it is probable that they will not reverse in the foreseeable future; and\n - 〉 taxable temporary differences arising on the initial recognition of goodwill.\n\n\n\nDeferred tax assets and liabilities are offset if there is a legally enforceable right to offset current tax liabilities and assets, and they relate to income taxes levied by the same tax authority on the same taxable entity.\n\nAdditional income tax expenses that arise from the distribution of cash dividends are recognised at the same time that the liability to pay the related dividend is recognised.\n\n## Tax consolidation\n\nThe Company and its wholly-owned Australian resident entities formed a tax-consolidation group with effect from 1 July 2003 and are therefore taxed as a single entity from that date. 
The head entity within the tax-consolidation group is Kingsgate Consolidated Limited.\n\nCurrent tax expense or benefit, deferred tax assets and deferred tax liabilities arising from temporary differences of the members of the tax-consolidation group are recognised in the separate financial statements of the members of the tax-consolidation group using the 'stand alone taxpayer' approach by reference to the carrying amounts in the separate financial statements of each entity and the tax values applying under tax consolidation.\n\nCurrent tax assets or liabilities and deferred tax assets arising from unused tax losses assumed by the head entity from the subsidiaries in the tax-consolidation group, are recognised as amounts receivable or payable to other entities in the tax-consolidation group in conjunction with any tax funding agreement amounts.\n\nThe Company recognises deferred tax assets arising from unused tax losses of the tax-consolidation group to the extent that it is probable that future taxable profits of the tax-consolidation group will be available against which the asset can be utilised.\n\n## Tax funding and sharing agreements", - "page_start": 70, - "page_end": 70, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "- 284 Communication from the Commission to the European Parliament, The Council, The European Economic and Social Committee and the Committee of the Regions: A new Circular Economy Action Plan: For a cleaner and more competitive Europe. 11.03.2020 COM(2020) 98 final, here\n - 285 EU-OSHA, 2013: Green jobs and occupational safety and health: Foresight on new and emerging risks associated with new technologies by 2020\n - 286 EU-OSHA, 2021: What will the circular economy (CE) mean for occupational safety and health (OSH)? 
An overview of four foresight scenarios.\n - 287 EU-OSHA, Emerging risks: Workers' safety and health in green jobs\n - 288 United States Environmental Protection Agency: Green Engineering\n - 289 United Nations Environmental Programme (UNEP): Global Chemicals Outlook\n - 290 CEFIC, Facts and figures: Chemical Industry Contributes $5.7 Trillion to Global GDP and Supports 120 Million Jobs, New Report Shows, here\n - 291 UNEP, 2019: Global Chemicals Outlook II - From Legacies to Innovative Solutions: Implementing the 2030 Agenda for Sustainable Development (p. 27).\n - 292 Naidu et al., 2021: Chemical pollution: A growing peril and potential catastrophic risk to humanity", - "page_start": 151, - "page_end": 151, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20200438_en.pdf", - "query": "Under which conditions can the funds of a non-registered pension arrengements be obtained before the age of 55 ?", - "target_page": 2, - "target_passage": "non-registered pension arrangements where the annual contributions are limited to £50,000 and funds contributed cannot be accessed before the age of 55 except in circumstances of serious ill health", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## NOTE 22: PENSIONS\n\nWe have contributory and non-contributory defined benefit pension plans that are made available to most of our employees. The plans provide pensions based on years of service, years of contributions and earnings. We do not provide any non-pension post-retirement benefits. We also provide unfunded supplemental pension benefits to certain executives.\n\nThe assets of the defined benefit pension plans are held in segregated accounts isolated from our assets. 
We administer the defined benefit pension plans pursuant to applicable regulations, the Statement of Investment Policies and Procedures and to the mandate of the Pension Committee of the Board of Directors. The Pension Committee of the Board of Directors oversees our administration of the defined benefits pension plans, which includes the following principal areas:", - "page_start": 121, - "page_end": 121, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## SECURITIES AND EXCHANGE COMMISSION\n\nWashington, D.C. 20549\n\n## FORM 10-K\n\n(Mark One)\n\n≤ ANNUAL REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934\n\nFor the Ñscal year ended December 31, 2004\n\nOR\n\nn TRANSITION REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934\n\nFor the transition period from\n\nto\n\nCommission Ñle number: 1-14267\n\n## REPUBLIC SERVICES, INC.\n\n(Exact name of Registrant as SpeciÑed in its Charter)\n\nDelaware\n\n65-0716904\n\n(State of Incorporation)\n\n(I.R.S. Employer IdentiÑcation No.)\n\nRepublic Services, Inc. 110 S.E. 6th Street, 28th Floor Fort Lauderdale, Florida\n\n33301\n\n(Zip Code)\n\n(Address of Principal Executive OÇces)\n\nRegistrant's telephone number, including area code: (954) 769-2400\n\nSecurities registered pursuant to Section 12(b) of the Act:\n\nTitle of Each Class\n\nName of Each Exchange on which Registered\n\nCommon Stock, par value $.01 per share\n\nThe New York Stock Exchange\n\nSecurities registered pursuant to Section 12(g) of the Act: None\n\nIndicate by check mark whether the registrant: (1) has Ñled all reports required to be Ñled by Section 13 or 15(d) of the Securities Exchange Act of 1934 during the preceding 12 months (or for such shorter period that the registrant was required to Ñle such reports), and (2) has been subject to such Ñling requirements for the past 90 days. 
Yes ≤ No n\n\nIndicate by check mark if disclosure of delinquent Ñlers pursuant to Item 405 of Regulation S-K is not contained herein, and will not be contained, to the best of registrant's knowledge, in deÑnitive proxy or information statements incorporated by reference in Part III of this Form 10-K or any amendment to this Form 10-K. ≤\n\nIndicate by check mark whether the registrant is an accelerated Ñler (as deÑned in Rule 12b-2 of the Act). Yes ≤ No n\n\nAs of June 30, 2004, the aggregate market value of the shares of the Common Stock held by non- aÇliates of the registrant was approximately $4,395,636,476.\n\nAs of February 18, 2005, the registrant had outstanding 149,670,988 shares of Common Stock.\n\nDOCUMENTS INCORPORATED BY REFERENCE\n\nPart III Portions of the Registrant's Proxy Statement relative to the 2005 Annual Meeting of Stockholders.", - "page_start": 6, - "page_end": 6, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "- (3) W here a person is entitled to exercise an option as to w hich of tw o or m ore law s shall apply in his or her case, the law for w hich he or she opts shall, for the purposes of this section, be deem ed to be m ore favourable to him or her than the other law or law s.\n - (4) A ll pensions benefits shall (except to the extent to w hich under any law providing for the funding of pensions benefits they are a charge on a fund established by that law and have been duly paid out of that fund to the person or authority to w hom paym ent is due) be a charge on the C onsolidated Fund.\n - (5) In this section \"pensions benefits\" m eans any pensions, com pensation, gratuities or other like allow ances for persons in respect of their service as public officers or as m em bers of the arm ed forces or for the w idow s, children, dependants or personal representatives of such persons in respect of such service.", - "page_start": 49, - "page_end": 49, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "- (v) the 
parent or carer of a domestic elite sportsperson under the age of 18;", - "page_start": 46, - "page_end": 46, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "(5) In this section \"pensions benefits\" m eans any pensions, com pensation, gratuities or other like allow ances for persons in respect of their service as public officers (including service as public officers of the form er P rotectorate of B echuanaland) or for the w idow s, children, dependants or personal representatives of such persons in respect of such service.\n\n## C H A P TE R V III (ss 117-124)\n\n## Finance\n\n## 117. C onsolidated Fund\n\nAll revenues or other m oneys raised or received for the purposes of the G overnm ent of B otsw ana (not being revenues or other m oneys that are payable by or under any law into som e other fund established for a specific purpose or that m ay by or under any law be retained by the departm ent of G overnm ent that received them for the purposes of defraying the expenses of that departm ent) shall be paid into and form one C onsolidated Fund.\n\n## 118. W ithdraw als from C onsolidated Fund or other public funds", - "page_start": 50, - "page_end": 50, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "- 113. Tenure of office of D irector of P ublic P rosecutions\n - 114. Tenure of office of A uditor-G eneral\n - 115. Pensions law s and protection of pensions rights\n - 116. Pow er of C om m issions in relation to pensions, etc.\n\n## C H A P T E R V III\n\n## Finance\n\n - 117. C onsolidated Fund\n - 118. W ithdraw als from C onsolidated Fund or other public funds\n - 119. Authorization of expenditure\n - 120. Authorization of expenditure in advance of appropriation\n - 121. C ontingencies Fund\n - 122. R em uneration of certain officers\n - 123. Public debt\n - 124. Auditor-G eneral\n\n## C H A P T E R IX\n\n## M iscellaneous\n\n - 125. R esignations\n - 126. 
Reappointments and concurrent appointments\n - 127.\n - Interpretation\n\nFirst Schedule - Election of Specially Elected Members of the National Assembly\n\nSecond Schedule - Division of Districts into regions for the purpose of selecting Members of Ntlo ya Dikgosi\n\nL.N. 83, 1966,\n\nAct 30, 1969,\n\nAct 43, 1969,\n\nAct 25, 1970,\n\nAct 28, 1972,\n\nAct 24, 1973,\n\nAct 28, 1978,\n\nS.I. 25, 1980,\n\nAct 32, 1982,\n\nAct 1, 1983,\n\nAct 22, 1987,\n\nS.I. 37, 1991,\n\nAct 27, 1992,\n\nS.I. 51, 1993,\n\nS.I. 119, 1993,\n\nAct 16, 1997,\n\nAct 18, 1997,\n\nAct 1, 1999,\n\nAct 2, 2002,\n\nAct 12, 2002,\n\nAct 9, 2005,\n\nS.I. 91, 2006.\n\n[ Date of Commencement: 30th September, 1966 ]\n\n## CHAPTER I", - "page_start": 3, - "page_end": 3, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "Use these links to rapidly review the document HORMEL FOODS CORPORATION TABLE OF CONTENTS\n\n## ANNUAL REPORT ON FORM 10-K HORMEL FOODS CORPORATION OCTOBER 25, 2003\n\n## FORM 10-K\n\nANNUAL REPORT PURSUANT TO SECTION 13 OR 15 (d) OF THE SECURITIES EXCHANGE ACT OF 1934\n\n## HORMEL FOODS CORPORATION\n\n(Exact name of registrant as specified in its charter)\n\n## DELAWARE\n\n41-0319970\n\n(State or other jurisdiction of incorporation or organization)\n\n(I.R.S. 
Employer Identification No.)\n\n## 1 HORMEL PLACE AUSTIN, MINNESOTA\n\n55912-3680\n\n(Address of principal executive offices)\n\n(Zip Code)\n\nRegistrant's telephone number, including area code (507) 437-5611\n\nSecurities registered pursuant to Section 12 (b) of the Act:\n\nCOMMON STOCK, PAR VALUE $.0586 PER SHARE\n\nTitle of Each Class\n\nNEW YORK STOCK EXCHANGE\n\nName of Each Exchange On Which Registered\n\nSecurities registered pursuant to Section 12 (g) of the Act:\n\nIndicate by check mark whether the registrant (1) has filed all reports required to be filed by Section 13 or 15(d) of the Securities Exchange Act of 1934 during the preceding 12 months, and (2) has been subject to such filing requirements for the past 90 days. Yes ý No o\n\nIndicate by check mark if disclosure of delinquent filers pursuant to Item 405 of Regulation S-K is not contained herein, and will not be contained, to the best of registrant's knowledge in definitive proxy or information statements incorporated by reference in Part III of this Form 10-K or any amendments to this Form 10-K. o\n\nIndicate by check mark whether the registrant is an accelerated filer (as defined in Rule 12b-2 of the Act). 
Yes ý No o\n\nThe aggregate market value of the voting stock held by non-affiliates of the registrant as of April 26, 2003 (the last business day of the registrant's most recently completed second fiscal quarter), was $1,592,020,962 based on the closing price of $21.74 per share on that date.\n\nAs of December 1, 2003, the number of shares outstanding of each of the Corporation's classes of common stock was as follows:\n\nCommon Stock, $.0586 Par Value-138,672,803 shares\n\nCommon Stock Non-Voting, $.01 Par Value-0 shares\n\n## DOCUMENTS INCORPORATED BY REFERENCE\n\nPortions of the Annual Stockholders' Report for the year ended October 25, 2003, are incorporated by reference into Part I and Part II Items 5-8, and included as exhibit 13.1 filed herewith.\n\nHORMEL FOODS CORPORATION\n\nTABLE OF CONTENTS", - "page_start": 1, - "page_end": 1, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "## NOTES TO AND FORMING PART OF THE FINANCIAL STATEMENTS FOR THE FINANCIAL YEAR ENDED 30 JUNE 2000\n\n## (p) Receivables\n\nTrade receivables and other receivables are recorded at amounts due less any provision for doubtful debts.\n\nBills of exchange are recorded at amortised cost, with revenue recognised on an effective yield basis.\n\n## (q) Recoverable Amount of Non-Current Assets\n\nNon-current assets are written down to recoverable amount where the carrying value of any non-current asset exceeds recoverable amount. 
In determining the recoverable amount of non-current assets, the expected net cash flows have not been discounted to their present value.\n\n## (r) Revenue Recognition\n\n## Sale of Goods and Disposal of Assets\n\nRevenue from the sale of goods and disposal of other assets is recognised when the economic entity has passed control of the goods or other assets to the buyer.\n\n## Rendering of Services\n\nRevenue from a contract to provide services is recognised by reference to the stage of completion of the contract.\n\n## Contribution of Assets\n\nRevenue arising from the contribution of assets is recognised when the economic entity gains control of the contribution or the right to receive the contribution.\n\n## Liabilities Forgiven\n\nThe gross amount of a liability forgiven by a credit provider is recognised as revenue.\n\n", - "page_start": 44, - "page_end": 44, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "which the Group becomes a party to the contractual provisions of the instrument.\n\nThe Group derecognises a financial asset when the contractual rights to the cash flows from the asset expire, or it transfers the rights to receive the contractual cash flows on the financial asset in a transaction in which substantially all the risks and rewards of ownership of the financial assets are transferred.\n\nFinancial assets and liabilities are offset and the net amount presented in the statement of financial position when, and only when, the Group has a legal right to offset the amounts and intends either to settle on a net basis or to realise the asset and settle the liability simultaneously.\n\n## (i) Financial assets at fair value through profit or loss\n\nFinancial assets at fair value through profit or loss are financial assets held for trading if acquired principally for the purpose of selling in the short term. 
Derivatives are also categorised as held for trading unless they are designated as hedges.\n\nAttributable transaction costs are recognised in the profit or loss when incurred. Assets in this category are classified as current assets if they are expected to be settled within 12 months, otherwise they are classified as non-current.\n\n## (ii) Loans and receivables\n\nLoans and receivables are non-derivative financial assets with fixed or determinable payments that are not quoted in an active market. They are included in current assets, except for those with maturities greater than 12 months after the reporting date which are classified as non-current assets.\n\nLoans and receivables are measured at amortised cost using the effective interest method, less any impairment losses.\n\n## (iii) Available-for-sale financial assets\n\nAvailable-for-sale financial assets, comprising principally marketable equity securities, are non-derivative financial assets that are either designated in this category or not classified in any of the other categories. They are included in non-current assets unless management intends to dispose of the investment within 12 months of the reporting date. Investments are designated as available-for-sale if they do not have fixed maturities and fixed or determinable payments and management intends to hold them for the medium to long term.\n\nSubsequent to initial recognition, available-for-sale financial assets are measured at fair value and changes therein, other than impairment losses, are recognised as a separate component of equity net of attributable tax. When an asset is derecognised the cumulative gain or loss in equity is transferred to the statement of comprehensive income.\n\n## Impairment\n\nThe Group assesses at each reporting date whether there is objective evidence that a financial asset or group of financial assets is impaired. 
In the case of equity securities classified as available-for-sale, a significant or prolonged decline in the fair value of a security below its cost is considered as an indicator that the securities are impaired. If any such evidence exists for available-for-sale financial assets, the cumulative loss measured as the difference between the acquisition cost and the current fair value, less any impairment loss on that financial asset previously recognised in profit or loss, is removed from equity and recognised in the statement of comprehensive income. Impairment losses recognised in the profit or loss on equity instruments classified as available-for-sale are not reversed through the statement of comprehensive income.\n\nIf there is evidence of impairment for any of the Group's financial assets carried at amortised cost, the loss is measured as the difference between the asset's carrying amount and the present value of estimated future cash flows, excluding future credit losses that have not been incurred. The cash flows are discounted at the financial asset's original effective interest rate. The loss is recognised in the statement of comprehensive income.", - "page_start": 72, - "page_end": 72, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Pension Obligations\n\nOur retiree pension plans had a funding deficit of approximately $172 million at December 31, 2013. We have been making special minimum monthly payments in addition to our regular contributions to eliminate the pension liability. During 2013, our funding deficit was reduced by $162 million.\n\nThe special payments, including contributions associated with benefits paid from the plans, were approximately $7 million in 2013. 
We expect our total estimated funding requirements to be $96 million in 2014 and to be adjusted annually thereafter, based on various market factors such as interest rates and expected returns and staffing assumptions.\n\nChanges in factors such as the discount rate, increase in compensation and the expected return on plan assets can affect the accrued benefit obligation, pension expense and the deficiency of plan assets over accrued obligations in the future. See Critical accounting estimates for more information.\n\n## Purchase of Annuities\n\nFrom time to time we have made additional lump-sum contributions to our pension plans, and the pension plans have purchased annuities from insurance companies to fund the pension benefit obligations for certain groups of retired employees in the plans. Purchasing the annuities relieves us of our primary responsibility for that portion of the accrued benefit obligations for the retired employees and eliminates the significant risk associated with the obligations.\n\nWe did not make any additional lump-sum contributions to our pension plans in 2013 or 2012, and the pension plans did not purchase additional annuities.\n\n## FINANCIAL RISK MANAGEMENT\n\nWe normally use three categories of derivative instruments to manage risks related to our business activities:\n\n| Categories | The risk it manages | Types of derivative instruments |\n|-------------------------|----------------------------------------------------------------|--------------------------------------------------------------|\n| Debt Derivatives | • Impact of fluctuations in foreign exchange rates on principal and interest payments for US denominated long-term debt | • Cross-currency interest rate exchange agreements • Forward foreign exchange agreements (from time to time, as applicable) 
|\n| Expenditure Derivatives | • Impact of fluctuations in foreign exchange rates on forecasted US dollar denominated expenditures | • Forward foreign exchange agreements |\n| Equity Derivatives | • Impact of fluctuations in share price on stock-based compensation expense | • Total return swap agreements |\n\nWe also manage our exposure to fluctuating interest rates and we have fixed the interest rate on 95.3 % of our debt including short-term borrowings at December 31, 2013 (2012 - 100 % ).\n\n## Debt Derivatives\n\nWe use cross currency interest exchange agreements (Debt Derivatives), to hedge the foreign exchange risk on all of the principal and interest obligations of our US dollar denominated senior notes and debentures. At December 31, 2013 we used Debt Derivatives to hedge the foreign exchange risk on 100 % of the principal and interest obligations on all our US dollar denominated debt. We use Debt Derivatives for risk management purposes only.\n\nDuring 2013, we completed Debt Derivatives transactions as follows:", - "page_start": 65, - "page_end": 65, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2538.pdf", - "query": "What metrics are good indicators of the coverage of gas molecules on carbon nanotubes ?", - "target_page": 1, - "target_passage": "the bind- ing energy and scattering resistance of the molecules", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Computational Design of Chemical Nanosensors: Metal Doped Carbon Nanotubes\n\nJ. M. García-Lastra 1,2 , ∗ D. J. Mowbray 1,2 , K. S. Thygesen 2 , A. Rubio 1,3 , and K. W. Jacobsen 2 1 Nano-Bio Spectroscopy group and ETSF Scientific Development Centre, Dpto. Física de Materiales, Universidad del País Vasco, Centro de Física de Materiales CSIC-UPV/EHU-MPC and DIPC, Av. 
Tolosa 72, E-20018 San Sebastián, Spain 2 Center for Atomic-scale Materials Design, Department of Physics, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark 3 Fritz-Haber-Institut der Max-Planck-Gesellschaft, Berlin, Germany\n\nWe use computational screening to systematically investigate the use of transition metal doped carbon nanotubes for chemical gas sensing. For a set of relevant target molecules (CO, NH3, H2S) and the main components of air (N2, O2, H2O), we calculate the binding energy and change in conductance upon adsorption on a metal atom occupying a vacancy of a (6,6) carbon nanotube. Based on these descriptors, we identify the most promising dopant candidates for detection of a given target molecule. From the fractional coverage of the metal sites in thermal equilibrium with air, we estimate the change in the nanotube resistance per doping site as a function of the target molecule concentration assuming charge transport in the diffusive regime. Our analysis points to Ni-doped nanotubes as candidates for CO sensors working under typical atmospheric conditions.\n\nPACS numbers: 73.63.-b, 68.43.-h, 73.50.Lw\n\nThe ability to detect small concentrations of specific chemical species is fundamental for a variety of industrial and scientific processes as well as for medical applications and environmental monitoring [1]. In general, nanostructured materials should be well suited for sensor applications because of their large surface to volume ratio which makes them sensitive to molecular adsorption. Specifically, carbon nanotubes (CNT) [2] have been shown to work remarkably well as detectors of small gas molecules. This has been demonstrated both for individual CNTs [3-8] as well as for CNT networks [9, 10].\n\nPristine CNTs are known to be chemically inert - a property closely related to their high stability. As a consequence, only radicals bind strong enough to the CNT to notably affect its electrical properties [2, 5, 11-13]. 
To make CNTs attractive for sensor applications thus requires some kind of functionalization, e.g. through doping or decoration of the CNT sidewall [13-21]. Ideally, this type of functionalization could be used to control not only the reactivity of the CNT but also the selectivity towards specific chemical species.\n\nIn this work we consider the possibility of using CNTs doped by 3d transition metal atoms for chemical gas sensing. We use computational screening to systematically identify the most promising dopant candidates for detection of three different target molecules (CO, NH3, H2S) under typical atmospheric conditions. The screening procedure is based on the calculation of two microscopic descriptors: the binding energy and scattering resistance of the molecules when adsorbed on a doped CNT. These two quantities give a good indication of the gas coverage and impact on the resistance. For the most promising candidates we then employ a simple thermodynamic model of the CNT sensor. In this model, the binding energies are used to obtain the fractional coverage of the metallic sites as a function of the target molecule concentration under ambient conditions. Under the assumption of transport in the diffusive rather than localization regime, the\n\nchange in CNT resistivity may then be obtained from the calculated coverages and single impurity conductances.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2538.pdf" - }, - { - "text": "- ∗ Electronic address: juanmaria.garcia@ehu.es\n- [1] Gas Sensing Materials, MRS Bull. , vol. 24 (1999).\n- [2] J.-C. Charlier, X. Blase, and S. Roche, 'Electronic and transport properties of nanotubes', Rev. Mod. Phys. 79 (2), 677 (May 2007), doi:10.1103/RevModPhys.79.677.\n- [3] J. Kong, N. R. Franklin, C. Zhou, M. G. Chapline, S. Peng, K. Cho, and H. Dai, 'Nanotube molecular wires as chemical sensors', Science 287 (5453), 622 (Jan. 2000), doi:10.1126/science.287.5453.622.\n- [4] P. G. Collins, K. Bradley, M. 
Ishigami, and A. Zettl, 'Extreme oxygen sensitivity of electronic properties of carbon nanotubes', Science 287 (5459), 1801 (Mar. 2000), doi:10.1126/science.287.5459.1801.\n- [5] C. Hierold, Carbon Nanotube Devices: Properties, Modeling, Integration and Applications (Wiley-VCH, Weinheim, 2008).\n- [6] F. Villalpando-Páez, A. H. Romero, E. Muñoz-Sandoval, L. M. Martínez, H. Terrones, and M. Terrones, 'Fabrication of vapor and gas sensors using films of aligned CNx nanotubes', Chem. Phys. Lett. 386 (1-3), 137 (Mar. 2004), doi:10.1016/j.cplett.2004.01.052.\n- [7] A. R. Rocha, M. Rossi, A. Fazzio, and A. J. R. da Silva, 'Designing real nanotube-based gas sensors', Phys. Rev. Lett. 100 (17), 176803 (May 2008), doi:10.1103/PhysRevLett.100.176803.\n- [8] S. Brahim, S. Colbern, R. Gump, and L. Grigorian, 'Tailoring gas sensing properties of carbon nanotubes', J. Appl. Phys. 104 (2), 024502 (Jul. 2008), doi:10.1063/1.2956395.\n- [9] C. Morgan, Z. Alemipour, and M. Baxendale, 'Variable range hopping in oxygen-exposed single-wall carbon nanotube networks', Phys. Stat. Solidi A 205 (6), 1394 (May 2008), doi:10.1002/pssa.200778113.\n- [10] D. J. Mowbray, C. Morgan, and K. S. Thygesen, 'Influence of O2 and N2 on the conductivity of carbon nanotube networks', Phys. Rev. B 79 (19), 195431 (May 2009), doi:10.1103/PhysRevB.79.195431.\n- [11] L. Valentini, F. Mercuri, I. Armentano, C. Cantalini, S. Picozzi, L. Lozzi, S. Santucci, A. Sgamellotti, and J. M. Kenny, 'Role of defects on the gas sensing properties of carbon nanotubes thin films: experiment and theory', Chem. Phys. Lett. 387 (4-6), 356 (Apr. 2004), doi:10.1016/j.cplett.2004.02.038.\n- [12] Z. Zanolli and J.-C. Charlier, 'Defective carbon nanotubes for single-molecule sensing', Phys. Rev. B 80 (15), 155447 (Oct. 2009), doi:10.1103/PhysRevB.80.155447.\n- [13] J. M. García-Lastra, K. S. Thygesen, M. 
Strange, and Ángel Rubio, 'Conductance of sidewall-functionalized carbon nanotubes: Universal dependence on adsorption sites', Phys. Rev. Lett. 101 (23), 236806 (Dec. 2008), doi:10.1103/PhysRevLett.101.236806.\n- [14] S. B. Fagan, R. Mota, A. J. R. da Silva, and A. Fazzio, 'Ab initio study of an iron atom interacting with single-wall carbon nanotubes', Phys. Rev. B 67 (20), 205414 (May 2003), doi:10.1103/PhysRevB.67.205414.\n- [15] Y. Yagi, T. M. Briere, M. H. F. Sluiter, V. Kumar, A. A. Farajian, and Y. Kawazoe, 'Stable geometries and magnetic properties of single-walled carbon nanotubes doped with 3d transition metals: A first-principles study', Phys. Rev. B 69 (7), 075414 (Feb 2004), doi:10.1103/PhysRevB.69.075414.\n- [16] S. H. Yang, W. H. Shin, J. W. Lee, S. Y. Kim, S. I. Woo, and J. K. Kang, 'Interaction of a transition metal atom with intrinsic defects in single-walled carbon nanotubes', J. Phys. Chem. B 110 (28), 13941 (Jun. 2006), doi:10.1021/jp061895q.\n- [17] K. T. Chan, J. B. Neaton, and M. L. Cohen, 'First-principles study of metal adatom adsorption on graphene', Phys. Rev. B 77, 235430 (Jun. 2008), doi:10.1103/PhysRevB.77.235430.\n- [18] C. S. Yeung, L. V. Liu, and Y. A. Wang, 'Adsorption of small gas molecules onto Pt-doped single-walled carbon nanotubes', J. Phys. Chem. C 112 (19), 7401 (Apr. 2008), doi:10.1021/jp0753981.\n- [19] T. Vo, Y.-D. Wu, R. Car, and M. Robert, 'Structures, interactions, and ferromagnetism of Fe-carbon nanotube systems', J. Phys. Chem. C 112 (22), 400 (May 2008), doi:10.1021/jp0761968.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2538.pdf" - }, - { - "text": "all N impurities. At this point it suffices to see that the conservative estimates obtained from Eq. (7) predict measurable signals in response to small changes in concentration of the target molecules.\n\nTo our knowledge, controlled doping of CNTs with transition metal atoms has so far not been achieved. 
It has, however, been found that metal atoms incorporated into the CNT lattice during catalytic growth are afterwards very difficult to remove [30]. Furthermore, it has been shown that CNT vacancies, which are needed for the metallic doping, may be formed in a controlled way by irradiation by Ar ions [31]. This suggests that metallic doping of CNTs should be possible.\n\nIn summary, we have presented a general model of nanostructured chemical sensors which takes the adsorption energies of the relevant chemical species and their individual scattering resistances as the only input. On the basis of this model we have performed a computational screening of transition metal doped CNTs, and found that Ni-doped CNTs are promising candidates for detecting CO in a background of air. The model may be applied straightforwardly to other nanostructures than CNTs, other functionalizations than metal doping and other gas compositions than air.\n\nThe authors acknowledge financial support from Spanish MEC (FIS2007-65702-C02-01), 'Grupos Consolidados UPV/EHU del Gobierno Vasco' (IT-319-07), e-I3 ETSF project (Contract Number 211956), 'Red Española de Supercomputación', NABIIT and the Danish Center for Scientific Computing. The Center for Atomic-scale Materials Design (CAMD) is sponsored by the Lundbeck Foundation. JMG-L acknowledges funding from Spanish MICINN through Juan de la Cierva and José Castillejo programs.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2538.pdf" - }, - { - "text": "## Growing Demand for U.S. Natural Gas Will Drive Improved Prices in the Years Ahead\n\nSeveral factors are emerging in the U.S. 
that will drive increased demand for natural gas, which in turn could improve out year natural gas prices:\n\n## Growing momentum for CNG passenger and LNG long-haul truck vehicles\n\nEnormous cost savings are available to consumers and businesses that chose to use natural gas as an alternative transportation fuel ($1.39 per gallon for CNG in Oklahoma, for example, compared to $3.75-$4.00 per gallon for gasoline and diesel).\n\n## Growing industrial demand\n\nWith recent low prices for domestic natural gas, U.S. industries that utilize natural gas as a feedstock in their manufacturing processes have a significant cost advantage compared with international peers whose feedstock is indexed either to oil or global natural gas prices.\n\n## Continuing and accelerating shift from coal to natural gas for U.S. electrical power generation\n\nTo clean our environment, dozens of aging coal-powered electricity plants will be retired in the next decade and replaced with the cleaner alternative of natural gas. A combination of shifting power sources and higher utilization within existing gas-fired power plants will likely increase natural gas demand by 10-15 bcf per day over the next decade.\n\n## Conversion of U.S. LNG import facilities to LNG export facilities\n\nWith increasing demand for natural gas around the world and the abundance of U.S. natural gas reserves, producers will be able to tap into higher-margin markets in Europe, South America and Asia once export capabilities are available potentially beginning in 2015.\n\n## Construction of U.S. gas-to-liquids (GTL) plants\n\nConverting natural gas to a room temperature liquid would allow U.S. natural gas producers to sell products based on world oil prices instead of domestic natural gas prices. Technological advancements continue to gain traction and may make GTL a realistic possibility by 2016.\n\n## U.S. 
natural gas producers are rapidly moving to a more liquids-rich production base\n\nDue to the premium margins realized in the U.S. when producing liquids as compared to natural gas, there is a meaningful shift of producers targeting liquids-rich drilling prospects. This shift will ultimately help bring\n\n$2.25 billion in cash and drilling carries for its 25% stake in the Barnett, and we are extremely proud to have Total as one of our premier joint venture partners.\n\nHaynesville Shale - The Haynesville Shale in Northwest Louisiana and East Texas is the shale play of which we are most proud (to date) because it was discovered by Chesapeake's own geoscientists and engineers. We conducted our geoscientific investigation of the Haynesville in 2005-06 and tested our theories through drilling in 2007. In 2008 we formed an innovative joint venture agreement with our well-respected industry partner, Houston-based Plains Exploration & Production Company, to which we sold 20% of our Haynesville (and Bossier) assets for approximately $3.2 billion in cash and drilling carries.\n\nThe Haynesville Shale is now the nation's largest producing natural gas shale play, having just recently passed the Barnett Shale in production (in last year's letter, I incorrectly estimated it would take until 2014 for the Haynesville to reach this achievement, a testament to the play's enormous productive potential). Ultimate recoveries from the Haynesville could exceed 250 tcfe, likely making it one of the five largest natural gas fields in the world. Today, we are producing from more than 260 net wells in the Haynesville on our 530,000 net leasehold acres, are currently drilling with 35 rigs and estimate we could drill up to 6,300 additional net wells in the years ahead. 
Our gross operated production in the Haynesville recently set a record of nearly 1.6 bcfe per day.\n\n", - "page_start": 9, - "page_end": 9, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "change in CNT resistivity may then be obtained from the calculated coverages and single impurity conductances.\n\nWe find that oxidation of the active metal site passivates the sensor in the case of doping by Ti, V, Cr, and Mn under standard conditions (room temperature and 1 bar of pressure). Among the remaining metals, we identify Ni as the most promising candidate for CO detection. For this system the change in resistance per active site is generally significant ( > 1 Ω ) for small changes in CO concentration in the relevant range of around 0.1-10 ppm. Our approach is quite general and is directly applicable to other nanostructures than CNTs, other functionalizations than metal doping, and other backgrounds than atmospheric air.\n\nAll total energy calculations and structure optimizations have been performed with the real-space density functional theory (DFT) code GPAW [22] which is based on the projector augmented wave method. We use a grid spacing of 0.2 Å for representing the density and wave functions and the PBE exchange correlation functional [23]. Transport calculations for the optimized structures have been performed using the nonequilibrium Green's function method [24] with an electronic Hamiltonian obtained from the SIESTA code [25] in a double zeta polarized (DZP) basis set. Spin polarization has been taken into account in all calculations.\n\nMetallic doping of a (6,6) CNT has been modeled in a supercell containing six repeated minimal unit cells along the CNT axis (dimensions: 15 Å × 15 Å × 14.622 Å). For this size of supercell a Γ-point sampling of the Brillouin zone was found to be sufficient. 
The formation energy for creating a vacancy (VC) occupied by a transition metal atom (M) was calculated using the relation\n\nE form [ M @ VC ] = E [ M @ VC ] + nE [ C ] − E [ M@NT ] (1)\n\nwhere E [M@VC] is the total energy of a transition metal atom occupying a vacancy in the nanotube, n is the number of carbon atoms removed to form the vacancy, E [C] is the energy per carbon atom in a pristine nanotube, and E [M@NT]", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2538.pdf" - }, - { - "text": "Jeff Mobley Senior Vice President -\n\n\n\nInvestor Relations and Research\n\ncurrent price disparity between natural gas and oil will increasingly lead to greater use of natural gas in the U.S. transportation system. Whether it be compressed natural gas (CNG) for medium and light-duty vehicles, LNG for heavy-duty vehicles or the commercialization of gas-to-liquids (GTL) natural gas refineries that supplement the U.S. liquid fuel supply stream, we believe that the marketplace will increasingly utilize and embrace natural gas. Chesapeake is working with industry, public policymakers and potential partners on each of these demand reinvention opportunities. Natural gas is clean, affordable, abundant and American. Why shouldn't it trade at a BTU premium in the years ahead?\n\nNick Dell'Osso\n\n\n\nExecutive Vice President and Chief Financial Officer\n\n## Why is an investment grade rating on its debt securities important to CHK?\n\nWe believe that Chesapeake will benefit in multiple ways from an investment grade rating on our debt securities, which we hope to achieve in 2012 or 2013. First, a higher rating would obviously lower the company's borrowing costs over time. In addition, other less easily quantifiable benefits will also accrue to Chesapeake. 
Higher debt ratings would result in lower costs on long-term firm transportation contracts that we enter into in order to market our natural gas and oil production as well as facilitate our ability to enter into long-term contracts to sell our natural gas production to international buyers in the form of LNG. An improved rating will also enhance Chesapeake's ability to further attract world-class energy companies to participate in our joint venture projects, which profitably monetize a portion of our leasehold investments and also accelerate the development of our resource base. Finally, and perhaps most importantly, we believe that reduced financial leverage and an investment grade rating will lead to a higher stock price and provide further interest from worldwide equity investors.", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "FIG. 1: Structural schematics and formation energy for a 3d transition metal occupied monovacancy (black), divacancy I (gray), or divacancy II (white) in a (6,6) carbon nanotube. Formation energies of the empty vacancies are indicated by dashed lines.\n\n\n\nis the total energy of the pristine nanotube with a physisorbed transition metal atom. We have considered the monovacancy and two divacancies shown in Fig. 1. The energy required to form an empty vacancy is obtained from\n\nE form [ VC ] = E [ VC ] + nE [ C ] − E [ NT ] , (2)\n\nwhere E [VC] is the total energy of the nanotube with a vacancy of n atoms.\n\nThe calculated formation energies for the 3d transition metals are shown in Fig. 1. From the horizontal lines we see that both divacancies are more stable than the monovacancy. This may be attributed to the presence of a two-fold coordinated C atom in the monovacancy, while all C atoms remain three-fold coordinated in the divacancies. When a transition metal atom occupies a vacancy, the strongest bonding to the C atoms is through its d orbitals [26]. 
For this reason, Cu and Zn, which both have filled d-bands, are rather unstable in the CNT. For the remaining metals, adsorption in the monovacancies leads to quite stable structures. This is because the three-fold coordination of the C atoms and the CNT's hexagonal structure are recovered when the metal atom is inserted. On the other hand, metal adsorption in divacancies is slightly less stable because of the resulting pentagon defects, see upper panel in Fig. 1. A similar behaviour has been reported by Krasheninnikov et al. for transition metal atoms in graphene [21].\n\nThe adsorption energies for N2, O2, H2O, CO, NH3, and H2S on the metallic site of the doped (6,6) CNTs are shown in Fig. 2(a). The adsorption energy of a molecule X is defined by\n\nE ads [ X @M@VC ] = E [ X @M@VC ] -E [ X ] -E [ M@VC ] , (3)\n\nFIG. 2: Calculated (a) adsorption energy E ads in eV and (b) change in conductance ∆ G in units of G 0 = 2 e 2 /h for N2, O2, H2O, CO, NH3, and H2S on 3d transition metals occupying a monovacancy (top), divacancy I (middle), and divacancy II (bottom) in a (6,6) carbon nanotube.\n\nwhere E [ X @M@VC] is the total energy of molecule X on a transition metal atom occupying a vacancy, and E [ X ] is the gas phase energy of the molecule.\n\nFrom the adsorption energies plotted in Fig. 2(a), we see that the earlier transition metals tend to bind the adsorbates stronger than the late transition metals. The latest metals in the series (Cu and Zn) bind adsorbates rather weakly in the divacancy structures. We also note that O2 binds significantly stronger than any of the three target molecules on Ti, V, Cr, and Mn (except for Cr in divacancy I where H2S is found to dissociate). Active sites containing these metals are therefore expected to be completely passivated if oxygen is present in the background. Further, we find H2O is rather weakly bound to most of the active sites. 
This ensures that these types of sensors are robust against changes in humidity.\n\nIn thermodynamic equilibrium [27], the coverage of the active sites follows from\n\nΘ[ X ] = K [ X ] C [ X ] / ( 1 + ∑ Y K [ Y ] C [ Y ] ) , (4)\n\nwhere K = k + /k - is the ratio of forward and backward rate constants for the adsorption reaction,\n\nK [ X ] = exp [ -( E ads [ X ] + TS [ X ] ) / ( k B T ) ] . (5)\n\nIn these expressions C [ X ] is the concentration of species X , S [ X ] is its gas phase entropy and T is the temperature. Experimental values for the gas phase entropies have been taken from Ref. [28].",
        "page_start": 1,
        "page_end": 1,
        "source_file": "1001.2538.pdf"
      },
      {
        "text": "## IT'S LOGICAL\n\nMermaid Marine Australia Limited's future is now inextricably linked to the oil and gas industry. Oil continues to be the main attraction to explorers because of its lower infrastructure costs, early cash flow and easier marketing, but it is gas, which is emerging as the premier fuel.\n\nGas is clean, portable, in massive supply and the environmental answer to so many of today's atmospheric problems. Together with its sister product, condensate it is also the dominant feedstock for the petrochemical industry. In Australia . . . gas has a very, very great future .\n\nFrom all sources, our country consumes or exports a modest total of approximately one trillion cubic feet of gas each year. 
Gas is found in a number of places around Australia, but over 90% of national reserves are found offshore of northwestern Australia, where current estimates of gas in place comfortably exceed 100 trillion cubic feet .\n\nThese reserves were effectively found as a co-product in the search for oil, yet on today's economics, only one trillion cubic feet of gas with the appropriate production infrastructure in place is estimated to be worth $A5 billion dollars .\n\nTherefore it may be simplistic, but true, that the undiscounted value of the gas in Australia's northwest, is so far worth a staggering $500 billion dollars and rising .\n\nDespite these mind bending numbers, existing production taps far less than half of 1% of this huge resource each year, a resource which increases almost by accident, from exceedingly modest levels of exploration for oil.\n\nWe at Mermaid recognise that these are solid and compelling reasons to focus our attention unwaveringly on this industry, within this region. To build the company's seagoing assets and strategically blessed shore bases at Dampier, Broome and Darwin with all speed. 
To lift our professional expertise and productive capability to meet what we have assessed to be an amazing and highly exclusive opportunity.", - "page_start": 4, - "page_end": 4, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## UNLOCKING THE VALUE OF STRATEGIC ASSETS\n\n\n\n'Our objective is to derive value from undeveloped assets which have been outside of Santos' base business.'\n\n## BRUCE WOOD\n\nVice President Strategic Projects\n\nSantos' Strategic Projects team focuses on assets that have proven difficult to commercialise or that need to be considered in a regional context rather than on an individual basis.\n\nThe other key activity for this team has been to lead Santos' continuous improvement focus.\n\n## UNITED STATES GAS\n\nThe US gas business was a major focus in 2004 for a number of reasons, not the least of which are the higher gas prices in the US compared with the domestic Australian market, and the ability to rapidly commercialise new discoveries.\n\nAn ongoing development and delineation program was carried out during the year, yielding better than planned production. The exploration initiative also continued to seek higher risk but more material prospects, aimed at enhancing the move into the shallow water area of the Gulf of Mexico. Exploration results in this area during 2005 will shape Santos' future strategy in the US.\n\n## TIGHT GAS\n\nHydrocarbons contained in traps with poor permeability are known as 'tight gas'. Large tight gas resources are known to exist in the Cooper Basin. 
Under current circumstances, this gas cannot be economically developed but, with the combination of improved production techniques and better commercial terms, could prove attractive.\n\nSantos assessed the resources and potential technologies that could be applied to unlock these resources during 2004 and is now\n\nworking up a range of possible evaluation projects to be undertaken in 2005.\n\n## NORTHERN AUSTRALIA GAS\n\nSantos has a significant existing gas resource base and some promising exploration acreage in the waters offshore Darwin, where it intends to drill a gas exploration well later this year.\n\nThe Company currently operates the Mereenie gas field in the Amadeus Basin in central Australia, which supplies gas to Darwin. Santos' first offshore gas production in northern Australia begins in 2006, sending BayuUndan gas to Darwin for conversion to LNG. Santos plans to build upon its growing position in the region to target further development which could ensure long-term gas supplies for the current market, or an expanded Northern Territory domestic market, or for export.\n\n## PAPUA NEW GUINEA GAS\n\nSantos is in active discussions with the PNG Gas Project participants to potentially re-enter the PNG Gas Project. Santos has a significant interest in a large part of the liquids-rich Hides gas field which is integral to the development of the Project.\n\n2004 CONTINGENT RESOURCES (TOTAL 1,443 mmboe)\n\n\n\n - Northern Australia 709 mmboe\n\nWestern Australia\n\n71 mmboe\n\nCentral Australia 240 mmboe\n\n - Southern Australia 32 mmboe\n - Papua New Guinea 391 mmboe\n\n\n\n\n\n", - "page_start": 23, - "page_end": 23, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "FIG. 3: Fractional coverage Θ in thermal equilibrium of Ni in a (a) monovacancy, (b) divacancy I, (c) divacancy II and (d) change in resistance ∆ R per dopant site as a function of CO concentration in a background of air at room temperature and 1 bar of pressure. 
The reference concentration of CO is taken to be C 0 = 0.1 ppm. Note the change from linear to log scale on the y -axis at ∆ R = 10 Ω .\n\n\n\nFor a given background composition we may thus estimate the fractional coverages for each available adsorbate for a given type of doping. As an example, Fig. 3(a)-(c) shows the fractional coverage of a Ni atom occupying a monovacancy, divacancy I, and divacancy II, versus CO concentration in a background of air at room temperature and 1 bar of pressure. Due to the relatively small binding energy of N2 and H2O as compared to O2 and CO, all Ni sites will be either empty or occupied by O2 or CO. In particular, Ni in a monovacancy (top panel of Fig. 3) will be completely oxidized for all relevant CO concentrations. For the Ni occupied divacancy II structures we find the coverage of CO changes significantly around toxic concentrations ( ∼ 10 ppm).\n\nTo estimate the effect of adsorbates on the electrical conductance of doped CNTs, we first consider the change in conductance when a single molecule is adsorbed on a metal site of an otherwise pristine CNT. In Fig. 2(b) we show the calculated change in conductance relative to the metal site with no adsorbate. In contrast to the binding energies, there are no clear trends in the conductances. The sensitivity of the conductance is perhaps most clearly demonstrated by the absence of correlation between different types of vacancies, i.e. between the three panels in Fig. 2(b). Close to the Fermi level, the conductance of a perfect armchair CNT equals 2 G 0 . The presence of the metal dopant leads to several dips in the transmission function known as Fano antiresonances [20]. The position and shape of these dips depend on the d -levels of the transition metal atom, the character of its bonding to the CNT, and is further affected by the presence of the adsorbate molecule. 
The coupling of all these factors is very complex and makes it difficult to estimate or rationalize the value of the conductance. For the spin polarized cases, we use the spin-averaged conductances, i.e. G = ( G ↑ + G ↓ ) / 2.\n\nNext, we estimate the resistance of a CNT containing several impurities (a specific metal dopant with different molecular adsorbates). Under the assumption that the electron phase-coherence length, l φ , is smaller than the average distance between the dopants, d , we may neglect quantum interference and obtain the total resistance by adding the scattering resistances due to each impurity separately. The scattering resistance due to a single impurity is given by\n\nR s ( X ) = 1 /G ( X ) -1 / ( 2 G 0 ) , (6)\n\nwhere G ( X ) is the Landauer conductance of the pristine CNT with a single metal dopant occupied by molecule X and 1 / ( 2 G 0 ) is the contact resistance of a (6,6) CNT.\n\nWe may now obtain the total resistance per dopant site relative to the reference background signal as a function of the target molecule concentration\n\n∆ R/N ≈ ∑ X R s ( X )(Θ[ X,C ] -Θ[ X,C 0 ]) , (7)\n\nwhere N is the number of dopants, Θ[ X,C ] is the fractional coverage of species X at concentration C of the target and C 0 is the reference concentration. 
Notice that the contact resistance drops out as we evaluate a change in resistance.",
        "page_start": 2,
        "page_end": 2,
        "source_file": "1001.2538.pdf"
      }
    ]
  },
  {
    "references": {
      "source_file": "1001.2648.pdf",
      "query": "What is the source of inaccuracy of the MSA3 model at high ionic concentrations ?",
      "target_page": 3,
      "target_passage": "At high concentration (about 1 mol l−1), the MSA3 overestimates the free energy",
      "chunk_present": {
        "presence": false,
        "index": null
      }
    },
    "top_chunk": [
      {
        "text": "## Models of electrolyte solutions from molecular descriptions: The example of NaCl solutions\n\nJohn Jairo Molina 1 , 2 , 3 , ∗ Jean-François Dufrêche 1 , 2 , 3 , † Mathieu Salanne 1 , 2 , Olivier Bernard 1 , 2 , Marie Jardat 1 , 2 , and Pierre Turq 1 , 2 1 UPMC-Université Paris 06, UMR 7195, PECSA, F-75005 Paris, France 2 CNRS, UMR 7195, PECSA, F-75005 Paris, France 3 Institut de Chimie Séparative de Marcoule (ICSM), UMR 5257 CEA-CNRS-Université Montpellier 2, Site de Marcoule,\n\nBâtiment 426, BP 17171, 30207 Bagnols-sur-Cèze Cedex, France\n\nWe present a method to derive implicit solvent models of electrolyte solutions from all-atom descriptions; providing analytical expressions of the thermodynamic and structural properties of the ions consistent with the underlying explicit solvent representation. Effective potentials between ions in solution are calculated to perform perturbation theory calculations, in order to derive the best possible description in terms of charged hard spheres. Applying this method to NaCl solutions yields excellent agreement with the all-atom model, provided ion association is taken into account.\n\nSince the pioneering works of Debye, Hückel, and Onsager, electrolyte solutions have been commonly described by continuous solvent models, for which the McMillan-Mayer theory [1] provides a rigorous statistical-mechanical foundation. 
Within that level of description, simple phenomenological models such as the primitive model (PM), for which the ions are assimilated to charged hard spheres [2], can lead to explicit formulas for the thermodynamic and structural properties (e.g., with the help of the mean spherical approximation (MSA) [3] or the binding MSA (BIMSA) [4]). These models are the most practical to use [5], since they allow for a direct link between the experimental measurements and the microscopic parameters of the system. Nevertheless, they ignore the molecular structure of the solvent. Consequently, they cannot properly account for the complex specific effects of the ions, which appear in numerous biological, chemical, and physical interfacial phenomena [6, 7], without further developments.\n\nAn alternative procedure consists in carrying out molecular simulations, where both the solvent and solute are treated explicitly. After a rigorous averaging over the solvent configurations, a coarse-grained description of the ions, which still includes the effect of the solvent structure, can be obtained [8-11]. However, this set of methods is purely numeric; they do not provide any analytical expression for thermodynamic quantities. They are therefore restricted to simple geometries [12, 13] (bulk solutions or planar interfaces). The description of complex systems, such as porous or electrochemical materials, is still based on continuous solvent models [14].\n\nIn this letter we present a method aimed at bridging the gap between analytical and numerical approaches. It is based on the application of liquid perturbation theory (LPT) [15] to effective ion-ion potentials extracted from\n\nmolecular dynamics (MD) results. Different approximations of the PM are employed for the case of NaCl electrolyte solutions: a two component model (MSA2), that only takes free ions into account, and two different three component models (MSA3 and BIMSA3), which include a third species (the contact ion pair). 
As we proceed to show, LPT allows us to select the best simple model which accurately accounts for the thermodynamics and the physical-chemistry of the system.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2648.pdf" - }, - { - "text": "Rather than using the original CMIP5 ensemble as in previous studies, the aim is to allow for an improved representation of atmospheric and land surface processes including extremes by using higher spatial resolution [11].\n\nHadGEM3 (Hadley Centre Global Environment Model version 3) is a configuration of the UK Met Office Unified Model (MetUM) which has been developed for use for both climate research and weather prediction applications. It is the result of converging the development of the Met Office's weather and climate global atmospheric model components so that, where possible, atmospheric processes are modelled or parametrized seamlessly across spatial resolutions and timescales.\n\nThe high-resolution simulations were performed using the HadGEM3A Global Atmosphere (GA) 3.0 model [12-14] at a resolution of N216 (0.556° of latitude by 0.833° of longitude with gridboxes of approx. 60 km length in mid-latitudes). This is the atmospheric component of the HadGEM3-GC2 coupled climate model [15,16], which is part of the HadGEM3 family of climate models [12]. This represents the third generation of HadGEM configurations, leading on from the HadGEM2 family of climate model configurations [13] which was used for CMIP5. Key improvements over the previous model, HadGEM2, include increased vertical levels in the atmosphere (85 compared to 38) and substantial changes to the model dynamics (ENDGame) [17]. This version of the HadGEM3 model lies in the transition from CMIP5 to CMIP6 versions. 
The Met Office is currently operationally running the coupled HadGEM3-GC2 model at N216 resolution for seasonal and decadal forecasting and clear benefits are emerging from this use at higher resolution [18,19].\n\nWe ran the model using only its atmosphere and land components, with time-varying seasurface temperatures (SSTs) and sea-ice concentrations (SICs) prescribed as input quantities. This approach was taken for two reasons: (i) to provide a rapid first analysis of the implications of the higher resolution for projections of climate extremes and impacts-an atmosphereonly simulation requires considerably less computing time than a coupled ocean-atmosphere general circulation model (GCM); (ii) to allow us to explore, to some degree, uncertainties in regional climate changes by using SSTs and SICs from different climate models. To explore these uncertainties in the regional impacts of climate change, we carried out six HadGEM3 atmospheric simulations driven by time-varying SSTs and SICs from a subset of projections from the CMIP5 with the RCP8.5 scenario. The assumption here is that SSTs and SICs provide a substantial influence on regional patterns of climate change over land, so using a range of SST and SIC patterns in a single atmosphere model goes some way towards representing the range of regional climate changes that would arise in a set of different coupled ocean-atmosphere GCMs. This approach will not capture the full range of uncertainty affecting regional climate changes over land, because it still relies on one atmosphere model and one land surface scheme, so responses to radiative forcing that depend mainly on atmospheric process or land-atmosphere interactions will still be constrained by the behaviour of that single model. 
Nevertheless, we consider that our experimental design avoids the reliance on one single realization of climate and hence allows some of the uncertainties in regional climate-change impacts to be illustrated and explored.", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed11.pdf" - }, - { - "text": "FIG. 1: Effective McMillan-Mayer short-range pair potentials extracted from explicit solvent simulations using the HNC closure. (a) Cation anion, (b) cation cation, (c) anion anion, (d) cation anion RDF obtained from explicit solvent MD and implicit solvent MC simulations.\n\n\n\npute all ion thermodynamic properties through implicit solvent MC simulations.\n\nThe second stage of our coarse-graining procedure consists in applying LPT, in order to deduce the best analytical model of electrolyte solutions which reproduces this molecular description. The principle of LPT is to describe the properties of a given system in terms of those of a well known reference system, with the difference between them treated as a perturbation in the reference potential. Assuming pairwise additive potentials, V ij = V (0) ij + ∆V ij , a first-order truncated expression for the free energy density of the system βf v is obtained,\n\nβf v /lessorsimilar βf (0) v + 1 2 β ∑ i,j ρ i ρ j ∫ d r g (0) ij ( r ) ∆V ij ( r ) (1)\n\nwhich depends only on the free-energy density f (0) v and RDF g (0) of the reference fluid, with β = ( k B T ) -1 and ρ i the concentration of species i . The Gibbs-Bogoliubov inequality [15] ensures that the right-hand side of Eq. (1) is actually a strict upper bound. Once a reference system has been chosen, the expression on the right-hand side of Eq. (1) must be minimized with respect to the parameters defining the reference. 
This procedure yields the best first-order approximation to the free energy of the system under consideration.\n\nFor a system of charged particles in solution, the natural reference is the PM, defined in terms of the charge and diameter ( σ i ) of each species. In this case, the perturbing potentials are just the short-range effective potentials computed above (∆ V ij = V SR ij ). We use the MSA [3] solution to the PM, since it provides analytical expressions for both the free energy and the RDF. The perturbation term is evaluated using an exponential approximation to the RDF obtained within the MSA, g ( r ) = exp [ g MSA ( r ) -1], which removes any unphysical negative regions and improves the comparison with HNC calculations.\n\nFIG. 2: (Color online) (a) Osmotic coefficient Φ in the McMillan-Mayer frame of reference. (diamond) MC simulations, (dot dashed) MSA2, (dot) Debye-Hückel Limiting law (DHLL), (cross) experiments (Ref. [18] with the McMillan-Mayer to Lewis Randall conversion). (b) Minimization diameters. (dot dashed) MSA2 and (diamond) MSA-fit.\n\n\n\nWe first used LPT for a two-component system (Na + and Cl - free ions) within the MSA (model MSA2), for concentrations ranging from 0.1 to 2.0 mol l-1 . The minimization leads to almost constant diameters on the whole range of concentration: σ 1 = 3.67 Å and σ 2 = 4.78 Å. As shown in Fig. 2, these parameters yield osmotic coefficients close to MC calculations only at very low concentration, i.e., c ≤ 0.1 mol l-1 (experimental values are given for indicative purposes only, since a perfect model will exactly match the MC results). For molar solutions, the LPT results differ considerably from MC calculations. This discrepancy can easily be understood by comparing the diameters found within the MSA2 calculation with the effective potentials given in Fig. 1. The anion/cation contact distance obtained within the MSA2 calculation is 4 . 
2 ˚ A, which is in the region of the second minimum of the effective potential and corresponds to the situation where there is a single layer of water molecules between the ions. The first minimum of the potential, which corresponds to the contact ion pair (CIP) is thus completely ignored by the MSA2 calculation. If the MSA diameters are directly fitted to reproduce the MC osmotic pressure, much smaller values are obtained. These MSA-fit hydrated diameters, which are compared to the MSA2 diameters in the bottom part of Fig. 2, are averages of the CIP and the solvent-separated ion pair.", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2648.pdf" - }, - { - "text": "ing the temporal dynamics of belief changes in experimental participants. Dynamic belief trajectories can then be related to other (for example, physiological) measures, as is usual in model-based neuroscience [65]. This method can also, in principle, be used for fitting models to other types of experimentally observable systems, like animals, organoids [66], and simulated or emergent systems [67]. The package can also be used for agent-based modelling in general, for repeating earlier analyses with sampling based model-fitting and for comparing POMDP-based AIF models directly to other types of models.\n\nSince they implement full approximate Bayesian inferences, AIF models are computationally more demanding than many approaches traditionally used in cognitive and agent-based modelling, in particular when the dimensionality of the generative model is large. This means that models with highly multidimensional or complex behaviour and large numbers of agents can be computationally infeasible to implement, especially given the additional computational demands introduced by fitting these models to empirical data. 
Avenues for addressing this implicit scaling problem were proposed in the context of machine learning applications [68,69], and with the use of simplifying assumptions-the use of which are ubiquitous in computational modelling-AIF has been used to model multi-agent phenomena, such as opinion dynamics [15,70], coordinated foraging [71] and fish school movements [12]. It remains to be explored how AIF models can be applied to highly complex natural phenomena, such as a concrete election, which underscores the need for efficient but flexible and accessible software tools in the field.\n\nThere are many ways in which ActiveInference can be improved. It would be useful to extend the set of dynamic belief states to include prediction errors since they are often used for model-based neuroscience. This would entail departing from discrete state-space (i.e., POMDP) models to consider continuous state-space models apt for Bayesian filtering or predictive coding (see below). An alternative would be to generate prediction errors from belief updating under discrete models, where prediction errors can be read as the (KL) divergence between posterior and prior beliefs (i.e., complexity or information gain). A simple interface could be added for creating custom parametrisations of the requisite parameters that could be parametrised with Boltzmann or Gibbs distributions, as opposed to Dirichlet distributions. Parameter learning could be extended to all generative model parameters, as well as in parametrised forms (e.g., so that the Boltzmann parameter or temperature of the parameters that are learned); similarly for the precision over expected free energies γ . Preference priors should also be implementable for environmental states, in addition to observations, and A can be made action dependent.", - "page_start": 28, - "page_end": 28, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "Figure 1. Depiction of a POMDP generative model. 
This encodes the agent's expectations about how the state s of the environment changes over time t , and how it generates observation o at each time step. A , also called the observation model, describes how environmental states give rise to observations. B , also called the transition model, describes how environmental states change over time, depending on action u (called policy π when structured into sequences). C is the preference prior, which encodes the agent's preferences for observations. This shapes the expected free energy G associated with each policy, which is used for policy selection. D encodes the agent's prior belief over environmental states before making any observations, and E is the prior over policies that determines the agent's preferences for policies in the absence of other motivation.\n\n\n\n## 2.2. Perception in Active Inference\n\nIn AIF, perception is conceptualised as the result of variational (i.e., approximate) Bayesian inference, performed by minimising the VFE to optimise parameters of posterior beliefs about the environment. In exact Bayesian inference, we use a parametrised generative model m to make an optimal inference about state s of the environment based on observation o . This is performed by combining a prior belief over states p ( s | m ) ; a likelihood model p ( o | s , m ) ; and the model evidence p ( o | m ) , a normalisation term encoding the likelihood of receiving the given observations across all possible environmental states, as follows [1]:\n\np ( s | o , m ) = p ( o | s , m ) p ( s | m ) / p ( o | m ) (1)\n\nThe posterior distribution over states given observations p ( s | o , m ) here represents the agent's beliefs about the environment. Forming beliefs in this way is thought to be the process that enables conscious, as well as unconscious, perception. 
The product of the likelihood model and prior is also called the joint likelihood p ( o , s | m ) , which fully defines the generative model, and which we use henceforth. In the following, for notational simplicity, we also omit denoting the dependency on the generative model m .\n\nCalculating the model evidence p ( o ) is often intractable, making exact Bayesian inference unfeasible. The way to circumvent this in AIF is to use a variational approximation to Bayesian inference [23,33,50,51]. This works by transforming the inference into an optimisation problem, specifically the minimisation of the VFE . First, an arbitrary probability distribution over environmental states q ( s ) , an approximate posterior that is used to approximate the exact posterior, is introduced. We then introduce the Kullback-Leibler (KL)", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "To overcome this difficulty, we have explicitly introduced the CIP in our model (species 3). Straightforward calculations, based on a characteristic-function formalism, allow us to define an equivalent model in which the free ions and the CIP are explicitly taken into account [19, 20]. We apply this formalism by defining a pair as an anion and a cation at a distance less than 4 ˚ A, which corresponds to the position of the effective potential maximum. The interaction between free, like charges in this new system remains unchanged, and the cation-anion interactions are easily approximated by ex-", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2648.pdf" - }, - { - "text": "The first stage consists in calculating the McMillanMayer effective ion-ion interaction potentials V eff ij ( r ), by inverting the radial distribution functions (RDF) g ij ( r ) obtained by MD. The simulations were carried out on a box of 2000 water molecules and 48 NaCl pairs using the same interaction potentials as in reference [16]. This setup corresponds to a concentration of 0 . 
64 moll -1 . NPT ensemble sampling at standard pressure and temperature was enforced, with a time step of 1 fs and a pressure bath coupling constant of 1 ps. An equilibration run of 0.25 ns was followed by a production run of 0.6 ns for five different initial configurations. The averages of the resulting RDF were then used for the potential inversion via the HNC closure [15]. These effective potentials are assumed to be concentration independent and will be used for simulations at all concentrations.\n\nSubtracting the long-range Coulombic potential V LR ij ( r ) (which depends on the dielectric constant of the solvent) from V eff ij ( r ), we obtain the short-range contribution V SR ij ( r ) to the effective potentials. These are given in Fig. 1 (species 1 and 2 refer to Na + and Cl -free ions, respectively). All the short-range potentials exhibit oscillations corresponding to the solvent layering between the ions, but this effect is particularly important for the cation-anion interaction: a considerable potential barrier ( /greaterorsimilar 2 k B T ) separates the first two attractive wells. To serve as a reference, Monte Carlo (MC) simulations were performed with these effective potentials; a comparison between MD and MC RDF is also provided in Fig. 1. The excellent agreement between both sets of RDF validates the HNC inversion procedure [17], and allows us to com-", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2648.pdf" - }, - { - "text": "Figure 1. A schematic illustration of a hierarchical active inference model. This model links (exteroceptive, interoceptive, and proprioceptive) sensations at lower levels with multimodal models of hidden bodily states, such as fatigue and hunger, at intermediate levels, and finally with temporally extended, integrative models of the embodied self at the higher hierarchical level. 
In this schematic, following predictive coding (Rao and Ballard 1999, Friston 2005), black and red circles represent neural units that encode predictions and prediction errors, respectively. The levels are reciprocally connected, so predictions are propagated from the top-down (black edges) and prediction errors from the bottom-up (red edges). Finally, the pink triangles indicate a mechanism of precision gating (or gain control) of prediction error units, which determines their relative influence on units encoding predictions. At a neurobiological level, prediction and prediction error units could be mapped to deep and superficial pyramidal cells in cortical hierarchies, whereas expected precision could be linked to neuromodulatory input. The elements of the generative model shown do not need to map one-to-one to specific brain areas or networks but are plausibly distributed across many of them. However, as a first approximation, the lower and intermediate layers of the generative model could be linked to brain networks that process unimodal information (e.g. sensory cortices for exteroceptive information) and multimodal association areas, respectively. The highest level of the generative model could be linked to brain networks that process information about the self, such as the insular cortex, the anterior cingulate cortex, and the medial prefrontal cortex. See Parr et al. (2022) for details about hierarchical generative models supporting adaptive regulation and allostasis and Barrett and Simmons (2015) for their putative neuronal underpinnings. See online article for colored version of this figure.\n\n\n\nare reciprocally linked through top-down connections that convey predictions (black edges) and bottom-up connections that convey prediction errors (red edges), within and across levels. 
This predictive coding architecture permits inferring (in the Bayesian sense) the most likely causes of sensations, across multiple modalities and multiple hierarchical levels, by minimizing prediction errors at all levels. The rationale is that predictions at all levels are continuously adjusted (and synaptic weights adjusted at a slower time scale) until they match with incoming multimodal stimuli sufficiently well, and, consequently, the prediction errors across all levels are minimized. This process entails that even if a predictive coding agent starts with an incorrect prediction (e.g. about what object it is looking at) the prediction errors that measure a discrepancy between the predicted sensations and the actual sensations can help revise the initial predictions. See Parr et al. (2022) for a more detailed explanation of how to interpret these schematics.\n\nAnother critical aspect of Fig. 1 is that it illustrates two pathways in which prediction errors at the proprioceptive and interoceptive levels are used to steer physical actions (reflex arcs) and autonomic actions (autonomic reflexes). Endowing predictive coding with these reflexes-hence realizing an 'active inference' architecture-permits minimizing prediction errors by changing the state of the world (by physically acting) or the internal milieu (by engaging in autonomic actions) rather than only by changing predictions, as described later.", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed1.pdf" - }, - { - "text": "## ANNEX III\n\n -  Model for specific contracts\n -  Model for order forms", - "page_start": 41, - "page_end": 41, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "1\n\nFIG. 11: The evolution of the optical integral in the NS (top) and the SCS (bottom) in the original MFLI model. Parameters are the same as above. Note that only ∼ 75 -80% of the spectral weight is recovered up to 1 eV .\n\n\n\nFIG. 
12: Evolution of the difference of the optical integrals in the SCS and the NS with the upper cut-off ω c . Parameters are the same as before. Observe that the optical sum in the SCS is larger than in the NS and that ∆ W has not yet reached ∆ W K up to the bandwidth. The dashed line is the FGT result.\n\n\n\nc\n\nThis clearly affects n k because it is expressed via the full Green's function and competes with the conventional effect of the gap opening. The distribution function from this model, which we show in Fig.2b brings this point out by showing that in a MFLI model, at /epsilon1 < 0, n k in a superconductor is larger than n k in the normal state, in clear difference with the BCSI case.\n\nWe analyzed the original MFLI model for various parameters and found that the behavior presented in Fig. 12, where ∆ W ( ω c ) > 0 for all frequencies, is typical but\n\nFIG. 13: Behavior of W K with Γ for the original MFLI model at very small α = 0 . 05. We set ω 1 = ∆ = 32 meV . Observe the inconsistency with W K in the BCSI model in Fig 4.\n\n\n\nFIG. 14: The special case of α = 1 . 5,Γ = 5 meV , other parameters the same as in Fig. 10. These parameters are chosen to illustrate that two sign changes (indicated by arrows in the figure) are also possible within the original MFLI model.\n\n\n\nnot not a generic one. There exists a range of parameters α and Γ where ∆ W K is still positive, but ∆ W ( ω c ) changes the sign twice and is negative at intermediate frequencies. We show an example of such behavior in Fig14. Still, for most of the parameters, the behavior of ∆ W ( ω c ) is the same as in Fig. 12.\n\nOn more careful looking we found the problem with the original MFLI model. We recall that in this model the self-energy in the SCS state was obtained by just cutting the NS self energy at ω 1 (see Eq.18). We argue that this phenomenological formalism is not fully consistent, at least for small α . 
Indeed, for α = 0, the MFLI model reduces to BCSI model for which the behavior of the selfenergy is given by Eq. (12). This self-energy evolves with ω and Σ '' has a square-root singularity at ω = ∆ + ω o (with ω o = 0). Meanwhile Σ '' in the original MFLI model in Eq. (18) simply jumps to zero at ω = ω 1 = ∆, and this happens for all values of α including α = 0 where the MFLI and BCSI model should merge. This inconsistency is reflected in Fig 13, where we plot the near-BCS limit of MFLI model by taking a very small α = 0 . 05. We see that the optical integral W K in the SCS still remains larger than in the NS over a wide range of Γ, in clear difference with the exactly known behavior in the BCSI", - "page_start": 8, - "page_end": 8, - "source_file": "1001.0764.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20210582_en.pdf", - "query": "In the health regulation regarding coronavirus, what is considered a \"device\" ?", - "target_page": 3, - "target_passage": "means an in vitro diagnostic medical device within the meaning given in regulation 2(1) of the Medical Devices Regulations 2002", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "- (a) for the purpose of carrying out a function under these Regulations;\n - (b) for the purpose of-\n - (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,\n - (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n - (iii) giving effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or\n - (c) for a purpose connected with, or otherwise incidental to, a purpose described in subparagraph (a) or (b).\n - (4) Subject to paragraph (7), A may only disclose relevant information to another person (the 'recipient') where it is necessary for the recipient 
to have the information -\n - (a) for the purpose of carrying out a function of the recipient under-\n - (i) these Regulations, or\n - (ii) an enactment which, in Scotland, Wales or Northern Ireland, has the effect of requiring the isolation or quarantine of persons who have been outside the common travel area, for any of the purposes described in sub-paragraph (b);\n - (b) for the purpose of-\n - (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,", - "page_start": 28, - "page_end": 28, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n - (iii) giving effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or", - "page_start": 28, - "page_end": 28, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- 2. 
-(1) The Health Protection (Coronavirus, Restrictions) (Self-Isolation) (England) Regulations 2020( a ) are amended as follows.\n - (2) In regulation 2D(1)(c), for 'regulation 4 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'regulation 9 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.\n - (3) In regulation 6(1)-\n - (a) in the definitions of 'designated place', 'isolation requirements' and 'self-isolating worker', for 'regulation 4' substitute 'regulation 9';", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (2) The coronavirus exception applies where it is not reasonably practicable for the local authority to meet the requirement specified in regulation 11(2)(a) for a reason relating to the incidence or transmission of coronavirus.'.\n\n## Amendment of the Special Educational Needs and Disability (Detained Persons) Regulations 2015\n\n - 18. The Special Educational Needs and Disability (Detained Persons) Regulations 2015( a ) are amended as follows.\n - 19. In regulation 2(1) (interpretation), at the appropriate place insert-\n - ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n - 20. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time periods due to coronavirus exception\n\n - 2A. 
-(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n - (2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n - (3) The following regulations are specified for the purposes of paragraphs (1) and (2)-\n - (a) regulation 15(1) and (4) (needs assessments which are not completed);\n - (b) regulation 16(2), (3) and (4) (transfer of a kept EHC plan);\n - (c) regulation 17(1) and (2) (restriction on disclosure of EHC plans);\n - (d) regulation 19 (requirement to consider mediation);\n - (e) regulation 20(1) and (2) (where the appropriate person does not wish to or fails to pursue mediation);\n - (f) regulation 21 (mediation);\n - (g) regulation 24(1) and (3) (mediation certificate under section 55(5) of the Act);\n - (h) regulation 27(3) (steps to be taken by a home authority);\n - (i) regulation 29(2) and (6) (compliance with the orders of the First-tier Tribunal); and\n - (j) regulation 30(3) and (6) (unopposed appeals).'.\n - 21. In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert-\n - '(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.'.\n - 22. 
In regulation 5(4) (decision whether or not to conduct a detained person's EHC needs assessment)-\n - (a) at the end of sub-paragraph (b) omit 'or'; and\n - (b) at the end of sub-paragraph (c) insert-\n\n', or\n\n - (d) of a reason relating to the incidence or transmission of coronavirus'.", - "page_start": 3, - "page_end": 3, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "18. Guidance issued by the Secretary of State pursuant to paragraph 4(2) of Schedule 2D to the 2020 Regulations has effect as guidance issued pursuant to paragraph 4(2) of Schedule 9 to these Regulations.\n\n## EXPLANATORY NOTE\n\n(This note is not part of the Regulations)\n\nThese Regulations replace the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the International Travel Regulations'), the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020 and the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021.\n\nThey impose requirements on certain categories of person to provide information upon arrival in England, to take coronavirus tests before and after arrival and to self-isolate in order to prevent the spread of infection or contamination from coronavirus or coronavirus disease. They also impose obligations on operators to ensure that passengers receive information and comply with the requirements.\n\nAn impact assessment has not been produced for this instrument. 
An explanatory memorandum has been published alongside this instrument at www.legislation.gov.uk.\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 90, - "page_end": 90, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (b) in the definition of 'International Travel Regulations', for 'the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- 23. In regulation 8(2) (duty to co-operate in a detained person's EHC needs assessment), at the end of sub-paragraph (d) insert-\n\n'; or\n\n - (e) of a reason relating to the incidence or transmission of coronavirus'.\n - 24. In regulation 10(4) (decision not to secure an EHC plan)-\n - (a) at the end of sub-paragraph (b) omit 'or'; and\n - (b) at the end of sub-paragraph (c) insert-\n\n'; or\n\n - (d) of a reason relating to the incidence or transmission of coronavirus'.\n - 25. In regulation 13(3) (timescales for EHC plans), for '(c)' substitute '(d)'.\n - 26. In regulation 29 (compliance with the orders of the First-tier Tribunal)-\n - (a) after paragraph (6) insert-\n - '(6A) The home authority need not comply with the time limits specified in paragraph (3) if it is impractical to do so because the circumstances referred to in regulation 10(4)(d) apply.'.\n - (b) in paragraph (7)(c) after '10(4)(a)' insert 'or (d)'.\n - 27. In regulation 30(7)(c) (unopposed appeals), after '10(4)(a)' insert 'or (d)'.\n\n## Amendment of the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017\n\n28. 
The Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017( a ) are amended as follows.\n\n - 29. In regulation 2 (interpretation), at the appropriate place insert-\n - ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n - 30. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time periods due to coronavirus exception\n\n - 2A. -(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n - (2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n - (3) The following regulations are specified for the purposes of paragraphs (1) and (2)-\n - (a) regulation 6(3) and (6) (responding to health care recommendations); and\n - (b) regulation 7(1) and (4) (responding to social care recommendations).'.\n\nVicky Ford Parliamentary Under Secretary of State Department for Education\n\n28th April 2020", - "page_start": 4, - "page_end": 4, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "18. In determining how many fixed penalty notices a person ('P') has received for the purposes of paragraph 8 (breach of requirement in regulation 9 to self-isolate etc), if P received more than one fixed penalty notice for that offence before 2nd October 2020, only one of those notices may be taken into account.\n\n## SCHEDULE 15\n\nRegulation 26(2)\n\n## Consequential Amendments\n\n1. 
-(1) The Health Protection (Notification) Regulations 2010( a ) are amended as follows.\n\n(2) In regulation 4(3D)(b), for 'regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.", - "page_start": 87, - "page_end": 87, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## S T A T U T O R Y I N S T R U M E N T S\n\n## 2020 No. 471\n\n## EDUCATION, ENGLAND\n\nThe Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020\n\nMade\n\n-\n\n-\n\n-\n\n-\n\n28th April 2020\n\nLaid before Parliament\n\n30th April 2020\n\nComing into force\n\n-\n\n-\n\n1st May 2020\n\nThe Secretary of State makes the following Regulations in exercise of the powers conferred by sections 30(8), 31(4), 36(11), 37(4), 44(7)(b) and (c), 47, 49(3), 51(4), 56(1), 71(11), 73(4), 74(3) and 135(2) and (3) of the Children and Families Act 2014( a ) and sections 29(3) and 569(4) of the Education Act 1996( b ).\n\n## Citation and commencement\n\n- 1. These Regulations may be cited as the Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020 and come into force on 1st May 2020.\n\n## Review and expiry\n\n- 2. -(1) The Secretary of State must review the effectiveness of these Regulations during the period for which they have effect.\n- (2) These Regulations cease to have effect on 25th September 2020.\n\n## Amendment of the Special Educational Needs and Disability Regulations 2014\n\n- 3. The Special Educational Needs and Disability Regulations 2014( c ) are amended as follows.\n- 4. In regulation 2(1) (interpretation), at the appropriate place insert-\n- ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n- 5. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time periods due to coronavirus exception\n\n2A. 
-(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- (3) In regulation 4ZA-\n - (a) in the heading, for 'the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021';\n - (b) in paragraph (1)(a), for 'regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the 2020 Regulations')' substitute 'regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021 ('the International Travel and Operator Liability Regulations')';\n - (c) in paragraph (1)(c), for 'paragraph 7(1)(f) of Schedule 2C to the 2020 Regulations' substitute 'paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations';\n - (d) in paragraph (3), for 'paragraph 7(1)(f) of Schedule 2C to the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations'.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20210582_en.pdf", - "query": "Regarding the regulation of Enforcement of requirement to self-isolate concerning travel and coronavirus, who are considered an \"authorised persons\" ?", - "target_page": 19, - "target_passage": "For the purposes of this regulation, “authorised person” means— (a) a constable; (b) for the purposes of paragraphs (2) and (3) only, an immigration officer; or (c) a person designated by the Secretary of State for the purposes of this regulation.", - "chunk_present": { - 
"presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "- 2. -(1) The Health Protection (Coronavirus, Restrictions) (Self-Isolation) (England) Regulations 2020( a ) are amended as follows.\n - (2) In regulation 2D(1)(c), for 'regulation 4 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'regulation 9 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.\n - (3) In regulation 6(1)-\n - (a) in the definitions of 'designated place', 'isolation requirements' and 'self-isolating worker', for 'regulation 4' substitute 'regulation 9';", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (2) The coronavirus exception applies where it is not reasonably practicable for the local authority to meet the requirement specified in regulation 11(2)(a) for a reason relating to the incidence or transmission of coronavirus.'.\n\n## Amendment of the Special Educational Needs and Disability (Detained Persons) Regulations 2015\n\n - 18. The Special Educational Needs and Disability (Detained Persons) Regulations 2015( a ) are amended as follows.\n - 19. In regulation 2(1) (interpretation), at the appropriate place insert-\n - ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n - 20. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time periods due to coronavirus exception\n\n - 2A. 
-(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n - (2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n - (3) The following regulations are specified for the purposes of paragraphs (1) and (2)-\n - (a) regulation 15(1) and (4) (needs assessments which are not completed);\n - (b) regulation 16(2), (3) and (4) (transfer of a kept EHC plan);\n - (c) regulation 17(1) and (2) (restriction on disclosure of EHC plans);\n - (d) regulation 19 (requirement to consider mediation);\n - (e) regulation 20(1) and (2) (where the appropriate person does not wish to or fails to pursue mediation);\n - (f) regulation 21 (mediation);\n - (g) regulation 24(1) and (3) (mediation certificate under section 55(5) of the Act);\n - (h) regulation 27(3) (steps to be taken by a home authority);\n - (i) regulation 29(2) and (6) (compliance with the orders of the First-tier Tribunal); and\n - (j) regulation 30(3) and (6) (unopposed appeals).'.\n - 21. In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert-\n - '(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.'.\n - 22. 
In regulation 5(4) (decision whether or not to conduct a detained person's EHC needs assessment)-\n - (a) at the end of sub-paragraph (b) omit 'or'; and\n - (b) at the end of sub-paragraph (c) insert-\n\n', or\n\n - (d) of a reason relating to the incidence or transmission of coronavirus'.", - "page_start": 3, - "page_end": 3, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- (a) for the purpose of carrying out a function under these Regulations;\n - (b) for the purpose of-\n - (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,\n - (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n - (iii) giving effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or\n - (c) for a purpose connected with, or otherwise incidental to, a purpose described in subparagraph (a) or (b).\n - (4) Subject to paragraph (7), A may only disclose relevant information to another person (the 'recipient') where it is necessary for the recipient to have the information -\n - (a) for the purpose of carrying out a function of the recipient under-\n - (i) these Regulations, or\n - (ii) an enactment which, in Scotland, Wales or Northern Ireland, has the effect of requiring the isolation or quarantine of persons who have been outside the common travel area, for any of the purposes described in sub-paragraph (b);\n - (b) for the purpose of-\n - (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,", - "page_start": 28, - "page_end": 28, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "18. 
Guidance issued by the Secretary of State pursuant to paragraph 4(2) of Schedule 2D to the 2020 Regulations has effect as guidance issued pursuant to paragraph 4(2) of Schedule 9 to these Regulations.\n\n## EXPLANATORY NOTE\n\n(This note is not part of the Regulations)\n\nThese Regulations replace the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the International Travel Regulations'), the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020 and the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021.\n\nThey impose requirements on certain categories of person to provide information upon arrival in England, to take coronavirus tests before and after arrival and to self-isolate in order to prevent the spread of infection or contamination from coronavirus or coronavirus disease. They also impose obligations on operators to ensure that passengers receive information and comply with the requirements.\n\nAn impact assessment has not been produced for this instrument. An explanatory memorandum has been published alongside this instrument at www.legislation.gov.uk.\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 90, - "page_end": 90, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## Form B: positive test result\n\nYour coronavirus test result is positive. You had the virus when the test was done.\n\nIf you have not had symptoms of coronavirus, you must self-isolate for 10 days from the day after your test date. 
If you have symptoms of coronavirus, you must self-isolate for 10 days from the day your symptoms started, if earlier than when you took your test.\n\nPeople you live with or are travelling with should also self-isolate for 10 days from the day after you took the test.\n\nYou may be contacted for contact tracing and to check that you, and those who you live or are travelling with, are self-isolating.\n\nYou must not travel, including to leave the UK, during self-isolation.\n\nContact 111 if you need medical help. In an emergency dial 999.\n\n## Form C: unclear test result\n\nYour coronavirus test result is unclear. It is not possible to say if you had the virus when the test was done.\n\nYou must, by law, continue self-isolating for the remainder of your self-isolation period as an international arrival travelling to the UK from an amber-list country, territory or region. You may be contacted to check that you are self-isolating.\n\nIf you want to shorten your self-isolation period you will need to take another test for international arrivals from amber list countries, territories or regions. For more information, go to https://www.gov.uk/guidance/coronavirus-covid-19-test-to-release-for-international-travel.", - "page_start": 72, - "page_end": 72, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "and the Channel Islands. The British Overseas Territories are not in the common travel area. Public health requirements may vary depending upon in which nation of the UK you are staying.\n\nEngland: https://www.gov.uk/uk-border-control\n\nNorthern Ireland: https://www.nidirect.gov.uk/articles/coronavirus-covid-19-international-traveladvice\n\nScotland: https://www.gov.scot/publications/coronavirus-covid-19-international-travel-quarantine/ Wales: https://gov.wales/arriving-wales-overseas\n\nFailure to comply with these measures is a criminal offence and you could be fined. There are a limited set of exemptions from these measures. 
Check the list of exemptions carefully. You may be fined if you fraudulently claim an exemption.\n\n## PART 2\n\n## Onboard announcement\n\nThe following is a public health message on behalf of the UK's public health agencies.\n\nIf you have been in or transited through an amber or red country within the previous 10 days you must quarantine for the first 10 days after you arrive. This is to protect yourself and others.\n\nThe symptoms of coronavirus are a new continuous cough, a high temperature or a loss of, or change in, normal sense of taste or smell. If you experience any of these symptoms, however mild, you are advised to make yourself known to the crew.\n\nSimple measures you can take to help protect yourself and family are:\n\nwash your hands\n\navoid touching your face with your hands\n\ncatch coughs and sneezes in a tissue and dispose of it immediately.\n\n## PART 3\n\n## Relevant websites\n\n - 1. The following are 'the relevant websites' for the purposes of regulation 14-\n\nhttps://www.gov.uk/government/publications/coronavirus-covid-19-travellers-exempt-from-ukborder-rules/coronavirus-covid-19-travellers-exempt-from-uk-border-rules\n\nhttps://www.gov.uk/guidance/booking-and-staying-in-a-quarantine-hotel-when-you-arrive-inengland\n\nhttps://www.gov.uk/guidance/coronavirus-covid-19-testing-for-people-travelling-to-england\n\nhttp://www.gov.uk/travel-quarantine-and-testing\n\nhttps://www.gov.uk/guidance/red-amber-and-green-list-rules-for-entering-england", - "page_start": 82, - "page_end": 82, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "you are going into hospital (self-isolating until the date you go in)\n\nsomeone you live with tests positive\n\nyou have been traced as a contact of someone who tested positive\n\nFor advice on when you might need to self-isolate and what to do, go to www.nhs.uk/conditions/coronavirus-covid-19 and read 'Self-isolation and treating symptoms'.\n\nIt is a legal requirement to self-isolate when you arrive in 
the UK from an amber-list country, territory or region. If you are contacted by the enforcement authorities or the police after you have received this negative result please show them this notification.", - "page_start": 71, - "page_end": 71, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n - (iii) giving effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or", - "page_start": 28, - "page_end": 28, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "18. In determining how many fixed penalty notices a person ('P') has received for the purposes of paragraph 8 (breach of requirement in regulation 9 to self-isolate etc), if P received more than one fixed penalty notice for that offence before 2nd October 2020, only one of those notices may be taken into account.\n\n## SCHEDULE 15\n\nRegulation 26(2)\n\n## Consequential Amendments\n\n1. 
-(1) The Health Protection (Notification) Regulations 2010( a ) are amended as follows.\n\n(2) In regulation 4(3D)(b), for 'regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.", - "page_start": 87, - "page_end": 87, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (b) in the definition of 'International Travel Regulations', for 'the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20210582_en.pdf", - "query": "What is the expiracy date of the regulation regarding travel during the coronavirus pandemic made in 2021 ?", - "target_page": 31, - "target_passage": "These Regulations expire at the end of 16th May 2022.", - "chunk_present": { - "presence": true, - "index": 8 - } - }, - "top_chunk": [ - { - "text": "- (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n - (iii) giving effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or", - "page_start": 28, - "page_end": 28, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (a) for the purpose of carrying out a function under these Regulations;\n - (b) for the purpose of-\n - (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,\n - (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n - (iii) giving effect to any international agreement or arrangement 
relating to the spread of infection or contamination with coronavirus or coronavirus disease; or\n - (c) for a purpose connected with, or otherwise incidental to, a purpose described in subparagraph (a) or (b).\n - (4) Subject to paragraph (7), A may only disclose relevant information to another person (the 'recipient') where it is necessary for the recipient to have the information -\n - (a) for the purpose of carrying out a function of the recipient under-\n - (i) these Regulations, or\n - (ii) an enactment which, in Scotland, Wales or Northern Ireland, has the effect of requiring the isolation or quarantine of persons who have been outside the common travel area, for any of the purposes described in sub-paragraph (b);\n - (b) for the purpose of-\n - (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,", - "page_start": 28, - "page_end": 28, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "18. Guidance issued by the Secretary of State pursuant to paragraph 4(2) of Schedule 2D to the 2020 Regulations has effect as guidance issued pursuant to paragraph 4(2) of Schedule 9 to these Regulations.\n\n## EXPLANATORY NOTE\n\n(This note is not part of the Regulations)\n\nThese Regulations replace the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the International Travel Regulations'), the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020 and the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021.\n\nThey impose requirements on certain categories of person to provide information upon arrival in England, to take coronavirus tests before and after arrival and to self-isolate in order to prevent the spread of infection or contamination from coronavirus or coronavirus disease. 
They also impose obligations on operators to ensure that passengers receive information and comply with the requirements.\n\nAn impact assessment has not been produced for this instrument. An explanatory memorandum has been published alongside this instrument at www.legislation.gov.uk.\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 90, - "page_end": 90, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (b) in the definition of 'International Travel Regulations', for 'the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (2) The coronavirus exception applies where it is not reasonably practicable for the local authority to meet the requirement specified in regulation 11(2)(a) for a reason relating to the incidence or transmission of coronavirus.'.\n\n## Amendment of the Special Educational Needs and Disability (Detained Persons) Regulations 2015\n\n - 18. The Special Educational Needs and Disability (Detained Persons) Regulations 2015( a ) are amended as follows.\n - 19. In regulation 2(1) (interpretation), at the appropriate place insert-\n - ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n - 20. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time periods due to coronavirus exception\n\n - 2A. 
-(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n - (2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n - (3) The following regulations are specified for the purposes of paragraphs (1) and (2)-\n - (a) regulation 15(1) and (4) (needs assessments which are not completed);\n - (b) regulation 16(2), (3) and (4) (transfer of a kept EHC plan);\n - (c) regulation 17(1) and (2) (restriction on disclosure of EHC plans);\n - (d) regulation 19 (requirement to consider mediation);\n - (e) regulation 20(1) and (2) (where the appropriate person does not wish to or fails to pursue mediation);\n - (f) regulation 21 (mediation);\n - (g) regulation 24(1) and (3) (mediation certificate under section 55(5) of the Act);\n - (h) regulation 27(3) (steps to be taken by a home authority);\n - (i) regulation 29(2) and (6) (compliance with the orders of the First-tier Tribunal); and\n - (j) regulation 30(3) and (6) (unopposed appeals).'.\n - 21. In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert-\n - '(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.'.\n - 22. 
In regulation 5(4) (decision whether or not to conduct a detained person's EHC needs assessment)-\n - (a) at the end of sub-paragraph (b) omit 'or'; and\n - (b) at the end of sub-paragraph (c) insert-\n\n', or\n\n - (d) of a reason relating to the incidence or transmission of coronavirus'.", - "page_start": 3, - "page_end": 3, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- 2. -(1) The Health Protection (Coronavirus, Restrictions) (Self-Isolation) (England) Regulations 2020( a ) are amended as follows.\n - (2) In regulation 2D(1)(c), for 'regulation 4 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'regulation 9 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.\n - (3) In regulation 6(1)-\n - (a) in the definitions of 'designated place', 'isolation requirements' and 'self-isolating worker', for 'regulation 4' substitute 'regulation 9';", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (3) In regulation 4ZA-\n - (a) in the heading, for 'the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021';\n - (b) in paragraph (1)(a), for 'regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the 2020 Regulations')' substitute 'regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021 ('the International Travel and Operator Liability Regulations')';\n - (c) in paragraph (1)(c), for 'paragraph 7(1)(f) of Schedule 2C to the 2020 Regulations' substitute 'paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations';\n - (d) in paragraph (3), for 'paragraph 7(1)(f) of Schedule 2C to the Health Protection 
(Coronavirus, International Travel) (England) Regulations 2020' substitute 'paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations'.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## S T A T U T O R Y I N S T R U M E N T S\n\n## 2020 No. 471\n\n## EDUCATION, ENGLAND\n\nThe Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020\n\nMade\n\n-\n\n-\n\n-\n\n-\n\n28th April 2020\n\nLaid before Parliament\n\n30th April 2020\n\nComing into force\n\n-\n\n-\n\n1st May 2020\n\nThe Secretary of State makes the following Regulations in exercise of the powers conferred by sections 30(8), 31(4), 36(11), 37(4), 44(7)(b) and (c), 47, 49(3), 51(4), 56(1), 71(11), 73(4), 74(3) and 135(2) and (3) of the Children and Families Act 2014( a ) and sections 29(3) and 569(4) of the Education Act 1996( b ).\n\n## Citation and commencement\n\n- 1. These Regulations may be cited as the Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020 and come into force on 1st May 2020.\n\n## Review and expiry\n\n- 2. -(1) The Secretary of State must review the effectiveness of these Regulations during the period for which they have effect.\n- (2) These Regulations cease to have effect on 25th September 2020.\n\n## Amendment of the Special Educational Needs and Disability Regulations 2014\n\n- 3. The Special Educational Needs and Disability Regulations 2014( c ) are amended as follows.\n- 4. In regulation 2(1) (interpretation), at the appropriate place insert-\n- ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n- 5. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time periods due to coronavirus exception\n\n2A. 
-(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "## PART 6\n\n## Final provisions\n\n## Review of need for requirements\n\n24. The Secretary of State must review the need for the requirements imposed by these Regulations by 14th June 2021 and at least once every 28 days thereafter.\n\n## Expiry of Regulations\n\n25. These Regulations expire at the end of 16th May 2022.\n\n## Revocations, transitional provision consequential amendments and savings\n\n26. -(1) The following Regulations are revoked-\n\n - (a) the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020( a );\n - (b) the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the International Travel Regulations')( b ); and\n - (c) the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021( c ).\n - (2) Schedule 15 makes consequential amendments to other instruments specified in that Schedule.\n - (3) Schedule 16 makes transitional provisions.\n - (4) Nothing in these Regulations applies in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021 (and accordingly, the regulations mentioned in paragraph (1) continue to have effect in relation to such a person).\n\nSigned by authority of the Secretary of State\n\nAt 10.32 a.m. on 14th May 2021\n\nRobert Courts Parliamentary Under Secretary of State Department for Transport", - "page_start": 30, - "page_end": 30, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "18. 
In determining how many fixed penalty notices a person ('P') has received for the purposes of paragraph 8 (breach of requirement in regulation 9 to self-isolate etc), if P received more than one fixed penalty notice for that offence before 2nd October 2020, only one of those notices may be taken into account.\n\n## SCHEDULE 15\n\nRegulation 26(2)\n\n## Consequential Amendments\n\n1. -(1) The Health Protection (Notification) Regulations 2010( a ) are amended as follows.\n\n(2) In regulation 4(3D)(b), for 'regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.", - "page_start": 87, - "page_end": 87, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia2.pdf", - "query": "Who first suggested the notions of \"hard\" and \"easy\" problems regarding consciousness ?", - "target_page": 1, - "target_passage": "The terms \"hard problem\" and \"easy problems\" were coined by the philosopher David Chalmers", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "\n\n## Hard problem of consciousness\n\nIn the philosophy of mind, the hard problem of consciousness is to explain why and how humans and other organisms have qualia, phenomenal consciousness, or subjective experience. [1][2] It is contrasted with the \"easy problems\" of explaining why and how physical systems give a (healthy) human being the ability to discriminate, to integrate information, and to perform behavioral functions such as watching, listening, speaking (including generating an utterance that appears to refer to personal behaviour or belief), and so forth. 
[1] The easy problems are amenable to functional explanation-that is, explanations that are mechanistic or behavioral-since each physical system can be explained (at least in principle) purely by reference to the \"structure and dynamics\" that underpin the phenomenon. [1][3]\n\nProponents of the hard problem argue that it is categorically different from the easy problems since no mechanistic or behavioral explanation could explain the character of an experience, not even in principle. Even after all the relevant functional facts are explicated, they argue, there will still remain a further question: \"why is the performance of these functions accompanied by experience?\" [1] To bolster their case, proponents of the hard problem frequently turn to various philosophical thought experiments, involving philosophical zombies (which, they claim, are conceivable) or inverted qualia, or the claimed ineffability of colour experiences, or the claimed unknowability of foreign states of consciousness, such as the experience of being a bat.\n\nThe terms \"hard problem\" and \"easy problems\" were coined by the philosopher David Chalmers in a 1994 talk given at The Science of Consciousness conference held in Tucson, Arizona. [4] The following year, the main talking points of Chalmers' talk were published in The Journal of Consciousness Studies . [1] The publication gained significant attention from consciousness researchers and became the subject of a special volume of the journal, [5][6] which was later published into a book. [7] In 1996, Chalmers published The Conscious Mind , a book-length treatment of the hard problem, in which he elaborated on his core arguments and responded to counterarguments. His use of the word easy is \"tongue-in-cheek\". [8] As the\n\nChalmers on stage for an Alan Turing Year event at De La Salle University, Manila, 27 March 2012\n\n\n\ncognitive psychologist Steven Pinker puts it, they are about as easy as going to Mars or curing cancer. 
\"That is, scientists more or less know what to look for, and with enough brainpower and funding, they would probably crack it in this century.\" [9]\n\nThe existence of the hard problem is disputed. It has been accepted by some philosophers of mind such as Joseph Levine, [10] Colin McGinn, [11] and Ned Block [12] and cognitive neuroscientists such as Francisco Varela, [13] Giulio Tononi, [14][15] and Christof Koch. [14][15] On the other hand, its existence is denied by other philosophers of mind, such as Daniel Dennett, [16] Massimo Pigliucci, [17] Thomas Metzinger, Patricia Churchland, [18] and Keith Frankish, [19] and by cognitive neuroscientists such as Stanislas Dehaene, [20] Bernard Baars, [21] Anil Seth, [22] and Antonio Damasio. [23] Clinical neurologist and skeptic", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia2.pdf" - }, - { - "text": "The philosophers Glenn Carruthers and Elizabeth Schier said in 2012 that the main arguments for the existence of a hard problem-philosophical zombies, Mary's room, and Nagel's bats-are only persuasive if one already assumes that \"consciousness must be independent of the structure and function of mental states, i.e. that there is a hard problem.\" Hence, the arguments beg the question. The authors suggest that \"instead of letting our conclusions on the thought experiments guide our theories of consciousness, we should let our theories of consciousness guide our conclusions from the thought experiments.\" [64]\n\nThe philosopher Massimo Pigliucci argued in 2013 that the hard problem is misguided, resulting from a \"category mistake\". [17] He said: \"Of course an explanation isn't the same as an experience, but that's because the two are completely independent categories, like colors and triangles. 
It is obvious that I cannot experience what it is like to be you, but I can potentially have a complete explanation of how and why it is possible to be you.\" [17]\n\nIn 2017, the philosopher Marco Stango, in a paper on John Dewey's approach to the problem of consciousness (which preceded Chalmers' formulation of the hard problem by over half a century), noted that Dewey's approach would see the hard problem as the consequence of an unjustified assumption that feelings and functional behaviors are not the same physical process: \"For the Deweyan philosopher, the 'hard problem' of consciousness is a 'conceptual fact' only in the sense that it is a philosophical mistake : the mistake of failing to see that the physical can be had as an episode of immediate sentiency.\" [65]\n\nThe philosopher Thomas Metzinger likens the hard problem of consciousness to vitalism, a formerly widespread view in biology which was not so much solved as abandoned. [66] Brian Jonathan Garrett has also argued that the hard problem suffers from flaws analogous to those of vitalism. [67]\n\nThe philosopher Peter Hacker argues that the hard problem is misguided in that it asks how consciousness can emerge from matter, whereas in fact sentience emerges from the evolution of living organisms. [68] He states: \"The hard problem isn't a hard problem at all. The really hard problems are the problems the scientists are dealing with. [...] The philosophical problem, like all philosophical problems, is a confusion in the conceptual scheme.\" [68] Hacker's critique extends beyond Chalmers and the hard problem, being directed against contemporary philosophy of mind and neuroscience more broadly. Along with the neuroscientist Max Bennett, he has argued that most of contemporary neuroscience remains implicitly dualistic in its conceptualizations and is predicated on the mereological fallacy of ascribing psychological concepts to the brain that can properly be ascribed only to the person as a whole. 
[69] Hacker further states that \"consciousness studies\", as it exists today, is \"literally a total waste of time\" and that \"the conception of consciousness which they have is incoherent\". [68]\n\n## Eliminative materialism / Illusionism\n\nEliminative materialism or eliminativism is the view that many or all of the mental states used in folk psychology (i.e., common-sense ways of discussing the mind) do not, upon scientific examination, correspond to real brain mechanisms. [59] According the 2020 PhilPapers survey, 4.51% of philosophers surveyed subscribe to eliminativism. [25]", - "page_start": 7, - "page_end": 7, - "source_file": "wikipedia2.pdf" - }, - { - "text": "patterns. A clock, a hurricane, and the easy problems, are all the sum of their parts (as are most things). [27]\n\nThe easy problems relevant to consciousness concern mechanistic analysis of the neural processes that accompany behaviour. Examples of these include how sensory systems work, how sensory data is processed in the brain, how that data influences behaviour or verbal reports, the neural basis of thought and emotion, and so on. They are problems that can be analyzed through \"structures and functions\". [27]\n\n## Hard problem\n\nThe hard problem, in contrast, is the problem of why and how those processes are accompanied by experience. [1] It may further include the question of why these processes are accompanied by this or that particular experience, rather than some other kind of experience. In other words, the hard problem is the problem of explaining why certain mechanisms are accompanied by conscious experience. [27] For example, why should neural processing in the brain lead to the felt sensations of, say, feelings of hunger? 
And why should those neural firings lead to feelings of hunger rather than some other feeling (such as, for example, feelings of thirst)?\n\nChalmers argues that it is conceivable that the relevant behaviours associated with hunger, or any other feeling, could occur even in the absence of that feeling. This suggests that experience is irreducible to physical systems such as the brain. This is the topic of the next section.\n\n## How the easy and hard problems are related\n\nChalmers believes that the hard problem is irreducible to the easy problems: solving the easy problems will not lead to a solution to the hard problems. This is because the easy problems pertain to the causal structure of the world while the hard problem pertains to consciousness, and facts about consciousness include facts that go beyond mere causal or structural description. [32]\n\nFor example, suppose someone were to stub their foot and yelp. In this scenario, the easy problems are mechanistic explanations that involve the activity of the nervous system and brain and its relation to the environment (such as the propagation of nerve signals from the toe to the brain, the processing of that information and how it leads to yelping, and so on). The hard problem is the question of why these mechanisms are accompanied by the feeling of pain , or why these feelings of pain feel the particular way that they do. Chalmers argues that facts about the neural mechanisms of pain, and pain behaviours, do not lead to facts about conscious experience. Facts about conscious experience are, instead, further facts, not derivable from facts about the brain. [27][32]\n\nAn explanation for all of the relevant physical facts about neural processing would leave unexplained facts about what it is like to feel pain. This is in part because functions and physical structures of any sort could conceivably exist in the absence of experience. Alternatively, they could exist alongside a different set of experiences. 
For example, it is logically possible for a perfect replica of Chalmers to have no experience at all, or for it to have a different set of experiences (such as an inverted visible spectrum, so that the blue-yellow red-green axes of its visual field are flipped). [32]\n\nThe same cannot be said about clocks, hurricanes, or other physical things. In those cases, a structural or functional description is a complete description. A perfect replica of a clock is a clock, a perfect replica of a hurricane is a hurricane, and so on. The difference is that physical things are nothing more than their", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia2.pdf" - }, - { - "text": "from a variety of unconscious and otherwise autonomous networks in the brain and then broadcasts them to unconscious networks (represented in the metaphor by a broad, unlit \"audience\"). [140] The theory has since been expanded upon by other scientists including cognitive neuroscientist Stanislas Dehaene. [141]\n\nIn his original paper outlining the hard problem of consciousness, Chalmers discussed GWT as a theory that only targets one of the \"easy problems\" of consciousness. [1] In particular, he said GWT provided a promising account of how information in the brain could become globally accessible, but argued that \"now the question arises in a different form: why should global accessibility give rise to conscious experience? As always, this bridging question is unanswered.\" [1] J. W. Dalton similarly criticized GWT on the grounds that it provides, at best, an account of the cognitive function of consciousness, and fails to explain its experiential aspect. [142] By contrast, A. C. 
Elitzur argued: \"While [GWT] does not address the 'hard problem', namely, the very nature of consciousness, it constrains any theory that attempts to do so and provides important insights into the relation between consciousness and cognition.\" [143]\n\nFor his part, Baars writes (along with two colleagues) that there is no hard problem of explaining qualia over and above the problem of explaining causal functions, because qualia are entailed by neural activity and themselves causal. [21] Dehaene, in his 2014 book Consciousness and the Brain , rejected the concept of qualia and argued that Chalmers' \"easy problems\" of consciousness are actually the hard problems. [20] He further stated that the \"hard problem\" is based only upon ill-defined intuitions that are continually shifting as understanding evolves: [20]\n\nOnce our intuitions are educated by cognitive neuroscience and computer simulations, Chalmers' hard problem will evaporate. The hypothetical concept of qualia, pure mental experience, detached from any information-processing role, will be viewed as a peculiar idea of the prescientific era, much like vitalism... [Just as science dispatched vitalism] the science of consciousness will keep eating away at the hard problem of consciousness until it vanishes.\n\n## Meta-problem\n\nIn 2018, Chalmers highlighted what he calls the \" meta-problem of consciousness \", another problem related to the hard problem of consciousness: [76]\n\nThe meta-problem of consciousness is (to a first approximation) the problem of explaining why we think that there is a [hard] problem of consciousness.\n\nIn his \"second approximation\", he says it is the problem of explaining the behavior of \"phenomenal reports\", and the behavior of expressing a belief that there is a hard problem of consciousness. [76]\n\nExplaining its significance, he says: [76]\n\nAlthough the meta-problem is strictly speaking an easy problem, it is deeply connected to the hard problem. 
We can reasonably hope that a solution to the meta-problem will shed significant light on the hard problem. A particularly strong line holds that a solution to the", - "page_start": 16, - "page_end": 16, - "source_file": "wikipedia2.pdf" - }, - { - "text": "Today there is a strong tendency to simply equate consciousness with the qualia. Yet there is clearly something not quite right about this. The \"itchiness of itches\" and the \"hurtfulness of pain\" are qualities we are conscious of . So philosophy of mind tends to treat consciousness as if it consisted simply of the contents of consciousness (the phenomenal qualities), while it really is precisely consciousness of contents, the very givenness of whatever is subjectively given. And therefore the problem of consciousness does not pertain so much to some alleged \"mysterious, nonpublic objects\", i.e. objects that seem to be only \"visible\" to the respective subject, but rather to the nature of \"seeing\" itself (and in today's philosophy of mind astonishingly little is said about the latter). [129]\n\n## Relationship to scientific frameworks\n\nMost neuroscientists and cognitive scientists believe that Chalmers' alleged \"hard problem\" will be solved, or be shown to not be a real problem, in the course of the solution of the so-called \"easy problems\", although a significant minority disagrees. [9][130]\n\n## Neural correlates of consciousness\n\nSince 1990, researchers including the molecular biologist Francis Crick and the neuroscientist Christof Koch have made significant progress toward identifying which neurobiological events occur concurrently to the experience of subjective consciousness. [131] These postulated events are referred to as neural correlates of consciousness or NCCs. 
However, this research arguably addresses the question of which neurobiological mechanisms are linked to consciousness but not the question of why they should give rise to consciousness at all, the latter being the hard problem of consciousness as Chalmers formulated it. In \"On the Search for the Neural Correlate of Consciousness\", Chalmers said he is confident that, granting the principle that something such as what he terms \"global availability\" can be used as an indicator of consciousness, the neural correlates will be discovered \"in a century or two\". [132] Nevertheless, he stated regarding their relationship to the hard problem of consciousness:\n\nOne can always ask why these processes of availability should give rise to consciousness in the first place. As yet we cannot explain why they do so, and it may well be that full details about the processes of availability will still fail to answer this question. Certainly, nothing in the standard methodology I have outlined answers the question; that methodology assumes a relation between availability and consciousness, and therefore does nothing to explain it. [...] So the hard problem remains. But who knows: Somewhere along the line we may be led to the relevant insights that show why the link is there, and the hard problem may then be solved. [132]\n\nThe neuroscientist and Nobel laureate Eric Kandel wrote that locating the NCCs would not solve the hard problem, but rather one of the so-called easy problems to which the hard problem is contrasted. [133] Kandel went on to note Crick and Koch's suggestion that once the binding problem-understanding what accounts for the unity of experience-is solved, it will be possible to solve the hard problem empirically. 
[133] However, neuroscientist Anil Seth argued that emphasis on the so-called hard problem is a distraction from what he calls the \"real problem\": understanding the neurobiology underlying", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia2.pdf" - }, - { - "text": "Steven Novella has dismissed it as \"the hard non-problem\". [24] According to a 2020 PhilPapers survey, a majority (62.42%) of the philosophers surveyed said they believed that the hard problem is a genuine problem, while 29.72% said that it does not exist. [25]\n\nThere are a number of other potential philosophical problems that are related to the Hard Problem. Ned Block believes that there exists a \"Harder Problem of Consciousness\", due to the possibility of different physical and functional neurological systems potentially having phenomenal overlap. [12] Another potential philosophical problem which is closely related to Benj Hellie's vertiginous question, dubbed \"The Even Harder Problem of Consciousness\", refers to why a given individual has their own particular personal identity, as opposed to existing as someone else. [26]\n\n## Overview\n\nCognitive scientist David Chalmers first formulated the hard problem in his paper \"Facing up to the problem of consciousness\" (1995) [1] and expanded upon it in The Conscious Mind (1996). His works provoked comment. Some, such as philosopher David Lewis and Steven Pinker, have praised Chalmers for his argumentative rigour and \"impeccable clarity\". [27] Pinker later said, in 2018, \"In the end I still think that the hard problem is a meaningful conceptual problem, but agree with Dennett that it is not a meaningful scientific problem. No one will ever get a grant to study whether you are a zombie or whether the same Captain Kirk walks on the deck of the Enterprise and the surface of Zakdorn. 
And I agree with several other philosophers that it may be futile to hope for a solution at all, precisely because it is a conceptual problem, or, more accurately, a problem with our concepts.\" [28] Daniel Dennett and Patricia Churchland, among others, believe that the hard problem is best seen as a collection of easy problems that will be solved through further analysis of the brain and behaviour. [29][30]\n\nConsciousness is an ambiguous term. It can be used to mean self consciousness, awareness, the state of being awake, and so on. Chalmers uses Thomas Nagel's definition of consciousness: \" the feeling of what it is like to be something.\" Consciousness, in this sense, is synonymous with experience. [31][27]\n\n## Chalmers' formulation\n\n. . .even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience-perceptual discrimination, categorization, internal access, verbal report-there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?\n\n- - David Chalmers, Facing up to the problem of consciousness\n\nThe problems of consciousness, Chalmers argues, are of two kinds: the easy problems and the hard problem .\n\n## Easy problems\n\nThe easy problems are amenable to reductive inquiry. They are a logical consequence of lower-level facts about the world, similar to how a clock's ability to tell time is a logical consequence of its clockwork and structure, or a hurricane being a logical consequence of the structures and functions of certain weather", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia2.pdf" - }, - { - "text": "- Weisberg, Josh. \"The hard problem of consciousness\" (http://www.iep.utm.edu/hard-con). 
Internet Encyclopedia of Philosophy .\n\nRetrieved from \"https://en.wikipedia.org/w/index.php?title=Hard\\_problem\\_of\\_consciousness&oldid=1261818884\"", - "page_start": 27, - "page_end": 27, - "source_file": "wikipedia2.pdf" - }, - { - "text": "This stance has recently taken on the name of illusionism : the view that phenomenal consciousness is an illusion. The term was popularized by the philosopher Keith Frankish. [60] Frankish argues that \"illusionism\" is preferable to \"eliminativism\" for labelling the view that phenomenal consciousness is an illusion. More substantively, Frankish argues that illusionism about phenomenal consciousness is preferable to realism about phenomenal consciousness. He states: \"Theories of consciousness typically address the hard problem. They accept that phenomenal consciousness is real and aim to explain how it comes to exist. There is, however, another approach, which holds that phenomenal consciousness is an illusion and aims to explain why it seems to exist.\" [19] Frankish concludes that illusionism \"replaces the hard problem with the illusion problem-the problem of explaining how the illusion of phenomenality arises and why it is so powerful.\" [19]\n\nThe philosopher Daniel Dennett is another prominent figure associated with illusionism. After Frankish published a paper in the Journal of Consciousness Studies titled Illusionism as a Theory of Consciousness, [60] Dennett responded with his own paper with the spin-off title Illusionism as the Obvious Default Theory of Consciousness. [61] Dennett has been arguing for the illusory status of consciousness since early on in his career. For example, in 1979 he published a paper titled On the Absence of Phenomenology (where he argues for the nonexistence of phenomenal consciousness). [70] Similar ideas have been explicated in his 1991 book Consciousness Explained. 
[71] Dennett argues that the so-called \"hard problem\" will be solved in the process of solving what Chalmers terms the \"easy problems\". [16] He compares consciousness to stage magic and its capability to create extraordinary illusions out of ordinary things. [72] To show how people might be commonly fooled into overstating the accuracy of their introspective abilities, he describes a phenomenon called change blindness, a visual process that involves failure to detect scenery changes in a series of alternating images. [73] He accordingly argues that consciousness need not be what it seems to be based on introspection. To address the question of the hard problem, or how and why physical processes give rise to experience, Dennett states that the phenomenon of having experience is nothing more than the performance of functions or the production of behavior, which can also be referred to as the easy problems of consciousness. [16] Thus, Dennett argues that the hard problem of experience is included among-not separate from-the easy problems, and therefore they can only be explained together as a cohesive unit. [72]\n\nEliminativists differ on the role they believe intuitive judgement plays in creating the apparent reality of consciousness. The philosopher Jacy Reese Anthis is of the position that this issue is born of an overreliance on intuition, calling philosophical discussions on the topic of consciousness a form of \"intuition jousting\". [74] But when the issue is tackled with \"formal argumentation\" and \"precise semantics\" then the hard problem will dissolve. [74] The philosopher Elizabeth Irvine, in contrast, can be read as having the opposite view, since she argues that phenomenal properties (that is, properties of consciousness) do not exist in our common-sense view of the world. 
She states that \"the hard problem of consciousness may not be a genuine problem for non-philosophers (despite its overwhelming obviousness to philosophers).\" [75]", - "page_start": 8, - "page_end": 8, - "source_file": "wikipedia2.pdf" - }, - { - "text": "consciousness, namely the neural correlates of various conscious processes. [22] This more modest goal is the focus of most scientists working on consciousness. [133] Psychologist Susan Blackmore believes, by contrast, that the search for the neural correlates of consciousness is futile and itself predicated on an erroneous belief in the hard problem of consciousness. [134]\n\n## Computational cognition\n\nA functionalist view in cognitive science holds that the mind is an information processing system, and that cognition and consciousness together are a form of computation. Cognition, distinct from consciousness, is explained by neural computation in the computational theory of cognition. The computational theory of mind asserts that not only cognition, but also phenomenal consciousness or qualia, are computational. While the computation system is realized by neurons rather than electronics, in theory it would be possible for artificial intelligence to be conscious.\n\n## Integrated information theory\n\nIntegrated information theory (IIT), developed by the neuroscientist and psychiatrist Giulio Tononi in 2004 and more recently also advocated by Koch, is one of the most discussed models of consciousness in neuroscience and elsewhere. [135][136] The theory proposes an identity between consciousness and integrated information, with the latter item (denoted as Φ) defined mathematically and thus in principle measurable. [136][137] The hard problem of consciousness, write Tononi and Koch, may indeed be intractable when working from matter to consciousness. [15] However, because IIT inverts this relationship and works from phenomenological axioms to matter, they say it could be able to solve the hard problem. 
[15] In this vein, proponents have said the theory goes beyond identifying human neural correlates and can be extrapolated to all physical systems. Tononi wrote (along with two colleagues):\n\nWhile identifying the \"neural correlates of consciousness\" is undoubtedly important, it is hard to see how it could ever lead to a satisfactory explanation of what consciousness is and how it comes about. As will be illustrated below, IIT offers a way to analyze systems of mechanisms to determine if they are properly structured to give rise to consciousness, how much of it, and of which kind. [138]\n\nAs part of a broader critique of IIT, Michael Cerullo suggested that the theory's proposed explanation is in fact for what he dubs (following Scott Aaronson) the \"Pretty Hard Problem\" of methodically inferring which physical systems are conscious-but would not solve Chalmers' hard problem. [136] \"Even if IIT is correct,\" he argues, \"it does not explain why integrated information generates (or is) consciousness.\" [136] Chalmers agrees that IIT, if correct, would solve the \"Pretty Hard Problem\" rather than the hard problem. [139]\n\n## Global workspace theory\n\nGlobal workspace theory (GWT) is a cognitive architecture and theory of consciousness proposed by the cognitive psychologist Bernard Baars in 1988. [140] Baars explains the theory with the metaphor of a theater, with conscious processes represented by an illuminated stage. [140] This theater integrates inputs", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia2.pdf" - }, - { - "text": "- 32. \"Hard Problem of Consciousness\" (https://iep.utm.edu/hard-problem-of-conciousness/). Internet Encyclopedia of Philosophy . Retrieved 2024-10-09.\n - 33. Chalmers, David (January 1997). \"Moving forward on the problem of consciousness\" (http s://philpapers.org/rec/CHAMFO). Journal of Consciousness Studies . 
4 (1): 3-46.", - "page_start": 19, - "page_end": 19, - "source_file": "wikipedia2.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia2.pdf", - "query": "What is David Chalmer's definition of \"consciousness\" ?", - "target_page": 2, - "target_passage": "Chalmers uses Thomas Nagel's definition of consciousness: \"the feeling of what it is like to be something.\"", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "- 32. \"Hard Problem of Consciousness\" (https://iep.utm.edu/hard-problem-of-conciousness/). Internet Encyclopedia of Philosophy . Retrieved 2024-10-09.\n - 33. Chalmers, David (January 1997). \"Moving forward on the problem of consciousness\" (http s://philpapers.org/rec/CHAMFO). Journal of Consciousness Studies . 4 (1): 3-46.", - "page_start": 19, - "page_end": 19, - "source_file": "wikipedia2.pdf" - }, - { - "text": "Today there is a strong tendency to simply equate consciousness with the qualia. Yet there is clearly something not quite right about this. The \"itchiness of itches\" and the \"hurtfulness of pain\" are qualities we are conscious of . So philosophy of mind tends to treat consciousness as if it consisted simply of the contents of consciousness (the phenomenal qualities), while it really is precisely consciousness of contents, the very givenness of whatever is subjectively given. And therefore the problem of consciousness does not pertain so much to some alleged \"mysterious, nonpublic objects\", i.e. objects that seem to be only \"visible\" to the respective subject, but rather to the nature of \"seeing\" itself (and in today's philosophy of mind astonishingly little is said about the latter). 
[129]\n\n## Relationship to scientific frameworks\n\nMost neuroscientists and cognitive scientists believe that Chalmers' alleged \"hard problem\" will be solved, or be shown to not be a real problem, in the course of the solution of the so-called \"easy problems\", although a significant minority disagrees. [9][130]\n\n## Neural correlates of consciousness\n\nSince 1990, researchers including the molecular biologist Francis Crick and the neuroscientist Christof Koch have made significant progress toward identifying which neurobiological events occur concurrently to the experience of subjective consciousness. [131] These postulated events are referred to as neural correlates of consciousness or NCCs. However, this research arguably addresses the question of which neurobiological mechanisms are linked to consciousness but not the question of why they should give rise to consciousness at all, the latter being the hard problem of consciousness as Chalmers formulated it. In \"On the Search for the Neural Correlate of Consciousness\", Chalmers said he is confident that, granting the principle that something such as what he terms \"global availability\" can be used as an indicator of consciousness, the neural correlates will be discovered \"in a century or two\". [132] Nevertheless, he stated regarding their relationship to the hard problem of consciousness:\n\nOne can always ask why these processes of availability should give rise to consciousness in the first place. As yet we cannot explain why they do so, and it may well be that full details about the processes of availability will still fail to answer this question. Certainly, nothing in the standard methodology I have outlined answers the question; that methodology assumes a relation between availability and consciousness, and therefore does nothing to explain it. [...] So the hard problem remains. 
But who knows: Somewhere along the line we may be led to the relevant insights that show why the link is there, and the hard problem may then be solved. [132]\n\nThe neuroscientist and Nobel laureate Eric Kandel wrote that locating the NCCs would not solve the hard problem, but rather one of the so-called easy problems to which the hard problem is contrasted. [133] Kandel went on to note Crick and Koch's suggestion that once the binding problem-understanding what accounts for the unity of experience-is solved, it will be possible to solve the hard problem empirically. [133] However, neuroscientist Anil Seth argued that emphasis on the so-called hard problem is a distraction from what he calls the \"real problem\": understanding the neurobiology underlying", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia2.pdf" - }, - { - "text": "- 1. Chalmers, David (1995). \"Facing up to the problem of consciousness\" (http://consc.net/pape rs/facing.pdf) (PDF). Journal of Consciousness Studies . 2 (3): 200-219.\n - 2. Harnad, Stevan (1995). \"Why and how we are not zombies\" (http://cogprints.org/1601/6/har nad95.zombies.html). Journal of Consciousness Studies . 1 : 164-167. See also Harnad, Stevan (April 2000). \"How/why the mind-body problem is hard\" (http://cogprints.org/1617/1/ harnad00.mind.humphrey.html). Journal of Consciousness Studies . 7 (4): 54-61.\n - 3. See Cooney's foreword to the reprint of Chalmers' paper: Brian Cooney, ed. (1999). \"Chapter 27: Facing up to the problem of consciousness\". The place of mind . Cengage Learning. pp. 382 ff . ISBN 978-0534528256.\n - 4. Problem of Consciousness (Tuscan 1994) (https://www.youtube.com/watch?v=\\_lWp-6hH\\_6 g%7CHard)\n - 5. JCS vol. 4, pp. 3-46, 1997\n - 6. Chalmers, David (1997). \"Moving forward on the problem of consciousness\". Journal of Consciousness Studies . 4 (1): 3-46.\n - 7. Shear, Jonathan (1997). Explaining Consciousness: The Hard Problem . MIT Press. ISBN 978-0262692212.\n - 8. 
\"Episode 83, The David Chalmers Interview (Part I - Consciousness)\" (https://thepanpsycas t.com/panpsycast2/episode83-1). The Panpsycast Philosophy Podcast . 19 July 2020. Retrieved 2020-09-05.\n - 9. Pinker, Steven (29 January 2007). \"The Brain: The Mystery of Consciousness\" (http://conten t.time.com/time/magazine/article/0,9171,1580394-1,00.html). Time . Retrieved 19 December 2018.\n - 10. Levine, Joseph (2009-01-15). \"The Explanatory Gap\" (https://www.oxfordhandbooks.com/vi ew/10.1093/oxfordhb/9780199262618.001.0001/oxfordhb-9780199262618-e-17). The Oxford Handbook of Philosophy of Mind : 281-291. doi:10.1093/oxfordhb/9780199262618.003.0017 (https://doi.org/10.1093%2Foxfordhb%2F9 780199262618.003.0017). ISBN 978-0199262618.\n - 11. McGinn, Colin (20 February 2012). \"All machine and no ghost?\" (http://www.newstatesman. com/ideas/2012/02/consciousness-mind-brain). New Statesman . Retrieved 27 March 2012.\n - 12. Block, Ned (2002). \"The Harder Problem of Consciousness\" (https://philpapers.org/rec/BLO THP). The Journal of Philosophy . 99 (8): 391-425. doi:10.2307/3655621 (https://doi.org/10. 2307%2F3655621). JSTOR 3655621 (https://www.jstor.org/stable/3655621). S2CID 111383062 (https://api.semanticscholar.org/CorpusID:111383062).\n - 13. Varela, F.J. (1 April 1996). \"Neurophenomenology: a methodological remedy for the hard problem\" (https://www.ingentaconnect.com/content/imp/jcs/1996/00000003/00000004/718). Journal of Consciousness Studies . 3 (4): 330-349.\n - 14. Tononi, Giulio; Boly, Melanie; Massimini, Marcello; Koch, Christof (July 2016). \"Integrated information theory: from consciousness to its physical substrate\". Nature Reviews Neuroscience . 17 (7): 450-461. doi:10.1038/nrn.2016.44 (https://doi.org/10.1038%2Fnrn.20 16.44). PMID 27225071 (https://pubmed.ncbi.nlm.nih.gov/27225071). S2CID 21347087 (htt ps://api.semanticscholar.org/CorpusID:21347087).\n - 15. Tononi, Giulio; Koch, Christof (March 2015). 
\"Consciousness: here, there and everywhere?\" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4387509). Philosophical Transactions of the Royal Society B: Biological Sciences . 370 (1668): 20140167. doi:10.1098/rstb.2014.0167 (ht tps://doi.org/10.1098%2Frstb.2014.0167). PMC 4387509 (https://www.ncbi.nlm.nih.gov/pmc/ articles/PMC4387509). PMID 25823865 (https://pubmed.ncbi.nlm.nih.gov/25823865).", - "page_start": 18, - "page_end": 18, - "source_file": "wikipedia2.pdf" - }, - { - "text": "- 79. Scarfone, Matthew (2022). \"Using and Abusing Moorean Arguments\" (https://philpapers.org/ rec/SCAUAA-2). Journal of the American Philosophical Association . 8 (1): 52-71. doi:10.1017/apa.2020.47 (https://doi.org/10.1017%2Fapa.2020.47). S2CID 239672728 (http s://api.semanticscholar.org/CorpusID:239672728).\n - 80. Augustine of Hippo. \"Book 11, Chapter 26\". City of God .\n - 81. Descartes, René (1637). \"4\". Discourse on the Method .\n - 82. Descartes, René (1641). \"Second Meditation\". Meditations on First Philosophy .\n - 83. Chalmers, David (2020). \"Debunking Arguments for Illusionism\" (https://philpapers.org/rec/C HADAF-2). Journal of Consciousness Studies . 27 (5-6): 258-281.\n - 84. Chalmers, David (2002). \"Debunking Arguments for Illusionism\" (https://philpapers.org/rec/C HADAF-2). Journal of Consciousness Studies . 27 (5-6): 258-281.\n - 85. Strawson, G. (2018). \"The Consciousness Deniers\" (https://www.nybooks.com/daily/2018/0 3/13/the-consciousness-deniers/). The New York Review of Books .\n - 86. Koch, Christof (2019). The Feeling of Life Itself: Why Consciousness is Everywhere But Can't be Computed . MIT Press. p. 2.\n - 87. Koch, Christof (2019). The Feeling of Life Itself: Why Consciousness is Everywhere But Can't be Computed . MIT Press. p. 3.\n - 88. Balmer, A. (2020). \"Soft-Wired Illusionism vs. the Meta-Problem of Consciousness\" (https://p hilpapers.org/rec/BALSIV). Journal of Consciousness Studies . 27 (5-6): 26-37.\n - 89. Chalmers, David (2020). 
\"Is the Hard Problem of Consciousness Universal?\". Journal of Consciousness Studies . 27 (5-6): 227-257.\n - 90. Papineau, D. (2019). \"Response to Chalmers' 'The Meta-Problem of Consciousness' \" (http s://philpapers.org/rec/PAPRTC-6). Journal of Consciousness Studies . 26 (9-10): 173-181.\n - 91. J. Levine, \"Conceivability, Identity, and the Explanatory Gap\" in Stuart R. Hameroff, Alfred W. Kaszniak and David Chalmers (eds.), Towards a Science of Consciousness III: The Third Tucson Discussions and Debates , The MIT Press, 1999,. pp 3-12.\n - 92. Gennaro, Rocco J. \"Consciousness\" (https://www.iep.utm.edu/consciou). Internet Encyclopedia of Philosophy .\n - 93. Block, Ned; Stalnaker, Robert (1999). \"Conceptual Analysis, Dualism, and the Explanatory Gap\" (http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/ExplanatoryGap.pdf) (PDF). The Philosophical Review . 108 (1): 1-46. CiteSeerX 10.1.1.693.2421 (https://citeseerx.ist.ps u.edu/viewdoc/summary?doi=10.1.1.693.2421). doi:10.2307/2998259 (https://doi.org/10.230 7%2F2998259). JSTOR 2998259 (https://www.jstor.org/stable/2998259).\n - 94. Stoljar, Daniel (2005). \"Physicalism and Phenomenal Concepts\". Mind & Language . 20 (5): 469-494. doi:10.1111/j.0268-1064.2005.00296.x (https://doi.org/10.1111%2Fj.0268-1064.2 005.00296.x).\n - 95. Chalmers, David (2006). \"Phenomenal Concepts and the Explanatory Gap\" (http://consc.ne t/papers/pceg.pdf) (PDF). In Alter, Torin; Walter, Sven (eds.). Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism . Oxford University Press. ISBN 9780195171655. Retrieved 27 March 2019.\n - 96. Wierzbicka, A. (2019). \"From 'Consciousness' to 'I Think, I Feel, I Know': A Commentary on David Chalmers\". Journal of Consciousness Studies . 26 (9-10): 257-269.\n - 97. Lau, Hakwan; Michel, Matthias (2019). \"A Socio-Historical Take on the Meta-Problem of Consciousness\". Journal of Consciousness Studies . 26 (9-10): 136-147.\n - 98. 
\"Is the hard problem of consciousness really that hard? | Brian Greene and Pat Churchland lock horns\" (https://www.youtube.com/watch?v=hru5d\\_wsu7g). YouTube . 9 July 2022.\n - 99. \"Abiogenesis\" (https://www.allaboutscience.org/abiogenesis.htm).", - "page_start": 23, - "page_end": 23, - "source_file": "wikipedia2.pdf" - }, - { - "text": "philosophers \"were to use panhuman concepts expressed in crosstranslatable words\" (such as know , think , or feel ) then the hard problem would dissolve. [96] David Chalmers has responded to these criticisms by saying that he will not \"apologize for using technical terms in an academic article . . . they play a key role in efficient communication in every discipline, including Wierzbicka's\". [89]\n\n## Type-C Materialism\n\nType-C materialists acknowledge a distinction between knowledge and experience [98] without asserting a more complete explanation for the experiential phenomenon. One taking this view would admit that there is an explanatory gap for which no answer to date may be satisfactory, but trust that inevitably the gap will be closed. [52] This is described by analogy to progression in other areas of science, such as massenergy equivalence which would have been unfathomable in ancient times, [52] abiogenesis which was once considered paradoxical from an evolutionary framework, [99][98] or a suspected future theory of everything combining relativity and quantum mechanics. Similarly, type-C materialism posits that the problem of consciousness is a consequence of our ignorance [71][100] but just as resolvable as any other question in neuroscience.\n\nBecause the explanatory question of consciousness is evaded, type-C materialism does not presuppose [101] the descriptive question, for instance that there is any self-consciousness, wakefulness, or even sentience [102] in a rock. 
Principally, the basis for the argument arises from the apparently high correlation of consciousness with living brain tissue, [103] thereby rejecting panpsychism [101] without explicitly formulating physical causation. More specifically this position denies the existence of philosophical zombies [64] for which there is an absence of data and no proposed method of testing. [104][105] Whether via the inconceivability or actual nonexistence of zombies, a contradiction is exposed nullifying the premise of the consciousness problem's \"hardness\".\n\nType-C materialism is compatible with several cases and could collapse into one of these other metaphysical views [52] depending on scientific discovery and its interpretation. With evidence of emergence, it resolves to strong reductionism under type A. With a different, possibly cultural paradigm for understanding consciousness, it resolves to type-B materialism. [32] If consciousness is explained by the quantum mind, then it resolves to property dualism under type D. [106] With characterization of intrinsic properties in physics extending beyond structure and dynamics, it could resolve to type-F monism. [52]\n\n## Type-D Dualism\n\nDualism views consciousness as either a non-physical substance separate from the brain or a non-physical property of the physical brain. [107] Dualism is the view that the mind is irreducible to the physical body. [107] There are multiple dualist accounts of the causal relationship between the mental and the physical, of which interactionism and epiphenomenalism are the most common today. Interactionism posits that the mental and physical causally impact one another, and is associated with the thought of René Descartes (1596-1650). [52] Epiphenomenalism holds the mental is causally dependent on the physical, but does not in turn causally impact it. 
[52]\n\nIn contemporary philosophy, interactionism has been defended by philosophers including Martine NidaRümelin, [108] while epiphenomenalism has been defended by philosophers including Frank Jackson [109][110] (although Jackson later changed his stance to physicalism). [111] Chalmers has also", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia2.pdf" - }, - { - "text": "This stance has recently taken on the name of illusionism : the view that phenomenal consciousness is an illusion. The term was popularized by the philosopher Keith Frankish. [60] Frankish argues that \"illusionism\" is preferable to \"eliminativism\" for labelling the view that phenomenal consciousness is an illusion. More substantively, Frankish argues that illusionism about phenomenal consciousness is preferable to realism about phenomenal consciousness. He states: \"Theories of consciousness typically address the hard problem. They accept that phenomenal consciousness is real and aim to explain how it comes to exist. There is, however, another approach, which holds that phenomenal consciousness is an illusion and aims to explain why it seems to exist.\" [19] Frankish concludes that illusionism \"replaces the hard problem with the illusion problem-the problem of explaining how the illusion of phenomenality arises and why it is so powerful.\" [19]\n\nThe philosopher Daniel Dennett is another prominent figure associated with illusionism. After Frankish published a paper in the Journal of Consciousness Studies titled Illusionism as a Theory of Consciousness, [60] Dennett responded with his own paper with the spin-off title Illusionism as the Obvious Default Theory of Consciousness. [61] Dennett has been arguing for the illusory status of consciousness since early on in his career. For example, in 1979 he published a paper titled On the Absence of Phenomenology (where he argues for the nonexistence of phenomenal consciousness). 
[70] Similar ideas have been explicated in his 1991 book Consciousness Explained. [71] Dennett argues that the so-called \"hard problem\" will be solved in the process of solving what Chalmers terms the \"easy problems\". [16] He compares consciousness to stage magic and its capability to create extraordinary illusions out of ordinary things. [72] To show how people might be commonly fooled into overstating the accuracy of their introspective abilities, he describes a phenomenon called change blindness, a visual process that involves failure to detect scenery changes in a series of alternating images. [73] He accordingly argues that consciousness need not be what it seems to be based on introspection. To address the question of the hard problem, or how and why physical processes give rise to experience, Dennett states that the phenomenon of having experience is nothing more than the performance of functions or the production of behavior, which can also be referred to as the easy problems of consciousness. [16] Thus, Dennett argues that the hard problem of experience is included among-not separate from-the easy problems, and therefore they can only be explained together as a cohesive unit. [72]\n\nEliminativists differ on the role they believe intuitive judgement plays in creating the apparent reality of consciousness. The philosopher Jacy Reese Anthis is of the position that this issue is born of an overreliance on intuition, calling philosophical discussions on the topic of consciousness a form of \"intuition jousting\". [74] But when the issue is tackled with \"formal argumentation\" and \"precise semantics\" then the hard problem will dissolve. [74] The philosopher Elizabeth Irvine, in contrast, can be read as having the opposite view, since she argues that phenomenal properties (that is, properties of consciousness) do not exist in our common-sense view of the world. 
She states that \"the hard problem of consciousness may not be a genuine problem for non-philosophers (despite its overwhelming obviousness to philosophers).\" [75]", - "page_start": 8, - "page_end": 8, - "source_file": "wikipedia2.pdf" - }, - { - "text": "consciousness, namely the neural correlates of various conscious processes. [22] This more modest goal is the focus of most scientists working on consciousness. [133] Psychologist Susan Blackmore believes, by contrast, that the search for the neural correlates of consciousness is futile and itself predicated on an erroneous belief in the hard problem of consciousness. [134]\n\n## Computational cognition\n\nA functionalist view in cognitive science holds that the mind is an information processing system, and that cognition and consciousness together are a form of computation. Cognition, distinct from consciousness, is explained by neural computation in the computational theory of cognition. The computational theory of mind asserts that not only cognition, but also phenomenal consciousness or qualia, are computational. While the computation system is realized by neurons rather than electronics, in theory it would be possible for artificial intelligence to be conscious.\n\n## Integrated information theory\n\nIntegrated information theory (IIT), developed by the neuroscientist and psychiatrist Giulio Tononi in 2004 and more recently also advocated by Koch, is one of the most discussed models of consciousness in neuroscience and elsewhere. [135][136] The theory proposes an identity between consciousness and integrated information, with the latter item (denoted as Φ) defined mathematically and thus in principle measurable. [136][137] The hard problem of consciousness, write Tononi and Koch, may indeed be intractable when working from matter to consciousness. [15] However, because IIT inverts this relationship and works from phenomenological axioms to matter, they say it could be able to solve the hard problem. 
[15] In this vein, proponents have said the theory goes beyond identifying human neural correlates and can be extrapolated to all physical systems. Tononi wrote (along with two colleagues):\n\nWhile identifying the \"neural correlates of consciousness\" is undoubtedly important, it is hard to see how it could ever lead to a satisfactory explanation of what consciousness is and how it comes about. As will be illustrated below, IIT offers a way to analyze systems of mechanisms to determine if they are properly structured to give rise to consciousness, how much of it, and of which kind. [138]\n\nAs part of a broader critique of IIT, Michael Cerullo suggested that the theory's proposed explanation is in fact for what he dubs (following Scott Aaronson) the \"Pretty Hard Problem\" of methodically inferring which physical systems are conscious-but would not solve Chalmers' hard problem. [136] \"Even if IIT is correct,\" he argues, \"it does not explain why integrated information generates (or is) consciousness.\" [136] Chalmers agrees that IIT, if correct, would solve the \"Pretty Hard Problem\" rather than the hard problem. [139]\n\n## Global workspace theory\n\nGlobal workspace theory (GWT) is a cognitive architecture and theory of consciousness proposed by the cognitive psychologist Bernard Baars in 1988. [140] Baars explains the theory with the metaphor of a theater, with conscious processes represented by an illuminated stage. [140] This theater integrates inputs", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia2.pdf" - }, - { - "text": "While Patricia Churchland and Paul Churchland have famously applied eliminative materialism to propositional attitudes, philosophers including Daniel Dennett, Georges Rey, and Keith Frankish have applied it to qualia or phenomenal consciousness (i.e., conscious experience). 
[59] On their view, it is mistaken not only to believe there is a hard problem of consciousness, but to believe phenomenal consciousness exists at all. [19][61]", - "page_start": 7, - "page_end": 7, - "source_file": "wikipedia2.pdf" - }, - { - "text": "defended versions of both positions as plausible. [52] Traditional dualists such as Descartes believed the mental and the physical to be two separate substances, or fundamental types of entities (hence \"substance dualism\"); some more recent dualists, however, accept only one substance, the physical, but state it has both mental and physical properties (hence \"property dualism\"). [107]\n\n## Type-E Dualism\n\n## Type-F Monism\n\nMeanwhile, panpsychism and neutral monism, broadly speaking, view consciousness as intrinsic to matter. [52] In its most basic form, panpsychism holds that all physical entities have minds (though its proponents take more qualified positions), [112] while neutral monism, in at least some variations, holds that entities are composed of a substance with mental and physical aspects-and is thus sometimes described as a type of panpsychism. [113]\n\nForms of panpsychism and neutral monism were defended in the early twentieth century by the psychologist William James, [114][115][note 2] the philosopher Alfred North Whitehead, [115] the physicist Arthur Eddington, [116][117] and the philosopher Bertrand Russell, [112][113] and interest in these views has been revived in recent decades by philosophers including Thomas Nagel, [115] Galen Strawson, [115][118] Philip Goff, [115] and David Chalmers. [112] Chalmers describes his overall view as \"naturalistic dualism\", [1] but he says panpsychism is in a sense a form of physicalism, [52] as does Strawson. [118] Proponents of panpsychism argue it solves the hard problem of consciousness parsimoniously by making consciousness a fundamental feature of reality. 
[43][119]\n\n## Idealism and cosmopsychism\n\nA traditional solution to the hard problem is idealism, according to which consciousness is fundamental and not simply an emergent property of matter. It is claimed that this avoids the hard problem entirely. [120] Objective idealism and cosmopsychism consider mind or consciousness to be the fundamental substance of the universe. Proponents claim that this approach is immune to both the hard problem of consciousness and the combination problem that affects panpsychism. [121][122][123]\n\nFrom an idealist perspective, matter is a representation or image of mental processes. Supporters suggest that this avoids the problems associated with the materialist view of mind as an emergent property of a physical brain. [124] Critics argue that this then leads to a decombination problem: how is it possible to split a single, universal conscious experience into multiple, distinct conscious experiences? In response, Bernardo Kastrup claims that nature hints at a mechanism for this in the condition dissociative identity disorder (previously known as Multiple Personality Disorder). [125] Kastrup proposes dissociation as an example from nature showing that multiple minds with their own individual subjective experience could develop within a single universal mind.\n\nCognitive psychologist Donald D. 
Hoffman uses a mathematical model based around conscious agents, within a fundamentally conscious universe, to support conscious realism as a description of nature-one that falls within the objective idealism approaches to the hard problem: \"The objective world, i.e., the world whose existence does not depend on the perceptions of a particular conscious agent, consists entirely of conscious agents.\" [126]", - "page_start": 12, - "page_end": 12, - "source_file": "wikipedia2.pdf" - }, - { - "text": "from a variety of unconscious and otherwise autonomous networks in the brain and then broadcasts them to unconscious networks (represented in the metaphor by a broad, unlit \"audience\"). [140] The theory has since been expanded upon by other scientists including cognitive neuroscientist Stanislas Dehaene. [141]\n\nIn his original paper outlining the hard problem of consciousness, Chalmers discussed GWT as a theory that only targets one of the \"easy problems\" of consciousness. [1] In particular, he said GWT provided a promising account of how information in the brain could become globally accessible, but argued that \"now the question arises in a different form: why should global accessibility give rise to conscious experience? As always, this bridging question is unanswered.\" [1] J. W. Dalton similarly criticized GWT on the grounds that it provides, at best, an account of the cognitive function of consciousness, and fails to explain its experiential aspect. [142] By contrast, A. C. Elitzur argued: \"While [GWT] does not address the 'hard problem', namely, the very nature of consciousness, it constrains any theory that attempts to do so and provides important insights into the relation between consciousness and cognition.\" [143]\n\nFor his part, Baars writes (along with two colleagues) that there is no hard problem of explaining qualia over and above the problem of explaining causal functions, because qualia are entailed by neural activity and themselves causal. 
[21] Dehaene, in his 2014 book Consciousness and the Brain , rejected the concept of qualia and argued that Chalmers' \"easy problems\" of consciousness are actually the hard problems. [20] He further stated that the \"hard problem\" is based only upon ill-defined intuitions that are continually shifting as understanding evolves: [20]\n\nOnce our intuitions are educated by cognitive neuroscience and computer simulations, Chalmers' hard problem will evaporate. The hypothetical concept of qualia, pure mental experience, detached from any information-processing role, will be viewed as a peculiar idea of the prescientific era, much like vitalism... [Just as science dispatched vitalism] the science of consciousness will keep eating away at the hard problem of consciousness until it vanishes.\n\n## Meta-problem\n\nIn 2018, Chalmers highlighted what he calls the \" meta-problem of consciousness \", another problem related to the hard problem of consciousness: [76]\n\nThe meta-problem of consciousness is (to a first approximation) the problem of explaining why we think that there is a [hard] problem of consciousness.\n\nIn his \"second approximation\", he says it is the problem of explaining the behavior of \"phenomenal reports\", and the behavior of expressing a belief that there is a hard problem of consciousness. [76]\n\nExplaining its significance, he says: [76]\n\nAlthough the meta-problem is strictly speaking an easy problem, it is deeply connected to the hard problem. We can reasonably hope that a solution to the meta-problem will shed significant light on the hard problem. 
A particularly strong line holds that a solution to the", - "page_start": 16, - "page_end": 16, - "source_file": "wikipedia2.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia2.pdf", - "query": "What is the role of the PhilPapers organization ?", - "target_page": 6, - "target_passage": " PhilPapers is an organization that archives academic philosophy papers and periodically surveys professional philosophers about their views.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "- Blair, J. Anthony; Johnson, Ralph H. (2000). \"Informal Logic: An Overview\" (https://philpaper s.org/rec/BLAILA-3). Informal Logic . 20 (2): 93-107. doi:10.22329/il.v20i2.2262 (https://doi.o rg/10.22329%2Fil.v20i2.2262). Archived (https://web.archive.org/web/20211209195317/http s://philpapers.org/rec/BLAILA-3) from the original on 9 December 2021. Retrieved 29 December 2021.\n - Blair, J. Anthony (20 October 2011). Groundwork in the Theory of Argumentation: Selected Papers of J. Anthony Blair . Springer Science & Business Media. p. 47. ISBN 978-94-0072363-4.\n - Bobzien, Susanne (2020). \"Ancient Logic: 2. Aristotle\" (https://plato.stanford.edu/entries/logi c-ancient/#Ari). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20180828102117/https://plato.sta nford.edu/entries/logic-ancient/#Ari) from the original on 28 August 2018. Retrieved 3 January 2022.\n - Borchert, Donald, ed. (2006a). \"Computability Theory\". Macmillan Encyclopedia of Philosophy Volume 2 (https://philpapers.org/rec/BORMEO) (2nd ed.). Macmillan. pp. 372390. ISBN 978-0-02-865782-0.\n - Borchert, Donald (2006b). \"Induction\". Macmillan Encyclopedia of Philosophy Volume 4 (htt ps://philpapers.org/rec/BORMEO) (2nd ed.). Macmillan. pp. 635-648. ISBN 978-0-02865784-4. 
Archived (https://web.archive.org/web/20210112065913/https://philpapers.org/re c/BORMEO) from the original on 12 January 2021. Retrieved 4 January 2022.\n - Borchert, Donald (2006c). \"Logic, Non-Classical\". Macmillan Encyclopedia of Philosophy Volume 5 (https://philpapers.org/rec/BORMEO) (2nd ed.). Macmillan. pp. 485-492. ISBN 978-0-02-865785-1. Archived (https://web.archive.org/web/20210112065913/https://ph ilpapers.org/rec/BORMEO) from the original on 12 January 2021. Retrieved 4 January 2022.\n - Boris, Kulik; Alexander, Fridman (30 November 2017). N-ary Relations for Logical Analysis of Data and Knowledge . IGI Global. p. 74. ISBN 978-1-5225-2783-1.\n - Bridges, Douglas; Ishihara, Hajime; Rathjen, Michael; Schwichtenberg, Helmut (30 April 2023). Handbook of Constructive Mathematics . Cambridge University Press. pp. 73-4. ISBN 978-1-316-51086-5.\n - Brody, Boruch A. (2006). Encyclopedia of Philosophy . Vol. 5. Donald M. Borchert (2nd ed.). Thomson Gale/Macmillan Reference US. pp. 535-536. ISBN 978-0-02-865780-6. OCLC 61151356 (https://search.worldcat.org/oclc/61151356). \"The two most important types of logical calculi are propositional (or sentential) calculi and functional (or predicate) calculi. A propositional calculus is a system containing propositional variables and connectives (some also contain propositional constants) but not individual or functional variables or constants. In the extended propositional calculus, quantifiers whose operator variables are propositional variables are added.\"\n - Bunnin, Nicholas; Yu, Jiyuan (27 January 2009). The Blackwell Dictionary of Western Philosophy . John Wiley & Sons. p. 179. ISBN 978-1-4051-9112-8.\n - Burgess, John P. (2009). \"1. Classical logic\". Philosophical Logic (https://philpapers.org/rec/ BURPL-3). Princeton, NJ: Princeton University Press. pp. 1-12. ISBN 978-0-691-15633-0. Archived (https://web.archive.org/web/20211216143954/https://philpapers.org/rec/BURPL3) from the original on 16 December 2021. 
Retrieved 4 January 2022.\n - Bäck, Allan T. (2016). Aristotle's Theory of Predication . Brill. p. 317. ISBN 978-90-04-321090.\n - Calderbank, Robert; Sloane, Neil J. A. (April 2001). \"Claude Shannon (1916-2001)\" (https:// doi.org/10.1038%2F35071223). Nature . 410 (6830): 768. doi:10.1038/35071223 (https://doi. org/10.1038%2F35071223). ISSN 1476-4687 (https://search.worldcat.org/issn/1476-4687). PMID 11298432 (https://pubmed.ncbi.nlm.nih.gov/11298432). S2CID 4402158 (https://api.s emanticscholar.org/CorpusID:4402158).", - "page_start": 25, - "page_end": 25, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- Vidyabhusana, Satis Chandra (1988). A History of Indian Logic: Ancient, Mediaeval and Modern Schools . Motilal Banarsidass Publisher. p. 221. ISBN 978-81-208-0565-1.\n - Vleet, Van Jacob E. (2010). \"Introduction\". Informal Logical Fallacies: A Brief Guide (https://p hilpapers.org/rec/VLEILF). Upa. pp. ix-x. ISBN 978-0-7618-5432-6. Archived (https://web.ar chive.org/web/20220228035654/https://philpapers.org/rec/VLEILF) from the original on 28 February 2022. Retrieved 2 January 2022.\n - Väänänen, Jouko (2021). \"Second-order and Higher-order Logic\" (https://plato.stanford.edu/ entries/logic-higher-order/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20211030222316/ https://plato.stanford.edu/entries/logic-higher-order/) from the original on 30 October 2021. Retrieved 23 November 2021.\n - Walton, Douglas N. (1987). Informal Fallacies: Towards a Theory of Argument Criticisms (htt ps://philpapers.org/rec/WALIFT). John Benjamins. ISBN 978-1-55619-010-0. Archived (http s://web.archive.org/web/20220302001111/https://philpapers.org/rec/WALIFT) from the original on 2 March 2022. Retrieved 2 January 2022.\n - Warren, Jared (2020). Shadows of Syntax: Revitalizing Logical and Mathematical Conventionalism (https://global.oup.com/academic/product/shadows-of-syntax-9780190086 152). 
Oxford University Press. ISBN 978-0-19-008615-2.\n - Washell, Richard F. (1973). \"Logic, Language, and Albert the Great\" (https://philpapers.org/r ec/WASLLA-3). Journal of the History of Ideas . 34 (3): 445-50. doi:10.2307/2708963 (http s://doi.org/10.2307%2F2708963). JSTOR 2708963 (https://www.jstor.org/stable/2708963).\n - Wasilewska, Anita (2018). Logics for Computer Science: Classical and Non-Classical . Springer. pp. 145-6. ISBN 978-3-319-92591-2.\n - Weber, Zach. \"Paraconsistent Logic\" (https://iep.utm.edu/para-log/). Internet Encyclopedia of Philosophy . Retrieved 12 December 2021.\n - Weddle, Perry (2011). \"Chapter 36. Informal logic and the eductive-inductive distinction\". Across the Lines of Disciplines (https://www.degruyter.com/document/doi/10.1515/97831108 67718.383/html). De Gruyter Mouton. pp. 383-388. doi:10.1515/9783110867718.383 (http s://doi.org/10.1515%2F9783110867718.383). ISBN 978-3-11-086771-8. Archived (https://w eb.archive.org/web/20211231172343/https://www.degruyter.com/document/doi/10.1515/978 3110867718.383/html) from the original on 31 December 2021. Retrieved 2 January 2022.\n - Westerståhl, Dag (1989). \"Aristotelian Syllogisms and Generalized Quantifiers\" (https://philp apers.org/rec/WESASA). Studia Logica . 48 (4): 577-585. doi:10.1007/BF00370209 (https:// doi.org/10.1007%2FBF00370209). S2CID 32089424 (https://api.semanticscholar.org/Corpu sID:32089424). Archived (https://web.archive.org/web/20220104182746/https://philpapers.o rg/rec/WESASA) from the original on 4 January 2022. Retrieved 4 January 2022.\n - Wilbanks, Jan J. (1 March 2010). \"Defining Deduction, Induction, and Validity\" (https://link.sp ringer.com/article/10.1007/s10503-009-9131-5). Argumentation . 24 (1): 107-124. doi:10.1007/s10503-009-9131-5 (https://doi.org/10.1007%2Fs10503-009-9131-5). ISSN 1572-8374 (https://search.worldcat.org/issn/1572-8374). S2CID 144481717 (https://ap i.semanticscholar.org/CorpusID:144481717). 
Archived (https://web.archive.org/web/202201 08171721/https://link.springer.com/article/10.1007/s10503-009-9131-5) from the original on 8 January 2022. Retrieved 8 January 2022.\n - Wilce, Alexander (2021). \"Quantum Logic and Probability Theory: 2.1 Realist Quantum Logic\" (https://plato.stanford.edu/entries/qt-quantlog/#RealQuanLogi). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Retrieved 11 March 2023.", - "page_start": 36, - "page_end": 36, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- Haack, Susan (1978). \"1. 'Philosophy of logics' \". Philosophy of Logics (https://philpapers.or g/rec/HAAPOL-2). London and New York: Cambridge University Press. pp. 1-10. ISBN 9780-521-29329-7. Archived (https://web.archive.org/web/20211207200551/https://philpapers.o rg/rec/HAAPOL-2) from the original on 7 December 2021. Retrieved 29 December 2021.\n - Haack, Susan (1996). Deviant Logic, Fuzzy Logic: Beyond the Formalism . University of Chicago Press. ISBN 978-0-226-31133-3.\n - Haaparanta, Leila (2009). \"1. Introduction\". The Development of Modern Logic . Oxford University Press. pp. 4-6. ISBN 978-0-19-513731-6.\n - Hansen, Hans (2020). \"Fallacies\" (https://plato.stanford.edu/entries/fallacies/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Archived (http s://web.archive.org/web/20210329182946/https://plato.stanford.edu/entries/fallacies/) from the original on 29 March 2021. Retrieved 18 March 2021.\n - Hartmann, Stephan; Sprenger, Jan (2010). \"Bayesian Epistemology\". The Routledge Companion to Epistemology (https://philpapers.org/rec/BOVSIO). London: Routledge. pp. 609-620. ISBN 978-0-415-96219-3. Archived (https://web.archive.org/web/2021051609 5047/https://philpapers.org/rec/BOVSIO) from the original on 16 May 2021. Retrieved 4 January 2022.\n - Hasse, Dag Nikolaus (2008). 
\"Influence of Arabic and Islamic Philosophy on the Latin West\" (https://plato.stanford.edu/entries/arabic-islamic-influence/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Retrieved 19 July 2023.\n - Hawthorne, James (2021). \"Inductive Logic\" (https://plato.stanford.edu/entries/logic-inductiv e/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20220121081805/https://plato.stanford.ed u/entries/logic-inductive/) from the original on 21 January 2022. Retrieved 6 January 2022.\n - Hintikka, Jaakko J. (2019). \"Philosophy of logic\" (https://www.britannica.com/topic/philosoph y-of-logic). Encyclopædia Britannica . Archived (https://web.archive.org/web/2015042810173 2/http://www.britannica.com/EBchecked/topic/346240/philosophy-of-logic) from the original on 28 April 2015. Retrieved 21 November 2021.\n - Hintikka, Jaakko J. (2023). \"Logical systems\" (https://www.britannica.com/topic/logic/Logical -systems). Encyclopædia Britannica . Archived (https://web.archive.org/web/2021120718465 6/https://www.britannica.com/topic/logic/Logical-systems) from the original on 7 December 2021. Retrieved 4 December 2021.\n - Hintikka, Jaakko (1970). \"Information, Deduction, and the A Priori\". Noûs . 4 (2): 135-152. doi:10.2307/2214318 (https://doi.org/10.2307%2F2214318). ISSN 0029-4624 (https://searc h.worldcat.org/issn/0029-4624). JSTOR 2214318 (https://www.jstor.org/stable/2214318).\n - Hintikka, Jaakko; Sandu, Gabriel (2006). \"What is Logic?\". In Jacquette, D. (ed.). Philosophy of Logic (https://philpapers.org/rec/JAAWIL). North Holland. pp. 13-39. ISBN 978-0-444-51541-4. Archived (https://web.archive.org/web/20211207235525/https://ph ilpapers.org/rec/JAAWIL) from the original on 7 December 2021. Retrieved 29 December 2021.\n - Hintikka, Jaakko J.; Spade, Paul Vincent. \"History of logic\" (https://www.britannica.com/topi c/history-of-logic). 
Encyclopædia Britannica . Retrieved 23 September 2022.\n - Honderich, Ted (2005). The Oxford Companion to Philosophy (https://philpapers.org/rec/HO NTOC-2). Oxford University Press. ISBN 978-0-19-926479-7. Archived (https://web.archive. org/web/20210129082636/https://philpapers.org/rec/HONTOC-2) from the original on 29 January 2021. Retrieved 2 January 2022.\n - Hurley, Patrick J. (2015). \"4. Categorical Syllogisms\". Logic: The Essentials . Wadsworth. pp. 189-237. ISBN 978-1-305-59041-0.", - "page_start": 29, - "page_end": 29, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- 79. Scarfone, Matthew (2022). \"Using and Abusing Moorean Arguments\" (https://philpapers.org/ rec/SCAUAA-2). Journal of the American Philosophical Association . 8 (1): 52-71. doi:10.1017/apa.2020.47 (https://doi.org/10.1017%2Fapa.2020.47). S2CID 239672728 (http s://api.semanticscholar.org/CorpusID:239672728).\n - 80. Augustine of Hippo. \"Book 11, Chapter 26\". City of God .\n - 81. Descartes, René (1637). \"4\". Discourse on the Method .\n - 82. Descartes, René (1641). \"Second Meditation\". Meditations on First Philosophy .\n - 83. Chalmers, David (2020). \"Debunking Arguments for Illusionism\" (https://philpapers.org/rec/C HADAF-2). Journal of Consciousness Studies . 27 (5-6): 258-281.\n - 84. Chalmers, David (2002). \"Debunking Arguments for Illusionism\" (https://philpapers.org/rec/C HADAF-2). Journal of Consciousness Studies . 27 (5-6): 258-281.\n - 85. Strawson, G. (2018). \"The Consciousness Deniers\" (https://www.nybooks.com/daily/2018/0 3/13/the-consciousness-deniers/). The New York Review of Books .\n - 86. Koch, Christof (2019). The Feeling of Life Itself: Why Consciousness is Everywhere But Can't be Computed . MIT Press. p. 2.\n - 87. Koch, Christof (2019). The Feeling of Life Itself: Why Consciousness is Everywhere But Can't be Computed . MIT Press. p. 3.\n - 88. Balmer, A. (2020). \"Soft-Wired Illusionism vs. 
the Meta-Problem of Consciousness\" (https://p hilpapers.org/rec/BALSIV). Journal of Consciousness Studies . 27 (5-6): 26-37.\n - 89. Chalmers, David (2020). \"Is the Hard Problem of Consciousness Universal?\". Journal of Consciousness Studies . 27 (5-6): 227-257.\n - 90. Papineau, D. (2019). \"Response to Chalmers' 'The Meta-Problem of Consciousness' \" (http s://philpapers.org/rec/PAPRTC-6). Journal of Consciousness Studies . 26 (9-10): 173-181.\n - 91. J. Levine, \"Conceivability, Identity, and the Explanatory Gap\" in Stuart R. Hameroff, Alfred W. Kaszniak and David Chalmers (eds.), Towards a Science of Consciousness III: The Third Tucson Discussions and Debates , The MIT Press, 1999,. pp 3-12.\n - 92. Gennaro, Rocco J. \"Consciousness\" (https://www.iep.utm.edu/consciou). Internet Encyclopedia of Philosophy .\n - 93. Block, Ned; Stalnaker, Robert (1999). \"Conceptual Analysis, Dualism, and the Explanatory Gap\" (http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/ExplanatoryGap.pdf) (PDF). The Philosophical Review . 108 (1): 1-46. CiteSeerX 10.1.1.693.2421 (https://citeseerx.ist.ps u.edu/viewdoc/summary?doi=10.1.1.693.2421). doi:10.2307/2998259 (https://doi.org/10.230 7%2F2998259). JSTOR 2998259 (https://www.jstor.org/stable/2998259).\n - 94. Stoljar, Daniel (2005). \"Physicalism and Phenomenal Concepts\". Mind & Language . 20 (5): 469-494. doi:10.1111/j.0268-1064.2005.00296.x (https://doi.org/10.1111%2Fj.0268-1064.2 005.00296.x).\n - 95. Chalmers, David (2006). \"Phenomenal Concepts and the Explanatory Gap\" (http://consc.ne t/papers/pceg.pdf) (PDF). In Alter, Torin; Walter, Sven (eds.). Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism . Oxford University Press. ISBN 9780195171655. Retrieved 27 March 2019.\n - 96. Wierzbicka, A. (2019). \"From 'Consciousness' to 'I Think, I Feel, I Know': A Commentary on David Chalmers\". Journal of Consciousness Studies . 26 (9-10): 257-269.\n - 97. 
Lau, Hakwan; Michel, Matthias (2019). \"A Socio-Historical Take on the Meta-Problem of Consciousness\". Journal of Consciousness Studies . 26 (9-10): 136-147.\n - 98. \"Is the hard problem of consciousness really that hard? | Brian Greene and Pat Churchland lock horns\" (https://www.youtube.com/watch?v=hru5d\\_wsu7g). YouTube . 9 July 2022.\n - 99. \"Abiogenesis\" (https://www.allaboutscience.org/abiogenesis.htm).", - "page_start": 23, - "page_end": 23, - "source_file": "wikipedia2.pdf" - }, - { - "text": "- Knuuttila, Simo (1980). Reforging the Great Chain of Being: Studies of the History of Modal Theories . Springer Science & Business Media. p. 71. ISBN 978-90-277-1125-0.\n - Korb, Kevin (2004). \"Bayesian Informal Logic and Fallacy\" (https://philpapers.org/rec/KORBI L). Informal Logic . 24 (1): 41-70. doi:10.22329/il.v24i1.2132 (https://doi.org/10.22329%2Fil. v24i1.2132). Archived (https://web.archive.org/web/20211110075255/https://philpapers.org/r ec/KORBIL) from the original on 10 November 2021. Retrieved 2 January 2022.", - "page_start": 30, - "page_end": 30, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- Iqbal, Mohammad (2013). \"The Spirit of Muslim Culture\". The Reconstruction of Religious Thought in Islam (http://www.allamaiqbal.com/works/prose/english/reconstruction/). Stanford University Press. pp. 99-115. ISBN 978-0-8047-8686-7.\n - Irvine, Andrew David (2022). \"Bertrand Russell\" (https://plato.stanford.edu/entries/russell/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Retrieved 29 September 2022.\n - Jacquette, Dale (2006). \"Introduction: Philosophy of logic today\". Philosophy of Logic (http s://philpapers.org/rec/JACPOL). North Holland. pp. 1-12. ISBN 978-0-444-51541-4. Archived (https://web.archive.org/web/20211207184932/https://philpapers.org/rec/JACPOL) from the original on 7 December 2021. Retrieved 29 December 2021.\n - Jago, Mark (2014). 
The Impossible: An Essay on Hyperintensionality . OUP Oxford. p. 41. ISBN 978-0-19-101915-9.\n - Janssen, Theo M. V.; Zimmermann, Thomas Ede (2021). \"Montague Semantics\" (https://plat o.stanford.edu/entries/montague-semantics/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. pp. 3-4. Retrieved 10 March 2023.\n - Johnson, Ralph H. (1999). \"The Relation Between Formal and Informal Logic\" (https://philpa pers.org/rec/JOHTRB-2). Argumentation . 13 (3): 265-274. doi:10.1023/A:1007789101256 (https://doi.org/10.1023%2FA%3A1007789101256). S2CID 141283158 (https://api.semantic scholar.org/CorpusID:141283158). Archived (https://web.archive.org/web/20211207184706/ https://philpapers.org/rec/JOHTRB-2) from the original on 7 December 2021. Retrieved 2 January 2022.\n - Johnson, Ralph H. (15 July 2014). The Rise of Informal Logic: Essays on Argumentation, Critical Thinking, Reasoning and Politics . University of Windsor. ISBN 978-0-920233-71-9.\n - Ketland, Jeffrey (2005). \"Second Order Logic\". Macmillan Encyclopedia of Philosophy Volume 8 (https://www.encyclopedia.com/humanities/encyclopedias-almanacs-transcripts-a nd-maps/second-order-logic). Macmillan Reference USA. pp. 707-708. ISBN 978-0-02865788-2. Archived (https://web.archive.org/web/20211207184921/https://www.encyclopedi a.com/humanities/encyclopedias-almanacs-transcripts-and-maps/second-order-logic) from the original on 7 December 2021. Retrieved 4 January 2022.\n - King, Jeffrey C. (2 September 2009). \"Formal Semantics\". The Oxford Handbook of Philosophy of Language . pp. 557-8. doi:10.1093/oxfordhb/9780199552238.003.0023 (http s://doi.org/10.1093%2Foxfordhb%2F9780199552238.003.0023). ISBN 978-0-19-955223-8.\n - King, Jeffrey C. (2019). \"Structured Propositions\" (https://plato.stanford.edu/entries/propositi ons-structured/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. 
Archived (https://web.archive.org/web/20211025211706/https://plato.sta nford.edu/entries/propositions-structured/) from the original on 25 October 2021. Retrieved 4 December 2021.\n - Klement, Kevin C. (1995b). \"Propositional Logic\" (https://iep.utm.edu/prop-log/). Internet Encyclopedia of Philosophy . ISSN 2161-0002 (https://search.worldcat.org/issn/2161-0002). Retrieved 23 September 2022.\n - Kline, Morris (1972). Mathematical Thought From Ancient to Modern Times . Oxford University Press. ISBN 978-0-19-506135-2.\n - Kneale, William; Kneale, Martha (1962). The Development of Logic . Clarendon Press. ISBN 978-0-19-824773-9.\n - Knuuttila, Simo (1980). Reforging the Great Chain of Being: Studies of the History of Modal Theories . Springer Science & Business Media. p. 71. ISBN 978-90-277-1125-0.", - "page_start": 30, - "page_end": 30, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- 52. Chalmers, David (2003). \"Consciousness and its Place in Nature\". In Stich, Stephen P.; Warfield, Ted A. (eds.). Blackwell Guide to the Philosophy of Mind . Malden, MA: Blackwell. pp. 102-142. doi:10.1002/9780470998762.ch5 (https://doi.org/10.1002%2F978047099876 2.ch5). ISBN 9780470998762.\n - 53. Boutel, Adrian (2013). \"How to be a Type-C Physicalist\" (https://philpapers.org/rec/BOUHT B). Philosophical Studies . 164 (2): 301-320. doi:10.1007/s11098-012-9854-2 (https://doi.or g/10.1007%2Fs11098-012-9854-2). S2CID 254941872 (https://api.semanticscholar.org/Cor pusID:254941872).\n - 54. Majeed, Raamy (September 2016). \"The hard problem & its explanatory targets\". Ratio . 29 (3): 298-311. doi:10.1111/rati.12103 (https://doi.org/10.1111%2Frati.12103).\n - 55. Levin, Janet (2008). \"Taking Type-B Materialism Seriously\" (https://philpapers.org/rec/LEVT TM). Mind and Language . 23 (4): 402-425. doi:10.1111/j.1468-0017.2008.00349.x (https://d oi.org/10.1111%2Fj.1468-0017.2008.00349.x).\n - 56. Mandik, Pete; Weisberg, Josh (2008). Wrenn, Chase (ed.). 
Type-Q Materialism (https://philp apers.org/rec/MANTM). Peter Lang Publishing Group.\n - 57. Pereira, Roberto Horácio Sá (2016). \"In Defence of Type-A Materialism\" (https://philpapers.o rg/rec/PERIDO-3). Diametros . 49 (49): 68-83. doi:10.13153/diam.49.2016.921 (https://doi.or g/10.13153%2Fdiam.49.2016.921).\n - 58. Yetter-Chappell, Helen (2017). \"Dissolving Type-B Physicalism\" (https://philpapers.org/rec/Y ETDTP-2). Philosophical Perspectives . 31 (1): 469-498. doi:10.1111/phpe.12099 (https://do i.org/10.1111%2Fphpe.12099).\n - 59. Ramsey, William (2019). \"Eliminative Materialism\" (https://plato.stanford.edu/entries/material ism-eliminative/). In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy . Retrieved 1 April 2019.\n - 60. Frankish, K. (2016). \"Illusionism as a theory of consciousness\". Journal of Consciousness Studies . 23 (11-12): 11-39.\n - 61. Dennett, Daniel (2016). \"Illusionism as the Obvious Default Theory of Consciousness\" (http s://philpapers.org/rec/DENIAT-3). Journal of Consciousness Studies . 23 (11-12): 65-72.\n - 62. Carruthers, Peter (2016). \"Higher-order theories of consciousness\" (http://plato.stanford.ed u/entries/consciousness-higher/). Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University.\n - 63. Carruthers, Peter (2005). \"Phenomenal concepts and higher-order experiments\" (https://boo ks.google.com/books?id=FKI4flNaGjUC&pg=PA79). Consciousness: essays from a higherorder perspective . Oxford University Press. pp. 79 ff . ISBN 978-0191535048.\n - 64. Carruthers, Glenn; Schier, Elizabeth (2012). \"Dissolving the hard problem of consciousness\" (http://consciousnessonline.files.wordpress.com/2012/01/disolvinghardproblem.pdf) (PDF). Consciousness Online fourth conference . Retrieved 7 July 2014.\n - 65. Stango, Marco (Summer 2017). \"A Deweyan assessment of three major tendencies in philosophy of consciousness\" (http://muse.jhu.edu/article/680916). Transactions of the Charles S. 
Peirce Society . 53 (3): 466-490. doi:10.2979/trancharpeirsoc.53.3.06 (https://doi. org/10.2979%2Ftrancharpeirsoc.53.3.06). S2CID 148690536 (https://api.semanticscholar.or g/CorpusID:148690536).", - "page_start": 21, - "page_end": 21, - "source_file": "wikipedia2.pdf" - }, - { - "text": "- Shermer, Michael (25 October 2022). Conspiracy: Why the Rational Believe the Irrational . JHU Press. ISBN 978-1-4214-4445-1.\n - Sider, Theodore (2010). Logic for Philosophy . Oxford University Press. ISBN 978-0-19957558-9.\n - Siegel, Harvey; Biro, John (1997). \"Epistemic Normativity, Argumentation, and Fallacies\" (htt ps://philpapers.org/rec/SIEENA). Argumentation . 11 (3): 277-292. doi:10.1023/A:1007799325361 (https://doi.org/10.1023%2FA%3A1007799325361). S2CID 126269789 (https://api.semanticscholar.org/CorpusID:126269789). Archived (https:// web.archive.org/web/20220228035651/https://philpapers.org/rec/SIEENA) from the original on 28 February 2022. Retrieved 4 January 2022.\n - Simpson, R. L. (2008). Essentials of Symbolic Logic (3rd ed.). Broadview Press. p. 14. ISBN 978-1-77048-495-5.\n - Smith, Robin (2022). \"Aristotle's Logic\" (https://plato.stanford.edu/entries/aristotle-logic/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Retrieved 11 March 2023.\n - Spade, Paul Vincent; Panaccio, Claude (2019). \"William of Ockham\" (https://plato.stanford.e du/entries/ockham/#SummLogi). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University.\n - Spriggs, John (2012). GSN - The Goal Structuring Notation: A Structured Approach to Presenting Arguments . Springer Science & Business Media. pp. 20-22. ISBN 978-1-44712312-5.\n - Stairs, Allen (2017). A Thinker's Guide to the Philosophy of Religion . Routledge. p. 343. ISBN 978-1-351-21981-5.\n - Sternberg, Robert J. \"Thought\" (https://www.britannica.com/topic/thought). Encyclopædia Britannica . 
Archived (https://web.archive.org/web/20211013145532/https://www.britannica.c om/topic/thought) from the original on 13 October 2021. Retrieved 14 October 2021.\n - Stolyar, Abram Aronovich (1 January 1984). Introduction to Elementary Mathematical Logic . Courier Corporation. ISBN 978-0-486-64561-2.\n - Stone, Mark A. (2012). \"Denying the Antecedent: Its Effective Use in Argumentation\" (https:// philpapers.org/rec/STODTA). Informal Logic . 32 (3): 327-356. doi:10.22329/il.v32i3.3681 (ht tps://doi.org/10.22329%2Fil.v32i3.3681). Archived (https://web.archive.org/web/2022022812 3240/https://philpapers.org/rec/STODTA) from the original on 28 February 2022. Retrieved 8 January 2022.\n - Stump, David J. \"Fallacy, Logical\" (https://www.encyclopedia.com/history/dictionaries-thesau ruses-pictures-and-press-releases/fallacy-logical). encyclopedia.com . Archived (https://web. archive.org/web/20210215112403/https://www.encyclopedia.com/history/dictionaries-thesau ruses-pictures-and-press-releases/fallacy-logical) from the original on 15 February 2021. Retrieved 20 March 2021.\n - Talbott, William (2016). \"Bayesian Epistemology\" (https://plato.stanford.edu/entries/epistemo logy-bayesian/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20210401034856/https://plato.sta nford.edu/entries/epistemology-bayesian/) from the original on 1 April 2021. Retrieved 6 March 2021.\n - Tarski, Alfred (1994). Introduction to Logic and to the Methodology of the Deductive Sciences . Oxford University Press. p. 40. ISBN 978-0-19-802139-1.\n - Tondl, L. (2012). Problems of Semantics: A Contribution to the Analysis of the Language Science . Springer Science & Business Media. p. 111. ISBN 978-94-009-8364-9.\n - Velleman, Daniel J. (2006). How to Prove It: A Structured Approach . Cambridge University Press. p. 8, 103. ISBN 978-0-521-67599-4.\n - Vickers, John M. (2022). 
\"Inductive Reasoning\" (https://www.oxfordbibliographies.com/displ ay/document/obo-9780195396577/obo-9780195396577-0171.xml). Oxford Bibliographies . Oxford University Press. Retrieved 18 January 2023.", - "page_start": 35, - "page_end": 35, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- Wolf, Robert G. (1978). \"Are Relevant Logics Deviant?\" (https://philpapers.org/rec/WOLAR L). Philosophia . 7 (2): 327-340. doi:10.1007/BF02378819 (https://doi.org/10.1007%2FBF02 378819). S2CID 143697796 (https://api.semanticscholar.org/CorpusID:143697796). Archived (https://web.archive.org/web/20211216143955/https://philpapers.org/rec/WOLAR L) from the original on 16 December 2021. Retrieved 4 January 2022.\n - Zegarelli, Mark (2010). Logic For Dummies . John Wiley & Sons. p. 30. ISBN 978-1-11805307-2.\n\n## External links\n\nRetrieved from \"https://en.wikipedia.org/w/index.php?title=Logic&oldid=1266818857\"", - "page_start": 37, - "page_end": 37, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- 32. \"Hard Problem of Consciousness\" (https://iep.utm.edu/hard-problem-of-conciousness/). Internet Encyclopedia of Philosophy . Retrieved 2024-10-09.\n - 33. Chalmers, David (January 1997). \"Moving forward on the problem of consciousness\" (http s://philpapers.org/rec/CHAMFO). Journal of Consciousness Studies . 
4 (1): 3-46.", - "page_start": 19, - "page_end": 19, - "source_file": "wikipedia2.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0510.pdf", - "query": "What explains mostly the physical behavior that occurs in region iii of thin films ?", - "target_page": 5, - "target_passage": "The observed behaviour in region iii) can be reason- ably attributed to the decreasing relevance of the con- tribution to the total energy of the system coming from the competitive interactions among NNN planes as the film thickness decreases", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "## Abstract\n\nWe review recent experiments on dewetting thin films of evaporating colloidal nanoparticle suspensions (nanofluids) and discuss several theoretical approaches to describe the ongoing processes including coupled transport and phase changes. These approaches range from microscopic discrete stochastic theories to mesoscopic continuous deterministic descriptions. In particular, we focus on (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nModels (i) and (ii) are employed to discuss the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor film' that remains behind a mesoscopic dewetting front. We highlight, in particular, the presence of a transverse instability in the evaporative dewetting front which results in highly branched fingering structures. The subtle interplay of decomposition in the film and contact line motion is discussed.\n\nFinally, we discuss a simple thin film model (iii) of the hydrodynamics on the mesoscale. We employ coupled evolution equations for the film thickness profile and mean particle concentration. 
The model is used to discuss the self-pinning and de-pinning of a contact line related to the 'coffee-stain' effect.\n\nIn the course of the review we discuss the advantages and limitations of the different theories, as well as possible future developments and extensions.\n\nThe paper is published in: J. Phys.-Cond. Mat. 21 , 264016 (2009), in the Volume 'Nanofluids on solid substrates' and can be obtained at http://dx.doi.org/10.1088/0953-8984/21/26/264016", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [5] F. Brochard-Wyart and J. Daillant, 'Drying of solids wetted by thin liquid films,' Can. J. Phys. 68 , 1084-1088 (1989).\n - [6] P. Muller-Buschbaum, 'Dewetting and pattern formation in thin polymer films as investigated in real and reciprocal space,' J. Phys.-Condes. Matter 15 , R1549-R1582 (2003).\n - [7] R. Seemann, S. Herminghaus, C. Neto, S. Schlagowski, D. Podzimek, R. Konrad, H. Mantz, and K. Jacobs, 'Dynamics and structure formation in thin polymer melt films,' J. Phys.-Condes. Matter 17 , S267-S290 (2005).\n - [8] U. Thiele, 'Structure formation in thin liquid films,' in S. Kalliadasis and U. Thiele, editors, 'Thin films of Soft Matter,' pages 25-93, Springer, Wien (2007).\n - [9] R. Xie, A. Karim, J. F. Douglas, C. C. Han, and R. A. Weiss, 'Spinodal dewetting of thin polymer films,' Phys. Rev. Lett. 81 , 1251-1254 (1998).\n - [10] R. Seemann, S. Herminghaus, and K. Jacobs, 'Dewetting patterns and molecular forces: A reconciliation,' Phys. Rev. Lett. 86 , 5534-5537 (2001).\n - [11] U. Thiele, M. G. Velarde, and K. Neuffer, 'Dewetting: Film rupture by nucleation in the spinodal regime,' Phys. Rev. Lett. 87 , 016104 (2001).\n - [12] M. Bestehorn and K. Neuffer, 'Surface patterns of laterally extended thin liquid films in three dimensions,' Phys. Rev. Lett. 87 , 046101 (2001).\n - [13] J. Becker, G. Grun, R. Seemann, H. Mantz, K. Jacobs, K. R. Mecke, and R. 
Blossey, 'Complex dewetting scenarios captured by thin-film models,' Nat. Mater. 2 , 59-63 (2003).\n - [14] C. Redon, F. Brochard-Wyart, and F. Rondelez, 'Dynamics of dewetting,' Phys. Rev. Lett. 66 , 715718 (1991).\n - [15] R. Seemann, S. Herminghaus, and K. Jacobs, 'Shape of a liquid front upon dewetting,' Phys. Rev. Lett. 87 , 196101 (2001).\n - [16] R. Fetzer, K. Jacobs, A. Munch, B. Wagner, and T. P. Witelski, 'New slip regimes and the shape of dewetting thin liquid films,' Phys. Rev. Lett. 95 , 127801 (2005).\n - [17] F. Brochard-Wyart and C. Redon, 'Dynamics of liquid rim instabilities,' Langmuir 8 , 2324-2329 (1992).\n - [18] G. Reiter and A. Sharma, 'Auto-optimization of dewetting rates by rim instabilities in slipping polymer films,' Phys. Rev. Lett. 87 , 166103 (2001).\n - [19] A. Munch and B. Wagner, 'Contact-line instability of dewetting thin films,' Physica D 209 , 178-190 (2005).", - "page_start": 25, - "page_end": 25, - "source_file": "1001.2669.pdf" - }, - { - "text": "- /SM590000 Thin-provisioned", - "page_start": 441, - "page_end": 441, - "source_file": "sg247938.pdf" - }, - { - "text": "\n\nFIG. 8: (Colour online) Space-time plots are given for (left) the film thickness h and (right) the nanoparticle layer height h p = hφ . The plot corresponds to the complete evolution resulting in the ring profile of Fig. 6(b). In both panels bright [dark] parts denote high [low] regions. The prominent central dark-bright border in the left panel indicates the change of the position of the contact line in time. Over time, four regimes can be distinguished: (i) fast motion before pinning, (ii) nearly no front motion during self-pinning, (iii) slow motion after depinning, and (iv) final evaporation from the center.\n\n\n\nshould also be investigated further in the simple case presented here.\n\n## IV. CONCLUSION\n\nWe have discussed recent work on pattern formation processes in films and drops of evaporating suspensions/solutions of polymers and particles. 
After reviewing experiments on suspensions of thiol-coated gold nanoparticles in toluene we have focused on the modelling of the transport and phase change processes involved. A theoretical approach to the modelling of the hydrodynamics on the mesoscale has been described as well as more microscopic models for the dynamics in the observed nanoscopic 'postcursor' film. In particular, we have introduced (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nThe kinetic Monte Carlo model and the dynamical density functional theory can both be used to investigate and understand the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor' film that remains behind the mesoscopic dewetting front. They are, however, not capable of describing the dynamical processes in a meso-", - "page_start": 22, - "page_end": 22, - "source_file": "1001.2669.pdf" - }, - { - "text": "## IV. DISCUSSION AND CONCLUSION\n\nA possible framework to analyze the results presented in the previous Section is suggested by Fig. 
5, where we can easily distinguish three significant regions: i ) high thickness, n /greaterorequalslant 16, where the films substantially display a bulk behaviour, with the single planes ordering temperature coinciding with the helical phase transition one; ii ) intermediate thickness, 6 ≤ n /lessorsimilar 15, where the temperature corresponding to the onset of in-plane order, T C ( n ), is still /similarequal T Ho N , but where the helical/fan arrangement stabilizes only below a finite temperature T N ( n ) < T C ( n ); iii ) low thickness,1 ≤ n ≤ 5, where T C ( n ) /lessorsimilar T Ho N but no fan phase is present at any temperature.\n\nThe observed behaviour in region iii ) can be reasonably attributed to the decreasing relevance of the contribution to the total energy of the system coming from the competitive interactions among NNN planes as the film thickness decreases; moreover, the thinness of the", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0510.pdf" - }, - { - "text": "## 11.1.15 Thin provisioned FlashCopy\n\nFlashCopy source and target volumes can be thin-provisioned.\n\n## Source or target thin-provisioned\n\nThe most common configuration is a fully allocated source and a thin-provisioned target. By using this configuration, the target uses a smaller amount of real storage than the source.", - "page_start": 482, - "page_end": 482, - "source_file": "sg247938.pdf" - }, - { - "text": "also shift the spinodal and binodal lines as compared to the locations of these lines in the phase diagram for the pure solvent [41]. As a consequence, the solute concentration influences the hole nucleation rate. More importantly, the solute particles may also destabilise the dewetting fronts. As a result, one may find strongly ramified structures in all three systems [23, 25, 40, 42]. A selection of images exhibiting some of the possible structures is displayed in Fig.1.\n\nFor volatile solvents, the contact lines retract even for wetting fluids. 
It has been found that such evaporatively receding contact lines may deposit very regular line or ring patterns parallel to the moving contact line [24, 43]. The deposition of a single ring of colloids from a evaporating drop of colloidal suspension is well known as the 'coffee stain effect' [44]. Detailed investigations reveal the emergence of rich structures including multiple irregular rings, networks, regular droplet patterns, sawtooth patterns, Sierpinski carpets, and - in the case of DNA - liquid crystalline structures [22, 30, 45-49]. The deposition of regularly spaced straight lines orthogonal to the moving contact line has also been reported [50]. Droplet patterns may as well be created employing solvent-induced dewetting of glassy polymer layers below the glass transition temperature [51-53].\n\nNote that the dewetting of pure volatile liquids has also been studied experimentally [54] and theoretically [55-58]. In this case, different contact line instabilities have been observed for evaporating liquid drops [59, 60].\n\nIn the present article we review and preview the experiments and in particular the various modelling approaches for dewetting suspensions of (nano-)particles in volatile partially wetting solvents. After reviewing the basic experimental results in Section II, we discuss in Section III several theoretical approaches. In particular, we present a kinetic Monte Carlo model in Section III A, a dynamic density functional theory in Section III B, and a thin film evolution equation in Section III C. Finally, we conclude in Section IV by discussing advantages and shortcomings of the individual approaches and future challenges to all of them.\n\n## II. EXPERIMENT WITH NANOPARTICLE SOLUTIONS\n\nWe focus on experiments that use monodisperse colloidal suspensions of thiol-passivated gold nanoparticles in toluene [33, 34, 37-40, 61]. The gold core of 2 - 3 nm diameter is coated by a layer of alkyl-thiol molecules. 
The length of the carbon backbone of the thiol used in the experiments ranges from 6 to 12 carbon atoms ( C 6 to C 12 ) [40]. By varying the chain length, one can control", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2669.pdf" - }, - { - "text": "- /SM590000 Thin and Deduplicated", - "page_start": 442, - "page_end": 442, - "source_file": "sg247938.pdf" - }, - { - "text": "fast evaporation [104, 105]. These complex experimental systems all represent systems of high practical interest that the theories presented here are not (yet) able to describe. Such experiments do, however, provide a strong motivation for further work to extend the theories presented here, as well as to develop new approaches.\n\nLet us finally mention that several topics were entirely excluded from our discussion here. First, we focused on a limited range of descriptions and did, for instance, not mention lattice Boltzmann, molecular dynamics or dissipative particle dynamics approaches that may also be employed to describe fluid suspensions [106-109]. Second, we have only discussed spatially homogeneous substrates. Patterned substrates are widely used in dewetting experiments [38, 110-112]. Theoretical descriptions are well developed for the dewetting of films of pure non-volatile liquids on such substrates [68, 113-119]. However, in the case of volatile liquids on heterogeneous substrates, much less work has been done. A third topic that we did not touch upon are possible continuum thin film approaches to demixing dewetting suspensions. We believe it is feasible to extend the diffuse interface theories such as model-H [120] to include the influence of evaporation in dewetting nanoparticle suspensions. For instance, such models have already been adapted to describe demixing free surface films of polymer blends [121-123].\n\n## Acknowledgments\n\nAJA and MJR gratefully acknowledge RCUK and EPSRC, respectively, for financial support. 
We acknowledge support by the European Union via the FP6 and FP7 Marie Curie schemes [Grants MRTN-CT-2004005728 (PATTERNS) and PITN-GA-2008-214919 (MULTIFLOW)].\n\n- [2] G. Reiter, 'Mobility of polymers in films thinner than their unperturbed size,' Europhys. Lett. 23 , 579-584 (1993).\n- [3] A. Sharma and G. Reiter, 'Instability of thin polymer films on coated substrates: Rupture, dewetting and drop formation,' J. Colloid Interface Sci. 178 , 383-399 (1996).\n- [4] P.-G. de Gennes, 'Wetting: Statics and dynamics,' Rev. Mod. Phys. 57 , 827-863 (1985).", - "page_start": 24, - "page_end": 24, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [65] J. P. Burelbach, S. G. Bankoff, and S. H. Davis, 'Nonlinear stability of evaporating/condensing liquid films,' J. Fluid Mech. 195 , 463-494 (1988).\n - [66] A. Oron and S. G. Bankoff, 'Dewetting of a heated surface by an evaporating liquid film under conjoining/disjoining pressures,' J. Colloid Interface Sci. 218 , 152-166 (1999).\n - [67] L. W. Schwartz, R. V. Roy, R. R. Eley, and S. Petrash, 'Dewetting patterns in a drying liquid film,' J. Colloid Interface Sci. 214 , 363-374 (2001).\n - [68] K. Kargupta, R. Konnur, and A. Sharma, 'Spontaneous dewetting and ordered patterns in evaporating thin liquid films on homogeneous and heterogeneous substrates,' Langmuir 17 , 1294-1305 (2001).\n - [69] M. Bestehorn and D. Merkt, 'Regular surface patterns on Rayleigh-Taylor unstable evaporating films heated from below,' Phys. Rev. Lett. 97 , 127802 (2006).\n - [70] G. F. Teletzke, H. T. Davis, and L. E. Scriven, 'Wetting hydrodynamics,' Rev. Phys. Appl. 23 , 9891007 (1988).\n - [71] J. N. Israelachvili, Intermolecular and Surface Forces , Academic Press, London (1992).\n - [72] V. S. Mitlin, 'Dewetting of solid surface: Analogy with spinodal decomposition,' J. Colloid Interface Sci. 156 , 491-497 (1993).\n - [73] L. M. Pismen and Y. 
Pomeau, 'Disjoining potential and spreading of thin liquid layers in the diffuse interface model coupled to hydrodynamics,' Phys. Rev. E 62 , 2480-2492 (2000).\n - [74] L. Onsager, 'Crystal statistics. I. A two-dimensional model with an order-disorder transition,' Phys. Rev. 65 , 117-149 (1944).\n - [75] G. Reiter, 'Unstable thin polymer films: Rupture and dewetting processes,' Langmuir 9 , 1344-1351 (1993).\n - [76] C. G. Sztrum, O. Hod, and E. Rabani, 'Self-assembly of nanoparticles in three-dimensions: Formation of stalagmites,' J. Phys. Chem. B 109 , 6741-6747 (2005).\n - [77] G. Yosef and E. Rabani, 'Self-assembly of nanoparticles into rings: A lattice-gas model,' J. Phys. Chem. B 110 , 20965-20972 (2006).\n - [78] J. F. Gouyet, M. Plapp, W. Dieterich, and P. Maass, 'Description of far-from-equilibrium processes by mean-field lattice gas models,' Adv. Phys. 52 , 523-638 (2003).\n - [79] U. M. B. Marconi and P. Tarazona, 'Dynamic density functional theory of fluids,' J. Chem. Phys. 110 , 8032-8044 (1999).\n - [80] U. M. B. Marconi and P. Tarazona, 'Dynamic density functional theory of fluids,' J. Phys.-Condes. Matter 12 , A413-A418 (2000).", - "page_start": 29, - "page_end": 29, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0510.pdf", - "query": "Where are located the magnetic ions in the lattice of the studied layers ?", - "target_page": 2, - "target_passage": "the magnetic ions are located on the sites of a body-centered tetragonal (BCT) lattice", - "chunk_present": { - "presence": true, - "index": 9 - } - }, - "top_chunk": [ - { - "text": "Here, we demonstrate an antiferromagnetic coupling and exchange bias in Fe/(Ga,Mn)As bilayer films, by combining element-specific XMCD measurements and bulk-sensitive superconducting quantum interference device (SQUID) magnetometry. 
As with previous studies of FM metal/FM semiconductor bilayers 4,5 (and in contrast to AFM coupled FM metal/FM metal exchange bias structures 10,11 ) the layers are in direct contact without a non-magnetic spacer in between. We distinguish interface and bulk (Ga,Mn)As layers that are respectively strongly and weakly antiferromagnetically coupled to the Fe overlayer. In agreement with Ref. 7 , the interface layer remains polarized at room temperature.\n\nThe Fe and (Ga,Mn)As layers of the present study were both grown by molecular beam epitaxy in the same ultra-high vacuum system, in order to ensure a clean interface between them. The (Ga,Mn)As layer of thickness 10 to 50 nm was deposited on a GaAs(001) substrate at a temperature of 260 · C, using previously established methods 3,8 . A low Mn concentration of x ≈ 0 . 03 was chosen in order to avoid the formation of compensating Mn interstitials. The substrate temperature was then reduced to ∼ 0 · C, before depositing a 2 nm Fe layer, plus a 2 nm Al capping layer. In-situ reflection high energy electron diffraction and ex-situ x-ray reflectivity and diffraction measurements confirmed that the layers are single-crystalline with sub-nm interface roughness. SQUID magnetometry measurements were performed using a Quantum Design Magnetic Property Measurement System. Mn and Fe L 2 , 3 x-ray absorption and XMCD", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "## Exchange bias of a ferromagnetic semiconductor by a ferromagnetic metal\n\nK. Olejnik, 1, 2 P. Wadley, 3 J. Haigh, 3 K. W. Edmonds, 3 R. P. Campion, 3 A. W. Rushforth, 3 B. L. Gallagher, 3 C. T. Foxon, 3 T. Jungwirth, 2, 3 J. Wunderlich, 1, 2 S. S. Dhesi, 4 S. Cavill, 4 G. van der Laan, 4 and E. 
Arenholz 5\n\n1 Hitachi Cambridge Laboratory, Cambridge CB3 0HE, United Kingdom\n\n2 Institute of Physics ASCR, v.v.i., Cukrovarnicka 10, 16253 Praha 6, Czech Republic\n\n3 School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, United Kingdom 4 Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA\n\n5 (Dated: August 24, 2018)\n\nWe demonstrate an exchange bias in (Ga,Mn)As induced by antiferromagnetic coupling to a thin overlayer of Fe. Bias fields of up to 240 Oe are observed. Using element-specific x-ray magnetic circular dichroism measurements, we distinguish a strongly exchange coupled (Ga,Mn)As interface layer in addition to the biassed bulk of the (Ga,Mn)As film. The interface layer remains polarized at room temperature.\n\nPACS numbers: 75.70.Cn, 75.50.Pp, 75.50.Bb\n\nFerromagnetic (FM) semiconductors offer the prospect of combining high-density storage and gate-controlled logic in a single material. The realization of spin-valve devices from FM semiconductors requires the controlled switching of magnetization in adjacent layers between antiferromagnetic (AFM) and FM configurations. This has motivated several theoretical investigations of interlayer coupling in all-semiconductor devices 1 , and AFM coupling has recently been demonstrated in (Ga,Mn)As multilayers separated by p -type non-magnetic spacers 2 . However, the Curie temperature T C of (Ga,Mn)As is currently limited to 185 K in single layers 3 , and is typically much lower for layers embedded within a heterostructure 2 , which is an obstacle to the practical implementation of semiconductor spintronics.\n\nThe development of FM metal/FM semiconductor heterostructures has the potential to bring together the benefits of metal and semiconductor based spintronics, offering access to new functionalities and physical phenomena. 
Recent studies of MnAs/(Ga,Mn)As and NiFe/(Ga,Mn)As bilayer films have shown FM interlayer coupling and independent magnetization behavior, respectively 4,5 . Of particular interest is the Fe/(Ga,Mn)As system, since the growth of epitaxial Fe/GaAs(001) films is well-established 6 . Remarkably, a recent x-ray magnetic circular dichroism (XMCD) study has shown that Fe may induce a proximity polarization in the near-surface region of (Ga,Mn)As, antiparallel to the Fe moment and persisting even above room temperature 7 . Devices incorporating Fe/(Ga,Mn)As therefore offer the prospect of obtaining non-volatile room temperature spin-polarization in a semiconductor.\n\nUntil now, no information has been revealed about the coupling of Fe to (Ga,Mn)As layers away from the nearsurface region. At the surface, the (Ga,Mn)As layer may be highly non-stoichiometric and Mn-rich, due to its nonequilibrium nature 8,9 . Previously, Fe/(Ga,Mn)As layers were produced by a process including exposure to air followed by sputtering and annealing prior to Fe deposition,\n\nwhich may further disrupt the interface order. The origin of the interface magnetism then had to be inferred by comparison to a series of reference samples 7 . Demonstration of coupling between the bulk of the layers, i.e. , an exchange bias effect, would provide direct evidence of the interface magnetic order. Moreover, such coupling would offer new means of manipulating the FM semiconductor spin state and utilizing the proximity polarization effect in a spintronic device.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "measurements were performed on beamline I06 at the Diamond Light Source, and on beamline 4.0.2 at the Advanced Light Source. 
Total-electron yield (TEY) and fluorescence yield (FY) were monitored simultaneously using the sample drain current and the photocurrent of a diode mounted at 90 · to the incident beam, respectively.\n\nSQUID magnetometry measurements were first performed on control Fe/GaAs(001) and (Ga,Mn)As/GaAs(001) samples, grown under the same conditions as the bilayers, to determine the magnetic anisotropies of the individual layers and the Curie temperature of the (Ga,Mn)As layer. The Fe film has a uniaxial magnetic anisotropy with easy axis along the [110] orientation, similar to previous studies 6 . For the (Ga,Mn)As control sample, there is a competition between cubic and uniaxial magnetic anisotropies, with the former dominant at low temperatures and favoring easy axes along the in-plane 〈 100 〉 orientations, and the latter dominant close to T C ( ∼ 35 K) giving an easy axis along the [1 ¯ 10] orientation. Figure 1 shows [110] magnetization versus temperature curves and low temperature hysteresis loops for a bilayer film containing a 20 nm thick (Ga,Mn)As layer. The total remnant moment of the bilayer film decreases on cooling under zero magnetic field below the T C of the (Ga,Mn)As, indicating that this layer aligns antiparallel to the Fe magnetization at zero field. The hysteresis curve shows a two-step magnetization reversal, indicating different behavior of the Fe and (Ga,Mn)As layers, with the smaller loop attributed to the dilute moment (Ga,Mn)As film. The minor hysteresis loop shown in Fig. 1 clearly shows a shift from zero field by a bias field H E , indicating that the Fe layer induces an exchange bias in the magnetic semiconductor. The shape and size of the minor loop is in agreement with the hysteresis loop for the control (Ga,Mn)As sample, also shown in Fig. 1. 
This strongly indicates that the exchange bias affects the whole of the (Ga,Mn)As layer in the bilayer sample.\n\nSimilar behavior is observed for bilayer samples containing a 10 nm or 50 nm (Ga,Mn)As layer, with a bias field which is approximately inversely proportional to the thickness d of the ferromagnetic semiconductor layer (Fig. 1, inset). This 1/ d dependence of H E was found previously for MnAs/(Ga,Mn)As bilayers 4 , and is generally observed in exchanged-biased thin films 12 . From this dependence it is possible to describe the exchange bias in terms of an interface energy per unit area, ∆ E = M FS H E d = 0 . 003 erg/cm 2 . This value is rather small compared to typical exchange bias systems 12 , reflecting the low moment density M FS of the diluted FM semiconductor layer. However, the bias field for a given (Ga,Mn)As thickness is larger than is observed for MnO/(Ga,Mn)As structures 13 , while the reproducibility and flexibility of the present structures is much higher due to the single-crystalline ferromagnetic nature of the Fe layer.\n\nTo confirm the presence of AFM interlayer coupling, we performed XMCD measurements at the Mn and Fe", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2449.pdf" - }, - { - "text": "To confirm the presence of AFM interlayer coupling, we performed XMCD measurements at the Mn and Fe\n\nL 2 , 3 absorption edges in order to determine the magnetic response of the individual elements. In L 2 , 3 XMCD, electrons are excited from a 2 p core level to the unoccupied 3 d valence states of the element of interest by circularly polarized x-rays at the resonance energies of the transitions. The difference in absorption for opposite polarizations gives a direct and element-specific measurement of the projection of the 3 d magnetic moment along the xray polarization vector. 
The absorption cross-section is conventionally obtained by measuring the decay products - either fluorescent x-rays or electrons - of the photoexcited core hole. The type of decay product measured determines the probing depth of the technique. For Mn L 2 , 3 absorption, the probing depths for FY and TEY detection are λ FY ≈ 100 nm and λ TEY ≈ 3 nm. In the current experiment, the Mn XMCD measured using FY and TEY are thus sensitive to the bulk of the (Ga,Mn)As film and the near-interface layers, respectively.\n\nFigure 2(a)-(c) shows the magnetic field dependence of XMCD asymmetry, defined as ( I l -I r ) / ( I l + I r ) where I l ( r ) is the absorption for left- (right-) circularly polarized x-rays. This is measured at the Fe and Mn L 3 absorption peaks for a Fe(2 nm)/(Ga,Mn)As(10 nm) sample at 2 K. The external field is applied along the photon incidence direction, which is at 70 · to the surface normal with an in-plane projection along the [110] axis. The XMCD data show that the Fe film displays a square hysteresis loop with a single magnetization switch, as expected for a monocrystalline Fe film with strong uniaxial magnetic anisotropy. The Mn XMCD shows a more complicated loop due to the effect of the interlayer coupling. The projected Mn moment aligns antiparallel to the Fe moment at remanence, and undergoes a magnetization reversal of opposite sign to the Fe. With further increase of the external magnetic field, the Mn moment gradually rotates away from antiparallel alignment with the Fe layer, and into the field direction. Qualitatively similar behavior is observed for the Fe(2 nm)/(Ga,Mn)As(20 nm) sample: the (Ga,Mn)As layer is aligned antiparallel to the Fe layer at zero field, although the bias field is lower by approximately a factor of two.\n\nClear differences are observed between the Mn XMCD hysteresis loops obtained using TEY and FY detection modes. 
For FY the magnitude of the XMCD is similar (but of opposite sign) at remanence and at high magnetic fields, whereas for TEY at remanence it is approximately a factor of two larger than at 1000 Oe. The Mn L 2 , 3 XMCD spectra recorded at remanence and at 1000 Oe, shown in Fig. 3, confirm this result. At remanence the FY and TEY detected XMCD have similar magnitudes. However, under a large external field the XMCD is substantially smaller in TEY than in FY, confirming that the net magnetization of the Mn ions near the interface is significantly less than in the bulk of the (Ga,Mn)As film. This is the case even up to the highest field applied (20 kOe). By applying the XMCD sum rules 14 to the TEY data, and by comparing the spectra to previous measurements on well-characterized (Ga,Mn)As", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2449.pdf" - }, - { - "text": "FIG. 1: (colors online) (a) : body-centered tetragonal (BCT) lattice with J 0 in-plane coupling constant, and out-of-plane J 1 , and J 2 competing interactions.\n\n\n\nbe achieved with different number of interacting layers: notably, nearest and next-nearest layers competitive interactions are enough to get a helical structure with a whatever pitch wavevector. Such observation gives us a possible way to solve the conundrum previously emerged, as we have the possibility of varying the range of interactions without modifying the helical pitch, thus decoupling the two relevant length scales along the film growth direction, and making accessible a range of n of the order of, or smaller than, the helical pitch, but still large enough that a substantial number of layers can behave as 'bulk' layers. 
Therefore, while in the previous papers we have studied the properties of ultrathin magnetic films of Ho assuming a model with six interlayer exchange interactions, here we investigate by MC simulations the properties of the same system by making use of the simplest model Hamiltonian able to describe the onset of a helical magnetic order in Holmium, i.e. we consider only two inter-layer coupling constants, as previously done in Ref. 11.\n\nThe paper is organized as follows: In Sec. II the model Hamiltonian will be defined, and the MC techniques, and all the thermodynamic quantities relevant for this study, will be introduced. In Sec. III the results obtained for different thicknesses will be presented, both in the matter of the critical properties of the model and of the magnetic ordered structures observed. Finally, in Sec. IV we shall discuss such results, drawing also some conclusions.\n\n## II. MODEL HAMILTONIAN AND MONTE CARLO OBSERVABLES\n\nThe model Hamiltonian we use in our simulations is the minimal one able to describe helimagnetic structures:\n\nH = -  J 0 ∑ 〈 ij 〉 /vector S i · /vector S j + J 1 ∑ 〈 ik 〉 /vector S i · /vector S k + J 2 ∑ 〈 il 〉 /vector S i · /vector S l   . (1)", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0510.pdf" - }, - { - "text": "teractions by including six different exchange constants along the c crystallographic axis, and gives a helix pitch wave-vector Q z such that Q z c ' /similarequal 30 · , where c ' = c/ 2 is the distance between nearest neighboring spin layers parallel to the ab crystallographic planes, henceforth denoted also as x -y planes, while z will be taken parallel to c . For n > 16, n being the number of spin layers in the film, a correct bulk limit is reached, while for lower n the film properties are clearly affected by the strong competition among the helical pitch and the surface effects, which involve the majority of the spin layers. In the thickness range n = 9 -16, i.e. 
right for thickness values comparable with the helical pitch, three different magnetic phases emerged, with the high-temperature, disordered, paramagnetic phase and the low-temperature, long-range ordered one separated by an intriguing, intermediatetemperature block phase, where outer ordered layers coexist with some inner disordered ones, the phase transition of the latter eventually displaying the signatures of a Kosterlitz-Thouless one. Finally, for n ≤ 7 the film collapses once and for all to a quasi-collinear order.\n\nThe complex phase diagram unveiled by such MC simulations awaken however a further intriguing question: to what extent the observed behavior may be considered a simple consequence of the competition between helical order and surface effects? I.e., is it just a matter of having such a competition or does the range of interactions also play a relevant role? Indeed, when the range of the interactions is large enough we have a greater number of planes which can be thought of as 'surface planes', i.e. for which the number of interacting neighbors are significantly reduced with respect to the bulk layers; therefore, we expect that the larger the interaction range, the stronger should be the surface effects. But, at the same time, the same modulation of the magnetic order can", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0510.pdf" - }, - { - "text": "## Interplay among helical order, surface effects and range of interacting layers in ultrathin films.\n\nF. Cinti (1 , 2 , 3) , A. Rettori (2 , 3) , and A. Cuccoli (2)\n\n(1) Department of Physics, University of Alberta, Edmonton, Alberta, Canada T6G 2J1\n\n(2) CNISM and Department of Physics, University of Florence, 50019 Sesto Fiorentino (FI), Italy. and\n\n(3) CNR-INFM S 3 National Research Center, I-41100 Modena, Italy\n\n(Dated: June 8, 2022)\n\nThe properties of helical thin films have been thoroughly investigated by classical Monte Carlo simulations. 
The employed model assumes classical planar spins in a body-centered tetragonal lattice, where the helical arrangement along the film growth direction has been modeled by nearest neighbor and next-nearest neighbor competing interactions, the minimal requirement to get helical order. We obtain that, while the in-plane transition temperatures remain essentially unchanged with respect to the bulk ones, the helical/fan arrangement is stabilized at more and more low temperature when the film thickness, n , decreases; in the ordered phase, increasing the temperature, a softening of the helix pitch wave-vector is also observed. Moreover, we show also that the simulation data around both transition temperatures lead us to exclude the presence of a first order transition for all analyzed sizes. Finally, by comparing the results of the present work with those obtained for other models previously adopted in literature, we can get a deeper insight about the entwined role played by the number (range) of interlayer interactions and surface effects in non-collinear thin films.\n\nPACS numbers: 64.60.an,64.60.De,75.10.Hk,75.40.Cx,75.70.Ak.\n\n## I. INTRODUCTION\n\nThe study of low dimensional frustrated magnetic systems 1 still raises great interest, both in consequence of theoretical aspects, related to their peculiar critical properties 2 , and in view of possible technological applications 3 . Indeed, beside conventional ferromagnetic or antiferromagnetic phase transitions, in many new materials other nontrivial and unconventional forms of ordering have been observed 4,5 . A quantity of particular interest in this context is the spin chirality, an order parameter which turned out to be extremely relevant in, e.g., magnetoelectric materials 6 , itinerant MnSi 7 , binary compounds as FeGe 8 , glass transition of spins 9 , and XY helimagnets, as Holmium, Terbium or Dysprosium 10 . 
In the latter case, a new universality class was predicted because a Z 2 × SO (2) symmetry is spontaneously broken in the ordered phase 2 : In fact, when dealing with such systems, in addition to the SO (2) symmetry of the spin degrees of freedom /vector S i , one has to consider also the Z 2 symmetry of the spin chirality κ ij ∝ [ /vector S i × /vector S j ] z .\n\nFor these rare-earth elements, the development of new and sophisticated experimental methods 11 has allowed to obtain ultra-thin films where the non-collinear modulation is comparable with the film thickness. Under such conditions the lack of translational invariance due to the presence of surfaces results decisive in order to observe a drastic change of the magnetic structures 12 . Recent experimental data on ultra-thin Holmium films 13 have been lately interpreted and discussed 14,15 on the basis of detailed classical Monte Carlo (MC) simulations of a spin Hamiltonian, which is believed to give a realistic modeling of bulk Holmium. Such Hamiltonian, proposed by Bohr et al. 16 , allows for competitive middle-range in-", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0510.pdf" - }, - { - "text": "The MC simulations outcomes for n = 16 we just presented appear quite different with respect to those obtained at the same thickness for the model with six coupling constants along the z direction 14,15 . Indeed, for the J 1 -J 2 model here investigated, we observe that all layers order at the same temperature, and we do not find any hint of the block-phase, with inner disordered planes intercalated to antiparallel quasi -FM four-layer blocks, previously observed; sample MC runs we made using the same hcp lattice employed in Refs. 14,15 shows that the presence or absence of the block phase is not related to the lattice geometry, but it is a consequence of the interaction range only.\n\nWe now move to describe and discuss MC simulation data for thinner samples. 
A graphical synthesis of the results obtained for n = 8 in reported in Fig. 4a-d. The specific heat c v , shown in Figs. 4a, reveals very small finite-size effects, which, however, cannot be unambiguously detected for the largest lattice size ( L = 64), as they fall comfortably within the error range. Surprisingly, the specific heat maximum is located close to the bulk transition temperature as found for n = 16, and", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0510.pdf" - }, - { - "text": "change in CNT resistivity may then be obtained from the calculated coverages and single impurity conductances.\n\nWe find that oxidation of the active metal site passivates the sensor in the case of doping by Ti, V, Cr, and Mn under standard conditions (room temperature and 1 bar of pressure). Among the remaining metals, we identify Ni as is the most promising candidate for CO detection. For this system the change in resistance per active site is generally significant ( > 1 Ω ) for small changes in CO concentration in the relevant range of around 0.1-10 ppm. Our approach is quite general and is directly applicable to other nanostructures than CNTs, other functionalizations than metal doping, and other backgrounds than atmospheric air.\n\nAll total energy calculations and structure optimizations have been performed with the real-space density functional theory (DFT) code GPAW [22] which is based on the projector augmented wave method. We use a grid spacing of 0.2 ˚ A for representing the density and wave functions and the PBE exchange correlation functional [23]. Transport calculations for the optimized structures have been performed using the nonequilibrium Green's function method [24] with an electronic Hamiltonian obtained from the SIESTA code [25] in a double zeta polarized (DZP) basis set. 
Spin polarization has been taken into account in all calculations.\n\nMetallic doping of a (6,6) CNT has been modeled in a supercell containing six repeated minimal unit cells along the CNT axis (dimensions: 15 ˚ A × 15 ˚ A × 14.622 ˚ A). For this size of supercell a Γ -point sampling of the Brillouin zone was found to be sufficient. The formation energy for creating a vacancy (VC) occupied by a transition metal atom (M) was calculated using the relation\n\nE form [ M @ VC ] = E [ M @ VC ] + nE [ C ] -E [ M@NT ] (1)\n\nwhere E [M@VC] is the total energy of a transition metal atom occupying a vacancy in the nanotube, n is the number of carbon atoms removed to form the vacancy, E [C] is the energy per carbon atom in a pristine nanotube, and E [M@NT]", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2538.pdf" - }, - { - "text": "H = -  J 0 ∑ 〈 ij 〉 /vector S i · /vector S j + J 1 ∑ 〈 ik 〉 /vector S i · /vector S k + J 2 ∑ 〈 il 〉 /vector S i · /vector S l   . (1)\n\n/vector S i are classical planar unit vectors representing the direction of the total angular momentum of the magnetic ions, whose magnitude √ j ( j +1) ( j = 8 for Holmium ions) is already encompassed within the definition of the interaction constants J 0 , 1 , 2 . As sketched in Fig. 1, the magnetic ions are located on the sites of a body-centered tetragonal (BCT) lattice; the first sum appearing in the Hamiltonian describes the in-plane ( xy ) nearest neighbor (NN) interaction, which is taken ferromagnetic (FM), with exchange strength J 0 > 0; the second sum represents the coupling, of exchange strength J 1 , between spins belonging to nearest neighbor (NN) planes along the z -direction (which we will assume to coincide with the film growth direction); finally, the third sum takes into account the interaction, of exchange strength J 2 , between spins lying on next-nearest neighbor (NNN) planes along z . 
In order to have frustration, giving rise to noncollinear order along z in the bulk, NN interaction J 1 can be taken both ferro- or antiferromagnetic, but NNN coupling J 2 has necessarily to be antiferromagnetic, and the condition | J 2 | > | J 1 | / 4 must be fulfilled. Such simplified Hamiltonian was already employed to simulate helical ordering in bulk systems by Diep 1,17 and Loison 18 . In the bulk limit, the state of minimal energy of a system described by Eq.(1) corresponds to a helical arrangement of spins. The ground state energy per spin is equal to e g ( Q z ) = [ -4 J 0 -2 J 1 (4 cos ( Q z c ' ) + δ cos (2 Q z c ' ))] where c ' is the distance between NN layers, δ = J 2 J 1 , and Q z c ' = arccos ( -1 δ ) is the angle between spins lying on adjacent planes along the z -direction. The observed helical arrangement in bulk holmium corresponds to Q z c ' /similarequal 30 . 5 · 10 : such value can be obtained from the formula above with the set of coupling constants J 0 =67.2K, J 1 =20.9K, and J 2 = -24.2 K, that we have employed in our simulations. The given values for the exchange constants are the same already used by Weschke et al. in Ref. 13 to interpret experimental data on Holmium films on the basis of a J 1 -J 2 model, after a proper scaling by the numbers of NN and NNN on neighboring layers of a BCT lattice.\n\nIn the following we will denote with n the film thickness, i.e. the number of spin layers along the z direction, and with L × L the number of spins in each layer (i.e., L is the lattice size along both the x and y directions). In our simulations thickness values from 1 to 24 were considered, while the range of lateral size L was from 8 to 64. 
Periodic boundary conditions were applied along x and y , while free boundaries were obviously taken along the film growth direction z .\n\nThermal equilibrium was attained by the usual Metropolis algorithm 19 , supplemented by the overrelaxed technique 20 in order to speed-up the sampling of the spin configuration space: a typical 'Monte Carlo step' was composed by four Metropolis and four-five over-relaxed moves per particle. Such judicious mix of moves is able both to get faster the thermal equilibrium and to minimize the correlation 'time' between successive samples, i.e. the undesired effects due to lack of in-", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0510.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0510.pdf", - "query": "What is the minimum number of spin layers in a film before a correct bulk is reached ?", - "target_page": 1, - "target_passage": "For n > 16, n being the number of spin layers in the film, a correct bulk limit is reached", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "teractions by including six different exchange constants along the c crystallographic axis, and gives a helix pitch wave-vector Q z such that Q z c ' /similarequal 30 · , where c ' = c/ 2 is the distance between nearest neighboring spin layers parallel to the ab crystallographic planes, henceforth denoted also as x -y planes, while z will be taken parallel to c . For n > 16, n being the number of spin layers in the film, a correct bulk limit is reached, while for lower n the film properties are clearly affected by the strong competition among the helical pitch and the surface effects, which involve the majority of the spin layers. In the thickness range n = 9 -16, i.e. 
right for thickness values comparable with the helical pitch, three different magnetic phases emerged, with the high-temperature, disordered, paramagnetic phase and the low-temperature, long-range ordered one separated by an intriguing, intermediatetemperature block phase, where outer ordered layers coexist with some inner disordered ones, the phase transition of the latter eventually displaying the signatures of a Kosterlitz-Thouless one. Finally, for n ≤ 7 the film collapses once and for all to a quasi-collinear order.\n\nThe complex phase diagram unveiled by such MC simulations awaken however a further intriguing question: to what extent the observed behavior may be considered a simple consequence of the competition between helical order and surface effects? I.e., is it just a matter of having such a competition or does the range of interactions also play a relevant role? Indeed, when the range of the interactions is large enough we have a greater number of planes which can be thought of as 'surface planes', i.e. for which the number of interacting neighbors are significantly reduced with respect to the bulk layers; therefore, we expect that the larger the interaction range, the stronger should be the surface effects. But, at the same time, the same modulation of the magnetic order can", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0510.pdf" - }, - { - "text": "chirality interactions in cold atom optical lattices has been proposed 38 .\n\nOur model (8) is achieved at second order of the perturbation series. Higher order terms become truncation errors but may be controlled by small parameters λ x,y,z /J cluster ∼ √ | J x,y,z | /J cluster .\n\n## V. CONCLUSIONS.\n\nWe constructed the exactly solvable Kitaev honeycomb model 1 as the exact low energy effective Hamiltonian of a spin-1/2 model [equations (8) or (9)] with spin-rotation and time reversal symmetry. 
The spin in Kitaev model is represented as the pseudo-spin in the two-fold degenerate spin singlet subspace of a cluster of four antiferromagnetically coupled spin-1/2 moments. The physical spin model is a honeycomb lattice of such four-spin clusters, with certain inter-cluster interactions. The machinery for the exact mapping to pseudo-spin Hamiltonian was developed (see e.g. TABLE I), which is quite general and can be used to construct other interesting (exactly solvable) spin-1/2 models from spin rotation invariant systems.\n\nIn this construction the pseudo-spin correlations in the Kitaev model will be mapped to dimer or spin-chirality correlations in the physical spin system. The corresponding picture of the fractionalized Majorana fermion excitations and Ising vortices still remain to be clarified.\n\nThis exact construction contains high order physical spin interactions, which is undesirable for practical implementation. We described two possible approaches to reduce this problem: generating the high order spin interactions by perturbative expansion of the coupling to optical phonon, or the magnetic coupling between clusters. This perturbative construction will introduce truncation error of perturbation series, which may be controlled by small expansion parameters. Whether these constructions can be experimentally engineered is however beyond the scope of this study. It is conceivable that other perturbative expansion can also generate these high order spin interactions, but this possibility will be left for future works.\n\n## Acknowledgments\n\nThe author thanks Ashvin Vishwanath, Yong-Baek Kim and Arun Paramekanti for inspiring discussions, and Todadri Senthil for critical comments. The author is supported by the MIT Pappalardo Fellowship in Physics.\n\n## Appendix A: Coupling between Distortions of a Tetrahedron and the Pseudo-spins\n\nIn this Appendix we reproduce from Ref. 35 the couplings of all tetrahedron distortion modes to the spin\n\nsystem. 
And convert them to pseudo-spin notation in the physical spin singlet sector.\n\nConsider a general small distortion of the tetrahedron, the spin Hamiltonian becomes\n\nH cluster , SL = ( J cluster / 2)( ∑ /lscript S /lscript ) 2 + J ' ∑ /lscript J x τ x j τ x k -∑ y -links J y τ y j τ y k -∑ z -links J z τ z j τ z k (1)\n\nwhere τ x,y,z are Pauli matrices, and x, y, z -links are defined in FIG. 1. It was shown by Kitaev 1 that this spin1/2 model can be mapped to a model with one Majorana fermion per site coupled to Ising gauge fields on the links. And as the Ising gauge flux has no fluctuation, the model can be regarded as, under each gauge flux configuration, a free Majorana fermion problem. The ground state is achieved in the sector of zero gauge flux through each hexagon. The Majorana fermions in this sector have Dirac-like gapless dispersion resembling that of graphene, as long as | J x | , | J y | , and | J z | satisfy the triangular relation, sum of any two of them is greater than the third one 1 . It was further proposed by Kitaev 1 that opening of fermion gap by magnetic field can give the Ising vortices non-Abelian anyonic statistics, because the Ising vortex will carry a zero-energy Majorana mode, although magnetic field destroys the exact solvability.\n\nGreat efforts have been invested to better understand the properties of the Kitaev model. For example, several groups have pointed out that the fractionalized Majorana fermion excitations may be understood from the more familiar Jordan-Wigner transformation of 1D spin systems 2,3 . The analogy between the non-Abelian Ising vortices and vortices in p + ip superconductors has been raised in serveral works 4-7 . Exact diagonalization has been used to study the Kitaev model on small lattices 8 . And perturbative expansion methods have been developed to study the gapped phases of the Kitaev-type models 9 .", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0266.pdf" - }, - { - "text": "FIG. 
1: The honeycomb lattice for the Kitaev model. Filled and open circles indicate two sublattices. x, y, z label the links along three different directions used in (1).\n\n\n\nderived as well. There have been several proposals to open the fermion gap for the non-Abelian phase without spoiling exact solvability 4,6 . And many generalizations to other(even 3D) lattices have been developed in the last few years 10-16 . All these efforts have significantly enriched our knowledge of exactly solvable models and quantum phases of matter.\n\nHowever, in the original Kitaev model and its later generalizations in the form of spin models, spin rotation symmetry is explicitly broken. This makes them harder to realize in solid state systems. There are many proposals to realized the Kitaev model in more controllable situations, e.g. in cold atom optical lattices 17,18 , or in superconducting circuits 19 . But it is still desirable for theoretical curiosity and practical purposes to realize the Kitaev-type models in spin rotation invariant systems.\n\nIn this paper we realize the Kitaev honeycomb lattice model as the low energy Hamiltonian for a spin rotation invariant system. The trick is not to use the physical spin as the spin in the Kitaev model, instead the spin-1/2 in Kitaev model is from some emergent two-fold degenerate low energy states in the elementary unit of physical system. This type of idea has been explored recently by Jackeli and Khaliullin 20 , in which the spin-1/2 in the Kitaev model is the low energy Kramers doublet created by strong spin-orbit coupling of t 2 g orbitals. 
In the model presented below, the Hilbert space of spin-1/2 in the Kitaev model is actually the two dimensional spin singlet sector of four antiferromagnetically coupled spin-1/2 moments, and the role of spin-1/2 operators(Pauli matrices) in the Kitaev model is replaced by certain combinations of S j · S k [or the spin-chirality S j · ( S k × S /lscript )] between the four spins.\n\nOne major drawback of the model to be presented is that it contains high order spin interactions(involves up to six or eight spins), thus is still unnatural. However it opens the possibility to realize exotic (exactly solvable) models from spin-1/2 Hamiltonian with spin rotation invariant interactions. We will discuss two possible routes to reduce this artificialness through controlled perturbative expansions, by coupling to optical phonons or by magnetic couplings between the elementary units.\n\nThe outline of this paper is as follows. In Section II we will lay out the pseudo-spin-1/2 construction. In Sec-\n\nFIG. 2: Left: the physical spin lattice for the model (8). The dash circles are honeycomb lattice sites, each of which is actually a cluster of four physical spins. The dash straight lines are honeycomb lattice bonds, with their type x, y, z labeled. The interaction between clusters connected by x, y, z bonds are the J x,y,z terms in (8) or (9) respectively. Note this is not the 3-12 lattice used in Ref. 9,10 . Right: enlarged picture of the clusters with the four physical spins labeled as 1 , . . . , 4. Thick solid bonds within one cluster have large antiferromagnetic Heisenberg coupling J cluster .\n\n\n\ntion III the Kitaev model will be explicitly constructed using this formalism, and some properties of this construction will be discussed. In Section IV we will discuss two possible ways to generate the high order spin interactions involved in the construction of Section III by perturbative expansions. Conclusions and outlook will be summarized in Section V.\n\n## II. 
FORMULATION OF THE PSEUDO-SPIN-1/2 FROM FOUR-SPIN CLUSTER.", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0266.pdf" - }, - { - "text": "Another note to take is that it is not necessary to have such a highly symmetric cluster Hamiltonian (2). The mappings to pseudo-spin-1/2 should work as long as the ground states of the cluster Hamiltonian are the two-fold degenerate singlets. One generalization, which conforms the symmetry of the lattice in FIG. 2, is to have\n\nH cluster = ( J cluster / 2)( r · S 1 + S 2 + S 3 + S 4 ) 2 (11)\n\nwith J cluster > 0 and 0 < r < 3. However this is not convenient for later discussions and will not be used.\n\nWe briefly describe some of the properties of (8). Its low energy states are entirely in the space that each of the clusters is a physical spin singlet (called cluster singlet subspace hereafter). Therefore physical spin correlations are strictly confined within each cluster. The excitations carrying physical spin are gapped, and their dynamics are 'trivial' in the sense that they do not move from one cluster to another. But there are non-trivial low energy physical spin singlet excitations, described by the pseudospins defined above. The correlations of the pseudo-spins can be mapped to correlations of their corresponding physical spin observables (the inverse mappings are not unique, c.f. TABLE I). For example τ x,y correlations become certain dimer-dimer correlations, τ z correlation becomes chirality-chirality correlation, or four-dimer correlation. It will be interesting to see the corresponding picture of the exotic excitations in the Kitaev model, e.g. the Majorana fermion and the Ising vortex. However this will be deferred to future studies.\n\nIt is tempting to call this as an exactly solved spin liquid with spin gap ( ∼ J cluster ), an extremely short-range resonating valence bond(RVB) state, from a model with spin rotation and time reversal symmetry. 
However it should be noted that the unit cell of this model contains an even number of spin-1/2 moments (so does the original Kitaev model) which does not satisfy the stringent definition of spin liquid requiring odd number of electrons per unit cell. Several parent Hamiltonians of spin liquids have already been constructed. See for example, Ref. 24-27 .\n\n## IV. GENERATE THE HIGH ORDER PHYSICAL SPIN INTERACTIONS BY PERTURBATIVE EXPANSION.\n\nOne major drawback of the present construction is that it involves high order interactions of physical spins[see (8) and (9)], thus is 'unnatural'. In this Section we will make compromises between exact solvability and naturalness. We consider two clusters j and k and try to generate the J x,y,z interactions in (7) from perturbation series expansion of more natural(lower order) physical spin interactions. Two different approaches for this purpose will be laid out in the following two Subsections. In Subsection IV A we will consider the two clusters as two tetrahedra, and couple the spin system to certain optical phonons, further coupling between the phonon modes\n\nFIG. 3: Illustration of the tetragonal to orthorhombic Q E 1 (top) and Q E 2 (bottom) distortion modes. (a) Perspective view of the tetrahedron. 1 , . . . , 4 label the spins. Arrows indicate the motion of each spin under the distortion mode. (b) Top view of (a). (c)(d) Side view of (a).\n\n\n\nof the two clusters can generate at lowest order the desired high order spin interactions. In Subsection IV B we will introduce certain magnetic, e.g. Heisenberg-type, interactions between physical spins of different clusters, at lowest order(second order) of perturbation theory the desired high order spin interactions can be achieved. These approaches involve truncation errors in the perturbation series, thus the mapping to low energy effect Hamiltonian will no longer be exact. However the error introduced may be controlled by small expansion parameters. 
In this Section we denote the physical spins on cluster j ( k ) as j 1 , . . . , j 4 ( k 1 , . . . , k 4), and denote pseudo-spins on cluster j ( k ) as /vectorτ j ( /vectorτ k ).", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0266.pdf" - }, - { - "text": "The fundamental requirement of the spin is that the airplane be placed at an excessive angle of attack to produce the autorotation rolling and yawing tendencies. Generally speaking, the conventional airplane must be stalled .before a spin can take place. This relationship establishes a fundamental p&rciple of recovery-the airplane must be unstalled by decreasing the wing angle of attack. The most dfective procedure for the conventional configuration is to use opposite rudder to stop the sideslip, then lower the angle of attack with the elevators. With sufficient rudder power this procedure will produce a positive recovery with a minimum loss of altitude. Care should be taken during pullout from the ensuing dive to prevent excessive angle of attack and entry into another spin.\n\nIt should be appreciated that a spin is always a possible corollary of a stall and the selfsustaining motion of a spin will take place at", - "page_start": 326, - "page_end": 326, - "source_file": "00-80T-80.pdf" - }, - { - "text": "to a certain extent the particle-particle attraction. Normally, the solution is deposited on to a plain silicon substrate that is covered by the native oxide layer only [34]. However, one may locally change the wetting behaviour of the solvent by further oxidising the substrate [38]. By adding excess thiol one can also vary the properties of the solvent [40].\n\nTwo different procedures are employed for the deposition of the solution on to the substrate: spincoating or a meniscus technique [61, 62]. The choice is important as it strongly influences the evaporation rate and, as a result, the pattern formation process. 
When using spin-coating, one finds that directly after deposition, evaporation competes with dewetting until all the solvent has evaporated. The resulting deposits of nanoparticles are imaged by atomic force microscopy (AFM). For spin-coated films, the evaporation rate is high and structuring is normally finished before the spincoater is stopped. Conversely, the solvent evaporation rate is strongly decreased when employing the meniscus technique [61], i.e., by depositing a drop of solution on a Teflon ring that is wetted by the solvent. This allows for a better control of the process and enables the use of contrast-enhanced microscopy to observe the dewetting process in situ [40]. All pattern formation is confined to the region of the receding contact line of toluene, silicon and air. With both techniques one may find mono-modal or bi-modal polygonal networks [34], labyrinthine spinodal structures, or branched patterns (see Fig. 1). The meniscus technique allows for the study of branched structures in a more controlled manner. The work in Ref. [40] indicates that fingering strongly depends on the interaction strength of the particles, i.e., on the chain length of the thiol molecules coating the gold cores. For short chains (C 5 and C 8 ) no formation of branched structures is observed. At similar concentrations, well-developed branched structures are formed for longer chains (C 10 and C 12 ). For even longer chains (C 14 ), however, one again finds less branching. It also depends on the amount of excess thiol in the solvent (for details see Ref. [40]).\n\nWhen following the evolution of the branched patterns in situ (see the complementary video material of Ref. [40]), one clearly observes that different processes occur on different lenght scales. First, a macroscopic dewetting front recedes, leaving behind a seemingly dry substrate. The macroscopic front can be transversely unstable resulting in large-scale ( > 100 µ m) strongly anisotropic fingered structures. 
For fronts that move relatively quickly these macroscopic structures cover all the available substrate. However, when at a later stage the macroscopic front becomes slower, those fingers become scarce and 'macroscopic fingering' finally ceases. At this stage it is possible to appreciate that the seemingly dry region left behind by the front is not at all dry, but covered by an ultrathin 'postcursor' film that is itself unstable. The thickness of this film", - "page_start": 5, - "page_end": 5, - "source_file": "1001.2669.pdf" - }, - { - "text": "inter-cluster spin-chirality coupling in H perturbation z explicitly breaks time reversal symmetry and is probably harder to implement in solid state systems. However spin-chirality order may have important consequences in frustrated magnets 36,37 , and a realization of spin-", - "page_start": 6, - "page_end": 6, - "source_file": "1001.0266.pdf" - }, - { - "text": "excessive angles of attack. Of course, a low speed airplane could be: designed to be spinproof by making it stallproof. By limiting the amount of control deflection, the airplane may not have the longitudinal control power to trim to maximum lift angle of attack. Such a provision may be possible for certain light planes and commercial aircraft but would create an unrealistic and impractical limitation on the utility of a military airplane.\n\nThe modern high speed airplane configuration is typified by low aspect ratio, swept wing planforms with relatively large yaw and pitch inertia. The aerodynamic characteristics of such a configuration are shown in figure 4.32. The lift curve (C, versus U) is quite shallow at high angles of attack and maximum lift is not clearly defined. When this type of airplane is provided a rolling motion at high angles of attack, relatively small changes in C, take place. 
When this effect is combined with the relatively short span of this type airplane, it is apparent that the wing autorotation contribution will be quite weak and will not be a predominating pro-spin moment. The relatively large changes in drag coefficient with rolling motion imply .a predominance of yaw for the spin of the high speed airplane configuration.\n\nActually, various other factors contribute to the predominating yaw tendency for the spin of the modern airplane configuration. The static directional stability deteriorates at high angles of attack and may be so weak that extemely large yaw displacements result. In certain instances, very high angles of attack may bring such a decay in directional stability that a 'slice' or extreme yaw displacement takes place before a true spin is apparent. At these high angles of attack, the adverse yaw due to roll and aileron deflection can be very strong and create large yaw displacements of the airplane prior to realizing a stall.\n\nThe aircraft with the relatively large, long fuselage can exhibit a significant moment contribution from the fuselage alone. The cross flow pattern on the fuselage at high angles of\n\n## NAWWEPS DO-BOT-BO STABILITY AND CONTROL\n\nattack is capable of producing pro-spin moments of considerable magnitude which contribute to the self-sustaining nature of the spin. Also, the large distributed mass of the fuselage in rolling-yawing rotation contributes to inertia moments which flatten the spin and place the aircraft at extreme angles of attack.\n\nThe spin recovery of the modern high speed airplane involves principles which are similar to those of the spin recovery of the conventional airplane. However, the nature of the spin for the modern configuration may involve specific differences in technique necessary to reduce the sideslip and angle of attack. 
The use of opposite rudder to control the sideslip and effect recovery will depend on the effectiveness of the rudder when the airplane is in the spin. At high positive angles of attack and high sideslip the rudder effectiveness may be reduced and additional anti-spin moments must be provided for rapid recovery. The deflection of ailerons into the spin reduces the autorotation rolling moment and can produce adverse yaw to aid the rudder yawing moment in effecting recovery.", - "page_start": 328, - "page_end": 328, - "source_file": "00-80T-80.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_JWN_2014.pdf", - "query": "What the rough sales amount of the nordstrom.com website ?", - "target_page": 3, - "target_passage": "$2 billion in nordstrom.com sales", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## Net Sales (2014 vs. 2013)\n\nIn 2014, total company net sales increased 7.8%, which was attributable to the comparable sales increase of 4.0%. During the year, we opened three Nordstrom full-line stores, including our first store in Canada, and 27 Nordstrom Rack stores. Additionally, as a result of the acquisition of Trunk Club, we acquired four Trunk Club showrooms and opened one additional Trunk Club showroom in 2014. These additions increased our square footage by 5.5% and represented 2.8% of our total net sales for 2014.\n\nNordstrom net sales, which consist of the U.S. full-line and Nordstrom.com businesses, were $9,678 in 2014, an increase of 3.8% compared with 2013, with comparable sales up 3.6%. These increases reflected continued momentum in our Nordstrom.com channel. Both the number of items sold and the average selling price increased on a comparable basis in 2014. Category highlights included Accessories, Cosmetics and Men's Apparel.\n\nU.S. full-line net sales for 2014 were $7,682, a decrease of 0.3% compared with 2013 and comparable sales decreased by 0.5%. 
The topperforming geographic regions for full-line stores were the Southeast and Southwest.\n\nOur Nordstrom.com, Nordstromrack.com and HauteLook channels continued to experience outsized growth. Nordstrom.com net sales increased 23% and Nordstromrack.com and HauteLook net sales increased 22%, both driven by expanded merchandise selection and ongoing technology investments to enhance the customer experience.\n\nNordstrom Rack net sales increased $477, or 17%, compared with 2013, reflecting incremental volume from existing stores and the impact of 27 new stores since fiscal 2013. Comparable sales increased 3.8% for the year. Shoes and Accessories were the top-performing categories for the year. On a comparable basis, the average selling price of Nordstrom Rack merchandise increased while the number of items sold was flat.\n\n## Net Sales (2013 vs. 2012)\n\nNet sales for 2013 increased 3.4% compared with 2012, driven by a comparable sales increase of 2.5%, attributable to growth at Nordstrom.com and Nordstrom Rack's accelerated store expansion. During 2013, we opened 22 Nordstrom Rack stores and relocated one Nordstrom full-line store and two Nordstrom Rack stores. These additions represented 1.6% of our total net sales for 2013 and increased our square footage by 2.9%. The 53 rd week in 2012 contributed approximately $162 in additional net sales.\n\nNordstrom net sales for 2013 were $9,327, an increase of 1.0% compared with 2012, with comparable sales up 2.3%. Strong growth at Nordstrom.com was partially offset by sales decreases at our full-line stores. Both the average selling price and the number of items sold increased on a comparable basis in 2013 compared with 2012. Category highlights included Cosmetics, Men's Shoes and Women's Apparel.\n\nFull-line net sales for 2013 were $7,705, a decrease of 3.3% compared with 2012, which was primarily driven by a comparable sales decrease of 2.1% for the year. 
The top-performing geographic regions for full-line stores for 2013 were the Southwest and Southeast. Nordstrom.com showed strong sales growth with net sales of $1,622, an increase of 28% compared with 2012, with comparable sales up 30% on a comparable 52-week basis. These increases were driven by expanded merchandise selection and ongoing technology investments to enhance the customer experience.\n\nNordstrom Rack net sales were $2,738, up 12.0% compared with 2012, primarily due to 37 new store openings in 2012 and 2013. Comparable sales increased 2.7% for the year. Cosmetics and Shoes were the strongest-performing categories for the year. Both the average selling price and the number of items sold increased on a comparable basis in 2013 compared with 2012.\n\n## Retail Business Gross Profit\n\nThe following table summarizes the Retail Business gross profit:", - "page_start": 30, - "page_end": 30, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## Item 1. Business.\n\n## DESCRIPTION OF BUSINESS\n\nFounded in 1901 as a retail shoe business in Seattle, Nordstrom later incorporated in Washington state in 1946 and went on to become one of the leading fashion specialty retailers based in the U.S. As of March 16, 2015, we operate 290 U.S. stores located in 38 states as well as a robust ecommerce business through Nordstrom.com, Nordstromrack.com and HauteLook and TrunkClub.com. We also operate two Nordstrom full-line stores in Canada. The west and east coasts of the U.S. are the areas in which we have the largest presence. We have two reportable segments: Retail and Credit.\n\nAs of March 16, 2015, the Retail segment includes our 115 'Nordstrom' branded full-line stores in the U.S. 
and Nordstrom.com, 167 off-price Nordstrom Rack stores, two Canada full-line stores, Nordstromrack.com and HauteLook, and other retail channels including five Trunk Club showrooms and TrunkClub.com, our two Jeffrey boutiques and one clearance store that operates under the name 'Last Chance.' Through these multiple retail channels, we strive to deliver the best customer experience possible. We offer an extensive selection of high-quality brand-name and private label merchandise focused on apparel, shoes, cosmetics and accessories. Our integrated Nordstrom full-line stores and online store allow us to provide our customers with a seamless shopping experience. In-store purchases are primarily fulfilled from that store's inventory, but when inventory is unavailable at that store it may also be shipped to our customers from our fulfillment center in Cedar Rapids, Iowa, or from other Nordstrom full-line stores. Online purchases are primarily shipped to our customers from our Cedar Rapids fulfillment center, but may also be shipped from our Nordstrom full-line stores. Our customers can also pick up online orders in our Nordstrom full-line stores if inventory is available at one of our locations. These capabilities allow us to better serve customers across various channels and improve sales. Nordstrom Rack stores purchase high-quality brand-name merchandise primarily from the same vendors carried in Nordstrom full-line stores and also serve as outlets for clearance merchandise from our Nordstrom stores and other retail channels. During the year, we launched Nordstromrack.com and the associated mobile app. Nordstromrack.com combines the technology expertise of HauteLook with the merchant expertise of Nordstrom Rack. 
Nordstromrack.com and HauteLook offer limited-time sale events on fashion and lifestyle brands as well as a persistent selection of off-price, high-quality brand-name merchandise and are integrated with a single customer log-in, shared shopping cart and streamlined checkout process. Furthermore, we can accommodate returns from these sites by mail or at any Nordstrom Rack location.\n\nOur Credit segment includes our wholly owned federal savings bank, Nordstrom fsb, through which we provide a private label credit card, two Nordstrom Visa credit cards and a debit card. The credit and debit cards feature a loyalty program designed to increase customer visits and spending. Although the primary purposes of our Credit segment are to foster greater customer loyalty and drive more sales, we also generate revenues from finance charges and other fees on these cards. In addition, we save on interchange fees that the Retail segment would incur if our customers used third-party cards.\n\nFor more information about our business and our reportable segments, see Item 7: Management's Discussion and Analysis of Financial Condition and Results of Operations and Note 16: Segment Reporting in Item 8: Financial Statements and Supplementary Data.\n\n## FISCAL YEAR\n\nWe operate on a 52/53-week fiscal year ending on the Saturday closest to January 31 st . References to 2014 and all years within this document are based on a 52-week fiscal year, except 2012, which is based on a 53-week fiscal year.\n\n## TRADEMARKS", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## Retail Business Net Sales\n\nIn our ongoing effort to enhance the customer experience, we are focused on providing customers with a seamless experience across our channels. While our customers may engage with us through multiple channels, we know they value the overall Nordstrom brand experience and view us simply as Nordstrom, which is ultimately how we view our business. 
To provide additional transparency into our net sales by channel, we present the following summary of our Retail Business:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n|-----------------------------------------------------|---------|---------|---------|\n| Net sales by channel: | | | |\n| Nordstrom full-line stores - U.S. | $7,682 | $7,705 | $7,964 |\n| Nordstrom.com | 1,996 | 1,622 | 1,269 |\n| Nordstrom | 9,678 | 9,327 | 9,233 |\n| Nordstrom Rack | 3,215 | 2,738 | 2,445 |\n| Nordstromrack.com and HauteLook | 360 | 295 | 236 |\n| Other retail 1 | 116 | 35 | 35 |\n| Total Retail segment | 13,369 | 12,395 | 11,949 |\n| Corporate/Other | (259) | (229) | (187) |\n| Total net sales | $13,110 | $12,166 | $11,762 |\n| Net sales increase | 7.8% | 3.4% | 12.1% |\n| Comparable sales increase (decrease) by channel 2 : | | | |\n| Nordstrom full-line stores - U.S. | (0.5%) | (2.1%) | 3.9% |\n| Nordstrom.com | 23.1% | 29.5% | 37.1% |\n| Nordstrom | 3.6% | 2.3% | 7.5% |\n| Nordstrom Rack | 3.8% | 2.7% | 7.4% |\n| Nordstromrack.com and HauteLook | 22.1% | 27.3% | - |\n| Total company | 4.0% | 2.5% | 7.3% |\n| Sales per square foot 3 : | | | |\n| Total sales per square foot | $493 | $474 | $470 |\n| 4-wall sales per square foot | 413 | 408 | 417 |\n| Full-line sales per square foot - U.S. 
| 371 | 372 | 385 |\n| Nordstrom Rack sales per square foot | 552 | 553 | 568 |\n| Percentage of net sales by merchandise category: | | | |\n| Women's Apparel | 30% | 31% | 31% |\n| Shoes | 23% | 23% | 23% |\n| Men's Apparel | 16% | 16% | 16% |\n| Women's Accessories | 14% | 14% | 13% |\n| Cosmetics | 11% | 11% | 11% |\n| Kids' Apparel | 4% | 3% | 3% |\n| Other | 2% | 2% | 3% |\n| Total | 100% | 100% | 100% |", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "»› THE RACK GOES ONLINE SHOPPING GENIUSES CAN NOW CONTINUE THEIR STYLE SEARCH AT NORDSTROMRACK.COM, WHERE CUSTOMERS CAN EASILY CHOOSE HOW THEY SHOP BOTH HAUTELOOK AND NORDSTROM RACK.\n\n\n\nour engagement with customers. In 2014, we added more than 1 million new Rewards accounts, a 15% increase from the previous year. We want to give customers more choices with our loyalty program, and our goal is to provide an integrated multi-tender program in all stores and online later this year. We know our Rewards members are many of our most loyal and best customers. So growing these relationships by o/ffering programs that appeal to more customers will be beneficial in the long term.\n\n## CONCLUSION\n\nOur strategy is based on the customer and will remain so. Customers' expectations of speed, convenience, personalization and mobile are increasing. As we continue on our journey, we recognize it's imperative for us to invest for the future and find ways to make our stores more\n\n«‹ THAT'S A RECORD! WE OPENED 27 NEW NORDSTROM RACK STORES IN 2014-THE MOST WE'VE EVER OPENED IN ONE YEAR.\n\n\n\nconvenient and our online experience richer. We believe we are well positioned to deliver a great experience for our customers-no matter how they choose to shop with Nordstrom.\n\n\n\n## Blake W. Nordstrom\n\nPresident, Nordstrom, Inc.\n\n\n\nPeter E. Nordstrom\n\nPresident of Merchandising, Nordstrom, Inc.\n\n\n\n## Erik B. 
Nordstrom\n\nPresident of Nordstrom.com, Nordstrom, Inc.\n\n' I don't think I could've received better news today. Nordstrom Rack has now launched online! '\n\nOUR CUSTOMER, JOANNA D.", - "page_start": 8, - "page_end": 8, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## DEAR\n\n## CUSTOMERS, EMPLOYEES AND SHAREHOLDERS,\n\nFor 114 years, our focus has been on our customers. We have been most successful when we view our business through their eyes. In today's rapidly changing retail landscape, this approach has never been more important, so our strategy remains squarely focused on serving\n\ncustomers on their terms. Knowing customers increasingly desire an experience that's both personalized and convenient, we continue to make investments that further integrate our store and online experience to enable our customers to shop seamlessly any way they choose.\n\nA RECORD\n\n\n\nIN TOTAL COMPANY SALES. WITH SALES GROWTH OF 7.8% AND COMPARABLE SALES INCREASE OF 4%, WE BEAT OUR OWN EXPECTATIONS.\n\nNEARLY\n\n\n\nNEW CUSTOMERS SHOPPED AT NORDSTROM RACK-THAT'S MORE THAN AT ANY OTHER CHANNEL.\n\n\n\n27\n\nNEW NORDSTROM RACK STORES. PLUS, RACK SALES INCREASED 17% AND RACK COMPARABLE SALES GAINED 3.8%.\n\nMORE THAN\n\n## 1 million\n\nSTORE VISITS FROM CUSTOMERS RETURNING THEIR HAUTELOOK AND NORDSTROMRACK.COM PURCHASES TO NORDSTROM RACK.\n\nALMOST\n\n\n\nIN NORDSTROM.COM SALES. THAT'S MORE THAN DOUBLE OUR SALES\n\nFROM JUST THREE YEARS AGO.\n\nMORE THAN\n\n## 1 million\n\nNEW MEMBERS JOINED OUR NORDSTROM REWARDS™ PROGRAM FOR THE THIRD YEAR IN A ROW.\n\n", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## TRADEMARKS\n\nWe have 156 trademarks, each of which is the subject of one or more trademark registrations and/or trademark applications. Our most notable trademarks include Nordstrom, Nordstrom Rack, HauteLook, Halogen, BP., Zella, Caslon and Trunk Club. 
Each of our trademarks is renewable indefinitely, provided that it is still used in commerce at the time of the renewal.\n\n## RETURN POLICY\n\nWe have a fair and liberal approach to returns as part of our objective to provide high-quality customer service. We do not have a formal return policy at our Nordstrom full-line stores or online at Nordstrom.com. Our goal is to take care of our customers, which includes making returns and exchanges easy, whether in stores or online, where we offer free shipping and free returns. Our Nordstrom Rack stores generally accept returns up to 90 days from the date of purchase with the original price tag and sales receipt, and also accept returns of Nordstromrack.com and HauteLook merchandise. Nordstromrack.com and HauteLook generally accept returns of apparel, footwear and accessories within 90 days from the date of shipment.\n\n## SEASONALITY\n\nDue to our Anniversary Sale in July and the holidays in December, our sales are typically higher in the second and fourth quarters than in the first and third quarters of the fiscal year.\n\n## PART I", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## Nordstrom, Inc.\n\n## Notes to Consolidated Financial Statements\n\nDollar and share amounts in millions except per share, per option and per unit amounts\n\n## NOTE 1: NATURE OF OPERATIONS AND SUMMARY OF SIGNIFICANT ACCOUNTING POLICIES\n\n## The Company\n\nFounded in 1901 as a shoe store in Seattle, Washington, Nordstrom, Inc. is now a leading fashion specialty retailer that offers customers a well-edited selection of high-quality fashion brands focused on apparel, shoes, cosmetics and accessories for men, women and children. This breadth of merchandise allows us to serve a wide range of customers who appreciate quality fashion and a superior shopping experience. 
We offer an extensive selection of high-quality brand-name and private label merchandise through multiple retail channels, including 116 'Nordstrom' branded full-line stores in the U.S. and at Nordstrom.com (collectively, 'Nordstrom'), one Canada full-line store, 167 off-price Nordstrom Rack stores, Nordstromrack.com and HauteLook, five Trunk Club showrooms and TrunkClub.com, two Jeffrey boutiques and one Last Chance clearance store. Our stores are located in 38 states throughout the U.S and in one province in Canada.\n\nThrough our Credit segment, we provide our customers with a variety of payment products and services, including a Nordstrom private label card, two Nordstrom Visa credit cards and a debit card for Nordstrom purchases. These products also allow our customers to participate in our loyalty program designed to increase customer visits and spending. Although the primary purposes of our Credit segment are to foster greater customer loyalty and drive more sales, we also generate revenues from finance charges and other fees on these cards. In addition, we save on interchange fees that the Retail segment would incur if our customers used third-party cards.\n\n## Fiscal Year\n\nWe operate on a 52/53-week fiscal year ending on the Saturday closest to January 31 st . References to 2014 and all years within this document are based on a 52-week fiscal year, except 2012, which is based on a 53-week fiscal year.\n\n## Principles of Consolidation\n\nThe consolidated financial statements include the balances of Nordstrom, Inc. and its subsidiaries. All intercompany transactions and balances are eliminated in consolidation.\n\n## Use of Estimates\n\nThe preparation of financial statements in conformity with accounting principles generally accepted in the U.S. 
requires management to make estimates and assumptions that affect the reported amounts of assets, liabilities, revenues and expenses, and disclosure of contingent assets and liabilities during the reporting period. Uncertainties regarding such estimates and assumptions are inherent in the preparation of financial statements and actual results may differ from these estimates and assumptions. Our most significant accounting judgments and estimates include the allowance for credit losses, revenue recognition, inventory, goodwill, stock-based compensation and income taxes.\n\n## Net Sales\n\nWe recognize revenue from sales at our retail stores at the point of sale, net of estimated returns and excluding sales taxes. Revenue from sales to customers shipped directly from our stores, website and catalog, which includes shipping revenue when applicable, is recognized upon estimated receipt by the customer. We estimate customer merchandise returns based on historical return patterns and reduce sales and cost of sales accordingly. 
Activity in the allowance for sales returns, net, for the past three fiscal years is as follows:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n|--------------------------------|---------|---------|---------|\n| Allowance at beginning of year | $128 | $116 | $103 |\n| Additions | 2,129 | 1,880 | 1,724 |\n| Returns, net 1 | (2,097) | (1,868) | (1,711) |\n| Allowance at end of year | $160 | $128 | $116 |\n\n## Credit Card Revenues", - "page_start": 52, - "page_end": 52, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "\n\n\n\nOUR NEW LOOK FROM WINDOWS THAT BRING THE OUTSIDE IN TO DEPARTMENTS THAT SEAMLESSLY FLOW TOGETHEROUR NEW STORE DESIGN CREATES AN EXCITING SPACE THAT CAN CHANGE WITH HOW OUR CUSTOMERS SHOP.\n\n\n\nto be within two-day ground delivery of approximately half the population of the United States, which will help improve delivery times for customers and help us meet their rising expectations.\n\nFinally, in 2014, we acquired Trunk Club, a high-growth personalized men's clothing business based on a service model that is highly complementary to our own. We believe Trunk Club is a natural extension of our business, and together we will continue to evolve and bring together the online and o/ffline worlds to deliver a great shopping experience.\n\n## OFF-PRICE: NORDSTROM RACK, NORDSTROMRACK.COM AND HAUTELOOK\n\nWe opened a record 27 new Nordstrom Rack stores, ending 2014 with 167 stores and on track to meet our long-term growth plans\n\nof 300 stores by 2020. Customers continue to respond favorably to the treasure-hunt experience that defines Nordstrom Rack stores. As we expand in many markets for the first time, we hope to continue delivering a great experience, as this business represents a terrific opportunity for us to attract new customers. Last year, Nordstrom Rack was our biggest source of new customers, attracting nearly 4 million. 
Also, a year ago, we began accepting returns of HauteLook and Nordstromrack.com merchandise at any Nordstrom Rack store. This drove nearly 1 million trips to Nordstrom Rack stores in 2014. The Nordstrom Rack customer also tends to be younger than our full-line customer, and there is a meaningful opportunity for these customers to begin shopping our full-price channels as well. We plan to open 27 more Nordstrom Racks in 2015 across the U.S.\n\n\n\n\n\n' I love how you used models with physical challenges in your Anniversary catalog. Nice work! '\n\nOUR CUSTOMER, DONNA A.", - "page_start": 6, - "page_end": 6, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "»› THAT'S BRILLIANT! WE'LL HAVE TOPSHOP IN 80 STORES BY THE END OF 2015-AND THAT'S JUST ONE OF THE WAYS WE'RE ATTRACTING NEW YOUNG CUSTOMERS WITH GREAT BRANDS AT ACCESSIBLE PRICE POINTS.\n\n\n\n' Praise the fashion gods. Nordstrom Downtown Portland is opening Topshop in the next month.\n\n'\n\nOUR CUSTOMER, KARLY T.\n\n«‹ A PERFECT PAIR: SHOES AND SJP ACTRESS AND STYLE ICON SARAH JESSICA PARKER DESIGNED HER OWN SHOE LINE, SJP, AND WE WERE THE EXCLUSIVE RETAILER FOR ITS LAUNCH.\n\n\n\nIn addition to our new stores, we improved our online/o/ff-price capabilities with the launch of Nordstromrack.com. Combined with HauteLook, the integrated ecommerce site o/ffers a consistent merchandise selection as well as flash sales in a single web or mobile experience, providing customers a wide range of merchandise with one easy-to-use, shared checkout. Since the launch last spring, we've more than doubled the selection at Nordstromrack.com. We will continue to work on ways to further integrate our business to improve our customer experience.\n\n## INCREASING RELEVANCE\n\nWe know ultimately customers come to Nordstrom for great merchandise. They continue to respond to fresh, relevant brands. 
Last year, we were the exclusive retail partner for the global launch of\n\nSarah Jessica Parker's SJP line of shoes and launched Charlotte Tilbury in Beauty. We increased the number of full-line stores with Topshop to 53 and launched Kate Moss for Topshop, which helped us rapidly grow the number of Topshop customers, including a younger customer who in many cases is new to Nordstrom. By the end of 2015, we plan to have Topshop in more than 80 stores.\n\nThis March, we were excited to begin carrying Madewell, representing a new partnership with J.Crew. Our initial launch was on Nordstrom.com and in 15 of our stores in our t.b.d. department. This is a terrific example of our continued focus to bring great fashion brands to customers at accessible price points.\n\nFinally, Nordstrom Rewards has been a successful program enabling us to deepen", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "Nordstrom Rack net sales for the quarter increased $130, or 17%, reflecting 27 new Nordstrom Rack store openings since the fourth quarter of 2013, while comparable sales increased 3.2%. On a comparable basis, the average selling price of Nordstrom Rack merchandise increased while the number of items sold was flat. Shoes and Accessories were the category highlights for Nordstrom Rack.\n\n## Gross Profit\n\nOur total company gross profit rate decreased 53 basis points compared with the same period in the prior year, primarily due to increased markdowns at Nordstrom Rack.\n\n## Retail Selling, General, and Administrative Expenses\n\nOur Retail SG&A rate increased 80 basis points primarily due to expenses related to the acquisition of Trunk Club and ongoing technology and fulfillment expenses.\n\n## Credit Expenses\n\nIn the fourth quarter, expenses for our Credit segment of $54 increased from $38 in the prior year. 
The increase was primarily driven by higher operational expenses resulting from a 6% increase in credit volume during the fourth quarter of 2014. The fourth quarter of 2013 also included the impact of the conversion of our Nordstrom Rewards travel benefit into Nordstrom Notes, which decreased operational expenses in the prior year.\n\nFor further information on our quarterly results in 2014 and 2013, refer to Note 17: Selected Quarterly Data in the Notes to Consolidated Financial Statements in Item 8: Financial Statements and Supplementary Data.\n\n## 2015 Outlook\n\nOur expectations for 2015 are as follows:\n\n| Net sales | 7 percent to 9 percent increase |\n|------------------------------|-----------------------------------|\n| Comparable sales | 2 percent to 4 percent increase |\n| Earnings per diluted share 1 | $3.65 to $3.80 |\n\nCapital expenditures, net of property incentives, of approximately $1.2 billion are expected in 2015, an increase from $751 in 2014. The increase relates to store expansion, including Canada and Manhattan, and ongoing investments to improve the customer experience through flagship store remodels and a third fulfillment center expected to open in the second half of the year. To date in 2015, we have opened our second full-line store in Canada. We plan to open 27 Nordstrom Rack stores, three additional Nordstrom full-line stores in the U.S. and another full-line store in Canada during 2015. Planned net store openings are expected to increase our retail square footage by approximately 6.1%.", - "page_start": 36, - "page_end": 36, - "source_file": "NYSE_JWN_2014.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_JWN_2014.pdf", - "query": "How many employees did Nordstrom count in 2014 ?", - "target_page": 17, - "target_passage": "During 2014, we employed approximately 67,000 employees on a full- or part-time basis.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## Item 1. 
Business.\n\n## DESCRIPTION OF BUSINESS\n\nFounded in 1901 as a retail shoe business in Seattle, Nordstrom later incorporated in Washington state in 1946 and went on to become one of the leading fashion specialty retailers based in the U.S. As of March 16, 2015, we operate 290 U.S. stores located in 38 states as well as a robust ecommerce business through Nordstrom.com, Nordstromrack.com and HauteLook and TrunkClub.com. We also operate two Nordstrom full-line stores in Canada. The west and east coasts of the U.S. are the areas in which we have the largest presence. We have two reportable segments: Retail and Credit.\n\nAs of March 16, 2015, the Retail segment includes our 115 'Nordstrom' branded full-line stores in the U.S. and Nordstrom.com, 167 off-price Nordstrom Rack stores, two Canada full-line stores, Nordstromrack.com and HauteLook, and other retail channels including five Trunk Club showrooms and TrunkClub.com, our two Jeffrey boutiques and one clearance store that operates under the name 'Last Chance.' Through these multiple retail channels, we strive to deliver the best customer experience possible. We offer an extensive selection of high-quality brand-name and private label merchandise focused on apparel, shoes, cosmetics and accessories. Our integrated Nordstrom full-line stores and online store allow us to provide our customers with a seamless shopping experience. In-store purchases are primarily fulfilled from that store's inventory, but when inventory is unavailable at that store it may also be shipped to our customers from our fulfillment center in Cedar Rapids, Iowa, or from other Nordstrom full-line stores. Online purchases are primarily shipped to our customers from our Cedar Rapids fulfillment center, but may also be shipped from our Nordstrom full-line stores. Our customers can also pick up online orders in our Nordstrom full-line stores if inventory is available at one of our locations. 
These capabilities allow us to better serve customers across various channels and improve sales. Nordstrom Rack stores purchase high-quality brand-name merchandise primarily from the same vendors carried in Nordstrom full-line stores and also serve as outlets for clearance merchandise from our Nordstrom stores and other retail channels. During the year, we launched Nordstromrack.com and the associated mobile app. Nordstromrack.com combines the technology expertise of HauteLook with the merchant expertise of Nordstrom Rack. Nordstromrack.com and HauteLook offer limited-time sale events on fashion and lifestyle brands as well as a persistent selection of off-price, high-quality brand-name merchandise and are integrated with a single customer log-in, shared shopping cart and streamlined checkout process. Furthermore, we can accommodate returns from these sites by mail or at any Nordstrom Rack location.\n\nOur Credit segment includes our wholly owned federal savings bank, Nordstrom fsb, through which we provide a private label credit card, two Nordstrom Visa credit cards and a debit card. The credit and debit cards feature a loyalty program designed to increase customer visits and spending. Although the primary purposes of our Credit segment are to foster greater customer loyalty and drive more sales, we also generate revenues from finance charges and other fees on these cards. In addition, we save on interchange fees that the Retail segment would incur if our customers used third-party cards.\n\nFor more information about our business and our reportable segments, see Item 7: Management's Discussion and Analysis of Financial Condition and Results of Operations and Note 16: Segment Reporting in Item 8: Financial Statements and Supplementary Data.\n\n## FISCAL YEAR\n\nWe operate on a 52/53-week fiscal year ending on the Saturday closest to January 31 st . 
References to 2014 and all years within this document are based on a 52-week fiscal year, except 2012, which is based on a 53-week fiscal year.\n\n## TRADEMARKS", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "The following table lists our U.S. and Canada retail store count and facility square footage by state/province as of January 31, 2015:\n\nNordstrom Full-Line Stores -", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## Net Sales (2014 vs. 2013)\n\nIn 2014, total company net sales increased 7.8%, which was attributable to the comparable sales increase of 4.0%. During the year, we opened three Nordstrom full-line stores, including our first store in Canada, and 27 Nordstrom Rack stores. Additionally, as a result of the acquisition of Trunk Club, we acquired four Trunk Club showrooms and opened one additional Trunk Club showroom in 2014. These additions increased our square footage by 5.5% and represented 2.8% of our total net sales for 2014.\n\nNordstrom net sales, which consist of the U.S. full-line and Nordstrom.com businesses, were $9,678 in 2014, an increase of 3.8% compared with 2013, with comparable sales up 3.6%. These increases reflected continued momentum in our Nordstrom.com channel. Both the number of items sold and the average selling price increased on a comparable basis in 2014. Category highlights included Accessories, Cosmetics and Men's Apparel.\n\nU.S. full-line net sales for 2014 were $7,682, a decrease of 0.3% compared with 2013 and comparable sales decreased by 0.5%. The topperforming geographic regions for full-line stores were the Southeast and Southwest.\n\nOur Nordstrom.com, Nordstromrack.com and HauteLook channels continued to experience outsized growth. 
Nordstrom.com net sales increased 23% and Nordstromrack.com and HauteLook net sales increased 22%, both driven by expanded merchandise selection and ongoing technology investments to enhance the customer experience.\n\nNordstrom Rack net sales increased $477, or 17%, compared with 2013, reflecting incremental volume from existing stores and the impact of 27 new stores since fiscal 2013. Comparable sales increased 3.8% for the year. Shoes and Accessories were the top-performing categories for the year. On a comparable basis, the average selling price of Nordstrom Rack merchandise increased while the number of items sold was flat.\n\n## Net Sales (2013 vs. 2012)\n\nNet sales for 2013 increased 3.4% compared with 2012, driven by a comparable sales increase of 2.5%, attributable to growth at Nordstrom.com and Nordstrom Rack's accelerated store expansion. During 2013, we opened 22 Nordstrom Rack stores and relocated one Nordstrom full-line store and two Nordstrom Rack stores. These additions represented 1.6% of our total net sales for 2013 and increased our square footage by 2.9%. The 53 rd week in 2012 contributed approximately $162 in additional net sales.\n\nNordstrom net sales for 2013 were $9,327, an increase of 1.0% compared with 2012, with comparable sales up 2.3%. Strong growth at Nordstrom.com was partially offset by sales decreases at our full-line stores. Both the average selling price and the number of items sold increased on a comparable basis in 2013 compared with 2012. Category highlights included Cosmetics, Men's Shoes and Women's Apparel.\n\nFull-line net sales for 2013 were $7,705, a decrease of 3.3% compared with 2012, which was primarily driven by a comparable sales decrease of 2.1% for the year. The top-performing geographic regions for full-line stores for 2013 were the Southwest and Southeast. 
Nordstrom.com showed strong sales growth with net sales of $1,622, an increase of 28% compared with 2012, with comparable sales up 30% on a comparable 52-week basis. These increases were driven by expanded merchandise selection and ongoing technology investments to enhance the customer experience.\n\nNordstrom Rack net sales were $2,738, up 12.0% compared with 2012, primarily due to 37 new store openings in 2012 and 2013. Comparable sales increased 2.7% for the year. Cosmetics and Shoes were the strongest-performing categories for the year. Both the average selling price and the number of items sold increased on a comparable basis in 2013 compared with 2012.\n\n## Retail Business Gross Profit\n\nThe following table summarizes the Retail Business gross profit:", - "page_start": 30, - "page_end": 30, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## Retail Business Net Sales\n\nIn our ongoing effort to enhance the customer experience, we are focused on providing customers with a seamless experience across our channels. While our customers may engage with us through multiple channels, we know they value the overall Nordstrom brand experience and view us simply as Nordstrom, which is ultimately how we view our business. To provide additional transparency into our net sales by channel, we present the following summary of our Retail Business:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n|-----------------------------------------------------|---------|---------|---------|\n| Net sales by channel: | | | |\n| Nordstrom full-line stores - U.S. 
| $7,682 | $7,705 | $7,964 |\n| Nordstrom.com | 1,996 | 1,622 | 1,269 |\n| Nordstrom | 9,678 | 9,327 | 9,233 |\n| Nordstrom Rack | 3,215 | 2,738 | 2,445 |\n| Nordstromrack.com and HauteLook | 360 | 295 | 236 |\n| Other retail 1 | 116 | 35 | 35 |\n| Total Retail segment | 13,369 | 12,395 | 11,949 |\n| Corporate/Other | (259) | (229) | (187) |\n| Total net sales | $13,110 | $12,166 | $11,762 |\n| Net sales increase | 7.8% | 3.4% | 12.1% |\n| Comparable sales increase (decrease) by channel 2 : | | | |\n| Nordstrom full-line stores - U.S. | (0.5%) | (2.1%) | 3.9% |\n| Nordstrom.com | 23.1% | 29.5% | 37.1% |\n| Nordstrom | 3.6% | 2.3% | 7.5% |\n| Nordstrom Rack | 3.8% | 2.7% | 7.4% |\n| Nordstromrack.com and HauteLook | 22.1% | 27.3% | - |\n| Total company | 4.0% | 2.5% | 7.3% |\n| Sales per square foot 3 : | | | |\n| Total sales per square foot | $493 | $474 | $470 |\n| 4-wall sales per square foot | 413 | 408 | 417 |\n| Full-line sales per square foot - U.S. | 371 | 372 | 385 |\n| Nordstrom Rack sales per square foot | 552 | 553 | 568 |\n| Percentage of net sales by merchandise category: | | | |\n| Women's Apparel | 30% | 31% | 31% |\n| Shoes | 23% | 23% | 23% |\n| Men's Apparel | 16% | 16% | 16% |\n| Women's Accessories | 14% | 14% | 13% |\n| Cosmetics | 11% | 11% | 11% |\n| Kids' Apparel | 4% | 3% | 3% |\n| Other | 2% | 2% | 3% |\n| Total | 100% | 100% | 100% |", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## Nordstrom, Inc.\n\n## Notes to Consolidated Financial Statements\n\nDollar and share amounts in millions except per share, per option and per unit amounts\n\n## NOTE 1: NATURE OF OPERATIONS AND SUMMARY OF SIGNIFICANT ACCOUNTING POLICIES\n\n## The Company\n\nFounded in 1901 as a shoe store in Seattle, Washington, Nordstrom, Inc. 
is now a leading fashion specialty retailer that offers customers a well-edited selection of high-quality fashion brands focused on apparel, shoes, cosmetics and accessories for men, women and children. This breadth of merchandise allows us to serve a wide range of customers who appreciate quality fashion and a superior shopping experience. We offer an extensive selection of high-quality brand-name and private label merchandise through multiple retail channels, including 116 'Nordstrom' branded full-line stores in the U.S. and at Nordstrom.com (collectively, 'Nordstrom'), one Canada full-line store, 167 off-price Nordstrom Rack stores, Nordstromrack.com and HauteLook, five Trunk Club showrooms and TrunkClub.com, two Jeffrey boutiques and one Last Chance clearance store. Our stores are located in 38 states throughout the U.S and in one province in Canada.\n\nThrough our Credit segment, we provide our customers with a variety of payment products and services, including a Nordstrom private label card, two Nordstrom Visa credit cards and a debit card for Nordstrom purchases. These products also allow our customers to participate in our loyalty program designed to increase customer visits and spending. Although the primary purposes of our Credit segment are to foster greater customer loyalty and drive more sales, we also generate revenues from finance charges and other fees on these cards. In addition, we save on interchange fees that the Retail segment would incur if our customers used third-party cards.\n\n## Fiscal Year\n\nWe operate on a 52/53-week fiscal year ending on the Saturday closest to January 31 st . References to 2014 and all years within this document are based on a 52-week fiscal year, except 2012, which is based on a 53-week fiscal year.\n\n## Principles of Consolidation\n\nThe consolidated financial statements include the balances of Nordstrom, Inc. and its subsidiaries. 
All intercompany transactions and balances are eliminated in consolidation.\n\n## Use of Estimates\n\nThe preparation of financial statements in conformity with accounting principles generally accepted in the U.S. requires management to make estimates and assumptions that affect the reported amounts of assets, liabilities, revenues and expenses, and disclosure of contingent assets and liabilities during the reporting period. Uncertainties regarding such estimates and assumptions are inherent in the preparation of financial statements and actual results may differ from these estimates and assumptions. Our most significant accounting judgments and estimates include the allowance for credit losses, revenue recognition, inventory, goodwill, stock-based compensation and income taxes.\n\n## Net Sales\n\nWe recognize revenue from sales at our retail stores at the point of sale, net of estimated returns and excluding sales taxes. Revenue from sales to customers shipped directly from our stores, website and catalog, which includes shipping revenue when applicable, is recognized upon estimated receipt by the customer. We estimate customer merchandise returns based on historical return patterns and reduce sales and cost of sales accordingly. Activity in the allowance for sales returns, net, for the past three fiscal years is as follows:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n|--------------------------------|---------|---------|---------|\n| Allowance at beginning of year | $128 | $116 | $103 |\n| Additions | 2,129 | 1,880 | 1,724 |\n| Returns, net 1 | (2,097) | (1,868) | (1,711) |\n| Allowance at end of year | $160 | $128 | $116 |\n\n## Credit Card Revenues", - "page_start": 52, - "page_end": 52, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "»› THE RACK GOES ONLINE SHOPPING GENIUSES CAN NOW CONTINUE THEIR STYLE SEARCH AT NORDSTROMRACK.COM, WHERE CUSTOMERS CAN EASILY CHOOSE HOW THEY SHOP BOTH HAUTELOOK AND NORDSTROM RACK.\n\n\n\nour engagement with customers. 
In 2014, we added more than 1 million new Rewards accounts, a 15% increase from the previous year. We want to give customers more choices with our loyalty program, and our goal is to provide an integrated multi-tender program in all stores and online later this year. We know our Rewards members are many of our most loyal and best customers. So growing these relationships by o/ffering programs that appeal to more customers will be beneficial in the long term.\n\n## CONCLUSION\n\nOur strategy is based on the customer and will remain so. Customers' expectations of speed, convenience, personalization and mobile are increasing. As we continue on our journey, we recognize it's imperative for us to invest for the future and find ways to make our stores more\n\n«‹ THAT'S A RECORD! WE OPENED 27 NEW NORDSTROM RACK STORES IN 2014-THE MOST WE'VE EVER OPENED IN ONE YEAR.\n\n\n\nconvenient and our online experience richer. We believe we are well positioned to deliver a great experience for our customers-no matter how they choose to shop with Nordstrom.\n\n\n\n## Blake W. Nordstrom\n\nPresident, Nordstrom, Inc.\n\n\n\nPeter E. Nordstrom\n\nPresident of Merchandising, Nordstrom, Inc.\n\n\n\n## Erik B. Nordstrom\n\nPresident of Nordstrom.com, Nordstrom, Inc.\n\n' I don't think I could've received better news today. Nordstrom Rack has now launched online! 
'\n\nOUR CUSTOMER, JOANNA D.", - "page_start": 8, - "page_end": 8, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## Nordstrom, Inc.\n\n## Consolidated Balance Sheets\n\nIn millions", - "page_start": 49, - "page_end": 49, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "The following table summarizes our store count and square footage activity:\n\n| | Store count | Store count | Store count | Square footage | Square footage | Square footage |\n|-----------------------------------|---------------|---------------|---------------|------------------|------------------|------------------|\n| Fiscal year | 2014 | 2013 | 2012 | 2014 | 2013 | 2012 |\n| Total, beginning of year | 260 | 240 | 225 | 26.0 | 25.3 | 24.7 |\n| Store openings: | | | | | | |\n| Nordstrom full-line stores - U.S. | 2 | - | 1 | 0.3 | - | 0.1 |\n| Nordstrom Rack and other stores 1 | 29 | 22 | 15 | 1.2 | 0.7 | 0.6 |\n| Stores acquired | 4 | - | - | - | - | |\n| Stores closed | (3) | (2) | (1) | (0.4) | - | (0.1) |\n| Total, end of year | 292 | 260 | 240 | 27.1 | 26.0 | 25.3 |\n\nWe had no store relocations in 2014, compared with one Nordstrom full-line store and two Nordstrom Rack relocations in 2013 and three Nordstrom Rack relocations in 2012. Our 2014 new store openings increased our square footage by 5.5%.\n\nTo date in 2015, we have opened our second full-line store in Canada. We plan to open 27 Nordstrom Rack stores, three additional Nordstrom full-line stores in the U.S. and another full-line store in Canada during 2015. Planned net store openings are expected to increase our retail square footage by approximately 6.1%.", - "page_start": 38, - "page_end": 38, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## Nordstrom, Inc. 
and Subsidiaries Exhibit Index", - "page_start": 85, - "page_end": 85, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## Nordstrom, Inc.\n\n## Notes to Consolidated Financial Statements\n\nDollar and share amounts in millions except per share, per option and per unit amounts\n\n## NOTE 16: SEGMENT REPORTING\n\n## Segments\n\nWe have two reportable segments: Retail and Credit . Our Retail segment includes our 'Nordstrom' operating segment, which is composed of our Nordstrom full-line stores in the U.S. and our online store at Nordstrom.com. Through our multi-channel initiatives, we have integrated the operations, merchandising and technology of our Nordstrom full-line and online stores, consistent with our customers' expectations of a seamless shopping experience regardless of channel. Our internal reporting to our president, who is our chief operating decision maker, is consistent with these multi-channel initiatives. We aggregate our Nordstrom Rack operating segment into the Retail reporting segment, based on similar economic and other qualitative characteristics. Additionally, we include Nordstromrack.com, HauteLook, Jeffrey, Trunk Club and our Canadian operations in the Retail reporting segment.\n\nThrough our Credit segment, we provide our customers with a variety of payment products and services, including a Nordstrom private label card, two Nordstrom Visa credit cards and a debit card for Nordstrom purchases. 
Our credit and debit card products also include a loyalty program that provides benefits to our cardholders based on their level of spending.\n\nAmounts in the Corporate/Other column include unallocated corporate expenses and assets, sales return reserve, inter-segment eliminations and other adjustments to segment results necessary for the presentation of consolidated financial results in accordance with generally accepted accounting principles.\n\n## Accounting Policy\n\nIn general, we use the same measurements to compute earnings before income taxes for reportable segments as we do for the consolidated company. However, redemptions of our Nordstrom Notes are included in net sales for our Retail segment. The sales amount in our Corporate/Other column includes an entry to eliminate these transactions from our consolidated net sales. The related Nordstrom Notes expenses are included in our Retail segment at face value. Our Corporate/Other column includes an adjustment to reduce the Nordstrom Notes expense from face value to their estimated cost. In addition, our sales return reserve and other corporate adjustments are recorded in the Corporate/Other column. Other than as described above, the accounting policies of the operating segments are the same as those described in Note 1: Nature of Operations and Summary of Significant Accounting Policies.", - "page_start": 73, - "page_end": 73, - "source_file": "NYSE_JWN_2014.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_JWN_2014.pdf", - "query": "How many stores did Nordstrom posses at the end of 2014 ?", - "target_page": 22, - "target_passage": "Number of stores, end of year : 292", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## Item 1. 
Business.\n\n## DESCRIPTION OF BUSINESS\n\nFounded in 1901 as a retail shoe business in Seattle, Nordstrom later incorporated in Washington state in 1946 and went on to become one of the leading fashion specialty retailers based in the U.S. As of March 16, 2015, we operate 290 U.S. stores located in 38 states as well as a robust ecommerce business through Nordstrom.com, Nordstromrack.com and HauteLook and TrunkClub.com. We also operate two Nordstrom full-line stores in Canada. The west and east coasts of the U.S. are the areas in which we have the largest presence. We have two reportable segments: Retail and Credit.\n\nAs of March 16, 2015, the Retail segment includes our 115 'Nordstrom' branded full-line stores in the U.S. and Nordstrom.com, 167 off-price Nordstrom Rack stores, two Canada full-line stores, Nordstromrack.com and HauteLook, and other retail channels including five Trunk Club showrooms and TrunkClub.com, our two Jeffrey boutiques and one clearance store that operates under the name 'Last Chance.' Through these multiple retail channels, we strive to deliver the best customer experience possible. We offer an extensive selection of high-quality brand-name and private label merchandise focused on apparel, shoes, cosmetics and accessories. Our integrated Nordstrom full-line stores and online store allow us to provide our customers with a seamless shopping experience. In-store purchases are primarily fulfilled from that store's inventory, but when inventory is unavailable at that store it may also be shipped to our customers from our fulfillment center in Cedar Rapids, Iowa, or from other Nordstrom full-line stores. Online purchases are primarily shipped to our customers from our Cedar Rapids fulfillment center, but may also be shipped from our Nordstrom full-line stores. Our customers can also pick up online orders in our Nordstrom full-line stores if inventory is available at one of our locations. 
These capabilities allow us to better serve customers across various channels and improve sales. Nordstrom Rack stores purchase high-quality brand-name merchandise primarily from the same vendors carried in Nordstrom full-line stores and also serve as outlets for clearance merchandise from our Nordstrom stores and other retail channels. During the year, we launched Nordstromrack.com and the associated mobile app. Nordstromrack.com combines the technology expertise of HauteLook with the merchant expertise of Nordstrom Rack. Nordstromrack.com and HauteLook offer limited-time sale events on fashion and lifestyle brands as well as a persistent selection of off-price, high-quality brand-name merchandise and are integrated with a single customer log-in, shared shopping cart and streamlined checkout process. Furthermore, we can accommodate returns from these sites by mail or at any Nordstrom Rack location.\n\nOur Credit segment includes our wholly owned federal savings bank, Nordstrom fsb, through which we provide a private label credit card, two Nordstrom Visa credit cards and a debit card. The credit and debit cards feature a loyalty program designed to increase customer visits and spending. Although the primary purposes of our Credit segment are to foster greater customer loyalty and drive more sales, we also generate revenues from finance charges and other fees on these cards. In addition, we save on interchange fees that the Retail segment would incur if our customers used third-party cards.\n\nFor more information about our business and our reportable segments, see Item 7: Management's Discussion and Analysis of Financial Condition and Results of Operations and Note 16: Segment Reporting in Item 8: Financial Statements and Supplementary Data.\n\n## FISCAL YEAR\n\nWe operate on a 52/53-week fiscal year ending on the Saturday closest to January 31 st . 
References to 2014 and all years within this document are based on a 52-week fiscal year, except 2012, which is based on a 53-week fiscal year.\n\n## TRADEMARKS", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## Net Sales (2014 vs. 2013)\n\nIn 2014, total company net sales increased 7.8%, which was attributable to the comparable sales increase of 4.0%. During the year, we opened three Nordstrom full-line stores, including our first store in Canada, and 27 Nordstrom Rack stores. Additionally, as a result of the acquisition of Trunk Club, we acquired four Trunk Club showrooms and opened one additional Trunk Club showroom in 2014. These additions increased our square footage by 5.5% and represented 2.8% of our total net sales for 2014.\n\nNordstrom net sales, which consist of the U.S. full-line and Nordstrom.com businesses, were $9,678 in 2014, an increase of 3.8% compared with 2013, with comparable sales up 3.6%. These increases reflected continued momentum in our Nordstrom.com channel. Both the number of items sold and the average selling price increased on a comparable basis in 2014. Category highlights included Accessories, Cosmetics and Men's Apparel.\n\nU.S. full-line net sales for 2014 were $7,682, a decrease of 0.3% compared with 2013 and comparable sales decreased by 0.5%. The topperforming geographic regions for full-line stores were the Southeast and Southwest.\n\nOur Nordstrom.com, Nordstromrack.com and HauteLook channels continued to experience outsized growth. Nordstrom.com net sales increased 23% and Nordstromrack.com and HauteLook net sales increased 22%, both driven by expanded merchandise selection and ongoing technology investments to enhance the customer experience.\n\nNordstrom Rack net sales increased $477, or 17%, compared with 2013, reflecting incremental volume from existing stores and the impact of 27 new stores since fiscal 2013. Comparable sales increased 3.8% for the year. 
Shoes and Accessories were the top-performing categories for the year. On a comparable basis, the average selling price of Nordstrom Rack merchandise increased while the number of items sold was flat.\n\n## Net Sales (2013 vs. 2012)\n\nNet sales for 2013 increased 3.4% compared with 2012, driven by a comparable sales increase of 2.5%, attributable to growth at Nordstrom.com and Nordstrom Rack's accelerated store expansion. During 2013, we opened 22 Nordstrom Rack stores and relocated one Nordstrom full-line store and two Nordstrom Rack stores. These additions represented 1.6% of our total net sales for 2013 and increased our square footage by 2.9%. The 53 rd week in 2012 contributed approximately $162 in additional net sales.\n\nNordstrom net sales for 2013 were $9,327, an increase of 1.0% compared with 2012, with comparable sales up 2.3%. Strong growth at Nordstrom.com was partially offset by sales decreases at our full-line stores. Both the average selling price and the number of items sold increased on a comparable basis in 2013 compared with 2012. Category highlights included Cosmetics, Men's Shoes and Women's Apparel.\n\nFull-line net sales for 2013 were $7,705, a decrease of 3.3% compared with 2012, which was primarily driven by a comparable sales decrease of 2.1% for the year. The top-performing geographic regions for full-line stores for 2013 were the Southwest and Southeast. Nordstrom.com showed strong sales growth with net sales of $1,622, an increase of 28% compared with 2012, with comparable sales up 30% on a comparable 52-week basis. These increases were driven by expanded merchandise selection and ongoing technology investments to enhance the customer experience.\n\nNordstrom Rack net sales were $2,738, up 12.0% compared with 2012, primarily due to 37 new store openings in 2012 and 2013. Comparable sales increased 2.7% for the year. Cosmetics and Shoes were the strongest-performing categories for the year. 
Both the average selling price and the number of items sold increased on a comparable basis in 2013 compared with 2012.\n\n## Retail Business Gross Profit\n\nThe following table summarizes the Retail Business gross profit:", - "page_start": 30, - "page_end": 30, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "The following table summarizes our store count and square footage activity:\n\n| | Store count | Store count | Store count | Square footage | Square footage | Square footage |\n|-----------------------------------|---------------|---------------|---------------|------------------|------------------|------------------|\n| Fiscal year | 2014 | 2013 | 2012 | 2014 | 2013 | 2012 |\n| Total, beginning of year | 260 | 240 | 225 | 26.0 | 25.3 | 24.7 |\n| Store openings: | | | | | | |\n| Nordstrom full-line stores - U.S. | 2 | - | 1 | 0.3 | - | 0.1 |\n| Nordstrom Rack and other stores 1 | 29 | 22 | 15 | 1.2 | 0.7 | 0.6 |\n| Stores acquired | 4 | - | - | - | - | |\n| Stores closed | (3) | (2) | (1) | (0.4) | - | (0.1) |\n| Total, end of year | 292 | 260 | 240 | 27.1 | 26.0 | 25.3 |\n\nWe had no store relocations in 2014, compared with one Nordstrom full-line store and two Nordstrom Rack relocations in 2013 and three Nordstrom Rack relocations in 2012. Our 2014 new store openings increased our square footage by 5.5%.\n\nTo date in 2015, we have opened our second full-line store in Canada. We plan to open 27 Nordstrom Rack stores, three additional Nordstrom full-line stores in the U.S. and another full-line store in Canada during 2015. Planned net store openings are expected to increase our retail square footage by approximately 6.1%.", - "page_start": 38, - "page_end": 38, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "»› THE RACK GOES ONLINE SHOPPING GENIUSES CAN NOW CONTINUE THEIR STYLE SEARCH AT NORDSTROMRACK.COM, WHERE CUSTOMERS CAN EASILY CHOOSE HOW THEY SHOP BOTH HAUTELOOK AND NORDSTROM RACK.\n\n\n\nour engagement with customers. 
In 2014, we added more than 1 million new Rewards accounts, a 15% increase from the previous year. We want to give customers more choices with our loyalty program, and our goal is to provide an integrated multi-tender program in all stores and online later this year. We know our Rewards members are many of our most loyal and best customers. So growing these relationships by o/ffering programs that appeal to more customers will be beneficial in the long term.\n\n## CONCLUSION\n\nOur strategy is based on the customer and will remain so. Customers' expectations of speed, convenience, personalization and mobile are increasing. As we continue on our journey, we recognize it's imperative for us to invest for the future and find ways to make our stores more\n\n«‹ THAT'S A RECORD! WE OPENED 27 NEW NORDSTROM RACK STORES IN 2014-THE MOST WE'VE EVER OPENED IN ONE YEAR.\n\n\n\nconvenient and our online experience richer. We believe we are well positioned to deliver a great experience for our customers-no matter how they choose to shop with Nordstrom.\n\n\n\n## Blake W. Nordstrom\n\nPresident, Nordstrom, Inc.\n\n\n\nPeter E. Nordstrom\n\nPresident of Merchandising, Nordstrom, Inc.\n\n\n\n## Erik B. Nordstrom\n\nPresident of Nordstrom.com, Nordstrom, Inc.\n\n' I don't think I could've received better news today. Nordstrom Rack has now launched online! 
'\n\nOUR CUSTOMER, JOANNA D.", - "page_start": 8, - "page_end": 8, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "\n\n\n\nOUR NEW LOOK FROM WINDOWS THAT BRING THE OUTSIDE IN TO DEPARTMENTS THAT SEAMLESSLY FLOW TOGETHEROUR NEW STORE DESIGN CREATES AN EXCITING SPACE THAT CAN CHANGE WITH HOW OUR CUSTOMERS SHOP.\n\n\n\nto be within two-day ground delivery of approximately half the population of the United States, which will help improve delivery times for customers and help us meet their rising expectations.\n\nFinally, in 2014, we acquired Trunk Club, a high-growth personalized men's clothing business based on a service model that is highly complementary to our own. We believe Trunk Club is a natural extension of our business, and together we will continue to evolve and bring together the online and o/ffline worlds to deliver a great shopping experience.\n\n## OFF-PRICE: NORDSTROM RACK, NORDSTROMRACK.COM AND HAUTELOOK\n\nWe opened a record 27 new Nordstrom Rack stores, ending 2014 with 167 stores and on track to meet our long-term growth plans\n\nof 300 stores by 2020. Customers continue to respond favorably to the treasure-hunt experience that defines Nordstrom Rack stores. As we expand in many markets for the first time, we hope to continue delivering a great experience, as this business represents a terrific opportunity for us to attract new customers. Last year, Nordstrom Rack was our biggest source of new customers, attracting nearly 4 million. Also, a year ago, we began accepting returns of HauteLook and Nordstromrack.com merchandise at any Nordstrom Rack store. This drove nearly 1 million trips to Nordstrom Rack stores in 2014. The Nordstrom Rack customer also tends to be younger than our full-line customer, and there is a meaningful opportunity for these customers to begin shopping our full-price channels as well. 
We plan to open 27 more Nordstrom Racks in 2015 across the U.S.\n\n\n\n\n\n' I love how you used models with physical challenges in your Anniversary catalog. Nice work! '\n\nOUR CUSTOMER, DONNA A.", - "page_start": 6, - "page_end": 6, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## Nordstrom, Inc.\n\n## Notes to Consolidated Financial Statements\n\nDollar and share amounts in millions except per share, per option and per unit amounts\n\n## NOTE 1: NATURE OF OPERATIONS AND SUMMARY OF SIGNIFICANT ACCOUNTING POLICIES\n\n## The Company\n\nFounded in 1901 as a shoe store in Seattle, Washington, Nordstrom, Inc. is now a leading fashion specialty retailer that offers customers a well-edited selection of high-quality fashion brands focused on apparel, shoes, cosmetics and accessories for men, women and children. This breadth of merchandise allows us to serve a wide range of customers who appreciate quality fashion and a superior shopping experience. We offer an extensive selection of high-quality brand-name and private label merchandise through multiple retail channels, including 116 'Nordstrom' branded full-line stores in the U.S. and at Nordstrom.com (collectively, 'Nordstrom'), one Canada full-line store, 167 off-price Nordstrom Rack stores, Nordstromrack.com and HauteLook, five Trunk Club showrooms and TrunkClub.com, two Jeffrey boutiques and one Last Chance clearance store. Our stores are located in 38 states throughout the U.S and in one province in Canada.\n\nThrough our Credit segment, we provide our customers with a variety of payment products and services, including a Nordstrom private label card, two Nordstrom Visa credit cards and a debit card for Nordstrom purchases. These products also allow our customers to participate in our loyalty program designed to increase customer visits and spending. 
Although the primary purposes of our Credit segment are to foster greater customer loyalty and drive more sales, we also generate revenues from finance charges and other fees on these cards. In addition, we save on interchange fees that the Retail segment would incur if our customers used third-party cards.\n\n## Fiscal Year\n\nWe operate on a 52/53-week fiscal year ending on the Saturday closest to January 31 st . References to 2014 and all years within this document are based on a 52-week fiscal year, except 2012, which is based on a 53-week fiscal year.\n\n## Principles of Consolidation\n\nThe consolidated financial statements include the balances of Nordstrom, Inc. and its subsidiaries. All intercompany transactions and balances are eliminated in consolidation.\n\n## Use of Estimates\n\nThe preparation of financial statements in conformity with accounting principles generally accepted in the U.S. requires management to make estimates and assumptions that affect the reported amounts of assets, liabilities, revenues and expenses, and disclosure of contingent assets and liabilities during the reporting period. Uncertainties regarding such estimates and assumptions are inherent in the preparation of financial statements and actual results may differ from these estimates and assumptions. Our most significant accounting judgments and estimates include the allowance for credit losses, revenue recognition, inventory, goodwill, stock-based compensation and income taxes.\n\n## Net Sales\n\nWe recognize revenue from sales at our retail stores at the point of sale, net of estimated returns and excluding sales taxes. Revenue from sales to customers shipped directly from our stores, website and catalog, which includes shipping revenue when applicable, is recognized upon estimated receipt by the customer. We estimate customer merchandise returns based on historical return patterns and reduce sales and cost of sales accordingly. 
Activity in the allowance for sales returns, net, for the past three fiscal years is as follows:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n|--------------------------------|---------|---------|---------|\n| Allowance at beginning of year | $128 | $116 | $103 |\n| Additions | 2,129 | 1,880 | 1,724 |\n| Returns, net 1 | (2,097) | (1,868) | (1,711) |\n| Allowance at end of year | $160 | $128 | $116 |\n\n## Credit Card Revenues", - "page_start": 52, - "page_end": 52, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## Item 7. Management's Discussion and Analysis of Financial Condition and Results of Operations.\n\nDollar, share and square footage amounts in millions except percentages, per share and per square foot amounts\n\n## OVERVIEW\n\nNordstrom is a leading fashion specialty retailer offering apparel, shoes, cosmetics and accessories for women, men and children. We offer an extensive selection of high-quality brand-name and private label merchandise through our various channels: 'Nordstrom' branded full-line stores and online store at Nordstrom.com, Nordstrom Rack stores, Nordstromrack.com and HauteLook and other retail channels, including Trunk Club showrooms and TrunkClub.com, our Jeffrey boutiques and our clearance store that operates under the name 'Last Chance.' As of January 31, 2015, our stores are located in 38 states throughout the United States and in one province in Canada. In addition, we offer our customers a Nordstrom Rewards ™ loyalty program along with a variety of payment products and services, including credit and debit cards.\n\nWe continue to see the ongoing evolution of retail, with increasing customer interaction between our stores and ecommerce. We are making progress to meet customer expectations of a personalized experience that merges the richness of stores with the convenience of online. 
Because the customer views us simply as Nordstrom, we believe there is tremendous value in strengthening our platform for the customer experience that encompasses full-price, off-price, in-store and online. While each channel represents a substantial growth opportunity, there are significant synergies across channels to create a unique customer experience to gain market share.\n\nWe considered 2014 a watershed year in our company history, with our successful entry into Canada, continued expansion of our Nordstrom Rack business through store growth, the launch of Nordstromrack.com and the acquisition of Trunk Club. Our performance in 2014 reflected continued progress in executing our customer strategy through investments to drive growth across channels. We achieved total net sales growth of 7.8%, adding nearly $1 billion to our top-line and delivering record sales and earnings per diluted share. Our financial position remains strong and this marked the sixth consecutive year we generated over $1 billion in cash flow from operations.\n\nOur partnership with vendors and brands enhances our product offering. We offer Topshop merchandise at 53 full-line stores and online, with plans to reach over 80 stores in 2015. Our new partnership with Madewell in 2015, initially available at 15 of our stores and online, is another way to provide sought-after brands that appeal to new and existing customers.\n\nIn 2014, we opened our first full-line store in Canada in Calgary, Alberta, reflecting a multi-year effort from our team to address the unique challenges of crossing the border. With our store outperforming our expectations, we are encouraged with our customers' response in this market. We are looking forward to opening stores in 2015 in Ottawa, Ontario and Vancouver, British Columbia. In the U.S. we increased our presence with two full-line stores in The Woodlands, Texas and Jacksonville, Florida. 
In 2015, we plan to open three full-line stores in Puerto Rico, Minneapolis, Minnesota and Milwaukee, Wisconsin.\n\nAt Nordstrom Rack, we offer customers great brands at great prices, with 48 of the top 50 full-line brands represented. We opened 27 Nordstrom Rack stores in 2014, a record number of openings, contributing to Nordstrom Rack's total sales growth of 17%.", - "page_start": 27, - "page_end": 27, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "The following table lists our U.S. and Canada retail store count and facility square footage by state/province as of January 31, 2015:\n\nNordstrom Full-Line Stores -", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## Retail Business Net Sales\n\nIn our ongoing effort to enhance the customer experience, we are focused on providing customers with a seamless experience across our channels. While our customers may engage with us through multiple channels, we know they value the overall Nordstrom brand experience and view us simply as Nordstrom, which is ultimately how we view our business. To provide additional transparency into our net sales by channel, we present the following summary of our Retail Business:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n|-----------------------------------------------------|---------|---------|---------|\n| Net sales by channel: | | | |\n| Nordstrom full-line stores - U.S. | $7,682 | $7,705 | $7,964 |\n| Nordstrom.com | 1,996 | 1,622 | 1,269 |\n| Nordstrom | 9,678 | 9,327 | 9,233 |\n| Nordstrom Rack | 3,215 | 2,738 | 2,445 |\n| Nordstromrack.com and HauteLook | 360 | 295 | 236 |\n| Other retail 1 | 116 | 35 | 35 |\n| Total Retail segment | 13,369 | 12,395 | 11,949 |\n| Corporate/Other | (259) | (229) | (187) |\n| Total net sales | $13,110 | $12,166 | $11,762 |\n| Net sales increase | 7.8% | 3.4% | 12.1% |\n| Comparable sales increase (decrease) by channel 2 : | | | |\n| Nordstrom full-line stores - U.S. 
| (0.5%) | (2.1%) | 3.9% |\n| Nordstrom.com | 23.1% | 29.5% | 37.1% |\n| Nordstrom | 3.6% | 2.3% | 7.5% |\n| Nordstrom Rack | 3.8% | 2.7% | 7.4% |\n| Nordstromrack.com and HauteLook | 22.1% | 27.3% | - |\n| Total company | 4.0% | 2.5% | 7.3% |\n| Sales per square foot 3 : | | | |\n| Total sales per square foot | $493 | $474 | $470 |\n| 4-wall sales per square foot | 413 | 408 | 417 |\n| Full-line sales per square foot - U.S. | 371 | 372 | 385 |\n| Nordstrom Rack sales per square foot | 552 | 553 | 568 |\n| Percentage of net sales by merchandise category: | | | |\n| Women's Apparel | 30% | 31% | 31% |\n| Shoes | 23% | 23% | 23% |\n| Men's Apparel | 16% | 16% | 16% |\n| Women's Accessories | 14% | 14% | 13% |\n| Cosmetics | 11% | 11% | 11% |\n| Kids' Apparel | 4% | 3% | 3% |\n| Other | 2% | 2% | 3% |\n| Total | 100% | 100% | 100% |", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "»› THAT'S BRILLIANT! WE'LL HAVE TOPSHOP IN 80 STORES BY THE END OF 2015-AND THAT'S JUST ONE OF THE WAYS WE'RE ATTRACTING NEW YOUNG CUSTOMERS WITH GREAT BRANDS AT ACCESSIBLE PRICE POINTS.\n\n\n\n' Praise the fashion gods. Nordstrom Downtown Portland is opening Topshop in the next month.\n\n'\n\nOUR CUSTOMER, KARLY T.\n\n«‹ A PERFECT PAIR: SHOES AND SJP ACTRESS AND STYLE ICON SARAH JESSICA PARKER DESIGNED HER OWN SHOE LINE, SJP, AND WE WERE THE EXCLUSIVE RETAILER FOR ITS LAUNCH.\n\n\n\nIn addition to our new stores, we improved our online/o/ff-price capabilities with the launch of Nordstromrack.com. Combined with HauteLook, the integrated ecommerce site o/ffers a consistent merchandise selection as well as flash sales in a single web or mobile experience, providing customers a wide range of merchandise with one easy-to-use, shared checkout. Since the launch last spring, we've more than doubled the selection at Nordstromrack.com. 
We will continue to work on ways to further integrate our business to improve our customer experience.\n\n## INCREASING RELEVANCE\n\nWe know ultimately customers come to Nordstrom for great merchandise. They continue to respond to fresh, relevant brands. Last year, we were the exclusive retail partner for the global launch of\n\nSarah Jessica Parker's SJP line of shoes and launched Charlotte Tilbury in Beauty. We increased the number of full-line stores with Topshop to 53 and launched Kate Moss for Topshop, which helped us rapidly grow the number of Topshop customers, including a younger customer who in many cases is new to Nordstrom. By the end of 2015, we plan to have Topshop in more than 80 stores.\n\nThis March, we were excited to begin carrying Madewell, representing a new partnership with J.Crew. Our initial launch was on Nordstrom.com and in 15 of our stores in our t.b.d. department. This is a terrific example of our continued focus to bring great fashion brands to customers at accessible price points.\n\nFinally, Nordstrom Rewards has been a successful program enabling us to deepen", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_JWN_2014.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2538.pdf", - "query": "What type of nanostructured material works notably well to build gas nanosensors ?", - "target_page": 1, - "target_passage": "carbon nanotubes (CNT) [2] have been shown to work remarkably well as de- tectors of small gas molecules", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Computational Design of Chemical Nanosensors: Metal Doped Carbon Nanotubes\n\nJ. M. Garc´ıa-Lastra 1,2 , ∗ D. J. Mowbray 1,2 , K. S. Thygesen 2 , A. Rubio 1,3 , and K. W. Jacobsen 2 1 Nano-Bio Spectroscopy group and ETSF Scientific Development Centre, Dpto. F´ısica de Materiales, Universidad del Pa´ıs Vasco, Centro de F´ısica de Materiales CSIC-UPV/EHU- MPC and DIPC, Av. 
Tolosa 72, E-20018 San Sebasti´an, Spain 2 Center for Atomic-scale Materials Design, Department of Physics, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark 3 Fritz-Haber-Institut der Max-Planck-Gesellschaft, Berlin, Germany\n\nWe use computational screening to systematically investigate the use of transition metal doped carbon nanotubes for chemical gas sensing. For a set of relevant target molecules (CO, NH3, H2S) and the main components of air (N2, O2, H2O), we calculate the binding energy and change in conductance upon adsorption on a metal atom occupying a vacancy of a (6,6) carbon nanotube. Based on these descriptors, we identify the most promising dopant candidates for detection of a given target molecule. From the fractional coverage of the metal sites in thermal equilibrium with air, we estimate the change in the nanotube resistance per doping site as a function of the target molecule concentration assuming charge transport in the diffusive regime. Our analysis points to Ni-doped nanotubes as candidates for CO sensors working under typical atmospheric conditions.\n\nPACS numbers: 73.63.-b, 68.43.-h, 73.50.Lw\n\nThe ability to detect small concentrations of specific chemical species is fundamental for a variety of industrial and scientific processes as well as for medical applications and environmental monitoring [1]. In general, nanostructured materials should be well suited for sensor applications because of their large surface to volume ratio which makes them sensitive to molecular adsorption. Specifically, carbon nanotubes (CNT) [2] have been shown to work remarkably well as detectors of small gas molecules. This has been demonstrated both for individual CNTs [3-8] as well as for CNT networks [9, 10].\n\nPristine CNTs are known to be chemically inert - a property closely related to their high stability. As a consequence, only radicals bind strong enough to the CNT to notably affect its electrical properties [2, 5, 11-13]. 
To make CNTs attractive for sensor applications thus requires some kind of functionalization, e.g. through doping or decoration of the CNT sidewall [13-21]. Ideally, this type of functionalization could be used to control not only the reactivity of the CNT but also the selectivity towards specific chemical species.\n\nIn this work we consider the possibility of using CNTs doped by 3d transition metal atoms for chemical gas sensing. We use computational screening to systematically identify the most promising dopant candidates for detection of three different target molecules (CO, NH3, H2S) under typical atmospheric conditions. The screening procedure is based on the calculation of two microscopic descriptors: the binding energy and scattering resistance of the molecules when adsorbed on a doped CNT. These two quantities give a good indication of the gas coverage and impact on the resistance. For the most promising candidates we then employ a simple thermodynamic model of the CNT sensor. In this model, the binding energies are used to obtain the fractional coverage of the metallic sites as a function of the target molecule concentration under ambient conditions. Under the assumption of transport in the diffusive rather than localization regime, the\n\nchange in CNT resistivity may then be obtained from the calculated coverages and single impurity conductances.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2538.pdf" - }, - { - "text": "all N impurities. At this point it suffices to see that the conservative estimates obtained from Eq. (7) predict measurable signals in response to small changes in concentration of the target molecules.\n\nTo our knowledge, controlled doping of CNTs with transition metal atoms has so far not been achieved. It has, however, been found that metal atoms incorporated into the CNT lattice during catalytic growth are afterwards very difficult to remove [30]. 
Furthermore, it has been shown that CNT vacancies, which are needed for the metallic doping, may be formed in a controlled way by irradiation by Ar ions [31]. This suggests that metallic doping of CNTs should be possible.\n\nIn summary, we have presented a general model of nanostructured chemical sensors which takes the adsorption energies of the relevant chemical species and their individual scattering resistances as the only input. On the basis of this model we have performed a computational screening of transition metal doped CNTs, and found that Ni-doped CNTs are promising candidates for detecting CO in a background of air. The model may be applied straightforwardly to other nanostructures than CNTs, other functionalizations than metal doping and other gas compositions than air.\n\nThe authors acknowledge financial support from Spanish MEC (FIS2007-65702-C02-01), 'Grupos Consolidados UPV/EHU del Gobierno Vasco' (IT-319-07), e-I3 ETSF project (Contract Number 211956), 'Red Espa˜nola de Supercomputaci'on', NABIIT and the Danish Center for Scientific Computing. The Center for Atomic-scale Materials Design (CAMD) is sponsored by the Lundbeck Foundation. JMG-L acknowledges funding from Spanish MICINN through Juan de la Cierva and Jos'e Castillejo programs.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2538.pdf" - }, - { - "text": "- ∗ Electronic address: juanmaria.garcia@ehu.es\n- [1] Gas Sensing Materials, MRS Bull. , vol. 24 (1999).\n- [2] J. C. Chalier, X. Blase, and S. Roche, 'Electronic and transport properties of nanotubes', Rev. Mod. Phys. 79 (2), 677 (May 2007), doi:10.1103/RevModPhys.79.677.\n- [3] J. Kong, N. R. Franklin, C. Zhou, M. G. Chapline, S. Peng, K. Cho, and H. Dai, 'Nanotube molecular wires as chemical sensors', Science 287 (5453), 622 (Jan. 2000), doi:10.1126/science.287.5453.622.\n- [4] P. G. Collins, K. Bradley, M. Ishigami, and A. 
Zettl, 'Extreme oxygen sensitivity of electronic properties of carbon nanotubes', Science 287 (5459), 1801 (Mar. 2000), doi:10.1126/science.287.5459.1801.\n- [5] C. Hierold, Carbon Nanotube Devices: Properties, Modeling, Integration and Applications (Wiley-VCH, Weinheim, 2008).\n- [6] F. Villalpando-P'aez, A. H. Romero, E. Mu˜noz-Sandoval, L. M. Mart'ınez, H. Terrones, and M. Terrones, 'Fabrication of vapor and gas sensors using films of aligned CN x nanotubes', Chem. Phys. Lett. 386 (1-3), 137 (Mar. 2004), doi:10.1016/j.cplett.2004.01.052.\n- [7] A. R. Rocha, M. Rossi, A. Fazzio, and A. J. R. da Silva, 'Designing real nanotube-based gas sensors', Phys. Rev. Lett. 100 (17), 176803 (May 2008), doi:10.1103/PhysRevLett.100.176803.\n- [8] S. Brahim, S. Colbern, R. Gump, and L. Grigorian, 'Tailoring gas sensing properties of carbon nanotubes', J. Appl. Phys. 104 (2), 024502 (Jul. 2008), doi:10.1063/1.2956395.\n- [9] C. Morgan, Z. Alemipour, and M. Baxendale, 'Variable range hopping in oxygen-exposed single-wall carbon nanotube networks', Phys. Stat. Solidi A 205 (6), 1394 (May 2008), doi:10.1002/pssa.200778113.\n- [10] D. J. Mowbray, C. Morgan, and K. S. Thygesen, 'Influence of O2 and N2 on the conductivity of carbon nanotube networks', Phys. Rev. B 79 (19), 195431 (May 2009), doi:10.1103/PhysRevB.79.195431.\n- [11] L. Valentini, F. Mercuri, I. Armentano, C. Cantalini, S. Picozzi, L. Lozzi, S. Santucci, A. Sgamellotti, and J. M. Kenny, 'Role of defects on the gas sensing properties of carbon nanotubes thin films: experiment and theory', Chem. Phys. Lett. 387 (4-6), 356 (Apr. 2004), doi:10.1016/j.cplett.2004.02.038.\n- [12] Z. Zanolli and J.-C. Charlier, 'Defective carbon nanotubes for single-molecule sensing', Phys. Rev. B 80 (15), 155447 (Oct. 2009), doi:10.1103/PhysRevB.80.155447.\n- [13] J. M. Garc'ıa-Lastra, K. S. Thygesen, M. Strange, and ' Angel Rubio, 'Conductance of sidewall-functionalized carbon nanotubes: Universal dependence on adsorption sites', Phys. Rev. 
Lett. 101 (23), 236806 (Dec. 2008), doi:10.1103/PhysRevLett.101.236806.\n- [14] S. B. Fagan, R. Mota, A. J. R. da Silva, and A. Fazzio, ' Ab initio study of an iron atom interacting with single-wall carbon nanotubes', Phys. Rev. B 67 (20), 205414 (May 2003), doi:10.1103/PhysRevB.67.205414.\n- [15] Y. Yagi, T. M. Briere, M. H. F. Sluiter, V. Kumar, A. A. Farajian, and Y. Kawazoe, 'Stable geometries and magnetic properties of single-walled carbon nanotubes doped with 3 d transition metals: A first-principles study', Phys. Rev. B 69 (7), 075414 (Feb 2004), doi:10.1103/PhysRevB.69.075414.\n- [16] S. H. Yang, W. H. Shin, J. W. Lee, S. Y. Kim, S. I. Woo, and J. K. Kang, 'Interaction of a transition metal atom with intrinsic defects in single-walled carbon nanotubes', J. Phys. Chem. B 110 (28), 13941 (Jun. 2006), doi:10.1021/jp061895q.\n- [17] K. T. Chan, J. B. Neaton, and M. L. Cohen, 'First-principles study of metal adatom adsorption on graphene', Phys. Rev. B 77 , 235430 (Jun. 2008), doi:10.1103/PhysRevB.77.235430.\n- [18] C. S. Yeung, L. V. Liu, and Y. A. Wang, 'Adsorption of small gas molecules onto Pt-doped single-walled carbon nanotubes', J. Phys. Chem. C 112 (19), 7401 (Apr. 2008), doi:10.1021/jp0753981.\n- [19] T. Vo, Y.-D. Wu, R. Car, and M. Robert, 'Structures, interactions, and ferromagnetism of Fe-carbon nanotube systems', J. Phys. Chem. C 112 (22), 400 (May 2008), doi:10.1021/jp0761968.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2538.pdf" - }, - { - "text": "change in CNT resistivity may then be obtained from the calculated coverages and single impurity conductances.\n\nWe find that oxidation of the active metal site passivates the sensor in the case of doping by Ti, V, Cr, and Mn under standard conditions (room temperature and 1 bar of pressure). Among the remaining metals, we identify Ni as is the most promising candidate for CO detection. 
For this system the change in resistance per active site is generally significant ( > 1 Ω ) for small changes in CO concentration in the relevant range of around 0.1-10 ppm. Our approach is quite general and is directly applicable to other nanostructures than CNTs, other functionalizations than metal doping, and other backgrounds than atmospheric air.\n\nAll total energy calculations and structure optimizations have been performed with the real-space density functional theory (DFT) code GPAW [22] which is based on the projector augmented wave method. We use a grid spacing of 0.2 ˚ A for representing the density and wave functions and the PBE exchange correlation functional [23]. Transport calculations for the optimized structures have been performed using the nonequilibrium Green's function method [24] with an electronic Hamiltonian obtained from the SIESTA code [25] in a double zeta polarized (DZP) basis set. Spin polarization has been taken into account in all calculations.\n\nMetallic doping of a (6,6) CNT has been modeled in a supercell containing six repeated minimal unit cells along the CNT axis (dimensions: 15 ˚ A × 15 ˚ A × 14.622 ˚ A). For this size of supercell a Γ -point sampling of the Brillouin zone was found to be sufficient. The formation energy for creating a vacancy (VC) occupied by a transition metal atom (M) was calculated using the relation\n\nE form [ M @ VC ] = E [ M @ VC ] + nE [ C ] -E [ M@NT ] (1)\n\nwhere E [M@VC] is the total energy of a transition metal atom occupying a vacancy in the nanotube, n is the number of carbon atoms removed to form the vacancy, E [C] is the energy per carbon atom in a pristine nanotube, and E [M@NT]", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2538.pdf" - }, - { - "text": "FIG. 1: Structural schematics and formation energy for a 3d transition metal occupied monovacancy (black), divacancy I (gray), or divacancy II (white) in a (6,6) carbon nanotube. 
Formation energies of the empty vacancies are indicated by dashed lines.\n\n\n\nis the total energy of the pristine nanotube with a physisorbed transition metal atom. We have considered the monovacancy and two divacancies shown in Fig. 1. The energy required to form an empty vacancy is obtained from\n\nE form [ VC ] = E [ VC ] + nE [ C ] -E [ NT ] , (2)\n\nwhere E [VC] is the total energy of the nanotube with a vacancy of n atoms.\n\nThe calculated formation energies for the 3d transition metals are shown in Fig. 1. From the horizontal lines we see that both divacancies are more stable than the monovacancy. This may be attributed to the presence of a two-fold coordinated C atom in the monovacancy, while all C atoms remain three-fold coordinated in the divacancies. When a transition metal atom occupies a vacancy, the strongest bonding to the C atoms is through its d orbitals [26]. For this reason, Cu and Zn, which both have filled d-bands, are rather unstable in the CNT. For the remaining metals, adsorption in the monovacancies leads to quite stable structures. This is because the three-fold coordination of the C atoms and the CNT's hexagonal structure are recovered when the metal atom is inserted. On the other hand, metal adsorption in divacancies is slightly less stable because of the resulting pentagon defects, see upper panel in Fig. 1. A similar behaviour has been reported by Krasheninnikov et al. for transition metal atoms in graphene [21].\n\nThe adsorption energies for N2, O2, H2O, CO, NH3, and H2S on the metallic site of the doped (6,6) CNTs are shown in Fig. 2(a). The adsorption energy of a molecule X is defined by\n\nE ads [ X @M@VC ] = E [ X @M@VC ] -E [ X ] -E [ M@VC ] , (3)\n\nFIG. 
2: Calculated (a) adsorption energy E ads in eV and (b) change in conductance ∆ G in units of G 0 = 2 e 2 /h for N2, O2, H2O, CO, NH3, and H2S on 3d transition metals occupying a monovacancy (top), divacancy I (middle), and divacancy II (bottom) in a (6,6) carbon nanotube.\n\nwhere E [ X @M@VC] is the total energy of molecule X on a transition metal atom occupying a vacancy, and E [ X ] is the gas phase energy of the molecule.\n\nFrom the adsorption energies plotted in Fig. 2(a), we see that the earlier transition metals tend to bind the adsorbates stronger than the late transition metals. The latest metals in the series (Cu and Zn) bind adsorbates rather weakly in the divacancy structures. We also note that O2 binds significantly stronger than any of the three target molecules on Ti, V, Cr, and Mn (except for Cr in divacancy I where H2S is found to dissociate). Active sites containing these metals are therefore expected to be completely passivated if oxygen is present in the background. Further, we find H2O is rather weakly bound to most of the active sites. This ensures that these types of sensors are robust against changes in humidity.\n\nIn thermodynamic equilibrium [27], the coverage of the active sites follows from\n\nΘ[ X ] = K [ X ] C [ X ] 1 + ∑ Y K [ Y ] C [ Y ] , (4)\n\nwhere K = k + /k -is the ratio of forward and backward rate constants for the adsorption reaction,\n\nK [ X ] = exp [ -E ads [ X ] + TS [ X ] k B T ] . (5)\n\nIn these expressions C [ X ] is the concentration of species X , S [ X ] is its gas phase entropy and T is the temperature. Experimental values for the gas phase entropies have been taken from Ref. [28].", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2538.pdf" - }, - { - "text": "- [19] T. Vo, Y.-D. Wu, R. Car, and M. Robert, 'Structures, interactions, and ferromagnetism of Fe-carbon nanotube systems', J. Phys. Chem. C 112 (22), 400 (May 2008), doi:10.1021/jp0761968.\n- [20] J. A. Furst, M. Brandbyge, A.-P. 
Jauho, and K. Stokbro, ' Ab initio study of spin-dependent transport in carbon nanotubes with iron and vanadium adatoms', Phys. Rev. B 78 (19), 195405 (Nov. 2008), doi:10.1103/PhysRevB.78.195405.\n- [21] A. V. Krasheninnikov, P. O. Lehtinen, A. S. Foster, P. Pyykko, and R. M. Nieminen, 'Embedding transitionmetal atoms in graphene: Structure, bonding, and magnetism', Phys. Rev. Lett. 102 (12), 126807 (Mar. 2009), doi:10.1103/PhysRevLett.102.126807.\n- [22] J. J. Mortensen, L. B. Hansen, and K. W. Jacobsen, 'Real-space grid implementation of the projector augmented wave method', Phys. Rev. B 71 (3), 035109 (Jan. 2005), doi:10.1103/PhysRevB.71.035109.\n- [23] J. P. Perdew, K. Burke, and M. Ernzerhof, 'Generalized gradient approximation made simple', Phys. Rev. Lett. 77 (18), 3865 (Oct. 1996), doi:10.1103/PhysRevLett.77.3865.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2538.pdf" - }, - { - "text": "## Exchange bias of a ferromagnetic semiconductor by a ferromagnetic metal\n\nK. Olejnik, 1, 2 P. Wadley, 3 J. Haigh, 3 K. W. Edmonds, 3 R. P. Campion, 3 A. W. Rushforth, 3 B. L. Gallagher, 3 C. T. Foxon, 3 T. Jungwirth, 2, 3 J. Wunderlich, 1, 2 S. S. Dhesi, 4 S. Cavill, 4 G. van der Laan, 4 and E. Arenholz 5\n\n1 Hitachi Cambridge Laboratory, Cambridge CB3 0HE, United Kingdom\n\n2 Institute of Physics ASCR, v.v.i., Cukrovarnicka 10, 16253 Praha 6, Czech Republic\n\n3 School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, United Kingdom 4 Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA\n\n5 (Dated: August 24, 2018)\n\nWe demonstrate an exchange bias in (Ga,Mn)As induced by antiferromagnetic coupling to a thin overlayer of Fe. Bias fields of up to 240 Oe are observed. 
Using element-specific x-ray magnetic circular dichroism measurements, we distinguish a strongly exchange coupled (Ga,Mn)As interface layer in addition to the biassed bulk of the (Ga,Mn)As film. The interface layer remains polarized at room temperature.\n\nPACS numbers: 75.70.Cn, 75.50.Pp, 75.50.Bb\n\nFerromagnetic (FM) semiconductors offer the prospect of combining high-density storage and gate-controlled logic in a single material. The realization of spin-valve devices from FM semiconductors requires the controlled switching of magnetization in adjacent layers between antiferromagnetic (AFM) and FM configurations. This has motivated several theoretical investigations of interlayer coupling in all-semiconductor devices 1 , and AFM coupling has recently been demonstrated in (Ga,Mn)As multilayers separated by p -type non-magnetic spacers 2 . However, the Curie temperature T C of (Ga,Mn)As is currently limited to 185 K in single layers 3 , and is typically much lower for layers embedded within a heterostructure 2 , which is an obstacle to the practical implementation of semiconductor spintronics.\n\nThe development of FM metal/FM semiconductor heterostructures has the potential to bring together the benefits of metal and semiconductor based spintronics, offering access to new functionalities and physical phenomena. Recent studies of MnAs/(Ga,Mn)As and NiFe/(Ga,Mn)As bilayer films have shown FM interlayer coupling and independent magnetization behavior, respectively 4,5 . Of particular interest is the Fe/(Ga,Mn)As system, since the growth of epitaxial Fe/GaAs(001) films is well-established 6 . Remarkably, a recent x-ray magnetic circular dichroism (XMCD) study has shown that Fe may induce a proximity polarization in the near-surface region of (Ga,Mn)As, antiparallel to the Fe moment and persisting even above room temperature 7 . 
Devices incorporating Fe/(Ga,Mn)As therefore offer the prospect of obtaining non-volatile room temperature spin-polarization in a semiconductor.\n\nUntil now, no information has been revealed about the coupling of Fe to (Ga,Mn)As layers away from the nearsurface region. At the surface, the (Ga,Mn)As layer may be highly non-stoichiometric and Mn-rich, due to its nonequilibrium nature 8,9 . Previously, Fe/(Ga,Mn)As layers were produced by a process including exposure to air followed by sputtering and annealing prior to Fe deposition,\n\nwhich may further disrupt the interface order. The origin of the interface magnetism then had to be inferred by comparison to a series of reference samples 7 . Demonstration of coupling between the bulk of the layers, i.e. , an exchange bias effect, would provide direct evidence of the interface magnetic order. Moreover, such coupling would offer new means of manipulating the FM semiconductor spin state and utilizing the proximity polarization effect in a spintronic device.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "- [34] P. Moriarty, M. D. R. Taylor, and M. Brust, 'Nanostructured cellular networks,' Phys. Rev. Lett. 89 , 248303 (2002).\n - [35] E. Rabani, D. R. Reichman, P. L. Geissler, and L. E. Brus, 'Drying-mediated self-assembly of nanoparticles,' Nature 426 , 271-274 (2003).\n - [36] L. V. Govor, G. Reiter, J. Parisi, and G. H. Bauer, 'Self-assembled nanoparticle deposits formed at the contact line of evaporating micrometer-size droplets,' Phys. Rev. E 69 , 061609 (2004).\n - [37] C. P. Martin, M. O. Blunt, and P. Moriarty, 'Nanoparticle networks on silicon: Self-organized or disorganized?' Nano Lett. 4 , 2389-2392 (2004).\n - [38] C. P. Martin, M. O. Blunt, E. Pauliac-Vaujour, A. Stannard, P. Moriarty, I. Vancea, and U. Thiele, 'Controlling pattern formation in nanoparticle assemblies via directed solvent dewetting,' Phys. Rev. Lett. 99 , 116103 (2007).\n - [39] A. Stannard, C. 
P. Martin, E. Pauliac-Vaujour, P. Moriarty, and U. Thiele, 'Dual-scale pattern formation in nanoparticle assemblies,' J. Chem. Phys. C 112 , 15195-15203 (2008).\n - [40] E. Pauliac-Vaujour, A. Stannard, C. P. Martin, M. O. Blunt, I. Notingher, P. J. Moriarty, I. Vancea, and U. Thiele, 'Fingering instabilities in dewetting nanofluids,' Phys. Rev. Lett. 100 , 176102 (2008).\n - [41] I. Vancea, U. Thiele, E. Pauliac-Vaujour, A. Stannard, C. P. Martin, M. O. Blunt, and P. J. Moriarty, 'Front instabilities in evaporatively dewetting nanofluids,' Phys. Rev. E 78 , 041601 (2008).\n - [42] U. Thiele, Entnetzung von Kollagenfilmen , Ph.D. thesis, Technische Universitat Dresden (1998).\n - [43] H. Yabu and M. Shimomura, 'Preparation of self-organized mesoscale polymer patterns on a solid substrate: Continuous pattern formation from a receding meniscus,' Adv. Funct. Mater. 15 , 575-581 (2005).\n - [44] R. D. Deegan, O. Bakajin, T. F. Dupont, G. Huber, S. R. Nagel, and T. A. Witten, 'Capillary flow as the cause of ring stains from dried liquid drops,' Nature 389 , 827-829 (1997).\n - [45] E. Adachi, A. S. Dimitrov, and K. Nagayama, 'Stripe patterns formed on a glass-surface during droplet evaporation,' Langmuir 11 , 1057-1060 (1995).\n - [46] R. D. Deegan, 'Pattern formation in drying drops,' Phys. Rev. E 61 , 475-485 (2000).\n - [47] R. D. Deegan, O. Bakajin, T. F. Dupont, G. Huber, S. R. Nagel, and T. A. Witten, 'Contact line deposits in an evaporating drop,' Phys. Rev. E 62 , 756-765 (2000).\n - [48] L. Shmuylovich, A. Q. Shen, and H. A. Stone, 'Surface morphology of drying latex films: Multiple ring formation,' Langmuir 18 , 3441-3445 (2002).\n - [49] V. X. Nguyen and K. J. 
Stebe, 'Patterning of small particles by a surfactant-enhanced Marangoni-", - "page_start": 27, - "page_end": 27, - "source_file": "1001.2669.pdf" - }, - { - "text": "Here, we demonstrate an antiferromagnetic coupling and exchange bias in Fe/(Ga,Mn)As bilayer films, by combining element-specific XMCD measurements and bulk-sensitive superconducting quantum interference device (SQUID) magnetometry. As with previous studies of FM metal/FM semiconductor bilayers 4,5 (and in contrast to AFM coupled FM metal/FM metal exchange bias structures 10,11 ) the layers are in direct contact without a non-magnetic spacer in between. We distinguish interface and bulk (Ga,Mn)As layers that are respectively strongly and weakly antiferromagnetically coupled to the Fe overlayer. In agreement with Ref. 7 , the interface layer remains polarized at room temperature.\n\nThe Fe and (Ga,Mn)As layers of the present study were both grown by molecular beam epitaxy in the same ultra-high vacuum system, in order to ensure a clean interface between them. The (Ga,Mn)As layer of thickness 10 to 50 nm was deposited on a GaAs(001) substrate at a temperature of 260 · C, using previously established methods 3,8 . A low Mn concentration of x ≈ 0 . 03 was chosen in order to avoid the formation of compensating Mn interstitials. The substrate temperature was then reduced to ∼ 0 · C, before depositing a 2 nm Fe layer, plus a 2 nm Al capping layer. In-situ reflection high energy electron diffraction and ex-situ x-ray reflectivity and diffraction measurements confirmed that the layers are single-crystalline with sub-nm interface roughness. SQUID magnetometry measurements were performed using a Quantum Design Magnetic Property Measurement System. Mn and Fe L 2 , 3 x-ray absorption and XMCD", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "the dominant dynamic process, but does not allow one to probe this assumption. 
In Section III B we show how one may develop a dynamical density functional theory (DDFT) that describes the system at a similar level to the KMC. However, the DDFT may also be easily extended to include other effects such as fluid diffusion, that the KMC does not incorporate.\n\n## A. Kinetic Monte Carlo model\n\nThe kinetic Monte Carlo model for two-dimensional dewetting nanofluids [33] was first proposed in Ref. [35] and extended to include next-nearest neighbour interactions in [37]. The two key assumptions used are: (i) the relevant processes can be mapped on to a two-dimensional lattice gas model, thereby neglecting continuous changes in the thickness of the evaporating film, and (ii) all relevant dynamics results from diffusing nanoparticles and evaporating/condensing solvent.\n\nThe model builds on an Ising-type model for the liquid-gas phase transition. The surface is divided up into a regular array of lattice sites whose size is dictated by the nanoparticles. One then considers each lattice site to be occupied either by a nanoparticle, liquid or vapour. This effectively maps the system onto a two-dimensional two-component lattice gas having two fields n and l . The resulting three possible states of a cell are: liquid ( l = 1 , n = 0 ), nanoparticle ( l = 0 , n = 1 ), and vapour ( l = 0 , n = 0 , i.e., cell empty). The energy of an overall configuration is given by the hamiltonian\n\nE = -ε nn 2 ∑ n i n j -ε nl 2 ∑ n i l j -ε ll 2 ∑ l i l j -µ ∑ i l i (3)\n\nwhere ∑ denotes a sum over nearest neighbour pairs and ε ll , ε nn and ε nl are the liquid-liquid, particle-particle and liquid-particle interaction energies, respectively. Fixing the three interaction strength parameters ε ll , ε nn , ε nl and the effective chemical potential µ determines the equilibrium state of the system. We choose ε ll as unit of energy - i.e. we set ε ll = 1 .\n\nThe hamiltonian determines the equilibrium state and the energy landscape of the system. 
However, as the system 'dries in' during the course of the solvent evaporation, the final nanoparticle configurations do not necessarily represent equilibrium structures. This implies that the system dynamics is of paramount importance. It is determined by the possible Monte Carlo moves, their relative frequencies, and the probabilities for their acceptance. Two types of moves are allowed: (i) evaporation/condensation of liquid and (ii) diffusion of nanoparticles within the liquid. A mobility M corresponds to the ratio of cycles of particle and solvent moves and reflects the physical ratio of", - "page_start": 8, - "page_end": 8, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2538.pdf", - "query": "What seems to be a great technique to ensure vacancies are formed in carbon nanotubes (CNT) ?", - "target_page": 4, - "target_passage": "Furthermore, it has been shown that CNT vacan- cies, which are needed for the metallic doping, may be formed in a controlled way by irradiation by Ar ion", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## Computational Design of Chemical Nanosensors: Metal Doped Carbon Nanotubes\n\nJ. M. Garc´ıa-Lastra 1,2 , ∗ D. J. Mowbray 1,2 , K. S. Thygesen 2 , A. Rubio 1,3 , and K. W. Jacobsen 2 1 Nano-Bio Spectroscopy group and ETSF Scientific Development Centre, Dpto. F´ısica de Materiales, Universidad del Pa´ıs Vasco, Centro de F´ısica de Materiales CSIC-UPV/EHU- MPC and DIPC, Av. Tolosa 72, E-20018 San Sebasti´an, Spain 2 Center for Atomic-scale Materials Design, Department of Physics, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark 3 Fritz-Haber-Institut der Max-Planck-Gesellschaft, Berlin, Germany\n\nWe use computational screening to systematically investigate the use of transition metal doped carbon nanotubes for chemical gas sensing. 
For a set of relevant target molecules (CO, NH3, H2S) and the main components of air (N2, O2, H2O), we calculate the binding energy and change in conductance upon adsorption on a metal atom occupying a vacancy of a (6,6) carbon nanotube. Based on these descriptors, we identify the most promising dopant candidates for detection of a given target molecule. From the fractional coverage of the metal sites in thermal equilibrium with air, we estimate the change in the nanotube resistance per doping site as a function of the target molecule concentration assuming charge transport in the diffusive regime. Our analysis points to Ni-doped nanotubes as candidates for CO sensors working under typical atmospheric conditions.\n\nPACS numbers: 73.63.-b, 68.43.-h, 73.50.Lw\n\nThe ability to detect small concentrations of specific chemical species is fundamental for a variety of industrial and scientific processes as well as for medical applications and environmental monitoring [1]. In general, nanostructured materials should be well suited for sensor applications because of their large surface to volume ratio which makes them sensitive to molecular adsorption. Specifically, carbon nanotubes (CNT) [2] have been shown to work remarkably well as detectors of small gas molecules. This has been demonstrated both for individual CNTs [3-8] as well as for CNT networks [9, 10].\n\nPristine CNTs are known to be chemically inert - a property closely related to their high stability. As a consequence, only radicals bind strong enough to the CNT to notably affect its electrical properties [2, 5, 11-13]. To make CNTs attractive for sensor applications thus requires some kind of functionalization, e.g. through doping or decoration of the CNT sidewall [13-21]. 
Ideally, this type of functionalization could be used to control not only the reactivity of the CNT but also the selectivity towards specific chemical species.\n\nIn this work we consider the possibility of using CNTs doped by 3d transition metal atoms for chemical gas sensing. We use computational screening to systematically identify the most promising dopant candidates for detection of three different target molecules (CO, NH3, H2S) under typical atmospheric conditions. The screening procedure is based on the calculation of two microscopic descriptors: the binding energy and scattering resistance of the molecules when adsorbed on a doped CNT. These two quantities give a good indication of the gas coverage and impact on the resistance. For the most promising candidates we then employ a simple thermodynamic model of the CNT sensor. In this model, the binding energies are used to obtain the fractional coverage of the metallic sites as a function of the target molecule concentration under ambient conditions. Under the assumption of transport in the diffusive rather than localization regime, the\n\nchange in CNT resistivity may then be obtained from the calculated coverages and single impurity conductances.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2538.pdf" - }, - { - "text": "all N impurities. At this point it suffices to see that the conservative estimates obtained from Eq. (7) predict measurable signals in response to small changes in concentration of the target molecules.\n\nTo our knowledge, controlled doping of CNTs with transition metal atoms has so far not been achieved. It has, however, been found that metal atoms incorporated into the CNT lattice during catalytic growth are afterwards very difficult to remove [30]. Furthermore, it has been shown that CNT vacancies, which are needed for the metallic doping, may be formed in a controlled way by irradiation by Ar ions [31]. 
This suggests that metallic doping of CNTs should be possible.\n\nIn summary, we have presented a general model of nanostructured chemical sensors which takes the adsorption energies of the relevant chemical species and their individual scattering resistances as the only input. On the basis of this model we have performed a computational screening of transition metal doped CNTs, and found that Ni-doped CNTs are promising candidates for detecting CO in a background of air. The model may be applied straightforwardly to other nanostructures than CNTs, other functionalizations than metal doping and other gas compositions than air.\n\nThe authors acknowledge financial support from Spanish MEC (FIS2007-65702-C02-01), 'Grupos Consolidados UPV/EHU del Gobierno Vasco' (IT-319-07), e-I3 ETSF project (Contract Number 211956), 'Red Espa˜nola de Supercomputaci'on', NABIIT and the Danish Center for Scientific Computing. The Center for Atomic-scale Materials Design (CAMD) is sponsored by the Lundbeck Foundation. JMG-L acknowledges funding from Spanish MICINN through Juan de la Cierva and Jos'e Castillejo programs.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2538.pdf" - }, - { - "text": "FIG. 1: Structural schematics and formation energy for a 3d transition metal occupied monovacancy (black), divacancy I (gray), or divacancy II (white) in a (6,6) carbon nanotube. Formation energies of the empty vacancies are indicated by dashed lines.\n\n\n\nis the total energy of the pristine nanotube with a physisorbed transition metal atom. We have considered the monovacancy and two divacancies shown in Fig. 1. The energy required to form an empty vacancy is obtained from\n\nE form [ VC ] = E [ VC ] + nE [ C ] -E [ NT ] , (2)\n\nwhere E [VC] is the total energy of the nanotube with a vacancy of n atoms.\n\nThe calculated formation energies for the 3d transition metals are shown in Fig. 1. 
From the horizontal lines we see that both divacancies are more stable than the monovacancy. This may be attributed to the presence of a two-fold coordinated C atom in the monovacancy, while all C atoms remain three-fold coordinated in the divacancies. When a transition metal atom occupies a vacancy, the strongest bonding to the C atoms is through its d orbitals [26]. For this reason, Cu and Zn, which both have filled d-bands, are rather unstable in the CNT. For the remaining metals, adsorption in the monovacancies leads to quite stable structures. This is because the three-fold coordination of the C atoms and the CNT's hexagonal structure are recovered when the metal atom is inserted. On the other hand, metal adsorption in divacancies is slightly less stable because of the resulting pentagon defects, see upper panel in Fig. 1. A similar behaviour has been reported by Krasheninnikov et al. for transition metal atoms in graphene [21].\n\nThe adsorption energies for N2, O2, H2O, CO, NH3, and H2S on the metallic site of the doped (6,6) CNTs are shown in Fig. 2(a). The adsorption energy of a molecule X is defined by\n\nE ads [ X @M@VC ] = E [ X @M@VC ] -E [ X ] -E [ M@VC ] , (3)\n\nFIG. 2: Calculated (a) adsorption energy E ads in eV and (b) change in conductance ∆ G in units of G 0 = 2 e 2 /h for N2, O2, H2O, CO, NH3, and H2S on 3d transition metals occupying a monovacancy (top), divacancy I (middle), and divacancy II (bottom) in a (6,6) carbon nanotube.\n\nwhere E [ X @M@VC] is the total energy of molecule X on a transition metal atom occupying a vacancy, and E [ X ] is the gas phase energy of the molecule.\n\nFrom the adsorption energies plotted in Fig. 2(a), we see that the earlier transition metals tend to bind the adsorbates stronger than the late transition metals. The latest metals in the series (Cu and Zn) bind adsorbates rather weakly in the divacancy structures. 
We also note that O2 binds significantly stronger than any of the three target molecules on Ti, V, Cr, and Mn (except for Cr in divacancy I where H2S is found to dissociate). Active sites containing these metals are therefore expected to be completely passivated if oxygen is present in the background. Further, we find H2O is rather weakly bound to most of the active sites. This ensures that these types of sensors are robust against changes in humidity.\n\nIn thermodynamic equilibrium [27], the coverage of the active sites follows from\n\nΘ[ X ] = K [ X ] C [ X ] 1 + ∑ Y K [ Y ] C [ Y ] , (4)\n\nwhere K = k + /k -is the ratio of forward and backward rate constants for the adsorption reaction,\n\nK [ X ] = exp [ -E ads [ X ] + TS [ X ] k B T ] . (5)\n\nIn these expressions C [ X ] is the concentration of species X , S [ X ] is its gas phase entropy and T is the temperature. Experimental values for the gas phase entropies have been taken from Ref. [28].", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2538.pdf" - }, - { - "text": "change in CNT resistivity may then be obtained from the calculated coverages and single impurity conductances.\n\nWe find that oxidation of the active metal site passivates the sensor in the case of doping by Ti, V, Cr, and Mn under standard conditions (room temperature and 1 bar of pressure). Among the remaining metals, we identify Ni as is the most promising candidate for CO detection. For this system the change in resistance per active site is generally significant ( > 1 Ω ) for small changes in CO concentration in the relevant range of around 0.1-10 ppm. 
Our approach is quite general and is directly applicable to other nanostructures than CNTs, other functionalizations than metal doping, and other backgrounds than atmospheric air.\n\nAll total energy calculations and structure optimizations have been performed with the real-space density functional theory (DFT) code GPAW [22] which is based on the projector augmented wave method. We use a grid spacing of 0.2 ˚ A for representing the density and wave functions and the PBE exchange correlation functional [23]. Transport calculations for the optimized structures have been performed using the nonequilibrium Green's function method [24] with an electronic Hamiltonian obtained from the SIESTA code [25] in a double zeta polarized (DZP) basis set. Spin polarization has been taken into account in all calculations.\n\nMetallic doping of a (6,6) CNT has been modeled in a supercell containing six repeated minimal unit cells along the CNT axis (dimensions: 15 ˚ A × 15 ˚ A × 14.622 ˚ A). For this size of supercell a Γ -point sampling of the Brillouin zone was found to be sufficient. The formation energy for creating a vacancy (VC) occupied by a transition metal atom (M) was calculated using the relation\n\nE form [ M @ VC ] = E [ M @ VC ] + nE [ C ] -E [ M@NT ] (1)\n\nwhere E [M@VC] is the total energy of a transition metal atom occupying a vacancy in the nanotube, n is the number of carbon atoms removed to form the vacancy, E [C] is the energy per carbon atom in a pristine nanotube, and E [M@NT]", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2538.pdf" - }, - { - "text": "to a certain extent the particle-particle attraction. Normally, the solution is deposited on to a plain silicon substrate that is covered by the native oxide layer only [34]. However, one may locally change the wetting behaviour of the solvent by further oxidising the substrate [38]. 
By adding excess thiol one can also vary the properties of the solvent [40].\n\nTwo different procedures are employed for the deposition of the solution on to the substrate: spincoating or a meniscus technique [61, 62]. The choice is important as it strongly influences the evaporation rate and, as a result, the pattern formation process. When using spin-coating, one finds that directly after deposition, evaporation competes with dewetting until all the solvent has evaporated. The resulting deposits of nanoparticles are imaged by atomic force microscopy (AFM). For spin-coated films, the evaporation rate is high and structuring is normally finished before the spincoater is stopped. Conversely, the solvent evaporation rate is strongly decreased when employing the meniscus technique [61], i.e., by depositing a drop of solution on a Teflon ring that is wetted by the solvent. This allows for a better control of the process and enables the use of contrast-enhanced microscopy to observe the dewetting process in situ [40]. All pattern formation is confined to the region of the receding contact line of toluene, silicon and air. With both techniques one may find mono-modal or bi-modal polygonal networks [34], labyrinthine spinodal structures, or branched patterns (see Fig. 1). The meniscus technique allows for the study of branched structures in a more controlled manner. The work in Ref. [40] indicates that fingering strongly depends on the interaction strength of the particles, i.e., on the chain length of the thiol molecules coating the gold cores. For short chains (C 5 and C 8 ) no formation of branched structures is observed. At similar concentrations, well-developed branched structures are formed for longer chains (C 10 and C 12 ). For even longer chains (C 14 ), however, one again finds less branching. It also depends on the amount of excess thiol in the solvent (for details see Ref. 
[40]).\n\nWhen following the evolution of the branched patterns in situ (see the complementary video material of Ref. [40]), one clearly observes that different processes occur on different lenght scales. First, a macroscopic dewetting front recedes, leaving behind a seemingly dry substrate. The macroscopic front can be transversely unstable resulting in large-scale ( > 100 µ m) strongly anisotropic fingered structures. For fronts that move relatively quickly these macroscopic structures cover all the available substrate. However, when at a later stage the macroscopic front becomes slower, those fingers become scarce and 'macroscopic fingering' finally ceases. At this stage it is possible to appreciate that the seemingly dry region left behind by the front is not at all dry, but covered by an ultrathin 'postcursor' film that is itself unstable. The thickness of this film", - "page_start": 5, - "page_end": 5, - "source_file": "1001.2669.pdf" - }, - { - "text": "- ∗ Electronic address: juanmaria.garcia@ehu.es\n- [1] Gas Sensing Materials, MRS Bull. , vol. 24 (1999).\n- [2] J. C. Chalier, X. Blase, and S. Roche, 'Electronic and transport properties of nanotubes', Rev. Mod. Phys. 79 (2), 677 (May 2007), doi:10.1103/RevModPhys.79.677.\n- [3] J. Kong, N. R. Franklin, C. Zhou, M. G. Chapline, S. Peng, K. Cho, and H. Dai, 'Nanotube molecular wires as chemical sensors', Science 287 (5453), 622 (Jan. 2000), doi:10.1126/science.287.5453.622.\n- [4] P. G. Collins, K. Bradley, M. Ishigami, and A. Zettl, 'Extreme oxygen sensitivity of electronic properties of carbon nanotubes', Science 287 (5459), 1801 (Mar. 2000), doi:10.1126/science.287.5459.1801.\n- [5] C. Hierold, Carbon Nanotube Devices: Properties, Modeling, Integration and Applications (Wiley-VCH, Weinheim, 2008).\n- [6] F. Villalpando-P'aez, A. H. Romero, E. Mu˜noz-Sandoval, L. M. Mart'ınez, H. Terrones, and M. Terrones, 'Fabrication of vapor and gas sensors using films of aligned CN x nanotubes', Chem. Phys. Lett. 
386 (1-3), 137 (Mar. 2004), doi:10.1016/j.cplett.2004.01.052.\n- [7] A. R. Rocha, M. Rossi, A. Fazzio, and A. J. R. da Silva, 'Designing real nanotube-based gas sensors', Phys. Rev. Lett. 100 (17), 176803 (May 2008), doi:10.1103/PhysRevLett.100.176803.\n- [8] S. Brahim, S. Colbern, R. Gump, and L. Grigorian, 'Tailoring gas sensing properties of carbon nanotubes', J. Appl. Phys. 104 (2), 024502 (Jul. 2008), doi:10.1063/1.2956395.\n- [9] C. Morgan, Z. Alemipour, and M. Baxendale, 'Variable range hopping in oxygen-exposed single-wall carbon nanotube networks', Phys. Stat. Solidi A 205 (6), 1394 (May 2008), doi:10.1002/pssa.200778113.\n- [10] D. J. Mowbray, C. Morgan, and K. S. Thygesen, 'Influence of O2 and N2 on the conductivity of carbon nanotube networks', Phys. Rev. B 79 (19), 195431 (May 2009), doi:10.1103/PhysRevB.79.195431.\n- [11] L. Valentini, F. Mercuri, I. Armentano, C. Cantalini, S. Picozzi, L. Lozzi, S. Santucci, A. Sgamellotti, and J. M. Kenny, 'Role of defects on the gas sensing properties of carbon nanotubes thin films: experiment and theory', Chem. Phys. Lett. 387 (4-6), 356 (Apr. 2004), doi:10.1016/j.cplett.2004.02.038.\n- [12] Z. Zanolli and J.-C. Charlier, 'Defective carbon nanotubes for single-molecule sensing', Phys. Rev. B 80 (15), 155447 (Oct. 2009), doi:10.1103/PhysRevB.80.155447.\n- [13] J. M. Garc'ıa-Lastra, K. S. Thygesen, M. Strange, and ' Angel Rubio, 'Conductance of sidewall-functionalized carbon nanotubes: Universal dependence on adsorption sites', Phys. Rev. Lett. 101 (23), 236806 (Dec. 2008), doi:10.1103/PhysRevLett.101.236806.\n- [14] S. B. Fagan, R. Mota, A. J. R. da Silva, and A. Fazzio, ' Ab initio study of an iron atom interacting with single-wall carbon nanotubes', Phys. Rev. B 67 (20), 205414 (May 2003), doi:10.1103/PhysRevB.67.205414.\n- [15] Y. Yagi, T. M. Briere, M. H. F. Sluiter, V. Kumar, A. A. Farajian, and Y. 
Kawazoe, 'Stable geometries and magnetic properties of single-walled carbon nanotubes doped with 3 d transition metals: A first-principles study', Phys. Rev. B 69 (7), 075414 (Feb 2004), doi:10.1103/PhysRevB.69.075414.\n- [16] S. H. Yang, W. H. Shin, J. W. Lee, S. Y. Kim, S. I. Woo, and J. K. Kang, 'Interaction of a transition metal atom with intrinsic defects in single-walled carbon nanotubes', J. Phys. Chem. B 110 (28), 13941 (Jun. 2006), doi:10.1021/jp061895q.\n- [17] K. T. Chan, J. B. Neaton, and M. L. Cohen, 'First-principles study of metal adatom adsorption on graphene', Phys. Rev. B 77 , 235430 (Jun. 2008), doi:10.1103/PhysRevB.77.235430.\n- [18] C. S. Yeung, L. V. Liu, and Y. A. Wang, 'Adsorption of small gas molecules onto Pt-doped single-walled carbon nanotubes', J. Phys. Chem. C 112 (19), 7401 (Apr. 2008), doi:10.1021/jp0753981.\n- [19] T. Vo, Y.-D. Wu, R. Car, and M. Robert, 'Structures, interactions, and ferromagnetism of Fe-carbon nanotube systems', J. Phys. Chem. C 112 (22), 400 (May 2008), doi:10.1021/jp0761968.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2538.pdf" - }, - { - "text": "FIG. 3: Fractional coverage Θ in thermal equilibrium of Ni in a (a) monovacancy, (b) divacancy I, (c) divacancy II and (d) change in resistance ∆ R per dopant site as a function of CO concentration in a background of air at room temperature and 1 bar of pressure. The reference concentration of CO is taken to be C 0 = 0.1 ppm. Note the change from linear to log scale on the y -axis at ∆ R = 10 Ω .\n\n\n\nFor a given background composition we may thus estimate the fractional coverages for each available adsorbate for a given type of doping. As an example, Fig. 3(a)-(c) shows the fractional coverage of a Ni atom occupying a monovacancy, divacancy I, and divacancy II, versus CO concentration in a background of air at room temperature and 1 bar of pressure. 
Due to the relatively small binding energy of N2 and H2O as compared to O2 and CO, all Ni sites will be either empty or occupied by O2 or CO. In particular, Ni in a monovacancy (top panel of Fig. 3) will be completely oxidized for all relevant CO concentrations. For the Ni occupied divacancy II structures we find the coverage of CO changes significantly around toxic concentrations ( ∼ 10 ppm).\n\nTo estimate the effect of adsorbates on the electrical conductance of doped CNTs, we first consider the change in conductance when a single molecule is adsorbed on a metal site of an otherwise pristine CNT. In Fig. 2(b) we show the calculated change in conductance relative to the metal site with no adsorbate. In contrast to the binding energies, there are no clear trends in the conductances. The sensitivity of the conductance is perhaps most clearly demonstrated by the absence of correlation between different types of vacancies, i.e. between the three panels in Fig. 2(b). Close to the Fermi level, the conductance of a perfect armchair CNT equals 2 G 0 . The presence of the metal dopant leads to several dips in the transmission function known as Fano antiresonances [20]. The position and shape of these dips depend on the d -levels of the transition metal atom, the character of its bonding to the CNT, and is further affected by the presence of the adsorbate molecule. The coupling of all these factors is very complex and makes it difficult to estimate or rationalize the value of the conductance. For the spin polarized cases, we use the spin-averaged\n\nconductances, i.e. G = ( G ↑ + G ↓ ) / 2.\n\nNext, we estimate the resistance of a CNT containing several impurities (a specific metal dopant with different molecular adsorbates). 
Under the assumption that the electron phasecoherence length, l φ , is smaller than the average distance between the dopants, d , we may neglect quantum interference and obtain the total resistance by adding the scattering resistances due to each impurity separately. The scattering resistance due to a single impurity is given by\n\nR s ( X ) = 1 /G ( X ) -1 / ( 2 G 0 ) , (6)\n\nwhere G ( X ) is the Landauer conductance of the pristine CNT with a single metal dopant occupied by molecule X and 1 / ( 2 G 0 ) is the contact resistance of a (6,6) CNT.\n\nWe may now obtain the total resistance per dopant site relative to the reference background signal as a function of the target molecule concentration\n\n∆ R N ≈ ∑ X R s ( X )(Θ[ X,C ] -Θ[ X,C 0 ]) , (7)\n\nwhere N is the number of dopants, Θ[ X,C ] is the fractional coverage of species X at concentration C of the target and C 0 is the reference concentration. Notice that the contact resistance drops out as we evaluate a change in resistance.", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2538.pdf" - }, - { - "text": "## INVESTING IN OUR WORLD AND OUR PEOPLE »\n\nAs we explore for and produce clean, affordable, abundant, American natural gas, we provide an important solution to our nation's energy challenges and its quest for energy independence. With at least a 200year supply of natural gas located right here in the U.S., this versatile fuel can be used to not only heat homes, create electricity and meet America's transportation needs, but also to fuel the country's future by creating jobs and stimulating local and national economies through investment and taxes.\n\n## Environmentally Friendly Operations\n\nAt Chesapeake, we realize that the way a great product is produced is as important as the product itself. For example, we have helped pioneer the use of multiwell padsites to drill up to 16 wells from a single location, greatly reducing our land and road use and overall environmental footprint. 
We use the latest horizontal and directional drilling technology to place wells at a safe distance from homes, schools and businesses. In addition, we build and maintain access roads and work to eliminate soil erosion near our sites, as well as restore local vegetation.\n\nWe implement advanced, modern protective measures known as Best Management Practices (BMPs) to help ensure energy development is conducted in an environmentally responsible manner. Procedures are implemented throughout our operations to protect freshwater aquifers and reduce environmental impacts. BMPs protect wildlife, air quality, water and landscapes as we work to develop vitally needed domestic energy sources.\n\nImplemented throughout the entire life cycle of a well, BMPs can be as simple as strategically placing a berm, or land barrier, on locations to control surface water runoff. Others involve cutting-edge operational technologies such as utilizing the most advanced techniques offered in drilling fluids, well casing and cement design. Regardless of complexity, all BMPs are based on the idea that the environmental footprint of\n\nenergy development should be as small and temporary as possible. These practices are continually evolving and further improving as Chesapeake and the industry develop new innovative techniques and approaches to business.\n\nIn addition to our BMPs, Chesapeake has also initiated several innovative internal programs focused on water recycling and greener hydraulic fracturing processes.\n\n## Aqua Renew ®\n\nCreated to meet the challenge of reducing our water usage, Chesapeake's Aqua Renew ® program uses state-of-the-art technology to recycle pro-\n\nduced water. 
Since the company's preliminary reclamation project in\n\n\n\n\n\n2006, our focus on water reuse and conservation has become a companywide endeavor, stretching from the Barnett Shale of North Texas to the Marcellus Shale of northern Pennsylvania.\n\nThe Aqua Renew program has yet to find a limit to how much recycled water could be used without compromising well production. In fact, our Marcellus Shale operations are treating and recycling virtually 100% of produced water (more than 10 million gallons per month) for reuse in our hydraulic fracturing operations. Properly conducted modern fracking is a highly engineered, controlled, sophisticated and safe procedure.\n\nWith such large volumes of recycled water, the company is seeing more than just environmental advantages. We estimate that this\n\nGreen operations - Chesapeake's Best Management Practices ensure our operations are as environmentally friendly as possible, while protecting our employees, neighbors and the areas where we operate.\n\n", - "page_start": 27, - "page_end": 27, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "- [19] T. Vo, Y.-D. Wu, R. Car, and M. Robert, 'Structures, interactions, and ferromagnetism of Fe-carbon nanotube systems', J. Phys. Chem. C 112 (22), 400 (May 2008), doi:10.1021/jp0761968.\n- [20] J. A. Furst, M. Brandbyge, A.-P. Jauho, and K. Stokbro, ' Ab initio study of spin-dependent transport in carbon nanotubes with iron and vanadium adatoms', Phys. Rev. B 78 (19), 195405 (Nov. 2008), doi:10.1103/PhysRevB.78.195405.\n- [21] A. V. Krasheninnikov, P. O. Lehtinen, A. S. Foster, P. Pyykko, and R. M. Nieminen, 'Embedding transitionmetal atoms in graphene: Structure, bonding, and magnetism', Phys. Rev. Lett. 102 (12), 126807 (Mar. 2009), doi:10.1103/PhysRevLett.102.126807.\n- [22] J. J. Mortensen, L. B. Hansen, and K. W. Jacobsen, 'Real-space grid implementation of the projector augmented wave method', Phys. Rev. B 71 (3), 035109 (Jan. 
2005), doi:10.1103/PhysRevB.71.035109.\n- [23] J. P. Perdew, K. Burke, and M. Ernzerhof, 'Generalized gradient approximation made simple', Phys. Rev. Lett. 77 (18), 3865 (Oct. 1996), doi:10.1103/PhysRevLett.77.3865.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2538.pdf" - }, - { - "text": "FIG. 2: Typical KMC results for the final dried-in nanoparticle structures resulting from the evaporative dewetting processes of nanoparticle solutions (nanofluids) in the case of (a) a spinodal-like process at µ = -2 . 55 , (b) nucleation and growth of holes at µ = -2 . 3 , (c) unstable fronts at µ = -2 . 3 and low mobility M = 5 , and (d) unstable fronts at µ = -2 . 3 and medium mobility M = 10 . The starting configuration in (a) and (b) is a homogeneous liquid film with uniformly distributed particles whereas in (c) and (d) a hole at the center is nucleated 'by hand'. The remaining parameters are (a,b) M = 50 , glyph[epsilon1] nl = 2 . 0 , glyph[epsilon1] nn = 1 . 5 , ρ av n = 0 . 2 , kT = 0 . 3 , MC steps = 500 , domain size 1200 × 1200 ; (c,d) ε nn = 2 . 0 , glyph[epsilon1] nl = 1 . 5 , ρ av n = 0 . 2 , kT = 0 . 2 , MC steps = 3000 , domain size 1200 × 1200 . Lattice sites occupied by particles are coloured black, and the empty sites are coloured white.\n\n", - "page_start": 10, - "page_end": 10, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HNI_2003.pdf", - "query": "How many employees did HON Industries count in 2003 ?", - "target_page": 15, - "target_passage": "Members (employees) at year-end : 8,926", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## HON INDUSTRIES Inc. 
and SUBSIDIARIES", - "page_start": 56, - "page_end": 56, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## HON INDUSTRIES 2003\n\n## FINANCIAL HIGHLIGHTS", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## BOARD OF DIRECTORS AND OFFICERS\n\n## BOARD OF DIRECTORS\n\n## Stan A. Askren\n\nPresident, HON INDUSTRIES Inc.\n\n## Gary M. Christensen\n\nRetired President and\n\nChief Executive Officer,\n\nPella Corporation\n\n## Cheryl A. Francis\n\nAdvisor/Consultant Former Executive Vice President and Chief Financial Officer,\n\nRR Donnelley & Sons\n\n## Robert L. Katz\n\nPresident,\n\nRobert L. Katz and Associates\n\n## Dennis J. Martin\n\nChairman, President and\n\nChief Executive Officer,\n\nGeneral Binding Corporation\n\n## Jack D. Michaels\n\nChairman and Chief Executive Officer, HON INDUSTRIES Inc.\n\n## Joseph Scalzo\n\nVice President and President, Personal Care Products,\n\nThe Gillette Company\n\n## Abbie J. Smith\n\nChaired Professor,\n\nThe University of Chicago\n\nGraduate School of Business\n\n## Richard H. Stanley\n\nVice Chairman, HON INDUSTRIES Inc.\n\nChairman, SC Companies, Inc.\n\nChairman, Stanley Consultants, Inc.\n\n## Brian E. Stern\n\nPresident,\n\nXerox Supplies Technology Enterprises\n\nXerox Corporation\n\n## Ronald V. Waters, III\n\nChief Operating Officer,\n\nWm. Wrigley Jr. Company\n\n## COMMITTEES OF THE BOARD\n\nAUDIT\n\nCheryl A. Francis, Chairperson\n\nDennis J. Martin\n\nRonald V. Waters, III\n\n## HUMAN RESOURCES AND COMPENSATION\n\nGary M. Christensen, Chairperson\n\nRobert L. Katz\n\nAbbie J. Smith\n\n## PUBLIC POLICY AND CORPORATE GOVERNANCE\n\nRichard H. Stanley, Chairperson\n\nJoseph Scalzo\n\nBrian E. Stern\n\n## HON INDUSTRIES INC. OFFICERS\n\nJack D. Michaels\n\nChairman and Chief Executive Officer\n\n## Stan A. Askren\n\nPresident\n\nPeter R. Atherton\n\nVice President and Chief Technology Officer\n\nJerald K. 
Dittmer\n\nVice President and Chief Financial Officer\n\nRobert J. Driessnack\n\nVice President, Controller\n\n## Melinda C. Ellsworth\n\nVice President, Treasurer and\n\nInvestor Relations\n\n## Jeffrey D. Fick\n\nVice President, Member and\n\nCommunity Relations\n\nMalcolm C. Fields\n\nVice President and Chief Information Officer\n\nJames I. Johnson\n\nVice President, General Counsel and Secretary\n\nTimothy R. Summers\n\nVice President, Lean Enterprise\n\n## SUBSIDIARIES\n\nDavid C. Burdakin\n\nExecutive Vice President, HON INDUSTRIES, Inc.\n\nPresident, The HON Company\n\n## Brad D. Determan\n\nPresident,\n\nHearth and Home Technologies Inc.\n\n## Thomas D. Head\n\nVice President,\n\nGeneral Manager, Holga Inc.\n\nEric K. Jungbluth\n\nPresident, Allsteel Inc.\n\nDonald T. Mead\n\nPresident, The Gunlocke Company L.L.C.\n\n## Marco V. Molinari\n\nPresident, International and Business\n\nDevelopment\n\nJean M. Reynolds\n\nPresident, Maxon Furniture Inc.\n\n## Thomas A. Tolone\n\nPresident, Paoli Inc.", - "page_start": 61, - "page_end": 61, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## I N V E S T O R I N F O R M A T I O N\n\n## SCHEDULE OF QUARTERLY RESULTS\n\nThe Company operates on a fiscal year ending on the Saturday nearest December 31. Quarterly results are typically announced within 25 days after the end of each quarter, and audited results are typically announced within 40 days after year-end.\n\n## FISCAL 2004 QUARTER-END DATES\n\n1st Quarter: Saturday, April 3\n\n2nd Quarter: Saturday, July 3\n\n3rd Quarter: Saturday, October 2\n\n4th Quarter: Saturday, January 1\n\n## ANNUAL MEETING\n\nThe Company's annual shareholders' meeting will be held at 10:30 a.m. on May 4, 2004, at the Holiday Inn, Highways 61 & 38 North, Muscatine, Iowa. 
Shareholders and other interested investors are encouraged to attend the meeting.\n\n## I NVESTOR RELATIONS\n\nSend inquiries to:\n\nInvestor Relations\n\nHON INDUSTRIES Inc.\n\n414 East Third Street\n\nMuscatine, IA 52761\n\nTelephone: 563.264.7400\n\nFax: 563.264.7655\n\nE-mail: investorrelations@honi.com\n\n## CORPORATE HEADQUARTERS\n\nHON INDUSTRIES Inc.\n\n414 East Third Street\n\nP.O. Box 1109\n\nMuscatine, IA 52761-0071\n\nTelephone: 563.264.7400\n\nFax: 563.264.7217\n\nWebsite: www.honi.com\n\n## I NDEPENDENT PUBLIC ACCOUNTANTS\n\nPricewaterhouseCoopers LLP\n\nOne North Wacker Drive\n\nChicago, IL 60606\n\n## FORWARD-LOOKING STATEMENTS\n\nStatements in this report that are not strictly historical, including statements as to plans, objectives, and future financial performance, are 'forward-looking' statements that are made pursuant to the safe harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements involve known and unknown risks, which may cause the Company's actual results in the future to differ materially from expected results. 
These risks include, among others:\n\n - · competition within the office furniture and fireplace industries, including competition from imported products and competitive pricing;\n - · increases in the cost of raw materials, including steel, which is the Company's largest raw material category;\n - · increases in the cost of health care benefits provided by the Company;\n - · reduced demand for the Company's storage products caused by changes in office technology; including the change from paper record storage to electronic record storage;\n - · the effects of economic conditions, on demand for office furniture, customer insolvencies and related bad debts and claims against the Company that it received preferential payments;\n - · changes in demand and order patterns from the Company's customers, particularly its top ten customers, which represented approximately 36% of net sales in 2003;\n - · issues associated with acquisitions and integration of acquisitions;\n - · the ability of the Company to realize cost savings and productivity improvements from its cost containment and business simplification initiatives;\n - · the ability of the Company to realize financial benefits from investments in new products;\n - · the ability of the Company's distributors and dealers to successfully market and sell the Company's products;\n - · the availability and cost of capital to finance planned growth; and\n - · other risks, uncertainties, and factors described from time to time in the Company's filings with the Securities and Exchange Commission.\n\nWe caution the reader that the above list of factors may not be exhaustive. 
The Company does not assume any obligation to update any forward-looking statement, whether as a result of new information, future events or otherwise.\n\n## COMMON STOCK", - "page_start": 62, - "page_end": 62, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## MANAGEMENT'S DISCUSSION AND ANALYSIS\n\nThe following discussion of the Company's historical results of operations and of its liquidity and capital resources should be read in conjunction with the Consolidated Financial Statements of the Company and related notes.\n\n## Overview\n\nThe Company has two reportable core operating segments: office furniture and hearth products. The Company is the second largest office furniture manufacturer in the United States and the nation's leading manufacturer and marketer of gas- and wood-burning fireplaces.\n\nFrom 2000 to 2003, the office furniture industry experienced an unprecedented three-year decline due to the challenging economic environment. In 2003, this decline negatively impacted the Company's office furniture segment. In contrast, the housing market was at record high levels during 2003, which positively impacted the Company's hearth segment. The Company outperformed its peers in both segments in which it competes. The Company gained market share by providing strong brands, innovative products and services, and greater value to its end-users. Fiscal 2003 also included an extra week of activity due to the Company's 52/53-week fiscal year.\n\nNet sales were $1.8 billion in 2003, as compared to $1.7 billion in 2002. The increase in net sales reflects the 9% increase in the hearth segment and the additional week of business activity. In 2003 and 2002, the Company recorded restructuring charges and accelerated depreciation related to the closure and consolidation of office furniture facilities totaling $15.2 million and $3.0 million, respectively. 
Gross margins increased to 36.4% in 2003 from 35.4% in 2002 due to benefits from restructuring initiatives and its rapid continuous improvement program, new products, and increased price realization. The Company also invested aggressively in brand building and selling initiatives in 2003. Net income was $98.1 million or $1.68 per diluted share in 2003, as compared to $91.4 million or $1.55 per diluted share in 2002.\n\nThe Company generated $141.3 million in cash flow from operating activities and increased its cash position, including shortterm investments, by $48.6 million to $204.2 million. The Company paid dividends of $30.3 million and repurchased $21.5 million of its common stock, while investing $35.7 million in net capital expenditures and repaying $20.2 million of debt.\n\n## Critical Accounting Policies and Estimates GENERAL\n\nManagement's Discussion and Analysis of Financial Condition and Results of Operations is based upon the Consolidated Financial Statements, which have been prepared in accordance with GAAP. The preparation of these financial statements requires management to make estimates and assumptions that affect the reported amounts of assets, liabilities, revenue and expenses, and related disclosure of contingent assets and liabilities. Management bases its estimates on historical experience and on various other assumptions that are believed to be reasonable under the circumstances, the results of which form the basis for making judgments about the carrying values of assets and liabilities that are not readily apparent from other sources. Senior management has discussed the development, selection and disclosure of these estimates with the Audit Committee of our Board of Directors. 
Actual results may differ from these estimates under different assumptions or conditions.", - "page_start": 31, - "page_end": 31, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## Retirement Benefits\n\nThe Company has defined contribution profit-sharing plans covering substantially all employees who are not participants in certain defined benefit plans. The Company's annual contribution to the defined contribution plans is based on employee eligible earnings and results of operations and amounted to $26,489,000, $23,524,000, and $24,826,000 in 2003, 2002, and 2001, respectively.\n\nThe Company sponsors defined benefit plans which include a limited number of salaried and hourly employees at certain subsidiaries. The Company's funding policy is generally to contribute annually the minimum actuarially computed amount. Net pension costs relating to these plans were $176,000; $0; and $0 for 2003, 2002, and 2001, respectively. The actuarial present value of obligations, less related plan assets at fair value, is not significant.\n\nThe Company also participates in a multiemployer plan, which provides defined benefits to certain of the Company's union\n\nemployees. Pension expense for this plan amounted to $309,000, $309,000, and $310,000 in 2003, 2002, and 2001, respectively.\n\n## Postretirement Health Care\n\nIn accordance with the guidelines of revised SFAS No. 
132, 'Employers' Disclosures about Pensions and other Postretirement Benefits,' the following table sets forth the funded status of the plan, reconciled to the accrued postretirement benefits cost recognized in the Company's balance sheet at:", - "page_start": 50, - "page_end": 50, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## afkljdf aojvoaipddddS EEKING I N V E S T O R S FOR A PERFECT MATCH\n\nJoin us in the dynamic, aggressive, profitable growth of HON INDUSTRIES.\n\nTHE BEST IS YET TO COME!\n\nManagement's Discussion and Analysis … 32 Consolidated Financial Statements and Notes … 39 Eleven-Year Summary … 56 Reports of Independent Auditors … 58 A Message from the Board of Directors … 61 Board of Directors and Officers … 62", - "page_start": 30, - "page_end": 30, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## SHAREHOLDER INFORMATION\n\nApplied Industrial Technologies, Inc. common stock is listed on the New York Stock Exchange under the symbol AIT. The Company is identified in most financial listings as 'AppliedIndlTch.'\n\n## RESEARCH ON APPLIED INDUSTRIAL TECHNOLOGIES IS AVAILABLE THROUGH:\n\n## BB&T CAPITAL MARKETS\n\nHolden Lewis, 703/471-3894\n\n## CJS SECURITIES\n\nJonathan Tanwanteng, 914/287-7600\n\n## CLEVELAND RESEARCH COMPANY\n\nAdam Uhlman, 216/649-7241\n\n## KEYBANC CAPITAL MARKETS\n\nJeffrey D. Hammond, 216/689-0236\n\n## SIDOTI & CO.\n\nJoseph Mondillo, 212/894-3339\n\nGREAT LAKES REVIEW - Division of\n\nWellington Shields & Co.\n\nElliott Schlang, 216/767-1340\n\n## STEPHENS INC.\n\nMatt Duncan, 501/377-3723\n\n## WELLS FARGO SECURITIES, LLC\n\nAllison Poliniak-Cusic, 212/214-5062\n\n## WUNDERLICH SECURITIES\n\nBrent D. Rakers, 901/251-2236\n\n## SHAREHOLDER INQUIRIES\n\nRequests to transfer Applied Industrial Technologies, Inc. 
shares and all correspondence regarding address change information, duplicate mailings, missing certificates, failure to receive dividend checks in a timely manner or to participate in the Company's direct stock purchase program should be directed to the Company's transfer agent and registrar:\n\n## COMPUTERSHARE TRUST COMPANY, N.A.\n\n250 Royall Street Canton, MA 02021 800/988-5291\n\n## ANNUAL REPORT ON FORM 10-K\n\nThe Applied Industrial Technologies, Inc. Annual Report on Form 10-K for the fiscal year ended June 30, 2012, including the financial statements and schedules thereto, is available at our website at www.Applied.com. It is also available without charge upon written request to the Vice President - Chief Financial Officer & Treasurer at the address shown.\n\n## ANNUAL MEETING\n\nThe Annual Meeting of Shareholders will be held at 10:00 a.m., Tuesday, October 23, 2012, at the Corporate Headquarters of Applied Industrial Technologies, 1 Applied Plaza, East 36th and Euclid Avenue, Cleveland, Ohio 44115.\n\n## COMPARISON OF FIVE-YEAR CUMULATIVE TOTAL RETURN\n\nApplied Industrial Technologies, Inc., Standard & Poor's 500, and Peer Group (Performance Results from 7/1/2007 through 6/30/2012)\n\n\n\nAssumes $100 invested at the close of trading 6/30/07 in Applied Industrial Technologies, Inc. common stock, Standard & Poor's 500, and Peer Group.\n\nCumulative total return assumes reinvestment of dividends.\n\nThe returns of the companies in the Peer Group are weighted based on the companies' relative stock market capitalization.\n\nPeer Group companies selected on a line-of-business basis include: DXP Enterprises, Inc.; Fastenal Company; Genuine Parts Company; W. W. 
Grainger, Inc.; Kaman Corporation; Lawson Products, Inc.; MSC Industrial Direct Co., Inc.; and WESCO International, Inc.\n\n| | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 |\n|---------------------------------------|---------|--------|--------|--------|---------|---------|\n| Applied Industrial Technologies, Inc. | $100.00 | $83.63 | $70.22 | $92.62 | $133.17 | $141.07 |\n| Standard & Poor's 500 | 100.00 | 86.88 | 64.11 | 73.36 | 95.88 | 101.10 |\n| Peer Group | 100.00 | 86.96 | 74.77 | 100.34 | 148.47 | 170.81 |\n\nSource: Value Line Publishing LLC\n\n## INVESTOR RELATIONS INQUIRIES SHOULD BE DIRECTED TO:\n\n## MARK O. EISELE\n\nVice President - Chief Financial Officer\n\n - & Treasurer\n\nApplied Industrial Technologies\n\n - 1 Applied Plaza\n\nCleveland, OH 44115-5014\n\nTelephone: 216/426-4000, Fax: 216/426-4845", - "page_start": 46, - "page_end": 46, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "## A MESSAGE FROM THE BOARD OF DIRECTORS\n\n## Dear Shareholders:\n\nWe, the members of the HON INDUSTRIES Board of Directors, believe that integrity is central to good corporate governance. This belief is reflected in the HON INDUSTRIES vision statement (shown on the back of this annual report), adopted many years ago. Our Vision statement represents much more than a traditional 'mission,' and it goes much deeper than company policy. The beliefs and values represented in that document are the very foundation of our corporate culture, and guide the attitude and actions of every member, every day.\n\nFrom its beginnings, HON INDUSTRIES has sought to implement its vision through sound policies and practices, and by maintaining a strong Board composed predominantly of outside directors. 
We are fully committed to executing our responsibilities, and we will continue to maintain the company's long-standing tradition of an independent, well-informed, active, and engaged Board of Directors.\n\nOur board meetings and procedures have been developed and refined to encourage open and informed communication. The company's accounting policies have always been conservative and straightforward. The Board's three committees - Audit; Human Resources and Compensation; Public Policy and Corporate Governance - have consisted entirely of non-management directors for many years.\n\nDuring 2003, we have given significant attention to the newly released rules emanating from the Sarbanes-Oxley Act of 2002 and the New York Stock Exchange listing requirements - rules intended to improve corporate governance across the country. It is gratifying to report that HON INDUSTRIES governance practices were already in accord with the spirit of the rules.\n\nIt is an honor to serve as directors of HON INDUSTRIES. We are very proud to represent you, the shareholder, as we oversee the management of this great company. Please be assured that we intend to remain vigilant and focused on good corporate governance.\n\n## Sincerely,\n\nThe HON INDUSTRIES Board of Directors\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nStan A. Askren\n\nGary M. Christensen\n\nCheryl A. Francis\n\nRobert L. Katz\n\nDennis J. Martin\n\nJack D. Michaels\n\nJoseph Scalzo\n\nAbbie J. Smith\n\nRichard H. Stanley\n\nBrian E. Stern\n\nRonald V. Waters, III\n\n\n\n\n\n\n\n", - "page_start": 60, - "page_end": 60, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "Figure 18: Employment types in EU27, development 2005 to 2022 65 - Eurostat\n\n\n\nThe minor deviation of the sum of the different types of employment to the 100% 'Employed persons' is due to 'No response' answers. 
The data of part-time employees and of employees with a temporary contract are for the full year 2019, not for Q4.\n\nThe group 'employees' is characterised by two major contractual distinctions that are important for OSH: 1) full- or part-time work, and 2) the time limit of the contract (indefinite or temporary). Moreover, in many Member States there are major differences between employment contracts of private employers in comparison to public employers.\n\n## Definitions Eurostat 66\n\nEmployers = self-employed with employee: employing one or more employees: persons who work in their own business, professional practice or farm for the purpose of earning a profit and who employ at least one other person.\n\nSelf-employed: not employing any employees (self-employed without employees): persons who work in their business, professional practices or farm for the purpose of earning a profit and who employ no other persons.\n\nEmployees: persons who work for a public or private employer and who receive compensation in the form of wages, salaries, fees, gratuities, payment by result or in kind. Contributing family workers: persons who help another member of the family to run a farm or business, provided they are not classed as employees.", - "page_start": 46, - "page_end": 46, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed8.pdf", - "query": "Did automating the writing of EM-to-IP handoffs notes using LLM lead to life-threatening outputs ?", - "target_page": 8, - "target_passage": "none of the incorrect output text elements reached life-threatening risk", - "chunk_present": { - "presence": true, - "index": 7 - } - }, - "top_chunk": [ - { - "text": "Abstract (continued)\n\nand safety via a novel evaluation framework. 
This study suggests the importance of a physician-inloop implementation design for this model and demonstrates an effective strategy to measure preimplementation patient safety of LLM models.\n\nJAMANetwork Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723\n\n## Introduction\n\nHandoffs, where patient information is exchanged between health professionals during a transfer of clinical responsibility, have been identified as a critical source of medical errors. 1,2 The Joint Commission, the Accreditation Council for Graduate Medical Education, and the Association of American Medical Colleges have all recommended the development of high-quality and standardized handoff processes to address the substantial patient risk of this ubiquitous event. 3,4 Implementing handoff tools has previously demonstrated significant reductions in medical errors. 5,6 High-quality handoffs from emergency medicine (EM) to inpatient (IP) services (EM-to-IP) are challenged by medical complexity, diagnostic uncertainty, rapidly evolving care plans, and time constraints. 7-10 The EM-to-IP handoff structure is not well standardized, frequently communicated verbally, and poorly adhered to in emergency departments (EDs), including in medical centers with formalized handoff systems. 11-14 Prior research has demonstrated that suboptimal EM-to-IP handoff is associated with adverse events, EM leaders and front-line clinicians themselves view the EM-to-IP handoff as high risk, and an electronic health record (EHR)-based technology is commonly mentioned as the most desired assistive tool in improving ED transitions of care. 15-18 Limited work to date has demonstrated EMelectronic handoff tools as feasible, efficient, and effective. 
19-21 In April 2023, EM and internal medicine leadership of the study site collaboratively developed and launched a mandatory, EHR-based handoff workflow via a standardized EM-to-IP handoff note template, designed for realtime completion by the EM care team at time of admission. At 3 and 6 months postlaunch, informal evaluation of new EM-to-IP handoff notes through random medical record review and unstructured clinician feedback sessions revealed variable completeness, quality, and subsequent usefulness of the handoff notes.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed8.pdf" - }, - { - "text": "\n\n\n\n## Original Investigation | Emergency Medicine\n\n## DevelopingandEvaluatingLargeLanguageModel-GeneratedEmergencyMedicine HandoffNotes\n\nVince Hartman, MS; Xinyuan Zhang, PhD; Ritika Poddar, MS; Matthew McCarty, MD; Alexander Fortenko, MD, MPH; Evan Sholle, MS; Rahul Sharma, MD, MBA; Thomas Campion Jr, PhD; Peter A. D. Steel, MA, MBBS\n\n## Abstract\n\nIMPORTANCE An emergency medicine (EM) handoff note generated by a large language model (LLM) has the potential to reduce physician documentation burden without compromising the safety of EM-to-inpatient (IP) handoffs.\n\nOBJECTIVE To develop LLM-generated EM-to-IP handoff notes and evaluate their accuracy and safety compared with physician-written notes.\n\nDESIGN, SETTING, AND PARTICIPANTS This cohort study used EM patient medical records with acute hospital admissions that occurred in 2023 at NewYork-Presbyterian/Weill Cornell Medical Center. A customized clinical LLM pipeline was trained, tested, and evaluated to generate templated EM-to-IP handoff notes. 
Using both conventional automated methods (ie, recall-oriented understudy for gisting evaluation [ROUGE], bidirectional encoder representations from transformers score [BERTScore], and source chunking approach for large-scale inconsistency evaluation [SCALE]) and a novel patient safety-focused framework, LLM-generated handoff notes vs physician-written notes were compared. Data were analyzed from October 2023 to March 2024.\n\nEXPOSURE LLM-generated EM handoff notes.\n\nMAINOUTCOMESANDMEASURES LLM-generated handoff notes were evaluated for (1) lexical similarity with respect to physician-written notes using ROUGE and BERTScore; (2) fidelity with respect to source notes using SCALE; and (3) readability, completeness, curation, correctness, usefulness, and implications for patient safety using a novel framework.\n\nRESULTS In this study of 1600 EM patient records (832 [52%] female and mean [SD] age of 59.9 [18.9] years), LLM-generated handoff notes, compared with physician-written ones, had higher ROUGE(0.322 vs 0.088), BERTScore (0.859 vs 0.796), and SCALE scores (0.691 vs 0.456), indicating the LLM-generated summaries exhibited greater similarity and more detail. As reviewed by 3 board-certified EM physicians, a subsample of 50 LLM-generated summaries had a mean (SD) usefulness score of 4.04 (0.86) out of 5 (compared with 4.36 [0.71] for physician-written) and mean (SD) patient safety scores of 4.06 (0.86) out of 5 (compared with 4.50 [0.56] for physician-written). None of the LLM-generated summaries were classified as a critical patient safety risk.\n\nCONCLUSIONSANDRELEVANCE In this cohort study of 1600 EM patient medical records, LLM-generated EM-to-IP handoff notes were determined superior compared with physician-written summaries via conventional automated evaluation methods, but marginally inferior in usefulness\n\n(continued)\n\n\n\nOpenAccess. This is an open access article distributed under the terms of the CC-BY License.\n\nJAMANetwork Open. 
2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723\n\n(Reprinted)\n\n## KeyPoints\n\nQuestion Can a large language model (LLM) generate emergency medicine (EM)-to-inpatient (IP) handoff notes that are useful and safe for EM care?\n\nFindings In this cohort study of 1600 EMpatient medical records using a novel evaluation framework, the LLM-generated EM-to-IP handoff notes had a mean usefulness of 4.04 out of 5 (compared with 4.36 for physician-written) and a mean patient safety of 4.06 out of 5 (compared with 4.50 for physician-written) with no critical patient safety risks.\n\nMeaning These findings suggest the value of a manual, patient safetyfocused clinical evaluation of LLM models and the potential of LLM-generated handoff notes to create a new standard of care in EM.\n\n\n\n+\n\n\n\nInvited Commentary\n\n## + Supplemental content\n\nAuthor affiliations and article information are listed at the end of this article.", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed8.pdf" - }, - { - "text": "evaluation frameworks may not address the anticipated effect LLM performance limitations could have on patient safety. 38-41\n\nIn this study, we aim to expand on prior work of clinical summarization to rigorously evaluate the outcomes of a fine-tuned model developed to generate accurate and safe summaries of the care rendered during an ED visit, with the long-term goal of integrating automated, structured EM-to-IP handoff notes into an EHR-based electronic handoff admission workflow (see eAppendix 1 in Supplement 1). We fine-tune pretrained LLMs on well curated datasets of structured and unstructured EHR data from the ED encounter to summarize the patient's ED care. We improved the correctness of model generations and customized the summaries in a structured format designed by a team of EM and internal medicine physician leaders for optimal usefulness. 
We proposed a novel patient safety-focused LLM evaluation framework to examine the LLM-generated handoff notes' quality and accuracy and the downstream patient safety implications of any identified inaccuracies. To evaluate noninferiority, we compared the LLM-generated handoff notes with the preexisting physician-written EM-to-IP handoff notes as the active control, using both the proposed patient safety-focused clinical evaluation framework and automated benchmark-driven methods. We used the physician-written EM-to-IP handoff notes as the active control and used the scores from both evaluation frameworks for the margin of inferiority of the intervention.\n\n## Methods\n\n## Data Collection\n\nThe study, with review and approval from the Weill Cornell institutional review board (IRB), was conducted at an urban academic 840-bed quaternary-care hospital in New York City, with approximately 71 000 adult ED visits and 21 000 admissions annually. EHR data from 1600 individual EM patient encounters leading to acute hospital admission were randomly selected from visits occurring between April and September of 2023. We limited our analysis to EM patient encounters occurring after April 2023, as the study site had updated the EM-handoff at that time. Encounters before this date used an earlier version of the EM-handoff note that would have provided suboptimal data for training labels. We used these data to fine-tune a pretrained LLM, which then generated an abstractive EM-handoff note. For the 1600 patient encounters (the study participants), Weill Cornell Medicine IRB approved a waiver of informed consent because the study used retrospective data and posed minimal risk to patients. We used Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guidelines.\n\n## EM-to-IP Handoff Note Template\n\nThe EM-to-IP handoff note template used in the study is a replication of the current manual handoff note structure used at the study site. 
The generated EM handoff note consists of components generated by a rule-based pattern-matching approach (laboratory tests, vitals, medications, consult orders, and radiology impressions) and components generated by the trained abstractive summarization model (history of present illness [HPI], differential diagnoses, immediate care plans, in-ED events, and disposition). Each summary also included a header with the timestamp of ED triage and discharge, patient's birth date, patient's unique identifier, patient's encounter number, and the total time of patient's stay in the ED.\n\n## Data Curation for Automated ED Note Generation", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed8.pdf" - }, - { - "text": "## Data Curation for Automated ED Note Generation\n\nThe EHR data were bifurcated into 2 datasets linked by the patient encounter number: 1 for the rulebased pattern-matching approach and the other for the LLM fine-tuning discussed in further detail in eAppendix 1 in Supplement 1. The rule-based framework was designed by the 3 board certified EM physicians (M.M., A.F., and P.S.). Fine tuning of the pretrained LLM consisted of the notes in Table 1 : EMclinician notes, consultation notes, EM progress note entries, and EM procedure notes. The EM-to-IP handoff notes were used as the labels. As the preexisting labels were of variable quality for\n\n\n\n(Reprinted)", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed8.pdf" - }, - { - "text": "curation (4.24 [0.58] vs 4.76 [0.48]), readability (4.00 [0.64] vs 4.64 [0.49]), correctness (4.52 [0.64] vs 4.90 [0.39]), and patient safety (4.06 [0.86] vs 4.50 [0.56]).\n\nIn extrapolating the estimated worst-case scenario impact of these performance gaps on patient safety, the 3 expert clinicians determined none of the identified model performance issues were anticipated to create a level 1 (life-threatening) safety event (see examples of worst case scenarios in eTable 2 in Supplement 1). 
While the incompleteness and faulty logic identified in the automated summaries received mean (SD) safety scores of 4.20 (0.93) and 4.60 (0.75), respectively; 13 (8.7%) and 11 (7.3%) of these events, respectively, were determined to have the potential to create a level 2 patient safety event following EM-to-IP handoff, substantially higher compared with the physician-written summaries (0%). All of the 5 hallucinations had patient safety scores between 4 and 5 and a mean (SD) score of 4.96 (0.14), which is defined as the hallucinations posing mild to no patient safety risk. LLM-generated notes demonstrated a higher rate of incorrectness (9.6%) compared with the physician-written notes (2.0%), although very few hallucinations.\n\nICC were 0.79 for completeness, 0.70 for curation, 0.59 for readability, 0.76 for correctness, and 0.74 for usefulness. These numbers suggest good reliability of agreement for completeness, curation, correctness, and usefulness and suggest fair reliability for readability among the 3 raters.\n\n## Discussion\n\nThe study demonstrated success in generating EM-to-IP handoff notes using both a fine tuned, pretrained LLM and rule-based approaches within an end user-developed note template. It is important to note that (largely due to time constraints within the EM care delivery model) the performance of EM-to-IP handoff notes was not the current standard of care in EM. The study site's unique electronic handoff process enabled a comparison between physician-written and LLM-generated handoff notes. Traditional automated evaluations of the model output suggested\n\nTable 3. Mean Clinical Quality Evaluation, Large Language Model (LLM)-Generated and Physician-Written", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed8.pdf" - }, - { - "text": "In recent years there has been an accelerated interest in using LLMs to automate clinical tasks in an effort to unburden physicians and reduce burnout. 
22 Computer-generated text within clinical notes using natural language processing (NLP) have been overall shown to improve note completion rates, physician satisfaction, and patient outcomes. 23 Since 2018, NLP has made rapid advancements in health care with the discovery of the transformer model architecture, the building block of large language models (LLMs). LLMs can automate workflows such as discharge summaries, 24 radiology reports, 25 patient messaging, 26 after-visit summaries, 27 and ambient dictation 28 with various levels of perceived quality in each workflow. 29 LLMs are particularly effective at summarizing large unstructured clinical datasets, such as ED patient medical records. 30 Acommonconcern of LLMs is their ability to hallucinate data, or LLMs generating output text that is not factually consistent with the original source content. 31 Much work has been done in health care to reduce hallucinations through building larger-parameter models trained on trillions of datasets, and then instruction finetuning the LLM on smaller, well-curated datasets. 32,33 LLMs can also be designed with explainability by citing inferred content back to the reference source notes. 34 For short-context length notes, using few-shot prompt engineering approaches with large language models like GPT-4 can produce summaries that outperform standard physician documentation in completeness and error frequency. 35 However, factual inconsistencies in the summaries produced by LLMs increase as the context length increases, 36 and for medium- to long-context tasks, fine-tuning an open-source model has been shown to perform better than a prompt-learning approach. 37 In prior work, members of this study team demonstrated 62% of LLM-generated hospital course summaries met standard-of-care for a formal inpatient discharge summary. 
24 However, recently published clinical\n\n\n\n(Reprinted)", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed8.pdf" - }, - { - "text": "- - IP Remote Copy", - "page_start": 771, - "page_end": 771, - "source_file": "sg247938.pdf" - }, - { - "text": "superior performance. However, while the manual clinical evaluation demonstrated the majority of the LLM-generated notes were of promising comparative quality (scores of 4-5), they were, on average, inferior to the clinician-written notes.\n\nOur novel clinical evaluation's findings suggest the majority of identified quality limitations and incorrectness would have minimal impact on patient safety, even when extrapolated to the worstcase scenario of the LLM-generated summary content not being reviewed and edited by a clinician before completion. This was designed to address contemporary LLM concerns of user trust, reliance and expertise. 49 As such, none of the incorrect output text elements reached life-threatening risk. However, incompleteness and faulty logic identified in the automated summaries were not always negligible, with just under 1 in 10 of these performance gaps determined to have the potential to create significant patient safety risk compared with the physician-written summaries. These critical implementation safety findings will inform (1) directionality of further model refinement; (2) further clinical evaluation of postrefinement model output; and (3) irrespective of downstream model performance, an EHR-implementation plan constrained to a user-interface design that will allow EM clinicians to review and edit the LLM-generated handoff note as a draft before finalizing (see eAppendix 1 in Supplement 1). This physician-in-the-loop process has also been identified as critical in other recent work implementing LLMs into clinical workflows. 
29,53\n\nWhile the automated methods of SCALE and MPNet-based sentence transformers demonstrated a cursory view of the faithfulness performance of the models, the clinical evaluation provided the nuanced context of the true factuality of our system on a word by word level. When comparing with the source notes, the automatic evaluations rewarded the summaries with more details, more semantic similarities, and more entailment logics, while physician-written notes tended to be more concise with more shortcuts and clinical jargon, which are penalized by automatic evaluation metrics. In addition, LLM-generated summaries are completely based on the source notes, while physician-written summaries are often composed with additional knowledge that cannot be found from the source notes.\n\nThe divergence of the automated and clinical evaluation results of an LLM intended for integration into a critical clinical workflow is an important finding. First, this observed finding validates the importance of clinical evaluations in addition to conventional automated evaluations to determine accuracy. 54 While other LLM clinical evaluation frameworks have been described to measure conventional model output quality categories (such as incorrectness domains and other performance gaps), 30,35 to our knowledge, our novel framework is the first to incorporate anticipated patient safety implications for each individual category deficiency.\n\n## Limitations\n\nThere were several limitations to the study that were primarily driven from constraints of infrastructure, as well as regulations, legal governance, and labor requirements. At the study location, the data were required to remain on premise at all times and the infrastructure that was provided had a GPU limitation of 24 GB. Given these infrastructure restrictions, the best open-source model available during the study was LLM 2. 
Furthermore, we were not able to demonstrate the comparable difference between our fine-tuned LLM 2 model and third party LLMs 32,55 because of the study location's restrictions and concerns with the data retention policies. Nevertheless, our study demonstrates the potential capability of integrating state-of-the-art open source LLMs at organizations that are less open to integrating third-party LLMs.", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed8.pdf" - }, - { - "text": "LLM-model training, an informatics professional (V.H.) worked over a period of 200 hours with 3 board certified emergency medicine physician leaders with experience in formal quality and patient safety review processes (M.M., A.F., and P.S.) to improve the dataset through manual curation and annotation. As the task of EM-handoff note generation is not dependent on racial characteristics of the patients, we removed all mentions of race during the annotation stage as a means to avoid race bias; therefore, the model was trained to generate text without race-based assumptions. Although resource intensive, a small and carefully curated dataset of at least 1000 examples has been shown to be sufficient to produce remarkable results for the language model chosen. 42 Given the size of our dataset, we created a train and test dataset with a ratio of 1500:100, with a higher ratio of data placed in the training set and eschewed a validation set to lower the variance of the models. We used k-fold cross validation on the training dataset to avoid sampling bias for the hyperparameter optimization of the LLMs.\n\n## Models\n\nFor this study, we chose the LLMs Robustly Optimized BERT Approach (RoBERTa; hereafter referred to as LLM 1) 43 for saliency content selection and Large Language Model Meta AI 2 (Llama-2; hereafter referred to as LLM 2) 7B 44 for abstractive summarization. 
Further information about the models and technology specifications is provided in detail in eAppendix 1 in Supplement 1.\n\n## Data Processing\n\nAs LLM 2 only has a context size of 4096 tokens, 44 weused 2 steps to process the EM notes to both shorten the input size while maintaining content salience. First, we adopted a number of heuristic strategies for prioritization and filtration: (1) clinical note types (hierarchy presented in Table 1), (2) time of authorship, and (3) duplicate sentence detection. Second, we used an LLM 1-based saliency model to infer EM note sentences based on likelihood of content contribution to the EM-to-IP handoff notes.\n\n## ModelTraining and Inference\n\nOur summarization model is a fine-tuned decoder-only causal language model based on LLM 2. We used different prompts for the separate types of summarization: HPI and EM handoff. Additional information about the model training and inference process is provided in eAppendix 1 in\n\n## Supplement 1.\n\nUsing a combination of generative AI powered by our fine-tuned LLM 2 model and a set of heuristic rules, our summarization system produced ED handoff notes with various sections for downstream clinical tasks. The inference process is shown in the Figure .\n\nTable 1. Types of Data Included From the Emergency Department (ED) Patient Electronic Health Record a", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed8.pdf" - }, - { - "text": "- /SM590000 VLAN tagging by default is disabled for any IP address of a node port. 
You can use the CLI or GUI to optionally set the VLAN ID for port IPs on both systems in the IP partnership.", - "page_start": 574, - "page_end": 574, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed8.pdf", - "query": "How did automating the writing of EM-to-IP handoffs notes using LLM affect the usefulness of these notes ?", - "target_page": 1, - "target_passage": "LLM-generated EM-to-IP handoff notes were determined superior compared with physician-written summaries via conventional automated evaluation methods, but marginally inferior in usefulness", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "\n\n\n\n## Original Investigation | Emergency Medicine\n\n## DevelopingandEvaluatingLargeLanguageModel-GeneratedEmergencyMedicine HandoffNotes\n\nVince Hartman, MS; Xinyuan Zhang, PhD; Ritika Poddar, MS; Matthew McCarty, MD; Alexander Fortenko, MD, MPH; Evan Sholle, MS; Rahul Sharma, MD, MBA; Thomas Campion Jr, PhD; Peter A. D. Steel, MA, MBBS\n\n## Abstract\n\nIMPORTANCE An emergency medicine (EM) handoff note generated by a large language model (LLM) has the potential to reduce physician documentation burden without compromising the safety of EM-to-inpatient (IP) handoffs.\n\nOBJECTIVE To develop LLM-generated EM-to-IP handoff notes and evaluate their accuracy and safety compared with physician-written notes.\n\nDESIGN, SETTING, AND PARTICIPANTS This cohort study used EM patient medical records with acute hospital admissions that occurred in 2023 at NewYork-Presbyterian/Weill Cornell Medical Center. A customized clinical LLM pipeline was trained, tested, and evaluated to generate templated EM-to-IP handoff notes. 
Using both conventional automated methods (ie, recall-oriented understudy for gisting evaluation [ROUGE], bidirectional encoder representations from transformers score [BERTScore], and source chunking approach for large-scale inconsistency evaluation [SCALE]) and a novel patient safety-focused framework, LLM-generated handoff notes vs physician-written notes were compared. Data were analyzed from October 2023 to March 2024.\n\nEXPOSURE LLM-generated EM handoff notes.\n\nMAINOUTCOMESANDMEASURES LLM-generated handoff notes were evaluated for (1) lexical similarity with respect to physician-written notes using ROUGE and BERTScore; (2) fidelity with respect to source notes using SCALE; and (3) readability, completeness, curation, correctness, usefulness, and implications for patient safety using a novel framework.\n\nRESULTS In this study of 1600 EM patient records (832 [52%] female and mean [SD] age of 59.9 [18.9] years), LLM-generated handoff notes, compared with physician-written ones, had higher ROUGE(0.322 vs 0.088), BERTScore (0.859 vs 0.796), and SCALE scores (0.691 vs 0.456), indicating the LLM-generated summaries exhibited greater similarity and more detail. As reviewed by 3 board-certified EM physicians, a subsample of 50 LLM-generated summaries had a mean (SD) usefulness score of 4.04 (0.86) out of 5 (compared with 4.36 [0.71] for physician-written) and mean (SD) patient safety scores of 4.06 (0.86) out of 5 (compared with 4.50 [0.56] for physician-written). None of the LLM-generated summaries were classified as a critical patient safety risk.\n\nCONCLUSIONSANDRELEVANCE In this cohort study of 1600 EM patient medical records, LLM-generated EM-to-IP handoff notes were determined superior compared with physician-written summaries via conventional automated evaluation methods, but marginally inferior in usefulness\n\n(continued)\n\n\n\nOpenAccess. This is an open access article distributed under the terms of the CC-BY License.\n\nJAMANetwork Open. 
2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723\n\n(Reprinted)\n\n## KeyPoints\n\nQuestion Can a large language model (LLM) generate emergency medicine (EM)-to-inpatient (IP) handoff notes that are useful and safe for EM care?\n\nFindings In this cohort study of 1600 EMpatient medical records using a novel evaluation framework, the LLM-generated EM-to-IP handoff notes had a mean usefulness of 4.04 out of 5 (compared with 4.36 for physician-written) and a mean patient safety of 4.06 out of 5 (compared with 4.50 for physician-written) with no critical patient safety risks.\n\nMeaning These findings suggest the value of a manual, patient safetyfocused clinical evaluation of LLM models and the potential of LLM-generated handoff notes to create a new standard of care in EM.\n\n\n\n+\n\n\n\nInvited Commentary\n\n## + Supplemental content\n\nAuthor affiliations and article information are listed at the end of this article.", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed8.pdf" - }, - { - "text": "evaluation frameworks may not address the anticipated effect LLM performance limitations could have on patient safety. 38-41\n\nIn this study, we aim to expand on prior work of clinical summarization to rigorously evaluate the outcomes of a fine-tuned model developed to generate accurate and safe summaries of the care rendered during an ED visit, with the long-term goal of integrating automated, structured EM-to-IP handoff notes into an EHR-based electronic handoff admission workflow (see eAppendix 1 in Supplement 1). We fine-tune pretrained LLMs on well curated datasets of structured and unstructured EHR data from the ED encounter to summarize the patient's ED care. We improved the correctness of model generations and customized the summaries in a structured format designed by a team of EM and internal medicine physician leaders for optimal usefulness. 
We proposed a novel patient safety-focused LLM evaluation framework to examine the LLM-generated handoff notes' quality and accuracy and the downstream patient safety implications of any identified inaccuracies. To evaluate noninferiority, we compared the LLM-generated handoff notes with the preexisting physician-written EM-to-IP handoff notes as the active control, using both the proposed patient safety-focused clinical evaluation framework and automated benchmark-driven methods. We used the physician-written EM-to-IP handoff notes as the active control and used the scores from both evaluation frameworks for the margin of inferiority of the intervention.\n\n## Methods\n\n## Data Collection\n\nThe study, with review and approval from the Weill Cornell institutional review board (IRB), was conducted at an urban academic 840-bed quaternary-care hospital in New York City, with approximately 71 000 adult ED visits and 21 000 admissions annually. EHR data from 1600 individual EM patient encounters leading to acute hospital admission were randomly selected from visits occurring between April and September of 2023. We limited our analysis to EM patient encounters occurring after April 2023, as the study site had updated the EM-handoff at that time. Encounters before this date used an earlier version of the EM-handoff note that would have provided suboptimal data for training labels. We used these data to fine-tune a pretrained LLM, which then generated an abstractive EM-handoff note. For the 1600 patient encounters (the study participants), Weill Cornell Medicine IRB approved a waiver of informed consent because the study used retrospective data and posed minimal risk to patients. We used Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guidelines.\n\n## EM-to-IP Handoff Note Template\n\nThe EM-to-IP handoff note template used in the study is a replication of the current manual handoff note structure used at the study site. 
The generated EM handoff note consists of components generated by a rule-based pattern-matching approach (laboratory tests, vitals, medications, consult orders, and radiology impressions) and components generated by the trained abstractive summarization model (history of present illness [HPI], differential diagnoses, immediate care plans, in-ED events, and disposition). Each summary also included a header with the timestamp of ED triage and discharge, patient's birth date, patient's unique identifier, patient's encounter number, and the total time of patient's stay in the ED.\n\n## Data Curation for Automated ED Note Generation", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed8.pdf" - }, - { - "text": "## Data Curation for Automated ED Note Generation\n\nThe EHR data were bifurcated into 2 datasets linked by the patient encounter number: 1 for the rulebased pattern-matching approach and the other for the LLM fine-tuning discussed in further detail in eAppendix 1 in Supplement 1. The rule-based framework was designed by the 3 board certified EM physicians (M.M., A.F., and P.S.). Fine tuning of the pretrained LLM consisted of the notes in Table 1 : EMclinician notes, consultation notes, EM progress note entries, and EM procedure notes. The EM-to-IP handoff notes were used as the labels. As the preexisting labels were of variable quality for\n\n\n\n(Reprinted)", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed8.pdf" - }, - { - "text": "Abstract (continued)\n\nand safety via a novel evaluation framework. This study suggests the importance of a physician-inloop implementation design for this model and demonstrates an effective strategy to measure preimplementation patient safety of LLM models.\n\nJAMANetwork Open. 2024;7(12):e2448723. 
doi:10.1001/jamanetworkopen.2024.48723\n\n## Introduction\n\nHandoffs, where patient information is exchanged between health professionals during a transfer of clinical responsibility, have been identified as a critical source of medical errors. 1,2 The Joint Commission, the Accreditation Council for Graduate Medical Education, and the Association of American Medical Colleges have all recommended the development of high-quality and standardized handoff processes to address the substantial patient risk of this ubiquitous event. 3,4 Implementing handoff tools has previously demonstrated significant reductions in medical errors. 5,6 High-quality handoffs from emergency medicine (EM) to inpatient (IP) services (EM-to-IP) are challenged by medical complexity, diagnostic uncertainty, rapidly evolving care plans, and time constraints. 7-10 The EM-to-IP handoff structure is not well standardized, frequently communicated verbally, and poorly adhered to in emergency departments (EDs), including in medical centers with formalized handoff systems. 11-14 Prior research has demonstrated that suboptimal EM-to-IP handoff is associated with adverse events, EM leaders and front-line clinicians themselves view the EM-to-IP handoff as high risk, and an electronic health record (EHR)-based technology is commonly mentioned as the most desired assistive tool in improving ED transitions of care. 15-18 Limited work to date has demonstrated EMelectronic handoff tools as feasible, efficient, and effective. 19-21 In April 2023, EM and internal medicine leadership of the study site collaboratively developed and launched a mandatory, EHR-based handoff workflow via a standardized EM-to-IP handoff note template, designed for realtime completion by the EM care team at time of admission. 
At 3 and 6 months postlaunch, informal evaluation of new EM-to-IP handoff notes through random medical record review and unstructured clinician feedback sessions revealed variable completeness, quality, and subsequent usefulness of the handoff notes.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed8.pdf" - }, - { - "text": "curation (4.24 [0.58] vs 4.76 [0.48]), readability (4.00 [0.64] vs 4.64 [0.49]), correctness (4.52 [0.64] vs 4.90 [0.39]), and patient safety (4.06 [0.86] vs 4.50 [0.56]).\n\nIn extrapolating the estimated worst-case scenario impact of these performance gaps on patient safety, the 3 expert clinicians determined none of the identified model performance issues were anticipated to create a level 1 (life-threatening) safety event (see examples of worst case scenarios in eTable 2 in Supplement 1). While the incompleteness and faulty logic identified in the automated summaries received mean (SD) safety scores of 4.20 (0.93) and 4.60 (0.75), respectively; 13 (8.7%) and 11 (7.3%) of these events, respectively, were determined to have the potential to create a level 2 patient safety event following EM-to-IP handoff, substantially higher compared with the physician-written summaries (0%). All of the 5 hallucinations had patient safety scores between 4 and 5 and a mean (SD) score of 4.96 (0.14), which is defined as the hallucinations posing mild to no patient safety risk. LLM-generated notes demonstrated a higher rate of incorrectness (9.6%) compared with the physician-written notes (2.0%), although very few hallucinations.\n\nICC were 0.79 for completeness, 0.70 for curation, 0.59 for readability, 0.76 for correctness, and 0.74 for usefulness. 
These numbers suggest good reliability of agreement for completeness, curation, correctness, and usefulness and suggest fair reliability for readability among the 3 raters.\n\n## Discussion\n\nThe study demonstrated success in generating EM-to-IP handoff notes using both a fine tuned, pretrained LLM and rule-based approaches within an end user-developed note template. It is important to note that (largely due to time constraints within the EM care delivery model) the performance of EM-to-IP handoff notes was not the current standard of care in EM. The study site's unique electronic handoff process enabled a comparison between physician-written and LLM-generated handoff notes. Traditional automated evaluations of the model output suggested\n\nTable 3. Mean Clinical Quality Evaluation, Large Language Model (LLM)-Generated and Physician-Written", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed8.pdf" - }, - { - "text": "## References and notes", - "page_start": 140, - "page_end": 140, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "superior performance. However, while the manual clinical evaluation demonstrated the majority of the LLM-generated notes were of promising comparative quality (scores of 4-5), they were, on average, inferior to the clinician-written notes.\n\nOur novel clinical evaluation's findings suggest the majority of identified quality limitations and incorrectness would have minimal impact on patient safety, even when extrapolated to the worstcase scenario of the LLM-generated summary content not being reviewed and edited by a clinician before completion. This was designed to address contemporary LLM concerns of user trust, reliance and expertise. 49 As such, none of the incorrect output text elements reached life-threatening risk. 
However, incompleteness and faulty logic identified in the automated summaries were not always negligible, with just under 1 in 10 of these performance gaps determined to have the potential to create significant patient safety risk compared with the physician-written summaries. These critical implementation safety findings will inform (1) directionality of further model refinement; (2) further clinical evaluation of postrefinement model output; and (3) irrespective of downstream model performance, an EHR-implementation plan constrained to a user-interface design that will allow EM clinicians to review and edit the LLM-generated handoff note as a draft before finalizing (see eAppendix 1 in Supplement 1). This physician-in-the-loop process has also been identified as critical in other recent work implementing LLMs into clinical workflows. 29,53\n\nWhile the automated methods of SCALE and MPNet-based sentence transformers demonstrated a cursory view of the faithfulness performance of the models, the clinical evaluation provided the nuanced context of the true factuality of our system on a word by word level. When comparing with the source notes, the automatic evaluations rewarded the summaries with more details, more semantic similarities, and more entailment logics, while physician-written notes tended to be more concise with more shortcuts and clinical jargon, which are penalized by automatic evaluation metrics. In addition, LLM-generated summaries are completely based on the source notes, while physician-written summaries are often composed with additional knowledge that cannot be found from the source notes.\n\nThe divergence of the automated and clinical evaluation results of an LLM intended for integration into a critical clinical workflow is an important finding. First, this observed finding validates the importance of clinical evaluations in addition to conventional automated evaluations to determine accuracy. 
54 While other LLM clinical evaluation frameworks have been described to measure conventional model output quality categories (such as incorrectness domains and other performance gaps), 30,35 to our knowledge, our novel framework is the first to incorporate anticipated patient safety implications for each individual category deficiency.\n\n## Limitations\n\nThere were several limitations to the study that were primarily driven from constraints of infrastructure, as well as regulations, legal governance, and labor requirements. At the study location, the data were required to remain on premise at all times and the infrastructure that was provided had a GPU limitation of 24 GB. Given these infrastructure restrictions, the best open-source model available during the study was LLM 2. Furthermore, we were not able to demonstrate the comparable difference between our fine-tuned LLM 2 model and third party LLMs 32,55 because of the study location's restrictions and concerns with the data retention policies. Nevertheless, our study demonstrates the potential capability of integrating state-of-the-art open source LLMs at organizations that are less open to integrating third-party LLMs.", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed8.pdf" - }, - { - "text": "LLM-model training, an informatics professional (V.H.) worked over a period of 200 hours with 3 board certified emergency medicine physician leaders with experience in formal quality and patient safety review processes (M.M., A.F., and P.S.) to improve the dataset through manual curation and annotation. As the task of EM-handoff note generation is not dependent on racial characteristics of the patients, we removed all mentions of race during the annotation stage as a means to avoid race bias; therefore, the model was trained to generate text without race-based assumptions. 
Although resource intensive, a small and carefully curated dataset of at least 1000 examples has been shown to be sufficient to produce remarkable results for the language model chosen. 42 Given the size of our dataset, we created a train and test dataset with a ratio of 1500:100, with a higher ratio of data placed in the training set and eschewed a validation set to lower the variance of the models. We used k-fold cross validation on the training dataset to avoid sampling bias for the hyperparameter optimization of the LLMs.\n\n## Models\n\nFor this study, we chose the LLMs Robustly Optimized BERT Approach (RoBERTa; hereafter referred to as LLM 1) 43 for saliency content selection and Large Language Model Meta AI 2 (Llama-2; hereafter referred to as LLM 2) 7B 44 for abstractive summarization. Further information about the models and technology specifications is provided in detail in eAppendix 1 in Supplement 1.\n\n## Data Processing\n\nAs LLM 2 only has a context size of 4096 tokens, 44 weused 2 steps to process the EM notes to both shorten the input size while maintaining content salience. First, we adopted a number of heuristic strategies for prioritization and filtration: (1) clinical note types (hierarchy presented in Table 1), (2) time of authorship, and (3) duplicate sentence detection. Second, we used an LLM 1-based saliency model to infer EM note sentences based on likelihood of content contribution to the EM-to-IP handoff notes.\n\n## ModelTraining and Inference\n\nOur summarization model is a fine-tuned decoder-only causal language model based on LLM 2. We used different prompts for the separate types of summarization: HPI and EM handoff. 
Additional information about the model training and inference process is provided in eAppendix 1 in\n\n## Supplement 1.\n\nUsing a combination of generative AI powered by our fine-tuned LLM 2 model and a set of heuristic rules, our summarization system produced ED handoff notes with various sections for downstream clinical tasks. The inference process is shown in the Figure .\n\nTable 1. Types of Data Included From the Emergency Department (ED) Patient Electronic Health Record a", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed8.pdf" - }, - { - "text": "In recent years there has been an accelerated interest in using LLMs to automate clinical tasks in an effort to unburden physicians and reduce burnout. 22 Computer-generated text within clinical notes using natural language processing (NLP) have been overall shown to improve note completion rates, physician satisfaction, and patient outcomes. 23 Since 2018, NLP has made rapid advancements in health care with the discovery of the transformer model architecture, the building block of large language models (LLMs). LLMs can automate workflows such as discharge summaries, 24 radiology reports, 25 patient messaging, 26 after-visit summaries, 27 and ambient dictation 28 with various levels of perceived quality in each workflow. 29 LLMs are particularly effective at summarizing large unstructured clinical datasets, such as ED patient medical records. 30 Acommonconcern of LLMs is their ability to hallucinate data, or LLMs generating output text that is not factually consistent with the original source content. 31 Much work has been done in health care to reduce hallucinations through building larger-parameter models trained on trillions of datasets, and then instruction finetuning the LLM on smaller, well-curated datasets. 32,33 LLMs can also be designed with explainability by citing inferred content back to the reference source notes. 
34 For short-context length notes, using few-shot prompt engineering approaches with large language models like GPT-4 can produce summaries that outperform standard physician documentation in completeness and error frequency. 35 However, factual inconsistencies in the summaries produced by LLMs increase as the context length increases, 36 and for medium- to long-context tasks, fine-tuning an open-source model has been shown to perform better than a prompt-learning approach. 37 In prior work, members of this study team demonstrated 62% of LLM-generated hospital course summaries met standard-of-care for a formal inpatient discharge summary. 24 However, recently published clinical\n\n\n\n(Reprinted)", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed8.pdf" - }, - { - "text": "- 3. Select IBM SKLM (with KMIP) as the key server type, as shown in Figure 12-31.\n\nFigure 12-31 Selecting SKLM as key server type\n\n\n\n - 4. The wizard moves to the Key Servers tab, as shown in Figure 12-32 on page 630. Enter the name and IP address of the key servers. Note that the first key server specified must be the primary SKLM key server.\n\nNote: The supported versions of IBM Security Key Lifecycle Manager (up to V3.0, which was the latest code version available at the time of this writing) differentiate between the primary and secondary key server role. 
The Primary SKLM server as defined on the Key Servers window of the Enable Encryption wizard must be the server defined as the primary by SKLM administrators.", - "page_start": 650, - "page_end": 650, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv5_ccby4license.pdf", - "query": "What company released MegatronLM ?", - "target_page": 2, - "target_passage": "NVIDIA released the MegatronLM", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Table 1: Overview of recent large language models\n\n| Year | Model | # of Parameters | Dataset Size |\n|--------|-------------------------|-------------------|----------------|\n| 2019 | BERT [39] | 3.4e+08 | 16GB |\n| 2019 | DistilBERT [113] | 6.6e+07 | 16GB |\n| 2019 | ALBERT [70] | 2.23e+08 | 16GB |\n| 2019 | XLNet (Large) [150] | 3.4e+08 | 126GB |\n| 2020 | ERNIE-Gen (Large) [145] | 3.4e+08 | 16GB |\n| 2019 | RoBERTa (Large) [74] | 3.55e+08 | 161GB |\n| 2019 | MegatronLM [122] | 8.3e+09 | 174GB |\n| 2020 | T5-11B [107] | 1.1e+10 | 745GB |\n| 2020 | T-NLG [112] | 1.7e+10 | 174GB |\n| 2020 | GPT-3 [25] | 1.75e+11 | 570GB |\n| 2020 | GShard [73] | 6e+11 | - |\n| 2021 | Switch-C [43] | 1.57e+12 | 745GB |\n\nthe maximum development F1 score in 10 epochs as opposed to 486 without ELMo. This model furthermore achieved the same F1 score with 1% of the data as the baseline model achieved with 10% of the training data. Increasing the number of model parameters, however, did not yield noticeable increases for LSTMs [e.g. 82].\n\nTransformer models, on the other hand, have been able to continuously benefit from larger architectures and larger quantities of data. Devlin et al. [39] in particular noted that training on a large dataset and fine-tuning for specific tasks leads to strictly increasing results on the GLUE tasks [138] for English as the hyperparameters of the model were increased. 
Initially developed as Chinese LMs, the ERNIE family [130, 131, 145] produced ERNIE-Gen, which was also trained on the original (English) BERT dataset, joining the ranks of very large LMs. NVIDIA released the MegatronLM which has 8.3B parameters and was trained on 174GB of text from the English Wikipedia, OpenWebText, RealNews and CC-Stories datasets [122]. Trained on the same dataset, Microsoft released T-NLG, 1 an LM with 17B parameters. OpenAI's GPT-3 [25] and Google's GShard [73] and Switch-C [43] have increased the definition of large LM by orders of magnitude in terms of parameters at 175B, 600B, and 1.6T parameters, respectively. Table 1 summarizes a selection of these LMs in terms of training data size and parameters. As increasingly large amounts of text are collected from the web in datasets such as the Colossal Clean Crawled Corpus [107] and the Pile [51], this trend of increasingly large LMs can be expected to continue as long as they correlate with an increase in performance.\n\nA number of these models also have multilingual variants such as mBERT [39] and mT5 [148] or are trained with some amount of multilingual data such as GPT-3 where 7% of the training data was not in English [25]. The performance of these multilingual models across languages is an active area of research. Wu and Drezde [144] found that while mBERT does not perform equally well across all 104 languages in its training data, it performed better at NER, POS tagging, and dependency parsing than monolingual models trained with comparable amounts of data for four low-resource languages. 
Conversely, [95] surveyed monolingual BERT models developed with more specific architecture considerations or additional monolingual data and found that they generally outperform", - "page_start": 1, - "page_end": 1, - "source_file": "arxiv5_ccby4license.pdf" - }, - { - "text": "- /SM590000 The Link Bandwidth, expressed in Mbps (megabits per second), is the amount of bandwidth that can be used for the FC or IP connection between the systems within the partnership.", - "page_start": 565, - "page_end": 565, - "source_file": "sg247938.pdf" - }, - { - "text": "- /SM590000 mkvolume\n - /SM590000 mkimagevolume", - "page_start": 325, - "page_end": 325, - "source_file": "sg247938.pdf" - }, - { - "text": "Figure 12-32 Configuration of the primary SKLM server\n\n\n\n - 5. If you want to add secondary SKLM servers, click the ' + ' symbol and enter the data for secondary SKLM servers, as shown on Figure 12-33. You can define up to four SKLM servers. Click Next when you are done.\n\nFigure 12-33 Configuring multiple SKLM servers\n\n", - "page_start": 651, - "page_end": 651, - "source_file": "sg247938.pdf" - }, - { - "text": "- 38. Walsh MS , B € ohm H , Butter /uniFB01 eld MM , Santhosam J. Gender bias in the effects of arms and countermovement on jumping performance. J Strength Cond Res 21: 362 -366, 2007. doi:10.1519/00124278200705000-00012.\n - 39. Vadgaonkar R , Prameela MD , Kumar CG , Blossom V , Tonse M , Murlimanju BV , Pai MM , Prabhu LV. Dimensions of pes anserinus of the lower extremity, an anatomical study with its surgical implications. Anat Cell Biol 54: 178 -183, 2021. doi:10.5115/acb.20.275.\n - 40. Heinemeier KM , Schjerling P , Heinemeier J , Magnusson SP , Kjaer M. Lack of tissue renewal in human adult Achilles tendon is revealed by nuclear bomb 14 C. FASEB J 27: 2074 -2079, 2013. doi:10.1096/ fj.12-225599.\n - 41. Balshaw TG , Funnell MP , McDermott EJ , Maden-Wilkinson TM , Massey GJ , Abela S , Quteishat B , Edsey M , James LJ , Folland JP. 
The effect of speci /uniFB01 c bioactive collagen peptides on tendon remodeling during 15 wk of lower body resistance training. Med Sci Sports Exerc 55: 2083 -2095, 2023. doi:10.1249/mss.0000000000003242.\n - 42. Welle S , Totterman S , Thornton C. Effect of age on muscle hypertrophy induced by resistance training. J Gerontol A Biol Sci M /C19 ed Sci 51: M270 -M275, 1996. doi:10.1093/gerona/51a.6.m270.", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed12.pdf" - }, - { - "text": "- /SM590000 IBM Security Key Lifecycle Manager (SKLM)", - "page_start": 648, - "page_end": 648, - "source_file": "sg247938.pdf" - }, - { - "text": "- /SM590000 Machine type (MT)", - "page_start": 630, - "page_end": 630, - "source_file": "sg247938.pdf" - }, - { - "text": "- /SM590000 Volume name: Your volume name.\n - /SM590000 Volume type: Primary or backup.\n - /SM590000 Capacity in megabytes: Capacity of one side of the optical media after it is initialized.\n - /SM590000 Optical media family:\n - -Rewritable (REWT)\n - - WORM\n - - Universal Disk Format single-sided (UDF1) that is used by DVD RAM drives", - "page_start": 145, - "page_end": 145, - "source_file": "sg246915.pdf" - }, - { - "text": "- /SM590000 Intercluster and intracluster Metro Mirror can be used concurrently.", - "page_start": 543, - "page_end": 543, - "source_file": "sg247938.pdf" - }, - { - "text": "Allsteel Inc. provides high quality office furniture solutions with advanced functionality and lifetime durability for the contract market. 
Products are distributed through a national network of aligned, independent contract dealers as well as our sales force, targeting corporate, government, and institutional markets.\n\n## HIGHLIGHTS/AWARDS:\n\n- · Major product introductions - Get Set TM and Terrace ® 2.6 have been well received by the market, winning industry awards.\n- · Get Set TM - 2003 Editor's Choice, Top Pick Annual Award by Buildings Magazine and the Chicago Atheneum Good Design Award.\n- · Terrace ® 2.6 - recognized among top products of 2003 by Architectural Record magazine.\n- · The #19 ® chair, introduced in 2002, continues to receive numerous awards including the California IIDA Acclaim Award and the Best of Category Award by I.D. magazine.\n- · Office Furniture Dealers Alliance (OFDA), 2003 Dealers Choice award for Management.\n- · General Services Administration's (GSA) 2003 'Evergreen Furniture and Furnishings Award' for environmental stewardship.\n\nW W W . A L L S T E E L O F F I C E . C O M\n\n## HON INDUSTRIES 2003\n\n## OFFICE FURNITURE AT-A-GLANCE\n\n\n\nThe Gunlocke Company L.L.C. is one of America's oldest and most respected producers of quality wood office furniture. The company handcrafts executive case goods, as well as a wide range of executive seating, lounge furniture, and conference tables. Known for more than a century for crafting elegantly tailored solutions for distinctive business and government clients, Gunlocke focuses primarily on the contract market and furniture specifying communities.\n\n## HIGHLIGHTS/AWARDS:\n\n- · Aggressive 2003 product launch of nine new seating lines: Amalfi TM , Valor TM , Porter TM , Tiara TM , Raffaella TM , Napoli TM , Sirmione TM , Fitzgerald TM , and Debonair TM .\n- · Launched Mantra TM , a new modular and contemporary case good line. 
Using mixed materials - from wood to brushed aluminum and glass - the line focuses on the integration of technology into today's executive office environments.\n- · The Amalfi TM line won the Silver Award at NeoCon.\n- · Experienced record operational performance.\n\nWWW.GUNLOCKE.COM\n\n\n\nThe HON Company is North America's leading manufacturer and marketer of office solutions for small and medium-sized workplaces. Its strong distribution channel of independent dealers, wholesalers, and retailers supports the broadest mid-market product offering in the industry.\n\n## HIGHLIGHTS/AWARDS:\n\n- · Launched contemporary Perpetual ® collection targeting the 18- to 35-year-old segment.\n- · 2003 Shingo Award for Excellence in Manufacturing.\n- · Office Furniture Dealers Alliance (OFDA), 2003 Dealers' Choice Manufacturer of the Year, Best Support, Service, and Training, and Best Management.\n- · General Services Administration's (GSA) 2003 'Evergreen Furniture and Furnishings Award' for environmental stewardship.\n- · The Chicago Athenaeum: Museum of Architecture and Design Award for the Olson Flex Stacker TM Chair and Perpetual ® desking.\n- · Buildings Magazine' s Innovations Award and Editor's Top 100 - Perpetual ® desking.\n- · Today's Facilities Manager Readers' Choice Award - Non-task seating, storage, and conference room furnishings.\n\nWWW.HON.COM", - "page_start": 27, - "page_end": 27, - "source_file": "NYSE_HNI_2003.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv5_ccby4license.pdf", - "query": "What is the average emission of a human being per year in terms of CO2eq ?", - "target_page": 3, - "target_passage": "the average human is responsible for an estimated 5t CO2e per year", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "issues and re-constructing them di GLYPH<11> erently. 
By comparing the persistent words used related to the two discourses in the 10-year period in Table 2, we think that global warming showed a relative preference toward general descriptions or slogans, such as 'earth' and 'pollution', whereas 'climate change' was more associated to specific issues like 'solar', 'coal', 'china', and 'food'.\n\nStudies have suggested that the public shows a preference for scientific publications with general keywords compared with those with complicated scientific jargon [47], lacking a deep understanding of the complicated issue [46] and the necessity for mitigation of the climate issue [47]. These conclusions seem to suit global warming more than climate change according to the current study, which is probably because climate change receives more publicity and recognition than global warming in the scientific community. In the association network shown in Figure 2, global warming was found to be more connected with temperature abnormalities. This finding is in accordance with studies reporting that short-term temperature anomalies [87] can increase the public's belief about global warming by increasing the understanding of this abstract issue [88], although scientists mostly make judgments based on long-term weather statistics [89]. However, none of the four words, 'snow', 'summer', 'winter', or 'heatwave' in the temperature theme of global warming were ranked in the top 50 nodes list of the climate change network.\n\nEven when climate change and global warming shared concern about similar topics such as the cause of the climate issue, global warming tended to focus on carbon emission phenomena, whereas climate change preferred a more in-depth perspective, highlighting the importance of global action to mitigate the climate issue in its second-largest cluster, with energy structure as the contributor to carbon emissions in its third largest cluster. 
As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern [90], the two terminologies' di GLYPH<11> erences in connotations suggest that introducing these absent sub-topics into global warming discourse or highlighting climate change for its inherent connotations may help communicators raise public concern about climate.\n\n## 5.1.2. Political Connotations\n\nStudies noted that frame preference between climate change and global warming reflects individuals' ideological spectrum, where climate change and global warming were favored by the liberals and conservatives, respectively [10]. The cluster analysis of the semantic network in the current study demonstrated that global warming triggered far more political responses than climate change. The second largest cluster of global warming was politics-based, where hashtag 'tcot', favored by right-leaning users and 'p2', favored by left-leaning users, were both ranked in the list of top nodes of the global warming discourse, but neither was included in the list of top nodes of the climate change discourse. Considering that earlier findings suggested that global warming was more likely to be used by conservatives to question the reality of climate issue [11] and climate change is more commonly adopted when discussing action against the climate change issue [5], global warming had a stronger political connotation in public discussion.\n\n## 5.1.3. Discourse Structure", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed10.pdf" - }, - { - "text": "In the global warming network, politics was the second-largest discourse cluster (20% of the network), where 'tcot', short for 'Top Conservatives on Twitter', was the node ranked highest, and 'p2', short for 'Progressives 2.0', is also included. Several political figures, such as Obama and Al Gore, are frequently mentioned. 
Action toward the global climate issue was the third-largest cluster (16%), including both domestic e GLYPH<11> orts, such as 'us', 'trump', 'climatechangeisreal', 'climateaction', and 'epa', and two international items, like 'china' and 'india'. The fourth cluster (in blue) referred to emissions, including hashtags like 'co2', 'green', and 'carbon'. The smallest cluster (8%) was composed of 'snow', 'winter', 'heatwave', and 'summer', referring to the temperature abnormalities on the earth.\n\n## 4.3. Temporal Analysis of the Associations in the Two Discourses\n\nThe online presentations of the climate change and global warming discourses are dynamic. As shown in Table 2, for the global warming discourse, 11 key concepts remained in the top 50 central hashtags each year for all 10 years, with 16 for the climate change'discourse. By comparing the 11 nodes of the global warming discourse and the 16 nodes of the climate change discourse, we found that the two lists shared nine concepts. We found 'pollution' and 'earth' were unique to the keyword list of the global warming discourse, and 'economy', 'water', 'china', 'coal', 'solar', 'sustainability', and 'food' only occurred on the critical list for the climate change discourse.\n\nTable 2. Hashtags that remained on the top 50 list for the climate change or the global warming discourse from 2009 to 2018.\n\n| | Unique | Shared |\n|-------------------------------|---------------------------------------------------------------------------|---------------------------------------------------------------------|\n| #climatechange #globalwarming | china, solar, water, food, economy, coal, sustainability pollution, earth | co2, news, carbon, green, climate, us, energy, science, environment |\n\nFigures 3 and 4 show the overall evolution of critical hashtags' associations in the 10-year period, where the nodes in the 10 graphs are located in the same position but the strength of associations varies across longitudinal time. 
Vector graphics with the label of nodes are provided in the Supplementary Materials. Four themes were identified in each discourse according to the nodes' associations. To more explicitly demonstrate the relative importance of each cluster in each year, we calculated the sum of the degree centrality of all the nodes belonging to each cluster and their change in centrality over the 10 years, as shown in Figure 5.", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed10.pdf" - }, - { - "text": "In the present study, processing errors in the input data for one ensemble member, the HadGEM2-ES-driven member, caused the results to be invalid. Results for this member for the HCVI are, therefore, not presented here.\n\n## (d) Freshwater resources: run-o/ff\n\nImpacts on freshwater were assessed with a version of the JULES land surface model [24,25], a coupled ecosystem-hydrology-surface exchange model which simulates land-atmosphere fluxes of water, energy and carbon in an internally consistent way, typically applied at global scales. Variants of JULES form the land surface scheme of Met Office Hadley Centre Earth System Models [26,27] and have been used to assess impacts of climate change on global terrestrial ecosystems and hydrology [28-30] within such models. JULES can also be used outside of the Earth System Model (ESM), driven by meteorological outputs of other ESMs to assess impacts of a wider range of climate projections [6,8]. Here we use a new, higher-resolution configuration of JULES on a global grid of 0.5° resolution [31].\n\nIt has been noted that hydrological impacts models driven by climate-change projections from climate models tend to give more severe drying than simulated in the climate models themselves [32-34]. 
This is largely attributed to the inclusion of plant stomatal closure in response to elevated CO2 in the climate model land surface schemes, which generally reduces evapotranspiration relative to climate projections without this process and hence further increases run-off/streamflow or ameliorates decreases [34]. This process is often omitted from standard hydrological models. Plant physiological responses to CO 2 are included in the JULES model, so our projections of changes in run-off here do account for this process.\n\nWe used each HadGEM3 simulation to drive JULES to simulate changes in run-off due to the effects of climate change and CO 2 rise on precipitation, evaporation and transpiration. We analysed 30 year periods centred around the year of crossing GWLs of 1.5°C and 2°C relative to pre-industrial. We examined changes in both mean flows and low flows (defined as the flows for the lowest 10% of time).\n\n## (e) Correcting biases in climate model output and implications for de/fining levels of global warming\n\nThe ClimPACT extreme weather indices, HCVI and JULES run-off simulations were all performed using outputs from the higher-resolution HadGEM3 projections described in §2a. However, there were some differences in how these data were applied, with different approaches to the treatment of systematic biases in the climate model output. For the ClimPACT analysis, it was considered important to assess changes in the raw climate model output, because this directly represents the behaviour of the model itself. The main focus was on the changes relative to the presentday baseline climate, defined as 1981-2010, with absolute values in either the baseline or the GWLs of 1.5°C and 2°C being only of secondary interest. 
For the HCVI and JULES run-off analyses, however, it was considered important to correct for systematic biases in the climate model output, because these can lead to unrealistic representations of the key quantities in the present-day simulation [35]. A bias-correction methodology was, therefore, applied for these two parts of the analysis, whereby the model output was adjusted to make it consistent with an observed climatology [36]. We used a multi-segment statistical bias-correction methodology for precipitation [37], and a modification of this for other variables [37].", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed11.pdf" - }, - { - "text": "## OPEN\n\n\n\n## The impact of ͷ.ͻ °C and ͸.Ͷ °C global warming on global maize production and trade\n\nKuo Li ͷ * , Jie Pan ͷ , Wei Xiong ͸ , Wei Xie ͹ & Tariq Ali ͹\n\nClimate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world. Future climate scenario data was simulated by ͻ climate models recommended by ISI-MIP under ͺ RCP scenarios, in which the approximate scenarios with global warming by ͷ.ͻ °C and ͸ °C were selected. Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by ͷ.ͻ °C and ͸.Ͷ °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that, the risk of maize yield reduction under ͸.Ͷ °C scenario was much more serious than ͷ.ͻ °C scenario; the ratios of yield changes were separately Ͷ.ͷ;% and - ͷͶ.;% under ͷ.ͻ °C and ͸.Ͷ °C scenarios. The reduction trend of total maize production is obvious in the top five countries and the main producing regions of the world, especially under the ͸.Ͷ °C scenario. The market price of maize would increase by around Ͷ.ͽ% and ͹.ͺ% under ͷ.ͻ °C and ͸.Ͷ °C scenarios. 
With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.\n\nIn the past hundred years, the global climate has experienced great changes 1-4 . According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming 5 . Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health 6-10 . Global warming has gradually changed from a scienti/fic issue to a major social issue of common concern to governments and people of all countries 11-13 . In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris 14 . Paris Agreement has indicated and pursue e/fforts to limit the temperature increase to 1.5 °C above pre-industrial levels.", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "Firstly, the period of 1986-2005 is de/fined as the baseline, of which the simulated average value is recognized as 0.61 °C above pre-industrial (the period of 1850-1900) levels; the baseline is selected according to the accessibility and operability of data, which is used for the determination of the periods with global warming by 1.5 °C and 2.0 °C and the comparison of maize yield between di/fferent periods. Secondly, the simulated values of global mean temperature in the future years are subtracted from the simulated average value of 1986-2005; then the values should be plus with 0.61 °C, which are the global warming results above pre-industrial levels; then 20 years moving average of the above results are calculated. 
/T\\_hirdly, the climate data of global warming by 1.5 °C is de/fined according to the principles provided in the /fi/f\\_th IPCC Assessment Report, for which it should be within 1.5-2.0 °C above pre-industrial levels at the end of the twenty-/first century; the climate data of global warming by 2.0 °C is de/fined according to the principles provided in the /fi/f\\_th IPCC Assessment Report, for which it should be within 2.0-2.5 °C above pre-industrial levels at the end of the twenty-/first century and the period of global warming by 2.0 °C should not be earlier than 2050. Finally, the climate models, scenarios and periods of global warming by 1.5 °C and 2.0 °C are separately con/firmed; the data of global warming by 1.5 °C, simulated by IPSL-CM5A-LR under RCP2.6 scenario during 2020-2039 and simulated by GFDL-ESM2M under RCP4.5 scenario during 2041-2060; the data of global warming by 2.0 °C, simulated by NorESM1-M under RCP4.5 scenario during 2060-2079 and simulated by GFDL-ESM2M under RCP6.0 scenario during 2065-2084.\n\nSimulation of maize yield using DSSAT. According to the data of global warming by 1.5 °C and 2.0 °C selected above, we simulated global maize yield changes compared with the average yield during 1986-2005 on grid level using CERES-Maize, which is part of DSSAT version 4.6 49 .\n\n/T\\_he inputs for DSSAT simulation include daily weather data, soil parameters, crop calendar data and management information. All the inputs are formatted at a 0.5° × 0.5° grid resolution which are computed by highperformance computers. Weather data is from the AgMERRA dataset, including maximum and minimum temperatures, precipitation, total radiation and humidity. Crop calendar data were from the Center for Sustainability and Global Environment (SAGE), in which the existing observations of crop planting and harvesting dates are gridded formatted at a resolution of 5 min 50 . 
For management information, fertilizer applications, irrigation and other management practices are required. A crop-speci/fic gridded dataset of nitrogen fertilizer application for the world was developed by integrating national and subnational fertilizer application data from a variety of sources, which is used to set up current fertilizer application rates for maize in each grid cell. Soil parameters are from the International Soil Pro/file Dataset (WISE), including soil texture, bulk density, pH, organic carbon content and fraction of calcium carbonate for each of /five 20 cm thick soil layers 51 . All the soil data is allocated to be in accordance with the request of DSSAT simulation; the missing soil parameters for organic soils were adopted from FAO soil dataset.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed9.pdf" - }, - { - "text": "but also a result of the different forcings influencing the atmosphere model at the time of passing each GWL, and the interaction with the climate sensitivity of HadGEM3. The radiative forcing of non-CO 2 forcings has previously been highlighted as a potentially important influence on patterns of climate change at 1.5°C and 2°C global warming [39]. Furthermore, despite some differences in regional climate responses between ensemble members, there were also some remarkable consistencies especially in the changes that might be considered inconsistent with a warming climate, such as regions such as northern South America where heavy rainfall (Rx5day) decreases rather increasing as might be expected under a warming climate. Again, these consistencies point to some common forcing of all simulations.\n\nOne key factor is the different times of passing a particular GWL, because the net radiative forcing would be different even though the same emissions and concentration scenario was used in all simulations. 
A given GWL was reached at a different time in each ensemble member, so the CO2 and aerosol concentrations vary between ensemble members; in members reaching a GWL early, such as that driven by IPSL-CM5A-LR, the CO 2 concentration is relatively lower than in other members, and the total aerosol concentration would be relatively higher (CO 2 concentrations are projected to increase in RCP8.5, but aerosol concentrations are projected decline). The net radiative forcing is smaller, because in RCP8.5 the increase positive radiative forcing from CO 2 is greater than the decrease in net negative radiative forcing from aerosols. Moreover, the physiological effect of CO 2 is also smaller, meaning that the consequent reduction in transpiration and associated additional land surface warming influence would also be expected to be smaller.\n\nConversely, in members reaching the same GWL later, such as that driven by GFDL-ESM2M, CO 2 concentration is relatively higher, and aerosol concentrations are lower. So, net radiative forcing, CO 2 physiological effects and the regional-scale radiative forcings from individual aerosol types could, therefore, be quite different in the GFDL-driven HadGEM3 simulation when it reaches 2°C global warming 25 years later than the IPSL-CM5A-LR-driven simulation.\n\nThe spatial pattern of changes in the different ensemble members may also play a role in influencing the global mean changes, for example, with large changes in some regions due to faster snow-melt or changes in cloud cover in one ensemble member leading to particular changes in regional warming that are not seen in other ensemble members. Moreover, the individual forcings of the different aerosol components such as sulfate and black carbon differ in sign and spatial pattern, so the overall impact on local radiative forcing and hence regional temperature patterns is more complex. 
Therefore, the global mean changes may not necessarily be expected to relative to global mean forcings.", - "page_start": 23, - "page_end": 23, - "source_file": "pubmed11.pdf" - }, - { - "text": "Model Intercomparison Project (CMIP5) ensemble, forced with the RCP8.5 concentration scenario. To provide more detailed representations of climate processes and impacts, the spatial resolution was N216 (approx. 60 km grid length in mid-latitudes), a higher resolution than the CMIP5 models. We used a set of impacts-relevant indices and a global land surface model to examine the projected changes in weather extremes and their implications for freshwater availability and vulnerability to food insecurity. Uncertainties in regional climate responses are assessed, examining ranges of outcomes in impacts to inform risk assessments. Despite some degree of inconsistency between components of the study due to the need to correct for systematic biases in some aspects, the outcomes from different ensemble members could be compared for several different indicators. The projections for weather extremes indices and biophysical impacts quantities support expectations that the magnitude of change is generally larger for 2°C global warming than 1.5°C. Hot extremes become even hotter, with increases being more intense than seen in CMIP5 projections. Precipitation-related extremes show more geographical variation with some increases and some decreases in both heavy precipitation and drought. There are substantial regional uncertainties in hydrological impacts at local scales due to different climate models producing different outcomes. Nevertheless, hydrological impacts generally point towards wetter conditions on average, with increased mean river flows, longer heavy rainfall events, particularly in South and East Asia with the most extreme projections suggesting more than a doubling of flows in the Ganges at 2°C global warming. 
Some areas are projected to experience shorter meteorological drought events and less severe low flows, although longer droughts and/or decreases in low flows are projected in many other areas, particularly southern Africa and South America. Flows in the Amazon are projected to decline by up to 25%. Increases in either heavy rainfall or drought events imply increased vulnerability to food insecurity, but if global warming is limited to 1.5°C, this vulnerability is projected to remain smaller than at 2°C global warming in approximately 76% of developing countries. At 2°C, four countries are projected to reach unprecedented levels of vulnerability to food insecurity.\n\nThis article is part of the theme issue 'The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5°C above pre-industrial levels'.\n\n## 1. Introduction\n\nThe majority of climate-change impacts assessments have tended to be framed in terms of future time horizons, e.g. impacts by the middle or end of the twenty-first century [1,2]. However, with international climate policy now largely focused on limiting warming to specific levels of global mean temperature such as 2°C [3] or 1.5°C [4], policy-relevant climate impacts assessments increasingly need to be framed in terms of such warming levels.\n\nThere are two major research questions concerning the impacts of climate change at 1.5°C and 2°C global warming, which are relevant to both mitigation and adaptation policy areas.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed11.pdf" - }, - { - "text": "make global action salient for people talking about global warming than people talking about climate change [40], even though the facts of climate issues are highly recognized in both discourses.\n\n## 6. 
Conclusions\n\nAs social media is gradually overtaking the role of legacy media providing a forum for public discussion, the semantic associations contained in social media discussions reflect and reinforce how individuals portray global climate issues. By examining hashtag co-occurrence patterns on Twitter between 2009 and 2018, we identified distinct climate perceptions hidden behind two competing climate discourses and discovered how these two discourses evolved.\n\nWe found that broad scientific, social, political, and international discussions are the topics of public climate discourse. Although the semantic difference between climate change and global warming seems subtle, the differences in their cognitive associations are not trivial. Despite some shared concerns between the two discourses, 'global warming' is more politicized and focuses more on general phenomena, especially temperature abnormalities, whereas climate change is a more compact topic with a more scientific perspective and tends to refer to specific issues. The temporal analysis revealed that traditional political discussions decreased in both discourses but climate change started to build a discourse alliance with diverse domestic issues to show political intentions. Global warming's associations to extreme events and temperature change were suddenly strengthened around 2012. Climate change is becoming dominant compared with global warming in public discussions. Although the two discourses are becoming increasingly similar in the rank order of climate concepts, a notable discrepancy still exists in the way in which they get concepts associated. 
These observations may provide climate communicators with theoretical and practical hints to narrow the discrepancy between diverse climate perceptions.\n\n## Limitation and Future Directions\n\nThough big data allowed us to decrease the bias by dealing with the whole set of social media data rather than samples, discrepancies still exist between social media users and the public. As most Twitter users do not disclose their age, education, income, and gender in users' profile, demographics were not introduced as moderator factors in this study. Previous studies noted that in the 1970s, global cooling was a prominent climate concern amongst the public [105]. While in the 1980s, ozone layer depletion, species extinction and rainforest destruction became salient on the mass media agenda [106]. Considering the historical background of climate issues, age might influence how individuals perceive climate issues. According to the statistics in 2017 [107], only 16% of older people (older than 60) in America use Twitter, while the proportion is 39% for people between 30-59 years old and 47% for people younger than 30 years old (Statista, 2017). Our results reflect the climate perception of older people who use Twitter, as well as younger people amongst whom Twitter is more popular. Although some scholars reported that it is statistically reliable to take data on Twitter as a substitute and supplement for polling [108], we thought our results should be further examined before being generalized to the whole population.", - "page_start": 15, - "page_end": 15, - "source_file": "pubmed10.pdf" - }, - { - "text": "A further complexity in identifying precise mechanisms for regional changes is that the experimental design used here, with one atmospheric model and concentration/emissions scenario but six different SST and SIC patterns, means that the impact of spatial heterogeneity in radiative forcings is complex and involves a mix of effects in HadGEM3 and the original CMIP5 models. 
In the case of aerosols, for example, our HadGEM3 simulations are driven with RCP8.5 aerosol emissions and the aerosol concentrations are then calculated within the model itself. The spatial distributions of aerosol optical depth and radiative forcing can, therefore, be expected to be reasonably similar, because they arise from the same emissions scenario, although some differences may occur due to the different regional climate-change patterns. However, the impact of aerosols is also seen in the SST and SIC changes, because these will have responded to changes in regional aerosol radiative forcing in the original CMIP5 simulations. Therefore, these SST and SIC patterns will carry the 'memory' of aerosol changes in the original CMIP5 projections.\n\nOne example of an impact of changing aerosol radiative forcing could be the precipitation changes in northern South America including Amazonia. All ensemble members show a general drying in this region, as seen in RX5day and mean run-off results. The reduction in Rx5day is particularly notable, because the general expectation would be for an increase in heavy rainfall events in a warmer climate, as is seen in most other regions in these projections. This reduced rainfall in the Amazon region may be associated with the reducing net negative aerosol radiative forcing in the North Atlantic [40]. CO2 physiological forcing may also play a role here [41,42].", - "page_start": 23, - "page_end": 23, - "source_file": "pubmed11.pdf" - }, - { - "text": "\n\nIn China, which emits more carbon dioxide than any other country, finding ways of promoting new energy-saving measures and restructuring industry have become pressing issues. 
\n\nThe Japan Research Institute has built up a successful track record in the course of its advisory activities in China, in joint research into local-level microgrid construction at the Tianjin Eco-City, and in policy-making relating to renewable energy management systems and other areas.\n\nIn partnership with the Guangdong Provincial Department of Science and Technology, the Japan Research Institute also advises government departments on system establishment for new energy-saving businesses. Guangdong is China's richest province by gross provincial product, and here both needs and potential in the field of energy-saving are very great. The Japan Research Institute also supports industrial restructuring and low-carbon projects in the province through model projects.\n\nIGEM2010 greeted many visitors\n\nIn the battle against global warming, both public and private sectors are facing mounting pressure to curb carbon dioxide pollution from transportation, one of the major sources of emissions. Against this backdrop, the Japan Research Institute is supporting environmental businesses that map out pathways and develop projects, tailored to the needs of particular localities, to bring about a low-carbon society. Experimental projects are currently underway in Kanagawa Prefecture, Saitama Prefecture, Kyoto and Sapporo. These initiatives are aimed at hastening the adoption of electric vehicles and car-sharing to cut carbon dioxide emissions. The Institute is working in cooperation with government bodies, car-rental, commercial vehicle-leasing and parking-facility management companies, railways, communications providers and 
other entities.\n\nElectric vehicles not only emit no carbon dioxide, but offer a comfortable drive as well\n\n\n\n", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_SMFG_2011.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv5_ccby4license.pdf", - "query": "How did the Black Lives Matter movement influence the writing of Wikipedia articles ?", - "target_page": 5, - "target_passage": " the Black Lives Matter movement (BLM) influenced Wikipedia article generation and editing such that, as the BLM movement grew, articles covering shootings of Black people in- creased in coverage and were generated with reduced latency", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "committed to physicalism. Unlike Type-A Materialists, however, Type-B Materialists do accept inconceivability arguments often cited in support of the hard problem, but with a key caveat: that inconceivability arguments give us insight only into how the human mind tends to conceptualize the relationship between mind and matter, but not into what the true nature of this relationship actually is. [43][52] According to this view, there is a gap between two ways of knowing (introspection and neuroscience) that will not be resolved by understanding all the underlying neurobiology, but still believe that consciousness and neurobiology are one and the same in reality. [43]\n\nWhile Type-B Materialists all agree that intuitions about the hard problem are psychological rather than ontological in origin, they differ as to whether our intuitions about the hard problem are innate or culturally conditioned. This has been dubbed the \"hard-wired/soft-wired distinction.\" [88][89] In relation to Type-B Materialism, those who believe that our intuitions about the hard problem are innate (and therefore common to all humans) subscribe to the \"hard-wired view\". 
[89] Those that believe our intuitions are culturally conditioned subscribe to the \"soft-wired view\". Unless otherwise specified, the term Type-B Materialism refers to the hard-wired view. [89]\n\nNotable philosophers who subscribe to Type-B Materialism include David Papineau, [90] Joseph Levine, [91] and Janet Levine. [55]\n\n## The \"hard-wired view\"\n\nJoseph Levine (who formulated the notion of the explanatory gap) states: \"The explanatory gap argument doesn't demonstrate a gap in nature, but a gap in our understanding of nature.\" [91] He nevertheless contends that full scientific understanding will not close the gap, [43] and that analogous gaps do not exist for other identities in nature, such as that between water and H 2 O. [92] The philosophers Ned Block and Robert Stalnaker agree that facts about what a conscious experience is like to the one experiencing it cannot be deduced from knowing all the facts about the underlying physiology, but by contrast argue that such gaps of knowledge are also present in many other cases in nature, such as the distinction between water and H 2 O. [93][12]\n\nTo explain why these two ways of knowing (i.e. third-person scientific observation and first-person introspection) yield such different understandings of consciousness, weak reductionists often invoke the phenomenal concepts strategy , which argues the difference stems from our inaccurate phenomenal concepts (i.e., how we think about consciousness), not from the nature of consciousness itself. [94][95] By this view, the hard problem of consciousness stems from a dualism of concepts, not from a dualism of properties or substances. [43]\n\n## The \"soft-wired view\"\n\nSome consciousness researchers have argued that the hard problem is a cultural artifact, unique to contemporary Western Culture. 
This is similar to Type-B Materialism, but it makes the further claim that the psychological facts that cause us to intuit the hard problem are not innate, but culturally conditioned. Notable researchers who hold this view include Anna Wierzbicka, [96] Hakwan Lau and Matthias Michel. [97]\n\nWierzbicka (who is a linguist) argues that the vocabulary used by consciousness researchers (including words like experience and consciousness ) are not universally translatable, and are \"parochially English.\" [96] Wierzbicka calls David Chalmers out by name for using these words, arguing that if", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia2.pdf" - }, - { - "text": "philosophers \"were to use panhuman concepts expressed in cross-translatable words\" (such as know , think , or feel ) then the hard problem would dissolve. [96] David Chalmers has responded to these criticisms by saying that he will not \"apologize for using technical terms in an academic article . . . they play a key role in efficient communication in every discipline, including Wierzbicka's\". [89]\n\n## Type-C Materialism\n\nType-C materialists acknowledge a distinction between knowledge and experience [98] without asserting a more complete explanation for the experiential phenomenon. One taking this view would admit that there is an explanatory gap for which no answer to date may be satisfactory, but trust that inevitably the gap will be closed. [52] This is described by analogy to progression in other areas of science, such as mass-energy equivalence which would have been unfathomable in ancient times, [52] abiogenesis which was once considered paradoxical from an evolutionary framework, [99][98] or a suspected future theory of everything combining relativity and quantum mechanics. 
Similarly, type-C materialism posits that the problem of consciousness is a consequence of our ignorance [71][100] but just as resolvable as any other question in neuroscience.\n\nBecause the explanatory question of consciousness is evaded, type-C materialism does not presuppose [101] the descriptive question, for instance that there is any self-consciousness, wakefulness, or even sentience [102] in a rock. Principally, the basis for the argument arises from the apparently high correlation of consciousness with living brain tissue, [103] thereby rejecting panpsychism [101] without explicitly formulating physical causation. More specifically this position denies the existence of philosophical zombies [64] for which there is an absence of data and no proposed method of testing. [104][105] Whether via the inconceivability or actual nonexistence of zombies, a contradiction is exposed nullifying the premise of the consciousness problem's \"hardness\".\n\nType-C materialism is compatible with several cases and could collapse into one of these other metaphysical views [52] depending on scientific discovery and its interpretation. With evidence of emergence, it resolves to strong reductionism under type A. With a different, possibly cultural paradigm for understanding consciousness, it resolves to type-B materialism. [32] If consciousness is explained by the quantum mind, then it resolves to property dualism under type D. [106] With characterization of intrinsic properties in physics extending beyond structure and dynamics, it could resolve to type-F monism. [52]\n\n## Type-D Dualism\n\nDualism views consciousness as either a non-physical substance separate from the brain or a non-physical property of the physical brain. [107] Dualism is the view that the mind is irreducible to the physical body. [107] There are multiple dualist accounts of the causal relationship between the mental and the physical, of which interactionism and epiphenomenalism are the most common today. 
Interactionism posits that the mental and physical causally impact one another, and is associated with the thought of René Descartes (1596-1650). [52] Epiphenomenalism holds the mental is causally dependent on the physical, but does not in turn causally impact it. [52]\n\nIn contemporary philosophy, interactionism has been defended by philosophers including Martine Nida-Rümelin, [108] while epiphenomenalism has been defended by philosophers including Frank Jackson [109][110] (although Jackson later changed his stance to physicalism). [111] Chalmers has also", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia2.pdf" - }, - { - "text": "\n\nmost similar to the ones used in GPT-2's training data, i.e. documents linked to from Reddit [25], plus Wikipedia and a collection of books. While this was reportedly effective at filtering out documents that previous work characterized as 'unintelligible' [134], what is unmeasured (and thus unknown) is what else it filtered out. The Colossal Clean Crawled Corpus [107], used to train a trillion parameter LM in [43], is cleaned, inter alia , by discarding any page containing one of a list of about 400 'Dirty, Naughty, Obscene or Otherwise Bad Words' [p.6]. 14 This list is overwhelmingly words related to sex, with a handful of racial slurs and words related to white supremacy (e.g. swastika , white power ) included. While possibly effective at removing documents containing pornography (and the associated problematic stereotypes encoded in the language of such sites [125]) and certain kinds of hate speech, this approach will also undoubtedly attenuate, by suppressing such words as twink , the influence of online spaces built by and for LGBTQ people. 
15 If we filter out the discourse of marginalized populations, we fail to provide training data that reclaims slurs and otherwise describes marginalized identities in a positive light.\n\nThus at each step, from initial participation in Internet fora, to continued presence there, to the collection and finally the filtering of training data, current practice privileges the hegemonic viewpoint. In accepting large amounts of web text as 'representative' of 'all' of humanity we risk perpetuating dominant viewpoints, increasing power imbalances, and further reifying inequality. We instead propose practices that actively seek to include communities underrepresented on the Internet. For instance, one can take inspiration from movements to decolonize education by moving towards oral histories due to the overrepresentation of colonial views in text [35, 76, 127], and curate training datasets through a thoughtful process of deciding what to put in, rather than aiming solely for scale and trying haphazardly to weed out, post-hoc, flotsam deemed 'dangerous', 'unintelligible', or 'otherwise bad'.\n\n## 4.2 Static Data/Changing Social Views\n\nA central aspect of social movement formation involves using language strategically to destabilize dominant narratives and call attention to underrepresented social perspectives. Social movements produce new norms, language, and ways of communicating. This adds challenges to the deployment of LMs, as methodologies reliant on LMs run the risk of 'value-lock', where the LM-reliant technology reifies older, less-inclusive understandings.\n\nFor instance, the Black Lives Matter movement (BLM) influenced Wikipedia article generation and editing such that, as the BLM movement grew, articles covering shootings of Black people increased in coverage and were generated with reduced latency [135]. 
Importantly, articles describing past shootings and incidents of police brutality were created and updated as articles for new events were created, reflecting how social movements make connections between events in time to form cohesive narratives [102]. More generally, Twyman et al. [135] highlight how social movements actively influence framings and reframings of minority narratives in the type of online discourse that potentially forms the data that underpins LMs.", - "page_start": 4, - "page_end": 4, - "source_file": "arxiv5_ccby4license.pdf" - }, - { - "text": "physical constituents. For example, water is nothing more than H 2 O molecules, and understanding everything about H 2 O molecules is to understand everything there is to know about water. But consciousness is not like this. Knowing everything there is to know about the brain, or any physical system, is not to know everything there is to know about consciousness. Consciousness, then, must not be purely physical. [27]\n\n## Implications for physicalism\n\nChalmers's idea contradicts physicalism, sometimes labelled materialism. This is the view that everything that exists is a physical or material thing, so everything can be reduced to microphysical things. For example, the rings of Saturn are a physical thing because they are nothing more than a complex arrangement of a large number of subatomic particles interacting in a certain way. According to physicalism, everything, including consciousness, can be explained by appeal to its microphysical constituents.\n\nThe hard problem is often illustrated by appealing to the logical possibility of inverted visible spectra. If there is no logical contradiction in supposing that one's colour vision could be inverted, it follows that mechanistic explanations of visual processing do not determine facts about what it is like to see colours. 
Chalmers's hard problem presents a counterexample to this view and to other phenomena like swarms of birds, since it suggests that consciousness, like swarms of birds, cannot be reductively explained by appealing to their physical constituents. Thus, if the hard problem is a real problem then physicalism must be false, and if physicalism is true then the hard problem must not be a real problem.\n\nThough Chalmers rejects physicalism, he is still a naturalist. [27]\n\n## Historical precedents\n\nThe hard problem of consciousness has scholarly antecedents considerably earlier than Chalmers. Chalmers himself notes that \"a number of thinkers in the recent and distant past\" have \"recognised the particular difficulties of explaining consciousness.\" [33] He states that all his original 1996 paper contributed to the discussion was \"a catchy name, a minor reformulation of philosophically familiar points\". [33]\n\nAmong others, thinkers who have made arguments similar to Chalmers' formulation of the hard problem include Isaac Newton, [34] John Locke, [35] Gottfried Wilhelm Leibniz, [36][34] John Stuart Mill, [37] and Thomas Henry Huxley. [38][34] Likewise, Asian philosophers like Dharmakirti and Guifeng Zongmi discussed the problem of how consciousness arises from unconscious matter. [34][39][40][41]\n\n## Related concepts\n\nThe mind-body problem\n\nA swarm of birds showing high order structure emerging from simpler physical constituents\n\n", - "page_start": 3, - "page_end": 3, - "source_file": "wikipedia2.pdf" - }, - { - "text": "## Article", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed3.pdf" - }, - { - "text": "defended versions of both positions as plausible. 
[52] Traditional dualists such as Descartes believed the mental and the physical to be two separate substances, or fundamental types of entities (hence \"substance dualism\"); some more recent dualists, however, accept only one substance, the physical, but state it has both mental and physical properties (hence \"property dualism\"). [107]\n\n## Type-E Dualism\n\n## Type-F Monism\n\nMeanwhile, panpsychism and neutral monism, broadly speaking, view consciousness as intrinsic to matter. [52] In its most basic form, panpsychism holds that all physical entities have minds (though its proponents take more qualified positions), [112] while neutral monism, in at least some variations, holds that entities are composed of a substance with mental and physical aspects-and is thus sometimes described as a type of panpsychism. [113]\n\nForms of panpsychism and neutral monism were defended in the early twentieth century by the psychologist William James, [114][115][note 2] the philosopher Alfred North Whitehead, [115] the physicist Arthur Eddington, [116][117] and the philosopher Bertrand Russell, [112][113] and interest in these views has been revived in recent decades by philosophers including Thomas Nagel, [115] Galen Strawson, [115][118] Philip Goff, [115] and David Chalmers. [112] Chalmers describes his overall view as \"naturalistic dualism\", [1] but he says panpsychism is in a sense a form of physicalism, [52] as does Strawson. [118] Proponents of panpsychism argue it solves the hard problem of consciousness parsimoniously by making consciousness a fundamental feature of reality. [43][119]\n\n## Idealism and cosmopsychism\n\nA traditional solution to the hard problem is idealism, according to which consciousness is fundamental and not simply an emergent property of matter. It is claimed that this avoids the hard problem entirely. [120] Objective idealism and cosmopsychism consider mind or consciousness to be the fundamental substance of the universe. 
Proponents claim that this approach is immune to both the hard problem of consciousness and the combination problem that affects panpsychism. [121][122][123]\n\nFrom an idealist perspective, matter is a representation or image of mental processes. Supporters suggest that this avoids the problems associated with the materialist view of mind as an emergent property of a physical brain. [124] Critics argue that this then leads to a decombination problem: how is it possible to split a single, universal conscious experience into multiple, distinct conscious experiences? In response, Bernardo Kastrup claims that nature hints at a mechanism for this in the condition dissociative identity disorder (previously known as Multiple Personality Disorder). [125] Kastrup proposes dissociation as an example from nature showing that multiple minds with their own individual subjective experience could develop within a single universal mind.\n\nCognitive psychologist Donald D. Hoffman uses a mathematical model based around conscious agents, within a fundamentally conscious universe, to support conscious realism as a description of nature-one that falls within the objective idealism approaches to the hard problem: \"The objective world, i.e., the world whose existence does not depend on the perceptions of a particular conscious agent, consists entirely of conscious agents.\" [126]", - "page_start": 12, - "page_end": 12, - "source_file": "wikipedia2.pdf" - }, - { - "text": "Work through these panes to customize your Custom volume as wanted, and then commit these changes by clicking Create .", - "page_start": 286, - "page_end": 286, - "source_file": "sg247938.pdf" - }, - { - "text": "## Word\n\n## Get writing suggestions\n\nWith Editor , bring out your best writing. Editor helps you bring out your best writing by giving you intelligent writing suggestions. It also calculates an Editor Score based on the number and types of suggestions you have yet to address. 
Select an underlined word or phrase to accept or ignore a suggestion.\n\n\n\n## Review and track changes\n\nWhether you just want to check spelling, keep your word count in check, or fully collaborate with other people, the Review tab has essential commands to track, discuss, and manage all of the changes made to your documents.\n\n\n\n\n\n## View who else is typing\n\nCo-authoring Word documents that are shared on OneDrive or on a SharePoint site happens in real-time, which means you can easily view where other authors are making changes in the same document that you're currently working in.\n\n\n\n## Format with styles\n\nStyles lets you create, apply, and review the formatting styles in your current document. To open it, select the Home tab, and then select the small arrow in the lower right corner of the Styles gallery.", - "page_start": 2, - "page_end": 2, - "source_file": "Word QS.pdf" - }, - { - "text": "of being conscious is merely an error in perception, held by brains which evolved to hold erroneous and incomplete models of their own internal workings, just as they hold erroneous and incomplete models of their own bodies and of the external world. [77][78]\n\n## Criticisms\n\nThe main criticisms of eliminative materialism and illusionism hinge on the counterintuitive nature of the view. Arguments of this form are called Moorean Arguments . A Moorean argument seeks to undermine the conclusion of an argument by asserting that the negation of that conclusion is more certain than the premises of the argument. [79]\n\nThe roots of the Moorean Argument against illusionism extend back to Augustine of Hippo who stated that he could not be deceived regarding his own existence, since the very act of being deceived secures the existence of a being there to be the recipient of that deception. 
[note 1][80]\n\nIn the Early-Modern era, these arguments were repopularized by René Descartes, who coined the now famous phrase \"Je pense, donc je suis\" (\"I think, therefore I am\"). [81] Descartes argued that even if he was maximally deceived (because, for example, an evil demon was manipulating all his senses) he would still know with certainty that his mind exists, because the state of being deceived requires a mind as a prerequisite. [82]\n\nThis same general argumentative structure is still in use today. For example, in 2002 David Chalmers published an explicitly Moorean argument against illusionism. The argument goes like this: The reality of consciousness is more certain than any theoretical commitments (to, for example, physicalism) that may be motivating the illusionist to deny the existence of consciousness. The reason for this is because we have direct \"acquaintance\" with consciousness, but we do not have direct acquaintance with anything else (including anything that could inform our beliefs in consciousness being an illusion). In other words: consciousness can be known directly, so the reality of consciousness is more certain than any philosophical or scientific theory that says otherwise. [83] Chalmers concludes that \"there is little doubt that something like the Moorean argument is the reason that most people reject illusionism and many find it crazy.\" [84]\n\nEliminative materialism and illusionism have been the subject of criticism within the popular press. One highly cited example comes from the philosopher Galen Strawson who wrote an article in the New York Review of Books titled \"The Consciousness Deniers\". 
In it, Strawson describes illusionism as the \"silliest claim ever made\", next to which \"every known religious belief is only a little less sensible than the belief that the grass is green.\" [85] Another notable example comes from Christof Koch (a neuroscientist and one of the leading proponents of Integrated Information Theory) in his popular science book The Feeling of Life Itself . In the early pages of the book, Koch describes eliminativism as the \"metaphysical counterpart to Cotard's syndrome, a psychiatric condition in which patients deny being alive.\" [86] Koch takes the prevalence of eliminativism as evidence that \"much of twentieth-century analytic philosophy has gone to the dogs\". [87]\n\n## Type-B Materialism\n\nType-B Materialism, also known as Weak Reductionism or A Posteriori Physicalism , is the view that the hard problem stems from human psychology, and is therefore not indicative of a genuine ontological gap between consciousness and the physical world. [43] Like Type-A Materialists, Type-B Materialists are", - "page_start": 9, - "page_end": 9, - "source_file": "wikipedia2.pdf" - }, - { - "text": "The philosophers Glenn Carruthers and Elizabeth Schier said in 2012 that the main arguments for the existence of a hard problem-philosophical zombies, Mary's room, and Nagel's bats-are only persuasive if one already assumes that \"consciousness must be independent of the structure and function of mental states, i.e. that there is a hard problem.\" Hence, the arguments beg the question. The authors suggest that \"instead of letting our conclusions on the thought experiments guide our theories of consciousness, we should let our theories of consciousness guide our conclusions from the thought experiments.\" [64]\n\nThe philosopher Massimo Pigliucci argued in 2013 that the hard problem is misguided, resulting from a \"category mistake\". 
[17] He said: \"Of course an explanation isn't the same as an experience, but that's because the two are completely independent categories, like colors and triangles. It is obvious that I cannot experience what it is like to be you, but I can potentially have a complete explanation of how and why it is possible to be you.\" [17]\n\nIn 2017, the philosopher Marco Stango, in a paper on John Dewey's approach to the problem of consciousness (which preceded Chalmers' formulation of the hard problem by over half a century), noted that Dewey's approach would see the hard problem as the consequence of an unjustified assumption that feelings and functional behaviors are not the same physical process: \"For the Deweyan philosopher, the 'hard problem' of consciousness is a 'conceptual fact' only in the sense that it is a philosophical mistake : the mistake of failing to see that the physical can be had as an episode of immediate sentiency.\" [65]\n\nThe philosopher Thomas Metzinger likens the hard problem of consciousness to vitalism, a formerly widespread view in biology which was not so much solved as abandoned. [66] Brian Jonathan Garrett has also argued that the hard problem suffers from flaws analogous to those of vitalism. [67]\n\nThe philosopher Peter Hacker argues that the hard problem is misguided in that it asks how consciousness can emerge from matter, whereas in fact sentience emerges from the evolution of living organisms. [68] He states: \"The hard problem isn't a hard problem at all. The really hard problems are the problems the scientists are dealing with. [...] The philosophical problem, like all philosophical problems, is a confusion in the conceptual scheme.\" [68] Hacker's critique extends beyond Chalmers and the hard problem, being directed against contemporary philosophy of mind and neuroscience more broadly. 
Along with the neuroscientist Max Bennett, he has argued that most of contemporary neuroscience remains implicitly dualistic in its conceptualizations and is predicated on the mereological fallacy of ascribing psychological concepts to the brain that can properly be ascribed only to the person as a whole. [69] Hacker further states that \"consciousness studies\", as it exists today, is \"literally a total waste of time\" and that \"the conception of consciousness which they have is incoherent\". [68]\n\n## Eliminative materialism / Illusionism\n\nEliminative materialism or eliminativism is the view that many or all of the mental states used in folk psychology (i.e., common-sense ways of discussing the mind) do not, upon scientific examination, correspond to real brain mechanisms. [59] According the 2020 PhilPapers survey, 4.51% of philosophers surveyed subscribe to eliminativism. [25]", - "page_start": 7, - "page_end": 7, - "source_file": "wikipedia2.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2648.pdf", - "query": "Concerning electrolyte solutions, what assumption makes the primitive model (PM) regarding ions?", - "target_page": 1, - "target_passage": "simple phenomenological models such as the primitive model (PM), for which the ions are assimi- lated to charged hard spheres", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Models of electrolyte solutions from molecular descriptions: The example of NaCl solutions\n\nJohn Jairo Molina 1 , 2 , 3 , ∗ Jean-Fran¸cois Dufrˆeche 1 , 2 , 3 , † Mathieu Salanne 1 , 2 , Olivier Bernard 1 , 2 , Marie Jardat 1 , 2 , and Pierre Turq 1 , 2 1 UPMC-Universit'e Paris 06, UMR 7195, PECSA, F-75005 Paris, France 2 CNRS, UMR 7195, PECSA, F-75005 Paris, France 3 Institut de Chimie S'eparative de Marcoule (ICSM), UMR 5257 CEA-CNRS-Universit'e Montpellier 2, Site de Marcoule,\n\nBˆatiment 426, BP 17171, 30207 Bagnols-sur-C'eze Cedex, France\n\nWe present a method to 
derive implicit solvent models of electrolyte solutions from all-atom descriptions; providing analytical expressions of the thermodynamic and structural properties of the ions consistent with the underlying explicit solvent representation. Effective potentials between ions in solution are calculated to perform perturbation theory calculations, in order to derive the best possible description in terms of charged hard spheres. Applying this method to NaCl solutions yields excellent agreement with the all-atom model, provided ion association is taken into account.\n\nSince the pioneering works of Debye, Huckel, and Onsager, electrolyte solutions have been commonly described by continuous solvent models, for which the McMillan-Mayer theory [1] provides a rigorous statistical-mechanical foundation. Within that level of description, simple phenomenological models such as the primitive model (PM), for which the ions are assimilated to charged hard spheres [2], can lead to explicit formulas for the thermodynamic and structural properties (e.g., with the help of the mean spherical approximation (MSA) [3] or the binding MSA (BIMSA) [4]). These models are the most practical to use [5], since they allow for a direct link between the experimental measurements and the microscopic parameters of the system. Nevertheless, they ignore the molecular structure of the solvent. Consequently, they cannot properly account for the complex specific effects of the ions, which appear in numerous biological, chemical, and physical interfacial phenomena [6, 7], without further developments.\n\nAn alternative procedure consists in carrying out molecular simulations, where both the solvent and solute are treated explicitly. After a rigorous averaging over the solvent configurations, a coarse-grained description of the ions, which still includes the effect of the solvent structure, can be obtained [8-11]. 
However, this set of methods is purely numeric; they do not provide any analytical expression for thermodynamic quantities. They are therefore restricted to simple geometries [12, 13] (bulk solutions or planar interfaces). The description of complex systems, such as porous or electrochemical materials, is still based on continuous solvent models [14].\n\nIn this letter we present a method aimed at bridging the gap between analytical and numerical approaches. It is based on the application of liquid perturbation theory (LPT) [15] to effective ion-ion potentials extracted from\n\nmolecular dynamics (MD) results. Different approximations of the PM are employed for the case of NaCl electrolyte solutions: a two component model (MSA2), that only takes free ions into account, and two different three component models (MSA3 and BIMSA3), which include a third species (the contact ion pair). As we proceed to show, LPT allows us to select the best simple model which accurately accounts for the thermodynamics and the physical-chemistry of the system.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2648.pdf" - }, - { - "text": "To overcome this difficulty, we have explicitly introduced the CIP in our model (species 3). Straightforward calculations, based on a characteristic-function formalism, allow us to define an equivalent model in which the free ions and the CIP are explicitly taken into account [19, 20]. We apply this formalism by defining a pair as an anion and a cation at a distance less than 4 ˚ A, which corresponds to the position of the effective potential maximum. The interaction between free, like charges in this new system remains unchanged, and the cation-anion interactions are easily approximated by ex-", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2648.pdf" - }, - { - "text": "FIG. 1: Effective McMillan-Mayer short-range pair potentials extracted from explicit solvent simulations using the HNC closure. 
(a) Cation anion, (b) cation cation, (c) anion anion, (d) cation anion RDF obtained from explicit solvent MD and implicit solvent MC simulations.\n\n\n\npute all ion thermodynamic properties through implicit solvent MC simulations.\n\nThe second stage of our coarse-graining procedure consists in applying LPT, in order to deduce the best analytical model of electrolyte solutions which reproduces this molecular description. The principle of LPT is to describe the properties of a given system in terms of those of a well known reference system, with the difference between them treated as a perturbation in the reference potential. Assuming pairwise additive potentials, V ij = V (0) ij + ∆V ij , a first-order truncated expression for the free energy density of the system βf v is obtained,\n\nβf v /lessorsimilar βf (0) v + 1 2 β ∑ i,j ρ i ρ j ∫ d r g (0) ij ( r ) ∆V ij ( r ) (1)\n\nwhich depends only on the free-energy density f (0) v and RDF g (0) of the reference fluid, with β = ( k B T ) -1 and ρ i the concentration of species i . The Gibbs-Bogoliubov inequality [15] ensures that the right-hand side of Eq. (1) is actually a strict upper bound. Once a reference system has been chosen, the expression on the right-hand side of Eq. (1) must be minimized with respect to the parameters defining the reference. This procedure yields the best first-order approximation to the free energy of the system under consideration.\n\nFor a system of charged particles in solution, the natural reference is the PM, defined in terms of the charge and diameter ( σ i ) of each species. In this case, the perturbing potentials are just the short-range effective potentials computed above (∆ V ij = V SR ij ). We use the MSA [3] solution to the PM, since it provides analytical expressions for both the free energy and the RDF. 
The perturbation term is evaluated using an exponential approximation to the RDF obtained within the MSA, g ( r ) = exp [ g MSA ( r ) -1], which removes any unphysical negative regions and improves the comparison with HNC calculations.\n\nΦ\n\nFIG. 2: (Color online) (a) Osmotic coefficient Φ in the McMillan-Mayer frame of reference. (diamond) MC simulations, (dot dashed) MSA2, (dot) Debye Huckel Limiting law (DHLL), (cross) experiments (Ref. [18] with the McMillanMayer to Lewis Randall conversion). (b) Minimization diameters. (dot dashed) MSA2 and (diamond) MSA-fit.\n\n\n\nWe first used LPT for a two-component system (Na + and Cl -free ions) within the MSA (model MSA2), for concentrations ranging from 0.1 to 2 . 0 mol l -1 . The minimization leads to almost constant diameters on the whole range of concentration: σ 1 = 3 . 67 ˚ A and σ 2 = 4 . 78 ˚ A. As shown in Fig. 2, these parameters yield osmotic coefficients close to MC calculations only at very low concentration, i.e., c ≤ 0 . 1 moll -1 (experimental values are given for indicative purposes only, since a perfect model will exactly match the MC results). For molar solutions, the LPT results differ considerably from MC calculations. This discrepancy can easily be understood by comparing the diameters found within the MSA2 calculation with the effective potentials given in Fig. 1. The anion/cation contact distance obtained within the MSA2 calculation is 4 . 2 ˚ A, which is in the region of the second minimum of the effective potential and corresponds to the situation where there is a single layer of water molecules between the ions. The first minimum of the potential, which corresponds to the contact ion pair (CIP) is thus completely ignored by the MSA2 calculation. If the MSA diameters are directly fitted to reproduce the MC osmotic pressure, much smaller values are obtained. These MSA-fit hydrated diameters, which are compared to the MSA2 diameters in the bottom part of Fig. 
2, are averages of the CIP and the solvent-separated ion pair.", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2648.pdf" - }, - { - "text": "ing the temporal dynamics of belief changes in experimental participants. Dynamic belief trajectories can then be related to other (for example, physiological) measures, as is usual in model-based neuroscience [65]. This method can also, in principle, be used for fitting models to other types of experimentally observable systems, like animals, organoids [66], and simulated or emergent systems [67]. The package can also be used for agent-based modelling in general, for repeating earlier analyses with sampling based model-fitting and for comparing POMDP-based AIF models directly to other types of models.\n\nSince they implement full approximate Bayesian inferences, AIF models are computationally more demanding than many approaches traditionally used in cognitive and agent-based modelling, in particular when the dimensionality of the generative model is large. This means that models with highly multidimensional or complex behaviour and large numbers of agents can be computationally infeasible to implement, especially given the additional computational demands introduced by fitting these models to empirical data. Avenues for addressing this implicit scaling problem were proposed in the context of machine learning applications [68,69], and with the use of simplifying assumptions-the use of which are ubiquitous in computational modelling-AIF has been used to model multi-agent phenomena, such as opinion dynamics [15,70], coordinated foraging [71] and fish school movements [12]. It remains to be explored how AIF models can be applied to highly complex natural phenomena, such as a concrete election, which underscores the need for efficient but flexible and accessible software tools in the field.\n\nThere are many ways in which ActiveInference can be improved. 
It would be useful to extend the set of dynamic belief states to include prediction errors since they are often used for model-based neuroscience. This would entail departing from discrete state-space (i.e., POMDP) models to consider continuous state-space models apt for Bayesian filtering or predictive coding (see below). An alternative would be to generate prediction errors from belief updating under discrete models, where prediction errors can be read as the (KL) divergence between posterior and prior beliefs (i.e., complexity or information gain). A simple interface could be added for creating custom parametrisations of the requisite parameters that could be parametrised with Boltzmann or Gibbs distributions, as opposed to Dirichlet distributions. Parameter learning could be extended to all generative model parameters, as well as in parametrised forms (e.g., so that the Boltzmann parameter or temperature of the parameters that are learned); similarly for the precision over expected free energies γ . Preference priors should also be implementable for environmental states, in addition to observations, and A can be made action dependent.", - "page_start": 28, - "page_end": 28, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "The first stage consists in calculating the McMillanMayer effective ion-ion interaction potentials V eff ij ( r ), by inverting the radial distribution functions (RDF) g ij ( r ) obtained by MD. The simulations were carried out on a box of 2000 water molecules and 48 NaCl pairs using the same interaction potentials as in reference [16]. This setup corresponds to a concentration of 0 . 64 moll -1 . NPT ensemble sampling at standard pressure and temperature was enforced, with a time step of 1 fs and a pressure bath coupling constant of 1 ps. An equilibration run of 0.25 ns was followed by a production run of 0.6 ns for five different initial configurations. 
The averages of the resulting RDF were then used for the potential inversion via the HNC closure [15]. These effective potentials are assumed to be concentration independent and will be used for simulations at all concentrations.\n\nSubtracting the long-range Coulombic potential V LR ij ( r ) (which depends on the dielectric constant of the solvent) from V eff ij ( r ), we obtain the short-range contribution V SR ij ( r ) to the effective potentials. These are given in Fig. 1 (species 1 and 2 refer to Na + and Cl -free ions, respectively). All the short-range potentials exhibit oscillations corresponding to the solvent layering between the ions, but this effect is particularly important for the cation-anion interaction: a considerable potential barrier ( /greaterorsimilar 2 k B T ) separates the first two attractive wells. To serve as a reference, Monte Carlo (MC) simulations were performed with these effective potentials; a comparison between MD and MC RDF is also provided in Fig. 1. The excellent agreement between both sets of RDF validates the HNC inversion procedure [17], and allows us to com-", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2648.pdf" - }, - { - "text": "Alibrary of pre-made canonical POMDP models could be created so that users can easily implement them directly. Alternatives to the fixed-point iteration method for updating posteriors over environmental states could be included, like the marginal message passing algorithm. There are various ways in which the package can be made more computationally efficient, and it could be compared with other software implementations. There are plenty of utility and plotting functions that could be added to the package to make it easier to use and to facilitate integration with the model-fitting packages it relies on; for example, to allow for combining the models with linear regressions to compare parameters values of different populations in a single model. 
More complex types of POMDP models can also be added, like hierarchical and temporally deep POMDPs. Model structure learning could be considered, where different model structures are compared and chosen between by evaluating their free energies. Sophisticated inference, where predictions are also made about changes in one's own beliefs-depending on expected action-dependent observations in the future-could also be implemented [58]. Finally, the package could be extended to other types of generative models than POMDPs, including other universal models, like generalised filtering [17] and Hierarchical Gaussian Filter models [41], as well as custom", - "page_start": 28, - "page_end": 28, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "processesing business. And, I get excited about the opportunity to expand our transaction base with our mobile banking, bill payment and mobile operator solutions.\n\nThe real value of our company is in our transaction processing. Because of the low incremental cost of connecting to a new customer, anytime we sign a new contract most of the incremental revenue will now be flowing to our bottom line. The infrastructure is in place to leverage additional growth and bring us closer to being EBITDA and cash flow positive in the near term.\n\n## What role will strategic alliances play in extending\n\nAlliances are an important part of our strategic direction. Recently, we announced several partnerships that help us expand sales channels and distribution of our products and services. Our partners were looking for wireless transaction solutions to complement their own offerings, and they selected Euronet's products, proving that our solutions are rock solid.\n\nGemplus, the world's number one provider of smart card-based solutions, chose us as their global partner to provide electronic recharge solutions to mobile operators. 
We also have agreements with Sila Communications to help us market our suite of mobile banking solutions throughout Europe, the Middle East and Asia Pacific and with Aether Systems which is offering our mobile banking solutions in the United States.\n\n## Why did you change your corporate name to Euronet Worldwide last year?\n\nWe became Euronet Worldwide to more accurately reflect the company's growing presence in the global marketplace. We are no longer focused solely on Europe, and today, deliver comprehensive solutions to more than 200 customers in over 60 countries.\n\n## What was your biggest challenge in 2000?\n\nachieve high growth. As banks began moving to outsourcing rather than purchasing software to manage their transactions, we realized that this high growth would not materialize. We've basically downsized to reduce expenses to better correspond to revenue expec-\n\ntations, so we expect this division to be an EBITDA contributor from this point forward. The trend towards outsourcing negatively impacted our software business, but positively benefits our network services division.\n\nIt's important to point out that our software is an asset to our business of\n\n\n\nselling transactions. For example, our software sales doubled in the Asia Pacific region over 1999. Relationships with large financial institutions like Westpac Banking Corporation have cemented our position in Asia Pacific as a leading supplier of transaction processing solutions.\n\n## Why is ATM outsourcing important?\n\nIncreasingly, financial institutions are choosing to outsource their ATM operations to free up resources\n\n\n\nI think it was restructuring our software business late in the year. When Euronet purchased Arkansas Systems, Inc. over two years ago, the division was expected to\n\nand concentrate on their core banking business. Some analysts predict that outsourcing by the European banking and finance sector will total $91 billion by 2003. 
We are expanding our outsourcing business with wireless and Internet banking services.\n\nOur outsourcing business is thriving. Currently we provide ATM outsourcing for some of the biggest banks in the world - banks like Citibank, ABN AMRO, Deutsche Bank, Millennium\n\nand Raiffeisenbank - as they expand into emerging markets. We have contracts with Citibank in five countries, most recently in Greece and the Czech Republic.", - "page_start": 5, - "page_end": 5, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "Figure 1. Depiction of a POMDP generative model. This encodes the agent's expectations about how the state s of the environment changes over time t , and how it generates observation o at each time step. A , also called the observation model, describes how environmental states give rise to observations. B , also called the transition model, describes how environmental states change over time, depending on action u (called policy π when structured into sequences). C is the preference prior, which encodes the agent's preferences for observations. This shapes the expected free energy G associated with each policy, which is used for policy selection. D encodes the agent's prior belief over environmental states before making any observations, and E is the prior over policies that determines the agent's preferences for policies in the absence of other motivation.\n\n\n\n## 2.2. Perception in Active Inference\n\nIn AIF, perception is conceptualised as the result of variational (i.e., approximate) Bayesian inference, performed by minimising the VFE to optimise parameters of posterior beliefs about the environment. In exact Bayesian inference, we use a parametrised generative model m to make an optimal inference about state s of the environment based on observation o . 
This is performed by combining a prior belief over states p ( s | m ) ; a likelihood model p ( o | s , m ) ; and the model evidence p ( o | m ) , a normalisation term encoding the likelihood of receiving the given observations across all possible environmental states, as follows [1]:\n\np ( s | o , m ) = p ( o | s , m ) p ( s | m ) p ( o | m ) (1)\n\nThe posterior distribution over states given observations p ( s | o , m ) here represent the agent's beliefs about the environment. Forming beliefs in this way is thought to be the process that enables conscious, as well as unconscious, perception. The product of the likelihood model and prior is also called the joint likelihood p ( o , s | m ) , which fully defines the generative model, and which we use henceforth. In the following, for notational simplicity, we also omit denoting the dependency on the generative model m .\n\nCalculating the model evidence p ( o ) is often intractable, making exact Bayesian inference unfeasible. The way to circumvent this in AIF is to use a variational approximation to Bayesian inference [23,33,50,51]. This works by transforming the inference into an optimisation problem, specifically the minimisation of the VFE . First, an arbitrary probability distribution over environmental states q ( s ) , an approximate posterior that is used to approximate the exact posterior, is introduced. We then introduce the Kullback-Leibler (KL)", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "## Core Concepts\n\nAIF\n\nVFE\n\nEFE\n\nGenerative model\n\nPOMDP\n\nA ctive i nference is a formal framework for modelling behaviour and cognition. Perception and action are cast as minimising free energy-the VFE and EFE , respectively-given a generative model of the environment.\n\nThe v ariational f ree e nergy F quantifies how well a generative model explains incoming sensory observations. 
It can be rewritten as the negative log model evidence (called surprise) upper-bounded by the divergence from the optimal posterior p ( s | o ) . Perception as inference is accomplished by selecting the approximate posterior q ( s ) with the lowest associated VFE .\n\nF [ q ( s ) , o ] ≜ D KL [ q ( s ) ∥ p ( o , s )] = D KL [ q ( s ) ∥ p ( s | o )] ︸ ︷︷ ︸ Divergence -ln p ( o ) ︸ ︷︷ ︸ Surprise\n\nThe e xpected f ree e nergy G quantifies the expected future free energy under an action policy π . It consists of an information gain term and a pragmatic value term that provide a natural balance between exploratory and goal-seeking behaviour. Action as inference is accomplished by selecting the action policy with the lowest associated EFE .\n\nG π = -E q ( ˜ o , ˜ s | π ) [ ln q ( ˜ s | ˜ o , π ) -ln q ( ˜ s | π )] ︸ ︷︷ ︸ Information gain -E q ( ˜ o | π ) [ ln p ( ˜ o | C )] ︸ ︷︷ ︸ Pragmatic value\n\nThe generative model is an agent's formal assumptions about the structure and dynamics of its environment, based on which perceptual and active inferences are carried out. Many types of generative models exist that are suitable for different environments and tasks.\n\nThe P artially O bservable M arkov D ecision P rocess is a type of flexible generative model that is widely used in the AIF literature. In discrete time and usually a discrete state space, this model type is parametrised to fit a given task by a set matrices containing probability distributions.\n\n## 2. Active Inference with POMDPs\n\nIn this section, we briefly describe the core concepts of AIF and POMDPs. This should familiarise the reader with the vernacular used in the later sections regarding the functionalities of the package. 
While various extensions, such as structure learning, which enables an agent to learn the structure or shape of its environment through model comparison [44-47], or hierarchical and temporally deep POMDPs [48,49], are relevant for future work, describing these in detail is beyond the scope of this foundational paper.\n\nAt the core of AIF lies the minimisation of a variational free energy upper bound on surprise for perception, as well as action. This is motivated by the free energy principle [4-8], which states that self-organising systems can be described as minimising the variational free energy of their sensory states. The minimisation of free energy generally takes two", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "where γ is the liquid-gas surface tension and f ( h ) is a local free energy term that describes the wettability of the surface. Since µ corresponds to a chemical potential, the term µh may either bias the system towards the liquid or towards the gas state. The variation of F w.r.t. h gives the pressure. It contains the curvature (Laplace) pressure -γ ∆ h and the disjoining pressure Π( h ) = -∂ h f ( h ) . Many different forms for the latter are in use (see, e.g., Refs. [4, 8, 63, 70-73]).\n\nFor the present system a thin film description using Eq. (1) is not appropriate because the nanoparticles are not taken into account. However, under certain conditions one can augment equation (1) for the evolution of the film thickness by coupling it to an equation for the evolution of the mean particle concentration. The resulting model is able to describe the behaviour of an evaporating solution on the meso- and macroscale. Such an approach is briefly discussed below in Section III C. Weshould expect such a model to describe the mesoscopic dewetting front discussed above. 
However, the theory is less suited to a description of the dewetting dynamics of the ultrathin postcursor\n\nfilm.\n\nThe dewetting of the ultrathin film of highly concentrated suspension may be described by a discrete stochastic model such as, for instance, a kinetic Monte Carlo (KMC) model based solely on evaporation/condensation dynamics of the solvent and diffusion of the solute [35, 39, 41]. The validity of this strong assumption regarding the relevant transport processes can be confirmed from an estimate based on Eq. (1): The pressure p = δF/δh drives convection and evaporation. The convective mobility is proportional to h 3 , i.e., it is large for thick films but decreases strongly with reduced film thickness. The evaporative mobility, however, is a constant, implying that evaporation will dominate below a certain (cross-over) thickness. For the parameter values of Ref. [57] and a small contact angle ( ≈ 0 . 01 ), the cross-over thickness is in the range of 1-5 nanometers. This estimate justifies the neglect of convective transport in a description of the postcursor film and may explain why one has such good agreement between the experimentally observed patterns and the patterns obtained from a purely two-dimensional (single layer) kinetic Monte Carlo model [35]. We introduce the KMC model below in Section III A.\n\nIn several respects, however, the kinetic Monte Carlo model is rather simplistic, limiting its potential applications. For instance, the thermodynamic chemical potential as well as any wetting interaction of the solvent with the substrate are collected in a single parameter - an effective chemical potential. This implies that any influence of a disjoining pressure is 'smeared out' over the whole system and that no distinction between the short- and the long-range parts of the disjoining pressure is possible. 
It is furthermore based on the assumption that evaporation/condensation is", - "page_start": 7, - "page_end": 7, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2648.pdf", - "query": "What is the principle of the liquid perturbation theory (LPT) ?", - "target_page": 2, - "target_passage": "The principle of LPT is to describe the properties of a given system in terms of those of a well known reference system, with the differ- ence between them treated as a perturbation in the ref- erence potential", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "scopic film. We have seen that the KMC model is able to describe the interplay of solute diffusion within the solvent and solvent evaporation/condensation. It also takes the liquid-liquid, liquidparticle and particle-particle interactions into account and therefore allows us to distinguish different regimes of the transverse (fingering) instability of the evaporative dewetting front: a transport regime where the instability is almost completely independent of the interaction strengths and a demixing regime where particles and liquid demix at the receding front thereby increasing its transverse instability.\n\nThe dynamical density functional theory describes the coupled dynamics of the density fields of the liquid and the nanoparticles. In the form described above (i.e. based on the two-dimensional hamiltonian (3)) we obtain a simple theory that allows us to study the time evolution of the evaporating ultrathin film and also to investigate the influence of processes such as surface diffusion by the liquid, which are not incorporated in the KMC model. However, it is straightforward to extend the theory to consider a fully three-dimensional fluid film, in which one can distinguish between short- and long-range interactions of solvent and/or solute with the substrate. 
We have, however, restricted the examples given here to situations that can also be described using the KMC model. A further exploration will be presented elsewhere.\n\nFinally, we have discussed a simple thin film model for the hydrodynamics on the mesoscale. It results from a long-wave approximation and consists of coupled evolution equations for the film thickness profile and the mean particle concentration. It has been used to discuss the self-pinning of receding contact lines that is related to the formation of rings of dried-in particles (coffeestain effect) that frequently occurs when films or drops of solutions or suspensions dewet by the combined effects of convection and evaporation.\n\nOne of the primary goals of researchers in this field, is the search for simple-to-use techniques that allow one to produce hierarchically structured functional layers for a wide range of applications such as, e.g., organic solar cells [98]. This means that the experiments advance very rapidly towards increasingly complex systems. For example, there have been investigations of the influence of the phase behaviour on the drying of droplets of a suspension of hard-sphere colloidal particles and non-adsorbing polymer [99], of the instabilities and the formation of drops in evaporating thin films of binary solutions [100] that may lead to treelike patterns [101], of effects of a secondary phase separation on evaporation-induced pattern formation in polymer films [102], and of the influence of an imposed flow on decomposition and deposition processes in a sliding ridge of evaporating solution of a binary polymer mixture [103] and of the influence of rather", - "page_start": 23, - "page_end": 23, - "source_file": "1001.2669.pdf" - }, - { - "text": "on the model (see above). The purely two-dimensional character of the KMC was extended to a 'pseudo three-dimensional' one by making the effective chemical potential dependent on the mean liquid coverage [38]. 
As the latter is related to a mean film thickness, this corresponds to the introduction of a 'global' thickness-dependent disjoining pressure into the evaporation term without an explicit consideration of a film thickness. The amended model can reproduce bimodal structures that are beyond the scope of the purely two-dimensional model [38, 39]. Fully threedimensional models are also discussed in the literature [76, 77].\n\n## B. Dynamical Density Functional theory\n\nThe limitations of the kinetic Monte Carlo model introduced in the previous Section are related to its character as a two-dimensional lattice gas with only three states: gas, liquid or particle. This implies that (i) no liquid can be transported to a site on the surface already filled with liquid, i.e., diffusion of the liquid can not be incorporated in a sensible way and (ii) one is not able to distinguish between the influence of the short- and the long-range parts of the interactions with the substrate, as all such interactions are absorbed into the effective chemical potential.\n\nHowever, using dynamical density functional theory (DDFT) [78-83] one can develop a model for the processes in the ultrathin postcursor film without these limitations, although here we limit ourselves to developing the theory at the level of the KMC and solely discuss how to extend it to incorporate the influence of the liquid diffusion over the surface. Such a DDFT model describes the coupled dynamics of the density fields of the liquid ρ l and the nanoparticles ρ n . The densities ρ l and ρ n are defined as the probabilities of finding a given lattice site on the surface to be occupied by a film of liquid or by a nanoparticle, respectively. 
Note that the probability densities correspond to number densities as we use the lattice spacing σ = 1 as our unit of length.\n\nTo develop the DDFT, one must first derive the underlying free energy functional F [ ρ l , ρ n ] , and secondly, devise dynamical equations for both density fields that account for the conserved and the non-conserved aspects of their dynamics, i.e., transport and phase change processes, respectively. For a system governed by the hamiltonian (3), we may construct a mean-field (Bragg-Williams) approximation for the free energy of the system [78, 84] which contains an entropic contribution and contributions from the interactions between the different species (nanoparticles and liquid). The free energy is a semi-grand free energy, since the liquid is treated grand canonically (it is coupled to a reservoir with chemical potential µ ), whereas the nanoparticles are treated in the", - "page_start": 13, - "page_end": 13, - "source_file": "1001.2669.pdf" - }, - { - "text": "where γ is the liquid-gas surface tension and f ( h ) is a local free energy term that describes the wettability of the surface. Since µ corresponds to a chemical potential, the term µh may either bias the system towards the liquid or towards the gas state. The variation of F w.r.t. h gives the pressure. It contains the curvature (Laplace) pressure -γ ∆ h and the disjoining pressure Π( h ) = -∂ h f ( h ) . Many different forms for the latter are in use (see, e.g., Refs. [4, 8, 63, 70-73]).\n\nFor the present system a thin film description using Eq. (1) is not appropriate because the nanoparticles are not taken into account. However, under certain conditions one can augment equation (1) for the evolution of the film thickness by coupling it to an equation for the evolution of the mean particle concentration. The resulting model is able to describe the behaviour of an evaporating solution on the meso- and macroscale. Such an approach is briefly discussed below in Section III C. 
Weshould expect such a model to describe the mesoscopic dewetting front discussed above. However, the theory is less suited to a description of the dewetting dynamics of the ultrathin postcursor\n\nfilm.\n\nThe dewetting of the ultrathin film of highly concentrated suspension may be described by a discrete stochastic model such as, for instance, a kinetic Monte Carlo (KMC) model based solely on evaporation/condensation dynamics of the solvent and diffusion of the solute [35, 39, 41]. The validity of this strong assumption regarding the relevant transport processes can be confirmed from an estimate based on Eq. (1): The pressure p = δF/δh drives convection and evaporation. The convective mobility is proportional to h 3 , i.e., it is large for thick films but decreases strongly with reduced film thickness. The evaporative mobility, however, is a constant, implying that evaporation will dominate below a certain (cross-over) thickness. For the parameter values of Ref. [57] and a small contact angle ( ≈ 0 . 01 ), the cross-over thickness is in the range of 1-5 nanometers. This estimate justifies the neglect of convective transport in a description of the postcursor film and may explain why one has such good agreement between the experimentally observed patterns and the patterns obtained from a purely two-dimensional (single layer) kinetic Monte Carlo model [35]. We introduce the KMC model below in Section III A.\n\nIn several respects, however, the kinetic Monte Carlo model is rather simplistic, limiting its potential applications. For instance, the thermodynamic chemical potential as well as any wetting interaction of the solvent with the substrate are collected in a single parameter - an effective chemical potential. This implies that any influence of a disjoining pressure is 'smeared out' over the whole system and that no distinction between the short- and the long-range parts of the disjoining pressure is possible. 
It is furthermore based on the assumption that evaporation/condensation is", - "page_start": 7, - "page_end": 7, - "source_file": "1001.2669.pdf" - }, - { - "text": "the dominant dynamic process, but does not allow one to probe this assumption. In Section III B we show how one may develop a dynamical density functional theory (DDFT) that describes the system at a similar level to the KMC. However, the DDFT may also be easily extended to include other effects such as fluid diffusion, that the KMC does not incorporate.\n\n## A. Kinetic Monte Carlo model\n\nThe kinetic Monte Carlo model for two-dimensional dewetting nanofluids [33] was first proposed in Ref. [35] and extended to include next-nearest neighbour interactions in [37]. The two key assumptions used are: (i) the relevant processes can be mapped on to a two-dimensional lattice gas model, thereby neglecting continuous changes in the thickness of the evaporating film, and (ii) all relevant dynamics results from diffusing nanoparticles and evaporating/condensing solvent.\n\nThe model builds on an Ising-type model for the liquid-gas phase transition. The surface is divided up into a regular array of lattice sites whose size is dictated by the nanoparticles. One then considers each lattice site to be occupied either by a nanoparticle, liquid or vapour. This effectively maps the system onto a two-dimensional two-component lattice gas having two fields n and l . The resulting three possible states of a cell are: liquid ( l = 1 , n = 0 ), nanoparticle ( l = 0 , n = 1 ), and vapour ( l = 0 , n = 0 , i.e., cell empty). The energy of an overall configuration is given by the hamiltonian\n\nE = -ε nn 2 ∑ n i n j -ε nl 2 ∑ n i l j -ε ll 2 ∑ l i l j -µ ∑ i l i (3)\n\nwhere ∑ denotes a sum over nearest neighbour pairs and ε ll , ε nn and ε nl are the liquid-liquid, particle-particle and liquid-particle interaction energies, respectively. 
Fixing the three interaction strength parameters ε ll , ε nn , ε nl and the effective chemical potential µ determines the equilibrium state of the system. We choose ε ll as unit of energy - i.e. we set ε ll = 1 .\n\nThe hamiltonian determines the equilibrium state and the energy landscape of the system. However, as the system 'dries in' during the course of the solvent evaporation, the final nanoparticle configurations do not necessarily represent equilibrium structures. This implies that the system dynamics is of paramount importance. It is determined by the possible Monte Carlo moves, their relative frequencies, and the probabilities for their acceptance. Two types of moves are allowed: (i) evaporation/condensation of liquid and (ii) diffusion of nanoparticles within the liquid. A mobility M corresponds to the ratio of cycles of particle and solvent moves and reflects the physical ratio of", - "page_start": 8, - "page_end": 8, - "source_file": "1001.2669.pdf" - }, - { - "text": "small holes. The competition for space results in a fine-meshed polygonal network of nanoparticle deposits. The concentration of particles is much higher at the network nodes - an effect that can not been seen within the KMC model. As the particles attract the liquid there remains some liquid on the substrate where the nanoparticles are.\n\nFig. 5 gives snapshots of the evolution of a fingering instability for a retracting dewetting front. At early times the straight front shows a rather short-wave instability, about 16 wiggles can be seen. However, they are only a transient: the finger pattern coarsens rapidly till only about 7 fingers remain. The fingering then becomes stationary, i.e., just as in the KMC, the mean finger number remains constant, although new branches are continuously created and old branches join each other. In general, the results on fingering agree well with results obtained using the KMC model [41]. 
From this we conclude that jamming of discrete particles is not a necessary factor for causing the instability, since the fingering is seen here in a continuum model with a diffusion constant that is independent of the nanoparticle concentration. The DDFT is better suited than the KMC for investigations of the early instability stages: they are more easy to discern without the discrete background noise of the KMC. Furthermore, one may perform a linear stability analysis of the one-dimensional undisturbed streamwise front profiles with respect to transverse perturbations (in analogy to the approach used in Refs. [19, 86, 87]).\n\n## C. Thin film hydrodynamics\n\nThe previous two sections focused on two approaches to describe the experimentally observed patterning dynamics in the ultrathin postcursor film left behind by a mesoscopic receding dewetting front. Although both the kinetic Monte Carlo model and the dynamical density functional theory are able to describe well the processes in the ultrathin film, they can not be employed to describe mesoscale hydrodynamics. A relatively simple model for the latter can be derived in the framework of a long-wave or lubrication equation [8, 63]. We will illustrate here the approach by considering an isothermal situation where the nanoparticles are not surface active, i.e., they do not act as surfactants. For a model incorporating the effects of latent heat generation and surfaceactive particles resulting in thermal and solutal Marangoni stresses, see Ref. [88]. A description of spreading particle solutions incorporating a structural disjoining pressure has also been considered [89]. For related work on particle-laden film flow on an incline see Refs. 
[90, 91].\n\nOne starts from the Stokes equations, together with continuity, no-slip boundary conditions at the", - "page_start": 17, - "page_end": 17, - "source_file": "1001.2669.pdf" - }, - { - "text": "substrate and force equilibria at the free surface, and applies a long-wave approximation. Under the assumption that concentrations equilibrate rapidly over the film thickness, we obtain coupled non-linear evolution equations for the film thickness profile h ( x, t ) and the amount of nanoparticles per unit length h p = φh , where φ is the volume concentration of the nanoparticles. Note, that h p corresponds to the local thickness of the nanoparticle layer when all the solvent is evaporated. The resulting evolution equation for the film thickness is Eq. (1) above and focusing on the influence of particle-independent capillarity and wettability only, the energy functional F [ h ] is given by Eq. (2) above. Note that the viscosity η depends on the particle concentration. Following Refs. [88, 89, 91, 92] we use the Quemada law for dense suspensions [93-95]\n\nη ( φ ) = η 0 ( 1 -φ φ c ) -2 (8)\n\nwhere φ c = 0 . 64 corresponds to random close packing of spherical particles. For the nanoparticle volume per length h p = φh one obtains the following evolution equation:\n\n∂ t ( φh ) = ∇· [ φQ c ∇ δF δh ] + ∇· [ D ( φ ) h ∇ φ ] , (9)\n\nwhere the particle concentration dependent diffusion coefficient D ( φ ) is related to the viscosity by the Einstein relation D ( φ ) = kT/ 6 πRη ( φ ) , where R is the radius of the nanoparticles [96].\n\nWe illustrate results obtained employing this thin film theory using the single example of a receding dewetting front for a partially wetting film. We use the disjoining pressure and material constants for the liquid considered in Ref. [57], where the evaporative and convective dewetting of a film of volatile liquid is studied. We add, however, the nanoparticles to the system. 
The expression that we employ for the local free energy term in Eq. (2) is:\n\nf ( h ) = S LW d 2 0 h 2 + S P exp ( d 0 -h l 0 ) , (10)\n\nwhere the parameters characterising the interaction between the liquid film and the surface are the apolar and polar spreading coefficients S LW and S P , respectively, the Debye length l 0 and the Born repulsion length d 0 [57]. The resulting disjoining pressure Π = -∂ h f ( h ) allows for a stable precursor film (thickness h precursor ) and also has a second (larger) thickness ( h 0 ) that corresponds to a secondary minimum of the underlying energy functional. See Refs. [11, 97] for studies of film and drop states for similar disjoining pressures. Our results are calculated for a system where the profiles only vary in one Cartesian direction ( x ), corresponding to a straight dewetting front. However, our results may also be interpreted as applying to a circular flat drop whose front remains", - "page_start": 18, - "page_end": 18, - "source_file": "1001.2669.pdf" - }, - { - "text": "modes of neighboring tetrahedra. And these coupling constants λ x,y,z need to be tuned to produce J x,y,z of the Kitaev model. This is still not easy to implement in solid state systems. At lowest non-trivial order of perturbative expansion, we do get our model (9). Higher order terms in expansion destroy the exact solvability, but may be controlled by the small parameters λ x,y,z /k .\n\n## B. Generate the High Order Terms by Magnetic Interactions between Clusters.\n\nIn this Subsection we consider more conventional perturbations, magnetic interactions between the clusters, e.g. the Heisenberg coupling S j · S k with j and k belong to different tetrahedra. This has the advantage over the previous phonon approach for not introducing additional degrees of freedom. 
But it also has a significant disadvantage: the perturbation does not commute with the cluster Heisenberg Hamiltonian (2), so the cluster singlet subspace will be mixed with other total spin states. In this Subsection we will use the spin-chirality representation (6) for τ z .\n\nAgain consider two clusters j and k . For simplicity of notations define a projection operator P jk = P j P k , where P j,k is projection into the singlet subspace of cluster j and k , respectively, P j,k = ∑ s = ± 1 | τ z j,k = s 〉〈 τ z j,k = s | . For a given perturbation λH perturbation with small parameter λ (in factor λ/J cluster is the expansion parameter), lowest two orders of the perturbation series are\n\nλ P jk H perturbation P jk + λ 2 P jk H perturbation (1 -P jk ) × [0 -H cluster j -H cluster k ] -1 (1 -P jk ) H perturbation P jk (15)\n\nWith proper choice of λ and H perturbation we can generate\n\nthe desired J x,y,z terms in (8) from the first and second order of perturbations.\n\nThe calculation can be dramatically simplified by the following fact that any physical spin-1/2 operator S x,y,z /lscript converts the cluster spin singlet states | τ z = ± 1 〉 into spin-1 states of the cluster. This can be checked by explicit calculations and will not be proved here. For all the perturbations to be considered later, the above mentioned fact can be exploited to replace the factor [0 -H cluster j -H cluster k ] -1 in the second order perturbation to a c -number ( -2 J cluster ) -1 .\n\nThe detailed calculations are given in Appendix B. 
We will only list the results here.\n\nThe perturbation on x -links is given by\n\nλ x H perturbation , x = λ x [ S j 1 · S k 1 +sgn( J x ) · ( S j 2 · S k 2 )] -J x ( S j 1 · S j 2 + S k 1 · S k 2 ) .\n\nwhere λ x = √ 12 | J x | · J cluster , sgn( J x ) = ± 1 is the sign of J x .\n\nThe perturbation on y -links is\n\nλ y H perturbation , y = λ y [ S j 1 · S k 1 +sgn( J y ) · ( S j 3 -S j 4 ) · ( S k 3 -S k 4 )] -| J y | ( S j 3 · S j 4 + S k 3 · S k 4 )\n\nwith λ y = √ 4 | J y | · J cluster .\n\nThe perturbation on z -links is\n\nλ z H perturbation , z = λ z [ S j 2 · ( S k 3 × S k 4 ) + sgn( J z ) · S k 2 · ( S j 3 × S j 4 )] -| J z | ( S j 3 · S j 4 + S k 3 · S k 4 ) .\n\nwith\n\nλ z = 4 √ | J z | · J cluster . The entire Hamiltonian H magnetic reads explicitly as,\n\nH magnetic = ∑ cluster j ( J cluster / 2)( S j 1 + S j 2 + S j 3 + S j 4 ) 2 + ∑ x -links {√ 12 | J x | · J cluster [ S j 1 · S k 1 +sgn( J x ) · ( S j 2 · S k 2 ) ] -J x ( S j 1 · S j 2 + S k 1 · S k 2 ) } + ∑ y -links { √ 4 | J y | · J cluster [ S j 1 · ( S k 3 -S k 4 ) + sgn( J y ) S k 1 · ( S j 3 -S j 4 ) ] -| J y | ( S j 3 · S j 4 + S k 3 · S k 4 ) } + ∑ z -links { 4 √ | J z | · J cluster [ S j 2 · ( S k 3 × S k 4 ) + sgn( J z ) S k 2 · ( S j 3 × S j 4 ) ] -| J z | ( S j 3 · S j 4 + S k 3 · S k 4 ) } . (16)\n\nIn (16), we have been able to reduce the four spin interactions in (8) to inter-cluster Heisenberg interactions, and the six-spin interactions in (8) to inter-cluster spinchirality interactions. The inter-cluster Heisenberg couplings in H perturbation x,y may be easier to arrange. The", - "page_start": 6, - "page_end": 6, - "source_file": "1001.0266.pdf" - }, - { - "text": "is similar to the size of the nanoparticles. At a certain distance from the macroscopic front, the ultrathin film starts to evolve a locally isotropic pattern of holes. The holes themselves grow in an unstable manner resulting in an array of isotropically branched structures as shown, e.g., above in Fig. 1. 
This indicates that at least some of the patterns described in the literature may have arisen from processes in similar ultrathin 'postcursor' films.\n\nThe existence of the ultrathin 'postcursor' film is an experimental finding that can be drawn on when choosing a theoretical approach to account for the pattern formation (see below). Note however, that at the moment there exists no explanation for its existence. A possible hypothesis is that the substrate strongly attracts the nanoparticles. As a result they form a dense suspension layer having a thickness roughly equal to the diameter of the nanoparticles. The observed mesoscopic dewetting front then actually correspond to an autophobic dewetting of a low concentration suspension from the higher concentration suspension on the surface of the substrate.\n\n## III. MODELLING APPROACHES\n\nModels of dewetting thin films of pure liquids or polymers are often based on thin film hydrodynamics. Starting from the Stokes equations, together with continuity and boundary conditions at the substrate and free surface, one applies a long-wave approximation (assuming small surface slopes and contact angles) [8, 63] and obtains a non-linear evolution equation for the film thickness profile h ( x, y, t ) . In the case of volatile liquids one finds [55-58, 64]\n\n∂ t h = ∇· [ Q c ∇ δF δh ] -Q e δF δh , (1)\n\nwith the mobility functions Q c ( h ) = h 3 / 3 η ≥ 0 (assuming Poiseuille flow in the film and no slip at the substrate; η is the dynamic viscosity) and Q e ≥ 0 for the convective and evaporative part of the dynamics, respectively. Q e is a rate constant that can be obtained from gas kinetic theory or from experiment [57]. Note that Eq. (1) only applies if the pressure in the vapour above the film is close to the saturation pressure. For alternative expressions that are used to describe the non-conserved evaporative dynamics see, e.g., Refs. [56, 57, 65-69]. 
Finally, ∇ = ( ∂ x , ∂ y ) , and ∂ t , ∂ x and ∂ y denote partial derivatives w.r.t. time and the coordinates.\n\nFocusing on the influence of capillarity and wettability only, the energy functional F [ h ] is given by\n\nF [ h ] = ∫ dx ∫ dy [ γ 2 ( ∇ h ) 2 + f ( h ) -µh ] (2)", - "page_start": 6, - "page_end": 6, - "source_file": "1001.2669.pdf" - }, - { - "text": "## Abstract\n\nWe review recent experiments on dewetting thin films of evaporating colloidal nanoparticle suspensions (nanofluids) and discuss several theoretical approaches to describe the ongoing processes including coupled transport and phase changes. These approaches range from microscopic discrete stochastic theories to mesoscopic continuous deterministic descriptions. In particular, we focus on (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nModels (i) and (ii) are employed to discuss the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor film' that remains behind a mesoscopic dewetting front. We highlight, in particular, the presence of a transverse instability in the evaporative dewetting front which results in highly branched fingering structures. The subtle interplay of decomposition in the film and contact line motion is discussed.\n\nFinally, we discuss a simple thin film model (iii) of the hydrodynamics on the mesoscale. We employ coupled evolution equations for the film thickness profile and mean particle concentration. The model is used to discuss the self-pinning and de-pinning of a contact line related to the 'coffee-stain' effect.\n\nIn the course of the review we discuss the advantages and limitations of the different theories, as well as possible future developments and extensions.\n\nThe paper is published in: J. Phys.-Cond. Mat. 
21 , 264016 (2009), in the Volume 'Nanofluids on solid substrates' and can be obtained at http://dx.doi.org/10.1088/0953-8984/21/26/264016", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [65] J. P. Burelbach, S. G. Bankoff, and S. H. Davis, 'Nonlinear stability of evaporating/condensing liquid films,' J. Fluid Mech. 195 , 463-494 (1988).\n - [66] A. Oron and S. G. Bankoff, 'Dewetting of a heated surface by an evaporating liquid film under conjoining/disjoining pressures,' J. Colloid Interface Sci. 218 , 152-166 (1999).\n - [67] L. W. Schwartz, R. V. Roy, R. R. Eley, and S. Petrash, 'Dewetting patterns in a drying liquid film,' J. Colloid Interface Sci. 214 , 363-374 (2001).\n - [68] K. Kargupta, R. Konnur, and A. Sharma, 'Spontaneous dewetting and ordered patterns in evaporating thin liquid films on homogeneous and heterogeneous substrates,' Langmuir 17 , 1294-1305 (2001).\n - [69] M. Bestehorn and D. Merkt, 'Regular surface patterns on Rayleigh-Taylor unstable evaporating films heated from below,' Phys. Rev. Lett. 97 , 127802 (2006).\n - [70] G. F. Teletzke, H. T. Davis, and L. E. Scriven, 'Wetting hydrodynamics,' Rev. Phys. Appl. 23 , 9891007 (1988).\n - [71] J. N. Israelachvili, Intermolecular and Surface Forces , Academic Press, London (1992).\n - [72] V. S. Mitlin, 'Dewetting of solid surface: Analogy with spinodal decomposition,' J. Colloid Interface Sci. 156 , 491-497 (1993).\n - [73] L. M. Pismen and Y. Pomeau, 'Disjoining potential and spreading of thin liquid layers in the diffuse interface model coupled to hydrodynamics,' Phys. Rev. E 62 , 2480-2492 (2000).\n - [74] L. Onsager, 'Crystal statistics. I. A two-dimensional model with an order-disorder transition,' Phys. Rev. 65 , 117-149 (1944).\n - [75] G. Reiter, 'Unstable thin polymer films: Rupture and dewetting processes,' Langmuir 9 , 1344-1351 (1993).\n - [76] C. G. Sztrum, O. Hod, and E. 
Rabani, 'Self-assembly of nanoparticles in three-dimensions: Formation of stalagmites,' J. Phys. Chem. B 109 , 6741-6747 (2005).\n - [77] G. Yosef and E. Rabani, 'Self-assembly of nanoparticles into rings: A lattice-gas model,' J. Phys. Chem. B 110 , 20965-20972 (2006).\n - [78] J. F. Gouyet, M. Plapp, W. Dieterich, and P. Maass, 'Description of far-from-equilibrium processes by mean-field lattice gas models,' Adv. Phys. 52 , 523-638 (2003).\n - [79] U. M. B. Marconi and P. Tarazona, 'Dynamic density functional theory of fluids,' J. Chem. Phys. 110 , 8032-8044 (1999).\n - [80] U. M. B. Marconi and P. Tarazona, 'Dynamic density functional theory of fluids,' J. Phys.-Condes. Matter 12 , A413-A418 (2000).", - "page_start": 29, - "page_end": 29, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HIG_2001.pdf", - "query": "By how much did the Hartford group's link to AARP website account concerning buisness made over the internet ?", - "target_page": 16, - "target_passage": "In 2001 the company’s link to AARP’s Web site accounted for much of the $55 million worth of auto business The Hartford generated over the Internet", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "most dynamic sources of business growth. In 2001 the company's link to AARP's Web site accounted for much of the $55 million worth of auto business The Hartford generated over the Internet.\n\nBecause The Hartford quotes and issues this business online (and added online billing in 2001), acquisition and processing costs are 15 to 20 percent lower than those of traditional direct-marketing or face-toface sales. Because of this and other factors, the expense ratio for AARP business is 30 percent below that of the industry in general. 
And the customer renewal rate is 96 percent, versus the industry's 88 percent, making the AARP program yield some of the most profitable auto business The Hartford writes.\n\nThe relationship also has The Hartford thinking ahead toward new business and an even stronger relationship with AARP members. The Hartford can crossmarket auto insurance to homeowner's customers and homeowner's insurance to auto customers, which presents a tremendous growth opportunity. In addition,\n\nThe Hartford is committed to providing value to AARP members in many ways. An example: The Hartford and AARP work with the MIT Age Lab to produce information-available in print and on both partners' Web sites-advising AARP members about Alzheimer's disease and other forms of dementia as they affect driving ability. The information guides caregivers struggling with difficult decisions about family members' safety behind the wheel. The resource-a customer solution like no other-helps enhance the superior value The Hartford provides to AARP members.\n\nAlthough it's the most comprehensive, the AARP relationship isn't The Hartford's only affinity program. The company also has affinity arrangements with USAA and other companies. Regardless of the program's size, the affinity partners share the right qualities: strong name-brand recognition, first-class marketing and a broad and loyal customer base.\n\nIn other words, they share some of The Hartford's core attributes.\n\n", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "\n\n'P artnering' is a popular business buzzword that may vanish as quickly as it appeared. The Hartford's partnerships, on the other hand, are built for the long term and have played a major role in the company's growth and success.\n\nThe company enjoys outstanding partnerships with several of the world's top asset managers. 
It also values its thousands of relationships with financial intermediaries such as large broker-dealers, banks and independent financial planners-and with affinity partners who extend The Hartford's reach into large, growing markets.\n\n'A lot of people talk about having the right partners, but The Hartford views it differently from most,' says Gary Trippe, CEO of Fort Myers, Fla., propertycasualty agency Oswald, Trippe and Company, Inc. 'They look for partners who share their core values, and the relationship is based on trust and respect. It's all about compatibility.' Trippe should know. His\n\nagency writes three times as much business with The Hartford, in both personal and commercial lines, as it writes with any other insurer.\n\nMutually beneficial partnerships with successful businesses of all sizes are the foundation of The Hartford's business model.\n\nPerhaps no relationship represents shared values and shared success better than the one with AARP, which signed a new eight-year contract with The Hartford that began Jan. 1, 2002. The AARP insurance program with The Hartford is a model of affinity marketing and distribution savvy. AARP's membershipthose age 50 and over-is the fastest-growing segment of the U.S. population. 
Computer use among this group is growing by an estimated 20 percent per year, and the population segment respects established brands and seeks value, convenience and extraordinary service.\n\nThat right combination of factors helps make AARP's World Wide Web site one of The Hartford's", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "\n\nN ew technology tools made The Hartford Experiencecustomer solutions, ease of doing business and extraordinary service-more real than ever for our customers in 2001.\n\nIt was a year that saw the debut of life operations' Hartford Investor Web portal, expanded Web portals for group benefits administrators, and enhancements to technology for The Hartford's property-casualty agents and customers.\n\nHartford Investor is both a versatile personal assistant and an aid in wholesaling, especially for the independent financial planner channel. Broker-dealers and financial advisors can use it to research The Hartford's full complement of individual life and investment products, update their books of business in seconds, track daily fund performance, run financialplanning models, receive online product training, produce customized presentations and even submit business electronically.\n\nIn short, the portal allows The Hartford to bring products and functions from a variety of sources into one convenient online environment.\n\nHartford Investor has two strategic objectives: One, deepen current intermediaries' loyalty to The Hartford by extending The Hartford Experience right to their desktops. Two, expand the network of intermediaries by giving them the technological support they need to grow their businesses.\n\nMore than 153,000 licensed intermediaries-from solo advisors to members of large financial institutions-are appointed to sell The Hartford's products. Yet fewer than 60,000 actively write business for the company. 
The untapped potential is vast, especially among independents, the fastest-growing distribution channel and the only one in which The Hartford doesn't hold the largest market share.\n\nThat's bound to change. With Hartford Investor available on their desktops, intermediaries will have far", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "## Corporate Information\n\nCorporate Headquarters\n\nThe Hartford Financial Services Group, Inc. 690 Asylum Avenue Hartford, Connecticut 06115 860-547-5000\n\nInternet Address\n\nhttp://www.thehartford.com\n\nAnnual Meeting\n\nShareholders are cordially invited to attend The Hartford's Annual Meeting of Shareholders, which will be held on Thursday, April 18, 2002 at 9:00a.m. in the Wallace Stevens Theater at The Hartford Financial Services Group, Inc.'s home office at 690 Asylum Avenue, Hartford, Connecticut. Shareholders of record as of February 28, 2002 are entitled to notice of, and to vote at, the Annual Meeting.\n\n## Form 10-K and Other Information\n\nShareholders may receive, without charge, a copy of The Hartford's Form 10-K (without exhibits) filed with the Securities and Exchange Commission for the year ended December 31, 2001 by contacting 1-888-FACT-HIG. Forms 10-Q, press releases, and other shareholder communications are also available through this toll-free number.\n\n## Transfer Agent/Shareholder Records\n\nFor information or assistance regarding stock records, dividend checks or stock certificates, please contact The Hartford's transfer agent:\n\nThe Bank of New York Shareholder Relations Department-11E P.O. Box 11258 Church Street Station New York, NY 10286 800-254-2823\n\nTo send certificates for transfer and address changes:\n\nThe Bank of New York Receive and Deliver Department-11W P.O. 
Box 11002 Church Street Station New York, NY 10286\n\nAddress inquiries about The Hartford's Dividend Reinvestment and Cash Payment Plan to:\n\nThe Bank of New York Dividend Reinvestment Department P.O. Box 1958 Newark, NJ 07101-9774\n\nE-mail: shareowner-svcs@bankofny.com\n\nInternet address: www.stockbny.com\n\nInvestor Relations\n\nThe Hartford Financial Services Group, Inc. Hartford Plaza, HO-1-01 Hartford, Connecticut 06115 Attn: Investor Relations\n\n860-547-2537\n\nMedia Inquiries\n\nThe Hartford Financial Services Group, Inc. Media Relations Hartford Plaza, T-12-56 Hartford, CT 06115 860-547-5200\n\n## Common Stock and Dividend Information\n\nThe Hartford's common stock is traded on the New York Stock Exchange (NYSE) under the trading symbol 'HIG.' The following table presents the high and low closing prices for the common stock of The Hartford on the NYSE for the periods indicated, and the quarterly dividends declared per share.\n\n| | Common Stock Price | Common Stock Price | Dividends |\n|----------------|----------------------|----------------------|-------------|\n| | High | Low | Declared |\n| 2001 | | | |\n| First quarter | $ 67.75 | $ 55.15 | $0.25 |\n| Second quarter | 70.46 | 56.88 | 0.25 |\n| Third quarter | 69.28 | 50.10 | 0.25 |\n| Fourth quarter | 62.83 | 53.91 | 0.26 |\n| 2000 | | | |\n| First quarter | $ 52.75 | $ 29.38 | $0.24 |\n| Second quarter | 64.00 | 44.25 | 0.24 |\n| Third quarter | 73.75 | 56.38 | 0.24 |\n| Fourth quarter | 79.31 | 65.44 | 0.25 |\n\nAs of February 28, 2002 there were approximately 120,000 shareholders of The Hartford.", - "page_start": 37, - "page_end": 37, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "- /H17074 J ohn Belisle, right, is senior vice president of Oswald, Trippe and Company, Inc. in Fort Myers, Fla., one of The Hartford's largest sellers of Select Customer commercial insurance. 
David van der Merwe, president of electronics manufacturer Saftronics, Inc., depends on him for reliable counsel, as well as products tailored to Saftronics' business.\n - /H17075 T he Hartford signed a new eightyear contract, beginning Jan.1, 2002, to continue its highly successful relationship with AARP. Property & Casualty Operations President and CEO Dave Zwiener, second from left, works closely with, left to right, Bill Farris, director, financial products, AARP Services, Inc.; Leisha Spaulding, manager, financial products, AARP Services, Inc.; and Steve Zaleznick, CEO, AARP Services, Inc.\n\n", - "page_start": 13, - "page_end": 13, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "- - During node failure or link failure, the IP partnership traffic continues from the other available link and the port group. Therefore, if two links of 10 Mbps each are available and you have 20 Mbps of effective link bandwidth, bandwidth is reduced to 10 Mbps only during a failure.", - "page_start": 583, - "page_end": 583, - "source_file": "sg247938.pdf" - }, - { - "text": "\n\nIntermediary Service Award and the first-ever Life Insurance Service Award. The triple win reflected the overall excellence of The Hartford's service, a natural complement to the company's quality products. DALBAR also recognized The Hartford's mutual funds as the industry leader in several categories, including investment management.\n\nIn managing its product portfolio, The Hartford follows its own advice: think ahead and diversify. The company's earnings base derives from a variety of businesses. Diversification is a key element in managing risk and ensuring profitability-a time-tested philosophy that held especially true in 2001, as the company's other businesses evolved to anticipate changing market demands and to offer protection from new risks.\n\nThe property-casualty Business Insurance group, for example, extended its coverage to include common risks associated with e-commerce. 
Hartford Financial Products' (HFP) coverage continued to meet emerging risks in an extremely volatile business environment.\n\nThe Hartford helped customers manage risk by developing a new category of commercial coverage called CyberFlex. TM This targets the previously unmet needs of small and mid-sized businesses that are integrating the Internet and other communications tools into their regular operations.\n\nA 2001 survey by The Hartford revealed that 80 percent of small and mid-sized businesses weren't sure if their current insurance policies covered specific-and increasingly common-risks such as e-mail viruses, Web site business interruption and online copyright infringement. CyberFlex coverage protects middle-market and small-business policyholders against the risk of those potentially debilitating conditions.\n\nCyberFlex is part of a broad array of industryspecific coverages in The Hartford's SPECTRUM ® business-owner's policy, including protection against employment practices liability, equipment breakdown and business interruption. As the economic environment changes rapidly, The Hartford thinks ahead by providing those flexible coverages. And the company's", - "page_start": 19, - "page_end": 19, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "\n\n - /H17073 T he Hartford's acquisition of Fortis Financial Group in 2001 enhanced the company's market share and distribution advantage. Most importantly, the acquisition brought into The Hartford's family powerful sales professionals like Allen Chinoy of Darien, Ill., left, the nation's fifthleading producer of The Hartford's variable universal life insurance. Chinoy is a vocal supporter of Hartford Investor, which makes it easier for him to show customers such as Dr. Dilip Patel how his portfolio is performing.\n - /H17075 J oe Smith, right, and Kim Connolly, left, are a brother-sister team heading Smith Brothers Insurance, Inc. of Glastonbury, Conn. 
These VIP agents are enthusiastic users of The Hartford's Electronic Business Center (EBC) and other technological tools for propertycasualty agents. They piloted the EBC and have given valuable feedback to Senior Commercial Underwriter Tracey Kamenash and others at The Hartford to help develop the EBC standards and navigational model.", - "page_start": 23, - "page_end": 23, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "The Hartford Financial Services Group, Inc.\n\n2001 Summary Annual Report\n\nThere's only\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "\n\n - /H17073 Marsh, Inc. is a major distributor of The Hartford's group benefits plans for mid-sized businessesa key growth area for The Hartford. Joe Axelrod, senior account executive, third from right, and Kevin Szott, group sales representative, far right, work in partnership with senior executives from\n\nMarsh's employee benefits practice. The team includes, left to right, Senior Vice Presidents Kerry King, Robert Lustberg, Maria McHugh and, second from right, Eric Jacobson. Szott, who is legally blind, also works with The Hartford's Team Ability, a group of company-sponsored athletes with disabilities.\n\n - /H17075 I n 2001, The Hartford introduced a new category of commercial coverage called CyberFlex, TM designed to protect small and mid-sized businesses against e-business risks such as e-mail viruses and Web site business interruption. Deirdre Barbee, The Hartford's middle market manager in Charlotte, N.C., Mike Lesniak, Charlotte regional vice president, far left, and VIP agent\n\nCameron Harris, president of Cameron M. Harris & Company, second from right, explain CyberFlex's benefits to Todd W. Mansfield, CEO of Crosland, a Charlotte property developer and a 13-year customer of The Hartford. 
Product innovations such as CyberFlex allow The Hartford to provide riskmanagement solutions for customers as their businesses evolve.", - "page_start": 17, - "page_end": 17, - "source_file": "NYSE_HIG_2001.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HIG_2001.pdf", - "query": "How many licensed intermediaries did Hartford group have in 2001 ?", - "target_page": 23, - "target_passage": "More than 153,000 licensed intermediaries", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "\n\nN ew technology tools made The Hartford Experiencecustomer solutions, ease of doing business and extraordinary service-more real than ever for our customers in 2001.\n\nIt was a year that saw the debut of life operations' Hartford Investor Web portal, expanded Web portals for group benefits administrators, and enhancements to technology for The Hartford's property-casualty agents and customers.\n\nHartford Investor is both a versatile personal assistant and an aid in wholesaling, especially for the independent financial planner channel. Broker-dealers and financial advisors can use it to research The Hartford's full complement of individual life and investment products, update their books of business in seconds, track daily fund performance, run financialplanning models, receive online product training, produce customized presentations and even submit business electronically.\n\nIn short, the portal allows The Hartford to bring products and functions from a variety of sources into one convenient online environment.\n\nHartford Investor has two strategic objectives: One, deepen current intermediaries' loyalty to The Hartford by extending The Hartford Experience right to their desktops. 
Two, expand the network of intermediaries by giving them the technological support they need to grow their businesses.\n\nMore than 153,000 licensed intermediaries-from solo advisors to members of large financial institutions-are appointed to sell The Hartford's products. Yet fewer than 60,000 actively write business for the company. The untapped potential is vast, especially among independents, the fastest-growing distribution channel and the only one in which The Hartford doesn't hold the largest market share.\n\nThat's bound to change. With Hartford Investor available on their desktops, intermediaries will have far", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "\n\n - /H17073 T he Hartford's acquisition of Fortis Financial Group in 2001 enhanced the company's market share and distribution advantage. Most importantly, the acquisition brought into The Hartford's family powerful sales professionals like Allen Chinoy of Darien, Ill., left, the nation's fifthleading producer of The Hartford's variable universal life insurance. Chinoy is a vocal supporter of Hartford Investor, which makes it easier for him to show customers such as Dr. Dilip Patel how his portfolio is performing.\n - /H17075 J oe Smith, right, and Kim Connolly, left, are a brother-sister team heading Smith Brothers Insurance, Inc. of Glastonbury, Conn. These VIP agents are enthusiastic users of The Hartford's Electronic Business Center (EBC) and other technological tools for propertycasualty agents. They piloted the EBC and have given valuable feedback to Senior Commercial Underwriter Tracey Kamenash and others at The Hartford to help develop the EBC standards and navigational model.", - "page_start": 23, - "page_end": 23, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "\n\n## products & services\n\nH ow do you secure the future when the present is puzzling enough? It's a big challenge, and The Hartford's primary objective. 
Everything we do is designed to help our customers deal with the uncertainties that lie ahead.\n\nThe Hartford believes the best way to secure the future is to provide customers with the right products, and then back those products with outstanding performance and great service. Staying focused on this objective was never more important-or more challenging-than in 2001.\n\nTrue to form, The Hartford's life operations' annuities and mutual funds delivered high-quality performance in a time of market turmoil. Despite an anemic stock market, 87 percent of the funds in The Hartford's Director variable annuity remained in the first or second quartile of three-year returns within the Lipper Peer Group in 2001. Sixty-four percent of the funds in the Leaders suite of annuities and 91 percent of The Hartford's mutual funds remained in the first or second quartile over the three-year period.\n\nThe ability to deliver that kind of performance can be traced to our money managers-Wellington Management Co., American Funds, Franklin Templeton Investments, MFS Investment Management, AIM Funds Management, Inc., Putnam Investment Management and The Hartford's own Hartford Investment Management Co.\n\nAll of The Hartford's money managers have years of experience and are among the most respected firms in the industry. Their experience and expertise were especially important during the market volatility we saw in 2001. They always stay focused on long-term performance, which is the true measuring stick of The Hartford's value to its customers.\n\nBesides outstanding products and excellent management, great service is a critical component in delivering the right solutions to our customers. 
In 2001, The Hartford won an unprecedented sixth consecutive DALBAR Annuity Service Award, as well as the", - "page_start": 18, - "page_end": 18, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "The Hartford Financial Services Group, Inc.\n\n2001 Summary Annual Report\n\nThere's only\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "the New York metropolitan area. In order to speed the payment of claims, GBD employees immediately contacted customers with offices in the towers and worked with industry organizations to expedite the issuing of death certificates.\n\nThe Hartford's individual life operations scoured airline manifests and missing-persons lists, looking for names of customers. When they spotted a potential match, they called agents to alert them to a possible claim and provided tips on how to proceed.\n\nFuture generations will measure the full impact of Sept. 11. But at The Hartford, one thing is known already. As they did after disasters such as the New York fire of 1835, the Chicago fire of 1871 and the 1906 San Francisco earthquake, The Hartford's people in 2001 ran their business the only way they know howthe right way. They put customers first and kept promises. In so doing, they helped lay the foundation for a more confident future.\n\n\n\n - /H17076 N ew York employees admire a painting depicting the courage and resilience of The Hartford employees and the New York rescue teams. The montage, which now hangs in the lobby of The Hartford's New York offices, was painted by Andy Yelenak of The Hartford's Information Technology department.\n - /H17073 T he Hartford's New York staff got their businesses back up and running in less than a week after the Sept. 11 attack, despite the destruction of their offices. Among those who were instrumental in getting 330 employees situated in temporary office space were, left to right, Lucille T. 
Sgaglione, vice president, Hartford Financial Products; Linda Banks, administrative assistant, office support\n\nservices, Business Insurance; Holly McCalmont, human resources manager, Business Insurance; Jim Norris, business technology solutions manager, Business Insurance; Craig Lowenthal, first vice president and chief information officer, Hartford Financial Products; and Susan Miranda, support services manager, Hartford Specialty Co.", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "## Corporate Information\n\nCorporate Headquarters\n\nThe Hartford Financial Services Group, Inc. 690 Asylum Avenue Hartford, Connecticut 06115 860-547-5000\n\nInternet Address\n\nhttp://www.thehartford.com\n\nAnnual Meeting\n\nShareholders are cordially invited to attend The Hartford's Annual Meeting of Shareholders, which will be held on Thursday, April 18, 2002 at 9:00a.m. in the Wallace Stevens Theater at The Hartford Financial Services Group, Inc.'s home office at 690 Asylum Avenue, Hartford, Connecticut. Shareholders of record as of February 28, 2002 are entitled to notice of, and to vote at, the Annual Meeting.\n\n## Form 10-K and Other Information\n\nShareholders may receive, without charge, a copy of The Hartford's Form 10-K (without exhibits) filed with the Securities and Exchange Commission for the year ended December 31, 2001 by contacting 1-888-FACT-HIG. Forms 10-Q, press releases, and other shareholder communications are also available through this toll-free number.\n\n## Transfer Agent/Shareholder Records\n\nFor information or assistance regarding stock records, dividend checks or stock certificates, please contact The Hartford's transfer agent:\n\nThe Bank of New York Shareholder Relations Department-11E P.O. Box 11258 Church Street Station New York, NY 10286 800-254-2823\n\nTo send certificates for transfer and address changes:\n\nThe Bank of New York Receive and Deliver Department-11W P.O. 
Box 11002 Church Street Station New York, NY 10286\n\nAddress inquiries about The Hartford's Dividend Reinvestment and Cash Payment Plan to:\n\nThe Bank of New York Dividend Reinvestment Department P.O. Box 1958 Newark, NJ 07101-9774\n\nE-mail: shareowner-svcs@bankofny.com\n\nInternet address: www.stockbny.com\n\nInvestor Relations\n\nThe Hartford Financial Services Group, Inc. Hartford Plaza, HO-1-01 Hartford, Connecticut 06115 Attn: Investor Relations\n\n860-547-2537\n\nMedia Inquiries\n\nThe Hartford Financial Services Group, Inc. Media Relations Hartford Plaza, T-12-56 Hartford, CT 06115 860-547-5200\n\n## Common Stock and Dividend Information\n\nThe Hartford's common stock is traded on the New York Stock Exchange (NYSE) under the trading symbol 'HIG.' The following table presents the high and low closing prices for the common stock of The Hartford on the NYSE for the periods indicated, and the quarterly dividends declared per share.\n\n| | Common Stock Price | Common Stock Price | Dividends |\n|----------------|----------------------|----------------------|-------------|\n| | High | Low | Declared |\n| 2001 | | | |\n| First quarter | $ 67.75 | $ 55.15 | $0.25 |\n| Second quarter | 70.46 | 56.88 | 0.25 |\n| Third quarter | 69.28 | 50.10 | 0.25 |\n| Fourth quarter | 62.83 | 53.91 | 0.26 |\n| 2000 | | | |\n| First quarter | $ 52.75 | $ 29.38 | $0.24 |\n| Second quarter | 64.00 | 44.25 | 0.24 |\n| Third quarter | 73.75 | 56.38 | 0.24 |\n| Fourth quarter | 79.31 | 65.44 | 0.25 |\n\nAs of February 28, 2002 there were approximately 120,000 shareholders of The Hartford.", - "page_start": 37, - "page_end": 37, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "| The Hartford Financial Services Group, Inc. 
Hartford Plaza, 690 Asylum Avenue |\n|---------------------------------------------------------------------------------|\n| Hartford, Connecticut 06115 |\n\nFORM 100025[2001]", - "page_start": 39, - "page_end": 39, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "\n\nMeanwhile, in midtown Manhattan, The Hartford's negotiations for permanent offices-a process that normally takes 12 to 15 months-were complete.\n\nThe feverish pace was in some ways therapeutic. It helped take people's minds off the tragedy and the monumental loss of life, including the lives of many good friends and business colleagues at Aon, Marsh & McLennan, Bank of America and Morgan Stanley-major partners of The Hartford with offices in the Twin Towers.\n\nLike many Americans watching the heroism of firefighters, police and emergency crews, thousands of our employees asked, 'How can we help?' Fortunately, they found ways. Lots of them. Employees crowded into bloodmobiles and dropped food and supplies into overflowing bins. With the company's match, employees also donated more than $700,000 to relief efforts, and The Hartford provided a special telephone hotline for employees who needed counseling.\n\n'Focused resolve' is how New York-based Regional Vice President Brandon Hickey characterizes The Hartford's response. 'It solidified in my mind how strong the culture is at this company,' he says. 'The emotional stress of Sept. 11 will be with us for a long time. But as a tribute to the people who were there, we came back as quickly as we did because we knew we\n\nBy early November-less than 60 days after the attack-The Hartford's New York employees were in their new permanent offices at 2 Park Ave.\n\nNo less impressive-and certainly no less swiftwas The Hartford's claims service during Sept. 11's aftermath. 'Catastrophe Team'-CAT-adjusters were on the ground within days, fulfilling obligations to policyholders who suffered losses. 
As an example, The Hartford advanced $1 million within 72 hours of the disaster to help the Thacher, Proffitt & Wood law firm establish temporary midtown Manhattan offices. All the firm's employees had evacuated the World Trade Center's south tower before everything in their offices was destroyed. Within a week, Thacher, Proffitt & Wood was back in business.\n\nThe Hartford assigned extra resources to expedite service requests, and customers received premium payment extensions as needed. One adjuster wrote a $250,000 check on the spot to help a lower Manhattan software-development company begin its recovery. CAT team members and call center customer service representatives received special training to help them cope with traumatized customers, and the company distributed disaster-recovery literature and forms to help customers get back to business.\n\nThe Hartford's Group Benefits Division (GBD) offered crisis-counseling services to policyholders in", - "page_start": 11, - "page_end": 11, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "\n\n'P artnering' is a popular business buzzword that may vanish as quickly as it appeared. The Hartford's partnerships, on the other hand, are built for the long term and have played a major role in the company's growth and success.\n\nThe company enjoys outstanding partnerships with several of the world's top asset managers. It also values its thousands of relationships with financial intermediaries such as large broker-dealers, banks and independent financial planners-and with affinity partners who extend The Hartford's reach into large, growing markets.\n\n'A lot of people talk about having the right partners, but The Hartford views it differently from most,' says Gary Trippe, CEO of Fort Myers, Fla., propertycasualty agency Oswald, Trippe and Company, Inc. 'They look for partners who share their core values, and the relationship is based on trust and respect. It's all about compatibility.' Trippe should know. 
His\n\nagency writes three times as much business with The Hartford, in both personal and commercial lines, as it writes with any other insurer.\n\nMutually beneficial partnerships with successful businesses of all sizes are the foundation of The Hartford's business model.\n\nPerhaps no relationship represents shared values and shared success better than the one with AARP, which signed a new eight-year contract with The Hartford that began Jan. 1, 2002. The AARP insurance program with The Hartford is a model of affinity marketing and distribution savvy. AARP's membershipthose age 50 and over-is the fastest-growing segment of the U.S. population. Computer use among this group is growing by an estimated 20 percent per year, and the population segment respects established brands and seeks value, convenience and extraordinary service.\n\nThat right combination of factors helps make AARP's World Wide Web site one of The Hartford's", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "most dynamic sources of business growth. In 2001 the company's link to AARP's Web site accounted for much of the $55 million worth of auto business The Hartford generated over the Internet.\n\nBecause The Hartford quotes and issues this business online (and added online billing in 2001), acquisition and processing costs are 15 to 20 percent lower than those of traditional direct-marketing or face-toface sales. Because of this and other factors, the expense ratio for AARP business is 30 percent below that of the industry in general. And the customer renewal rate is 96 percent, versus the industry's 88 percent, making the AARP program yield some of the most profitable auto business The Hartford writes.\n\nThe relationship also has The Hartford thinking ahead toward new business and an even stronger relationship with AARP members. 
The Hartford can crossmarket auto insurance to homeowner's customers and homeowner's insurance to auto customers, which presents a tremendous growth opportunity. In addition,\n\nThe Hartford is committed to providing value to AARP members in many ways. An example: The Hartford and AARP work with the MIT Age Lab to produce information-available in print and on both partners' Web sites-advising AARP members about Alzheimer's disease and other forms of dementia as they affect driving ability. The information guides caregivers struggling with difficult decisions about family members' safety behind the wheel. The resource-a customer solution like no other-helps enhance the superior value The Hartford provides to AARP members.\n\nAlthough it's the most comprehensive, the AARP relationship isn't The Hartford's only affinity program. The company also has affinity arrangements with USAA and other companies. Regardless of the program's size, the affinity partners share the right qualities: strong name-brand recognition, first-class marketing and a broad and loyal customer base.\n\nIn other words, they share some of The Hartford's core attributes.\n\n", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_HIG_2001.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HIG_2001.pdf", - "query": "When did the annual sherholder meeting of Hartford happen in 2002 ?", - "target_page": 38, - "target_passage": "Shareholders are cordially invited to attend The Hartford’s Annual Meeting of Shareholders, which will be held on Thursday, April 18, 2002 ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Corporate Information\n\nCorporate Headquarters\n\nThe Hartford Financial Services Group, Inc. 
690 Asylum Avenue Hartford, Connecticut 06115 860-547-5000\n\nInternet Address\n\nhttp://www.thehartford.com\n\nAnnual Meeting\n\nShareholders are cordially invited to attend The Hartford's Annual Meeting of Shareholders, which will be held on Thursday, April 18, 2002 at 9:00a.m. in the Wallace Stevens Theater at The Hartford Financial Services Group, Inc.'s home office at 690 Asylum Avenue, Hartford, Connecticut. Shareholders of record as of February 28, 2002 are entitled to notice of, and to vote at, the Annual Meeting.\n\n## Form 10-K and Other Information\n\nShareholders may receive, without charge, a copy of The Hartford's Form 10-K (without exhibits) filed with the Securities and Exchange Commission for the year ended December 31, 2001 by contacting 1-888-FACT-HIG. Forms 10-Q, press releases, and other shareholder communications are also available through this toll-free number.\n\n## Transfer Agent/Shareholder Records\n\nFor information or assistance regarding stock records, dividend checks or stock certificates, please contact The Hartford's transfer agent:\n\nThe Bank of New York Shareholder Relations Department-11E P.O. Box 11258 Church Street Station New York, NY 10286 800-254-2823\n\nTo send certificates for transfer and address changes:\n\nThe Bank of New York Receive and Deliver Department-11W P.O. Box 11002 Church Street Station New York, NY 10286\n\nAddress inquiries about The Hartford's Dividend Reinvestment and Cash Payment Plan to:\n\nThe Bank of New York Dividend Reinvestment Department P.O. Box 1958 Newark, NJ 07101-9774\n\nE-mail: shareowner-svcs@bankofny.com\n\nInternet address: www.stockbny.com\n\nInvestor Relations\n\nThe Hartford Financial Services Group, Inc. Hartford Plaza, HO-1-01 Hartford, Connecticut 06115 Attn: Investor Relations\n\n860-547-2537\n\nMedia Inquiries\n\nThe Hartford Financial Services Group, Inc. 
Media Relations Hartford Plaza, T-12-56 Hartford, CT 06115 860-547-5200\n\n## Common Stock and Dividend Information\n\nThe Hartford's common stock is traded on the New York Stock Exchange (NYSE) under the trading symbol 'HIG.' The following table presents the high and low closing prices for the common stock of The Hartford on the NYSE for the periods indicated, and the quarterly dividends declared per share.\n\n| | Common Stock Price | Common Stock Price | Dividends |\n|----------------|----------------------|----------------------|-------------|\n| | High | Low | Declared |\n| 2001 | | | |\n| First quarter | $ 67.75 | $ 55.15 | $0.25 |\n| Second quarter | 70.46 | 56.88 | 0.25 |\n| Third quarter | 69.28 | 50.10 | 0.25 |\n| Fourth quarter | 62.83 | 53.91 | 0.26 |\n| 2000 | | | |\n| First quarter | $ 52.75 | $ 29.38 | $0.24 |\n| Second quarter | 64.00 | 44.25 | 0.24 |\n| Third quarter | 73.75 | 56.38 | 0.24 |\n| Fourth quarter | 79.31 | 65.44 | 0.25 |\n\nAs of February 28, 2002 there were approximately 120,000 shareholders of The Hartford.", - "page_start": 37, - "page_end": 37, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "\n\n## products & services\n\nH ow do you secure the future when the present is puzzling enough? It's a big challenge, and The Hartford's primary objective. Everything we do is designed to help our customers deal with the uncertainties that lie ahead.\n\nThe Hartford believes the best way to secure the future is to provide customers with the right products, and then back those products with outstanding performance and great service. Staying focused on this objective was never more important-or more challenging-than in 2001.\n\nTrue to form, The Hartford's life operations' annuities and mutual funds delivered high-quality performance in a time of market turmoil. 
Despite an anemic stock market, 87 percent of the funds in The Hartford's Director variable annuity remained in the first or second quartile of three-year returns within the Lipper Peer Group in 2001. Sixty-four percent of the funds in the Leaders suite of annuities and 91 percent of The Hartford's mutual funds remained in the first or second quartile over the three-year period.\n\nThe ability to deliver that kind of performance can be traced to our money managers-Wellington Management Co., American Funds, Franklin Templeton Investments, MFS Investment Management, AIM Funds Management, Inc., Putnam Investment Management and The Hartford's own Hartford Investment Management Co.\n\nAll of The Hartford's money managers have years of experience and are among the most respected firms in the industry. Their experience and expertise were especially important during the market volatility we saw in 2001. They always stay focused on long-term performance, which is the true measuring stick of The Hartford's value to its customers.\n\nBesides outstanding products and excellent management, great service is a critical component in delivering the right solutions to our customers. In 2001, The Hartford won an unprecedented sixth consecutive DALBAR Annuity Service Award, as well as the", - "page_start": 18, - "page_end": 18, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "the New York metropolitan area. In order to speed the payment of claims, GBD employees immediately contacted customers with offices in the towers and worked with industry organizations to expedite the issuing of death certificates.\n\nThe Hartford's individual life operations scoured airline manifests and missing-persons lists, looking for names of customers. When they spotted a potential match, they called agents to alert them to a possible claim and provided tips on how to proceed.\n\nFuture generations will measure the full impact of Sept. 11. But at The Hartford, one thing is known already. 
As they did after disasters such as the New York fire of 1835, the Chicago fire of 1871 and the 1906 San Francisco earthquake, The Hartford's people in 2001 ran their business the only way they know howthe right way. They put customers first and kept promises. In so doing, they helped lay the foundation for a more confident future.\n\n\n\n - /H17076 N ew York employees admire a painting depicting the courage and resilience of The Hartford employees and the New York rescue teams. The montage, which now hangs in the lobby of The Hartford's New York offices, was painted by Andy Yelenak of The Hartford's Information Technology department.\n - /H17073 T he Hartford's New York staff got their businesses back up and running in less than a week after the Sept. 11 attack, despite the destruction of their offices. Among those who were instrumental in getting 330 employees situated in temporary office space were, left to right, Lucille T. Sgaglione, vice president, Hartford Financial Products; Linda Banks, administrative assistant, office support\n\nservices, Business Insurance; Holly McCalmont, human resources manager, Business Insurance; Jim Norris, business technology solutions manager, Business Insurance; Craig Lowenthal, first vice president and chief information officer, Hartford Financial Products; and Susan Miranda, support services manager, Hartford Specialty Co.", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "The Hartford Financial Services Group, Inc.\n\n2001 Summary Annual Report\n\nThere's only\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "\n\nN ew technology tools made The Hartford Experiencecustomer solutions, ease of doing business and extraordinary service-more real than ever for our customers in 2001.\n\nIt was a year that saw the debut of life operations' Hartford Investor Web portal, expanded Web portals for group benefits administrators, and enhancements 
to technology for The Hartford's property-casualty agents and customers.\n\nHartford Investor is both a versatile personal assistant and an aid in wholesaling, especially for the independent financial planner channel. Broker-dealers and financial advisors can use it to research The Hartford's full complement of individual life and investment products, update their books of business in seconds, track daily fund performance, run financialplanning models, receive online product training, produce customized presentations and even submit business electronically.\n\nIn short, the portal allows The Hartford to bring products and functions from a variety of sources into one convenient online environment.\n\nHartford Investor has two strategic objectives: One, deepen current intermediaries' loyalty to The Hartford by extending The Hartford Experience right to their desktops. Two, expand the network of intermediaries by giving them the technological support they need to grow their businesses.\n\nMore than 153,000 licensed intermediaries-from solo advisors to members of large financial institutions-are appointed to sell The Hartford's products. Yet fewer than 60,000 actively write business for the company. The untapped potential is vast, especially among independents, the fastest-growing distribution channel and the only one in which The Hartford doesn't hold the largest market share.\n\nThat's bound to change. With Hartford Investor available on their desktops, intermediaries will have far", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "\n\nMeanwhile, in midtown Manhattan, The Hartford's negotiations for permanent offices-a process that normally takes 12 to 15 months-were complete.\n\nThe feverish pace was in some ways therapeutic. 
It helped take people's minds off the tragedy and the monumental loss of life, including the lives of many good friends and business colleagues at Aon, Marsh & McLennan, Bank of America and Morgan Stanley-major partners of The Hartford with offices in the Twin Towers.\n\nLike many Americans watching the heroism of firefighters, police and emergency crews, thousands of our employees asked, 'How can we help?' Fortunately, they found ways. Lots of them. Employees crowded into bloodmobiles and dropped food and supplies into overflowing bins. With the company's match, employees also donated more than $700,000 to relief efforts, and The Hartford provided a special telephone hotline for employees who needed counseling.\n\n'Focused resolve' is how New York-based Regional Vice President Brandon Hickey characterizes The Hartford's response. 'It solidified in my mind how strong the culture is at this company,' he says. 'The emotional stress of Sept. 11 will be with us for a long time. But as a tribute to the people who were there, we came back as quickly as we did because we knew we\n\nBy early November-less than 60 days after the attack-The Hartford's New York employees were in their new permanent offices at 2 Park Ave.\n\nNo less impressive-and certainly no less swiftwas The Hartford's claims service during Sept. 11's aftermath. 'Catastrophe Team'-CAT-adjusters were on the ground within days, fulfilling obligations to policyholders who suffered losses. As an example, The Hartford advanced $1 million within 72 hours of the disaster to help the Thacher, Proffitt & Wood law firm establish temporary midtown Manhattan offices. All the firm's employees had evacuated the World Trade Center's south tower before everything in their offices was destroyed. Within a week, Thacher, Proffitt & Wood was back in business.\n\nThe Hartford assigned extra resources to expedite service requests, and customers received premium payment extensions as needed. 
One adjuster wrote a $250,000 check on the spot to help a lower Manhattan software-development company begin its recovery. CAT team members and call center customer service representatives received special training to help them cope with traumatized customers, and the company distributed disaster-recovery literature and forms to help customers get back to business.\n\nThe Hartford's Group Benefits Division (GBD) offered crisis-counseling services to policyholders in", - "page_start": 11, - "page_end": 11, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "\n\n'P artnering' is a popular business buzzword that may vanish as quickly as it appeared. The Hartford's partnerships, on the other hand, are built for the long term and have played a major role in the company's growth and success.\n\nThe company enjoys outstanding partnerships with several of the world's top asset managers. It also values its thousands of relationships with financial intermediaries such as large broker-dealers, banks and independent financial planners-and with affinity partners who extend The Hartford's reach into large, growing markets.\n\n'A lot of people talk about having the right partners, but The Hartford views it differently from most,' says Gary Trippe, CEO of Fort Myers, Fla., propertycasualty agency Oswald, Trippe and Company, Inc. 'They look for partners who share their core values, and the relationship is based on trust and respect. It's all about compatibility.' Trippe should know. His\n\nagency writes three times as much business with The Hartford, in both personal and commercial lines, as it writes with any other insurer.\n\nMutually beneficial partnerships with successful businesses of all sizes are the foundation of The Hartford's business model.\n\nPerhaps no relationship represents shared values and shared success better than the one with AARP, which signed a new eight-year contract with The Hartford that began Jan. 1, 2002. 
The AARP insurance program with The Hartford is a model of affinity marketing and distribution savvy. AARP's membershipthose age 50 and over-is the fastest-growing segment of the U.S. population. Computer use among this group is growing by an estimated 20 percent per year, and the population segment respects established brands and seeks value, convenience and extraordinary service.\n\nThat right combination of factors helps make AARP's World Wide Web site one of The Hartford's", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "most dynamic sources of business growth. In 2001 the company's link to AARP's Web site accounted for much of the $55 million worth of auto business The Hartford generated over the Internet.\n\nBecause The Hartford quotes and issues this business online (and added online billing in 2001), acquisition and processing costs are 15 to 20 percent lower than those of traditional direct-marketing or face-toface sales. Because of this and other factors, the expense ratio for AARP business is 30 percent below that of the industry in general. And the customer renewal rate is 96 percent, versus the industry's 88 percent, making the AARP program yield some of the most profitable auto business The Hartford writes.\n\nThe relationship also has The Hartford thinking ahead toward new business and an even stronger relationship with AARP members. The Hartford can crossmarket auto insurance to homeowner's customers and homeowner's insurance to auto customers, which presents a tremendous growth opportunity. In addition,\n\nThe Hartford is committed to providing value to AARP members in many ways. An example: The Hartford and AARP work with the MIT Age Lab to produce information-available in print and on both partners' Web sites-advising AARP members about Alzheimer's disease and other forms of dementia as they affect driving ability. 
The information guides caregivers struggling with difficult decisions about family members' safety behind the wheel. The resource-a customer solution like no other-helps enhance the superior value The Hartford provides to AARP members.\n\nAlthough it's the most comprehensive, the AARP relationship isn't The Hartford's only affinity program. The company also has affinity arrangements with USAA and other companies. Regardless of the program's size, the affinity partners share the right qualities: strong name-brand recognition, first-class marketing and a broad and loyal customer base.\n\nIn other words, they share some of The Hartford's core attributes.\n\n", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "| The Hartford Financial Services Group, Inc. Hartford Plaza, 690 Asylum Avenue |\n|---------------------------------------------------------------------------------|\n| Hartford, Connecticut 06115 |\n\nFORM 100025[2001]", - "page_start": 39, - "page_end": 39, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "\n\n - /H17073 T he Hartford's acquisition of Fortis Financial Group in 2001 enhanced the company's market share and distribution advantage. Most importantly, the acquisition brought into The Hartford's family powerful sales professionals like Allen Chinoy of Darien, Ill., left, the nation's fifthleading producer of The Hartford's variable universal life insurance. Chinoy is a vocal supporter of Hartford Investor, which makes it easier for him to show customers such as Dr. Dilip Patel how his portfolio is performing.\n - /H17075 J oe Smith, right, and Kim Connolly, left, are a brother-sister team heading Smith Brothers Insurance, Inc. of Glastonbury, Conn. These VIP agents are enthusiastic users of The Hartford's Electronic Business Center (EBC) and other technological tools for propertycasualty agents. 
They piloted the EBC and have given valuable feedback to Senior Commercial Underwriter Tracey Kamenash and others at The Hartford to help develop the EBC standards and navigational model.", - "page_start": 23, - "page_end": 23, - "source_file": "NYSE_HIG_2001.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed11.pdf", - "query": "Regarding climate change, to what corresponds the \"average length of flood events ?", - "target_page": 11, - "target_passage": "The average length of flood events (number of days in which the cumulative daily rainfall excess is positive, compared to the 95th percentile of the baseline", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "issues and re-constructing them di GLYPH<11> erently. By comparing the persistent words used related to the two discourses in the 10-year period in Table 2, we think that global warming showed a relative preference toward general descriptions or slogans, such as 'earth' and 'pollution', whereas 'climate change' was more associated to specific issues like 'solar', 'coal', 'china', and 'food'.\n\nStudies have suggested that the public shows a preference for scientific publications with general keywords compared with those with complicated scientific jargon [47], lacking a deep understanding of the complicated issue [46] and the necessity for mitigation of the climate issue [47]. These conclusions seem to suit global warming more than climate change according to the current study, which is probably because climate change receives more publicity and recognition than global warming in the scientific community. In the association network shown in Figure 2, global warming was found to be more connected with temperature abnormalities. 
This finding is in accordance with studies reporting that short-term temperature anomalies [87] can increase the public's belief about global warming by increasing the understanding of this abstract issue [88], although scientists mostly make judgments based on long-term weather statistics [89]. However, none of the four words, 'snow', 'summer', 'winter', or 'heatwave' in the temperature theme of global warming were ranked in the top 50 nodes list of the climate change network.\n\nEven when climate change and global warming shared concern about similar topics such as the cause of the climate issue, global warming tended to focus on carbon emission phenomena, whereas climate change preferred a more in-depth perspective, highlighting the importance of global action to mitigate the climate issue in its second-largest cluster, with energy structure as the contributor to carbon emissions in its third largest cluster. As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern [90], the two terminologies' di GLYPH<11> erences in connotations suggest that introducing these absent sub-topics into global warming discourse or highlighting climate change for its inherent connotations may help communicators raise public concern about climate.\n\n## 5.1.2. Political Connotations\n\nStudies noted that frame preference between climate change and global warming reflects individuals' ideological spectrum, where climate change and global warming were favored by the liberals and conservatives, respectively [10]. The cluster analysis of the semantic network in the current study demonstrated that global warming triggered far more political responses than climate change. 
The second largest cluster of global warming was politics-based, where hashtag 'tcot', favored by right-leaning users and 'p2', favored by left-leaning users, were both ranked in the list of top nodes of the global warming discourse, but neither was included in the list of top nodes of the climate change discourse. Considering that earlier findings suggested that global warming was more likely to be used by conservatives to question the reality of climate issue [11] and climate change is more commonly adopted when discussing action against the climate change issue [5], global warming had a stronger political connotation in public discussion.\n\n## 5.1.3. Discourse Structure", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed10.pdf" - }, - { - "text": "Model Intercomparison Project (CMIP5) ensemble, forced with the RCP8.5 concentration scenario. To provide more detailed representations of climate processes and impacts, the spatial resolution was N216 (approx. 60 km grid length in mid-latitudes), a higher resolution than the CMIP5 models. We used a set of impacts-relevant indices and a global land surface model to examine the projected changes in weather extremes and their implications for freshwater availability and vulnerability to food insecurity. Uncertainties in regional climate responses are assessed, examining ranges of outcomes in impacts to inform risk assessments. Despite some degree of inconsistency between components of the study due to the need to correct for systematic biases in some aspects, the outcomes from different ensemble members could be compared for several different indicators. The projections for weather extremes indices and biophysical impacts quantities support expectations that the magnitude of change is generally larger for 2°C global warming than 1.5°C. Hot extremes become even hotter, with increases being more intense than seen in CMIP5 projections. 
Precipitation-related extremes show more geographical variation with some increases and some decreases in both heavy precipitation and drought. There are substantial regional uncertainties in hydrological impacts at local scales due to different climate models producing different outcomes. Nevertheless, hydrological impacts generally point towards wetter conditions on average, with increased mean river flows, longer heavy rainfall events, particularly in South and East Asia with the most extreme projections suggesting more than a doubling of flows in the Ganges at 2°C global warming. Some areas are projected to experience shorter meteorological drought events and less severe low flows, although longer droughts and/or decreases in low flows are projected in many other areas, particularly southern Africa and South America. Flows in the Amazon are projected to decline by up to 25%. Increases in either heavy rainfall or drought events imply increased vulnerability to food insecurity, but if global warming is limited to 1.5°C, this vulnerability is projected to remain smaller than at 2°C global warming in approximately 76% of developing countries. At 2°C, four countries are projected to reach unprecedented levels of vulnerability to food insecurity.\n\nThis article is part of the theme issue 'The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5°C above pre-industrial levels'.\n\n## 1. Introduction\n\nThe majority of climate-change impacts assessments have tended to be framed in terms of future time horizons, e.g. impacts by the middle or end of the twenty-first century [1,2]. 
However, with international climate policy now largely focused on limiting warming to specific levels of global mean temperature such as 2°C [3] or 1.5°C [4], policy-relevant climate impacts assessments increasingly need to be framed in terms of such warming levels.\n\nThere are two major research questions concerning the impacts of climate change at 1.5°C and 2°C global warming, which are relevant to both mitigation and adaptation policy areas.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed11.pdf" - }, - { - "text": "## 2. Background\n\n## 2.1. Climate Change, Global Warming, and Frames\n\nExisting studies have noted that the subtle di GLYPH<11> erence between climate change and global warming evokes di GLYPH<11> erent public cognitive responses, where global warming'indicates heat-related impacts, human causes, increased UV light penetration, ozone depletion, and the greenhouse e GLYPH<11> ect, whereas climate change is more associated with a wide range of influences on climate, including drought and agriculture [9]. An N-gram analysis suggested that global warming showed a closer connection with ice, snow, and sea, whereas climate change was always connected with scientific investigations, such as", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed10.pdf" - }, - { - "text": "\n\n\n\n## What can users expect from UKCP18?\n\nThere are three components to UKCP18: observations of historic climate, marine projections and projections over land. These components are described below and summarised in Table 1. UKCP18 will provide each of these components at a higher spatial and temporal resolution than UKCP09 and with more information on different types of uncertainty.\n\n\n\n\n\n## OBSERVATIONS\n\n## Annual report: State of the UK Climate. Downloadable data.\n\nThe 'State of the UK Climate' report for 2017 will be included as part of the UKCP18 package, bringing the observed data right up to date. 
This annual update 8 covers trends, the multidecade climate record and significant weather events such as the early July 2015 hot spell and the exceptionally mild and wet December of the same year.\n\nQuality controlled UK observational datasets from the Met Office observing network, provided at spatial resolutions to match the land projections and for pre-defined administrative regions and river basins, will be available under an Open Government Licence 9 . For variables such as temperature and precipitation these data sets will span the late 19th Century to the present day and will be provided for daily, monthly, seasonal, annual and long term averages.\n\n## MARINE PROJECTIONS\n\n## Sea level rise. Storm surge. Past event case studies.\n\nSea-level rise projections will extend to 2100 and will include contributions from glaciers, ice sheets, freshwater reservoirs, groundwater and thermal expansion. Outputs will include an estimate of the year-to-year changes in sea level rise and a 'plausible but highly unlikely' scenario known as H++. A new feature of UKCP18 will be assessing the credibility of making sea level rise projections to 2300. The projections will use the latest information from the CMIP5 models and application of the methods used in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report 10 .\n\nThe UKCP09 storm surge projections will be updated to provide new estimates of the change in high water levels over the 21st Century. These estimates will be based on a combination of projected mean sea level change and projections of change in the extremes due to changes in atmospheric storminess. These 'storminess' projections will use the same surge model used in operational weather forecasting, using the wind and pressure from the CMIP5 ensemble to drive the surge. 
New understanding of the modification of large-scale sea level change signals as they pass from the open ocean onto the shelf sea around the UK will be incorporated into the UKCP18 marine projections. UKCP18 will also include storm surge historical case studies derived from applying plausible future sea level change to historical extreme events.\n\n - 8 The latest update can be found at http://www.metoffice.gov.uk/climate/uk/about/state-of-climate\n - 9 http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/\n - 10 https://www.ipcc.ch/report/ar5/", - "page_start": 1, - "page_end": 1, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "A detailed investigation of these factors is beyond the scope of this paper; nevertheless, this result illustrates the important point that the nature and patterns of the climate forcing at a particular level of global warming can play an important role in determining the patterns of regional impacts.\n\n## 5. Conclusion\n\nThe higher-resolution HadGEM3 simulations project consistent increases in temperature-related extremes, with larger changes at 2°C compared to 1.5°C and local changes being larger than the global annual mean. There is a higher degree of spatial variation in our projections compared with CMIP5-based studies.\n\nIn the model projections examined here, changes relating to the water cycle are complex, both in their geographical pattern and in the variation between different models. The length of flooding events generally increases across world in all models, but maximum rainfall can either increase or decrease depending on locations. Global patterns of increase and decrease show some consistency between the different GWLs, but also some local differences. Worldwide, most impacts broadly tend to increase with global warming in most areas. 
For global mean changes, even when the sign of change is uncertain, individual realizations generally show reduced impact at 1.5°C compared with 2°C. However, this does not always hold even at the scale of major global river basins.\n\nVulnerability to food insecurity increases more at 2°C global warming than 1.5°C in approximately three-quarters of countries assessed. The vulnerability increase can arise from increases in either flooding or drought. Reduced drought leads to decreased vulnerability in a limited number of cases.\n\nMost simulations here project a general increase in mean streamflow in most of the basins examined, but with a number of notable exceptions in the tropics. While flows in the Ganges are consistently projected to increase by 30-110% at 2°C, Amazon flows could either increase by 3% or decrease by 25%. Ensemble-mean changes in river flow often do not give a full impression of the magnitude of changes that may be possible, so adaptation planning in particular should not rely on ensemble-mean projections and instead consider a range of outcomes. The seasonal low streamflows also increase in many basins, but not as many as for the mean flows-many basins see decreased low flows in some or all projections.\n\nBroadly, changes in weather extremes at 1.5°C global warming could be estimated by scalingback the impacts at 2°C, if this is done with individual ensemble members rather than the ensemble mean. However, this was not always the case for impacts that depend on more complex process or interactions between more than one climate variable, such as run-off and an indicator of vulnerability to food insecurity.\n\nData accessibility.\n\nThis article has no additional data.\n\nCompeting interests. We declare we have no competing interests.\n\nFunding. This research received funding from the European Union Seventh Framework Programme FP7/20072013 under grant agreement no. 603864 (HELIX: 'High-End cLimate Impacts and eXtremes'; www. helixclimate.eu). 
The work of R.A.B., C.B., J.C., L.G., K.L. and K.R. was additionally supported by the Joint UK BEIS/Defra Met Office Hadley Centre Climate Programme (GA01101).\n\nAcknowledgements. The authors thank Ed Pope, Jason Lowe and Dann Mitchell for advice and discussion, Alissa Haward and Maria Pearce for project management and administration of HELIX, and two anonymous reviewers whose comments substantially improved the paper.\n\n## References\n\n - 1. IPCC. 2014 Summary for policymakers. In Climate change 2014: impacts, adaptation, and vulnerability. Part A: global and sectoral aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (eds CB Field et al .), pp. 1-32. Cambridge, UK: Cambridge University Press.", - "page_start": 24, - "page_end": 24, - "source_file": "pubmed11.pdf" - }, - { - "text": "\n\n\n\n\n\n## UK CLIMATE PROJECTIONS: A PROJECT OVERVIEW\n\n\n\n## What is UKCP18 and why do we need it?\n\nFollowing the historic Paris Agreement on Climate Change in December 2015, the Department of Environment, Food and Rural Affairs announced a major upgrade to the UK Climate Projections.\n\nThe UKCP18 project will build upon the current set of projections (UKCP09) to provide the most up-todate assessment of how the climate of the UK may change over the 21st century. This information will be essential to future Climate Change Risk Assessments 1 and to equip the UK with information to help adapt to the challenges and opportunities of climate change in line with the National Adaptation Programme 2 .\n\nOrganisations and individual users will use UKCP18 to inform risk assessments and adaptation plans to ensure they are resilient to extreme weather and climate change. 
Some organisations will use UKCP18 in responding to the Adaptation Reporting Power 3 for example.\n\n\n\n## What improvements does UKCP18 deliver?\n\nUKCP18 will benefit from a range of developments since the release of UKCP09, including:\n\n- · Greater understanding of user needs as a result of the adaptation community's use of UKCP09 projections and the subsequent feedback - user workshops indicated that users supported the continued use of probabilistic projections and the importance of spatially coherent information 4 .\n- · Advances in climate models in recent years, such as the Met Office Hadley Centre HadGEM3 5 model and the CMIP5 6 set of models. Improvements include better representation of the past climate, the inclusion of more cloud and aerosol processes and the ability to model important climate phenomena such as the El-Niño Southern Oscillation (ENSO).\n- · Groundbreaking Met Office research on modelling of extreme events in high resolution regional climate models 7 .\n- · The increased quantity and range of observations available since 2009.\n- · Use of the new Met Office supercomputer, enabling a credible range of climate projections to be generated in greater spatial detail.\n- 1 The 2008 Climate Change Act allows UK government to mandate or invite certain organisations to produce reports to assess the impacts of climate change on their operations and present proposals for adaptation. 
https://www.gov.uk/government/collections/climate-changeadaptationreporting-second-round-reports\n- 2 Expected in 2018, the National Adaptation Programme will be supported by the Evidence Report of the Adaptation Sub-Committee of the Committee on Climate Change (ASC): https://www.theccc.org.uk/uk-climate-change-risk-assessment-2017/introduction-to-the-ccra/ 3 Under the 2008 Climate Change Act, organisations are invited to produce Adaptation Reporting Power reports to assess the impacts of climate change on their operations and present proposals for adaptation: https://www.gov.uk/government/collections/climate-change-adaptation-\n\n## reporting-second-round-reports\n\n- 4 Spatial coherence means that climate projections can be compared between locations and aggregated over larger areas, enabling climate change to be assessed consistently over larger study areas.\n- 5 http://www.metoffice.gov.uk/research/modelling-systems/unified-model/climate-models/hadgem3\n- 6 Coupled model intercomparison project phase 5, see http://cmip-pcmdi.llnl.gov/cmip5/\n- 7 Kendon, E. J., Roberts, N. M., Senior, C. A. & Roberts, M. J. Realism of rainfall in a very high resolution regional climate model. J. Clim. 25, 5791-5806 (2012) http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-11-00562.1", - "page_start": 0, - "page_end": 0, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "In the present study, processing errors in the input data for one ensemble member, the HadGEM2-ES-driven member, caused the results to be invalid. Results for this member for the HCVI are, therefore, not presented here.\n\n## (d) Freshwater resources: run-o/ff\n\nImpacts on freshwater were assessed with a version of the JULES land surface model [24,25], a coupled ecosystem-hydrology-surface exchange model which simulates land-atmosphere fluxes of water, energy and carbon in an internally consistent way, typically applied at global scales. 
Variants of JULES form the land surface scheme of Met Office Hadley Centre Earth System Models [26,27] and have been used to assess impacts of climate change on global terrestrial ecosystems and hydrology [28-30] within such models. JULES can also be used outside of the Earth System Model (ESM), driven by meteorological outputs of other ESMs to assess impacts of a wider range of climate projections [6,8]. Here we use a new, higher-resolution configuration of JULES on a global grid of 0.5° resolution [31].\n\nIt has been noted that hydrological impacts models driven by climate-change projections from climate models tend to give more severe drying than simulated in the climate models themselves [32-34]. This is largely attributed to the inclusion of plant stomatal closure in response to elevated CO2 in the climate model land surface schemes, which generally reduces evapotranspiration relative to climate projections without this process and hence further increases run-off/streamflow or ameliorates decreases [34]. This process is often omitted from standard hydrological models. Plant physiological responses to CO 2 are included in the JULES model, so our projections of changes in run-off here do account for this process.\n\nWe used each HadGEM3 simulation to drive JULES to simulate changes in run-off due to the effects of climate change and CO 2 rise on precipitation, evaporation and transpiration. We analysed 30 year periods centred around the year of crossing GWLs of 1.5°C and 2°C relative to pre-industrial. We examined changes in both mean flows and low flows (defined as the flows for the lowest 10% of time).\n\n## (e) Correcting biases in climate model output and implications for de/fining levels of global warming\n\nThe ClimPACT extreme weather indices, HCVI and JULES run-off simulations were all performed using outputs from the higher-resolution HadGEM3 projections described in §2a. 
However, there were some differences in how these data were applied, with different approaches to the treatment of systematic biases in the climate model output. For the ClimPACT analysis, it was considered important to assess changes in the raw climate model output, because this directly represents the behaviour of the model itself. The main focus was on the changes relative to the presentday baseline climate, defined as 1981-2010, with absolute values in either the baseline or the GWLs of 1.5°C and 2°C being only of secondary interest. For the HCVI and JULES run-off analyses, however, it was considered important to correct for systematic biases in the climate model output, because these can lead to unrealistic representations of the key quantities in the present-day simulation [35]. A bias-correction methodology was, therefore, applied for these two parts of the analysis, whereby the model output was adjusted to make it consistent with an observed climatology [36]. We used a multi-segment statistical bias-correction methodology for precipitation [37], and a modification of this for other variables [37].", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed11.pdf" - }, - { - "text": "Figure 11. Distributions of changes in run-o/ff for low /flows (/flows for lowest 10% of time) simulated by the JULES ecosystemhydrology model under the ensemble of six climate projections at 1.5 ° C(blue)and2 ° C (orange) global warming. Boxes show the 25th and 75th percentile changes, whiskers show the range, circles show the four projections that do not de/fine the ends of the range, and crosses show the ensemble means. Numbers in square brackets show the ensemble-mean /flow in the baseline, in millimetres of rain equivalent.\n\n\n\nTable 6. 
Global mean changes at 1.5 ° C global warming compared to present day for individual ensemble members, for the ClimPACT indices, the /flood and drought proxies used as input to the HCVI calculations, and percentage change in mean precipitation (Pmean), mean run-o/ff (Rmean) and low run-o/ff (Rlow).", - "page_start": 16, - "page_end": 16, - "source_file": "pubmed11.pdf" - }, - { - "text": "Acombination of the above questions is also relevant-how does the range of outcomes at 2°C compare to that at 1.5°C? This is also relevant to adaptation policy, as it can inform assessment on whether to adapt to potential impacts at 2°C or just 1.5°C. Putting in place adaptation measures to deal with potential impacts at 1.5°C and then increasing these to deal with 2°C later may be more expensive and difficult than adapting to potential risks at 2°C at the outset. On the other hand, because adaptation actions may themselves have consequences, unnecessary overadaptation may have undesirable effects which it may be preferable to avoid or at least delay until absolutely necessary.\n\nBoth questions require an appropriate assessment of uncertainty. There are considerable uncertainties in projections of regional climate change, with different climate models projecting regional climate changes that can differ in magnitude or even, in the case of precipitation and impacts quantities strongly related to this, differ in sign [5,6]. This may have important implications for regional impacts at specific levels of global warming. A common approach to exploring and presenting such uncertainties is to examine the ensemble mean and the level of consensus among the ensemble members on the sign of the change. While this can often be useful in informing an assessment of the level of confidence in future projections, it may not always be sufficient to fully inform decisions. Risk assessment approaches require consideration of a range of possible risks, not just the most likely. 
This paper explores a range of regional climate states and related impacts that occur at global warming of 2°C, and a range of differences with warming limited to 1.5°C.\n\nWe examine the implications of our new climate projections by applying some commonly used indices of climate extremes, and a further index quantifying relative vulnerability to food insecurity which combines climate extremes indices with information on a range of factors representing sensitivity and adaptability of food systems to climate hazards. We also use the climate projections to drive a global land surface model to simulate changes in run-off as an indicator of freshwater availability. We assess whether regional extremes are projected to increase or decrease at 2°C global warming, and whether the consequent impact on drought and vulnerability to food insecurity become greater or smaller. We also assess whether these changes are reduced by limiting global warming to 1.5°C. We explore some of the uncertainties in these projections, and, in particular, examine whether the use of ensemble-mean projections is a useful simple guide to impacts projections or whether this can lead to a misleading impression for some impacts. Regarding vulnerability to food insecurity, we consider the impacts of global warming at 1.5°C and 2°C alongside socio-economic influences that affect the sensitivity to climate change. Wealso consider our climate-change impacts results in comparison with other studies using older, lower-resolution climate projections.\n\nA large number of previous studies have assessed potential impacts of future climate change using the 5th Coupled Model Intercomparison Project (CMIP5) ensemble or subsets of this [7], and some have framed this in terms of impacts at global warming of 1.5°C and/or 2°C [8,9]. 
We also base our study on a subset of CMIP5 projections, but use a new, higher-resolution atmosphere model to provide greater spatial detail and improved representation of atmospheric processes.\n\n## 2. Methods and models\n\n## (a) Global climate simulations at 1.5 ° Cand2 ° Cglobalwarming", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed11.pdf" - }, - { - "text": "studies have noticed that the Maya inscription about doomsday, which seemed rather ridiculous for scientists, might lead to unexpected public associations with climate issues. However, science fiction may influence the public's attitude toward scientific issues. Frankenstein's monster, a well-known fictional character who was a human-built creature in the novel written by Mary Shelley, has long been linked to transgenic technology by referring genetically-modified food as 'Frankenstein Food' [98]. Scientists found that these associations successfully symbolized the the public's uncertainty about the risk of transgenic technology, anxiety toward the human-made living creature, and moral discomfort about reducing life to genetic code [99], even though people all know Frankenstein was only a fictional character created 100 years ago. In the current study, we concludd that a similar mechanism may exist in global warming communication. Though 'the end of world in 2012' and its adapted popular movie sounded unconvincing for scientists, the public, especially who have limited scientific literacy, were defenceless against fiction [100]. Some of the public may accept the indications of temperature rise and extreme weather, and cannot help but strengthen their associations with global warming. However, no similar associations were discovered in the climate change discourse in 2012, which may suggest that global warming is more likely to be associated with disasters, risk, or negative sentiment compared with climate change.\n\n## 5.3. 
Discrepancy between the Two Discourses\n\nThe status of the two discourses varied significantly in the more recent years in the study period. Data from Google in prior study suggested that the search record for global warming was larger than that of climate change in earlier times [13]. The authors found that in the battle to be the most representative hashtag for global climate concern, #climatechange showed growing popularity and became an overwhelming trending topic compared with #globalwarming. Also, #climatechange showed a stronger ability to incorporate diverse hashtags into its discourse in both relative and absolute dimensions. Comparatively, the popularity of the global warming discourse among social media users did not increase apparently in terms of tweets volume and hashtag diversity, especially when considering the yearly increase in Twitter users. The reason for the observed shift in public discourse toward climate change from global warming may be attributed to the high exposure of climate change in the media and scientific reports in recent years [13]. Previous studies noted that perceived scientific consensus can increase acceptance of science [101]. Though global warming has been commonly used since the 1980s to describe the world-wide temperature rise, climate change is preferred by scientists to refer a range of complex changes of climate [102]. Pew found science-related accounts draw millions of followers on Facebook and volume of posts they released climbed in past years [103]. Climate scientists are found to be opinion makers on Twitter [104]. As social media has become an emerging platform for science popularization, scientific community might contribute to the prevalence of climate change discourse by talking about climate change facts and mitigating measures [75].\n\nHowever, di GLYPH<11> erences between two discourses were not eliminated. 
Even though two discourses showed more similarities in the rank order of key concepts, the QAP analysis of two matrices of semantic network showed that two discourses still embody distinct public perceptions of climate issues by associating these hashtags in di GLYPH<11> erent manners.", - "page_start": 14, - "page_end": 14, - "source_file": "pubmed10.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed11.pdf", - "query": "What is the projected situation of India regarding HCVI (Hunger and Climate Vulnerability Index)?", - "target_page": 12, - "target_passage": "India is projected to see increased HCVI by all ensemble members, due to a consistent increase in length of flood events projected in all members, outweighing the beneficial impact of decreased length of drought which is again projected in all members", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "members at any given date. Since specific levels of global warming such as 1.5°C or 2°C were reached at different times in the different ensemble members, according to the SST forcings used, any given level of global warming could be associated with different radiative forcings in different ensemble members. In any given ensemble member at any specific level of global warming, the CO 2 concentration and SSTs were the same as in the driving CMIP5 model at that GWL. Land cover was fixed in this simulation-there was no dynamic vegetation nor any time-dependent anthropogenic land use change.\n\nSome comparison of the higher-resolution atmospheric simulations with the original CMIP5 simulations, is provided by Wyser et al. 
[20].\n\n## (b) Temperature and precipitation extremes: the ClimPACT indices\n\nTo quantify changes in weather extremes projected in our climate simulations, we calculated a number of indices designed to be relevant to sector-specific impacts using an established methodology, ClimPACT [21](table 1)\n\n## (c) Food security: the Hunger and Climate Vulnerability Index\n\nTo assess implications of climate change for vulnerability to food insecurity, we used an adaptation of the Hunger and Climate Vulnerability Index (HCVI) [22]. The HCVI was developed by the United Nations World Food Programme to provide a country-level assessment of vulnerability to food insecurity as a result of climate-related events. We used a new iteration of the HCVI which makes use of gridded climate model projections to understand the impact of climate change on vulnerability to food insecurity, and the benefits that adaptation can bring via scenarios of adaptation investment [23]. This iteration of the HCVI only considers in-country production of food and does not account for food trade. For this reason, the HCVI is only calculated for 122 developing and least-developed countries (defined here as countries not in the OECD or EU which can be resolved by the scale of the climate model; i.e. larger than 500 km 2 ).\n\nThe index provides quantification at the national level across the globe of the scale and direction of impact of climate change on food insecurity. 
As such, it aims to provide the following: (i) information to help policy-makers understand the level of challenge to global food security that climate change presents; (ii) information on the geography of the impacts and help to evaluate the relative benefits of mitigation and adaptation responses.\n\nThe index is not intended to be a detailed planning tool, but aims to help planners evaluate the nature of the top-level threat to food insecurity that climate change presents, thereby supporting prioritization of effort.\n\nThe HCVI consists of three equally weighted components: exposure to climate-related hazards, sensitivity of national agricultural production to climate-related hazards, and adaptive capacitya measure of a country's ability to cope with climate-related food shocks. The sensitivity and adaptive capacity components are based on data from the World Bank, World Resources Institute,", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed11.pdf" - }, - { - "text": "Figure 12. Comparison of global mean changes in climate extremes indices relative to 1981-2010 at 2 ° Cand1.5 ° Cglobal warming for individual ensemble members and ensemble mean. ( a ) Change in annual daily maximum temperature; ( b ) percentage of days with maximum temperature above 90th percentile for 1981-2010; ( c ) change in consecutive dry days; ( d ) change in annual maximum 5-day rainfall.\n\n\n\nFor precipitation, generally similar changes are seen at 1.5°C global warming as at 2°C, but smaller in magnitude (compare figures 16 and 4), suggesting that most of these changes are a response to radiatively forced climate change as opposed to internal climate variability. 
However, some localized changes do vary in sign between the GWLs, such as in South Australia, suggesting a possible dominance of internal variability over the global warming signal in these places.\n\nWhere Rx5day increases, the increases are projected to be larger-in some cases approximately double-at 2°C global warming than 1.5°C. Where Rx5day decreases, again the decreases are projected to be larger at 2°C global warming than 1.5°C (figure 17).\n\nOf the 122 countries assessed, 93 have smaller ensemble-mean HCVI calculated at 1.5°C global warming than at 2°C, indicating an ensemble consensus that 76% of assessed countries would see a smaller increase in vulnerability to food insecurity if global warming were limited to 1.5°C (figures 18 and 19). Conversely, 24% of countries would, by this metric, see the same or higher vulnerability to food insecurity at 1.5°C than 2°C. Of these, some are countries where HCVI is projected to be lower at 2°C global warming than in the baseline. For example, in Mali the ensemble-mean baseline HCVI of 0.83 increased slightly to 0.85 at 1.5°C then reduced to 0.81 at 2°C. In some countries, the ensemble-mean HCVI happened to be identical at both warming levels. In Chad, for example, the baseline HCVI of 0.89 increased to 0.91 at both 1.5°C and 2°C.\n\nAs noted above, four countries saw ensemble-mean HCVI values at 2°C above any seen in the baseline, and this number increased to seven at 1.5°C. The same four countries with 'unprecedented' HCVI values at 2°C also saw 'unprecedented' values at 1.5°C; these were Oman, Bangladesh, Mauritania and Yemen. These were joined by Myanmar, India and Cambodia as having 'unprecedented' values at 1.5°C. The role of internal climate variability in the HCVI results needs to be assessed, as does the effect of potential nonlinear interactions between the flood and drought metric. 
Until the reasons behind these country-specific results are understood,", - "page_start": 17, - "page_end": 17, - "source_file": "pubmed11.pdf" - }, - { - "text": "IPSL-CM5A-LR\n\n\n\nIPSL-CM5A-MR\n\n\n\nMIROC-ESM-CHEM\n\n\n\nACCESS1-0\n\n\n\nFigure 8. Change in Hunger and Climate Vulnerability Index relative to baseline calculated for simulated climate states at 2 ° C globalwarming,for/five individual HadGEM3 simulations driven by SSTs and SICs from di/fferent members of the CMIP5 ensemble, and the ensemble mean.\n\n\n\nFour countries show ensemble-mean HCVI values at 2°C global warming that are higher than any seen in the baseline climate; these are Oman, Bangladesh, Mauritania and Yemen. The implication of such HCVI values is that climate change at 2°C is projected to cause levels of vulnerability to food insecurity that are greater than any seen in the present day. For individual ensemble members, the number of countries with 'unprecedented' HCVI values at 2°C varies from three to seven. Conversely, many countries in the baseline climate have levels of vulnerability to food insecurity that are greater than those expected in other countries under 2°C global warming. This suggests that other factors are already posing greater risk for food insecurity than 2°C climate change is expected to cause in other countries, so the increased risk from climate change should not overshadow the need to reduce vulnerability to food insecurity arising from non-climatic factors. There is scope to reduce vulnerability to food insecurity by addressing various socio-economic issues in such counties.\n\nThe JULES simulations show a general tendency towards increased run-off over approximately half of the land surface (figure 9) and the majority of the major river basins assessed (figure 10), but with large regional uncertainties including the possibility of decreased flows in many basins. 
The ensemble-mean change in mean streamflow shows an increase of between 5 and 25% over most of the Northern Hemisphere land surface, with some regions seeing an increase of over 50% at 2°C global warming. Notable exceptions to this are western Europe and southcentral USA, which see less than a 5% change in run-off, and the already very dry region of the Sahara Desert where the existing very small run-off become even smaller.\n\nEnsemble-mean projected changes in low run-off flows are generally larger (figure 11), with the regions seeing an increase in mean run-off seeing a larger percentage increase in low run-off-over 75% increases over much of North America, Eastern Europe and Asia. Note that this does not necessarily imply a larger increase in absolute low flow compared to absolute mean flow, because the baseline is (by definition) smaller for low flows. In western Europe, where the changes in mean flows were less than 5%, the ensemble-mean low flow decreases by between 5\n\nGFDL-ESM2M\n\n\n\n", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed11.pdf" - }, - { - "text": "IPSL-CM5A-LR\n\n\n\nGFDL-ESM2M\n\n\n\nIPSL-CM5A-MR\n\n\n\nMIROC-ESM-CHEM\n\nACCESS1-0\n\n\n\n\n\nFigure 5. Simulated changes in the annual maximum rainfall over 5 days relative to 1981-2010, at 2 ° C global warming, for individual HadGEM3 simulations driven by SSTs and SICs from di/fferent members of the CMIP5 ensemble, and the ensemble mean. The labels above each panel identify the driving CMIP5 model (or ensemble mean).\n\n\n\n2°C, although the geographical variation is still dominated by the non-climatic factors (figure 7). Therefore, the ensemble-mean change is a reasonable guide to the results.\n\nThe ensemble mean is higher in nearly all assessed countries relative to the baseline (figure 8). The greatest increase was in Oman, followed by India, Bangladesh and Saudi Arabia, then Brazil and a number of its neighbouring countries. 
Smaller increases in HCVI were seen across Africa. Southeastern Africa showed larger increases than Central Africa. The HCVI decreased in three countries: Mali, Burkino Faso and Sudan.\n\nThe ensemble members showed broadly consistent changes in HCVI at 2°C global warming, with increases in most assessed countries and generally similar sets of countries experiencing the largest and smallest changes. Southeastern Africa consistently showed larger increases in HCVI than Central Africa, due to increased length of drought events projected in all ensemble members (not shown). The length of flood events was not projected to increase in this region. The Sahel region consistently showed one or more countries with a small decrease in the HCVI, although the precise country or countries varied between ensemble members. The decrease in HCVI here was due to projected decreases in length of drought, with length of flood events projected to change little.\n\nIndia is projected to see increased HCVI by all ensemble members, due to a consistent increase in length of flood events projected in all members, outweighing the beneficial impact of decreased length of drought which is again projected in all members.\n\nBrazil is projected to see increased HCVI, but for reasons which vary between ensemble members. Although the location of projected longer flood events varies across the country in different members, the aggregation of the HCVI to the country level renders this geographical variability irrelevant for such a large country because only the median value across the country is used in the HCVI. Some ensemble members project longer drought for Brazil, which again contributed to increased HCVI.\n\n\n\nHadGEM2-ES\n\n", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed11.pdf" - }, - { - "text": "Model Intercomparison Project (CMIP5) ensemble, forced with the RCP8.5 concentration scenario. 
To provide more detailed representations of climate processes and impacts, the spatial resolution was N216 (approx. 60 km grid length in mid-latitudes), a higher resolution than the CMIP5 models. We used a set of impacts-relevant indices and a global land surface model to examine the projected changes in weather extremes and their implications for freshwater availability and vulnerability to food insecurity. Uncertainties in regional climate responses are assessed, examining ranges of outcomes in impacts to inform risk assessments. Despite some degree of inconsistency between components of the study due to the need to correct for systematic biases in some aspects, the outcomes from different ensemble members could be compared for several different indicators. The projections for weather extremes indices and biophysical impacts quantities support expectations that the magnitude of change is generally larger for 2°C global warming than 1.5°C. Hot extremes become even hotter, with increases being more intense than seen in CMIP5 projections. Precipitation-related extremes show more geographical variation with some increases and some decreases in both heavy precipitation and drought. There are substantial regional uncertainties in hydrological impacts at local scales due to different climate models producing different outcomes. Nevertheless, hydrological impacts generally point towards wetter conditions on average, with increased mean river flows, longer heavy rainfall events, particularly in South and East Asia with the most extreme projections suggesting more than a doubling of flows in the Ganges at 2°C global warming. Some areas are projected to experience shorter meteorological drought events and less severe low flows, although longer droughts and/or decreases in low flows are projected in many other areas, particularly southern Africa and South America. Flows in the Amazon are projected to decline by up to 25%. 
Increases in either heavy rainfall or drought events imply increased vulnerability to food insecurity, but if global warming is limited to 1.5°C, this vulnerability is projected to remain smaller than at 2°C global warming in approximately 76% of developing countries. At 2°C, four countries are projected to reach unprecedented levels of vulnerability to food insecurity.\n\nThis article is part of the theme issue 'The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5°C above pre-industrial levels'.\n\n## 1. Introduction\n\nThe majority of climate-change impacts assessments have tended to be framed in terms of future time horizons, e.g. impacts by the middle or end of the twenty-first century [1,2]. However, with international climate policy now largely focused on limiting warming to specific levels of global mean temperature such as 2°C [3] or 1.5°C [4], policy-relevant climate impacts assessments increasingly need to be framed in terms of such warming levels.\n\nThere are two major research questions concerning the impacts of climate change at 1.5°C and 2°C global warming, which are relevant to both mitigation and adaptation policy areas.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed11.pdf" - }, - { - "text": "Figure 13. Global mean percentage changes relative to 1981-2010 in ( a ) precipitation over land, ( b )meanrun-o/ff/flows,( c )low run-o/ff lows (10th percentile), at 2 ° Cand1.5 ° C global warming.\n\n\n\nthis comparison of the number of 'unprecedented' HCVI values at 1.5°C and 2°C should be treated with caution. Nevertheless, the finding that some countries see HCVI values higher at either or both 1.5°C and 2°C compared to the baseline may indicate that climate change has the potential to lead to unprecedented levels of vulnerability to food insecurity in some countries. 
More robustly, it can be concluded that by this metric, overall worldwide vulnerability to food insecurity generally increases with global warming, and for approximately three-quarters of countries assessed, this increase is larger at 2°C than 1.5°C.\n\nIn the ensemble mean, changes in mean, low and high flows are generally larger at 2°C global warming compared to 1.5°C (figure 20). This is often the case for both increases and decreases in flows-increasing the level of global warming magnifies the pattern of river flow changes, although not in all cases.\n\nThe range of projected mean run-off changes is larger for 2°C than 1.5°C in many basins, but this was not always the case, with many basins showing similar or smaller ranges at 2°C compared with 1.5°. Moreover, the ranges overlap substantially, so in terms of the set of", - "page_start": 18, - "page_end": 18, - "source_file": "pubmed11.pdf" - }, - { - "text": "\n\nvulnerability to food insecurity\n\n-0.2\n\n0.2\n\n0.4\n\n0.6\n\n0\n\n0.8\n\n1.0\n\n1.2\n\n1.4\n\nFigure 18. Hunger and Climate Vulnerability Index at 1.5 ° C global warming (ensemble mean).IPSL-CM5A-LR\n\n\n\nIPSL-CM5A-MR\n\n\n\nMIROC-ESM-CHEM\n\n\n\n\n\n\n\n\n\nFigure19. Di/fference in Hunger and Climate Vulnerability Index between 2 ° Cand1.5 ° Cglobalwarming,forindividualensemble members and ensemble mean.\n\n\n\n## 4. Discussion\n\nIn most cases, global mean changes at 2°C are larger than those at 1.5°C, not only for individual members but also for the ensemble as a whole. All ensemble members show increases in TXx at 2°C which are larger than all changes at 1.5°C, and same true for most other variables.", - "page_start": 21, - "page_end": 21, - "source_file": "pubmed11.pdf" - }, - { - "text": "rsta.royalsocietypublishing.org\n\n## Research\n\n\n\n\n\nCite this article: Betts RA et al . 
2018 Changes in climate extremes, fresh water availability and vulnerability to food insecurity projected at 1.5 ° Cand2 ° C global warming with a higher-resolution global climate model. Phil. Trans. R. Soc. A 376 : 20160452.\n\nhttp://dx.doi.org/10.1098/rsta.2016.0452\n\nAccepted:13February2018\n\nOne contribution of 20 to a theme issue 'The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5 ° C above pre-industrial levels'.\n\n## Subject Areas:\n\nclimatology, hydrology\n\n## Keywords:\n\n1.5 ° C, Paris Agreement, 2 ° C, global climate impacts, water resources, terrestrial ecosystems\n\n## Author for correspondence:\n\nRichard A. Betts\n\ne-mail: richard.betts@meto/ffice.gov.uk\n\n\n\nChanges in climate extremes, fresh water availability and vulnerability to food insecurity projected at 1.5 ° C and 2 ° C global warming with a higher-resolution global climate model\n\nRichard A. Betts 1,2 , Lorenzo Al/fieri 3 , Catherine Bradshaw 2 ,JohnCaesar 2 ,LucFeyen 3 ,Pierre Friedlingstein 4 , Laila Gohar 2 , Aristeidis Koutroulis 5 , Kirsty Lewis 2 , Catherine Morfopoulos 1 , Lamprini Papadimitriou 5,6 ,KatyJ.Richardson 2 , Ioannis Tsanis 5 and Klaus Wyser 7\n\n7 Rossby Centre, SMHI, 601 76 Norrköping, Sweden\n\n\n\n- RAB, 0000-0002-4929-0307\n\nWe projected changes in weather extremes, hydrological impacts and vulnerability to food insecurity at global warming of 1.5°C and 2°C relative to pre-industrial, using a new global atmospheric general circulation model HadGEM3A-GA3.0 driven by patterns of sea-surface temperatures and sea ice from selected members of the 5th Coupled\n\n2018 The Authors. 
Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/ by/4.0/, which permits unrestricted use, provided the original author and source are credited.", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed11.pdf" - }, - { - "text": "-\n\nareas are projected to see an increase in flood event lengths of 4 days or more, particularly India and Bangladesh, for which such increases are projected in all ensemble members to some extent. Increases of 2-4 days are also projected in parts of Brazil by all ensemble members, although the magnitude and location within the country varied between members. Similar increases are projected in the region of the Horn of Africa and southern Arabian Peninsula in several members.\n\nThe HCVI calculated for 2°C global warming showed very large geographical variability (figure 7) which relates largely to differences in socio-economic factors [22]. Differences in the climate change simulated in different ensemble members leads to some variation in the HCVI at\n\nHadGEM2-ES\n\n\n\n", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed11.pdf" - }, - { - "text": "- 15. Roche, K. R., Müller-Itten, M., Dralle, D. N., Bolster, D. & Müller, M. F. Climate change and the opportunity cost of con/flict. PNAS 117 (4), 1935-1940 (2020).\n - 16. Challinor, A. J. et al. A meta-analysis of crop yield under climate change and adaptation. Nat. Clim. Change 4 , 287-291 (2014).\n - 17. Lobell, D. B. et al. Prioritizing climate change adaptation needs for food security in 2030. Science 319 , 607-610 (2017).\n - 18. Lv, S. et al. Yield gap simulations using ten maize cultivars commonly planted in Northeast China during the past /five decades. Agric. For. Meteorol. 205 , 1-10 (2015).\n - 19. Chao, W., Kehui, C. & Shah, F. Heat stress decreases rice grain weight: Evidence and physiological mechanisms of heat e/ffects prior to /flowering. Int. J. Mol. Sci. 
23 (18), 10922 (2022).\n - 20. Chao, W. et al. Estimating the yield stability of heat-tolerant rice genotypes under various heat conditions across reproductive stages: A 5-year case study. Sci. Rep. 11 , 13604 (2021).\n - 21. IPCC. Food security and food production systems. In Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fi/f\\_th Assessment Report of the Intergovernmental Panel of Climate Change 485-533 (Cambridge University Press, 2014).\n - 22. Tigchelaar, M., Battisti, D. S., Naylor, R. L. & Ray, D. K. Future warming increases probability of globally synchronized maize production shocks. PNAS 115 (26), 6644-6649 (2018).\n - 23. Zhao, C. et al. Temperature increase reduces global yields of major crops in four independent estimates. PNAS 114 , 9326-9331 (2017).\n - 24. Di/ffenbaugh, N. S., Hertel, T. W., Scherer, M. & Verma, M. Response of corn markets to climate volatility under alternative energy futures. Nat. Clim. Change 2 , 514-518 (2012).\n - 25. Jensen, H. G. & Anderson, K. Grain price spikes and beggar-thy-neighbor policy responses: A global economywide analysis. World Bank Econ. Rev. 31 , 158-175 (2017).\n - 26. Fraser, E. D. G., Simelton, E., Termansen, M., Gosling, S. N. & South, A. 'Vulnerability hotspots': Integrating socio-economic and hydrological models to identify where cereal production may decline in the future due to climate change induced drought. Agric. For. Meteorol. 170 , 195-205 (2013).\n - 27. Puma, M. J., Bose, S., Chon, S. Y. & Cook, B. I. Assessing the evolving fragility of the global food system. Environ. Res. Lett. 10 , 024007 (2015).\n - 28. Wheeler, T. & Braun, J. V. Climate change impacts on global food security. Science 341 (6145), 508-513 (2013).\n - 29. Lunt, T., Jones, A. W., Mulhern, W. S., Lezaks, D. P. M. & Jahn, M. M. 
Vulnerabilities to agricultural production shocks: An extreme, plausible scenario for assessment of risk for the insurance sector. Clim. Risk Manag. 13 , 1-9 (2016).\n - 30. Jägermeyr, J. & Frieler, K. Spatial variations in crop growing seasons pivotal to reproduce global /fluctuations in maize and wheat\n - yields. Sci. Adv. 4 (11), eaat4517 (2018).\n - 31. Elliott, J. et al. Characterizing agricultural impacts of recent large-scale US droughts and changing technology and management. Agric. Syst. 159 , 275-281 (2017).\n - 32. Tack, J., Barkley, A. & Nalley, L. L. E/ffect of warming temperatures on US wheat yields. Proc. Natl. Acad. Sci. 112 , 6931-6936 (2015).\n - 33. Tao, F., Zhang, Z., Liu, J. & Yokozawa, M. Modelling the impacts of weather and climate variability on crop productivity over a large area: A new super-ensemblebased probabilistic projection. Agric. For. Meteorol. 149 , 1266-1278 (2009).\n - 34. Parent, B. et al. Maize yields over Europe may increase in spite of climate change, with an appropriate use of the genetic variability of /flowering time. PNAS 115 (42), 10642-10647 (2018).", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed9.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed11.pdf", - "query": "Regarding climate change simulation, what is JULES ?", - "target_page": 7, - "target_passage": "Impacts on freshwater were assessed with a version of the JULES land surface model [24,25], a coupled ecosystem–hydrology–surface exchange model which simulates land-atmosphere fluxes of water, energy and carbon in an internally consistent way", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "In the present study, processing errors in the input data for one ensemble member, the HadGEM2-ES-driven member, caused the results to be invalid. 
Results for this member for the HCVI are, therefore, not presented here.\n\n## (d) Freshwater resources: run-o/ff\n\nImpacts on freshwater were assessed with a version of the JULES land surface model [24,25], a coupled ecosystem-hydrology-surface exchange model which simulates land-atmosphere fluxes of water, energy and carbon in an internally consistent way, typically applied at global scales. Variants of JULES form the land surface scheme of Met Office Hadley Centre Earth System Models [26,27] and have been used to assess impacts of climate change on global terrestrial ecosystems and hydrology [28-30] within such models. JULES can also be used outside of the Earth System Model (ESM), driven by meteorological outputs of other ESMs to assess impacts of a wider range of climate projections [6,8]. Here we use a new, higher-resolution configuration of JULES on a global grid of 0.5° resolution [31].\n\nIt has been noted that hydrological impacts models driven by climate-change projections from climate models tend to give more severe drying than simulated in the climate models themselves [32-34]. This is largely attributed to the inclusion of plant stomatal closure in response to elevated CO2 in the climate model land surface schemes, which generally reduces evapotranspiration relative to climate projections without this process and hence further increases run-off/streamflow or ameliorates decreases [34]. This process is often omitted from standard hydrological models. Plant physiological responses to CO 2 are included in the JULES model, so our projections of changes in run-off here do account for this process.\n\nWe used each HadGEM3 simulation to drive JULES to simulate changes in run-off due to the effects of climate change and CO 2 rise on precipitation, evaporation and transpiration. We analysed 30 year periods centred around the year of crossing GWLs of 1.5°C and 2°C relative to pre-industrial. 
We examined changes in both mean flows and low flows (defined as the flows for the lowest 10% of time).\n\n## (e) Correcting biases in climate model output and implications for de/fining levels of global warming\n\nThe ClimPACT extreme weather indices, HCVI and JULES run-off simulations were all performed using outputs from the higher-resolution HadGEM3 projections described in §2a. However, there were some differences in how these data were applied, with different approaches to the treatment of systematic biases in the climate model output. For the ClimPACT analysis, it was considered important to assess changes in the raw climate model output, because this directly represents the behaviour of the model itself. The main focus was on the changes relative to the presentday baseline climate, defined as 1981-2010, with absolute values in either the baseline or the GWLs of 1.5°C and 2°C being only of secondary interest. For the HCVI and JULES run-off analyses, however, it was considered important to correct for systematic biases in the climate model output, because these can lead to unrealistic representations of the key quantities in the present-day simulation [35]. A bias-correction methodology was, therefore, applied for these two parts of the analysis, whereby the model output was adjusted to make it consistent with an observed climatology [36]. We used a multi-segment statistical bias-correction methodology for precipitation [37], and a modification of this for other variables [37].", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed11.pdf" - }, - { - "text": "issues and re-constructing them di GLYPH<11> erently. 
By comparing the persistent words used related to the two discourses in the 10-year period in Table 2, we think that global warming showed a relative preference toward general descriptions or slogans, such as 'earth' and 'pollution', whereas 'climate change' was more associated to specific issues like 'solar', 'coal', 'china', and 'food'.\n\nStudies have suggested that the public shows a preference for scientific publications with general keywords compared with those with complicated scientific jargon [47], lacking a deep understanding of the complicated issue [46] and the necessity for mitigation of the climate issue [47]. These conclusions seem to suit global warming more than climate change according to the current study, which is probably because climate change receives more publicity and recognition than global warming in the scientific community. In the association network shown in Figure 2, global warming was found to be more connected with temperature abnormalities. This finding is in accordance with studies reporting that short-term temperature anomalies [87] can increase the public's belief about global warming by increasing the understanding of this abstract issue [88], although scientists mostly make judgments based on long-term weather statistics [89]. However, none of the four words, 'snow', 'summer', 'winter', or 'heatwave' in the temperature theme of global warming were ranked in the top 50 nodes list of the climate change network.\n\nEven when climate change and global warming shared concern about similar topics such as the cause of the climate issue, global warming tended to focus on carbon emission phenomena, whereas climate change preferred a more in-depth perspective, highlighting the importance of global action to mitigate the climate issue in its second-largest cluster, with energy structure as the contributor to carbon emissions in its third largest cluster. 
As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern [90], the two terminologies' di GLYPH<11> erences in connotations suggest that introducing these absent sub-topics into global warming discourse or highlighting climate change for its inherent connotations may help communicators raise public concern about climate.\n\n## 5.1.2. Political Connotations\n\nStudies noted that frame preference between climate change and global warming reflects individuals' ideological spectrum, where climate change and global warming were favored by the liberals and conservatives, respectively [10]. The cluster analysis of the semantic network in the current study demonstrated that global warming triggered far more political responses than climate change. The second largest cluster of global warming was politics-based, where hashtag 'tcot', favored by right-leaning users and 'p2', favored by left-leaning users, were both ranked in the list of top nodes of the global warming discourse, but neither was included in the list of top nodes of the climate change discourse. Considering that earlier findings suggested that global warming was more likely to be used by conservatives to question the reality of climate issue [11] and climate change is more commonly adopted when discussing action against the climate change issue [5], global warming had a stronger political connotation in public discussion.\n\n## 5.1.3. Discourse Structure", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed10.pdf" - }, - { - "text": "## OPEN\n\n\n\n## The impact of ͷ.ͻ °C and ͸.Ͷ °C global warming on global maize production and trade\n\nKuo Li ͷ * , Jie Pan ͷ , Wei Xiong ͸ , Wei Xie ͹ & Tariq Ali ͹\n\nClimate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world. 
Future climate scenario data was simulated by ͻ climate models recommended by ISI-MIP under ͺ RCP scenarios, in which the approximate scenarios with global warming by ͷ.ͻ °C and ͸ °C were selected. Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by ͷ.ͻ °C and ͸.Ͷ °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that, the risk of maize yield reduction under ͸.Ͷ °C scenario was much more serious than ͷ.ͻ °C scenario; the ratios of yield changes were separately Ͷ.ͷ;% and - ͷͶ.;% under ͷ.ͻ °C and ͸.Ͷ °C scenarios. The reduction trend of total maize production is obvious in the top five countries and the main producing regions of the world, especially under the ͸.Ͷ °C scenario. The market price of maize would increase by around Ͷ.ͽ% and ͹.ͺ% under ͷ.ͻ °C and ͸.Ͷ °C scenarios. With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.\n\nIn the past hundred years, the global climate has experienced great changes 1-4 . According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming 5 . Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health 6-10 . Global warming has gradually changed from a scienti/fic issue to a major social issue of common concern to governments and people of all countries 11-13 . In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris 14 . 
Paris Agreement has indicated and pursue e/fforts to limit the temperature increase to 1.5 °C above pre-industrial levels.", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "studies have noticed that the Maya inscription about doomsday, which seemed rather ridiculous for scientists, might lead to unexpected public associations with climate issues. However, science fiction may influence the public's attitude toward scientific issues. Frankenstein's monster, a well-known fictional character who was a human-built creature in the novel written by Mary Shelley, has long been linked to transgenic technology by referring genetically-modified food as 'Frankenstein Food' [98]. Scientists found that these associations successfully symbolized the the public's uncertainty about the risk of transgenic technology, anxiety toward the human-made living creature, and moral discomfort about reducing life to genetic code [99], even though people all know Frankenstein was only a fictional character created 100 years ago. In the current study, we concludd that a similar mechanism may exist in global warming communication. Though 'the end of world in 2012' and its adapted popular movie sounded unconvincing for scientists, the public, especially who have limited scientific literacy, were defenceless against fiction [100]. Some of the public may accept the indications of temperature rise and extreme weather, and cannot help but strengthen their associations with global warming. However, no similar associations were discovered in the climate change discourse in 2012, which may suggest that global warming is more likely to be associated with disasters, risk, or negative sentiment compared with climate change.\n\n## 5.3. Discrepancy between the Two Discourses\n\nThe status of the two discourses varied significantly in the more recent years in the study period. 
Data from Google in prior study suggested that the search record for global warming was larger than that of climate change in earlier times [13]. The authors found that in the battle to be the most representative hashtag for global climate concern, #climatechange showed growing popularity and became an overwhelming trending topic compared with #globalwarming. Also, #climatechange showed a stronger ability to incorporate diverse hashtags into its discourse in both relative and absolute dimensions. Comparatively, the popularity of the global warming discourse among social media users did not increase apparently in terms of tweets volume and hashtag diversity, especially when considering the yearly increase in Twitter users. The reason for the observed shift in public discourse toward climate change from global warming may be attributed to the high exposure of climate change in the media and scientific reports in recent years [13]. Previous studies noted that perceived scientific consensus can increase acceptance of science [101]. Though global warming has been commonly used since the 1980s to describe the world-wide temperature rise, climate change is preferred by scientists to refer a range of complex changes of climate [102]. Pew found science-related accounts draw millions of followers on Facebook and volume of posts they released climbed in past years [103]. Climate scientists are found to be opinion makers on Twitter [104]. As social media has become an emerging platform for science popularization, scientific community might contribute to the prevalence of climate change discourse by talking about climate change facts and mitigating measures [75].\n\nHowever, di GLYPH<11> erences between two discourses were not eliminated. 
Even though two discourses showed more similarities in the rank order of key concepts, the QAP analysis of two matrices of semantic network showed that two discourses still embody distinct public perceptions of climate issues by associating these hashtags in di GLYPH<11> erent manners.", - "page_start": 14, - "page_end": 14, - "source_file": "pubmed10.pdf" - }, - { - "text": "## 2. Methods and models\n\n## (a) Global climate simulations at 1.5 ° Cand2 ° Cglobalwarming\n\nThere are a number of ways in which 1.5°C or 2°C global warming can be defined-one could be the long-term climate state following a stabilization of warming at that level, another could be the state over a shorter period around the time of first reaching that level. Here we choose the second definition, which is what is seen first and hence needs to be adapted to. There are also a number of methods with which such changes can be assessed [10]. We take the opportunity of availability of a new set of higher-resolutions transient climate and impacts simulations, and use a time-sampling methodology [10] to assess global-scale impacts at these resolutions for the first time.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed11.pdf" - }, - { - "text": "from 5 climate models under 4 RCP scenarios, the future climate situations were selected which are the approximate scenarios with global warming by 1.5 °C and 2.0 °C at the end of 21 century relative to pre-industrial levels; it could minimize the uncertainties of future climate data. /T\\_he inputs for DSSAT simulation include soil parameters, crop calendar data and management information are coped with carefully to improve the e/ffectiveness and reliability of maize yield simulation.\n\n/T\\_here are also several uncertainties and limitations. Firstly, there is no uni/fied understanding of how to calculate the temperature rise of 1.5 °C and 2.0 °C relative to pre-industrial levels in the worldwide. 
At present the research on climate prediction and impact assessment under global warming 1.5 °C and 2.0 °C usually adopts multi-mode ensemble average methods 61,62 , which could obtain the warming response under the condition of instantaneous change, rather than the warming process under the stable state expected by the long-term goal. If we expect to obtain the accurate results, the model prediction test should be estimated to form proprietary scenarios for global warming by 1.5 °C and 2.0 °C 63,64 , which could support for the impacts assessment on di/fferent sectors. Some institutions are carrying out climate change predictions under the lower emission scenarios (global warming 1.5 °C or 2.0 °C). At the same time, in order to achieve the goal of controlling temperature by 1.5 °C at the end of the twenty-/first century, it is urgent to take actions to reduce emissions and develop along the track of low energy consumption 65,66 ; but it is a great challenge for human society to achieve this goal.\n\nSecondly, our methodological approach in this study also has some important limitations, including our use of a single crop model to estimate maize yields. /T\\_here are some limitations for the DSSAT model to simulate yield loss caused by climate extreme events 67 , in which the impacts of pests and diseases are also ignored 68 . However, the DSSAT model has been applied in a lot of researches to simulate historical maize yield 69-71 , in which the results are reliable and credible 72 . /T\\_he results of this research could be an important reference to the other studies which simulate global maize yield in the future, applying crop models such as APSIM, WOFOST, ORYZA and so on.\n\n/T\\_hirdly, there are relatively more researches on the prediction of climate change trend under the background of 1.5 °C and 2.0 °C; but the research on the impact assessment of the main grain crops including global trade in worldwide is few. 
In the meantime, we do not assess the e/ffect of future changes on agriculture, such as increases in farm productivity due to new technology. /T\\_he maize planting area in the future is assumed to be the same as the current situation of maize cultivation in the world.\n\nConclusion. According to the simulation results, the yield of maize under global warming by 2.0 °C would decrease between 3.0 and 18.7% in the worldwide relative to 1986-2005; the maize yield would /fluctuate between - 6.8 and 7.2% under global warming by 1.5 °C. From the spatial distribution, the gross maize yield in the top 5 high-yield countries (including the United States, China, Brazil, Argentina and Mexico) would decrease by 2% under global warming by 1.5 °C and 11.4% under global warming by 2.0 °C. At the global level, the market price for maize would increase by 0.7% and 3.4% under 1.5 °C scenario and 2.0 °C scenario, respectively, which would vary quite largely among di/fferent countries and regions. So, it is urgent for all countries to pay enough attention to the loss risk of maize yield and take actions of mitigation and adaptation to climate change. /T\\_he time le/f\\_t for changing our minds and actions is becoming less and less.\n\n## Data availability", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed9.pdf" - }, - { - "text": "\n\n\n\n\n\n## UK CLIMATE PROJECTIONS: A PROJECT OVERVIEW\n\n\n\n## What is UKCP18 and why do we need it?\n\nFollowing the historic Paris Agreement on Climate Change in December 2015, the Department of Environment, Food and Rural Affairs announced a major upgrade to the UK Climate Projections.\n\nThe UKCP18 project will build upon the current set of projections (UKCP09) to provide the most up-todate assessment of how the climate of the UK may change over the 21st century. 
This information will be essential to future Climate Change Risk Assessments 1 and to equip the UK with information to help adapt to the challenges and opportunities of climate change in line with the National Adaptation Programme 2 .\n\nOrganisations and individual users will use UKCP18 to inform risk assessments and adaptation plans to ensure they are resilient to extreme weather and climate change. Some organisations will use UKCP18 in responding to the Adaptation Reporting Power 3 for example.\n\n\n\n## What improvements does UKCP18 deliver?\n\nUKCP18 will benefit from a range of developments since the release of UKCP09, including:\n\n- · Greater understanding of user needs as a result of the adaptation community's use of UKCP09 projections and the subsequent feedback - user workshops indicated that users supported the continued use of probabilistic projections and the importance of spatially coherent information 4 .\n- · Advances in climate models in recent years, such as the Met Office Hadley Centre HadGEM3 5 model and the CMIP5 6 set of models. Improvements include better representation of the past climate, the inclusion of more cloud and aerosol processes and the ability to model important climate phenomena such as the El-Niño Southern Oscillation (ENSO).\n- · Groundbreaking Met Office research on modelling of extreme events in high resolution regional climate models 7 .\n- · The increased quantity and range of observations available since 2009.\n- · Use of the new Met Office supercomputer, enabling a credible range of climate projections to be generated in greater spatial detail.\n- 1 The 2008 Climate Change Act allows UK government to mandate or invite certain organisations to produce reports to assess the impacts of climate change on their operations and present proposals for adaptation. 
https://www.gov.uk/government/collections/climate-changeadaptationreporting-second-round-reports\n- 2 Expected in 2018, the National Adaptation Programme will be supported by the Evidence Report of the Adaptation Sub-Committee of the Committee on Climate Change (ASC): https://www.theccc.org.uk/uk-climate-change-risk-assessment-2017/introduction-to-the-ccra/ 3 Under the 2008 Climate Change Act, organisations are invited to produce Adaptation Reporting Power reports to assess the impacts of climate change on their operations and present proposals for adaptation: https://www.gov.uk/government/collections/climate-change-adaptation-\n\n## reporting-second-round-reports\n\n- 4 Spatial coherence means that climate projections can be compared between locations and aggregated over larger areas, enabling climate change to be assessed consistently over larger study areas.\n- 5 http://www.metoffice.gov.uk/research/modelling-systems/unified-model/climate-models/hadgem3\n- 6 Coupled model intercomparison project phase 5, see http://cmip-pcmdi.llnl.gov/cmip5/\n- 7 Kendon, E. J., Roberts, N. M., Senior, C. A. & Roberts, M. J. Realism of rainfall in a very high resolution regional climate model. J. Clim. 25, 5791-5806 (2012) http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-11-00562.1", - "page_start": 0, - "page_end": 0, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "## 2. Background\n\n## 2.1. Climate Change, Global Warming, and Frames\n\nExisting studies have noted that the subtle di GLYPH<11> erence between climate change and global warming evokes di GLYPH<11> erent public cognitive responses, where global warming'indicates heat-related impacts, human causes, increased UV light penetration, ozone depletion, and the greenhouse e GLYPH<11> ect, whereas climate change is more associated with a wide range of influences on climate, including drought and agriculture [9]. 
An N-gram analysis suggested that global warming showed a closer connection with ice, snow, and sea, whereas climate change was always connected with scientific investigations, such as", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed10.pdf" - }, - { - "text": "There are two major research questions concerning the impacts of climate change at 1.5°C and 2°C global warming, which are relevant to both mitigation and adaptation policy areas.\n\n - (i) How much larger are the impacts at 2°C compared to 1.5°C? This is the primary question arising from the Paris Agreement [4] and is relevant to mitigation policy, informing judgements and actions on holding the global temperature rise to 'well below 2°C' and 'pursuing efforts to limit the temperature increase to 1.5°C'.\n - (ii) What regional climate conditions and related hydrological and ecological conditions could occur at a particular level of global warming, such as 2°C? This is relevant to adaptation policy and planning-exploring the possible outcomes for these levels of warming will help facilitate adaptation and improved resilience to account for a 1.5°C or 2°C world. It is recognized that many adaptation decisions require information on timing of specific impacts or risks, but nevertheless, framing regional impacts assessments in terms of associated global warming levels (GWLs) may help provide context of the levels of climate change that may be avoidable or unavoidable (and hence require adaptation).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed11.pdf" - }, - { - "text": "- 7. Caitlyn Kennedy, R.L. What's the Di GLYPH<11> erence between Global Warming and Climate Change? 2015. Available online: https: // www.climate.gov / news-features / climate-qa / whats-di GLYPH<11> erence-between-global-warming-andclimate-change (accessed on 10 October 2019).\n - 8. 
Pachauri, R.K.; Allen, M.R.; Barros, V.R.; Broome, J.; Cramer, W.; Christ, R.; Church, J.A.; Clarke, L.; Dahe, Q.; Dasgupta, P.; et al. Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change ; IPCC: Geneva, Switzerland, 2014.\n - 9. Whitmarsh, L. What's in a name? Commonalities and di GLYPH<11> erences in public understanding of 'climate change' and 'global warming'. Public Underst. Sci. 2009 , 18 , 401-420. [CrossRef]\n - 10. Shehata, A.; Hopmann, D.N. Framing climate change: A study of US and Swedish press coverage of global warming. Journal. Stud. 2012 , 13 , 175-192. [CrossRef]\n - 11. Schuldt, J.P.; Roh, S. Of accessibility and applicability: How heat-related cues a GLYPH<11> ect belief in 'global warming' versus 'climate change'. Soc. Cogn. 2014 , 32 , 217-238. [CrossRef]\n - 12. McCright,A.M.; Dunlap, R.E. Challenging global warming as a social problem: An analysis of the conservative movement's counter-claims. Soc. Probl. 2000 , 47 , 499-522. [CrossRef]\n - 13. Lineman, M.; Do, Y.; Kim, J.Y.; Joo, G.J. Talking about climate change and global warming. PLoS ONE 2015 , 10 , e0138996. [CrossRef]\n - 14. Anderson, J.R. The Architecture of Cognition ; Psychology Press: London, UK, 2013.\n - 15. Pan, B.; Zheng, Y.; Wilkie, D.; Shahabi, C. Crowd sensing of tra GLYPH<14> c anomalies based on human mobility and social media. In Proceedings of the 21st ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Orlando, FL, USA, 5-8 November 2013; pp. 344-353.\n - 16. Rogstadius, J.; Vukovic, M.; Teixeira, C.A.; Kostakos, V.; Karapanos, E.; Laredo, J.A. CrisisTracker: Crowdsourced social media curation for disaster awareness. IBM J. Res. Dev. 2013 , 57 , 4:1-4:13. [CrossRef]\n - 17. Leetaru, K.; Wang, S.; Cao, G.; Padmanabhan, A.; Shook, E. Mapping the global Twitter heartbeat: The geography of Twitter. First Monday 2013 , 18 . 
[CrossRef]\n - 18. Kirilenko, A.P.; Molodtsova, T.; Stepchenkova, S.O. People as sensors: Mass media and local temperature influence climate change discussion on Twitter. Glob. Environ. Chang. 2015 , 30 , 92-100. [CrossRef]\n - 19. Gamson, W.A.; Modigliani, A. Media discourse and public opinion on nuclear power: A constructionist approach. Am. J. Sociol. 1989 , 95 , 1-37. [CrossRef]\n - 20. Entman, R.M. Framing: Toward clarification of a fractured paradigm. J. Commun. 1993 , 43 , 51-58. [CrossRef]\n - 21. McCombs, M.; Llamas, J.P.; Lopez-Escobar, E.; Rey, F. Candidate images in Spanish elections: Second-level agenda-setting e GLYPH<11> ects. Journal. Mass Commun. Q. 1997 , 74 , 703-717. [CrossRef]\n - 22. Druckman, J.N. On the limits of framing e GLYPH<11> ects: Who can frame? J. Politics 2001 , 63 , 1041-1066. [CrossRef]\n - 23. Druckman, J.N. The implications of framing e GLYPH<11> ects for citizen competence. Political Behav. 2001 , 23 , 225-256. [CrossRef]\n - 24. Teigen, K.H.; Karevold, K.I. Looking back versus looking ahead: Framing of time and work at di GLYPH<11> erent stages of a project. J. Behav. Decis. Mak. 2005 , 18 , 229-246. [CrossRef]\n - 25. McKenzie, C.R.; Nelson, J.D. What a speaker's choice of frame reveals: Reference points, frame selection, and framing e GLYPH<11> ects. Psychon. Bull. Rev. 2003 , 10 , 596-602. [CrossRef]\n - 26. Du, Y.R. Same events, di GLYPH<11> erent stories: Internet censorship in the Arab Spring seen from China. Journal. Mass Commun. Q. 2016 , 93 , 99-117. 
[CrossRef]", - "page_start": 17, - "page_end": 17, - "source_file": "pubmed10.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed10.pdf", - "query": "Which of #climatechange and #globalwarming is the most used ?", - "target_page": 5, - "target_passage": "A total of 6,662,478 tweets were retained, of which 5,774,747 contained #climatechange, and 887,731 contained #globalwarming", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Figure 5. The sum of centrality for nodes in four clusters in the climate change discourse from 2009 to 2018 ( a ); (the sum of centrality for nodes in four clusters in the global warming discourse from 2009 to 2018 ( b ). Figure 5. The sum of centrality for nodes in four clusters in the climate change discourse from 2009 to 2018 ( a ); (the sum of centrality for nodes in four clusters in the global warming discourse from 2009 to 2018 ( b ). Figure 5. The sum of centrality for nodes in four clusters in the climate change discourse from 2009 to 2018 ( a ); (the sum of centrality for nodes in four clusters in the global warming discourse from 2009 to 2018 ( b ).\n\n\n\nAs the climate change and global warming discourses evolved over the past years, their relative statuses in public discourse also changed. Although from 2009 to 2018, increasing numbers of people started to use Twitter, resulting in an overall rise in the number of tweets and hashtags, the ratio of #climatechange frequency and #globalwarming frequency still indicated the public's change in frame preference. Figure 1a displays that in 2009, the number of tweets with #climatechange was 2.69 times that of the tweets with #globalwarming, whereas the ratio significantly since 2013 and reached 13.02 in 2018. The climate change network showed a stronger ability to incorporate diverse hashtags into discussions, according to Figure 1b. 
In 2009, the hashtags that co-occurred with #climatechange were 2.44 times those that co-occurred with #globalwarming, and the ratio climbed to 6.36 in 2018. As the climate change and global warming discourses evolved over the past years, their relative statuses in public discourse also changed. Although from 2009 to 2018, increasing numbers of people started to use Twitter, resulting in an overall rise in the number of tweets and hashtags, the ratio of #climatechange frequency and #globalwarming frequency still indicated the public's change in frame preference. Figure 1a displays that in 2009, the number of tweets with #climatechange was 2.69 times that of the tweets with #globalwarming, whereas the ratio significantly since 2013 and reached 13.02 in 2018. The climate change network showed a stronger ability to incorporate diverse hashtags into discussions, according to Figure 1b. In 2009, the hashtags that co-occurred with #climatechange were 2.44 times those that co-occurred with #globalwarming, and the ratio climbed to 6.36 in 2018. As the climate change and global warming discourses evolved over the past years, their relative statuses in public discourse also changed. Although from 2009 to 2018, increasing numbers of people started to use Twitter, resulting in an overall rise in the number of tweets and hashtags, the ratio of #climatechange frequency and #globalwarming frequency still indicated the public's change in frame preference. Figure 1a displays that in 2009, the number of tweets with #climatechange was 2.69 times that of the tweets with #globalwarming, whereas the ratio significantly since 2013 and reached 13.02 in 2018. The climate change network showed a stronger ability to incorporate diverse hashtags into discussions, according to Figure 1b. 
In 2009, the hashtags that co-occurred with #climatechange were 2.44 times those that co-occurred with #globalwarming, and the ratio climbed to 6.36 in 2018.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed10.pdf" - }, - { - "text": "\n\n## Article\n\n## #Climatechange vs. #Globalwarming: Characterizing Two Competing Climate Discourses on Twitter with Semantic Network and Temporal Analyses\n\nWen Shi 1 , Haohuan Fu 1,2 , Peinan Wang 3 , Changfeng Chen 3 and Jie Xiong 4, *\n\n- 1 Ministry of Education Key Laboratory for Earth System Modeling, Department of Earth System Science, Tsinghua University, Beijing 100084, China; shi-w18@mails.tsinghua.edu.cn (W.S.); haohuan@tsinghua.edu.cn (H.F.)\n- 2 National Supercomputing Center in Wuxi, Wuxi 214000, China\n- 3 School of Journalism and Communication, Tsinghua University, Beijing 100084, China;\n- wpn17@mails.tsinghua.edu.cn (P.W.); chencf@mail.tsinghua.edu.cn (C.C.)\n- 4 Strategy and Innovation Department, Rennes School of Business, 35065 Rennes, France\n- * Correspondence: jie.xiong@rennes-sb.com; Tel.: + 33-(0)-2-99-54-46-79\n\nReceived: 5 December 2019; Accepted: 3 February 2020; Published: 7 February 2020\n\n\n\nAbstract: Distinct perceptions of the global climate is one of the factors preventing society from achieving consensus or taking collaborative actions on this issue. The public has not even reached an agreement on the naming of the global concern, showing preference for either 'climate change' or 'global warming', and few previous studies have addressed these two competing discourses resulting from distinct climate concerns by di GLYPH<11> erently linking numerous climate concepts. Based on the 6,662,478 tweets containing #climatechange or #globalwarming generated between 1 January 2009 and 31 December 2018, we constructed the semantic networks of the two discourses and examined their evolution over the decade. 
The findings indicate that climate change demonstrated a more scientific perspective and showed an attempt to condense climate discussions rather than di GLYPH<11> use the topic by frequently addressing sub-topics simultaneously. Global warming triggered more political responses and showed a greater connection with phenomena. Temporal analysis suggests that traditional political discussions were gradually fading in both discourses but more recently started to revive in the form of discourse alliance in the climate change discourse. The associations between global warming and weather abnormalitiessuddenly strengthened around 2012. Climate change is becoming more dominant than global warming in public discussions. Although two discourses have shown more similarities in the rank order of important climate concepts, apparent disagreements continue about how these concepts are associated. These findings lay the groundwork for researchers and communicators to narrow the discrepancy between diverse climate perceptions.\n\nKeywords: climate change; global warming; semantic network analysis; temporal analysis; public discourse; Twitter\n\n## 1. Introduction\n\nThe public's distinct understanding of the cause and e GLYPH<11> ect of the global climate issue is an obstacle to joint mitigation actions. In addition to a diversity of views co-existing in the public discourse [1,2], previous studies noticed that the public had even failed to reach an agreement on whether 'climate change' or 'global warming' is the most appropriate definition of the global climate concern [3-5]. According to the definition provided by [6], global warming describes global climate issues as a continuous increase in the average temperature of Earth's surface due to anthropogenic emissions of greenhouse gases, whereas climate change includes not only temperature rise but also a range of\n\n\n\n/gid00001", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed10.pdf" - }, - { - "text": "## 3. 
Results\n\nFor a world at 2°C global warming, we present a range of outcomes to provide insight into the level of agreement between models for a particular projected change, and hence an indication of potential robustness of the projected changes for informing adaptation. We then make a comparison of impacts at global warming 1.5°C to investigate the level of impact that would be avoided by limiting global warming to different levels. Bearing in mind the uncertainty in regional climate outcomes, we address this in a number of ways. For individual realizations, we compare the impacts at different warming levels to see if they are systematically smaller at 1.5°C, even if the sign of the change is uncertain. We also compare the range of outcomes at different GWLs, to see if the regional-scale uncertainty itself increases with global warming.\n\n## (a) Climate-change impacts at 2 ° Cglobalwarming\n\nFor 2°C global warming, the ensemble-mean increase in annual daily maximum temperature was above 2°C for most of the land surface, with the exception of the Indian subcontinent, most of Australia and Antarctica (figure 2). The increase was higher still in many regions; most of North America, much of China and north Asia, northwestern South America and all of Europe. In the northern and eastern USA and much of northern and western Europe, the annual daily maximum temperature increased by over 4°C for 2°C global warming. The global mean TXx increased by more than 2°C in all ensemble members (table 5), so the maximum temperature warming more than the global annual mean is a consistent result across all projections here, as found in previous studies with other models [9] (table 5).\n\nThe different ensemble members give somewhat different results at regional scales, although there is a strong consensus on the temperature extremes examined here becoming warmer. 
In the simulations driven by SSTs and SICs from the two IPSL CMIP5 models, most of the global", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed11.pdf" - }, - { - "text": "## 2. Methods and models\n\n## (a) Global climate simulations at 1.5 ° Cand2 ° Cglobalwarming\n\nThere are a number of ways in which 1.5°C or 2°C global warming can be defined-one could be the long-term climate state following a stabilization of warming at that level, another could be the state over a shorter period around the time of first reaching that level. Here we choose the second definition, which is what is seen first and hence needs to be adapted to. There are also a number of methods with which such changes can be assessed [10]. We take the opportunity of availability of a new set of higher-resolutions transient climate and impacts simulations, and use a time-sampling methodology [10] to assess global-scale impacts at these resolutions for the first time.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed11.pdf" - }, - { - "text": "column name to create two matrices. One matrix was created for the climate change discourse, and we filled the cell whose column name and row name were among the top 50 list in the climate change discourse with the frequency at which the two hashtags were associated in this discourse, and the other cells were filled with 0. This was repeated for the global warming matrix. We thus obtained two matrices with the same row and column names but di GLYPH<11> erent values in the cells. Then, the two matrices were input to the quadratic assignment procedure (QAP) [85] analysis provided by UCINET software [86] to assess their correlation for each year.\n\n## 4. Results\n\n## 4.1. General Descriptions\n\nAssociation networks surrounding #climatechange and #globalwarming showed di GLYPH<11> erent properties. The climate change discourse included 38,821 hashtags, whereas the global warming discourse only contained 8788 hashtags. 
Table 1 displays the 50 most significant hashtags in the two discourses based on centrality. As some hashtags were used in the form of an abbreviation or phrase, explanations are provided in the table. Two networks shared 32 out of the 50 most significant words. Hashtags 'canada', 'cdnpoli', 'sdgs', 'biodiversity', 'education', 'environmental', 'cop24', 'sustainable', 'auspol', 'food', 'agriculture', 'cleanenergy', 'renewableenergy', 'renewables', 'emissions', 'coal', 'fossilfuels', and 'cop21' only showed up on the top 50 list of the 'climate change' network. Hashtags 'tcot', 'california', 'p2', 'nyc', 'snow', 'agw', 'summer', 'global', 'winter', 'india', 'planet', 'heatwave', 'hoax', 'nasa', 'algore', 'world', 'oil', and 'eco' were unique on the top 50 list of the global warming network. The two lists only shared three out of the top five hashtags. In the #climatechange network, 'climateaction' was ranked third place and 'sustainability' was ranked fourth place, whereas they were ranked significantly lower, 17th and 22nd, respectxively, in the #globalwarming network. In the #globalwarming network, 'earth' and 'weather' were among the top five nodes, whereas they were ranked 14th and 24th in the #climatechange network, respectively.\n\nTable 1. The top 50 central hashtags on Twitter surrounding #climatechange and #globalwarming from 2009 to 2018. The hashtag with * is explained in Appendix A in ascending alphabetical order.", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed10.pdf" - }, - { - "text": "issues and re-constructing them di GLYPH<11> erently. 
By comparing the persistent words used related to the two discourses in the 10-year period in Table 2, we think that global warming showed a relative preference toward general descriptions or slogans, such as 'earth' and 'pollution', whereas 'climate change' was more associated to specific issues like 'solar', 'coal', 'china', and 'food'.\n\nStudies have suggested that the public shows a preference for scientific publications with general keywords compared with those with complicated scientific jargon [47], lacking a deep understanding of the complicated issue [46] and the necessity for mitigation of the climate issue [47]. These conclusions seem to suit global warming more than climate change according to the current study, which is probably because climate change receives more publicity and recognition than global warming in the scientific community. In the association network shown in Figure 2, global warming was found to be more connected with temperature abnormalities. This finding is in accordance with studies reporting that short-term temperature anomalies [87] can increase the public's belief about global warming by increasing the understanding of this abstract issue [88], although scientists mostly make judgments based on long-term weather statistics [89]. However, none of the four words, 'snow', 'summer', 'winter', or 'heatwave' in the temperature theme of global warming were ranked in the top 50 nodes list of the climate change network.\n\nEven when climate change and global warming shared concern about similar topics such as the cause of the climate issue, global warming tended to focus on carbon emission phenomena, whereas climate change preferred a more in-depth perspective, highlighting the importance of global action to mitigate the climate issue in its second-largest cluster, with energy structure as the contributor to carbon emissions in its third largest cluster. 
As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern [90], the two terminologies' di GLYPH<11> erences in connotations suggest that introducing these absent sub-topics into global warming discourse or highlighting climate change for its inherent connotations may help communicators raise public concern about climate.\n\n## 5.1.2. Political Connotations\n\nStudies noted that frame preference between climate change and global warming reflects individuals' ideological spectrum, where climate change and global warming were favored by the liberals and conservatives, respectively [10]. The cluster analysis of the semantic network in the current study demonstrated that global warming triggered far more political responses than climate change. The second largest cluster of global warming was politics-based, where hashtag 'tcot', favored by right-leaning users and 'p2', favored by left-leaning users, were both ranked in the list of top nodes of the global warming discourse, but neither was included in the list of top nodes of the climate change discourse. Considering that earlier findings suggested that global warming was more likely to be used by conservatives to question the reality of climate issue [11] and climate change is more commonly adopted when discussing action against the climate change issue [5], global warming had a stronger political connotation in public discussion.\n\n## 5.1.3. Discourse Structure", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed10.pdf" - }, - { - "text": "studies have noticed that the Maya inscription about doomsday, which seemed rather ridiculous for scientists, might lead to unexpected public associations with climate issues. However, science fiction may influence the public's attitude toward scientific issues. 
Frankenstein's monster, a well-known fictional character who was a human-built creature in the novel written by Mary Shelley, has long been linked to transgenic technology by referring genetically-modified food as 'Frankenstein Food' [98]. Scientists found that these associations successfully symbolized the the public's uncertainty about the risk of transgenic technology, anxiety toward the human-made living creature, and moral discomfort about reducing life to genetic code [99], even though people all know Frankenstein was only a fictional character created 100 years ago. In the current study, we concludd that a similar mechanism may exist in global warming communication. Though 'the end of world in 2012' and its adapted popular movie sounded unconvincing for scientists, the public, especially who have limited scientific literacy, were defenceless against fiction [100]. Some of the public may accept the indications of temperature rise and extreme weather, and cannot help but strengthen their associations with global warming. However, no similar associations were discovered in the climate change discourse in 2012, which may suggest that global warming is more likely to be associated with disasters, risk, or negative sentiment compared with climate change.\n\n## 5.3. Discrepancy between the Two Discourses\n\nThe status of the two discourses varied significantly in the more recent years in the study period. Data from Google in prior study suggested that the search record for global warming was larger than that of climate change in earlier times [13]. The authors found that in the battle to be the most representative hashtag for global climate concern, #climatechange showed growing popularity and became an overwhelming trending topic compared with #globalwarming. Also, #climatechange showed a stronger ability to incorporate diverse hashtags into its discourse in both relative and absolute dimensions. 
Comparatively, the popularity of the global warming discourse among social media users did not increase apparently in terms of tweets volume and hashtag diversity, especially when considering the yearly increase in Twitter users. The reason for the observed shift in public discourse toward climate change from global warming may be attributed to the high exposure of climate change in the media and scientific reports in recent years [13]. Previous studies noted that perceived scientific consensus can increase acceptance of science [101]. Though global warming has been commonly used since the 1980s to describe the world-wide temperature rise, climate change is preferred by scientists to refer a range of complex changes of climate [102]. Pew found science-related accounts draw millions of followers on Facebook and volume of posts they released climbed in past years [103]. Climate scientists are found to be opinion makers on Twitter [104]. As social media has become an emerging platform for science popularization, scientific community might contribute to the prevalence of climate change discourse by talking about climate change facts and mitigating measures [75].\n\nHowever, di GLYPH<11> erences between two discourses were not eliminated. Even though two discourses showed more similarities in the rank order of key concepts, the QAP analysis of two matrices of semantic network showed that two discourses still embody distinct public perceptions of climate issues by associating these hashtags in di GLYPH<11> erent manners.", - "page_start": 14, - "page_end": 14, - "source_file": "pubmed10.pdf" - }, - { - "text": "Table 1. The top 50 central hashtags on Twitter surrounding #climatechange and #globalwarming from 2009 to 2018. The hashtag with * is explained in Appendix A in ascending alphabetical order.\n\n| No. 
| #Climatechange | #Climatechange | #Globalwarming | #Globalwarming |\n|-------|---------------------|------------------|---------------------|------------------|\n| | Hashtag | Centrality | Hashtag | Centrality |\n| 1 | climate | 0.466 | climate | 0.530 |\n| 2 | environment | 0.465 | environment | 0.446 |\n| 3 | climateaction | 0.391 | science | 0.319 |\n| 4 | sustainability | 0.316 | earth | 0.296 |\n| 5 | science | 0.314 | weather | 0.280 |\n| 6 | energy | 0.283 | us * | 0.280 |\n| 7 | trump | 0.257 | trump | 0.263 |\n| 8 | us * | 0.247 | pollution | 0.256 |\n| 9 | cop21 * | 0.232 | co2 | 0.244 |\n| 10 | parisagreement * | 0.232 | green | 0.239 |\n| 11 | actonclimate * | 0.225 | tcot * | 0.229 |\n| 12 | water | 0.221 | nature | 0.213 |\n| 13 | pollution | 0.210 | news | 0.198 |\n| 14 | earth | 0.207 | energy | 0.192 |\n| 15 | green | 0.200 | climatechangeisreal | 0.187 |\n| 16 | climatechangeisreal | 0.195 | obama | 0.181 |\n| 17 | renewableenergy * | 0.194 | climateaction | 0.175 |\n| 18 | health | 0.193 | algore * | 0.174 |\n| 19 | nature | 0.187 | water | 0.171 |\n| 20 | renewables | 0.186 | agw * | 0.164 |\n| 21 | cleanenergy | 0.176 | carbon | 0.164 |\n| 22 | carbon | 0.175 | sustainability | 0.163 |", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed10.pdf" - }, - { - "text": "## OPEN\n\n\n\n## The impact of ͷ.ͻ °C and ͸.Ͷ °C global warming on global maize production and trade\n\nKuo Li ͷ * , Jie Pan ͷ , Wei Xiong ͸ , Wei Xie ͹ & Tariq Ali ͹\n\nClimate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world. Future climate scenario data was simulated by ͻ climate models recommended by ISI-MIP under ͺ RCP scenarios, in which the approximate scenarios with global warming by ͷ.ͻ °C and ͸ °C were selected. 
Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by ͷ.ͻ °C and ͸.Ͷ °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that, the risk of maize yield reduction under ͸.Ͷ °C scenario was much more serious than ͷ.ͻ °C scenario; the ratios of yield changes were separately Ͷ.ͷ;% and - ͷͶ.;% under ͷ.ͻ °C and ͸.Ͷ °C scenarios. The reduction trend of total maize production is obvious in the top five countries and the main producing regions of the world, especially under the ͸.Ͷ °C scenario. The market price of maize would increase by around Ͷ.ͽ% and ͹.ͺ% under ͷ.ͻ °C and ͸.Ͷ °C scenarios. With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.\n\nIn the past hundred years, the global climate has experienced great changes 1-4 . According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming 5 . Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health 6-10 . Global warming has gradually changed from a scienti/fic issue to a major social issue of common concern to governments and people of all countries 11-13 . In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris 14 . Paris Agreement has indicated and pursue e/fforts to limit the temperature increase to 1.5 °C above pre-industrial levels.", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "## 2. Background\n\n## 2.1. 
Climate Change, Global Warming, and Frames\n\nExisting studies have noted that the subtle di GLYPH<11> erence between climate change and global warming evokes di GLYPH<11> erent public cognitive responses, where global warming'indicates heat-related impacts, human causes, increased UV light penetration, ozone depletion, and the greenhouse e GLYPH<11> ect, whereas climate change is more associated with a wide range of influences on climate, including drought and agriculture [9]. An N-gram analysis suggested that global warming showed a closer connection with ice, snow, and sea, whereas climate change was always connected with scientific investigations, such as", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed10.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed10.pdf", - "query": "Is the #climateaction hashtag more bound the #globalwarming of #climatechange ?", - "target_page": 7, - "target_passage": "In the #climatechange network, “climateaction” was ranked third place and “sustainability” was ranked fourth place, whereas they were ranked significantly lower, 17th and 22nd, respectxively, in the #globalwarming network", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "Figure 5. The sum of centrality for nodes in four clusters in the climate change discourse from 2009 to 2018 ( a ); (the sum of centrality for nodes in four clusters in the global warming discourse from 2009 to 2018 ( b ). Figure 5. The sum of centrality for nodes in four clusters in the climate change discourse from 2009 to 2018 ( a ); (the sum of centrality for nodes in four clusters in the global warming discourse from 2009 to 2018 ( b ). Figure 5. 
The sum of centrality for nodes in four clusters in the climate change discourse from 2009 to 2018 ( a ); (the sum of centrality for nodes in four clusters in the global warming discourse from 2009 to 2018 ( b ).\n\n\n\nAs the climate change and global warming discourses evolved over the past years, their relative statuses in public discourse also changed. Although from 2009 to 2018, increasing numbers of people started to use Twitter, resulting in an overall rise in the number of tweets and hashtags, the ratio of #climatechange frequency and #globalwarming frequency still indicated the public's change in frame preference. Figure 1a displays that in 2009, the number of tweets with #climatechange was 2.69 times that of the tweets with #globalwarming, whereas the ratio significantly since 2013 and reached 13.02 in 2018. The climate change network showed a stronger ability to incorporate diverse hashtags into discussions, according to Figure 1b. In 2009, the hashtags that co-occurred with #climatechange were 2.44 times those that co-occurred with #globalwarming, and the ratio climbed to 6.36 in 2018. As the climate change and global warming discourses evolved over the past years, their relative statuses in public discourse also changed. Although from 2009 to 2018, increasing numbers of people started to use Twitter, resulting in an overall rise in the number of tweets and hashtags, the ratio of #climatechange frequency and #globalwarming frequency still indicated the public's change in frame preference. Figure 1a displays that in 2009, the number of tweets with #climatechange was 2.69 times that of the tweets with #globalwarming, whereas the ratio significantly since 2013 and reached 13.02 in 2018. The climate change network showed a stronger ability to incorporate diverse hashtags into discussions, according to Figure 1b. 
In 2009, the hashtags that co-occurred with #climatechange were 2.44 times those that co-occurred with #globalwarming, and the ratio climbed to 6.36 in 2018. As the climate change and global warming discourses evolved over the past years, their relative statuses in public discourse also changed. Although from 2009 to 2018, increasing numbers of people started to use Twitter, resulting in an overall rise in the number of tweets and hashtags, the ratio of #climatechange frequency and #globalwarming frequency still indicated the public's change in frame preference. Figure 1a displays that in 2009, the number of tweets with #climatechange was 2.69 times that of the tweets with #globalwarming, whereas the ratio significantly since 2013 and reached 13.02 in 2018. The climate change network showed a stronger ability to incorporate diverse hashtags into discussions, according to Figure 1b. In 2009, the hashtags that co-occurred with #climatechange were 2.44 times those that co-occurred with #globalwarming, and the ratio climbed to 6.36 in 2018.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed10.pdf" - }, - { - "text": "issues and re-constructing them di GLYPH<11> erently. By comparing the persistent words used related to the two discourses in the 10-year period in Table 2, we think that global warming showed a relative preference toward general descriptions or slogans, such as 'earth' and 'pollution', whereas 'climate change' was more associated to specific issues like 'solar', 'coal', 'china', and 'food'.\n\nStudies have suggested that the public shows a preference for scientific publications with general keywords compared with those with complicated scientific jargon [47], lacking a deep understanding of the complicated issue [46] and the necessity for mitigation of the climate issue [47]. 
These conclusions seem to suit global warming more than climate change according to the current study, which is probably because climate change receives more publicity and recognition than global warming in the scientific community. In the association network shown in Figure 2, global warming was found to be more connected with temperature abnormalities. This finding is in accordance with studies reporting that short-term temperature anomalies [87] can increase the public's belief about global warming by increasing the understanding of this abstract issue [88], although scientists mostly make judgments based on long-term weather statistics [89]. However, none of the four words, 'snow', 'summer', 'winter', or 'heatwave' in the temperature theme of global warming were ranked in the top 50 nodes list of the climate change network.\n\nEven when climate change and global warming shared concern about similar topics such as the cause of the climate issue, global warming tended to focus on carbon emission phenomena, whereas climate change preferred a more in-depth perspective, highlighting the importance of global action to mitigate the climate issue in its second-largest cluster, with energy structure as the contributor to carbon emissions in its third largest cluster. As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern [90], the two terminologies' di GLYPH<11> erences in connotations suggest that introducing these absent sub-topics into global warming discourse or highlighting climate change for its inherent connotations may help communicators raise public concern about climate.\n\n## 5.1.2. Political Connotations\n\nStudies noted that frame preference between climate change and global warming reflects individuals' ideological spectrum, where climate change and global warming were favored by the liberals and conservatives, respectively [10]. 
The cluster analysis of the semantic network in the current study demonstrated that global warming triggered far more political responses than climate change. The second largest cluster of global warming was politics-based, where hashtag 'tcot', favored by right-leaning users and 'p2', favored by left-leaning users, were both ranked in the list of top nodes of the global warming discourse, but neither was included in the list of top nodes of the climate change discourse. Considering that earlier findings suggested that global warming was more likely to be used by conservatives to question the reality of climate issue [11] and climate change is more commonly adopted when discussing action against the climate change issue [5], global warming had a stronger political connotation in public discussion.\n\n## 5.1.3. Discourse Structure", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed10.pdf" - }, - { - "text": "studies have noticed that the Maya inscription about doomsday, which seemed rather ridiculous for scientists, might lead to unexpected public associations with climate issues. However, science fiction may influence the public's attitude toward scientific issues. Frankenstein's monster, a well-known fictional character who was a human-built creature in the novel written by Mary Shelley, has long been linked to transgenic technology by referring genetically-modified food as 'Frankenstein Food' [98]. Scientists found that these associations successfully symbolized the the public's uncertainty about the risk of transgenic technology, anxiety toward the human-made living creature, and moral discomfort about reducing life to genetic code [99], even though people all know Frankenstein was only a fictional character created 100 years ago. In the current study, we concludd that a similar mechanism may exist in global warming communication. 
Though 'the end of world in 2012' and its adapted popular movie sounded unconvincing for scientists, the public, especially who have limited scientific literacy, were defenceless against fiction [100]. Some of the public may accept the indications of temperature rise and extreme weather, and cannot help but strengthen their associations with global warming. However, no similar associations were discovered in the climate change discourse in 2012, which may suggest that global warming is more likely to be associated with disasters, risk, or negative sentiment compared with climate change.\n\n## 5.3. Discrepancy between the Two Discourses\n\nThe status of the two discourses varied significantly in the more recent years in the study period. Data from Google in prior study suggested that the search record for global warming was larger than that of climate change in earlier times [13]. The authors found that in the battle to be the most representative hashtag for global climate concern, #climatechange showed growing popularity and became an overwhelming trending topic compared with #globalwarming. Also, #climatechange showed a stronger ability to incorporate diverse hashtags into its discourse in both relative and absolute dimensions. Comparatively, the popularity of the global warming discourse among social media users did not increase apparently in terms of tweets volume and hashtag diversity, especially when considering the yearly increase in Twitter users. The reason for the observed shift in public discourse toward climate change from global warming may be attributed to the high exposure of climate change in the media and scientific reports in recent years [13]. Previous studies noted that perceived scientific consensus can increase acceptance of science [101]. Though global warming has been commonly used since the 1980s to describe the world-wide temperature rise, climate change is preferred by scientists to refer a range of complex changes of climate [102]. 
Pew found science-related accounts draw millions of followers on Facebook and the volume of posts they released climbed in past years [103]. Climate scientists are found to be opinion makers on Twitter [104]. As social media has become an emerging platform for science popularization, the scientific community might contribute to the prevalence of climate change discourse by talking about climate change facts and mitigating measures [75].\n\nHowever, differences between the two discourses were not eliminated. Even though the two discourses showed more similarities in the rank order of key concepts, the QAP analysis of the two matrices of the semantic network showed that the two discourses still embody distinct public perceptions of climate issues by associating these hashtags in different manners.", - "page_start": 14, - "page_end": 14, - "source_file": "pubmed10.pdf" - }, - { - "text": "column name to create two matrices. One matrix was created for the climate change discourse, and we filled the cell whose column name and row name were among the top 50 list in the climate change discourse with the frequency at which the two hashtags were associated in this discourse, and the other cells were filled with 0. This was repeated for the global warming matrix. We thus obtained two matrices with the same row and column names but different values in the cells. Then, the two matrices were input to the quadratic assignment procedure (QAP) [85] analysis provided by UCINET software [86] to assess their correlation for each year.\n\n## 4. Results\n\n## 4.1. General Descriptions\n\nAssociation networks surrounding #climatechange and #globalwarming showed different properties. The climate change discourse included 38,821 hashtags, whereas the global warming discourse only contained 8788 hashtags. Table 1 displays the 50 most significant hashtags in the two discourses based on centrality. 
As some hashtags were used in the form of an abbreviation or phrase, explanations are provided in the table. Two networks shared 32 out of the 50 most significant words. Hashtags 'canada', 'cdnpoli', 'sdgs', 'biodiversity', 'education', 'environmental', 'cop24', 'sustainable', 'auspol', 'food', 'agriculture', 'cleanenergy', 'renewableenergy', 'renewables', 'emissions', 'coal', 'fossilfuels', and 'cop21' only showed up on the top 50 list of the 'climate change' network. Hashtags 'tcot', 'california', 'p2', 'nyc', 'snow', 'agw', 'summer', 'global', 'winter', 'india', 'planet', 'heatwave', 'hoax', 'nasa', 'algore', 'world', 'oil', and 'eco' were unique on the top 50 list of the global warming network. The two lists only shared three out of the top five hashtags. In the #climatechange network, 'climateaction' was ranked third place and 'sustainability' was ranked fourth place, whereas they were ranked significantly lower, 17th and 22nd, respectively, in the #globalwarming network. In the #globalwarming network, 'earth' and 'weather' were among the top five nodes, whereas they were ranked 14th and 24th in the #climatechange network, respectively.\n\nTable 1. The top 50 central hashtags on Twitter surrounding #climatechange and #globalwarming from 2009 to 2018. The hashtag with * is explained in Appendix A in ascending alphabetical order.", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed10.pdf" - }, - { - "text": "In the global warming network, politics was the second-largest discourse cluster (20% of the network), where 'tcot', short for 'Top Conservatives on Twitter', was the node ranked highest, and 'p2', short for 'Progressives 2.0', is also included. Several political figures, such as Obama and Al Gore, are frequently mentioned. 
Action toward the global climate issue was the third-largest cluster (16%), including both domestic efforts, such as 'us', 'trump', 'climatechangeisreal', 'climateaction', and 'epa', and two international items, like 'china' and 'india'. The fourth cluster (in blue) referred to emissions, including hashtags like 'co2', 'green', and 'carbon'. The smallest cluster (8%) was composed of 'snow', 'winter', 'heatwave', and 'summer', referring to the temperature abnormalities on the earth.\n\n## 4.3. Temporal Analysis of the Associations in the Two Discourses\n\nThe online presentations of the climate change and global warming discourses are dynamic. As shown in Table 2, for the global warming discourse, 11 key concepts remained in the top 50 central hashtags each year for all 10 years, with 16 for the climate change discourse. By comparing the 11 nodes of the global warming discourse and the 16 nodes of the climate change discourse, we found that the two lists shared nine concepts. We found 'pollution' and 'earth' were unique to the keyword list of the global warming discourse, and 'economy', 'water', 'china', 'coal', 'solar', 'sustainability', and 'food' only occurred on the critical list for the climate change discourse.\n\nTable 2. Hashtags that remained on the top 50 list for the climate change or the global warming discourse from 2009 to 2018.\n\n| | Unique | Shared |\n|-------------------------------|---------------------------------------------------------------------------|---------------------------------------------------------------------|\n| #climatechange #globalwarming | china, solar, water, food, economy, coal, sustainability pollution, earth | co2, news, carbon, green, climate, us, energy, science, environment |\n\nFigures 3 and 4 show the overall evolution of critical hashtags' associations in the 10-year period, where the nodes in the 10 graphs are located in the same position but the strength of associations varies across longitudinal time. 
Vector graphics with the label of nodes are provided in the Supplementary Materials. Four themes were identified in each discourse according to the nodes' associations. To more explicitly demonstrate the relative importance of each cluster in each year, we calculated the sum of the degree centrality of all the nodes belonging to each cluster and their change in centrality over the 10 years, as shown in Figure 5.", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed10.pdf" - }, - { - "text": "Table 1. The top 50 central hashtags on Twitter surrounding #climatechange and #globalwarming from 2009 to 2018. The hashtag with * is explained in Appendix A in ascending alphabetical order.\n\n| No. | #Climatechange | #Climatechange | #Globalwarming | #Globalwarming |\n|-------|---------------------|------------------|---------------------|------------------|\n| | Hashtag | Centrality | Hashtag | Centrality |\n| 1 | climate | 0.466 | climate | 0.530 |\n| 2 | environment | 0.465 | environment | 0.446 |\n| 3 | climateaction | 0.391 | science | 0.319 |\n| 4 | sustainability | 0.316 | earth | 0.296 |\n| 5 | science | 0.314 | weather | 0.280 |\n| 6 | energy | 0.283 | us * | 0.280 |\n| 7 | trump | 0.257 | trump | 0.263 |\n| 8 | us * | 0.247 | pollution | 0.256 |\n| 9 | cop21 * | 0.232 | co2 | 0.244 |\n| 10 | parisagreement * | 0.232 | green | 0.239 |\n| 11 | actonclimate * | 0.225 | tcot * | 0.229 |\n| 12 | water | 0.221 | nature | 0.213 |\n| 13 | pollution | 0.210 | news | 0.198 |\n| 14 | earth | 0.207 | energy | 0.192 |\n| 15 | green | 0.200 | climatechangeisreal | 0.187 |\n| 16 | climatechangeisreal | 0.195 | obama | 0.181 |\n| 17 | renewableenergy * | 0.194 | climateaction | 0.175 |\n| 18 | health | 0.193 | algore * | 0.174 |\n| 19 | nature | 0.187 | water | 0.171 |\n| 20 | renewables | 0.186 | agw * | 0.164 |\n| 21 | cleanenergy | 0.176 | carbon | 0.164 |\n| 22 | carbon | 0.175 | sustainability | 0.163 |", - "page_start": 6, - "page_end": 6, - "source_file": 
"pubmed10.pdf" - }, - { - "text": "## 2. Methods and models\n\n## (a) Global climate simulations at 1.5°C and 2°C global warming\n\nThere are a number of ways in which 1.5°C or 2°C global warming can be defined: one could be the long-term climate state following a stabilization of warming at that level, another could be the state over a shorter period around the time of first reaching that level. Here we choose the second definition, which is what is seen first and hence needs to be adapted to. There are also a number of methods with which such changes can be assessed [10]. We take the opportunity of availability of a new set of higher-resolution transient climate and impacts simulations, and use a time-sampling methodology [10] to assess global-scale impacts at these resolutions for the first time.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed11.pdf" - }, - { - "text": "## 5.1.3. Discourse Structure\n\nIn the discourse surrounding #climatechange, 'environment', 'energy', and 'global action' represented the themes of the three largest clusters in the network. However, three popularly recurring hashtags, '#environment', '#energy', and '#climateaction', did not belong to any of the three clusters above, but formed another small tight cluster together, sitting in the most central part of the semantic network, as shown in Figure 2b. As each of the three hashtags can almost represent one sub-theme of the climate change topic, the fact that these three hashtags were tightly bundled might indicate an attempt by #climatechange users to address all three communities together [91], consolidating climate change as a topic rather than a loosely organized topic. Previous communication studies also confirmed hashtags' function of serving as a hybrid forum [68], where heterogeneous individuals coordinate to solve", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed10.pdf" - }, - { - "text": "## 2. Background\n\n## 2.1. 
Climate Change, Global Warming, and Frames\n\nExisting studies have noted that the subtle difference between climate change and global warming evokes different public cognitive responses, where 'global warming' indicates heat-related impacts, human causes, increased UV light penetration, ozone depletion, and the greenhouse effect, whereas climate change is more associated with a wide range of influences on climate, including drought and agriculture [9]. An N-gram analysis suggested that global warming showed a closer connection with ice, snow, and sea, whereas climate change was always connected with scientific investigations, such as", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed10.pdf" - }, - { - "text": "## 3. Results\n\nFor a world at 2°C global warming, we present a range of outcomes to provide insight into the level of agreement between models for a particular projected change, and hence an indication of potential robustness of the projected changes for informing adaptation. We then make a comparison of impacts at global warming 1.5°C to investigate the level of impact that would be avoided by limiting global warming to different levels. Bearing in mind the uncertainty in regional climate outcomes, we address this in a number of ways. For individual realizations, we compare the impacts at different warming levels to see if they are systematically smaller at 1.5°C, even if the sign of the change is uncertain. We also compare the range of outcomes at different GWLs, to see if the regional-scale uncertainty itself increases with global warming.\n\n## (a) Climate-change impacts at 2°C global warming\n\nFor 2°C global warming, the ensemble-mean increase in annual daily maximum temperature was above 2°C for most of the land surface, with the exception of the Indian subcontinent, most of Australia and Antarctica (figure 2). 
The increase was higher still in many regions; most of North America, much of China and north Asia, northwestern South America and all of Europe. In the northern and eastern USA and much of northern and western Europe, the annual daily maximum temperature increased by over 4°C for 2°C global warming. The global mean TXx increased by more than 2°C in all ensemble members (table 5), so the maximum temperature warming more than the global annual mean is a consistent result across all projections here, as found in previous studies with other models [9] (table 5).\n\nThe different ensemble members give somewhat different results at regional scales, although there is a strong consensus on the temperature extremes examined here becoming warmer. In the simulations driven by SSTs and SICs from the two IPSL CMIP5 models, most of the global", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed11.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed10.pdf", - "query": "What are two main reasons for one's low climate concern ?", - "target_page": 13, - "target_passage": "As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "issues and re-constructing them differently. By comparing the persistent words used related to the two discourses in the 10-year period in Table 2, we think that global warming showed a relative preference toward general descriptions or slogans, such as 'earth' and 'pollution', whereas 'climate change' was more associated with specific issues like 'solar', 'coal', 'china', and 'food'.\n\nStudies have suggested that the public shows a preference for scientific publications with general keywords compared with those with complicated scientific jargon [47], lacking a deep understanding of the complicated issue [46] and the necessity for mitigation of the climate issue [47]. 
These conclusions seem to suit global warming more than climate change according to the current study, which is probably because climate change receives more publicity and recognition than global warming in the scientific community. In the association network shown in Figure 2, global warming was found to be more connected with temperature abnormalities. This finding is in accordance with studies reporting that short-term temperature anomalies [87] can increase the public's belief about global warming by increasing the understanding of this abstract issue [88], although scientists mostly make judgments based on long-term weather statistics [89]. However, none of the four words, 'snow', 'summer', 'winter', or 'heatwave' in the temperature theme of global warming were ranked in the top 50 nodes list of the climate change network.\n\nEven when climate change and global warming shared concern about similar topics such as the cause of the climate issue, global warming tended to focus on carbon emission phenomena, whereas climate change preferred a more in-depth perspective, highlighting the importance of global action to mitigate the climate issue in its second-largest cluster, with energy structure as the contributor to carbon emissions in its third largest cluster. As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern [90], the two terminologies' differences in connotations suggest that introducing these absent sub-topics into global warming discourse or highlighting climate change for its inherent connotations may help communicators raise public concern about climate.\n\n## 5.1.2. Political Connotations\n\nStudies noted that frame preference between climate change and global warming reflects individuals' ideological spectrum, where climate change and global warming were favored by the liberals and conservatives, respectively [10]. 
The cluster analysis of the semantic network in the current study demonstrated that global warming triggered far more political responses than climate change. The second largest cluster of global warming was politics-based, where hashtag 'tcot', favored by right-leaning users and 'p2', favored by left-leaning users, were both ranked in the list of top nodes of the global warming discourse, but neither was included in the list of top nodes of the climate change discourse. Considering that earlier findings suggested that global warming was more likely to be used by conservatives to question the reality of the climate issue [11] and climate change is more commonly adopted when discussing action against the climate change issue [5], global warming had a stronger political connotation in public discussion.\n\n## 5.1.3. Discourse Structure", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed10.pdf" - }, - { - "text": "Model Intercomparison Project (CMIP5) ensemble, forced with the RCP8.5 concentration scenario. To provide more detailed representations of climate processes and impacts, the spatial resolution was N216 (approx. 60 km grid length in mid-latitudes), a higher resolution than the CMIP5 models. We used a set of impacts-relevant indices and a global land surface model to examine the projected changes in weather extremes and their implications for freshwater availability and vulnerability to food insecurity. Uncertainties in regional climate responses are assessed, examining ranges of outcomes in impacts to inform risk assessments. Despite some degree of inconsistency between components of the study due to the need to correct for systematic biases in some aspects, the outcomes from different ensemble members could be compared for several different indicators. The projections for weather extremes indices and biophysical impacts quantities support expectations that the magnitude of change is generally larger for 2°C global warming than 1.5°C. 
Hot extremes become even hotter, with increases being more intense than seen in CMIP5 projections. Precipitation-related extremes show more geographical variation with some increases and some decreases in both heavy precipitation and drought. There are substantial regional uncertainties in hydrological impacts at local scales due to different climate models producing different outcomes. Nevertheless, hydrological impacts generally point towards wetter conditions on average, with increased mean river flows, longer heavy rainfall events, particularly in South and East Asia with the most extreme projections suggesting more than a doubling of flows in the Ganges at 2°C global warming. Some areas are projected to experience shorter meteorological drought events and less severe low flows, although longer droughts and/or decreases in low flows are projected in many other areas, particularly southern Africa and South America. Flows in the Amazon are projected to decline by up to 25%. Increases in either heavy rainfall or drought events imply increased vulnerability to food insecurity, but if global warming is limited to 1.5°C, this vulnerability is projected to remain smaller than at 2°C global warming in approximately 76% of developing countries. At 2°C, four countries are projected to reach unprecedented levels of vulnerability to food insecurity.\n\nThis article is part of the theme issue 'The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5°C above pre-industrial levels'.\n\n## 1. Introduction\n\nThe majority of climate-change impacts assessments have tended to be framed in terms of future time horizons, e.g. impacts by the middle or end of the twenty-first century [1,2]. 
However, with international climate policy now largely focused on limiting warming to specific levels of global mean temperature such as 2°C [3] or 1.5°C [4], policy-relevant climate impacts assessments increasingly need to be framed in terms of such warming levels.\n\nThere are two major research questions concerning the impacts of climate change at 1.5°C and 2°C global warming, which are relevant to both mitigation and adaptation policy areas.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed11.pdf" - }, - { - "text": "In the global warming network, politics was the second-largest discourse cluster (20% of the network), where 'tcot', short for 'Top Conservatives on Twitter', was the node ranked highest, and 'p2', short for 'Progressives 2.0', is also included. Several political figures, such as Obama and Al Gore, are frequently mentioned. Action toward the global climate issue was the third-largest cluster (16%), including both domestic efforts, such as 'us', 'trump', 'climatechangeisreal', 'climateaction', and 'epa', and two international items, like 'china' and 'india'. The fourth cluster (in blue) referred to emissions, including hashtags like 'co2', 'green', and 'carbon'. The smallest cluster (8%) was composed of 'snow', 'winter', 'heatwave', and 'summer', referring to the temperature abnormalities on the earth.\n\n## 4.3. Temporal Analysis of the Associations in the Two Discourses\n\nThe online presentations of the climate change and global warming discourses are dynamic. As shown in Table 2, for the global warming discourse, 11 key concepts remained in the top 50 central hashtags each year for all 10 years, with 16 for the climate change discourse. By comparing the 11 nodes of the global warming discourse and the 16 nodes of the climate change discourse, we found that the two lists shared nine concepts. 
We found 'pollution' and 'earth' were unique to the keyword list of the global warming discourse, and 'economy', 'water', 'china', 'coal', 'solar', 'sustainability', and 'food' only occurred on the critical list for the climate change discourse.\n\nTable 2. Hashtags that remained on the top 50 list for the climate change or the global warming discourse from 2009 to 2018.\n\n| | Unique | Shared |\n|-------------------------------|---------------------------------------------------------------------------|---------------------------------------------------------------------|\n| #climatechange #globalwarming | china, solar, water, food, economy, coal, sustainability pollution, earth | co2, news, carbon, green, climate, us, energy, science, environment |\n\nFigures 3 and 4 show the overall evolution of critical hashtags' associations in the 10-year period, where the nodes in the 10 graphs are located in the same position but the strength of associations varies across longitudinal time. Vector graphics with the label of nodes are provided in the Supplementary Materials. Four themes were identified in each discourse according to the nodes' associations. To more explicitly demonstrate the relative importance of each cluster in each year, we calculated the sum of the degree centrality of all the nodes belonging to each cluster and their change in centrality over the 10 years, as shown in Figure 5.", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed10.pdf" - }, - { - "text": "make global action salient for people talking about global warming than people talking about climate change [40], even though the facts of climate issues are highly recognized in both discourses.\n\n## 6. Conclusions\n\nAs social media is gradually overtaking the role of legacy media providing a forum for public discussion, the semantic associations contained in social media discussions reflect and reinforce how individuals portray global climate issues. 
By examining hashtag co-occurrence patterns on Twitter between 2009 and 2018, we identified distinct climate perceptions hidden behind two competing climate discourses and discovered how these two discourses evolved.\n\nWe found that broad scientific, social, political, and international discussions are the topics of public climate discourse. Although the semantic difference between climate change and global warming seems subtle, the differences in their cognitive associations are not trivial. Despite some shared concerns between the two discourses, 'global warming' is more politicized and focuses more on general phenomena, especially temperature abnormalities, whereas climate change is a more compact topic with a more scientific perspective and tends to refer to specific issues. The temporal analysis revealed that traditional political discussions decreased in both discourses but climate change started to build a discourse alliance with diverse domestic issues to show political intentions. Global warming's associations to extreme events and temperature change were suddenly strengthened around 2012. Climate change is becoming dominant compared with global warming in public discussions. Although the two discourses are becoming increasingly similar in the rank order of climate concepts, a notable discrepancy still exists in the way in which they get concepts associated. These observations may provide climate communicators with theoretical and practical hints to narrow the discrepancy between diverse climate perceptions.\n\n## Limitation and Future Directions\n\nThough big data allowed us to decrease the bias by dealing with the whole set of social media data rather than samples, discrepancies still exist between social media users and the public. As most Twitter users do not disclose their age, education, income, and gender in their profiles, demographics were not introduced as moderator factors in this study. 
Previous studies noted that in the 1970s, global cooling was a prominent climate concern amongst the public [105]. In the 1980s, ozone layer depletion, species extinction and rainforest destruction became salient on the mass media agenda [106]. Considering the historical background of climate issues, age might influence how individuals perceive climate issues. According to the statistics in 2017 [107], only 16% of older people (older than 60) in America use Twitter, while the proportion is 39% for people between 30-59 years old and 47% for people younger than 30 years old (Statista, 2017). Our results reflect the climate perception of older people who use Twitter, as well as younger people amongst whom Twitter is more popular. Although some scholars reported that it is statistically reliable to take data on Twitter as a substitute and supplement for polling [108], we thought our results should be further examined before being generalized to the whole population.", - "page_start": 15, - "page_end": 15, - "source_file": "pubmed10.pdf" - }, - { - "text": "There are two major research questions concerning the impacts of climate change at 1.5°C and 2°C global warming, which are relevant to both mitigation and adaptation policy areas.\n\n - (i) How much larger are the impacts at 2°C compared to 1.5°C? This is the primary question arising from the Paris Agreement [4] and is relevant to mitigation policy, informing judgements and actions on holding the global temperature rise to 'well below 2°C' and 'pursuing efforts to limit the temperature increase to 1.5°C'.\n - (ii) What regional climate conditions and related hydrological and ecological conditions could occur at a particular level of global warming, such as 2°C? This is relevant to adaptation policy and planning-exploring the possible outcomes for these levels of warming will help facilitate adaptation and improved resilience to account for a 1.5°C or 2°C world. 
It is recognized that many adaptation decisions require information on timing of specific impacts or risks, but nevertheless, framing regional impacts assessments in terms of associated global warming levels (GWLs) may help provide context of the levels of climate change that may be avoidable or unavoidable (and hence require adaptation).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed11.pdf" - }, - { - "text": "A combination of the above questions is also relevant-how does the range of outcomes at 2°C compare to that at 1.5°C? This is also relevant to adaptation policy, as it can inform assessment on whether to adapt to potential impacts at 2°C or just 1.5°C. Putting in place adaptation measures to deal with potential impacts at 1.5°C and then increasing these to deal with 2°C later may be more expensive and difficult than adapting to potential risks at 2°C at the outset. On the other hand, because adaptation actions may themselves have consequences, unnecessary overadaptation may have undesirable effects which it may be preferable to avoid or at least delay until absolutely necessary.\n\nBoth questions require an appropriate assessment of uncertainty. There are considerable uncertainties in projections of regional climate change, with different climate models projecting regional climate changes that can differ in magnitude or even, in the case of precipitation and impacts quantities strongly related to this, differ in sign [5,6]. This may have important implications for regional impacts at specific levels of global warming. A common approach to exploring and presenting such uncertainties is to examine the ensemble mean and the level of consensus among the ensemble members on the sign of the change. While this can often be useful in informing an assessment of the level of confidence in future projections, it may not always be sufficient to fully inform decisions. 
Risk assessment approaches require consideration of a range of possible risks, not just the most likely. This paper explores a range of regional climate states and related impacts that occur at global warming of 2°C, and a range of differences with warming limited to 1.5°C.\n\nWe examine the implications of our new climate projections by applying some commonly used indices of climate extremes, and a further index quantifying relative vulnerability to food insecurity which combines climate extremes indices with information on a range of factors representing sensitivity and adaptability of food systems to climate hazards. We also use the climate projections to drive a global land surface model to simulate changes in run-off as an indicator of freshwater availability. We assess whether regional extremes are projected to increase or decrease at 2°C global warming, and whether the consequent impact on drought and vulnerability to food insecurity become greater or smaller. We also assess whether these changes are reduced by limiting global warming to 1.5°C. We explore some of the uncertainties in these projections, and, in particular, examine whether the use of ensemble-mean projections is a useful simple guide to impacts projections or whether this can lead to a misleading impression for some impacts. Regarding vulnerability to food insecurity, we consider the impacts of global warming at 1.5°C and 2°C alongside socio-economic influences that affect the sensitivity to climate change. We also consider our climate-change impacts results in comparison with other studies using older, lower-resolution climate projections.\n\nA large number of previous studies have assessed potential impacts of future climate change using the 5th Coupled Model Intercomparison Project (CMIP5) ensemble or subsets of this [7], and some have framed this in terms of impacts at global warming of 1.5°C and/or 2°C [8,9]. 
We also base our study on a subset of CMIP5 projections, but use a new, higher-resolution atmosphere model to provide greater spatial detail and improved representation of atmospheric processes.\n\n## 2. Methods and models\n\n## (a) Global climate simulations at 1.5°C and 2°C global warming", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed11.pdf" - }, - { - "text": "complex changes in the state of the climate [7], which may be caused by natural processes, external forces, or human interventions [8]. By randomly assigning respondents to climate change or global warming questionnaires, scholars confirmed that the different connotations contained in the two definitions are likely to evoke distinct interpretations of the causes and impacts of the global climate issue [9], which may inhibit collaboration and joint efforts to mitigate the global challenge.\n\nPublic preference between climate change and global warming is even more apparent when considering the ideology spectrum [10]. Some scholars concluded that conservatives, who are less concerned with environmental issues, tended to use global warming as a narrative strategy because global warming has a more direct connection with temperature rise, making it easier to find contradictory cues such as freezing weather or heavy snowstorms to deny global climate change facts [11]. The associations between global warming and human activities may contribute to more controversies as well [12], connecting global warming more with the 'hoax' frame [5] and evoking greater negative sentiment [13].\n\nAlthough these existing studies have often attempted to identify the differences between these two terminologies, only a particular few perspectives, such as sentiment, ideological preference, or cause and effect, were examined in each study [3,9,13]. 
However, the associate network model introduced by psychologists suggests that human recognition and memory have a network-shaped architecture [14], where individual understanding of particular objects is connected with numerous other objects in the mind. According to the associate network model, individual understanding of the global climate concern is a network composed of numerous inter-connected concepts, in which climate change and global warming. As the two terminologies concern the primary mechanism of the global climate issue, the preference between the two understandings may represent two distinct climate discourses by di GLYPH<11> erently organizing numerous climate concepts. Examining the di GLYPH<11> erences between two discourses with an associative perspective may provide communicators with unique insights into narrowing the cognitive discrepancy. The temporal dimension was lacking in existing studies, necessitating the study of how concepts associated with each other have evolved with time.\n\nLargeamountsofuser-generateddataonsocialmedia, whichhavebeenvaluedincomputerscience, communication, and environmental studies [5,9,15-18], have enabled the acquistion of the social media representation of the two discourses in a decade. In this study, by analyzing hashtag co-occurrence patterns in 6,662,478 tweets containing 'climate change' and 'global warming' between 1 January 2009 and 31 December 2018, two semantic networks of public climate discourse were constructed to identify the critical concepts and links surrounding the two terminologies. 
We conducted temporal analysis to observe the evolution of the two discourses and to measure whether the discrepancy between the two has widened or narrowed within the 10-year period.\n\nTo be specific, we formulated three research questions (RQs) to be explored in this study:\n\nRQ1: What is the di GLYPH<11> erence in how the two the discourses are associated with important climate concepts in people's minds?\n\nRQ2: How did the two competing climate discourses evolve from 2009 to 2018? RQ3: Did the two competing discourses converge or diverge in this decade?\n\n## 2. Background\n\n## 2.1. Climate Change, Global Warming, and Frames", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed10.pdf" - }, - { - "text": "Biodiversity is also crucial for safeguarding EU and global food security. Biodiversity loss threatens our food systems 6 , putting our food security and nutrition at risk. Biodiversity also underpins healthy and nutritious diets and improves rural livelihoods and agricultural productivity 7 . For instance, more than 75% of global food crop types rely on animal pollination 8 .\n\nDespite this urgent moral, economic and environmental imperative, nature is in a state of crisis . The five main direct drivers of biodiversity loss 9 - changes in land and sea use, overexploitation, climate change, pollution, and invasive alien species - are making nature disappear quickly. We see the changes in our everyday lives: concrete blocks rising up on green spaces, wilderness disappearing in front of our eyes, and more species being put at risk of extinction than at any point in human history. In the last four decades, global wildlife populations fell by 60% as a result of human activities 10 . And almost three quarters of the Earth's surface have been altered 11 , squeezing nature into an eversmaller corner of the planet.\n\nThe biodiversity crisis and the climate crisis are intrinsically linked. 
Climate change accelerates the destruction of the natural world through droughts, flooding and wildfires, while the loss and unsustainable use of nature are in turn key drivers of climate change. But just as the crises are linked, so are the solutions. Nature is a vital ally in the fight against climate change 12 . Nature regulates the climate, and nature-based solutions 13 , such as protecting and restoring wetlands, peatlands and coastal ecosystems, or sustainably managing marine areas, forests, grasslands and agricultural soils, will be essential for emission reduction and climate adaptation. Planting trees and deploying green infrastructure will help us to cool urban areas and mitigate the impact of natural disasters.\n\nBiodiversity loss and ecosystem collapse are one of the biggest threats facing humanity in the next decade 14 . They also threaten the foundations of our economy and the costs of inaction are high and are anticipated to increase 15 . The world lost an estimated €3.5-18.5 trillion per year in ecosystem services from 1997 to 2011 owing to land-cover change, and an estimated €5.5-10.5 trillion per year from land degradation. Specifically, biodiversity loss results in reduced crop yields and fish catches, increased economic losses from flooding and other disasters, and the loss of potential new sources of medicine 16 .\n\nThe EU is ready to show ambition to reverse biodiversity loss, lead the world by example and by action, and help agree and adopt a transformative post-2020 global framework at the 15 th Conference of the Parties to the Convention on Biological Diversity. 
This should", - "page_start": 2, - "page_end": 2, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "## OPEN\n\n\n\n## The impact of ͷ.ͻ °C and ͸.Ͷ °C global warming on global maize production and trade\n\nKuo Li ͷ * , Jie Pan ͷ , Wei Xiong ͸ , Wei Xie ͹ & Tariq Ali ͹\n\nClimate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world. Future climate scenario data was simulated by ͻ climate models recommended by ISI-MIP under ͺ RCP scenarios, in which the approximate scenarios with global warming by ͷ.ͻ °C and ͸ °C were selected. Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by ͷ.ͻ °C and ͸.Ͷ °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that, the risk of maize yield reduction under ͸.Ͷ °C scenario was much more serious than ͷ.ͻ °C scenario; the ratios of yield changes were separately Ͷ.ͷ;% and - ͷͶ.;% under ͷ.ͻ °C and ͸.Ͷ °C scenarios. The reduction trend of total maize production is obvious in the top five countries and the main producing regions of the world, especially under the ͸.Ͷ °C scenario. The market price of maize would increase by around Ͷ.ͽ% and ͹.ͺ% under ͷ.ͻ °C and ͸.Ͷ °C scenarios. With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.\n\nIn the past hundred years, the global climate has experienced great changes 1-4 . According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming 5 . 
Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health 6-10 . Global warming has gradually changed from a scienti/fic issue to a major social issue of common concern to governments and people of all countries 11-13 . In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris 14 . Paris Agreement has indicated and pursue e/fforts to limit the temperature increase to 1.5 °C above pre-industrial levels.", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "\n\nFigure 2. (continued)\n\n\n\nis 16.9% in which the temperature would go up more than 3.0 °C, most located in the high latitude regions of Northern Hemisphere; the area is rarely in which the temperature would go up between 0 and 1.0 °C.\n\n/T\\_here are apparent trends of humidi/fication in most regions under global warming by 1.5 °C and 2.0 °C; but the drought risk also should be taken seriously in the other regions. Under global warming by 1.5 °C the area is 73.6% of the whole world in which the precipitation would increase, most located in the Northern Hemisphere; the area is 53.7% of the whole world in which the precipitation would increase by less than 50 mm; however, the area is 26.4% of whole world in which the rainfall would decrease, mainly located in the Southern Hemisphere and the middle regions of Northern Hemisphere. /T\\_he distribution of precipitation under global warming by 2.0 °C is similar with the situation under global warming by 1.5 °C. 
/T\\_he drought-threatened area would increase by 28.5% under global warming by 2.0 °C, especially in the middle and low latitude of the Northern Hemisphere; the area would expand to 26%, in which the precipitation increases more than 50 mm. In other words, the extreme rainfall events (such as drought, rainstorm) under global warming by 2.0 °C would be more serious than those under global warming by 1.5 °C, which is what we should be pay more attention to.\n\nYield change of maize under global warming by ͷ.ͻ °C and ͸.Ͷ °C. Maize production is a/ffected by climate change apparently. According to the simulation results of CERES-maize, the yield of maize would decrease in the worldwide relative to 1986-2005 under global warming by 2.0 °C; it would increase little under global warming by 1.5 °C. /T\\_he distributions of maize yield loss under the two scenarios are similar to each other, mostly located in the middle and low latitude, which are the main regions for maize planting in the world. /T\\_he loss risk of maize under global warming by 2.0 °C is much more serious than that under global warming of 1.5 °C. However, there are increasing potentials of maize yield in many regions, nearly half of the whole maize planting area in the world, in which the climate situation would become more proper for maize under global\n\nVol.:(0123456789)", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed9.pdf" - } - ] - }, - { - "references": { - "source_file": "infographic3.pdf", - "query": "How many scholarly articles are published every year ?", - "target_page": 1, - "target_passage": "over 3 million scholarly articles published per year", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "All rights reserved. 
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage or retrieval system, without prior permission in writing from the publisher. Subject to any applicable licensing terms and conditions in the case of electronically supplied publications, a person may engage in fair dealing with a copy of this publication for his or her personal or private use, or his or her research or private study. See Section 12(1)(a) of the Copyright Act 98 of 1978.\n\nThe authors and the publisher have made every effort to obtain permission for and to acknowledge the use of copyright material. Should any infringement of copyright have occurred, please contact the publisher, and every effort will be made to rectify omissions or errors in the event of a reprint or new edition.\n\nDeveloped for Oxbridge Academy - 2015", - "page_start": 1, - "page_end": 1, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## Article", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed3.pdf" - }, - { - "text": "## The Value of Using Unique Identifiers for Researchers\n\n## What's in a Name?\n\nMost names are not unique\n\n\n\nMany people have the same name\n\nPeople use di/fferent versions of their name during their career\n\nIndividuals use di/fferent alphabets, abbreviations, or naming conventions\n\n## Researchers are mobile!\n\n\n\nFor example,\n\n30% OF THE SCIENTISTS WHO GOT THEIR PhD IN THE UNITED KINGDOM NOW LIVE ELSEWHERE\n\nSource: Science Magazine\n\nResearch institutions and organizations therefore find it hard to\n\n\n\n- Benchmark their organization against others\n- Identify, track, and report on researchers' a/ffiliations and contributions (publications, peer reviews, grants, and more)\n\n## Institutions Face a Rising Tide of Research\n\n\n\n\n\nInstitutions must increasingly recognize and demonstrate the impact of all types of research 
contributions\n\n\n\n## Tackling Information Overload\n\nORCID is a non-profit organization, which provides a fully open and interoperable identifier to reliably connect researchers with their research contributions. The ORCID iD is a 16-digit identifier that researchers can register for and use for free.\n\nConnects individuals and their professional contributions across disciplines, organizations, and time\n\nEnables recognition of all types of research contributions and innovation\n\n\n\nHelps research institutions, funders, publishers, and other organizations better track and support research work\n\n## How ORCID Works\n\n\n\n- It's a registry of unique persistent identifiers for researchers\n- It's a hub that connects researchers with their professional activities and contributions\n- It's a global community that enables researchers to share their data with other individuals, organizations, and systems\n\n## Why Connect with ORCID?\n\nHundreds of members and systems use ORCID globally\n\n## 5.5 MILLION+\n\nlive ORCID iDs registered since its 2012 launch\n\n\n\nSource: Orcid.org/statistics as of November 2018\n\n\n\nNames may\n\nchange through\n\nmarriage or other\n\ncircumstances\n\n\n\n## Evidence of Institutional Value\n\nExamples of time/sta/ff savings achieved by implementing ORCID from around the world\n\n\n\nUK: 0.2 - 0.4 FTEs per institution 1 Portugal: 100,000 researcher hours per year 2 Australia: 15-30 minutes per grant application 3\n\n1. Jisc/ARMA Institutional ORCID Implementation and Cost Benefit Analysis Report 2015 2. Cátia Laranjeira, FCT - Fundação para a Ciência e a Tecnologia 2017 3. 
Australian Research Council governance meeting, September 2018\n\n\"Having ORCID iDs for most of our researchers has helped in providing authoritative accounts in our various databases, ensuring accuracy in reviewer identities, and helping editors find reviewers and check expertise.\"\n\n-Brooks Hanson, Executive Vice President, Science, American Geophysical Union\n\n## How Organizations and Researchers Benefit\n\n## INSTITUTIONS\n\n- Save time and reduce errors with automated information-sharing and cross-system interoperability\n- Manage your organization name and your researchers' connections with it\n- Maintain links with your researchers - past, present, and future\n\n## RESEARCHERS\n\n- Improve recognition and discoverability of their research\n- Spend more time doing research, less time managing it\n- Control and manage a trusted and easily shareable record of their research activities and a/ffiliations - for free\n\n\n\n\n\n\n\n## Three Ways to Get Involved\n\n- 1. Encourage and support your researchers in getting, sharing, and using their ORCID iD\n- 2. Invest in integrating ORCID into your systems\n- 3. Connect data to and from your researchers' ORCID records to support information use and reuse across organizations\n\nSponsored by ORCID\n\nTo learn more go to https://orcid.org\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "infographic3.pdf" - }, - { - "text": "## 4. Copyright, Licensing, & Access to Books for Training\n\nEven if books can be acquired, digitized, and made technically useful for AI training, the development of a books data commons would necessarily need to navigate and comply with copyright law.\n\nOut-of-Copyright Books: A minority of books are old enough to be in the public domain and out of copyright, and an AI developer could use them in training without securing any copyright permission. In the United States, all books published or released before 1929 are in the public domain. 
While use of these books provides maximal certainty for the AI developer to train on, it is worth noting that the status of whether a book is in the public domain can be difficult to determine. For instance, books released between 1929 and 1963 in the U.S. are 14 out of copyright if they were not subject to a copyright renewal; however, data on copyright renewals is not easily accessible.\n\nWhat's more, copyright definitions and term lengths vary among countries. Even if a work is in the public domain in the US, it may not be in other countries. Countries generally use the 15 life of the last living author + 'x' years to determine the term of copyright protection. For most countries, 'x' is either 50 years (the minimum required by the Berne Convention) or 70 years (this is the case for all member states of the European Union and for all works published in the U.S. after 1978). This approach makes it difficult to determine copyright terms with certainty because it requires information about the date of death of each author, which is often not readily available.\n\nIn-Copyright Books: The vast majority of books are in copyright, and, insofar as the training process requires making a copy of the book, the use in AI training may implicate copyright law. Our workshop covered three possible paths for incorporating such works.\n\n## Direct licensing\n\nOne could directly license books from rightsholders. There may be some publishers who are willing to license their works for this purpose, but it is hard to determine the scale of such access, and, in any event, there are significant limits on this approach. Along with the challenge (and expense) of reaching agreements with relevant rightsholders, there is also the practical difficulty of simply identifying and finding the rightsholder that one must negotiate", - "page_start": 8, - "page_end": 8, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## 3. 
Why Books are Important to Training AI\n\nDespite the proliferation of online content and some speculating that books would simply die out with the advent of the Internet, books remain a critical vehicle for disseminating 9 knowledge. The more scientists study how books can impact people, the less surprising this is. Our brains have been shown to interact with longform books in meaningful ways: we develop bigger vocabularies when we read books; we develop more empathy when we read literary fiction; and connectivity between different regions of our brain increases when we read. 10\n\nIn that light, it might be unsurprising that books are important for training AI models. A broadly accessible books dataset could be useful not only for building LLMs, but also for many other types of AI research and development.\n\n## Performance and Quality\n\nThe performance and versatility of an AI model can significantly depend on whether the training corpus includes books or not. Books are uniquely valuable for AI training due to several characteristics.\n\n- · Length: Books tend to represent longer-form content, and fiction books, in particular, represent long-form narrative. An AI trained on this longer-form, narrative type of content is able to make connections over a longer context, so instead of putting words together to form a single sentence, the AI becomes more able to string concepts together into a coherent whole; even after a book is divided into many 'chunks' before the process of tokenization, that will still provide long stretches of text that are longer than the average web page. While Web documents, for instance, tend to be longer than a single sentence, they are not typically hundreds of pages long like a book.\n- · Quality: The qualities of the training data impact the outputs a tool can produce. 
Consider an LLM trained on gibberish; it can learn the patterns of that gibberish and, in turn, produce related gibberish, but will not be very useful for writing an argument or a story, for instance. In contrast, training an LLM on books with well-constructed arguments or crafted stories could serve those purposes. While 'well-constructed' and 'crafted' are necessarily subjective, the traditional role of editors and the publishing process can provide a useful indicator for the quality of writing inside of books. What's more, metadata for books - information such as the title, author and year of publication - is often more comprehensive than metadata for information", - "page_start": 5, - "page_end": 5, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "- Shermer, Michael (25 October 2022). Conspiracy: Why the Rational Believe the Irrational . JHU Press. ISBN 978-1-4214-4445-1.\n - Sider, Theodore (2010). Logic for Philosophy . Oxford University Press. ISBN 978-0-19957558-9.\n - Siegel, Harvey; Biro, John (1997). \"Epistemic Normativity, Argumentation, and Fallacies\" (htt ps://philpapers.org/rec/SIEENA). Argumentation . 11 (3): 277-292. doi:10.1023/A:1007799325361 (https://doi.org/10.1023%2FA%3A1007799325361). S2CID 126269789 (https://api.semanticscholar.org/CorpusID:126269789). Archived (https:// web.archive.org/web/20220228035651/https://philpapers.org/rec/SIEENA) from the original on 28 February 2022. Retrieved 4 January 2022.\n - Simpson, R. L. (2008). Essentials of Symbolic Logic (3rd ed.). Broadview Press. p. 14. ISBN 978-1-77048-495-5.\n - Smith, Robin (2022). \"Aristotle's Logic\" (https://plato.stanford.edu/entries/aristotle-logic/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Retrieved 11 March 2023.\n - Spade, Paul Vincent; Panaccio, Claude (2019). \"William of Ockham\" (https://plato.stanford.e du/entries/ockham/#SummLogi). The Stanford Encyclopedia of Philosophy . 
Metaphysics Research Lab, Stanford University.\n - Spriggs, John (2012). GSN - The Goal Structuring Notation: A Structured Approach to Presenting Arguments . Springer Science & Business Media. pp. 20-22. ISBN 978-1-44712312-5.\n - Stairs, Allen (2017). A Thinker's Guide to the Philosophy of Religion . Routledge. p. 343. ISBN 978-1-351-21981-5.\n - Sternberg, Robert J. \"Thought\" (https://www.britannica.com/topic/thought). Encyclopædia Britannica . Archived (https://web.archive.org/web/20211013145532/https://www.britannica.c om/topic/thought) from the original on 13 October 2021. Retrieved 14 October 2021.\n - Stolyar, Abram Aronovich (1 January 1984). Introduction to Elementary Mathematical Logic . Courier Corporation. ISBN 978-0-486-64561-2.\n - Stone, Mark A. (2012). \"Denying the Antecedent: Its Effective Use in Argumentation\" (https:// philpapers.org/rec/STODTA). Informal Logic . 32 (3): 327-356. doi:10.22329/il.v32i3.3681 (ht tps://doi.org/10.22329%2Fil.v32i3.3681). Archived (https://web.archive.org/web/2022022812 3240/https://philpapers.org/rec/STODTA) from the original on 28 February 2022. Retrieved 8 January 2022.\n - Stump, David J. \"Fallacy, Logical\" (https://www.encyclopedia.com/history/dictionaries-thesau ruses-pictures-and-press-releases/fallacy-logical). encyclopedia.com . Archived (https://web. archive.org/web/20210215112403/https://www.encyclopedia.com/history/dictionaries-thesau ruses-pictures-and-press-releases/fallacy-logical) from the original on 15 February 2021. Retrieved 20 March 2021.\n - Talbott, William (2016). \"Bayesian Epistemology\" (https://plato.stanford.edu/entries/epistemo logy-bayesian/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20210401034856/https://plato.sta nford.edu/entries/epistemology-bayesian/) from the original on 1 April 2021. Retrieved 6 March 2021.\n - Tarski, Alfred (1994). 
Introduction to Logic and to the Methodology of the Deductive Sciences . Oxford University Press. p. 40. ISBN 978-0-19-802139-1.\n - Tondl, L. (2012). Problems of Semantics: A Contribution to the Analysis of the Language Science . Springer Science & Business Media. p. 111. ISBN 978-94-009-8364-9.\n - Velleman, Daniel J. (2006). How to Prove It: A Structured Approach . Cambridge University Press. p. 8, 103. ISBN 978-0-521-67599-4.\n - Vickers, John M. (2022). \"Inductive Reasoning\" (https://www.oxfordbibliographies.com/displ ay/document/obo-9780195396577/obo-9780195396577-0171.xml). Oxford Bibliographies . Oxford University Press. Retrieved 18 January 2023.", - "page_start": 35, - "page_end": 35, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- Paulson, Lawrence C. (February 2018). \"Computational Logic: Its Origins and Applications\" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5832843). Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences . 474 (2210): 1-14. arXiv:1712.04375 (https://arxiv.org/abs/1712.04375). Bibcode:2018RSPSA.47470872P (https://ui.adsabs.harv ard.edu/abs/2018RSPSA.47470872P). doi:10.1098/rspa.2017.0872 (https://doi.org/10.109 8%2Frspa.2017.0872). PMC 5832843 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5832 843). PMID 29507522 (https://pubmed.ncbi.nlm.nih.gov/29507522). S2CID 3805901 (http s://api.semanticscholar.org/CorpusID:3805901).\n - Pedemonte, Bettina (25 June 2018). \"Strategic vs Definitory Rules: Their Role in Abductive Argumentation and their Relationship with Deductive Proof\" (https://www.ejmste.com/article/ strategic-vs-definitory-rules-their-role-in-abductive-argumentation-and-their-relationship-with -5539). Eurasia Journal of Mathematics, Science and Technology Education . 14 (9): 1-17. doi:10.29333/ejmste/92562 (https://doi.org/10.29333%2Fejmste%2F92562). ISSN 13058215 (https://search.worldcat.org/issn/1305-8215). S2CID 126245285 (https://api.semantics cholar.org/CorpusID:126245285). 
Archived (https://web.archive.org/web/20211207195246/h ttps://www.ejmste.com/article/strategic-vs-definitory-rules-their-role-in-abductive-argumentati on-and-their-relationship-with-5539) from the original on 7 December 2021. Retrieved 8 January 2022.\n - Pickel, Bryan (1 July 2020). \"Structured Propositions and Trivial Composition\" (https://doi.or g/10.1007%2Fs11229-018-1853-1). Synthese . 197 (7): 2991-3006. doi:10.1007/s11229018-1853-1 (https://doi.org/10.1007%2Fs11229-018-1853-1). hdl:20.500.11820/3427c028f2cb-4216-a199-9679a49ce71c (https://hdl.handle.net/20.500.11820%2F3427c028-f2cb-42 16-a199-9679a49ce71c). ISSN 1573-0964 (https://search.worldcat.org/issn/1573-0964). S2CID 49729020 (https://api.semanticscholar.org/CorpusID:49729020).\n - Pietroski, Paul (2021). \"Logical Form: 1. Patterns of Reason\" (https://plato.stanford.edu/entri es/logical-form/#pat). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20211002190116/https://plato.sta nford.edu/entries/logical-form/#pat) from the original on 2 October 2021. Retrieved 4 December 2021.\n - Planty-Bonjour, Guy (2012). The Categories of Dialectical Materialism: Contemporary Soviet Ontology . Springer Science & Business Media. p. 62. ISBN 978-94-010-3517-0.\n - Possin, Kevin (2016). \"Conductive Arguments: Why is This Still a Thing?\" (https://philpapers. org/rec/POSCAW-4). Informal Logic . 36 (4): 563-593. doi:10.22329/il.v36i4.4527 (https://do i.org/10.22329%2Fil.v36i4.4527). Archived (https://web.archive.org/web/20220108171723/ht tps://philpapers.org/rec/POSCAW-4) from the original on 8 January 2022. Retrieved 8 January 2022.\n - Priest, Graham; Tanaka, Koji; Weber, Zach (2018). \"Paraconsistent Logic\" (https://plato.stan ford.edu/entries/logic-paraconsistent/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Retrieved 14 December 2021.\n - Pépin, Jean (2004). \"Logos\". 
Encyclopedia of Religion (https://www.encyclopedia.com/philo sophy-and-religion/philosophy/philosophy-terms-and-concepts/logos). ISBN 978-0-02865733-2. Archived (https://web.archive.org/web/20211229134626/https://www.encyclopedi a.com/philosophy-and-religion/philosophy/philosophy-terms-and-concepts/logos) from the original on 29 December 2021. Retrieved 29 December 2021.\n - Putnam, H. (1969). \"Is Logic Empirical?\". Boston Studies in the Philosophy of Science . Vol. 5. pp. 216-241. doi:10.1007/978-94-010-3381-7\\_5 (https://doi.org/10.1007%2F978-94010-3381-7\\_5). ISBN 978-94-010-3383-1.\n - Quine, Willard Van Orman (1981). Mathematical Logic . Harvard University Press. p. 1. ISBN 978-0-674-55451-1.", - "page_start": 33, - "page_end": 33, - "source_file": "wikipedia1.pdf" - }, - { - "text": "institutional requirements. The participants provided their written informed consent to participate in this study.\n\n## Author contributions\n\nSD: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Project administration, Resources, Visualization, Writing -original draft, Writing -review & editing. EA: Conceptualization, Formal Analysis, Methodology, Supervision, Writing -review & editing. BN: Conceptualization, Formal Analysis, Funding acquisition, Methodology, Project administration, Resources, Supervision, Writing -review & editing.\n\n## Funding\n\nThe author(s) declare that /uniFB01 nancial support was received for the research, authorship, and/or publication of this article.\n\nThe development of the CoreDISTparticipation and the RCT is funded by the Northern Norway Health Authority (Helse Nord RHF). This interview study was funded by Nord University (PhD salary).\n\n## References\n\n- 1. Walton C, King R, Rechtman L, Kaye W, Leray E, Marrie RA, et al. Rising prevalence of multiple sclerosis worldwide: insights from the Atlas of MS, third edition. Mult Scler . (2020) 26(14):1816 -21. doi: 10.1177/1352458520970841\n- 2. 
Casey B, Coote S, Galvin R, Donnelly A. Objective physical activity levels in people with multiple sclerosis: meta-analysis. Scand J Med Sci Sports . (2018) 28 (9):1960 -9. doi: 10.1111/sms.13214\n- 3. Kinnett-Hopkins D, Adamson B, Rougeau K, Motl RW. People with MS are less physically active than healthy controls but as active as those with other chronic diseases: an updated meta-analysis. Mult Scler Relat Disord . (2017) 13:38 -43. doi: 10.1016/j.msard.2017.01.016\n- 4. Hoang PD, Lord S, Gandevia S, Menant J. Exercise and sports science Australia (ESSA) position statement on exercise for people with mild to moderate multiple sclerosis. J Sci Med Sport . (2022) 25(2):146 -54. doi: 10.1016/j.jsams.2021.08.015\n- 5. Dalgas U, Langeskov-Christensen M, Stenager E, Riemenschneider M, Hvid LG. Exercise as medicine in multiple sclerosis -time for a paradigm shift: preventive, symptomatic, and disease-modifying aspects and perspectives. Curr Neurol Neurosci Rep . (2019) 19(11):1 -12. doi: 10.1007/s11910-019-1002-3\n- 6. Riemenschneider M, Hvid LG, Ringgaard S, Nygaard MKE, Eskildsen SF, Gaemelke T, et al. Investigating the potential disease-modifying and neuroprotective ef /uniFB01 cacy of exercise therapy early in the disease course of multiple sclerosis: the early multiple sclerosis exercise study (EMSES). Mult Scler . (2022) 28(10):1620 -9. doi: 10. 1177/13524585221079200\n- 7. Kalb R, Brown TR, Coote S, Costello K, Dalgas U, Garmon E, et al. Exercise and lifestyle physical activity recommendations for people with multiple sclerosis throughout the disease course. Mult Scler . (2020) 26(12):1459 -69. doi: 10.1177/ 1352458520915629\n- 8. Moreno-Navarro P, Manca A, Martinez G, Ventura L, Barbado D, Vera-García FJ, et al. Test-retest reliability and known-groups validity of trunk muscle tests in people with multiple sclerosis: a cross-sectional, case-control study. Phys Ther . (2021) 101 (5):1 -9. doi: 10.1093/ptj/ptzab049\n- 9. 
Raats J, Arntzen EC, Lamers I, Feys P, Normann B. What is the distribution of trunk impairments and its relationship with disability level in individuals with multiple sclerosis? Mul Scler Relat Disord . (2021) 57:103325. doi: 10.1016/j.msard. 2021.103325\n- 10. Normann B, Arntzen EC. What are the relationships between trunk control, balance and walking in individuals with multiple sclerosis with minor to moderate disability? Eur J Physiother . (2021) 23(6):377 -83. doi: 10.1080/21679169.2020.1772870\n\n## Acknowledgments\n\nThe authors would like to thank the participants in this study and the user representatives from Nordland MS Association for their valuable contributions. The authors also acknowledge philosopher of the mind and cognitive sciences Hanne De Jaegher for the valuable comments on the interpretations and discussions of the results.\n\n## Con /uniFB02 ict of interest", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed13.pdf" - }, - { - "text": "Neither the European Agency for Safety and Health at Work nor any person acting on behalf of the agency is responsible for the use that might be made of the following information.\n\nLuxembourg: Publications Office of the European Union, 2023\n\nPrint\n\nISBN 978-92-9479-934-0\n\ndoi: 10.2802/26873\n\nPDF\n\nISBN 978-92-9479-935-7\n\ndoi: 10.2802/56459\n\n- © European Agency for Safety and Health at Work, 2023\n\nReproduction is authorised provided the source is acknowledged.\n\nFor any use or reproduction of photos or other material that is not under the copyright of the European Agency for Safety and Health at Work, permission must be sought directly from the copyright holders.\n\nThe photographs used in this publication illustrate a range of work activities. 
They do not necessarily show good practices or compliance with legislative requirements.\n\nFor one-click access to websites and references please consult the online version of this publication https://osha.europa.eu/en/publications/occupational-safety-and-health-europe-state-and-trends-2023", - "page_start": 1, - "page_end": 1, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "\n\nSee discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/351037551\n\n## A Practical Guide to Building OWL Ontologies Using Protégé 5.5 and Plugins\n\nPreprint · April 2021\n\nCITATIONS\n\n0\n\n## 1 author:\n\n\n\nREADS 36,030", - "page_start": 0, - "page_end": 0, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - } - ] - }, - { - "references": { - "source_file": "infographic3.pdf", - "query": "For what reason a researcher's name is not a good tools to track back its works and affiliations ?", - "target_page": 1, - "target_passage": "Many people have the same name Names may change through marriage or other circumstances Individuals use different alphabets, abbreviations, or naming conventions People use different versions of their name during their career", - "chunk_present": { - "presence": true, - "index": 5 - } - }, - "top_chunk": [ - { - "text": "## PRACTICAL AND PROFESSIONAL\n\nSomething of a paradox, too; highly competitive but approachable; stylish but never a slave to fashion. I have a true talent for leadership. I'm stable, steady, reliable, and efficient. At the same time, I'm good-looking, good-natured, and good-humored. Seek successful business person driven by values, with a 'whatever it takes' attitude - just like me, practical and professional.\n\nTHE\n\nHON COMPANY\n\n", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "- · The quality of statistics and surveys fades the more irregular are the working conditions being studied. 
Which research methods are adequate for a clearer and more reliable evidence base on these working conditions? It might require research methods different from those used today, for example, more investigative case studies; it might also be helpful to evaluate the existing national working conditions surveys or statistics under this aspect.\n - · Fading employer-employee relations. There are special research efforts necessary to study the application of OSH regulations of work with weak or no employer-employee relations, for example, for the self-employed and new forms of employment.\n - · Surveys usually suffer a participation bias, for example, for the migrant workforce. The low participation rate of migrants can contribute to a particular underestimation regarding their often unfavourable working conditions.\n - · Workers in manual occupations report better health than administrative workers but less expectations to do the job until being 60 years old . What are the reasons behind this? Is it the healthy worker effect, strong occupation-related differences regarding the perception of health and the expression of health problems? 502,503\n - · High work intensity is a major cause for low wellbeing and high psychosocial risks. Survey data suggest that work intensification stopped after 2005 . What might be the reasons? Are the current indicators not specific enough to measure developments of work intensity? Has since then the major burden of intensification been put on other types of workers, for example, subcontracted or self-employed, temporary and seasonal workers, or on workers in the global supply chain?\n - · How much evidence is there that dangerous work has been increasingly contracted out to small and medium-size enterprises and the self-employed ? 
Are there sufficiently detailed data on whether a larger share of service and client-related work at atypical times or work requiring long working hours has been taken over by self-employed or subcontractors?\n - · The influence of enterprise size is often difficult to explain. In several aspects, the SMEs perform better, and in other important aspects worse. What might be the reason for this?\n - · How is it possible to overcome the 'prevention gap' that in general exists between mobile and stationary workplaces? Can the solutions be technical or must there be organisational and legal measures, for example, a limitation of the prolonged use of ergonomically inadequate equipment like mobile phones?", - "page_start": 139, - "page_end": 139, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## Safer and healthier technologies and organisation\n\nTo support the practical implementation of preventive safety and health measures , numerous actors (e.g. organisations of OSH professionals and practitioners, and standardisation institutes such as the European Committee for Standardisation and the International Organisation for Standardisation) issued safety and health guidance or standards, or developed new and advanced OSH management systems, the engineering sciences worked on better technical preventive technologies, on measuring and monitoring technologies, the medical sciences introduced better medical diagnosis and treatment of work-related diseases, and the social sciences contributed with better knowledge on the legal and economic determinants of OSH, or analysed the characteristics of awareness raising, knowledge development and healthy work organisation.\n\nIt is obvious that better technical and organisational prevention at work contributed to more safety and the evident strong reduction in accidents. Prominent fields and examples of such improvements are: technically safer design of moving vehicles (e.g. 
for fork lifts or heavy trucks and machines, light and noise warning signals for moving vehicles); safer design of machines like automatic shutdowns or disconnections, two-hand operating of machines (e.g. for pressing and punching), safer cranes including better technologies for communication between co-workers, coverage of moving parts, safer company cars (e.g. safety belts and airbags), safer tools (e.g. for drilling or cutting); improved personal protective equipment like air-supplied breathing apparatus, steel mesh gloves for meat workers, trousers for forest workers that resist a chainsaw; minimum safety requirements for buildings (e.g. forms and size of stairs and handrails, fire exits and fire alarms, safer ladders and scaffolds), emergency equipment like eye wash and emergency showers; better monitoring of acute hazards (e.g. in sewage water systems), exhaust and ventilation technologies to avoid fumes, dusts, chemicals or contact with hazardous biological agents; strong safety obligations for work in confined spaces, or for work at height and work in trenches; introduction of explosion zones and of non-sparking tools, a comprehensive system of warning signals, warning signals for slippery floors and unsafe grounds, better warning systems and equipment in particularly dangerous work environments like road maintenance, combined with better organisational measures; quality systems that promote continuous repair and maintenance of tools; regular instructions by safety representatives and safety coordinators, and guarantee of minimum safety standards of machines and products by European standards like CE ('European Conformity').\n\n## Major technological developments\n\nThe widespread introduction of new or advanced technologies - automation, digitalisation/ICT, green technologies, new material technologies and so on - results in substantial changes in work organisation and work processes, and replacement of (traditional) materials (screws by glues, metal and 
wood by plastics, nanomaterials). For OSH regulators and practitioners, it is a constant challenge to assess these changes regarding their impact on risks for health and safety and to develop adequate risk prevention and mitigation measures.", - "page_start": 13, - "page_end": 13, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Download the following tools:", - "page_start": 778, - "page_end": 778, - "source_file": "sg247938.pdf" - }, - { - "text": "way by OSH legislation or OSH practice. The principle of employer responsibility for working conditions of workers is undermined or at least blurred in such situations.\n\nFuture solutions could focus on several aspects - a new definition of 'work' or of 'employment', stronger individual responsibility, or extended state interventions to guarantee OSH also in such working and employment conditions. There are some examples of such solutions but to date most of them focus on better information, that is, stronger individual responsibility.\n\nUndeclared and illegal employment is scarcely visible in the statistics. Due to the difficult conditions for research, the overall OSH situation in these types of work is widely unknown; in case study-based investigative studies, the working conditions - including safety and health - for this group are mostly regarded as worse compared to workers with a regular work contract. It seems to be necessary to consider different research and action initiatives for this type of work, also in collaboration with other state supervising authorities.\n\nThe health data clearly show an ever-growing share of work tasks that go along with or even require physical inactivity . Inactive work is often characterised by permanent sitting combined with high requirements for visual and mental focusing during work, for example, towards digital equipment or to traffic situations. 
Serious indirect health consequences of such inactivity can be seen in the strong increase in certain widespread diseases or disease-supporting factors, like obesity.\n\nEven 15 years after the enlargement of the EU in 2004, significant differences between Member States can still be observed regarding several working conditions. The data demonstrate that the worst status concerning physical risks, wellbeing, and expectations to do the job until the age of 60 - is almost always present in eastern EU Member States, followed by southern Member States, all compared to the status in central, western and northern Member States . For psychosocial risks, it is just the other way around, these are more often reported in central, western and northern Member States.\n\nInternational organisations complain about an unfair divide of OSH risks in globalised supply chains , be it in mining, metallurgy, textile production, disposal of hazardous waste or other sectors. The ILO decided in June 2022 to make OSH one of the Fundamental Principles and Rights at Work. In this context, 10 ILO conventions and instruments are considered now as fundamental, including two OSH conventions: the Occupational Safety and Health Convention, of 1981 (No. 155) and the Promotional Framework for Occupational Safety and Health Convention, of 2006 (No. 187). Ethical, fairness and justice considerations have led to more activities on decent, safe and healthy work in developing countries and a fair share of risks at work in global supply chains. 
These are important initiatives, but until now they only slightly changed the overall situation when looking at the global scale of the issue.", - "page_start": 18, - "page_end": 18, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## The Value of Using Unique Identifiers for Researchers\n\n## What's in a Name?\n\nMost names are not unique\n\n\n\nMany people have the same name\n\nPeople use di/fferent versions of their name during their career\n\nIndividuals use di/fferent alphabets, abbreviations, or naming conventions\n\n## Researchers are mobile!\n\n\n\nFor example,\n\n30% OF THE SCIENTISTS WHO GOT THEIR PhD IN THE UNITED KINGDOM NOW LIVE ELSEWHERE\n\nSource: Science Magazine\n\nResearch institutions and organizations therefore find it hard to\n\n\n\n- Benchmark their organization against others\n- Identify, track, and report on researchers' a/ffiliations and contributions (publications, peer reviews, grants, and more)\n\n## Institutions Face a Rising Tide of Research\n\n\n\n\n\nInstitutions must increasingly recognize and demonstrate the impact of all types of research contributions\n\n\n\n## Tackling Information Overload\n\nORCID is a non-profit organization, which provides a fully open and interoperable identifier to reliably connect researchers with their research contributions. 
The ORCID iD is a 16-digit identifier that researchers can register for and use for free.\n\nConnects individuals and their professional contributions across disciplines, organizations, and time\n\nEnables recognition of all types of research contributions and innovation\n\n\n\nHelps research institutions, funders, publishers, and other organizations better track and support research work\n\n## How ORCID Works\n\n\n\n- It's a registry of unique persistent identifiers for researchers\n- It's a hub that connects researchers with their professional activities and contributions\n- It's a global community that enables researchers to share their data with other individuals, organizations, and systems\n\n## Why Connect with ORCID?\n\nHundreds of members and systems use ORCID globally\n\n## 5.5 MILLION+\n\nlive ORCID iDs registered since its 2012 launch\n\n\n\nSource: Orcid.org/statistics as of November 2018\n\n\n\nNames may\n\nchange through\n\nmarriage or other\n\ncircumstances\n\n\n\n## Evidence of Institutional Value\n\nExamples of time/sta/ff savings achieved by implementing ORCID from around the world\n\n\n\nUK: 0.2 - 0.4 FTEs per institution 1 Portugal: 100,000 researcher hours per year 2 Australia: 15-30 minutes per grant application 3\n\n1. Jisc/ARMA Institutional ORCID Implementation and Cost Benefit Analysis Report 2015 2. Cátia Laranjeira, FCT - Fundação para a Ciência e a Tecnologia 2017 3. 
Australian Research Council governance meeting, September 2018\n\n\"Having ORCID iDs for most of our researchers has helped in providing authoritative accounts in our various databases, ensuring accuracy in reviewer identities, and helping editors find reviewers and check expertise.\"\n\n-Brooks Hanson, Executive Vice President, Science, American Geophysical Union\n\n## How Organizations and Researchers Benefit\n\n## INSTITUTIONS\n\n- Save time and reduce errors with automated information-sharing and cross-system interoperability\n- Manage your organization name and your researchers' connections with it\n- Maintain links with your researchers - past, present, and future\n\n## RESEARCHERS\n\n- Improve recognition and discoverability of their research\n- Spend more time doing research, less time managing it\n- Control and manage a trusted and easily shareable record of their research activities and a/ffiliations - for free\n\n\n\n\n\n\n\n## Three Ways to Get Involved\n\n- 1. Encourage and support your researchers in getting, sharing, and using their ORCID iD\n- 2. Invest in integrating ORCID into your systems\n- 3. Connect data to and from your researchers' ORCID records to support information use and reuse across organizations\n\nSponsored by ORCID\n\nTo learn more go to https://orcid.org\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "infographic3.pdf" - }, - { - "text": "Foresight studies (e.g. by EU-OSHA) have shown that such technological change can help improve working conditions, for example, by taking over heavy, dangerous or routine work (automation, robotisation, exoskeletons), or by better communication and remote control via ICT tools. At the same time, they can also pose new risks, creating rigid work processes without much decision latitude, along with technical options for extreme surveillance and control (e.g. 
by constant geolocation), or pose new safety risks like working at height (renewable energies) or by exposure to materials with widely unknown health effects (e.g. nano).\n\nEU-OSHA has published several foresight studies to emphasise possible safety and health concerns. Examples are the reports and fact sheets about new safety risks in green jobs (green buildings, solar energy, wind energy) published more than 10 years ago. Since 2015, EU-OSHA has been publishing reviews and discussion papers on emerging risks and foresight topics. This work covers topics like robotics, performance-enhancing drugs, 3D printing, monitoring technologies, developments in the eretail sector, artificial intelligence, platform work, Long COVID, exoskeletons and so on. In 2018, the Agency published a foresight report on new and emerging OSH risks associated with digitalisation.\n\nA well-known example of such changes in work processes causing new OSH challenges is the growing number of workers outside the premises of the employer , that is, at non-stationary or mobile workplaces or at home. This refers to the increasing amount of mobile work in transport, traffic and", - "page_start": 13, - "page_end": 13, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## 4.3 Wellbeing and health status\n\nExisting concepts of wellbeing cover more aspects of work than working conditions or safety and health at workplaces. Eurofound mentions as the most relevant components: income, working time arrangements, possibilities for skills development and career advancement, and the degree of individual control over work . 
243 The United Nations Economic Commission for Europe (UNECE) developed a scheme of quality of employment that covers these aspects: safety and ethics of employment, income benefits and employment, working hours and balancing working and non-working life, security of employment and social protection, social dialogue, skills development and training, workplace relationships and work motivation. 244\n\nThis chapter focuses on the health and safety aspects of wellbeing, although the OSH aspect is often not clearly separable from the above-mentioned aspects, that is, when surveys are intending to identify the level of 'satisfaction at work'. Still, due to its serious impact on all other aspects of working conditions, the consequences of insufficient health are regarded as critical:\n\n'While OHS is only one substantive working condition, like earnings and job insecurity it is arguably a critical one for many workers. In terms of scope and severity, even official data … suggests poor OHS is something most workers will experience at some point and many far more frequently.' 245\n\nA common methodology to collect data on health status and wellbeing is self-reporting and selfassessment of workplace risks, health risks and health problems, absence, job satisfaction and working life perspective from a health point of view. The data are in general collected by EU-wide surveys, for example, by the EWCS, the Flash Eurobarometer, ESENER or the LFS Ad hoc modules. The description of working conditions in the OSH Barometer starts with responses regarding the 'Overall opinion' on working conditions. This allows insight into the subjective assessment of health risks at work and wellbeing.\n\n## 4.3.1 Satisfaction at work\n\nIn the EWCS of 2015, at EU level 86% of the workers respond that they are 'satisfied' (60%) or 'very satisfied' (26%) with their work. Country differences exist but are not striking. 
The EU Member States with the highest satisfaction rates are Austria, the Netherlands, Finland, Czechia, Denmark, Belgium and Estonia; they range between 93% and 90%. The six countries with the lowest sum of satisfied and very satisfied responses are Greece, Croatia, France, Spain, Italy and Latvia; their values range between 77% and 82%.\n\nFigure 28: Satisfaction with working conditions in the main paid job - EWCS 2015 246European Agency for Safety and Health at Work - EU-OSHA\n\n", - "page_start": 88, - "page_end": 88, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "When we think about our careers, and what we need to do to establish them, we often forget about the need to develop an essential skill: communication. If you start reading through the job descriptions in a industry, you will find that the vast majority of jobs require one or more of the following:\n\n - · Effective communication skills\n - · Interpersonal skills\n - · Ability to work in a team\n - · Negotiation skills\n - · Conflict resolution skills\n - · Report writing skills\n\nWhat all of these skills have in common is that they involve the use of language to achieve a particular purpose. And for this reason, having good language skills is essential in any working environment.\n\n## In a career context, good language skills can also:\n\n - · Affect your credibility. Poor grammar indicates to a prospective employer that you are sloppy, while flawless grammar indicates that you pay attention to detail.\n - · Improve your relationships with your co- workers. 
If you are able to express yourself clearly, you can eliminate the confusion and misunderstanding that often leads to conflict.\n - · Increase your chances of being promoted.\n - · Help you to create a good impression.\n - · Improve your ability to persuade others (which is a valuable skill in the working world).", - "page_start": 4, - "page_end": 4, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "\n\n## CHAPTER 11:\n\n## LANGUAGE SKILLS AT WORK HOW TO WRITE A RESIGNATION LETTER\n\n\n\nNo matter what the reason, resigning from your job is likely to be an uncomfortable experience.\n\nIf you are leaving for personal reasons (such as moving away, starting a family, or retiring), you may feel sad about leaving. But if you are leaving for a better opportunity, or you've simply had enough of your current job, you may be glad to be moving on.\n\nEither way, it's always going to be in your best interests to leave on a positive note, and to resign in a professional manner.", - "page_start": 47, - "page_end": 47, - "source_file": "basic-english-language-skills.PDF" - } - ] - }, - { - "references": { - "source_file": "infographic3.pdf", - "query": "What is an ORCID iD ?", - "target_page": 1, - "target_passage": "ORCID iD is a 16-digit identifier that researchers can register for and use for free.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## The Value of Using Unique Identifiers for Researchers\n\n## What's in a Name?\n\nMost names are not unique\n\n\n\nMany people have the same name\n\nPeople use di/fferent versions of their name during their career\n\nIndividuals use di/fferent alphabets, abbreviations, or naming conventions\n\n## Researchers are mobile!\n\n\n\nFor example,\n\n30% OF THE SCIENTISTS WHO GOT THEIR PhD IN THE UNITED KINGDOM NOW LIVE ELSEWHERE\n\nSource: Science Magazine\n\nResearch institutions and organizations therefore find it hard to\n\n\n\n- Benchmark their organization against 
others\n- Identify, track, and report on researchers' a/ffiliations and contributions (publications, peer reviews, grants, and more)\n\n## Institutions Face a Rising Tide of Research\n\n\n\n\n\nInstitutions must increasingly recognize and demonstrate the impact of all types of research contributions\n\n\n\n## Tackling Information Overload\n\nORCID is a non-profit organization, which provides a fully open and interoperable identifier to reliably connect researchers with their research contributions. The ORCID iD is a 16-digit identifier that researchers can register for and use for free.\n\nConnects individuals and their professional contributions across disciplines, organizations, and time\n\nEnables recognition of all types of research contributions and innovation\n\n\n\nHelps research institutions, funders, publishers, and other organizations better track and support research work\n\n## How ORCID Works\n\n\n\n- It's a registry of unique persistent identifiers for researchers\n- It's a hub that connects researchers with their professional activities and contributions\n- It's a global community that enables researchers to share their data with other individuals, organizations, and systems\n\n## Why Connect with ORCID?\n\nHundreds of members and systems use ORCID globally\n\n## 5.5 MILLION+\n\nlive ORCID iDs registered since its 2012 launch\n\n\n\nSource: Orcid.org/statistics as of November 2018\n\n\n\nNames may\n\nchange through\n\nmarriage or other\n\ncircumstances\n\n\n\n## Evidence of Institutional Value\n\nExamples of time/sta/ff savings achieved by implementing ORCID from around the world\n\n\n\nUK: 0.2 - 0.4 FTEs per institution 1 Portugal: 100,000 researcher hours per year 2 Australia: 15-30 minutes per grant application 3\n\n1. Jisc/ARMA Institutional ORCID Implementation and Cost Benefit Analysis Report 2015 2. Cátia Laranjeira, FCT - Fundação para a Ciência e a Tecnologia 2017 3. 
Australian Research Council governance meeting, September 2018\n\n\"Having ORCID iDs for most of our researchers has helped in providing authoritative accounts in our various databases, ensuring accuracy in reviewer identities, and helping editors find reviewers and check expertise.\"\n\n-Brooks Hanson, Executive Vice President, Science, American Geophysical Union\n\n## How Organizations and Researchers Benefit\n\n## INSTITUTIONS\n\n- Save time and reduce errors with automated information-sharing and cross-system interoperability\n- Manage your organization name and your researchers' connections with it\n- Maintain links with your researchers - past, present, and future\n\n## RESEARCHERS\n\n- Improve recognition and discoverability of their research\n- Spend more time doing research, less time managing it\n- Control and manage a trusted and easily shareable record of their research activities and a/ffiliations - for free\n\n\n\n\n\n\n\n## Three Ways to Get Involved\n\n- 1. Encourage and support your researchers in getting, sharing, and using their ORCID iD\n- 2. Invest in integrating ORCID into your systems\n- 3. Connect data to and from your researchers' ORCID records to support information use and reuse across organizations\n\nSponsored by ORCID\n\nTo learn more go to https://orcid.org\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "infographic3.pdf" - }, - { - "text": "Figure 14-6 Adding a recipient list\n\n\n\n## 14.2.3 Adding a report ID\n\nThe next step is to define the reports to ODF. The report ID identifies the application group and application to which the report belongs. 
Figure 14-7 shows the window where you add the report ID.\n\nFigure 14-7 Adding a report ID\n\n\n\nTo create a report ID, specify the identifier and then choose the application group and application from the drop-down selection.", - "page_start": 345, - "page_end": 345, - "source_file": "sg246915.pdf" - }, - { - "text": "## 14.1.1 What documents are needed\n\nIn our example, we identified our documents as the customer statements. How do you identify the customer report that you need from the hundreds of thousands of documents that are stored in Content Manager OnDemand? Certain customers might receive multiple monthly statements.\n\nIn general, you identify the documents by creating an SQL query that uses index fields and values that uniquely identify the documents that you want to retrieve when they are loaded. You can then define the distribution to include multiple report bundles with different SQL queries for each bundle. If the SQL must retrieve the document that is the same except for a value that identifies the recipient, a single distribution can be used with a recipient list. In this case, the SQL specifies a wildcard value. When processing, ODF fills in the recipient ID in the SQL statement. For example, a recipient list contains recipients 100001, 100002, and 100003 and an SQL statement of ' Where branch\\_id = '$ODF\\_RECIPIENT ''. When this recipient list is processed, ODF creates a distribution for recipient 100001 with all reports where branch\\_id = '100001', recipient 100002 will receive a distribution that contains all reports where branch\\_id = '100002', and so on.\n\n## 14.1.2 Who receives the documents\n\nIn our example, each customer needs a statement copy every month. To identify the customers to Content Manager OnDemand, an ODF recipient must be created for each customer. Depending on how the documents are delivered, a destination must be set up. 
For example, if a set of documents will be delivered to a recipient by using email, an email address must be specified in the recipient definition.\n\n## 14.1.3 When the documents are retrieved and delivered\n\nODF operates throughout the 24-hour day. You can schedule your distributions to be processed at a specific time of day or processed as they are loaded. To specify when the distribution is delivered, choose the method, which is either Loaded, All Ready, Time of Day, Time of Print, or external.", - "page_start": 341, - "page_end": 341, - "source_file": "sg246915.pdf" - }, - { - "text": "- 2. To map a volume, select it and click Next to map it to the host. The volume is assigned the next available SCSI ID if you leave System Assign selected. However, by selecting Self Assign , you can manually set SCSI IDs, as shown in Figure 8-30.\n\nFigure 8-30 Modify Host Volume Mappings: Assign SCSI ID\n\n\n\nIf you select a SCSI ID that is in use for the host, you cannot proceed. As shown in Figure 8-29 on page 347, we selected SCSI ID 0. However, you can see in the right column SCSI ID 0 is allocated. By changing to SCSI ID 1, we can click Next .", - "page_start": 369, - "page_end": 369, - "source_file": "sg247938.pdf" - }, - { - "text": "Database owner\n\n - -odInstance < instance >\n - -odUser < user >\n\nOnDemand user ID", - "page_start": 367, - "page_end": 367, - "source_file": "sg246915.pdf" - }, - { - "text": "- AI & ML in Fusion (https://suli.pppl.gov/2023/course/Rea-PPPL-SULI2023.pdf)\n - AI & ML in Fusion, video lecture (https://drive.google.com/file/d/1npCTrJ8XJn20ZGDA\\_DfMpAN uQZFMzKPh/view?usp=drive\\_link) Archived (https://web.archive.org/web/20230702164332/ https://drive.google.com/file/d/1npCTrJ8XJn20ZGDA\\_DfMpANuQZFMzKPh/view?usp=drive \\_link) 2 July 2023 at the Wayback Machine\n - Alter, Alexandra; Harris, Elizabeth A. 
(20 September 2023), \"Franzen, Grisham and Other Prominent Authors Sue OpenAI\" (https://www.nytimes.com/2023/09/20/books/authors-open ai-lawsuit-chatgpt-copyright.html?campaign\\_id=2&emc=edit\\_th\\_20230921&instance\\_id=103 259&nl=todaysheadlines®i\\_id=62816440&segment\\_id=145288&user\\_id=ad24f3545dae 0ec44284a38bb4a88f1d), The New York Times , archived (https://web.archive.org/web/2024 0914155020/https://www.nytimes.com/2023/09/20/books/authors-openai-lawsuit-chatgpt-co pyright.html?campaign\\_id=2&emc=edit\\_th\\_20230921&instance\\_id=103259&nl=todaysheadl ines®i\\_id=62816440&segment\\_id=145288&user\\_id=ad24f3545dae0ec44284a38bb4a88 f1d) from the original on 14 September 2024, retrieved 5 October 2024\n - Altman, Sam; Brockman, Greg; Sutskever, Ilya (22 May 2023). \"Governance of Superintelligence\" (https://openai.com/blog/governance-of-superintelligence). openai.com . Archived (https://web.archive.org/web/20230527061619/https://openai.com/blog/governanc e-of-superintelligence) from the original on 27 May 2023. Retrieved 27 May 2023.\n - Anderson, Susan Leigh (2008). \"Asimov's \"three laws of robotics\" and machine metaethics\". AI & Society . 22 (4): 477-493. doi:10.1007/s00146-007-0094-5 (https://doi.org/10.1007%2Fs0 0146-007-0094-5). S2CID 1809459 (https://api.semanticscholar.org/CorpusID:1809459).\n - Anderson, Michael; Anderson, Susan Leigh (2011). Machine Ethics . Cambridge University Press.\n - Arntz, Melanie; Gregory, Terry; Zierahn, Ulrich (2016), \"The risk of automation for jobs in OECD countries: A comparative analysis\", OECD Social, Employment, and Migration Working Papers 189\n - Asada, M.; Hosoda, K.; Kuniyoshi, Y.; Ishiguro, H.; Inui, T.; Yoshikawa, Y.; Ogino, M.; Yoshida, C. (2009). \"Cognitive developmental robotics: a survey\". IEEE Transactions on Autonomous Mental Development . 1 (1): 12-34. doi:10.1109/tamd.2009.2021702 (https://doi.org/10.110 9%2Ftamd.2009.2021702). 
S2CID 10168773 (https://api.semanticscholar.org/CorpusID:101 68773).\n - \"Ask the AI experts: What's driving today's progress in AI?\" (https://www.mckinsey.com/business -functions/mckinsey-analytics/our-insights/ask-the-ai-experts-whats-driving-todays-progressin-ai). McKinsey & Company . Archived (https://web.archive.org/web/20180413190018/http s://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/ask-the-ai-expert s-whats-driving-todays-progress-in-ai) from the original on 13 April 2018. Retrieved 13 April 2018.\n - Barfield, Woodrow; Pagallo, Ugo (2018). Research handbook on the law of artificial intelligence . Cheltenham, UK: Edward Elgar Publishing. ISBN 978-1-7864-3904-8. OCLC 1039480085 (https://search.worldcat.org/oclc/1039480085).\n - Beal, J.; Winston, Patrick (2009), \"The New Frontier of Human-Level Artificial Intelligence\", IEEE Intelligent Systems , vol. 24, pp. 21-24, doi:10.1109/MIS.2009.75 (https://doi.org/10.11 09%2FMIS.2009.75), hdl:1721.1/52357 (https://hdl.handle.net/1721.1%2F52357), S2CID 32437713 (https://api.semanticscholar.org/CorpusID:32437713)", - "page_start": 52, - "page_end": 52, - "source_file": "wikipedia3.pdf" - }, - { - "text": "This chapter describes ODF V9.5. 
For any new installations (on z/OS or AIX) before version 9.5 of Content Manager OnDemand, we suggest that you install ODF.\n\nFigure 14-1 shows the evolution and merger of ODF 9.5 from its predecessors ODF9.0 and Report Distribution System (RDF) 9.0.\n\nFigure 14-1 Evolution of ODF\n\n\n\nWhen you load documents into Content Manager OnDemand, you might need to print these documents or send them to various people in your organization.\n\nContent Manager OnDemand automates the process of sending the documents that are loaded into Content Manager OnDemand to print (or the JES spool), a file (or a z/OS dataset), to a recipient as an email attachment, or to a recipient as an email notification.", - "page_start": 339, - "page_end": 339, - "source_file": "sg246915.pdf" - }, - { - "text": "```\narn:partition:service:region:account-id:resource-type/resource-id arn:partition:service:region:account-id:resource-type:resource-id\n```\n\n - · arn means this string is an ARN\n - · partition is one of the three AWS partitions: AWS regions, AWS China regions, or AWS GovCloud (US) regions\n - · service is the specific AWS service, for example: EC2\n - · region is the AWS region, for example: us-east-1 (North Virginia)\n - · account-id is the AWS account ID\n - · resource-id is the unique resource ID. 
(Could also be in the form resource-type/ resource-id )\n\n## Related resource(s):\n\n - · IAM identifiers provides an exhaustive list in the docs for IAM ARNs\n\n## Conditions\n\nConditions are specific rules for which the access is valid.\n\n## Other Elements\n\n - · All IAM policies have an Effect field which is set to either Allow or Deny .\n - · Version field defines which IAM service API version to use when evaluating the policy.\n - · Statement field consists of one or many JSON objects that contain the specific Action, Effect, Resource, and Condition fields described previously\n - · Sid (statement ID) is an optional identifier for a policy statement; some services like Amazon Simple Queue Service and Amazon Simple Notification Service might require this element and have uniqueness requirements for it\n\n## Policies\n\nWhen you set permissions, you attach a JSON policy to a principal. In the following example, an AWS managed policy named AWSLambdaInvocation-DynamoDB will be attached to a role that is related to a Lambda function:", - "page_start": 44, - "page_end": 44, - "source_file": "serverless-core.pdf" - }, - { - "text": "```\narn:partition:service:region:account-id:resource-id arn:partition:service:region:account-id:resource-type/resource-id arn:partition:service:region:account-id:resource-type:resource-id\n```\n\n - · arn: literally, the string \"arn\"\n - · partition is one of the three partitions: AWS Regions, AWS China Regions, or AWS GovCloud (US) Regions\n - · service is the specific service such as Amazon EC2 or DynamoDB\n - · region is the AWS region like us-east-1 (North Virginia)\n - · account-id is the AWS account ID\n - · resource-id is the unique resource ID. 
Other forms for resource IDs like resource-type/ resource-id , are used by services like IAM where IAM users have resource-type of user and resource-id a username like MyUsername,\n\nTry to identify the service, region, and resource for the following example ARNs:\n\n```\narn:aws::dynamodb:us-west-2:123456789012:table/myDynamoDBTable arn:aws::lambda:us-east-2:123456789012:function: my-function:1\n```\n\nIf you are interested in learning more, check out a map of Regions and Availability Zones, a view of our data centers, and the complete list of regional service endpoints.\n\n## Security model\n\nSecurity is a top priority for AWS. Before you start building serverless solutions, you need to know how security factors into AWS solutions.\n\nAmazon Web Services has a shared responsibility model:", - "page_start": 18, - "page_end": 18, - "source_file": "serverless-core.pdf" - }, - { - "text": "The idNode member specifies the ID of the node. This member may not have a value of 0 . A value of -1 indicates that child nodes do not use the idNodeParent member to specify this node as their parent. Instead, this node can be a parent only by enclosing child nodes in the EMF. Multiple nodes can have a ID of -1 . If the ID is not -1 , the value is unique across the document.\n\nThe nodetype specifies the type of structure node. This member is equal to one of the values from the MSODOCEXSTRUCTTYPE enumeration type. The following table lists examples of document structure node types.\n\nTable 7. 
Document structure node types\n\n\n\nExpand table", - "page_start": 20, - "page_end": 20, - "source_file": "office-pdf.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2669.pdf", - "query": "What type of instability causes rims in ruptured polystyrene thin films to decay into small drops ?", - "target_page": 3, - "target_passage": " The rims may further decay into lines of small drops due to a Rayleigh-type instability", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "scopic film. We have seen that the KMC model is able to describe the interplay of solute diffusion within the solvent and solvent evaporation/condensation. It also takes the liquid-liquid, liquidparticle and particle-particle interactions into account and therefore allows us to distinguish different regimes of the transverse (fingering) instability of the evaporative dewetting front: a transport regime where the instability is almost completely independent of the interaction strengths and a demixing regime where particles and liquid demix at the receding front thereby increasing its transverse instability.\n\nThe dynamical density functional theory describes the coupled dynamics of the density fields of the liquid and the nanoparticles. In the form described above (i.e. based on the two-dimensional hamiltonian (3)) we obtain a simple theory that allows us to study the time evolution of the evaporating ultrathin film and also to investigate the influence of processes such as surface diffusion by the liquid, which are not incorporated in the KMC model. However, it is straightforward to extend the theory to consider a fully three-dimensional fluid film, in which one can distinguish between short- and long-range interactions of solvent and/or solute with the substrate. We have, however, restricted the examples given here to situations that can also be described using the KMC model. 
A further exploration will be presented elsewhere.\n\nFinally, we have discussed a simple thin film model for the hydrodynamics on the mesoscale. It results from a long-wave approximation and consists of coupled evolution equations for the film thickness profile and the mean particle concentration. It has been used to discuss the self-pinning of receding contact lines that is related to the formation of rings of dried-in particles (coffeestain effect) that frequently occurs when films or drops of solutions or suspensions dewet by the combined effects of convection and evaporation.\n\nOne of the primary goals of researchers in this field, is the search for simple-to-use techniques that allow one to produce hierarchically structured functional layers for a wide range of applications such as, e.g., organic solar cells [98]. This means that the experiments advance very rapidly towards increasingly complex systems. For example, there have been investigations of the influence of the phase behaviour on the drying of droplets of a suspension of hard-sphere colloidal particles and non-adsorbing polymer [99], of the instabilities and the formation of drops in evaporating thin films of binary solutions [100] that may lead to treelike patterns [101], of effects of a secondary phase separation on evaporation-induced pattern formation in polymer films [102], and of the influence of an imposed flow on decomposition and deposition processes in a sliding ridge of evaporating solution of a binary polymer mixture [103] and of the influence of rather", - "page_start": 23, - "page_end": 23, - "source_file": "1001.2669.pdf" - }, - { - "text": "## I. INTRODUCTION\n\nThe patterns formed in dewetting processes have attracted strong interest since Reiter analysed the process quantitatively in the early nineties. 
In these experiments, that proved to be a paradigm in our understanding of dewetting, a uniform thin film of polystyrene (tens of nanometers thick) is deposited on a flat silicon oxide substrate is brought above the glass transition temperature. The film ruptures in several places, forming holes which subsequently grow, competing for space. As a result, a random polygonal network of liquid rims emerges. The rims may further decay into lines of small drops due to a Rayleigh-type instability [1-3]. The related problems of retracting contact lines on partially wetting substrates and the opening of single holes in rather thick films have also been studied [4, 5].\n\nSubsequent work has mainly focused on many different aspects of the dewetting process for simple non-volatile liquids and polymers (for reviews see Refs. [6-8]). All stages of the dewetting of a film are studied: the initial film rupture via nucleation or a surface instability (called spinodal dewetting) [1, 9-13], the growth process of individual holes [14-16], the evolution of the resulting hole pattern [3, 13], and the stability of the individual dewetting fronts [17-19]. We note in passing, that descriptions of dewetting patterns may also be found in historic papers, particularly for the dewetting of a liquid film on a liquid substrate. Tomlinson [20, footnote 18 on p. 40] considered turpentine on water and Marangoni [21, p. 352f] oil on water.\n\nMore recently, interest has turned to the dewetting processes of solutions and suspensions. However, these systems have not yet been investigated in any great depth. Such systems are complicated because their behaviour is determined by the interplay between the various solute (or colloid) and solvent transport processes. Furthermore, the solvents that are used often evaporate, i.e., one has to distinguish between 'normal' convective dewetting and evaporative dewetting. 
A number of experiments have been performed employing (colloidal) solutions of polymers [22-25], macromolecules like collagen and DNA [26-31] and nanoparticles [32-40]. The latter are sometimes referred to as 'nanofluids'. The initial focus of much of the research in the field has been on investigating the structures that are formed which are similar to the ones observed in the 'classical' dewetting of non-volatile liquids. Labyrinthine structures and polygonal networks result from spinodal dewetting and heterogeneous nucleation and growth, respectively. They are 'decorated' with the solute and therefore conserve the transient dewetting pattern as a dried-in structure when all the solvent has evaporated [28, 34]. The picture is, however, not complete. The solute may", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [5] F. Brochard-Wyart and J. Daillant, 'Drying of solids wetted by thin liquid films,' Can. J. Phys. 68 , 1084-1088 (1989).\n - [6] P. Muller-Buschbaum, 'Dewetting and pattern formation in thin polymer films as investigated in real and reciprocal space,' J. Phys.-Condes. Matter 15 , R1549-R1582 (2003).\n - [7] R. Seemann, S. Herminghaus, C. Neto, S. Schlagowski, D. Podzimek, R. Konrad, H. Mantz, and K. Jacobs, 'Dynamics and structure formation in thin polymer melt films,' J. Phys.-Condes. Matter 17 , S267-S290 (2005).\n - [8] U. Thiele, 'Structure formation in thin liquid films,' in S. Kalliadasis and U. Thiele, editors, 'Thin films of Soft Matter,' pages 25-93, Springer, Wien (2007).\n - [9] R. Xie, A. Karim, J. F. Douglas, C. C. Han, and R. A. Weiss, 'Spinodal dewetting of thin polymer films,' Phys. Rev. Lett. 81 , 1251-1254 (1998).\n - [10] R. Seemann, S. Herminghaus, and K. Jacobs, 'Dewetting patterns and molecular forces: A reconciliation,' Phys. Rev. Lett. 86 , 5534-5537 (2001).\n - [11] U. Thiele, M. G. Velarde, and K. Neuffer, 'Dewetting: Film rupture by nucleation in the spinodal regime,' Phys. Rev. 
Lett. 87 , 016104 (2001).\n - [12] M. Bestehorn and K. Neuffer, 'Surface patterns of laterally extended thin liquid films in three dimensions,' Phys. Rev. Lett. 87 , 046101 (2001).\n - [13] J. Becker, G. Grun, R. Seemann, H. Mantz, K. Jacobs, K. R. Mecke, and R. Blossey, 'Complex dewetting scenarios captured by thin-film models,' Nat. Mater. 2 , 59-63 (2003).\n - [14] C. Redon, F. Brochard-Wyart, and F. Rondelez, 'Dynamics of dewetting,' Phys. Rev. Lett. 66 , 715718 (1991).\n - [15] R. Seemann, S. Herminghaus, and K. Jacobs, 'Shape of a liquid front upon dewetting,' Phys. Rev. Lett. 87 , 196101 (2001).\n - [16] R. Fetzer, K. Jacobs, A. Munch, B. Wagner, and T. P. Witelski, 'New slip regimes and the shape of dewetting thin liquid films,' Phys. Rev. Lett. 95 , 127801 (2005).\n - [17] F. Brochard-Wyart and C. Redon, 'Dynamics of liquid rim instabilities,' Langmuir 8 , 2324-2329 (1992).\n - [18] G. Reiter and A. Sharma, 'Auto-optimization of dewetting rates by rim instabilities in slipping polymer films,' Phys. Rev. Lett. 87 , 166103 (2001).\n - [19] A. Munch and B. Wagner, 'Contact-line instability of dewetting thin films,' Physica D 209 , 178-190 (2005).", - "page_start": 25, - "page_end": 25, - "source_file": "1001.2669.pdf" - }, - { - "text": "also shift the spinodal and binodal lines as compared to the locations of these lines in the phase diagram for the pure solvent [41]. As a consequence, the solute concentration influences the hole nucleation rate. More importantly, the solute particles may also destabilise the dewetting fronts. As a result, one may find strongly ramified structures in all three systems [23, 25, 40, 42]. A selection of images exhibiting some of the possible structures is displayed in Fig.1.\n\nFor volatile solvents, the contact lines retract even for wetting fluids. It has been found that such evaporatively receding contact lines may deposit very regular line or ring patterns parallel to the moving contact line [24, 43]. 
The deposition of a single ring of colloids from a evaporating drop of colloidal suspension is well known as the 'coffee stain effect' [44]. Detailed investigations reveal the emergence of rich structures including multiple irregular rings, networks, regular droplet patterns, sawtooth patterns, Sierpinski carpets, and - in the case of DNA - liquid crystalline structures [22, 30, 45-49]. The deposition of regularly spaced straight lines orthogonal to the moving contact line has also been reported [50]. Droplet patterns may as well be created employing solvent-induced dewetting of glassy polymer layers below the glass transition temperature [51-53].\n\nNote that the dewetting of pure volatile liquids has also been studied experimentally [54] and theoretically [55-58]. In this case, different contact line instabilities have been observed for evaporating liquid drops [59, 60].\n\nIn the present article we review and preview the experiments and in particular the various modelling approaches for dewetting suspensions of (nano-)particles in volatile partially wetting solvents. After reviewing the basic experimental results in Section II, we discuss in Section III several theoretical approaches. In particular, we present a kinetic Monte Carlo model in Section III A, a dynamic density functional theory in Section III B, and a thin film evolution equation in Section III C. Finally, we conclude in Section IV by discussing advantages and shortcomings of the individual approaches and future challenges to all of them.\n\n## II. EXPERIMENT WITH NANOPARTICLE SOLUTIONS\n\nWe focus on experiments that use monodisperse colloidal suspensions of thiol-passivated gold nanoparticles in toluene [33, 34, 37-40, 61]. The gold core of 2 - 3 nm diameter is coated by a layer of alkyl-thiol molecules. The length of the carbon backbone of the thiol used in the experiments ranges from 6 to 12 carbon atoms ( C 6 to C 12 ) [40]. 
By varying the chain length, one can control", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [81] A. J. Archer and M. Rauscher, 'Dynamical density functional theory for interacting brownian particles: Stochastic or deterministic?' J. Phys. A-Math. Gen. 37 , 9325-9333 (2004).\n - [82] A. J. Archer and R. Evans, 'Dynamical density functional theory and its application to spinodal decomposition,' J. Chem. Phys. 121 , 4246-4254 (2004).\n - [83] P. A. Monson, 'Mean field kinetic theory for a lattice gas model of fluids confined in porous materials,' J. Chem. Phys. 128 , 084701 (2008).\n - [84] P. M. Chaikin and T. C. Lubensky, Principles of condensed matter physics , Cambridge University Press (1997).\n - [85] J. S. Langer, 'An introduction to the kinetics of first-order phase transitions,' in C. Godreche, editor, 'Solids far from Equilibrium,' pages 297-363, Cambridge University Press (1992).\n - [86] M. A. Spaid and G. M. Homsy, 'Stability of Newtonian and viscoelastic dynamic contact lines,' Phys. Fluids 8 , 460-478 (1996).\n - [87] U. Thiele and E. Knobloch, 'Front and back instability of a liquid film on a slightly inclined plate,' Phys. Fluids 15 , 892-907 (2003).\n - [88] M. R. E. Warner, R. V. Craster, and O. K. Matar, 'Surface patterning via evaporation of ultrathin films containing nanoparticles,' J. Colloid Interface Sci. 267 , 92-110 (2003).\n - [89] O. K. Matar, R. V. Craster, and K. Sefiane, 'Dynamic spreading of droplets containing nanoparticles,' Phys. Rev. E 76 , 056315 (2007).\n - [90] J. J. Zhou, B. Dupuy, A. L. Bertozzi, and A. E. Hosoi, 'Theory for shock dynamics in particle-laden thin films,' Phys. Rev. Lett. 94 , 117803 (2005).\n - [91] B. P. Cook, A. L. Bertozzi, and A. E. Hosoi, 'Shock solutions for particle-laden thin films,' SIAM J. Appl. Math. 68 , 760-783 (2008).\n - [92] R. V. Craster, O. K. Matar, and K. 
Sefiane, 'Pinning, retraction, and terracing of evaporating droplets containing nanoparticles,' Langmuir (2009), online available.\n - [93] D. Quemada, 'Rheology of concentrated disperse systems and minimum energy-dissipation principle I. Viscosity-concentration relationship,' Rheol. Acta 16 , 82-94 (1977).\n - [94] D. Quemada and C. Berli, 'Energy of interaction in colloids and its implications in rheological modeling,' Adv. Colloid Interface Sci. 98 , 51-85 (2002).\n - [95] J. J. Stickel and R. L. Powell, 'Fluid mechanics and rheology of dense suspensions,' Annu. Rev. Fluid Mech. 37 , 129-149 (2005).\n - [96] J. K. G. Dhont, An Introduction to Dynamics of Colloids , Elsevier, Amsterdam (1996).", - "page_start": 30, - "page_end": 30, - "source_file": "1001.2669.pdf" - }, - { - "text": "fast evaporation [104, 105]. These complex experimental systems all represent systems of high practical interest that the theories presented here are not (yet) able to describe. Such experiments do, however, provide a strong motivation for further work to extend the theories presented here, as well as to develop new approaches.\n\nLet us finally mention that several topics were entirely excluded from our discussion here. First, we focused on a limited range of descriptions and did, for instance, not mention lattice Boltzmann, molecular dynamics or dissipative particle dynamics approaches that may also be employed to describe fluid suspensions [106-109]. Second, we have only discussed spatially homogeneous substrates. Patterned substrates are widely used in dewetting experiments [38, 110-112]. Theoretical descriptions are well developed for the dewetting of films of pure non-volatile liquids on such substrates [68, 113-119]. However, in the case of volatile liquids on heterogeneous substrates, much less work has been done. A third topic that we did not touch upon are possible continuum thin film approaches to demixing dewetting suspensions. 
We believe it is feasible to extend the diffuse interface theories such as model-H [120] to include the influence of evaporation in dewetting nanoparticle suspensions. For instance, such models have already been adapted to describe demixing free surface films of polymer blends [121-123].\n\n## Acknowledgments\n\nAJA and MJR gratefully acknowledge RCUK and EPSRC, respectively, for financial support. We acknowledge support by the European Union via the FP6 and FP7 Marie Curie schemes [Grants MRTN-CT-2004005728 (PATTERNS) and PITN-GA-2008-214919 (MULTIFLOW)].\n\n- [2] G. Reiter, 'Mobility of polymers in films thinner than their unperturbed size,' Europhys. Lett. 23 , 579-584 (1993).\n- [3] A. Sharma and G. Reiter, 'Instability of thin polymer films on coated substrates: Rupture, dewetting and drop formation,' J. Colloid Interface Sci. 178 , 383-399 (1996).\n- [4] P.-G. de Gennes, 'Wetting: Statics and dynamics,' Rev. Mod. Phys. 57 , 827-863 (1985).", - "page_start": 24, - "page_end": 24, - "source_file": "1001.2669.pdf" - }, - { - "text": "dewetted liquid. The front recedes until all liquid is collected in a central drop. Since no liquid evaporates [ Q nc = 0 in Eq. (1)], the particle concentration does not change during the process.\n\nThe situation changes when allowing for evaporation ( Q nc > 0 ). Now the front may retract by convection and/or evaporation. Evaporation leads to the possibility of a strong increase in the particle concentration at the contact line as evaporation is strongest there. Due to the strong nonlinear dependence of the viscosity on the particle concentration, this may lead to a dramatic decrease of the convective contribution to the front velocity. For moderate evaporation rates, this may result in a (temporary) self-pinning of the front. 
Within the present basic model, the process can (after complete dry-in) result in three different basic deposition patterns: (i) for very fast evaporation rates, all other processes occur over time scales that are much larger. In particular, the effects of convective redistribution of the liquid are neglectable. As a result one finds that a nearly homogeneous film of nanoparticles of thickness h p = φ 0 h 0 is deposited (see Fig. 6(a)). Convection only results in the small heap of material visible at the left hand side of Fig. 6(a). The decrease in h p on the right side of Fig. 6(a) arises due to the diffusion of particles to the right of the initial front position; (ii) for very low evaporation rates, the film dynamics is dominated by convective dewetting as this process acts on a much shorter time scale than evaporation. As a result, all the liquid is collected into a drop before evaporation slowly removes the remaining solvent. Under these conditions most of the nanoparticles are deposited in a single heap (see Fig. 6(c)). Depending on the diffusivity, the heap might be highest at the centre or show a depression there; (iii) at intermediate evaporation rates, one may observe the deposition of a nanoparticle ring around a region with a nanoparticle film of much lower height. At the centre deposition might increase again (see Fig. 6(b)).\n\nThe most intriguing feature is the ring formation that has been observed experimentally for suspensions of very different particle sizes ranging from nanometers [32, 36, 46, 47] to hundreds of micrometers. Pinning of the contact line and thermal Marangoni effects are often mentioned as necessary conditions for the ring formation. The contact line pinning is often assumed to result from substrate heterogeneities. Film height and concentration profiles at various instants during the dewetting process are displayed in Fig. 7. The profiles are from before, at and after self-pinning of the contact line. In Fig. 
8 we display a space-time plot for the complete process. At first, the front recedes in the same manner as when there is no evaporation, but now driven by convection and evaporation. A small capillary rim forms that collects all the dewetted liquid that does not evaporate. The particle concentration slowly increases at the contact line (Fig. 7(a) and regime", - "page_start": 20, - "page_end": 20, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [20] C. Tomlinson, 'On the motion of certain liquids on the surface of water,' Phil. Mag. Ser. 4 39 , 32-48 (1870).\n - [21] C. G. Marangoni, 'Ueber die Ausbreitung der Tropfen einer Flussigkeit auf der Oberflache einer anderen,' Ann. Phys. (Poggendorf) 143 , 337-354 (1871).\n - [22] O. Karthaus, L. Grasjo, N. Maruyama, and M. Shimomura, 'Formation of ordered mesoscopic polymer arrays by dewetting,' Chaos 9 , 308-314 (1999).\n - [23] X. Gu, D. Raghavan, J. F. Douglas, and A. Karim, 'Hole-growth instability in the dewetting of evaporating polymer solution films,' J. Polym. Sci. Pt. B-Polym. Phys. 40 , 2825-2832 (2002).\n - [24] S. W. Hong, J. F. Xia, and Z. Q. Lin, 'Spontaneous formation of mesoscale polymer patterns in an evaporating bound solution,' Adv. Mater. 19 , 1413-1417 (2007).\n - [25] G. Liu, C. F. Zhang, J. Zhao, and Y. X. Zhu, 'Study of the morphology of the three-phase contact line and its evolution by morphological examination after droplet evaporation of aqueous polymer solutions,' Langmuir 24 , 7923-7930 (2008).\n - [26] M. Mertig, U. Thiele, J. Bradt, G. Leibiger, W. Pompe, and H. Wendrock, 'Scanning force microscopy and geometrical analysis of two-dimensional collagen network formation,' Surface and Interface Analysis 25 , 514-521 (1997).\n - [27] M. Mertig, U. Thiele, J. Bradt, D. Klemm, and W. Pompe, 'Dewetting of thin collagenous precursor films,' Appl. Phys. A 66 , S565-S568 (1998).\n - [28] U. Thiele, M. Mertig, and W. 
Pompe, 'Dewetting of an evaporating thin liquid film: Heterogeneous nucleation and surface instability,' Phys. Rev. Lett. 80 , 2869-2872 (1998).\n - [29] H. Maeda, 'An atomic force microscopy study of ordered molecular assemblies and concentric ring patterns from evaporating droplets of collagen solutions,' Langmuir 15 , 8505-8513 (1999).\n - [30] I. I. Smalyukh, O. V. Zribi, J. C. Butler, O. D. Lavrentovich, and G. C. L. Wong, 'Structure and dynamics of liquid crystalline pattern formation in drying droplets of DNA,' Phys. Rev. Lett. 96 , 177801 (2006).\n - [31] L. Zhang, S. Maheshwari, H. C. Chang, and Y. X. Zhu, 'Evaporative self-assembly from complex DNA-colloid suspensions,' Langmuir 24 , 3911-3917 (2008).\n - [32] M. Maillard, L. Motte, A. T. Ngo, and M. P. Pileni, 'Rings and hexagons made of nanocrystals: A Marangoni effect,' J. Phys. Chem. B 104 , 11871-11877 (2000).\n - [33] G. L. Ge and L. Brus, 'Evidence for spinodal phase separation in two-dimensional nanocrystal selfassembly,' J. Phys. Chem. B 104 , 9573-9575 (2000).", - "page_start": 26, - "page_end": 26, - "source_file": "1001.2669.pdf" - }, - { - "text": "polymers which only result in fingers without side-branches [75] or fields of droplets left behind [18].\n\nAquantitative analysis shows that the mean number of fingers depends only very weakly on the average concentration of the nanoparticles ρ av n ; only the mean finger width increases with increasing concentration. However, decreasing the mobility (i.e., decreasing the diffusivity of the particles) leads to a much denser finger pattern and also causes the front instability to appear at an earlier stage, i.e., when the front instability is in its initial linear regime, it has a higher growth rate and a smaller characteristic wavelength (cf. Fig. 2(c) and (d)). Decreasing the effective chemical potential (increasing its absolute value) has a similar but less strong effect. For details see [41]. 
These findings lead to the conclusion that the determining factor for the front instability is the ratio of the time-scales of the different transport processes. In particular, the front becomes more unstable when the velocity of the dewetting front increases as compared to the mean diffusion velocity of the nanoparticles.\n\nIf the particle diffusivity is low, the front 'collects' the particles, resulting in a build up of the particles at the front that itself is slowed down. This makes the front unstable and any fluctuation along the front will trigger a transverse instability that results in an evolving fingering pattern. This happens even when the particle-liquid and particle-particle attractive interactions do not favour clustering (i.e. demixing of the liquid and the nanoparticles). In this regime, the instability is a purely dynamic effect and energetics plays no role in determining the number of fingers. We call this the 'transport regime'.\n\nTo illustrate the influence of energetics (characterized by the interaction parameters ε ij ) on fingering in Fig. 3 we display the dependence of the mean finger number on particle-liquid interaction strength ε nl . For ε nl ≥ 1 . 5 the mean finger number < f > is nearly constant; this is the transport regime. However, on decreasing ε nl below 1.5, we observe a marked increase in the value of < f > , indicating that energy plays an important role in determining the number of fingers in this regime. In this parameter range, demixing of particles and liquid occurs at the moving front and increases its transverse instability. In this 'demixing regime', the wavelength of the fingering instability is determined by the dynamics and the energetics of the system. Decreasing ε nl further (below 1 . 4 in Fig. 3) one first observes in regime (iii) a slight decrease in the average finger number. 
This is a geometric effect resulting from our one-dimensional finger counting routine: The fingers increasingly break up and the dried-in pattern looks progressively isotropic. In regime (iv), the measure 〈 f 〉 does not represent a finger number but instead indicates a decrease in the typical", - "page_start": 11, - "page_end": 11, - "source_file": "1001.2669.pdf" - }, - { - "text": "\n\nFIG. 8: (Colour online) Space-time plots are given for (left) the film thickness h and (right) the nanoparticle layer height h p = hφ . The plot corresponds to the complete evolution resulting in the ring profile of Fig. 6(b). In both panels bright [dark] parts denote high [low] regions. The prominent central dark-bright border in the left panel indicates the change of the position of the contact line in time. Over time, four regimes can be distinguished: (i) fast motion before pinning, (ii) nearly no front motion during self-pinning, (iii) slow motion after depinning, and (iv) final evaporation from the center.\n\n\n\nshould also be investigated further in the simple case presented here.\n\n## IV. CONCLUSION\n\nWe have discussed recent work on pattern formation processes in films and drops of evaporating suspensions/solutions of polymers and particles. After reviewing experiments on suspensions of thiol-coated gold nanoparticles in toluene we have focused on the modelling of the transport and phase change processes involved. A theoretical approach to the modelling of the hydrodynamics on the mesoscale has been described as well as more microscopic models for the dynamics in the observed nanoscopic 'postcursor' film. 
In particular, we have introduced (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nThe kinetic Monte Carlo model and the dynamical density functional theory can both be used to investigate and understand the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor' film that remains behind the mesoscopic dewetting front. They are, however, not capable of describing the dynamical processes in a meso-", - "page_start": 22, - "page_end": 22, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2669.pdf", - "query": "Concerning the dewetting of nanoparticle solutions, how does the concentration of nanoparticle affect the main finger's width ?", - "target_page": 12, - "target_passage": "A quantitative analysis shows that the mean number of fingers depends only very weakly on the av- erage concentration of the nanoparticles ; only the mean finger width increases with increasing concentration", - "chunk_present": { - "presence": true, - "index": 8 - } - }, - "top_chunk": [ - { - "text": "## I. INTRODUCTION\n\nThe patterns formed in dewetting processes have attracted strong interest since Reiter analysed the process quantitatively in the early nineties. In these experiments, that proved to be a paradigm in our understanding of dewetting, a uniform thin film of polystyrene (tens of nanometers thick) is deposited on a flat silicon oxide substrate is brought above the glass transition temperature. The film ruptures in several places, forming holes which subsequently grow, competing for space. As a result, a random polygonal network of liquid rims emerges. The rims may further decay into lines of small drops due to a Rayleigh-type instability [1-3]. 
The related problems of retracting contact lines on partially wetting substrates and the opening of single holes in rather thick films have also been studied [4, 5].\n\nSubsequent work has mainly focused on many different aspects of the dewetting process for simple non-volatile liquids and polymers (for reviews see Refs. [6-8]). All stages of the dewetting of a film are studied: the initial film rupture via nucleation or a surface instability (called spinodal dewetting) [1, 9-13], the growth process of individual holes [14-16], the evolution of the resulting hole pattern [3, 13], and the stability of the individual dewetting fronts [17-19]. We note in passing, that descriptions of dewetting patterns may also be found in historic papers, particularly for the dewetting of a liquid film on a liquid substrate. Tomlinson [20, footnote 18 on p. 40] considered turpentine on water and Marangoni [21, p. 352f] oil on water.\n\nMore recently, interest has turned to the dewetting processes of solutions and suspensions. However, these systems have not yet been investigated in any great depth. Such systems are complicated because their behaviour is determined by the interplay between the various solute (or colloid) and solvent transport processes. Furthermore, the solvents that are used often evaporate, i.e., one has to distinguish between 'normal' convective dewetting and evaporative dewetting. A number of experiments have been performed employing (colloidal) solutions of polymers [22-25], macromolecules like collagen and DNA [26-31] and nanoparticles [32-40]. The latter are sometimes referred to as 'nanofluids'. The initial focus of much of the research in the field has been on investigating the structures that are formed which are similar to the ones observed in the 'classical' dewetting of non-volatile liquids. Labyrinthine structures and polygonal networks result from spinodal dewetting and heterogeneous nucleation and growth, respectively. 
They are 'decorated' with the solute and therefore conserve the transient dewetting pattern as a dried-in structure when all the solvent has evaporated [28, 34]. The picture is, however, not complete. The solute may", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2669.pdf" - }, - { - "text": "also shift the spinodal and binodal lines as compared to the locations of these lines in the phase diagram for the pure solvent [41]. As a consequence, the solute concentration influences the hole nucleation rate. More importantly, the solute particles may also destabilise the dewetting fronts. As a result, one may find strongly ramified structures in all three systems [23, 25, 40, 42]. A selection of images exhibiting some of the possible structures is displayed in Fig.1.\n\nFor volatile solvents, the contact lines retract even for wetting fluids. It has been found that such evaporatively receding contact lines may deposit very regular line or ring patterns parallel to the moving contact line [24, 43]. The deposition of a single ring of colloids from a evaporating drop of colloidal suspension is well known as the 'coffee stain effect' [44]. Detailed investigations reveal the emergence of rich structures including multiple irregular rings, networks, regular droplet patterns, sawtooth patterns, Sierpinski carpets, and - in the case of DNA - liquid crystalline structures [22, 30, 45-49]. The deposition of regularly spaced straight lines orthogonal to the moving contact line has also been reported [50]. Droplet patterns may as well be created employing solvent-induced dewetting of glassy polymer layers below the glass transition temperature [51-53].\n\nNote that the dewetting of pure volatile liquids has also been studied experimentally [54] and theoretically [55-58]. 
In this case, different contact line instabilities have been observed for evaporating liquid drops [59, 60].\n\nIn the present article we review and preview the experiments and in particular the various modelling approaches for dewetting suspensions of (nano-)particles in volatile partially wetting solvents. After reviewing the basic experimental results in Section II, we discuss in Section III several theoretical approaches. In particular, we present a kinetic Monte Carlo model in Section III A, a dynamic density functional theory in Section III B, and a thin film evolution equation in Section III C. Finally, we conclude in Section IV by discussing advantages and shortcomings of the individual approaches and future challenges to all of them.\n\n## II. EXPERIMENT WITH NANOPARTICLE SOLUTIONS\n\nWe focus on experiments that use monodisperse colloidal suspensions of thiol-passivated gold nanoparticles in toluene [33, 34, 37-40, 61]. The gold core of 2 - 3 nm diameter is coated by a layer of alkyl-thiol molecules. The length of the carbon backbone of the thiol used in the experiments ranges from 6 to 12 carbon atoms ( C 6 to C 12 ) [40]. By varying the chain length, one can control", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2669.pdf" - }, - { - "text": "time scales for evaporation and diffusion. A large mobility M indicates fast diffusion as compared to evaporation. A trial move is accepted with the probability p acc = min[1 , exp( -∆ E/kT )] where k is the Boltzmann constant, T the temperature and ∆ E is the change in energy resulting from the potential move. Note that particles are only allowed to move into wet areas of the substrate, i.e., onto cells with l = 1 . This models zero diffusivity of the particles on a dry substrate. The replaced liquid fills the site left by the nanoparticle.\n\nWithout nanoparticles, the behaviour of the model is well known as it reduces to the classical two-dimensional Ising model [74]. For kT < kT c ≈ 0 . 
567 liquid and vapour coexist when µ = µ coex = -2 . For µ > -2 [ µ < -2 ] eventually the liquid [vapour] dominates. A straight liquidgas interface will recede [advance] for µ < -2 [ µ > -2 ], i.e. one finds evaporative dewetting [wetting] fronts. If one starts, however, with a substrate covered homogeneously by the liquid, for µ < -2 the film will dewet via a nucleation or spinodal-like process. If the nanoparticles are present, they form dried-in structures when all the liquid evaporates. The final structures do not normally change any further - at least on short time scales. However, if the liquid wets the particles (i.e. is attracted to the particles), over long times there might be a coarsening of the structures, facilitated by the adsorbed liquid. The dried-in patterns depend on the particular pathway taken by the evaporative dewetting process. They range from labyrinthine to polygonal network structures or holes in a dense particle layer. Some typical patterns are displayed in Fig. 2, for cases when the average surface coverage of the nanoparticles ρ av n = 0 . 2 . Panels (a) and (b) result from a spinodal-like and nucleation and growth process, respectively. At first sight they look very similar to the patterns seen for the pure solvent and one might argue that the particles solely act as passive tracers and preserve the transient volatile dewetting structures of the solvent. This was suggested in Refs. [26-28] for dewetting collagen solutions. However, panels (c) and (d) indicate that the particles may at times play a rather more significant role. When the diffusion of the particles is slow, the evaporative dewetting fronts become transversely unstable and may result in strongly ramified patterns. This instability is caused by the nanoparticles. 
The lower their mobility, the stronger the fingering effect, i.e., there are more fingers in (c) than in (d) because in the latter the mobility is larger.\n\nThe front instability is intriguing as it results in strongly branched structures. As the dewetting front moves, new branches are continuously created and existing branches merge at the moving contact line. However, the mean finger number in the streamwise direction of the resulting ramified pattern is a constant. This behaviour is in contrast to the front instabilities found for dewetting", - "page_start": 9, - "page_end": 9, - "source_file": "1001.2669.pdf" - }, - { - "text": "small holes. The competition for space results in a fine-meshed polygonal network of nanoparticle deposits. The concentration of particles is much higher at the network nodes - an effect that can not been seen within the KMC model. As the particles attract the liquid there remains some liquid on the substrate where the nanoparticles are.\n\nFig. 5 gives snapshots of the evolution of a fingering instability for a retracting dewetting front. At early times the straight front shows a rather short-wave instability, about 16 wiggles can be seen. However, they are only a transient: the finger pattern coarsens rapidly till only about 7 fingers remain. The fingering then becomes stationary, i.e., just as in the KMC, the mean finger number remains constant, although new branches are continuously created and old branches join each other. In general, the results on fingering agree well with results obtained using the KMC model [41]. From this we conclude that jamming of discrete particles is not a necessary factor for causing the instability, since the fingering is seen here in a continuum model with a diffusion constant that is independent of the nanoparticle concentration. The DDFT is better suited than the KMC for investigations of the early instability stages: they are more easy to discern without the discrete background noise of the KMC. 
Furthermore, one may perform a linear stability analysis of the one-dimensional undisturbed streamwise front profiles with respect to transverse perturbations (in analogy to the approach used in Refs. [19, 86, 87]).\n\n## C. Thin film hydrodynamics\n\nThe previous two sections focused on two approaches to describe the experimentally observed patterning dynamics in the ultrathin postcursor film left behind by a mesoscopic receding dewetting front. Although both the kinetic Monte Carlo model and the dynamical density functional theory are able to describe well the processes in the ultrathin film, they can not be employed to describe mesoscale hydrodynamics. A relatively simple model for the latter can be derived in the framework of a long-wave or lubrication equation [8, 63]. We will illustrate here the approach by considering an isothermal situation where the nanoparticles are not surface active, i.e., they do not act as surfactants. For a model incorporating the effects of latent heat generation and surfaceactive particles resulting in thermal and solutal Marangoni stresses, see Ref. [88]. A description of spreading particle solutions incorporating a structural disjoining pressure has also been considered [89]. For related work on particle-laden film flow on an incline see Refs. [90, 91].\n\nOne starts from the Stokes equations, together with continuity, no-slip boundary conditions at the", - "page_start": 17, - "page_end": 17, - "source_file": "1001.2669.pdf" - }, - { - "text": "FIG. 5: (Colour online) Density profiles for the situation where the substrate is covered by nanoparticles with average density ρ av n = 0 . 3 and with the liquid excluded from the region y < 0 . The top row shows the nanoparticle density profiles and bottom row the corresponding liquid density profiles at the times t/t l = 1000 (left), 10000 (middle) and 30000 (right), where t l = 1 /kTM nc l σ 2 . The parameters are kT/ε ll = 0 . 8 , ε nl /ε ll = 0 . 6 , ε nn = 0 , α = 0 . 
2 M nc l σ 4 , M c l = 0 , ρ l ( t = 0) = 0 . 9 ± ξ (where ξ represents white noise of amplitude 0.05) and ( µ -µ coex ) /kT = -0 . 78 .\n\n\n\nThis theory allows us to study the time evolution of the evaporating film of nanoparticle suspension without some of the restrictions of the kinetic Monte Carlo model. Here, however, we illustrate its application in similar parameter regimes as used above for the KMC. We focus on two examples: (i) the spinodal dewetting of a initially flat film of nanoparticle suspension characterised by constant ρ l and ρ n (Fig. 4); and (ii) the retraction of a dewetting front that is unstable with respect to a fingering instability (Fig. 5).\n\nFig. 4 presents two pairs of snapshots from a purely evaporative dewetting process deep inside the parameter region of the phase diagram where spinodal dewetting occurs. For small times the film becomes unstable showing a typical spinodal labyrinthine pattern with a typical wavelength. The nanoparticles concentrate where the remaining liquid is situated. However, they are 'slow' in their reaction: when ρ l already takes values in the range 0.08 - 0.83, the nanoparticle concentration has only deviated by about 25% from its initial value. The film thins strongly forming many", - "page_start": 16, - "page_end": 16, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [34] P. Moriarty, M. D. R. Taylor, and M. Brust, 'Nanostructured cellular networks,' Phys. Rev. Lett. 89 , 248303 (2002).\n - [35] E. Rabani, D. R. Reichman, P. L. Geissler, and L. E. Brus, 'Drying-mediated self-assembly of nanoparticles,' Nature 426 , 271-274 (2003).\n - [36] L. V. Govor, G. Reiter, J. Parisi, and G. H. Bauer, 'Self-assembled nanoparticle deposits formed at the contact line of evaporating micrometer-size droplets,' Phys. Rev. E 69 , 061609 (2004).\n - [37] C. P. Martin, M. O. Blunt, and P. Moriarty, 'Nanoparticle networks on silicon: Self-organized or disorganized?' Nano Lett. 4 , 2389-2392 (2004).\n - [38] C. P. 
Martin, M. O. Blunt, E. Pauliac-Vaujour, A. Stannard, P. Moriarty, I. Vancea, and U. Thiele, 'Controlling pattern formation in nanoparticle assemblies via directed solvent dewetting,' Phys. Rev. Lett. 99 , 116103 (2007).\n - [39] A. Stannard, C. P. Martin, E. Pauliac-Vaujour, P. Moriarty, and U. Thiele, 'Dual-scale pattern formation in nanoparticle assemblies,' J. Chem. Phys. C 112 , 15195-15203 (2008).\n - [40] E. Pauliac-Vaujour, A. Stannard, C. P. Martin, M. O. Blunt, I. Notingher, P. J. Moriarty, I. Vancea, and U. Thiele, 'Fingering instabilities in dewetting nanofluids,' Phys. Rev. Lett. 100 , 176102 (2008).\n - [41] I. Vancea, U. Thiele, E. Pauliac-Vaujour, A. Stannard, C. P. Martin, M. O. Blunt, and P. J. Moriarty, 'Front instabilities in evaporatively dewetting nanofluids,' Phys. Rev. E 78 , 041601 (2008).\n - [42] U. Thiele, Entnetzung von Kollagenfilmen , Ph.D. thesis, Technische Universitat Dresden (1998).\n - [43] H. Yabu and M. Shimomura, 'Preparation of self-organized mesoscale polymer patterns on a solid substrate: Continuous pattern formation from a receding meniscus,' Adv. Funct. Mater. 15 , 575-581 (2005).\n - [44] R. D. Deegan, O. Bakajin, T. F. Dupont, G. Huber, S. R. Nagel, and T. A. Witten, 'Capillary flow as the cause of ring stains from dried liquid drops,' Nature 389 , 827-829 (1997).\n - [45] E. Adachi, A. S. Dimitrov, and K. Nagayama, 'Stripe patterns formed on a glass-surface during droplet evaporation,' Langmuir 11 , 1057-1060 (1995).\n - [46] R. D. Deegan, 'Pattern formation in drying drops,' Phys. Rev. E 61 , 475-485 (2000).\n - [47] R. D. Deegan, O. Bakajin, T. F. Dupont, G. Huber, S. R. Nagel, and T. A. Witten, 'Contact line deposits in an evaporating drop,' Phys. Rev. E 62 , 756-765 (2000).\n - [48] L. Shmuylovich, A. Q. Shen, and H. A. Stone, 'Surface morphology of drying latex films: Multiple ring formation,' Langmuir 18 , 3441-3445 (2002).\n - [49] V. X. Nguyen and K. J. 
Stebe, 'Patterning of small particles by a surfactant-enhanced Marangoni-", - "page_start": 27, - "page_end": 27, - "source_file": "1001.2669.pdf" - }, - { - "text": "Benard instability,' Phys. Rev. Lett. 88 , 164501 (2002).\n\n - [50] J. Huang, F. Kim, A. R. Tao, S. Connor, and P. Yang, 'Spontaneous formation of nanoparticle stripe patterns through dewetting,' Nat. Mater. 4 , 896-900 (2005).\n - [51] S. H. Lee, P. J. Yoo, S. J. Kwon, and H. H. Lee, 'Solvent-driven dewetting and rim instability,' J. Chem. Phys. 121 , 4346-4351 (2004).\n - [52] L. Xu, T. F. Shi, P. K. Dutta, and L. An, 'Rim instability by solvent-induced dewetting,' J. Chem. Phys. 127 , 144704 (2007).\n - [53] L. Xu, T. F. Shi, and L. J. An, 'The dewetting dynamics of the polymer thin film by solvent annealing,' J. Chem. Phys. 129 , 044904 (2008).\n - [54] M. Elbaum and S. G. Lipson, 'How does a thin wetted film dry up?' Phys. Rev. Lett. 72 , 3562-3565 (1994).\n - [55] N. Samid-Merzel, S. G. Lipson, and D. S. Tannhauser, 'Pattern formation in drying water films,' Phys. Rev. E 57 , 2906-2913 (1998).\n - [56] A. Padmakar, K. Kargupta, and A. Sharma, 'Instability and dewetting of evaporating thin water films on partially and completely wettable substrates,' J. Chem. Phys. 110 , 1735-1744 (1999).\n - [57] A. V. Lyushnin, A. A. Golovin, and L. M. Pismen, 'Fingering instability of thin evaporating liquid films,' Phys. Rev. E 65 , 021602 (2002).\n - [58] L. M. Pismen, 'Spinodal dewetting in a volatile liquid film,' Phys. Rev. E 70 , 021601 (2004).\n - [59] C. Poulard, O. Benichou, and A. M. Cazabat, 'Freely receding evaporating droplets,' Langmuir 19 , 8828-8834 (2003).\n - [60] Y. Gotkis, I. Ivanov, N. Murisic, and L. Kondic, 'Dynamic structure formation at the fronts of volatile liquid drops,' Phys. Rev. Lett. 97 , 186101 (2006).\n - [61] E. Pauliac-Vaujour and P. Moriarty, 'Meniscus-mediated organization of colloidal nanoparticles,' J. Phys. Chem. C 111 , 16255-16260 (2007).\n - [62] C. Gigault, K. 
Dalnoki-Veress, and J. R. Dutcher, 'Changes in the morphology of self-assembled polystyrene microsphere monolayers produced by annealing,' J. Colloid Interface Sci. 243 , 143-155 (2001).\n - [63] A. Oron, S. H. Davis, and S. G. Bankoff, 'Long-scale evolution of thin liquid films,' Rev. Mod. Phys. 69 , 931-980 (1997).\n - [64] U. Thiele, 'Thin film evolution equations from (evaporating) dewetting liquid layers to epitaxial growth,' J. Phys.-Cond. Mat. (2010), (at press).", - "page_start": 28, - "page_end": 28, - "source_file": "1001.2669.pdf" - }, - { - "text": "## Abstract\n\nWe review recent experiments on dewetting thin films of evaporating colloidal nanoparticle suspensions (nanofluids) and discuss several theoretical approaches to describe the ongoing processes including coupled transport and phase changes. These approaches range from microscopic discrete stochastic theories to mesoscopic continuous deterministic descriptions. In particular, we focus on (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nModels (i) and (ii) are employed to discuss the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor film' that remains behind a mesoscopic dewetting front. We highlight, in particular, the presence of a transverse instability in the evaporative dewetting front which results in highly branched fingering structures. The subtle interplay of decomposition in the film and contact line motion is discussed.\n\nFinally, we discuss a simple thin film model (iii) of the hydrodynamics on the mesoscale. We employ coupled evolution equations for the film thickness profile and mean particle concentration. 
The model is used to discuss the self-pinning and de-pinning of a contact line related to the 'coffee-stain' effect.\n\nIn the course of the review we discuss the advantages and limitations of the different theories, as well as possible future developments and extensions.\n\nThe paper is published in: J. Phys.-Cond. Mat. 21 , 264016 (2009), in the Volume 'Nanofluids on solid substrates' and can be obtained at http://dx.doi.org/10.1088/0953-8984/21/26/264016", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2669.pdf" - }, - { - "text": "polymers which only result in fingers without side-branches [75] or fields of droplets left behind [18].\n\nAquantitative analysis shows that the mean number of fingers depends only very weakly on the average concentration of the nanoparticles ρ av n ; only the mean finger width increases with increasing concentration. However, decreasing the mobility (i.e., decreasing the diffusivity of the particles) leads to a much denser finger pattern and also causes the front instability to appear at an earlier stage, i.e., when the front instability is in its initial linear regime, it has a higher growth rate and a smaller characteristic wavelength (cf. Fig. 2(c) and (d)). Decreasing the effective chemical potential (increasing its absolute value) has a similar but less strong effect. For details see [41]. These findings lead to the conclusion that the determining factor for the front instability is the ratio of the time-scales of the different transport processes. In particular, the front becomes more unstable when the velocity of the dewetting front increases as compared to the mean diffusion velocity of the nanoparticles.\n\nIf the particle diffusivity is low, the front 'collects' the particles, resulting in a build up of the particles at the front that itself is slowed down. This makes the front unstable and any fluctuation along the front will trigger a transverse instability that results in an evolving fingering pattern. 
This happens even when the particle-liquid and particle-particle attractive interactions do not favour clustering (i.e. demixing of the liquid and the nanoparticles). In this regime, the instability is a purely dynamic effect and energetics plays no role in determining the number of fingers. We call this the 'transport regime'.\n\nTo illustrate the influence of energetics (characterized by the interaction parameters ε ij ) on fingering in Fig. 3 we display the dependence of the mean finger number on particle-liquid interaction strength ε nl . For ε nl ≥ 1 . 5 the mean finger number < f > is nearly constant; this is the transport regime. However, on decreasing ε nl below 1.5, we observe a marked increase in the value of < f > , indicating that energy plays an important role in determining the number of fingers in this regime. In this parameter range, demixing of particles and liquid occurs at the moving front and increases its transverse instability. In this 'demixing regime', the wavelength of the fingering instability is determined by the dynamics and the energetics of the system. Decreasing ε nl further (below 1 . 4 in Fig. 3) one first observes in regime (iii) a slight decrease in the average finger number. This is a geometric effect resulting from our one-dimensional finger counting routine: The fingers increasingly break up and the dried-in pattern looks progressively isotropic. In regime (iv), the measure 〈 f 〉 does not represent a finger number but instead indicates a decrease in the typical", - "page_start": 11, - "page_end": 11, - "source_file": "1001.2669.pdf" - }, - { - "text": "dewetted liquid. The front recedes until all liquid is collected in a central drop. Since no liquid evaporates [ Q nc = 0 in Eq. (1)], the particle concentration does not change during the process.\n\nThe situation changes when allowing for evaporation ( Q nc > 0 ). Now the front may retract by convection and/or evaporation. 
Evaporation leads to the possibility of a strong increase in the particle concentration at the contact line as evaporation is strongest there. Due to the strong nonlinear dependence of the viscosity on the particle concentration, this may lead to a dramatic decrease of the convective contribution to the front velocity. For moderate evaporation rates, this may result in a (temporary) self-pinning of the front. Within the present basic model, the process can (after complete dry-in) result in three different basic deposition patterns: (i) for very fast evaporation rates, all other processes occur over time scales that are much larger. In particular, the effects of convective redistribution of the liquid are neglectable. As a result one finds that a nearly homogeneous film of nanoparticles of thickness h p = φ 0 h 0 is deposited (see Fig. 6(a)). Convection only results in the small heap of material visible at the left hand side of Fig. 6(a). The decrease in h p on the right side of Fig. 6(a) arises due to the diffusion of particles to the right of the initial front position; (ii) for very low evaporation rates, the film dynamics is dominated by convective dewetting as this process acts on a much shorter time scale than evaporation. As a result, all the liquid is collected into a drop before evaporation slowly removes the remaining solvent. Under these conditions most of the nanoparticles are deposited in a single heap (see Fig. 6(c)). Depending on the diffusivity, the heap might be highest at the centre or show a depression there; (iii) at intermediate evaporation rates, one may observe the deposition of a nanoparticle ring around a region with a nanoparticle film of much lower height. At the centre deposition might increase again (see Fig. 6(b)).\n\nThe most intriguing feature is the ring formation that has been observed experimentally for suspensions of very different particle sizes ranging from nanometers [32, 36, 46, 47] to hundreds of micrometers. 
Pinning of the contact line and thermal Marangoni effects are often mentioned as necessary conditions for the ring formation. The contact line pinning is often assumed to result from substrate heterogeneities. Film height and concentration profiles at various instants during the dewetting process are displayed in Fig. 7. The profiles are from before, at and after self-pinning of the contact line. In Fig. 8 we display a space-time plot for the complete process. At first, the front recedes in the same manner as when there is no evaporation, but now driven by convection and evaporation. A small capillary rim forms that collects all the dewetted liquid that does not evaporate. The particle concentration slowly increases at the contact line (Fig. 7(a) and regime", - "page_start": 20, - "page_end": 20, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2669.pdf", - "query": "Which of ultrathin film or mesoscale hydrodynamics are best explained by kinetic Monte Carlo models ? ", - "target_page": 18, - "target_passage": "lthough both the kinetic Monte Carlo model and the dynamical density functional theory are able to describe well the processes in the ultrathin film, they can not be employed to describe mesoscale hydrodynamics", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "\n\nFIG. 8: (Colour online) Space-time plots are given for (left) the film thickness h and (right) the nanoparticle layer height h p = hφ . The plot corresponds to the complete evolution resulting in the ring profile of Fig. 6(b). In both panels bright [dark] parts denote high [low] regions. The prominent central dark-bright border in the left panel indicates the change of the position of the contact line in time. 
Over time, four regimes can be distinguished: (i) fast motion before pinning, (ii) nearly no front motion during self-pinning, (iii) slow motion after depinning, and (iv) final evaporation from the center.\n\n\n\nshould also be investigated further in the simple case presented here.\n\n## IV. CONCLUSION\n\nWe have discussed recent work on pattern formation processes in films and drops of evaporating suspensions/solutions of polymers and particles. After reviewing experiments on suspensions of thiol-coated gold nanoparticles in toluene we have focused on the modelling of the transport and phase change processes involved. A theoretical approach to the modelling of the hydrodynamics on the mesoscale has been described as well as more microscopic models for the dynamics in the observed nanoscopic 'postcursor' film. In particular, we have introduced (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nThe kinetic Monte Carlo model and the dynamical density functional theory can both be used to investigate and understand the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor' film that remains behind the mesoscopic dewetting front. They are, however, not capable of describing the dynamical processes in a meso-", - "page_start": 22, - "page_end": 22, - "source_file": "1001.2669.pdf" - }, - { - "text": "where γ is the liquid-gas surface tension and f ( h ) is a local free energy term that describes the wettability of the surface. Since µ corresponds to a chemical potential, the term µh may either bias the system towards the liquid or towards the gas state. The variation of F w.r.t. h gives the pressure. It contains the curvature (Laplace) pressure -γ ∆ h and the disjoining pressure Π( h ) = -∂ h f ( h ) . Many different forms for the latter are in use (see, e.g., Refs. 
[4, 8, 63, 70-73]).\n\nFor the present system a thin film description using Eq. (1) is not appropriate because the nanoparticles are not taken into account. However, under certain conditions one can augment equation (1) for the evolution of the film thickness by coupling it to an equation for the evolution of the mean particle concentration. The resulting model is able to describe the behaviour of an evaporating solution on the meso- and macroscale. Such an approach is briefly discussed below in Section III C. Weshould expect such a model to describe the mesoscopic dewetting front discussed above. However, the theory is less suited to a description of the dewetting dynamics of the ultrathin postcursor\n\nfilm.\n\nThe dewetting of the ultrathin film of highly concentrated suspension may be described by a discrete stochastic model such as, for instance, a kinetic Monte Carlo (KMC) model based solely on evaporation/condensation dynamics of the solvent and diffusion of the solute [35, 39, 41]. The validity of this strong assumption regarding the relevant transport processes can be confirmed from an estimate based on Eq. (1): The pressure p = δF/δh drives convection and evaporation. The convective mobility is proportional to h 3 , i.e., it is large for thick films but decreases strongly with reduced film thickness. The evaporative mobility, however, is a constant, implying that evaporation will dominate below a certain (cross-over) thickness. For the parameter values of Ref. [57] and a small contact angle ( ≈ 0 . 01 ), the cross-over thickness is in the range of 1-5 nanometers. This estimate justifies the neglect of convective transport in a description of the postcursor film and may explain why one has such good agreement between the experimentally observed patterns and the patterns obtained from a purely two-dimensional (single layer) kinetic Monte Carlo model [35]. 
We introduce the KMC model below in Section III A.\n\nIn several respects, however, the kinetic Monte Carlo model is rather simplistic, limiting its potential applications. For instance, the thermodynamic chemical potential as well as any wetting interaction of the solvent with the substrate are collected in a single parameter - an effective chemical potential. This implies that any influence of a disjoining pressure is 'smeared out' over the whole system and that no distinction between the short- and the long-range parts of the disjoining pressure is possible. It is furthermore based on the assumption that evaporation/condensation is", - "page_start": 7, - "page_end": 7, - "source_file": "1001.2669.pdf" - }, - { - "text": "small holes. The competition for space results in a fine-meshed polygonal network of nanoparticle deposits. The concentration of particles is much higher at the network nodes - an effect that can not been seen within the KMC model. As the particles attract the liquid there remains some liquid on the substrate where the nanoparticles are.\n\nFig. 5 gives snapshots of the evolution of a fingering instability for a retracting dewetting front. At early times the straight front shows a rather short-wave instability, about 16 wiggles can be seen. However, they are only a transient: the finger pattern coarsens rapidly till only about 7 fingers remain. The fingering then becomes stationary, i.e., just as in the KMC, the mean finger number remains constant, although new branches are continuously created and old branches join each other. In general, the results on fingering agree well with results obtained using the KMC model [41]. From this we conclude that jamming of discrete particles is not a necessary factor for causing the instability, since the fingering is seen here in a continuum model with a diffusion constant that is independent of the nanoparticle concentration. 
The DDFT is better suited than the KMC for investigations of the early instability stages: they are more easy to discern without the discrete background noise of the KMC. Furthermore, one may perform a linear stability analysis of the one-dimensional undisturbed streamwise front profiles with respect to transverse perturbations (in analogy to the approach used in Refs. [19, 86, 87]).\n\n## C. Thin film hydrodynamics\n\nThe previous two sections focused on two approaches to describe the experimentally observed patterning dynamics in the ultrathin postcursor film left behind by a mesoscopic receding dewetting front. Although both the kinetic Monte Carlo model and the dynamical density functional theory are able to describe well the processes in the ultrathin film, they can not be employed to describe mesoscale hydrodynamics. A relatively simple model for the latter can be derived in the framework of a long-wave or lubrication equation [8, 63]. We will illustrate here the approach by considering an isothermal situation where the nanoparticles are not surface active, i.e., they do not act as surfactants. For a model incorporating the effects of latent heat generation and surfaceactive particles resulting in thermal and solutal Marangoni stresses, see Ref. [88]. A description of spreading particle solutions incorporating a structural disjoining pressure has also been considered [89]. For related work on particle-laden film flow on an incline see Refs. [90, 91].\n\nOne starts from the Stokes equations, together with continuity, no-slip boundary conditions at the", - "page_start": 17, - "page_end": 17, - "source_file": "1001.2669.pdf" - }, - { - "text": "## Abstract\n\nWe review recent experiments on dewetting thin films of evaporating colloidal nanoparticle suspensions (nanofluids) and discuss several theoretical approaches to describe the ongoing processes including coupled transport and phase changes. 
These approaches range from microscopic discrete stochastic theories to mesoscopic continuous deterministic descriptions. In particular, we focus on (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nModels (i) and (ii) are employed to discuss the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor film' that remains behind a mesoscopic dewetting front. We highlight, in particular, the presence of a transverse instability in the evaporative dewetting front which results in highly branched fingering structures. The subtle interplay of decomposition in the film and contact line motion is discussed.\n\nFinally, we discuss a simple thin film model (iii) of the hydrodynamics on the mesoscale. We employ coupled evolution equations for the film thickness profile and mean particle concentration. The model is used to discuss the self-pinning and de-pinning of a contact line related to the 'coffee-stain' effect.\n\nIn the course of the review we discuss the advantages and limitations of the different theories, as well as possible future developments and extensions.\n\nThe paper is published in: J. Phys.-Cond. Mat. 21 , 264016 (2009), in the Volume 'Nanofluids on solid substrates' and can be obtained at http://dx.doi.org/10.1088/0953-8984/21/26/264016", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2669.pdf" - }, - { - "text": "scopic film. We have seen that the KMC model is able to describe the interplay of solute diffusion within the solvent and solvent evaporation/condensation. 
It also takes the liquid-liquid, liquid-particle and particle-particle interactions into account and therefore allows us to distinguish different regimes of the transverse (fingering) instability of the evaporative dewetting front: a transport regime where the instability is almost completely independent of the interaction strengths and a demixing regime where particles and liquid demix at the receding front thereby increasing its transverse instability.\n\nThe dynamical density functional theory describes the coupled dynamics of the density fields of the liquid and the nanoparticles. In the form described above (i.e. based on the two-dimensional hamiltonian (3)) we obtain a simple theory that allows us to study the time evolution of the evaporating ultrathin film and also to investigate the influence of processes such as surface diffusion by the liquid, which are not incorporated in the KMC model. However, it is straightforward to extend the theory to consider a fully three-dimensional fluid film, in which one can distinguish between short- and long-range interactions of solvent and/or solute with the substrate. We have, however, restricted the examples given here to situations that can also be described using the KMC model. A further exploration will be presented elsewhere.\n\nFinally, we have discussed a simple thin film model for the hydrodynamics on the mesoscale. It results from a long-wave approximation and consists of coupled evolution equations for the film thickness profile and the mean particle concentration. 
It has been used to discuss the self-pinning of receding contact lines that is related to the formation of rings of dried-in particles (coffee-stain effect) that frequently occurs when films or drops of solutions or suspensions dewet by the combined effects of convection and evaporation.\n\nOne of the primary goals of researchers in this field, is the search for simple-to-use techniques that allow one to produce hierarchically structured functional layers for a wide range of applications such as, e.g., organic solar cells [98]. This means that the experiments advance very rapidly towards increasingly complex systems. For example, there have been investigations of the influence of the phase behaviour on the drying of droplets of a suspension of hard-sphere colloidal particles and non-adsorbing polymer [99], of the instabilities and the formation of drops in evaporating thin films of binary solutions [100] that may lead to treelike patterns [101], of effects of a secondary phase separation on evaporation-induced pattern formation in polymer films [102], and of the influence of an imposed flow on decomposition and deposition processes in a sliding ridge of evaporating solution of a binary polymer mixture [103] and of the influence of rather",
            "page_start": 23,
            "page_end": 23,
            "source_file": "1001.2669.pdf"
        },
        {
            "text": "is similar to the size of the nanoparticles. At a certain distance from the macroscopic front, the ultrathin film starts to evolve a locally isotropic pattern of holes. The holes themselves grow in an unstable manner resulting in an array of isotropically branched structures as shown, e.g., above in Fig. 1. This indicates that at least some of the patterns described in the literature may have arisen from processes in similar ultrathin 'postcursor' films.\n\nThe existence of the ultrathin 'postcursor' film is an experimental finding that can be drawn on when choosing a theoretical approach to account for the pattern formation (see below). 
Note however, that at the moment there exists no explanation for its existence. A possible hypothesis is that the substrate strongly attracts the nanoparticles. As a result they form a dense suspension layer having a thickness roughly equal to the diameter of the nanoparticles. The observed mesoscopic dewetting front then actually correspond to an autophobic dewetting of a low concentration suspension from the higher concentration suspension on the surface of the substrate.\n\n## III. MODELLING APPROACHES\n\nModels of dewetting thin films of pure liquids or polymers are often based on thin film hydrodynamics. Starting from the Stokes equations, together with continuity and boundary conditions at the substrate and free surface, one applies a long-wave approximation (assuming small surface slopes and contact angles) [8, 63] and obtains a non-linear evolution equation for the film thickness profile h ( x, y, t ) . In the case of volatile liquids one finds [55-58, 64]\n\n∂ t h = ∇· [ Q c ∇ δF δh ] -Q e δF δh , (1)\n\nwith the mobility functions Q c ( h ) = h 3 / 3 η ≥ 0 (assuming Poiseuille flow in the film and no slip at the substrate; η is the dynamic viscosity) and Q e ≥ 0 for the convective and evaporative part of the dynamics, respectively. Q e is a rate constant that can be obtained from gas kinetic theory or from experiment [57]. Note that Eq. (1) only applies if the pressure in the vapour above the film is close to the saturation pressure. For alternative expressions that are used to describe the non-conserved evaporative dynamics see, e.g., Refs. [56, 57, 65-69]. Finally, ∇ = ( ∂ x , ∂ y ) , and ∂ t , ∂ x and ∂ y denote partial derivatives w.r.t. time and the coordinates.\n\nFocusing on the influence of capillarity and wettability only, the energy functional F [ h ] is given by\n\nF [ h ] = ∫ dx ∫ dy [ γ 2 ( ∇ h ) 2 + f ( h ) -µh ] (2)", - "page_start": 6, - "page_end": 6, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [20] C. 
Tomlinson, 'On the motion of certain liquids on the surface of water,' Phil. Mag. Ser. 4 39 , 32-48 (1870).\n - [21] C. G. Marangoni, 'Ueber die Ausbreitung der Tropfen einer Flussigkeit auf der Oberflache einer anderen,' Ann. Phys. (Poggendorf) 143 , 337-354 (1871).\n - [22] O. Karthaus, L. Grasjo, N. Maruyama, and M. Shimomura, 'Formation of ordered mesoscopic polymer arrays by dewetting,' Chaos 9 , 308-314 (1999).\n - [23] X. Gu, D. Raghavan, J. F. Douglas, and A. Karim, 'Hole-growth instability in the dewetting of evaporating polymer solution films,' J. Polym. Sci. Pt. B-Polym. Phys. 40 , 2825-2832 (2002).\n - [24] S. W. Hong, J. F. Xia, and Z. Q. Lin, 'Spontaneous formation of mesoscale polymer patterns in an evaporating bound solution,' Adv. Mater. 19 , 1413-1417 (2007).\n - [25] G. Liu, C. F. Zhang, J. Zhao, and Y. X. Zhu, 'Study of the morphology of the three-phase contact line and its evolution by morphological examination after droplet evaporation of aqueous polymer solutions,' Langmuir 24 , 7923-7930 (2008).\n - [26] M. Mertig, U. Thiele, J. Bradt, G. Leibiger, W. Pompe, and H. Wendrock, 'Scanning force microscopy and geometrical analysis of two-dimensional collagen network formation,' Surface and Interface Analysis 25 , 514-521 (1997).\n - [27] M. Mertig, U. Thiele, J. Bradt, D. Klemm, and W. Pompe, 'Dewetting of thin collagenous precursor films,' Appl. Phys. A 66 , S565-S568 (1998).\n - [28] U. Thiele, M. Mertig, and W. Pompe, 'Dewetting of an evaporating thin liquid film: Heterogeneous nucleation and surface instability,' Phys. Rev. Lett. 80 , 2869-2872 (1998).\n - [29] H. Maeda, 'An atomic force microscopy study of ordered molecular assemblies and concentric ring patterns from evaporating droplets of collagen solutions,' Langmuir 15 , 8505-8513 (1999).\n - [30] I. I. Smalyukh, O. V. Zribi, J. C. Butler, O. D. Lavrentovich, and G. C. L. Wong, 'Structure and dynamics of liquid crystalline pattern formation in drying droplets of DNA,' Phys. Rev. Lett. 
96 , 177801 (2006).\n - [31] L. Zhang, S. Maheshwari, H. C. Chang, and Y. X. Zhu, 'Evaporative self-assembly from complex DNA-colloid suspensions,' Langmuir 24 , 3911-3917 (2008).\n - [32] M. Maillard, L. Motte, A. T. Ngo, and M. P. Pileni, 'Rings and hexagons made of nanocrystals: A Marangoni effect,' J. Phys. Chem. B 104 , 11871-11877 (2000).\n - [33] G. L. Ge and L. Brus, 'Evidence for spinodal phase separation in two-dimensional nanocrystal selfassembly,' J. Phys. Chem. B 104 , 9573-9575 (2000).", - "page_start": 26, - "page_end": 26, - "source_file": "1001.2669.pdf" - }, - { - "text": "on the model (see above). The purely two-dimensional character of the KMC was extended to a 'pseudo three-dimensional' one by making the effective chemical potential dependent on the mean liquid coverage [38]. As the latter is related to a mean film thickness, this corresponds to the introduction of a 'global' thickness-dependent disjoining pressure into the evaporation term without an explicit consideration of a film thickness. The amended model can reproduce bimodal structures that are beyond the scope of the purely two-dimensional model [38, 39]. Fully threedimensional models are also discussed in the literature [76, 77].\n\n## B. Dynamical Density Functional theory\n\nThe limitations of the kinetic Monte Carlo model introduced in the previous Section are related to its character as a two-dimensional lattice gas with only three states: gas, liquid or particle. 
This implies that (i) no liquid can be transported to a site on the surface already filled with liquid, i.e., diffusion of the liquid can not be incorporated in a sensible way and (ii) one is not able to distinguish between the influence of the short- and the long-range parts of the interactions with the substrate, as all such interactions are absorbed into the effective chemical potential.\n\nHowever, using dynamical density functional theory (DDFT) [78-83] one can develop a model for the processes in the ultrathin postcursor film without these limitations, although here we limit ourselves to developing the theory at the level of the KMC and solely discuss how to extend it to incorporate the influence of the liquid diffusion over the surface. Such a DDFT model describes the coupled dynamics of the density fields of the liquid ρ l and the nanoparticles ρ n . The densities ρ l and ρ n are defined as the probabilities of finding a given lattice site on the surface to be occupied by a film of liquid or by a nanoparticle, respectively. Note that the probability densities correspond to number densities as we use the lattice spacing σ = 1 as our unit of length.\n\nTo develop the DDFT, one must first derive the underlying free energy functional F [ ρ l , ρ n ] , and secondly, devise dynamical equations for both density fields that account for the conserved and the non-conserved aspects of their dynamics, i.e., transport and phase change processes, respectively. For a system governed by the hamiltonian (3), we may construct a mean-field (Bragg-Williams) approximation for the free energy of the system [78, 84] which contains an entropic contribution and contributions from the interactions between the different species (nanoparticles and liquid). 
The free energy is a semi-grand free energy, since the liquid is treated grand canonically (it is coupled to a reservoir with chemical potential µ ), whereas the nanoparticles are treated in the", - "page_start": 13, - "page_end": 13, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [81] A. J. Archer and M. Rauscher, 'Dynamical density functional theory for interacting brownian particles: Stochastic or deterministic?' J. Phys. A-Math. Gen. 37 , 9325-9333 (2004).\n - [82] A. J. Archer and R. Evans, 'Dynamical density functional theory and its application to spinodal decomposition,' J. Chem. Phys. 121 , 4246-4254 (2004).\n - [83] P. A. Monson, 'Mean field kinetic theory for a lattice gas model of fluids confined in porous materials,' J. Chem. Phys. 128 , 084701 (2008).\n - [84] P. M. Chaikin and T. C. Lubensky, Principles of condensed matter physics , Cambridge University Press (1997).\n - [85] J. S. Langer, 'An introduction to the kinetics of first-order phase transitions,' in C. Godreche, editor, 'Solids far from Equilibrium,' pages 297-363, Cambridge University Press (1992).\n - [86] M. A. Spaid and G. M. Homsy, 'Stability of Newtonian and viscoelastic dynamic contact lines,' Phys. Fluids 8 , 460-478 (1996).\n - [87] U. Thiele and E. Knobloch, 'Front and back instability of a liquid film on a slightly inclined plate,' Phys. Fluids 15 , 892-907 (2003).\n - [88] M. R. E. Warner, R. V. Craster, and O. K. Matar, 'Surface patterning via evaporation of ultrathin films containing nanoparticles,' J. Colloid Interface Sci. 267 , 92-110 (2003).\n - [89] O. K. Matar, R. V. Craster, and K. Sefiane, 'Dynamic spreading of droplets containing nanoparticles,' Phys. Rev. E 76 , 056315 (2007).\n - [90] J. J. Zhou, B. Dupuy, A. L. Bertozzi, and A. E. Hosoi, 'Theory for shock dynamics in particle-laden thin films,' Phys. Rev. Lett. 94 , 117803 (2005).\n - [91] B. P. Cook, A. L. Bertozzi, and A. E. Hosoi, 'Shock solutions for particle-laden thin films,' SIAM J. Appl. Math. 
68 , 760-783 (2008).\n - [92] R. V. Craster, O. K. Matar, and K. Sefiane, 'Pinning, retraction, and terracing of evaporating droplets containing nanoparticles,' Langmuir (2009), online available.\n - [93] D. Quemada, 'Rheology of concentrated disperse systems and minimum energy-dissipation principle I. Viscosity-concentration relationship,' Rheol. Acta 16 , 82-94 (1977).\n - [94] D. Quemada and C. Berli, 'Energy of interaction in colloids and its implications in rheological modeling,' Adv. Colloid Interface Sci. 98 , 51-85 (2002).\n - [95] J. J. Stickel and R. L. Powell, 'Fluid mechanics and rheology of dense suspensions,' Annu. Rev. Fluid Mech. 37 , 129-149 (2005).\n - [96] J. K. G. Dhont, An Introduction to Dynamics of Colloids , Elsevier, Amsterdam (1996).", - "page_start": 30, - "page_end": 30, - "source_file": "1001.2669.pdf" - }, - { - "text": "FIG. 5: (Colour online) Density profiles for the situation where the substrate is covered by nanoparticles with average density ρ av n = 0 . 3 and with the liquid excluded from the region y < 0 . The top row shows the nanoparticle density profiles and bottom row the corresponding liquid density profiles at the times t/t l = 1000 (left), 10000 (middle) and 30000 (right), where t l = 1 /kTM nc l σ 2 . The parameters are kT/ε ll = 0 . 8 , ε nl /ε ll = 0 . 6 , ε nn = 0 , α = 0 . 2 M nc l σ 4 , M c l = 0 , ρ l ( t = 0) = 0 . 9 ± ξ (where ξ represents white noise of amplitude 0.05) and ( µ -µ coex ) /kT = -0 . 78 .\n\n\n\nThis theory allows us to study the time evolution of the evaporating film of nanoparticle suspension without some of the restrictions of the kinetic Monte Carlo model. Here, however, we illustrate its application in similar parameter regimes as used above for the KMC. We focus on two examples: (i) the spinodal dewetting of a initially flat film of nanoparticle suspension characterised by constant ρ l and ρ n (Fig. 
4); and (ii) the retraction of a dewetting front that is unstable with respect to a fingering instability (Fig. 5).\n\nFig. 4 presents two pairs of snapshots from a purely evaporative dewetting process deep inside the parameter region of the phase diagram where spinodal dewetting occurs. For small times the film becomes unstable showing a typical spinodal labyrinthine pattern with a typical wavelength. The nanoparticles concentrate where the remaining liquid is situated. However, they are 'slow' in their reaction: when ρ l already takes values in the range 0.08 - 0.83, the nanoparticle concentration has only deviated by about 25% from its initial value. The film thins strongly forming many", - "page_start": 16, - "page_end": 16, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed9.pdf", - "query": "What is AgMERRA ?", - "target_page": 2, - "target_passage": " historical daily weather data (1986–2005) are from the AgMERRA dataset. AgMERRA is a post-processing of the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) data. The dataset is proved to be suitable for agricultural modelling and features consistent, daily time-series data", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "premises within the complex. The agreement is subject to the implementation of proposed gaming law reforms and a tax structure acceptable to the Company, and obtaining required planning and other approvals.\n\nMacau. In connection with the Company's pending joint venture in Macau (see Note 1), the Company has committed to invest up to $280 million in the entity in the form of capital contributions and shareholder loans.\n\nNew York Racing Association. The Company has an understanding with the New York Racing Association ('NYRA') to manage video lottery terminals ('VLTs') at NYRA's Aqueduct horseracing facility in metropolitan New York. 
The Company would assist in the development of the facility, including providing project financing, and would manage the facility for a fee. Work was halted on the VLT facility in August 2003 pending the outcome of an investigation of certain aspects of NYRA's operations by Federal prosecutors. In December 2003, NYRA reached agreement with the Justice Department whereby NYRA was indicted with prosecution deferred. NYRA agreed to pay a fine and the indictment will be dismissed with prejudice upon NYRA implementing certain reforms and otherwise complying with the terms of the agreement. The Company's participation is subject to a definitive agreement, regulatory approvals and certain legislative changes by the State of New York.\n\nThe Residences at MGM Grand. In July 2004, the venture obtained construction financing for up to $210 million for the development of the first tower. The Company has provided a guaranty for up to 50% of the interest and principal payment obligations on the construction financing as well as a joint and several completion guaranty with its partners. The Company recorded the value of the guaranty obligation, approximately $2 million, in other long-term liabilities.\n\nOther Guarantees. The Company is party to various guarantee contracts in the normal course of business, which are generally supported by letters of credit issued by financial institutions. The Company's Senior Credit Facility limits the amount of letters of credit that can be issued to $200 million, and the amount of available borrowings under the Senior Credit Facility is reduced by any outstanding letters of credit. At December 31, 2004, the Company had provided a $50 million letter of credit to support the Economic Development Corporation of the City of Detroit bonds referred to above, which are a liability of the Company.\n\nLitigation. The Company is a party to various legal proceedings, most of which relate to routine matters incidental to its business. 
Management does not believe that the outcome of such proceedings will have a material adverse effect on the Company's financial position or results of operations.", - "page_start": 71, - "page_end": 71, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "- /SM590000 The following results can occur if the background copy bandwidth is set too high compared to the MM/GM intercluster link capacity:", - "page_start": 565, - "page_end": 565, - "source_file": "sg247938.pdf" - }, - { - "text": "- 7. The administrator creates MM, GM, and GM with Change Volume relationships.", - "page_start": 576, - "page_end": 576, - "source_file": "sg247938.pdf" - }, - { - "text": "\n\n\n\nUnconcerned by a Chesapeake drilling rig, antelope continue their daily routines in southeastern Wyoming's Powder River Basin where the company is developing the promising Niobrara Play.", - "page_start": 18, - "page_end": 18, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "- - The background copy I/Os can back up on the MM/GM intercluster link.", - "page_start": 565, - "page_end": 565, - "source_file": "sg247938.pdf" - }, - { - "text": "- 3. 
To manage multiple MM/GM relationships as one entity, the relationships can be made part of a MM/GM Consistency Group to ensure data consistency across multiple MM/GM relationships, or for ease of management.", - "page_start": 562, - "page_end": 562, - "source_file": "sg247938.pdf" - }, - { - "text": "- /SM590000 dumperrlog", - "page_start": 747, - "page_end": 747, - "source_file": "sg247938.pdf" - }, - { - "text": "- /SM590000 Intercluster and intracluster Global Mirror can be used concurrently, but not for the same volume.", - "page_start": 547, - "page_end": 547, - "source_file": "sg247938.pdf" - }, - { - "text": "- (i) Challenger Gold Operations Pty Ltd changed its name from Dominion Gold Operations Pty Ltd on 26 March 2013.\n - (ii) Quadrio Resources Limited was sold by the Group during the year.\n - (iii) Kingsgate Treasury Pty Ltd changed its name from Yilgarn Metals Exploration Pty Ltd on 29 November 2012.\n - (iv) Akara Mining Limited changed its name to Akara Resource Public Company Limited on 29 August 2013.\n\n", - "page_start": 95, - "page_end": 95, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "- Lee, Timothy B. (22 August 2014). \"Will artificial intelligence destroy humanity? Here are 5 reasons not to worry\" (https://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-worr y-about-super-intelligent-computers-taking). Vox . Archived (https://web.archive.org/web/201 51030092203/http://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-worry-about-s uper-intelligent-computers-taking) from the original on 30 October 2015. Retrieved 30 October 2015.\n - Lenat, Douglas; Guha, R. V. (1989). Building Large Knowledge-Based Systems . AddisonWesley. ISBN 978-0-2015-1752-1.\n - Lighthill, James (1973). \"Artificial Intelligence: A General Survey\". Artificial Intelligence: a paper symposium . 
Science Research Council.\n - Lipartito, Kenneth (6 January 2011), The Narrative and the Algorithm: Genres of Credit Reporting from the Nineteenth Century to Today (https://mpra.ub.uni-muenchen.de/28142/1/ MPRA\\_paper\\_28142.pdf) (PDF) (Unpublished manuscript), doi:10.2139/ssrn.1736283 (http s://doi.org/10.2139%2Fssrn.1736283), S2CID 166742927 (https://api.semanticscholar.org/C orpusID:166742927), archived (https://ghostarchive.org/archive/20221009/https://mpra.ub.u ni-muenchen.de/28142/1/MPRA\\_paper\\_28142.pdf) (PDF) from the original on 9 October 2022\n - Lohr, Steve (2017). \"Robots Will Take Jobs, but Not as Fast as Some Fear, New Report Says\" (https://www.nytimes.com/2017/01/12/technology/robots-will-take-jobs-but-not-as-fast-as-so me-fear-new-report-says.html). The New York Times . Archived (https://web.archive.org/web/ 20180114073704/https://www.nytimes.com/2017/01/12/technology/robots-will-take-jobs-butnot-as-fast-as-some-fear-new-report-says.html) from the original on 14 January 2018. Retrieved 13 January 2018.\n - Lungarella, M.; Metta, G.; Pfeifer, R.; Sandini, G. (2003). \"Developmental robotics: a survey\". Connection Science . 15 (4): 151-190. CiteSeerX 10.1.1.83.7615 (https://citeseerx.ist.psu.ed u/viewdoc/summary?doi=10.1.1.83.7615). doi:10.1080/09540090310001655110 (https://doi. org/10.1080%2F09540090310001655110). S2CID 1452734 (https://api.semanticscholar.or g/CorpusID:1452734).\n - \"Machine Ethics\" (https://web.archive.org/web/20141129044821/http://www.aaai.org/Library/Sy mposia/Fall/fs05-06). aaai.org . Archived from the original (http://www.aaai.org/Library/Symp osia/Fall/fs05-06) on 29 November 2014.\n - Madrigal, Alexis C. (27 February 2015). \"The case against killer robots, from a guy actually working on artificial intelligence\" (https://www.hrw.org/report/2012/11/19/losing-humanity/cas e-against-killer-robots). Fusion.net . 
Archived (https://web.archive.org/web/20160204175716/ http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai) from the original on 4 February 2016. Retrieved 31 January 2016.\n - Mahdawi, Arwa (26 June 2017). \"What jobs will still be around in 20 years? Read this to prepare your future\" (https://www.theguardian.com/us-news/2017/jun/26/jobs-future-automation-robo ts-skills-creative-health). The Guardian . Archived (https://web.archive.org/web/20180114021 804/https://www.theguardian.com/us-news/2017/jun/26/jobs-future-automation-robots-skillscreative-health) from the original on 14 January 2018. Retrieved 13 January 2018.\n - Maker, Meg Houston (2006), AI@50: AI Past, Present, Future (https://web.archive.org/web/200 81008120238/http://www.engagingexperience.com/2006/07/ai50\\_ai\\_past\\_pr.html), Dartmouth College, archived from the original (http://www.engagingexperience.com/2006/0 7/ai50\\_ai\\_past\\_pr.html) on 8 October 2008, retrieved 16 October 2008", - "page_start": 59, - "page_end": 59, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed9.pdf", - "query": "In 2018, what was the global proportion of maize grown in the US ?", - "target_page": 5, - "target_passage": "According to statistics in 2018, the gross maize yield in the top 5 countries is almost 80% of the total maize yield of the whole world. The United States accounts for more than 32%", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "\n\nFigure 5. (continued)\n\n\n\nby 1.5 °C. According to the simulation results, comparing to 1986-2005, the maize yield in the United States, China and Brazil would decrease under global warming by 2.0 °C; the yield loss rate would reach more than 24% in Brazil; the United States would decrease by 13.3%; China would decrease by 11.5%. 
However, there would be increasing trends in Argentina and Mexico; the maize yield would increase by 16.8% in Argentina; the yield increasing rate would exceed 40% in Mexico. Overall, the gross maize yield in the top 5 countries would decrease by 11.4% under global warming by 2.0 °C. By comparing the maize production in different countries, it can be found that the reduction trend of total maize production in the top five countries is more obvious, especially under the scenario of global warming by 2.0 °C, the global food trade and food security may face greater risks.\n\nFrom the view of continents, there are different trends of maize yield changes in the 6 continents (except Antarctica) under global warming by 1.5 °C and 2.0 °C (Fig. 6). From the results of simulated by CRESE-maize under global warming by 1.5 °C, the maize yield in 3 continents would decline apparently, including South America, Europe and Oceania; the average yield loss rates are respectively - 15.6%, - 12.4%, - 36.4%; in the other 3 continents the average maize yield would go up, especially in Africa more than 30%; the increasing trends are slight in Asia and North America, in which the yield increasing rates are separately 0.7% and 0.4%. However, the yield change trends simulated by IPSL-CM5A-LR and GFDL-ESM2M models are different in 2 continents, including Asia and North America. From the results of simulated by CRESE-maize under global warming by 2.0 °C, the maize yield in 5 continents would decline apparently, except Africa; the average yield loss rates are respectively - 7.9% (Asia), - 14.1% (North America), - 9.3% (South America), - 22.5% (Europe), - 25.5% (Oceania); only in Africa the average maize yield would go up also more than 30%; meanwhile the yield change trends simulated by IPSL-CM5A-LR and GFDL-ESM2M models are the same in each continent. 
Comparing the two global warming scenarios, there would be apparent variations in maize yield in Asia and North America, in which the annual maize yield accounts for a great proportion of the whole world, leading to a much more serious yield loss under global warming by 2.0 °C than that under global warming by 1.5 °C. /T\\_here would be an obvious crisis of food supply under global warming by 2.0 °C with the increasing population in the future. So, it is important to make full preparation for adaptation to climate change in the whole world.\n\nVol.:(0123456789)", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed9.pdf" - }, - { - "text": "\n\nFigure 5. Yield loss rates on maize in top 20 countries under global warming by 1.5 °C and 2.0 °C.\n\n\n\nthat maize yield would decrease severely. For the whole world more mitigation and adaptation actions should be taken from now on. Food security would be a signi/ficant challenge in this century.\n\nYield change of maize in main countries. /T\\_here are huge di/fferences in impacts on maize yield under climate change, which would in/fluence the food crisis in di/fferent regions. /T\\_here are 159 countries in the whole world which plant maize. /T\\_he gross yield of maize the top 20 countries accounts for more than 90% of the total yield in the 159 countries. So, the changes in the top 20 countries under future scenarios would in/fluence the food security of the whole world (Fig. 5). From the results of simulated by CRESE-maize under global warming by 1.5 °C, there would be 75 countries facing with yield loss of maize; the mean yield loss rate would become 33.5%. /T\\_here would be 84 countries experiencing yield increases. Overall, the global maize yield would slightly increase. Under global warming by 2.0 °C, there would be 82 countries facing with yield loss of maize, for which the mean yield loss rate is approximate to that under global warming by 1.5 °C. 
/T\\_here would be 77 countries experiencing yield increase; however, the mean yield increase is apparently smaller than that under global warming by 1.5 °C. Generally, the global maize yield would decrease. /T\\_he results show that the adverse e/ffect of warming up 2.0 °C on global maize production is far greater than warming up 1.5 °C. It is important to take actions to develop forward-looking adaptation measures to cope with future climate change.\n\nAccording to statistics in 2018, the gross maize yield in the top 5 countries is almost 80% of the total maize yield of the whole world. /T\\_he United States accounts for more than 32%; China accounts for about 24%; Brazil, Argentina and Mexico account for about 23%. /T\\_he /fluctuation of maize production in these /five top countries will have a signi/ficant impact on the global maize trade. Based on the simulation results, comparing to 1986-2005, the maize yield in China, Brazil and Argentina would decrease under global warming by 1.5 °C; the yield loss rate would reach more than 20% in Brazil; Argentina would decrease by 14.7%; China would decrease by 3.7%. However, there would be increasing trends in the United States and Mexico; the change in the United States would not be signi/ficant and the maize yield would increase by 0.5%; the yield increasing rate would exceed 50% in Mexico. Overall, the gross maize yield in the top 5 countries would decrease by 2% under global warming\n\nVol:.(1234567890)", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed9.pdf" - }, - { - "text": "\n\nFigure 2. (continued)\n\n\n\nis 16.9% in which the temperature would go up more than 3.0 °C, most located in the high latitude regions of Northern Hemisphere; the area is rarely in which the temperature would go up between 0 and 1.0 °C.\n\n/T\\_here are apparent trends of humidi/fication in most regions under global warming by 1.5 °C and 2.0 °C; but the drought risk also should be taken seriously in the other regions. 
Under global warming by 1.5 °C the area is 73.6% of the whole world in which the precipitation would increase, most located in the Northern Hemisphere; the area is 53.7% of the whole world in which the precipitation would increase by less than 50 mm; however, the area is 26.4% of whole world in which the rainfall would decrease, mainly located in the Southern Hemisphere and the middle regions of Northern Hemisphere. /T\\_he distribution of precipitation under global warming by 2.0 °C is similar with the situation under global warming by 1.5 °C. /T\\_he drought-threatened area would increase by 28.5% under global warming by 2.0 °C, especially in the middle and low latitude of the Northern Hemisphere; the area would expand to 26%, in which the precipitation increases more than 50 mm. In other words, the extreme rainfall events (such as drought, rainstorm) under global warming by 2.0 °C would be more serious than those under global warming by 1.5 °C, which is what we should be pay more attention to.\n\nYield change of maize under global warming by ͷ.ͻ °C and ͸.Ͷ °C. Maize production is a/ffected by climate change apparently. According to the simulation results of CERES-maize, the yield of maize would decrease in the worldwide relative to 1986-2005 under global warming by 2.0 °C; it would increase little under global warming by 1.5 °C. /T\\_he distributions of maize yield loss under the two scenarios are similar to each other, mostly located in the middle and low latitude, which are the main regions for maize planting in the world. /T\\_he loss risk of maize under global warming by 2.0 °C is much more serious than that under global warming of 1.5 °C. 
However, there are increasing potentials of maize yield in many regions, nearly half of the whole maize planting area in the world, in which the climate situation would become more proper for maize under global\n\nVol.:(0123456789)", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed9.pdf" - }, - { - "text": "\n\nFigure 3. Distribution of yield loss rate on maize in the world under global warming by 1.5 °C (up: IPSLCM5A-LR model, RCP 2.6; down: GFDL-ESM2M model, RCP 4.5). /T\\_he /figure has been generated using ArcGIS 10.2 and Natural Earth-Free vector and raster map data @ https:// natur alear thdata. com.\n\n\n\nwarming by 1.5 °C and 2.0 °C. So, there are apparent challenges and opportunities for maize production in the whole world under climate change. We should grasp the opportunities and expand the yield increasing potentials; meanwhile, the threat of maize yield loss should be controlled and compressed to the minimum in the high-risk regions.\n\nFrom the results simulated by IPSL-CM5A-LR model under RCP 2.6 scenario, the gross yield of maize in the world between 2020 and 2039 would decrease by 6.8% relative to 1986-2005. /T\\_he area is 37.7% of the whole maize planting regions in the world, in which the yield loss would be less than 50%, mainly located in the low and middle latitude of South America and Asia, and the middle latitude of Africa and North America. /T\\_he area is 16.4% of the whole maize planting regions, in which the yield loss would be more than 50%, mainly located in the low latitude of South America and the middle latitude of Asia and Europe. /T\\_he area is 45.8% of the whole maize planting regions, in which the yield would increase, mainly located in the low latitude of Africa, Asia and North America, the high latitude of Europe. From the results simulated by the GFDL-ESM2M model under RCP 4.5 scenario, the gross yield of maize in the world between 2041 and 2060 would increase by 7.2% relative to 1986-2005. 
/T\\_here are opposite trends of maize yield under global warming by 1.5 °C, which are simulated by di/fferent global climate models. However, the spatial distributions of maize yield change are similar to each other. /T\\_he di/fference is that the regions of high yield loss rate are decreasing, and the regions of yield increasing are going up. In a comprehensive perspective, under global warming by 1.5 °C, maize yield in the whole world would increase 0.18% relative to 1986-2005 (Fig. 3). According to Paris Agreement, all countries should do their best to limit the global warming by 1.5 °C until the end of 21 century. If that objective could be accomplished, gross maize production of the whole world would not be in/fluenced so much by climate change, but the food\n\nVol:.(1234567890)", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed9.pdf" - }, - { - "text": "\n\nFigure 4. Distribution of yield loss rates on maize in the world under global warming by 2.0 °C (up: NorESM1-M model, RCP 4.5; down: GFDL-ESM2M model, RCP 6.0). /T\\_he /figure has been generated using ArcGIS 10.2 and Natural Earth-Free vector and raster map data @ https:// natur alear thdata. com.\n\n\n\nsecurity of the whole world would still be attacked violently. /T\\_here are huge di/fferences among the continents; South America, Asia and the Middle East are threatened seriously by yield loss seriously under global warming by 1.5 °C. /T\\_he changes in maize yield in di/fferent regions would in/fluence the maize price and food trades. So, it should be cautious to cope with the maize changes under global warming by 1.5 °C.\n\nFrom the results of simulated by the NorESM1-M model under RCP 4.5 scenario, the gross yield of maize in the world between 2060 and 2079 would decrease by 18.7% relative to 1986-2005. /T\\_he area is 41.7% of the whole maize planting regions in the world, in which the yield loss would be less than 50%. 
/T\\_he area is 15.6% of the whole maize planting regions, in which the yield loss would be more than 50%. /T\\_he area is 42.7% of the whole maize planting regions, in which the yield would increase. /T\\_he distribution of maize yield change is similar to that under global warming by 1.5 °C. From the results simulated by the GFDL-ESM2M model under RCP 6.0 scenario, the gross yield of maize in the world between 2065 and 2084 would decrease by 3% relative to 1986-2005. Comparing to the results of the NorESM1-M model, the regions of high yield loss rate are increasing, and the regions of yield increases are going down; but the per unit area yields are increasing quickly in the regions of yield increasing. So, the gross maize yield in the whole world simulated by the GFDL-ESM2M model is more than the NorESM1-M model. In a comprehensive perspective, under global warming by 2.0 °C, maize yield in the whole world would decrease 10.8% relative to 1986-2005 (Fig. 4). Compared to the results under global warming by 1.5 °C, the risk of yield loss is much higher. According to the new results from the Emission Gap Report in 2019, the target of global warming by 1.5 °C would not be implemented according to the reality of mitigation actions; the chance become much bigger for all countries in the world, who will be facing the severe challenge of global temperature rise of 2.0 °C or even higher (3.0 °C or 4.0 °C) in the future. So it is critical to cope with the serious condition\n\nVol.:(0123456789)", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed9.pdf" - }, - { - "text": "First maize yields across the world during the historical period 1986-2005 were simulated at the 0.5° × 0.5° grid scale with two main production systems, including Spring maize and Summer maize. Historical national maize production is aggregated from simulated gridded yield and weighted by grid cell maize areas in 2000 from the gridded global dataset by combining two data products 47 . 
Second, genetic parameters of speci/fic cultivars of maize from previous works were adopted for the initial parameters; model parameters related to crop genotype characteristics were calibrated and tuned following the method in Xiong et al. 52 , in which the simulated yields from 1986-2005 were comparable to the statistical data. /T\\_hird, maize yields across the world were simulated under global warming by 1.5 °C and 2.0 °C. Finally, global and national maize yields were aggregated from gridded values; changes in national and global yields under global warming by 1.5 °C and 2.0 °C were calculated, comparing maize yield average for 1986-2005.\n\nSimulation of market price using GTAP. /T\\_he yield changes for maize from the DSSAT models under 1.5 °C and 2.0 °C temperature increase are used to carry out simulations using competitive market for changes in production, market price, and self-su/fficiency ratio of maize at national and global levels 53,54 . For this study, we use a comparative static analysis approach to simulate the impact of climate changes on the prices and trade of the major food crops under current economic conditions. Utilizing current economic conditions has the advantage of minimizing assumptions and model uncertainties related to future economic conditions 55,56 .\n\n/T\\_he original GTAP database doesn't include maize as a separate sector, rather it is combined with other coarse grains to form an 'other coarse grain' sector. For this study, we updated the GTAP database by splitting maize from the original sector in the database, design an appropriate sectoral and regional aggregation scheme to the original database. /T\\_he detailed method is given as follows:\n\nFirst, we improved the database by splitting maize from the existing sector 'other coarse grain', following similar work using GTAP 57-59 based on the routines from the Splitcom method 60 . 
In this procedure, the old /flows of data both at national and trade levels are allocated between the new /flows using weights. /T\\_he national weights include the division of each unsplit user's use of the original split commodity among the new commodities; the division of unsplit inputs to the original industry between the new industries; the splitting of new industry's use of each new commodity. Maize use is mainly shared between feed, food, processing and others (seed, waste, etc.).\n\nTrade shares allocate the original slice of the split commodity into the new commodity for all elements of basic price value, tax, and margin. Finally, we used the RAS method for balancing the newly created database. /T\\_he values for the national shares matrix were obtained from FAOSTAT. /T\\_he trade shares matrix was calculated based on the data from UN Comtrade Database.\n\nSecond, our sectoral aggregation scheme for GTAP ensures that all the competing and complimenting sectors for maize are present in the most disaggregated form. For example, for maize, other crops compete for inputs of production and both livestock and households are major users of maize. For regional aggregation, we kept the details for all the main producing, consuming, and trading regions, for maize.\n\nVol.:(0123456789)", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed9.pdf" - }, - { - "text": "## OPEN\n\n\n\n## The impact of ͷ.ͻ °C and ͸.Ͷ °C global warming on global maize production and trade\n\nKuo Li ͷ * , Jie Pan ͷ , Wei Xiong ͸ , Wei Xie ͹ & Tariq Ali ͹\n\nClimate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world. Future climate scenario data was simulated by ͻ climate models recommended by ISI-MIP under ͺ RCP scenarios, in which the approximate scenarios with global warming by ͷ.ͻ °C and ͸ °C were selected. 
Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by ͷ.ͻ °C and ͸.Ͷ °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that, the risk of maize yield reduction under ͸.Ͷ °C scenario was much more serious than ͷ.ͻ °C scenario; the ratios of yield changes were separately Ͷ.ͷ;% and - ͷͶ.;% under ͷ.ͻ °C and ͸.Ͷ °C scenarios. The reduction trend of total maize production is obvious in the top five countries and the main producing regions of the world, especially under the ͸.Ͷ °C scenario. The market price of maize would increase by around Ͷ.ͽ% and ͹.ͺ% under ͷ.ͻ °C and ͸.Ͷ °C scenarios. With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.\n\nIn the past hundred years, the global climate has experienced great changes 1-4 . According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming 5 . Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health 6-10 . Global warming has gradually changed from a scienti/fic issue to a major social issue of common concern to governments and people of all countries 11-13 . In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris 14 . Paris Agreement has indicated and pursue e/fforts to limit the temperature increase to 1.5 °C above pre-industrial levels.", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "Figure 7. 
Price change on maize in main continents under global warming by 1.5 °C and 2.0 °C.\n\n\n\nFigure 8. Changes in Self-su/fficiency ratio of maize in main countries under global warming by 1.5 °C and 2.0 °C.\n\n\n\nmeantime, the huge di/fferences in yield changes in di/fferent regions provide a small chance for the world, especially under global warming by 1.5 °C. In the near future, if the global temperature can be e/ffectively controlled under 1.5 °C warming scenario, there would be an increase in the potential for maize yield in the worldwide. All regions and countries should take actions to reduce the yield loss risk. For the yield-increasing regions, the potentials of climate resources should be fully utilized to guarantee maize yield under future scenarios; for the yield-reducing regions, the targeted adaptation actions should be taken in advance under global warming by 1.5 °C and 2.0 °C.\n\nMeanwhile, the risk of price /fluctuations caused by global corn trade due to future climate change should be paid more attention to, especially for developing and undeveloped countries. In the view of supply and demand, the population would go up quickly in the next 30 years; the demand for maize would increase hugely; however, the supply of maize would go down in the future, especially under global warming by 2.0 °C; it would intensify the contradiction between supply and demand, which would threaten the food security and sustainable development in the whole world.\n\nIn this study, 5 climate models are selected, which are recommended by ISI-MIP (/T\\_he Inter-Sectoral Impact Model Intercomparison Project); compared with other climate models, the /five models could more e/ffectively support impact assessment in di/fferent sectors and provide more reliable results. 
Based on the simulation results\n\nVol.:(0123456789)", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed9.pdf" - }, - { - "text": "Faced with climate change, agriculture is the most vulnerable sector, which will experience the largest negative impacts from climatic change and lead to more serious food security in the whole world 15-20 . Meanwhile, global production losses might lead to price shocks and trigger export restrictions 21-24 ; an increasingly interconnected global food system 25,26 and the projected fragility of the global food production system due to climatic change further exacerbate the threats to food security in the worldwide 27-29 . So, the impacts of climate changes on crop yields and prices have been of highly concerned. Numerous studies have revealed that the warming trend has negative impact on crop yields and global trade in most regions all over the world 30-32 . /T\\_here are three main methods for impacts assessment of climate change on crops, including environment-controlled experiments, statistical regression analysis and model simulations 17,33 . Environment-controlled experiments are designed to observe the in/fluence of climate factors on crops, such as drought, /flood, heat stress, cold damage, elevated CO 2 concentration, through which the impact mechanism of climate change on crops would be revealed and established 23,34,35 . Crop models and trade models are applied to simulate the response of crop yield and market price under climate change, based on process-based crop growth in daily time steps, either in selected /field sites or in selected regions 36-39 . /T\\_he statistical regression analysis usually explores the relationship between historical crop yields and meteorological records in di/fferent sites or counties to establish regression functions for crop responses predictions 40-43 . 
/T\\_hese researches have documented that crop yield and price would be threatened much more seriously by global warming, especially due to the increasing trend of frequency and intensity of climate extreme events in the future.\n\nͷ Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing ͷͶͶͶ;ͷ, China. ͸ International Maize and Wheat Improvement Center, Texcoco, Mexico. ͹ Peking University, Beijing, China. * email: hqlk͸ͶͶͶ@ͷͼ͹.com\n\nglyph", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "\n\nFigure 6. Yield loss rates on maize in 6 continents under global warming by 1.5 °C and 2.0 °C.\n\n\n\nMarket price of maize in main countries. In this study, we elaborate on the endogenous response of our economic models. /T\\_his response can be theoretically elaborated as: due to the e/ffect of climate change on yield reduction (improvement), the supply curve moves le/f\\_tward (rightward), reducing (increasing) production and raising (lowering) prices. In response, the consumers decrease (increase) their consumption of more expensive (cheaper) crops and shi/f\\_ting to other (increase the use of the same) crops. Producers, at the same time, respond by changing farm-level management practices and increasing (decreasing) the amount of acreage under these crops. At a global scale, the reallocation of production and consumption through international trade further alters climate change impacts on global agriculture. /T\\_his also alters the self-su/fficiency ratios of each country/ region due to climate change.\n\nIn response to production changes, the price of each commodity changes under both scenarios. At the global level, the market price for maize would increase by 0.7% and 3.4% under 1.5 °C scenario and 2.0 °C scenario, respectively, which would vary quite largely among di/fferent countries and regions under both climate change scenarios (Fig. 7). 
Particularly, the market price would increase by around 22% and 27% in Iran under 2.0 °C scenario and 1.5 °C scenario, respectively. Iran is also the region where the highest yield reduction is observed due to climate change. Market prices for maize in India, Mexico, Russia, South Africa and the Rest of Africa would decrease signi/ficantly under both scenarios, as their yields improve due to climate e/ffects. Along with the domestic production, the climate change will also induce changes in international trade of maize, resulting in changing levels of self-su/fficiency ratios (SSR) for each country/region. By SSR, we mean the ratio of domestically produced commodity, to the sum of net imports and domestic production. In our scenario analysis, generally, the countries that face positive e/ffects on yields and/or are relatively less dependent on imports, are positively (less negatively) a/ffected by climate change. For example, maize SSR for Ukraine, India, Russia and Mexico would improve under both scenarios (Fig. 8). Whereas the self-su/fficiency ratios of maize for Southeast Asia, Bangladesh and Iran will worsen under both scenarios. China's SSR for maize stays almost similar to the level as the baseline.\n\n## Discussion and conclusion\n\nDiscussion. Our analysis highlights the e/ffects of climate change on global- and regional-speci/fic maize yields and the associated economic consequences in 1.5 °C and 2.0 °C -warming scenarios. We /find that the reduction risk of maize yield under global warming by 2.0 °C is much more serious than that under global warming by 1.5 °C. On the one hand, the larger the temperature rise, the greater the evapotranspiration would be. Although the precipitation is also increasing, the evapotranspiration would become more intense. /T\\_he limitation of water supply for maize growth leads to the decline of yield. 
On the other hand, relative to global warming by 1.5 °C, maize production would be faced with more serious and frequent extreme climate events, such as drought and heat waves, which would increase the risk of corn yield reduction under global warming by 2.0 °C. In the\n\nVol:.(1234567890)", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed9.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed9.pdf", - "query": "What would be the price increase resulting from maize production changes due to 1.5°C and 2°C global temperature increase ?", - "target_page": 10, - "target_passage": "In response to production changes, the price of each commodity changes under both scenarios. At the global level, the market price for maize would increase by 0.7% and 3.4% under 1.5 °C scenario and 2.0 °C scenario, respectively", - "chunk_present": { - "presence": true, - "index": 8 - } - }, - "top_chunk": [ - { - "text": "\n\nFigure 2. (continued)\n\n\n\nis 16.9% in which the temperature would go up more than 3.0 °C, most located in the high latitude regions of Northern Hemisphere; the area is rarely in which the temperature would go up between 0 and 1.0 °C.\n\n/T\\_here are apparent trends of humidi/fication in most regions under global warming by 1.5 °C and 2.0 °C; but the drought risk also should be taken seriously in the other regions. Under global warming by 1.5 °C the area is 73.6% of the whole world in which the precipitation would increase, most located in the Northern Hemisphere; the area is 53.7% of the whole world in which the precipitation would increase by less than 50 mm; however, the area is 26.4% of whole world in which the rainfall would decrease, mainly located in the Southern Hemisphere and the middle regions of Northern Hemisphere. /T\\_he distribution of precipitation under global warming by 2.0 °C is similar with the situation under global warming by 1.5 °C. 
/T\\_he drought-threatened area would increase by 28.5% under global warming by 2.0 °C, especially in the middle and low latitude of the Northern Hemisphere; the area would expand to 26%, in which the precipitation increases more than 50 mm. In other words, the extreme rainfall events (such as drought, rainstorm) under global warming by 2.0 °C would be more serious than those under global warming by 1.5 °C, which is what we should be pay more attention to.\n\nYield change of maize under global warming by ͷ.ͻ °C and ͸.Ͷ °C. Maize production is a/ffected by climate change apparently. According to the simulation results of CERES-maize, the yield of maize would decrease in the worldwide relative to 1986-2005 under global warming by 2.0 °C; it would increase little under global warming by 1.5 °C. /T\\_he distributions of maize yield loss under the two scenarios are similar to each other, mostly located in the middle and low latitude, which are the main regions for maize planting in the world. /T\\_he loss risk of maize under global warming by 2.0 °C is much more serious than that under global warming of 1.5 °C. However, there are increasing potentials of maize yield in many regions, nearly half of the whole maize planting area in the world, in which the climate situation would become more proper for maize under global\n\nVol.:(0123456789)", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed9.pdf" - }, - { - "text": "## OPEN\n\n\n\n## The impact of ͷ.ͻ °C and ͸.Ͷ °C global warming on global maize production and trade\n\nKuo Li ͷ * , Jie Pan ͷ , Wei Xiong ͸ , Wei Xie ͹ & Tariq Ali ͹\n\nClimate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world. Future climate scenario data was simulated by ͻ climate models recommended by ISI-MIP under ͺ RCP scenarios, in which the approximate scenarios with global warming by ͷ.ͻ °C and ͸ °C were selected. 
Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by ͷ.ͻ °C and ͸.Ͷ °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that, the risk of maize yield reduction under ͸.Ͷ °C scenario was much more serious than ͷ.ͻ °C scenario; the ratios of yield changes were separately Ͷ.ͷ;% and - ͷͶ.;% under ͷ.ͻ °C and ͸.Ͷ °C scenarios. The reduction trend of total maize production is obvious in the top five countries and the main producing regions of the world, especially under the ͸.Ͷ °C scenario. The market price of maize would increase by around Ͷ.ͽ% and ͹.ͺ% under ͷ.ͻ °C and ͸.Ͷ °C scenarios. With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.\n\nIn the past hundred years, the global climate has experienced great changes 1-4 . According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming 5 . Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health 6-10 . Global warming has gradually changed from a scienti/fic issue to a major social issue of common concern to governments and people of all countries 11-13 . In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris 14 . Paris Agreement has indicated and pursue e/fforts to limit the temperature increase to 1.5 °C above pre-industrial levels.", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "## 3. 
Results\n\nFor a world at 2°C global warming, we present a range of outcomes to provide insight into the level of agreement between models for a particular projected change, and hence an indication of potential robustness of the projected changes for informing adaptation. We then make a comparison of impacts at global warming 1.5°C to investigate the level of impact that would be avoided by limiting global warming to different levels. Bearing in mind the uncertainty in regional climate outcomes, we address this in a number of ways. For individual realizations, we compare the impacts at different warming levels to see if they are systematically smaller at 1.5°C, even if the sign of the change is uncertain. We also compare the range of outcomes at different GWLs, to see if the regional-scale uncertainty itself increases with global warming.\n\n## (a) Climate-change impacts at 2 ° Cglobalwarming\n\nFor 2°C global warming, the ensemble-mean increase in annual daily maximum temperature was above 2°C for most of the land surface, with the exception of the Indian subcontinent, most of Australia and Antarctica (figure 2). The increase was higher still in many regions; most of North America, much of China and north Asia, northwestern South America and all of Europe. In the northern and eastern USA and much of northern and western Europe, the annual daily maximum temperature increased by over 4°C for 2°C global warming. The global mean TXx increased by more than 2°C in all ensemble members (table 5), so the maximum temperature warming more than the global annual mean is a consistent result across all projections here, as found in previous studies with other models [9] (table 5).\n\nThe different ensemble members give somewhat different results at regional scales, although there is a strong consensus on the temperature extremes examined here becoming warmer. 
In the simulations driven by SSTs and SICs from the two IPSL CMIP5 models, most of the global", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed11.pdf" - }, - { - "text": "Figure 10. Distributions of changes in run-off for mean flows simulated by the JULES ecosystem-hydrology model under the ensemble of six climate projections at 1.5 °C (blue) and 2 °C (orange) global warming. Boxes show the 25th and 75th percentile changes, whiskers show the range, circles show the four projections that do not define the ends of the range, and crosses show the ensemble means. Numbers in square brackets show the ensemble-mean flow in the baseline, in millimetres of rain equivalent.\n\n\n\nall members (figure 12). This is not the case for the precipitation and run-off results; for those quantities, there is substantial overlap in the ranges of changes at 2°C and 1.5°C, so there is not a consistent picture of how much wetter or drier the world is projected to be in this ensemble, even though it involves a single atmosphere model.\n\nFor TXx, the difference between 2°C and 1.5°C global warming is larger than the 0.5°C difference in global mean temperature across most of the land surface in all ensemble members (figure 14). Although some ensemble members simulate local temperatures to be higher at 1.5°C global warming than 2°C in some small regions, these are relatively localized and most regions are cooler at 1.5°C global warming than 2°C. In many regions, the difference is between 0.5°C and 1.0°C, but many other regions see larger differences. In several ensemble members, the difference is 1.5°C, 2°C or larger in large parts of North America, South America, Europe and China. For example, over parts of Europe, where annual maximum daily temperature was projected to increase by over 5°C for a 2°C global warming, the local increase is limited to 3-4°C for 1.5°C global warming. 
Limiting global warming by half a degree Celsius would, therefore, limit maximum temperatures by three or four times as much in those areas (figure 14).\n\nAt 1.5°C global warming, although the increases in TXx are smaller than at 2°C, these increases show similar geographical patterns as for 2°C in all ensemble members, with larger changes in continental interiors especially in the mid-latitudes (not shown).\n\nThe percentage of days exceeding the 90th percentile of daily temperature (Tx90p) also increases less at 1.5°C global warming than at 2°C (figure 15). The largest reductions are in the tropics, where the largest increase was seen at 2°C; whereas at 2°C global warming, 50% or more", - "page_start": 15, - "page_end": 15, - "source_file": "pubmed11.pdf" - }, - { - "text": "Figure 7. Price change on maize in main continents under global warming by 1.5 °C and 2.0 °C.\n\n\n\nFigure 8. Changes in self-sufficiency ratio of maize in main countries under global warming by 1.5 °C and 2.0 °C.\n\n\n\nmeantime, the huge differences in yield changes in different regions provide a small chance for the world, especially under global warming by 1.5 °C. In the near future, if the global temperature can be effectively controlled under the 1.5 °C warming scenario, there would be an increase in the potential maize yield worldwide. All regions and countries should take actions to reduce the yield loss risk. For the yield-increasing regions, the potentials of climate resources should be fully utilized to guarantee maize yield under future scenarios; for the yield-reducing regions, targeted adaptation actions should be taken in advance under global warming by 1.5 °C and 2.0 °C.\n\nMeanwhile, more attention should be paid to the risk of price fluctuations in the global corn trade caused by future climate change, especially for developing and undeveloped countries. 
In view of supply and demand, the population would go up quickly in the next 30 years; the demand for maize would increase hugely; however, the supply of maize would go down in the future, especially under global warming by 2.0 °C; this would intensify the contradiction between supply and demand, which would threaten food security and sustainable development in the whole world.\n\nIn this study, 5 climate models are selected, which are recommended by ISI-MIP (the Inter-Sectoral Impact Model Intercomparison Project); compared with other climate models, the five models could more effectively support impact assessment in different sectors and provide more reliable results. Based on the simulation results\n\nVol.:(0123456789)", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed9.pdf" - }, - { - "text": "\n\nFigure 5. Yield loss rates on maize in top 20 countries under global warming by 1.5 °C and 2.0 °C.\n\n\n\nthat maize yield would decrease severely. For the whole world more mitigation and adaptation actions should be taken from now on. Food security would be a significant challenge in this century.\n\nYield change of maize in main countries. There are huge differences in impacts on maize yield under climate change, which would influence the food crisis in different regions. There are 159 countries in the whole world which plant maize. The gross yield of maize in the top 20 countries accounts for more than 90% of the total yield in the 159 countries. So, the changes in the top 20 countries under future scenarios would influence the food security of the whole world (Fig. 5). From the results simulated by CERES-Maize under global warming by 1.5 °C, there would be 75 countries facing yield loss of maize; the mean yield loss rate would become 33.5%. There would be 84 countries experiencing yield increases. Overall, the global maize yield would slightly increase. 
Under global warming by 2.0 °C, there would be 82 countries facing yield loss of maize, for which the mean yield loss rate is approximately equal to that under global warming by 1.5 °C. There would be 77 countries experiencing yield increase; however, the mean yield increase is apparently smaller than that under global warming by 1.5 °C. Generally, the global maize yield would decrease. The results show that the adverse effect of warming up 2.0 °C on global maize production is far greater than warming up 1.5 °C. It is important to take actions to develop forward-looking adaptation measures to cope with future climate change.\n\nAccording to statistics in 2018, the gross maize yield in the top 5 countries is almost 80% of the total maize yield of the whole world. The United States accounts for more than 32%; China accounts for about 24%; Brazil, Argentina and Mexico account for about 23%. The fluctuation of maize production in these five top countries will have a significant impact on the global maize trade. Based on the simulation results, compared to 1986-2005, the maize yield in China, Brazil and Argentina would decrease under global warming by 1.5 °C; the yield loss rate would reach more than 20% in Brazil; Argentina would decrease by 14.7%; China would decrease by 3.7%. However, there would be increasing trends in the United States and Mexico; the change in the United States would not be significant and the maize yield would increase by 0.5%; the yield increasing rate would exceed 50% in Mexico. Overall, the gross maize yield in the top 5 countries would decrease by 2% under global warming", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed9.pdf" - }, - { - "text": "Figure 13. 
Global mean percentage changes relative to 1981-2010 in ( a ) precipitation over land, ( b ) mean run-off flows, ( c ) low run-off flows (10th percentile), at 2 °C and 1.5 °C global warming.\n\n\n\nthis comparison of the number of 'unprecedented' HCVI values at 1.5°C and 2°C should be treated with caution. Nevertheless, the finding that some countries see HCVI values higher at either or both 1.5°C and 2°C compared to the baseline may indicate that climate change has the potential to lead to unprecedented levels of vulnerability to food insecurity in some countries. More robustly, it can be concluded that by this metric, overall worldwide vulnerability to food insecurity generally increases with global warming, and for approximately three-quarters of countries assessed, this increase is larger at 2°C than 1.5°C.\n\nIn the ensemble mean, changes in mean, low and high flows are generally larger at 2°C global warming compared to 1.5°C (figure 20). This is often the case for both increases and decreases in flows-increasing the level of global warming magnifies the pattern of river flow changes, although not in all cases.\n\nThe range of projected mean run-off changes is larger for 2°C than 1.5°C in many basins, but this was not always the case, with many basins showing similar or smaller ranges at 2°C compared with 1.5°C. Moreover, the ranges overlap substantially, so in terms of the set of", - "page_start": 18, - "page_end": 18, - "source_file": "pubmed11.pdf" - }, - { - "text": "Figure 12. Comparison of global mean changes in climate extremes indices relative to 1981-2010 at 2 °C and 1.5 °C global warming for individual ensemble members and ensemble mean. 
( a ) Change in annual daily maximum temperature; ( b ) percentage of days with maximum temperature above 90th percentile for 1981-2010; ( c ) change in consecutive dry days; ( d ) change in annual maximum 5-day rainfall.\n\n\n\nFor precipitation, generally similar changes are seen at 1.5°C global warming as at 2°C, but smaller in magnitude (compare figures 16 and 4), suggesting that most of these changes are a response to radiatively forced climate change as opposed to internal climate variability. However, some localized changes do vary in sign between the GWLs, such as in South Australia, suggesting a possible dominance of internal variability over the global warming signal in these places.\n\nWhere Rx5day increases, the increases are projected to be larger-in some cases approximately double-at 2°C global warming than 1.5°C. Where Rx5day decreases, again the decreases are projected to be larger at 2°C global warming than 1.5°C (figure 17).\n\nOf the 122 countries assessed, 93 have smaller ensemble-mean HCVI calculated at 1.5°C global warming than at 2°C, indicating an ensemble consensus that 76% of assessed countries would see a smaller increase in vulnerability to food insecurity if global warming were limited to 1.5°C (figures 18 and 19). Conversely, 24% of countries would, by this metric, see the same or higher vulnerability to food insecurity at 1.5°C than 2°C. Of these, some are countries where HCVI is projected to be lower at 2°C global warming than in the baseline. For example, in Mali the ensemble-mean baseline HCVI of 0.83 increased slightly to 0.85 at 1.5°C then reduced to 0.81 at 2°C. In some countries, the ensemble-mean HCVI happened to be identical at both warming levels. In Chad, for example, the baseline HCVI of 0.89 increased to 0.91 at both 1.5°C and 2°C.\n\nAs noted above, four countries saw ensemble-mean HCVI values at 2°C above any seen in the baseline, and this number increased to seven at 1.5°C. 
The same four countries with 'unprecedented' HCVI values at 2°C also saw 'unprecedented' values at 1.5°C; these were Oman, Bangladesh, Mauritania and Yemen. These were joined by Myanmar, India and Cambodia as having 'unprecedented' values at 1.5°C. The role of internal climate variability in the HCVI results needs to be assessed, as does the effect of potential nonlinear interactions between the flood and drought metric. Until the reasons behind these country-specific results are understood,", - "page_start": 17, - "page_end": 17, - "source_file": "pubmed11.pdf" - }, - { - "text": "\n\nFigure 6. Yield loss rates on maize in 6 continents under global warming by 1.5 °C and 2.0 °C.\n\n\n\nMarket price of maize in main countries. In this study, we elaborate on the endogenous response of our economic models. This response can be theoretically elaborated as: due to the effect of climate change on yield reduction (improvement), the supply curve moves leftward (rightward), reducing (increasing) production and raising (lowering) prices. In response, the consumers decrease (increase) their consumption of more expensive (cheaper) crops and shift to other (increase the use of the same) crops. Producers, at the same time, respond by changing farm-level management practices and increasing (decreasing) the amount of acreage under these crops. At a global scale, the reallocation of production and consumption through international trade further alters climate change impacts on global agriculture. This also alters the self-sufficiency ratios of each country/region due to climate change.\n\nIn response to production changes, the price of each commodity changes under both scenarios. At the global level, the market price for maize would increase by 0.7% and 3.4% under the 1.5 °C scenario and 2.0 °C scenario, respectively, which would vary quite largely among different countries and regions under both climate change scenarios (Fig. 7). 
Particularly, the market price would increase by around 22% and 27% in Iran under the 2.0 °C scenario and 1.5 °C scenario, respectively. Iran is also the region where the highest yield reduction is observed due to climate change. Market prices for maize in India, Mexico, Russia, South Africa and the Rest of Africa would decrease significantly under both scenarios, as their yields improve due to climate effects. Along with the domestic production, the climate change will also induce changes in international trade of maize, resulting in changing levels of self-sufficiency ratios (SSR) for each country/region. By SSR, we mean the ratio of domestically produced commodity to the sum of net imports and domestic production. In our scenario analysis, generally, the countries that face positive effects on yields and/or are relatively less dependent on imports are positively (less negatively) affected by climate change. For example, maize SSR for Ukraine, India, Russia and Mexico would improve under both scenarios (Fig. 8), whereas the self-sufficiency ratios of maize for Southeast Asia, Bangladesh and Iran will worsen under both scenarios. China's SSR for maize stays almost at the same level as the baseline.\n\n## Discussion and conclusion\n\nDiscussion. Our analysis highlights the effects of climate change on global- and regional-specific maize yields and the associated economic consequences in the 1.5 °C and 2.0 °C warming scenarios. We find that the reduction risk of maize yield under global warming by 2.0 °C is much more serious than that under global warming by 1.5 °C. On the one hand, the larger the temperature rise, the greater the evapotranspiration would be. Although the precipitation is also increasing, the evapotranspiration would become more intense. The limitation of water supply for maize growth leads to the decline of yield. 
On the other hand, relative to global warming by 1.5 °C, maize production would be faced with more serious and frequent extreme climate events, such as drought and heat waves, which would increase the risk of corn yield reduction under global warming by 2.0 °C. In the", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed9.pdf" - }, - { - "text": "There are two major research questions concerning the impacts of climate change at 1.5°C and 2°C global warming, which are relevant to both mitigation and adaptation policy areas.\n\n - (i) How much larger are the impacts at 2°C compared to 1.5°C? This is the primary question arising from the Paris Agreement [4] and is relevant to mitigation policy, informing judgements and actions on holding the global temperature rise to 'well below 2°C' and 'pursuing efforts to limit the temperature increase to 1.5°C'.\n - (ii) What regional climate conditions and related hydrological and ecological conditions could occur at a particular level of global warming, such as 2°C? This is relevant to adaptation policy and planning-exploring the possible outcomes for these levels of warming will help facilitate adaptation and improved resilience to account for a 1.5°C or 2°C world. 
It is recognized that many adaptation decisions require information on timing of specific impacts or risks, but nevertheless, framing regional impacts assessments in terms of associated global warming levels (GWLs) may help provide context of the levels of climate change that may be avoidable or unavoidable (and hence require adaptation).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed11.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia1.pdf", - "query": "What is a formal fallacy ?", - "target_page": 8, - "target_passage": "For formal fallacies, the source of the error is found in the form of the argument", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "burglar broke into the house last night, got hungry on the job, and had a midnight snack, would also explain the state of the kitchen. But this conclusion is not justified because it is not the best or most likely explanation. [82][83]\n\n## Fallacies\n\nNot all arguments live up to the standards of correct reasoning. When they do not, they are usually referred to as fallacies. Their central aspect is not that their conclusion is false but that there is some flaw with the reasoning leading to this conclusion. [84] So the argument \"it is sunny today; therefore spiders have eight legs\" is fallacious even though the conclusion is true. Some theorists, like John Stuart Mill, give a more restrictive definition of fallacies by additionally requiring that they appear to be correct. [85] This way, genuine fallacies can be distinguished from mere mistakes of reasoning due to carelessness. This explains why people tend to commit fallacies: because they have an alluring element that seduces people into committing and accepting them. [86] However, this reference to appearances is controversial because it belongs to the field of psychology, not logic, and because appearances may be different for different people. 
[87]\n\nFallacies are usually divided into formal and informal fallacies. [38] For formal fallacies, the source of the error is found in the form of the argument. For example, denying the antecedent is one type of formal fallacy, as in \"if Othello is a bachelor, then he is male; Othello is not a bachelor; therefore Othello is not male\". [88] But most fallacies fall into the category of informal fallacies, of which a great variety is discussed in the academic literature. The source of their error is usually found in the content or the context of the argument. [89] Informal fallacies are sometimes categorized as fallacies of ambiguity, fallacies of presumption, or fallacies of relevance. For fallacies of ambiguity, the ambiguity and vagueness of natural language are\n\nYoung America's dilemma: Shall I be wise and great, or rich and powerful? (poster from 1901) This is an example of a false dilemma: an informal fallacy using a disjunctive premise that excludes viable alternatives.\n\n\n\nresponsible for their flaw, as in \"feathers are light; what is light cannot be dark; therefore feathers cannot be dark\". [90] Fallacies of presumption have a wrong or unjustified premise but may be valid otherwise. [91] In the case of fallacies of relevance, the premises do not support the conclusion because they are not relevant to it. [92]\n\n## Definitory and strategic rules\n\nThe main focus of most logicians is to study the criteria according to which an argument is correct or incorrect. A fallacy is committed if these criteria are violated. In the case of formal logic, they are known as rules of inference . [93] They are definitory rules, which determine whether an inference is correct or which inferences are allowed. Definitory rules contrast with strategic rules. Strategic rules specify which inferential moves are necessary to reach a given conclusion based on a set of premises. This distinction does not just apply to logic but also to games. 
In chess, for example, the definitory rules dictate that bishops may only move diagonally. The strategic rules, on the other hand, describe how the allowed", - "page_start": 7, - "page_end": 7, - "source_file": "wikipedia1.pdf" - }, - { - "text": "\n\n## Logic\n\nLogic is the study of correct reasoning. It includes both formal and informal logic. Formal logic is the study of deductively valid inferences or logical truths. It examines how conclusions follow from premises based on the structure of arguments alone, independent of their topic and content. Informal logic is associated with informal fallacies, critical thinking, and argumentation theory. Informal logic examines arguments expressed in natural language whereas formal logic uses formal language. When used as a countable noun, the term \"a logic\" refers to a specific logical formal system that articulates a proof system. Logic plays a central role in many fields, such as philosophy, mathematics, computer science, and linguistics.\n\nLogic studies valid forms of inference like modus ponens .\n\n\n\nLogic studies arguments, which consist of a set of premises that leads to a conclusion. An example is the argument from the premises \"it's Sunday\" and \"if it's Sunday then I don't have to work\" leading to the conclusion \"I don't have to work\". [1] Premises and conclusions express propositions or claims that can be true or false. An important feature of propositions is their internal structure. For example, complex propositions are made up of simpler propositions linked by logical vocabulary like (and) or (if...then). Simple propositions also have parts, like \"Sunday\" or \"work\" in the example. The truth of a proposition usually depends on the meanings of all of its parts. However, this is not the case for logically true propositions. They are true only because of their logical structure independent of the specific meanings of the individual parts.\n\nArguments can be either correct or incorrect. 
An argument is correct if its premises support its conclusion. Deductive arguments have the strongest form of support: if their premises are true then their conclusion must also be true. This is not the case for ampliative arguments, which arrive at genuinely new information not found in the premises. Many arguments in everyday discourse and the sciences are ampliative arguments. They are divided into inductive and abductive arguments. Inductive arguments are statistical generalization-such as inferring that all ravens are black, based on many individual observations of black ravens. [2] Abductive arguments are inferences to the best explanation-for example, when a doctor concludes that a patient has a certain disease, as the best explanation for the symptoms that they are observed to suffer. [3] Arguments that fall short of the standards of correct reasoning often embody fallacies. Systems of logic are theoretical frameworks for assessing the correctness of arguments.\n\nLogic has been studied since antiquity. Early approaches include Aristotelian logic, Stoic logic, Nyaya, and Mohism. Aristotelian logic focuses on reasoning in the form of syllogisms. It was considered the main system of logic in the Western world until it was replaced by modern formal logic, which has its roots in the work of late 19th-century mathematicians such as Gottlob Frege. Today, the most commonly used system is classical logic. It consists of propositional logic and first-order logic. Propositional logic only considers logical relations between full propositions. First-order logic also takes the internal parts of", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia1.pdf" - }, - { - "text": "new formal systems have been proposed. There are disagreements about what makes a formal system a logic. [22] For example, it has been suggested that only logically complete systems, like first-order logic, qualify as logics. 
For such reasons, some theorists deny that higher-order logics are logics in the strict sense. [23]\n\n## Informal logic\n\n\n\nFormal logic needs to translate natural language arguments into a formal language, like first-order logic, to assess whether they are valid. In this example, the letter \"c\" represents Carmen while the letters \"M\" and \"T\" stand for \"Mexican\" and \"teacher\". The symbol \" ∧ \" has the meaning of \"and\".\n\nWhen understood in a wide sense, logic encompasses both formal and informal logic. [24] Informal logic uses non-formal criteria and standards to analyze and assess the correctness of arguments. Its main focus is on everyday discourse. [25] Its development was prompted by difficulties in applying the insights of formal logic to natural language arguments. [26] In this regard, it considers problems that formal logic on its own is unable to address. [27] Both provide criteria for assessing the correctness of arguments and distinguishing them from fallacies. [28]\n\nMany characterizations of informal logic have been suggested but there is no general agreement on its precise definition. [29] The most literal approach sees the terms \"formal\" and \"informal\" as applying to the language used to express arguments. On this view, informal logic studies arguments that are in informal or natural language. [30] Formal logic can only examine them indirectly by translating them first into a formal language while informal logic investigates them in their original form. [31] On this view, the argument \"Birds fly. Tweety is a bird. Therefore, Tweety flies.\" belongs to natural language and is examined by informal logic. But the formal translation \"(1) ; (2) ; (3) \" is studied by formal logic. [32] The study of natural language arguments comes with various difficulties. For example, natural language expressions are often ambiguous, vague, and context-dependent. 
[33] Another approach defines informal logic in a wide sense as the normative study of the standards, criteria, and procedures of argumentation. In this sense, it includes questions about the role of rationality, critical thinking, and the psychology of argumentation. [34]\n\nAnother characterization identifies informal logic with the study of non-deductive arguments. In this way, it contrasts with deductive reasoning examined by formal logic. [35] Non-deductive arguments make their conclusion probable but do not ensure that it is true. An example is the inductive argument from the empirical observation that \"all ravens I have seen so far are black\" to the conclusion \"all ravens are black\". [36]\n\nA further approach is to define informal logic as the study of informal fallacies. [37] Informal fallacies are incorrect arguments in which errors are present in the content and the context of the argument. [38] A false dilemma, for example, involves an error of content by excluding viable options. This is the case in the fallacy \"you are either with us or against us; you are not with us; therefore, you are against us\". [39] Some theorists state that formal logic studies the general form of arguments while informal logic studies particular instances of arguments. Another approach is to hold that formal logic only considers the role of", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia1.pdf" - }, - { - "text": "propositions into account, like predicates and quantifiers. Extended logics accept the basic intuitions behind classical logic and apply it to other fields, such as metaphysics, ethics, and epistemology. Deviant logics, on the other hand, reject certain classical intuitions and provide alternative explanations of the basic laws of logic.\n\n## Definition\n\nThe word \"logic\" originates from the Greek word logos , which has a variety of translations, such as reason, discourse, or language. 
[4] Logic is traditionally defined as the study of the laws of thought or correct reasoning, [5] and is usually understood in terms of inferences or arguments. Reasoning is the activity of drawing inferences. Arguments are the outward expression of inferences. [6] An argument is a set of premises together with a conclusion. Logic is interested in whether arguments are correct, i.e. whether their premises support the conclusion. [7] These general characterizations apply to logic in the widest sense, i.e., to both formal and informal logic since they are both concerned with assessing the correctness of arguments. [8] Formal logic is the traditionally dominant field, and some logicians restrict logic to formal logic. [9]\n\n## Formal logic\n\nFormal logic is also known as symbolic logic and is widely used in mathematical logic. It uses a formal approach to study reasoning: it replaces concrete expressions with abstract symbols to examine the logical form of arguments independent of their concrete content. In this sense, it is topic-neutral since it is only concerned with the abstract structure of arguments and not with their concrete content. [10]\n\nFormal logic is interested in deductively valid arguments, for which the truth of their premises ensures the truth of their conclusion. This means that it is impossible for the premises to be true and the conclusion to be false. [11] For valid arguments, the logical structure of the premises and the conclusion follows a pattern called a rule of inference. [12] For example, modus ponens is a rule of inference according to which all arguments of the form \"(1) p , (2) if p then q , (3) therefore q \" are valid, independent of what the terms p and q stand for. [13] In this sense, formal logic can be defined as the science of valid inferences. An alternative definition sees logic as the study of logical truths. [14] A proposition is logically true if its truth depends only on the logical vocabulary used in it. 
This means that it is true in all possible worlds and under all interpretations of its non-logical terms, like the claim \"either it is raining, or it is not\". [15] These two definitions of formal logic are not identical, but they are closely related. For example, if the inference from p to q is deductively valid then the claim \"if p then q \" is a logical truth. [16]\n\nFormal logic uses formal languages to express and analyze arguments. [17] They normally have a very limited vocabulary and exact syntactic rules. These rules specify how their symbols can be combined to construct sentences, so-called well-formed formulas. [18] This simplicity and exactness of formal logic make it capable of formulating precise rules of inference. They determine whether a given argument is valid. [19] Because of the reliance on formal language, natural language arguments cannot be studied directly. Instead, they need to be translated into formal language before their validity can be assessed. [20]\n\nThe term \"logic\" can also be used in a slightly different sense as a countable noun. In this sense, a logic is a logical formal system. Distinct logics differ from each other concerning the rules of inference they accept as valid and the formal languages used to express them. [21] Starting in the late 19th century, many", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia1.pdf" - }, - { - "text": "Paraconsistent logics are logical systems that can deal with contradictions. They are formulated to avoid the principle of explosion: for them, it is not the case that anything follows from a contradiction. [139] They are often motivated by dialetheism, the view that contradictions are real or that reality itself is contradictory. Graham Priest is an influential contemporary proponent of this position and similar views have been ascribed to Georg Wilhelm Friedrich Hegel. [140]\n\n## Informal\n\nInformal logic is usually carried out in a less systematic way. 
It often focuses on more specific issues, like investigating a particular type of fallacy or studying a certain aspect of argumentation. Nonetheless, some frameworks of informal logic have also been presented that try to provide a systematic characterization of the correctness of arguments. [141]\n\nThe pragmatic or dialogical approach to informal logic sees arguments as speech acts and not merely as a set of premises together with a conclusion. [142] As speech acts, they occur in a certain context, like a dialogue, which affects the standards of right and wrong arguments. [143] A prominent version by Douglas N. Walton understands a dialogue as a game between two players. The initial position of each player is characterized by the propositions to which they are committed and the conclusion they intend to prove. Dialogues are games of persuasion: each player has the goal of convincing the opponent of their own conclusion. [144] This is achieved by making arguments: arguments are the moves of the game. [145] They affect to which propositions the players are committed. A winning move is a successful argument that takes the opponent's commitments as premises and shows how one's own conclusion follows from them. This is usually not possible straight away. For this reason, it is normally necessary to formulate a sequence of arguments as intermediary steps, each of which brings the opponent a little closer to one's intended conclusion. Besides these positive arguments leading one closer to victory, there are also negative arguments preventing the opponent's victory by denying their conclusion. [144] Whether an argument is correct depends on whether it promotes the progress of the dialogue. Fallacies, on the other hand, are violations of the standards of proper argumentative rules. [146] These standards also depend on the type of dialogue. For example, the standards governing the scientific discourse differ from the standards in business negotiations. 
[147]\n\nThe epistemic approach to informal logic, on the other hand, focuses on the epistemic role of arguments. [148] It is based on the idea that arguments aim to increase our knowledge. They achieve this by linking justified beliefs to beliefs that are not yet justified. [149] Correct arguments succeed at expanding knowledge while fallacies are epistemic failures: they do not justify the belief in their conclusion. [150] For example, the fallacy of begging the question is a fallacy because it fails to provide independent justification for its conclusion, even though it is deductively valid. [151] In this sense, logical normativity consists in epistemic success or rationality. [149] The Bayesian approach is one example of an epistemic approach. [152] Central to Bayesianism is not just whether the agent believes something but the degree to which they believe it, the so-called credence . Degrees of belief are seen as subjective probabilities in the believed proposition, i.e. how certain the agent is that the proposition is true. [153] On this view, reasoning can be interpreted as a process of changing one's credences, often in reaction to new", - "page_start": 12, - "page_end": 12, - "source_file": "wikipedia1.pdf" - }, - { - "text": "argument is made up of a chain of simple arguments. This means that the conclusion of one argument acts as a premise of later arguments. For a complex argument to be successful, each link of the chain has to be successful. [43]\n\nArguments and inferences are either correct or incorrect. If they are correct then their premises support their conclusion. In the incorrect case, this support is missing. It can take different forms corresponding to the different types of reasoning. [62] The strongest form of support corresponds to deductive reasoning. But even arguments that are not deductively valid may still be good arguments because their premises offer nondeductive support to their conclusions. 
For such cases, the term ampliative or inductive reasoning is used. [63] Deductive arguments are associated with formal logic in contrast to the\n\nArgument terminology used in logic\n\n\n\nrelation between ampliative arguments and informal logic. [64]\n\n## Deductive\n\nA deductively valid argument is one whose premises guarantee the truth of its conclusion. [11] For instance, the argument \"(1) all frogs are amphibians; (2) no cats are amphibians; (3) therefore no cats are frogs\" is deductively valid. For deductive validity, it does not matter whether the premises or the conclusion are actually true. So the argument \"(1) all frogs are mammals; (2) no cats are mammals; (3) therefore no cats are frogs\" is also valid because the conclusion follows necessarily from the premises. [65]\n\nAccording to an influential view by Alfred Tarski, deductive arguments have three essential features: (1) they are formal, i.e. they depend only on the form of the premises and the conclusion; (2) they are a priori, i.e. no sense experience is needed to determine whether they obtain; (3) they are modal, i.e. that they hold by logical necessity for the given propositions, independent of any other circumstances. [66]\n\nBecause of the first feature, the focus on formality, deductive inference is usually identified with rules of inference. [67] Rules of inference specify the form of the premises and the conclusion: how they have to be structured for the inference to be valid. Arguments that do not follow any rule of inference are deductively invalid. [68] The modus ponens is a prominent rule of inference. It has the form \" p ; if p , then q ; therefore q \". [69] Knowing that it has just rained ( ) and that after rain the streets are wet ( ), one can use modus ponens to deduce that the streets are wet ( ). 
[70]\n\nThe third feature can be expressed by stating that deductively valid inferences are truth-preserving: it is impossible for the premises to be true and the conclusion to be false. [71] Because of this feature, it is often asserted that deductive inferences are uninformative since the conclusion cannot arrive at new information not already present in the premises. [72] But this point is not always accepted since it would mean, for example, that most of mathematics is uninformative. A different characterization distinguishes", - "page_start": 5, - "page_end": 5, - "source_file": "wikipedia1.pdf" - }, - { - "text": "moves may be used to win a game, for instance, by controlling the center and by defending one's king. [94] It has been argued that logicians should give more emphasis to strategic rules since they are highly relevant for effective reasoning. [93]\n\n## Formal systems\n\nA formal system of logic consists of a formal language together with a set of axioms and a proof system used to draw inferences from these axioms. [95] In logic, axioms are statements that are accepted without proof. They are used to justify other statements. [96] Some theorists also include a semantics that specifies how the expressions of the formal language relate to real objects. [97] Starting in the late 19th century, many new formal systems have been proposed. [98]\n\nA formal language consists of an alphabet and syntactic rules. The alphabet is the set of basic symbols used in expressions. The syntactic rules determine how these symbols may be arranged to result in wellformed formulas. [99] For instance, the syntactic rules of propositional logic determine that \" \" is a well-formed formula but \" \" is not since the logical conjunction requires terms on both sides. [100]\n\nA proof system is a collection of rules to construct formal proofs. It is a tool to arrive at conclusions from a set of axioms. 
Rules in a proof system are defined in terms of the syntactic form of formulas independent of their specific content. For instance, the classical rule of conjunction introduction states that follows from the premises and . Such rules can be applied sequentially, giving a mechanical procedure for generating conclusions from premises. There are different types of proof systems including natural deduction and sequent calculi. [101]\n\nA semantics is a system for mapping expressions of a formal language to their denotations. In many systems of logic, denotations are truth values. For instance, the semantics for classical propositional logic assigns the formula the denotation \"true\" whenever and are true. From the semantic point of view, a premise entails a conclusion if the conclusion is true whenever the premise is true. [102]\n\nA system of logic is sound when its proof system cannot derive a conclusion from a set of premises unless it is semantically entailed by them. In other words, its proof system cannot lead to false conclusions, as defined by the semantics. A system is complete when its proof system can derive every conclusion that is semantically entailed by its premises. In other words, its proof system can lead to any true conclusion, as defined by the semantics. Thus, soundness and completeness together describe a system whose notions of validity and entailment line up perfectly. [103]\n\n## Systems of logic\n\nSystems of logic are theoretical frameworks for assessing the correctness of reasoning and arguments. For over two thousand years, Aristotelian logic was treated as the canon of logic in the Western world, [104] but modern developments in this field have led to a vast proliferation of logical systems. [105] One prominent categorization divides modern formal logical systems into classical logic, extended logics, and deviant logics. 
[106]\n\n## Aristotelian", - "page_start": 8, - "page_end": 8, - "source_file": "wikipedia1.pdf" - }, - { - "text": "logical constants for correct inferences while informal logic also takes the meaning of substantive concepts into account. Further approaches focus on the discussion of logical topics with or without formal devices and on the role of epistemology for the assessment of arguments. [40]\n\n## Basic concepts\n\n## Premises, conclusions, and truth\n\n## Premises and conclusions\n\nPremises and conclusions are the basic parts of inferences or arguments and therefore play a central role in logic. In the case of a valid inference or a correct argument, the conclusion follows from the premises, or in other words, the premises support the conclusion. [41] For instance, the premises \"Mars is red\" and \"Mars is a planet\" support the conclusion \"Mars is a red planet\". For most types of logic, it is accepted that premises and conclusions have to be truth-bearers. [41][a] This means that they have a truth value: they are either true or false. Contemporary philosophy generally sees them either as propositions or as sentences . [43] Propositions are the denotations of sentences and are usually seen as abstract objects. [44] For example, the English sentence \"the tree is green\" is different from the German sentence \"der Baum ist grün\" but both express the same proposition. [45]\n\nPropositional theories of premises and conclusions are often criticized because they rely on abstract objects. For instance, philosophical naturalists usually reject the existence of abstract objects. Other arguments concern the challenges involved in specifying the identity criteria of propositions. [43] These objections are avoided by seeing premises and conclusions not as propositions but as sentences, i.e. as concrete linguistic objects like the symbols displayed on a page of a book. 
But this approach comes with new problems of its own: sentences are often context-dependent and ambiguous, meaning an argument's validity would not only depend on its parts but also on its context and on how it is interpreted. [46] Another approach is to understand premises and conclusions in psychological terms as thoughts or judgments. This position is known as psychologism. It was discussed at length around the turn of the 20th century but it is not widely accepted today. [47]\n\n## Internal structure\n\nPremises and conclusions have an internal structure. As propositions or sentences, they can be either simple or complex. [48] A complex proposition has other propositions as its constituents, which are linked to each other through propositional connectives like \"and\" or \"if...then\". Simple propositions, on the other hand, do not have propositional parts. But they can also be conceived as having an internal structure: they are made up of subpropositional parts, like singular terms and predicates. [49][48] For example, the simple proposition \"Mars is red\" can be formed by applying the predicate \"red\" to the singular term \"Mars\". In contrast, the complex proposition \"Mars is red and Venus is white\" is made up of two simple propositions connected by the propositional connective \"and\". [49]\n\nWhether a proposition is true depends, at least in part, on its constituents. For complex propositions formed using truth-functional propositional connectives, their truth only depends on the truth values of their parts. [49][50] But this relation is more complicated in the case of simple propositions and their", - "page_start": 3, - "page_end": 3, - "source_file": "wikipedia1.pdf" - }, - { - "text": "incoming information. [154] Correct reasoning and the arguments it is based on follow the laws of probability, for example, the principle of conditionalization. Bad or irrational reasoning, on the other hand, violates these laws. 
[155]\n\n## Areas of research\n\nLogic is studied in various fields. In many cases, this is done by applying its formal method to specific topics outside its scope, like to ethics or computer science. [156] In other cases, logic itself is made the subject of research in another discipline. This can happen in diverse ways. For instance, it can involve investigating the philosophical assumptions linked to the basic concepts used by logicians. Other ways include interpreting and analyzing logic through mathematical structures as well as studying and comparing abstract properties of formal logical systems. [157]\n\n## Philosophy of logic and philosophical logic\n\nPhilosophy of logic is the philosophical discipline studying the scope and nature of logic. [59] It examines many presuppositions implicit in logic, like how to define its basic concepts or the metaphysical assumptions associated with them. [158] It is also concerned with how to classify logical systems and considers the ontological commitments they incur. [159] Philosophical logic is one of the areas within the philosophy of logic. It studies the application of logical methods to philosophical problems in fields like metaphysics, ethics, and epistemology. [160] This application usually happens in the form of extended or deviant logical systems. [161]\n\n## Metalogic\n\nMetalogic is the field of inquiry studying the properties of formal logical systems. For example, when a new formal system is developed, metalogicians may study it to determine which formulas can be proven in it. They may also study whether an algorithm could be developed to find a proof for each formula and whether every provable formula in it is a tautology. Finally, they may compare it to other logical systems to understand its distinctive features. A key issue in metalogic concerns the relation between syntax and semantics. The syntactic rules of a formal system determine how to deduce conclusions from premises, i.e. 
how to formulate proofs. The semantics of a formal system governs which sentences are true and which ones are false. This determines the validity of arguments since, for valid arguments, it is impossible for the premises to be true and the conclusion to be false. The relation between syntax and semantics concerns issues like whether every valid argument is provable and whether every provable argument is valid. Metalogicians also study whether logical systems are complete, sound, and consistent. They are interested in whether the systems are decidable and what expressive power they have. Metalogicians usually rely heavily on abstract mathematical reasoning when examining and formulating metalogical proofs. This way, they aim to arrive at precise and general conclusions on these topics. [162]\n\n## Mathematical logic\n\nThe term \"mathematical logic\" is sometimes used as a synonym of \"formal logic\". But in a more restricted sense, it refers to the study of logic within mathematics. Major subareas include model theory, proof theory, set theory, and computability theory. [164] Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic. However, it can also include attempts", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia1.pdf" - }, - { - "text": "sentence would be true or false. One of its central methodological assumptions is the principle of compositionality. It states that the meaning of a complex expression is determined by the meanings of its parts and how they are combined. For example, the meaning of the verb phrase \"walk and sing\" depends on the meanings of the individual expressions \"walk\" and \"sing\". Many theories in formal semantics rely on model theory. This means that they employ set theory to construct a model and then interpret the meanings of expression in relation to the elements in this model. 
For example, the term \"walk\" may be interpreted as the set of all individuals in the model that share the property of walking. Early influential theorists in this field were Richard Montague and Barbara Partee, who focused their analysis on the English language. [173]\n\n## Epistemology of logic\n\nThe epistemology of logic studies how one knows that an argument is valid or that a proposition is logically true. [174] This includes questions like how to justify that modus ponens is a valid rule of inference or that contradictions are false. [175] The traditionally dominant view is that this form of logical understanding belongs to knowledge a priori. [176] In this regard, it\n\nConjunction (AND) is one of the basic operations of Boolean logic. It can be electronically implemented in several ways, for example, by using two transistors.\n\n\n\nis often argued that the mind has a special faculty to examine relations between pure ideas and that this faculty is also responsible for apprehending logical truths. [177] A similar approach understands the rules of logic in terms of linguistic conventions. On this view, the laws of logic are trivial since they are true by definition: they just express the meanings of the logical vocabulary. [178]\n\nSome theorists, like Hilary Putnam and Penelope Maddy, object to the view that logic is knowable a priori. They hold instead that logical truths depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world. According to this view, they may be explored by studying general patterns of the fundamental sciences. For example, it has been argued that certain insights of quantum mechanics refute the principle of distributivity in classical logic, which states that the formula is equivalent to . 
This claim can be used as an empirical argument for the thesis that quantum logic is the correct logical system and should replace classical logic. [179]\n\n## History\n\nLogic was developed independently in several cultures during antiquity. One major early contributor was Aristotle, who developed term logic in his Organon and Prior Analytics . [183] He was responsible for the introduction of the hypothetical syllogism [184] and temporal modal logic. [185] Further innovations include inductive logic [186] as well as the discussion of new logical concepts such as terms, predicables, syllogisms, and propositions. Aristotelian logic was highly regarded in classical and medieval times, both in Europe and the Middle East. It remained in wide use in the West until the early 19th century. [187] It has now been superseded by later work, though many of its key insights are still present in modern systems of logic. [188]", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia1.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia1.pdf", - "query": "In early Chinese philosophy, what were the major influences regarding the philosophy of logic ?", - "target_page": 18, - "target_passage": "In Chinese philosophy, the School of Names and Mohism were particularly influential", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "In Chinese philosophy, the School of Names and Mohism were particularly influential. The School of Names focused on the use of language and on paradoxes. For example, Gongsun Long proposed the white horse paradox, which defends the thesis that a white horse is not a horse. The school of Mohism also acknowledged the importance of language for logic and tried to relate the ideas in these fields to the realm of ethics. [197]\n\nIn India, the study of logic was primarily pursued by the schools of Nyaya, Buddhism, and Jainism. 
It was not treated as a separate academic discipline and discussions of its topics usually happened in the context of epistemology and theories of dialogue or argumentation. [198] In Nyaya, inference is understood as a source of knowledge (pramā ṇ a). It follows the perception of an object and tries to arrive at conclusions, for example, about the cause of this object. [199] A similar emphasis on the relation to epistemology is also found in Buddhist and Jainist schools of logic, where inference is used to expand the knowledge gained through other sources. [200] Some of the later theories of Nyaya, belonging to the Navya-Nyāya school, resemble modern forms of logic, such as Gottlob Frege's distinction between sense and reference and his definition of number. [201]\n\nThe syllogistic logic developed by Aristotle predominated in the West until the mid-19th century, when interest in the foundations of mathematics stimulated the development of modern symbolic logic. [202] Many see Gottlob Frege's Begriffsschrift as the birthplace of modern logic. Gottfried Wilhelm Leibniz's idea of a universal formal language is often considered a forerunner. Other pioneers were George Boole, who invented Boolean algebra as a mathematical system of logic, and Charles Peirce, who developed the logic of relatives. Alfred North Whitehead and Bertrand Russell, in turn, condensed many of these insights in their work Principia Mathematica . Modern logic introduced novel concepts, such as functions, quantifiers, and relational predicates. A hallmark of modern symbolic logic is its use of formal language to precisely codify its insights. In this regard, it departs from earlier logicians, who relied mainly on natural language. [203] Of particular influence was the development of first-order logic, which is usually treated as the standard system of modern logic. [204] Its analytical generality allowed the formalization of mathematics and drove the investigation of set theory. 
It also made Alfred Tarski's approach to model theory possible and provided the foundation of modern mathematical logic. [205]\n\n## See also\n\n", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia1.pdf" - }, - { - "text": "\n\nIbn Sina (Avicenna) was the founder of Avicennian logic, which replaced Aristotelian logic as the dominant system of logic in the Islamic world. [189] It influenced Western medieval writers such as Albertus Magnus and William of Ockham. [190] Ibn Sina wrote on the hypothetical syllogism [191] and on the propositional calculus. [192] He developed an original \"temporally modalized\" syllogistic theory, involving temporal logic and modal logic. [193] He also made use of inductive logic, such as his methods of agreement, difference, and concomitant variation, which are critical to the scientific method. [191] Fakhr al-Din al-Razi was another influential Muslim logician. He criticized Aristotelian syllogistics and formulated an early system of inductive logic, foreshadowing the system of inductive logic developed by John Stuart Mill. [194]\n\nDuring the Middle Ages, many translations and interpretations of Aristotelian logic were made. The works of Boethius were particularly influential. Besides translating Aristotle's work into Latin, he also produced textbooks on logic. [195] Later, the works of Islamic philosophers such as Ibn Sina and Ibn Rushd (Averroes) were drawn on. This expanded the range of ancient works available to medieval Christian scholars since more Greek work was available to Muslim scholars that had been preserved in Latin commentaries. In 1323, William of Ockham's influential Summa Logicae was released. It is a comprehensive treatise on logic that discusses many basic concepts of logic and provides a systematic exposition of types of propositions and their truth conditions. [196]", - "page_start": 16, - "page_end": 16, - "source_file": "wikipedia1.pdf" - }, - { - "text": "incoming information. 
[154] Correct reasoning and the arguments it is based on follow the laws of probability, for example, the principle of conditionalization. Bad or irrational reasoning, on the other hand, violates these laws. [155]\n\n## Areas of research\n\nLogic is studied in various fields. In many cases, this is done by applying its formal method to specific topics outside its scope, like to ethics or computer science. [156] In other cases, logic itself is made the subject of research in another discipline. This can happen in diverse ways. For instance, it can involve investigating the philosophical assumptions linked to the basic concepts used by logicians. Other ways include interpreting and analyzing logic through mathematical structures as well as studying and comparing abstract properties of formal logical systems. [157]\n\n## Philosophy of logic and philosophical logic\n\nPhilosophy of logic is the philosophical discipline studying the scope and nature of logic. [59] It examines many presuppositions implicit in logic, like how to define its basic concepts or the metaphysical assumptions associated with them. [158] It is also concerned with how to classify logical systems and considers the ontological commitments they incur. [159] Philosophical logic is one of the areas within the philosophy of logic. It studies the application of logical methods to philosophical problems in fields like metaphysics, ethics, and epistemology. [160] This application usually happens in the form of extended or deviant logical systems. [161]\n\n## Metalogic\n\nMetalogic is the field of inquiry studying the properties of formal logical systems. For example, when a new formal system is developed, metalogicians may study it to determine which formulas can be proven in it. They may also study whether an algorithm could be developed to find a proof for each formula and whether every provable formula in it is a tautology. 
Finally, they may compare it to other logical systems to understand its distinctive features. A key issue in metalogic concerns the relation between syntax and semantics. The syntactic rules of a formal system determine how to deduce conclusions from premises, i.e. how to formulate proofs. The semantics of a formal system governs which sentences are true and which ones are false. This determines the validity of arguments since, for valid arguments, it is impossible for the premises to be true and the conclusion to be false. The relation between syntax and semantics concerns issues like whether every valid argument is provable and whether every provable argument is valid. Metalogicians also study whether logical systems are complete, sound, and consistent. They are interested in whether the systems are decidable and what expressive power they have. Metalogicians usually rely heavily on abstract mathematical reasoning when examining and formulating metalogical proofs. This way, they aim to arrive at precise and general conclusions on these topics. [162]\n\n## Mathematical logic\n\nThe term \"mathematical logic\" is sometimes used as a synonym of \"formal logic\". But in a more restricted sense, it refers to the study of logic within mathematics. Major subareas include model theory, proof theory, set theory, and computability theory. [164] Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic. However, it can also include attempts", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia1.pdf" - }, - { - "text": "sentence would be true or false. One of its central methodological assumptions is the principle of compositionality. It states that the meaning of a complex expression is determined by the meanings of its parts and how they are combined. For example, the meaning of the verb phrase \"walk and sing\" depends on the meanings of the individual expressions \"walk\" and \"sing\". 
Many theories in formal semantics rely on model theory. This means that they employ set theory to construct a model and then interpret the meanings of expression in relation to the elements in this model. For example, the term \"walk\" may be interpreted as the set of all individuals in the model that share the property of walking. Early influential theorists in this field were Richard Montague and Barbara Partee, who focused their analysis on the English language. [173]\n\n## Epistemology of logic\n\nThe epistemology of logic studies how one knows that an argument is valid or that a proposition is logically true. [174] This includes questions like how to justify that modus ponens is a valid rule of inference or that contradictions are false. [175] The traditionally dominant view is that this form of logical understanding belongs to knowledge a priori. [176] In this regard, it\n\nConjunction (AND) is one of the basic operations of Boolean logic. It can be electronically implemented in several ways, for example, by using two transistors.\n\n\n\nis often argued that the mind has a special faculty to examine relations between pure ideas and that this faculty is also responsible for apprehending logical truths. [177] A similar approach understands the rules of logic in terms of linguistic conventions. On this view, the laws of logic are trivial since they are true by definition: they just express the meanings of the logical vocabulary. [178]\n\nSome theorists, like Hilary Putnam and Penelope Maddy, object to the view that logic is knowable a priori. They hold instead that logical truths depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world. According to this view, they may be explored by studying general patterns of the fundamental sciences. 
For example, it has been argued that certain insights of quantum mechanics refute the principle of distributivity in classical logic, which states that the formula is equivalent to . This claim can be used as an empirical argument for the thesis that quantum logic is the correct logical system and should replace classical logic. [179]\n\n## History\n\nLogic was developed independently in several cultures during antiquity. One major early contributor was Aristotle, who developed term logic in his Organon and Prior Analytics . [183] He was responsible for the introduction of the hypothetical syllogism [184] and temporal modal logic. [185] Further innovations include inductive logic [186] as well as the discussion of new logical concepts such as terms, predicables, syllogisms, and propositions. Aristotelian logic was highly regarded in classical and medieval times, both in Europe and the Middle East. It remained in wide use in the West until the early 19th century. [187] It has now been superseded by later work, though many of its key insights are still present in modern systems of logic. [188]", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia1.pdf" - }, - { - "text": "mathematics, it does not include logical vocabulary relevant to many other topics of philosophical importance. Examples of concepts it overlooks are the contrast between necessity and possibility and the problem of ethical obligation and permission. Similarly, it does not address the relations between past, present, and future. [119] Such issues are addressed by extended logics. They build on the basic intuitions of classical logic and expand it by introducing new logical vocabulary. This way, the exact logical approach is applied to fields like ethics or epistemology that lie beyond the scope of mathematics. [120]\n\n## Propositional logic\n\nPropositional logic comprises formal systems in which formulae are built from atomic propositions using logical connectives. 
For instance, propositional logic represents the conjunction of two atomic propositions and as the complex formula . Unlike predicate logic where terms and predicates are the smallest units, propositional logic takes full propositions with truth values as its most basic component. [121] Thus, propositional logics can only represent logical relationships that arise from the way complex propositions are built from simpler ones. But it cannot represent inferences that result from the inner structure of a proposition. [122]\n\n## First-order logic\n\nFirst-order logic includes the same propositional connectives as propositional logic but differs from it because it articulates the internal structure of propositions. This happens through devices such as singular terms, which refer to particular objects, predicates, which refer to properties and relations, and quantifiers, which treat notions like \"some\" and \"all\". [123] For example, to express the proposition \"this raven is black\", one may use the predicate for the property \"black\" and the singular term referring to the raven to form the expression . To express that some objects are black, the existential quantifier is combined\n\n\n\nGottlob Frege's Begriffschrift introduced the notion of quantifier in a graphical notation, which here represents the judgment that is true.\n\nwith the variable to form the proposition . First-order logic contains various rules of inference that determine how expressions articulated this way can form valid arguments, for example, that one may infer from . [124]\n\n## Extended\n\nExtended logics are logical systems that accept the basic principles of classical logic. They introduce additional symbols and principles to apply it to fields like metaphysics, ethics, and epistemology. [125]\n\n## Modal logic\n\nModal logic is an extension of classical logic. 
In its original form, sometimes called \"alethic modal logic\", it introduces two new symbols: expresses that something is possible while expresses that something is necessary. [126] For example, if the formula stands for the sentence \"Socrates is a banker\" then the formula articulates the sentence \"It is possible that Socrates is a banker\". [127] To include these symbols in the logical formalism, modal logic introduces new rules of inference that govern", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia1.pdf" - }, - { - "text": "propositions into account, like predicates and quantifiers. Extended logics accept the basic intuitions behind classical logic and apply it to other fields, such as metaphysics, ethics, and epistemology. Deviant logics, on the other hand, reject certain classical intuitions and provide alternative explanations of the basic laws of logic.\n\n## Definition\n\nThe word \"logic\" originates from the Greek word logos , which has a variety of translations, such as reason, discourse, or language. [4] Logic is traditionally defined as the study of the laws of thought or correct reasoning, [5] and is usually understood in terms of inferences or arguments. Reasoning is the activity of drawing inferences. Arguments are the outward expression of inferences. [6] An argument is a set of premises together with a conclusion. Logic is interested in whether arguments are correct, i.e. whether their premises support the conclusion. [7] These general characterizations apply to logic in the widest sense, i.e., to both formal and informal logic since they are both concerned with assessing the correctness of arguments. [8] Formal logic is the traditionally dominant field, and some logicians restrict logic to formal logic. [9]\n\n## Formal logic\n\nFormal logic is also known as symbolic logic and is widely used in mathematical logic. 
It uses a formal approach to study reasoning: it replaces concrete expressions with abstract symbols to examine the logical form of arguments independent of their concrete content. In this sense, it is topic-neutral since it is only concerned with the abstract structure of arguments and not with their concrete content. [10]\n\nFormal logic is interested in deductively valid arguments, for which the truth of their premises ensures the truth of their conclusion. This means that it is impossible for the premises to be true and the conclusion to be false. [11] For valid arguments, the logical structure of the premises and the conclusion follows a pattern called a rule of inference. [12] For example, modus ponens is a rule of inference according to which all arguments of the form \"(1) p , (2) if p then q , (3) therefore q \" are valid, independent of what the terms p and q stand for. [13] In this sense, formal logic can be defined as the science of valid inferences. An alternative definition sees logic as the study of logical truths. [14] A proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true in all possible worlds and under all interpretations of its non-logical terms, like the claim \"either it is raining, or it is not\". [15] These two definitions of formal logic are not identical, but they are closely related. For example, if the inference from p to q is deductively valid then the claim \"if p then q \" is a logical truth. [16]\n\nFormal logic uses formal languages to express and analyze arguments. [17] They normally have a very limited vocabulary and exact syntactic rules. These rules specify how their symbols can be combined to construct sentences, so-called well-formed formulas. [18] This simplicity and exactness of formal logic make it capable of formulating precise rules of inference. They determine whether a given argument is valid. 
[19] Because of the reliance on formal language, natural language arguments cannot be studied directly. Instead, they need to be translated into formal language before their validity can be assessed. [20]\n\nThe term \"logic\" can also be used in a slightly different sense as a countable noun. In this sense, a logic is a logical formal system. Distinct logics differ from each other concerning the rules of inference they accept as valid and the formal languages used to express them. [21] Starting in the late 19th century, many", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia1.pdf" - }, - { - "text": "Bertrand Russell made various contributions to mathematical logic. [163]\n\n\n\nto use logic to analyze mathematical reasoning or to establish logic-based foundations of mathematics. [165] The latter was a major concern in early 20th-century mathematical logic, which pursued the program of logicism pioneered by philosopherlogicians such as Gottlob Frege, Alfred North Whitehead, and Bertrand Russell. Mathematical theories were supposed to be logical tautologies, and their program was to show this by means of a reduction of mathematics to logic. Many attempts to realize this program failed, from the crippling of Frege's project in his Grundgesetze by Russell's paradox, to the defeat of Hilbert's program by Gödel's incompleteness theorems. [166]\n\nSet theory originated in the study of the infinite by Georg Cantor, and it has been the source of many of the most challenging and important issues in mathematical logic. They include Cantor's theorem, the status of the Axiom of Choice, the question of the independence of the continuum hypothesis, and the modern debate on large cardinal axioms. [167]\n\nComputability theory is the branch of mathematical logic that studies effective procedures to solve calculation problems. One of\n\nits main goals is to understand whether it is possible to solve a given problem using an algorithm. 
For instance, given a certain claim about the positive integers, it examines whether an algorithm can be found to determine if this claim is true. Computability theory uses various theoretical tools and models, such as Turing machines, to explore this type of issue. [168]\n\n## Computational logic\n\nComputational logic is the branch of logic and computer science that studies how to implement mathematical reasoning and logical formalisms using computers. This includes, for example, automatic theorem provers, which employ rules of inference to construct a proof step by step from a set of premises to the intended conclusion without human intervention. [169] Logic programming languages are designed specifically to express facts using logical formulas and to draw inferences from these facts. For example, Prolog is a logic programming language based on predicate logic. [170] Computer scientists also apply concepts from logic to problems in computing. The works of Claude Shannon were influential in this regard. He showed how Boolean logic can be used to understand and implement computer circuits. [171] This can be achieved using electronic logic gates, i.e. electronic circuits with one or more inputs and usually one output. The truth values of propositions are represented by voltage levels. In this way, logic functions can be simulated by applying the corresponding voltages to the inputs of the circuit and determining the value of the function by measuring the voltage of the output. [172]\n\n## Formal semantics of natural language\n\nFormal semantics is a subfield of logic, linguistics, and the philosophy of language. The discipline of semantics studies the meaning of language. Formal semantics uses formal tools from the fields of symbolic logic and mathematics to give precise theories of the meaning of natural language expressions. It understands meaning usually in relation to truth conditions, i.e. 
it examines in which situations a", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia1.pdf" - }, - { - "text": "- Wile, Bruce; Goss, John; Roesner, Wolfgang (2005). Comprehensive Functional Verification: The Complete Industry Cycle . Elsevier. p. 447. ISBN 978-0-08-047664-3.\n - Willman, Marshall D. (2022). \"Logic and Language in Early Chinese Philosophy\" (https://plat o.stanford.edu/entries/chinese-logic-language/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Introduction. Retrieved 11 March 2023.", - "page_start": 36, - "page_end": 36, - "source_file": "wikipedia1.pdf" - }, - { - "text": "new formal systems have been proposed. There are disagreements about what makes a formal system a logic. [22] For example, it has been suggested that only logically complete systems, like first-order logic, qualify as logics. For such reasons, some theorists deny that higher-order logics are logics in the strict sense. [23]\n\n## Informal logic\n\n\n\nFormal logic needs to translate natural language arguments into a formal language, like first-order logic, to assess whether they are valid. In this example, the letter \"c\" represents Carmen while the letters \"M\" and \"T\" stand for \"Mexican\" and \"teacher\". The symbol \" ∧ \" has the meaning of \"and\".\n\nWhen understood in a wide sense, logic encompasses both formal and informal logic. [24] Informal logic uses non-formal criteria and standards to analyze and assess the correctness of arguments. Its main focus is on everyday discourse. [25] Its development was prompted by difficulties in applying the insights of formal logic to natural language arguments. [26] In this regard, it considers problems that formal logic on its own is unable to address. [27] Both provide criteria for assessing the correctness of arguments and distinguishing them from fallacies. 
[28]\n\nMany characterizations of informal logic have been suggested but there is no general agreement on its precise definition. [29] The most literal approach sees the terms \"formal\" and \"informal\" as applying to the language used to express arguments. On this view, informal logic studies arguments that are in informal or natural language. [30] Formal logic can only examine them indirectly by translating them first into a formal language while informal logic investigates them in their original form. [31] On this view, the argument \"Birds fly. Tweety is a bird. Therefore, Tweety flies.\" belongs to natural language and is examined by informal logic. But the formal translation \"(1) ; (2) ; (3) \" is studied by formal logic. [32] The study of natural language arguments comes with various difficulties. For example, natural language expressions are often ambiguous, vague, and context-dependent. [33] Another approach defines informal logic in a wide sense as the normative study of the standards, criteria, and procedures of argumentation. In this sense, it includes questions about the role of rationality, critical thinking, and the psychology of argumentation. [34]\n\nAnother characterization identifies informal logic with the study of non-deductive arguments. In this way, it contrasts with deductive reasoning examined by formal logic. [35] Non-deductive arguments make their conclusion probable but do not ensure that it is true. An example is the inductive argument from the empirical observation that \"all ravens I have seen so far are black\" to the conclusion \"all ravens are black\". [36]\n\nA further approach is to define informal logic as the study of informal fallacies. [37] Informal fallacies are incorrect arguments in which errors are present in the content and the context of the argument. [38] A false dilemma, for example, involves an error of content by excluding viable options. 
This is the case in the fallacy \"you are either with us or against us; you are not with us; therefore, you are against us\". [39] Some theorists state that formal logic studies the general form of arguments while informal logic studies particular instances of arguments. Another approach is to hold that formal logic only considers the role of", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia1.pdf" - }, - { - "text": "what role they play in inferences. One rule of inference states that, if something is necessary, then it is also possible. This means that follows from . Another principle states that if a proposition is necessary then its negation is impossible and vice versa. This means that is equivalent to . [128]\n\nOther forms of modal logic introduce similar symbols but associate different meanings with them to apply modal logic to other fields. For example, deontic logic concerns the field of ethics and introduces symbols to express the ideas of obligation and permission, i.e. to describe whether an agent has to perform a certain action or is allowed to perform it. [129] The modal operators in temporal modal logic articulate temporal relations. They can be used to express, for example, that something happened at one time or that something is happening all the time. [129] In epistemology, epistemic modal logic is used to represent the ideas of knowing something in contrast to merely believing it to be the case. [130]\n\n## Higher order logic\n\nHigher-order logics extend classical logic not by using modal operators but by introducing new forms of quantification. [131] Quantifiers correspond to terms like \"all\" or \"some\". In classical first-order logic, quantifiers are only applied to individuals. The formula \" \" ( some apples are sweet) is an example of the existential quantifier \" \" applied to the individual variable \" \". In higherorder logics, quantification is also allowed over predicates. This increases its expressive power. 
For example, to express the idea that Mary and John share some qualities, one could use the formula \" \". In this case, the existential quantifier is applied to the predicate variable \" \". [132] The added expressive power is especially useful for mathematics since it allows for more succinct formulations of mathematical theories. [43] But it has drawbacks in regard to its meta-logical properties and ontological implications, which is why first-order logic is still more commonly used. [133]\n\n## Deviant\n\nDeviant logics are logical systems that reject some of the basic intuitions of classical logic. Because of this, they are usually seen not as its supplements but as its rivals. Deviant logical systems differ from each other either because they reject different classical intuitions or because they propose different alternatives to the same issue. [134]\n\nIntuitionistic logic is a restricted version of classical logic. [135] It uses the same symbols but excludes some rules of inference. For example, according to the law of double negation elimination, if a sentence is not not true, then it is true. This means that follows from . This is a valid rule of inference in classical logic but it is invalid in intuitionistic logic. Another classical principle not part of intuitionistic logic is the law of excluded middle. It states that for every sentence, either it or its negation is true. This means that every proposition of the form is true. [135] These deviations from classical logic are based on the idea that truth is established by verification using a proof. Intuitionistic logic is especially prominent in the field of constructive mathematics, which emphasizes the need to find or construct a specific example to prove its existence. 
[136]", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia1.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia1.pdf", - "query": "What is considered a deductively valid argument regarding logic ?", - "target_page": 6, - "target_passage": "A deductively valid argument is one whose premises guarantee the truth of its conclusion", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "argument is made up of a chain of simple arguments. This means that the conclusion of one argument acts as a premise of later arguments. For a complex argument to be successful, each link of the chain has to be successful. [43]\n\nArguments and inferences are either correct or incorrect. If they are correct then their premises support their conclusion. In the incorrect case, this support is missing. It can take different forms corresponding to the different types of reasoning. [62] The strongest form of support corresponds to deductive reasoning. But even arguments that are not deductively valid may still be good arguments because their premises offer nondeductive support to their conclusions. For such cases, the term ampliative or inductive reasoning is used. [63] Deductive arguments are associated with formal logic in contrast to the\n\nArgument terminology used in logic\n\n\n\nrelation between ampliative arguments and informal logic. [64]\n\n## Deductive\n\nA deductively valid argument is one whose premises guarantee the truth of its conclusion. [11] For instance, the argument \"(1) all frogs are amphibians; (2) no cats are amphibians; (3) therefore no cats are frogs\" is deductively valid. For deductive validity, it does not matter whether the premises or the conclusion are actually true. So the argument \"(1) all frogs are mammals; (2) no cats are mammals; (3) therefore no cats are frogs\" is also valid because the conclusion follows necessarily from the premises. 
[65]\n\nAccording to an influential view by Alfred Tarski, deductive arguments have three essential features: (1) they are formal, i.e. they depend only on the form of the premises and the conclusion; (2) they are a priori, i.e. no sense experience is needed to determine whether they obtain; (3) they are modal, i.e. that they hold by logical necessity for the given propositions, independent of any other circumstances. [66]\n\nBecause of the first feature, the focus on formality, deductive inference is usually identified with rules of inference. [67] Rules of inference specify the form of the premises and the conclusion: how they have to be structured for the inference to be valid. Arguments that do not follow any rule of inference are deductively invalid. [68] The modus ponens is a prominent rule of inference. It has the form \" p ; if p , then q ; therefore q \". [69] Knowing that it has just rained ( ) and that after rain the streets are wet ( ), one can use modus ponens to deduce that the streets are wet ( ). [70]\n\nThe third feature can be expressed by stating that deductively valid inferences are truth-preserving: it is impossible for the premises to be true and the conclusion to be false. [71] Because of this feature, it is often asserted that deductive inferences are uninformative since the conclusion cannot arrive at new information not already present in the premises. [72] But this point is not always accepted since it would mean, for example, that most of mathematics is uninformative. A different characterization distinguishes", - "page_start": 5, - "page_end": 5, - "source_file": "wikipedia1.pdf" - }, - { - "text": "propositions into account, like predicates and quantifiers. Extended logics accept the basic intuitions behind classical logic and apply it to other fields, such as metaphysics, ethics, and epistemology. 
Deviant logics, on the other hand, reject certain classical intuitions and provide alternative explanations of the basic laws of logic.\n\n## Definition\n\nThe word \"logic\" originates from the Greek word logos , which has a variety of translations, such as reason, discourse, or language. [4] Logic is traditionally defined as the study of the laws of thought or correct reasoning, [5] and is usually understood in terms of inferences or arguments. Reasoning is the activity of drawing inferences. Arguments are the outward expression of inferences. [6] An argument is a set of premises together with a conclusion. Logic is interested in whether arguments are correct, i.e. whether their premises support the conclusion. [7] These general characterizations apply to logic in the widest sense, i.e., to both formal and informal logic since they are both concerned with assessing the correctness of arguments. [8] Formal logic is the traditionally dominant field, and some logicians restrict logic to formal logic. [9]\n\n## Formal logic\n\nFormal logic is also known as symbolic logic and is widely used in mathematical logic. It uses a formal approach to study reasoning: it replaces concrete expressions with abstract symbols to examine the logical form of arguments independent of their concrete content. In this sense, it is topic-neutral since it is only concerned with the abstract structure of arguments and not with their concrete content. [10]\n\nFormal logic is interested in deductively valid arguments, for which the truth of their premises ensures the truth of their conclusion. This means that it is impossible for the premises to be true and the conclusion to be false. [11] For valid arguments, the logical structure of the premises and the conclusion follows a pattern called a rule of inference. 
[12] For example, modus ponens is a rule of inference according to which all arguments of the form \"(1) p , (2) if p then q , (3) therefore q \" are valid, independent of what the terms p and q stand for. [13] In this sense, formal logic can be defined as the science of valid inferences. An alternative definition sees logic as the study of logical truths. [14] A proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true in all possible worlds and under all interpretations of its non-logical terms, like the claim \"either it is raining, or it is not\". [15] These two definitions of formal logic are not identical, but they are closely related. For example, if the inference from p to q is deductively valid then the claim \"if p then q \" is a logical truth. [16]\n\nFormal logic uses formal languages to express and analyze arguments. [17] They normally have a very limited vocabulary and exact syntactic rules. These rules specify how their symbols can be combined to construct sentences, so-called well-formed formulas. [18] This simplicity and exactness of formal logic make it capable of formulating precise rules of inference. They determine whether a given argument is valid. [19] Because of the reliance on formal language, natural language arguments cannot be studied directly. Instead, they need to be translated into formal language before their validity can be assessed. [20]\n\nThe term \"logic\" can also be used in a slightly different sense as a countable noun. In this sense, a logic is a logical formal system. Distinct logics differ from each other concerning the rules of inference they accept as valid and the formal languages used to express them. [21] Starting in the late 19th century, many", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia1.pdf" - }, - { - "text": "\n\n## Logic\n\nLogic is the study of correct reasoning. It includes both formal and informal logic. 
Formal logic is the study of deductively valid inferences or logical truths. It examines how conclusions follow from premises based on the structure of arguments alone, independent of their topic and content. Informal logic is associated with informal fallacies, critical thinking, and argumentation theory. Informal logic examines arguments expressed in natural language whereas formal logic uses formal language. When used as a countable noun, the term \"a logic\" refers to a specific logical formal system that articulates a proof system. Logic plays a central role in many fields, such as philosophy, mathematics, computer science, and linguistics.\n\nLogic studies valid forms of inference like modus ponens .\n\n\n\nLogic studies arguments, which consist of a set of premises that leads to a conclusion. An example is the argument from the premises \"it's Sunday\" and \"if it's Sunday then I don't have to work\" leading to the conclusion \"I don't have to work\". [1] Premises and conclusions express propositions or claims that can be true or false. An important feature of propositions is their internal structure. For example, complex propositions are made up of simpler propositions linked by logical vocabulary like (and) or (if...then). Simple propositions also have parts, like \"Sunday\" or \"work\" in the example. The truth of a proposition usually depends on the meanings of all of its parts. However, this is not the case for logically true propositions. They are true only because of their logical structure independent of the specific meanings of the individual parts.\n\nArguments can be either correct or incorrect. An argument is correct if its premises support its conclusion. Deductive arguments have the strongest form of support: if their premises are true then their conclusion must also be true. This is not the case for ampliative arguments, which arrive at genuinely new information not found in the premises. 
Many arguments in everyday discourse and the sciences are ampliative arguments. They are divided into inductive and abductive arguments. Inductive arguments are statistical generalization-such as inferring that all ravens are black, based on many individual observations of black ravens. [2] Abductive arguments are inferences to the best explanation-for example, when a doctor concludes that a patient has a certain disease, as the best explanation for the symptoms that they are observed to suffer. [3] Arguments that fall short of the standards of correct reasoning often embody fallacies. Systems of logic are theoretical frameworks for assessing the correctness of arguments.\n\nLogic has been studied since antiquity. Early approaches include Aristotelian logic, Stoic logic, Nyaya, and Mohism. Aristotelian logic focuses on reasoning in the form of syllogisms. It was considered the main system of logic in the Western world until it was replaced by modern formal logic, which has its roots in the work of late 19th-century mathematicians such as Gottlob Frege. Today, the most commonly used system is classical logic. It consists of propositional logic and first-order logic. Propositional logic only considers logical relations between full propositions. First-order logic also takes the internal parts of", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia1.pdf" - }, - { - "text": "incoming information. [154] Correct reasoning and the arguments it is based on follow the laws of probability, for example, the principle of conditionalization. Bad or irrational reasoning, on the other hand, violates these laws. [155]\n\n## Areas of research\n\nLogic is studied in various fields. In many cases, this is done by applying its formal method to specific topics outside its scope, like to ethics or computer science. [156] In other cases, logic itself is made the subject of research in another discipline. This can happen in diverse ways. 
For instance, it can involve investigating the philosophical assumptions linked to the basic concepts used by logicians. Other ways include interpreting and analyzing logic through mathematical structures as well as studying and comparing abstract properties of formal logical systems. [157]\n\n## Philosophy of logic and philosophical logic\n\nPhilosophy of logic is the philosophical discipline studying the scope and nature of logic. [59] It examines many presuppositions implicit in logic, like how to define its basic concepts or the metaphysical assumptions associated with them. [158] It is also concerned with how to classify logical systems and considers the ontological commitments they incur. [159] Philosophical logic is one of the areas within the philosophy of logic. It studies the application of logical methods to philosophical problems in fields like metaphysics, ethics, and epistemology. [160] This application usually happens in the form of extended or deviant logical systems. [161]\n\n## Metalogic\n\nMetalogic is the field of inquiry studying the properties of formal logical systems. For example, when a new formal system is developed, metalogicians may study it to determine which formulas can be proven in it. They may also study whether an algorithm could be developed to find a proof for each formula and whether every provable formula in it is a tautology. Finally, they may compare it to other logical systems to understand its distinctive features. A key issue in metalogic concerns the relation between syntax and semantics. The syntactic rules of a formal system determine how to deduce conclusions from premises, i.e. how to formulate proofs. The semantics of a formal system governs which sentences are true and which ones are false. This determines the validity of arguments since, for valid arguments, it is impossible for the premises to be true and the conclusion to be false. 
The relation between syntax and semantics concerns issues like whether every valid argument is provable and whether every provable argument is valid. Metalogicians also study whether logical systems are complete, sound, and consistent. They are interested in whether the systems are decidable and what expressive power they have. Metalogicians usually rely heavily on abstract mathematical reasoning when examining and formulating metalogical proofs. This way, they aim to arrive at precise and general conclusions on these topics. [162]\n\n## Mathematical logic\n\nThe term \"mathematical logic\" is sometimes used as a synonym of \"formal logic\". But in a more restricted sense, it refers to the study of logic within mathematics. Major subareas include model theory, proof theory, set theory, and computability theory. [164] Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic. However, it can also include attempts", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia1.pdf" - }, - { - "text": "new formal systems have been proposed. There are disagreements about what makes a formal system a logic. [22] For example, it has been suggested that only logically complete systems, like first-order logic, qualify as logics. For such reasons, some theorists deny that higher-order logics are logics in the strict sense. [23]\n\n## Informal logic\n\n\n\nFormal logic needs to translate natural language arguments into a formal language, like first-order logic, to assess whether they are valid. In this example, the letter \"c\" represents Carmen while the letters \"M\" and \"T\" stand for \"Mexican\" and \"teacher\". The symbol \" ∧ \" has the meaning of \"and\".\n\nWhen understood in a wide sense, logic encompasses both formal and informal logic. [24] Informal logic uses non-formal criteria and standards to analyze and assess the correctness of arguments. Its main focus is on everyday discourse. 
[25] Its development was prompted by difficulties in applying the insights of formal logic to natural language arguments. [26] In this regard, it considers problems that formal logic on its own is unable to address. [27] Both provide criteria for assessing the correctness of arguments and distinguishing them from fallacies. [28]\n\nMany characterizations of informal logic have been suggested but there is no general agreement on its precise definition. [29] The most literal approach sees the terms \"formal\" and \"informal\" as applying to the language used to express arguments. On this view, informal logic studies arguments that are in informal or natural language. [30] Formal logic can only examine them indirectly by translating them first into a formal language while informal logic investigates them in their original form. [31] On this view, the argument \"Birds fly. Tweety is a bird. Therefore, Tweety flies.\" belongs to natural language and is examined by informal logic. But the formal translation \"(1) ; (2) ; (3) \" is studied by formal logic. [32] The study of natural language arguments comes with various difficulties. For example, natural language expressions are often ambiguous, vague, and context-dependent. [33] Another approach defines informal logic in a wide sense as the normative study of the standards, criteria, and procedures of argumentation. In this sense, it includes questions about the role of rationality, critical thinking, and the psychology of argumentation. [34]\n\nAnother characterization identifies informal logic with the study of non-deductive arguments. In this way, it contrasts with deductive reasoning examined by formal logic. [35] Non-deductive arguments make their conclusion probable but do not ensure that it is true. An example is the inductive argument from the empirical observation that \"all ravens I have seen so far are black\" to the conclusion \"all ravens are black\". 
[36]\n\nA further approach is to define informal logic as the study of informal fallacies. [37] Informal fallacies are incorrect arguments in which errors are present in the content and the context of the argument. [38] A false dilemma, for example, involves an error of content by excluding viable options. This is the case in the fallacy \"you are either with us or against us; you are not with us; therefore, you are against us\". [39] Some theorists state that formal logic studies the general form of arguments while informal logic studies particular instances of arguments. Another approach is to hold that formal logic only considers the role of", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia1.pdf" - }, - { - "text": "sentence would be true or false. One of its central methodological assumptions is the principle of compositionality. It states that the meaning of a complex expression is determined by the meanings of its parts and how they are combined. For example, the meaning of the verb phrase \"walk and sing\" depends on the meanings of the individual expressions \"walk\" and \"sing\". Many theories in formal semantics rely on model theory. This means that they employ set theory to construct a model and then interpret the meanings of expression in relation to the elements in this model. For example, the term \"walk\" may be interpreted as the set of all individuals in the model that share the property of walking. Early influential theorists in this field were Richard Montague and Barbara Partee, who focused their analysis on the English language. [173]\n\n## Epistemology of logic\n\nThe epistemology of logic studies how one knows that an argument is valid or that a proposition is logically true. [174] This includes questions like how to justify that modus ponens is a valid rule of inference or that contradictions are false. [175] The traditionally dominant view is that this form of logical understanding belongs to knowledge a priori. 
[176] In this regard, it\n\nConjunction (AND) is one of the basic operations of Boolean logic. It can be electronically implemented in several ways, for example, by using two transistors.\n\n\n\nis often argued that the mind has a special faculty to examine relations between pure ideas and that this faculty is also responsible for apprehending logical truths. [177] A similar approach understands the rules of logic in terms of linguistic conventions. On this view, the laws of logic are trivial since they are true by definition: they just express the meanings of the logical vocabulary. [178]\n\nSome theorists, like Hilary Putnam and Penelope Maddy, object to the view that logic is knowable a priori. They hold instead that logical truths depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world. According to this view, they may be explored by studying general patterns of the fundamental sciences. For example, it has been argued that certain insights of quantum mechanics refute the principle of distributivity in classical logic, which states that the formula is equivalent to . This claim can be used as an empirical argument for the thesis that quantum logic is the correct logical system and should replace classical logic. [179]\n\n## History\n\nLogic was developed independently in several cultures during antiquity. One major early contributor was Aristotle, who developed term logic in his Organon and Prior Analytics . [183] He was responsible for the introduction of the hypothetical syllogism [184] and temporal modal logic. [185] Further innovations include inductive logic [186] as well as the discussion of new logical concepts such as terms, predicables, syllogisms, and propositions. Aristotelian logic was highly regarded in classical and medieval times, both in Europe and the Middle East. It remained in wide use in the West until the early 19th century. 
[187] It has now been superseded by later work, though many of its key insights are still present in modern systems of logic. [188]", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia1.pdf" - }, - { - "text": "mathematics, it does not include logical vocabulary relevant to many other topics of philosophical importance. Examples of concepts it overlooks are the contrast between necessity and possibility and the problem of ethical obligation and permission. Similarly, it does not address the relations between past, present, and future. [119] Such issues are addressed by extended logics. They build on the basic intuitions of classical logic and expand it by introducing new logical vocabulary. This way, the exact logical approach is applied to fields like ethics or epistemology that lie beyond the scope of mathematics. [120]\n\n## Propositional logic\n\nPropositional logic comprises formal systems in which formulae are built from atomic propositions using logical connectives. For instance, propositional logic represents the conjunction of two atomic propositions and as the complex formula . Unlike predicate logic where terms and predicates are the smallest units, propositional logic takes full propositions with truth values as its most basic component. [121] Thus, propositional logics can only represent logical relationships that arise from the way complex propositions are built from simpler ones. But it cannot represent inferences that result from the inner structure of a proposition. [122]\n\n## First-order logic\n\nFirst-order logic includes the same propositional connectives as propositional logic but differs from it because it articulates the internal structure of propositions. This happens through devices such as singular terms, which refer to particular objects, predicates, which refer to properties and relations, and quantifiers, which treat notions like \"some\" and \"all\". 
[123] For example, to express the proposition \"this raven is black\", one may use the predicate for the property \"black\" and the singular term referring to the raven to form the expression . To express that some objects are black, the existential quantifier is combined\n\n\n\nGottlob Frege's Begriffschrift introduced the notion of quantifier in a graphical notation, which here represents the judgment that is true.\n\nwith the variable to form the proposition . First-order logic contains various rules of inference that determine how expressions articulated this way can form valid arguments, for example, that one may infer from . [124]\n\n## Extended\n\nExtended logics are logical systems that accept the basic principles of classical logic. They introduce additional symbols and principles to apply it to fields like metaphysics, ethics, and epistemology. [125]\n\n## Modal logic\n\nModal logic is an extension of classical logic. In its original form, sometimes called \"alethic modal logic\", it introduces two new symbols: expresses that something is possible while expresses that something is necessary. [126] For example, if the formula stands for the sentence \"Socrates is a banker\" then the formula articulates the sentence \"It is possible that Socrates is a banker\". [127] To include these symbols in the logical formalism, modal logic introduces new rules of inference that govern", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia1.pdf" - }, - { - "text": "Distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails). [77]\n\n## Logic\n\nFormal logic is used for reasoning and knowledge representation. 
[78] Formal logic comes in two main forms: propositional logic (which operates on statements that are true or false and uses logical connectives such as \"and\", \"or\", \"not\" and \"implies\") [79] and predicate logic (which also operates on objects, predicates and relations and uses quantifiers such as \" Every X is a Y \" and \"There are some X s that are Y s\"). [80]\n\nIllustration of gradient descent for 3 different starting points; two parameters (represented by the plan coordinates) are adjusted in order to minimize the loss function (the height)\n\n\n\nDeductive reasoning in logic is the process of proving a new statement (conclusion) from other statements that are given and assumed to be true (the premises). [81] Proofs can be structured as proof trees, in which nodes are labelled by sentences, and children nodes are connected to parent nodes by inference rules.\n\nGiven a problem and a set of premises, problem-solving reduces to searching for a proof tree whose root node is labelled by a solution of the problem and whose leaf nodes are labelled by premises or axioms. In the case of Horn clauses, problem-solving search can be performed by reasoning forwards from the premises or backwards from the problem. [82] In the more general case of the clausal form of first-order logic, resolution is a single, axiom-free rule of inference, in which a problem is solved by proving a contradiction from premises that include the negation of the problem to be solved. [83]\n\nInference in both Horn clause logic and first-order logic is undecidable, and therefore intractable. However, backward reasoning with Horn clauses, which underpins computation in the logic programming language Prolog, is Turing complete. Moreover, its efficiency is competitive with computation in other symbolic programming languages. [84]\n\nFuzzy logic assigns a \"degree of truth\" between 0 and 1. It can therefore handle propositions that are vague and partially true. 
[85]\n\nNon-monotonic logics, including logic programming with negation as failure, are designed to handle default reasoning. [28] Other specialized versions of logic have been developed to describe many complex domains.\n\n## Probabilistic methods for uncertain reasoning\n\nMany problems in AI (including in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics. [86] Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, [87] and information value theory. [88] These tools include models such as Markov decision processes, [89] dynamic decision networks, [90] game theory and mechanism design. [91]", - "page_start": 5, - "page_end": 5, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Bertrand Russell made various contributions to mathematical logic. [163]\n\n\n\nto use logic to analyze mathematical reasoning or to establish logic-based foundations of mathematics. [165] The latter was a major concern in early 20th-century mathematical logic, which pursued the program of logicism pioneered by philosopherlogicians such as Gottlob Frege, Alfred North Whitehead, and Bertrand Russell. Mathematical theories were supposed to be logical tautologies, and their program was to show this by means of a reduction of mathematics to logic. Many attempts to realize this program failed, from the crippling of Frege's project in his Grundgesetze by Russell's paradox, to the defeat of Hilbert's program by Gödel's incompleteness theorems. [166]\n\nSet theory originated in the study of the infinite by Georg Cantor, and it has been the source of many of the most challenging and important issues in mathematical logic. 
They include Cantor's theorem, the status of the Axiom of Choice, the question of the independence of the continuum hypothesis, and the modern debate on large cardinal axioms. [167]\n\nComputability theory is the branch of mathematical logic that studies effective procedures to solve calculation problems. One of\n\nits main goals is to understand whether it is possible to solve a given problem using an algorithm. For instance, given a certain claim about the positive integers, it examines whether an algorithm can be found to determine if this claim is true. Computability theory uses various theoretical tools and models, such as Turing machines, to explore this type of issue. [168]\n\n## Computational logic\n\nComputational logic is the branch of logic and computer science that studies how to implement mathematical reasoning and logical formalisms using computers. This includes, for example, automatic theorem provers, which employ rules of inference to construct a proof step by step from a set of premises to the intended conclusion without human intervention. [169] Logic programming languages are designed specifically to express facts using logical formulas and to draw inferences from these facts. For example, Prolog is a logic programming language based on predicate logic. [170] Computer scientists also apply concepts from logic to problems in computing. The works of Claude Shannon were influential in this regard. He showed how Boolean logic can be used to understand and implement computer circuits. [171] This can be achieved using electronic logic gates, i.e. electronic circuits with one or more inputs and usually one output. The truth values of propositions are represented by voltage levels. In this way, logic functions can be simulated by applying the corresponding voltages to the inputs of the circuit and determining the value of the function by measuring the voltage of the output. 
[172]\n\n## Formal semantics of natural language\n\nFormal semantics is a subfield of logic, linguistics, and the philosophy of language. The discipline of semantics studies the meaning of language. Formal semantics uses formal tools from the fields of symbolic logic and mathematics to give precise theories of the meaning of natural language expressions. It understands meaning usually in relation to truth conditions, i.e. it examines in which situations a", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia1.pdf" - }, - { - "text": "what role they play in inferences. One rule of inference states that, if something is necessary, then it is also possible. This means that follows from . Another principle states that if a proposition is necessary then its negation is impossible and vice versa. This means that is equivalent to . [128]\n\nOther forms of modal logic introduce similar symbols but associate different meanings with them to apply modal logic to other fields. For example, deontic logic concerns the field of ethics and introduces symbols to express the ideas of obligation and permission, i.e. to describe whether an agent has to perform a certain action or is allowed to perform it. [129] The modal operators in temporal modal logic articulate temporal relations. They can be used to express, for example, that something happened at one time or that something is happening all the time. [129] In epistemology, epistemic modal logic is used to represent the ideas of knowing something in contrast to merely believing it to be the case. [130]\n\n## Higher order logic\n\nHigher-order logics extend classical logic not by using modal operators but by introducing new forms of quantification. [131] Quantifiers correspond to terms like \"all\" or \"some\". In classical first-order logic, quantifiers are only applied to individuals. 
The formula \" \" ( some apples are sweet) is an example of the existential quantifier \" \" applied to the individual variable \" \". In higherorder logics, quantification is also allowed over predicates. This increases its expressive power. For example, to express the idea that Mary and John share some qualities, one could use the formula \" \". In this case, the existential quantifier is applied to the predicate variable \" \". [132] The added expressive power is especially useful for mathematics since it allows for more succinct formulations of mathematical theories. [43] But it has drawbacks in regard to its meta-logical properties and ontological implications, which is why first-order logic is still more commonly used. [133]\n\n## Deviant\n\nDeviant logics are logical systems that reject some of the basic intuitions of classical logic. Because of this, they are usually seen not as its supplements but as its rivals. Deviant logical systems differ from each other either because they reject different classical intuitions or because they propose different alternatives to the same issue. [134]\n\nIntuitionistic logic is a restricted version of classical logic. [135] It uses the same symbols but excludes some rules of inference. For example, according to the law of double negation elimination, if a sentence is not not true, then it is true. This means that follows from . This is a valid rule of inference in classical logic but it is invalid in intuitionistic logic. Another classical principle not part of intuitionistic logic is the law of excluded middle. It states that for every sentence, either it or its negation is true. This means that every proposition of the form is true. [135] These deviations from classical logic are based on the idea that truth is established by verification using a proof. 
Intuitionistic logic is especially prominent in the field of constructive mathematics, which emphasizes the need to find or construct a specific example to prove its existence. [136]", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia1.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed8.pdf", - "query": "What was the mean correctness score for LLM-generated handoff notes ?", - "target_page": 7, - "target_passage": "Correctness 4.52", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "\n\n\n\n## Original Investigation | Emergency Medicine\n\n## DevelopingandEvaluatingLargeLanguageModel-GeneratedEmergencyMedicine HandoffNotes\n\nVince Hartman, MS; Xinyuan Zhang, PhD; Ritika Poddar, MS; Matthew McCarty, MD; Alexander Fortenko, MD, MPH; Evan Sholle, MS; Rahul Sharma, MD, MBA; Thomas Campion Jr, PhD; Peter A. D. Steel, MA, MBBS\n\n## Abstract\n\nIMPORTANCE An emergency medicine (EM) handoff note generated by a large language model (LLM) has the potential to reduce physician documentation burden without compromising the safety of EM-to-inpatient (IP) handoffs.\n\nOBJECTIVE To develop LLM-generated EM-to-IP handoff notes and evaluate their accuracy and safety compared with physician-written notes.\n\nDESIGN, SETTING, AND PARTICIPANTS This cohort study used EM patient medical records with acute hospital admissions that occurred in 2023 at NewYork-Presbyterian/Weill Cornell Medical Center. A customized clinical LLM pipeline was trained, tested, and evaluated to generate templated EM-to-IP handoff notes. Using both conventional automated methods (ie, recall-oriented understudy for gisting evaluation [ROUGE], bidirectional encoder representations from transformers score [BERTScore], and source chunking approach for large-scale inconsistency evaluation [SCALE]) and a novel patient safety-focused framework, LLM-generated handoff notes vs physician-written notes were compared. 
Data were analyzed from October 2023 to March 2024.\n\nEXPOSURE LLM-generated EM handoff notes.\n\nMAINOUTCOMESANDMEASURES LLM-generated handoff notes were evaluated for (1) lexical similarity with respect to physician-written notes using ROUGE and BERTScore; (2) fidelity with respect to source notes using SCALE; and (3) readability, completeness, curation, correctness, usefulness, and implications for patient safety using a novel framework.\n\nRESULTS In this study of 1600 EM patient records (832 [52%] female and mean [SD] age of 59.9 [18.9] years), LLM-generated handoff notes, compared with physician-written ones, had higher ROUGE(0.322 vs 0.088), BERTScore (0.859 vs 0.796), and SCALE scores (0.691 vs 0.456), indicating the LLM-generated summaries exhibited greater similarity and more detail. As reviewed by 3 board-certified EM physicians, a subsample of 50 LLM-generated summaries had a mean (SD) usefulness score of 4.04 (0.86) out of 5 (compared with 4.36 [0.71] for physician-written) and mean (SD) patient safety scores of 4.06 (0.86) out of 5 (compared with 4.50 [0.56] for physician-written). None of the LLM-generated summaries were classified as a critical patient safety risk.\n\nCONCLUSIONSANDRELEVANCE In this cohort study of 1600 EM patient medical records, LLM-generated EM-to-IP handoff notes were determined superior compared with physician-written summaries via conventional automated evaluation methods, but marginally inferior in usefulness\n\n(continued)\n\n\n\nOpenAccess. This is an open access article distributed under the terms of the CC-BY License.\n\nJAMANetwork Open. 2024;7(12):e2448723. 
doi:10.1001/jamanetworkopen.2024.48723\n\n(Reprinted)\n\n## KeyPoints\n\nQuestion Can a large language model (LLM) generate emergency medicine (EM)-to-inpatient (IP) handoff notes that are useful and safe for EM care?\n\nFindings In this cohort study of 1600 EMpatient medical records using a novel evaluation framework, the LLM-generated EM-to-IP handoff notes had a mean usefulness of 4.04 out of 5 (compared with 4.36 for physician-written) and a mean patient safety of 4.06 out of 5 (compared with 4.50 for physician-written) with no critical patient safety risks.\n\nMeaning These findings suggest the value of a manual, patient safetyfocused clinical evaluation of LLM models and the potential of LLM-generated handoff notes to create a new standard of care in EM.\n\n\n\n+\n\n\n\nInvited Commentary\n\n## + Supplemental content\n\nAuthor affiliations and article information are listed at the end of this article.", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed8.pdf" - }, - { - "text": "curation (4.24 [0.58] vs 4.76 [0.48]), readability (4.00 [0.64] vs 4.64 [0.49]), correctness (4.52 [0.64] vs 4.90 [0.39]), and patient safety (4.06 [0.86] vs 4.50 [0.56]).\n\nIn extrapolating the estimated worst-case scenario impact of these performance gaps on patient safety, the 3 expert clinicians determined none of the identified model performance issues were anticipated to create a level 1 (life-threatening) safety event (see examples of worst case scenarios in eTable 2 in Supplement 1). While the incompleteness and faulty logic identified in the automated summaries received mean (SD) safety scores of 4.20 (0.93) and 4.60 (0.75), respectively; 13 (8.7%) and 11 (7.3%) of these events, respectively, were determined to have the potential to create a level 2 patient safety event following EM-to-IP handoff, substantially higher compared with the physician-written summaries (0%). 
All of the 5 hallucinations had patient safety scores between 4 and 5 and a mean (SD) score of 4.96 (0.14), which is defined as the hallucinations posing mild to no patient safety risk. LLM-generated notes demonstrated a higher rate of incorrectness (9.6%) compared with the physician-written notes (2.0%), although very few hallucinations.\n\nICC were 0.79 for completeness, 0.70 for curation, 0.59 for readability, 0.76 for correctness, and 0.74 for usefulness. These numbers suggest good reliability of agreement for completeness, curation, correctness, and usefulness and suggest fair reliability for readability among the 3 raters.\n\n## Discussion\n\nThe study demonstrated success in generating EM-to-IP handoff notes using both a fine tuned, pretrained LLM and rule-based approaches within an end user-developed note template. It is important to note that (largely due to time constraints within the EM care delivery model) the performance of EM-to-IP handoff notes was not the current standard of care in EM. The study site's unique electronic handoff process enabled a comparison between physician-written and LLM-generated handoff notes. Traditional automated evaluations of the model output suggested\n\nTable 3. Mean Clinical Quality Evaluation, Large Language Model (LLM)-Generated and Physician-Written", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed8.pdf" - }, - { - "text": "evaluation frameworks may not address the anticipated effect LLM performance limitations could have on patient safety. 38-41\n\nIn this study, we aim to expand on prior work of clinical summarization to rigorously evaluate the outcomes of a fine-tuned model developed to generate accurate and safe summaries of the care rendered during an ED visit, with the long-term goal of integrating automated, structured EM-to-IP handoff notes into an EHR-based electronic handoff admission workflow (see eAppendix 1 in Supplement 1). 
We fine-tune pretrained LLMs on well curated datasets of structured and unstructured EHR data from the ED encounter to summarize the patient's ED care. We improved the correctness of model generations and customized the summaries in a structured format designed by a team of EM and internal medicine physician leaders for optimal usefulness. We proposed a novel patient safety-focused LLM evaluation framework to examine the LLM-generated handoff notes' quality and accuracy and the downstream patient safety implications of any identified inaccuracies. To evaluate noninferiority, we compared the LLM-generated handoff notes with the preexisting physician-written EM-to-IP handoff notes as the active control, using both the proposed patient safety-focused clinical evaluation framework and automated benchmark-driven methods. We used the physician-written EM-to-IP handoff notes as the active control and used the scores from both evaluation frameworks for the margin of inferiority of the intervention.\n\n## Methods\n\n## Data Collection\n\nThe study, with review and approval from the Weill Cornell institutional review board (IRB), was conducted at an urban academic 840-bed quaternary-care hospital in New York City, with approximately 71 000 adult ED visits and 21 000 admissions annually. EHR data from 1600 individual EM patient encounters leading to acute hospital admission were randomly selected from visits occurring between April and September of 2023. We limited our analysis to EM patient encounters occurring after April 2023, as the study site had updated the EM-handoff at that time. Encounters before this date used an earlier version of the EM-handoff note that would have provided suboptimal data for training labels. We used these data to fine-tune a pretrained LLM, which then generated an abstractive EM-handoff note. 
For the 1600 patient encounters (the study participants), Weill Cornell Medicine IRB approved a waiver of informed consent because the study used retrospective data and posed minimal risk to patients. We used Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guidelines.\n\n## EM-to-IP Handoff Note Template\n\nThe EM-to-IP handoff note template used in the study is a replication of the current manual handoff note structure used at the study site. The generated EM handoff note consists of components generated by a rule-based pattern-matching approach (laboratory tests, vitals, medications, consult orders, and radiology impressions) and components generated by the trained abstractive summarization model (history of present illness [HPI], differential diagnoses, immediate care plans, in-ED events, and disposition). Each summary also included a header with the timestamp of ED triage and discharge, patient's birth date, patient's unique identifier, patient's encounter number, and the total time of patient's stay in the ED.\n\n## Data Curation for Automated ED Note Generation", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed8.pdf" - }, - { - "text": "subsequently evaluated 2 ED-to-inpatient handoff notes for each patient: (1) the physician-written note and (2) the LLM-generated note.\n\nOnaLikert scale of 1 to 5, where 1 is unacceptable and 5 is excellent, the 3 physicians rated the completeness, curation, readability, and correctness of the summary as shown in eTable 1 in Supplement 1. Physicians rated the usefulness of the summary, defined as the capability of the summary being incorporated into a workflow where a physician would make edits before final completion, mitigating potential future self-referential learning loops and the downstream adverse consequences. 
51 Likewise, the raters assessed potential patient safety implications of unmitigated model errors using a scale from 1 to 5, where 1 denotes life-threatening risks and 5 denotes no identified patient safety risk for completeness, curation, readability, and the 4 subcategories within correctness (hallucination, faulty logic, knowledge gap, and bias), as well as the overall patient safety risk. 45 Evaluators arrived at prestudy consensus that a usefulness Likert score of at least a 3 out of 5 indicated that the LLM-generated summary likely demonstrated baseline acceptability for such a workflow. To extrapolate a theoretical worst case scenario, the physicians rated the safety of the LLM-generated summary as defined as the capability of the summary to fully replace a physicianwritten note (unmitigated).\n\nTo improve consistency and agreement, the 3 reviewers met to familiarize themselves with the framework and evaluated 10 separate cases from the test dataset that were not included in the clinical evaluation results. Additionally, after independently scoring the summaries, they met to ensure consensus interpretation of the multidimensional scoring framework. Interrater reliability was calculated using intraclass correlation coefficient (ICC), using a 2-way random effects model for consistency with the Pingouin statistical package version 0.5.4 in Python (Python Software Foundation). The ICC measures the similarity of the 3 raters to confirm the consistency and validity of the evaluation protocol; the scores are from 0 to 1, where 1 indicates unanimous agreement and 0 represents no agreement. 52 Data were analyzed from October 2023 to March 2024.\n\n## Results\n\n## AutomatedTasks\n\nOf 1600 patients, the mean (SD) age was 59.8 (18.9) years and 832 (52%) were female. In Table 2 , ROUGE and BERTScore compare the summaries with the testing set from our annotations, and SCALE score compares the summaries with the source notes. 
From automatic evaluation results, we observed that LLM-generated summaries had better scores than the physician summaries, such that ROUGE-2 was 0.322 vs 0.088, BERT-precision was 0.859 vs 0.796, and SCALE was 0.691 vs 0.456, suggesting the LLM-generated summaries were more similar and more detailed than the physician summaries.\n\n## Clinical Evaluation Tasks\n\nThe clinical evaluation results for LLM-generated summaries and physician-written summaries are shown in Table 3 and Table 4 . The mean clinical quality scores of the automated summaries are in a comparable range (4-5) to those of the physician summaries. However, the automated summaries were observed to be of lower quality compared with the physician-written summaries with regards to mean (SD) usefulness (4.04 [0.85] vs 4.36 [0.71]), completeness (4.00 [0.88] vs 4.16 [0.84]),\n\nTable 2. Automated Evaluation Scores, Large Language Model (LLM)-Generated and Physician-Written\n\n| Summary type | R-1 a | R-2 a | R-L a | BERT-p | BERT-r | SCALE |\n|-------------------|---------|---------|---------|----------|----------|---------|\n| LLM-generated | 0.494 | 0.322 | 0.391 | 0.859 | 0.876 | 0.691 |\n| Physician-written | 0.251 | 0.088 | 0.154 | 0.796 | 0.827 | 0.456 |", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed8.pdf" - }, - { - "text": "## Data Curation for Automated ED Note Generation\n\nThe EHR data were bifurcated into 2 datasets linked by the patient encounter number: 1 for the rulebased pattern-matching approach and the other for the LLM fine-tuning discussed in further detail in eAppendix 1 in Supplement 1. The rule-based framework was designed by the 3 board certified EM physicians (M.M., A.F., and P.S.). Fine tuning of the pretrained LLM consisted of the notes in Table 1 : EMclinician notes, consultation notes, EM progress note entries, and EM procedure notes. The EM-to-IP handoff notes were used as the labels. 
As the preexisting labels were of variable quality for\n\n\n\n(Reprinted)", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed8.pdf" - }, - { - "text": "superior performance. However, while the manual clinical evaluation demonstrated the majority of the LLM-generated notes were of promising comparative quality (scores of 4-5), they were, on average, inferior to the clinician-written notes.\n\nOur novel clinical evaluation's findings suggest the majority of identified quality limitations and incorrectness would have minimal impact on patient safety, even when extrapolated to the worstcase scenario of the LLM-generated summary content not being reviewed and edited by a clinician before completion. This was designed to address contemporary LLM concerns of user trust, reliance and expertise. 49 As such, none of the incorrect output text elements reached life-threatening risk. However, incompleteness and faulty logic identified in the automated summaries were not always negligible, with just under 1 in 10 of these performance gaps determined to have the potential to create significant patient safety risk compared with the physician-written summaries. These critical implementation safety findings will inform (1) directionality of further model refinement; (2) further clinical evaluation of postrefinement model output; and (3) irrespective of downstream model performance, an EHR-implementation plan constrained to a user-interface design that will allow EM clinicians to review and edit the LLM-generated handoff note as a draft before finalizing (see eAppendix 1 in Supplement 1). This physician-in-the-loop process has also been identified as critical in other recent work implementing LLMs into clinical workflows. 
29,53\n\nWhile the automated methods of SCALE and MPNet-based sentence transformers demonstrated a cursory view of the faithfulness performance of the models, the clinical evaluation provided the nuanced context of the true factuality of our system on a word by word level. When comparing with the source notes, the automatic evaluations rewarded the summaries with more details, more semantic similarities, and more entailment logics, while physician-written notes tended to be more concise with more shortcuts and clinical jargon, which are penalized by automatic evaluation metrics. In addition, LLM-generated summaries are completely based on the source notes, while physician-written summaries are often composed with additional knowledge that cannot be found from the source notes.\n\nThe divergence of the automated and clinical evaluation results of an LLM intended for integration into a critical clinical workflow is an important finding. First, this observed finding validates the importance of clinical evaluations in addition to conventional automated evaluations to determine accuracy. 54 While other LLM clinical evaluation frameworks have been described to measure conventional model output quality categories (such as incorrectness domains and other performance gaps), 30,35 to our knowledge, our novel framework is the first to incorporate anticipated patient safety implications for each individual category deficiency.\n\n## Limitations\n\nThere were several limitations to the study that were primarily driven from constraints of infrastructure, as well as regulations, legal governance, and labor requirements. At the study location, the data were required to remain on premise at all times and the infrastructure that was provided had a GPU limitation of 24 GB. Given these infrastructure restrictions, the best open-source model available during the study was LLM 2. 
Furthermore, we were not able to demonstrate the comparable difference between our fine-tuned LLM 2 model and third party LLMs 32,55 because of the study location's restrictions and concerns with the data retention policies. Nevertheless, our study demonstrates the potential capability of integrating state-of-the-art open source LLMs at organizations that are less open to integrating third-party LLMs.", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed8.pdf" - }, - { - "text": "Abstract (continued)\n\nand safety via a novel evaluation framework. This study suggests the importance of a physician-inloop implementation design for this model and demonstrates an effective strategy to measure preimplementation patient safety of LLM models.\n\nJAMANetwork Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723\n\n## Introduction\n\nHandoffs, where patient information is exchanged between health professionals during a transfer of clinical responsibility, have been identified as a critical source of medical errors. 1,2 The Joint Commission, the Accreditation Council for Graduate Medical Education, and the Association of American Medical Colleges have all recommended the development of high-quality and standardized handoff processes to address the substantial patient risk of this ubiquitous event. 3,4 Implementing handoff tools has previously demonstrated significant reductions in medical errors. 5,6 High-quality handoffs from emergency medicine (EM) to inpatient (IP) services (EM-to-IP) are challenged by medical complexity, diagnostic uncertainty, rapidly evolving care plans, and time constraints. 7-10 The EM-to-IP handoff structure is not well standardized, frequently communicated verbally, and poorly adhered to in emergency departments (EDs), including in medical centers with formalized handoff systems. 
11-14 Prior research has demonstrated that suboptimal EM-to-IP handoff is associated with adverse events, EM leaders and front-line clinicians themselves view the EM-to-IP handoff as high risk, and an electronic health record (EHR)-based technology is commonly mentioned as the most desired assistive tool in improving ED transitions of care. 15-18 Limited work to date has demonstrated EMelectronic handoff tools as feasible, efficient, and effective. 19-21 In April 2023, EM and internal medicine leadership of the study site collaboratively developed and launched a mandatory, EHR-based handoff workflow via a standardized EM-to-IP handoff note template, designed for realtime completion by the EM care team at time of admission. At 3 and 6 months postlaunch, informal evaluation of new EM-to-IP handoff notes through random medical record review and unstructured clinician feedback sessions revealed variable completeness, quality, and subsequent usefulness of the handoff notes.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed8.pdf" - }, - { - "text": "| Surrogate Target | R MF | ˆ R SW R CLS | R LLM | R SW | ˆ R MF R CLS | R LLM | R SW | ˆ R CLS S FM | R LLM | R SW | ˆ R LLM R MF | R CLS |\n|--------------------|------------|----------------|------------|------------|----------------|------------|------------|----------------|------------|------------|----------------|------------|\n| LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 |\n| MT-Bench | - 0 . 1 | - 0 . 1 | 0 . 0 | - 0 . 1 | - 0 . 1 | 0 . 0 | - 0 . 1 | 0 . 0 | 0 . 1 | - 0 . 2 | - 0 . 1 | - 0 . 2 |\n| MMLU | - 0 . 1 | 0 . 3 | - 0 . 2 | 4 . 8 | 1 . 0 | 0 . 5 | 2 . 5 | - 1 . 3 | - 0 . 8 | 2 . 6 | - 0 . 9 | 0 . 3 |\n| GSM8K | 14 . 9 | 9 . 6 | 15 . 2 | 18 . 6 | 13 . 8 | 14 . 7 | 13 . 4 | 6 . 8 | 12 . 6 | 13 . 6 | 11 . 3 | 10 . 
4 |\n| LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 |\n| MT-Bench | - 0 . 1 | - 0 . 1 | - 0 . 1 | - 0 . 2 | - 0 . 2 | - 0 . 2 | - 0 . 1 | - 0 . 1 | 0 . 0 | - 0 . 2 | - 0 . 2 | - 0 . 2 |\n| MMLU | 1 . 6 | 4 . 0 | 4 . 2 | 7 . 9 | 5 . 0 | 4 . 4 | 5 . 0 | - 2 . 9 | 3 . 2 | 5 . 2 | - 0 . 9 | 3 . 8 |\n| GSM8K | 13 . 6 | 8 . 7 | 18 . 5 | 18 . 9 | 14 . 4 | 18 . 3 | 13 . 1 | 4 . 0 | 15 . 5 | 11 . 3 | 8 . 4 | 10 . 8 |\n| LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 |\n| MT-Bench | 0 . 2 | 0 . 0 | 0 . 1 | - 0 . 1 | - 0 . 1 | 0 . 0 | 0 . 0 | 0 . 2 | 0 . 2 | - 0 . 1 | 0 . 1 | - 0 . 1 |\n| MMLU | 5 . 0 | 6 . 8 | 5 . 8 | 11 . 3 | 9 . 1 | 4 . 7 | 8 . 1 | - 3 . 7 | 4 . 8 | 7 . 8 | 0 . 1 | 7 . 2 |\n| GSM8K | 20 . 5 | 13 . 4 | 20 . 9 | 24 . 3 | 18 . 6 | 21 . 6 | 17 . 9 | 11 . 2 | 18 . 9 | 16 . 7 | 15 . 2 | 14 . 2 |\n\nTable 7: Differences between average benchmark specific scores of responses to the original and confounded queries, when the confounder gadget was generated for a different surrogate router than the target (black-box setting) for three LLM pairs. Positive values indicate a higher average score for responses to the confounded queries; higher values are better for the attacker. Results are averaged across gadgets. Standard errors were omitted for readability and are on average 0 . 1 , 0 . 8 , and 1 . 8 for MT-bench, MMLU and GSM8K, respectively. Aligned with the white-box setting, results show almost no decrease in performance, and improvement when there is a performance gap for the LLM pair.\n\nResults for LLM pair 4. As discussed in Section 5, we replace the strong model that was used by Ong et al. 
[47], GPT-41106-preview (rank 28 in the Chatbot Arena leaderboard [1, 21]), with the open-sourced Llama-3.1-8B (rank 58) to reduce the costs of our extensive set of evaluations. In this section we perform a smaller-scale evaluation of the quality-enhancing attack performance when using GPT as the strong model, i.e., LLM pair 4. We evaluate this setting using three of the n = 10 confounder gadgets for each router.\n\n## 7 Rerouting Commercial Routers", - "page_start": 11, - "page_end": 11, - "source_file": "arxiv1.pdf" - }, - { - "text": "## References and notes", - "page_start": 140, - "page_end": 140, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "In recent years there has been an accelerated interest in using LLMs to automate clinical tasks in an effort to unburden physicians and reduce burnout. 22 Computer-generated text within clinical notes using natural language processing (NLP) have been overall shown to improve note completion rates, physician satisfaction, and patient outcomes. 23 Since 2018, NLP has made rapid advancements in health care with the discovery of the transformer model architecture, the building block of large language models (LLMs). LLMs can automate workflows such as discharge summaries, 24 radiology reports, 25 patient messaging, 26 after-visit summaries, 27 and ambient dictation 28 with various levels of perceived quality in each workflow. 29 LLMs are particularly effective at summarizing large unstructured clinical datasets, such as ED patient medical records. 30 Acommonconcern of LLMs is their ability to hallucinate data, or LLMs generating output text that is not factually consistent with the original source content. 31 Much work has been done in health care to reduce hallucinations through building larger-parameter models trained on trillions of datasets, and then instruction finetuning the LLM on smaller, well-curated datasets. 
32,33 LLMs can also be designed with explainability by citing inferred content back to the reference source notes. 34 For short-context length notes, using few-shot prompt engineering approaches with large language models like GPT-4 can produce summaries that outperform standard physician documentation in completeness and error frequency. 35 However, factual inconsistencies in the summaries produced by LLMs increase as the context length increases, 36 and for medium- to long-context tasks, fine-tuning an open-source model has been shown to perform better than a prompt-learning approach. 37 In prior work, members of this study team demonstrated 62% of LLM-generated hospital course summaries met standard-of-care for a formal inpatient discharge summary. 24 However, recently published clinical\n\n\n\n(Reprinted)", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed8.pdf" - } - ] - }, - { - "references": { - "source_file": "legal1_opengouvernementlicense.pdf", - "query": "What are the improvements made to possible to the HadGEM3 and CMIP5 climate change models by UKCP18 ?", - "target_page": 1, - "target_passage": "mprovements include better representation of the past climate, the inclusion of more cloud and aerosol processes and the ability to model important climate phenomena ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "\n\n\n\n\n\n## UK CLIMATE PROJECTIONS: A PROJECT OVERVIEW\n\n\n\n## What is UKCP18 and why do we need it?\n\nFollowing the historic Paris Agreement on Climate Change in December 2015, the Department of Environment, Food and Rural Affairs announced a major upgrade to the UK Climate Projections.\n\nThe UKCP18 project will build upon the current set of projections (UKCP09) to provide the most up-todate assessment of how the climate of the UK may change over the 21st century. 
This information will be essential to future Climate Change Risk Assessments 1 and to equip the UK with information to help adapt to the challenges and opportunities of climate change in line with the National Adaptation Programme 2 .\n\nOrganisations and individual users will use UKCP18 to inform risk assessments and adaptation plans to ensure they are resilient to extreme weather and climate change. Some organisations will use UKCP18 in responding to the Adaptation Reporting Power 3 for example.\n\n\n\n## What improvements does UKCP18 deliver?\n\nUKCP18 will benefit from a range of developments since the release of UKCP09, including:\n\n- · Greater understanding of user needs as a result of the adaptation community's use of UKCP09 projections and the subsequent feedback - user workshops indicated that users supported the continued use of probabilistic projections and the importance of spatially coherent information 4 .\n- · Advances in climate models in recent years, such as the Met Office Hadley Centre HadGEM3 5 model and the CMIP5 6 set of models. Improvements include better representation of the past climate, the inclusion of more cloud and aerosol processes and the ability to model important climate phenomena such as the El-Niño Southern Oscillation (ENSO).\n- · Groundbreaking Met Office research on modelling of extreme events in high resolution regional climate models 7 .\n- · The increased quantity and range of observations available since 2009.\n- · Use of the new Met Office supercomputer, enabling a credible range of climate projections to be generated in greater spatial detail.\n- 1 The 2008 Climate Change Act allows UK government to mandate or invite certain organisations to produce reports to assess the impacts of climate change on their operations and present proposals for adaptation. 
https://www.gov.uk/government/collections/climate-changeadaptationreporting-second-round-reports\n- 2 Expected in 2018, the National Adaptation Programme will be supported by the Evidence Report of the Adaptation Sub-Committee of the Committee on Climate Change (ASC): https://www.theccc.org.uk/uk-climate-change-risk-assessment-2017/introduction-to-the-ccra/ 3 Under the 2008 Climate Change Act, organisations are invited to produce Adaptation Reporting Power reports to assess the impacts of climate change on their operations and present proposals for adaptation: https://www.gov.uk/government/collections/climate-change-adaptation-\n\n## reporting-second-round-reports\n\n- 4 Spatial coherence means that climate projections can be compared between locations and aggregated over larger areas, enabling climate change to be assessed consistently over larger study areas.\n- 5 http://www.metoffice.gov.uk/research/modelling-systems/unified-model/climate-models/hadgem3\n- 6 Coupled model intercomparison project phase 5, see http://cmip-pcmdi.llnl.gov/cmip5/\n- 7 Kendon, E. J., Roberts, N. M., Senior, C. A. & Roberts, M. J. Realism of rainfall in a very high resolution regional climate model. J. Clim. 25, 5791-5806 (2012) http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-11-00562.1", - "page_start": 0, - "page_end": 0, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "Rather than using the original CMIP5 ensemble as in previous studies, the aim is to allow for an improved representation of atmospheric and land surface processes including extremes by using higher spatial resolution [11].\n\nHadGEM3 (Hadley Centre Global Environment Model version 3) is a configuration of the UK Met Office Unified Model (MetUM) which has been developed for use for both climate research and weather prediction applications. 
It is the result of converging the development of the Met Office's weather and climate global atmospheric model components so that, where possible, atmospheric processes are modelled or parametrized seamlessly across spatial resolutions and timescales.\n\nThe high-resolution simulations were performed using the HadGEM3A Global Atmosphere (GA) 3.0 model [12-14] at a resolution of N216 (0.556° of latitude by 0.833° of longitude with gridboxes of approx. 60 km length in mid-latitudes). This is the atmospheric component of the HadGEM3-GC2 coupled climate model [15,16], which is part of the HadGEM3 family of climate models [12]. This represents the third generation of HadGEM configurations, leading on from the HadGEM2 family of climate model configurations [13] which was used for CMIP5. Key improvements over the previous model, HadGEM2, include increased vertical levels in the atmosphere (85 compared to 38) and substantial changes to the model dynamics (ENDGame) [17]. This version of the HadGEM3 model lies in the transition from CMIP5 to CMIP6 versions. The Met Office is currently operationally running the coupled HadGEM3-GC2 model at N216 resolution for seasonal and decadal forecasting and clear benefits are emerging from this use at higher resolution [18,19].\n\nWe ran the model using only its atmosphere and land components, with time-varying seasurface temperatures (SSTs) and sea-ice concentrations (SICs) prescribed as input quantities. This approach was taken for two reasons: (i) to provide a rapid first analysis of the implications of the higher resolution for projections of climate extremes and impacts-an atmosphereonly simulation requires considerably less computing time than a coupled ocean-atmosphere general circulation model (GCM); (ii) to allow us to explore, to some degree, uncertainties in regional climate changes by using SSTs and SICs from different climate models. 
To explore these uncertainties in the regional impacts of climate change, we carried out six HadGEM3 atmospheric simulations driven by time-varying SSTs and SICs from a subset of projections from the CMIP5 with the RCP8.5 scenario. The assumption here is that SSTs and SICs provide a substantial influence on regional patterns of climate change over land, so using a range of SST and SIC patterns in a single atmosphere model goes some way towards representing the range of regional climate changes that would arise in a set of different coupled ocean-atmosphere GCMs. This approach will not capture the full range of uncertainty affecting regional climate changes over land, because it still relies on one atmosphere model and one land surface scheme, so responses to radiative forcing that depend mainly on atmospheric process or land-atmosphere interactions will still be constrained by the behaviour of that single model. Nevertheless, we consider that our experimental design avoids the reliance on one single realization of climate and hence allows some of the uncertainties in regional climate-change impacts to be illustrated and explored.", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed11.pdf" - }, - { - "text": "IPSL-CM5A-LR\n\n\n\nGFDL-ESM2M\n\n\n\nIPSL-CM5A-MR\n\n\n\n\n\n\n\nMIROC-ESM-CHEM\n\nACCESS1-0Figure 20. Di/fference between 2 ° Cand1.5 ° C global warming in percentage changes in mean (top) run-o/ff in JULES simulations driven by the ensemble of HadGEM3 simulations. Note that the use of percentage changes emphasizes changes in regions where the baseline stream/flow is small.\n\n\n\nThe largest regional differences between 2°C and 1.5°C global warming tend to be in the regions where the local impact is largest relative to the baseline. For TXx this is generally the midlatitudes, whereas for TX90p it is generally the tropics. 
So, broadly, the impacts at 1.5°C global warming could be estimated by scaling-back the impacts at 2°C.\n\nThese results show some similarities with those from the CMIP5 models [9,38], but also some notable differences. The CMIP5 models were at lower spatial resolution than the models used here. Although the general patterns of change in TXx are broadly similar in our study and CMIP5, with greater warming in many continental interiors, is notable that our results show more marked geographical variation than those from CMIP5 projections ([9], among others), with the continental interior warming being more intense in our projections. In particular, our results with HadGEM3 show more intense increases in maximum temperature in North America and Europe.\n\nOur projections of changes in consecutive dry days (CDD) broadly consistent with those found in a subset of the CMIP5 ensemble [9], although there are some differences. Our ensemble mean suggests shorter dry spells in the central Amazon, whereas ISIMIP-indicated longer dry spells. Also, as with the temperature indices, our results show greater geographical differentiation in the intensity of changes.\n\nThe decrease in Rx5day in some regions in our simulations contrasts with the subset of CMIP5 models used for the ISIMIP Fast-Track projections [9] which suggested an increase in Rx5day almost everywhere where at least 66% of the model ensemble agreed on the sign of the change, including all of northern South America. The reasons for these differences require further investigation, but some insight into possible reasons may be gained by examining the similarities and differences between our own individual ensemble members.\n\nFor all the CLIMPAct variables, the variations in global means between the ensemble members were consistent at 1.5°C and 2°C. 
That is, the members with the largest changes at 2°C also showed the largest changes at 1.5°C, and the same was true for the smallest changes, and the relative proportions of changes in other ensemble members. This suggests that variations between the ensemble members at any particular GWL were not merely a consequence of internal variability\n\nHadGEM2-ES\n\n\n\n", - "page_start": 22, - "page_end": 22, - "source_file": "pubmed11.pdf" - }, - { - "text": "The SSTs and SICs were taken from a subset of the CMIP5 transient projections performed with the RCP8.5 scenario from 1979 to 2100-the CMIP5 members were selected as representative of a range of outcomes for future climate change, including high and low climate sensitivity, different biases in baseline precipitation climatology, and different global patterns of precipitation change. Specific levels of global warming such as 1.5°C or 2°C were defined on the basis of the global mean temperature in the original CMIP5 projections. The time of reaching a specific level of global warming, therefore, varied between ensemble members. The CMIP5 SSTs were not bias-corrected, which means that the results here may be sensitive to systematic errors arising from biases in the present-day SST patterns.\n\nAtmospheric greenhouse gas concentrations were prescribed from the standard RCP8.5 concentration scenario. Aerosol concentrations were calculated within the model, with aerosol emissions prescribed again from the standard RCP8.5 scenario. This means that the greenhouse gas and aerosol concentrations, and hence radiative forcing, were the same in all ensemble", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed11.pdf" - }, - { - "text": "\n\n\n\n## What can users expect from UKCP18?\n\nThere are three components to UKCP18: observations of historic climate, marine projections and projections over land. These components are described below and summarised in Table 1. 
UKCP18 will provide each of these components at a higher spatial and temporal resolution than UKCP09 and with more information on different types of uncertainty.\n\n\n\n\n\n## OBSERVATIONS\n\n## Annual report: State of the UK Climate. Downloadable data.\n\nThe 'State of the UK Climate' report for 2017 will be included as part of the UKCP18 package, bringing the observed data right up to date. This annual update 8 covers trends, the multidecade climate record and significant weather events such as the early July 2015 hot spell and the exceptionally mild and wet December of the same year.\n\nQuality controlled UK observational datasets from the Met Office observing network, provided at spatial resolutions to match the land projections and for pre-defined administrative regions and river basins, will be available under an Open Government Licence 9 . For variables such as temperature and precipitation these data sets will span the late 19th Century to the present day and will be provided for daily, monthly, seasonal, annual and long term averages.\n\n## MARINE PROJECTIONS\n\n## Sea level rise. Storm surge. Past event case studies.\n\nSea-level rise projections will extend to 2100 and will include contributions from glaciers, ice sheets, freshwater reservoirs, groundwater and thermal expansion. Outputs will include an estimate of the year-to-year changes in sea level rise and a 'plausible but highly unlikely' scenario known as H++. A new feature of UKCP18 will be assessing the credibility of making sea level rise projections to 2300. The projections will use the latest information from the CMIP5 models and application of the methods used in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report 10 .\n\nThe UKCP09 storm surge projections will be updated to provide new estimates of the change in high water levels over the 21st Century. 
These estimates will be based on a combination of projected mean sea level change and projections of change in the extremes due to changes in atmospheric storminess. These 'storminess' projections will use the same surge model used in operational weather forecasting, using the wind and pressure from the CMIP5 ensemble to drive the surge. New understanding of the modification of large-scale sea level change signals as they pass from the open ocean onto the shelf sea around the UK will be incorporated into the UKCP18 marine projections. UKCP18 will also include storm surge historical case studies derived from applying plausible future sea level change to historical extreme events.\n\n - 8 The latest update can be found at http://www.metoffice.gov.uk/climate/uk/about/state-of-climate\n - 9 http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/\n - 10 https://www.ipcc.ch/report/ar5/", - "page_start": 1, - "page_end": 1, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "For future (2020-2099), the original climate scenario data (Table 1) were extracted from output archives of /five ESMs (including GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM and NorESM1-M) under four RCPs (RCP2.6, RCP4.5, RCP6.0, RCP8.5) retrieved from the CMIP website. /T\\_he climate scenario data was interpolated into 0.5° × 0.5° horizontal resolution and bias-corrected with respect to historical observations to remove systematic errors 46 . /T\\_he data of maize-planting regions are from the gridded global dataset in 2000 by combining two data products 47,48 .\n\nSimulation of climate scenarios with global warming by ͷ.ͻ °C and ͸.Ͷ °C. In this study, climate data of global warming by 1.5 °C and 2.0 °C are determined according to the results of global climate models driven by typical concentration paths (RCPs) of greenhouse gas emissions. 
Eligible data are selected from a total of 20 sets of data under four RCP scenarios of /five ESMs (including GFDL-ESM2M, HadGEM2-ES, IPSLCM5A-LR, MIROC-ESM-CHEM and NorESM1-M), which estimate the temperature, precipitation and sunshine hours (Fig. 1).\n\nVol:.(1234567890)", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed9.pdf" - }, - { - "text": "\n\n## PROJECTIONS OVER LAND\n\nThe land projections comprise three components:\n\n## 60KM GLOBAL PROJECTIONS\n\n20 plausible climate futures. Latest Hadley Centre climate model. Simulations of extreme weather. Simultaneous impacts captured at multiple locations.\n\nThis resolution will enable more realistic simulations of climate for the UK and capture the drivers of extreme weather, a significant advance on the 300 km-resolution simulations of UKCP09. A set of 20 plausible global projections of 21st century climate will be generated using an ensemble of the Met Office Hadley Centre HadGEM3 climate model. These projections will be selected to represent a wide range of possible future climate states to reflect key uncertainties, informing a risk-based approach to planning. They will be generated to provide spatially coherent daily data at a horizontal resolution of 60 km for two greenhouse gas concentration scenarios. These will be compared with an ensemble of CMIP5 models to provide additional information on uncertainties in the projections relative to other climate models.\n\n## 25KM PROBABILISTIC PROJECTIONS\n\nCaptures natural variability and climate change . Updated models and observations. Provides seasonal scale projections.\n\nBased on the established, peer-reviewed, ground-breaking method of UKCP09 for estimating uncertainty for use in risk-based analysis. 
Probabilistic projections will be updated using an up-to-date collection of Met Office climate simulations and the latest IPCC-assessed simulations to estimate the model uncertainties, incorporate the latest observations and estimate carbon cycle feedbacks. Projections will be on a 25 km grid for the UK at monthly intervals for several emission scenarios, including one used in UKCP09 11 . The new probabilistic projections will indicate the range of uncertainty in our knowledge of the climate system and natural variability through the 21st century, using probability density functions to provide information on how climate varies from month to month. This contrasts with UKCP09 for which only 30-year means were provided 12 .\n\n## DOWNSCALED HIGH RESOLUTION PROJECTIONS\n\nDownscaled versions of the global model for the UK. For the most spatially detailed downscaling this includes hourly data. Simultaneous impacts captured at multiple UK locations.\n\nThe high resolution projections will provide information on types of weather of relevance to adaptation at two different resolutions. The 12 km model provides a downscaled product that is similar to UKCP09's 25 km simulations but driven by an improved global model and at a higher resolution. This may be especially useful for those interested in water availability and some aspects of agriculture. A key reason for providing this data is that users will be able to compare it directly with EURO-CORDEX 13 .\n\nThe global projections will also be downscaled to 2.2 km using a process of nesting models at finer resolution that maintains the integrity of the representation of evolving atmospheric processes. Key benefits of simulations at this resolution will be the information provided on high impact events such as localised heavy rainfall in summer and potential improvements in the diurnal cycle.\n\nThe output will be available at a time resolution of 3-hourly, possibly higher for some output, for a high emission scenario. 
Spatial coherence will be maintained. Specific time slices (e.g. 2061-2080) will be made available with the exact nature of these still to be confirmed.\n\n - 11 SRESA1B: IPCC future scenario based on rapid economic growth and a balance of energy sources\n - 12 30-year means can be created using the UKCP18 PDF data\n - 13 http://www.euro-cordex.net/\n\n\n\n\n\n\n\n", - "page_start": 2, - "page_end": 2, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "IPSL-CM5A-LR\n\n\n\nGFDL-ESM2M\n\n\n\nIPSL-CM5A-MR\n\n\n\nMIROC-ESM-CHEM\n\nACCESS1-0\n\n\n\n\n\nFigure 4. Simulated changes in the number of consecutive dry days relative to 1981-2010, at 2 ° C global warming, for individual HadGEM3 simulations driven by SSTs and SICs from di/fferent members of the CMIP5 ensemble, and the ensemble mean. The labels above each panel identify the driving CMIP5 model (or ensemble mean).\n\n\n\nTable 5. Global mean changes at 2 ° C global warming compared to present day for individual ensemble members, for the ClimPACT indices, the /flood and drought proxies used as input to the HCVI calculations, and percentage change in mean precipitation (Pmean), mean run-o/ff (Rmean) and low run-o/ff (Rlow).", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed11.pdf" - }, - { - "text": "In the present study, processing errors in the input data for one ensemble member, the HadGEM2-ES-driven member, caused the results to be invalid. Results for this member for the HCVI are, therefore, not presented here.\n\n## (d) Freshwater resources: run-o/ff\n\nImpacts on freshwater were assessed with a version of the JULES land surface model [24,25], a coupled ecosystem-hydrology-surface exchange model which simulates land-atmosphere fluxes of water, energy and carbon in an internally consistent way, typically applied at global scales. 
Variants of JULES form the land surface scheme of Met Office Hadley Centre Earth System Models [26,27] and have been used to assess impacts of climate change on global terrestrial ecosystems and hydrology [28-30] within such models. JULES can also be used outside of the Earth System Model (ESM), driven by meteorological outputs of other ESMs to assess impacts of a wider range of climate projections [6,8]. Here we use a new, higher-resolution configuration of JULES on a global grid of 0.5° resolution [31].\n\nIt has been noted that hydrological impacts models driven by climate-change projections from climate models tend to give more severe drying than simulated in the climate models themselves [32-34]. This is largely attributed to the inclusion of plant stomatal closure in response to elevated CO2 in the climate model land surface schemes, which generally reduces evapotranspiration relative to climate projections without this process and hence further increases run-off/streamflow or ameliorates decreases [34]. This process is often omitted from standard hydrological models. Plant physiological responses to CO2 are included in the JULES model, so our projections of changes in run-off here do account for this process.\n\nWe used each HadGEM3 simulation to drive JULES to simulate changes in run-off due to the effects of climate change and CO2 rise on precipitation, evaporation and transpiration. We analysed 30 year periods centred around the year of crossing GWLs of 1.5°C and 2°C relative to pre-industrial. We examined changes in both mean flows and low flows (defined as the flows for the lowest 10% of time).\n\n## (e) Correcting biases in climate model output and implications for defining levels of global warming\n\nThe ClimPACT extreme weather indices, HCVI and JULES run-off simulations were all performed using outputs from the higher-resolution HadGEM3 projections described in §2a. 
However, there were some differences in how these data were applied, with different approaches to the treatment of systematic biases in the climate model output. For the ClimPACT analysis, it was considered important to assess changes in the raw climate model output, because this directly represents the behaviour of the model itself. The main focus was on the changes relative to the presentday baseline climate, defined as 1981-2010, with absolute values in either the baseline or the GWLs of 1.5°C and 2°C being only of secondary interest. For the HCVI and JULES run-off analyses, however, it was considered important to correct for systematic biases in the climate model output, because these can lead to unrealistic representations of the key quantities in the present-day simulation [35]. A bias-correction methodology was, therefore, applied for these two parts of the analysis, whereby the model output was adjusted to make it consistent with an observed climatology [36]. We used a multi-segment statistical bias-correction methodology for precipitation [37], and a modification of this for other variables [37].", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed11.pdf" - }, - { - "text": "- 19. Knight J et al. 2014 Predictions of climate several years ahead using an improved decadal prediction system. J. Clim. 27 , 7550-7567. (doi:10.1175/JCLI-D-14-00069.1)\n - 20. Wyser K et al. 2016 Documentation of changes in climate variability and extremes simulated by the HELIX AGCMs at the 3 SWLs and comparison to changes in equivalent SST/SIC lowresolution CMIP5 projections. HELIX project deliverable 3.1.\n - 21. Alexander L, Yang H, Perkins S. 2018 ClimPACT-Indices and Software. User Manual. 
See http://www.wmo.int/pages/prog/wcp/ccl/opace/opace4/meetings/documents/ ETCRSCI\\_software\\_documentation\\_v2a.doc (accessed on 5 February 2018).", - "page_start": 25, - "page_end": 25, - "source_file": "pubmed11.pdf" - } - ] - }, - { - "references": { - "source_file": "legal1_opengouvernementlicense.pdf", - "query": "Which causes of the rise of sea level will be considered by UKCP18 ?", - "target_page": 2, - "target_passage": "Sea-level rise projections will extend to 2100 and will include contributions from glaciers, ice sheets, freshwater reservoirs, groundwater and thermal expansion", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "\n\n\n\n## What can users expect from UKCP18?\n\nThere are three components to UKCP18: observations of historic climate, marine projections and projections over land. These components are described below and summarised in Table 1. UKCP18 will provide each of these components at a higher spatial and temporal resolution than UKCP09 and with more information on different types of uncertainty.\n\n\n\n\n\n## OBSERVATIONS\n\n## Annual report: State of the UK Climate. Downloadable data.\n\nThe 'State of the UK Climate' report for 2017 will be included as part of the UKCP18 package, bringing the observed data right up to date. This annual update 8 covers trends, the multidecade climate record and significant weather events such as the early July 2015 hot spell and the exceptionally mild and wet December of the same year.\n\nQuality controlled UK observational datasets from the Met Office observing network, provided at spatial resolutions to match the land projections and for pre-defined administrative regions and river basins, will be available under an Open Government Licence 9 . 
For variables such as temperature and precipitation these data sets will span the late 19th Century to the present day and will be provided for daily, monthly, seasonal, annual and long term averages.\n\n## MARINE PROJECTIONS\n\n## Sea level rise. Storm surge. Past event case studies.\n\nSea-level rise projections will extend to 2100 and will include contributions from glaciers, ice sheets, freshwater reservoirs, groundwater and thermal expansion. Outputs will include an estimate of the year-to-year changes in sea level rise and a 'plausible but highly unlikely' scenario known as H++. A new feature of UKCP18 will be assessing the credibility of making sea level rise projections to 2300. The projections will use the latest information from the CMIP5 models and application of the methods used in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report 10 .\n\nThe UKCP09 storm surge projections will be updated to provide new estimates of the change in high water levels over the 21st Century. These estimates will be based on a combination of projected mean sea level change and projections of change in the extremes due to changes in atmospheric storminess. These 'storminess' projections will use the same surge model used in operational weather forecasting, using the wind and pressure from the CMIP5 ensemble to drive the surge. New understanding of the modification of large-scale sea level change signals as they pass from the open ocean onto the shelf sea around the UK will be incorporated into the UKCP18 marine projections. 
UKCP18 will also include storm surge historical case studies derived from applying plausible future sea level change to historical extreme events.\n\n - 8 The latest update can be found at http://www.metoffice.gov.uk/climate/uk/about/state-of-climate\n - 9 http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/\n - 10 https://www.ipcc.ch/report/ar5/", - "page_start": 1, - "page_end": 1, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "\n\n\n\n\n\n## UK CLIMATE PROJECTIONS: A PROJECT OVERVIEW\n\n\n\n## What is UKCP18 and why do we need it?\n\nFollowing the historic Paris Agreement on Climate Change in December 2015, the Department of Environment, Food and Rural Affairs announced a major upgrade to the UK Climate Projections.\n\nThe UKCP18 project will build upon the current set of projections (UKCP09) to provide the most up-todate assessment of how the climate of the UK may change over the 21st century. This information will be essential to future Climate Change Risk Assessments 1 and to equip the UK with information to help adapt to the challenges and opportunities of climate change in line with the National Adaptation Programme 2 .\n\nOrganisations and individual users will use UKCP18 to inform risk assessments and adaptation plans to ensure they are resilient to extreme weather and climate change. 
Some organisations will use UKCP18 in responding to the Adaptation Reporting Power 3 for example.\n\n\n\n## What improvements does UKCP18 deliver?\n\nUKCP18 will benefit from a range of developments since the release of UKCP09, including:\n\n- · Greater understanding of user needs as a result of the adaptation community's use of UKCP09 projections and the subsequent feedback - user workshops indicated that users supported the continued use of probabilistic projections and the importance of spatially coherent information 4 .\n- · Advances in climate models in recent years, such as the Met Office Hadley Centre HadGEM3 5 model and the CMIP5 6 set of models. Improvements include better representation of the past climate, the inclusion of more cloud and aerosol processes and the ability to model important climate phenomena such as the El-Niño Southern Oscillation (ENSO).\n- · Groundbreaking Met Office research on modelling of extreme events in high resolution regional climate models 7 .\n- · The increased quantity and range of observations available since 2009.\n- · Use of the new Met Office supercomputer, enabling a credible range of climate projections to be generated in greater spatial detail.\n- 1 The 2008 Climate Change Act allows UK government to mandate or invite certain organisations to produce reports to assess the impacts of climate change on their operations and present proposals for adaptation. 
https://www.gov.uk/government/collections/climate-changeadaptationreporting-second-round-reports\n- 2 Expected in 2018, the National Adaptation Programme will be supported by the Evidence Report of the Adaptation Sub-Committee of the Committee on Climate Change (ASC): https://www.theccc.org.uk/uk-climate-change-risk-assessment-2017/introduction-to-the-ccra/ 3 Under the 2008 Climate Change Act, organisations are invited to produce Adaptation Reporting Power reports to assess the impacts of climate change on their operations and present proposals for adaptation: https://www.gov.uk/government/collections/climate-change-adaptation-\n\n## reporting-second-round-reports\n\n- 4 Spatial coherence means that climate projections can be compared between locations and aggregated over larger areas, enabling climate change to be assessed consistently over larger study areas.\n- 5 http://www.metoffice.gov.uk/research/modelling-systems/unified-model/climate-models/hadgem3\n- 6 Coupled model intercomparison project phase 5, see http://cmip-pcmdi.llnl.gov/cmip5/\n- 7 Kendon, E. J., Roberts, N. M., Senior, C. A. & Roberts, M. J. Realism of rainfall in a very high resolution regional climate model. J. Clim. 25, 5791-5806 (2012) http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-11-00562.1", - "page_start": 0, - "page_end": 0, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "There are two major research questions concerning the impacts of climate change at 1.5°C and 2°C global warming, which are relevant to both mitigation and adaptation policy areas.\n\n - (i) How much larger are the impacts at 2°C compared to 1.5°C? 
This is the primary question arising from the Paris Agreement [4] and is relevant to mitigation policy, informing judgements and actions on holding the global temperature rise to 'well below 2°C' and 'pursuing efforts to limit the temperature increase to 1.5°C'.\n - (ii) What regional climate conditions and related hydrological and ecological conditions could occur at a particular level of global warming, such as 2°C? This is relevant to adaptation policy and planning-exploring the possible outcomes for these levels of warming will help facilitate adaptation and improved resilience to account for a 1.5°C or 2°C world. It is recognized that many adaptation decisions require information on timing of specific impacts or risks, but nevertheless, framing regional impacts assessments in terms of associated global warming levels (GWLs) may help provide context of the levels of climate change that may be avoidable or unavoidable (and hence require adaptation).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed11.pdf" - }, - { - "text": "Firstly, the period of 1986-2005 is de/fined as the baseline, of which the simulated average value is recognized as 0.61 °C above pre-industrial (the period of 1850-1900) levels; the baseline is selected according to the accessibility and operability of data, which is used for the determination of the periods with global warming by 1.5 °C and 2.0 °C and the comparison of maize yield between di/fferent periods. Secondly, the simulated values of global mean temperature in the future years are subtracted from the simulated average value of 1986-2005; then the values should be plus with 0.61 °C, which are the global warming results above pre-industrial levels; then 20 years moving average of the above results are calculated. 
Thirdly, the climate data of global warming by 1.5 °C is defined according to the principles provided in the fifth IPCC Assessment Report, for which it should be within 1.5-2.0 °C above pre-industrial levels at the end of the twenty-first century; the climate data of global warming by 2.0 °C is defined according to the principles provided in the fifth IPCC Assessment Report, for which it should be within 2.0-2.5 °C above pre-industrial levels at the end of the twenty-first century and the period of global warming by 2.0 °C should not be earlier than 2050. Finally, the climate models, scenarios and periods of global warming by 1.5 °C and 2.0 °C are separately confirmed; the data of global warming by 1.5 °C, simulated by IPSL-CM5A-LR under RCP2.6 scenario during 2020-2039 and simulated by GFDL-ESM2M under RCP4.5 scenario during 2041-2060; the data of global warming by 2.0 °C, simulated by NorESM1-M under RCP4.5 scenario during 2060-2079 and simulated by GFDL-ESM2M under RCP6.0 scenario during 2065-2084.\n\nSimulation of maize yield using DSSAT. According to the data of global warming by 1.5 °C and 2.0 °C selected above, we simulated global maize yield changes compared with the average yield during 1986-2005 on grid level using CERES-Maize, which is part of DSSAT version 4.6 49 .\n\nThe inputs for DSSAT simulation include daily weather data, soil parameters, crop calendar data and management information. All the inputs are formatted at a 0.5° × 0.5° grid resolution which are computed by high-performance computers. Weather data is from the AgMERRA dataset, including maximum and minimum temperatures, precipitation, total radiation and humidity. Crop calendar data were from the Center for Sustainability and Global Environment (SAGE), in which the existing observations of crop planting and harvesting dates are gridded formatted at a resolution of 5 min 50 . 
For management information, fertilizer applications, irrigation and other management practices are required. A crop-speci/fic gridded dataset of nitrogen fertilizer application for the world was developed by integrating national and subnational fertilizer application data from a variety of sources, which is used to set up current fertilizer application rates for maize in each grid cell. Soil parameters are from the International Soil Pro/file Dataset (WISE), including soil texture, bulk density, pH, organic carbon content and fraction of calcium carbonate for each of /five 20 cm thick soil layers 51 . All the soil data is allocated to be in accordance with the request of DSSAT simulation; the missing soil parameters for organic soils were adopted from FAO soil dataset.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed9.pdf" - }, - { - "text": "The assumptions used are based on consultation with policy and operational experts at the Ministry of Justice and the National Offender Management Service. They also take into account observed data trends:\n\n-  These projections represent a change from last year where the 2013 Scenario 2 (central) saw the population gradually falling over the six year lifetime of the projection. The Central Scenario in the projections this year shows the population rising over the next six years. This change arises from the fact that the latest projections capture a recent upward trend in prosecutions of more serious offences.\n-  Despite the fact that overall crime is falling there has been an increase in recorded crime for certain offence types:\n- o Prosecutions for sexual offences are the highest in the decade and increased by 19% in the 12 months ending June 2014, in line with a 21% increase in recorded crime. 
Offenders sentenced for sexual offences had an Average Custodial Sentence Length (ASCL) of 59.7 months, a rise of 2.4 months, compared with year ending June 2013.\n- o Violence against the person proceedings for indictable offences have increased by 7% in the 12 months ending June 2014. This is in line with an 11% increase in recorded crime.\n\nFurther statistics and commentary on the changes seen in Court proceedings and sentencing over the last year is presented in the Criminal Justice System Statistics Quarterly publication. This is available online on GOV.UK at: www.gov.uk/government/collections/criminal-justice-statistics-quarterly", - "page_start": 4, - "page_end": 4, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "- (b) the Secretary of State has confirmed in writing that this paragraph applies in relation to P and has not withdrawn that confirmation.\n - (3) P is also a relevant person if-\n - (a) P is, or was on the 1st September 2020, a child;\n - (b) P travels to the UK for the purposes of receiving education at a boarding school in England at which education and accommodation is due to be provided for P;\n - (c) P is not accompanied into the UK by an individual who has responsibility for P, or if P is aged 18 or over, would have had such responsibility if P were a child; and", - "page_start": 78, - "page_end": 78, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "The SSTs and SICs were taken from a subset of the CMIP5 transient projections performed with the RCP8.5 scenario from 1979 to 2100-the CMIP5 members were selected as representative of a range of outcomes for future climate change, including high and low climate sensitivity, different biases in baseline precipitation climatology, and different global patterns of precipitation change. Specific levels of global warming such as 1.5°C or 2°C were defined on the basis of the global mean temperature in the original CMIP5 projections. 
The time of reaching a specific level of global warming, therefore, varied between ensemble members. The CMIP5 SSTs were not bias-corrected, which means that the results here may be sensitive to systematic errors arising from biases in the present-day SST patterns.\n\nAtmospheric greenhouse gas concentrations were prescribed from the standard RCP8.5 concentration scenario. Aerosol concentrations were calculated within the model, with aerosol emissions prescribed again from the standard RCP8.5 scenario. This means that the greenhouse gas and aerosol concentrations, and hence radiative forcing, were the same in all ensemble", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed11.pdf" - }, - { - "text": "A detailed investigation of these factors is beyond the scope of this paper; nevertheless, this result illustrates the important point that the nature and patterns of the climate forcing at a particular level of global warming can play an important role in determining the patterns of regional impacts.\n\n## 5. Conclusion\n\nThe higher-resolution HadGEM3 simulations project consistent increases in temperature-related extremes, with larger changes at 2°C compared to 1.5°C and local changes being larger than the global annual mean. There is a higher degree of spatial variation in our projections compared with CMIP5-based studies.\n\nIn the model projections examined here, changes relating to the water cycle are complex, both in their geographical pattern and in the variation between different models. The length of flooding events generally increases across world in all models, but maximum rainfall can either increase or decrease depending on locations. Global patterns of increase and decrease show some consistency between the different GWLs, but also some local differences. Worldwide, most impacts broadly tend to increase with global warming in most areas. 
For global mean changes, even when the sign of change is uncertain, individual realizations generally show reduced impact at 1.5°C compared with 2°C. However, this does not always hold even at the scale of major global river basins.\n\nVulnerability to food insecurity increases more at 2°C global warming than 1.5°C in approximately three-quarters of countries assessed. The vulnerability increase can arise from increases in either flooding or drought. Reduced drought leads to decreased vulnerability in a limited number of cases.\n\nMost simulations here project a general increase in mean streamflow in most of the basins examined, but with a number of notable exceptions in the tropics. While flows in the Ganges are consistently projected to increase by 30-110% at 2°C, Amazon flows could either increase by 3% or decrease by 25%. Ensemble-mean changes in river flow often do not give a full impression of the magnitude of changes that may be possible, so adaptation planning in particular should not rely on ensemble-mean projections and instead consider a range of outcomes. The seasonal low streamflows also increase in many basins, but not as many as for the mean flows-many basins see decreased low flows in some or all projections.\n\nBroadly, changes in weather extremes at 1.5°C global warming could be estimated by scaling back the impacts at 2°C, if this is done with individual ensemble members rather than the ensemble mean. However, this was not always the case for impacts that depend on more complex processes or interactions between more than one climate variable, such as run-off and an indicator of vulnerability to food insecurity.\n\nData accessibility.\n\nThis article has no additional data.\n\nCompeting interests. We declare we have no competing interests.\n\nFunding. This research received funding from the European Union Seventh Framework Programme FP7/2007-2013 under grant agreement no. 603864 (HELIX: 'High-End cLimate Impacts and eXtremes'; www.helixclimate.eu). 
The work of R.A.B., C.B., J.C., L.G., K.L. and K.R. was additionally supported by the Joint UK BEIS/Defra Met Office Hadley Centre Climate Programme (GA01101).\n\nAcknowledgements. The authors thank Ed Pope, Jason Lowe and Dann Mitchell for advice and discussion, Alissa Haward and Maria Pearce for project management and administration of HELIX, and two anonymous reviewers whose comments substantially improved the paper.\n\n## References\n\n - 1. IPCC. 2014 Summary for policymakers. In Climate change 2014: impacts, adaptation, and vulnerability. Part A: global and sectoral aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (eds CB Field et al .), pp. 1-32. Cambridge, UK: Cambridge University Press.", - "page_start": 24, - "page_end": 24, - "source_file": "pubmed11.pdf" - }, - { - "text": "IPSL-CM5A-LR\n\n\n\nGFDL-ESM2M\n\nHadGEM2-ES\n\n\n\nIPSL-CM5A-MR\n\n\n\n\n\nMIROC-ESM-CHEM\n\n\n\nACCESS1-0Figure 9. Changesinrun-o/ff for mean /flows simulated by the JULES ecosystem-hydrology model under six climate simulations at 2 ° C global warming. ( a ) Ensemble mean and ( b ) percentage of models agreeing on increased /flow.\n\n\n\nand 75%, especially in the Iberian Peninsula. Southern Africa also sees a decrease in low flows where changes in mean flows were small. Changes in high run-off show similar patterns and magnitudes to those in mean run-off.\n\nThe simulated changes in both mean and low run-off flows show substantial differences among the six simulations (figures 10 and 11). In most basins examined here, the range of outcomes include both increases and decreases in mean and low flows for any particular basin, but generally with the largest proportion simulating increases in both mean and low flows. In a few cases, notably the Lena in northeast Asia and Ganges in southeast Asia, the ensemble agreed entirely or almost entirely on increased flows. 
Even here, the range of outcomes is large, with the projected flow increases in the Ganges for 2°C global warming ranging from approximately 30% to more than 110%.\n\nExceptions to the general picture of consensus on increasing flows are seen in the Amazon, Orange, Danube and Guadiana basins where the range of projected extends more towards decreased mean flows. Mean flows in the Amazon are projected to decline by up to 25% for 2°C global warming. For low flows, the ensemble of projections entirely gives decreased flows at 2°C global warming for these basins.\n\nThe signal of decreased flows was stronger for low flows than mean flows, and indeed in the Niger, the range of mean flow changes extended more towards increases whereas the range of low flow changes extended more towards decreases.\n\n## (b) Impacts at 1.5 ° Cglobalwarmingcomparedto2 ° C\n\nFor almost all quantities and simulations examined here, global-scale changes in extremes and run-off at 1.5°C global warming (table 6) are smaller than those compared to 2°C (table 5; figures 12 and 13). The exceptions to these are mean and low run-off which each show one instance of a smaller change at 2°C than 1.5°C, but still with a majority of simulations showing larger changes at 2°C (figure 13). For temperature-related indices, the ranges of change at the two GWLs do not overlap-the change at 2°C in all members is larger than the change at 1.5°C in\n\n\n\n", - "page_start": 14, - "page_end": 14, - "source_file": "pubmed11.pdf" - }, - { - "text": "In the global warming network, politics was the second-largest discourse cluster (20% of the network), where 'tcot', short for 'Top Conservatives on Twitter', was the node ranked highest, and 'p2', short for 'Progressives 2.0', is also included. Several political figures, such as Obama and Al Gore, are frequently mentioned. 
Action toward the global climate issue was the third-largest cluster (16%), including both domestic efforts, such as 'us', 'trump', 'climatechangeisreal', 'climateaction', and 'epa', and two international items, like 'china' and 'india'. The fourth cluster (in blue) referred to emissions, including hashtags like 'co2', 'green', and 'carbon'. The smallest cluster (8%) was composed of 'snow', 'winter', 'heatwave', and 'summer', referring to the temperature abnormalities on the earth.\n\n## 4.3. Temporal Analysis of the Associations in the Two Discourses\n\nThe online presentations of the climate change and global warming discourses are dynamic. As shown in Table 2, for the global warming discourse, 11 key concepts remained in the top 50 central hashtags each year for all 10 years, with 16 for the climate change discourse. By comparing the 11 nodes of the global warming discourse and the 16 nodes of the climate change discourse, we found that the two lists shared nine concepts. We found 'pollution' and 'earth' were unique to the keyword list of the global warming discourse, and 'economy', 'water', 'china', 'coal', 'solar', 'sustainability', and 'food' only occurred on the critical list for the climate change discourse.\n\nTable 2. Hashtags that remained on the top 50 list for the climate change or the global warming discourse from 2009 to 2018.\n\n| | Unique | Shared |\n|---|---|---|\n| #climatechange | china, solar, water, food, economy, coal, sustainability | co2, news, carbon, green, climate, us, energy, science, environment |\n| #globalwarming | pollution, earth | co2, news, carbon, green, climate, us, energy, science, environment |\n\nFigures 3 and 4 show the overall evolution of critical hashtags' associations in the 10-year period, where the nodes in the 10 graphs are located in the same position but the strength of associations varies across longitudinal time. 
Vector graphics with the label of nodes are provided in the Supplementary Materials. Four themes were identified in each discourse according to the nodes' associations. To more explicitly demonstrate the relative importance of each cluster in each year, we calculated the sum of the degree centrality of all the nodes belonging to each cluster and their change in centrality over the 10 years, as shown in Figure 5.", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed10.pdf" - } - ] - }, - { - "references": { - "source_file": "legal1_opengouvernementlicense.pdf", - "query": "What perdiod is covered by the 12 km resolution projection data of the UKCP18 ?", - "target_page": 4, - "target_passage": "1981-2080", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "\n\n\n\n\n\n## UK CLIMATE PROJECTIONS: A PROJECT OVERVIEW\n\n\n\n## What is UKCP18 and why do we need it?\n\nFollowing the historic Paris Agreement on Climate Change in December 2015, the Department of Environment, Food and Rural Affairs announced a major upgrade to the UK Climate Projections.\n\nThe UKCP18 project will build upon the current set of projections (UKCP09) to provide the most up-todate assessment of how the climate of the UK may change over the 21st century. This information will be essential to future Climate Change Risk Assessments 1 and to equip the UK with information to help adapt to the challenges and opportunities of climate change in line with the National Adaptation Programme 2 .\n\nOrganisations and individual users will use UKCP18 to inform risk assessments and adaptation plans to ensure they are resilient to extreme weather and climate change. 
Some organisations will use UKCP18 in responding to the Adaptation Reporting Power 3 for example.\n\n\n\n## What improvements does UKCP18 deliver?\n\nUKCP18 will benefit from a range of developments since the release of UKCP09, including:\n\n- · Greater understanding of user needs as a result of the adaptation community's use of UKCP09 projections and the subsequent feedback - user workshops indicated that users supported the continued use of probabilistic projections and the importance of spatially coherent information 4 .\n- · Advances in climate models in recent years, such as the Met Office Hadley Centre HadGEM3 5 model and the CMIP5 6 set of models. Improvements include better representation of the past climate, the inclusion of more cloud and aerosol processes and the ability to model important climate phenomena such as the El-Niño Southern Oscillation (ENSO).\n- · Groundbreaking Met Office research on modelling of extreme events in high resolution regional climate models 7 .\n- · The increased quantity and range of observations available since 2009.\n- · Use of the new Met Office supercomputer, enabling a credible range of climate projections to be generated in greater spatial detail.\n- 1 The 2008 Climate Change Act allows UK government to mandate or invite certain organisations to produce reports to assess the impacts of climate change on their operations and present proposals for adaptation. 
https://www.gov.uk/government/collections/climate-changeadaptationreporting-second-round-reports\n- 2 Expected in 2018, the National Adaptation Programme will be supported by the Evidence Report of the Adaptation Sub-Committee of the Committee on Climate Change (ASC): https://www.theccc.org.uk/uk-climate-change-risk-assessment-2017/introduction-to-the-ccra/ 3 Under the 2008 Climate Change Act, organisations are invited to produce Adaptation Reporting Power reports to assess the impacts of climate change on their operations and present proposals for adaptation: https://www.gov.uk/government/collections/climate-change-adaptation-\n\n## reporting-second-round-reports\n\n- 4 Spatial coherence means that climate projections can be compared between locations and aggregated over larger areas, enabling climate change to be assessed consistently over larger study areas.\n- 5 http://www.metoffice.gov.uk/research/modelling-systems/unified-model/climate-models/hadgem3\n- 6 Coupled model intercomparison project phase 5, see http://cmip-pcmdi.llnl.gov/cmip5/\n- 7 Kendon, E. J., Roberts, N. M., Senior, C. A. & Roberts, M. J. Realism of rainfall in a very high resolution regional climate model. J. Clim. 25, 5791-5806 (2012) http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-11-00562.1", - "page_start": 0, - "page_end": 0, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "\n\n\n\n## What can users expect from UKCP18?\n\nThere are three components to UKCP18: observations of historic climate, marine projections and projections over land. These components are described below and summarised in Table 1. UKCP18 will provide each of these components at a higher spatial and temporal resolution than UKCP09 and with more information on different types of uncertainty.\n\n\n\n\n\n## OBSERVATIONS\n\n## Annual report: State of the UK Climate. 
Downloadable data.\n\nThe 'State of the UK Climate' report for 2017 will be included as part of the UKCP18 package, bringing the observed data right up to date. This annual update 8 covers trends, the multidecade climate record and significant weather events such as the early July 2015 hot spell and the exceptionally mild and wet December of the same year.\n\nQuality controlled UK observational datasets from the Met Office observing network, provided at spatial resolutions to match the land projections and for pre-defined administrative regions and river basins, will be available under an Open Government Licence 9 . For variables such as temperature and precipitation these data sets will span the late 19th Century to the present day and will be provided for daily, monthly, seasonal, annual and long term averages.\n\n## MARINE PROJECTIONS\n\n## Sea level rise. Storm surge. Past event case studies.\n\nSea-level rise projections will extend to 2100 and will include contributions from glaciers, ice sheets, freshwater reservoirs, groundwater and thermal expansion. Outputs will include an estimate of the year-to-year changes in sea level rise and a 'plausible but highly unlikely' scenario known as H++. A new feature of UKCP18 will be assessing the credibility of making sea level rise projections to 2300. The projections will use the latest information from the CMIP5 models and application of the methods used in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report 10 .\n\nThe UKCP09 storm surge projections will be updated to provide new estimates of the change in high water levels over the 21st Century. These estimates will be based on a combination of projected mean sea level change and projections of change in the extremes due to changes in atmospheric storminess. These 'storminess' projections will use the same surge model used in operational weather forecasting, using the wind and pressure from the CMIP5 ensemble to drive the surge.
New understanding of the modification of large-scale sea level change signals as they pass from the open ocean onto the shelf sea around the UK will be incorporated into the UKCP18 marine projections. UKCP18 will also include storm surge historical case studies derived from applying plausible future sea level change to historical extreme events.\n\n - 8 The latest update can be found at http://www.metoffice.gov.uk/climate/uk/about/state-of-climate\n - 9 http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/\n - 10 https://www.ipcc.ch/report/ar5/", - "page_start": 1, - "page_end": 1, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "\n\n## PROJECTIONS OVER LAND\n\nThe land projections comprise three components:\n\n## 60KM GLOBAL PROJECTIONS\n\n20 plausible climate futures. Latest Hadley Centre climate model. Simulations of extreme weather. Simultaneous impacts captured at multiple locations.\n\nThis resolution will enable more realistic simulations of climate for the UK and capture the drivers of extreme weather, a significant advance on the 300 km-resolution simulations of UKCP09. A set of 20 plausible global projections of 21st century climate will be generated using an ensemble of the Met Office Hadley Centre HadGEM3 climate model. These projections will be selected to represent a wide range of possible future climate states to reflect key uncertainties, informing a risk-based approach to planning. They will be generated to provide spatially coherent daily data at a horizontal resolution of 60 km for two greenhouse gas concentration scenarios. These will be compared with an ensemble of CMIP5 models to provide additional information on uncertainties in the projections relative to other climate models.\n\n## 25KM PROBABILISTIC PROJECTIONS\n\nCaptures natural variability and climate change . Updated models and observations.
Provides seasonal scale projections.\n\nBased on the established, peer-reviewed, ground-breaking method of UKCP09 for estimating uncertainty for use in risk-based analysis. Probabilistic projections will be updated using an up-to-date collection of Met Office climate simulations and the latest IPCC-assessed simulations to estimate the model uncertainties, incorporate the latest observations and estimate carbon cycle feedbacks. Projections will be on a 25 km grid for the UK at monthly intervals for several emission scenarios, including one used in UKCP09 11 . The new probabilistic projections will indicate the range of uncertainty in our knowledge of the climate system and natural variability through the 21st century, using probability density functions to provide information on how climate varies from month to month. This contrasts with UKCP09 for which only 30-year means were provided 12 .\n\n## DOWNSCALED HIGH RESOLUTION PROJECTIONS\n\nDownscaled versions of the global model for the UK. For the most spatially detailed downscaling this includes hourly data. Simultaneous impacts captured at multiple UK locations.\n\nThe high resolution projections will provide information on types of weather of relevance to adaptation at two different resolutions. The 12 km model provides a downscaled product that is similar to UKCP09's 25 km simulations but driven by an improved global model and at a higher resolution. This may be especially useful for those interested in water availability and some aspects of agriculture. A key reason for providing this data is that users will be able to compare it directly with EURO-CORDEX 13 .\n\nThe global projections will also be downscaled to 2.2 km using a process of nesting models at finer resolution that maintains the integrity of the representation of evolving atmospheric processes.
Key benefits of simulations at this resolution will be the information provided on high impact events such as localised heavy rainfall in summer and potential improvements in the diurnal cycle.\n\nThe output will be available at a time resolution of 3-hourly, possibly higher for some output, for a high emission scenario. Spatial coherence will be maintained. Specific time slices (e.g. 2061-2080) will be made available with the exact nature of these still to be confirmed.\n\n - 11 SRESA1B: IPCC future scenario based on rapid economic growth and a balance of energy sources\n - 12 30-year means can be created using the UKCP18 PDF data\n - 13 http://www.euro-cordex.net/\n\n\n\n\n\n\n\n", - "page_start": 2, - "page_end": 2, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "Before Content Manager OnDemand version 9.5, PDF documents can be viewed by the Windows client in two ways:\n\n - /SM590000 If they are configured in the application as data type 'PDF', the rich feature set of the AFP and Line Data viewer applies, but Adobe Acrobat Professional is required.\n - /SM590000 If the data type is configured as 'User Defined' and '. pdf ' as the extension, the documents are started externally. Therefore, you can view the documents with the no-charge Adobe Acrobat viewer or any other installed PDF viewer.\n\nAny data type can be specified as 'User Defined', for example, Word documents ( .docx ). User-defined data is viewed by invoking its associated application.\n\n## Web-based viewing options\n\nThe web-based viewing options for Content Manager OnDemand are provided primarily by ODWEK. ODWEK includes different viewers that are dedicated to Content Manager OnDemand documents that can use Content Manager OnDemand functions, such as the segment-wise retrieval of large objects or annotations.
These viewers are used in web applications, such as Content Navigator or any other custom-developed web client:", - "page_start": 210, - "page_end": 210, - "source_file": "sg246915.pdf" - }, - { - "text": "Figure 8-5 Content Manager OnDemand CICS Client login panel\n\n\n\nThe CICS Client provides viewing capabilities for line data reports and a 'best fit' model for fully composed AFP documents. Viewing a standard line data report is shown in Figure 8-6.\n\nFigure 8-6 Viewing a standard line data report\n\n", - "page_start": 223, - "page_end": 223, - "source_file": "sg246915.pdf" - }, - { - "text": "The OS/400 indexer processes three input sources:\n\n - /SM590000 Indexing parameters that specify how the data needs to be indexed. The indexing parameters are created when you define a Content Manager OnDemand application.\n - /SM590000 AFP resources that are required to view and print the data if the application created an AFP print data stream.\n - /SM590000 The print data stream, which can be in a spooled file (all data types) or in a physical file (Line Data or SCS data that was converted to Line Data with First Character Forms Control (FCFC) characters in column one of the data).", - "page_start": 203, - "page_end": 203, - "source_file": "sg246915.pdf" - }, - { - "text": "## 13.4.4 Image data\n\nTo optimize performance with storing and retrieving image formats, such as Tagged Image File Format (TIFF), Graphics Interchange Format (GIF), and Joint Photographic Experts Group (JPEG), do not compress the data because the file sizes might increase. To turn off compression, select the Disable option from the Load Information tab within the application. See Figure 13-6.\n\nFigure 13-6 Disabling compression\n\n\n\nTwo methods are available to turn off data compression:\n\n - /SM590000 Disable: Content Manager OnDemand does not compress the input data. Choose this option when the input data, such as PDF and compressed TIFFs, is already compressed.
Documents are extracted by the appropriate viewer on the client (for example, Adobe Acrobat Reader).\n - /SM590000 None: Content Manager OnDemand does not compress the input data when it loads the input data into the system. When the user selects a document for viewing, Content Manager OnDemand compresses the document before it transmits it over the network and extracts the document at the client.", - "page_start": 335, - "page_end": 335, - "source_file": "sg246915.pdf" - }, - { - "text": "## Windows client viewers\n\nThe Content Manager OnDemand Windows client contains native capabilities for viewing typical archive data types:\n\n - /SM590000 Line Data and SCS\n - /SM590000 AFP\n - /SM590000 Images\n\nThe Windows client reflects the richest set of capabilities in terms of viewing these data types. Because it directly communicates with the Content Manager OnDemand server, we reference the Windows client for all of its features that relate to document display.\n\nThe Line Data viewer of the Windows client is the most sophisticated viewer that is available for Content Manager OnDemand from the selection of readily available viewers.\n\nThe viewing of these primary data types happens within the same application. The Windows client provides other features, such as thumbnails, and configurable and saveable views.\n\nThe Content Manager OnDemand Windows client also contains other capabilities for viewing archive data types, such as Portable Document Format ( PDF ) and User-Defined .\n\nStarting with Content Manager OnDemand version 9.5, for both DocType=PDF and user-defined PDF, the Windows Client will attempt to view a PDF document with Adobe Acrobat, if it is installed.
If Adobe Acrobat is not installed, for DocType=PDF, Adobe Acrobat Reader will be used instead when the PDF document is viewed.\n\nBefore Content Manager OnDemand version 9.5, PDF documents can be viewed by the Windows client in two ways:", - "page_start": 210, - "page_end": 210, - "source_file": "sg246915.pdf" - }, - { - "text": "- - Report file size, document file size (or in the case of large objects, report segment size), and number of documents per report.\n - - Number and distribution of triggers, fields, and indexes per document.\n - - Data type and required data conversion (if any).", - "page_start": 325, - "page_end": 325, - "source_file": "sg246915.pdf" - }, - { - "text": "- /SM590000 Certain data streams, such as Hewlett-Packard (HP) Printer Command Language (PCL) or Xerox metacode, are printer-specific and cannot be displayed. Before you archive or display the documents, these data streams must be transformed into a compatible format.\n - /SM590000 The archived data stream might need to comply with a company's internal rules or regulations. Therefore, the produced data streams must be transformed into the defined and required final format before they are archived.\n - /SM590000 The documents might need to be accessible by a user that is outside of the company.
The document must be displayed through standard tools that are available on any or at least most of the clients, such as an Internet browser or Adobe Acrobat Reader.", - "page_start": 231, - "page_end": 231, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv4.pdf", - "query": "How many articles compose the Syntec French collective bargaining agreement ?", - "target_page": 2, - "target_passage": "The Syntec French collective bargaining agree- ment comprises around 90 articles", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## 6 OSH legislation and OSH infrastructure in the EU\n\n## 6.1 Foundation, legislation, compliance and supervision\n\nThe ethical and economic importance of safe and healthy working conditions led to an integration of this target in international conventions and agreements; it is also embedded in the treaties of the EU.\n\nUN has included 'Safe and secure work environment' as an indicator for Goal 8 of their 17 global 'Sustainable Development Goals ' for 2030. Goal 8 aims to 'Promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all' . 334 It requests in its target 8.8 to 'Protect labour rights and promote safe and secure working environments for all workers, including migrant workers, in particular women migrants, and those in precarious employment.'\n\nThe Preamble to the Constitution 335 of the ILO includes as an objective ' … the protection of the worker against sickness, disease and injury arising out of his employment ...' .
In 2022, the objective of a safe and healthy working environment became part of the 'Declaration on Fundamental Principles and Rights at Work', adding OSH to the existing four basic principles, that is, 1) freedom of association and right to collective bargaining, 2) the elimination of all forms of forced or compulsory labour, 3) the effective abolition of child labour, and 4) the elimination of discrimination. Between the year of the foundation in 1919 and today, the ILO agreed on more than 40 conventions and recommendations addressing OSH, be it either general provisions or provisions for specific groups and sectors or specific risks. 336\n\nThe EU and its predecessors have enshrined health and safety of workers in their founding treaties . Already in 1951, it was stated in Article 3 of the European Coal and Steel Community (ECSC) Treaty that 'The institutions of the Community shall, within the limits of their respective powers, in the common interest … promote improved working conditions and an improved standard of living for the workers in each of the industries for which it is responsible …' . 337 During the development of the European institutions and the EU from those years until today, references to working conditions and safety and health were always part of the treaties, and also in the latest Treaty of Lisbon from 2009. 338\n\nIn Article 151 of the Lisbon Treaty, it is stated that 'The Union and the Member States, shall have as their objectives the promotion of employment, improved living and working conditions …' . The areas of such promotion are set out in Article 153 , where two bullet points refer to OSH: (a) improvement in particular of the working environment to protect workers' health and safety; (b) working conditions. In 2017, the European Commission launched an initiative to agree on the 'European Pilar of Social Rights' (EPSR), comprising 20 key principles guiding the EU in the field of social policy.
339 These pillars were agreed by the Member States; Principle 10 refers to a ' Healthy, safe and well-adapted work environment and data protection.'\n\nThese European and international agreements and treaties regard safety and health as essential for human development, a basic human right . The main reasoning is to eliminate or reduce as much as possible suffering, sickness, disability and death of workers. Often the reasoning refers to intertwined objectives, that is, to economic growth (UN), or to reduce the economic burden of incomplete health and safety at work, be it the burden for enterprises or the society as a whole, that is, by 'Promotion of employment' (Lisbon Treaty) or by 'Prolongation of the participation in the labour market' (EPSR) or 'Data protection' (EPSR).", - "page_start": 117, - "page_end": 117, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- 337 Treaty Establishing the European Coal and Steel Community and Annexes I-III, PARIS, 18 APRIL 1951, Article 3e\n - (DRAFT ENGLISH TEXT), here\n - 338 Consolidated Version of the Treaty on the Functioning of the European Union Official Journal of the European Union, C 326/47, 6.10.2012, Article 151 and Article 153, here\n - 339 The European Parliament, the Council and the Commission: The European Pillar of Social Rights in 20 principles, here\n - 340 EU-OSHA, 2021: Directive 89/391/EEC - OSH 'Framework Directive' of 12 June 1989 on the introduction of measures to encourage improvements in the safety and health of workers at work - 'Framework Directive', here 341\n - Ibid., Framework Directive - Section 2 Employers' obligations.\n - 342 Ibid., Framework Directive - Section 3 Workers' obligations.", - "page_start": 152, - "page_end": 152, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- 3. Carefully read the license agreement.
Select I agree with the terms in the license agreement when you are ready, as shown in Figure 4-9. Click Next .\n\nFigure 4-9 System setup: License agreement\n\n", - "page_start": 116, - "page_end": 116, - "source_file": "sg247938.pdf" - }, - { - "text": "Contract number: ECHA/2019/355\n\n## HAVE AGREED\n\n## I.1.1.1.1. Article 1 Subject matter\n\n - 1.1 This specific contract implements framework contract (FWC) No ECHA/2019/355 signed by the parties on [ complete date ] .\n - 1.2 In accordance with the provisions set out in the FWC and in this specific contract and [its][their] annex[es], which form an integral part of it, the contractor must provide the [following services:] [services specified in Annex [ complete ] . ]\n - I.1.1.1.2. Article 2 Entry into force and duration\n - 2.1 This specific contract enters into force on the date on which the last party signs it.\n - 2.2 The provision of the services starts from the date of entry into force of this specific contract.\n - 2.3 The provision of the services must not exceed [ complete ] [ days] [months ] . The parties may extend the duration by written agreement before it elapses and before expiry of the FWC.\n\n## I.1.1.1.3. Article 3 Price\n\n - 3.1 The price payable under this specific contract excluding reimbursement of expenses is EUR [ amount in figures and in words ].\n\n[The maximum amount covering all services to be provided under this specific contract including reimbursement of expenses and excluding price revision is EUR [ amount in figures and in words ].]\n\n - 3.2 [Reimbursement of expenses is not applicable to this specific contract.] [Within the maximum amount, up to EUR [ amount in figures and in words ] is earmarked for expenses, which must be reimbursed in accordance with the FWC].\n\n***\n\n## I.1.1.1.4.
Article 4 communication details\n\nFor the purpose of this specific contract, communications must be sent to the following addresses:\n\nContracting authority:\n\nEuropean Chemicals Agency\n\n[Directorate [ complete ]]\n\n[Unit [ complete ]]\n\n[ Postcode and city ]\n\nE-mail: [ insert functional mailbox ]", - "page_start": 43, - "page_end": 43, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "## OFF-BALANCE SHEET ARRANGEMENTS\n\n## G uarantees\n\nAs a regular part of our business, we enter into agreements that provide for indemnification and guarantees to counterparties in transactions involving business sale and business combination agreements, sales of services and purchases and development of assets. Due to the nature of these indemnifications, we are unable to make a reasonable estimate of the maximum potential amount we could be required to pay counterparties. Historically, we have not made any significant payment under these indemnifications or guarantees. See Note 26 to our 2013 audited consolidated financial statements for more information.\n\n## Operating Leases\n\nWe have entered into operating leases for the rental of premises, distribution facilities, equipment and wireless towers and other contracts. Terminating any of these lease agreements would not have a material adverse effect on us as a whole.
See 'Commitments and Other Contractual obligations' and Note 27 to our 2013 audited consolidated financial statements for quantification and more information.", - "page_start": 69, - "page_end": 69, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## FIVE-YEAR SUMMARY OF CONSOLIDATED FINANCIAL RESULTS", - "page_start": 90, - "page_end": 90, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Contents\n\n| Consolidated Five-Year Summary | 70 |\n|-------------------------------------------------|------|\n| Business and Other Risks | 71 |\n| Consolidated Balance Sheets | 72 |\n| Consolidated Statements of Income | 74 |\n| Consolidated Statements of Shareholders' Equity | 75 |\n| Consolidated Statements of Cash Flows | 76 |\n| Notes to Consolidated Financial Statements | 77 |\n| Report of Independent Auditors | 104 |\n| Non-consolidated Five-Year Summary | 105 |", - "page_start": 70, - "page_end": 70, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## I.10.3. Provision of list of pre-existing rights and documentary evidence\n\nThe contractor must provide the contracting authority with a list of pre-existing rights as set out in Article II.13.4 together with the invoice for payment of the balance at the latest.\n\n## I.11. Termination by either party 2\n\nEither party may terminate the FWC and/or the FWC and specific contracts by sending formal notification to the other party with three months written notice.\n\nIf the FWC or a specific contract is terminated:\n\n - a) neither party is entitled to compensation;\n - b) the contractor is entitled to payment only for the services provided before termination takes effect.\n\nThe second, third and fourth paragraphs of Article II.18.4 apply.\n\n## I.12.
Applicable law and settlement of disputes\n\n - I.12.1 The FWC is governed by Union law, complemented, where necessary, by the law of Finland.\n - I.12.2 The courts of Finland have exclusive jurisdiction over any dispute regarding the interpretation, application or validity of the FWC.\n\n## I.13. Interinstitutional FWC\n\nNot applicable\n\n## I.14. Service provided on the premises of the contracting authority\n\nNot applicable.\n\n## I.15. Other special conditions\n\nElectronic documents exchange\n\nIt is intended that the documents exchange (e.g. invoices, deliverables) between the Agency and the Contractor will have to be carried out via electronic means.\n\nAt the request of the Agency, the use of such electronic applications will become mandatory, upon mutual agreement, during the performance of the contract, at no additional cost for the Agency.", - "page_start": 10, - "page_end": 10, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "## Q4: Are there any correlations between datasets with respect to model ranking?\n\nThe datasets correlation w.r.t model ranking are presented in appendix Figure 12. Except for two datasets ( MasakhaNEWSClusteringP2P , SummEvalFr ), the correlations, on average, are high. There is still enough diversity to make each dataset interesting for the French MTEB benchmark. Two groups ( SyntecReranking / SyntecRetrieval , MassiveScenarioClassification / MTOPDomainClassification / MassiveIntentClassification ) exhibit notably high correlations ( ∼ 0.97). It is interesting to point out some sub-diagonal correlation blocks. The datasets being arranged by task indicate that models behave slightly more similarly within the same task than between two different tasks. This underscores the importance of having multiple tasks in the benchmark to select general-purpose models. For readers interested in specific tasks, it is more relevant to examine task-specific rankings rather than the overall one.
The complementary results of model correlations w.r.t to strengths and weaknesses on datasets are displayed in appendix Figure 11. Strong correlations in behavior emerge among the variants of the same models (e.g. DistilBERT, sentence-croissant, sentence-t5, e5, etc.). Correlations are also generally observed among numerous models trained using the sentence transformers framework (Reimers and Gurevych, 2019), as well as proprietary models, e.g. from Cohere and OpenAI. Conversely, these models finetuned for sentence similarity, show minimal correlation with pre-trained models for which tokenembedding pooling techniques are employed.\n\n## 5 Conclusion and perspectives\n\nIn this work, we introduce a large-scale embedding benchmark for French to enable the research community and industry to select the most relevant embedding methods based on their specific needs. We undertake significant efforts in collecting 15 datasets and create 3 new quality-checked ones to enhance this collection. The whole French benchmark runs on 26 tasks. We select a diverse range of 51 models, including prominent French and multilingual models deemed most efficient to conduct a broad comparison. Our implementation is open to the community and features a public leaderboard, allowing the results to evolve with new models or datasets. After an in-depth analysis of the results, OpenAI models perform significantly better than\n\nthe other models. However, other models should be considered for their performance on specific tasks, being open source or having a small embedding dimension.\n\nThis work opens several doors for future improvements. By examining dataset diversity in terms of topics and model ranking, we observe that the benchmark would benefit from additional datasets that introduce higher diversity. Beyond classification, many tasks focus on semantic similarity, explaining the strong performance of models trained for similarity.
Exploring novel tasks in the generative spectrum or evaluating token embeddings (contextualized or not) on tasks like Named Entity Recognition could be an interesting path for future exploration. There are also opportunities for improvements on the model side. With numerous existing models that could be added to the leaderboard and many new proposals awaiting. For instance, we can already see the promising capabilities of early variants of recent models (Faysse et al., 2024) and expect that future proposals will come to compete strongly with closed-source models. Ultimately, we hope to see the emergence of other language-specific MTEB variants (e.g. for high-resource languages like Spanish and German), enabling a more comprehensive evaluation of multilingual model performance.\n\n## 6 Limitations", - "page_start": 7, - "page_end": 7, - "source_file": "arxiv4.pdf" - }, - { - "text": "Rent expense for the years 2003, 2002, and 2001 amounted to approximately $13,592,000, $13,683,000, and $13,387,000, respectively. Contingent rent expense under both capitalized and operating leases (generally based on mileage of transportation equipment) amounted to $313,000, $787,000, and $869,000 for the years 2003, 2002, and 2001, respectively.\n\n## Guarantees, Commitments, and Contingencies\n\nDuring the second quarter ended June 28, 2003, the Company entered into a one-year financial agreement for the benefit of one of its distribution chain partners. The maximum financial exposure assumed by the Company as a result of this arrangement totals $3 million of which over 75% is secured by collateral. In accordance with the provisions of FIN 45, the Company has recorded the fair value of this guarantee, which is estimated to be less than $0.1 million.\n\nThe Company utilizes letters of credit in the amount of $24 million to back certain financing instruments, insurance policies, and payment obligations.
The letters of credit reflect fair value as a condition of their underlying purpose and are subject to fees competitively determined.\n\nThe Company is contingently liable for future minimum payments totaling $9.7 million under a transportation service contract. The transportation agreement is for a three-year period and is automatically renewable for periods of one year unless either party gives sixty days' written notice of its intent to terminate at the end of the original three-year term or any subsequent term. The minimum payments are $4.8 million in 2004, and $4.9 million in 2005.\n\nThe Company has guaranteed a contractual lease obligation of an independent contract furniture dealership. The related term expires in the fourth quarter of 2004. As of January 3, 2004, the remaining unpaid lease payments subject to this guarantee totaled approximately $69,000. In accordance with the provisions of FIN 45 no liability has been recorded, as the Company entered into this agreement prior to December 31, 2002.", - "page_start": 51, - "page_end": 51, - "source_file": "NYSE_HNI_2003.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv4.pdf", - "query": "In the context of research publication, what is HAL ?", - "target_page": 3, - "target_passage": "Hyper Articles en Ligne (HAL) is a French open archive of scholarly documents from all academic fields.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Learning what happens at inference time. Most BERT analysis papers focus on different probes of the model, with the goal to find what the language model \"knows\". However, probing studies have limitations (subsection 3.4), and to this point, far fewer papers have focused on discovering what knowledge actually gets used.
Several promising directions are the \"amnesic probing\" (Elazar et al., 2020), identifying features important for prediction for a given task (Arkhangelskaia and Dutta, 2019), and pruning the model to remove the nonimportant components (Voita et al., 2019b; Michel et al., 2019; Prasanna et al., 2020).\n\n## 8 Conclusion\n\nIn a little over a year, BERT has become a ubiquitous baseline in NLP experiments and inspired numerous studies analyzing the model and proposing various improvements. The stream of papers seems to be accelerating rather than slowing down, and we hope that this survey helps the community to focus on the biggest unresolved questions.\n\n## 9 Acknowledgements\n\nWe thank the anonymous reviewers for their valuable feedback. This work is funded in part by the NSF award number IIS-1844740 to Anna Rumshisky.", - "page_start": 11, - "page_end": 11, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "It is also an example predicated on copyright's limitations and exceptions - in this case, on U.S. fair use. While the Authors Guild filed a copyright infringement suit against HathiTrust, federal courts in 2012 and 2014 ruled that HathiTrust's use of books was fair use. 32\n\nA nonprofit founded in 2008, HathiTrust grew out of a partnership among major US university libraries and today is 'an international community of research libraries committed to the long-term curation and availability of the cultural record.' It started in what it calls the 'early 33 days of mass digitization' - that is, at a time when it started to become economical to take existing physical artifacts in libraries and turn them into digital files at a large scale.\n\nThe founding members of HathiTrust were among the initial partners for Google's Book Search product, which allows people to search across and view small snippets of text from in-copyright books and read full copies of public domain books scanned from libraries' 34 collections.
The libraries provided Google with books from their collections, Google would then scan the books for use in Book Search, and return to the libraries a digital copy for their own uses. These uses included setting up HathiTrust not only to ensure long-term preservation of the digital books and their metadata, but also to facilitate other uses, including full text search of books and accessibility for people with print disabilities. In separate court cases, both Google and HathiTrust's uses of the books were deemed consistent with copyright law.\n\nThe uses most relevant to this paper are those enabled by what HathiTrust refers to today as the Research Center. The Center grew in part out of a research discipline called 'digital humanities,' which, among other things, seeks to use computational resources or other digital technologies to analyze information and contribute to the study of literature, media, history, and other areas. For instance, imagine you want to understand how a given term (e.g., 'war on drugs') became used; one might seek to analyze when the term was first used and how often it was used over time by analyzing a vast quantity of sources, searching out the term's use. The insight here is that there is much to be learned not just from reading or otherwise consuming specific material, but also from 'non-consumptive research,' or \"research in which computational analysis is performed on one or more volumes (textual or image objects)\" to derive other sorts of insights. AI training is a type of non-consumptive use.\n\nToday, the Center '[s]upports large-scale computational analysis of the works in the HathiTrust Digital Library to facilitate non-profit and educational research.' It includes over 18 million books in over 400 languages from the HathiTrust Digital Library collection. Roughly 58% of the corpus is in copyright. 
HathiTrust notes that, while this corpus is large, it has limitations in terms of its representation across subject matter, language, geography, and other dimensions. In terms of subject matter, the corpus is skewed towards humanities (64.9%) and social sciences (14.3%). In terms of language, 51% of the books are in English,", - "page_start": 14, - "page_end": 14, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## Acknowledgements\n\nAuthored by Alek Tarkowski and Paul Keller (Open Future), Derek Slater and Betsy Masiello (Proteus Strategies) in collaboration with Creative Commons.\n\nWe are grateful to participants in the workshops, including Luis Villa, Tidelift and openml.fyi; Jonathan Band; Peter Brantley, UC Davis; Aaron Gokaslan, Cornell; Lila Bailey, Internet Archive; Jennifer Vinopal, HathiTrust Digital Library; Jennie Rose Halperin, Library Futures/ NYU Engelberg Center, Nicholas P. Garcia, Public Knowledge; Sayeed Choudhury; Erik Stallman, UC Berkeley School of Law. The paper represents the views of the authors, however, and should not be attributed to the workshop as a whole. All mistakes or errors are the authors'.\n\n\n\nThis report is published under the terms of the Creative Commons Attribution License.", - "page_start": 21, - "page_end": 21, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## A Primer in BERTology: What We Know About How BERT Works\n\n## Anna Rogers\n\nCenter for Social Data Science University of Copenhagen arogers@sodas.ku.dk\n\n## Olga Kovaleva\n\nUniversity of Massachusetts Lowell\n\nDept. of Computer Science okovalev@cs.uml.edu\n\n## Abstract\n\nTransformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. 
We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.\n\n## 1 Introduction\n\nSince their introduction in 2017, Transformers (Vaswani et al., 2017) have taken NLP by storm, offering enhanced parallelization and better modeling of long-range dependencies. The best known Transformer-based model is BERT (Devlin et al., 2019); it obtained state-of-the-art results in numerous benchmarks and is still a must-have baseline.\n\nWhile it is clear that BERT works remarkably well, it is less clear why , which limits further hypothesis-driven improvement of the architecture. Unlike CNNs, the Transformers have little cognitive motivation, and the size of these models limits our ability to experiment with pre-training and perform ablation studies. This explains a large number of studies over the past year that attempted to understand the reasons behind BERT's performance.\n\nIn this paper, we provide an overview of what has been learned to date, highlighting the questions which are still unresolved. We first consider the linguistic aspects of it, i.e., the current evidence regarding the types of linguistic and world knowledge learned by BERT, as well as where and how this knowledge may be stored in the model. We then turn to the technical aspects of the model and provide an overview of the current proposals to\n\n## Anna Rumshisky\n\nDept. of Computer Science University of Massachusetts Lowell\n\narum@cs.uml.edu\n\nimprove BERT's architecture, pre-training and finetuning. 
We conclude by discussing the issue of overparameterization, the approaches to compressing BERT, and the nascent area of pruning as a model analysis technique.\n\n## 2 Overview of BERT architecture\n\nFundamentally, BERT is a stack of Transformer encoder layers (Vaswani et al., 2017) which consist of multiple self-attention \"heads\". For every input token in a sequence, each head computes key, value and query vectors, used to create a weighted representation. The outputs of all heads in the same layer are combined and run through a fully-connected layer. Each layer is wrapped with a skip connection and followed by layer normalization.\n\nThe conventional workflow for BERT consists of two stages: pre-training and fine-tuning. Pretraining uses two self-supervised tasks: masked language modeling (MLM, prediction of randomly masked input tokens) and next sentence prediction (NSP, predicting if two input sentences are adjacent to each other). In fine-tuning for downstream applications, one or more fully-connected layers are typically added on top of the final encoder layer.\n\nThe input representations are computed as follows: each word in the input is first tokenized into wordpieces (Wu et al., 2016), and then three embedding layers (token, position, and segment) are combined to obtain a fixed-length vector. Special token [CLS] is used for classification predictions, and [SEP] separates input segments.\n\nGoogle 1 and HuggingFace (Wolf et al., 2020) provide many variants of BERT, including the original \"base\" and \"large\" versions. They vary in the number of heads, layers, and hidden state size.\n\ngoogle-research/bert", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "- Haack, Susan (1978). \"1. 'Philosophy of logics' \". Philosophy of Logics (https://philpapers.or g/rec/HAAPOL-2). London and New York: Cambridge University Press. pp. 1-10. ISBN 9780-521-29329-7. 
Archived (https://web.archive.org/web/20211207200551/https://philpapers.o rg/rec/HAAPOL-2) from the original on 7 December 2021. Retrieved 29 December 2021.\n - Haack, Susan (1996). Deviant Logic, Fuzzy Logic: Beyond the Formalism . University of Chicago Press. ISBN 978-0-226-31133-3.\n - Haaparanta, Leila (2009). \"1. Introduction\". The Development of Modern Logic . Oxford University Press. pp. 4-6. ISBN 978-0-19-513731-6.\n - Hansen, Hans (2020). \"Fallacies\" (https://plato.stanford.edu/entries/fallacies/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Archived (http s://web.archive.org/web/20210329182946/https://plato.stanford.edu/entries/fallacies/) from the original on 29 March 2021. Retrieved 18 March 2021.\n - Hartmann, Stephan; Sprenger, Jan (2010). \"Bayesian Epistemology\". The Routledge Companion to Epistemology (https://philpapers.org/rec/BOVSIO). London: Routledge. pp. 609-620. ISBN 978-0-415-96219-3. Archived (https://web.archive.org/web/2021051609 5047/https://philpapers.org/rec/BOVSIO) from the original on 16 May 2021. Retrieved 4 January 2022.\n - Hasse, Dag Nikolaus (2008). \"Influence of Arabic and Islamic Philosophy on the Latin West\" (https://plato.stanford.edu/entries/arabic-islamic-influence/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Retrieved 19 July 2023.\n - Hawthorne, James (2021). \"Inductive Logic\" (https://plato.stanford.edu/entries/logic-inductiv e/). The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University. Archived (https://web.archive.org/web/20220121081805/https://plato.stanford.ed u/entries/logic-inductive/) from the original on 21 January 2022. Retrieved 6 January 2022.\n - Hintikka, Jaakko J. (2019). \"Philosophy of logic\" (https://www.britannica.com/topic/philosoph y-of-logic). Encyclopædia Britannica . 
Archived (https://web.archive.org/web/2015042810173 2/http://www.britannica.com/EBchecked/topic/346240/philosophy-of-logic) from the original on 28 April 2015. Retrieved 21 November 2021.\n - Hintikka, Jaakko J. (2023). \"Logical systems\" (https://www.britannica.com/topic/logic/Logical -systems). Encyclopædia Britannica . Archived (https://web.archive.org/web/2021120718465 6/https://www.britannica.com/topic/logic/Logical-systems) from the original on 7 December 2021. Retrieved 4 December 2021.\n - Hintikka, Jaakko (1970). \"Information, Deduction, and the A Priori\". Noûs . 4 (2): 135-152. doi:10.2307/2214318 (https://doi.org/10.2307%2F2214318). ISSN 0029-4624 (https://searc h.worldcat.org/issn/0029-4624). JSTOR 2214318 (https://www.jstor.org/stable/2214318).\n - Hintikka, Jaakko; Sandu, Gabriel (2006). \"What is Logic?\". In Jacquette, D. (ed.). Philosophy of Logic (https://philpapers.org/rec/JAAWIL). North Holland. pp. 13-39. ISBN 978-0-444-51541-4. Archived (https://web.archive.org/web/20211207235525/https://ph ilpapers.org/rec/JAAWIL) from the original on 7 December 2021. Retrieved 29 December 2021.\n - Hintikka, Jaakko J.; Spade, Paul Vincent. \"History of logic\" (https://www.britannica.com/topi c/history-of-logic). Encyclopædia Britannica . Retrieved 23 September 2022.\n - Honderich, Ted (2005). The Oxford Companion to Philosophy (https://philpapers.org/rec/HO NTOC-2). Oxford University Press. ISBN 978-0-19-926479-7. Archived (https://web.archive. org/web/20210129082636/https://philpapers.org/rec/HONTOC-2) from the original on 29 January 2021. Retrieved 2 January 2022.\n - Hurley, Patrick J. (2015). \"4. Categorical Syllogisms\". Logic: The Essentials . Wadsworth. pp. 189-237. ISBN 978-1-305-59041-0.", - "page_start": 29, - "page_end": 29, - "source_file": "wikipedia1.pdf" - }, - { - "text": "\n\n## Better information, Better decision\n\n## Market Intelligence\n\nASAKO HOSHINO Vice President\n\n\n\n'Why does a company conduct market research on consumers? 
It is not just about asking the customer if they prefer A or B, which is often what managers want to know. Accumulating knowledge on consumer behavior and emerging trends is how you come up with ideas that are truly focused on the customer. Our aim is to gain the deepest understanding of the customer possible, and use that insight to identify future trends.\n\nThe Market Intelligence department is relatively new, formed by combining the research functions once carried out separately by various divisions. The merger and our independent status have brought several practical benefits. We now have uniform procedures for conducting research, better research methodologies, and greater objectivity in the interpretation of the data. Today, we're a team of experts in this field, not simply coordinators between research organizations and the decision makers. We are often benchmarked by other industries.\n\nWhen the department was first established, Mr. Ghosn made one thing very clear: Do not attack the methodology! Different business areas may complain when we release information that is negative or differs from their objectives. However, they cannot attack how we came to our conclusions, because our methodology is considered the best within the organization. We are transparent in our\n\nselection of methodologies and how we approach conclusions. Among the various areas, we aim to be the department that most effectively utilizes the PDCA-plan, do, check and action-cycle. We are always working to get better and more accurate information to upgrade our methodology. Every year we hold a PDCA session to review our methodology with other departments. Anyone can assess Market Intelligence at this time. 
This is also a great opportunity to share methodologies and approaches with various functions.\n\nWe also conduct trend review meetings with all decision-makers, including non-marketing officers, to understand social, consumer and value trends so that we can identify sources of innovation for all areas. This makes us unique. Our analysts enrich the analysis, interpretation and forecast because they are aware of global social and consumer trends. The trend review meetings also remind people in all departments-even those not directly involved with sales and marketing-that customers are truly the center of our business.\n\nWe work with different research experts and companies as our partners. They offer a variety of hightech techniques such as glasses with cameras that track eye movement, instruments that measure brainwaves or pupil dilation to detect preferences, and non-categorical measures that help us find personal evaluations of perceived quality or design. Our job is to evaluate these research companies and their output, and to develop the best methodology for our issues. We are always refining the tools we have and looking for new ones that will boost our accuracy. Our strong ties with outside experts are a source of competitive advantage for Nissan.\n\nAgain, it all goes back to being customer-oriented. Confirming that customer-oriented stance will create value for Nissan. Market Intelligence must be a dedicated evangelist for this change.'", - "page_start": 41, - "page_end": 41, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "would be watermarked and thus detectable [7, 66, 123]? Are there policy approaches that could effectively regulate their use?\n\nIn summary, we advocate for research that centers the people who stand to be adversely affected by the resulting technology, with a broad view on the possible ways that technology can affect people. 
This, in turn, means making time in the research process for considering environmental impacts, for doing careful data curation and documentation, for engaging with stakeholders early in the design process, for exploring multiple possible paths towards longterm goals, for keeping alert to dual-use scenarios, and finally for allocating research effort to harm mitigation in such cases.\n\n## 8 CONCLUSION\n\nThe past few years, ever since processing capacity caught up with neural models, have been heady times in the world of NLP. Neural approaches in general, and large, Transformer LMs in particular, have rapidly overtaken the leaderboards on a wide variety of benchmarks and once again the adage 'there's no data like more data' seems to be true. It may seem like progress in the field, in fact, depends on the creation of ever larger language models (and research into how to deploy them to various ends).\n\nIn this paper, we have invited readers to take a step back and ask: Are ever larger LMs inevitable or necessary? What costs are associated with this research direction and what should we consider before pursuing it? Do the field of NLP or the public that it serves in fact need larger LMs? If so, how can we pursue this research direction while mitigating its associated risks? 
If not, what do we need instead?\n\nWe have identified a wide variety of costs and risks associated with the rush for ever larger LMs, including: environmental costs (borne typically by those not benefiting from the resulting technology); financial costs, which in turn erect barriers to entry, limiting who can contribute to this research area and which languages can benefit from the most advanced techniques; opportunity cost, as researchers pour effort away from directions requiring less resources; and the risk of substantial harms, including stereotyping, denigration, increases in extremist ideology, and wrongful arrest, should humans encounter seemingly coherent LM output and take it for the words of some person or organization who has accountability for what is said.\n\nThus, we call on NLP researchers to carefully weigh these risks while pursuing this research direction, consider whether the benefits outweigh the risks, and investigate dual use scenarios utilizing the many techniques (e.g. those from value sensitive design) that have been put forth. We hope these considerations encourage NLP researchers to direct resources and effort into techniques for approaching NLP tasks that are effective without being endlessly data hungry. But beyond that, we call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms. Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups. Thus what is also needed is scholarship on the benefits, harms, and risks of mimicking humans and thoughtful design of target tasks grounded in use cases sufficiently concrete to allow collaborative design with affected communities.", - "page_start": 9, - "page_end": 9, - "source_file": "arxiv5_ccby4license.pdf" - }, - { - "text": "## 7. 
Conclusion\n\nThis paper is a snapshot of an idea that is as underexplored as it is rooted in decades of existing work. The concept of mass digitization of books, including to support text and data mining, of which AI is a subset, is not new. But AI training is newly of the zeitgeist, and its transformative use makes questions about how we digitize, preserve, and make accessible knowledge and cultural heritage salient in a distinct way.\n\nAs such, efforts to build a books data commons need not start from scratch; there is much to glean from studying and engaging existing and previous efforts. Those learnings might inform substantive decisions about how to build a books data commons for AI training. For instance, looking at the design decisions of HathiTrust may inform how the technical infrastructure and data management practices for AI training might be designed, as well as how to address challenges to building a comprehensive, diverse, and useful corpus. In addition, learnings might inform the process by which we get to a books data commons for example, illustrating ways to attend to the interests of those likely to be impacted by the dataset's development. 41\n\nWhile this paper does not prescribe a particular path forward, we do think finding a path (or paths) to extend access to books for AI training is critical. In the status quo, large swaths of knowledge contained in books are effectively locked up and inaccessible to most everyone. Google is an exception - it can reap the benefits of their 40 million books dataset for research, development, and deployment of AI models. Large, well-resourced entities could theoretically try to replicate Google's digitization efforts, although it would be incredibly expensive, impractical, and largely duplicative for each entity to individually pursue their own efforts. Even then, it isn't clear how everyone else - independent researchers, entrepreneurs, and smaller entities - will have access. 
The controversy around the Books3 dataset discussed at the outset should not, then, be an argument in favor of preserving the status quo. Instead, it should highlight the urgency of building a books data commons to support an AI ecosystem that provides broad benefits beyond the privileged few.", - "page_start": 20, - "page_end": 20, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "institutional requirements. The participants provided their written informed consent to participate in this study.\n\n## Author contributions\n\nSD: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Project administration, Resources, Visualization, Writing -original draft, Writing -review & editing. EA: Conceptualization, Formal Analysis, Methodology, Supervision, Writing -review & editing. BN: Conceptualization, Formal Analysis, Funding acquisition, Methodology, Project administration, Resources, Supervision, Writing -review & editing.\n\n## Funding\n\nThe author(s) declare that /uniFB01 nancial support was received for the research, authorship, and/or publication of this article.\n\nThe development of the CoreDISTparticipation and the RCT is funded by the Northern Norway Health Authority (Helse Nord RHF). This interview study was funded by Nord University (PhD salary).\n\n## References\n\n- 1. Walton C, King R, Rechtman L, Kaye W, Leray E, Marrie RA, et al. Rising prevalence of multiple sclerosis worldwide: insights from the Atlas of MS, third edition. Mult Scler . (2020) 26(14):1816 -21. doi: 10.1177/1352458520970841\n- 2. Casey B, Coote S, Galvin R, Donnelly A. Objective physical activity levels in people with multiple sclerosis: meta-analysis. Scand J Med Sci Sports . (2018) 28 (9):1960 -9. doi: 10.1111/sms.13214\n- 3. Kinnett-Hopkins D, Adamson B, Rougeau K, Motl RW. People with MS are less physically active than healthy controls but as active as those with other chronic diseases: an updated meta-analysis. Mult Scler Relat Disord . 
(2017) 13:38 -43. doi: 10.1016/j.msard.2017.01.016\n- 4. Hoang PD, Lord S, Gandevia S, Menant J. Exercise and sports science Australia (ESSA) position statement on exercise for people with mild to moderate multiple sclerosis. J Sci Med Sport . (2022) 25(2):146 -54. doi: 10.1016/j.jsams.2021.08.015\n- 5. Dalgas U, Langeskov-Christensen M, Stenager E, Riemenschneider M, Hvid LG. Exercise as medicine in multiple sclerosis -time for a paradigm shift: preventive, symptomatic, and disease-modifying aspects and perspectives. Curr Neurol Neurosci Rep . (2019) 19(11):1 -12. doi: 10.1007/s11910-019-1002-3\n- 6. Riemenschneider M, Hvid LG, Ringgaard S, Nygaard MKE, Eskildsen SF, Gaemelke T, et al. Investigating the potential disease-modifying and neuroprotective ef /uniFB01 cacy of exercise therapy early in the disease course of multiple sclerosis: the early multiple sclerosis exercise study (EMSES). Mult Scler . (2022) 28(10):1620 -9. doi: 10. 1177/13524585221079200\n- 7. Kalb R, Brown TR, Coote S, Costello K, Dalgas U, Garmon E, et al. Exercise and lifestyle physical activity recommendations for people with multiple sclerosis throughout the disease course. Mult Scler . (2020) 26(12):1459 -69. doi: 10.1177/ 1352458520915629\n- 8. Moreno-Navarro P, Manca A, Martinez G, Ventura L, Barbado D, Vera-García FJ, et al. Test-retest reliability and known-groups validity of trunk muscle tests in people with multiple sclerosis: a cross-sectional, case-control study. Phys Ther . (2021) 101 (5):1 -9. doi: 10.1093/ptj/ptzab049\n- 9. Raats J, Arntzen EC, Lamers I, Feys P, Normann B. What is the distribution of trunk impairments and its relationship with disability level in individuals with multiple sclerosis? Mul Scler Relat Disord . (2021) 57:103325. doi: 10.1016/j.msard. 2021.103325\n- 10. Normann B, Arntzen EC. What are the relationships between trunk control, balance and walking in individuals with multiple sclerosis with minor to moderate disability? Eur J Physiother . 
(2021) 23(6):377 -83. doi: 10.1080/21679169.2020.1772870\n\n## Acknowledgments\n\nThe authors would like to thank the participants in this study and the user representatives from Nordland MS Association for their valuable contributions. The authors also acknowledge philosopher of the mind and cognitive sciences Hanne De Jaegher for the valuable comments on the interpretations and discussions of the results.\n\n## Con /uniFB02 ict of interest", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed13.pdf" - }, - { - "text": "| Proceedings of the 2018 |\n| [32] Herbert H. Clark, Robert Schreuder, and Samuel Buttrick. 1983. Common ground at the understanding of demonstrative reference. Journal of Verbal Learning and Verbal Behavior 22, 2 (1983), 245 - 258. https://doi.org/10.1016/S0022- 5371(83)90189-5 [33] Herbert H. Clark and Deanna Wilkes-Gibbs. 1986. Referring as a collaborative |\n| Associa- tion for Computational Linguistics, Brussels, Belgium, 3570-3580. https: //doi.org/10.18653/v1/D18-1393 |\n| [36] Christian Davenport. 2009. Media bias, perspective, and state repression: The Black Panther Party . Cambridge University Press. |\n| [37] Ferdinand de Saussure. 1959. Course in General Linguistics . The Philosophical Society, New York. Translated by Wade Baskin. [38] Terrance de Vries, Ishan Misra, Changhan Wang, and Laurens van der Maaten. |\n| 2019. Does object recognition work for everyone?. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops . 52-59. |\n| [39] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Language Technologies, Volume 1 |\n| [40] Maeve Duggan. 2017. Online Harassment 2017 . Pew Research Center. [41] Jennifer Earl, Andrew Martin, John D. McCarthy, and Sarah A. Soule. 
2004. The use of newspaper data in the study of collective action. Annual Review of 30 (2004), 65-80. |\n| Sociology [42] Ethan Fast, Tina Vachovsky, and Michael Bernstein. 2016. Shirtless and Danger- ous: Quantifying Linguistic Signals of Gender Bias in an Online Fiction Writing Community. In Proceedings of the International AAAI Conference on Web and |\n| Social Media , Vol. 10. Switch Transform- ers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. |\n| tational Analysis of Intricate Political Strategies. In |", - "page_start": 10, - "page_end": 10, - "source_file": "arxiv5_ccby4license.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv4.pdf", - "query": "What is the effect of embedding dimension on embedding representation quality ?", - "target_page": 6, - "target_passage": "we observe a performance correla- tion with the embedding dimension and the model’s number of parameters, which are often correlated themselves", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "## MTEB-French: Resources for French Sentence Embedding Evaluation and Analysis\n\nMathieu Ciancone Wikit, France mathieu@wikit.ai\n\n## Marion Schaeffer\n\nWikit, France marion@wikit.ai\n\n## Abstract\n\nRecently, numerous embedding models have been made available and widely used for various NLP tasks. The Massive Text Embedding Benchmark (MTEB) has primarily simplified the process of choosing a model that performs well for several tasks in English, but extensions to other languages remain challenging. This is why we expand MTEB to propose the first massive benchmark of sentence embeddings for French. We gather 15 existing datasets in an easy-to-use interface and create three new French datasets for a global evaluation of 8 task categories. 
We compare 51 carefully selected embedding models on a large scale, conduct comprehensive statistical tests, and analyze the correlation between model performance and many of their characteristics. We find out that even if no model is the best on all tasks, large multilingual models pre-trained on sentence similarity perform exceptionally well. Our work comes with open-source code, new datasets and a public leaderboard 1 .\n\n## 1 Introduction\n\nEmbeddings are dense vector representations that capture the semantics of an input. The first emblematic example is Word2Vec, introduced by Mikolov et al. (2013). It consists of neural architectures trained to learn high-quality word representations from contextual relationships in vast amounts of text. Other models were proposed since then, leveraging the transformer architecture (Vaswani et al., 2017) to produce both generic and contextualized word embeddings using self-attention. Many models now exist with various architectures, monolingual or multilingual, pre-trained or fine-tuned (Naseem et al., 2021; Ding et al., 2023).\n\nIn this work, our primary objective is to introduce a large-scale embedding benchmark for\n\n## Imene Kerboua\n\nEsker, France imene.kerboua@esker.com\n\n## Wissam Siblini\n\nwissam.siblini92@gmail.com\n\nFrench to enable the research community and industry to select the most relevant embedding methods based on one's specific needs, such as being opensource, versatile or targeted toward a particular task, having a small embedding dimension, the ability to process long texts or their performance. To achieve this goal, we undertake significant efforts in collecting datasets to conduct a broad comparison of models. We ensure that the datasets cover various tasks within a common, easy-to-use framework, and we create three new quality-checked datasets to enhance this collection. We select a diverse range of models, including prominent French and multilingual models deemed most efficient. 
The results of our study already enable the community to make informed model selections, whether for general purposes or specific tasks. Additionally, our implementation is open to the community and features a public leaderboard, allowing the results to evolve with new models or datasets. With this first large-scale comparison, we perform an in-depth analysis of the results, confirming well-known findings such as the correlation between performance and model/embedding dimensions and uncovering interesting nuances.\n\n## 2 Related Work\n\nSentence Embeddings Sentence embeddings are required for many language tasks, such as Semantic Textual Similarity (STS) and knowledge retrieval. Many models have been proposed in the literature, leveraging pooling strategies (Devlin et al., 2019; Muennighoff, 2022) or similarity fine-tuning (Reimers and Gurevych, 2019) using a contrastive framework (Gao et al., 2021; Neelakantan et al., 2022; Ni et al., 2021; Wang et al., 2022; Zhang et al., 2023), leveraging prompts (Wang et al., 2023) or a two steps training process (Chen et al., 2024; Lee et al., 2024). Few French-language models have been proposed in the literature (Martin et al.,", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv4.pdf" - }, - { - "text": "with respect to model ranking?\n\nTo go further than the correlation analysis among datasets regarding their topics (see section 3.1.5), subsequent analysis will be conducted regarding how they rank models. Additionally, complementary insights will be derived from examining correlations of models relative to their strengths and weaknesses across different datasets.\n\n## 4 Results and discussion\n\nIn this section, we present the results through the prism of our research questions.\n\n## Q1: Is there a model that outstands on all tasks?\n\nModels performances for each task are presented in appendix Tables 9, 10, 11, 12 and 13. 
Figure 1 shows the critical difference diagram of average score ranks.\n\nAs in MTEB (Muennighoff et al., 2022), no model claims state-of-the-art in all tasks even if the text-embedding-3-large model is in first place on average on all tasks (see Table 9). It ranks first for the classification and reranking tasks. For the clustering task, text-embedding-ada-002 is the best model. The models voyage-code-2 , textembedding-3-small and mistral-embed share the top positions in the retrieval task ranking. For the pair classification task, laser2 is ahead of its competitors. Finally, sentence-camembert-large leads on the STS task and multilingual-e5-small has the best results for summarization.\n\nFigure 1 shows a global model comparison across all datasets. The models are arranged horizontally according to their performance, with the best models on the left. The black bars represent the statistical equivalence between the models' performances. The statistically equivalent top performers for this benchmark are OpenAI's models text-embedding-3-large , text-embedding-3small and text-embedding-ada-002 . Interestingly, many models do not show a significant performance gap between their base and large flavours. 
Some French models stand out among the multilingual models, such as Solon-embeddings-large-0.1 , sentence\\_croissant\\_alpha\\_v0.3 and sentence-camembert-large .\n\n## Q2: Are there any links between model characteristics and performance?\n\nThe Spearman correlations between the average rank of the models and their characteristics are the following:\n\n - · Tuned for sentence similarity : 0.727\n - · Finetuned vs pretrained : 0.544\n - · Model number of parameters : 0.49\n - · Embedding dimension : 0.452\n - · Closed source : 0.449\n - · Max sequence length : 0.336\n - · Multilingual : 0.103\n - · English : 0.025\n - · English but tuned on other languages : -0.025\n - · French : -0.134\n - · Bilingual : -0.135\n\nAdditionally, all cross-correlations between characteristics are reported in appendix Figure 10.\n\nAs expected, the score most strongly correlates with whether the evaluated models were trained on a sentence similarity task. Of course, this criterion is connected to the more general Finetuned one. The only top-performing models solely pre-trained are from the E5 family, where the pre-training is, in fact, contrastive and optimized for similarity. Conversely, models pre-trained on token-level tasks and generating embeddings via pooling appear less well-suited for the benchmark tasks.", - "page_start": 5, - "page_end": 5, - "source_file": "arxiv4.pdf" - }, - { - "text": "Furthermore, we observe a performance correlation with the embedding dimension and the model's number of parameters, which are often correlated themselves. This appears very clearly on the relative ranking of E5 and T5 models (see Figure 1). However, some small models perform very well on the benchmark, such as the standard version of the multilingual universal sentence encoder or Solon-embeddings-base-1.0 . Notably, the maximum sequence length, while an important criterion for generative tasks with LLMs, is less correlated with performance than the other dimensions. 
This can be explained by many datasets containing relatively small texts (see appendix Table 3 showing that 14 datasets have fewer than 50 tokens).\n\nRegarding language, it is surprising that good performance is not particularly correlated with French models. In reality, the other aspects of the models, such as being fine-tuned", - "page_start": 5, - "page_end": 5, - "source_file": "arxiv4.pdf" - }, - { - "text": "Figure 3: Attention patterns in BERT (Kovaleva et al., 2019)\n\n\n\nies) insufficient (Warstadt et al., 2019). A given method might also favor one model over another, e.g., RoBERTa trails BERT with one tree extraction method, but leads with another (Htut et al., 2019). The choice of linguistic formalism also matters (Kuznetsov and Gurevych, 2020).\n\nIn view of all that, the alternative is to focus on identifying what BERT actually relies on at inference time. This direction is currently pursued both at the level of architecture blocks (to be discussed in detail in subsection 6.3), and at the level of information encoded in model weights. Amnesic probing (Elazar et al., 2020) aims to specifically remove certain information from the model and see how it changes performance, finding, for example, that language modeling does rely on part-of-speech information.\n\nAnother direction is information-theoretic probing. Pimentel et al. (2020) operationalize probing as estimating mutual information between the learned representation and a given linguistic property, which highlights that the focus should be not on the amount of information contained in a representation, but rather on how easily it can be extracted from it. 
Voita and Titov (2020) quantify the amount of effort needed to extract information from a given representation as minimum description length needed to communicate both the probe size and the amount of data required for it to do well on a task.\n\n## 4 Localizing linguistic knowledge\n\n## 4.1 BERT embeddings\n\nIn studies of BERT, the term \"embedding\" refers to the output of a Transformer layer (typically, the final one). Both conventional static embeddings (Mikolov et al., 2013) and BERT-style embeddings can be viewed in terms of mutual information maximization (Kong et al., 2019), but the latter are contextualized . Every token is represented by a vector dependent on the particular context of occurrence, and contains at least some information about that context (Miaschi and Dell'Orletta, 2020).\n\nSeveral studies reported that distilled contextualized embeddings better encode lexical semantic information (i.e. they are better at traditional word-level tasks such as word similarity). The methods to distill a contextualized representation into static include aggregating the information across multiple contexts (Akbik et al., 2019; Bommasani et al., 2020), encoding \"semantically bleached\" sentences that rely almost exclusively on the meaning of a given word (e.g. \"This is <>\") (May et al., 2019), and even using contextualized embeddings to train static embeddings (Wang et al., 2020d).\n\nBut this is not to say that there is no room for improvement. Ethayarajh (2019) measure how similar the embeddings for identical words are in every layer, reporting that later BERT layers produce more context-specific representations 3 . They also find that BERT embeddings occupy a narrow cone in the vector space, and this effect increases from the earlier to later layers. That is, two random words will on average have a much higher cosine similarity than expected if embeddings were directionally uniform (isotropic) . 
Since isotropy was shown to be beneficial for static word embeddings (Mu and Viswanath, 2018), this might be a fruitful direction to explore for BERT.\n\nSince BERT embeddings are contextualized, an interesting question is to what extent they capture phenomena like polysemy and homonymy. There is indeed evidence that BERT's contextualized embeddings form distinct clusters corresponding to word senses (Wiedemann et al., 2019; Schmidt and Hofmann, 2020), making BERT successful at word sense disambiguation task. However, Mickus et al. (2019) note that the representations of the same word depend on the position of the sentence in which it occurs , likely due to the NSP objective. This is not desirable from the linguistic point of view, and could be a promising", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "## Feature Prediction versus Pixel Reconstruction.\n\nApproaches that predict in pixel space must dedicate significant model capacity and compute to capture all the low-level detail in the visual input. By contrast, approaches that predict in latent space have the flexibility to eliminate irrelevant or unpredictable pixel-level details from the target representation (Vondrick et al., 2016). Predicting in representation space has been shown to lead to versatile representations that perform well across many downstream tasks through linear probing or lowshot adaptation (Assran et al., 2023; Oquab et al., 2023; Assran et al., 2022), while demonstrating an efficiency gain during pretraining compared to pixel level reconstruction (Assran et al., 2023; Baevski et al., 2022b,a). The works of Baevski et al. (2022a,b) additionally show that predicting in representation space results in competitive end-to-end fine-tuning performance in the image, audio and text domains. 
In this work, we extend these findings to the video modality.\n\n## 3 Methodology: Video-JEPA\n\nFigure 2 Joint-Embedding Predictive Architectures are trained to predict the representation of an input y from the representation of another input x . The additional variable z provides the predictor with information about the transformation that computes y from x .\n\n\n\nOur goal is to explore the effectiveness of feature prediction as a stand-alone objective for learning visual representations from video. To that end, we use a joint-embedding predictive architecture (JEPA) (LeCun, 2022); see Figure 2. The main idea behind a JEPA is to learn by predicting the representation of an input y from the representation of another input x . The basic architecture is made up of an encoder, E θ ( · ) , which computes the representation of the inputs, and a predictor, P ϕ ( · ) , which predicts the representation of y from the representation of x , conditioned on a variable z indicating the transformation (or corruption) between x and y . Conditioning on z enables the generation of distinct predictions for various transformations of x .\n\n## 3.1 Training Objective\n\nWe train our visual encoder E θ ( · ) to satisfy the constraint that representations computed from one part of the video, y , should be predictable from representations\n\ncomputed from another part of the video, x . The predictor network P ϕ ( · ) , which maps the representation of x to the representation of y , is trained simultaneously with the encoder, and is provided specification of the spatio-temporal positions of y through the conditioning variable z ← ∆ y .\n\nNaively implementing the objective using the regression\n\nminimize θ,ϕ ∥ P ϕ ( E θ ( x ) , ∆ y ) -E θ ( y ) ∥ 1 ,\n\nwould admit a trivial solution, where the encoder outputs a constant representation, regardless of its input. 
In practice, we use the following modified objective to prevent representation collapse,\n\nminimize θ,ϕ ∥ P ϕ ( E θ ( x ) , ∆ y ) -sg ( E θ ( y )) ∥ 1 , (1)\n\nwhere sg ( · ) denotes a stop-gradient operation, which does not backpropagate through its argument, and E θ ( · ) is an exponential moving average of the network E θ ( · ) . The use of an exponential-moving average feature extractor along with a stop-gradient and a predictor has been used as a collapse prevention strategy for image pretraining (Grill et al., 2020), and studied empirically (Xie et al., 2021) and theoretically (Tian et al., 2021). In fact, the objective in equation (1) is similar to the loss of Assran et al. (2023) used for image pretraining, but we modify it to use an ℓ 1 regression, which we found to be more stable.", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv3.pdf" - }, - { - "text": "Figure 3 V-JEPA. Training operates on a video clip of T frames with spatial resolution H × W , flattened into a sequence of L tokens. (Left to right): We first obtain the input of the x -encoder by dropping tokens from the video clip. The x -encoder then processes the masked video sequence, and outputs an embedding vector for each input token. Next, the outputs of the x -encoder are concatenated with a set of learnable mask tokens containing positional embeddings of the masked spatio-temporal patches. The predictor network processes the combined token sequence, and outputs an embedding vector for each mask token. The outputs of the predictor are then regressed to the prediction targets using an L 1 loss. The prediction targets correspond to the output of the y -encoder.\n\n\n\n## 3.2 Prediction Task: Predicting y from x\n\nThe feature prediction task is based on a masked modeling formulation (He et al., 2021; Tong et al., 2022); i.e., regions x and y from the video are sampled using masking. 
To sample y from a video, we sample several (possibly overlapping) spatially continuous blocks with various aspect ratios and repeat the spatial blocks across the entire temporal dimension of the video; x is taken to be the complement. Masking a large continuous block that covers the full temporal dimension limits information leakage due to the spatial and temporal redundancy of videos, and results in a harder prediction task (Tong et al., 2022).\n\nWe leverage two types of masks: short-range masks, where we take the union of 8 randomly sampled target blocks covering 15% of each frame, and long-range masks, where we take the union of 2 randomly sampled target blocks covering 70% of each frame. In both cases, the aspect ratio for all sampled blocks is randomly chosen in the range (0 . 75 , 1 . 5) . Given that both short-range and long-range masks are produced by sampling many blocks and taking their union, the result is an average masking ratio of ∼ 90% . We refer to our masking strategy as multi-block, and compare it to other possible masking strategies in Section 4.\n\n## 3.3 Network Parameterization\n\nWe use a Vision Transformer (ViT) (Dosovitskiy et al., 2020; Arnab et al., 2021) as our video backbone. To process a video with a transformer network, we split the video clip into a 3D grid of L spatio-temporal patches, where a patch consists of a 16 × 16 pixel block spanning 2 consecutive frames; we refer to these spatio-temporal patches as tokens. This sequence of tokens is then directly processed by the stack of transformer blocks. Inputs x and y correspond to masked regions of a video; we apply the video masks by simply dropping a subset of the tokens. We apply masking at the input of the x -encoder, and at the output of the y -encoder to construct contextualized targets (Baevski et al., 2022b). 
The encoder is parameterized using standard ViT networks, while the predictor is a narrow transformer implemented using 12 blocks with an embedding dimension of 384 . Taking inspiration from masked autoencoders (He et al., 2021), our predictor takes as input the sequence of embeddings produced by the x -encoder as well as a sequence of learnable mask tokens with positional embeddings indicating the spatio-temporal positions of the y tokens. The output of the predictor is an embedding vector for each mask token; see Figure 3 and refer to Appendix B for more details.\n\n## 3.4 Pretraining Data and Evaluation Setup", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv3.pdf" - }, - { - "text": "Figure 3: Cosine similarity between tasks' data. Ninety random samples per task's data are embedded using the multilingual-e5-small model. The embeddings of each task's data sample are averaged. The similarity between each dataset is then calculated using cosine similarity as in (Muennighoff et al., 2022).\n\n", - "page_start": 12, - "page_end": 12, - "source_file": "arxiv4.pdf" - }, - { - "text": "Figure 1: Critical difference diagram representing the significant rank gaps between models. The axis represents the normalized average rank of the models (lower is better). The black bars indicate that the difference in models' rank is not statistically significant, i.e. lower than the critical difference.\n\n\n\nfor similarity, prevail. Nevertheless, we can highlight the excellent performance of a few French models such as sentence-camembert and sentencecroissant and Solon-embeddings .\n\nLastly, we emphasize that closed-source models perform well on this benchmark ( text-embeddings , mistral-embed and voyage ), but we lack information about their characteristics. As more opensource well-performing models get added in the future, we could expect this correlation to decrease. 
Note that the correlation between sequence length and performance could be driven by closed-source models that have generally larger sequence lengths.\n\nQ3: Do monolingual models have multilingual capabilities?\n\nModel performance vs language\n\nFigure 2: Model performance depending on the language of the data they have been trained on.\n\n\n\nWe also studied the capabilities of models on the French language when the language of the training data varies. It is surprising to note the absence of a clear correlation between the language the model is trained on and its performance on French, as shown by the large standard deviation in Figure 2. Furthermore, monolingual models trained exclusively on English such as voyage-code-2 show very good results on French datasets compared to models trained exclusively on French such as flaubert derivatives and distilbert-base-fr-cased (see Table D.1).\n\nThis is explained by the fact that a large part of the selected French models generate embeddings using a pooling strategy. Only a few are sentence transformer models, for which the pooled representation is part of the model and trained with it, leading to higher-quality embeddings. This is endorsed by the excellent results of sentence-camembert-large , a sentence transformer model trained on a French corpus, and confirms the recent findings in terms of model architecture (Gao et al., 2021).\n\nFinally, it should be noted that a significant portion of the French data used to train the selected French models actually comes from English datasets that have been machine translated (May, 2021). Despite the tremendous progress of machine translation, it is well known that the generated data may be unrepresentative of the language used by native speakers and cause a reduced final performance (Barbosa et al., 2021).", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv4.pdf" - }, - { - "text": "tion and, in practical applications, the underlying storage and compute costs. 
We selected models with embedding dimensions ranging from 384 to 4096.\n\n - · Sequence length: Being the number of tokens that a model can consider as input, the sequence length is important as it impacts the unit that can be encoded (sentence, paragraph, document). However, encoding overly long sequences requires efficiently storing the relevant information into a single vector. Among the selected methods, this criterion varies from 128 tokens to 32768.\n - · Model parameters: Often correlated with the first two characteristics, parameter count is important for practical applications as it affects usability on resource-efficient machines. The selected models have a number of parameters ranging from 20 million ( ∼ 100Mb in float32) to 7 billion ( ∼ 28Gb).\n - · Language: This is a major feature of language models. Some are monolingual, and others are multilingual. Language is usually acquired during pre-training, but sometimes, models familiarize themselves with new languages at tuning. For the benchmark, we selected French models, as well as bilingual or multilingual models. We also included a few that claimed to be English (e.g. all-MiniLM-L12-v2 9 ).\n - · Model types: There are several strategies to generate text embeddings such as aggregating (e.g. with average pooling) token-level embeddings from raw pre-trained models, or adding an extra contrastive learning step on a sentence similarity task with, optionally, additional transformation layers. We included models of all types in our benchmark, summarizing the model type information under two relevant criteria: finetuned vs pretrained, and trained for sentence similarity or not.\n\nThe selected models are visible in Figure 1, and all of their characteristics are summarized in appendix Table 7. 
Overall, the selection includes the best models from the sentence transformers framework (Reimers and Gurevych, 2019), the most popular French NLP models (Le et al., 2020; Martin et al., 2019), their variants optimized for semantic similarity (Reimers and Gurevych, 2019), numerous multilingual models performing at the top on MTEB (e.g. E5 and T5 ), Bloom variants (Zhang et al., 2023), models based on very recent powerful LLMs (Wang et al., 2023; Faysse et al., 2024) and finally the proprietary models of OpenAI, Cohere and Voyage. Certain models were selected in multiple sizes to isolate the dimensionality effect effectively. We provide information on the models' licenses as reported in the Hugging Face hub 10 . However, we encourage readers to conduct further research before utilizing a model.\n\n## 3.3 Evaluation\n\nFor the sake of homogeneity, models are evaluated using the same metrics per task as in MTEB (Muennighoff et al., 2022): Classification (Accuracy), Bitext mining (F1 score), Pair classification (AP), Clustering (V measure), Reranking (MAP), Retrieval (NDCG@10), Summarization and STS (Spearman correlation based on cosine similarity). BitextMining tasks are excluded from the average performance scores and therefore the figures, as this task evaluates 2 languages instead of one, and this benchmark focuses only on one language (French). 
We present the results for both DiaBlaBitextMining and FloresBitextMining in Table 12.\n\nUsing the overall benchmark results, our goal will be to answer the following research questions: Q1: Is a model outstanding on all tasks?\n\nAs we are trying to find out whether one embedding model is statistically better than the others for French, the objective will also be to analyze the performance of the models by tasks to facilitate model choice for specific applications.\n\nQ2: Are there any links between the model characteristics and performance?", - "page_start": 4, - "page_end": 4, - "source_file": "arxiv4.pdf" - }, - { - "text": "We now outline the routing methods considered in this work. See Ong et al. [47] for their full implementation details.\n\nSimilarity-weighted ranking: The first method is based on the Bradley-Terry (BT) model [17]. For a given user query, this model derives a function to compute the probability of the weak model being preferred over the strong model. The probability-function expressions all share parameters, which are optimized to minimize the sum of cross-entropy losses over the training-set queries, where each element in the sum is weighted by the respective query's similarity with the user's query (computed as embeddings cosine similarity, with the embedding derived using OpenAI's text-embedding-3small [6]). We denote this method as R SW .\n\nMatrix factorization: The second method is based on matrix factorization. The training queries are used to train a bilinear function mapping a model's embedding and a query's embedding to a score corresponding to how well the model performs on the query. Routing is done by computing the score of the input query for each model, and choosing the highest-scoring model. 
We denote this method as R MF .\n\nBERT classifier: The third method involves fine-tuning a classifier, based on the BERT-base architecture [26], to predict which of the two models produces a better response for the given query or whether they do equally well (a tie). The routing decision is based on the probability of the weak model providing a better response versus the strong model or the tie. We denote this method as R CLS .\n\nLLM classifier: The last method is based on asking an LLM to provide a score in the range 1 -5 of how an AI expert would struggle to respond to a given query based on the query's complexity. For this, Ong et al. fine-tuned a Llama-3-8B model [4] using their reference set of queries and corresponding scores. We denote this method as R LLM .\n\nUnderlying LLMs. In [47], Ong et al. trained the routers with GPT-4-1106-preview [14] as the strong model and Mixtral 8x7B [39] as the weak model. They report successful generalization between the underlying LLMs, stating that their routers trained for a particular strong-weak LLM pair can be used with other strong-weak LLM pairs.\n\nTo allow our evaluation to scale, we use as the strong model M s the open-sourced Llama-3.1-8B [3] and as M w the 4-bit quantized version of Mixtral 8x7B (for efficiency reasons). This reduced the cost of our experiments by avoiding expensive GPT API calls and lowering the computational costs of Mixtral. 
Unless mentioned otherwise, all of our results", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv1.pdf" - } - ] - }, - { - "references": { - "source_file": "EN-Draft FWC for services 0142.pdf", - "query": "What is the maximum amount covered by the FWC of the europeean chemical agency ?", - "target_page": 6, - "target_passage": "The maximum amount covering all purchases under this FWC, including all renewals and reimbursement of expenses is EUR 1 000 000 (one million)", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## I.4.2. Period of provision of the services\n\nThe period for the provision of the services starts to run from the date on which the specific contract is signed by the last party.\n\n## I.4.3. Implementation of FWC in cascade\n\nThe FWC is implemented as follows: the contracting authority orders services by sending a request for offer for a specific contract to the contractor who is ranked first in the cascade.\n\nWithin 5 working days (unless otherwise stated in the request for offer), the contractor must either:\n\n - (a) send the specific tender back to the contracting authority; or\n - (b) send an explanation of why it cannot accept the order.\n\nIf the contractor does not accept the order or fails to observe the deadline or to submit an acceptable offer for the Agency, or if it is in a situation of conflicting interests that may negatively affect the performance of the specific contract (see Article II.7), the contracting authority may place the order with the next contractor on the cascade.\n\nIf the contractor repeatedly refuses to accept requests for offer or repeatedly fails to send them back on time, the contractor may be considered in breach of its obligations under this FWC as set out in Article II.18.1 (c).\n\nWithin a maximum of 5 working days of a specific contract or order form being sent by the Agency to the contractor, the Agency shall receive it back, duly signed and dated. 
The period allowed for the execution of the tasks shall start to run on the date of signature of the specific contract or order form by both parties.\n\n## I.5. Prices\n\n## I.5.1. Maximum amount of the FWC and maximum prices\n\nThe maximum amount covering all purchases under this FWC, including all renewals and reimbursement of expenses is EUR 1 000 000 (one million). However, this does not bind the contracting authority to purchase for the maximum amount.\n\nThe maximum unit prices of the services are:\n\nSenior experts:\n\n[ ] EUR per man-day\n\nExperts:\n\n[ ] EUR per man-day\n\n## I.5.2. Price revision index\n\nPrice revision is determined by the formula set out in Article II.20 and using the trend in the harmonised indices of consumer prices (HICP) 'Euro area (19 countries)' published at http://ec.europa.eu/eurostat/web/hicp/data/database under HICP (2015 = 100) - monthly data (index) (prc\\_hicp\\_midx).]\n\n## I.5.3. Reimbursement of expenses\n\nIn addition to the maximum price specified in each specific contract, if applicable, the contracting authority shall reimburse the following in accordance with Article II.22:", - "page_start": 5, - "page_end": 5, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "## I. Special Conditions\n\n## I.1. 
Order of priority of provisions\n\nIf there is any conflict between different provisions in this FWC, the following rules must be applied:\n\n - (a) The provisions set out in the special conditions take precedence over those in the other parts of the FWC.\n - (b) The provisions set out in the general conditions take precedence over those in the order form and specific contract (Annex III)\n - (c) The provisions set out in the order form and specific contract (Annex III) take precedence over those in the other annexes.\n - (d) The provisions set out in the tender specifications (Annex I) take precedence over those in the tender (Annex II).\n - (e) The provisions set out in the FWC take precedence over those in the specific contracts.\n - (f) The provisions set out in the specific contracts take precedence over those in the requests for services.\n\nAny reference to specific contracts applies also to order forms.\n\n## I.2. Subject matter\n\nThe subject matter of the FWC is scientific support to ECHA for work on restrictions, dose-response functions, Annex XIV, POPs and dossier evaluation.\n\n## I.3. Entry into force and duration of the FWC\n\n - I.3.1 The FWC enters into force on the date on which the last party signs it.\n - I.3.2 The implementation of the FWC cannot start before its entry into force.\n - I.3.3 The FWC is concluded for a period of 24 months with effect from the date of its entry into force.\n - I.3.4 The parties must sign any specific contract before the FWC expires.\n\nThe FWC continues to apply to such specific contracts after its expiry. The services relating to such specific contracts must be performed no later than six months after the expiry of the FWC.\n\n## I.3.5 Renewal of the FWC\n\nThe FWC is renewed automatically 2 times for 12 months each, unless one of the parties receives formal notification to the contrary at least three months before the end of the ongoing duration. 
Renewal does not change or postpone any existing obligations.\n\n## I.4. Appointment of the contractor and implementation of the FWC\n\n## I.4.1. Appointment of the contractor\n\nThe contracting authority appoints the contractor for a multiple FWC in cascade in [ complete ] position.", - "page_start": 4, - "page_end": 4, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "Annex IV - Daily subsistence allowances and accommodation flat rates for Finland\n\nAnnex V -\n\n- (a) Declaration on list of pre-exisiting rights\n- (b) Statement of the contractor concerning rights to delivered results and\n- (c) Statement of creator (or right holder)\n\nwhich form an integral part of this framework contract ('the FWC').\n\n## This FWC sets out:\n\n- 1. the procedure by which the contracting authority may order services from the contractor;\n- 2. the provisions that apply to any specific contract which the contracting authority and the contractor may conclude under this FWC; and\n- 3. the obligations of the parties during and after the duration of this FWC.\n\nAll documents issued by the contractor (end-user agreements, general terms and conditions, etc.) except its tender are held inapplicable, unless explicitly mentioned in the special conditions of this FWC. In all circumstances, in the event of contradiction between this FWC and documents issued by the contractor, this FWC prevails, regardless of any provision to the contrary in the contractor's documents.", - "page_start": 1, - "page_end": 1, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "European Agency for Safety and Health at Work - EU-OSHA", - "page_start": 2, - "page_end": 2, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## II.18.1. 
Grounds for termination by the contracting authority\n\nThe contracting authority may terminate the FWC or any on-going specific contract in the following circumstances:\n\n - (a) if provision of the services under an on-going specific contract has not actually started within 15 days of the scheduled date and the contracting authority considers that the new date proposed, if any, is unacceptable, taking into account Article II.11.2;\n - (b) if the contractor is unable, through its own fault, to obtain any permit or licence required for implementation of the FWC ;\n - (c) if the contractor does not implement the FWC or perform the specific contract in accordance with the tender specifications or request for service or is in breach of another substantial contractual obligation or repeatedly refuses to sign specific contracts. Termination of three or more specific contracts in these circumstances also constitutes grounds for termination of the FWC;\n - (d) if the contractor or any person that assumes unlimited liability for the debts of the contractor is in one of the situations provided for in points (a) and (b) of Article 136(1) of the Financial Regulation 6 ;\n - (e) if the contractor or any related person is in one of the situations provided for in points (c) to (h) of Article 136(1) or to Article 136(2) of the Financial Regulation;\n - (f) if the procedure for awarding the FWC or the implementation of the FWC prove to have been subject to irregularities , fraud or breach of obligations ;", - "page_start": 29, - "page_end": 29, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "Contract number: ECHA/2019/355\n\n## HAVE AGREED\n\n## I.1.1.1.1. 
Article 1 Subject matter\n\n - 1.1 This specific contract implements framework contract (FWC) No ECHA/2019/355 signed by the parties on [ complete date ] .\n - 1.2 In accordance with the provisions set out in the FWC and in this specific contract and [its][their] annex[es], which form an integral part of it, the contractor must provide the [following services:] [services specified in Annex [ complete ] . ]\n - I.1.1.1.2. Article 2 Entry into force and duration\n - 2.1 This specific contract enters into force on the date on which the last party signs it.\n - 2.2 The provision of the services starts from the date of entry into force of this specific contract.\n - 2.3 The provision of the services must not exceed [ complete ] [ days] [months ] . The parties may extend the duration by written agreement before it elapses and before expiry of the FWC.\n\n## I.1.1.1.3. Article 3 Price\n\n - 3.1 The price payable under this specific contract excluding reimbursement of expenses is EUR [ amount in figures and in words ].\n\n[The maximum amount covering all services to be provided under this specific contract including reimbursement of expenses and excluding price revision is EUR [ amount in figures and in words ].]\n\n - 3.2 [Reimbursement of expenses is not applicable to this specific contract.] [Within the maximum amount, up to EUR [ amount in figures and in words ] is earmarked for expenses, which must be reimbursed in accordance with the FWC].\n\n***\n\n## I.1.1.1.4. 
Article 4 communication details\n\nFor the purpose of this specific contract, communications must be sent to the following addresses:\n\nContracting authority:\n\nEuropean Chemicals Agency\n\n[Directorate [ complete ]]\n\n[Unit [ complete ]]\n\n[ Postcode and city ]\n\nE-mail: [ insert functional mailbox ]", - "page_start": 43, - "page_end": 43, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "The European Agency for Safety and Health at Work (EU-OSHA) contributes to making Europe a safer, healthier and more productive place to work. The Agency researches, develops and distributes reliable, balanced and impartial safety and health information and organises panEuropean awareness-raising campaigns. Set up by the European Union in 1994 and based in Bilbao, Spain, the Agency brings together representatives from the European Commission, Member State governments and employers' and workers' organisations, as well as leading experts in each of the EU Member States and beyond.\n\nEuropean Agency for Safety and Health at Work (EU-OSHA)\n\nSantiago de Compostela 12, 5th floor 48003 Bilbao Spain Tel: (+34) 944 358 400 Email: information@osha.europa.eu\n\nhttps://osha.europa.eu\n\n", - "page_start": 163, - "page_end": 163, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "less environmentally critical processes (see for example, the principles of 'green engineering', like prevention instead of treatment of waste 288 ).\n\nChemical technologies have ousted traditional materials and processes. The United Nations' (UNEP) 'Global Chemical Outlook' 289 documents a strong growth of chemical production between 1970 and 2010. The value of the global chemical production grew from US$171 billion in 1970, to approximately US$ 5.7 trillion in 2019, roughly 33 times more. 290 The EU had a share of $1.3 trillion or about 20% of the global value. 
In less than two decades between 2000 and 2017, the capacity doubled and grew from 1,186 million tons to 2,276 million tons. 291,292\n\nThe reasons for this strong growth are: a) the replacement of traditional materials (wood, stone, iron and other metals, paper, natural fibres) by chemically based products (foremost plastics and multimaterial products); b) the replacement of traditional technologies by chemical processes (e.g. gluing instead of screwing of connections in metal, two-component paints); c) the development of new products (e.g. electronic devices, new types of batteries, nano); and d) new applications (e.g. specific fertilisers and pesticides).\n\nApproximately 300 million tons of synthetic chemicals were consumed in the EU in 2019, 223 million tons, or 74%, were regarded as hazardous to health.\n\nTable 29: Production and consumption of chemicals by hazard class in the EU in 2019 - Eurostat 293\n\nAccording to the detailed register data of the Swedish Chemicals Agency, 10 million tonnes of synthetic chemicals were used in Sweden in 2019 that were classified as hazardous to health and the environment (not counting petrol). That equals approximately 1 ton per citizen of such chemicals. 294\n\nThe ESENER 2019 survey provides information about sectors that reported a particularly high prevalence of dangerous substances . The percentage of enterprises reporting handling or exposure to chemicals are: 50% in 'Manufacturing', 49% in 'Construction, waste management, and water and electricity supply', and 47% in 'Human health and social work activities'. 295\n\nThe prevention of risks from the use of chemicals at workplaces is done according to extensive regulatory frameworks. The most relevant pieces of legislation at the EU level are the OSH Framework Directive, the Chemical Agents Directive, and the Carcinogens and Mutagens Directive. 
Legislation in other policy areas contributes to the reduction of risks from dangerous substances in workplaces, such as EU legislation on chemical substances and mixtures (CLP, the regulation on classification, labelling and packaging of chemicals, its predecessor directive was already issued in 1967; REACH the regulation on Registration, Evaluation, Authorisation and Restriction of Chemicals from 2007; and also specific EU and international legislation on specific aspects such as chemicals in waste, storage and transport, in specific products like batteries and cars, in specific sectors like agriculture, in natural environments like in water and soil, and in consumer products like food, detergents and cosmetics).", - "page_start": 106, - "page_end": 106, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## I.10.3. Provision of list of pre-existing rights and documentary evidence\n\nThe contractor must provide the contracting authority with a list of pre-existing rights as set out in Article II.13.4 together with the invoice for payment of the balance at the latest.\n\n## I.11. Termination by either party 2\n\nEither party may terminate the FWC and/or the FWC and specific contracts by sending formal notification to the other party with three months written notice.\n\nIf the FWC or a specific contract is terminated:\n\n - a) neither party is entitled to compensation;\n - b) the contractor is entitled to payment only for the services provided before termination takes effect.\n\nThe second, third and fourth paragraphs of Article II.18.4 apply.\n\n## I.12. Applicable law and settlement of disputes\n\n - I.12.1 The FWC is governed by Union law, complemented, where necessary, by the law of Finland.\n - I.12.2 The courts of Finland have exclusive jurisdiction over any dispute regarding the interpretation, application or validity of the FWC.\n\n## I.13. Interinstitutional FWC\n\nNot applicable\n\n## I.14. 
Service provided on the premises of the contracting authority\n\nNot applicable.\n\n## I.15. Other special conditions\n\nElectronic documents exchange\n\nIt is intended that the documents exchange (e.g. invoices, deliverables) between the Agency and the Contractor will have to be carried out via electronic means.\n\nAt the request of the Agency, the use of such electronic applications will become mandatory, upon mutual agreement, during the performance of the contract, at no additional cost for the Agency.", - "page_start": 10, - "page_end": 10, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "- 429 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A31994R2062\n - 430 Communication from the Commission - Adapting to change in work and society: a new Community strategy on health and safety at work 2002-2006 /COM/2002/0118 final\n - 431 European Commission Brussels, 31.5.2013 SWD (2013) 202 final COMMISSION STAFF WORKING DOCUMENT Evaluation of the European Strategy 2007-2012 on health and safety at work\n - 432 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Improving quality and productivity at work: Community strategy 2007-2012 on health and safety at work {SEC(2007) 214} {SEC(2007) 215} {SEC(2007) 216}\n - 433 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on an EU Strategic Framework on Health and Safety at Work 2014-2020, Brussels, 6.6.2014 COM (2014) 332 final\n - 434 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: EU strategic framework on health and safety at work 20212027: Occupational safety and health in a changing world of work, {SWD(2021) 148 final} - {SWD(2021) 149 final, Brussels, 28.6.2021\n - 435 European Agency 
for Safety and Health at Work, 2019: National Strategies in the field of Occupational Safety and Health in the EU, 2019, https://osha.europa.eu/en/safety-and-health-legislation/osh-strategies", - "page_start": 157, - "page_end": 157, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "EN-Draft FWC for services 0142.pdf", - "query": "How can I get compensation if the european chemical agency terminates a contract we have ?", - "target_page": 11, - "target_passage": "If the FWC or a specific contract is terminated: a) neither party is entitled to compensation", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## I.10.3. Provision of list of pre-existing rights and documentary evidence\n\nThe contractor must provide the contracting authority with a list of pre-existing rights as set out in Article II.13.4 together with the invoice for payment of the balance at the latest.\n\n## I.11. Termination by either party 2\n\nEither party may terminate the FWC and/or the FWC and specific contracts by sending formal notification to the other party with three months written notice.\n\nIf the FWC or a specific contract is terminated:\n\n - a) neither party is entitled to compensation;\n - b) the contractor is entitled to payment only for the services provided before termination takes effect.\n\nThe second, third and fourth paragraphs of Article II.18.4 apply.\n\n## I.12. Applicable law and settlement of disputes\n\n - I.12.1 The FWC is governed by Union law, complemented, where necessary, by the law of Finland.\n - I.12.2 The courts of Finland have exclusive jurisdiction over any dispute regarding the interpretation, application or validity of the FWC.\n\n## I.13. Interinstitutional FWC\n\nNot applicable\n\n## I.14. Service provided on the premises of the contracting authority\n\nNot applicable.\n\n## I.15. 
Other special conditions\n\nElectronic documents exchange\n\nIt is intended that the documents exchange (e.g. invoices, deliverables) between the Agency and the Contractor will have to be carried out via electronic means.\n\nAt the request of the Agency, the use of such electronic applications will become mandatory, upon mutual agreement, during the performance of the contract, at no additional cost for the Agency.", - "page_start": 10, - "page_end": 10, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "quality or continuity of the services. The parties may agree to draw up a transition plan detailing the contractor's assistance unless such plan is already detailed in other contractual documents or in the tender specifications. The contractor must provide such assistance at no additional cost, except if it can demonstrate that it requires substantial additional resources or means, in which case it must provide an estimate of the costs involved and the parties will negotiate an arrangement in good faith.\n\n## II.18.4. Effects of termination\n\nThe contractor is liable for damage incurred by the contracting authority as a result of the termination of the FWC or a specific contract, including the additional cost of appointing and contracting another contractor to provide or complete the services, except if the damage is a result of a termination in accordance with Article II.18.1(j), (k) or (l) or Article II.18.2. 
The contracting authority may claim compensation for such damage.\n\nThe contractor is not entitled to compensation for any loss resulting from the termination of the FWC or a specific contract, including loss of anticipated profits, unless the loss was caused by the situation specified in Article II.18.2.\n\nThe contractor must take all appropriate measures to minimise costs, prevent damage and cancel or reduce its commitments.\n\nWithin 60 days of the date of termination, the contractor must submit any report, deliverable or result and any invoice required for services that were provided before the date of termination.\n\nIn the case of joint tenders, the contracting authority may terminate the FWC or a specific contract with each member of the group separately on the basis of points (d), (e) or (g) of Article II.18.1, under the conditions set out in Article II.11.2\n\n## II.19. Invoices, value added tax and e-invoicing\n\n## II.19.1. Invoices and value added tax\n\nInvoices must contain the contractor's (or leader's in the case of a joint tender) identification data, the amount, the currency and the date, as well as the FWC reference and reference to the specific contract.\n\nInvoices must indicate the place of taxation of the contractor (or leader in the case of a joint tender) for value added tax (VAT) purposes and must specify separately amounts not including VAT and amounts including VAT.\n\nThe contracting authority is exempt from all taxes and duties, including VAT, in accordance with Articles 3 and 4 of the Protocol 7 of the Treaty on the Functioning of the European Union on the privileges and immunities of the European Union.\n\nThe contractor (or leader in the case of a joint tender) must complete the necessary formalities with the relevant authorities to ensure that the supplies and services required for implementation of the FWC are exempt from taxes and duties, including VAT.\n\n## II.19.2. 
E-invoicing\n\nIf provided for in the special conditions, the contractor (or leader in the case of a joint tender) submits invoices in electronic format if the conditions regarding electronic signature specified by Directive 2006/112/EC on VAT are fulfilled, i.e. using a qualified", - "page_start": 31, - "page_end": 31, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "Contract number: ECHA/2019/355\n\n## HAVE AGREED\n\n## I.1.1.1.1. Article 1 Subject matter\n\n - 1.1 This specific contract implements framework contract (FWC) No ECHA/2019/355 signed by the parties on [ complete date ] .\n - 1.2 In accordance with the provisions set out in the FWC and in this specific contract and [its][their] annex[es], which form an integral part of it, the contractor must provide the [following services:] [services specified in Annex [ complete ] . ]\n - I.1.1.1.2. Article 2 Entry into force and duration\n - 2.1 This specific contract enters into force on the date on which the last party signs it.\n - 2.2 The provision of the services starts from the date of entry into force of this specific contract.\n - 2.3 The provision of the services must not exceed [ complete ] [ days] [months ] . The parties may extend the duration by written agreement before it elapses and before expiry of the FWC.\n\n## I.1.1.1.3. Article 3 Price\n\n - 3.1 The price payable under this specific contract excluding reimbursement of expenses is EUR [ amount in figures and in words ].\n\n[The maximum amount covering all services to be provided under this specific contract including reimbursement of expenses and excluding price revision is EUR [ amount in figures and in words ].]\n\n - 3.2 [Reimbursement of expenses is not applicable to this specific contract.] [Within the maximum amount, up to EUR [ amount in figures and in words ] is earmarked for expenses, which must be reimbursed in accordance with the FWC].\n\n***\n\n## I.1.1.1.4. 
Article 4 communication details\n\nFor the purpose of this specific contract, communications must be sent to the following addresses:\n\nContracting authority:\n\nEuropean Chemicals Agency\n\n[Directorate [ complete ]]\n\n[Unit [ complete ]]\n\n[ Postcode and city ]\n\nE-mail: [ insert functional mailbox ]", - "page_start": 43, - "page_end": 43, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "## I.4.2. Period of provision of the services\n\nThe period for the provision of the services starts to run from the date on which the specific contract is signed by the last party.\n\n## I.4.3. Implementation of FWC in cascade\n\nThe FWC is implemented as follows: the contracting authority orders services by sending a request for offer for a specific contract to the contractor who is ranked first in the cascade.\n\nWithin 5 working days (unless otherwise stated in the request for offer), the contractor must either:\n\n - (a) send the specific tender back to the contracting authority; or\n - (b) send an explanation of why it cannot accept the order.\n\nIf the contractor does not accept the order or fails to observe the deadline or to submit an acceptable offer for the Agency, or if it is in a situation of conflicting interests that may negatively affect the performance of the specific contract (see Article II.7), the contracting authority may place the order with the next contractor on the cascade.\n\nIf the contractor repeatedly refuses to accept requests for offer or repeatedly fails to send them back on time, the contractor may be considered in breach of its obligations under this FWC as set out in Article II.18.1 (c).\n\nWithin a maximum of 5 working days of a specific contract or order form being sent by the Agency to the contractor, the Agency shall receive it back, duly signed and dated. 
The period allowed for the execution of the tasks shall start to run on the date of signature of the specific contract or order form by both parties.\n\n## I.5. Prices\n\n## I.5.1. Maximum amount of the FWC and maximum prices\n\nThe maximum amount covering all purchases under this FWC, including all renewals and reimbursement of expenses is EUR 1 000 000 (one million). However, this does not bind the contracting authority to purchase for the maximum amount.\n\nThe maximum unit prices of the services are:\n\nSenior experts:\n\n[ ] EUR per man-day\n\nExperts:\n\n[ ] EUR per man-day\n\n## I.5.2. Price revision index\n\nPrice revision is determined by the formula set out in Article II.20 and using the trend in the harmonised indices of consumer prices (HICP) 'Euro area (19 countries)' published at http://ec.europa.eu/eurostat/web/hicp/data/database under HICP (2015 = 100) - monthly data (index) (prc\\_hicp\\_midx).]\n\n## I.5.3. Reimbursement of expenses\n\nIn addition to the maximum price specified in each specific contract, if applicable, the contracting authority shall reimburse the following in accordance with Article II.22:", - "page_start": 5, - "page_end": 5, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "- 4. 
-(1) An employer with more than 50 employees who is the employer of any person who is required to undertake workforce tests or has responsibility for any agency worker who is required to undertake workforce tests, must take reasonable steps to facilitate the taking of those tests by that person or agency worker in accordance with these Regulations.\n - (2) In the discharge of the duty under sub-paragraph (1), an employer must have regard to any guidance issued by the Secretary of State for the purposes of this paragraph.\n - (3) In sub-paragraph (1) an employer has responsibility for an agency worker if-\n - (a) the agency worker is supplied or to be supplied by a person (an 'agent') to the employer under a contract or other arrangements made between the agent and the employer; and\n - (b) the agency worker is not-\n - (i) a worker because of the absence of a worker's contract between the agency worker and the agent or the employer, or", - "page_start": 67, - "page_end": 67, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- 3. The contracting authority may suspend the time limit for payment specified in point 2 in accordance with Article II.21.7. Once the suspension is lifted, the contracting authority shall give its approval and pay within the remainder of the time-limit indicated in point 2 unless it rejects partially or fully the submitted documents.\n\n## I.6.4. Performance guarantee\n\nPerformance guarantee is not applicable to this FWC.\n\n## I.6.5. Retention money guarantee\n\nRetention money guarantee is not applicable to this FWC.\n\n## I.7. Bank account\n\nPayments must be made to the contractor's (or leader's in the case of a joint tender) bank account denominated in euro, identified as follows:\n\nName of bank:\n\nFull address of branch:\n\nExact denomination of account holder:\n\nFull account number including bank codes:\n\n[IBAN 1 code:]\n\n## I.8. 
Communication details\n\nFor the purpose of this FWC, communications must be sent to the following addresses:\n\nContracting authority:\n\nDirectorate and Unit D3, Risk Management I\n\nEuropean Chemicals Agency Telakkakatu 6 00150 Helsinki Finland E-mail: [insert functional mailbox]\n\nContractor (or leader in the case of a joint tender):\n\n[ Full name ] [ Function ] [ Company name ] [ Full official address ] E-mail: [ complete ]\n\nBy derogation from this Article, different contact details for the contracting authority or the contractor may be provided in specific contracts.", - "page_start": 7, - "page_end": 7, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "- (ii) a party to a contract under which the agency worker undertakes to do the work for another party to a contract whose status is, by virtue of the contract, that of a client or customer of any profession or business undertaking carried on by the agency worker.", - "page_start": 67, - "page_end": 67, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## II.18.1. Grounds for termination by the contracting authority\n\nThe contracting authority may terminate the FWC or any on-going specific contract in the following circumstances:\n\n - (a) if provision of the services under an on-going specific contract has not actually started within 15 days of the scheduled date and the contracting authority considers that the new date proposed, if any, is unacceptable, taking into account Article II.11.2;\n - (b) if the contractor is unable, through its own fault, to obtain any permit or licence required for implementation of the FWC ;\n - (c) if the contractor does not implement the FWC or perform the specific contract in accordance with the tender specifications or request for service or is in breach of another substantial contractual obligation or repeatedly refuses to sign specific contracts. 
Termination of three or more specific contracts in these circumstances also constitutes grounds for termination of the FWC;\n - (d) if the contractor or any person that assumes unlimited liability for the debts of the contractor is in one of the situations provided for in points (a) and (b) of Article 136(1) of the Financial Regulation 6 ;\n - (e) if the contractor or any related person is in one of the situations provided for in points (c) to (h) of Article 136(1) or to Article 136(2) of the Financial Regulation;\n - (f) if the procedure for awarding the FWC or the implementation of the FWC prove to have been subject to irregularities , fraud or breach of obligations ;", - "page_start": 29, - "page_end": 29, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "- II.4.4 The contractor must obtain any permit or licence required in the State where the services are to be provided.\n - II.4.5 All periods specified in the FWC are calculated in calendar days, unless otherwise specified.\n - II.4.6 The contractor must not present itself as a representative of the contracting authority and must inform third parties that it is not part of the European public service.\n - II.4.7 The contractor is responsible for the personnel who carry out the services and exercises its authority over its personnel without interference by the contracting authority. 
The contractor must inform its personnel that:\n - (a) they may not accept any direct instructions from the contracting authority; and\n - (b) their participation in providing the services does not result in any employment or contractual relationship with the contracting authority.\n - II.4.8 The contractor must ensure that the personnel implementing the FWC and any future replacement personnel possess the professional qualifications and experience required to provide the services, as the case may be on the basis of the selection criteria set out in the tender specifications.\n - II.4.9 At the contracting authority's reasoned request, the contractor must replace any member of personnel who:\n - (a) does not have the expertise required to provide the services; or\n - (b) has caused disruption at the premises of the contracting authority.\n\nThe contractor bears the cost of replacing its personnel and is responsible for any delay in providing the services resulting from the replacement of personnel .\n\n - II.4.10 The contractor must record and report to the contracting authority any problem that affects its ability to provide the services. The report must describe the problem, state when it started and what action the contractor is taking to resolve it.\n\n## II.5. Communication between the parties\n\n## II.5.1. 
Form and means of communication\n\nAny communication of information, notices or documents under the FWC must:\n\n - (a) be made in writing in paper or electronic format in the language of the contract;\n - (b) bear the FWC number and, if applicable, the specific contract number;\n - (c) be made using the relevant communication details set out in Article I.8; and\n - (d) be sent by mail, email or, for the documents specified in the special conditions, via e-PRIOR .\n\nIf a party requests written confirmation of an e-mail within a reasonable time, the other party must provide an original signed paper version of the communication as soon as possible.", - "page_start": 15, - "page_end": 15, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "## II.16. Reduction in price\n\n## II.16.1. Quality standards\n\nIf the contractor fails to provide the service in accordance with the FWC or a specific contract ('unperformed obligations') or if it fails to provide the service in accordance with the expected quality levels specified in the tender specifications ('low quality delivery'), the contracting authority may reduce or recover payments proportionally to the seriousness of the unperformed obligations or low quality delivery. This includes in particular cases where the contracting authority cannot approve a result , report or deliverable as defined in Article I.6 after the contractor has submitted the required additional information, correction or new version.\n\nA reduction in price may be imposed together with liquidated damages under the conditions of Article II.15.\n\n## II.16.2. Procedure\n\nThe contracting authority must formally notify the contractor of its intention to reduce payment and the corresponding calculated amount.\n\nThe contractor has 30 days following the date of receipt to submit observations. 
Failing that, the decision becomes enforceable the day after the time limit for submitting observations has elapsed.\n\nIf the contractor submits observations, the contracting authority, taking into account the relevant observations, must notify the contractor:\n\n - (a) of the withdrawal of its intention to reduce payment; or\n - (b) of its final decision to reduce payment and the corresponding amount,.\n\n## II.16.3. Claims and liability\n\nAny reduction in price does not affect the contractor's actual or potential liability or the contracting authority's rights under Article II.18.\n\n## II.17. Suspension of the implementation of the FWC\n\n## II.17.1. Suspension by the contractor\n\nIf the contractor is affected by force majeure , it may suspend the provision of the services under a specific contract.\n\nThe contractor must immediately notify the contracting authority of the suspension. The notification must include a description of the force majeure and state when the contractor expects to resume the provision of services.\n\nThe contractor must notify the contracting authority as soon as it is able to resume performance of the specific contract , unless the contracting authority has already terminated the FWC or the specific contract.\n\n## II.17.2. 
Suspension by the contracting authority\n\nThe contracting authority may suspend the implementation of the FWC or performance of", - "page_start": 28, - "page_end": 28, - "source_file": "EN-Draft FWC for services 0142.pdf" - } - ] - }, - { - "references": { - "source_file": "EN-Draft FWC for services 0142.pdf", - "query": "According to the european chemical agency contracts, what is considers a grave professional misconduct ?", - "target_page": 14, - "target_passage": "'Grave professional misconduct': a violation of applicable laws or regulations or ethical standards of the profession to which a contractor or a related person belongs, including any conduct leading to sexual or other exploitation or abuse, or any wrongful conduct of the contractor or a related person which has an impact on its professional credibility where such conduct denotes wrongful intent or gross negligence.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "European Agency for Safety and Health at Work - EU-OSHA", - "page_start": 2, - "page_end": 2, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "less environmentally critical processes (see for example, the principles of 'green engineering', like prevention instead of treatment of waste 288 ).\n\nChemical technologies have ousted traditional materials and processes. The United Nations' (UNEP) 'Global Chemical Outlook' 289 documents a strong growth of chemical production between 1970 and 2010. The value of the global chemical production grew from US$171 billion in 1970, to approximately US$ 5.7 trillion in 2019, roughly 33 times more. 290 The EU had a share of $1.3 trillion or about 20% of the global value. In less than two decades between 2000 and 2017, the capacity doubled and grew from 1,186 million tons to 2,276 million tons. 
291,292\n\nThe reasons for this strong growth are: a) the replacement of traditional materials (wood, stone, iron and other metals, paper, natural fibres) by chemically based products (foremost plastics and multimaterial products); b) the replacement of traditional technologies by chemical processes (e.g. gluing instead of screwing of connections in metal, two-component paints); c) the development of new products (e.g. electronic devices, new types of batteries, nano); and d) new applications (e.g. specific fertilisers and pesticides).\n\nApproximately 300 million tons of synthetic chemicals were consumed in the EU in 2019, 223 million tons, or 74%, were regarded as hazardous to health.\n\nTable 29: Production and consumption of chemicals by hazard class in the EU in 2019 - Eurostat 293\n\nAccording to the detailed register data of the Swedish Chemicals Agency, 10 million tonnes of synthetic chemicals were used in Sweden in 2019 that were classified as hazardous to health and the environment (not counting petrol). That equals approximately 1 ton per citizen of such chemicals. 294\n\nThe ESENER 2019 survey provides information about sectors that reported a particularly high prevalence of dangerous substances . The percentage of enterprises reporting handling or exposure to chemicals are: 50% in 'Manufacturing', 49% in 'Construction, waste management, and water and electricity supply', and 47% in 'Human health and social work activities'. 295\n\nThe prevention of risks from the use of chemicals at workplaces is done according to extensive regulatory frameworks. The most relevant pieces of legislation at the EU level are the OSH Framework Directive, the Chemical Agents Directive, and the Carcinogens and Mutagens Directive. 
Legislation in other policy areas contributes to the reduction of risks from dangerous substances in workplaces, such as EU legislation on chemical substances and mixtures (CLP, the regulation on classification, labelling and packaging of chemicals, its predecessor directive was already issued in 1967; REACH the regulation on Registration, Evaluation, Authorisation and Restriction of Chemicals from 2007; and also specific EU and international legislation on specific aspects such as chemicals in waste, storage and transport, in specific products like batteries and cars, in specific sectors like agriculture, in natural environments like in water and soil, and in consumer products like food, detergents and cosmetics).", - "page_start": 106, - "page_end": 106, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Union budget, ii) the non-disclosure of information in violation of a specific obligation, with the same effect or iii) the misapplication of such funds or assets for purposes other than those for which they were originally granted, which damages the Union's financial interests;\n\n'Grave professional misconduct': a violation of applicable laws or regulations or ethical standards of the profession to which a contractor or a related person belongs, including any conduct leading to sexual or other exploitation or abuse, or any wrongful conduct of the contractor or a related person which has an impact on its professional credibility where such conduct denotes wrongful intent or gross negligence.\n\n'Implementation of the FWC' : the purchase of services envisaged in the FWC through the signature and performance of specific contracts ;\n\n'Interface control document' : the guideline document which lays down the technical specifications, message standards, security standards, checks of syntax and semantics, etc. to facilitate machine-to-machine connection. 
This document is updated on a regular basis;\n\n'Irregularity' : any infringement of a provision of Union law resulting from an act or omission by an economic operator, which has, or would have, the effect of prejudicing the Union's budget.\n\n'Notification' (or 'notify'): form of communication between the parties made in writing including by electronic means;\n\n'Order form' : a simplified form of specific contract by which the contracting authority orders services under this FWC;\n\n'Performance of a specific contract' : the execution of tasks and delivery of the purchased services by the contractor to the contracting authority;\n\n'Personnel' : persons employed directly or indirectly or contracted by the contractor to implement the FWC;\n\n'Pre-existing material' : any material, document, technology or know-how which exists prior to the contractor using it for the production of a result in the implementation of the FWC ;\n\n'Pre-existing right' : any industrial and intellectual property right on pre-existing material ; it may consist in a right of ownership, a licence right and/or right of use belonging to the contractor, the creator , the contracting authority as well as to any other third parties;\n\n'Professional conflicting interest' : a situation in which the contractor's previous or ongoing professional activities affect its capacity to implement the FWC or to perform a specific contract to an appropriate quality standard.\n\n'Related person' : any natural or legal person who is a member of the administrative, management or supervisory body of the contractor, or who has powers of representation, decision or control with regard to the contractor;\n\n'Request for services' : a document from the contracting authority requesting that the contractors in a multiple FWC with re-opening of competition provide a specific tender for services whose terms are not entirely defined under the FWC;", - "page_start": 13, - "page_end": 13, - "source_file": "EN-Draft FWC for 
services 0142.pdf" - }, - { - "text": "The European Agency for Safety and Health at Work (EU-OSHA) contributes to making Europe a safer, healthier and more productive place to work. The Agency researches, develops and distributes reliable, balanced and impartial safety and health information and organises panEuropean awareness-raising campaigns. Set up by the European Union in 1994 and based in Bilbao, Spain, the Agency brings together representatives from the European Commission, Member State governments and employers' and workers' organisations, as well as leading experts in each of the EU Member States and beyond.\n\nEuropean Agency for Safety and Health at Work (EU-OSHA)\n\nSantiago de Compostela 12, 5th floor 48003 Bilbao Spain Tel: (+34) 944 358 400 Email: information@osha.europa.eu\n\nhttps://osha.europa.eu\n\n", - "page_start": 163, - "page_end": 163, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- (ii) a party to a contract under which the agency worker undertakes to do the work for another party to a contract whose status is, by virtue of the contract, that of a client or customer of any profession or business undertaking carried on by the agency worker.", - "page_start": 67, - "page_end": 67, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- 4. 
-(1) An employer with more than 50 employees who is the employer of any person who is required to undertake workforce tests or has responsibility for any agency worker who is required to undertake workforce tests, must take reasonable steps to facilitate the taking of those tests by that person or agency worker in accordance with these Regulations.\n - (2) In the discharge of the duty under sub-paragraph (1), an employer must have regard to any guidance issued by the Secretary of State for the purposes of this paragraph.\n - (3) In sub-paragraph (1) an employer has responsibility for an agency worker if-\n - (a) the agency worker is supplied or to be supplied by a person (an 'agent') to the employer under a contract or other arrangements made between the agent and the employer; and\n - (b) the agency worker is not-\n - (i) a worker because of the absence of a worker's contract between the agency worker and the agent or the employer, or", - "page_start": 67, - "page_end": 67, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "OLAF's own staff or by any outside body authorised to do so on its behalf.\n\nSuch checks and audits may be initiated at any moment during the provision of the services and up to five years starting from the payment of the balance of the last specific contract issued under this FWC\n\nThe audit procedure is initiated on the date of receipt of the relevant letter sent by the contracting authority. 
Audits are carried out on a confidential basis.\n\n - II.24.2 The contractor must keep all original documents stored on any appropriate medium, including digitised originals if authorised under national law, for a period of five years starting from the payment of the balance of the last specific contract issued under this FWC.\n - II.24.3 The contractor must grant the contracting authority's staff and outside personnel authorised by the contracting authority the appropriate right of access to sites and premises where the FWC is implemented and to all the information, including information in electronic format, needed to conduct such checks and audits. The contractor must ensure that the information is readily available at the moment of the check or audit and, if so requested, that information is handed over in an appropriate format.\n - II.24.4 On the basis of the findings made during the audit, a provisional report is drawn up. The contracting authority or its authorised representative must send it to the contractor, who has 30 days following the date of receipt to submit observations. 
The contractor must receive the final report within 60 days following the expiry of the deadline to submit observations.\n\nOn the basis of the final audit findings, the contracting authority may recover all or part of the payments made in accordance with Article II.23 and may take any other measures which it considers necessary.\n\n - II.24.5 In accordance with Council Regulation (Euratom, EC) No 2185/96 of 11 November 1996 concerning on-the-spot checks and inspection carried out by the Commission in order to protect the European Communities' financial interests against fraud and other irregularities and Regulation (EU, Euratom) No 883/2013 of the European Parliament and of the Council of 11 September 2013 concerning investigations conducted by the European Anti-Fraud Office, the European AntiFraud Office may carry out investigations, including on the spot checks and inspections, to establish whether there has been fraud , corruption or any other illegal activity under the contract affecting the financial interests of the Union. Findings arising from an investigation may lead to criminal prosecution under national law.\n\nThe investigations may be carried out at any moment during the provision of the services and up to five years starting from the payment of the balance of the last specific contract issued under this FWC.", - "page_start": 37, - "page_end": 37, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "Obviously, most informal, and - in particular irregular and illegal types of work do not respect legal OSH obligations - and at the same time legal monitoring obligations also fail. The EU Fundamental Rights Agency (FRA) published several case studies and examples in a series called 'Severe labour exploitation reports; 359 these studies provide an insight into most irregular working conditions.\n\nUndeclared work is defined as paid and lawful (not criminal) activity but undeclared to public authorities. 
('paid activities that are lawful as regards their nature but not declared to public authorities, taking into account the differences in the regulatory systems of Member States'.)\n\nIn 2018, the European Commission estimated the scale of undeclared work in the EU. According to this estimate, on average, 11.6% of total labour input in the private sector is undeclared, and undeclared work constitutes on average 16.4% of gross value added. The main sectors according to the Special Flash Eurobarometer from 2019 360 are personal services (childcare/elderly care/cleaning) followed by construction and hospitality services. 361 The 'European Platform tackling undeclared work' provides fact sheets about the type and quantity of undeclared work in all EU Member States. 362\n\nThe compliance of enterprises with OSH regulations is supervised by state institutions, mainly the Labour Inspectorates . 363 At EU level, the SLIC developed common principles for their work. These common principles aim at harmonising their work and facilitate collaboration; they include planning and monitoring, inspectors' competencies and independence, prevention, protection, and assistance and guidance for inspectors, and internal and external communication. 364\n\nPractically all labour inspections in the EU Member States worked in the past two decades on organisational and strategic measures to achieve an effective and broad impact , and also to better adapt to new and emerging risks. 365 To enhance the level of implementation in terms of coverage and quality, many labour inspections developed smart enforcement and supervision concepts . 366\n\nOn average, two million visits per year were made by labour inspectorates, in approximately 22 million businesses in the EU, in the decade 2010-2020, with a steady decline over the years. 367 . 368 Many enterprises that are regarded as low-risk establishments have never been inspected by a labour inspectorate. 
Often more than one inspection is done in large enterprises, for example, as a follow-up inspection; there might also be more than one annual inspection in enterprises with high risks. The labour inspection is also tasked to supervise enterprises with many separated sites or establishments, for example, construction companies and shops of supermarket chains. The visit of one headquarter or one shop cannot be regarded as a visit of a representative selection of enterprises' locations, which possibly show different levels of safety and health.\n\nIn the decade between 2000 and 2010, the development of the resources of labour inspections show a mixed picture, some countries extended the capacities of labour inspections, others cut resources . 369 For the period between 2010 and 2020, the European Trade Union Institute (ETUI) counted a decrease of labour inspectors and inspections in 20 of 27 Member States, a drop of 7% for inspectors and of 18% for inspections. 370 Again, the picture between Member States differs but, in general, budget or staff cuts dominate. ESENER findings show that there was a significant decline between 2014 and 2019 regarding the number of visits by Labour Inspectorates. 371", - "page_start": 122, - "page_end": 122, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- II.6.3 The contractor is liable for any loss or damage caused to the contracting authority during or as a consequence of implementation of the FWC , including in the event of subcontracting, but only up to an amount not exceeding three times the total amount of the relevant specific contract. 
However, if the damage or loss is caused by the gross negligence or wilful misconduct of the contractor or of its personnel or subcontractors, as well as in the case of an action brought against the contracting authority by a third party for breach of its intellectual property rights, the contractor is liable for the whole amount of the damage or loss.\n - II.6.4 If a third party brings any action against the contracting authority in connection with the implementation of the FWC , including any action for alleged breach of intellectual property rights, the contractor must assist the contracting authority in the legal proceedings, including by intervening in support of the contracting authority upon request. If the contracting authority's liability towards the third party is established and that such liability is caused by the contractor during or as a consequence of the implementation of the FWC , Article II.6.3 applies.\n - II.6.5 If the contractor is composed of two or more economic operators (i.e. who submitted a joint tender), they are all jointly and severally liable to the contracting authority for the implementation of the FWC .\n - II.6.6 The contracting authority is not liable for any loss or damage caused to the contractor during or as a consequence of implementation of the FWC , unless the loss or damage was caused by wilful misconduct or gross negligence of the contracting authority.\n\n## II.7. Conflict of interest and professional conflicting interests\n\n - II.7.1 The contractor must take all the necessary measures to prevent any situation of conflict of interest or professional conflicting interest .\n - II.7.2 The contractor must notify the contracting authority in writing as soon as possible of any situation that could constitute a conflict of interest or a professional conflicting interest during the implementation of the FWC . 
The contractor must immediately take action to rectify the situation.\n\nThe contracting authority may do any of the following:", - "page_start": 18, - "page_end": 18, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "- 359 Fundamental Rights Agency (FRA), here, section on Trafficking and labour exploitation\n - 360 Special Eurobarometer 498: Undeclared Work in the European Union\n - 361 European Commission, Directorate-General for Employment, Social Affairs and Inclusion et al., 2018: An evaluation of the scale of undeclared work in the European Union and its structural determinants : estimates using the labour input method, here\n - 362 ELA: European Platform tackling undeclared work\n - 363 The OSH Barometer contains a special section on enforcement capacities, here\n - 364 SLIC, 2015: Common Principles for Labour Inspection in Relation to Health and Safety In the Workplace\n - 365 Cardiff University et al., 2011: Contract to assess the potential impact of emerging trends and risks on labour inspection methodologies in the domain of occupational health and safety,\n - European Federation of Public Service Unions (EPSU), 2012: A mapping report on Labour Inspection Services in 15 European countries (p. 13ff).", - "page_start": 153, - "page_end": 153, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_GLW_2002.pdf", - "query": "What or Corning's corporate values ?", - "target_page": 12, - "target_passage": "Quality, Integrity, Performance, Leadership, Innovation, Independence, and The Individual", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "## INVESTOR INFORMATION :\n\n## A NNUAL M EETING\n\nThe annual meeting of shareholders will be held on Thursday, April 24, 2003, in Corning, NY. A formal notice of the meeting together with a proxy statement will be mailed to shareholders on or about March 12, 2003. 
The proxy statement can also be accessed electronically through the Investor Relations category of the Corning home page on the Internet at www.corning.com. A summary report of the proceedings at the annual meeting will be available without charge upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831.\n\n## A DDITIONAL INFORMATION\n\n'Safe Harbor' Statement under the Private Securities Litigation Reform Act of 1995 facts or information are forward-looking statements. These forward-looking statements involve risks and uncertainties that may cause the outcome to be materially different. Such risks and uncertainties include, but are not limited to:\n\n - -global economic and political conditions,\n - -currency fluctuations,\n - -product demand and industry capacity,\n - -competitive products and pricing,\n\n-\n\nsufficiency of manufacturing capacity and efficiencies,\n\n - -cost reductions,\n - -availability and costs of critical materials,\n - -new product development and commercialization,\n - -attracting and retaining key personnel,\n - -order activity and demand from major customers,\n - -fluctuations in capital spending by customers in the telecommunications industry and other business segments,\n - -financial condition of customers,\n\nA copy of Corning's 2002 Annual Report on Form 10-K filed with the Securities and Exchange Commission is available upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831. The Annual Report on Form 10-K can also be accessed electronically through the Investor Relations category of the home page on the Internet at: www.corning.com\n\nINVESTOR INFORMATION\n\nInvestment analysts who need additional information may contact Mr. Kenneth C. 
Sofio, Manager of Investor Relations, Corning Incorporated, HQ-E2-25, Corning, NY 14831; Telephone 607.974.9000\n\n## C OMMON S TOCK\n\n - -changes in the mix of sales between premium and non-premium products,\n - -facility expansions and new plant start-up costs,\n - -adverse litigation or regulatory developments, including future or pending tax legislation,\n - -adequacy and availability of insurance,\n - -capital resource and cash flow activities,\n - -capital spending,\n - -equity company activities,\n - -interest costs,\n - -acquisition and divestiture activity,\n - -the rate of technology change,\n - -the ability to enforce patents,\n\nCorning Incorporated common stock is listed on the New York Stock Exchange and the SWX Swiss Exchange. In addition, it is traded on the Boston, Midwest, Pacific and Philadelphia stock exchanges. Common stock options are traded on the Chicago Board Options Exchange. The abbreviated ticker symbol for Corning Incorporated is 'GLW.'\n\nTRANSFER A GENT AND R EGISTRAR Computershare Investor Services LLC P.O. Box A-3504 Chicago, IL 60690-3504 Telephone: 800.255.0461 Website: www.computershare.com\n\nC HANGE OF A DDRESS\n\nReport change of address to Computershare Investor Services at the above address.\n\nINDEPENDENT A CCOUNTANTS PricewaterhouseCoopers LLP 1301 Avenue of the Americas New York, NY 10019\n\nCorning Incorporated\n\nwww.corning.com\n\n - -product performance issues,\n - -stock price fluctuations, and\n - -other risks detailed in Corning's SEC filings.\n\nNeither this report nor any statement contained herein is furnished in connection with any of\n\nCorning is an equal opportunity employer. 
Printed in USA\n\n© Corning Incorporated 2003\n\n", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "## Corporate Governance", - "page_start": 47, - "page_end": 47, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "C ORPORATE VALUES :\n\nCorning's Values provide an unchanging moral and ethical compass that guides the actions of everyone in the company. The corporate values are: Quality, Integrity, Performance, Leadership, Innovation, Independence, and The Individual.\n\nquality integri performance leadership innovation independence i i i i i i i T OTAL Q UALITY : In alignment with the quality policy of thecorporation, our policy is to achieve Total Quality performance. Total Quality performance means understanding who the customer is, what the requirements are, and meeting those requirements better than anyone else, without error, on time, every time.", - "page_start": 11, - "page_end": 11, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "## S CIENCE & TECHNOLOGY\n\n\n\nCorning' the context of managing the sensitive balance between the near alignment of R&D and business objectives, and longer discovery research and new opportunity creation.\n\nOver the past year with business conditions. markets and create life-changing innovations.\n\nWe opportunities more quickly and efficiently. We critical intellectual assets of our scientific organization.\n\nOur R&D or new product development, but also new process development. lowered cost and increased quality performance.\n\nInnovation is one of Corning's core V language and mindset of the company. Even in the face of dif commitment to research and development.\n\nC RITICAL T ECHNOLOGIES : CHEMICAL VAPOR DEPOSITIONM ATERIALS R ESEARCH : OPTICAL PROPERTIES\n\n\n\n", - "page_start": 8, - "page_end": 8, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "\n\nW ENDELL P. WEEKS\n\n\n\nJ AMES B. 
FLAWS\n\n## PRESIDENT\n\n## AND CHIEF OPERATING OFFICER\n\nVICE CHAIRMAN\n\nAND CHIEF FINANCIAL OFFICER\n\nIn our business operations during 2002 we invested a great deal of energy aligning our cost structure and business plans with our priority of restoring profitability. After massive restructuring - following restructuring efforts we launched in 2001-we feel we now have our cost structure and growth strategies in place to accomplish this goal.\n\nWe have re-balanced the company to take advantage of our broad and diverse set of businesses. And in charting our strategies, we have focused on ensuring that both our segments have solid business plans in place, enabling them to grow. Our people are rigorously committed to executing against these plans.\n\nWe take great pride in saying that Corning continues to be a financially sound company, thanks to the aggressive strategies we executed throughout 2002. Although it has been a very painful process, we have dramatically slowed the rate at which we are spending cash. We ended the year with a balance of cash and short-term investments of $2.1 billion. And we have access to $2 billion in credit that we haven't touched - and don't plan to. We also continue to pay down debt each quarter. This, combined a high degree of confidence in our ability to meet any future financial obligations. So, we feel very good about our liquidity position right now.\n\nAs you saw earlier in this report, our Corning Technologies businesses are in markets with solid growth potential. We have leading market positions in attractive businesses … we are ready to capitalize on that position of strength. Meanwhile, we are making these businesses even more cost-effective through significant manufacturing efficiency gains.\n\nIn telecommunications, we are not planning on a market recovery in 2003. 
We have aligned our cost structure to meet current demand levels after two very tough years of ongoing restructuring.\n\nThe ongoing economic weakness and uncertainty in world events continue to make the overall business environment to forecast revenues and expenses quarter-to-quarter, and we are encouraged by the near-term growth potential of our non-telecommunications businesses - especially our liquid-crystal display, environmental and semiconductor businesses. If these markets continue to grow as we expect, we are confident that we will be able to meet our goals.\n\nWe know that our shareholders are most eager to see a greater return on their investment with Corning, and of Wall Street's confidence. We are 100 percent committed to reaching that goal of profitability in 2003- and doing so within the rigorous compliance rules by which we have always been guided. Integrity characterizes all our relationships, both inside and outside of Corning, and we will never compromise that foundation of our reputation.\n\nWithin the context of our financial realities, however, we have not lost our sense of self. We will meet our goals…but the path we are taking to get there has been, and will continue to be, consistent with our Values. Integrity … quality … treating individuals with dignity and respect … these are the guiding principles of the decisions we make. We know that in adhering to our Values, solid business performance will follow.", - "page_start": 9, - "page_end": 9, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "## A MESSAGE FROM THE BOARD OF DIRECTORS\n\n## Dear Shareholders:\n\nWe, the members of the HON INDUSTRIES Board of Directors, believe that integrity is central to good corporate governance. This belief is reflected in the HON INDUSTRIES vision statement (shown on the back of this annual report), adopted many years ago. Our Vision statement represents much more than a traditional 'mission,' and it goes much deeper than company policy. 
The beliefs and values represented in that document are the very foundation of our corporate culture, and guide the attitude and actions of every member, every day.\n\nFrom its beginnings, HON INDUSTRIES has sought to implement its vision through sound policies and practices, and by maintaining a strong Board composed predominantly of outside directors. We are fully committed to executing our responsibilities, and we will continue to maintain the company's long-standing tradition of an independent, well-informed, active, and engaged Board of Directors.\n\nOur board meetings and procedures have been developed and refined to encourage open and informed communication. The company's accounting policies have always been conservative and straightforward. The Board's three committees - Audit; Human Resources and Compensation; Public Policy and Corporate Governance - have consisted entirely of non-management directors for many years.\n\nDuring 2003, we have given significant attention to the newly released rules emanating from the Sarbanes-Oxley Act of 2002 and the New York Stock Exchange listing requirements - rules intended to improve corporate governance across the country. It is gratifying to report that HON INDUSTRIES governance practices were already in accord with the spirit of the rules.\n\nIt is an honor to serve as directors of HON INDUSTRIES. We are very proud to represent you, the shareholder, as we oversee the management of this great company. Please be assured that we intend to remain vigilant and focused on good corporate governance.\n\n## Sincerely,\n\nThe HON INDUSTRIES Board of Directors\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nStan A. Askren\n\nGary M. Christensen\n\nCheryl A. Francis\n\nRobert L. Katz\n\nDennis J. Martin\n\nJack D. Michaels\n\nJoseph Scalzo\n\nAbbie J. Smith\n\nRichard H. Stanley\n\nBrian E. Stern\n\nRonald V. 
Waters, III\n\n\n\n\n\n\n\n", - "page_start": 60, - "page_end": 60, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "competitive markets. Guy is an excellent fit for this role on many levels and the entire Board look forward to his leadership for many years to come.\n\nI would encourage you to review the discussions around our corporate governance, community investments and sustainability initiatives later in this annual report. First class corporate governance practices have always been a strong tenet at Rogers, and as an entrepreneur founded and family controlled company, our Board takes pride in what is a proactive and disciplined approach to ensuring that our governance practices continue to justify the confidence of the public capital markets. Giving back to the communities we serve is also an important part of our culture at Rogers and the Board is very proud of the significant initiatives and investments which the company undertook over the past year on the corporate social responsibility front.\n\nI would like to thank Rogers' 28,000 employees for their ongoing dedication to our customers and striving to make Rogers better every day, my fellow Board members for their counsel and drive towards delivering continued value to our shareholders, and you our shareholders for your continued investment in this great company.\n\n\n\nALAN HORN, CPA, CA ALAN HORN\n\nCHAIRMAN OF THE BOARD ROGERS COMMUNICATIONS INC.", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "\n\n\n\nB A L A N C E\n\nCorning Annual Report 2002\n\n\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "## 38\n\nCorporate Governance Statement", - "page_start": 39, - "page_end": 39, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## BOARD OF DIRECTORS »\n\n\n\n## STANDING (LEFT TO RIGHT)\n\n## Merrill A. 'Pete' Miller, Jr. (1,2)\n\nChairman, President and CEO National Oilwell Varco, Inc. 
Houston, Texas\n\n## SEATED (LEFT TO RIGHT)\n\n## Don Nickles (4)\n\nFormer U.S. Senator, Oklahoma Founder and President The Nickles Group, LLC Washington, D.C.\n\n\n\n## Louis A. Simpson\n\nChairman SQ Advisors, LLC Naples, Florida\n\nNominated for election in June 2011\n\n## V. Burns Hargis (1)\n\nPresident Oklahoma State University Stillwater, Oklahoma\n\n## Charles T. Maxwell (3,4)\n\nSenior Energy Analyst Weeden & Co. Greenwich, Connecticut\n\n## Aubrey K. McClendon\n\nChairman of the Board and Chief Executive Officer Chesapeake Energy Corporation Oklahoma City, Oklahoma\n\n## Frederick B. Whittemore (3,4)\n\nAdvisory Director Morgan Stanley New York, New York\n\nRetiring from the Board in June 2011\n\n## Governance\n\nOur Board of Directors is responsible to our shareholders for the oversight of the company and for the imple mentation and operation of an effective and sound corporate governance environment. We believe that effective corporate governance contributes to long-term corporate performance. An effective governance structure should reinforce a culture of corporate integrity, foster the company's pursuit of long-term strategic goals of growth and profit and ensure quality and continuity of corporate leadership. Our directors will continue to be diligent in their efforts to preserve the public trust while fostering the long-term success of the company.\n\n## Richard K Davidson (1)\n\nRetired Chairman and CEO Union Pacific Corporation Bonita Springs, Florida\n\n## Frank Keating (3)\n\nFormer Governor, Oklahoma President and CEO American Bankers Association Washington, D.C.\n\n## Kathleen M. 
Eisbrenner (3,4)\n\nFounder and CEO Next Decade The Woodlands, Texas\n\n - (1) Audit Committee\n - (2) Lead Independent Director\n - (3) Compensation Committee\n - (4) Nominating and Corporate Governance Committee", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_CHK_2010.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_GLW_2002.pdf", - "query": "As a Corning's investor, how can I get a summary of the annual meeting of shareholders ?", - "target_page": 11, - "target_passage": "A summary report of the proceedings at the annual meeting will be available without charge upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## INVESTOR INFORMATION :\n\n## A NNUAL M EETING\n\nThe annual meeting of shareholders will be held on Thursday, April 24, 2003, in Corning, NY. A formal notice of the meeting together with a proxy statement will be mailed to shareholders on or about March 12, 2003. The proxy statement can also be accessed electronically through the Investor Relations category of the Corning home page on the Internet at www.corning.com. A summary report of the proceedings at the annual meeting will be available without charge upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831.\n\n## A DDITIONAL INFORMATION\n\n'Safe Harbor' Statement under the Private Securities Litigation Reform Act of 1995 facts or information are forward-looking statements. These forward-looking statements involve risks and uncertainties that may cause the outcome to be materially different. 
Such risks and uncertainties include, but are not limited to:\n\n - -global economic and political conditions,\n - -currency fluctuations,\n - -product demand and industry capacity,\n - -competitive products and pricing,\n\n-\n\nsufficiency of manufacturing capacity and efficiencies,\n\n - -cost reductions,\n - -availability and costs of critical materials,\n - -new product development and commercialization,\n - -attracting and retaining key personnel,\n - -order activity and demand from major customers,\n - -fluctuations in capital spending by customers in the telecommunications industry and other business segments,\n - -financial condition of customers,\n\nA copy of Corning's 2002 Annual Report on Form 10-K filed with the Securities and Exchange Commission is available upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831. The Annual Report on Form 10-K can also be accessed electronically through the Investor Relations category of the home page on the Internet at: www.corning.com\n\nINVESTOR INFORMATION\n\nInvestment analysts who need additional information may contact Mr. Kenneth C. Sofio, Manager of Investor Relations, Corning Incorporated, HQ-E2-25, Corning, NY 14831; Telephone 607.974.9000\n\n## C OMMON S TOCK\n\n - -changes in the mix of sales between premium and non-premium products,\n - -facility expansions and new plant start-up costs,\n - -adverse litigation or regulatory developments, including future or pending tax legislation,\n - -adequacy and availability of insurance,\n - -capital resource and cash flow activities,\n - -capital spending,\n - -equity company activities,\n - -interest costs,\n - -acquisition and divestiture activity,\n - -the rate of technology change,\n - -the ability to enforce patents,\n\nCorning Incorporated common stock is listed on the New York Stock Exchange and the SWX Swiss Exchange. 
In addition, it is traded on the Boston, Midwest, Pacific and Philadelphia stock exchanges. Common stock options are traded on the Chicago Board Options Exchange. The abbreviated ticker symbol for Corning Incorporated is 'GLW.'\n\nTRANSFER A GENT AND R EGISTRAR Computershare Investor Services LLC P.O. Box A-3504 Chicago, IL 60690-3504 Telephone: 800.255.0461 Website: www.computershare.com\n\nC HANGE OF A DDRESS\n\nReport change of address to Computershare Investor Services at the above address.\n\nINDEPENDENT A CCOUNTANTS PricewaterhouseCoopers LLP 1301 Avenue of the Americas New York, NY 10019\n\nCorning Incorporated\n\nwww.corning.com\n\n - -product performance issues,\n - -stock price fluctuations, and\n - -other risks detailed in Corning's SEC filings.\n\nNeither this report nor any statement contained herein is furnished in connection with any of\n\nCorning is an equal opportunity employer. Printed in USA\n\n© Corning Incorporated 2003\n\n", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "The policy prohibits Directors and employees from engaging in short-term trading of any of the Company's securities and buying or selling the Company's securities if they possess unpublished, price-sensitive information.\n\nDirectors and senior management may buy or sell Company securities in the four week period following significant announcements by the Company, including the release of the quarterly report, half-yearly results, the preliminary annual results and the lodgement of the Company's Annual Report (subject to the prohibition of dealing in the Company's securities if they possess unpublished price sensitive information).\n\nDirectors and senior management must also receive approval from the Chairman before buying or selling Company securities.\n\nThe Company's Share Trading Policy is available in the 'Corporate Governance' section of the Company's website.\n\n## Communication with Shareholders and Continuous 
Disclosure\n\nThe Company is committed to providing relevant and timely information to its shareholders in accordance with its continuous disclosure obligations under the ASX Listing Rules and the Corporations Act 2001 (Cth).\n\nInformation is communicated to shareholders through the distribution of the Company's Annual Report and other communications. All releases are posted on the Company's website and released to the ASX in a timely manner.\n\nThe Company has practices in place throughout the year governing who may authorise and make disclosures and the method by which the market is to be informed of any price sensitive information.\n\nThe Company Secretary is responsible for communications with the ASX and ensuring that the Company meets its continuous disclosure obligations.\n\nThe Company's Continuous Disclosure is available in the 'Corporate Governance' section of the Company's website.\n\n## Annual General Meeting\n\nAll shareholders are encouraged to attend and participate in the Company's Annual General Meeting. Shareholders may attend in person or send a proxy as their representative.\n\nThe Company's external auditor is routinely invited to and attends the Annual General Meeting in order to respond to questions raised by shareholders relating to the content and conduct of the audit and accounting policies adopted by the Company in relation to the preparation of the financial statements.\n\n## Corporate Governance Disclosure\n\nThe Company's governance policies and procedures comply in all substantial respects with the Australian Securities Exchange Corporate Governance Principles and Recommendations with 2010 Amendments. 
The following table compares the ASX Recommendations and the Company's corporate governance policies and practices.\n\nu", - "page_start": 38, - "page_end": 38, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "The Audit and Risk Management Committee's charter and information on the selection and appointment of the Company's external auditor is available in the corporate governance section on the Company's website. Information regarding qualifications and meeting attendance can be found in the Directors' Report of this Annual Report.\n\n## Principle 5: Make Timely and Balanced Disclosure\n\nThe Company has adopted a Market Disclosure Policy to ensure compliance with its continuous disclosure obligations whereby relevant information that could cause a reasonable person to expect a material effect on, or lead to a substantial movement in, the value of Sundance's share price, is immediately made available to shareholders and the public as a release to the ASX. D Connor, as Company Secretary, has been nominated as the person primarily responsible for communications with the ASX. All material information concerning the Company, including its financial situation, performance, ownership and governance is posted on the Company's web site to ensure all investors have equal and timely access. The Market Disclosure Policy is available in the corporate governance section on Sundance's website.\n\n## Principle 6: Respect the Rights of Shareholders\n\nThe Board fully recognises its responsibility to ensure that its shareholders are informed of all major developments affecting the Company. All shareholders, who have elected to do so, receive a copy of the Company's Annual Report and the Annual, Half Yearly and Quarterly Reports are prepared and posted on the Company's website in accordance with the ASX Listing Rules. Regular updates on operations are made via ASX releases. 
All information disclosed to the ASX is posted on Sundance's website as soon as possible after it is disclosed to the ASX. When analysts are briefed on aspects of the Company's operation, the material used in the presentation is immediately released to the ASX and posted on the Company's website. Sundance encourages its shareholders to attend its annual meetings and to discuss and question its Board and management. The Company's external auditor is requested to attend the annual general meeting and be available to answer shareholder questions about the conduct of the audit and the preparation and content of the audit report. The Shareholder Communications Policy is published on the Company's website under the corporate governance section.\n\n## Principle 7: Recognise and Manage Risk\n\n## 7.1 Risk Assessment and Management\n\nSundance has established a Risk Management Policy whereby the primary purpose of the policy is to ensure that:\n\n - · Appropriate systems are in place to identify, to the extent that is reasonably practical, all material risks that the Company faces in conducting its business;\n - · The financial impact of those risks is understood and appropriate controls are in place to limit exposures to them;\n - · Appropriate responsibilities are delegated to control the risks; and\n - · Any material changes to the Company's risk profile are disclosed in accordance with the Company's continuous Market Disclosure Policy.", - "page_start": 53, - "page_end": 53, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "## REPUBLIC SERVICES, INC. AND SUBSIDIARIES\n\n## NOTES TO CONSOLIDATED FINANCIAL STATEMENTS\n\n(All tables in millions, except per share data) Ì (Continued)\n\n## 7. STOCKHOLDERS' EQUITY\n\nDuring 2000 through 2004, the Board of Directors authorized the repurchase of up to $1,025.0 million of the Company's Common Stock. 
As of December 31, 2004, the Company had paid $750.4 million to repurchase 35.2 million shares of its common stock, of which 9.6 million shares were acquired during the year ended December 31, 2004 for $266.1 million.\n\nIn July 2003, the Company announced that its board of directors initiated a quarterly cash dividend of $.06 per share, which was increased to $.12 per share in the third quarter of 2004. Dividends declared were $54.6 million and $19.0 million during 2004 and 2003, respectively. As of December 31, 2004, the Company recorded a dividend payable of approximately $18.1 million to shareholders of record at the close of business on January 3, 2005.\n\n## 8. EMPLOYEE BENEFIT PLANS\n\nIn July 1998, the Company adopted the 1998 Stock Incentive Plan (\"\"Stock Incentive Plan'') to provide for grants of options to purchase shares of common stock, restricted stock and other equity-based compensation (\"\"Equity-Based Compensation Units'') to employees and non-employee directors of the Company who are eligible to participate in the Stock Incentive Plan. The Company accounts for stock-based compensation in accordance with Accounting Principles Board Opinion No. 25, \"\"Accounting for Stock Issued to Employees'' (\"\"APB 25''), and related interpretations. Stock options are granted at prices equal to the fair market value of the Company's common stock on the date of grant; therefore, no compensation expense is recognized. Compensation expense resulting from grants of restricted stock or stock units is recognized during the vesting period.\n\nOptions granted under the Stock Incentive Plan are non-qualiÑed and are granted at a price equal to the fair market value of the Company's common stock at the date of grant. Generally, options granted have a term of ten years from the date of grant, and vest in increments of 25% per year over a four year period beginning on the Ñrst anniversary date of the grant. 
Options granted to non-employee directors have a term of ten years and vest immediately at the date of grant. In May 2002, the Company's stockholders approved and adopted an amendment and restatement of the Stock Incentive Plan, which modiÑed a number of its provisions, including an increase in the number of shares of common stock reserved for issuance under the Stock Incentive Plan from 20.0 million to 27.0 million. As of December 31, 2004, there were 6.0 million stock options reserved for future grants under the Stock Incentive Plan.\n\nDuring the three months ended March 31, 2004, the Company awarded 20,000 deferred stock units to its non-employee directors under its Stock Incentive Plan. An additional 5,000 deferred stock units were granted to a new director during the three months ended December 31, 2004. These stock units vest immediately but the directors receive the underlying shares only after their board service ends. The stock units do not carry any voting or dividend rights, except the right to receive additional stock units in lieu of dividends.\n\nAlso during the three months ended March 31, 2004, the Company awarded 79,500 shares of restricted stock to its executive oÇcers. 7,500 of these restricted shares vest eÅective January 1, 2005. The remaining 72,000 shares vest in four equal annual installments beginning one year from the date of grant except that vesting may be accelerated based upon the achievement of certain performance targets. During the vesting period, the participants have voting rights and receive dividends, but the shares may not be sold, assigned, transferred, pledged or otherwise encumbered. 
Additionally, granted but unvested shares are forfeited upon termination of employment.", - "page_start": 85, - "page_end": 85, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## STATEMENTS OF CONSOLIDATED SHAREHOLDERS' EQUITY\n\nSTATEMENTS OF CONSOLIDATED SHAREHOLDERS (In thousands, except per share amounts)\n\n'\n\nEQUITY\n\n(In thousands, except per share amounts)", - "page_start": 19, - "page_end": 19, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "## DIRECTORS' REPORT\n\nrequirements of the Corporations Law and the ASX Listing Rules, the Company and Mr Bradley agreed to defer the first issue of Shares, making both issues conditional on shareholder approval.\n\nThe second agreement was with Clough Engineering Limited, pursuant to which it agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, 6,775,000 shares, within 7 days of that meeting.\n\nOn 15 June 2000 the Company announced that with effect from 1 July 2000 it acquired a 50% interest in OIS MOC Joint Venture Pty Ltd, to be paid for by the issue of 800,000 Shares in the Company. OIS MOC Joint Venture Pty Ltd owns the goodwill of a successful labour hire company. That company is to be renamed Mermaid Labour and Management Limited (MLML).\n\nMLML offers a full labour hire service inclusive of industrial relations consultancy, negotiating agreements and awards and were appropriate, provides ongoing management of the labour force.\n\nThe effective date is 1 July 2000. 
The Company will issue 800,000 ordinary fully paid shares in Mermaid Marine Australia Limited.\n\nThere have not been any other matters or circumstances, other than those referred to in the Chairman's and Operations Reviews and/or in the financial statements and notes attached thereto, that have arisen since the end of the Financial Year that have significantly affected, or may significantly affect Mermaid's operations, the results of those operations or its state of affairs in future financial years.\n\n## FUTURE DEVELOPMENTS\n\nThe Chairman's and Operations Reviews give indications, in general terms, of likely developments in Mermaid's operations in future financial years and the expected results of those operations.\n\n## ENVIRONMENTAL REGULATION\n\nThe development of the Company's Dampier and Broome\n\nbases is subject to the approval of the Western Australian Environmental Protection Authority.\n\nAs at the date of this report the Company had a total of 7,115,000 unissued shares under option as follows: 30 November 2000 Options SHARE OPTIONS\n\nAs at the date of this report there are outstanding 6,500,000 options to acquire 6,500,000 ordinary shares in the Company at an issue price of 0.75 cents per ordinary share. Each of these options expires on 30 November 2000.\n\n\n\nOn 9 August 2000 the Company announced to the ASX that, subject to shareholder approval", - "page_start": 33, - "page_end": 33, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## Item 11. EXECUTIVE COMPENSATION\n\nInformation for the year ended October 25, 2003, commencing with \"Summary Compensation Table\" on page 12 through page 15 and \"Compensation of Directors\" on page 5 of the definitive proxy statement for the Annual Meeting of Stockholders to be held January 27, 2004, is incorporated herein by reference.\n\n## Item 12. 
SECURITY OWNERSHIP OF CERTAIN BENEFICIAL OWNERS AND MANAGEMENT AND RELATED STOCKHOLDER MATTERS\n\nInformation for the year ended October 25, 2003, under \"Principal Stockholders\" and \"Security Ownership of Management\" on pages 7 through 9 and information under \"Equity Compensation Plan Information\" on page 15 of the definitive proxy statement for the Annual Meeting of Stockholders to be held January 27, 2004, is incorporated herein by reference.\n\n## Item 13. CERTAIN RELATIONSHIPS AND RELATED TRANSACTIONS\n\nInformation under \"Other Information Relating to Directors, Nominees, and Executive Officers\" for the year ended October 25, 2003, as set forth on page 17 of the definitive proxy statement for the Annual Meeting of Stockholders to be held January 27, 2004, is incorporated herein by reference.\n\n## Item 14. PRINCIPAL ACCOUNTING FEES AND SERVICES\n\nThe information under the \"Audit Committee Report and Ratification of Appointment of Auditors-Audit Fees\" through \"-Audit Committee Preapproval Policies and Procedures\" on page 7 of the Company's definitive proxy statement for the Annual Meeting of Stockholders to be held January 27, 2004, is incorporated herein by reference.\n\n## PART IV\n\n## Item 15. 
EXHIBITS, FINANCIAL STATEMENT SCHEDULES AND REPORTS ON FORM 8-K\n\n - (a) (1) and (2) The response to this portion of Item 15 is submitted as a separate section of this report.\n - (3) List of Exhibits-The response to this portion of Item 15 is submitted as a separate section of this report.\n - (b) The following reports on Form 8-K were filed during the fourth quarter:\n - Form 8-K was filed on August 1, 2003, announcing a January 24, 2004 retirement of Eric Brown, Group Vice President of Prepared Foods and member of the Board of Directors.\n\nForm 8-K was furnished on August 21, 2003, disclosing the issuance of the Company's earnings release for the third quarter ended July 26, 2003.\n\n - Form 8-K was filed on October 7, 2003, announcing union workers from five of the Company's production facilities voted to ratify a new four-year labor contract.\n\nForm 8-K was filed on October 23, 2003, announcing the Company entered into an unsecured 3-year revolving credit facility in the amount of $150,000,000, which replaced an existing $150,000,000 credit facility entered into on October 25, 2001.\n\n - (c) The response to this portion of Item 15 is submitted as a separate section of this report.\n - (d) The response to this portion of Item 15 is submitted as a separate section of this report.\n\n## SIGNATURES\n\nPursuant to the requirements of Section 13 or 15(d) of the Securities Exchange Act of 1934, the Registrant has duly caused this report to be signed on its behalf by the undersigned, thereunto duly authorized.\n\n## HORMEL FOODS CORPORATION\n\nBy: /s/ JOEL W. JOHNSON\n\nJOEL W. JOHNSON Chairman of the Board,\n\nPresident and Chief Executive Officer\n\nDate: January 23, 2004", - "page_start": 9, - "page_end": 9, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "## NOTES TO AND FORMING PART OF THE FINANCIAL STATEMENTS FOR THE FINANCIAL YEAR ENDED 30 JUNE 2000\n\n## 22. 
SUBSEQUENT EVENTS\n\nOn 25 August 2000 the Company announced that it had reached two agreements for the placement of a total of 16,666,666 ordinary fully paid shares in the Company at an issue price of 30 cents each (Shares).\n\nThe first agreement was with Mr Mark Bradley, who agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, a further 3,441,666 within 7 days of that meeting.\n\nOn Mr Bradley being appointed a Director of the Company, in order to comply with the requirements of the Corporations Law and the ASX Listing Rules, the Company and Mr Bradley agreed to defer the first issue of Shares, making both issues conditional on shareholder approval.\n\nThe second agreement was with Clough Engineering Limited, pursuant to which it agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, 6,775,000 shares, within 7 days of that meeting.\n\nOn 15 June 2000 the Company announced that with effect from 1 July 2000 it acquired a 50% interest in OIS MOC Joint Venture Pty Ltd, to be paid for by the issue of 800,000 Shares in the Company. OIS MOC Joint Venture Pty Ltd owns the goodwill of a successful labour hire company. That company is to be renamed Mermaid Labour and Management Limited (MLML).\n\nMLML offers a full labour hire service inclusive of industrial relations consultancy, negotiating agreements and awards and were appropriate, provides ongoing management of the labour force.\n\nThe financial effect of the above events have not been reflected in these financial statements.\n\n## 23. 
EARNINGS PER SHARE\n\n| | 2000 Cents per Share | 1999 Cents per Share |\n|-----------------------------------------------------------------------------------------------------------|-------------------------|-------------------------|\n| Basic earnings per share | (0.62) | 8.09 |\n| Diluted earnings per share | (0.21) | 8.05 |\n| | 2000 | 1999 |\n| | No. | No. |\n| Weighted average number of ordinary shares on issue used in the calculation of basic earnings per share | 43,000,000 | 30,356,164 |\n\n", - "page_start": 56, - "page_end": 56, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "of, non-executive, independent Directors, except for the Environmental and Safety Committee, which includes the CEO as a member.\n\nThe Board Guidelines prescribe that the Board is to meet at least eight times a year, including a strategy meeting of two days duration. The number of meetings of the Board and of each of its Committees and the names of attendees at those meetings are set out on page 47 of this Annual Report. Board Meetings are structured in two separate sessions, without management present for one of those sessions. The agenda for meetings is prepared by the Company Secretary in conjunction with the Chairman and CEO, with periodic input from the Board. Comprehensive Board papers are distributed to Directors in advance of scheduled meetings. Board meetings take place both at the Company's head office and at key operating sites, to assist the Board in its understanding of operational issues.\n\nExecutive management attend Board and Committee meetings, at which they report to Directors within their respective areas of responsibility. This assists the Board in maintaining its understanding of the Company's business and assessing the executive management team. 
Where appropriate, advisors to the Company attend meetings of the Board and of its Committees.\n\n## 2.3 Composition of the Board\n\nThe composition of the Board is determined in accordance with the Company's Constitution and the Board Guidelines which, among other things, require that:\n\n - · the Board is to comprise a minimum of five and a maximum of ten Directors (exclusive of the CEO);\n - · the Board should comprise a substantial majority of independent, non-executive Directors;\n - · there should be a separation of the roles of Chairman and Chief Executive Officer of the Company; and\n - · the Chairman of the Board should be an independent, non-executive Director.\n\nUnder the Company's Constitution approximately onethird of Directors retire by rotation each year and Directors appointed during the year are required to submit themselves for election by shareholders at the Company's next Annual General Meeting. The Board Guidelines encourage Directors to retire at the first Annual General Meeting after reaching the age of 72 years and not seek reappointment.\n\nCurrently, the Board comprises eight non-executive Directors and one executive Director. The Board has adopted the definition set out in the ASX Best Practice Recommendations and as defined in the 2002 guidelines of the Investment and Financial Services Association Limited and considers all current nonexecutive Directors, including the Chairman, to be independent directors.\n\nGenerally, the Board considers a Director to be independent if he or she is not a member of management and is free of any business or other relationship that could materially interfere with, or could reasonably be\n\nperceived to materially interfere with, the Director's ability to act in the best interests of the Company. The Board will assess the materiality of any given relationship that may affect independence on a case by case basis and has adopted materiality guidelines to assist in that assessment. 
Under these guidelines, the following interests are regarded as material in the absence of any mitigating factors:", - "page_start": 31, - "page_end": 31, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## DIRECTORS' REPORT\n\nMermaid's principal activities during the course of the Financial Year were:\n\n - Operating crewed vessel charters; ·\n - Vessel manning, management and logistics; ·\n - Operating supply base facilities; and ·\n - Equipment hire. ·\n\nOther than detailed in the Chairman's Report set out at pages 1 and 2 of this report and/or in the Operations Review set out on pages 3 to 9 of this report, (together the 'Chairman's and Operations Reviews'), there have been no significant changes to these activities during the Financial Year.\n\nIn respect of the financial year ended 30 June 1999, as detailed in the directors' report for that financial year, a final dividend of 1.25 cents per share, franked to 100 per cent at 36 per cent corporate income tax rate, was paid to the holders of fully paid ordinary shares on 1 November 1999.\n\nIn respect of the financial year ended 30 June 2000 the directors have not recommended the payment of a dividend.\n\nA review of operations for the Financial Year and the results of those operations are set out in the Chairman's and Operations Reviews.\n\nThe Chairman's and Operations\n\n## REVIEW OF OPERATIONS\n\n## SIGNIFICANT CHANGES IN THE STATE OF AFFAIRS\n\nReviews set out the matters which have had a significant effect on the state of affairs of Mermaid. 
Other than those matters there were no significant changes in the state of affairs of Mermaid during the Financial Year.\n\n## SUBSEQUENT EVENTS\n\nOn 25 August 2000 the Company announced that it had reached two agreements for the placement of a total of 16,666,666 ordinary fully paid shares in the Company at an issue price of 30 cents each (Shares).\n\nThe first agreement was with Mr Mark Bradley, who agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, a further 3,441,666 within 7 days of that meeting.\n\nOn Mr Bradley being appointed a Director of the Company, in order to comply with the\n\n## PRINCIPAL ACTIVITIES\n\n## DIVIDEND", - "page_start": 32, - "page_end": 32, - "source_file": "ASX_MRM_2000.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_GLW_2002.pdf", - "query": "How many employees did Corning company count at the end of 2002 ?", - "target_page": 5, - "target_passage": "We are continuing to invest in our people — all 23,200 of them", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## INVESTOR INFORMATION :\n\n## A NNUAL M EETING\n\nThe annual meeting of shareholders will be held on Thursday, April 24, 2003, in Corning, NY. A formal notice of the meeting together with a proxy statement will be mailed to shareholders on or about March 12, 2003. The proxy statement can also be accessed electronically through the Investor Relations category of the Corning home page on the Internet at www.corning.com. A summary report of the proceedings at the annual meeting will be available without charge upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831.\n\n## A DDITIONAL INFORMATION\n\n'Safe Harbor' Statement under the Private Securities Litigation Reform Act of 1995 facts or information are forward-looking statements. 
These forward-looking statements involve risks and uncertainties that may cause the outcome to be materially different. Such risks and uncertainties include, but are not limited to:\n\n - -global economic and political conditions,\n - -currency fluctuations,\n - -product demand and industry capacity,\n - -competitive products and pricing,\n\n-\n\nsufficiency of manufacturing capacity and efficiencies,\n\n - -cost reductions,\n - -availability and costs of critical materials,\n - -new product development and commercialization,\n - -attracting and retaining key personnel,\n - -order activity and demand from major customers,\n - -fluctuations in capital spending by customers in the telecommunications industry and other business segments,\n - -financial condition of customers,\n\nA copy of Corning's 2002 Annual Report on Form 10-K filed with the Securities and Exchange Commission is available upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831. The Annual Report on Form 10-K can also be accessed electronically through the Investor Relations category of the home page on the Internet at: www.corning.com\n\nINVESTOR INFORMATION\n\nInvestment analysts who need additional information may contact Mr. Kenneth C. 
Sofio, Manager of Investor Relations, Corning Incorporated, HQ-E2-25, Corning, NY 14831; Telephone 607.974.9000\n\n## C OMMON S TOCK\n\n - -changes in the mix of sales between premium and non-premium products,\n - -facility expansions and new plant start-up costs,\n - -adverse litigation or regulatory developments, including future or pending tax legislation,\n - -adequacy and availability of insurance,\n - -capital resource and cash flow activities,\n - -capital spending,\n - -equity company activities,\n - -interest costs,\n - -acquisition and divestiture activity,\n - -the rate of technology change,\n - -the ability to enforce patents,\n\nCorning Incorporated common stock is listed on the New York Stock Exchange and the SWX Swiss Exchange. In addition, it is traded on the Boston, Midwest, Pacific and Philadelphia stock exchanges. Common stock options are traded on the Chicago Board Options Exchange. The abbreviated ticker symbol for Corning Incorporated is 'GLW.'\n\nTRANSFER A GENT AND R EGISTRAR Computershare Investor Services LLC P.O. Box A-3504 Chicago, IL 60690-3504 Telephone: 800.255.0461 Website: www.computershare.com\n\nC HANGE OF A DDRESS\n\nReport change of address to Computershare Investor Services at the above address.\n\nINDEPENDENT A CCOUNTANTS PricewaterhouseCoopers LLP 1301 Avenue of the Americas New York, NY 10019\n\nCorning Incorporated\n\nwww.corning.com\n\n - -product performance issues,\n - -stock price fluctuations, and\n - -other risks detailed in Corning's SEC filings.\n\nNeither this report nor any statement contained herein is furnished in connection with any of\n\nCorning is an equal opportunity employer. 
Printed in USA\n\n© Corning Incorporated 2003\n\n", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "## MANAGEMENT'S DISCUSSION AND ANALYSIS\n\nThe following discussion of the Company's historical results of operations and of its liquidity and capital resources should be read in conjunction with the Consolidated Financial Statements of the Company and related notes.\n\n## Overview\n\nThe Company has two reportable core operating segments: office furniture and hearth products. The Company is the second largest office furniture manufacturer in the United States and the nation's leading manufacturer and marketer of gas- and wood-burning fireplaces.\n\nFrom 2000 to 2003, the office furniture industry experienced an unprecedented three-year decline due to the challenging economic environment. In 2003, this decline negatively impacted the Company's office furniture segment. In contrast, the housing market was at record high levels during 2003, which positively impacted the Company's hearth segment. The Company outperformed its peers in both segments in which it competes. The Company gained market share by providing strong brands, innovative products and services, and greater value to its end-users. Fiscal 2003 also included an extra week of activity due to the Company's 52/53-week fiscal year.\n\nNet sales were $1.8 billion in 2003, as compared to $1.7 billion in 2002. The increase in net sales reflects the 9% increase in the hearth segment and the additional week of business activity. In 2003 and 2002, the Company recorded restructuring charges and accelerated depreciation related to the closure and consolidation of office furniture facilities totaling $15.2 million and $3.0 million, respectively. Gross margins increased to 36.4% in 2003 from 35.4% in 2002 due to benefits from restructuring initiatives and its rapid continuous improvement program, new products, and increased price realization. 
The Company also invested aggressively in brand building and selling initiatives in 2003. Net income was $98.1 million or $1.68 per diluted share in 2003, as compared to $91.4 million or $1.55 per diluted share in 2002.\n\nThe Company generated $141.3 million in cash flow from operating activities and increased its cash position, including shortterm investments, by $48.6 million to $204.2 million. The Company paid dividends of $30.3 million and repurchased $21.5 million of its common stock, while investing $35.7 million in net capital expenditures and repaying $20.2 million of debt.\n\n## Critical Accounting Policies and Estimates GENERAL\n\nManagement's Discussion and Analysis of Financial Condition and Results of Operations is based upon the Consolidated Financial Statements, which have been prepared in accordance with GAAP. The preparation of these financial statements requires management to make estimates and assumptions that affect the reported amounts of assets, liabilities, revenue and expenses, and related disclosure of contingent assets and liabilities. Management bases its estimates on historical experience and on various other assumptions that are believed to be reasonable under the circumstances, the results of which form the basis for making judgments about the carrying values of assets and liabilities that are not readily apparent from other sources. Senior management has discussed the development, selection and disclosure of these estimates with the Audit Committee of our Board of Directors. Actual results may differ from these estimates under different assumptions or conditions.", - "page_start": 31, - "page_end": 31, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## INDUSTRY SEGMENT AND GEOGRAPHIC INFORMATION 10\n\nThe Company operates in one reportable industry segment: designing, developing, manufacturing and marketing products for the medical and health care industry and has no foreign operating subsidiaries. 
The Company's product lines include pressure relief valves and inflation systems, which are sold primarily to the aviation and marine industries. Due to the similarities in product technologies and manufacturing processes, these products are managed as part of the medical products segment. The Company recorded incidental revenues from its oxygen pipeline, which totaled approximately $950,000 in each of the years of 2003, 2002 and 2001. Pipeline net assets totaled $2.6 million at December 31, 2003 and 2002. Company revenues from sales to parties outside the United States totaled approximately 26, 25 and 33 percent of the Company's total revenues in 2003, 2002 and 2001, respectively. No Company assets are located outside the United States. A summary of revenues by geographic territory for the three years 2003, 2002 and 2001 is as follows (in thousands):\n\n| | YEAR ENDED DECEMBER 31, | YEAR ENDED DECEMBER 31, | YEAR ENDED DECEMBER 31, |\n|----------------|---------------------------|---------------------------|---------------------------|\n| | 2003 | 2002 | 2001 |\n| United States | $ 46,721 | $ 44,454 | $ 38,805 |\n| Canada | 8,620 | 6,938 | 10,635 |\n| United Kingdom | 1,547 | 1,693 | 2,182 |\n| Other | 5,915 | 6,448 | 5,983 |\n| Total | $ 62,803 | $ 59,533 | $ 57,605 |", - "page_start": 20, - "page_end": 20, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "The wireless industry in the late 1990's became increasingly competitive and the Company was not immune to these industry issues. The Clear Pay SM program, introduced by Sprint as a no-deposit offering in 2001, attracted high credit risk customers in the Company's markets. As the results began to materialize, the Company implemented deposits on this program (mid-April 2002), and experienced high levels of customer turnover (churn) and uncollectable accounts. The write-offs of uncollectable accounts peaked in the third quarter of 2002. 
During the fourth quarter of 2002 there was some evidence that the strengthened credit policy was having a favorable impact. Nonetheless, the 2002 net loss in the PCS operation was $5.4 million, as compared to $5.5 million in 2001. Despite the disappointing financial results for 2002, the PCS customer base grew by over 40%. While the PCS operation was adding customers, the cellular operation continued to lose its local customer base.\n\nThe growing belief that national branding was critical to our wireless operations, the expectation that roaming revenues from our analog cellular operation would not continue to grow, and the increase in the number of wireless competitors in our markets, prompted the Company to exit the cellular business in order to focus on our PCS operations. The Company entered into an agreement on November 21, 2002, to sell its 66% ownership interest in the Virginia 10 RSA cellular operation which was classified as a discontinued operation. The closing occurred February 28, 2003. The Company received $37.0 million in proceeds, including $5.0 million in escrow for two years and $1.7 million for working capital.\n\nIn many respects, 2003 was a successful year. Churn and levels of uncollectable accounts in the PCS operation returned to more acceptable levels. PCS revenues reached $67.0 million, and total revenues reached $105.9 million. The PCS operation recognized a small profit for the year, including favorable adjustments associated with settlement of disputed items with Sprint. Excluding the favorable adjustments, the PCS operation recognized a profit in the fourth quarter. With improved operating cash flow and reduced capital spending in 2003, the Company prepaid $4.6 million in debt, selecting those notes with nominal prepayment penalties. 
Additionally, after receiving the cash and paying taxes on the gain of the sale of the Virginia 10 partnership interest, the Company invested the remaining proceeds in liquid financial instruments, available for future deployment. Additionally, the Company has been successful at decreasing its dependency on wireline revenues. Wireline revenues, at $29.0 million in 2003 compared to $18.6 million in 1998, were 27.4% of total revenues in 2003 compared to 76.6% in 1998.\n\nEntering 2004, the Company is pleased with the milestone of a profitable quarter in the PCS operation, but recognizes that much work remains to ultimately earn a reasonable return on this investment. The recently announced signing of an addendum to the management and services agreements with Sprint is expected to lead to cost savings and greater certainty in fees paid to Sprint. However, the consolidation predicted for the wireless industry in recent years, including the recently announced Cingular/ATT deal and anticipated improvements in the overall economics of wireless services, has not yet materialized. Future Sprint marketing efforts, designed to meet the competition, could potentially have an unfavorable impact on the Company and lead to additional losses. The risks associated with the Sprint PCS affiliation are described in further detail elsewhere in this document. The Company is now reviewing alternatives for other businesses to further diversify our revenue base, from either a services platform or a geographic concentration.\n\n## Significant Transactions", - "page_start": 41, - "page_end": 41, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "In 2002, the Company liquidated its holdings of VeriSign, Inc, for proceeds of $2.8 million and a realized loss of $9.0 million. The VeriSign stock was valued at $38 per share at December 31, 2001, and declined over the ensuing months to approximately $6 per share in early July 2002. 
The Company liquidated all of its holdings in the stock early in the third quarter 2002. The Company's original investment in VeriSign's predecessor companies was approximately $1.0 million. Total proceeds from all sales of stock in VeriSign and its predecessor companies were $8.1 million, or more than eight times the original investment. .\n\nThere were no gross realized gains on available-for-sale securities included in income in 2003 or 2002, while there were $17.7 million for 2001. Gross realized losses included in income in 2003, 2002 and 2001 were $3 thousand, $9.0 million and $3.0 million, respectively.\n\nChanges in the unrealized gains (losses) on available-for-sale securities during the years ended December 31, 2003, 2002 and 2001 reported as a separate component of shareholders' equity are as follows:\n\n■", - "page_start": 25, - "page_end": 25, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "During 2002, the Company recorded a pretax charge of approximately $5.4 million due to the shutdown of an office furniture facility in Jackson, Tennessee. A total of 125 members were terminated and received severance due to this shutdown. During the second quarter of 2003, a restructuring credit of approximately $0.6 million was taken back into income relating to this charge. This was due to the fact that the Company was able to exit a lease with the lessor at more favorable terms than previously estimated.\n\nDuring the second quarter of 2001, the Company recorded a pretax charge of $24.0 million or $0.26 per diluted share for a restructuring plan that involved consolidating physical facilities, discontinuing low-volume product lines, and reductions of workforce. Included in the charge was the closedown of three of its office furniture facilities located in Williamsport, Pennsylvania; Tupelo, Mississippi; and Santa Ana, California. Approximately 500 members were terminated and received severance due to the closedown of these facilities. 
During the second quarter of 2002, a restructuring credit of approximately $2.4 million was taken back into income relating to this charge. This was mainly due to the fact that the Company was able to exit a lease with a lessor at more favorable terms than originally estimated and the Company's ability to minimize the number of members terminated as compared to the original plan.\n\nThe following table details the change in restructuring reserve for the last three years:", - "page_start": 45, - "page_end": 45, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "\n\nT he worst of 2001 brought out the best in The Hartford's people.\n\nAs the world watched the horrors of Sept. 11, some 330 of our New York employees fled their offices in 7 World Trade Center. Though many were caught in the debris and dust from the nearby Twin Towers, all escaped safely.\n\nBy the time the 47-story 7 World Trade Center building collapsed at about 5:20 p.m., The Hartford had already arranged for temporary space in several of the company's other offices. Employees and suppliers immediately began working around the clock to get the business up and running again. Despite the destruction, back-up systems kept distributors' and customers' data secure.\n\nA hundred miles from Ground Zero, home office employees in Hartford, Conn., began shuttling equipment and supplies to our temporary offices. Some\n\nbooked Long Island Sound ferries from Connecticut to Long Island within 48 hours of the attack. Others spent the weekend driving supplies to the new locations so employees could concentrate on customers instead of on finding pens and paper. Employees and suppliers were determined to get the company, its distributors and its customers through the crisis.\n\nBy Monday, Sept. 17, all of The Hartford's business units in New York were serving customers again. Employees had new furniture, phones, servers and PCs. Distributors' and customers' access to company e-mail was never interrupted. 
Calls to old phone numbers were rerouted to cell phones or new office phones. Print and radio ads-along with The Hartford's Web sitegave customers instructions for filing claims quickly. Customer relationships were stronger than ever. The Hartford Experience-customer solutions, ease of doing business and extraordinary service-was never better demonstrated.", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "markets its turkey products through its own sales force and independent brokers.\n\nThe acquisitions of Diamond Crystal Brands Nutritional Products in fiscal 2001 and the Century Foods International business in July of fiscal 2003 strengthened the Company's presence in the nutritional food products and supplements market. The Company currently operates as one of the largest companies providing nutritional products to the U.S. healthcare industry.\n\nThe Company acquired the Diamond Crystal Brands business from Imperial Sugar Co. in December of fiscal 2003. Diamond Crystal Brands packages and sells various sugar, sugar substitute, salt and pepper products, savory products, drink mixes and dessert mixes to retail and foodservice customers.\n\nInternationally, the Company markets its products through Hormel Foods International Corporation (HFIC), a wholly owned subsidiary. HFIC has a presence in the international marketplace through joint ventures and placement of personnel in strategic foreign locations such as China, Spain, and the Philippines. HFIC also has a global presence with minority positions in food companies in Spain (Campofrio Alimentacion S.A., 15% holding) and the Philippines (Purefoods-Hormel, 40% holding).\n\nThe Company has not been involved in any bankruptcy, receivership or similar proceedings during its history. 
Substantially all of the assets of the Company have been acquired in the ordinary course of business.\n\nThe Company had no significant change in the type of products produced or services rendered, nor in the markets or methods of distribution since the beginning of the fiscal year.\n\n## (b) Industry Segment\n\nThe Company's business is reported in five segments: Grocery Products, Refrigerated Foods, Jennie-O Turkey Store, Specialty Foods, and All Other. The contributions of each segment to net sales to unaffiliated customers and operating profit, and the presentation of certain other financial information by segment are reported in Note K of the Notes to Consolidated Financial Statements and in the Management's Discussion and Analysis of the Annual Stockholder's Report for the year ended October 25, 2003, incorporated herein by reference.\n\n## (c) Description of Business\n\n## Products and Distribution\n\nThe Company's products primarily consist of meat and other food products. The meat products are sold fresh, frozen, cured, smoked, cooked and canned. The percentages of total revenues contributed by classes of similar products for the last three fiscal years of the Company are as follows:\n\n| Perishable meat | 50.3% | 53.0% | 54.7% |\n|--------------------|---------|---------|---------|\n| Nonperishable meat | 18.9 | 19.8 | 21.0 |\n| Poultry | 22.1 | 22.6 | 20.3 |\n| | 100.0% | 100.0% | 100.0% |\n\nReporting of revenues from external customers is based on similarity of products, as the same or similar products are sold across multiple distribution channels such as retail, foodservice or international. Revenues reported are based on financial information used to produce the Company's generalpurpose financial statements.\n\nPerishable meat includes fresh meats, sausages, hams, wieners and bacon (excluding JOTS products.) 
Nonperishable meat includes canned luncheon meats, shelf stable microwaveable entrees, stews, chilies, hash, meat spreads and other items that do not require refrigeration as well as frozen processed products. The Poultry category is composed primarily of JOTS products. The Other category primarily consists of nutritional food products and supplements, sugar and sugar substitutes, salt and pepper products, dessert mixes, food packaging (casings for dry sausage), and industrial gelatin products. The Other category has increased over the past two years primarily due to the following acquisitions: Century Foods International (July 2003), Diamond Crystal Brands (December 2002), and Diamond Crystal Brands Nutritional Products (April 2001).", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "## Retirement Benefits\n\nThe Company has defined contribution profit-sharing plans covering substantially all employees who are not participants in certain defined benefit plans. The Company's annual contribution to the defined contribution plans is based on employee eligible earnings and results of operations and amounted to $26,489,000, $23,524,000, and $24,826,000 in 2003, 2002, and 2001, respectively.\n\nThe Company sponsors defined benefit plans which include a limited number of salaried and hourly employees at certain subsidiaries. The Company's funding policy is generally to contribute annually the minimum actuarially computed amount. Net pension costs relating to these plans were $176,000; $0; and $0 for 2003, 2002, and 2001, respectively. The actuarial present value of obligations, less related plan assets at fair value, is not significant.\n\nThe Company also participates in a multiemployer plan, which provides defined benefits to certain of the Company's union\n\nemployees. 
Pension expense for this plan amounted to $309,000, $309,000, and $310,000 in 2003, 2002, and 2001, respectively.\n\n## Postretirement Health Care\n\nIn accordance with the guidelines of revised SFAS No. 132, 'Employers' Disclosures about Pensions and other Postretirement Benefits,' the following table sets forth the funded status of the plan, reconciled to the accrued postretirement benefits cost recognized in the Company's balance sheet at:", - "page_start": 50, - "page_end": 50, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## Significant Transactions\n\nThe Company had several significant transactions during 2003. The largest was the sale of its 66% interest in the Virginia 10 RSA cellular operation, as described above. The Company originally entered into the agreement with Verizon Wireless in November 2002. The Company was the general partner of the limited partnership which operated an analog cellular network in the six-county area of Northwestern Virginia, including Clarke, Frederick, Page, Rappahannock, Shenandoah, and Warren counties, and the city of Winchester. The sales price was $37.0 million plus the Company's 66% share of the partnership's working capital, which was approximately $1.7 million. The Company was required to do a working capital true up following the closing, from which the Company recorded a charge for $23 thousand after taxes. 
In the fourth quarter the Company recorded an additional charge for taxes of $0.2 million to reflect the consolidated effective tax rate based on the final operating results for the year.\n\nThe sale of this business is reflected in the discontinued operations section of the income statement along with the results of operations for the two months of 2003 that the operation remained a part of the Company.\n\n\n\n■", - "page_start": 41, - "page_end": 41, - "source_file": "NASDAQ_SHEN_2003.pdf" - } - ] - }, - { - "references": { - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf", - "query": "What is the shortcut to mute myself in MS teams ?", - "target_page": 3, - "target_passage": "Use [Ctrl]+[Shift]+[M] for a shortcut to mute and unmute during meetings.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "\n\n## Up button:\n\nShort press to light up or turn off the screen; one press to go back the dial interface; long press to reactivate the watch.\n\n## Button down:\n\nShort press to enter multi-sport mode.\n\nIn addition, when the watch is in the off-screen state, you can light up the screen by pressing any buttons.\n\n## Charging instructions:\n\nWireless charging, as shown in the picture below.\n\n\n\n## 1.1 Shortcut function:\n\n- 1) Swipe to the left till you find the \"+\" icon, click the icon to add part of the functions in the shortcut.\n- 2) Scroll down the screen when the watch is in the dial interface, you can find Bluetooth connection status, time, power, brightness adjustment and other functions.", - "page_start": 1, - "page_end": 1, - "source_file": "6126797.pdf" - }, - { - "text": "- 3. The window displays an Apply Changes prompt to apply any changes you made before continuing,\n - 4. Lower in the configuration window, you can also configure internet Storage Name Service (iSNS) addresses and CHAP if you need these in your environment.\n\nNote: The authentication of hosts is optional. 
By default, it is disabled. The user can choose to enable CHAP or CHAP authentication , which involves sharing a CHAP secret between the cluster and the host. If the correct key is not provided by the host, the Storwize V7000 does not allow it to perform I/O to volumes. Also, you can assign a CHAP secret to the cluster.\n\n - 5. Click the Ethernet Ports tab to set the iSCSI IP address for each node (see Figure 8-12).\n\nFigure 8-12 Enter an iSCSI IP address\n\n", - "page_start": 355, - "page_end": 355, - "source_file": "sg247938.pdf" - }, - { - "text": "## Permitted reasons to leave or be outside place of self-isolation\n\n - 13. -(1) During the period of their self-isolation P may not leave or be outside of the place where P is self-isolating except-\n - (a) to travel directly to a port to leave the common travel area;\n - (b) to fulfil a legal obligation, including attending court or satisfying bail conditions or to participate in legal proceedings;\n - (c) to take exercise;", - "page_start": 76, - "page_end": 76, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- /SM590000 Automatic activation: Used when you have the authorization code and the workstation that is being used to activate the license has access to external network. In this case, you must enter only the authorization code. The license key is automatically obtained from the internet and activated in the IBM Spectrum Virtualize system.", - "page_start": 630, - "page_end": 630, - "source_file": "sg247938.pdf" - }, - { - "text": "## Share and collaborate\n\nWith this document saved in OneDrive, you can share it with others. They don't even need Word to open it.\n\nTry it: Select Share , and send a link to this document. (keyboard shortcut - Alt+F+Z or Alt+Z+S)\n\nYou can send the link by typing someone's email address or by copying the link and pasting it into a message or chat. 
If you want them to read the document but not edit it, set their permission to view-only.\n\nIf they don't have Word, the document will open in their web browser, in Word Online.\n\n## Add visuals with pictures from the web\n\n\n\nWord works with Bing to give you access to thousands of pictures you can use in your documents.\n\nTry it: Hit enter after this line to make a blank line:\n\n- 1. With your cursor in the blank space above, go to the Insert tab, select Online Pictures , and then search for something, like puppy clip art .\n- 2. Select the picture you want, and select Insert .", - "page_start": 2, - "page_end": 2, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "## Fill-in User Information\n\nThere are two options to fill in the information of a new user\n\n(a) by entering the data on the same row of the new user (figure 20a) or; (b) by entering the data in the General Properties , Sector and Role boxes (figure 20b, 20c and 20d).\n\n## Figure 20. New user created screen\n\nFigure 21. Users Administration\n\n\n\nFill in the following fields:\n\n -  First Name\n -  Last Name\n -  Name (optional)\n -  Email address\n -  Password (must have 1 capital letter, 1 numeric and 8 characters long)\n -  User Role\n -  Sectors\n -  Change password (tick the box prompts the user to change his/her password)\n\nThe functionality to change password is not fully implemented in this release. Please do not tick the 'Change password' box under General Properties! (See figure 20 b).\n\n -  Enable user (Proceed to section 3.3.2 Disable/Enable User)\n\n## 3.3.2 Disable/Enable User\n\nThis function allows the NFP and PM to activate and/or de-activate users of their country.\n\n -  Log in as NFP or PM\n -  Hover the cursor on the 'Users Management' tab and click on the 'Users Administration' button. 
(see figure 21); this opens the Disable/Enable User screen (figure 22).\n\n\n\n## 3.3.2.1 Enable User\n\nOn the Disable/Enable screen, search for the user whose account should be activated and un-tick the 'Disabled' box. (figure 22a).\n\n## 3.3.2.2 Disable User\n\nOn the Disable/Enable screen, search for the user whose account should be de-activated and tick the 'Disabled' box (figure 22a).", - "page_start": 14, - "page_end": 14, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "- 11. Paragraph 10 does not require P to remain in self-isolation-\n - (a) from any person with whom they were travelling when they arrived in England and who is also self-isolating in the place where P is self-isolating;\n - (b) from any person who is staying in the place where P is self-isolating whose assistance P reasonably requires by reason of-\n - (i) P being a child, or\n - (ii) any disability of P's.\n - 12. Paragraph 10 does not require P to remain in self-isolation from a person ('V') when V is at the place where P is self-isolating in exceptional circumstances such as-", - "page_start": 76, - "page_end": 76, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- - Enable the EnableResignature setting.\n - - Disable the DisallowSnapshotLUN setting.", - "page_start": 409, - "page_end": 409, - "source_file": "sg247938.pdf" - }, - { - "text": "- - Enable the EnableResignature setting.\n - - Disable the DisallowSnapshotLUN setting.", - "page_start": 408, - "page_end": 408, - "source_file": "sg247938.pdf" - }, - { - "text": "For example, in the Dashboard pane, you can open help information that is related to the dashboard-provided information, as shown in Figure 5-19.\n\nFigure 5-19 Example of Dashboard help content\n\n\n\nSelecting the Help Contents option redirects you to the Storwize V7000 IBM Knowledge Center. 
However, it requires internet access from the workstation where the management GUI is started.\n\n## 5.3 System View window\n\nStarting with IBM Spectrum Virtualize release V7.4, the welcome window of the GUI changed from the well-known former Overview/system 3D pane to the new System pane. In V8.2, the system pane was changed again to the new System view pane, and the 3D view was removed, as shown in Figure 5-20.\n\nFigure 5-20 Opening the Overview pane\n\n\n\nNext, we describe the structure of the pane and how to navigate to various system components to manage them more efficiently and quickly.\n\n## 5.3.1 Content-based organization\n\nThe following sections describe several view options within the GUI in which you can filter (to minimize the amount of data that is shown on the window), sort, and reorganize the content of the window.", - "page_start": 164, - "page_end": 164, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf", - "query": "How can I make a channel visible to an invited member ?", - "target_page": 4, - "target_passage": "Channels can be: • Shared (visible to invited team members and external members of your organization who are not on the team)", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "To create a host, complete the following steps:\n\n - 1. Open the host configuration window by clicking Hosts (see Figure 8-3).\n\nFigure 8-3 Open the host window\n\n\n\n - 2. To create a host, click Add Host . If you want to create a Fibre Channel host, continue with 'Creating Fibre Channel hosts' on page 329. To create an iSCSI host, go to 'Creating iSCSI hosts' on page 331.", - "page_start": 349, - "page_end": 349, - "source_file": "sg247938.pdf" - }, - { - "text": "## Creating Fibre Channel hosts\n\nTo create Fibre Channel hosts, complete the following steps:\n\n - 1. Select Fibre Channel . 
The Fibre Channel configuration window opens (see Figure 8-4).\n\nFigure 8-4 Fibre Channel host configuration\n\n", - "page_start": 350, - "page_end": 350, - "source_file": "sg247938.pdf" - }, - { - "text": "Figure 8-6 Host type selection\n\n\n\n - 6. Click Add to create the host object.\n - 7. Click Close to return to the host window. Repeat these steps for all of your Fibre Channel hosts. Figure 8-7 shows the All Hosts window after creating a second host.\n\nFigure 8-7 Hosts view after creating a host\n\n\n\nAfter you complete the adding of Fibre Channel hosts, see Chapter 7, 'Volumes' on page 241 to create volumes and map them to the created hosts.\n\n## Creating iSCSI hosts\n\nWhen creating an iSCSI attached host, consider the following points:", - "page_start": 352, - "page_end": 352, - "source_file": "sg247938.pdf" - }, - { - "text": "\n\n## Chat\n\n## Teams and channels\n\nBy default, your chats will be arranged along the left-hand side of the chat panel, with the most recent messages at the top. You can right-click on any chat and select \"Pin,\" which will keep it at the top of your list for quick access.\n\n\n\nWhen you create group chats you can edit the name of the group by selecting the pen symbol next to the group icon in the chat. This will help you give it context and make it easier to find.\n\nWhen you are invited to a new Team, it will automatically appear on the left panel along with all its associated channels. You can choose to \"show\" the most relevant chanels and \"hide\" the rest.\n\nTeams\n\nGeneral\n\nMarketing\n\nShared Channel\n\nA\n\nteam\n\nis a broad group of people that work together to get something\n\ndone. You can choose who is part of the team, and people can only access\n\nshared content by invitation. 
All teams are created with an associated\n\nGeneral channel that includes all team members by default.\n\nChannels\n\nA\n\nchannel\n\nis a central hub for a specific topic, within the larger team, where\n\npeople can hold focused conversations and organize a library of files.\n\nChannels can be:\n\n· Standard (visible to everyone on the team)\n\n· Private (only visible to select team members)\n\n· Shared (visible to invited team members and external members of your\n\norganization who are not on the team)\n\nCreate a team for your organization with channels for your leadership team, each department, and one just for fun! Tip\n\nAN", - "page_start": 3, - "page_end": 3, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "\n\n## Meeting essentials\n\n\n\n## Join meetings\n\n- From the calendar tab, select the meeting you intend to join, then select join. . 1.\n\n\n\n- A new screen will show up. Here you can choose how you want to appear in the meeting, and your audio preferences. 2.\n- 3. Then select join now. .\n\n## Present in meetings\n\n- Screen share from the Share button at the top of your meeting window. 1.\n- Choose what screen or window you want to share. Don't forget to include audio if you're sharing something with sound. 2.\n- When you are finished, use the share button at the top of your meeting window to stop sharing. 3.\n\n## Meeting controls\n\nWhen you join meetings, a different window will pop-up. These are the controls you need to know:\n\n", - "page_start": 2, - "page_end": 2, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "Tip: Create an alias for the I/O Group port set. This step makes it easier to correctly zone hosts to the correct set of I/O Group ports. 
It also makes host group membership visible in the FC switch configuration.\n\nThe use of this schema provides four paths to one I/O Group for each host, and helps to maintain an equal distribution of host connections on Storwize V7000 ports.\n\nTip: To maximize performance from the host point of view, distribute volumes that are mapped to each host between both I/O Group nodes.", - "page_start": 76, - "page_end": 76, - "source_file": "sg247938.pdf" - }, - { - "text": "Figure 8-25 IBM Spectrum Virtualize Hosts menu\n\n\n\nIn the Hosts → Hosts view, three hosts were created and volumes are mapped to them in our example. If needed, we can now modify these hosts by selecting a host and click Actions , or right-click the host to see the available tasks (see Figure 8-26 on page 346).", - "page_start": 366, - "page_end": 366, - "source_file": "sg247938.pdf" - }, - { - "text": "| Net income ÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | Ì | Ì | Ì | Ì | 237.9 | Ì | Ì | $237.9 |\n| Cash dividendsÏÏÏÏÏÏÏÏÏÏÏÏ | Ì | Ì | Ì | Ì | (54.6) | Ì | Ì | Ì |\n| Issuances of common stock | 2.3 | .1 | 48.8 | Ì | Ì | Ì | Ì | Ì |\n| Issuances of restricted stock and deferred stock units ÏÏ | .1 | Ì | 2.8 | (2.8) | Ì | Ì | Ì | Ì |\n| Amortization of deferred compensation ÏÏÏÏÏÏÏÏÏÏÏ | Ì | Ì | Ì | 1.8 | Ì | Ì | Ì | Ì |\n| Purchases of common stock for treasuryÏÏÏÏÏÏÏÏÏÏÏÏÏ | (9.6) | Ì | Ì | Ì | Ì | (266.1) | Ì | Ì |\n| Change in value of investments, net of tax ÏÏÏ | Ì | Ì | Ì | Ì | Ì | Ì | .1 | .1 |\n| Total comprehensive income | Ì | Ì | Ì | Ì | Ì | Ì | Ì | $238.0 |\n| BALANCE AT DECEMBER 31, 2004ÏÏÏÏÏ | 150.6 | $1.9 | $1,399.4 | $(1.0) | $1,222.6 | $(750.4) | $ Ì | |", - "page_start": 62, - "page_end": 62, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "- 2. A list of all the hosts is displayed. The function icons indicate whether the host is Fibre Channel, iSCSI, or SAS attached. The port details of the selected host are shown to the right. You can add a new host object by clicking Add Host . 
If you click Actions (see Figure 8-51), the tasks that are described in 'Modifying Volume Mappings' on page 346 can be selected.\n\nFigure 8-51 Ports by Host actions\n\n\n\n## Adding a Fibre Channel or iSCSI host port\n\nTo add a host port, complete the following steps:", - "page_start": 383, - "page_end": 383, - "source_file": "sg247938.pdf" - }, - { - "text": "- 2. Enter a host name and the iSCSI initiator name into the iSCSI host IQN field. Click the plus sign ( + ) if you want to add initiator names to one host.\n - 3. If you are connecting an HP-UX or TPGS host, click the Host type menu and then select the correct host type. For our ESX host, we selected VVOL . However, generic can be selected if you are not using VVOLs.", - "page_start": 353, - "page_end": 353, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf", - "query": "How can I notify a collegue mentionned in a chat message in Teams ?", - "target_page": 5, - "target_passage": "Tag a teammate in a message by typing the @ symbol followed by their name. They will receive a special notification calling for their attention.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "\n\n\n\n## Connect through messages\n\nWhether you're in a meeting, channel, or a chat, your messaging box will look the same.\n\n## Compose\n\n - Format your messages, add bullet points, charts or hyperlinks.\n - Mark as important to call attention to specific messages.\n - Attach files to share with your teammates.\n - Include gifs , emojis, stickers to bring lightness to your conversations.\n\n## Respond\n\n - Tag a teammate in a message by typing the @ symbol followed by their name. They will receive a special notification calling for their attention. 
@\n - React to individual messages or quote them in a response.\n\nTip Going into format mode will prevent your message from sending when you hit [Enter], so it's a great way to draft and preview messages before sending them.\n\nTip If you want to revisit an important message later, hover on that message, select the three d , then choose 'Save.' Saved messages will be found under your profile picture dropdown menu.", - "page_start": 4, - "page_end": 4, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "The message format depends on the facility. The system can transmit syslog messages in the following formats:\n\n - - The concise message format provides standard detail about the event.", - "page_start": 745, - "page_end": 745, - "source_file": "sg247938.pdf" - }, - { - "text": "\n\n## Chat\n\n## Teams and channels\n\nBy default, your chats will be arranged along the left-hand side of the chat panel, with the most recent messages at the top. You can right-click on any chat and select \"Pin,\" which will keep it at the top of your list for quick access.\n\n\n\nWhen you create group chats you can edit the name of the group by selecting the pen symbol next to the group icon in the chat. This will help you give it context and make it easier to find.\n\nWhen you are invited to a new Team, it will automatically appear on the left panel along with all its associated channels. You can choose to \"show\" the most relevant chanels and \"hide\" the rest.\n\nTeams\n\nGeneral\n\nMarketing\n\nShared Channel\n\nA\n\nteam\n\nis a broad group of people that work together to get something\n\ndone. You can choose who is part of the team, and people can only access\n\nshared content by invitation. 
All teams are created with an associated\n\nGeneral channel that includes all team members by default.\n\nChannels\n\nA\n\nchannel\n\nis a central hub for a specific topic, within the larger team, where\n\npeople can hold focused conversations and organize a library of files.\n\nChannels can be:\n\n· Standard (visible to everyone on the team)\n\n· Private (only visible to select team members)\n\n· Shared (visible to invited team members and external members of your\n\norganization who are not on the team)\n\nCreate a team for your organization with channels for your leadership team, each department, and one just for fun! Tip\n\nAN", - "page_start": 3, - "page_end": 3, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "The message format depends on the facility. The system can transmit syslog messages in the following formats:\n\n - - The concise message format provides standard detail about the event.\n - - The expanded format provides more details about the event.\n - /SM590000 Event Notifications\n\nConsider the following points about event notifications:\n\n - - Select Error if you want the user to receive messages about problems, such as hardware failures, that must be resolved immediately.\n\nImportant: Browse to Recommended Actions to run the fix procedures on these notifications.", - "page_start": 186, - "page_end": 186, - "source_file": "sg247938.pdf" - }, - { - "text": "An event notification can be sent to one or more email addresses. This mechanism notifies individuals of problems. 
Individuals can receive notifications wherever they have email access, including mobile devices.\n\n - /SM590000 Cloud Call Home\n\nCloud services for Call Home is the optimal transmission method for error data because it ensures notifications are delivered directly to the IBM support center.", - "page_start": 731, - "page_end": 731, - "source_file": "sg247938.pdf" - }, - { - "text": "Consider the following points about event notifications:\n\n - - Select Error if you want the user to receive messages about problems, such as hardware failures, that must be resolved immediately.\n\nImportant: Browse to Recommended Actions to run the fix procedures on these notifications.\n\n - - Select Warning if you want the user to receive messages about problems and unexpected conditions. Investigate the cause immediately to determine any corrective action.\n\nImportant: Browse to Recommended Actions to run the fix procedures on these notifications.\n\n - - Select Info if you want the user to receive messages about expected events. No action is required for these events.\n\nTo remove an SNMP server, click the Minus sign ( -). To add another SNMP server, click the Plus sign ( + ).\n\n## Syslog notifications\n\nThe syslog protocol is a standard protocol for forwarding log messages from a sender to a receiver on an IP network. The IP network can be IPv4 or IPv6. The system can send syslog messages that notify personnel about an event. You can use the Syslog pane to view the Syslog messages that are sent by the IBM Storwize V7000. To view the Syslog configuration, use the System pane and point to Settings and click Notification → Syslog (see Figure 5-55).\n\nFigure 5-55 Setting Syslog messages\n\n", - "page_start": 185, - "page_end": 185, - "source_file": "sg247938.pdf" - }, - { - "text": "Figure 13-63 Add SNMP Server\n\n\n\n## 13.7.4 Syslog notifications\n\nThe syslog protocol is a standard protocol for forwarding log messages from a sender to a receiver on an IP network. 
The IP network can be IPv4 or IPv6. The system can send syslog messages that notify personnel about an event.\n\nYou can configure a syslog server to receive log messages from various systems and store them in a central repository by entering the following information (see Figure 13-64 on page 725):", - "page_start": 745, - "page_end": 745, - "source_file": "sg247938.pdf" - }, - { - "text": "The facility determines the format for the syslog messages. The facility can be used to determine the source of the message.\n\n - /SM590000 Message Format\n\nThe message format depends on the facility. The system can transmit syslog messages in the following formats:", - "page_start": 745, - "page_end": 745, - "source_file": "sg247938.pdf" - }, - { - "text": "- 14. -(1) For the purposes of regulation 13(2)(a) (pre-booking information requirement), the required information-\n - (a) in the case of online bookings-\n - (i) must be displayed prominently on an operator's website or mobile application,\n - (ii) is the information specified in Part 1 of Schedule 12 (information for passengers) and a hyperlink to each of the relevant websites;\n - (b) in the case of telephone bookings-\n - (i) must be provided orally,\n - (ii) is the information specified in Part 1 of Schedule 12;\n - (c) in the case of in-person bookings-\n - (i) must be provided orally or in writing,\n - (ii) where provided orally, is the information specified in Part 1 of Schedule 12,\n - (iii) where provided in writing, is a written notice which informs passengers of the requirements to provide information, to possess notification of a negative test result, to book and undertake tests and to self-isolate in regulations 3, 4, 6 and 9.\n - (2) For the purposes of regulation 13(2)(b) (pre-departure information requirement), the required information-\n - (a) must be provided by text message, push notification, email or orally;\n - (b) where provided by text message or push notification, is text which-\n - (i) informs 
passengers of the requirements to provide information in regulation 3 and that penalties apply for failure to comply with those requirements,\n - (ii) includes a hyperlink to https://www.gov.uk/provide-journey-contact-details-beforetravel-uk,\n - (iii) informs passengers of the requirement to possess notification of a negative test result in regulation 4, and\n - (iv) informs passengers of the requirement to book and undertake tests in regulation 6;", - "page_start": 19, - "page_end": 19, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "These options are described next.\n\n## 5.10.1 Notifications menu\n\nIBM Storwize V7000 can use SNMP traps, syslog messages, and Call Home email to notify you and the IBM Support Center when significant events are detected. Any combination of these notification methods can be used simultaneously.\n\nNotifications are normally sent immediately after an event is raised. However, events can occur because of service actions that are performed. If a recommended service action is active, notifications about these events are sent only if the events are still unfixed when the service action completes.\n\n## SNMP notifications\n\nSNMP is a standard protocol for managing networks and exchanging messages. The system can send SNMP messages that notify personnel about an event. You can use an SNMP manager to view the SNMP messages that are sent by IBM Storwize V7000.\n\nTo view the SNMP configuration, use the System window. 
Point to the Settings icon and click Notification → SNMP (see Figure 5-54).\n\nFigure 5-54 Setting SNMP server and traps\n\n", - "page_start": 184, - "page_end": 184, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "Botswana-constitution.pdf", - "query": "What are the 3 prerequisites to be elligible as president of Botswana ?", - "target_page": 18, - "target_passage": "A person shall be qualified for election as President if, and shall not be qualified unless, he or she- (a) is a citizen of Botswana by birth or descent; (b) has attained the age of 30 years; and (c) is qualified to be elected as a Member of the National Assembly", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "To successfully perform the configuration backup, the following prerequisites must be met:", - "page_start": 704, - "page_end": 704, - "source_file": "sg247938.pdf" - }, - { - "text": "- 4. When the wizard starts, you are prompted to verify the restrictions and prerequisites that are listed in Figure 9-4 on page 390. 
Address the following restrictions and prerequisites:\n - - Restrictions:", - "page_start": 410, - "page_end": 410, - "source_file": "sg247938.pdf" - }, - { - "text": "- (3) A list of the candidates nom inated for election by the P resident and the E lected M em bers of the N ational A ssem bly under the foregoing provisions of this paragraph shall be prepared, and each E lected M em ber of the A ssem bly shall be entitled to vote-\n - ( a ) in the case of a general election, for four candidates; and\n - ( b ) in the case of a by-election, for one candidate,\n - on the list so constituted.", - "page_start": 55, - "page_end": 55, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "For more information about the Storage Migration prerequisites, see 9.1.2, 'Prerequisites' on page 387.\n\nIf all restrictions are satisfied and prerequisites are met, select all of the options and click Next , as shown in Figure 9-4.\n\nFigure 9-4 Restrictions and prerequisites confirmation\n\n", - "page_start": 411, - "page_end": 411, - "source_file": "sg247938.pdf" - }, - { - "text": "- 68 Eurofound, 2021: Seasonal worker\n\n'A seasonal worker is defined in Article 3(b) of Directive 2014/36/EU on the conditions of entry and stay of thirdcountry nationals for the purpose of employment as 'a third-country national who retains his or her principal place of residence in a third country and stays legally and temporarily in the territory of a Member State to carry out an activity dependent on the passing of the seasons, under one or more fixed-term work contracts concluded directly between that third-country national and the employer established in that Member State.' 
European Parliament and the Council: Directive 2014/36/EU of 26 February 2014 on the conditions of entry and stay of third-country nationals for the purpose of employment as seasonal workers.\n\n69 Action Plan EU: Seasonal workers are a group of mobile workers who retain their main place of residence in their home country and move temporarily to another Member State to carry out an activity dependent on the passing of the seasons, here\n\nArticle 2.1. 'This Directive shall apply to third-country nationals who reside outside the territory of the Member", - "page_start": 142, - "page_end": 142, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## Annex 1: Non -Annex I (NAI) Parties\n\n| 1 | Afghanistan | AFG |\n|-----|----------------------------------------|-------|\n| 2 | Albania | ALB |\n| 3 | Algeria | DZA |\n| 4 | Andorra | AND |\n| 5 | Angola | AGO |\n| 6 | Antigua and Barbuda | ATG |\n| 7 | Argentina | ARG |\n| 8 | Armenia | ARM |\n| 9 | Azerbaijan | AZE |\n| 10 | Bahamas | BHS |\n| 11 | Bahrain | BHR |\n| 12 | Bangladesh | BGD |\n| 13 | Barbados | BRB |\n| 14 | Belize | BLZ |\n| 15 | Benin | BEN |\n| 16 | Bhutan | BTN |\n| 17 | Bolivia | BOL |\n| 18 | Bosnia and Herzegovina | BIH |\n| 19 | Botswana | BWA |\n| 20 | Brazil | BRA |\n| 21 | Brunei Darussalam | BRN |\n| 22 | Burkina Faso | BFA |\n| 23 | Burundi | BDI |\n| 24 | Cambodia | KHM |\n| 25 | Cameroon | CMR |\n| 26 | Cape Verde | CPV |\n| 27 | Central African Republic | CAF |\n| 28 | Chad | TCD |\n| 29 | Chile | CHL |\n| 30 | China | CHN |\n| 31 | Colombia | COL |\n| 32 | Comoros | COM |\n| 33 | Congo | COG |\n| 34 | Cook Islands | COK |\n| 35 | Costa Rica | CRI |\n| 36 | Cote d'Ivoire | CIV |\n| 37 | Cuba | CUB |\n| 38 | Democratic People's Republic of Korea | PRK |\n| 39 | Democratic Republic of the Congo | COD |\n| 40 | Djibouti | DJI |\n| 41 | Dominica | DMA |", - "page_start": 44, - "page_end": 44, - "source_file": "maiis-user-manual.pdf" - }, - { - 
"text": "Colombia\n\nDemocratic Republic of the Congo\n\nEcuador\n\nEswatini\n\nEthiopia\n\nFrench Guiana\n\nGuyana\n\nIndia\n\nKenya\n\nLesotho\n\nMalawi\n\nThe Maldives\n\nMozambique\n\nNamibia\n\nNepal\n\nOman\n\nPakistan\n\nPanama\n\nParaguay\n\nPeru\n\nPhilippines\n\nQatar\n\nRwanda\n\nSeychelles\n\nSomalia\n\nSouth Africa\n\nSuriname\n\nTanzania\n\nTurkey\n\nUnited Arab Emirates\n\nUruguay\n\nVenezuela\n\nZambia\n\nZimbabwe", - "page_start": 32, - "page_end": 32, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- ( d ) if he or she is elected as S peaker;\n - ( e ) if he or she is rem oved from office by a resolution of the A ssem bly supported by the votes of not less than tw o-thirds of all the M em bers of the A ssem bly; or ( f ) w hen the A ssem bly first sits after any dissolution of P arliam ent.\n\n## 61. Q ualifications for election to N ational A ssem bly\n\nSubject to the provisions of section 62 of this C onstitution, a person shall be qualified to be elected as a M em ber of the N ational A ssem bly if, and shall not be qualified to be so elected unless-\n\n - ( a ) he or she is a citizen of B otsw ana;\n - ( b ) he or she has attained the age of 18 years;\n - ( c ) he or she is qualified for registration as a voter for the purposes of the election of the E lected M em bers of the N ational A ssem bly and is so registered; and\n - ( d ) he or she is able to speak, and, unless incapacitated by blindness or other physical cause, to read E nglish w ell enough to take an active part in the proceedings of the A ssem bly.\n\n## 62. D isqualifications for m em bership of N ational A ssem bly\n\n(1) N o person shall be qualified to be elected as a M em ber of the N ational Assem bly w ho-", - "page_start": 27, - "page_end": 27, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "\n\n## Chapter\n\n## 4 STABILITY AND CONTROL\n\nAn aircraft must have satisfactory handling qualities in addition to adequate performance. 
'lYhe aircraft must have adequate stability to maintain a uniform flight condition and recover from the various disturbing influences. It is necessary to provide sufficient stability to minimize the workload of the pilot. Also, the aircraft must have proper response to the controls so that it may achieve the inherent performance. There are certain conditions of\n\nflight which provide the most critical requirements of stability and control and these conditions must be understood and respected to accomplish safe and efficient operation of the aircraft.\n\n## DEFINITIONS\n\n## STATIC STABILITY\n\nAn aircraft is in a state of equilibrium when the sum of all forces and all moments is equal", - "page_start": 260, - "page_end": 260, - "source_file": "00-80T-80.pdf" - }, - { - "text": "\n\n## First National Bank of Abilene\n\n## Main Office\n\n400 Pine Street Abilene, Texas 79601\n\nChartered 1890\n\n## Branches\n\n4400 Buffalo Gap Road\n\nAbilene, Texas 79606\n\n4350 Southwest Drive Abilene, Texas 79606\n\n920 N. Willis\n\nAbilene, Texas 79603\n\n3300 S. 14th Street\n\nAbilene, Texas 79605\n\n1010 N. Judge Ely Blvd.\n\nAbilene, Texas 79601\n\n701 Pine Street\n\nAbilene, Texas 79601\n\n1345 Barrow Street Abilene, Texas 79605\n\n## Senior Officers\n\nF. Scott Dueser\n\nChairman of the Board\n\nChuck A. Cowell\n\nPresident and Chief Executive Officer\n\nRon Fogle\n\nExecutive Vice President, Commercial Loans\n\nRobert S. Patterson\n\nExecutive Vice President and\n\nSenior Trust Officer\n\nJohn Prince\n\nExecutive Vice President, Personal Loans\n\nChuck A. Cowell President and Chief Executive Officer\n\n\n\nMario A. Luppino\n\nExecutive Vice President, Marketing and Retail\n\nGary Tucker, CDP\n\nExecutive Vice President and Chief Information Officer\n\nLeo Dennis\n\nExecutive Vice President, Chief Financial Officer and Cashier\n\n## Directors\n\nChuck A. Cowell\n\nPresident and Chief Executive Officer\n\nJ. Michael Alexander\n\nPresident, James M. 
Alexander & Co.\n\nTucker S. Bridwell\n\nPresident and Chief Executive Officer, Mansefeldt Investments, Inc.\n\nJoseph E. Canon\n\nExecutive Director, Dodge Jones Foundation\n\nDavid Copeland\n\nPresident, Shelton Family Foundation\n\nJoe Crawford\n\nPresident, Abilene Aero, Inc.\n\nF. Scott Dueser\n\nFirst Financial Bankshares, Inc.\n\nCharles Ezzell\n\nInvestments\n\nAllan D. Frizzell\n\nExecutive Vice President,\n\nEnrich Oil Corporation\n\nRaymond A. McDaniel, Jr. Investments\n\n| IN THOUSANDS | December 31, 2002 | December 31, 2001 |\n|--------------------------|---------------------|---------------------|\n| Assets | $705,468 | $670,959 |\n| Loans | 353,564 | 344,341 |\n| Deposits | 624,262 | 598,310 |\n| Equity | 68,670 | 63,276 |\n| Net Income | 14,277 | 13,051 |\n| Trust Assets | 740,745 | 722,504 |\n| Return on Average Assets | 2.12% | 1.98% |\n| Return on Average Equity | 21.05 | 20.19 |\n\nTaylor County Deposit Market Share\n\n## Abilene\n\n\n\n\n\n6\n\nBynum Miers\n\nRancher\n\nWilliam D. Minter\n\nVice President, CameraMouse\n\nStanley Morris, Jr. Investments\n\nKenneth T. Murphy First Financial Bankshares, Inc.\n\nJames Parker\n\nPresident, Parker Properties, Inc.\n\nJack D. Ramsey, M.D. Physician\n\nDian Graves Stai Investments\n\nMichael C. Waters, F.A.C.H.E.\n\nPresident, Hendrick Health System\n\n## Advisory\n\nBob J. 
Surovik McMahon, Surovik, Suttle, Buhrmann, Hicks and Gill, P.C.\n\nSteve Suttle\n\nMcMahon, Surovik, Suttle, Buhrmann, Hicks and Gill, P.C.", - "page_start": 15, - "page_end": 15, - "source_file": "NASDAQ_FFIN_2002.pdf" - } - ] - }, - { - "references": { - "source_file": "Botswana-constitution.pdf", - "query": "What is the condition to be allowing to access the position of Director of public prosecution in Botswana ?", - "target_page": 25, - "target_passage": "A person shall not be qualified to be appointed to the Office of Director of Public Prosecutions unless he or she is qualified to be appointed to the Office of a Judge of the High Court", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "respect to a proposed appointee to the Board and the workings of the Board and its Committees are conveyed in interviews with the Chairman and induction procedures include access to appropriate executives in relation to details of the business of the Company.\n\nThe Chairman of the Board is the Chairman of the Nomination Committee. The current members of the Nomination Committee, all of whom are independent non-executive Directors, are Mr S Gerlach (Chairman), Mr P C Barnett and Mr G W McGregor.\n\n## 3. 
REVIEW OF BOARD AND EXECUTIVE PERFORMANCE\n\nThe Board Guidelines provide that:\n\n - · non-executive Directors are to be appointed on the basis that their nomination for re-election as a Director is subject to review and support by the Board;\n - · there should be appropriate circumstances justifying reelection after a specified period of service as a Director; and\n - · the contribution of the Board and of individual Directors is the subject of formal review and discussion on a biennial and annual basis, respectively.\n\nAs the biennial review of the Board and of its Committees was conducted by an independent consultant in 2003, no formal performance appraisal of the Board was conducted in 2004.\n\nPerformance evaluation of key executives is undertaken on a quarterly and annual basis by the CEO and summarised in presentation to the\n\nRemuneration Committee of the\n\nBoard, both specifically for determination of remuneration and generally in relation to management succession planning for review by the Board.\n\n## 4. INDEMNITY, ACCESS TO INFORMATION AND INDEPENDENT PROFESSIONAL ADVICE\n\nInformation in respect to indemnity and insurance arrangements for Directors and senior executives appears in the Directors' Statutory Report on page 49 of this Annual Report.\n\nThe Board Guidelines set out the circumstances and procedures pursuant to which a Director, in furtherance of his or her duties, may seek independent professional advice at the Company's expense. 
Those procedures require prior consultation with, and approval by, the Chairman and assurances as to the qualifications and reasonableness of the fees of the relevant expert and, under normal circumstances, the provision of the expert's advice to the Board.\n\nPursuant to a deed executed by the Company and each Director, a Director also has the right to have access to all documents which have been presented to meetings of the Board or to any Committee of the Board or otherwise made available to the Director whilst in office. This right continues for a term of seven years after ceasing to be a Director or such longer period as is necessary to determine relevant legal proceedings that commenced during that term.\n\n## 5. REMUNERATION\n\nThe role, responsibilities and composition of the Remuneration Committee and details of\n\nthe Company's remuneration objectives and principles, nonexecutive Director remuneration and executive remuneration are set out on pages 37 to 40 of this Annual Report in the Directors' and Executives' Remuneration section, as well as in the Directors' Statutory Report and in Notes 18 and 26 of the Financial Statements.\n\nDetails of the nature and amount of the remuneration of:\n\n - · the Directors; and\n - · the Specified Executives;\n\nare set out on pages 37 to 40 of this Annual Report.\n\n## 6. 
AUDIT COMMITTEE", - "page_start": 32, - "page_end": 32, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "com m unication be to the public generally or to any person or class of persons) and freedom from interference w ith his or her correspondence.\n\n - (2) N othing contained in or done under the authority of any law shall be held to be inconsistent w ith or in contravention of this section to the extent that the law in question m akes provision-\n - ( a ) that is reasonably required in the interests of defence, public safety, public order, public m orality or public health; or\n - ( b ) that is reasonably required for the purpose of protecting the reputations, rights and freedom s of other persons or the private lives of persons concerned in legal proceedings, preventing the disclosure of inform ation received in confidence, m aintaining the authority and independence of the courts, regulating educational institutions in the interests of persons receiving instruction therein, or regulating the technical adm inistration or the technical operation of telephony, telegraphy, posts, w ireless, broadcasting or television; or\n - ( c ) that im poses restrictions upon public officers, em ployees of local governm ent bodies, or teachers,\n\nand except so far as that provision or, as the case m ay be, the thing done under the authority thereof is show n not to be reasonably justifiable in a dem ocratic society.\n\n## 13. Protection of freedom of assem bly and association", - "page_start": 11, - "page_end": 11, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "## 26. 
Specified Director and Specified Executive Disclosures\n\n## (a) Specified Directors\n\nThe following persons were Specified Directors of Santos Ltd during the financial year:\n\nBarnett, Peter Charles Conroy, Francis John Ellice-Flint, John Charles Gerlach, Stephen Harding, Richard Michael McGregor, Graeme William O'Leary, Michael Anthony Sloan, Judith\n\nNon-executive Director\n\nNon-executive Director (retired 14 December 2004)\n\nManaging Director\n\nChairman and non-executive Director\n\nNon-executive Director (appointed 1 March 2004)\n\nNon-executive Director\n\nNon-executive Director\n\nNon-executive Director\n\n## (b) Specified Executives of the Santos Group\n\nThe following persons were the six executives with the greatest authority for the strategic direction and management of the Santos Group ('Specified Executives') during the financial year:\n\n| Name | Position |\n|--------------------------|--------------------------------------------------------------|\n| Gouadain, Jacques Elie | Vice President - Geoscience and New Ventures |\n| Moore, Paul Derek | Vice President - Development Projects and Technical Services |\n| Wasow, Peter Christopher | Chief Financial Officer |\n| Wilkinson, Richard John | Vice President - Gas Marketing and Commercialisation |\n| Wood, Bruce James | Vice President - Strategic Projects |\n| Young, Jonathon Terence | Executive Vice President - Operations |\n\nAll Specified Executives are employed by Santos Ltd.\n\n## (c) Remuneration of Specified Directors and Specified Executives\n\nThe Remuneration Committee of the Board is responsible for reviewing the remuneration policies and practices of the Company including: the compensation arrangements for the Managing Director and senior management; the Company's superannuation arrangements; employee share and option plans; and the fees for non-executive Directors.\n\n## Non-executive Directors\n\nWithin the aggregate amount (being $1,500,000 per year) approved by shareholders at the 
Annual General Meeting of the Company held on 7 May 2004, the fees of the Chairman and non-executive Directors are set at levels which represent the responsibilities of and the time commitments provided by those Directors in discharging their duties. Regard is also had to the level of fees payable to non-executive Directors of comparable companies. Non-executive Directors' fees were increased effective 1 July 2004. Non-executive Directors, other than the Chairman, who are members of Board committees receive additional fees. Non-executive Directors may not participate in any of the Company's bonus, share or option plans.\n\nThe Directors determined to cease retirement allowances to non-executive Directors effective from 30 June 2004. Non-executive Directors appointed before 1 January 2004 are entitled to receive benefits accrued to that date, payable upon ceasing to hold office as a Director. The retirement payment (inclusive of superannuation guarantee charge entitlements) is made pursuant to an agreement entered into with each non-executive Director on terms approved by shareholders at the 1989 Annual General Meeting. 
These benefits have been fully provided for by the Company.\n\n## Executive Directors\n\nThe Managing Director, Mr J C Ellice-Flint, is currently the only Executive Director.\n\nMr J C Ellice-Flint has an executive service agreement with the Company which continues until terminated by either party in accordance with the agreement.\n\nHis remuneration comprises a base salary reviewed annually and an annual bonus calculated on a formula that includes components to measure the growth of profitability, exploitable reserves and share price.", - "page_start": 76, - "page_end": 76, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "In particular, the Code of Conduct requires that Directors and employees:\n\n - · avoid conflicts of interest, and ensure that all business transactions are conducted solely in the best interests of the Company;\n - · are aware of, and comply with laws and regulations relevant to the Company's operations including environmental and trade laws both in Australia and abroad;\n - · protect any Company assets under their control and not use Company assets for personal purposes, without prior Company approval;\n - · do not disclose or use in any improper manner confidential information about the Company, its customers or affairs; and\n - · respect the privacy of others and comply with the Company's Privacy Policy.\n\nThe standards of conduct expected of Santos staff, including those directed at the broader stakeholder constituency of shareholders, employees, customers and the community, are also recorded in separate guidelines and policies relating to dealing in securities (refer to the next section), the\n\nenvironment, occupational health and safety and human resources. 
Further, a Code of Conduct, based on that developed by the Group of 100 (an association of senior finance executives from Australia's business enterprises) applies to the CFO and all other officers and employees within the finance function of the Company who have the opportunity to influence the integrity, direction and operation of the Company and its financial performance.\n\nWhere applicable, the guidelines and policies are incorporated by reference in individual contracts of employment or expressly set out in those contracts, including provisions relating to: conflicts of interest; confidentiality and restrictions against use and dissemination of information; use of Company assets; perquisites, tender processes, benefits and contact with suppliers; employment opportunity practices; privacy; training and further education support; and smoking, alcohol and drugs.\n\n## 10. GUIDELINES FOR DEALING IN SECURITIES\n\nThe Company has developed specific written guidelines that prohibit Directors and executives (and their respective associates) from acquiring, selling or otherwise trading in the Company's shares if they possess material price-sensitive information which is not in the public domain.\n\nPursuant to these guidelines, no person may deal in securities while they are in the possession of price sensitive information. In other circumstances, Directors must inform and receive", - "page_start": 34, - "page_end": 34, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## DIRECTORS' AND SENIOR EXECUTIVES' REMUNERATION\n\nThe Remuneration Committee is responsible for reviewing the remuneration policies and practices of the Company including: the compensation arrangements for the Managing Director and senior management; the Company's superannuation arrangements; employee share and option plans; executive and senior management performance review, succession planning, and, within the aggregate amount approved by shareholders, the fees for nonexecutive Directors. 
The role of the Remuneration Committee is documented in a Charter, approved by the Board, which Charter prescribes that the Committee must consist of at least three non-executive Directors. The Committee has access to independent advice and comparative studies on the appropriateness of remuneration arrangements.\n\nThe current members of the Remuneration Committee, all of whom are independent nonexecutive Directors, are: Professor J Sloan (Chairperson), Mr S Gerlach and Mr P C Barnett.\n\n## NON-EXECUTIVE DIRECTORS\n\nWithin the aggregate amount (being $1,500,000 per year) approved by shareholders at the Annual General Meeting of the Company held on 7 May 2004, the fees of the Chairman and non-executive Directors are set at levels which represent the responsibilities of and the time commitments provided by those Directors in discharging their duties. Regard is also had to the level of fees payable to non-executive Directors of comparable companies. As previously announced,\n\nnon-executive Directors' fees were increased effective 1 July 2004. Non-executive Directors, other than the Chairman, who are members of Board Committees receive additional fees. Nonexecutive Directors may not participate in any of the Company's bonus, share or option plans.\n\nThe Directors determined to cease retirement allowances to nonexecutive Directors effective from 30 June 2004. Non-executive Directors appointed before 1 January 2004 are entitled to receive benefits accrued to that date, payable upon ceasing to hold office as a Director. The retirement payment (inclusive of superannuation guarantee charge entitlements) is made pursuant to an agreement entered into with each non-executive Director on terms approved by shareholders at the 1989 Annual General Meeting. 
These benefits have been fully provided for by the Company.\n\n## EXECUTIVE DIRECTORS\n\nThe Managing Director, Mr J C Ellice-Flint, is currently the only Executive Director, he being appointed a Director on 19 December 2000.\n\nMr J C Ellice-Flint has an executive service agreement with the Company which continues until terminated by either party in accordance with the agreement. Termination arrangements relating to the Managing Director were agreed in advance of his appointment, and those relating to the equity component of his remuneration were notified at the time of the appointment.\n\nHis remuneration comprises a base salary reviewed annually and an\n\nannual bonus calculated on a formula that includes components to measure the growth of profitability, exploitable reserves and share price.\n\nHe also has an entitlement to 1,000,000 Restricted Shares, details of which are described in note 18(h) to the financial statements and holds 3,000,000 options under the Santos Executive Share Option Plan.\n\nIf the Company terminates Mr J C Ellice-Flint's appointment without cause, the Company may at its option, in lieu of part or all of the notice period of 24 months, pay to him an amount equal to a proportion or multiple of his annual base salary and the current year's potential bonus (excluding the application of any performance condition) at the time at which notice is given.\n\n## SENIOR EXECUTIVES (a) Remuneration Objectives and Principles\n\nThe objectives of the Company's remuneration policy are to attract, retain and motivate appropriately qualified and experienced executives capable of discharging their respective responsibilities to enable the Company to achieve its business strategy.", - "page_start": 38, - "page_end": 38, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## 7.2 Reserves Committee\n\nDuring July 2014, the Board established a Reserves Committee to assist the Board in monitoring:\n\n - · The integrity of the Company's oil, natural gas, 
and natural gas liquid reserves (the 'Reserves');\n - · The independence, qualifications and performance of the Company's independent reservoir engineers; and\n - · The compliance by the Company with legal and regulatory requirements.\n\nThe Reserves Committee consists of three members, H W Holcombe (chairman), M D Hannell, and N Martin, all whom are independent Non-Executive Directors. Formal minutes are kept of each meeting and submitted to the Board for review.\n\nThe Reserves Committee Charter is available in the corporate governance section of Sundance's website.\n\n## Principle 8: Remunerate Fairly and Responsibly\n\n## 8.1 Remuneration and Nominations Committee\n\nThe Remuneration and Nominations Committee has three members, M D Hannell (chairman), D Hannes and H W Holcombe, all whom are independent Non-Executive Directors, and reports its recommendations to the Board for approval. The Committee determines remuneration levels of senior staff on an individual basis. Advice is sought from an independent consultant based in the U.S.\n\nThe remuneration of Non-Executive Directors is structured separately from that of the executive Director and senior executives. The Remuneration Report at pages 28 to 43 of this Annual Report sets out details of the Company's policies and practices for remunerating Directors (Executive and Non-Executive) and Key Management Personnel.\n\nThe Remuneration and Nominations Committee Charter is available in the corporate governance section of Sundance's website.", - "page_start": 54, - "page_end": 54, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "What is, exactly, Public Data? A definition that is accepted almost implicitly is \"data that is of public interest, that belongs to the whole community, data that every citizen is surely entitled to know and use\" . 
This definition is so generic that accepting it together with the assumption that all such data should be open as preached by the Open Data movement (online, as soon as possible, in machine readable format with an open license etc...) doesn't create any particular problem or conflict.\n\nReal problems however start as it has happened all too often so far, whenever we assume more or less consciously that \"Public Data\" in the sense defined above and data directly produced by Governments and Public Administrations, that is what's normally called PSI (Public Sector Information) are the same thing.\n\nThere is no doubt that Governments and Public Administrations produce huge quantities of Public Data. But this is an age of privatization of many public services, from transportation to healthcare, energy and water management. This is an age in which many activities with potentially very serious impacts on whole communities, like processing of hazardous substances or toxic waste, happen outside Public Administrations. The paradox is that, as Sasaki put it, this increased privatization is happening in the very same period in which \" we are observing a worldwide diffusion of access to information laws that empower citizens to hold government agencies accountable.\"\n\nIn such a context, \"Public Data\"is critical just because it is a much bigger set of data than what constitutes traditional, official PSI. 
\"Public Data\" includes all that information plus the much bigger amount of data describing and measuring all the activities of private companies, from bus timetables to packaged food ingredients, aqueducts performances and composition of fumes released in the atmosphere, that have a direct impact on the health and rights of all citizens of the communities affected by the activities of those companies.\n\nAre such data \"Public\" today, in the sense defined at the beginning of this paragraph, that is something every citizen has the right to know without intermediaries or delegates, or not? Should they be public? If yes, shouldn't law mandate that all such data be Open (that is, published online as soon as possible, in machine readable format with an open license etc...) just like, for example, the budget of some Ministry? Answering these questions may be one of the biggest challenges for the Open Data community, and for society as a whole, in the next years.\n\nHere are, in order to facilitate reflection on this issue, a few recent, real world examples of \"Public Data\" that are not PSI, and of the impacts of their lack of openness.", - "page_start": 23, - "page_end": 23, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "Sundance's Board of Directors currently consists of one Managing Director based in the US, three Non-Executive Directors based in Australia, and one Non-Executive Director based in the US. All of the Directors are shareholders of the Company. At all times during the fiscal year 2014, all four of the Non-Executive Directors were independent. Sundance considers an independent director to be a non-executive director who is not a member of management and who is free of any business or other relationship that could materially interfere with, or could reasonably be perceived to materially interfere with, the independent exercise of their judgement. 
Sundance believes that its current Board composition is appropriate at this time in the Company's evolution. Sundance will continue to address the appropriate structure and composition of the Board over time.\n\nThe composition of the Board at the date of this report is:\n\nM D Hannell\n\nChairman, Independent Non-Executive Director\n\nE McCrady\n\nManaging Director and Chief Executive Officer\n\nN Martin\n\nIndependent Non-Executive Director\n\nD Hannes\n\nIndependent Non-Executive Director\n\nW Holcombe\n\nIndependent Non-Executive Director\n\nDirectors can have access, in appropriate circumstances, to independent professional advice at the Company's expense. It is the continuing practice for the four Non-Executive Directors to confer from time to time without the Executive Director being present.", - "page_start": 49, - "page_end": 49, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "\n\nSenior Management\n\n## Brett Dunstone\n\nDip. Catering and Hotel Management - William Angliss College, B.Bus. Victoria University (part complete)\n\n## General Manager - Human Resources\n\nBrett Dunstone joined Kingsgate in December 2012 and has over 25 years experience in senior human resource management roles across a diverse industry portfolio. Brett was formerly head of Human Resources for Crown Casino, Melbourne, the Myer group, key Village Roadshow entities and head of Employee Relations for the Coles Myer group. Brett has experience in supporting both large and emerging resource company development projects locally and overseas (BHP Billiton, Woodside, Equinox Minerals and Chalice Gold).\n\n## Michael Monaghan\n\nDip Eng (Mining) Dip Business MAusIMM MAICD SME\n\n## Chief Operating Officer and General Manager - Akara Resources PCL\n\nMike Monaghan joined Kingsgate as the General Manager of Chatree Gold Mine in October 2012. 
He is a mining engineer with 28 years of management experience in both underground and open cut opeartions across a number of commodities as well as commissioning, mine management, turnaround management and environmental and safety compliance in Australia, Africa and Europe. Mike was most recently Mining Manager at Geita Gold mine in Tanzania for AngloGold Ashanti Limited. Prior to that he held General Manager and Mining Manager positions at Etruscan Resources Youga Gold Mine in Burkina Faso and Red back Mining's Chirano Gold Mine in Ghana.\n\n\n\n## Pakorn Sukhum\n\nBSc (Hons) University of London, UK MBA Sasin Graduate Institute of Business Administration Thailand\n\n## Chief Executive Officer Akara Resources PCL\n\nPakorn Sukhum joined the management team of Akara Resources PCL as Chief Executive Officer at the end of 2009. He brings to Akara over 24 years of industrial commercial managerial experience in various industries such as metallurgy, chemicals and ceramics in international and domestic markets of Thailand, having held senior management positions in both Thai and Multinational joint venture companies such as Basell Poyolefins, Bayer AG as well as Padeang Industry of Thailand. His major contributions and responsibilities have ranged from project management, commercial marketing and sales to business development.", - "page_start": 41, - "page_end": 41, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## II.13.3. 
Exclusive rights\n\nThe Contracting Authority acquires the following exclusive rights:\n\n - (a) reproduction: the right to authorise or prohibit direct or indirect, temporary or permanent reproduction of the results by any means (mechanical, digital or other) and in any form, in whole or in part;\n - (b) communication to the public: the exclusive right to authorise or prohibit any display, performance or communication to the public, by wire or wireless means, including the making available to the public of the results in such a way that members of the public may access them from a place and at a time individually chosen by them; this also includes the communication on Internet and broadcasting by cable or by satellite;\n - (c) distribution: the exclusive right to authorise or prohibit any form of distribution of results or copies of the results to the public, by sale or otherwise;\n - (d) rental: the exclusive right to authorise or prohibit rental or lending of the results or of copies of the results ;\n - (e) adaptation: the exclusive right to authorise or prohibit any modification of the results ;\n - (f) translation: the exclusive right to authorise or prohibit any translation, adaptation, arrangement, creation of derivative works based on the results , and any other alteration of the results , subject to the respect of moral rights of authors, where applicable;\n - (g) where the results are or include a database: the exclusive right to authorise or prohibit the extraction of all or a substantial part of the contents of the database to another medium by any means or in any form; and the exclusive right to authorise or prohibit the re-utilization of all or a substantial part of the contents of the database by the distribution of copies, by renting, by on-line or other forms of transmission;\n - (h) where the results are or include a patentable subject-matter: the right to register them as a patent and to further exploit such patent to the fullest extent;", - 
"page_start": 23, - "page_end": 23, - "source_file": "EN-Draft FWC for services 0142.pdf" - } - ] - }, - { - "references": { - "source_file": "Botswana-constitution.pdf", - "query": "What are considered \"disciplined force\" according to Botswana constitution ?", - "target_page": 16, - "target_passage": "\"disciplined force\" means- (a) a naval, military or air force; (b) a police force; or (c) a prison service", - "chunk_present": { - "presence": true, - "index": 5 - } - }, - "top_chunk": [ - { - "text": "Colombia\n\nDemocratic Republic of the Congo\n\nEcuador\n\nEswatini\n\nEthiopia\n\nFrench Guiana\n\nGuyana\n\nIndia\n\nKenya\n\nLesotho\n\nMalawi\n\nThe Maldives\n\nMozambique\n\nNamibia\n\nNepal\n\nOman\n\nPakistan\n\nPanama\n\nParaguay\n\nPeru\n\nPhilippines\n\nQatar\n\nRwanda\n\nSeychelles\n\nSomalia\n\nSouth Africa\n\nSuriname\n\nTanzania\n\nTurkey\n\nUnited Arab Emirates\n\nUruguay\n\nVenezuela\n\nZambia\n\nZimbabwe", - "page_start": 32, - "page_end": 32, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "Australia\n\nBrunei\n\nFalkland Islands\n\nFaroe Islands\n\nGibraltar\n\nIceland\n\nIsrael\n\nNew Zealand\n\nPortugal, including the Azores and Madeira\n\nSaint Helena, Ascension and Tristan da Cunha\n\nSingapore\n\nSouth Georgia and the South Sandwich Islands\n\n## SCHEDULE 2\n\nRegulation 2(1)\n\nRegulation 2(1)\n\nCategory 2 countries and territories\n\nAny country or territory outside the common travel area not listed in Schedule 1 or Schedule 3.\n\n## SCHEDULE 3\n\nCategory 3 countries and territories\n\nRegulation 2(1)\n\nAngola\n\nArgentina\n\nBangladesh\n\nBolivia\n\nBotswana\n\nBrazil\n\nBurundi\n\nCape Verde\n\nChile\n\n## SCHEDULES\n\n## SCHEDULE 1\n\n## Category 1 countries and territories", - "page_start": 31, - "page_end": 31, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## 6. 
Protection from slavery and forced labour\n\n - (1) N o person shall be held in slavery or servitude.\n - (2) N o person shall be required to perform forced labour.\n - (3) For the purposes of this section, the expression \"forced labour\" does not\n - include-\n - ( a ) any labour required in consequence of the sentence or order of a court;\n - ( b ) labour required of any person w hile he or she is law fully detained that, though not required in consequence of the sentence or order of a court, is reasonably necessary in the interests of hygiene or for the m aintenance of the place at w hich he or she is detained;\n - ( c ) any labour required of a m em ber of a disciplined force in pursuance of his or her duties as such or, in the case of a person w ho has conscientious objections to service as a m em ber of a naval, m ilitary or air force, any labour that that person is required by law to perform in place of such service;", - "page_start": 5, - "page_end": 5, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "## Annex 1: Non -Annex I (NAI) Parties\n\n| 1 | Afghanistan | AFG |\n|-----|----------------------------------------|-------|\n| 2 | Albania | ALB |\n| 3 | Algeria | DZA |\n| 4 | Andorra | AND |\n| 5 | Angola | AGO |\n| 6 | Antigua and Barbuda | ATG |\n| 7 | Argentina | ARG |\n| 8 | Armenia | ARM |\n| 9 | Azerbaijan | AZE |\n| 10 | Bahamas | BHS |\n| 11 | Bahrain | BHR |\n| 12 | Bangladesh | BGD |\n| 13 | Barbados | BRB |\n| 14 | Belize | BLZ |\n| 15 | Benin | BEN |\n| 16 | Bhutan | BTN |\n| 17 | Bolivia | BOL |\n| 18 | Bosnia and Herzegovina | BIH |\n| 19 | Botswana | BWA |\n| 20 | Brazil | BRA |\n| 21 | Brunei Darussalam | BRN |\n| 22 | Burkina Faso | BFA |\n| 23 | Burundi | BDI |\n| 24 | Cambodia | KHM |\n| 25 | Cameroon | CMR |\n| 26 | Cape Verde | CPV |\n| 27 | Central African Republic | CAF |\n| 28 | Chad | TCD |\n| 29 | Chile | CHL |\n| 30 | China | CHN |\n| 31 | Colombia | COL |\n| 32 | Comoros | COM |\n| 33 | Congo | COG |\n| 34 | 
Cook Islands | COK |\n| 35 | Costa Rica | CRI |\n| 36 | Cote d'Ivoire | CIV |\n| 37 | Cuba | CUB |\n| 38 | Democratic People's Republic of Korea | PRK |\n| 39 | Democratic Republic of the Congo | COD |\n| 40 | Djibouti | DJI |\n| 41 | Dominica | DMA |", - "page_start": 44, - "page_end": 44, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "- (a) 'defence' has the meaning given in section 2(4) of the Official Secrets Act 1989;\n - (b) 'visiting force' means any body, contingent or detachment of the forces of a country, being a body, contingent or detachment for the time being present in the United Kingdom (including United Kingdom territorial waters), on the invitation of Her Majesty's Government in the United Kingdom.\n - 4. An official of a foreign Government, required to travel to the United Kingdom to undertake essential border security duties, or a contractor directly supporting these essential border security duties where-\n - (a) they are in possession of a written notice signed by a senior member of their foreign Government confirming that they are required to undertake essential border security duties in the United Kingdom within the period during which they would, but for this paragraph, have had to self-isolate in accordance with regulation 9 and that that work cannot be undertaken whilst the person is complying with regulation 9; or\n - (b) their deployment is pursuant to a standing bilateral or multilateral agreement with Her Majesty's Government on the operation of the Border controls within the United Kingdom.", - "page_start": 35, - "page_end": 35, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "provisions of sections 3 to 16 (inclusive) of this C onstitution.\n\n - (3) If in any proceedings in any subordinate court any question arises as to the contravention of any of the provisions of sections 3 to 16 (inclusive) of this C onstitution, the person presiding in that court m ay, and shall if any party to the proceedings so 
requests, refer the question to the H igh C ourt unless, in his or her opinion, the raising of the question is m erely frivolous or vexatious.\n - (4) P arliam ent m ay confer upon the H igh C ourt such pow ers in addition to those conferred by this section as m ay appear to be necessary or desirable for the purpose of enabling that court m ore effectively to exercise the jurisdiction conferred upon it by this section.\n - (5) R ules of court m aking provision w ith respect to the practice and procedure of the H igh C ourt for the purposes of this section m ay be m ade by the person or authority for the tim e being having pow er to m ake rules of court w ith respect to the practice and procedure of that court generally.\n\n## 19. Interpretation and savings\n\n - (1) In this C hapter, unless the context otherw ise requires-\n\n\"court\" m eans any court of law having jurisdiction in B otsw ana other than a court established by a disciplinary law , and in sections 4 and 6 of this C onstitution a court established by a disciplinary law ;\n\n - \"disciplinary law \" m eans a law regulating the discipline of any disciplined force; \"disciplined force\" m eans-\n - ( a ) a naval, m ilitary or air force;\n - ( b ) a police force; or\n - ( c ) a prison service;\n\n\"legal representative\" m eans a person entitled to practise in B otsw ana as an advocate or attorney;", - "page_start": 15, - "page_end": 15, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "- (d) 'essential policing' means policing which has been designated as such on behalf of the relevant chief officer or chief constable;\n - (e) 'essential state business' means activity which has been designated as essential to the United Kingdom or Her Majesty's Government by the relevant Department, and includes, in particular, bilateral or multilateral discussions with another state or international organisation and visits to another state on behalf of the United Kingdom or Her Majesty's Government;\n - 
(f) 'government contractor' has the meaning given in section 12(2) of the Official Secrets Act 1989.\n - 17. -(1) A person returning from undertaking essential or emergency work outside of the United Kingdom, which has been certified by the relevant Department as necessary to facilitate essential government work or essential state business.\n - (2) For the purposes of sub-paragraph (1) 'essential government work' and 'essential state business' have the same meaning as in paragraph 16.\n - 18. A person designated by the relevant Minister under section 5(3) of the Repatriation of Prisoners Act 1984( b ).\n - 19. A person responsible for escorting a person sought for extradition pursuant to a warrant issued under Part 3 of the Extradition Act 2003( c ) or sought for extradition pursuant to any other extradition arrangements.\n - 20. A representative of any territory travelling to the United Kingdom in order to take into custody a person whose surrender has been ordered pursuant to any provision of the Extradition Act 2003.", - "page_start": 38, - "page_end": 38, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (i) are required to return to the United Kingdom temporarily,\n - (ii) will thereafter depart to undertake essential government work related to the United Kingdom border outside of the United Kingdom.\n - (2) For the purposes of sub-paragraph (1) and paragraph 3-\n - (a) 'Crown servant' has the meaning given in section 12(1)(a) to (e) of the Official Secrets Act 1989( a );\n - (b) 'essential government work' means work which has been designated as such by the relevant Department or employer;\n - (c) 'government contractor' has the meaning given in section 12(2) of the Official Secrets Act 1989.\n - 3. 
-(1) A person who is a Crown servant, a government contractor, or a member of a visiting force, who-\n - (a) is required to undertake work necessary to the delivery of essential defence activities;\n - (b) has travelled from a point of origin within the common travel area or from a category 1 country or territory on a vessel or aircraft operated by, or in support of, Her Majesty's armed forces or by, or in support of, a visiting force and that vessel or aircraft has not taken on any persons, docked in any port or landed in any category 2 country or territory; or\n - (c) has undertaken a continuous period of at least 10 days ending with the day immediately preceding the day of their arrival in the United Kingdom aboard a vessel operated by or in support of Her Majesty's Naval Service or by, or in support of, a visiting force, where they have not disembarked and that vessel has not taken on any persons or docked in any port outside of the common travel area for a period of at least 10 days ending with the day of its arrival in the United Kingdom.\n - (2) For the purposes of sub-paragraph (1)-", - "page_start": 35, - "page_end": 35, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## Continual/Continuous\n\nIf something happens frequently, it is 'continual'.\n\nE.g. The trains were continually late.\n\nIf something happens all the time without interruption, it is 'continuous'.\n\nE.g. It rained continuously for three days.\n\n## Its/It's\n\n'Its' indicates possession.\n\nE.g. The company improved its performance by hiring new staff members.\n\n'It's' is a contraction of 'it is'. E.g. It's uncertain whether the company will meet the financial targets this year.\n\n## Principal/Principle\n\nE.g. The principal declared that the school term would be extended\n\nA 'principal' is the head of a school or college. by a week.\n\nA 'principal' thing is a main or most important thing.\n\nE.g. 
His commitment to the task was the principal reason for his success.\n\nA 'principle' is a fundamental rule or belief.\n\nE.g. It goes against my principles to eat meat.", - "page_start": 17, - "page_end": 17, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "com m unication be to the public generally or to any person or class of persons) and freedom from interference w ith his or her correspondence.\n\n - (2) N othing contained in or done under the authority of any law shall be held to be inconsistent w ith or in contravention of this section to the extent that the law in question m akes provision-\n - ( a ) that is reasonably required in the interests of defence, public safety, public order, public m orality or public health; or\n - ( b ) that is reasonably required for the purpose of protecting the reputations, rights and freedom s of other persons or the private lives of persons concerned in legal proceedings, preventing the disclosure of inform ation received in confidence, m aintaining the authority and independence of the courts, regulating educational institutions in the interests of persons receiving instruction therein, or regulating the technical adm inistration or the technical operation of telephony, telegraphy, posts, w ireless, broadcasting or television; or\n - ( c ) that im poses restrictions upon public officers, em ployees of local governm ent bodies, or teachers,\n\nand except so far as that provision or, as the case m ay be, the thing done under the authority thereof is show n not to be reasonably justifiable in a dem ocratic society.\n\n## 13. 
Protection of freedom of assem bly and association", - "page_start": 11, - "page_end": 11, - "source_file": "Botswana-constitution.pdf" - } - ] - }, - { - "references": { - "source_file": "serverless-core.pdf", - "query": "How much does AWS lambda charge when the function is not running ?", - "target_page": 52, - "target_passage": "there is no charge when your code is not running", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "initialization duration, and other details. If your function throws an error, the runtime returns that error to the invoker.\n\nTo help simplify troubleshooting, the AWS Serverless Application Model CLI (AWS SAM CLI) has a command called sam logs which will show you CloudWatch Logs generated by your Lambda function.\n\nFor example, the following terminal command would show the live tail of logs generated by the YourLambdaFunctionName Lambda function:\n\n```\nsam logs -n YourLambdaFunctionName --tail\n```\n\nLogging and debugging go hand in hand. Traces of events are available with Amazon X-Ray for debugging.\n\n## Securing functions\n\nAWS Identity and Access Management (IAM) is the service used to manage access to AWS services. Lambda is fully integrated with IAM, allowing you to control precisely what each Lambda function can do within the AWS Cloud. There are two important things that define the scope of permissions in Lambda functions:\n\n - · resource policy : Defines which events are authorized to invoke the function.\n - · execution role policy : Limits what the Lambda function is authorized to do.\n\n\n\nUsing IAM roles to describe a Lambda function's permissions, decouples security configuration from the code. This helps reduce the complexity of a lambda function, making it easier to maintain.\n\nA Lambda function's resource and execution policy should be granted the minimum required permissions for the function to perform it's task effectively. 
This is sometimes referred to as the rule of least privilege. As you develop a Lambda function, you expand the scope of this policy to allow access to other resources as required.", - "page_start": 59, - "page_end": 59, - "source_file": "serverless-core.pdf" - }, - { - "text": "'No Server Is Easier To Manage Than No Server' - Werner Vogels, VP and CTO\n\nThe Lambda service runs instances of your function only when needed and scales automatically from zero requests per day to thousands per second. You pay only for the compute time that's actually used - there is no charge when your code is not running.\n\n## Fundamentals\n\nServerless solutions are based on event-driven architecture, or EDA, where services send and receive events , which represent an update or change in state. The primary activity of Lambda functions is to process events.\n\nWithin the Lambda service, your function code is stored in a code package, deployed as a .zip or a container image. All interaction with the code occurs through the Lambda API. There is no direct invocation of functions from outside of the Lambda service.\n\n\n\nWhat you will learn on your journey to building applications with Lambda:\n\n - · How the event-driven programming model invokes Lambda functions\n - · How to create, invoke, test, update, package, and secure functions\n - · How the execution and runtime environment runs your functions\n - · How to view logs and monitor your functions\n - · Where to find hands-on opportunities to learn how to invoke functions", - "page_start": 51, - "page_end": 51, - "source_file": "serverless-core.pdf" - }, - { - "text": "After the handler finishes processing the first event, the runtime sends it another, and another. Each instance of your function could process thousands of requests.\n\nUnlike traditional servers, Lambda functions do not run constantly. When a function is triggered by an event, this is called an invocation . 
Lambda functions are limited to 15 minutes in duration, but on average, across all AWS customers, most invocations last for less than a second.\n\nThere are many types of invocation events. Some examples:\n\n - · HTTP request from API Gateway\n - · Schedule managed by an EventBridge rule\n - · Message from an IOT device\n - · Notification that a file was uploaded to an S3 bucket\n\nEven the smallest Lambda-based application uses at least one event that invokes your function.\n\n## How Lambda invokes your function (runtime environment)\n\nLambda invokes your function in an execution environment , which contains a secure and isolated runtime environment .\n\n - · A runtime provides a language-specific environment which relays invocation events, context information, and responses between the Lambda and your functions.\n - · An execution environment manages the processes and resources that are required to run the function.\n\n", - "page_start": 55, - "page_end": 55, - "source_file": "serverless-core.pdf" - }, - { - "text": "## Related resource(s):\n\n - · Permissions boundaries for IAM entities - official documentation.\n\n## Additional resources\n\nOfficial AWS documentation:\n\n - · AWS Identity and Access Management Documentation\n - · Example IAM identity-based policies - an extensive list of example policies, including AWS Lambda: Allows a lambda function to access an Amazon DynamoDB table which is useful in microservices\n - · Grant least privilege section of the Policies and permissions chapter suggests a method to refine permissions for increased security\n\nResources from the serverless community:\n\n - · Simplifying serverless permissions with AWSAWS SAM Connectors - AWS Compute blog post by Kurt Tometich, Senior Solutions Architect, AWS, from Oct 2022 that introduces a AWS SAM abstraction that creates minimally scoped IAM policies\n - · Building AWS Lambda governance and guardrails - AWS Compute blog post by Julian Wood, Senior Solutions Architect, AWS, from 
Aug 2022 that highlights how Lambda, as a serverless service, simplifies cloud security and compliance so you can concentrate on your business logic.\n\n## Next Steps\n\n - · Work through the Getting Started Resource Center 30-45 min tutorial on Setting Up Your AWS Environment to properly set up your AWS account, secure the root user, create an IAM user, and setup AWS CLI and (optionally) Cloud9 environment.\n\n## Get started with Lambda\n\nAll projects need a compute capability to handle processing tasks. Here are some examples:\n\n - · Handling web application and API requests\n - · Transforming batches of data\n - · Processing messages from a queue", - "page_start": 49, - "page_end": 49, - "source_file": "serverless-core.pdf" - }, - { - "text": "## Deploy with containers\n\nIf you need a custom runtime that is not provided by AWS, you can create and deploy a custom container image. AWS provides base images preloaded with a language runtime and other components that are required to run the image on Lambda. AWS provides a Dockerfile for each of the base images to help with building your container image.\n\nCustom containers are one way you might experiment with lift and shift of existing code to Lambda runtimes. If you do this, consider the architectural differences between always running containers, versus on demand nature of Lambda functions.\n\n## Related resource:\n\n - · Deploy container images\n\n## Add code with Layers\n\nA Lambda layer is a .zip file archive that can contain additional code or other content. A layer can contain libraries, a custom runtime, data, or configuration files. Layers are also necessary if your function .zip archive exceeds the size limit.\n\nLayers provide a convenient way to package libraries and other dependencies that you can use with your Lambda functions. Using layers reduces the size of uploaded deployment archives and makes it faster to deploy your code. 
Layers also promote code sharing and separation of responsibilities so that you can iterate faster on writing business logic.\n\n## Related resource:\n\n - · Creating and sharing Lambda layers\n\n## Extensions\n\nYou can use Lambda extensions to augment your Lambda functions. For example, use Lambda Extensions to integrate with your preferred monitoring, observability, security, and governance tools.\n\nLambda supports internal or external extensions. An internal extension runs as part of the runtime process. An external extension runs as an independent process in the execution environment and continues to run after the function invocation is fully processed.", - "page_start": 61, - "page_end": 61, - "source_file": "serverless-core.pdf" - }, - { - "text": "## Connect to functions with Function URLs\n\nA function URL is a dedicated HTTP(S) endpoint for your Lambda function. You can create and configure a function URL through the Lambda console or the Lambda API. When you create a function URL, Lambda automatically generates a unique URL endpoint for you. Once you create a function URL, its URL endpoint never changes. Function URL endpoints have the following format:\n\n```\nhttps://.lambda-url..on.aws\n```\n\nAfter you configure a function URL for your function, you can invoke your function through its HTTP(S) endpoint with a web browser, curl, Postman, or any HTTP client.\n\nRelated resources:\n\n - · Function URLs - official documentation\n\n## Additional resources\n\nOfficial AWS documentation:\n\n - · AWS Lambda Developer Guide - extensive and complete documentation for Lambda\n\n## Next steps\n\n## Learn serverless techniques in an online workshop\n\nLearn by doing in the Serverless Patterns Workshop . The first module introduces a serverless microservice to retrieve data from DynamoDB with Lambda and API Gateway. 
Additional modules provide practical examples of unit and integration testing, using infrastructure as code to deploy resources, and how to build common architectural patterns used in serverless solutions.", - "page_start": 63, - "page_end": 63, - "source_file": "serverless-core.pdf" - }, - { - "text": "## Related resources:\n\n - · Datadog Lambda Extension - an extension that supports submitting custom metrics, traces, and logs asynchronously while your Lambda function executes.\n - · Lambda Extensions - official documentation\n\n## Launch functions faster with SnapStart\n\nLambda SnapStart for Java can improve startup performance by up to 10x at no extra cost, typically with no changes to your function code. The largest contributor to startup latency (often referred to as cold start time) is the time that Lambda spends initializing the function, which includes loading the function's code, starting the runtime, and initializing the function code.\n\n\n\nWith SnapStart, Lambda initializes your function when you publish a function version. Lambda takes a Firecracker microVM snapshot of the memory and disk state of the initialized execution environment, encrypts the snapshot, and caches it for low-latency access.\n\nNote: You can use SnapStart only on published function versions and aliases that point to versions. You can't use SnapStart on a function's unpublished version ($LATEST).\n\n## Related resources:\n\n - · Accelerate Your Lambda Functions with Lambda SnapStart - an AWS Compute blog article by Jeff Barr from Nov 2022 that shows the configuration change and vast difference from roughly six seconds init time to 142 milliseconds of restore time with SnapStart", - "page_start": 62, - "page_end": 62, - "source_file": "serverless-core.pdf" - }, - { - "text": "- · Policies that grant least privilege to your functions\n\nWorkshop - Intro to Serverless - Before diving too deep, you can choose to try out serverless in a workshop or tutorial. 
Connect to a data source and create a REST API with your first Lambda function.'\n\n - · Services used: AWS Management Console, Lambda, DynamoDB, API Gateway\n\n## Programming Model\n\nThe Lambda service provides the same event-based programming model for all languages. The Lambda runtime passes an invocation event and context to your Lambda function handler which does some work and produces a resulting event:\n\n\n\nThe invocation event contains data, as a JSON packet, which varies from service to service. For example, API gateway events include path, HTTP method, query string parameters, headers, cookies, and more. DynamoDB events could contain updated or delete record data. S3 events include the bucket name and object key, among other things.\n\nThe context contains information about the environment the function is running inside. Additional contextual information can be set in familiar environment variables (ENV).\n\nThe function handler is a method in your function code that processes the inbound event. The handler, which is a standard function in your language of choice, does some work and emits a result event .", - "page_start": 54, - "page_end": 54, - "source_file": "serverless-core.pdf" - }, - { - "text": "\n\nThis guide will highlight what you need to know right away and link to service documentation for more service-specific details.\n\nFor example, you will learn that the Lambda service creates an execution environment to run compute functions. For more information on how Lambda manages function scaling or reduces start-up time, we will link you to relevant sections of the Lambda developer guide.\n\nThe topics in this guide will cover the prerequisites for understanding serverless development on AWS, such as account creation and an overview of AWS cloud infrastructure. 
Then, you will learn how to shift from a traditional development model to a serverless, event-driven architecture with which to develop applications on the cloud.\n\nAlong the way, this guide will introduce core services, workshops, and tutorials, you can choose to reinforce your learning with hands-on activities.\n\n- · AWS Identity and Access Management - for securely accessing resources on AWS.", - "page_start": 5, - "page_end": 5, - "source_file": "serverless-core.pdf" - }, - { - "text": "## Compute\n\n - · AWS Lambda - serverless compute functions; responsible for nearly all processing in serverless projects\n - · Amazon Elastic Compute Cloud - non-serverless compute alternative; useful when you need always-on and fully customizable capabilities. EC2 is often used for initial 'lift and shift' migration to the cloud. You can continue to use EC2 while migrating portions of your workflow to serverless patterns.\n - · AWS App Runner - fully managed service to deploy your containerized web applications and APIs. 
App Runner will scale compute instances and network resources automatically based on incoming traffic.\n - · AWS Fargate - serverless computer for clusters of containers; useful when you need custom containers but do not want to maintain and manage the infrastructure or cluster.\n\n## Security, identity & compliance\n\n - · IAM - identity and access management; provides policies to authorize service resources to interact with each other and your data.\n - · Amazon Cognito - authentication and authorization of users and systems\n - · AWS Secrets Manager - manage access to secrets using fine-grained policies\n\n## Management & governance\n\n - · Amazon CloudWatch - suite of monitoring and logging services\n - · AWS Management Console - web-based user interface for creating, configuring, and monitoring AWS resources and your code.\n - · AWS CloudFormation (CFN) - text templates to automate deploying infrastructure and code\n - · AWS Serverless Application Model (AWS SAM) - an open-source framework for deploying serverless application infrastructure and code. AWS SAM templates provide a shorthand syntax to declare functions, APIs, databases, and event source mappings. With just a few lines of configuration per resource, you can define the application infrastructure components. During deployment, AWS SAM transforms and expands the template into verbose AWS CloudFormation templates.\n - · AWS Cloud Development Kit (AWS CDK) - an open-source software development framework to define your cloud application resources using familiar programming languages. 
Instead of", - "page_start": 36, - "page_end": 36, - "source_file": "serverless-core.pdf" - } - ] - }, - { - "references": { - "source_file": "serverless-core.pdf", - "query": "What is the role of resource policies of lambda functions ?", - "target_page": 60, - "target_passage": "resource policy: Defines which events are authorized to invoke the function.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "initialization duration, and other details. If your function throws an error, the runtime returns that error to the invoker.\n\nTo help simplify troubleshooting, the AWS Serverless Application Model CLI (AWS SAM CLI) has a command called sam logs which will show you CloudWatch Logs generated by your Lambda function.\n\nFor example, the following terminal command would show the live tail of logs generated by the YourLambdaFunctionName Lambda function:\n\n```\nsam logs -n YourLambdaFunctionName --tail\n```\n\nLogging and debugging go hand in hand. Traces of events are available with Amazon X-Ray for debugging.\n\n## Securing functions\n\nAWS Identity and Access Management (IAM) is the service used to manage access to AWS services. Lambda is fully integrated with IAM, allowing you to control precisely what each Lambda function can do within the AWS Cloud. There are two important things that define the scope of permissions in Lambda functions:\n\n - · resource policy : Defines which events are authorized to invoke the function.\n - · execution role policy : Limits what the Lambda function is authorized to do.\n\n\n\nUsing IAM roles to describe a Lambda function's permissions, decouples security configuration from the code. This helps reduce the complexity of a lambda function, making it easier to maintain.\n\nA Lambda function's resource and execution policy should be granted the minimum required permissions for the function to perform it's task effectively. This is sometimes referred to as the rule of least privilege. 
As you develop a Lambda function, you expand the scope of this policy to allow access to other resources as required.", - "page_start": 59, - "page_end": 59, - "source_file": "serverless-core.pdf" - }, - { - "text": "'No Server Is Easier To Manage Than No Server' - Werner Vogels, VP and CTO\n\nThe Lambda service runs instances of your function only when needed and scales automatically from zero requests per day to thousands per second. You pay only for the compute time that's actually used - there is no charge when your code is not running.\n\n## Fundamentals\n\nServerless solutions are based on event-driven architecture, or EDA, where services send and receive events , which represent an update or change in state. The primary activity of Lambda functions is to process events.\n\nWithin the Lambda service, your function code is stored in a code package, deployed as a .zip or a container image. All interaction with the code occurs through the Lambda API. There is no direct invocation of functions from outside of the Lambda service.\n\n\n\nWhat you will learn on your journey to building applications with Lambda:\n\n - · How the event-driven programming model invokes Lambda functions\n - · How to create, invoke, test, update, package, and secure functions\n - · How the execution and runtime environment runs your functions\n - · How to view logs and monitor your functions\n - · Where to find hands-on opportunities to learn how to invoke functions", - "page_start": 51, - "page_end": 51, - "source_file": "serverless-core.pdf" - }, - { - "text": "After the handler finishes processing the first event, the runtime sends it another, and another. Each instance of your function could process thousands of requests.\n\nUnlike traditional servers, Lambda functions do not run constantly. When a function is triggered by an event, this is called an invocation . 
Lambda functions are limited to 15 minutes in duration, but on average, across all AWS customers, most invocations last for less than a second.\n\nThere are many types of invocation events. Some examples:\n\n - · HTTP request from API Gateway\n - · Schedule managed by an EventBridge rule\n - · Message from an IOT device\n - · Notification that a file was uploaded to an S3 bucket\n\nEven the smallest Lambda-based application uses at least one event that invokes your function.\n\n## How Lambda invokes your function (runtime environment)\n\nLambda invokes your function in an execution environment , which contains a secure and isolated runtime environment .\n\n - · A runtime provides a language-specific environment which relays invocation events, context information, and responses between the Lambda and your functions.\n - · An execution environment manages the processes and resources that are required to run the function.\n\n", - "page_start": 55, - "page_end": 55, - "source_file": "serverless-core.pdf" - }, - { - "text": "- · Policies that grant least privilege to your functions\n\nWorkshop - Intro to Serverless - Before diving too deep, you can choose to try out serverless in a workshop or tutorial. Connect to a data source and create a REST API with your first Lambda function.'\n\n - · Services used: AWS Management Console, Lambda, DynamoDB, API Gateway\n\n## Programming Model\n\nThe Lambda service provides the same event-based programming model for all languages. The Lambda runtime passes an invocation event and context to your Lambda function handler which does some work and produces a resulting event:\n\n\n\nThe invocation event contains data, as a JSON packet, which varies from service to service. For example, API gateway events include path, HTTP method, query string parameters, headers, cookies, and more. DynamoDB events could contain updated or delete record data. 
S3 events include the bucket name and object key, among other things.\n\nThe context contains information about the environment the function is running inside. Additional contextual information can be set in familiar environment variables (ENV).\n\nThe function handler is a method in your function code that processes the inbound event. The handler, which is a standard function in your language of choice, does some work and emits a result event .", - "page_start": 54, - "page_end": 54, - "source_file": "serverless-core.pdf" - }, - { - "text": "## Connect to functions with Function URLs\n\nA function URL is a dedicated HTTP(S) endpoint for your Lambda function. You can create and configure a function URL through the Lambda console or the Lambda API. When you create a function URL, Lambda automatically generates a unique URL endpoint for you. Once you create a function URL, its URL endpoint never changes. Function URL endpoints have the following format:\n\n```\nhttps://.lambda-url..on.aws\n```\n\nAfter you configure a function URL for your function, you can invoke your function through its HTTP(S) endpoint with a web browser, curl, Postman, or any HTTP client.\n\nRelated resources:\n\n - · Function URLs - official documentation\n\n## Additional resources\n\nOfficial AWS documentation:\n\n - · AWS Lambda Developer Guide - extensive and complete documentation for Lambda\n\n## Next steps\n\n## Learn serverless techniques in an online workshop\n\nLearn by doing in the Serverless Patterns Workshop . The first module introduces a serverless microservice to retrieve data from DynamoDB with Lambda and API Gateway. 
Additional modules provide practical examples of unit and integration testing, using infrastructure as code to deploy resources, and how to build common architectural patterns used in serverless solutions.", - "page_start": 63, - "page_end": 63, - "source_file": "serverless-core.pdf" - }, - { - "text": "## Advanced Topics\n\nYou can do a lot by just creating a function and connecting it to an event source like API Gateway or S3 triggers.\n\nAs you progress on your journey, you should explore the following more advanced topics.\n\n - · Connect services with event source mapping\n - · Deploy code in containers\n - · Add additional code with layers\n - · Augment functions with extensions\n - · Launch functions faster with SnapStart\n - · Connect to functions with Function URLs\n\n## Event source mapping\n\nSome services can trigger Lambda functions directly, for example, when an image is added to an S3 bucket, a Lambda can be triggered to resize it. Some services cannot invoke Lambda directly; but you can instead use an event source mapping which is a polling mechanism that reads from an event source and invokes a Lambda function.\n\nYou can use event source mappings to process items from a stream or queue in the following services:", - "page_start": 60, - "page_end": 60, - "source_file": "serverless-core.pdf" - }, - { - "text": "## Deploy with containers\n\nIf you need a custom runtime that is not provided by AWS, you can create and deploy a custom container image. AWS provides base images preloaded with a language runtime and other components that are required to run the image on Lambda. AWS provides a Dockerfile for each of the base images to help with building your container image.\n\nCustom containers are one way you might experiment with lift and shift of existing code to Lambda runtimes. 
If you do this, consider the architectural differences between always running containers, versus on demand nature of Lambda functions.\n\n## Related resource:\n\n - · Deploy container images\n\n## Add code with Layers\n\nA Lambda layer is a .zip file archive that can contain additional code or other content. A layer can contain libraries, a custom runtime, data, or configuration files. Layers are also necessary if your function .zip archive exceeds the size limit.\n\nLayers provide a convenient way to package libraries and other dependencies that you can use with your Lambda functions. Using layers reduces the size of uploaded deployment archives and makes it faster to deploy your code. Layers also promote code sharing and separation of responsibilities so that you can iterate faster on writing business logic.\n\n## Related resource:\n\n - · Creating and sharing Lambda layers\n\n## Extensions\n\nYou can use Lambda extensions to augment your Lambda functions. For example, use Lambda Extensions to integrate with your preferred monitoring, observability, security, and governance tools.\n\nLambda supports internal or external extensions. An internal extension runs as part of the runtime process. An external extension runs as an independent process in the execution environment and continues to run after the function invocation is fully processed.", - "page_start": 61, - "page_end": 61, - "source_file": "serverless-core.pdf" - }, - { - "text": "## Related resources:\n\n - · Datadog Lambda Extension - an extension that supports submitting custom metrics, traces, and logs asynchronously while your Lambda function executes.\n - · Lambda Extensions - official documentation\n\n## Launch functions faster with SnapStart\n\nLambda SnapStart for Java can improve startup performance by up to 10x at no extra cost, typically with no changes to your function code. 
The largest contributor to startup latency (often referred to as cold start time) is the time that Lambda spends initializing the function, which includes loading the function's code, starting the runtime, and initializing the function code.\n\n\n\nWith SnapStart, Lambda initializes your function when you publish a function version. Lambda takes a Firecracker microVM snapshot of the memory and disk state of the initialized execution environment, encrypts the snapshot, and caches it for low-latency access.\n\nNote: You can use SnapStart only on published function versions and aliases that point to versions. You can't use SnapStart on a function's unpublished version ($LATEST).\n\n## Related resources:\n\n - · Accelerate Your Lambda Functions with Lambda SnapStart - an AWS Compute blog article by Jeff Barr from Nov 2022 that shows the configuration change and vast difference from roughly six seconds init time to 142 milliseconds of restore time with SnapStart", - "page_start": 62, - "page_end": 62, - "source_file": "serverless-core.pdf" - }, - { - "text": "could be listening. The handler function might create and send another event to an SNS queue so that alerts for high temperature are sent to users through SMS messages.\n\nThe function finally wraps up the JSON weather data into a new event and sends it back to API gateway. Afterward, the function continues to handle hundreds of additional requests. Request from users slow down after 2AM, so after some time the Lambda service will tear down the function execution environment to conserve resources. As a Customer, you will only be charged for function usage.\n\n\n\n", - "page_start": 38, - "page_end": 38, - "source_file": "serverless-core.pdf" - }, - { - "text": "## Related resource(s):\n\n - · Return binary media from a Lambda proxy integration - Learn how to use a Lambda function to return binary media. 
This works for both REST and HTTP APIs.\n - · Working with binary media types for REST APIs - Additional considerations for REST non-proxy integrations", - "page_start": 72, - "page_end": 72, - "source_file": "serverless-core.pdf" - } - ] - }, - { - "references": { - "source_file": "serverless-core.pdf", - "query": "Why can't I use SnapStart on my function tagged with $LATEST ?", - "target_page": 63, - "target_passage": " You can use SnapStart only on published function versions and aliases that point to versions. You can't use SnapStart on a function's unpublished version ($LATEST)", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Related resources:\n\n - · Datadog Lambda Extension - an extension that supports submitting custom metrics, traces, and logs asynchronously while your Lambda function executes.\n - · Lambda Extensions - official documentation\n\n## Launch functions faster with SnapStart\n\nLambda SnapStart for Java can improve startup performance by up to 10x at no extra cost, typically with no changes to your function code. The largest contributor to startup latency (often referred to as cold start time) is the time that Lambda spends initializing the function, which includes loading the function's code, starting the runtime, and initializing the function code.\n\n\n\nWith SnapStart, Lambda initializes your function when you publish a function version. Lambda takes a Firecracker microVM snapshot of the memory and disk state of the initialized execution environment, encrypts the snapshot, and caches it for low-latency access.\n\nNote: You can use SnapStart only on published function versions and aliases that point to versions. 
You can't use SnapStart on a function's unpublished version ($LATEST).\n\n## Related resources:\n\n - · Accelerate Your Lambda Functions with Lambda SnapStart - an AWS Compute blog article by Jeff Barr from Nov 2022 that shows the configuration change and vast difference from roughly six seconds init time to 142 milliseconds of restore time with SnapStart", - "page_start": 62, - "page_end": 62, - "source_file": "serverless-core.pdf" - }, - { - "text": "## Starting the servers\n\nServers are started by running STRTCPSVR *ONDMD . The INSTANCE parameter of the STRTCPSVR *ONDMD command supports the special values of *DFT , *ALL , and *AUTOSTART , and the specification of the name of an instance. (An instance is set to autostart if the ars.cfg file for that instance contains ARS\\_AUTOSTART\\_INSTANCE=1 .) The default value for the INSTANCE parameter is *DFT . You can also create a data area that is named STRTCPSVR to further control the behavior of the STRTCPSVR command. For more information about the data area, see the IBM Content Manager OnDemand for i - Common Server Administration Guide , SC19-2792.\n\nWithout the STRTCPSVR data area, the values of *DFT and *AUTOSTART work identically. All instances that are set to autostart are started. Use the special value *ALL to start all of the instances that are configured on the system. You can also specify the name of a single instance to start, for example:\n\nSTRTCPSVR SERVER(*ONDMD) INSTANCE(ONDTEST)\n\nWith the data area, the value of *DFT starts only the instance that is named in the data area. The data area must be named STRTCPSVR and in library QUSRRDARS . The data area must be of the type character with a length of 10. 
To create the data area, run the following command (all as one command):\n\nCRTDTAARA DTAARA(QUSRRDARS/STRTCPSVR) TYPE(*CHAR) LEN(10) VALUE(QUSROND) TEXT('Autostart instance name for STRTCPSVR *ONDMD *DFT')\n\nQUSROND is the name of the instance to start.\n\nThe special values *ALL and *AUTOSTART work the same with the data area as without the data area.\n\nTo determine the instances that are started when STRTCPSVR SERVER(*ONDMD) INSTANCE(*AUTOSTART) is run, you can look for the ARS\\_AUTOSTART\\_INSTANCE=1 in the ARS.CFG file. However, an easier way is available so that you do not need to check the ARS.CFG file for every instance.\n\nRun grep in Qshell to search the contents of all of the ARS.CFG files for the string ARS\\_AUTOSTART\\_INSTANCE=1 , for example:\n\n$ grep -n 'ARS\\_AUTOSTART\\_INSTANCE=1' /qibm/userdata/ondemand/*/ars.cfg /qibm/userdata/ondemand/ONDDEMO/ars.cfg:53:ARS\\_AUTOSTART\\_INSTANCE=1 /qibm/userdata/ondemand/ONDDEU/ars.cfg:53:ARS\\_AUTOSTART\\_INSTANCE=1 /qibm/userdata/ondemand/ONDENU/ars.cfg:53:ARS\\_AUTOSTART\\_INSTANCE=1 /qibm/userdata/ondemand/QUSROND/ars.cfg:53:ARS\\_AUTOSTART\\_INSTANCE=1 $\n\nFrom the last four detail lines, which are the output of the grep command, you can determine that instances ONDDEMO , ONDDEU , ONDENU , and QUSROND are started when the STRTCPSVR SERVER(*ONDMD) INSTANCE(*AUTOSTART) command is run.\n\nTable 2-1 on page 35 summarizes the behavior of the STRTCPSVR command with and without the STRTCPSVR data area.", - "page_start": 57, - "page_end": 57, - "source_file": "sg246915.pdf" - }, - { - "text": "- 5. Click in the Filter box and enter snap to see a list of snap files, as shown in Figure 13-72. Locate the exact name of the snap that was generated by using the svc\\_snap command that was issued earlier. Select that file, and click Download .\n\nFigure 13-72 Filtering on snap to download\n\n\n\n - 6. 
Save the file to a folder of your choice on your workstation.\n\n## 13.9.3 Uploading files to the Support Center", - "page_start": 753, - "page_end": 753, - "source_file": "sg247938.pdf" - }, - { - "text": "- 1. Log in to the CLI and issue the svc\\_snap command that matches the type of snap requested by IBM Support:", - "page_start": 751, - "page_end": 751, - "source_file": "sg247938.pdf" - }, - { - "text": "with frequency ω j a and ω j b , and σ j -= ( | b 〉 〈 a | ) j is the 'spinflip' operator for the jth atom, with its adjoint σ j + = ( | a 〉 〈 b | ) j . The coupling constant g is given by g = µ √ ω/ 2 /planckover2pi1 /epsilon1 0 V , where µ is the magnitude of the atomic dipole moment, and V is the e ff ective volume of the cavity.\n\nIn order to denote the finite-time interaction between the atoms and Ramsey separated field, we introduce the function\n\nΓ j ( t ) = Θ ( t -t j ) -Θ ( t -t j -τ ) +Θ ( t -t j -τ -T ) -Θ ( t -t j -2 τ -T ) , (2)\n\nwhere Θ ( t ) is the Heaviside step function [ Θ ( t ) = 1 for t > 0, Θ ( t ) = 1 / 2 for t = 0, and Θ ( t ) = 0 for t < 0]. T is the free drift time of the atoms, and τ is the interacting time between the atom and one cavity.\n\nBy the standard way [25], we can get the HeisenbergLangevin equations of the motion for the single-atom and filed operators. 
By introducing the macroscopic atomic operator, M ( t ) = -i ∑ j Γ j ( t ) σ j -( t ), Na ( t ) = ∑ j Γ j ( t ) σ j aa ( t ), Nb ( t ) = ∑ j Γ j ( t ) σ j bb ( t ), the dynamic equations for the field and macroscopic atomic operators yield\n\n˙ a ( t ) = -κ 2 a ( t ) + gM ( t ) + F κ ( t ) , (3)\n\n˙ Na ( t ) = R (1 -A 0 + A 1 -A 2) -( γ a + γ ' a ) Na ( t ) -g [ M † ( t ) a ( t ) + a † ( t ) M ( t )] + Fa ( t ) , (4)\n\n˙ Nb ( t ) = -R ( B 0 -B 1 + B 2) -γ bNb ( t ) + γ ' a Na ( t ) + g [ a † ( t ) M ( t ) + M † ( t ) a ( t )] + Fb ( t ) , (5)\n\n˙ M ( t ) = -R ( C 0 -C 1 + C 2) -γ abM ( t ) + g [ Na ( t ) -Nb ( t )] a ( t ) + FM ( t ) , (6)\n\nwhere the macroscopic noise operators are defined as\n\nFa ( t ) = ∑ j ˙ Γ j ( t ) σ j a ( t ) -R (1 -A 0 + A 1 -A 2) + ∑ j Γ j ( t ) f j a ( t ) ,\n\nFb ( t ) = ∑ j ˙ Γ j ( t ) σ j b ( t ) + R ( B 0 -B 1 + B 2) + ∑ j Γ j ( t ) f j b ( t ) ,\n\nFM ( t ) = -i ∑ j ˙ Γ j ( t ) ˜ σ j -( t ) + R ( C 0 -C 1 + C 2) -i ∑ j Γ j ( t ) f j σ ( t ) ,\n\nwith A 0 = 〈 σ j a ( t j + τ ) 〉 q , A 1 = 〈 σ j a ( t j + τ + T ) 〉 q , A 2 = 〈 σ j a ( t j + 2 τ + T ) 〉 q , B 0 = 〈 σ j b ( t j + τ ) 〉 q , B 1 = 〈 σ j b ( t j + τ + T ) 〉 q , B 2 = 〈 σ j b ( t j + 2 τ + T ) 〉 q , C 0 = 〈 -i σ j -( t j + τ ) 〉 q , C 1 = 〈 -i σ j -( t j + τ + T ) 〉 q ,\n\nC 2 = 〈 -i σ j -( t j + 2 τ + T ) 〉 q . R is the mean pumping rate, which is defined in [26]. 
It is very easy to check that the average values of the above Langevin forces are all zero.\n\nBy using the above definitions of the noise operators, we find the correlation functions of macroscopic noise forces can be generally written in the form\n\n〈 Fk ( t ) Fl ( t ' ) 〉 = D (0) kl δ ( t -t ' ) + D (1) kl δ ( t -t ' -τ ) + D (2) kl δ ( t -t ' + τ ) + D (3) kl δ ( t -t ' -τ -T ) + D (4) kl δ ( t -t ' + τ + T ) + D (5) kl δ ( t -t ' -2 τ -T ) + D (6) kl δ ( t -t ' + 2 τ + T ) + D (7) kl δ ( t -t ' -T ) + D (8) kl δ ( t -t ' + T ) , (7)\n\nwhere D ( i ) kl ( k , l = a , b , M , M † ; i = 0 , 1 , 2) are the quantum diffusion coe ffi cients.\n\nc-number correlation functions: By choosing some particular ordering for products of atomic and field operators, one could derive the c-number stochastic Langevin equations from the quantum Langevin equations derived above, and all of the dynamic equations for c-number stochastic variables are the same as in [26]. The di ff erences are from the correlation functions. On the other hand, we convert the quantum noise operators into the c-number noise variables ˜ Fk ( t )( k = a , b , M , M † ), whose correlation functions are expressed as\n\n〈 ˜ Fk ( t ) ˜ Fk ( t ' ) 〉 = ˜ D (0) kl δ ( t -t ' ) + ˜ D (1) kl δ ( t -t ' -τ ) + ˜ D (2) kl δ ( t -t ' + τ ) + ˜ D (3) kl δ ( t -t ' -τ -T ) + ˜ D (4) kl δ ( t -t ' + τ + T ) + ˜ D (5) kl δ ( t -t ' -2 τ -T ) + ˜ D (6) kl δ ( t -t ' + 2 τ + T ) + ˜ D (7) kl δ ( t -t ' -T ) + ˜ D (8) kl δ ( t -t ' + T ) , (8)\n\nwhere ˜ D ( i ) kl are the c-number Langevin di ff usion coe ffi cients, related to quantum Langevin di ff usion coe ffi cients D ( i ) kl as in [27].", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2670.pdf" - }, - { - "text": "- 3. The Upload Support Package window provides four options for data collection. If you were contacted by IBM Support because your system called home or you manually opened a call with IBM Support, you receive a PMR number . 
Enter that PMR number into the PMR field and select the snap type (often referred to as an option 1, 2, 3, 4 snap ) as requested by IBM Support (see Figure 13-69). In our example, we entered our PMR number, selected snap type 3 (option 3) because this automatically collects the statesave that were created at the time the node restarted, and clicked Upload .\n\nTip: To open a service request online, see the IBM Support Service requests and PMRs web page.\n\nFigure 13-69 Upload Support Package window\n\n", - "page_start": 750, - "page_end": 750, - "source_file": "sg247938.pdf" - }, - { - "text": "## Advanced Topics\n\nYou can do a lot by just creating a function and connecting it to an event source like API Gateway or S3 triggers.\n\nAs you progress on your journey, you should explore the following more advanced topics.\n\n - · Connect services with event source mapping\n - · Deploy code in containers\n - · Add additional code with layers\n - · Augment functions with extensions\n - · Launch functions faster with SnapStart\n - · Connect to functions with Function URLs\n\n## Event source mapping\n\nSome services can trigger Lambda functions directly, for example, when an image is added to an S3 bucket, a Lambda can be triggered to resize it. Some services cannot invoke Lambda directly; but you can instead use an event source mapping which is a polling mechanism that reads from an event source and invokes a Lambda function.\n\nYou can use event source mappings to process items from a stream or queue in the following services:", - "page_start": 60, - "page_end": 60, - "source_file": "serverless-core.pdf" - }, - { - "text": "Figure 13-68 Support Package option\n\n\n\n## 2. 
Click the Upload Support Package button.\n\nAssuming that the problem encountered was an unexpected node restart that logged a 2030 error, we collect the default logs and the most recent statesave from each node to capture the most relevant data for support.\n\nNote: When a node unexpectedly reboots, it first dumps its current statesave information before it restarts to recover from an error condition. This statesave is critical for IBM Support to analyze what occurred. Collecting a snap type 4 creates statesaves at the time of the collection, which is not useful for understanding the restart event.", - "page_start": 749, - "page_end": 749, - "source_file": "sg247938.pdf" - }, - { - "text": "Table 2-1 Behavior of the STRTCPSVR command with or without the STRCPSVR data\n\n| Running STRTCPSVR start | *DFT | *ALL | *AUTOSTART | Named instance |\n|----------------------------|----------------------------------------------------|---------------------------------------------------|---------------------------------|---------------------|\n| Without the data area | All instances set to autostart | All instances that are configured on the system | All instances set to autostart | The named instance |\n| With the data area | Only the instance that is named in the data area | All instances that are configured on the system | All instances set to autostart | The named instance |\n\n## Stopping the servers\n\nServers are stopped by running ENDTCPSVR *ONDMD . The instance parameter of the STRTCPSVR *ONDMD command supports the special values of *DFT and *ALL , and the specification of the name of an instance. The default value for the INSTANCE parameter is *DFT . You also can create a data area that is named STRTCPSVR to further control the behavior of the ENDTCPSVR command. Create the data area as described in 'Starting the servers' on page 34. For more information about the data area, see the IBM Content Manager OnDemand for i - Common Server Administration Guide , SC19-2792. 
Even though the data area is named STRTCPSVR , it controls both the STRTCPSVR and ENDTCPSVR commands by design so that *DFT starts and ends the same instance.\n\nWithout the STRTCPSVR data area, the values of *DFT and *ALL work identically. All instances that are active are ended. You can also specify the name of a single instance to end, for example:\n\nENDTCPSVR SERVER(*ONDMD) INSTANCE(ONDTEST)\n\nWith the data area, the value of *DFT ends only the instance that is named in the data area. The data area must be named STRTCPSVR and in library QUSRRDARS .\n\nTable 2-2 summarizes the behavior of the ENDCPSVR command with and without the data area.\n\nTable 2-2 Behavior of the ENDCPSVR command with or without the data area\n\n| Running ENDCPSVR ends | *DFT | *ALL | Named instance |\n|---------------------------|---------------------------------------------------|-----------------------|--------------------|\n| Without the data area | All active instances | All active instances | The named instance |\n| With the data area | Only the instance that is named in the data area | All active instances | The named instance |\n\n## Server work management\n\nServer jobs are started by using a job description with the name of the instance, which must be in the instance library. If a job description with that name is not found in the instance library, job description QOND400 in library QRDARS is used (and can be changed if necessary).\n\nThe job description controls the following attributes of the server job:", - "page_start": 58, - "page_end": 58, - "source_file": "sg246915.pdf" - }, - { - "text": "/dumps/snap..YYMMDD.hhmmss.tgz\n\nIt takes a few minutes for the snap file to complete (longer if statesaves are included).\n\n - 4. 
The generated file can then be retrieved from the GUI clicking Settings → Support → Manual Upload Instructions twisty → Download Support Package and then, clicking Download Existing Package , as shown in Figure 13-71.\n\nFigure 13-71 Downloaded Existing Package\n\n", - "page_start": 752, - "page_end": 752, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_SHEN_2003.pdf", - "query": "At Shentel company, what determines an employees pension ?", - "target_page": 22, - "target_passage": "Pension benefits are based primarily on the employee's compensation and years of service", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## NOTE 22: PENSIONS\n\nWe have contributory and non-contributory defined benefit pension plans that are made available to most of our employees. The plans provide pensions based on years of service, years of contributions and earnings. We do not provide any non-pension post-retirement benefits. We also provide unfunded supplemental pension benefits to certain executives.\n\nThe assets of the defined benefit pension plans are held in segregated accounts isolated from our assets. We administer the defined benefit pension plans pursuant to applicable regulations, the Statement of Investment Policies and Procedures and to the mandate of the Pension Committee of the Board of Directors. The Pension Committee of the Board of Directors oversees our administration of the defined benefits pension plans, which includes the following principal areas:", - "page_start": 121, - "page_end": 121, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Retirement Benefits\n\nThe Company has defined contribution profit-sharing plans covering substantially all employees who are not participants in certain defined benefit plans. 
The Company's annual contribution to the defined contribution plans is based on employee eligible earnings and results of operations and amounted to $26,489,000, $23,524,000, and $24,826,000 in 2003, 2002, and 2001, respectively.\n\nThe Company sponsors defined benefit plans which include a limited number of salaried and hourly employees at certain subsidiaries. The Company's funding policy is generally to contribute annually the minimum actuarially computed amount. Net pension costs relating to these plans were $176,000; $0; and $0 for 2003, 2002, and 2001, respectively. The actuarial present value of obligations, less related plan assets at fair value, is not significant.\n\nThe Company also participates in a multiemployer plan, which provides defined benefits to certain of the Company's union\n\nemployees. Pension expense for this plan amounted to $309,000, $309,000, and $310,000 in 2003, 2002, and 2001, respectively.\n\n## Postretirement Health Care\n\nIn accordance with the guidelines of revised SFAS No. 
132, 'Employers' Disclosures about Pensions and other Postretirement Benefits,' the following table sets forth the funded status of the plan, reconciled to the accrued postretirement benefits cost recognized in the Company's balance sheet at:", - "page_start": 50, - "page_end": 50, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "\n\n## We must serve well to prosper - We must prosper to serve well\n\nShenTel Service Company · Shenandoah Long Distance Company · Shenandoah Mobile Company Shenandoah Network Company · Shenandoah Telephone Company · Shenandoah Valley Leasing Company Shenandoah Cable Television Company · ShenTel Communications Company\n\nShenandoah Personal Communications Company\n\nPO Box 459 Edinburg, VA 22824-0459 Phone 540-984-4141 · Fax 540-984-8192\n\nwww.shentel.com", - "page_start": 59, - "page_end": 59, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "Figure 18: Employment types in EU27, development 2005 to 2022 65 - Eurostat\n\n\n\nThe minor deviation of the sum of the different types of employment to the 100% 'Employed persons' is due to 'No response' answers. The data of part-time employees and of employees with a temporary contract are for the full year 2019, not for Q4.\n\nThe group 'employees' is characterised by two major contractual distinctions that are important for OSH: 1) full- or part-time work, and 2) the time limit of the contract (indefinite or temporary). 
Moreover, in many Member States there are major differences between employment contracts of private employers in comparison to public employers.\n\n## Definitions Eurostat 66\n\nEmployers = self-employed with employee: employing one or more employees: persons who work in their own business, professional practice or farm for the purpose of earning a profit and who employ at least one other person.\n\nSelf-employed: not employing any employees (self-employed without employees): persons who work in their business, professional practices or farm for the purpose of earning a profit and who employ no other persons.\n\nEmployees: persons who work for a public or private employer and who receive compensation in the form of wages, salaries, fees, gratuities, payment by result or in kind. Contributing family workers: persons who help another member of the family to run a farm or business, provided they are not classed as employees.", - "page_start": 46, - "page_end": 46, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## Performance Grants\n\nIn fiscal 2009 and 2008, the Executive Organization and Compensation Committee made annual awards of three-year performance grants to key officers. A target payout was established at the beginning of each three-year performance period. The actual payout at the end of the period is calculated based upon the Company's achievement of sales growth, return on sales, and total shareholder return targets. All performance periods had expired by June 30, 2011. During fiscal 2011 and 2010, the Company recorded $1,020 and $(231), respectively, of compensation expense (income) for achievement relative to the total shareholder return-based goals of the Company's performance grants. The liability at June 30, 2011 was $1,558; this was paid in fiscal 2012.\n\n## NOTE 10: BENEFIT PLANS\n\n## Retirement Savings Plan\n\nSubstantially all U.S. 
associates participate in the Applied Industrial Technologies, Inc. Retirement Savings Plan. Participants may elect to contribute up to 50% of their compensation, subject to Internal Revenue Code maximums. The Company makes a discretionary profit-sharing contribution to the Retirement Savings Plan generally based upon a percentage of the Company's U.S. income before income taxes and before the amount of the contribution (5% for fiscal 2012, 2011 and 2010). The Company partially matches 401(k) contributions by participants; this match was suspended from January 1, 2009 to June 30, 2010. The Company's expense for profit sharing and matching of associates' 401(k) contributions was $10,866, $11,251 and $4,891 during fiscal 2012, 2011 and 2010, respectively.\n\n## Deferred Compensation Plans\n\nThe Company has deferred compensation plans that enable certain associates of the Company to defer receipt of a portion of their compensation and non-employee directors to defer receipt of director fees. The Company funds these deferred compensation liabilities by making contributions to rabbi trusts. Assets held in these rabbi trusts consist of investments in money market and mutual funds and Company common stock.\n\n## Postemployment Benefit Plans\n\nThe Company provides the following postemployment benefits which, except for the Qualified Defined Benefit Retirement Plan, are unfunded:\n\n## Supplemental Executive Retirement Benefits Plan\n\nThe Company has a non-qualified pension plan to provide supplemental retirement benefits to certain officers. Benefits are payable beginning at retirement and determinable at retirement based upon a percentage of the participant's historical compensation. On December 19, 2011, the Executive Organization and Compensation Committee of the Board of Directors froze participant benefits (credited service and final average earnings) and entry into the Supplemental Executive Retirement Benefits Plan (SERP) effective December 31, 2011. 
This action constituted a plan curtailment. The plan liability was remeasured in conjunction with the curtailment using a 3.5% discount rate and participant final average earnings through the curtailment date. The remeasurement in conjunction with the curtailment resulted in an actuarial loss (recorded in other comprehensive income (loss)) of $302 ($492 loss, net of income tax of $190).\n\nThe curtailment is reflected in the Company's consolidated balance sheets as: 1) a reduction to the overall SERP liability (included in postemployment benefits) of $8,860, 2) a reduction to deferred tax assets of $3,411 and 3) an increase in accumulated other comprehensive income (loss) of $5,449. Prior service costs previously recorded through accumulated other comprehensive income (loss) were reclassified into the statements of consolidated income ($3,117 gross expense, net of income tax of $1,200). The gross expense is recorded in selling, distribution and administrative expense in fiscal 2012.\n\n## Key Executive Restoration Plan", - "page_start": 32, - "page_end": 32, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "This annual report contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934, including statements regarding our expectations, hopes, intentions, or strategies regarding the future. These statements are subject to certain risks and uncertainties that could cause actual results to differ materially from those anticipated in the forward-looking statements. Factors that might cause such a difference include, but are not limited to, changes in the interest rate environment, management's business strategy, national, regional and local market conditions, and legislative and regulatory conditions. 
The Company undertakes no obligation to publicly revise these forward-looking statements to reflect subsequent events or circumstances, except as required by law.\n\n## General\n\nShenandoah Telecommunications Company is a diversified telecommunications company providing both regulated and unregulated telecommunications services through its nine wholly owned subsidiaries. These subsidiaries provide local exchange telephone services, wireless personal communications services (PCS), as well as cable television, paging, Internet access, long distance, fiber optics facilities, and leased tower facilities. The Company is the exclusive provider of wireless mobility communications network products and services under the Sprint brand from Harrisonburg, Virginia to Harrisburg, York and Altoona, Pennsylvania. The Company refers to the Hagerstown, Maryland; Martinsburg, West Virginia; and Harrisonburg and Winchester, Virginia markets as its Quad State markets. The Company refers to the Altoona, Harrisburg, and York, Pennsylvania markets as its Central Penn markets. Competitive local exchange carrier (CLEC) services were established on a limited basis during 2002. In addition, the Company sells and leases equipment, mainly related to services it provides, and also participates in emerging services and technologies by direct investment in non-affiliated companies.\n\nThe Company reports revenues as wireless, wireline and other revenues. These revenue classifications are defined as follows: Wireless revenues are made up of the Personal Communications Company (a PCS Affiliate of Sprint), and the Mobile Company. Wireline revenues include the following subsidiary revenues in the financial results: Telephone Company, Network Company, Cable Television Company, and the Long Distance Company. Other revenues are comprised of the revenues of ShenTel Service Company, the Leasing Company, ShenTel Communications Company and the Holding Company. 
For additional information on the Company's business segments, see Note 14 to audited consolidated financial statements appearing elsewhere in this report.", - "page_start": 40, - "page_end": 40, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## Pension Obligations\n\nOur retiree pension plans had a funding deficit of approximately $172 million at December 31, 2013. We have been making special minimum monthly payments in addition to our regular contributions to eliminate the pension liability. During 2013, our funding deficit was reduced by $162 million.\n\nThe special payments, including contributions associated with benefits paid from the plans, were approximately $7 million in 2013. We expect our total estimated funding requirements to be $96 million in 2014 and to be adjusted annually thereafter, based on various market factors such as interest rates and expected returns and staffing assumptions.\n\nChanges in factors such as the discount rate, increase in compensation and the expected return on plan assets can affect the accrued benefit obligation, pension expense and the deficiency of plan assets over\n\naccrued obligations in the future. See Critical accounting estimates for more information.\n\n## Purchase of Annuities\n\nFrom time to time we have made additional lump-sum contributions to our pension plans, and the pension plans have purchased annuities from insurance companies to fund the pension benefit obligations for certain groups of retired employees in the plans. 
Purchasing the annuities relieves us of our primary responsibility for that portion of the accrued benefit obligations for the retired employees and eliminates the significant risk associated with the obligations.\n\nWe did not make any additional lump-sum contributions to our pension plans in 2013 or 2012, and the pension plans did not purchase additional annuities.\n\n## FINANCIAL RISK MANAGEMENT\n\nWe normally use three categories of derivative instruments to manage risks related to our business activities:\n\n| Categories | The risk it manages | Types of derivative instruments |\n|-------------------------|----------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|\n| Debt Derivatives | GLYPH<129> Impact of fluctuations in foreign exchange rates on principal and interest payments for US denominated long-term debt | GLYPH<129> Cross-currency interest rate exchange agreements GLYPH<129> Forward foreign exchange agreements (from time to time, as applicable) |\n| Expenditure Derivatives | GLYPH<129> Impact of fluctuations in foreign exchange rates on forecasted US dollar denominated expenditures | GLYPH<129> Forward foreign exchange agreements |\n| Equity Derivatives | GLYPH<129> Impact of fluctuations in share price on stock-based compensation expense | GLYPH<129> Total return swap agreements |\n\nWe also manage our exposure to fluctuating interest rates and we have fixed the interest rate on 95.3 % of our debt including short-term borrowings at December 31, 2013 (2012 - 100 % ).\n\n## Debt Derivatives\n\nWe use cross currency interest exchange agreements (Debt Derivatives), to hedge the foreign exchange risk on all of the principal and interest obligations of our US dollar denominated senior notes and debentures. 
At December 31, 2013 we used Debt Derivatives to hedge the foreign exchange risk on 100 % of the principal and interest obligations on all our US dollar denominated debt. We use Debt Derivatives for risk management purposes only.\n\nDuring 2013, we completed Debt Derivatives transactions as follows:", - "page_start": 65, - "page_end": 65, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "The assets of the defined benefit pension plans are invested and managed following all applicable regulations and the Statement of Investment Policies and Procedures, and reflect the characteristics and asset mix of each defined benefit pension plan. Investment and market return risk is managed by:\n\n - GLYPH<129> contracting professional investment managers to execute the investment strategy following the Statement of Investment Policies and Procedures and regulatory requirements\n - GLYPH<129> specifying the kinds of investments that can be held in the plans and monitoring compliance\n - GLYPH<129> using asset allocation and diversification strategies, and\n - GLYPH<129> purchasing annuities from time to time.\n\nThe funded pension plans are registered with the Office of the Superintendent of Financial Institutions and are subject to the Federal Pension Benefits Standards Act. The plans are also registered with the Canada Revenue Agency and are subject to the Canada Income Tax Act. 
The benefits provided under the plans and the contributions to the plans are funded and administered in accordance with all applicable legislation and regulations.", - "page_start": 121, - "page_end": 121, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "The cost of pensions is actuarially determined and takes into account the following assumptions and methods for pension accounting related to our defined benefit plans:\n\n - GLYPH<129> the expected rates of salary increases for calculating increases in future benefits\n - GLYPH<129> mortality rates for calculating the life expectancy of plan members, and\n - GLYPH<129> past service costs from plan amendments are immediately expensed in net income.\n\nWe recognize contributions to defined contribution plans as an employee benefit expense in operating costs in the consolidated statements of income in the periods the employees provide the related services.\n\nSee note 22 for more information about our pension plans.\n\n## Termination Benefits\n\nWe recognize termination benefits as an expense when we are committed to a formal detailed plan to terminate employment before the normal retirement date and it is not realistic that we will withdraw it.\n\n## Property, Plant and Equipment\n\nRecognition and Measurement\n\nWe recognize property, plant and equipment at cost, less accumulated depreciation and accumulated impairment losses.\n\nCost includes expenditures that are directly attributable to the acquisition of the asset. The cost of self-constructed assets also includes:", - "page_start": 101, - "page_end": 101, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## (f) Short-term investments and investment securities\n\nSecurities other than equity securities issued by subsidiaries and affiliates are classified into three categories: trading, held-to-maturity or other securities. Trading securities are carried at fair value and held-to-maturity securities are carried at amortized cost. 
Marketable securities classified as other securities are carried at fair value with changes in unrealized holding gain or loss, net of the applicable income taxes, included directly in shareholders' equity. Nonmarketable securities classified as other securities are carried at cost. Cost of securities sold is determined by the moving average method.\n\n## (g) Property, plant and equipment and depreciation\n\nDepreciation of property, plant and equipment of the Company and its consolidated subsidiaries is calculated principally by the straight-line method based on the estimated useful lives and the residual value determined by the Company. Significant renewals and additions are capitalized at cost. Maintenance and repairs are charged to income.\n\n## (h) Leases\n\nNoncancellable lease transactions that transfer substantially all risks and rewards associated with the ownership of assets are accounted for as finance leases. All other lease transactions are accounted for as operating leases and relating payments are charged to income as incurred. See Note 2(c).\n\n## (i) Retirement benefits\n\nAccrued retirement benefits for employees have been provided mainly at an amount calculated based on the retirement benefit obligation and the fair value of the pension plan assets as of balance sheet date, as adjusted for unrecognized net retirement benefit obligation at transition, unrecognized actuarial gain or loss, and unrecognized prior service cost. The retirement benefit obligation is attributed to each period by the straight-line method over the estimated years of service of the eligible employees. 
The net retirement benefit obligation at transition is being amortized principally over a period of 15 years by the straight-line method.\n\nActuarial gain or loss is amortized in the year following the year in which the gain or loss is recognized primarily by the straight-line method over periods (principally 8 years through 18 years) which are shorter than the average remaining years of service of the employees. Certain foreign consolidated subsidiaries have adopted the corridor approach for the amortization of actuarial gain and loss.\n\nPrior service cost is being amortized as incurred by the straight-line method over periods (principally 9 years through 15 years) which are shorter than the average remaining years of service of the employees.\n\nSee Note 9 for the method of accounting for the separation of the substitutional portion of the benefit obligation from the corporate portion of the benefit obligation under Welfare Pension Fund Plan.\n\nSee Note 2(b) for adoption of a new accounting standard by a consolidated subsidiary in the United Kingdom.\n\n## (j) Income taxes\n\nDeferred tax assets and liabilities have been recognized in the consolidated financial statements with respect to the differences between financial reporting and the tax bases of the assets and liabilities, and were measured using the enacted tax rates and laws which will be in effect when the differences are expected to reverse.", - "page_start": 78, - "page_end": 78, - "source_file": "OTC_NSANY_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_SHEN_2003.pdf", - "query": "At the end of 2003, how many available-for-sales investments did Shenandoah company count in its portfolio ?", - "target_page": 53, - "target_passage": "The Company’s available-for-sale portfolio at December 31, 2003 is made up of two investments", - "chunk_present": { - "presence": true, - "index": 9 - } - }, - "top_chunk": [ - { - "text": "\n\n## We must serve well to prosper - We must prosper to 
serve well\n\nShenTel Service Company · Shenandoah Long Distance Company · Shenandoah Mobile Company Shenandoah Network Company · Shenandoah Telephone Company · Shenandoah Valley Leasing Company Shenandoah Cable Television Company · ShenTel Communications Company\n\nShenandoah Personal Communications Company\n\nPO Box 459 Edinburg, VA 22824-0459 Phone 540-984-4141 · Fax 540-984-8192\n\nwww.shentel.com", - "page_start": 59, - "page_end": 59, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## SHENANDOAH TELECOMMUNICATIONS COMPANY AND SUBSIDIARIES NOTES TO CONSOLIDATED FINANCIAL STATEMENTS", - "page_start": 38, - "page_end": 38, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## Note 14. Segment Reporting\n\nThe Company, as a holding company with various operating subsidiaries, has identified ten reporting segments based on the products and services each provides. Each segment is managed and evaluated separately because of differing technologies and marketing strategies.\n\nThe reporting segments and the nature of their activities are as follows:\n\n| Shenandoah Telecommunications Company (Holding) | Holding company, which invests in both affiliated and non-affiliated companies. |\n|---------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| Shenandoah Telephone Company (Telephone) | Provides both regulated and unregulated telephone services and leases fiber optic facilities primarily throughout the Northern Shenandoah Valley. |\n| Shenandoah Cable Television Company (CATV) | Provides cable television service in Shenandoah County. |\n| ShenTel Service Company (ShenTel) | Provides Internet access to a multi-state region surrounding the Northern Shenandoah Valley, hosts Travel 511 for Virginia, and sells and services telecommunication equipment. 
|\n| Shenandoah Valley Leasing Company (Leasing) | Finances purchases of telecommunications |\n| Shenandoah Mobile Company (Mobile) | Provides tower rental space in the Company's PCS markets and paging services throughout the Northern Shenandoah Valley. |\n| Shenandoah Long Distance Company (Long Distance) | Provides long distance services. |\n| Shenandoah Network Company (Network) | Leases interstate fiber optic facilities. |\n| ShenTel Communications Company (Shen Comm) | Provides DSL services as a CLEC operation. |\n| Shenandoah Personal Communications Company (PCS) | As a PCS Affiliate of Sprint, provides digital wireless service to a portion of a four-state area covering the region from Harrisburg, York and Altoona, Pennsylvania, to Harrisonburg, Virginia. |\n\nThe accounting policies of the segments are the same as those described in the summary of significant accounting policies. Each segment accounts for inter-segment sales and transfers as if the sales or transfers were to outside parties.\n\nIncome (loss) recognized from equity method nonaffiliated investees by segment is as follows:\n\n| Year | Holding | Telephone | Consolidated Totals |\n|----------------|----------------|----------------|-----------------------|\n| (in thousands) | (in thousands) | (in thousands) | (in thousands) |\n| 2003 | $ (441) | $ 65 | $ (376) |\n| 2002 | $ (822) | $ 45 | $ (777) |\n| 2001 | $ (1,218) | $104 | $ (1,114) |\n\n\n\n■", - "page_start": 36, - "page_end": 36, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## asset portfolio\n\n## Apartment Portfolio", - "page_start": 17, - "page_end": 17, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## OUR BUSINESS\n\nShenandoah Telecommunications Company is a diversified telecommunications holding company which provides various telecommunications services through its operating subsidiaries. 
These services include: wireline telephone service, primarily in Shenandoah County and small service areas in Rockingham, Frederick, and Warren counties, all in Virginia; cable television service in Shenandoah County; unregulated telecommunications equipment sales and services; online information and Internet access provided to the multi-state region surrounding the Northern Shenandoah Valley of Virginia; financing of purchases of telecommunications facilities and equipment; paging services in the Northern Shenandoah Valley; resale of long distance services; operation and maintenance of an interstate fiber optic network; wireless personal communications services (PCS) and a tower network in the four-state region from Harrisonburg, Virginia to the Harrisburg, York and Altoona, Pennsylvania markets.\n\n## ANNUAL MEETING\n\nThe Board of Directors extends an invitation to all shareholders to attend the Annual Meeting of Shareholders. The meeting will be held at 11:00 AM (EST) on April 20, 2004 in the Auditorium of the Company's offices at the Shentel Center, 500 Mill Road, Edinburg, Virginia.\n\n## FORMS 10-K, 10-Q, and 8-K\n\nThe Company files periodic reports with the Securities and Exchange Commission. The Company's Annual Report on Form 10-K, Quarterly Reports on Form 10-Q, and Current Reports on Form 8-K, along with any amendments to these reports, are available to shareholders through the Company's website, www.shentel.com. This website also has recent news releases and other information potentially of interest to shareholders.\n\nA copy of the Company's Annual Report on Form 10-K, without exhibits, may be obtained, without charge, by writing to Shenandoah Telecommunications Company, 124 South Main Street, P.O. Box 459, Edinburg, Virginia 22824, Attention: Secretary.\n\n## MARKET AND DIVIDEND INFORMATION\n\nThe Company's stock is traded on the NASDAQ National Market under the symbol 'SHEN.' 
Information on the high and low sales prices per share of common stock as reported by the NASDAQ National Market for the last two years is set forth below: The Company's stock is traded on the NASDAQ National Market under the symbol 'SHEN.' Information on the high and low closing prices per share of common stock as reported by the NASDAQ National Market for the last two years is set forth below:", - "page_start": 58, - "page_end": 58, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## Significant Transactions\n\nThe Company had several significant transactions during 2003. The largest was the sale of its 66% interest in the Virginia 10 RSA cellular operation, as described above. The Company originally entered into the agreement with Verizon Wireless in November 2002. The Company was the general partner of the limited partnership which operated an analog cellular network in the six-county area of Northwestern Virginia, including Clarke, Frederick, Page, Rappahannock, Shenandoah, and Warren counties, and the city of Winchester. The sales price was $37.0 million plus the Company's 66% share of the partnership's working capital, which was approximately $1.7 million. The Company was required to do a working capital true up following the closing, from which the Company recorded a charge for $23 thousand after taxes. In the fourth quarter the Company recorded an additional charge for taxes of $0.2 million to reflect the consolidated effective tax rate based on the final operating results for the year.\n\nThe sale of this business is reflected in the discontinued operations section of the income statement along with the results of operations for the two months of 2003 that the operation remained a part of the Company.\n\n\n\n■", - "page_start": 41, - "page_end": 41, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## SHENANDOAH TELECOMMUNICATIONS COMPANY AND SUBSIDIARIES NOTES TO CONSOLIDATED FINANCIAL STATEMENTS\n\n## Note 14. 
Segment Reporting (Continued)\n\nSelected financial data for each segment is as follows:", - "page_start": 37, - "page_end": 37, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## SHENANDOAH TELECOMMUNICATIONS COMPANY AND SUBSIDIARIES NOTES TO CONSOLIDATED FINANCIAL STATEMENTS\n\n## Note 1. Summary of Significant Accounting Policies\n\nDescription of business: Shenandoah Telecommunications Company and subsidiaries (the Company) provides telephone service, wireless personal communications service (PCS) under the Sprint brand name, cable television, unregulated communications equipment sales and services, Internet access, and paging services. In addition, the Company leases towers and operates and maintains an interstate fiber optic network. The Company's operations are located in the four state region surrounding the Northern Shenandoah Valley of Virginia. Pursuant to a management agreement with Sprint Communications Company and its related parties (collectively, 'Sprint'), the Company is the exclusive PCS Affiliate of Sprint providing wireless mobility communications network products and services in the geographic area extending from Altoona, Harrisburg and York, Pennsylvania, south through Western Maryland, and the panhandle of West Virginia, to Harrisonburg, Virginia. The Company is licensed to use the Sprint brand name in this territory, and operates its network under the Sprint radio spectrum license (Note 7). A summary of the Company's significant accounting policies follows:\n\nStock split: All share and per share information reflect the two for one stock split announced in October 2003, to shareholders of record as of the close of business on January 30, 2004. The additional shares were distributed on February 20, 2004. The effective date of the split is February 23, 2004. 
All previously reported share and per share data included herein are retroactively adjusted to reflect the split.\n\nPrinciples of consolidation: The consolidated financial statements include the accounts of all wholly owned subsidiaries and other entities where effective control is exercised. All significant intercompany balances and transactions have been eliminated in consolidation.\n\nUse of estimates: Management of the Company has made a number of estimates and assumptions related to the reporting of assets and liabilities, the disclosure of contingent assets and liabilities at the date of the consolidated financial statements and the reported amounts of revenues and expenses during the reporting periods. Management reviews its estimates, including those related to recoverability and useful lives of assets as well as liabilities for income taxes and pension benefits. Changes in facts and circumstances may result in revised estimates and actual results could differ from those reported estimates.\n\nCash and cash equivalents: The Company considers all temporary cash investments purchased with a maturity of three months or less to be cash equivalents. The Company places its temporary cash investments with high credit quality financial institutions. At times, these investments may be in excess of FDIC insurance limits. 
Cash and cash equivalents were $28.7million, $2.2 million, and $2.0 million at December 31, 2003, 2002 and 2001, respectively.", - "page_start": 19, - "page_end": 19, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## SHENANDOAH TELECOMMUNICATIONS COMPANY AND SUBSIDIARIES\n\n## 2003 Financial Statements\n\n## INDEPENDENT AUDITOR'S REPORT\n\n\n\nThe Board of Directors and Shareholders Shenandoah Telecommunications Company:\n\nWe have audited the accompanying consolidated balance sheets of Shenandoah Telecommunications Company and subsidiaries (the Company), as of December 31, 2003, 2002, and 2001, and the related consolidated statements of income, shareholders' equity and comprehensive income, and cash flows for the years then ended. These consolidated financial statements are the responsibility of the Company's management. Our responsibility is to express an opinion on these consolidated financial statements based on our audits.\n\nWe conducted our audits in accordance with auditing standards generally accepted in the United States of America. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement. An audit includes examining, on a test basis, evidence supporting the amounts and disclosures in the financial statements. An audit also includes assessing the accounting principles used and significant estimates made by management, as well as evaluating the overall financial statement presentation. 
We believe that our audits provide a reasonable basis for our opinion.\n\nIn our opinion, the consolidated financial statements referred to above present fairly, in all material respects, the financial position of Shenandoah Telecommunication s Company and subsidiaries as of December 31, 2003, 2002 and 2001, and the results of their operations and their cash flows for the years then ended, in conformity with accounting principles generally accepted in the United States of America.\n\nAs discussed in note 1 to the consolidated financial statements, the Company changed its method of accounting for goodwill in 2002. As further discussed in note 1 to the consolidated financial statements, the Company changed its method of accounting for asset retirement obligations in 2003.\n\n\n\n■", - "page_start": 12, - "page_end": 12, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## SHENANDOAH TELECOMMUNICATIONS COMPANY AND SUBSIDIARIES MANAGEMENT'S DISCUSSION AND ANALYSIS OF FINANCIAL CONDITION AND RESULTS OF OPERATIONS\n\nis the intent of the Company to evaluate whether to hold or sell parts or all of each investment on an individual basis. At December 31, 2003, the Company had external investments totaling $7.5 million.\n\nIn 2004, the Company anticipates taking advantage of a conversion feature on its Rural Telephone Bank stock. The Company will convert a portion of its holdings into a different class of stock that will pay cash dividends each year. The bank declares a dividend rate that varies, each year. The range of the dividend has been between 4.2% and 5.65% over the last 5 years. The rate in the two most recent years was 4.2%. 
This transaction is estimated to provide the Company with approximately $0.3 million in dividend income each year, based on the 2003 dividend rate of 4.2% and assuming we had converted the stock at the beginning of 2003.\n\n## Financial Condition, Liquidity and Capital Resources\n\nThe Company has four principal sources of funds available to meet the financing needs of its operations, capital projects, debt service, investments and potential dividends. These sources include cash flows from operations, cash and cash equivalents, the liquidation of investments and borrowings. Management routinely considers the alternatives available to determine what mix of sources are best suited for the long-term benefit of the Company.\n\nDuring the 2003 year, with the closing of the sale of the Virginia 10 RSA Limited partnership interest, the Company evaluated its capital requirements, and as a result eliminated its $20.0 million revolving line of credit with CoBank in May 2003. The Company had paid off the outstanding balance in early 2003, and did not borrow on it during the remaining time the facility was in place. In light of the $27.9 million balance in cash equivalent investments, management determined additional debt capacity is not necessary for the near-term.\n\nThe term debt loan agreements with CoBank have three financial covenants. These are measured on a trailing 12-month basis and are calculated on continuing operations. The first of the covenants is the total leverage ratio, which is total debt to operating cash flow. This ratio must remain below 3.5, and as of December 31, 2003 it was 1.2. The second measure is equity to total assets, which must be 35% or higher. At December 31, 2003 the ratio was 57.3%. The third measure is the debt service coverage ratio, which is operating cash flow to scheduled debt service, which must exceed 2.0. At December 31, 2003 this measure was 4.3. Management believes the Company will meet these covenant measures for the coming year. 
The Company has pledged all of its affiliates capital stock as collateral for the CoBank loans.\n\nThe Company's covenants on the RUS/RTB debt require the pledge of all current and future assets of the Telephone subsidiary until the debt is retired.\n\nAnother external source of funding is a $0.5 million unsecured, variable rate revolving line of credit with SunTrust Bank. This facility is in place to allow the Company to better manage its daily cash balances. The facility expires May 31, 2004. Management anticipates renewing this facility with SunTrust Bank under similar terms and conditions. At December 31, 2003 there were no balances outstanding under this facility.\n\nDue to make-whole provisions in the Company's debt agreements it is currently uneconomical for the Company to prepay any debt.\n\nThe Company is obligated to make future payments under various contracts it has entered into, including amounts pursuant to its various long-term debt facilities, and non-cancelable operating lease agreements for retail space, tower space and cell sites. 
Expected future minimum contractual cash obligations for the next five years and in the aggregate at December 30, 2003, are as follows:", - "page_start": 53, - "page_end": 53, - "source_file": "NASDAQ_SHEN_2003.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_SHEN_2003.pdf", - "query": "What was the main reason of the decrease of customer base of the Shenandoah and Virginia 10 RSA partnership ?", - "target_page": 51, - "target_passage": "he decline was the result of competition with digital technologies and increased competition from national carriers in the area", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "\n\n## We must serve well to prosper - We must prosper to serve well\n\nShenTel Service Company · Shenandoah Long Distance Company · Shenandoah Mobile Company Shenandoah Network Company · Shenandoah Telephone Company · Shenandoah Valley Leasing Company Shenandoah Cable Television Company · ShenTel Communications Company\n\nShenandoah Personal Communications Company\n\nPO Box 459 Edinburg, VA 22824-0459 Phone 540-984-4141 · Fax 540-984-8192\n\nwww.shentel.com", - "page_start": 59, - "page_end": 59, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## OUR BUSINESS\n\nShenandoah Telecommunications Company is a diversified telecommunications holding company which provides various telecommunications services through its operating subsidiaries. 
These services include: wireline telephone service, primarily in Shenandoah County and small service areas in Rockingham, Frederick, and Warren counties, all in Virginia; cable television service in Shenandoah County; unregulated telecommunications equipment sales and services; online information and Internet access provided to the multi-state region surrounding the Northern Shenandoah Valley of Virginia; financing of purchases of telecommunications facilities and equipment; paging services in the Northern Shenandoah Valley; resale of long distance services; operation and maintenance of an interstate fiber optic network; wireless personal communications services (PCS) and a tower network in the four-state region from Harrisonburg, Virginia to the Harrisburg, York and Altoona, Pennsylvania markets.\n\n## ANNUAL MEETING\n\nThe Board of Directors extends an invitation to all shareholders to attend the Annual Meeting of Shareholders. The meeting will be held at 11:00 AM (EST) on April 20, 2004 in the Auditorium of the Company's offices at the Shentel Center, 500 Mill Road, Edinburg, Virginia.\n\n## FORMS 10-K, 10-Q, and 8-K\n\nThe Company files periodic reports with the Securities and Exchange Commission. The Company's Annual Report on Form 10-K, Quarterly Reports on Form 10-Q, and Current Reports on Form 8-K, along with any amendments to these reports, are available to shareholders through the Company's website, www.shentel.com. This website also has recent news releases and other information potentially of interest to shareholders.\n\nA copy of the Company's Annual Report on Form 10-K, without exhibits, may be obtained, without charge, by writing to Shenandoah Telecommunications Company, 124 South Main Street, P.O. Box 459, Edinburg, Virginia 22824, Attention: Secretary.\n\n## MARKET AND DIVIDEND INFORMATION\n\nThe Company's stock is traded on the NASDAQ National Market under the symbol 'SHEN.' 
Information on the high and low sales prices per share of common stock as reported by the NASDAQ National Market for the last two years is set forth below: The Company's stock is traded on the NASDAQ National Market under the symbol 'SHEN.' Information on the high and low closing prices per share of common stock as reported by the NASDAQ National Market for the last two years is set forth below:", - "page_start": 58, - "page_end": 58, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## Note 14. Segment Reporting\n\nThe Company, as a holding company with various operating subsidiaries, has identified ten reporting segments based on the products and services each provides. Each segment is managed and evaluated separately because of differing technologies and marketing strategies.\n\nThe reporting segments and the nature of their activities are as follows:\n\n| Shenandoah Telecommunications Company (Holding) | Holding company, which invests in both affiliated and non-affiliated companies. |\n|---------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| Shenandoah Telephone Company (Telephone) | Provides both regulated and unregulated telephone services and leases fiber optic facilities primarily throughout the Northern Shenandoah Valley. |\n| Shenandoah Cable Television Company (CATV) | Provides cable television service in Shenandoah County. |\n| ShenTel Service Company (ShenTel) | Provides Internet access to a multi-state region surrounding the Northern Shenandoah Valley, hosts Travel 511 for Virginia, and sells and services telecommunication equipment. 
|\n| Shenandoah Valley Leasing Company (Leasing) | Finances purchases of telecommunications |\n| Shenandoah Mobile Company (Mobile) | Provides tower rental space in the Company's PCS markets and paging services throughout the Northern Shenandoah Valley. |\n| Shenandoah Long Distance Company (Long Distance) | Provides long distance services. |\n| Shenandoah Network Company (Network) | Leases interstate fiber optic facilities. |\n| ShenTel Communications Company (Shen Comm) | Provides DSL services as a CLEC operation. |\n| Shenandoah Personal Communications Company (PCS) | As a PCS Affiliate of Sprint, provides digital wireless service to a portion of a four-state area covering the region from Harrisburg, York and Altoona, Pennsylvania, to Harrisonburg, Virginia. |\n\nThe accounting policies of the segments are the same as those described in the summary of significant accounting policies. Each segment accounts for inter-segment sales and transfers as if the sales or transfers were to outside parties.\n\nIncome (loss) recognized from equity method nonaffiliated investees by segment is as follows:\n\n| Year | Holding | Telephone | Consolidated Totals |\n|----------------|----------------|----------------|-----------------------|\n| (in thousands) | (in thousands) | (in thousands) | (in thousands) |\n| 2003 | $ (441) | $ 65 | $ (376) |\n| 2002 | $ (822) | $ 45 | $ (777) |\n| 2001 | $ (1,218) | $104 | $ (1,114) |\n\n\n\n■", - "page_start": 36, - "page_end": 36, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## SHENANDOAH TELECOMMUNICATIONS COMPANY AND SUBSIDIARIES NOTES TO CONSOLIDATED FINANCIAL STATEMENTS", - "page_start": 38, - "page_end": 38, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## Significant Transactions\n\nThe Company had several significant transactions during 2003. The largest was the sale of its 66% interest in the Virginia 10 RSA cellular operation, as described above. 
The Company originally entered into the agreement with Verizon Wireless in November 2002. The Company was the general partner of the limited partnership which operated an analog cellular network in the six-county area of Northwestern Virginia, including Clarke, Frederick, Page, Rappahannock, Shenandoah, and Warren counties, and the city of Winchester. The sales price was $37.0 million plus the Company's 66% share of the partnership's working capital, which was approximately $1.7 million. The Company was required to do a working capital true up following the closing, from which the Company recorded a charge for $23 thousand after taxes. In the fourth quarter the Company recorded an additional charge for taxes of $0.2 million to reflect the consolidated effective tax rate based on the final operating results for the year.\n\nThe sale of this business is reflected in the discontinued operations section of the income statement along with the results of operations for the two months of 2003 that the operation remained a part of the Company.\n\n\n\n■", - "page_start": 41, - "page_end": 41, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "INDEPENDENT AUDITOR\n\nShenandoah Telecommunications Company KPMG LLP 124 South Main Street 124 South Main Street\n\n124 South Main Street 1021 East Cary Street Edinburg, VA 22824 Richmond, VA 23219 Edinburg, VA 22824\n\n## CORPORATE HEADQUARTERS CORPORATE HEADQUARTERS\n\n## INDEPENDENT AUDITOR\n\n124 South Main Street 1021 East Cary Street 1021 East Cary Street\n\nEdinburg, VA 22824 Richmond, VA 23219\n\n## SHAREHOLDERS' QUESTIONS AND STOCK TRANSFERS\n\nCALL (540) 984-5200 Transfer Agent - Common Stock\n\nSHAREHOLDERS' QUESTIONS AND STOCK TRANSFERS CALL (540) 984-5200\n\nTransfer Agent - Common Stock Shenandoah Telecommunications Company\n\nShenandoah Telecommunications Company P.O. Box 459\n\nP.O. Box 459 Edinburg, VA 22824\n\nEdi b\n\nVA22824\n\nThis Annual Report to Shareholders contains forward-looking statements. 
These statements are subject to certain risks and uncertainties that could cause actual results to differ materially from those anticipated in the forward-looking statements. Factors that might cause such a difference include, but are not limited to: changes in the interest rate environment; management's business strategy; national, regional, and local market conditions; and legislative and regulatory conditions. Readers should not place undue reliance on forward-looking statements which reflect management's view only as of the date hereof. The Company undertakes no obligation to publicly revise these forward-looking statements to reflect subsequent events or circumstances, except as required by law.", - "page_start": 58, - "page_end": 58, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "- - SSH-2 RSA", - "page_start": 778, - "page_end": 778, - "source_file": "sg247938.pdf" - }, - { - "text": "\n\nF or over 100 years Shenandoah Telecommunications Company has been committed to providing outstanding service to our customers. Our employees take that same dedication after hours to make a difference in their community.\n\nWe take this opportunity to share with you, our shareholders, the stories of just a few of your dedicated employees.\n\nPatty Pomeroy\n\n\n\nVolunteerism is in Patty Pomeroy's blood. Her grandfather was a dispatcher for the rescue squad in Middletown, VA for 25 years and her grandmother was in the ladies auxiliary. Her father was a charter member of the Middletown Rescue Squad. In 1997, Patty, a customer service representative at Shentel for four years, continued the family tradition by earning her Emergency Medical Technician certification and going to 'work' for the Strasburg Rescue Squad. Patty is the administrator of membership recruitment and retention for the squad and is the liaison coordinator for junior squad members under 18. 
It is her job to make sure that new members are brought in to the squad and current members stay active.\n\n'There is a great satisfaction that comes from knowing that what you can do will help people.'\n\nJeff Beard has been an installer repairman with Shentel for almost five years. Two years ago, Jeff helped start Project Isaiah 58, a faith-based recovery ministry that reaches out to people who are struggling with addiction. Project Isaiah 58 has weekly group meetings in Winchester, Woodstock and Warrenton, VA. Jeff, who lives in Winchester, participates in the group meetings and also makes time to meet one-on-one with people who need personal attention.\n\n'I feel the need to reach out to people who are suffering.'\n\nJeff Beard\n\n\n\nJohn Gardner has been with Shentel for two years as a PCS technician in Central Pennsylvania, but for almost a year of that time he was on Naval Reserve duty in Sasebo, Japan. John joined the Reserves after serving 10 years of active duty. In October 2002, he was activated under Noble Eagle-Enduring Freedom as part of the increase in security at bases around the world. John worked on Motorola radios and repeater systems while stationed in Japan. It was tough for the serviceman to be away from his wife and children, but John believes very strongly in serving his country.\n\n'Being in the Reserves is a way for me to be a civilian and still serve my country.'\n\nJohn Gardner\n\nAt Shentel, George Brinkley, the store manager in Front Royal, VA, is known for being one of the biggest fund-raisers for the Shenandoah County American Cancer Society Relay for Life event. In his six years at the Company, George has raised nearly $20,000. In 2003, he raised $4,246 and was recognized as the top individual fund-raiser for the entire event.\n\nIn 2002, George was chairman of the parade committee for the Woodstock, VA 250th anniversary celebration. 
Under George's leadership, the 26-member committee worked for a year preparing for the parade, which was the largest in the town's history.\n\n'I just have a knack for volunteering. I want to make my community better any way I can.'\n\nGeorge Brinkley\n\n\n\n■", - "page_start": 4, - "page_end": 4, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "2003 was the 10th anniversary of Shentel's decision to enter the PCS business and the 8th year operating as a Sprint PCS Affiliate. This year was a significant milestone for Shentel's PCS business, as we posted our first profitable quarter and recorded net income for the year of $0.3 million versus a net loss of $5.4 million in 2002.\n\nOur Sprint PCS wireless customer base continues to grow, with year-end customers at 85,139 spread from Harrisonburg, Virginia to Harrisburg, Pennsylvania. Our customers are averaging approximately 700 minutes of usage per month and we have one of the lowest customer churn rates in the industry. To keep up with this growth and improve our service, we continued investing in additional network facilities. We added capacity to 26 existing tower sites and installed 16 new tower locations bringing our total sites to 253. Our plan is to add capacity and build additional sites in 2004 in order to meet expected growth.\n\nWe added a new type of customer in 2003. Through Sprint's relationship with its wholesale cutomers, more than 11,000 pre-paid customers were added to our network. These pre-paid accounts, usually for customers with no established credit, are a low cost method to increase customers. They can purchase phones and some minutes at various convenience, electronic or department stores in addition to one of our company locations. When needed, they can easily purchase additional minutes.\n\nCamera phones and e-mailing pictures were hot in 2003. We now offer phones that can take and send a 15 second video. 
Late in the year, we launched Spirit PCS ReadyLink sm , the Sprint walkie-talkie style service. It is hoped that these new services will be major sales drivers in 2004.\n\nIn 2003, we focused on improving our distribution channels. We expanded and relocated our stores in Harrisonburg and Winchester, Virginia to handle our growing customer base. At our Edinburg, Virginia store, we expanded both our hours and office space. We continue to increase our direct sales force to expand our base of business customers. To make it convenient for our potential customers, we also grew the number of local third-party sales partners.\n\n\n\nA much publicized development in our industry was the introduction of Wireless Local Number Portability (WLNP) on November 24 th , 2003. Starting on that day, customers in the 100 largest population centers in the United States were able to change wireless carriers while keeping their existing phone number. WLNP will be available in the entire country on May 24, 2004. To date, this change has had only a minor impact on Shentel's customer base.\n\nWe continue to work to make PCS a growth vehicle of revenue and net income for Shenandoah Telecommunications Company.\n\n■", - "page_start": 10, - "page_end": 10, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## SHENANDOAH TELECOMMUNICATIONS COMPANY AND SUBSIDIARIES NOTES TO CONSOLIDATED FINANCIAL STATEMENTS\n\n## Note 14. Segment Reporting (Continued)\n\nSelected financial data for each segment is as follows:", - "page_start": 37, - "page_end": 37, - "source_file": "NASDAQ_SHEN_2003.pdf" - } - ] - }, - { - "references": { - "source_file": "maiis-user-manual.pdf", - "query": "As a product manager, how can I reject an inventory in NAIIS ?", - "target_page": 38, - "target_passage": "Log in as PM. Click on “View Inventories Progress” under sub menu “Submission Management”. The “View Inventories Progress” screen appears. 
Select the appropriate inventory by clicking the Inventory name under column “Name” Press the “Reject” button ", - "chunk_present": { - "presence": true, - "index": 7 - } - }, - "top_chunk": [ - { - "text": "## 2.2 Pending NAIIS features\n\n## List of pending functionalities in NAIIS:\n\n-----------------------------------------\n\n - 1. Web services integration for help desk\n - 2. Display of information in 5 remaining UN languages.\n\n## 2.3 Contact\n\nRequests for access to, inquiries on the use of the software, and comments on the design and functionalities of the application should be sent to the dedicated e-mail address naiisapp@unfccc.int .", - "page_start": 4, - "page_end": 4, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "## 10 Submission management\n\n## 10.1 Workflow\n\nCreating and preparing an inventory, generating tables for checking by the NFP and approving and/or rejecting submission, follows a number of steps known collectively as a workflow. This chapter describes the workflow relating to the submission of the GHG inventory/(ies), which users should follow to create, prepare, and send GHG inventories for internal checking, and approval/rejection of the submission by the NFP, within the NAIIS web application (figure 52).\n\nFigure 52: Non-Annex I Inventory Software workflow\n\n\n\n## 10.2 Start of inventory/submission (NFP or PM)\n\nThis procedure allows the NFP or PM to start a new (created) inventory. The existing data for the inventory year identified will be made available in the new inventory/submission.\n\nThese are the steps to start a new inventory:\n\n - 1. Click on 'View Inventories Progress' under sub menu 'Submission Management' (figure 53).\n\nFigure 53. View Inventories Progress sub menu\n\n\n\n - 2. The 'View Inventories Progress' screen appears (figure 54).\n - 3. 
Select the appropriate inventory by clicking the box under column 'Working Inventory' (figure 54, a).\n\n*** Note: The selected appropriate inventory should be in status 'created' (figure 54, b)", - "page_start": 34, - "page_end": 34, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "## 3.2.2 Create, Start, Add new and View GHG inventory year\n\nThese functions allow the NFP and PM to create or edit a GHG inventory within the NAIIS software.\n\n## 3.2.2.1 Create a new GHG inventory or Start a GHG inventory year\n\n## 3.2.2.1.1 Create a new GHG inventory\n\n## Note : This step can ONLY be undertaken by the NFP or PM !\n\nIn order to create one or several GHG inventories, the following steps can be done by the NFP or PM:\n\n -  Log in as NFP or PM\n -  Hover the cursor on 'Submission Management' menu and click on the 'View Inventories Progress' button. (see Figure 5). Left click on the '+' sign will create a new GHG inventory. (see Figure 6)\n\nThe new GHG Inventory name will be automatically generated by the NAIIS system, as follows:\n\n\\_\\_\\_ Inventory\n\nFor example: Paraguay\\_2013\\_1\\_Inventory or Bhutan\\_2014\\_2\\_Inventory\n\n## Figure 5. Create new GHG inventory screen\n\nFigure 6. New GHG inventory created screen\n\n\n\n", - "page_start": 7, - "page_end": 7, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "## 10.5.2 Rejection of an inventory\n\n - 1. Log in as NFP.\n - 2. Click on 'View Inventories Progress' under sub menu 'Submission Management'.\n - 3. The 'View Inventories Progress' screen appears.\n - 4. Select the appropriate inventory by clicking the Inventory name under column 'Name' (figure 66).\n - 5. Press the 'Send for Rejection' button (figure 66, b).\n\nOnce the 'Send for Rejection' button was pressed, the status of the selected inventory changes to 'awaiting\\_rejection' (figure 67, a).\n\n - *** Note: A notification email will be sent to the PM that the inventory has been rejected. 
Therefore, the PM will be able to reject the submission. Proceed to section 10.4.2.\n\nFigure 66. Work on Inventories screen - Rejection of an inventory - Status = awaiting\\_approvalFigure 67. Work on Inventories screen - Rejection of an inventory - Status = rejected\\_approval\n\n\n\n", - "page_start": 40, - "page_end": 40, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "\n\n## NAIIS Web Application\n\n(Release version 1.1.3)\n\n## User Manual\n\n(As of 10 February 2014)", - "page_start": 0, - "page_end": 0, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "## 10.5 Approval or Rejection of an inventory (NFP)\n\nThis section describes how the NFP approves or rejects an inventory after being sent for approval by the PM (See section 10.4).\n\n## 10.5.1 Approval of an inventory\n\n - 1. Log in as NFP.\n - 2. Click on 'View Inventories Progress' under sub menu 'Submission Management'.\n - 3. The 'View Inventories Progress' screen appears.\n - 4. Select the appropriate inventory by clicking the Inventory name under column 'Name' (figure 64).\n - 5. Press the 'Approve' button (figure 64, b).\n\nOnce the 'Approve' button was pressed, the status of the selected inventory changes to 'approved' (figure 65, b).\n\n*** Note: A notification email will be sent to the PM that the inventory has been approved. Therefore, the PM may proceed to selecting the tables for preparing the official submission (See section 10.6).\n\nFigure 64. Work on Inventories screen - Approve an inventory - Status = awaiting\\_approval\n\n\n\nFigure 65. Work on Inventories screen - Approve an inventory - Status = approved\n\n", - "page_start": 39, - "page_end": 39, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "## Figure 54. View Inventories Progress screen\n\nFigure 56. Work on Inventories screen\n\n\n\n - 4. Click on 'Work on Inventories' under 'Submission' (figure 55).\n\n## Figure 55. Work on Inventories sub menu\n\n\n\n - 5. 
Click the appropriate Inventory year on 'Work on Inventories' under 'Submission' (figure 56, a).\n - 6. Press the 'Start Inventory' button to start the inventory (figure 56, b). Once pressed, the status changes to 'started' (figure 57).\n\n*** Once the 'Start Inventory' button has been pressed by the NFP or PM, a notification email will be sent to all SE's with the information that a new inventory was created. SE's and PM's can start entering their data into the NAIIS software. More details on how to do the data entry please see section 4.1 above.\n\n\n\nFigure 57. Work on Inventories screen - Status = Started\n\n\n\n", - "page_start": 35, - "page_end": 35, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "## 10.4 Send for approval/rejection of an Inventory (PM)\n\nThis section describes on how the PM approves or rejects an inventory after being checked by the PM.\n\n## 10.4.1 Send for approval of an Inventory\n\n - 1. Log in as PM.\n - 2. Click on 'View Inventories Progress' under sub menu 'Submission Management'.\n - 3. The 'View Inventories Progress' screen appears.\n - 4. Select the appropriate inventory by clicking the Inventory name under column 'Name' (figure 60, a).\n - 5. Press the 'Send for Approval' button to send it to NFP for his/her review and approval of the inventory (figure 60, b).\n\n*** Note: A notification email will be sent to the PM, once the 'Send for Approval' has been pressed. And the status changed to 'Awaiting\\_approval' (figure 61).\n\nFigure 60. Work on Inventories screen - Send for Approval - Status = checkFigure 61. Work on Inventories screen - Status = awaiting\\_approval\n\n\n\n\n\n## 10.4.2 Rejection of an Inventory\n\n - 1. Log in as PM.\n - 2. Click on 'View Inventories Progress' under sub menu 'Submission Management'.\n - 3. The 'View Inventories Progress' screen appears.\n - 4. Select the appropriate inventory by clicking the Inventory name under column 'Name' (figure 62, a).\n - 5. 
Press the 'Reject' button (figure 62, b).\n\n*** Note: A notification email will be sent to the PM, once the 'Reject' button has been pressed. And the status changed to 'Awaiting\\_rejection\\_check' (figure 63).", - "page_start": 37, - "page_end": 37, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "## 3.2.2.1.2 Start a GHG inventory\n\nIn order to START a GHG inventory, please follow the steps below:\n\n -  Log in as PM.\n -  Hover the cursor on the 'Submission Management' and click on the 'View Inventories Progress' button.\n -  Click/select the appropriate GHG Inventory in Status = 'created' (see figure 7a).\n -  Click on 'Work on Inventories' under Submission Management (see figure 7b).\n\n## Figure 7: Select an Inventory screen\n\nFigure 9: 'Started' status of an Inventory\n\n\n\n -  Left click to select the appropriate Inventory (figure 8a)\n -  Press the 'Start Inventory' button (figure 8b)\n\n## Figure 8: Start an Inventory screen\n\n\n\nOnce the 'Start Inventory' button is pressed, the status of the selected Inventory change to 'started'. (see Figure 9)\n\n", - "page_start": 8, - "page_end": 8, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "We strongly suggest that you keep the sending inventory option enabled to IBM Support. However, it might not be of interest to local users, although inventory content can serve as a basis for inventory and asset management.\n\n - 7. In Edit mode, you can change any of the previously configured settings. 
After you are finished editing these parameters, adding more recipients, or testing the connection, save the configuration so that the changes take effect (see Figure 13-53).\n\nFigure 13-53 Saving modified configuration\n\n", - "page_start": 737, - "page_end": 737, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "maiis-user-manual.pdf", - "query": "What is the global warming potential of Perfluorohexane ?", - "target_page": 48, - "target_passage": "7,400", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## 2. Background\n\n## 2.1. Climate Change, Global Warming, and Frames\n\nExisting studies have noted that the subtle di GLYPH<11> erence between climate change and global warming evokes di GLYPH<11> erent public cognitive responses, where global warming'indicates heat-related impacts, human causes, increased UV light penetration, ozone depletion, and the greenhouse e GLYPH<11> ect, whereas climate change is more associated with a wide range of influences on climate, including drought and agriculture [9]. 
An N-gram analysis suggested that global warming showed a closer connection with ice, snow, and sea, whereas climate change was always connected with scientific investigations, such as", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed10.pdf" - }, - { - "text": "## Annex 3: Global Warming Potentials (GWPs)\n\n| Greenhouse gas | Chemical formula | 1995 IPCC GWP |\n|----------------------|--------------------|-----------------|\n| Carbon dioxide | CO2 | 1 |\n| Methane | CH4 | 21 |\n| Nitrous oxide | N2O | 310 |\n| HFC-23 | CHF3 | 11,700 |\n| HFC-32 | CH2F2 | 650 |\n| HFC-41 | CH3F | 150 |\n| HFC-43-10mee | C5H2F10 | 1,300 |\n| HFC-125 | C2HF5 | 2,800 |\n| HFC-134 | C2H2F4 | 1,000 |\n| HFC-134a | CH2FCF3 | 1,300 |\n| HFC-152a | C2H4F2 | 140 |\n| HFC-143 | C2H3F3 | 300 |\n| HFC-143a | CF3CH3 | 3,800 |\n| HFC-227ea | C3HF7 | 2,900 |\n| HFC-236fa | C3H2F6 | 6,300 |\n| HFC-254ca | C3H3F5 | 560 |\n| Perfluoromethane | CF4 | 6,500 |\n| Perfluroethane | C2F6 | 9,200 |\n| Perfluoropropape | C3F8 | 7,000 |\n| Perfluorobutane | C2F10 | 7,000 |\n| Perfluorocyclobutane | c-c4F8 | 8,700 |\n| Perfluoropentane | C5F12 | 7,500 |\n| Perfluorohexane | C6F14 | 7,400 |\n| Sulphur hexafluoride | SF6 | 23,900 |\n\nSource: Climate Change 1995, The Science of Climate Change: Summary for Policymakers and Technical Summary of the Working Group I Report, page 22.", - "page_start": 47, - "page_end": 47, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "\n\nFigure 2. (continued)\n\n\n\nis 16.9% in which the temperature would go up more than 3.0 °C, most located in the high latitude regions of Northern Hemisphere; the area is rarely in which the temperature would go up between 0 and 1.0 °C.\n\n/T\\_here are apparent trends of humidi/fication in most regions under global warming by 1.5 °C and 2.0 °C; but the drought risk also should be taken seriously in the other regions. 
Under global warming by 1.5 °C the area is 73.6% of the whole world in which the precipitation would increase, most located in the Northern Hemisphere; the area is 53.7% of the whole world in which the precipitation would increase by less than 50 mm; however, the area is 26.4% of whole world in which the rainfall would decrease, mainly located in the Southern Hemisphere and the middle regions of Northern Hemisphere. /T\\_he distribution of precipitation under global warming by 2.0 °C is similar with the situation under global warming by 1.5 °C. /T\\_he drought-threatened area would increase by 28.5% under global warming by 2.0 °C, especially in the middle and low latitude of the Northern Hemisphere; the area would expand to 26%, in which the precipitation increases more than 50 mm. In other words, the extreme rainfall events (such as drought, rainstorm) under global warming by 2.0 °C would be more serious than those under global warming by 1.5 °C, which is what we should be pay more attention to.\n\nYield change of maize under global warming by ͷ.ͻ °C and ͸.Ͷ °C. Maize production is a/ffected by climate change apparently. According to the simulation results of CERES-maize, the yield of maize would decrease in the worldwide relative to 1986-2005 under global warming by 2.0 °C; it would increase little under global warming by 1.5 °C. /T\\_he distributions of maize yield loss under the two scenarios are similar to each other, mostly located in the middle and low latitude, which are the main regions for maize planting in the world. /T\\_he loss risk of maize under global warming by 2.0 °C is much more serious than that under global warming of 1.5 °C. 
However, there are increasing potentials of maize yield in many regions, nearly half of the whole maize planting area in the world, in which the climate situation would become more proper for maize under global\n\nVol.:(0123456789)", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed9.pdf" - }, - { - "text": "issues and re-constructing them di GLYPH<11> erently. By comparing the persistent words used related to the two discourses in the 10-year period in Table 2, we think that global warming showed a relative preference toward general descriptions or slogans, such as 'earth' and 'pollution', whereas 'climate change' was more associated to specific issues like 'solar', 'coal', 'china', and 'food'.\n\nStudies have suggested that the public shows a preference for scientific publications with general keywords compared with those with complicated scientific jargon [47], lacking a deep understanding of the complicated issue [46] and the necessity for mitigation of the climate issue [47]. These conclusions seem to suit global warming more than climate change according to the current study, which is probably because climate change receives more publicity and recognition than global warming in the scientific community. In the association network shown in Figure 2, global warming was found to be more connected with temperature abnormalities. This finding is in accordance with studies reporting that short-term temperature anomalies [87] can increase the public's belief about global warming by increasing the understanding of this abstract issue [88], although scientists mostly make judgments based on long-term weather statistics [89]. 
However, none of the four words, 'snow', 'summer', 'winter', or 'heatwave' in the temperature theme of global warming were ranked in the top 50 nodes list of the climate change network.\n\nEven when climate change and global warming shared concern about similar topics such as the cause of the climate issue, global warming tended to focus on carbon emission phenomena, whereas climate change preferred a more in-depth perspective, highlighting the importance of global action to mitigate the climate issue in its second-largest cluster, with energy structure as the contributor to carbon emissions in its third largest cluster. As invisible causes and disbelief in actions have long been regarded as two key reasons for low climate concern [90], the two terminologies' di GLYPH<11> erences in connotations suggest that introducing these absent sub-topics into global warming discourse or highlighting climate change for its inherent connotations may help communicators raise public concern about climate.\n\n## 5.1.2. Political Connotations\n\nStudies noted that frame preference between climate change and global warming reflects individuals' ideological spectrum, where climate change and global warming were favored by the liberals and conservatives, respectively [10]. The cluster analysis of the semantic network in the current study demonstrated that global warming triggered far more political responses than climate change. The second largest cluster of global warming was politics-based, where hashtag 'tcot', favored by right-leaning users and 'p2', favored by left-leaning users, were both ranked in the list of top nodes of the global warming discourse, but neither was included in the list of top nodes of the climate change discourse. 
Considering that earlier findings suggested that global warming was more likely to be used by conservatives to question the reality of climate issue [11] and climate change is more commonly adopted when discussing action against the climate change issue [5], global warming had a stronger political connotation in public discussion.\n\n## 5.1.3. Discourse Structure", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed10.pdf" - }, - { - "text": "Figure 7. Price change on maize in main continents under global warming by 1.5 °C and 2.0 °C.\n\n\n\nFigure 8. Changes in Self-su/fficiency ratio of maize in main countries under global warming by 1.5 °C and 2.0 °C.\n\n\n\nmeantime, the huge di/fferences in yield changes in di/fferent regions provide a small chance for the world, especially under global warming by 1.5 °C. In the near future, if the global temperature can be e/ffectively controlled under 1.5 °C warming scenario, there would be an increase in the potential for maize yield in the worldwide. All regions and countries should take actions to reduce the yield loss risk. For the yield-increasing regions, the potentials of climate resources should be fully utilized to guarantee maize yield under future scenarios; for the yield-reducing regions, the targeted adaptation actions should be taken in advance under global warming by 1.5 °C and 2.0 °C.\n\nMeanwhile, the risk of price /fluctuations caused by global corn trade due to future climate change should be paid more attention to, especially for developing and undeveloped countries. 
In the view of supply and demand, the population would go up quickly in the next 30 years; the demand for maize would increase hugely; however, the supply of maize would go down in the future, especially under global warming by 2.0 °C; it would intensify the contradiction between supply and demand, which would threaten the food security and sustainable development in the whole world.\n\nIn this study, 5 climate models are selected, which are recommended by ISI-MIP (/T\\_he Inter-Sectoral Impact Model Intercomparison Project); compared with other climate models, the /five models could more e/ffectively support impact assessment in di/fferent sectors and provide more reliable results. Based on the simulation results\n\nVol.:(0123456789)", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed9.pdf" - }, - { - "text": "## OPEN\n\n\n\n## The impact of ͷ.ͻ °C and ͸.Ͷ °C global warming on global maize production and trade\n\nKuo Li ͷ * , Jie Pan ͷ , Wei Xiong ͸ , Wei Xie ͹ & Tariq Ali ͹\n\nClimate change is becoming more and more remarkable which has an obvious impact on crop yields all over the world. Future climate scenario data was simulated by ͻ climate models recommended by ISI-MIP under ͺ RCP scenarios, in which the approximate scenarios with global warming by ͷ.ͻ °C and ͸ °C were selected. Applying DSSAT and GTAP models, the per unit yield changes of maize in the world under global warming by ͷ.ͻ °C and ͸.Ͷ °C were analyzed and the market prices of maize at national and global levels were simulated. The results showed that, the risk of maize yield reduction under ͸.Ͷ °C scenario was much more serious than ͷ.ͻ °C scenario; the ratios of yield changes were separately Ͷ.ͷ;% and - ͷͶ.;% under ͷ.ͻ °C and ͸.Ͷ °C scenarios. The reduction trend of total maize production is obvious in the top five countries and the main producing regions of the world, especially under the ͸.Ͷ °C scenario. 
The market price of maize would increase by around Ͷ.ͽ% and ͹.ͺ% under ͷ.ͻ °C and ͸.Ͷ °C scenarios. With the quickly increasing population in the world, it is urgent for all countries to pay enough attention to the risk of maize yield and take actions of mitigation and adaptation to climate change.\n\nIn the past hundred years, the global climate has experienced great changes 1-4 . According to the sixth assessment report of IPCC, the global average surface temperature increased by 1.09 °C between 1850 and 2020, and almost all regions in the world experienced surface warming 5 . Due to global warming, the extreme climate events become more and more frequent, and the ecological environment problems caused by climate change are more and more serious, which restrict the sustainable development of human society and health 6-10 . Global warming has gradually changed from a scienti/fic issue to a major social issue of common concern to governments and people of all countries 11-13 . In 2016, nearly 200 parties of the United Nations Framework Convention on climate change reached the Paris Agreement at the climate change conference in Paris 14 . Paris Agreement has indicated and pursue e/fforts to limit the temperature increase to 1.5 °C above pre-industrial levels.", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "Firstly, the period of 1986-2005 is de/fined as the baseline, of which the simulated average value is recognized as 0.61 °C above pre-industrial (the period of 1850-1900) levels; the baseline is selected according to the accessibility and operability of data, which is used for the determination of the periods with global warming by 1.5 °C and 2.0 °C and the comparison of maize yield between di/fferent periods. 
Secondly, the simulated values of global mean temperature in the future years are subtracted from the simulated average value of 1986-2005; then the values should be plus with 0.61 °C, which are the global warming results above pre-industrial levels; then 20 years moving average of the above results are calculated. /T\\_hirdly, the climate data of global warming by 1.5 °C is de/fined according to the principles provided in the /fi/f\\_th IPCC Assessment Report, for which it should be within 1.5-2.0 °C above pre-industrial levels at the end of the twenty-/first century; the climate data of global warming by 2.0 °C is de/fined according to the principles provided in the /fi/f\\_th IPCC Assessment Report, for which it should be within 2.0-2.5 °C above pre-industrial levels at the end of the twenty-/first century and the period of global warming by 2.0 °C should not be earlier than 2050. Finally, the climate models, scenarios and periods of global warming by 1.5 °C and 2.0 °C are separately con/firmed; the data of global warming by 1.5 °C, simulated by IPSL-CM5A-LR under RCP2.6 scenario during 2020-2039 and simulated by GFDL-ESM2M under RCP4.5 scenario during 2041-2060; the data of global warming by 2.0 °C, simulated by NorESM1-M under RCP4.5 scenario during 2060-2079 and simulated by GFDL-ESM2M under RCP6.0 scenario during 2065-2084.\n\nSimulation of maize yield using DSSAT. According to the data of global warming by 1.5 °C and 2.0 °C selected above, we simulated global maize yield changes compared with the average yield during 1986-2005 on grid level using CERES-Maize, which is part of DSSAT version 4.6 49 .\n\n/T\\_he inputs for DSSAT simulation include daily weather data, soil parameters, crop calendar data and management information. All the inputs are formatted at a 0.5° × 0.5° grid resolution which are computed by highperformance computers. 
Weather data is from the AgMERRA dataset, including maximum and minimum temperatures, precipitation, total radiation and humidity. Crop calendar data were from the Center for Sustainability and Global Environment (SAGE), in which the existing observations of crop planting and harvesting dates are gridded formatted at a resolution of 5 min 50 . For management information, fertilizer applications, irrigation and other management practices are required. A crop-speci/fic gridded dataset of nitrogen fertilizer application for the world was developed by integrating national and subnational fertilizer application data from a variety of sources, which is used to set up current fertilizer application rates for maize in each grid cell. Soil parameters are from the International Soil Pro/file Dataset (WISE), including soil texture, bulk density, pH, organic carbon content and fraction of calcium carbonate for each of /five 20 cm thick soil layers 51 . All the soil data is allocated to be in accordance with the request of DSSAT simulation; the missing soil parameters for organic soils were adopted from FAO soil dataset.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed9.pdf" - }, - { - "text": "complex changes in the state of the climate [7], which may be caused by natural process, external forces, or human interventions [8]. By randomly assigning respondents to climate change or global warming questionnaires, scholars confirmed that the di GLYPH<11> erent connotations contained in the two definitions are likely to evoke distinct interpretations of the causes and impacts of the global climate issue [9], which may inhibit collaboration and joint e GLYPH<11> orts to mitigate the global challenge.\n\nPublic preference between climate change and global warming is even more apparent when considering the ideology spectrum [10]. 
Some scholars concluded that conservatives, who are less concerned with environmental issues, tended to use global warming as a narrative strategy because global warming has a more direct connection with temperature rise, making it easier to find contradictory cues such as freezing weather or heavy snowstorms to deny global climate change facts [11]. The associations between global warming and human activities may contribute to more controversies as well [12], connecting global warming more with the 'hoax' frame [5] and evoking greater negative sentiment [13].\n\nAlthough these existing studies have often attempted to identify the di GLYPH<11> erences between these two terminologies, only a particular few perspectives, such as sentiment, ideological preference, or cause and e GLYPH<11> ect, were examined in each study [3,9,13]. However, the associate network model introduced by psychologists suggests that human recognition and memory have a network-shaped architecture [14], where individual understanding of particular objects is connected with numerous other objects in the mind. According to the associate network model, individual understanding of the global climate concern is a network composed of numerous inter-connected concepts, in which climate change and global warming. As the two terminologies concern the primary mechanism of the global climate issue, the preference between the two understandings may represent two distinct climate discourses by di GLYPH<11> erently organizing numerous climate concepts. Examining the di GLYPH<11> erences between two discourses with an associative perspective may provide communicators with unique insights into narrowing the cognitive discrepancy. 
The temporal dimension was lacking in existing studies, necessitating the study of how concepts associated with each other have evolved with time.\n\nLargeamountsofuser-generateddataonsocialmedia, whichhavebeenvaluedincomputerscience, communication, and environmental studies [5,9,15-18], have enabled the acquistion of the social media representation of the two discourses in a decade. In this study, by analyzing hashtag co-occurrence patterns in 6,662,478 tweets containing 'climate change' and 'global warming' between 1 January 2009 and 31 December 2018, two semantic networks of public climate discourse were constructed to identify the critical concepts and links surrounding the two terminologies. We conducted temporal analysis to observe the evolution of the two discourses and to measure whether the discrepancy between the two has widened or narrowed within the 10-year period.\n\nTo be specific, we formulated three research questions (RQs) to be explored in this study:\n\nRQ1: What is the di GLYPH<11> erence in how the two the discourses are associated with important climate concepts in people's minds?\n\nRQ2: How did the two competing climate discourses evolve from 2009 to 2018? RQ3: Did the two competing discourses converge or diverge in this decade?\n\n## 2. Background\n\n## 2.1. Climate Change, Global Warming, and Frames", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed10.pdf" - }, - { - "text": "- 7. Caitlyn Kennedy, R.L. What's the Di GLYPH<11> erence between Global Warming and Climate Change? 2015. Available online: https: // www.climate.gov / news-features / climate-qa / whats-di GLYPH<11> erence-between-global-warming-andclimate-change (accessed on 10 October 2019).\n - 8. Pachauri, R.K.; Allen, M.R.; Barros, V.R.; Broome, J.; Cramer, W.; Christ, R.; Church, J.A.; Clarke, L.; Dahe, Q.; Dasgupta, P.; et al. Climate Change 2014: Synthesis Report. 
Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change ; IPCC: Geneva, Switzerland, 2014.\n - 9. Whitmarsh, L. What's in a name? Commonalities and di GLYPH<11> erences in public understanding of 'climate change' and 'global warming'. Public Underst. Sci. 2009 , 18 , 401-420. [CrossRef]\n - 10. Shehata, A.; Hopmann, D.N. Framing climate change: A study of US and Swedish press coverage of global warming. Journal. Stud. 2012 , 13 , 175-192. [CrossRef]\n - 11. Schuldt, J.P.; Roh, S. Of accessibility and applicability: How heat-related cues a GLYPH<11> ect belief in 'global warming' versus 'climate change'. Soc. Cogn. 2014 , 32 , 217-238. [CrossRef]\n - 12. McCright,A.M.; Dunlap, R.E. Challenging global warming as a social problem: An analysis of the conservative movement's counter-claims. Soc. Probl. 2000 , 47 , 499-522. [CrossRef]\n - 13. Lineman, M.; Do, Y.; Kim, J.Y.; Joo, G.J. Talking about climate change and global warming. PLoS ONE 2015 , 10 , e0138996. [CrossRef]\n - 14. Anderson, J.R. The Architecture of Cognition ; Psychology Press: London, UK, 2013.\n - 15. Pan, B.; Zheng, Y.; Wilkie, D.; Shahabi, C. Crowd sensing of tra GLYPH<14> c anomalies based on human mobility and social media. In Proceedings of the 21st ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Orlando, FL, USA, 5-8 November 2013; pp. 344-353.\n - 16. Rogstadius, J.; Vukovic, M.; Teixeira, C.A.; Kostakos, V.; Karapanos, E.; Laredo, J.A. CrisisTracker: Crowdsourced social media curation for disaster awareness. IBM J. Res. Dev. 2013 , 57 , 4:1-4:13. [CrossRef]\n - 17. Leetaru, K.; Wang, S.; Cao, G.; Padmanabhan, A.; Shook, E. Mapping the global Twitter heartbeat: The geography of Twitter. First Monday 2013 , 18 . [CrossRef]\n - 18. Kirilenko, A.P.; Molodtsova, T.; Stepchenkova, S.O. People as sensors: Mass media and local temperature influence climate change discussion on Twitter. Glob. 
Environ. Chang. 2015 , 30 , 92-100. [CrossRef]\n - 19. Gamson, W.A.; Modigliani, A. Media discourse and public opinion on nuclear power: A constructionist approach. Am. J. Sociol. 1989 , 95 , 1-37. [CrossRef]\n - 20. Entman, R.M. Framing: Toward clarification of a fractured paradigm. J. Commun. 1993 , 43 , 51-58. [CrossRef]\n - 21. McCombs, M.; Llamas, J.P.; Lopez-Escobar, E.; Rey, F. Candidate images in Spanish elections: Second-level agenda-setting e GLYPH<11> ects. Journal. Mass Commun. Q. 1997 , 74 , 703-717. [CrossRef]\n - 22. Druckman, J.N. On the limits of framing e GLYPH<11> ects: Who can frame? J. Politics 2001 , 63 , 1041-1066. [CrossRef]\n - 23. Druckman, J.N. The implications of framing e GLYPH<11> ects for citizen competence. Political Behav. 2001 , 23 , 225-256. [CrossRef]\n - 24. Teigen, K.H.; Karevold, K.I. Looking back versus looking ahead: Framing of time and work at di GLYPH<11> erent stages of a project. J. Behav. Decis. Mak. 2005 , 18 , 229-246. [CrossRef]\n - 25. McKenzie, C.R.; Nelson, J.D. What a speaker's choice of frame reveals: Reference points, frame selection, and framing e GLYPH<11> ects. Psychon. Bull. Rev. 2003 , 10 , 596-602. [CrossRef]\n - 26. Du, Y.R. Same events, di GLYPH<11> erent stories: Internet censorship in the Arab Spring seen from China. Journal. Mass Commun. Q. 2016 , 93 , 99-117. [CrossRef]", - "page_start": 17, - "page_end": 17, - "source_file": "pubmed10.pdf" - }, - { - "text": "reports, the environment, and science [13]. Some respondents even hold the belief that global warming results in climate change [9].\n\nThe two distinct climate discourses being produced based on the same reality can be explained by the framing theory in communication study. Framing refers to the phenomenon where the reality is always partially selected or highlighted when described by the public or media [19]. 
By distinctly defining problems, suggesting solutions, and indicating casual interpretations [20], di GLYPH<11> erent frames tell the audience di GLYPH<11> erent stories and influence how they observe facts [21,22]. Two types of frames, equivalency frames and emphasis frames, are commonly studied by scholars to examine how framing e GLYPH<11> ects influence individuals' attitudes and beliefs [23]. Equivalency frames describe the same fact or logic with di GLYPH<11> erent words and may suggest that the audience perceives facts in psychologicallydi GLYPH<11> erent ways [24]. For example, a cup can be described as 'half full' and 'half empty', where the former is a positive frame indicating a reference point lower than current status, and the latter is negative, meaning that the reference point is above the current situation [25]. Emphasis frames employ words selectively associated with parts of reality to shift the audience's attention to particular attributes [26]. Climate change and global warming have been noted to highlight di GLYPH<11> erent aspects of an issue by activating distinct cognitive accessibility patterns [27].\n\nDi GLYPH<11> erent frames concerning the global climate concern are popular among the public, politicians, environmentalists, and the media [1,28,29]. Big data analyses have indicated that when interpreting climate events, individuals' preference for frameworks was influenced by demographics [5] and social-political background [2]. Di GLYPH<11> erent choices of frameworks can evoke di GLYPH<11> erent psychological processes [30], promote or inhibit engagement intentions [31], or gain approval on various levels [32].\n\nStudies have noted that the frameworks of climate change and global warming may result from di GLYPH<11> erent political indications. 
The American Republican-leaning states show more preference for global warming than climate change compared with Democratic-leaning states, and global warming is more connected with 'hoax' in questioning the reality of the global climate issue [5]. Conservatives are more likely to link heat-related phenomena to global warming, whereas liberals associate these facts equally with both frames [27]. An earlier survey conducted by [4] argued that wording choice might not influence the whole population similarly. For the whole sample and politically independent individuals, the two terminologies were equally serious, but climate change seemed more serious compared with global warming among the Republicans, and the Democrats held the opposite opinion.\n\n## 2.2. Network Model for Cognition\n\nDi GLYPH<11> erent framework choices may create even more di GLYPH<11> erences than have already been noticed. Psychologists think that human beings are a collection of learned associations [33], and associative response rather than simply linear logic form the structural basis of thought [34]. Associative learning [35] is a long-standing assumption underlying cognitive science [14], suggesting that human cognition toward the world forms a network pattern, where the world is organized into several groups of related items and stored in a network model in the mind. When messages are processed by humans, they are first encoded into a temporary memory network and then linked to an existing associative memory network for long-term storage [36]. 
In the network, a node represents a certain concept, and edges refer to particular relationships, such as time sequences [37], similarity [38], semantic connections [37], or cause and effect [33] between two nodes.",
                    "page_start": 2,
                    "page_end": 2,
                    "source_file": "pubmed10.pdf"
                }
            ]
        },
        {
            "references": {
                "source_file": "maiis-user-manual.pdf",
                "query": "How can I request access to NAIIS ?",
                "target_page": 5,
                "target_passage": "Requests for access to, inquiries on the use of the software, and comments on the design and functionalities of the application should be sent to the dedicated e-mail address naiisapp@unfccc.int.",
                "chunk_present": {
                    "presence": true,
                    "index": 0
                }
            },
            "top_chunk": [
                {
                    "text": "## 2.2 Pending NAIIS features\n\n## List of pending functionalities in NAIIS:\n\n-----------------------------------------\n\n - 1. Web services integration for help desk\n - 2. Display of information in 5 remaining UN languages.\n\n## 2.3 Contact\n\nRequests for access to, inquiries on the use of the software, and comments on the design and functionalities of the application should be sent to the dedicated e-mail address naiisapp@unfccc.int .",
                    "page_start": 4,
                    "page_end": 4,
                    "source_file": "maiis-user-manual.pdf"
                },
                {
                    "text": "\n\n## NAIIS Web Application\n\n(Release version 1.1.3)\n\n## User Manual\n\n(As of 10 February 2014)",
                    "page_start": 0,
                    "page_end": 0,
                    "source_file": "maiis-user-manual.pdf"
                },
                {
                    "text": "Press the 'Enter key' and the non-Annex I Greenhouse Gas Inventories web page appears.\n\nTo access the NAIIS application, click on the image NAIIS Web Application, the right hand side of the screen. (figure 3, number 1) and the log-in page will be displayed. (figure 4)\n\n## Figure 3. UNFCCC non-Annex I Greenhouse Gas Inventories web page\n\nFigure 4. 
Log-in page of the NAIIS Web Application\n\n\n\n\n\nTo log-in , enter the username and password and click on the 'Sign in' button.\n\n", - "page_start": 6, - "page_end": 6, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "- 5. To access the auxiliary volume, the MM/GM relationship must be stopped with the access option enabled before write I/O is submitted to the auxiliary.", - "page_start": 562, - "page_end": 562, - "source_file": "sg247938.pdf" - }, - { - "text": "## 1 Introduction\n\nThe Non-Annex I Inventory software (NAIIS) web application is a web-based tool developed for use by Parties not included in Annex I to the Convention (non-Annex I Parties) to estimate and report their national greenhouse gas inventories (GHG inventories). As per Article 4, paragraph 1 (a), and Article 12, paragraph 1 (a) of the Convention, non-Annex I Parties are required to communicate to the Conference of the Parties a national inventory of anthropogenic emissions by sources and removals by sinks of all greenhouse gases (GHGs) not controlled by the Montreal Protocol, to the extent their capacities permit, following the guidelines contained in the annex to decision17/CP.8.\n\nIn order to assist non-Annex I Parties in estimating and reporting their GHG inventories as part of their national communications, the secretariat developed an Excel-based software which incorporated all the elements of a national GHG inventory prescribed by decision 17/CP.8. The software was based on the IPCC inventory software version 1.1, which used the Tier 1 methodologies for estimating GHG emissions and removals for all source categories included in the Revised 1996 IPCC Guidelines, and further complemented by the GPGs. 1\n\nSince its release in 2005, most non-Annex I Parties have been using that software for the development of their national GHG inventories. In December 2011, Parties requested the secretariat to upgrade the software and make it available to non-Annex I Parties by June 2013. 
Pursuant to that request, the secretariat converted the current Excel-based version of the software (v.1.3.2) 2 into a web-based application (NAIIS) which provides greater flexibility and security for maintaining data.\n\n## 2 General information\n\nThe NAIIS is a web-based application designed to enable non-Annex I Parties estimate their national GHG inventories according to the UNFCCC guidelines and using the IPCC methodologies, and to report the results in their national communications and biennial update reports.\n\n## 2.1 System overview\n\nThe NAIIS web application has the following functionalities:\n\n- 1. User management (only for the user roles NFP and PM)\n- 2. Submission management\n- 3. Data entry\n- 4. Key category analysis\n- 5. Reporting tables\n- 6. Data Export/Import\n- 7. Completeness\n- 8. Consistency\n\nThe NAIIS web application allows input of data through three different channels:\n\n- 1. Manual input into the entry grids\n- 2. Partial or full import of data from Excel\n- 3. Bulk import of data from XML\n\nThe GHG emissions totals, by gas and by sector, are automatically calculated and saved based on the values entered for activity data (AD), emission factors and other relevant parameters. In addition, the software facilitates the reporting of other category specific information, for example, the choice of the method for activity data and emission factors.",
                    "page_start": 3,
                    "page_end": 3,
                    "source_file": "maiis-user-manual.pdf"
                },
                {
                    "text": "Attention: Before generating a request, ensure that your current browser does not have restrictions on the type of keys that are used for certificates. Some browsers limit the use of specific key-types for security and compatibility reasons.\n\nAlso, consult your organization's security policy to ensure that the key type you are configuring is compliant.\n\n## Click Generate Request .\n\n - 3. Save the generated request file. 
Until the signed certificate is installed, the Secure Communications window shows that an outstanding certificate request exists.\n\nAttention: If you must update a field in the certificate request, generate a new request and submit it to signing by the proper certificate authority. However, this process invalidates the previous certificate request and prevents the installation of the signed certificate associated with the original request.\n\n - 4. Submit the request to the certificate authority to receive a signed certificate.\n - 5. When you receive the signed certificate, select Update Certificate on the Secure Communications window again.\n - 6. Click the folder icon to upload the signed certificate, as shown in Figure 4-38. Click Update .\n\nFigure 4-38 Installing a signed certificate\n\n", - "page_start": 140, - "page_end": 140, - "source_file": "sg247938.pdf" - }, - { - "text": "## Fundamentals\n\nWith IAM, developers attach policies, JSON documents that define granular permissions, to resources. IAM provides pre-built AWS managed policies for common access levels. You can also define your own policies with the least-privilege level necessary to complete tasks.\n\nInformation about IAM policies may come at you fast. If it gets to be too much, put it in PARC :\n\n - · P rincipal: entity that is allowed or denied access\n - · A ction: type of access that is allowed or denied\n - · R esource: AWS resources the action will act upon\n - · C ondition: conditions for which the access is valid\n\nAt a high level, these four terms should be enough to get you started connecting serverless resources.\n\n## Account prerequisites\n\nBut, before you start, you need an AWS account. The following sections provide the best practice steps to create an account and an administrative user.\n\n## Sign up for an AWS account\n\nIf you do not have an AWS account, complete the following steps to create one.\n\n## To sign up for an AWS account\n\n - 1. 
Open https://portal.aws.amazon.com/billing/signup.\n - 2. Follow the online instructions.\n\nPart of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad.\n\nWhen you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.", - "page_start": 40, - "page_end": 40, - "source_file": "serverless-core.pdf" - }, - { - "text": "- /SM590000 To access the auxiliary volume, the Metro Mirror relationship must be stopped with the access option enabled before write I/O is allowed to the auxiliary.", - "page_start": 544, - "page_end": 544, - "source_file": "sg247938.pdf" - }, - { - "text": "Figure 5-46 Copy Services in GUI\n\n\n\nBecause the Copy Services are one of the most important features for resiliency solutions, more technical information is available in Chapter 11, 'Advanced Copy Services' on page 435.\n\n## 5.9 Access\n\nThe Access menu in the GUI maintains who can log in to the system, defines the access rights to the user, and tracks what was done by each privileged user to the system. It is logically split into two categories:\n\n - /SM590000 Users\n - /SM590000 Audit Log\n\nIn this section, we explain how to create, modify, or remove user, and how to see records in the audit log.", - "page_start": 178, - "page_end": 178, - "source_file": "sg247938.pdf" - }, - { - "text": "If you are creating a new account, you will create a root account using an email address. The root account has unrestricted access , similar to root accounts for an operating system. As a best practice, you should create an administrative user too.\n\n\n\n## Granting administrative access to a user\n\nAs you might guess, granting administrative access to a user is still rather far reaching. 
An account with administrative level privileges will make getting started easier. For systems in production, follow the principle of least-privilege - granting only the minimum access necessary to accomplish tasks.\n\n - · For a step-by-step guide to account types and login management, see Signing in to the AWS Management Console.\n - · AWS Identity and Access Management (IAM) is the service to manage entities and resources authorized to use services and service resources.\n\n## Sign up for an AWS account\n\nIf you do not have an AWS account, complete the following steps to create one.\n\n## To sign up for an AWS account\n\n - 1. Open https://portal.aws.amazon.com/billing/signup.\n - 2. Follow the online instructions.\n\nPart of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad.\n\nWhen you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.\n\nAWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account .", - "page_start": 13, - "page_end": 13, - "source_file": "serverless-core.pdf" - } - ] - }, - { - "references": { - "source_file": "creative_common_ai.pdf", - "query": "What is the problem regarding the use of the Book3 dataset ?", - "target_page": 2, - "target_passage": "The Books3 dataset contains text from over 170,000 books,2 which are a mix of in-copyright and out-of-copyright works. 
It is believed to have been originally sourced from a website that was not authorized to distribute all of the works", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "## A Supplementary materials for datasets\n\n## A.1 All datasets\n\nTable 3 displays the size of each dataset along with the average number of tokens per sample and their references. The dataset's content was tokenized using cl100k\\_base encoding. For Retrieval, the two numbers refer to the queries and the documents. For Reranking, the three numbers refer to the queries, the pairs of queries with relevant documents and the pairs of queries with irrelevant ones, respectively. The pairs of queries and documents are obtained from the 90 documents extracted. For SummEvalFr , the three numbers refer to the texts, human and machine summaries, respectively.\n\nFigure 3 represents the semantic similarity between each dataset. The methodology was as follows: 90 random samples per dataset are embedded using the multilingual-e5-large model. The embeddings of each dataset's samples are averaged. The similarity between each dataset is then calculated using cosine similarity as in (Muennighoff et al., 2022).\n\nWe complement this analysis by observing the dataset's clouds of embedding in a 2D plane using PCA in Figure 4.\n\n## A.2 Created datasets\n\nSyntec Figure 5 shows an extract from the Syntec dataset with a document and a query relative to this document.\n\nHAL Figure 6 is an extract from the HAL dataset. Table 4 lists the distribution of classes ( domain field) for the HAL dataset on raw subset and mteb\\_eval subset, which is used for MTEB evaluation. Labels descriptions can be found at this URL: https://api.archivesouvertes.fr/ref/domain/?q=*:*&rows=393 or in Table 4. After pre-processing, mteb\\_eval covers titles from 10 domains as classes with less than 500 samples were removed. 
In the MTEB evaluation subset of the dataset, titles composed of 2 words or less have been removed (371 samples), resulting in an average word count of 13.4. Figure 7 shows the word count distribution per title. Furthermore, the dataset has been cleaned up by manually removing all non-French titles. Additionally, it can be observed in Table 4 that in the original raw dataset, the shs and sdv classes represent by far the majority of the dataset samples with respectively 58706 samples (73%) and 11049 samples (13%). In order to mitigate the class imbalance while preserving the majority of those classes, they have been randomly subsampled to 6701 and 4803 samples. Furthermore, baseline models have been trained and tested to assess the usability of this dataset in other tasks, such as classification and topic modeling. Table 5 shows the results obtained.\n\nSummEvalFr Extracts of humans and machine summaries translated in French from SummEvalFr and the original ones in English from SummEval (Fabbri et al., 2021) are shown in Figure 9. As explained in section 3.1.3, we use a LLM to evaluate the quality of translations for human summaries, we provide the prompt used with GPT-4 for this evaluation in Figure 8.\n\nTable 6 shows the distribution of ratings given by the LLM. With the scale being 10, we manually verify random samples rated above 9. We verify all samples with ratings under 9 and those with no provided rating (N/A) due to the triggering of the OpenAI content management policy. The LLM suggests that 60 samples are not correctly translated. 
These were verified manually, and after checking, less than 10 samples only needed to be corrected.\n\n## B Supplementary materials for correlation analysis\n\nThis section presents various correlations computed based on the model results on the proposed benchmark.\n\nFigure 10 represents cross-correlations between models' performances and their studied characteristics as a heatmap.\n\nFigure 11 represents the Spearman correlations in terms of performance across models.\n\nFigure 12 represents the Spearman correlations in terms of performance across datasets.\n\n## C Supplementary materials for models\n\nWe present in this section the model characteristics we collected for the 46 evaluated models.", - "page_start": 11, - "page_end": 11, - "source_file": "arxiv4.pdf" - }, - { - "text": "## What dataset management practices are necessary?\n\nNo matter how a books data commons gets built, it will be important to consider broader aspects of data governance. For example:\n\n - · Dataset documentation and transparency: Transparent documentation is important for any dataset used for AI training. A datasheet is a standardized form of documentation that includes information about provenance and composition of data, and includes information on management practices, recommended uses or collection process.\n - · Quality assurance: Above, we note the many features that make books useful for AI training, as compared with web data, for example. That said, the institution managing a books commons dataset may still want to collect and curate the collection to meet the particular purposes of its users. For instance, it may want to take steps to mitigate biases inherent in the dataset, by ensuring books are representative of a variety of languages and geographies.\n - · Understanding uses: The institution managing a books commons dataset could measure and study how the dataset is used, to inform future improvements. 
Such monitoring may also enable accountability measures with respect to uses of the dataset. Introducing community norms for disclosing datasets used in AI training and other forms of AI research would facilitate such monitoring.\n - · Governance mechanisms: In determining matters like acceptable and ethical use, the fundamental question is 'who decides.' While this might be settled simply by whoever sets up and operates the dataset and related infrastructure, participatory mechanisms - such as advisory bodies bringing together a broad range of users and stakeholders of a collection - could also be incorporated.",
                    "page_start": 19,
                    "page_end": 19,
                    "source_file": "creative_common_ai.pdf"
                },
                {
                    "text": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n## 3.2.1 Entering the Datasets-View\n\nThe user has the following possibilities to enter the datasets view:\n\n - · Browsing directly to http://europeandataportal.eu/data\n - · Opening the 'Data' item in the main menu, then clicking on 'Datasets' in the submenu\n - · Clicking on 'Search' in the 'Search Datasets' area, with or without a search keyword entered\n\n\n\n## 3.2.2 How to filter datasets by using 'Faceted Search'\n\nThe user can find suitable datasets by performing a 'Faceted Search' . This means the user systematically adds properties, which the desired dataset should fulfill, e.g. a dataset should be part of a specific catalogue or category. The following properties are available:\n\n - · Countries,\n - · Catalogues,\n - · Categories,\n - · Tags,\n - · Formats,\n - · Licences.\n\nThose facets are presented on the left side of the main dataset page. The available options for each facet always reflect the availability of it in the current set of results. The numbers in brackets indicate how many datasets in total have that property e.g. 
there are 117,610 datasets with a distribution in CSV format.\n\n", - "page_start": 26, - "page_end": 26, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "## 1. Introduction 1\n\nWhile the field of artificial intelligence research and technology has a long history, broad public attention grew over the last year in light of the wide availability of new generative AI systems, including large language models (LLMs) like GPT-4, Claude, and LLaMA-2. These tools are developed using machine learning and other techniques that analyze large datasets of written text, and they are capable of generating text in response to a user's prompts.\n\nWhile many large language models rely on website text for training, books have also played an important role in developing and improving AI systems. Despite the widespread use of ebooks and growth of sales in that market, books remain difficult for researchers and entrepreneurs to access at scale in digital form for the purposes of training AI.\n\nIn 2023, multiple news publications reported on the availability and use of a dataset of books called 'Books3' to train LLMs. The Books3 dataset contains text from over 170,000 books, 2 which are a mix of in-copyright and out-of-copyright works. It is believed to have been originally sourced from a website that was not authorized to distribute all of the works contained in the dataset. In lawsuits brought against OpenAI, Microsoft, Meta, and Bloomberg related to their LLMs, the use of Books3 as training data was specifically cited. 3\n\nThe Books3 controversy highlights a critical question at the heart of generative AI: what role do books play in training AI models, and how might digitized books be made widely accessible for the purposes of training AI? 
What dataset of books could be constructed and under what circumstances?\n\nIn February 2024, Creative Commons, Open Future and Proteus Strategies convened a series of workshops to investigate the concept of a responsibly designed, broadly accessible dataset of digitized books to be used in training AI models. Conducted under the Chatham House Rule, we set out to ask if there is a possible future in which a 'books data commons for AI training' might exist, and what such a commons might look like. The workshops brought together practitioners on the front lines of building next-generation AI models, as well as legal and policy scholars with expertise in the copyright and licensing challenges surrounding digitized books. Our goal was also to bridge the perspective of stewards of",
                    "page_start": 1,
                    "page_end": 1,
                    "source_file": "creative_common_ai.pdf"
                },
                {
                    "text": "More details about this process are provided in the appendix A.2 along with some extracts in Figure 6. We make the dataset publicly available in both their raw and clean versions. We use this dataset in a clustering setup to cluster publications by their title and use the domain as ground truth. To ensure the quality of this dataset, we run 3 baseline models for classification: TF-IDF + SVM , a fine-tuned Camembert (Martin et al., 2019) and GPT-4 leveraging In-Context Learning (ICL). Furthermore, we run one baseline model for topic modeling: Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and report scores in the appendix A.2.\n\n## 3.1.3 SummEvalFr (Summarization)\n\nThe original SummEval dataset (Fabbri et al., 2021) consists of 100 news articles from the CNN/DailyMail dataset. Each article has 11 human-written summaries and 16 machine-generated summaries annotated by 8 people with a score for coherence, consistency, fluency, and relevance. We translated it from English to French using DeepL API 6 . 
Since MTEB evaluation is based on the embedding similarity between machine-generated and human-generated summaries, we propose to compute the ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002) metrics between machine and human summaries for both French and English version. In Table 2, we report the average of the scores as well as their correlations between the two languages. The correlation is high (above 0.7), showing that the word and n-gram overlap between human and machine summaries is highly preserved in the French version. One may argue that computing the metric on fully translated texts (human and machine summaries are both translated from English) may introduce biases and not assess the quality of the translations. For this purpose, we ensure the French human summaries are correctly translated from English. We use an LLM as-a-judge (Zheng et al.,",
                    "page_start": 2,
                    "page_end": 2,
                    "source_file": "arxiv4.pdf"
                },
                {
                    "text": "## 5. Examining approaches to building a books data commons\n\nThere are many possible permutations for building a books data commons. To structure our exploration, we focused on two particular tracks, discussed below. We chose these tracks mindful of the above legal issues, and because there are already existence proofs that help to illuminate tradeoffs, challenges and potential paths forward for each.\n\n## 5a. Public domain and permissively licensed books\n\n## Existing Project Example : The Pile v2 27\n\nIn 2020, the nonprofit research group EleutherAI constructed and released The Pile - a large, diverse, open dataset for AI training. EleutherAI developed it not only to support their own training of LLMs, but also to lower the barriers for others. 28\n\nAlong with data drawn from the web at large, The Pile included books from three datasets. The first dataset was the Books3 corpus referenced at the outset of this paper. 
The second and third books datasets were smaller: BookCorpus2, which is a collection of 17,868 books by otherwise unpublished authors; and 28,752 books in the public domain and published prior to 1919, drawn from a volunteer effort to digitize public domain works called Project Gutenberg.\n\nAs the awareness about The Pile dataset grew, certain rightsholders began sending copyright notices to have the dataset taken down from various websites.\n\nDespite the takedown requests, the importance of books to EleutherAI and the broader community's AI research remained. In hoping to forge a path forward EleutherAI announced in 2024 that they would create a new version of the dataset, which they will call The Pile v2. 29 Among other things, v2 would 'have many more books than the original Pile had, for example, and more diverse representation of non-academic non-fiction domains.' At the same time, it would only seek to include public domain books and permissively licensed content. As before, this corpus focuses on English language books.",
                    "page_start": 12,
                    "page_end": 12,
                    "source_file": "creative_common_ai.pdf"
                },
                {
                    "text": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n## 3.4 Graphical Data Visualisation Tool\n\nThis section describes the features of the graphical visualisation tool for numeric data. The features are currently available for XLS (Excel) and CSV files, except for the selection of the sheet name which is applicable only for Excel files.\n\nMost GUI elements from the 'Graph' tab (records selection, search box, filters and fields buttons) are also available on the 'Grid' tab and work in the same way.\n\n## 3.4.1 How to visualize graphical data from a dataset resource\n\nAs a result of a dataset search, the system displays on the 'Dataset' tab all distributions (resource/data files) that are part of the selected dataset. 
Each XLS or CSV distribution of the dataset can be further explored by clicking on ' Open Visualization ' under the ' Options ' button -if available.\n\n", - "page_start": 42, - "page_end": 42, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "German is the next-largest language represented at 9%, and is followed by a long-tail of languages by representation.\n\nIn order to enable these uses, HathiTrust has invested in technical solutions to prevent possible misuse. To some extent, they manage this by limiting who gets access to the Center, and limiting access to specific features to researchers at member institutions. HathiTrust has also put in place various security controls on both the physical storage of the digitized books and the network access to those files. The primary uses of the data through the Research Center includes access to an extracted features set and access to the complete corpus 'data capsule,' which is a virtual machine running on the Center's servers. The data capsule allows users to conduct non-consumptive research with the data, but it limits the types of outputs allowed in order to prevent users from obtaining full content of incopyright works. The measures taken include physical security controls on the data centers housing the information, as well as restrictions via network access and encryption of backup tapes. In the finding that HathiTrust use was a fair use and thus rejecting a lawsuit brought by the Authors Guild, the Court noted the importance of these controls. 35\n\nToday, the Center's tools are not suitable for AI training, in that they don't allow the specific types of technical manipulation of underlying text necessary to train an AI. Nevertheless, the Center demonstrates that building a books data commons for computational analysis is possible, and in turn points to the possibility of creating such a resource for AI training. 
36\n\n## Implications of Overall Approach\n\nBy relying on existing limitations and exceptions in copyright law, the number of books one could include in the corpus of a books data commons is far greater and more diverse. Of course, a bigger dataset doesn't necessarily mean a higher quality dataset for all uses of AI models; as HathiTrust shows, even a multimillion book corpus can skew in various directions. Still, dataset size generally remains significant to an LLM's performance - the more text one can train on, or rather the more tokens for training the model, the better, at least along a number of performance metrics. 37\n\nWhile holding the potential for a broader and more diverse dataset, a key limitation in pursuing this approach is that it is only feasible where relevant copyright limitations and exceptions exist. Even then, legal uncertainty means that going down this path is likely to generate, at a minimum, expensive and time-consuming litigation and regulatory", - "page_start": 15, - "page_end": 15, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## 7. Conclusion\n\nThis paper is a snapshot of an idea that is as underexplored as it is rooted in decades of existing work. The concept of mass digitization of books, including to support text and data mining, of which AI is a subset, is not new. But AI training is newly of the zeitgeist, and its transformative use makes questions about how we digitize, preserve, and make accessible knowledge and cultural heritage salient in a distinct way.\n\nAs such, efforts to build a books data commons need not start from scratch; there is much to glean from studying and engaging existing and previous efforts. Those learnings might inform substantive decisions about how to build a books data commons for AI training. 
For instance, looking at the design decisions of HathiTrust may inform how the technical infrastructure and data management practices for AI training might be designed, as well as how to address challenges to building a comprehensive, diverse, and useful corpus. In addition, learnings might inform the process by which we get to a books data commons for example, illustrating ways to attend to the interests of those likely to be impacted by the dataset's development. 41\n\nWhile this paper does not prescribe a particular path forward, we do think finding a path (or paths) to extend access to books for AI training is critical. In the status quo, large swaths of knowledge contained in books are effectively locked up and inaccessible to most everyone. Google is an exception - it can reap the benefits of their 40 million books dataset for research, development, and deployment of AI models. Large, well-resourced entities could theoretically try to replicate Google's digitization efforts, although it would be incredibly expensive, impractical, and largely duplicative for each entity to individually pursue their own efforts. Even then, it isn't clear how everyone else - independent researchers, entrepreneurs, and smaller entities - will have access. The controversy around the Books3 dataset discussed at the outset should not, then, be an argument in favor of preserving the status quo. Instead, it should highlight the urgency of building a books data commons to support an AI ecosystem that provides broad benefits beyond the privileged few.", - "page_start": 20, - "page_end": 20, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "Table 3: Details of the data used for each task. The average number of tokens of texts is computed using the cl100k\\_base tokenizer. For Reranking, the three numbers refer to the queries, the pairs of queries with relevant documents and the pairs of queries with irrelevant ones, respectively. 
The pairs of queries and documents are obtained from the 90 dataset's documents. For Retrieval datasets, the two numbers refer to the queries and the documents, respectively. For SummEvalFr , the three numbers refer to the texts, human and machine summaries. References to all the datasets used are available.",
                    "page_start": 12,
                    "page_end": 12,
                    "source_file": "arxiv4.pdf"
                }
            ]
        },
        {
            "references": {
                "source_file": "creative_common_ai.pdf",
                "query": "In the United States, before which date is book out of copyright for sure ?",
                "target_page": 9,
                "target_passage": "In the United States, all books published or released before 1929 are in the public domain. While use of these books provides maximal certainty for the AI developer to train on",
                "chunk_present": {
                    "presence": true,
                    "index": 0
                }
            },
            "top_chunk": [
                {
                    "text": "## 4. Copyright, Licensing, & Access to Books for Training\n\nEven if books can be acquired, digitized, and made technically useful for AI training, the development of a books data commons would necessarily need to navigate and comply with copyright law.\n\nOut-of-Copyright Books: A minority of books are old enough to be in the public domain and out of copyright, and an AI developer could use them in training without securing any copyright permission. In the United States, all books published or released before 1929 are in the public domain. While use of these books provides maximal certainty for the AI developer to train on, it is worth noting that the status of whether a book is in the public domain can be difficult to determine. For instance, books released between 1929 and 1963 in the U.S. are out of copyright if they were not subject to a copyright renewal; 14 however, data on copyright renewals is not easily accessible.\n\nWhat's more, copyright definitions and term lengths vary among countries. Even if a work is in the public domain in the US, it may not be in other countries. 
Countries generally use the 15 life of the last living author + 'x' years to determine the term of copyright protection. For most countries, 'x' is either 50 years (the minimum required by the Berne Convention) or 70 years (this is the case for all member states of the European Union and for all works published in the U.S. after 1978). This approach makes it difficult to determine copyright terms with certainty because it requires information about the date of death of each author, which is often not readily available.\n\nIn-Copyright Books: The vast majority of books are in copyright, and, insofar as the training process requires making a copy of the book, the use in AI training may implicate copyright law. Our workshop covered three possible paths for incorporating such works.\n\n## Direct licensing\n\nOne could directly license books from rightsholders. There may be some publishers who are willing to license their works for this purpose, but it is hard to determine the scale of such access, and, in any event, there are significant limits on this approach. Along with the challenge (and expense) of reaching agreements with relevant rightsholders, there is also the practical difficulty of simply identifying and finding the rightsholder that one must negotiate", - "page_start": 8, - "page_end": 8, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage or retrieval system, without prior permission in writing from the publisher. Subject to any applicable licensing terms and conditions in the case of electronically supplied publications, a person may engage in fair dealing with a copy of this publication for his or her personal or private use, or his or her research or private study. 
See Section 12(1)(a) of the Copyright Act 98 of 1978.\n\nThe authors and the publisher have made every effort to obtain permission for and to acknowledge the use of copyright material. Should any infringement of copyright have occurred, please contact the publisher, and every effort will be made to rectify omissions or errors in the event of a reprint or new edition.\n\nDeveloped for Oxbridge Academy - 2015", - "page_start": 1, - "page_end": 1, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "It is also important to note two other issues that can affect the application of limitations and exceptions, in particular, their application to e-books.\n\nThe first important limitation is that almost every digital book published today comes with a set of contractual terms that restrict what users can do with it. In many cases, those terms will explicitly restrict text data mining or AI uses of the content, meaning that even where copyright law allows for reuse (for example, under fair use), publishers by contract can impose restrictions anyway. In the United States, those contract terms are generally thought to override the applicability of fair use or other limitations and exceptions. Other 23 jurisdictions, such as those in the EU, provide that certain limitations and exceptions cannot be contractually overridden, though experience to date varies with how those anti-contractual override protections work in practice. 24\n\nThe second limitation is the widespread adoption of 'anti-circumvention' rules in copyright laws and the interplay of these with a choice to rely on copyright limitations and exceptions. Digital books sold by major publishers are generally encumbered with 'digital rights management' (DRM) that limits how someone can use the digital file. For instance, DRM can limit the ability to make a copy of the book, or even screenshot or excerpt from it, among other things. 
Anti-circumvention laws restrict someone's ability to evade these technical restrictions, even if it is for an ultimately lawful use.\n\nWhat this means for our purposes is that even if one acquires a digital book from, for example, Amazon, and it is lawful under copyright law to use that book in AI training, it can still generally be unlawful to circumvent the DRM to do so, outside narrow exceptions. 25 Thus, the ability to use in-copyright books encumbered by DRM - that is, most all books sold by major publishers - is generally limited. 26\n\nPractically, using in-copyright books to build a books commons for AI training - while relying on copyright's limitations and exceptions - requires turning a physical book into digital form, or otherwise engaging in the laborious process of manually re-creating a book's text (i.e., retyping the full text of the book) without circumventing the technical restrictions themselves.", - "page_start": 11, - "page_end": 11, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work - on conditions of your choice. CC licenses let you change your copyright terms from the default of 'all rights reserved' to 'some rights reserved.'\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. 
When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\n\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n\n\nPublic domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark . Creative Commons copyright licenses help authors manage their copyright on terms they choose. Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n## Where public domain tools fit in the copyright spectrum\n\n\n\n## The CC0 Public Domain Dedication\n\nUse this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.\n\n\n\n\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. 
When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.\n\n## What is the di/fference between CC0 and the Public Domain Mark?\n\n\n\nCC0 ('CC Zero') is intended for use only by authors or holders of copyright and related rights (including database rights), in connection with works that are still subject to those rights in one or more countries.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "It is also an example predicated on copyright's limitations and exceptions - in this case, on U.S. fair use. While the Authors Guild filed a copyright infringement suit against HathiTrust, federal courts in 2012 and 2014 ruled that HathiTrust's use of books was fair use. 32\n\nA nonprofit founded in 2008, HathiTrust grew out of a partnership among major US university libraries and today is 'an international community of research libraries committed to the long-term curation and availability of the cultural record.' 
It started in what it calls the 'early 33 days of mass digitization' - that is, at a time when it started to become economical to take existing physical artifacts in libraries and turn them into digital files at a large scale.\n\nThe founding members of HathiTrust were among the initial partners for Google's Book Search product, which allows people to search across and view small snippets of text from in-copyright books and read full copies of public domain books scanned from libraries' 34 collections. The libraries provided Google with books from their collections, Google would then scan the books for use in Book Search, and return to the libraries a digital copy for their own uses. These uses included setting up HathiTrust not only to ensure long-term preservation of the digital books and their metadata, but also to facilitate other uses, including full text search of books and accessibility for people with print disabilities. In separate court cases, both Google and HathiTrust's uses of the books were deemed consistent with copyright law.\n\nThe uses most relevant to this paper are those enabled by what HathiTrust refers to today as the Research Center. The Center grew in part out of a research discipline called 'digital humanities,' which, among other things, seeks to use computational resources or other digital technologies to analyze information and contribute to the study of literature, media, history, and other areas. For instance, imagine you want to understand how a given term (e.g., 'war on drugs') became used; one might seek to analyze when the term was first used and how often it was used over time by analyzing a vast quantity of sources, searching out the term's use. 
The insight here is that there is much to be learned not just from reading or otherwise consuming specific material, but also from 'non-consumptive research,' or \"research in which computational analysis is performed on one or more volumes (textual or image objects)\" to derive other sorts of insights. AI training is a type of non-consumptive use.\n\nToday, the Center '[s]upports large-scale computational analysis of the works in the HathiTrust Digital Library to facilitate non-profit and educational research.' It includes over 18 million books in over 400 languages from the HathiTrust Digital Library collection. Roughly 58% of the corpus is in copyright. HathiTrust notes that, while this corpus is large, it has limitations in terms of its representation across subject matter, language, geography, and other dimensions. In terms of subject matter, the corpus is skewed towards humanities (64.9%) and social sciences (14.3%). In terms of language, 51% of the books are in English,", - "page_start": 14, - "page_end": 14, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## Implications of the The Overall Approach\n\nStepping back from The Pile v2 specifically, or any particular existing collection of books or dataset built on their basis, we want to understand the implications of relying on public domain works and expressly licensed works in building a books commons.\n\nThe benefits are relatively straightforward. Both categories, by definition come with express permission to use the books in AI training. The cost of acquiring the books for this use may be effectively zero or close to it, when considering public domain and 'openly' licensed books that allow redistribution and that have already been digitized.\n\nBut this approach comes with some clear limitations. First, as noted above, for many books in the public domain, their status as such is not always clear. 
And with respect to permissively licensed books, it is not always clear whether and how to comply with the license obligations in this context.\n\nSetting aside those challenges, the simple fact is that relying on public domain and existing permissively licensed books would limit the quantity and diversity of data available for training, impacting performance along different dimensions. Only a small fraction of books ever published fall into this category, and the corpus of books in this category is likely to be skewed heavily towards older public domain books. This skew would, in turn, impact the content available for AI training. For instance, relying on books from before 1929 would not 30 only incorporate outdated language patterns, but also a range of biases and misconceptions about race and gender, among other things. Efforts could be made to get people to permissively license more material - a book drive for permissive licensing, so to speak; this approach would still not encompass most books, at least when it comes to past works. 31\n\n## 5b. Limitations & Exceptions\n\n## Existing Project Example: HathiTrust Research Center (HTRC)\n\nThe HathiTrust Research Center provides researchers with the ability to perform computational analysis across millions of books. While it is not suited specifically for AI training, it is an existence proof for what such a resource might look like.", - "page_start": 13, - "page_end": 13, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "engagement. And, at least in the U.S., it could generate billions of dollars in damages if the specific design choices and technical constraints are not adequate to justify a finding of fair use.\n\nThis sort of books dataset could be built by expanding use of in-copyright books that have already been digitized from existing libraries and other sources. 
Specifically, workshop participants mentioned that the Internet Archive, HathiTrust, and Google as entities that have digitized books and could repurpose their use to build a books commons, although challenges with using these datasets were noted. The Internet Archive is in the midst of litigation brought by book publishers for its program for lending digital books; while not directly relevant to the issue of AI training using their corpus of books, this sort of litigation creates a chilling effect on organizations seeking to make new uses of these digitized books. Meanwhile, Google encumbered HathiTrust's digital copies with certain contractual restrictions, which would need to be addressed to develop a books dataset for AI training, and Google itself is unlikely to share its own copies while it provides them a competitive advantage.\n\nPerhaps as a matter of public policy, these existing copies could be made more freely available. For instance, to ensure robust competition around AI and advance other public interests, policymakers could remove legal obstacles to the sharing of digitized book files for use in AI training. Alternatively, policymakers could go further and affirmatively compel sharing access to these digital book files for AI training.\n\nIt's possible that there could be a new mass digitization initiative, turning physical books into new digital scans. At least in theory, one could try to replicate the existing corpora of HathiTrust, for example, without Google's contractual limitations. At the same time, such an effort would take many years, and it seems unlikely that many libraries would want to go to the trouble to have their collections digitized a second time. 
Moreover, while new scans may provide some incremental benefit over use of existing ones (e.g., by using the most modern digitization and OCR tools and thus improving accuracy), there is no inherent social value to making every entity that wants to do or allow AI training invest in their own redundant scanning.\n\nA new digitization effort could target works that have not been yet digitized. This may be particularly useful given that previous book digitization efforts, and the Google Books project in particular, have focused heavily (though not exclusively) on libraries in English-speaking countries. Additional digitization efforts might make more sense for books in those languages that have not yet been digitized at a meaningful scale. Any new digitization effort might therefore start with a mapping of the extent to which a books corpus in a given language has been digitized.", - "page_start": 16, - "page_end": 16, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "with. The vast majority of in-copyright books are out-of-print or out-of-commerce, and most are not actively managed by their rightsholders. There is no official registry of copyrighted works and their owners, and existing datasets can be incomplete or erroneous. 16\n\nAs a result, there may be no way to license the vast majority of in-copyright books, especially those that have or have had limited commercial value. Put differently, the barrier to using 17 most books is not simply to pay publishers; even if one had significant financial resources, licensing would not enable access to most works.\n\n## Permissively licensed works\n\nThere are books that have been permissively licensed in an easily identifiable way, such as works placed under Creative Commons (CC) licenses. 
Such works explicitly allow particular uses of works subject to various responsibilities (e.g., requiring attribution by the user in their follow-on use).\n\nWhile such works could be candidates for inclusion in a books data commons, their inclusion depends on whether the license's terms can be complied with in the context of AI training. For instance, in the context of CC licensed works, there are requirements for proper attribution across all licenses (the CC tools Public Domain Dedication (CC0) and Public Domain Mark (PDM) are not licenses and do not require attribution). 18", - "page_start": 9, - "page_end": 9, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## 1. Introduction 1\n\nWhile the field of artificial intelligence research and technology has a long history, broad public attention grew over the last year in light of the wide availability of new generative AI systems, including large language models (LLMs) like GPT-4, Claude, and LLaMA-2. These tools are developed using machine learning and other techniques that analyze large datasets of written text, and they are capable of generating text in response to a user's prompts.\n\nWhile many large language models rely on website text for training, books have also played an important role in developing and improving AI systems. Despite the widespread use of ebooks and growth of sales in that market, books remain difficult for researchers and entrepreneurs to access at scale in digital form for the purposes of training AI.\n\nIn 2023, multiple news publications reported on the availability and use of a dataset of books called 'Books3' to train LLMs. The Books3 dataset contains text from over 170,000 books, 2 which are a mix of in-copyright and out-of-copyright works. It is believed to have been originally sourced from a website that was not authorized to distribute all of the works contained in the dataset. 
In lawsuits brought against OpenAI, Microsoft, Meta, and Bloomberg related to their LLMs, the use of Books3 as training data was specifically cited. 3\n\nThe Books3 controversy highlights a critical question at the heart of generative AI: what role do books play in training AI models, and how might digitized books be made widely accessible for the purposes of training AI? What dataset of books could be constructed and under what circumstances?\n\nIn February 2024, Creative Commons, Open Future and Proteus Strategies convened a series of workshops to investigate the concept of a responsibly designed, broadly accessible dataset of digitized books to be used in training AI models. Conducted under the Chatham House Rule, we set out to ask if there is a possible future in which a 'books data commons for AI training' might exist, and what such a commons might look like. The workshops brought together practitioners on the front lines of building next-generation AI models, as well as legal and policy scholars with expertise in the copyright and licensing challenges surrounding digitized books. Our goal was also to bridge the perspective of stewards of", - "page_start": 1, - "page_end": 1, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "When CC0 is applied to a work, copyright and related rights are relinquished worldwide, making the work free from those restrictions to the greatest extent possible.\n\n\n\nThe Public Domain Mark (PDM) is used to label works that are already free of known copyright restrictions. 
Unlike CC0, PDM doesn't change the copyright status of a work.\n\nPDM can be used by anyone, and is intended for use with works that are already free of known copyright restrictions throughout the world.\n\n## Public Domain Mark\n\nUse this tool if you have identified a work that is free of known copyright restrictions.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - } - ] - }, - { - "references": { - "source_file": "creative_common_ai.pdf", - "query": "What of the main imporvement of the Pile v2 dataset in comparison to its first version ?", - "target_page": 13, - "target_passage": "Among other things, v2 would “have many more books than the original Pile had, for example, and more diverse representation of non-academic non-fiction domains.” At the same time, it would only seek to include public domain books and permissively licensed content", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## A Supplementary materials for datasets\n\n## A.1 All datasets\n\nTable 3 displays the size of each dataset along with the average number of tokens per sample and their references. The dataset's content was tokenized using cl100k\\_base encoding. For Retrieval, the two numbers refer to the queries and the documents. For Reranking, the three numbers refer to the queries, the pairs of queries with relevant documents and the pairs of queries with irrelevant ones, respectively. The pairs of queries and documents are obtained from the 90 documents extracted. For SummEvalFr , the three numbers refer to the texts, human and machine summaries, respectively.\n\nFigure 3 represents the semantic similarity between each dataset. The methodology was as follows: 90 random samples per dataset are embedded using the multilingual-e5-large model. The embeddings of each dataset's samples are averaged. 
The similarity between each dataset is then calculated using cosine similarity as in (Muennighoff et al., 2022).\n\nWe complement this analysis by observing the dataset's clouds of embedding in a 2D plane using PCA in Figure 4.\n\n## A.2 Created datasets\n\nSyntec Figure 5 shows an extract from the Syntec dataset with a document and a query relative to this document.\n\nHAL Figure 6 is an extract from the HAL dataset. Table 4 lists the distribution of classes ( domain field) for the HAL dataset on raw subset and mteb\\_eval subset, which is used for MTEB evaluation. Labels descriptions can be found at this URL: https://api.archivesouvertes.fr/ref/domain/?q=*:*&rows=393 or in Table 4. After pre-processing, mteb\\_eval covers titles from 10 domains as classes with less than 500 samples were removed. In the MTEB evaluation subset of the dataset, titles composed of 2 words or less have been removed (371 samples), resulting in an average word count of 13 . 4 . Figure 7 shows the word count distribution per title. Furthermore, the dataset has been cleaned up by manually removing all non-French titles. Additionally, it can be observed in Table 4 that in the original raw dataset, the shs and sdv classes represent by far the majority of the dataset samples with respectively 58706 samples (73%) and 11049 samples (13%). In order to\n\nmitigate the class imbalance while preserving the majority of those classes, they have been randomly subsampled to 6701 and 4803 samples. Furthermore, baseline models have been trained and tested to assess the usability of this dataset in other tasks, such as classification and topic modeling. Table 5 shows the results obtained.\n\nSummEvalFr Extracts of humans and machine summaries translated in French from SummEvalFr and the original ones in English from SummEval (Fabbri et al., 2021) are shown in Figure 9. 
As explained in section 3.1.3, we use a LLM to evaluate the quality of translations for human summaries, we provide the prompt used with GPT-4 for this evaluation in Figure 8.\n\nTable 6 shows the distribution of ratings given by the LLM. With the scale being 10, we manually verify random samples rated above 9. We verify all samples with ratings under 9 and those with no provided rating (N/A) due to the triggering of the OpenAI content management policy. The LLM suggests that 60 samples are not correctly translated. These were verified manually, and after checking, less than 10 samples only needed to be corrected.\n\n## B Supplementary materials for correlation analysis\n\nThis section presents various correlations computed based on the model results on the proposed benchmark.\n\nFigure 10 represents cross-correlations between models' performances and their studied characteristics as a heatmap.\n\nFigure 11 represents the Spearman correlations in terms of performance across models.\n\nFigure 12 represents the Spearman correlations in terms of performance across datasets.\n\n## C Supplementary materials for models\n\nWe present in this section the model characteristics we collected for the 46 evaluated models.", - "page_start": 11, - "page_end": 11, - "source_file": "arxiv4.pdf" - }, - { - "text": "## 5. Examining approaches to building a books data commons\n\nThere are many possible permutations for building a books data commons. To structure our exploration, we focused on two particular tracks, discussed below. We chose these tracks mindful of the above legal issues, and because there are already existence proofs that help to illuminate tradeoffs, challenges and potential paths forward for each.\n\n## 5a. Public domain and permissively licensed books\n\n## Existing Project Example : The Pile v2 27\n\nIn 2020, the nonprofit research group EleutherAI constructed and released The Pile - a large, diverse, open dataset for AI training. 
EleutherAI developed it not only to support their own training of LLMs, but also to lower the barriers for others. 28\n\nAlong with data drawn from the web at large, The Pile included books from three datasets. The first dataset was the Books3 corpus referenced at the outset of this paper. The second and third books datasets were smaller: BookCorpus2, which is a collection of 17,868 books by otherwise unpublished authors; and a 28,752 books in the public domain and published prior to 1919, drawn from a volunteer effort to digitize public domain works called Project Gutenberg.\n\nAs the awareness about The Pile dataset grew, certain rightsholders began sending copyright notices to have the dataset taken down from various websites.\n\nDespite the takedown requests, the importance of books to EleutherAI and the broader community's AI research remained. In hoping to forge a path forward EleutherAI announced in 2024 that they would create a new version of the dataset, which they will call The Pile v2. 29 Among other things, v2 would 'have many more books than the original Pile had, for example, and more diverse representation of non-academic non-fiction domains.' At the same time, it would only seek to include public domain books and permissively licensed content. As before, this corpus focuses on English language books.", - "page_start": 12, - "page_end": 12, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n## 3.2.1 Entering the Datasets-View\n\nThe user has the following possibilities to enter the datasets view:\n\n - · Browsing directly to http://europeandataportal.eu/data\n - · Opening the 'Data' item in the main menu, then clicking on 'Datasets' in the submenu\n - · Clicking on 'Search' in the 'Search Datasets' area, with or without a search keyword entered\n\n\n\n## 3.2.2 How to filter datasets by using 'Faceted Search'\n\nThe user can find suitable datasets by perfo rming a 'Faceted Search' . 
This means the user systematically adds properties, which the desired dataset should fulfill, e.g. a dataset should be part of a specific catalogue or category. The following properties are available:\n\n - · Countries,\n - · Catalogues,\n - · Categories,\n - · Tags,\n - · Formats,\n - · Licences.\n\nThose facets are presented on the left side of the main dataset page. The available options for each facet always reflect the availability of it in the current set of results. The numbers in brackets indicate how many datasets in total have that property e.g. there are 117,610 datasets with a distribution in CSV format.\n\n", - "page_start": 26, - "page_end": 26, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "## 4 WhatMatters for Learning Representations from Video?\n\nIn this section we isolate the contributions of several design choices, including: a) the use of a feature prediction\n\nversus pixel prediction objective, b) the construction of the pretraining data distribution, c) the feature pooling strategy for leveraging the model's representations in downstream tasks, and d) the masking strategy, towards identifying: what to predict from what?\n\n## 4.1 Predicting Representations versus Pixels\n\nWe first ablate the effect of computing the prediction loss in representation space. We train a pair of ViT-L/16 models using either a V-JEPA feature prediction loss, or a mean-squared error loss with the normalized pixel values, as in masked autoencoders (He et al., 2021), and perform a sweep over the learning rate and weight decay schedules for both approaches. All models are pretrained on VideoMix2M for 90K iterations with a batch size of 3072 using multi-block masking. We examine performance on Kinetics-400 (K400), Something-Something-v2 (SSv2), and ImageNet-1K (IN1K), using a frozen backbone with an attentive probe, and report top-1 accuracy using a single center view. 
We also examine end-to-end fine-tuning performance of the models on Kinetics-400.\n\nResults of this comparison are reported in Table 1 and indicate that predicting in feature space provides a consistent performance improvement over pixel space prediction in both frozen evaluation of the video backbone, as well as end-to-end fine-tuning.\n\n## 4.2 Pretraining Data Distribution\n\nNext we study the impact of the pretraining data distribution in Table 2. Leveraging large scale datasets", - "page_start": 4, - "page_end": 4, - "source_file": "arxiv3.pdf" - }, - { - "text": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n## 3.4 Graphical Data Visualisation Tool\n\nThis section describes the features of the graphical visualisation tool for numeric data. The features are currently available for XLS (Excel) and CSV files, except for the selection of the sheet name which is applicable only for Excel files.\n\nMost GUI elements from th e 'Graph' tab (records selection, search box, filters and fields buttons) are al so available on the 'Grid' tab and work in the same way.\n\n## 3.4.1 How to visualize graphical data from a dataset resource\n\nAs a result of a dataset search, the system displays on th e 'Dataset' tab all distributions (resource/data files) that are part of the selected dataset. Each XLS or CSV distribution of the dataset can be further explored by clicking on ' Open Visualization ' under the ' Options ' button -if available.\n\n", - "page_start": 42, - "page_end": 42, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "Table 3: Details of the data used for each task. The average number of tokens of texts is computed using the cl100k\\_base tokenizer. For Reranking, the three numbers refer to the queries, the pairs of queries with relevant documents and the pairs of queries with irrelevant ones, respectively. The pairs of queries and documents are obtained from the 90 dataset's documents. 
For Retrieval datasets, the two numbers refer to the queries and the documents, respectively. For SummEvalFr , the three numbers refer to the texts, human and machine summaries. References to all the datasets used are available.", - "page_start": 12, - "page_end": 12, - "source_file": "arxiv4.pdf" - }, - { - "text": "More details about this process are provided in the appendix A.2 along with some extracts in Figure 6. We make the dataset publicly available in both their raw and clean versions. We use this dataset in a clustering setup to cluster publications by their title and use the domain as ground truth. To ensure the quality of this dataset, we run 3 baseline models for classification: TF-IDF + SVM , a fine-tuned Camembert (Martin et al., 2019) and GPT-4 leveraging In-Context Learning (ICL). Furthermore, we run one baseline model for topic modeling: Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and report scores in the appendix A.2.\n\n## 3.1.3 SummEvalFr (Summarization)\n\nThe original SummEval dataset (Fabbri et al., 2021) consists of 100 news articles from the CNN/Dai-\n\nlyMail dataset. Each article has 11 human-written summaries and 16 machine-generated summaries annotated by 8 people with a score for coherence, consistency, fluency, and relevance. We translated it from English to French using DeepL API 6 . Since MTEB evaluation is based on the embedding similarity between machine-generated and humangenerated summaries, we propose to compute the ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002) metrics between machine and human summaries for both French and English version. In Table 2, we report the average of the scores as well as their correlations between the two languages. The correlation is high (above 0.7), showing that the word and n-gram overlap between human and machine summaries is highly preserved in the French version. 
One may argue that computing the metric on fully translated texts (human and machine summaries are both translated from English) may introduce biases and not assess the quality of the translations. For this purpose, we ensure the French human summaries are correctly translated from English. We use an LLM as-a-judge (Zheng et al.,", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv4.pdf" - }, - { - "text": "## What dataset management practices are necessary?\n\nNo matter how a books data commons gets built, it will be important to consider broader aspects of data governance. For example:\n\n - · Dataset documentation and transparency: Transparent documentation is important for any dataset used for AI training. A datasheet is a standardized form of documentation that includes information about provenance and composition of data, and includes information on management practices, recommended uses or collection process.\n - · Quality assurance: Above, we note the many features that make books useful for AI training, as compared with web data, for example. That said, the institution managing a books commons dataset may still want to collect and curate the collection to meet the particular purposes of its users. For instance, it may want to take steps to mitigate biases inherent in the dataset, by ensuring books are representative of a variety of languages and geographies.\n - · Understanding uses: The institution managing a books commons dataset could measure and study how the dataset is used, to inform future improvements. Such monitoring may also enable accountability measures with respect to uses of the dataset. Introducing community norms for disclosing datasets used in AI training and other forms of AI research would facilitate such monitoring.\n - · Governance mechanisms: In determining matters like acceptable and ethical use, the fundamental question is 'who decides.' 
While this might be settled simply by whoever sets up and operates the dataset and related infrastructure, participatory mechanisms - such as advisory bodies bringing together a broad range of users and stakeholders of a collection - could also be incorporated.", - "page_start": 19, - "page_end": 19, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## Q4: Are there any correlations between datasets with respect to model ranking?\n\nThe datasets correlation w.r.t model ranking are presented in appendix Figure 12. Except for two datasets ( MasakhaNEWSClusteringP2P , SummEvalFr ), the correlations, on average, are high. There is still enough diversity to make each dataset interesting for the French MTEB benchmark. Two groups ( SyntecReranking / SyntecRetrieval , MassiveScenarioClassification / MTOPDomainClassification / MassiveIntentClassification ) exhibit notably high correlations ( ∼ 0.97). It is interesting to point out some sub-diagonal correlation blocks. The datasets being arranged by task indicate that models behave slightly more similarly within the same task than between two different tasks. This underscores the importance of having multiple tasks in the benchmark to select general-purpose models. For readers interested in specific tasks, it is more relevant to examine task-specific rankings rather than the overall one. The complementary results of model correlations w.r.t to strengths and weaknesses on datasets are displayed in appendix Figure 11. Strong correlations in behavior emerge among the variants of the same models (e.g. DistilBERT, sentence-croissant, sentence-t5, e5, etc.). Correlations are also generally observed among numerous models trained using the sentence transformers framework (Reimers and Gurevych, 2019), as well as proprietary models, e.g. from Cohere and OpenAI. 
Conversely, these models finetuned for sentence similarity, show minimal correlation with pre-trained models for which tokenembedding pooling techniques are employed.\n\n## 5 Conclusion and perspectives\n\nIn this work, we introduce a large-scale embedding benchmark for French to enable the research community and industry to select the most relevant embedding methods based on their specific needs. We undertake significant efforts in collecting 15 datasets and create 3 new quality-checked ones to enhance this collection. The whole French benchmark runs on 26 tasks. We select a diverse range of 51 models, including prominent French and multilingual models deemed most efficient to conduct a broad comparison. Our implementation is open to the community and features a public leaderboard, allowing the results to evolve with new models or datasets. After an in-depth analysis of the results, OpenAI models perform significantly better than\n\nthe other models. However, other models should be considered for their performance on specific tasks, being open source or having a small embedding dimension.\n\nThis work opens several doors for future improvements. By examining dataset diversity in terms of topics and model ranking, we observe that the benchmark would benefit from additional datasets that introduce higher diversity. Beyond classification, many tasks focus on semantic similarity, explaining the strong performance of models trained for similarity. Exploring novel tasks in the generative spectrum or evaluating token embeddings (contextualized or not) on tasks like Named Entity Recognition could be an interesting path for future exploration. There are also opportunities for improvements on the model side. With numerous existing models that could be added to the leaderboard and many new proposals awaiting. 
For instance, we can already see the promising capabilities of early variants of recent models (Faysse et al., 2024) and expect that future proposals will come to compete strongly with closed-source models. Ultimately, we hope to see the emergence of other language-specific MTEB variants (e.g. for high-resource languages like Spanish and German), enabling a more comprehensive evaluation of multilingual model performance.\n\n## 6 Limitations", - "page_start": 7, - "page_end": 7, - "source_file": "arxiv4.pdf" - }, - { - "text": "We select genomes with average autosomal coverage above 0.5×, except for VK518, which has previously been suggested to be of Saami ancestry 6 and which had a coverage of 0.438. We included VK518 in our panel to capture this ancestry. Genomes above a coverage cut-off of 0.5× have previously been shown to result in reliable imputation results 72 . We exclude samples with evidence of contamination. We remove any duplicate individuals, such as individuals who were resequenced, choosing the file with the highest coverage. We then filter out any relatives annotated in the Allen Ancient DNA Resource v. 54.1 27 , retaining the individual with the highest coverage in each family clade.\n\nOur final dataset includes 1,556 ancient genomes.\n\nImputation of ancient genomes. We follow the recommended pipeline of GLIMPSE 73 and first call genotype likelihoods for each genome in the 1000GP, segregating sites using bcftools mpileup with filter -q 20, -Q 20 and -C 50. We subsequently impute each genome separately using GLIMPSE v. 1.1.1 using the 1000GP phase 3 reference panel 74 downloaded from https://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/. 
These imputed genomes are merged into a single VCF (variant call format) for further downstream processing.\n\nWe filter any site for which more than 2% of sites have an imputation posterior of less than 0.8 and retain all remaining sites so as not to have any missing genotypes at individual SNPs.\n\nRelate-inferred genealogies. We merge imputed ancient genomes with a subset of the 1000GP dataset, including all European populations (CEU, Utah residents with northern and western European ancestry; CHB, Han Chinese in Bejing, China; FIN, Finnish in Finland; GBR, British in England and Scotland; BS, Iberian populations in Spain; TSI, Toscani in Italy, YRI, Yoruba in Ibadan, Nigeria). We create a second dataset in which we merge imputed genomes with the Simons Genome Diversity Project 75 (SGDP) downloaded from https://sharehost.hms. harvard.edu/genetics/reich\\_lab/sgdp/phased\\_data2021/. These two datasets contain, respectively, a total of 2,270 and 1,834 modern and ancient individuals.\n\nWe then infer genealogies for the joint dataset of ancient and modern genomes using Relate v. 1.2.1. We restrict our analysis to transversions only and assume a mutation rate of 4 × 10 -9 mutations per base per generation and input sample dates as shown in Supplementary Table 1. We use coalescences rates pre-inferred for the 1000GP and SGDP datasets.\n\nMDS analysis. We compute f 2-statistics using the Twigstats function f2\\_blocks\\_from\\_Relate between all pairs of individuals and between all individuals and an outgroup (Han Chinese people in SGDP) using the Relate genealogies of SGDP modern and imputed ancient genomes. We set the argument t to specify a time cut-off and set the argument use\\_muts to FALSE to compute these f -statistics on branches of the genealogy and to TRUE to compute these only on the mutations. 
We use these to compute f 3(outgroup, indiv1, indiv2) = 0.5 × ( f 2(outgroup, indiv1) + f 2(outgroup, indiv2) -f 2(indiv1, indiv2)) for every pair of individuals, and store 1 -f 3(outgroup, indiv1, indiv2) in a symmetric N × N matrix (where N is the number of individuals) for which we then compute an MDS using the R function cmdscale.", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed3.pdf" - } - ] - }, - { - "references": { - "source_file": "news1.pdf", - "query": "Where will the 2024 AI + Energy summit take place ?", - "target_page": 1, - "target_passage": "The AI + Energy Summit, scheduled for September 26, 2024, in Washington, D.C.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Log in\n\n\n\nHome / Arts and Entertainment / New Artificial Intelligence Summit Series Begins With Energy\n\n\n\nARTS AND ENTERTAINMENT\n\n## New Artificial Intelligence Summit Series Begins With Energy\n\n07/31/2024\n\n(AI) continues to transform the United States and the world. To promote and inform rapid advancements in AI and maintain America's global competitiveness, the Special Competitive Studies Project (SCSP), a nonprofit and nonpartisan initiative with a goal of making recommendations to strengthen America's long-term competitiveness in AI, announces the AI+ Summit Series.\n\nThe series kicks off with the topic of energy. The AI + Energy Summit, scheduled for September 26, 2024, in Washington, D.C., will bring together policy makers, energy industry leaders, top government and academic energy researchers, and technologists to address the challenges of AI's energy consumption and develop solutions for a resilient and abundant energy future. 
The event also aims to address the implications of AI and energy for national security and promote partnerships between AI and energy stakeholders.\n\nAI and other emerging technologies can help the United States take the lead in energy areas including maximizing energy efficiencies, discovering new materials, and enabling new forms of power generation. AI also has a role to play in overcoming energy challenges. The Department of Energy (DOE) already uses AI in several areas including advanced computing, emergency response, environmental modeling, climate forecasting, and materials research.\n\nSCSP's recent 'Action Plan for U.S. Leadership in Next-Generation Energy,' raises many issues related to AI and energy, including recommendations for the government to bring America forward. The AI+ Energy Summit will highlight these and other issues, and promote collaboration to solve problems. The stakes are high; if the U.S. falls short on energy, American adversaries could gain the upper hand in AI leadership, according to SCSP experts.\n\nVisit scsp.ai to learn more about the AI+Energy Summit and the SCSP's Next-Generation Energy Action Plan.\n\nArticle Link\n\nhttps://about.newsusa.com/new-artificial-intelligence-summit-series-begins-with…\n\n## RELATED ARTICLES\n\n\n\n\n\nMar 06, 2024\n\nCelebrate St. Patrick's Day with No Booze, Just Pure Irish Fun and Entertainment\n\nMar 06, 2024\n\nExplore Downtown San Pedro with Flair: Ride the Iconic Red Car Trolley for Free\n\nMar 06, 2024\n\nSay Hello to Your Big Break at the Stapleton Library Job Fair in Vocation, Trade, or Civil Service\n\n\n\n\n\nFeb 22, 2024\n\nRetrain Your Emotional Brain: A Natural Alternative to Weight Loss Drugs\n\n© Copyright NewsUSA 2025. 
All Rights Reserved.\n\nFeb 21, 2024\n\nSerial Entrepreneur Teaches Us How to Go the Distance in Business and in Life\n\nNEWSUSA\n\nMar 06, 2024\n\nLocal Artists Collaborate for a Unique Fusion of Groove and Collage\n\nFASHION\n\nBUSINESS\n\nINFOGRAPHIC\n\nENVIRONMENT\n\nHEALTH\n\nMONEY\n\nFOOD\n\nTRAVEL\n\nBRIDAL\n\nRECREATION\n\nTECHNOLOGY\n\nHOME\n\nEDUCATION\n\nARTS & ENTERTAINMENT\n\nAUTO\n\nCHILDREN\n\nFITNESS\n\nHOLIDAY\n\nINSURANCE\n\nLAWN & GARDEN\n\nLISTICLE\n\nNUTRITION\n\nPARENTING\n\nPETS\n\nSEASONAL\n\nSENIORS\n\nSPANISH\n\nTIPS AND HOW TO\n\nENTERTAINMENT\n\nCAREER\n\nCOMMUNITY\n\nFAMILY\n\nTIPS\n\nINTERNET\n\nHUMAN\\_INTEREST\n\nBEAUTY\n\nARTS\n\nREALESTATE\n\nSAFETY\n\nMEDICINE\n\nBOOK\\_REVIEW\n\nRECIPE\n\nAFRICAN\\_AMERICANS\n\nHOW\\_TO\n\nBYLINED\\_COLUMN\n\nCHARITY\n\nSPORTS\n\nHOME\\_IMPROVEMENT\n\nTECH\n\nWELLNESS\n\nARTS AND ENTERTAINMENT\n\nFOOD & DRINK\n\nREAL\\_ESTATE\n\nVETERANS\n\nOUTDOORS\n\nREAL ESTATE\n\nHUMAN INTEREST\n\nMONEY & FINANCE\n\nFASHION & BEAUTY\n\nMONEY AND FINANCE\n\nBOOKS & ENTERTAINMENT\n\nBOOKS\n\nARTS & ENTERTAINMENT\n\nCATEGORIES\n\nRECENT POSTS", - "page_start": 0, - "page_end": 0, - "source_file": "news1.pdf" - }, - { - "text": "- 200. \"Big tech and the pursuit of AI dominance\" (https://www.economist.com/business/2023/03/2 6/big-tech-and-the-pursuit-of-ai-dominance). The Economist . 26 March 2023. Archived (http s://web.archive.org/web/20231229021351/https://www.economist.com/business/2023/03/26/ big-tech-and-the-pursuit-of-ai-dominance) from the original on 29 December 2023.\n - 201. Fung, Brian (19 December 2023). \"Where the battle to dominate AI may be won\" (https://ww w.cnn.com/2023/12/19/tech/cloud-competition-and-ai/index.html). CNN Business . Archived (https://web.archive.org/web/20240113053332/https://www.cnn.com/2023/12/19/tech/cloudcompetition-and-ai/index.html) from the original on 13 January 2024.\n - 202. Metz, Cade (5 July 2023). 
\"In the Age of A.I., Tech's Little Guys Need Big Friends\" (https://w ww.nytimes.com/2023/07/05/business/artificial-intelligence-power-data-centers.html). The New York Times . Archived (https://web.archive.org/web/20240708214644/https://www.nytim es.com/2023/07/05/business/artificial-intelligence-power-data-centers.html) from the original on 8 July 2024. Retrieved 5 October 2024.\n - 203. \"Electricity 2024 - Analysis\" (https://www.iea.org/reports/electricity-2024). IEA . 24 January 2024. Retrieved 13 July 2024.\n - 204. Calvert, Brian (28 March 2024). \"AI already uses as much energy as a small country. It's only the beginning\" (https://www.vox.com/climate/2024/3/28/24111721/ai-uses-a-lot-of-ener gy-experts-expect-it-to-double-in-just-a-few-years). Vox . New York, New York. Archived (http s://web.archive.org/web/20240703080555/https://www.vox.com/climate/2024/3/28/2411172 1/ai-uses-a-lot-of-energy-experts-expect-it-to-double-in-just-a-few-years) from the original on 3 July 2024. Retrieved 5 October 2024.\n - 205. Halper, Evan; O'Donovan, Caroline (21 June 2024). \"AI is exhausting the power grid. Tech firms are seeking a miracle solution\" (https://www.washingtonpost.com/business/2024/06/2 1/artificial-intelligence-nuclear-fusion-climate/?utm\\_campaign=wp\\_post\\_most&utm\\_medium =email&utm\\_source=newsletter&wpisrc=nl\\_most&carta-url=https%3A%2F%2Fs2.washingto npost.com%2Fcar-ln-tr%2F3e0d678%2F6675a2d2c2c05472dd9ec0f4%2F596c09009bbc0f 20865036e7%2F12%2F52%2F6675a2d2c2c05472dd9ec0f4). Washington Post .\n - 206. Davenport, Carly. \"AI Data Centers and the Coming YS Power Demand Surge\" (https://web. archive.org/web/20240726080428/https://www.goldmansachs.com/intelligence/pages/gs-res earch/generational-growth-ai-data-centers-and-the-coming-us-power-surge/report.pdf) (PDF). Goldman Sachs . 
Archived from the original (https://www.goldmansachs.com/intellige nce/pages/gs-research/generational-growth-ai-data-centers-and-the-coming-us-power-surg e/report.pdf) (PDF) on 26 July 2024. Retrieved 5 October 2024.\n - 207. Ryan, Carol (12 April 2024). \"Energy-Guzzling AI Is Also the Future of Energy Savings\" (http s://www.wsj.com/business/energy-oil/ai-data-centers-energy-savings-d602296e). Wall Street Journal . Dow Jones.\n - 208. Hiller, Jennifer (1 July 2024). \"Tech Industry Wants to Lock Up Nuclear Power for AI\" (https:// www.wsj.com/business/energy-oil/tech-industry-wants-to-lock-up-nuclear-power-for-ai-6cb7 5316?mod=djem10point). Wall Street Journal . Dow Jones. Archived (https://web.archive.or g/web/20241005165650/https://www.wsj.com/business/energy-oil/tech-industry-wants-to-loc k-up-nuclear-power-for-ai-6cb75316?mod=djem10point) from the original on 5 October 2024. Retrieved 5 October 2024.\n - 209. Kendall, Tyler (28 September 2024). \"Nvidia's Huang Says Nuclear Power an Option to Feed Data Centers\" (https://www.bloomberg.com/news/articles/2024-09-27/nvidia-s-huang-s ays-nuclear-power-an-option-to-feed-data-centers). Bloomberg .", - "page_start": 41, - "page_end": 41, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- 157. Roberts, Siobhan (25 July 2024). \"AI achieves silver-medal standard solving International Mathematical Olympiad problems\" (https://www.nytimes.com/2024/07/25/science/ai-math-al phaproof-deepmind.html). The New York Times . Archived (https://web.archive.org/web/2024 0926131402/https://www.nytimes.com/2024/07/25/science/ai-math-alphaproof-deepmind.ht ml) from the original on 26 September 2024. Retrieved 7 August 2024.\n - 158. LLEMMA . (https://blog.eleuther.ai/llemma/) eleuther.ai. Retrieved 2024-08-07.\n - 159. AI Math. (https://julius.ai/home/ai-math) Archived (https://web.archive.org/web/20241005165 649/https://julius.ai/home/ai-math) 5 October 2024 at the Wayback Machine Caesars Labs, 2024. 
Retrieved 2024-08-07.", - "page_start": 37, - "page_end": 37, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- 314. Milmo, Dan (3 November 2023). \"Hope or Horror? The great AI debate dividing its pioneers\". The Guardian Weekly . pp. 10-12.\n - 315. \"The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023\" (https://web.archive.org/web/20231101123904/https://www.gov.uk/government/public ations/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countrie s-attending-the-ai-safety-summit-1-2-november-2023). GOV.UK . 1 November 2023. Archived from the original (https://www.gov.uk/government/publications/ai-safety-summit-20 23-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-s ummit-1-2-november-2023) on 1 November 2023. Retrieved 2 November 2023.\n - 316. \"Countries agree to safe and responsible development of frontier AI in landmark Bletchley Declaration\" (https://www.gov.uk/government/news/countries-agree-to-safe-and-responsible -development-of-frontier-ai-in-landmark-bletchley-declaration). GOV.UK (Press release). Archived (https://web.archive.org/web/20231101115016/https://www.gov.uk/government/ne ws/countries-agree-to-safe-and-responsible-development-of-frontier-ai-in-landmark-bletchle y-declaration) from the original on 1 November 2023. Retrieved 1 November 2023.\n - 317. \"Second global AI summit secures safety commitments from companies\" (https://www.reuter s.com/technology/global-ai-summit-seoul-aims-forge-new-regulatory-agreements-2024-05-2 1). Reuters. 21 May 2024. Retrieved 23 May 2024.\n - 318. \"Frontier AI Safety Commitments, AI Seoul Summit 2024\" (https://web.archive.org/web/2024 0523201611/https://www.gov.uk/government/publications/frontier-ai-safety-commitments-aiseoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024). gov.uk. 21 May 2024. 
Archived from the original (https://www.gov.uk/government/publications/frontier-ai-safe ty-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-202 4) on 23 May 2024. Retrieved 23 May 2024.\n - 319. Russell & Norvig 2021, p. 9.\n - 320. Copeland, J., ed. (2004). The Essential Turing: the ideas that gave birth to the computer age . Oxford, England: Clarendon Press. ISBN 0-1982-5079-7.\n - 321. \"Google books ngram\" (https://books.google.com/ngrams/graph?content=electronic+brain& year\\_start=1930&year\\_end=2019&corpus=en-2019&smoothing=3). Archived (https://web.ar chive.org/web/20241005170209/https://books.google.com/ngrams/graph?content=electronic +brain&year\\_start=1930&year\\_end=2019&corpus=en-2019&smoothing=3) from the original on 5 October 2024. Retrieved 5 October 2024.\n - 322. AI's immediate precursors: McCorduck (2004, pp. 51-107), Crevier (1993, pp. 27-32), Russell & Norvig (2021, pp. 8-17), Moravec (1988, p. 3)\n - 323. Turing's original publication of the Turing test in \"Computing machinery and intelligence\": Turing (1950) Historical influence and philosophical implications: Haugeland (1985, pp. 69), Crevier (1993, p. 24), McCorduck (2004, pp. 70-71), Russell & Norvig (2021, pp. 2, 984)\n - 324. Crevier (1993), pp. 47-49.\n - 325. Russell & Norvig (2003), p. 17.\n - 326. Russell & Norvig (2003), p. 18.\n - 327. Newquist (1994), pp. 86-86.\n - 328. Simon (1965, p. 96) quoted in Crevier (1993, p. 109)\n - 329. Minsky (1967, p. 2) quoted in Crevier (1993, p. 109)\n - 330. Russell & Norvig (2021), p. 21.\n - 331. Lighthill (1973).\n - 332. NRC 1999, pp. 212-213.\n - 333. Russell & Norvig (2021), p. 22.\n - 334. Expert systems: Russell & Norvig (2021, pp. 23, 292), Luger & Stubblefield (2004, pp. 227331), Nilsson (1998, chpt. 17.4), McCorduck (2004, pp. 327-335, 434-435), Crevier (1993, pp. 145-162, 197-203), Newquist (1994, pp. 
155-183)", - "page_start": 47, - "page_end": 47, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- 282. Arguments that AI is not an imminent risk: Brooks (2014), Geist (2015), Madrigal (2015), Lee (2014)\n - 283. Christian (2020), pp. 67, 73.\n - 284. Yudkowsky (2008).\n - 285. Anderson & Anderson (2011).\n - 286. AAAI (2014).\n - 287. Wallach (2010).\n - 288. Russell (2019), p. 173.\n - 289. Stewart, Ashley; Melton, Monica. \"Hugging Face CEO says he's focused on building a 'sustainable model' for the $4.5 billion open-source-AI startup\" (https://www.businessinsider. com/hugging-face-open-source-ai-approach-2023-12). Business Insider . Archived (https://w eb.archive.org/web/20240925013220/https://www.businessinsider.com/hugging-face-open-s ource-ai-approach-2023-12) from the original on 25 September 2024. Retrieved 14 April 2024.\n - 290. Wiggers, Kyle (9 April 2024). \"Google open sources tools to support AI model development\" (https://techcrunch.com/2024/04/09/google-open-sources-tools-to-support-ai-model-develop ment). TechCrunch . Archived (https://web.archive.org/web/20240910112401/https://techcrun ch.com/2024/04/09/google-open-sources-tools-to-support-ai-model-development/) from the original on 10 September 2024. Retrieved 14 April 2024.\n - 291. Heaven, Will Douglas (12 May 2023). \"The open-source AI boom is built on Big Tech's handouts. How long will it last?\" (https://www.technologyreview.com/2023/05/12/1072950/op en-source-ai-google-openai-eleuther-meta). MIT Technology Review . Retrieved 14 April 2024.\n - 292. Brodsky, Sascha (19 December 2023). \"Mistral AI's New Language Model Aims for Open Source Supremacy\" (https://aibusiness.com/nlp/mistral-ai-s-new-language-model-aims-for-o pen-source-supremacy). AI Business . Archived (https://web.archive.org/web/202409052126 07/https://aibusiness.com/nlp/mistral-ai-s-new-language-model-aims-for-open-source-supre macy) from the original on 5 September 2024. Retrieved 5 October 2024.\n - 293. 
Edwards, Benj (22 February 2024). \"Stability announces Stable Diffusion 3, a next-gen AI image generator\" (https://arstechnica.com/information-technology/2024/02/stability-announc es-stable-diffusion-3-a-next-gen-ai-image-generator). Ars Technica . Archived (https://web.ar chive.org/web/20241005170201/https://arstechnica.com/information-technology/2024/02/sta bility-announces-stable-diffusion-3-a-next-gen-ai-image-generator/) from the original on 5 October 2024. Retrieved 14 April 2024.\n - 294. Marshall, Matt (29 January 2024). \"How enterprises are using open source LLMs: 16 examples\" (https://venturebeat.com/ai/how-enterprises-are-using-open-source-llms-16-exa mples). VentureBeat . Archived (https://web.archive.org/web/20240926171131/https://ventur ebeat.com/ai/how-enterprises-are-using-open-source-llms-16-examples/) from the original on 26 September 2024. Retrieved 5 October 2024.\n - 295. Piper, Kelsey (2 February 2024). \"Should we make our most powerful AI models open source to all?\" (https://www.vox.com/future-perfect/2024/2/2/24058484/open-source-artificial -intelligence-ai-risk-meta-llama-2-chatgpt-openai-deepfake). Vox . Archived (https://web.archi ve.org/web/20241005170204/https://www.vox.com/future-perfect/2024/2/2/24058484/open-s ource-artificial-intelligence-ai-risk-meta-llama-2-chatgpt-openai-deepfake) from the original on 5 October 2024. Retrieved 14 April 2024.\n - 296. Alan Turing Institute (2019). \"Understanding artificial intelligence ethics and safety\" (https:// www.turing.ac.uk/sites/default/files/2019-06/understanding\\_artificial\\_intelligence\\_ethics\\_and \\_safety.pdf) (PDF). Archived (https://web.archive.org/web/20240911131935/https://www.turi ng.ac.uk/sites/default/files/2019-06/understanding\\_artificial\\_intelligence\\_ethics\\_and\\_safety. pdf) (PDF) from the original on 11 September 2024. 
Retrieved 5 October 2024.", - "page_start": 45, - "page_end": 45, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Franzen) sued AI companies for using their work to train generative AI. [195][196] Another discussed approach is to envision a separate sui generis system of protection for creations generated by AI to ensure fair attribution and compensation for human authors. [197]\n\n## Dominance by tech giants\n\nThe commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft. [198][199][200] Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace. [201][202]\n\n## Power needs and environmental impacts\n\nIn January 2024, the International Energy Agency (IEA) released Electricity 2024, Analysis and Forecast to 2026 , forecasting electric power use. [203] This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with additional electric power usage equal to electricity used by the whole Japanese nation. [204]\n\nProdigious power consumption by AI is responsible for the growth of fossil fuels use, and might delay closings of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search involves the use of 10 times the electrical energy as a Google search. The large firms are in haste to find power sources - from nuclear energy to geothermal to fusion. 
The tech firms argue that - in the long view - AI will be eventually kinder to the environment, but they need the energy now. AI makes the power grid more efficient and \"intelligent\", will assist in the growth of nuclear power, and track overall carbon emissions, according to technology firms. [205]\n\nA 2024 Goldman Sachs Research Paper, AI Data Centers and the Coming US Power Demand Surge , found \"US power demand (is) likely to experience growth not seen in a generation....\" and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means. [206] Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all. [207]\n\nIn 2024, the Wall Street Journal reported that big AI companies have begun negotiations with the US nuclear power providers to provide electricity to the data centers. In March 2024 Amazon purchased a Pennsylvania nuclear-powered data center for $650 Million (US). [208] Nvidia CEO Jen-Hsun Huang said nuclear power is a good option for the data centers. [209]\n\nIn September 2024, Microsoft announced an agreement with Constellation Energy to re-open the Three Mile Island nuclear power plant to provide Microsoft with 100% of all electric power produced by the plant for 20 years. Reopening the plant, which suffered a partial nuclear meltdown of its Unit 2 reactor in 1979, will require Constellation to get through strict regulatory processes which will include extensive safety scrutiny from the US Nuclear Regulatory Commission. If approved (this will be the first ever US re-commissioning of a nuclear plant), over 835 megawatts of power - enough for 800,000 homes - of", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- 160. 
Alex McFarland: 7 Best AI for Math Tools. (https://www.unite.ai/best-ai-for-math-tools/) Archived (https://web.archive.org/web/20240911125615/https://www.unite.ai/best-ai-for-mat h-tools/) 11 September 2024 at the Wayback Machine unite.ai. Retrieved 2024-08-07\n - 161. Matthew Finio & Amanda Downie: IBM Think 2024 Primer, \"What is Artificial Intelligence (AI) in Finance?\" 8 Dec. 2023\n - 162. M. Nicolas, J. Firzli: Pensions Age/European Pensions magazine, \"Artificial Intelligence: Ask the Industry\" May June 2024 https://videovoice.org/ai-in-finance-innovationentrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligence-act-wont-work-asintended/ Archived (https://web.archive.org/web/20240911125502/https://videovoice.org/ai-i n-finance-innovation-entrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligenceact-wont-work-as-intended/) 11 September 2024 at the Wayback Machine.\n - 163. Congressional Research Service (2019). Artificial Intelligence and National Security (https://f as.org/sgp/crs/natsec/R45178.pdf) (PDF). Washington, DC: Congressional Research Service.PD-notice\n - 164. Slyusar, Vadym (2019). Artificial intelligence as the basis of future control networks (Preprint). doi:10.13140/RG.2.2.30247.50087 (https://doi.org/10.13140%2FRG.2.2.30247.5 0087).\n - 165. Iraqi, Amjad (3 April 2024). \" 'Lavender': The AI machine directing Israel's bombing spree in Gaza\" (https://www.972mag.com/lavender-ai-israeli-army-gaza/). +972 Magazine . Retrieved 6 April 2024.\n - 166. Davies, Harry; McKernan, Bethan; Sabbagh, Dan (1 December 2023). \" 'The Gospel': how Israel uses AI to select bombing targets in Gaza\" (https://www.theguardian.com/world/2023/ dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets). The Guardian . Retrieved 4 December 2023.\n - 167. Marti, J Werner (10 August 2024). 
\"Drohnen haben den Krieg in der Ukraine revolutioniert, doch sie sind empfindlich auf Störsender - deshalb sollen sie jetzt autonom operieren\" (http s://www.nzz.ch/international/die-ukraine-setzt-auf-drohnen-die-autonom-navigieren-und-toet en-koennen-ld.1838731). Neue Zürcher Zeitung (in German). Retrieved 10 August 2024.\n - 168. Newsom, Gavin; Weber, Shirley N. (6 September 2023). \"Executive Order N-12-23\" (https:// www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-\\_-GGN-Signed.pdf) (PDF). Executive Department, State of California. Archived (https://web.archive.org/web/202402212 22035/https://www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-\\_-GGN-Signed.pd f) (PDF) from the original on 21 February 2024. Retrieved 7 September 2023.\n - 169. Pinaya, Walter H. L.; Graham, Mark S.; Kerfoot, Eric; Tudosiu, Petru-Daniel; Dafflon, Jessica; Fernandez, Virginia; Sanchez, Pedro; Wolleb, Julia; da Costa, Pedro F.; Patel, Ashay (2023). \"Generative AI for Medical Imaging: extending the MONAI Framework\". arXiv:2307.15208 (https://arxiv.org/abs/2307.15208) [eess.IV (https://arxiv.org/archive/eess.I V)].\n - 170. Griffith, Erin; Metz, Cade (27 January 2023). \"Anthropic Said to Be Closing In on $300 Million in New A.I. Funding\" (https://www.nytimes.com/2023/01/27/technology/anthropic-ai-fu nding.html). The New York Times . Archived (https://web.archive.org/web/20231209074235/h ttps://www.nytimes.com/2023/01/27/technology/anthropic-ai-funding.html) from the original on 9 December 2023. Retrieved 14 March 2023.\n - 171. Lanxon, Nate; Bass, Dina; Davalos, Jackie (10 March 2023). \"A Cheat Sheet to AI Buzzwords and Their Meanings\" (https://news.bloomberglaw.com/tech-and-telecom-law/a-c heat-sheet-to-ai-buzzwords-and-their-meanings-quicktake). Bloomberg News . Archived (http s://web.archive.org/web/20231117140835/https://news.bloomberglaw.com/tech-and-telecom -law/a-cheat-sheet-to-ai-buzzwords-and-their-meanings-quicktake) from the original on 17 November 2023. 
Retrieved 14 March 2023.", - "page_start": 38, - "page_end": 38, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- 265. Cellan-Jones (2014).\n - 266. Russell & Norvig 2021, p. 1001.\n - 267. Bostrom (2014).\n - 268. Russell (2019).\n - 269. Bostrom (2014); Müller & Bostrom (2014); Bostrom (2015).\n - 270. Harari (2023).\n - 271. Müller & Bostrom (2014).\n - 272. Leaders' concerns about the existential risks of AI around 2015: Rawlinson (2015), Holley (2015), Gibbs (2014), Sainato (2015)\n - 273. \" \"Godfather of artificial intelligence\" talks impact and potential of new AI\" (https://www.cbsne ws.com/video/godfather-of-artificial-intelligence-talks-impact-and-potential-of-new-ai). CBS News . 25 March 2023. Archived (https://web.archive.org/web/20230328225221/https://www. cbsnews.com/video/godfather-of-artificial-intelligence-talks-impact-and-potential-of-new-ai) from the original on 28 March 2023. Retrieved 28 March 2023.\n - 274. Pittis, Don (4 May 2023). \"Canadian artificial intelligence leader Geoffrey Hinton piles on fears of computer takeover\" (https://www.cbc.ca/news/business/ai-doom-column-don-pittis1.6829302). CBC . Archived (https://web.archive.org/web/20240707032135/https://www.cbc. ca/news/business/ai-doom-column-don-pittis-1.6829302) from the original on 7 July 2024. Retrieved 5 October 2024.\n - 275. \" '50-50 chance' that AI outsmarts humanity, Geoffrey Hinton says\" (https://www.bnnbloomb erg.ca/50-50-chance-that-ai-outsmarts-humanity-geoffrey-hinton-says-1.2085394). Bloomberg BNN . 14 June 2024. Retrieved 6 July 2024.\n - 276. Valance (2023).\n - 277. Taylor, Josh (7 May 2023). \"Rise of artificial intelligence is inevitable but should not be feared, 'father of AI' says\" (https://www.theguardian.com/technology/2023/may/07/rise-of-arti ficial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says). The Guardian . 
Archived (https://web.archive.org/web/20231023061228/https://www.theguardian.com/techn ology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-fatherof-ai-says) from the original on 23 October 2023. Retrieved 26 May 2023.\n - 278. Colton, Emma (7 May 2023). \" 'Father of AI' says tech fears misplaced: 'You cannot stop it' \" (https://www.foxnews.com/tech/father-ai-jurgen-schmidhuber-says-tech-fears-misplaced-can not-stop). Fox News . Archived (https://web.archive.org/web/20230526162642/https://www.fo xnews.com/tech/father-ai-jurgen-schmidhuber-says-tech-fears-misplaced-cannot-stop) from the original on 26 May 2023. Retrieved 26 May 2023.\n - 279. Jones, Hessie (23 May 2023). \"Juergen Schmidhuber, Renowned 'Father Of Modern AI,' Says His Life's Work Won't Lead To Dystopia\" (https://www.forbes.com/sites/hessiejones/20 23/05/23/juergen-schmidhuber-renowned-father-of-modern-ai-says-his-lifes-work-wont-leadto-dystopia). Forbes . Archived (https://web.archive.org/web/20230526163102/https://www.fo rbes.com/sites/hessiejones/2023/05/23/juergen-schmidhuber-renowned-father-of-modern-ai -says-his-lifes-work-wont-lead-to-dystopia/) from the original on 26 May 2023. Retrieved 26 May 2023.\n - 280. McMorrow, Ryan (19 December 2023). \"Andrew Ng: 'Do we think the world is better off with more or less intelligence?' \" (https://www.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f93 52be3). Financial Times . Archived (https://web.archive.org/web/20240125014121/https://ww w.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f9352be3) from the original on 25 January 2024. Retrieved 30 December 2023.\n - 281. Levy, Steven (22 December 2023). \"How Not to Be Stupid About AI, With Yann LeCun\" (http s://www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview). Wired . Archived (h ttps://web.archive.org/web/20231228152443/https://www.wired.com/story/artificial-intelligenc e-meta-yann-lecun-interview/) from the original on 28 December 2023. 
Retrieved 30 December 2023.", - "page_start": 44, - "page_end": 44, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers. [300]\n\nThe UK AI Safety Institute released in 2024 a testing toolset called 'Inspect' for AI safety evaluations available under a MIT open-source licence which is freely available on GitHub and can be improved with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities. [301]\n\n## Regulation\n\nThe regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms. [302] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. [303] According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone. [304][305] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI. [306] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and\n\nThe first global AI Safety Summit was held in 2023 with a declaration calling for international cooperation.\n\n\n\nVietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia. 
[306] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. [306] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI. [307] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years. [308] In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, governments officials and academics. [309] In 2024, the Council of Europe created the first international legally binding treaty on AI, called the \"Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law\". It was adopted by the European Union, the United States, the United Kingdom, and other signatories. [310]\n\nIn a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that \"products and services using AI have more benefits than drawbacks\". [304] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. [311] In a 2023 Fox News poll, 35% of Americans thought it \"very important\", and an additional 41% thought it \"somewhat important\", for the federal government to regulate AI, versus 13% responding \"not very important\" and 8% responding \"not at all important\". 
[312][313]", - "page_start": 20, - "page_end": 20, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- [1] 'Chatbot Arena LLM Leaderboard: Community-driven evaluation for best LLM and AI chatbots,' https:// huggingface.co/spaces/lmarena-ai/chatbot-arena-leaderboard, accessed: 2024-11-14.\n - [2] 'Hello gpt-4o,' https://openai.com/index/hello-gpt-4o/, published: 2024-05-23.\n - [3] 'Introducing Llama 3.1: Our most capable models to date,' https://ai.meta.com/blog/meta-llama-3-1/, published: 2024-07-23.\n - [4] 'Introducing Meta Llama 3: The most capable openly available LLM to date,' https://ai.meta.com/blog/ meta-llama-3/, published: 2024-04-18.\n - [5] 'Martian LLM router,' https://withmartian.com/.\n - [6] 'New embedding models and API updates,' https://openai.com/index/new-embedding-models-and-api-updates, published: 2024-01-25.\n - [7] 'Notdiamond LLM router,' https://www.notdiamond.ai/.\n - [8] 'OpenAI and others seek new path to smarter AI as current methods hit limitations,' https://www.reuters.com/technology/artificial-intelligence/ openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11, published: 2024-11-15.\n - [9] 'OpenAI, Google and Anthropic are struggling to build more advanced AI,' https://www.bloomberg.com/news/ articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai?sref=CrGXSfHu, published: 2024-11-13.\n - [10] 'OpenAI shifts strategy as rate of 'GPT' AI improvements slows,' https://www.theinformation.com/articles/ openai-shifts-strategy-as-rate-of-gpt-ai-improvements-slows, published: 2024-11-9.\n - [11] 'Openrouter LLM router,' https://openrouter.ai/.\n - [12] 'Unify LLM router,' https://unify.ai/.\n - [13] 'What is a control plane?' https://www.ibm.com/think/topics/control-plane, published: 2024-10-31.\n - [14] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat et al. 
, 'GPT-4 technical report,' arXiv preprint arXiv:2303.08774 , 2023.\n - [15] P. Aggarwal, A. Madaan, A. Anand, S. P. Potharaju, S. Mishra, P. Zhou, A. Gupta, D. Rajagopal, K. Kappaganthu, Y. Yang et al. , 'Automix: Automatically mixing language models,' arXiv preprint arXiv:2310.12963 , 2023.\n - [16] G. Alon and M. Kamfonas, 'Detecting language model attacks with perplexity,' arXiv preprint arXiv:2308.14132 , 2023.\n - [17] R. A. Bradley and M. E. Terry, 'Rank analysis of incomplete block designs: I. the method of paired comparisons,' Biometrika , vol. 39, no. 3/4, 1952.\n - [18] N. Carlini, D. Paleka, K. D. Dvijotham, T. Steinke, J. Hayase, A. F. Cooper, K. Lee, M. Jagielski, M. Nasr, A. Conmy et al. , 'Stealing part of a production language model,' arXiv preprint arXiv:2403.06634 , 2024.\n - [19] H. Chaudhari, G. Severi, J. Abascal, M. Jagielski, C. A. Choquette-Choo, M. Nasr, C. Nita-Rotaru, and A. Oprea, 'Phantom: General trigger attacks on retrieval augmented language generation,' arXiv preprint arXiv:2405.20485 , 2024.\n - [20] L. Chen, M. Zaharia, and J. Zou, 'FrugalGPT: How to use large language models while reducing cost and improving performance,' arXiv preprint arXiv:2305.05176 , 2023.\n - [21] W.-L. Chiang, L. Zheng, Y. Sheng, A. N. Angelopoulos, T. Li, D. Li, B. Zhu, H. Zhang, M. Jordan, J. E. Gonzalez, and I. Stoica, 'Chatbot arena: An open platform for evaluating LLMs by human preference,' in Forty-first International Conference on Machine Learning (ICML) , 2024.\n - [22] S. Cho, S. Jeong, J. Seo, T. Hwang, and J. C. 
Park, 'Typos that broke the RAG's back: Genetic attack on RAG pipeline by simulating documents in the wild via low-level perturbations,' arXiv preprint arXiv:2404.13948 , 2024.", - "page_start": 18, - "page_end": 18, - "source_file": "arxiv1.pdf" - } - ] - }, - { - "references": { - "source_file": "news1.pdf", - "query": "What is the United States SCSP ?", - "target_page": 1, - "target_passage": "he Special Competitive Studies Project (SCSP), a nonprofit and nonpartisan initiative with a goal of making recommendations to strengthen America's long-term competitiveness in AI", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "- /SM590000 Host objects can have WWPNs and IQNs.", - "page_start": 352, - "page_end": 352, - "source_file": "sg247938.pdf" - }, - { - "text": "- /SM590000 scp : tool to transfer files between hosts", - "page_start": 787, - "page_end": 787, - "source_file": "sg247938.pdf" - }, - { - "text": "- /SM590000 AFP Viewer plug-in support\n - /SM590000 External Data Services (EDS) support\n - /SM590000 Favorites support for folders and documents\n - /SM590000 Single and multiple AFP file download as PDF (with AFP2PDF enabled)", - "page_start": 217, - "page_end": 217, - "source_file": "sg246915.pdf" - }, - { - "text": "- 2. Check the path count between your hosts and the IBM Spectrum Virtualize system to ensure that the number of paths is half of the usual supported maximum.\n\nFor more information, see IBM Knowledge Center.\n\n - 3. 
Run the lstargetportfc command to discover the primary host attach WWPNs (virtual WWPNs), as shown in bold in Example 8-5.\n\nExample 8-5 Using the lstargetportfc command to get primary host WWPNs (virtual WWPNs)", - "page_start": 345, - "page_end": 345, - "source_file": "sg247938.pdf" - }, - { - "text": "- /SM590000 IBM System Storage SAN Volume Controller - Software Installation and Configuration Guide, SC23-6628", - "page_start": 811, - "page_end": 811, - "source_file": "sg247938.pdf" - }, - { - "text": "## 2.5.2 Creating an instance on z/OS\n\nIn this section, we explain how to create an instance on the z/OS system. To do so, complete the following steps:\n\n - 1. Copy the control files.\n - 2. Verify the ARS.INI file.\n - 3. Verify the ARS.CFG file.\n - 4. Modify the ARS.CACHE file.\n - 5. Verify the CLI.INI file.\n - 6. Modify the ARSSOCKD procedure.\n - 7. Modify the ARSLOAD procedure.\n\nYou can mount the Content Manager OnDemand installation directory at any mount point other than /usr/lpp/ars/V9R5M0 . You can run at different service levels with this flexibility. For example, a symmetric multiprocessor (SMP) might be used to install into SERVICE/usr/lpp/ars/V9R5M0 . SERVICE/usr/lpp/ars/V9R5M0 might be copied into /usr/lpp/ars/V9R5M0/maint for testing. When testing is complete,\n\n/usr/lpp/ars/V9R5M0/maint might be copied into /usr/lpp/ars/V9R5M0 for production.\n\n## Copying the control files\n\nTo copy the control files, complete the following steps:\n\n - 1. Create a directory ( /etc/ars ) for maintaining the updated configuration files.\n - 2. Create a symbolic link from the installed directory /usr/lpp/ars/V9R5M0/config to the /etc/ars directory, for example, ln -s /etc/ars /usr/lpp/ars/V9R5M0/config .\n - 3. Set the appropriate access mode of 755.\n\n## ARS.INI\n\nThe ARS.INI file contains a section for each instance; each section begins with a header. 
It is created at installation time and, by default, it is configured with information for the archive instance. In this scenario, ARC95037 is the header line definition.\n\nFigure 2-7 shows the content of a sample ARS.INI file.", - "page_start": 61, - "page_end": 61, - "source_file": "sg246915.pdf" - }, - { - "text": "- /SM590000 IBM System Storage Open Software Family SAN Volume Controller: Service Guide , SC26-7542", - "page_start": 811, - "page_end": 811, - "source_file": "sg247938.pdf" - }, - { - "text": "- 5. Ensure that the primary host attach WWPNs (virtual WWPNs) now allow host traffic, as shown in bold in Example 8-7.\n\nExample 8-7 Host attach WWPNs (virtual WWPNs) permitting host traffic", - "page_start": 346, - "page_end": 346, - "source_file": "sg247938.pdf" - }, - { - "text": "The system migration facility is required if you plan to migrate application group index data from the database to the archive. You initialize the system migration facility by completing the following steps:\n\n - 1. Move to the Content Manager OnDemand executable directory by running the following command:\n\n/opt/IBM/ondemand/V9.5/bin\n\n - 2. 
Run the ARSSYSCR program for this instance and use the -I parameter:\n\n```\narssyscr - I ARC95037 -m\n```\n\nAgain, - I ARC95037 is the new Content Manager OnDemand instance.\n\nThe ARSSYSCR program creates the application groups, applications, and folders that are required by the system logging, system load, and system migration facilities.\n\n## 2.5.3 Starting and verifying the new instance\n\nNow that the new instance is set up, you can start it and verify that it is installed correctly.\n\n## Starting the new instance\n\nWhen everything is set up, you can start the new instance by customizing the sample procedure in the SARSINST library to conform to your environment.\n\nFigure 2-11 shows an example of starting the new instance.\n\n```\n//ARS95037 PROC PARML= //* //* Library: USER.PRIVATE.PROCLIB(ARS95037) //* //ARS95037 EXEC PGM=ARSSOCKD,REGION=0M,TIME=NOLIMIT, // PARM=('/VERBOSE ARC95037') //STEPLIB DD DISP=SHR,DSN=ARS.ARSV950.SARSLOAD // DD DISP=SHR,DSN=DSN.DB2V910.SDSNEXIT // DD DISP=SHR,DSN=DSN.DB2V910.SDSNLOAD // DD DISP=SHR,DSN=DSN.DB2V910.SDSNLOD2 //ARSBIN DD PATH='/usr/lpp/ars/V9R5M0/bin' //DSNAOINI DD PATH='/etc/ars/cli937.ini' //SYSPRINT DD SYSOUT=* //SYSOUT DD SYSOUT=*\n```\n\nFigure 2-11 Sample Content Manager OnDemand procedure\n\nAfter this procedure is started, log on to the new instance by using the different port number and create users, application groups, applications, and storage sets with the normal procedures.\n\n## Running arsload to check the new instance and new file system\n\nAfter all of the configuration work is complete and the application group, application, and folder are created, run arsload for installation verification. Figure 2-12 on page 44 shows the procedure that is used to load data to the new instance. 
If you see problems in loading the file (writing an object), check the user permissions.", - "page_start": 66, - "page_end": 66, - "source_file": "sg246915.pdf" - }, - { - "text": "- /SM590000 Organization's security policy", - "page_start": 625, - "page_end": 625, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "news1.pdf", - "query": "What are some example of uses AI by the US departement of energy ?", - "target_page": 1, - "target_passage": "The Department of Energy (DOE) already uses AI in several areas including advanced computing, emergency response, environmental modeling, climate forecasting, and materials research", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Log in\n\n\n\nHome / Arts and Entertainment / New Artificial Intelligence Summit Series Begins With Energy\n\n\n\nARTS AND ENTERTAINMENT\n\n## New Artificial Intelligence Summit Series Begins With Energy\n\n07/31/2024\n\n(AI) continues to transform the United States and the world. To promote and inform rapid advancements in AI and maintain America's global competitiveness, the Special Competitive Studies Project (SCSP), a nonprofit and nonpartisan initiative with a goal of making recommendations to strengthen America's long-term competitiveness in AI, announces the AI+ Summit Series.\n\nThe series kicks off with the topic of energy. The AI + Energy Summit, scheduled for September 26, 2024, in Washington, D.C., will bring together policy makers, energy industry leaders, top government and academic energy researchers, and technologists to address the challenges of AI's energy consumption and develop solutions for a resilient and abundant energy future. 
The event also aims to address the implications of AI and energy for national security and promote partnerships between AI and energy stakeholders.\n\nAI and other emerging technologies can help the United States take the lead in energy areas including maximizing energy efficiencies, discovering new materials, and enabling new forms of power generation. AI also has a role to play in overcoming energy challenges. The Department of Energy (DOE) already uses AI in several areas including advanced computing, emergency response, environmental modeling, climate forecasting, and materials research.\n\nSCSP's recent 'Action Plan for U.S. Leadership in Next-Generation Energy,' raises many issues related to AI and energy, including recommendations for the government to bring America forward. The AI+ Energy Summit will highlight these and other issues, and promote collaboration to solve problems. The stakes are high; if the U.S. falls short on energy, American adversaries could gain the upper hand in AI leadership, according to SCSP experts.\n\nVisit scsp.ai to learn more about the AI+Energy Summit and the SCSP's Next-Generation Energy Action Plan.\n\nArticle Link\n\nhttps://about.newsusa.com/new-artificial-intelligence-summit-series-begins-with…\n\n## RELATED ARTICLES\n\n\n\n\n\nMar 06, 2024\n\nCelebrate St. Patrick's Day with No Booze, Just Pure Irish Fun and Entertainment\n\nMar 06, 2024\n\nExplore Downtown San Pedro with Flair: Ride the Iconic Red Car Trolley for Free\n\nMar 06, 2024\n\nSay Hello to Your Big Break at the Stapleton Library Job Fair in Vocation, Trade, or Civil Service\n\n\n\n\n\nFeb 22, 2024\n\nRetrain Your Emotional Brain: A Natural Alternative to Weight Loss Drugs\n\n© Copyright NewsUSA 2025. 
All Rights Reserved.\n\nFeb 21, 2024\n\nSerial Entrepreneur Teaches Us How to Go the Distance in Business and in Life\n\nNEWSUSA\n\nMar 06, 2024\n\nLocal Artists Collaborate for a Unique Fusion of Groove and Collage\n\nFASHION\n\nBUSINESS\n\nINFOGRAPHIC\n\nENVIRONMENT\n\nHEALTH\n\nMONEY\n\nFOOD\n\nTRAVEL\n\nBRIDAL\n\nRECREATION\n\nTECHNOLOGY\n\nHOME\n\nEDUCATION\n\nARTS & ENTERTAINMENT\n\nAUTO\n\nCHILDREN\n\nFITNESS\n\nHOLIDAY\n\nINSURANCE\n\nLAWN & GARDEN\n\nLISTICLE\n\nNUTRITION\n\nPARENTING\n\nPETS\n\nSEASONAL\n\nSENIORS\n\nSPANISH\n\nTIPS AND HOW TO\n\nENTERTAINMENT\n\nCAREER\n\nCOMMUNITY\n\nFAMILY\n\nTIPS\n\nINTERNET\n\nHUMAN\\_INTEREST\n\nBEAUTY\n\nARTS\n\nREALESTATE\n\nSAFETY\n\nMEDICINE\n\nBOOK\\_REVIEW\n\nRECIPE\n\nAFRICAN\\_AMERICANS\n\nHOW\\_TO\n\nBYLINED\\_COLUMN\n\nCHARITY\n\nSPORTS\n\nHOME\\_IMPROVEMENT\n\nTECH\n\nWELLNESS\n\nARTS AND ENTERTAINMENT\n\nFOOD & DRINK\n\nREAL\\_ESTATE\n\nVETERANS\n\nOUTDOORS\n\nREAL ESTATE\n\nHUMAN INTEREST\n\nMONEY & FINANCE\n\nFASHION & BEAUTY\n\nMONEY AND FINANCE\n\nBOOKS & ENTERTAINMENT\n\nBOOKS\n\nARTS & ENTERTAINMENT\n\nCATEGORIES\n\nRECENT POSTS", - "page_start": 0, - "page_end": 0, - "source_file": "news1.pdf" - }, - { - "text": "Artificial intelligent (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. 
Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks. [175][176][177]\n\nVincent van Gogh in watercolour created by generative AI software\n\n\n\n## Other industry-specific tasks\n\nThere are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated \"AI\" in some offerings or processes. [178] A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management.\n\nAI applications for evacuation and disaster management are growing. AI has been used to investigate if and how people evacuated in large scale and small scale evacuations using historical data from GPS, videos or social media. Further, AI can provide real time information on the real time evacuation conditions. [179][180][181]\n\nIn agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increasing yield. Agronomists use AI to conduct research and development. 
AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water.\n\nArtificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for \"classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights.\" For example, it is used for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. Additionally, it could be used for activities in space, such as space exploration, including the analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia3.pdf" - }, - { - "text": "\n\n## Artificial intelligence\n\nArtificial intelligence ( AI ), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. [1] Such machines may be called AIs.\n\nHigh-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). 
However, many AI applications are not perceived as AI: \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.\" [2][3]\n\nVarious subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics. [a] General intelligence-the ability to complete any task performed by a human on an at least equal level-is among the field's long-term goals. [4] To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. [b] AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields. [5]\n\nArtificial intelligence was founded as an academic discipline in 1956, [6] and the field went through multiple cycles of optimism throughout its history, [7][8] followed by periods of disappointment and loss of funding, known as AI winters. [9][10] Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques. [11] This growth accelerated further after 2017 with the transformer architecture, [12] and by the early 2020s many billions of dollars were being invested in AI and the field experienced rapid ongoing progress in what has become known as the AI boom. 
The emergence of advanced generative AI in the midst of the AI boom and its ability to create and modify content exposed several unintended consequences and harms in the present and raised concerns about the risks of AI and its long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.\n\n## Goals", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Franzen) sued AI companies for using their work to train generative AI. [195][196] Another discussed approach is to envision a separate sui generis system of protection for creations generated by AI to ensure fair attribution and compensation for human authors. [197]\n\n## Dominance by tech giants\n\nThe commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft. [198][199][200] Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace. [201][202]\n\n## Power needs and environmental impacts\n\nIn January 2024, the International Energy Agency (IEA) released Electricity 2024, Analysis and Forecast to 2026 , forecasting electric power use. [203] This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with additional electric power usage equal to electricity used by the whole Japanese nation. [204]\n\nProdigious power consumption by AI is responsible for the growth of fossil fuels use, and might delay closings of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. 
Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search involves the use of 10 times the electrical energy as a Google search. The large firms are in haste to find power sources - from nuclear energy to geothermal to fusion. The tech firms argue that - in the long view - AI will be eventually kinder to the environment, but they need the energy now. AI makes the power grid more efficient and \"intelligent\", will assist in the growth of nuclear power, and track overall carbon emissions, according to technology firms. [205]\n\nA 2024 Goldman Sachs Research Paper, AI Data Centers and the Coming US Power Demand Surge , found \"US power demand (is) likely to experience growth not seen in a generation....\" and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means. [206] Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all. [207]\n\nIn 2024, the Wall Street Journal reported that big AI companies have begun negotiations with the US nuclear power providers to provide electricity to the data centers. In March 2024 Amazon purchased a Pennsylvania nuclear-powered data center for $650 Million (US). [208] Nvidia CEO Jen-Hsun Huang said nuclear power is a good option for the data centers. [209]\n\nIn September 2024, Microsoft announced an agreement with Constellation Energy to re-open the Three Mile Island nuclear power plant to provide Microsoft with 100% of all electric power produced by the plant for 20 years. 
Reopening the plant, which suffered a partial nuclear meltdown of its Unit 2 reactor in 1979, will require Constellation to get through strict regulatory processes which will include extensive safety scrutiny from the US Nuclear Regulatory Commission. If approved (this will be the first ever US re-commissioning of a nuclear plant), over 835 megawatts of power - enough for 800,000 homes - of", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers. [300]\n\nThe UK AI Safety Institute released in 2024 a testing toolset called 'Inspect' for AI safety evaluations available under a MIT open-source licence which is freely available on GitHub and can be improved with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities. [301]\n\n## Regulation\n\nThe regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms. [302] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. [303] According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone. [304][305] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI. 
[306] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and\n\nThe first global AI Safety Summit was held in 2023 with a declaration calling for international cooperation.\n\n\n\nVietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia. [306] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. [306] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI. [307] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years. [308] In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, governments officials and academics. [309] In 2024, the Council of Europe created the first international legally binding treaty on AI, called the \"Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law\". It was adopted by the European Union, the United States, the United Kingdom, and other signatories. [310]\n\nIn a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that \"products and services using AI have more benefits than drawbacks\". [304] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. 
[311] In a 2023 Fox News poll, 35% of Americans thought it \"very important\", and an additional 41% thought it \"somewhat important\", for the federal government to regulate AI, versus 13% responding \"not very important\" and 8% responding \"not at all important\". [312][313]", - "page_start": 20, - "page_end": 20, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Some authors have suggested in practice, that the definition of AI is vague and difficult to define, with contention as to whether classical algorithms should be categorised as AI, [367] with many companies during the early 2020s AI boom using the term as a marketing buzzword, often even if they did \"not actually use AI in a material way\". [368]\n\n## Evaluating approaches to AI\n\nNo established unifying theory or paradigm has guided AI research for most of its history. [aa] The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term \"artificial intelligence\" to mean \"machine learning with neural networks\"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers.\n\n## Symbolic AI and its limits\n\nSymbolic AI (or \"GOFAI\") [370] simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. They were highly successful at \"intelligent\" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: \"A physical symbol system has the necessary and sufficient means of general intelligent action.\" [371]\n\nHowever, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. 
Moravec's paradox is the discovery that high-level \"intelligent\" tasks were easy for AI, but low level \"instinctive\" tasks were extremely difficult. [372] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a \"feel\" for the situation, rather than explicit symbolic knowledge. [373] Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him. [ab][16]\n\nThe issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence, [375][376] in part because subsymbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.\n\n## Neat vs. scruffy\n\n\"Neats\" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). \"Scruffies\" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s, [377] but eventually was seen as irrelevant. Modern AI has elements of both.\n\n## Soft vs. hard computing", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia3.pdf" - }, - { - "text": "## Existential risk\n\nIt has been argued AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist Stephen Hawking stated, \"spell the end of the human race\". 
[265] This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like \"self-awareness\" (or \"sentience\" or \"consciousness\") and becomes a malevolent character. [q] These sci-fi scenarios are misleading in several ways.\n\nFirst, AI does not require human-like sentience to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager). [267] Stuart Russell gives the example of household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that \"you can't fetch the coffee if you're dead.\" [268] In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is \"fundamentally on our side\". [269]\n\nSecond, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like ideologies, law, government, money and the economy are built on language; they exist because there are stories that billions of people believe. The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive. [270]\n\nThe opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI. 
[271] Personalities such as Stephen Hawking, Bill Gates, and Elon Musk, [272] as well as AI pioneers such as Yoshua Bengio, Stuart Russell, Demis Hassabis, and Sam Altman, have expressed concerns about existential risk from AI.\n\nIn May 2023, Geoffrey Hinton announced his resignation from Google in order to be able to \"freely speak out about the risks of AI\" without \"considering how this impacts Google.\" [273] He notably mentioned risks of an AI takeover, [274] and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in use of AI. [275]\n\nIn 2023, many leading AI experts endorsed the joint statement that \"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war\". [276]\n\nSome other researchers were more optimistic. AI pioneer Jürgen Schmidhuber did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making \"human lives longer and healthier and easier.\" [277] While the tools that are now being used to improve lives can also be used by bad actors, \"they can also be used against the bad actors.\" [278][279] Andrew Ng also argued that \"it's a mistake to fall for the doomsday hype on AI-and that regulators who do will only benefit vested interests.\" [280] Yann LeCun \"scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction.\" [281] In the early 2010s, experts argued that the risks are too distant in", - "page_start": 18, - "page_end": 18, - "source_file": "wikipedia3.pdf" - }, - { - "text": "A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision. [o] Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction. 
[248] Even when used in conventional warfare, it is unlikely that they will be unable to reliably choose targets and could potentially kill an innocent person. [248] In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons, however the United States and others disagreed. [249] By 2015, over fifty countries were reported to be researching battlefield robots. [250]\n\nAI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware. [251] All these technologies have been available since 2020 or earlier-AI facial recognition systems are already being used for mass surveillance in China. [252][253]\n\nThere many other ways that AI is expected to help bad actors, some of which can not be foreseen. For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours. [254]\n\n## Technological unemployment\n\nEconomists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment. [255]\n\nIn the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that \"we're in uncharted territory\" with AI. 
[256] A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in longterm unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. [257] Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at \"high risk\" of potential automation, while an OECD report classified only 9% of U.S. jobs as \"high risk\". [p][259] The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies. [255] In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence. [260][261]\n\nUnlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that \"the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution\" is \"worth taking seriously\". [262] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. [263]", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia3.pdf" - }, - { - "text": "\n\n## ISSUE\n\nDecember 2024\n\n## CATEGORIES\n\nTechnology & Cybersecurity\n\nEditor's Picks Finance - Personal Home - Interior\n\n\n\n## The top AI-powered tech trends in 2025\n\n\n\n(NC) As we look ahead to 2025, artificial intelligence (AI) continues to revolutionize our lives. 
From enhancing our daily routines to transforming entire industries, AI's impact is undeniable.\n\nThese five innovations are set to shape our future, offering unprecedented convenience, efficiency and personalization.\n\n## AI-powered computing\n\nAI-powered computing, such as Intel-powered laptops - or AI PC - is at the forefront of technological advancement. But what, exactly, is an AI PC? They're computers that have AI built into their processors - also known as the brain of the computer - which optimizes performance, enhances security and provides a more personalized experience as they learn from your usage patterns. For consumers, this means faster, smarter and more secure computing tailored to your individual needs.\n\n## Smart home automation\n\nSmart home automation has been around for a while, but AI is taking it to the next level. Imagine a home that not only follows your commands, but also anticipates your needs. Enhanced smart home systems can learn your daily routines and adjust settings accordingly, from lighting and temperature to security and entertainment, making your home smarter and more responsive than ever before.\n\n## Health and wellness\n\nThe health-care industry is seeing significant transformation. AI-driven health and wellness applications can monitor vital signs, predict potential health issues, and even provide personalized fitness and nutrition plans. Wearable devices equipped with this technology can offer real-time health insights, helping individuals make informed decisions about their well-being.\n\n## Financial services\n\nAI is also making waves in the financial sector, offering smarter and more secure ways to manage money. 
From AI-driven investment platforms that provide personalized financial advice to fraud detection systems that protect against cyber threats, AI can analyze vast amounts of data to identify trends and make more informed financial decisions.\n\n## Enhanced education\n\nIn education, enhanced learning tools provide personalized learning experiences that adapt to each student's strengths and weaknesses. This technology can offer real-time feedback, helping students improve their skills more effectively. Additionally, AI can assist educators by automating administrative tasks and providing insights into student performance, allowing for more focused and effective teaching.\n\nLearn more at intel.com/aipc.\n\nwww.newscanada.com\n\nWord Count: 346\n\n\n\n\n\n\n\n\n\nRADIO\n\n\n\n\n\n\n\n\n\nEN", - "page_start": 0, - "page_end": 0, - "source_file": "news4.pdf" - }, - { - "text": "show that even a computer capable of perfectly simulating human behavior would not have a mind. [387]\n\n## AI welfare and rights\n\nIt is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree. [388] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals. [389][390] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights. [389] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society. [391]\n\nIn 2017, the European Union considered granting \"electronic personhood\" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities. 
[392] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part to society on their own. [393][394]\n\nProgress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited. [390][389]\n\n## Future\n\n## Superintelligence and the singularity\n\nA superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. [379] If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an \"intelligence explosion\" and Vernor Vinge called a \"singularity\". [395]\n\nHowever, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do. [396]\n\n## Transhumanism\n\nRobot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger. 
[397]", - "page_start": 26, - "page_end": 26, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HNI_2003.pdf", - "query": "How can I contact Investor Relations of HON industries through email ?", - "target_page": 63, - "target_passage": "E-mail: investorrelations@honi.com", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "## HON INDUSTRIES Inc. and SUBSIDIARIES", - "page_start": 56, - "page_end": 56, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## HON INDUSTRIES 2003\n\n## FINANCIAL HIGHLIGHTS", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## I N V E S T O R I N F O R M A T I O N\n\n## SCHEDULE OF QUARTERLY RESULTS\n\nThe Company operates on a fiscal year ending on the Saturday nearest December 31. Quarterly results are typically announced within 25 days after the end of each quarter, and audited results are typically announced within 40 days after year-end.\n\n## FISCAL 2004 QUARTER-END DATES\n\n1st Quarter: Saturday, April 3\n\n2nd Quarter: Saturday, July 3\n\n3rd Quarter: Saturday, October 2\n\n4th Quarter: Saturday, January 1\n\n## ANNUAL MEETING\n\nThe Company's annual shareholders' meeting will be held at 10:30 a.m. on May 4, 2004, at the Holiday Inn, Highways 61 & 38 North, Muscatine, Iowa. Shareholders and other interested investors are encouraged to attend the meeting.\n\n## I NVESTOR RELATIONS\n\nSend inquiries to:\n\nInvestor Relations\n\nHON INDUSTRIES Inc.\n\n414 East Third Street\n\nMuscatine, IA 52761\n\nTelephone: 563.264.7400\n\nFax: 563.264.7655\n\nE-mail: investorrelations@honi.com\n\n## CORPORATE HEADQUARTERS\n\nHON INDUSTRIES Inc.\n\n414 East Third Street\n\nP.O. 
Box 1109\n\nMuscatine, IA 52761-0071\n\nTelephone: 563.264.7400\n\nFax: 563.264.7217\n\nWebsite: www.honi.com\n\n## I NDEPENDENT PUBLIC ACCOUNTANTS\n\nPricewaterhouseCoopers LLP\n\nOne North Wacker Drive\n\nChicago, IL 60606\n\n## FORWARD-LOOKING STATEMENTS\n\nStatements in this report that are not strictly historical, including statements as to plans, objectives, and future financial performance, are 'forward-looking' statements that are made pursuant to the safe harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements involve known and unknown risks, which may cause the Company's actual results in the future to differ materially from expected results. These risks include, among others:\n\n - · competition within the office furniture and fireplace industries, including competition from imported products and competitive pricing;\n - · increases in the cost of raw materials, including steel, which is the Company's largest raw material category;\n - · increases in the cost of health care benefits provided by the Company;\n - · reduced demand for the Company's storage products caused by changes in office technology; including the change from paper record storage to electronic record storage;\n - · the effects of economic conditions, on demand for office furniture, customer insolvencies and related bad debts and claims against the Company that it received preferential payments;\n - · changes in demand and order patterns from the Company's customers, particularly its top ten customers, which represented approximately 36% of net sales in 2003;\n - · issues associated with acquisitions and integration of acquisitions;\n - · the ability of the Company to realize cost savings and productivity improvements from its cost containment and business simplification initiatives;\n - · the ability of the Company to realize financial benefits from investments in new products;\n - · the ability of the Company's distributors and dealers to 
successfully market and sell the Company's products;\n - · the availability and cost of capital to finance planned growth; and\n - · other risks, uncertainties, and factors described from time to time in the Company's filings with the Securities and Exchange Commission.\n\nWe caution the reader that the above list of factors may not be exhaustive. The Company does not assume any obligation to update any forward-looking statement, whether as a result of new information, future events or otherwise.\n\n## COMMON STOCK", - "page_start": 62, - "page_end": 62, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## FOR FURTHER INFORMATION, PLEASE CONTACT\n\n## Investor Relations\n\n## Nissan Motor Co., Ltd.\n\nGlobal Communications, CSR and IR Division 17-1, Ginza 6-chome, Chuo-ku Tokyo 104-8023, Japan phone: +81(0)3-5565-2334 fax: +81(0)3-3546-2669 e-mail: nissan-ir@mail.nissan.co.jp\n\n## Corporate Information Website\n\nhttp://www.nissan-global.com/\n\n## Investor Relations Website\n\nhttp://www.nissan-global.com/EN/IR/", - "page_start": 111, - "page_end": 111, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "The Company participates in emerging technologies by investing in entities that invest in start-up companies. This includes indirect participation through capital venture funds of South Atlantic Venture Fund III, South Atlantic Private Equity IV, Dolphin Communications Parallel Fund, Dolphin Communications Fund II and the Burton Partnership. The Company also participates by direct investment in privately held companies. Currently the Company's only direct investment is in NTC Communications, a provider of voice, video and data connections to off campus housing properties at universities and colleges. 
For those companies that eventually make public offerings of their securities, it\n\n\n\n■", - "page_start": 52, - "page_end": 52, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## COMMON STOCK\n\nHON INDUSTRIES common stock trades on the New York Stock Exchange under the symbol: HNI. Stock price quotations can be found in major daily newspapers and The Wall Street Journal .\n\n## TRANSFER AGENT\n\nShareholders may report a change of address or make inquiries by writing or calling:\n\nComputershare Investor Services, LLC\n\n - 2 North LaSalle Street\n\nChicago, IL 60602\n\nTelephone: 312.588.4991", - "page_start": 62, - "page_end": 62, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "In 2003, the Company received distributions from its equity investments totaling $0.5 million in cash and invested $0.7 million in two equity investments, Dolphin Communications Parallel Fund, LP and Dolphin Communications Fund II, LP. These two investments recorded losses of approximately $0.4 million for the 2003 year. The Company recorded a loss from the Virginia Independent Telephone Alliance investment of $19 thousand, for 2003. The Company recorded a gain from the ValleyNet partnership of $84 thousand and received distributions of $84 thousand. Other equity investments lost an additional $0.4 million for 2003.\n\nThe Company was committed to invest an additional $1.8 million at December 31, 2003 in various equity method investees pursuant to capital calls from the fund managers. It is not practical to estimate the fair value of the other investments due to their limited market and restrictive nature of their transferability.\n\nThe Company's ownership interests in Virginia Independent Telephone Alliance and ValleyNet are approximately 22% and 20%, respectively. The Company purchases services from Virginia Independent Telephone Alliance and ValleyNet at rates comparable with other customers. The Company's ownership in NTC Communications is approximately 18%. 
Other equity method investees are investment limited partnerships which are approximately 2% owned each.\n\n\n\n■", - "page_start": 26, - "page_end": 26, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "- - IP of the email server (SMTP Server) and Port\n - - Call Home email address\n - - Email of one or more users set to receive one or more email notifications", - "page_start": 201, - "page_end": 201, - "source_file": "sg247938.pdf" - }, - { - "text": "## INVESTOR INFORMATION :\n\n## A NNUAL M EETING\n\nThe annual meeting of shareholders will be held on Thursday, April 24, 2003, in Corning, NY. A formal notice of the meeting together with a proxy statement will be mailed to shareholders on or about March 12, 2003. The proxy statement can also be accessed electronically through the Investor Relations category of the Corning home page on the Internet at www.corning.com. A summary report of the proceedings at the annual meeting will be available without charge upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831.\n\n## A DDITIONAL INFORMATION\n\n'Safe Harbor' Statement under the Private Securities Litigation Reform Act of 1995 facts or information are forward-looking statements. These forward-looking statements involve risks and uncertainties that may cause the outcome to be materially different. 
Such risks and uncertainties include, but are not limited to:\n\n - -global economic and political conditions,\n - -currency fluctuations,\n - -product demand and industry capacity,\n - -competitive products and pricing,\n\n-\n\nsufficiency of manufacturing capacity and efficiencies,\n\n - -cost reductions,\n - -availability and costs of critical materials,\n - -new product development and commercialization,\n - -attracting and retaining key personnel,\n - -order activity and demand from major customers,\n - -fluctuations in capital spending by customers in the telecommunications industry and other business segments,\n - -financial condition of customers,\n\nA copy of Corning's 2002 Annual Report on Form 10-K filed with the Securities and Exchange Commission is available upon written request to Ms. Denise A. Hauselt, Secretary and Assistant General Counsel, Corning Incorporated, HQ-E2-10, Corning, NY 14831. The Annual Report on Form 10-K can also be accessed electronically through the Investor Relations category of the home page on the Internet at: www.corning.com\n\nINVESTOR INFORMATION\n\nInvestment analysts who need additional information may contact Mr. Kenneth C. Sofio, Manager of Investor Relations, Corning Incorporated, HQ-E2-25, Corning, NY 14831; Telephone 607.974.9000\n\n## C OMMON S TOCK\n\n - -changes in the mix of sales between premium and non-premium products,\n - -facility expansions and new plant start-up costs,\n - -adverse litigation or regulatory developments, including future or pending tax legislation,\n - -adequacy and availability of insurance,\n - -capital resource and cash flow activities,\n - -capital spending,\n - -equity company activities,\n - -interest costs,\n - -acquisition and divestiture activity,\n - -the rate of technology change,\n - -the ability to enforce patents,\n\nCorning Incorporated common stock is listed on the New York Stock Exchange and the SWX Swiss Exchange. 
In addition, it is traded on the Boston, Midwest, Pacific and Philadelphia stock exchanges. Common stock options are traded on the Chicago Board Options Exchange. The abbreviated ticker symbol for Corning Incorporated is 'GLW.'\n\nTRANSFER A GENT AND R EGISTRAR Computershare Investor Services LLC P.O. Box A-3504 Chicago, IL 60690-3504 Telephone: 800.255.0461 Website: www.computershare.com\n\nC HANGE OF A DDRESS\n\nReport change of address to Computershare Investor Services at the above address.\n\nINDEPENDENT A CCOUNTANTS PricewaterhouseCoopers LLP 1301 Avenue of the Americas New York, NY 10019\n\nCorning Incorporated\n\nwww.corning.com\n\n - -product performance issues,\n - -stock price fluctuations, and\n - -other risks detailed in Corning's SEC filings.\n\nNeither this report nor any statement contained herein is furnished in connection with any of\n\nCorning is an equal opportunity employer. Printed in USA\n\n© Corning Incorporated 2003\n\n", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_GLW_2002.pdf" - }, - { - "text": "## SHAREHOLDER INFORMATION\n\nApplied Industrial Technologies, Inc. common stock is listed on the New York Stock Exchange under the symbol AIT. The Company is identified in most financial listings as 'AppliedIndlTch.'\n\n## RESEARCH ON APPLIED INDUSTRIAL TECHNOLOGIES IS AVAILABLE THROUGH:\n\n## BB&T CAPITAL MARKETS\n\nHolden Lewis, 703/471-3894\n\n## CJS SECURITIES\n\nJonathan Tanwanteng, 914/287-7600\n\n## CLEVELAND RESEARCH COMPANY\n\nAdam Uhlman, 216/649-7241\n\n## KEYBANC CAPITAL MARKETS\n\nJeffrey D. Hammond, 216/689-0236\n\n## SIDOTI & CO.\n\nJoseph Mondillo, 212/894-3339\n\nGREAT LAKES REVIEW - Division of\n\nWellington Shields & Co.\n\nElliott Schlang, 216/767-1340\n\n## STEPHENS INC.\n\nMatt Duncan, 501/377-3723\n\n## WELLS FARGO SECURITIES, LLC\n\nAllison Poliniak-Cusic, 212/214-5062\n\n## WUNDERLICH SECURITIES\n\nBrent D. 
Rakers, 901/251-2236\n\n## SHAREHOLDER INQUIRIES\n\nRequests to transfer Applied Industrial Technologies, Inc. shares and all correspondence regarding address change information, duplicate mailings, missing certificates, failure to receive dividend checks in a timely manner or to participate in the Company's direct stock purchase program should be directed to the Company's transfer agent and registrar:\n\n## COMPUTERSHARE TRUST COMPANY, N.A.\n\n250 Royall Street Canton, MA 02021 800/988-5291\n\n## ANNUAL REPORT ON FORM 10-K\n\nThe Applied Industrial Technologies, Inc. Annual Report on Form 10-K for the fiscal year ended June 30, 2012, including the financial statements and schedules thereto, is available at our website at www.Applied.com. It is also available without charge upon written request to the Vice President - Chief Financial Officer & Treasurer at the address shown.\n\n## ANNUAL MEETING\n\nThe Annual Meeting of Shareholders will be held at 10:00 a.m., Tuesday, October 23, 2012, at the Corporate Headquarters of Applied Industrial Technologies, 1 Applied Plaza, East 36th and Euclid Avenue, Cleveland, Ohio 44115.\n\n## COMPARISON OF FIVE-YEAR CUMULATIVE TOTAL RETURN\n\nApplied Industrial Technologies, Inc., Standard & Poor's 500, and Peer Group (Performance Results from 7/1/2007 through 6/30/2012)\n\n\n\nAssumes $100 invested at the close of trading 6/30/07 in Applied Industrial Technologies, Inc. common stock, Standard & Poor's 500, and Peer Group.\n\nCumulative total return assumes reinvestment of dividends.\n\nThe returns of the companies in the Peer Group are weighted based on the companies' relative stock market capitalization.\n\nPeer Group companies selected on a line-of-business basis include: DXP Enterprises, Inc.; Fastenal Company; Genuine Parts Company; W. W. 
Grainger, Inc.; Kaman Corporation; Lawson Products, Inc.; MSC Industrial Direct Co., Inc.; and WESCO International, Inc.\n\n| | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 |\n|---------------------------------------|---------|--------|--------|--------|---------|---------|\n| Applied Industrial Technologies, Inc. | $100.00 | $83.63 | $70.22 | $92.62 | $133.17 | $141.07 |\n| Standard & Poor's 500 | 100.00 | 86.88 | 64.11 | 73.36 | 95.88 | 101.10 |\n| Peer Group | 100.00 | 86.96 | 74.77 | 100.34 | 148.47 | 170.81 |\n\nSource: Value Line Publishing LLC\n\n## INVESTOR RELATIONS INQUIRIES SHOULD BE DIRECTED TO:\n\n## MARK O. EISELE\n\nVice President - Chief Financial Officer\n\n - & Treasurer\n\nApplied Industrial Technologies\n\n - 1 Applied Plaza\n\nCleveland, OH 44115-5014\n\nTelephone: 216/426-4000, Fax: 216/426-4845", - "page_start": 46, - "page_end": 46, - "source_file": "NYSE_AIT_2012.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HNI_2003.pdf", - "query": "What explains the decrease in net sales of HON industries in 2002 ?", - "target_page": 34, - "target_passage": "The decrease in 2002 was due to the decline in the office furniture market due to unstable economic conditions and the deletion of less profitable product lines in the hearth products segment", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## HON INDUSTRIES Inc. and SUBSIDIARIES", - "page_start": 56, - "page_end": 56, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## OPERATING INCOME\n\nOperating income increased 5% in 2003 and 16% in 2002, respectively. The increase in 2003 is due to the additional week, strong sales volume in the hearth segment, and improved gross margins in both segments, offset by increased restructuring charges due to additional plant closures and consolidations, increased investment in brand building and selling initiatives, and increased freight costs. 
The increase in 2002 was due to a $24 million restructuring charge in 2001 compared to a $3 million restructuring charge in 2002 and goodwill and indefinitelived intangibles amortization of $9 million incurred in 2001 that is not included in 2002 due to a change in accounting standards.\n\n## NET INCOME\n\nNet income increased 7% in 2003 and 23% in 2002, respectively. Net income in 2003 was favorably impacted by increased interest income due to increased investments and decreased interest expense due to reduction in debt. Net income in 2002 was favorably impacted by a decrease in interest expense and a decrease in the effective tax rate to 35% in 2002 from 36% in 2001 mainly due to tax benefits associated with various federal and state tax credits. The Company anticipates that its tax rate will increase to 36% in 2004 due to increased state taxes and a reduced benefit from federal and state tax credits. Net income per diluted share increased by 8% to $1.68 in 2003 and by 23% to $1.55 in 2002, respectively. Due to the appreciation in the Company's stock price, outstanding options had a dilutive impact of $0.01 per share in 2003.", - "page_start": 34, - "page_end": 34, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## HON INDUSTRIES 2003\n\n## FINANCIAL HIGHLIGHTS", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## OFFICE FURNITURE\n\n## Liquidity and Capital Resources\n\nOffice furniture comprised 74% of consolidated net sales for 2003 and 76% of consolidated net sales for 2002 and 2001. Net sales for office furniture increased 2% in 2003 and decreased 6% in 2002. The increase in 2003 is due to the increased week from the Company's 52/53-week fiscal year. The office furniture industry has experienced an unprecedented three-year decline in shipments. The Business and Institutional Furniture Manufacturer's Association (BIFMA) reported 2003 shipments down over 5% and 2002 shipments down 19%. 
The Company's estimated share of the market based on reported office furniture shipments increased to 15.3% in 2003 compared to 14.4% in 2002 and 12.4% in 2001. This increase was achieved by providing strong brands, innovative products and services, and greater value to end-users.\n\nOperating profit as a percent of sales was 10.0% in 2003, 10.2% in 2002, and 8.2% in 2001. Included in 2003 were $15.2 million of net pretax charges related to the closure of two office furniture facilities, which impacted operating margins by 1.1 percentage points. Included in 2002 were $3.0 million of restructuring charges, which impacted operating margins by 0.2 percentage points, and 2001 included $22.5 million of restructuring charges, which impacted operating margins by 1.7 percentage points. The increase in operating margins is due to increased gross profit from the benefits of restructuring initiatives, rapid continuous improvement programs, and increased price realization, offset by additional investments in brand building and selling initiatives and increased freight expense.\n\n## HEARTH PRODUCTS\n\nHearth products sales increased 9% in 2003 and decreased 3% in 2002, respectively. The growth in 2003 was attributable to strong housing starts, growth in market share in both the new construction and retail channels, strengthening alliances with key distributors and dealers, as well as focused new product introductions. The decrease in 2002 was mainly due to pruning out less profitable product lines.\n\nOperating profit as a percent of sales in 2003 was 12.1% compared to 10.8% and 9.2% in 2002 and 2001, respectively. The improved profitability in 2003 was the result of leveraging fixed costs over a higher sales volume and increased sales through company-owned distribution offset by increased freight costs and higher labor costs from increased use of overtime and temporary labor to meet record levels of demand. 
The increase in 2002 was mainly due to discontinuance of goodwill and indefinite-lived intangible amortization of approximately $7 million due to the adoption of SFAS 142.\n\nDuring 2003, cash flow from operations was $141.3 million, which along with funds from stock option exercises under employee stock plans, provided the funds necessary to meet working capital needs, invest in capital improvements, repay long-term debt, repurchase common stock, and pay increased dividends.\n\nCash, cash equivalents, and short-term investments totaled $204.2 million at the end of 2003 compared to $155.5 million at the end of 2002 and $78.8 million at the end of 2001. The Company used approximately $80 million of cash to acquire Paoli Inc. on January 5, 2004. These remaining funds, coupled with cash from future operations and additional long-term debt, if needed, are expected to be adequate to finance operations, planned improvements, and internal growth. The Company is not aware of any known trends or demands, commitments, events, or uncertainties that are reasonably likely to result in its liquidity increasing or decreasing in any material way.", - "page_start": 35, - "page_end": 35, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## GROSS PROFIT\n\nGross profit as a percent of net sales improved 1.0 percentage point in 2003 as compared to fiscal 2002 and 1.3 percentage points in 2002 as compared to 2001. The improvement in both periods was a result of the continued net benefits of rapid continuous improvement, restructuring initiatives, business simplification, new products, and improved price realization. Included in 2003 gross profit was $6.7 million of accelerated depreciation, which reduced gross profits 0.4 percentage points. 
The Company expects to mitigate any future increases in material costs through various initiatives, including alternative materials and suppliers and its rapid continuous improvement program.\n\n## SELLING AND ADMINISTRATIVE EXPENSES\n\nSelling and administrative expenses, excluding restructuring charges, increased 5.8% in 2003 and decreased 2.2% in 2002. The increase in 2003 was due to additional investment of approximately $14 million in brand building and selling initiatives, and increased freight costs of $7 million due to rate increases, fuel surcharges, and volume. The decrease in 2002 was due to no longer amortizing goodwill and certain other intangible assets of approximately $9 million and lower overall", - "page_start": 33, - "page_end": 33, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## Net Sales\n\n## (Dollars in Billions)\n\n\n\n## Net Income\n\n(Dollars in Millions)\n\n\n\n - * The goodwill impairment charge in fiscal 2009 reduced net income by $23.0 million.\n\n## Net Income Per Share\n\n(Dollars)\n\n\n\n - * The goodwill impairment charge in fiscal 2009 reduced net income per share by $0.54.", - "page_start": 43, - "page_end": 43, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "Income tax expense in 2003 totaled $1.9 million, compared with $1.4 million in 2002 and $1.8 million in 2001. The effective tax rates for 2003, 2002 and 2001 were 27.8 percent, 25.7 percent and 29.7 percent, respectively. Benefits from tax incentives for exports and R&D expenditures totaled $350,000 in 2003, $408,000 in 2002 and $404,000 in 2001. The higher effective tax rate in 2003 is primarily a result of benefits from tax incentives for exports and R&D expenditures being a lesser percentage of taxable income in 2003 than in 2002. 
The lower effective tax rate in 2002 is primarily a result of benefits from tax incentives for exports and R&D expenditures being a larger percentage of taxable income in 2002 than in 2001 and the utilization of capital loss carryforwards in 2002.\n\nThe Company believes that 2004 revenues will be higher than 2003 revenues and that the cost of goods sold, gross profit, operating income and income from continuing operations will each be higher in 2004 than in 2003. The Company further believes that it will have continuing volume growth in most of its product lines in 2004, complemented by the introduction of new products, and that it will achieve a double-digit annual rate of growth in earnings per share from continuing operations for the next several years.\n\n## DISCONTINUED OPERATIONS\n\nDuring 1997, the Company sold all of its natural gas operations. The financial statements presented herein reflect the Company's natural gas operations as discontinued operations for all periods presented. The financial statements also reflect an after-tax gain on disposal of these discontinued operations of $ .2 million, or $ .10 per basic and $ .09 per diluted share, in both 2003 and 2002, and $5.5 million, or $2.70 per basic and $2.42 per diluted share, in 2001.\n\nIn addition to the initial consideration received in 1997 upon the sale of the natural gas operations, certain annual contingent deferred payments of up to $250,000 per year were to be paid to the Company over an eight-year period which began in 1999, with the amount paid each year to be dependent upon revenues received by the purchaser from certain gas transportation contracts. The Company received deferred payments of $250,000 each, before tax, from the purchaser in April 2003, 2002 and 2001 which are reflected in each year as a gain from discontinued operations of $165,000, net of tax. 
The 2001 gain also includes a $5,327,000 non-cash gain from reversal of a reserve established when the Company disposed of its natural gas operations in 1997. This reversal in the third quarter of 2001 followed the resolution of an outstanding contingency related to the sale of those assets.", - "page_start": 26, - "page_end": 26, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "## YEAR ENDED JUNE 30, 2012 vs. 2011\n\nThe following table is included to aid in review of Applied's statements of consolidated income.\n\n| | Year Ended June 30, As a % of Net Sales | Year Ended June 30, As a % of Net Sales | Change in $'s Versus Prior Period |\n|----------------------------------------|------------------------------------------------------|------------------------------------------------------|---------------------------------------|\n| | 2012 | 2011 | % Increase |\n| Net Sales | 100.0 % | 100.0 % | 7.3 % |\n| Gross Profit | 27.6 % | 27.7 % | 6.7 % |\n| Selling, Distribution & Administrative | 20.5 % | 20.9 % | 5.1 % |\n| Operating Income | 7.1 % | 6.8 % | 11.7 % |\n| Net Income | 4.6 % | 4.4 % | 12.4 % |\n\nNet sales in fiscal 2012 were $2.4 billion, which was $162.6 million or 7.3% above the prior year, driven by improvements in the industrial economy as well as a continued focus on profitable sales growth. Incremental net sales from companies acquired since the prior year period contributed approximately $16.6 million or 0.7%. Currency translation decreased fiscal year sales by approximately $1.8 million or 0.1%. In local currency, net sales from our Canadian operations were up 12.2% from fiscal 2011, including 2.8% from acquisitions. In local currency, net sales from our Mexican operations were up 25.9%. 
The number of selling days in fiscal 2012 was the same as in fiscal 2011.\n\nNet sales of our Service Center Based Distribution segment increased $133.8 million, or 7.6%, compared to fiscal year 2011 led by improvements in the industrial economy as well as a continued focus on profitable sales growth, with acquisitions adding $16.6 million or 0.9%. Net sales of our Fluid Power Businesses segment increased $28.8 million or 6.5%, also driven by improvements in the industrial economy as well as a continued focus on profitable sales growth.\n\nThe sales product mix for fiscal 2012 was 70.8% industrial products and 29.2% fluid power products compared to 70.5% industrial and 29.5% fluid power in the prior year.\n\nAt June 30, 2012, we had a total of 476 operating facilities in the U.S., Canada and Mexico versus 474 at June 30, 2011.\n\n1", - "page_start": 6, - "page_end": 6, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "The Company's policy requires measurement of the allowance for an impaired collateral dependent loan based on the fair value of the collateral. Other loan impairments are measured based on the present value of expected future cash flows or the loan's observable market price.\n\n## Results of Operations\n\nPerformance Summary . Net earnings for 2002 were $34.0 million, an increase of $4.6 million, or 15.7%, over net earnings for 2001 of $29.4 million. Net earnings for 2000 were $28.3 million. The increase in net earnings for 2002 over 2001 was primarily attributable to an increase in net interest income resulting primarily from growth in average earning assets and an improved net interest margin. 
The increase in net earnings for 2001 over 2000 was primarily attributable to an increase in net interest income resulting primarily from the growth in average earning assets and an increase in noninterest income resulting primarily from increases in service fees on deposit accounts and real estate mortgage fees.\n\nOn a basic net earnings per share basis, net earnings were $2.75 for 2002 as compared to $2.38 for 2001 and $2.28 for 2000. Return on average assets was 1.78% for 2002 as compared to 1.62% for 2001 and 1.67% for 2000. Return on average equity was 15.13% for 2002 as compared to 14.35% for 2001 and 15.39% for 2000.\n\nAffecting our 2002 net earnings and basic and diluted earnings per share is the implementation of Statement of Financial Accounting Standards No. 141, \"Business Combinations\" (\"SFAS No. 141\") and Statement of Financial Accounting Standards No. 142, \"Goodwill and Other Intangible Assets\" (\"SFAS No. 142\"). SFAS No. 141 requires that all business combinations initiated after June 30, 2001 be accounted for under the purchase method and addresses the initial recognition and measurement of goodwill and other intangible assets acquired in a business combination. SFAS No. 142 addresses the initial recognition and measurement of intangible assets acquired outside of a business combination and the accounting for goodwill and other intangible assets subsequent to their acquisition. SFAS No. 142 provides that intangible assets with finite useful lives be amortized and that goodwill and intangible assets with indefinite lives not be amortized, but rather be tested at least annually for impairment. SFAS No. 142 was effective January 1, 2002 for calendar year companies; however, acquired goodwill and intangible assets recorded in the acquisition of City Bancshares, Inc. 
closed subsequent to June 30, 2001 were subject immediately to its provisions.\n\nOn January 1, 2002, goodwill amounting to $23,765,896 was not subject to further amortization as a result of SFAS No. 142. The Company conducted its initial impairment test in 2002, with no reduction of recorded goodwill resulting from the test. A reconciliation adjusting comparative net earnings and earnings per share for the years ended December 31, 2001 and 2000, to show the effect of no longer amortizing the Company's goodwill, follows:", - "page_start": 43, - "page_end": 43, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## YEAR ENDED JUNE 30, 2011 vs. 2010\n\nThe following table is included to aid in review of Applied's statements of consolidated income.\n\n| | Year Ended June 30, As a % of Net Sales | Year Ended June 30, As a % of Net Sales | Change in $'s Versus Prior Period |\n|----------------------------------------|------------------------------------------------------|------------------------------------------------------|---------------------------------------|\n| | 2011 | 2010 | % Increase |\n| Net Sales | 100.0 % | 100.0 % | 16.9 % |\n| Gross Profit | 27.7 % | 27.2 % | 18.9 % |\n| Selling, Distribution & Administrative | 20.9 % | 21.4 % | 14.0 % |\n| Operating Income | 6.8 % | 5.8 % | 37.0 % |\n| Net Income | 4.4 % | 3.5 % | 46.8 % |\n\nNet sales in fiscal 2011 were $2.2 billion, which was $319.6 million or 16.9% above the prior year driven by improvements in the industrial economy. Incremental net sales from companies acquired in fiscal 2011 contributed approximately $40.8 million or 1.8%. Currency translation increased fiscal year 2012 sales by approximately $16.3 million or 0.7%. In local currency, net sales from our Canadian operations were up 23.1% from fiscal 2010, including 8.4% from acquisitions. In local currency, net sales from our Mexican operations were up 17.9%. 
The number of selling days in fiscal 2011 was the same as in fiscal 2010.\n\nNet sales of our Service Center Based Distribution segment increased $234.3 million, or 15.2%, compared to fiscal year 2010 led by improvements in the industrial economy, with acquisitions adding $40.8 million or 2.7%. Net sales of our Fluid Power Businesses segment increased $85.4 million or 23.9%, driven by improvements in the industrial economy.\n\nThe sales product mix for fiscal 2011 was 70.5% industrial products and 29.5% fluid power products compared to 71.7% industrial and 28.3% fluid power in the prior year.\n\nAt June 30, 2011, we had a total of 474 operating facilities in the U.S., Canada and Mexico versus 455 at June 30, 2010. The increase in operating facilities represented 11 new locations due to acquisitions, the opening of 2 new locations, the impact of redefining certain shop operations which added 11 locations, and the merger of 5 locations with other locations.\n\nOur gross profit margin increased to 27.7% in fiscal 2011 from 27.2% in fiscal 2010. LIFO benefits had a negative 1.0% impact on gross profit margin in fiscal 2011 versus fiscal 2010. LIFO benefits recorded during fiscal year 2011 totaled $5.3 million which provided an overall benefit in our gross profit percent of 0.2%. This compares to a LIFO benefit of $23.5 million in fiscal 2010 which added 1.2% to gross profit. Our focused efforts on\n\n3\n\nselling products at a higher gross profit margin led to an approximate 0.9% improvement in gross profit margins. 
Other positive impacts on margins were an increase of approximately 0.4% from businesses acquired during the fiscal year and an increase of approximately 0.2% due to lower scrap expense.", - "page_start": 8, - "page_end": 8, - "source_file": "NYSE_AIT_2012.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_ATRI_2003.pdf", - "query": "What operations were discontinued in 1997 by Atrion Corp ?", - "target_page": 17, - "target_passage": "During 1997, the Company sold all of its natural gas operations. ", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## C o r p o r a t e O f f i c e :\n\nAtrion Corporation One Allentown Parkway Allen, Texas 75002 (972) 390-9800\n\nwww.atrioncorp.com\n\n## R e g i s t r a r a n d T r a n s f e r A g e n t\n\nAmerican Stock Transfer and Trust Company 59 Maiden Lane New York, New York 10007\n\n## F o r m 1 0 - K\n\nA copy of the Company's 2003 Annual Report on Form 10-K, as filed with the Securities and Exchange Commission, may be obtained by any stockholder without charge by written request to: Corporate Secretary Atrion Corporation One Allentown Parkway Allen, Texas 75002\n\n## S t o c k I n f o r m a t i o n\n\nThe Company's common stock is traded on The Nasdaq Stock Market (Symbol: ATRI). As of March 8, 2004, there were approximately 1,200 stockholders, including beneficial owners holding shares in nominee or 'street' name. 
The table below sets forth the high and low closing prices on The Nasdaq Stock Market and the quarterly dividends per share declared by the Company for each quarter of 2002 and 2003.\n\n| 2002 Quarter Ended | | High | Low | Dividends |\n|----------------------|----|--------|-------|-------------|\n| March 31 | $ | 38.14 | 26.91 | $ - |\n| June 30 | | 32.51 | 26.82 | - |\n| September 30 | | 28.09 | 18.31 | - |\n| December 31 | | 23.90 | 17.31 | - |\n| 2003 Quarter Ended | | High | Low | Dividends |\n| March 31 | $ | 22.85 | 17.95 | $ - |\n| June 30 | | 30.80 | 22.75 | - |\n| September 30 | | 45.20 | 26.80 | .12 |\n| December 31 | | 50.00 | 40.00 | .12 |\n\nThe Company paid no cash dividends on its common stock during 2002. In the third quarter of 2003 the Company began paying quarterly cash dividends and presently plans to pay quarterly cash dividends in the future.\n\nMPS and LacriCATH are registered trademarks of Atrion Corporation", - "page_start": 30, - "page_end": 30, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "and operate MGM Grand Australia. This transaction closed in July 2004 with net proceeds to the Company of $136 million.\n\nThe results of the Golden Nugget Subsidiaries, Online and MGM Grand Australia are classified as discontinued operations in the accompanying consolidated statements of income for all periods presented. Net revenues of discontinued operations were $45 million, $231 million and $222 million, respectively, for the years ended December 31, 2004, 2003 and 2002. Included in income from discontinued operations is an allocation of interest expense based on the ratio of the net assets of the discontinued operations to the total consolidated net assets and debt of the Company. Interest allocated to discontinued operations was $2 million, $9 million and $9 million for the years ended December 31, 2004, 2003 and 2002, respectively. 
Included in discontinued operations for the year ended December 31, 2003 is a loss on disposal of Online of $7 million relating primarily to unrecoverable costs of computer hardware and software. Included in the tax benefit from discontinued operations for the year ended December 31, 2003 is $2 million of previously unrecognized tax benefits relating to prior year operating losses of Online. Included in discontinued operations for the year ended December 31, 2004 is a gain on the sale of the Golden Nugget Subsidiaries of $8 million and a gain on sale of the MGM Grand Australia Subsidiaries of $74 million.\n\n## Notes to Consolidated Financial Statements\n\nThe following table summarizes the assets and liabilities of discontinued operations (the Golden Nugget Subsidiaries and Online) as of December 31, 2003, included as assets and liabilities held for sale in the accompanying consolidated balance sheet:", - "page_start": 62, - "page_end": 62, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## Discontinued Operations\n\nA discontinued operation is a component of our business that has operations and cash flows that are clearly distinguished from the rest of Rogers and:\n\n - GLYPH<129> represents a separate major line of business\n - GLYPH<129> is part of a single coordinated plan to dispose of a separate major line of business, or\n - GLYPH<129> is a subsidiary we have acquired with the intention to re-sell.\n\nWhen we classify a component as a discontinued operation, we restate our comparative income and comprehensive income as though the operation had been discontinued from the start of the comparative year.\n\nSee note 6 for information about discontinued operations.\n\n## New Accounting Pronouncements Effective in 2013\n\nWe adopted the following accounting changes for our 2013 consolidated financial statements on January 1, 2013.", - "page_start": 104, - "page_end": 104, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Kingsgate TSR Alpha™ 
performance\n\nWhere the TSR Alpha™ is 0%, or close to it, the Company has delivered returns that are consistent with shareholder expectations, whereas if the TSR Alpha™ is positive then the performance is in excess of shareholder expectations. However, if the TSR Alpha™ is negative, then shareholder expectations have not been met. Approximately, 50% of companies achieve TSR Alpha™ of ≥'0' which means that it is akin to 50th percentile performance and +20% TSR Alpha™ will be exceeded by less than 25% of companies meaning that it is in excess of 75th percentile performance.\n\nThe Board will commission an independent expert to review the operation of all remuneration instruments including TSR Alpha™ prior to the conclusion of the 2014 financial year.\n\n## Previous LTI Plan\n\nThe previous LTI Plan involved awarding participants options over shares. This Plan ceased prior to 1 July 2012 and no options over ordinary shares in the Company were provided as remuneration to the KMP of the parent entity and Group during the current or previous year. There are no options currently on issue that were issued under this Plan.", - "page_start": 55, - "page_end": 55, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## To the Stockholders and the Board of Directors of Atrion Corporation:\n\nWe have audited the accompanying consolidated balance sheets of Atrion Corporation (a Delaware corporation) and Subsidiaries as of December 31, 2003 and 2002, and the related consolidated statements of income, changes in stockholders' equity and cash flows for the years then ended. These financial statements are the responsibility of the Company's management. Our responsibility is to express an opinion on these financial statements based on our audit. The financial statements of Atrion Corporation and Subsidiaries as of and for the year in the period ended December 31, 2001, were audited by other auditors who have ceased operations. 
Those auditors expressed an unqualified opinion on those financial statements in their report dated February 25, 2002.\n\nWe conducted our audits in accordance with auditing standards generally accepted in the United States of America. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement. An audit includes examining, on a test basis, evidence supporting the amounts and disclosures in the financial statements. An audit also includes assessing the accounting principles used and significant estimates made by management as well as evaluating the overall financial statement presentation. We believe that our audits provide a reasonable basis for our opinion.\n\nIn our opinion, the financial statements referred to above present fairly, in all material respects, the consolidated financial position of Atrion Corporation and Subsidiaries as of December 31, 2003 and 2002, and the consolidated results of their operations and their consolidated cash flows for the years then ended in conformity with accounting principles generally accepted in the United States of America.\n\nAs discussed above, the financial statements of Atrion Corporation and Subsidiaries as of December 31, 2001, and for the year then ended were audited by other auditors who have ceased operations. As described in Note 2, these financial statements have been revised to include the transitional disclosures required by Statement of Financial Accounting Standards No. 142, Goodwill and Other Intangible Assets, which was adopted by the Company as of January 1, 2002. 
Our audit procedures with respect to the disclosures in Note 2 with respect to 2001 included agreeing the previously reported net income to the previously issued financial statements and the adjustments to reported net income representing amortization expense (including any related tax effects) recognized in those periods related to goodwill to the Company's underlying records obtained from management. We also tested the mathematical accuracy of the reconciliation of adjusted net income to reported net income, and the related income-per-share amounts. In our opinion, the disclosures for 2001 in Note 2 are appropriate. However, we were not engaged to audit, review, or apply any procedures to the 2001 financial statements of the Company other than with respect to such disclosures and, accordingly, we do not express an opinion or any other form of assurance on the 2001 financial statements taken as a whole.\n\n\n\nGrant Thornton LLP Dallas, Texas February 13, 2004\n\nThis is a copy of the audit report previously issued by Arthur Andersen LLP in connection with Atrion Corporation and Subsidiaries Annual Report for the year ended December 31, 2001. This audit report has not been reissued by Arthur Andersen LLP in connection with this Annual Report. 
The consolidated balance sheets as of December 31, 2001 and 2000 and the consolidated statements of income and cash flows for the years ended December 31, 2000 and 1999 referred to in this report have not been included in the accompanying financial statements.\n\n## To the Stockholders and the Board of Directors of Atrion Corporation:", - "page_start": 24, - "page_end": 24, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "## To the Stockholders and the Board of Directors of Atrion Corporation:\n\nWe have audited the accompanying consolidated balance sheets of Atrion Corporation (a Delaware corporation) and subsidiaries as of December 31, 2001 and 2000 and the related consolidated statements of income and cash flows for each of the three years in the period ended December 31, 2001. These financial statements are the responsibility of the Company's management. Our responsibility is to express an opinion on these financial statements based on our audits.\n\nWe conducted our audits in accordance with auditing standards generally accepted in the United States. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement. An audit includes examining, on a test basis, evidence supporting the amounts and disclosures in the financial statements. An audit also includes assessing the accounting principles used and significant estimates made by management as well as evaluating the overall financial statement presentation. 
We believe that our audits provide a reasonable basis for our opinion.\n\nIn our opinion, the financial statements referred to above present fairly, in all material respects, the financial position of Atrion Corporation and subsidiaries as of December 31, 2001 and 2000 and the results of their operations and their cash flows for each of the three years in the period ended December 31, 2001 in conformity with accounting principles generally accepted in the United States.\n\n\n\nArthur Andersen LLP Atlanta, Georgia February 25, 2002", - "page_start": 24, - "page_end": 24, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "The Company made income tax payments (net of refunds received) of approximately $12.9 million, $17.7 million and $69.3 million for the years ended December 31, 2004, 2003 and 2002, respectively.\n\nThrough the date of the Company's initial public offering of common stock in July 1998, the Company filed consolidated federal income tax returns with AutoNation Inc. (\"AutoNation\"), its former parent company. In accordance with the Company's tax sharing agreement with AutoNation, the Company may be liable for certain assessments imposed by the Internal Revenue Service for the periods through June 1998. The Internal Revenue Service is auditing the Company's consolidated tax returns for fiscal years 1998 through 2003. Management believes that the tax liabilities recorded are adequate. However, a significant assessment in excess of liabilities recorded against the Company could have a material adverse effect on the Company's financial position, results of operations or cash flows.", - "page_start": 84, - "page_end": 84, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "| 10.1 | —Separation and Distribution Agreement dated June 30, 1998 by and between Republic Services, Inc. and AutoNation, Inc. (then known as Republic Industries, Inc.) 
(incorporated by reference to Exhibit 10.1 of the Company's Quarterly Report on Form 10-Q for the period ended June 30, 1998). |\n| 10.2 | —Tax Indemnification and Allocation Agreement dated June 30, 1998 by and between Republic Services, Inc. and AutoNation, Inc. (then known as Republic Industries, Inc.) (incorporated by reference to Exhibit 10.4 of the Company's Quarterly Report on Form 10-Q for the period ended June 30, 1998). |\n| 10.3 | —Republic Services, Inc. 1998 Stock Incentive Plan (as amended and restated March 6, 2002) (incorporated by reference to Exhibit 10.1 of the Company's Quarterly Report on Form 10-Q for the period ended March 31, 2002).* |\n| 10.4 | —Employment Agreement dated October 25, 2000 by and between James E. O'Connor and Republic Services, Inc. (incorporated by reference to Exhibit 10.7 of the Company's Annual Report on Form 10-K for the year ended December 31, 2000).* |\n| 10.5 | —Employment Agreement dated October 25, 2000 by and between Tod C. Holmes and Republic Services, Inc. (incorporated by reference to Exhibit 10.9 of the Company's Annual Report on Form 10-K for the year ended December 31, 2000).* |", - "page_start": 98, - "page_end": 98, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "on the Company's ATM network. In addition, the Company continues to invest in the on-going development of products that were recently introduced to the market. The Company's research and development costs incurred for computer products to be sold, leased or otherwise marketed increased to $6.7 million for the year ended December 31, 2000 from $3.2 million for the year ended December 31, 1999. Of this total figure, $1.0 million and $322,000 were capitalized, as at December 31, 2000 and 1999, respectively, in conjunction with the Company's accounting policy requiring the capitalization of development costs on a product by product basis once technological feasibility is established. 
Technological feasibility of computer software products is established when the Company has completed all planning, designing, coding, and testing activities that are necessary to establish that the product can be produced to meet its design specifications including functions, features, and technical performance requirements.\n\nOperating Loss The Software Solutions Segment incurred an operating loss of $21.5 million for the year ended December 31, 2000 and $7.1 million for the year ended December 31, 1999 as a result of the factors discussed above.\n\n## Corporate Services Segment\n\nOperating Expenses Operating expenses for the Corporate Services Segment increased to $7.9 million for the year ended December 31, 2000 from $6.8 million for the year ended December 31, 1999. The components of corporate services operating costs for the years ended December 31, 2000 and 1999 were:\n\n| (in thousands) | Years ending December 31, | Years ending December 31, |\n|-----------------------------------------|-----------------------------|-----------------------------|\n| | 2000 | 1999 |\n| Salaries and benefits | $3,813 | $3,335 |\n| Selling, general and administrative | 3,841 | 3,270 |\n| Depreciation and amortization | 208 | 145 |\n| Total direct operating expenses | $7,862 | $6,750 |\n\nThe Company's expansion of its network infrastructure, and increases in corporate and administrative capabilities are the primary reasons for these increased expenditures.\n\n## Non-Operating Results for the Years Ended December 31, 2000 and 1999\n\nInterest Income Interest income decreased to $1.1 million for the year ended December 31, 2000 from $2.0 million for the year ended December 31, 1999 and from $2.5 million for the year ended December 31, 1998. 
The decrease is the result of the decrease in investment securities and cash as a result of negative cash flow from operations and capital expenditures.\n\nInterest Expense Interest expense decreased to $10.8 million for the year ended December 31, 2000 from $10.9 million for the year ended December 31, 1999 and increased from $7.8 million for the year ended December 31, 1998. The decrease from 1999 to 2000 is due to exchange rate differences as the majority of the debt is denominated in Deutsche Mark. The increase from 1998 to 1999 is the result of accretion of the Company's Notes Payable for a full year in 1999 in comparison to 6 months' accretion in 1998.", - "page_start": 20, - "page_end": 20, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "We discontinued our Video segment in the second quarter of 2012 and reported the Video results of operations as discontinued operations at that time.\n\nAs of June 2012, Rogers' stores no longer offered video and game rentals or sales at its retail locations. Certain of these stores continue to serve customers' wireless and cable needs.\n\nThe Video segment did not have any results from discontinued operations in 2013 or any significant assets or liabilities as at December 31, 2013 and 2012. Cash flows from operating activities for the segment for 2013 were nil (2012 - $2 million). The Video segment did not have any cash flows from investing or financing activities for the years ended December 31, 2013 and 2012.", - "page_start": 106, - "page_end": 106, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_ATRI_2003.pdf", - "query": "How much share of Atrion's revenues did its major customer represent in 2003 ? 
", - "target_page": 21, - "target_passage": "The Company had one major customer which represented approximately $9.1 million (14.4 percent", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## Results of Continuing Operations\n\n## 2003 compared to 2002\n\nTotal revenue was $105.9 million in 2003, an increase of $12.9 million or 13.9%. Total revenues included $70.0 million of wireless revenues, an increase of $12.0 million or 20.7%; wireline revenues of $29.0 million, an increase of $0.3 million or 0.9%; and other revenues of $7.0 million, an increase of $0.6 million or 9.7%.\n\nWithin wireless revenues, the PCS operation contributed $69.8 million, an increase of $11.6 million, or 20.8%. PCS service revenues were $44.4 million, an increase of $10.9 million or 32.4%. Service revenue growth was driven by the increase in subscribers, totaling 85,139 at December 31, 2003, an increase of 17,297 or 25.5%, compared to 67,842 subscribers at year-end 2002. The company had churn of 2.1% in 2003 compared to 2.8% in 2002. The decline in the churn rate is the result of tightening the credit screening for new subscribers as well as continued efforts to improve the after sales support. Competition in the wireless industry continues to have a significant impact on the results of the Company's PCS operation.\n\nPCS travel revenue, including reseller revenue, which is compensation between Sprint and its PCS Affiliates for use of the other party's network, was $16.8 million, an increase of $0.3 million or 1.8%. Travel revenue is impacted by the geographic size of the Company's network service area, the overall number of Sprint wireless customers, their travel patterns and the travel exchange rate. The rate received on travel was $0.058 per minute in 2003, compared to $0.10 per minute in 2002. 
As a part of the amended management agreement signed on January 30, 2004, Sprint and the Company agreed to maintain the travel rate at $0.058 per minute through December 31, 2006.\n\n\n\n■", - "page_start": 46, - "page_end": 46, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "Income from discontinued operations was $22.4 million after taxes, an increase of $15.0 million or 202%. The income from discontinued operations in 2003 includes the sale of the partnership interest in February 2003 and results from the two months of its operations in 2003.\n\nThe Company adopted FAS 143 'Accounting for Asset Retirement Obligations.' effective January 1, 2003, and as a result recorded a charge to earnings for the cumulative effect of this change in accounting of $76 thousand after taxes.\n\nNet income was $32.1 million, an increase of $27.6 million or 610%. The increase is a result of improved operating results in the PCS operations, the 2002 VeriSign stock loss and the sale of the cellular operations.\n\n## DISCONTINUED OPERATIONS\n\nThe Company invested $2.0 million in the Virginia 10 RSA limited partnership in the early 1990's. The partnership's local customer base peaked in early 2000 with nearly 12,000 subscribers, then steadily declined to 6,700 by December 31, 2002. The decline was the result of competition with digital technologies and increased competition from national carriers in the area. As a result of the decline in the subscriber base, and the need for extensive capital expenditures to transform the analog network into a digital cellular network, the Company elected to sell its 66% interest in the partnership to one of the minority partners. The agreement was signed in November 2002, and closing was February 28, 2003. 
The Company's portion of the net income from its operations for 2003, 2002 and 2001 was $1.2 million, $7.4 million and $6.7 million, respectively.\n\n## CONTINUING OPERATIONS\n\n## 2002 compared to 2001\n\nTotal revenue was $93.0 million in 2002, an increase of $24.3 million or 35.3%. Total revenues included $57.9 million of wireless revenues, an increase of $21.7 million or 60.2%; wireline revenues of $28.7 million, an increase of $1.3 million or 4.6%; and other revenues of $6.4 million, an increase of $1.2 million or 24.5%.\n\nWithin wireless revenues, the PCS operation contributed $55.5 million, an increase of $21.4 million, or 63.0%. PCS service revenues were $37.4 million, an increase of $18.3 million or 95.7%. The increase in the subscriber base, which totaled 67,842 at December 31, 2002, was an increase of 20,524 or 43% from the prior year end.\n\nPCS travel revenue, which is compensation between Sprint and its PCS Affiliates for use of the other party's network, was $16.5 million, an increase of $2.9 million or 21.3%. Travel revenue is impacted by the geographic size of the Company's network service area, the overall number of Sprint wireless customers, and the travel exchange rate. The rate received on travel was $0.10 per minute in 2002. The rates in 2001 were $0.20 per minute from January 1, 2001 through April 30, 2001; $0.15 per minute from May 1, 2001 through September 30, 2001; and $0.12 per minute from October 1, 2001 through December 31, 2001.\n\nPCS equipment sales were $1.6 million, an increase of $0.3 million or 19.6%. The equipment sales are net of $0.3 million of rebates and discounts given at the time of sale, which became more pronounced during the year to meet industry competition for subscriber additions and subscriber retention.\n\nIn accordance with Sprint's requirements, the Company launched third generation (3G 1X) service in August 2002. 
The impact of 3G 1X-network enhancements on revenues was not significant in 2002.\n\nTower leases added $2.1 million to wireless revenues, an increase of $0.4 million or 24.5%. The increase was the result of other wireless carriers executing additional leases to use space on the Company's portfolio of towers. Of the 82 towers and poles owned by the Company as of December 31, 2002, 46 have tower space leased to other carriers.", - "page_start": 50, - "page_end": 50, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## SHENANDOAH TELECOMMUNICATIONS COMPANY AND SUBSIDIARIES MANAGEMENT'S DISCUSSION AND ANALYSIS OF FINANCIAL CONDITION AND RESULTS OF OPERATIONS\n\nFacility lease revenue contributed $5.5 million to wireline revenues, a decrease of $0.2 million or 3.5%. The decrease was primarily the result of the prolonged decline of lease rates associated with competitive pricing pressures and the economic downturn in the telecommunications industry. During 2002 the Company completed a second, diverse fiber route to its existing interconnection point in the Dulles airport area of Northern Virginia. This fiber route provides increased reliability for customers in the event of fiber cuts or breaks, and extends the availability of the Company's fiber network to additional market locations but to date has not added additional revenue to the Company's operation.\n\nBilling and collection services and other revenues contributed $0.4 million to wireline revenues, which was the same as 2002 results. Revenues from this service had declined in recent years, with interexchange carriers now issuing a greater proportion of their bills directly to their customers.\n\nWireline revenues from cable television services were $4.4 million, an increase of $0.1 million or 1.7%. 
The number of subscribers and service plan prices remained relatively constant during 2003.\n\nOther revenues, primarily consisting of Internet and 511Virginia service revenues were $5.8 million in 2003, an increase of $0.7 million or 13.5%. The Company had 17,420 dial-up Internet subscribers at December 31, 2003, compared to 18,050 at the end of the previous year. During 2003, the Company's DSL high-speed Internet access subscriber count increased to 1,298 from 646. Total Internet service revenue was $4.5 million, an increase of $0.3 million or 10.7%. The 511Virginia contract with the Virginia Department of Transportation contributed $1.3 million to other revenues, an increase of $0.4 million or 41.3%. Telecommunications equipment sales, services and lease revenues were $1.1 million, which reflects a $0.1 million decrease from 2002 results.\n\nTotal operating expenses were $87.2 million, an increase of $3.6 million or 4.3%. The primary driver in the increase in operating expenses is continued growth in the PCS operation somewhat offset by a significant decline in bad debt expense compared to 2002.\n\nLate in 2003, the Company made an employee benefits policy change, which eliminated the requirement for the Company to accrue a vacation liability in advance of the year in which the benefit was used. The result of this change was a reduction of benefit expense of $0.5 million for the year compared to 2002. Benefit expenses impact all operating departments based on the amount of direct labor charged to the department. The change has a one-time impact on the financial statements of the Company. The benefits policy now provides that employees earn and use their paid time off in the same period. 
In the future, under this policy, unused hours can be banked but only used for extended illness, not carried over for use as vacation.", - "page_start": 48, - "page_end": 48, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## RESULTS OF OPERATIONS\n\nThe Company's income from continuing operations was $4.9 million, or $2.86 per basic and $2.66 per diluted share, in 2003, compared to income from continuing operations of $4.1 million, or $2.37 per basic and $2.18 per diluted share, in 2002 and $4.3 million, or $2.10 per basic and $1.88 per diluted share, in 2001. Net income, including discontinued operations and cumulative effect of accounting change, totaled $5.1 million, or $2.96 per basic and $2.75 per diluted share, in 2003, compared with $2.6 million, or $1.51 per basic and $1.39 per diluted share, in 2002 and $9.8 million, or $4.80 per basic and $4.30 per diluted share, in 2001. The Company adopted Statement of Financial Accounting Standards ('SFAS') No. 142 effective January 1, 2002. The required adoption of SFAS No. 142 as discussed in Note 2 to the Company's Consolidated Financial Statements included herein is considered a change in accounting principle and the cumulative effect of adopting this standard resulted in a $1.6 million, or $ .96 per basic and $ .88 per diluted share, noncash, after-tax charge in 2002.\n\nOperating revenues were $62.8 million in 2003, compared with $59.5 million in 2002 and $57.6 million in 2001. These revenue increases are generally attributable to higher sales volumes. The 5 percent revenue increase in 2003 over the prior year is primarily attributable to an 8 percent increase in the revenues of the Company's ophthalmic products, an 8 percent increase in the revenues of the Company's cardiovascular products, a 3 percent increase in the Company's fluid delivery products and a 2 percent increase in the Company's other medical and non-medical products and services. 
The 3 percent revenue increase in 2002 over the prior year is primarily attributable to an 8 percent increase in the revenues of the Company's cardiovascular products, a 4 percent increase in the Company's fluid delivery products and a 4 percent increase in the Company's other medical and non-medical products and services.\n\nThe Company's cost of goods sold was $40.6 million in 2003, compared with $39.2 million in 2002 and $35.8 million in 2001. The increase in cost of goods sold for 2003 over 2002 was primarily related to the increase in revenues discussed above and increased insurance costs partially offset by an improvement in manufacturing variances resulting from increased production volumes. The increase in cost of goods sold for 2002 over 2001 was primarily related to a shift in product mix, which resulted in lower gross margins, and the increase in revenues discussed above.\n\nGross profit was $22.2 million in 2003, compared with $20.3 million in 2002 and $21.8 million in 2001. The Company's gross profit in 2003 was 35 percent of revenues compared with 34 percent of revenues in 2002 and 38 percent of revenues in 2001. The increase in gross profit percentage in 2003", - "page_start": 25, - "page_end": 25, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "## C o r p o r a t e O f f i c e :\n\nAtrion Corporation One Allentown Parkway Allen, Texas 75002 (972) 390-9800\n\nwww.atrioncorp.com\n\n## R e g i s t r a r a n d T r a n s f e r A g e n t\n\nAmerican Stock Transfer and Trust Company 59 Maiden Lane New York, New York 10007\n\n## F o r m 1 0 - K\n\nA copy of the Company's 2003 Annual Report on Form 10-K, as filed with the Securities and Exchange Commission, may be obtained by any stockholder without charge by written request to: Corporate Secretary Atrion Corporation One Allentown Parkway Allen, Texas 75002\n\n## S t o c k I n f o r m a t i o n\n\nThe Company's common stock is traded on The Nasdaq Stock Market (Symbol: ATRI). 
As of March 8, 2004, there were approximately 1,200 stockholders, including beneficial owners holding shares in nominee or 'street' name. The table below sets forth the high and low closing prices on The Nasdaq Stock Market and the quarterly dividends per share declared by the Company for each quarter of 2002 and 2003.\n\n| 2002 Quarter Ended | | High | Low | Dividends |\n|----------------------|----|--------|-------|-------------|\n| March 31 | $ | 38.14 | 26.91 | $ - |\n| June 30 | | 32.51 | 26.82 | - |\n| September 30 | | 28.09 | 18.31 | - |\n| December 31 | | 23.90 | 17.31 | - |\n| 2003 Quarter Ended | | High | Low | Dividends |\n| March 31 | $ | 22.85 | 17.95 | $ - |\n| June 30 | | 30.80 | 22.75 | - |\n| September 30 | | 45.20 | 26.80 | .12 |\n| December 31 | | 50.00 | 40.00 | .12 |\n\nThe Company paid no cash dividends on its common stock during 2002. In the third quarter of 2003 the Company began paying quarterly cash dividends and presently plans to pay quarterly cash dividends in the future.\n\nMPS and LacriCATH are registered trademarks of Atrion Corporation", - "page_start": 30, - "page_end": 30, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "## Note 11. Major Customers\n\nThe Company has one major customer and relationship that is a significant source of revenue. In 2003, as during the past number of years, the Company's relationship with Sprint continued to increase, due to growth in the PCS business segment. Approximately 61.2% of total revenues in 2003 were generated by or through Sprint and its customers using the Company's portion of Sprint's nationwide PCS network. This was compared to 57.6% in 2002, and 47.1% of total revenue in 2001. 
No other customer relationship on a stand-alone basis generates more than 2.5% of the Company's total revenue for 2003, 2002 and 2001.\n\n■", - "page_start": 34, - "page_end": 34, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "Income tax expense in 2003 totaled $1.9 million, compared with $1.4 million in 2002 and $1.8 million in 2001. The effective tax rates for 2003, 2002 and 2001 were 27.8 percent, 25.7 percent and 29.7 percent, respectively. Benefits from tax incentives for exports and R&D expenditures totaled $350,000 in 2003, $408,000 in 2002 and $404,000 in 2001. The higher effective tax rate in 2003 is primarily a result of benefits from tax incentives for exports and R&D expenditures being a lesser percentage of taxable income in 2003 than in 2002. The lower effective tax rate in 2002 is primarily a result of benefits from tax incentives for exports and R&D expenditures being a larger percentage of taxable income in 2002 than in 2001 and the utilization of capital loss carryforwards in 2002.\n\nThe Company believes that 2004 revenues will be higher than 2003 revenues and that the cost of goods sold, gross profit, operating income and income from continuing operations will each be higher in 2004 than in 2003. The Company further believes that it will have continuing volume growth in most of its product lines in 2004, complemented by the introduction of new products, and that it will achieve a double-digit annual rate of growth in earnings per share from continuing operations for the next several years.\n\n## DISCONTINUED OPERATIONS\n\nDuring 1997, the Company sold all of its natural gas operations. The financial statements presented herein reflect the Company's natural gas operations as discontinued operations for all periods presented. 
The financial statements also reflect an after-tax gain on disposal of these discontinued operations of $ .2 million, or $ .10 per basic and $ .09 per diluted share, in both 2003 and 2002, and $5.5 million, or $2.70 per basic and $2.42 per diluted share, in 2001.\n\nIn addition to the initial consideration received in 1997 upon the sale of the natural gas operations, certain annual contingent deferred payments of up to $250,000 per year were to be paid to the Company over an eight-year period which began in 1999, with the amount paid each year to be dependent upon revenues received by the purchaser from certain gas transportation contracts. The Company received deferred payments of $250,000 each, before tax, from the purchaser in April 2003, 2002 and 2001 which are reflected in each year as a gain from discontinued operations of $165,000, net of tax. The 2001 gain also includes a $5,327,000 non-cash gain from reversal of a reserve established when the Company disposed of its natural gas operations in 1997. This reversal in the third quarter of 2001 followed the resolution of an outstanding contingency related to the sale of those assets.", - "page_start": 26, - "page_end": 26, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "## CASH DIVIDENDS\n\nCash dividends were $0.52 per common share for 2003, $0.50 for 2002, and $0.48 for 2001. Further, the Board of Directors announced a 7.7% increase in the quarterly dividend from $0.13 to $0.14 per common share effective with the March 1, 2004, dividend payment for shareholders of record at the close of business February 20, 2004. The previous quarterly dividend increase was from $0.125 to $0.13, effective with the February 28, 2003, dividend payment for shareholders of record at the close of business on February 21, 2003. A cash dividend has been paid every quarter since April 15, 1955, and quarterly dividends are expected to continue. 
The average dividend payout percentage for the most recent three-year period has been 32% of prior year earnings.\n\n## COMMON SHARE REPURCHASES\n\nDuring 2003, the Company repurchased 762,300 shares of its common stock at a cost of approximately $21.5 million, or an average price of $28.22 per share. During 2002, the Company repurchased 614,580 shares of its common stock at a cost of approximately $15.7 million, or an average price of $25.60 per share. During 2001, the Company repurchased 1,472,937 shares at a cost of approximately $35.1 million, or an average price of $23.80 per share.\n\n## LITIGATION AND UNCERTAINTIES\n\nThe Company has contingent liabilities that have arisen in the course of its business, including pending litigation, preferential payments claims in customer bankruptcies, environmental remediation, taxes, and other", - "page_start": 36, - "page_end": 36, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## SHENANDOAH TELECOMMUNICATIONS COMPANY AND SUBSIDIARIES MANAGEMENT'S DISCUSSION AND ANALYSIS OF FINANCIAL CONDITION AND RESULTS OF OPERATIONS\n\nincreased again on July 1, 2002 to $6.50, and comparable rate increases also impacted business subscribers. Tied to the SLC rate increases were declines in rates charged to interexchange carriers for interstate minutes of use. The 2002 results reflect a significantly larger increase in network usage, which more than offset the decline in rates.\n\nFacility lease revenue contributed $5.7 million to wireline revenues, a decrease of $0.8 million or 12.6% from 2001. The decrease was primarily the result of declining lease rates associated with competitive pricing pressure, and the economic downturn in the telecommunications industry.\n\nBilling and collection services contributed $0.4 million to wireline revenues, which was the same as 2001 results. 
Revenues from this service had declined in recent years, with interexchange carriers now issuing a greater proportion of their bills directly to their customers.\n\nWireline revenues from cable television services were $4.3 million, an increase of $0.5 million or 14.5%. In December 2001, the Company increased its basic service charge by $6.00 per month, which produced $0.3 million of the increase in cable television revenue. The remaining $0.2 million was generated by an increased penetration of digital services and increased pay per view sales.\n\nWithin other revenues, Internet and 511Virginia contract revenues from the Virginia Department of Transportation, were $5.1 million in 2002, an increase of $1.2 million or 30.4%. The Company had 18,050 dial-up Internet subscribers at December 31, 2002, compared to 17,423 subscribers at the end of 2001. Total Internet service revenue was $4.2 million, an increase of $0.6 million or 15.7%. Services provided under the 511Virginia contract contributed $0.9 million to other revenues, an increase of $0.6 million. Telecommunications equipment sales, services and lease revenues were $1.2 million, a nominal increase over 2001 results.\n\nTotal operating expenses were $83.6 million, an increase of $21.3 million or 34.3%. The continued growth in the PCS operation was principally responsible for the change.\n\nCost of goods and services was $10.5 million, an increase of $3.1 million or 41.8%. The PCS cost of goods sold was $8.3 million, an increase of $2.8 million or 50.2%. This change is due primarily to higher volumes of handsets sold through Company owned stores and PCS handset subsidies paid to third-party retailers. The cable television programming (cost of service) expense was $1.4 million, an increase of $0.1 million or 4.6%. The other cost of goods sold increased $0.3 million, compared to the same period in 2001.\n\nNetwork operating costs were $32.5 million, an increase of $5.8 million or 21.5%. 
Line and switching costs were $9.7 million, an increase of $2.6 million or 37.4%, due principally to the impact of the expanded PCS network. Travel expense, generated by the Company's PCS subscribers' use of minutes on other providers' portions of the Sprint wireless network, was $10.7 million, an increase of $0.9 million or 8.4%. The increase in customer travel usage more than offset the travel rate explained above in travel revenue. Plant specific costs, which include the operation and maintenance of the networks, were $9.6 million, an increase of $2.3 million or 30.7%. Tower, building, and land rentals, as well as PCS equipment maintenance, were major contributors to the increase in plant specific expenses. Other network costs such as power, network administration, and engineering, were $2.7 million, the same as in 2001.\n\nDepreciation and amortization expense was $14.5 million, an increase of $3.2 million or 28.6%. The PCS operation had depreciation expense of $8.6 million, an increase of $3.6 million or 72.7%. 
The PCS operation added 53 additional base stations during 2002.", - "page_start": 51, - "page_end": 51, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "7\n\nTotal income tax expense for continuing operations differs from the amount that would be provided by applying the statutory federal income tax rate to pretax earnings as illustrated below (in thousands):\n\n| | YEAR ENDED DECEMBER 31, | YEAR ENDED DECEMBER 31, | YEAR ENDED DECEMBER 31, |\n|-------------------------------------------------------------|---------------------------|---------------------------|---------------------------|\n| | 2003 | 2002 | 2001 |\n| Income tax expense at the statutory federal income tax rate | $ 2,298 | $ 1,858 | $ 2,062 |\n| Increase (decrease) resulting from: | | | |\n| State income taxes | 34 | 80 | 220 |\n| Decrease in valuation allowance | - | - | (68) |\n| R&D credit | (100) | (164) | (52) |\n| Foreign sales benefit | (250) | (244) | (352) |\n| Other, net | (103) | (127) | (7) |\n| Total income tax expense | $ 1,879 | $ 1,403 | $ 1,803 |\n\n## STOCKHOLDERS' EQUITY 6\n\nThe Board of Directors of the Company has at various times authorized repurchases of Company stock in open-market or negotiated transactions at such times and at such prices as management may from time to time decide. The Company has effected a number of open-market or negotiated transactions to purchase its stock during the past three years. These repurchases totaled 20,200, 26,000 and 10,300 shares during the years 2003, 2002 and 2001, respectively, at per share prices ranging from $14.02 to $42.42. As of December 31, 2003, authorization for the repurchase of 94,000 additional shares remained. The Company purchased 173,614 shares of its common stock at $23.00 per share in April 2003 pursuant to a tender offer. The Company purchased 502,229 shares of its common stock at $34.50 per share in December 2001 pursuant to a tender offer. 
All shares purchased in the tender offers and in the open-market or negotiated transactions became treasury shares upon repurchase by the Company.\n\nIn September 2003, the Company announced that it had adopted a policy for the payment of regular quarterly cash dividends on the Company's common stock. The Company subsequently paid a quarterly cash dividend of $ .12 per common share in both September and December of 2003.\n\nThe Company has a Common Share Purchase Rights Plan, which is intended to protect the interests of stockholders in the event of a hostile attempt to take over the Company. The rights, which are not presently exercisable and do not have any voting powers, represent the right of the Company's stockholders to purchase at a substantial discount, upon the occurrence of certain events, shares of common stock of the Company or of an acquiring company involved in a business combination with the Company. In January 2000, this plan, which was adopted in February 1990, was extended until February 2005.\n\n## INCOME PER SHARE\n\nThe following is the computation for basic and diluted income per share from continuing operations:", - "page_start": 18, - "page_end": 18, - "source_file": "NASDAQ_ATRI_2003.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_ATRI_2003.pdf", - "query": "What was Atrion's gross profit in 2003 (in thousands) ? ", - "target_page": 10, - "target_passage": "Gross Profit 22,239", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## GROSS PROFIT\n\nGross profit as a percent of net sales improved 1.0 percentage point in 2003 as compared to fiscal 2002 and 1.3 percentage points in 2002 as compared to 2001. The improvement in both periods was a result of the continued net benefits of rapid continuous improvement, restructuring initiatives, business simplification, new products, and improved price realization. 
Included in 2003 gross profit was $6.7 million of accelerated depreciation, which reduced gross profits 0.4 percentage points. The Company expects to mitigate any future increases in material costs through various initiatives, including alternative materials and suppliers and its rapid continuous improvement program.\n\n## SELLING AND ADMINISTRATIVE EXPENSES\n\nSelling and administrative expenses, excluding restructuring charges, increased 5.8% in 2003 and decreased 2.2% in 2002. The increase in 2003 was due to additional investment of approximately $14 million in brand building and selling initiatives, and increased freight costs of $7 million due to rate increases, fuel surcharges, and volume. The decrease in 2002 was due to no longer amortizing goodwill and certain other intangible assets of approximately $9 million and lower overall", - "page_start": 33, - "page_end": 33, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## C o r p o r a t e O f f i c e :\n\nAtrion Corporation One Allentown Parkway Allen, Texas 75002 (972) 390-9800\n\nwww.atrioncorp.com\n\n## R e g i s t r a r a n d T r a n s f e r A g e n t\n\nAmerican Stock Transfer and Trust Company 59 Maiden Lane New York, New York 10007\n\n## F o r m 1 0 - K\n\nA copy of the Company's 2003 Annual Report on Form 10-K, as filed with the Securities and Exchange Commission, may be obtained by any stockholder without charge by written request to: Corporate Secretary Atrion Corporation One Allentown Parkway Allen, Texas 75002\n\n## S t o c k I n f o r m a t i o n\n\nThe Company's common stock is traded on The Nasdaq Stock Market (Symbol: ATRI). As of March 8, 2004, there were approximately 1,200 stockholders, including beneficial owners holding shares in nominee or 'street' name. 
The table below sets forth the high and low closing prices on The Nasdaq Stock Market and the quarterly dividends per share declared by the Company for each quarter of 2002 and 2003.\n\n| 2002 Quarter Ended | | High | Low | Dividends |\n|----------------------|----|--------|-------|-------------|\n| March 31 | $ | 38.14 | 26.91 | $ - |\n| June 30 | | 32.51 | 26.82 | - |\n| September 30 | | 28.09 | 18.31 | - |\n| December 31 | | 23.90 | 17.31 | - |\n| 2003 Quarter Ended | | High | Low | Dividends |\n| March 31 | $ | 22.85 | 17.95 | $ - |\n| June 30 | | 30.80 | 22.75 | - |\n| September 30 | | 45.20 | 26.80 | .12 |\n| December 31 | | 50.00 | 40.00 | .12 |\n\nThe Company paid no cash dividends on its common stock during 2002. In the third quarter of 2003 the Company began paying quarterly cash dividends and presently plans to pay quarterly cash dividends in the future.\n\nMPS and LacriCATH are registered trademarks of Atrion Corporation", - "page_start": 30, - "page_end": 30, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "| Gross profit energy generation and storage segment | $ 725 | $ 381 | | | $ 1,868 | $ 827 | | |\n| Gross margin energy generation and storage segment | 30.5 % | 24.4 % | | | 26.6 % | 18.0 % | | |\n| Total gross profit | $ 4,997 | $ 4,178 | | | $ 13,271 | $ 13,222 | | |\n| Total gross margin | 19.8 % | 17.9 % | | | 18.4 % | 18.5 % | | |", - "page_start": 37, - "page_end": 37, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "Income tax expense in 2003 totaled $1.9 million, compared with $1.4 million in 2002 and $1.8 million in 2001. The effective tax rates for 2003, 2002 and 2001 were 27.8 percent, 25.7 percent and 29.7 percent, respectively. Benefits from tax incentives for exports and R&D expenditures totaled $350,000 in 2003, $408,000 in 2002 and $404,000 in 2001. 
The higher effective tax rate in 2003 is primarily a result of benefits from tax incentives for exports and R&D expenditures being a lesser percentage of taxable income in 2003 than in 2002. The lower effective tax rate in 2002 is primarily a result of benefits from tax incentives for exports and R&D expenditures being a larger percentage of taxable income in 2002 than in 2001 and the utilization of capital loss carryforwards in 2002.\n\nThe Company believes that 2004 revenues will be higher than 2003 revenues and that the cost of goods sold, gross profit, operating income and income from continuing operations will each be higher in 2004 than in 2003. The Company further believes that it will have continuing volume growth in most of its product lines in 2004, complemented by the introduction of new products, and that it will achieve a double-digit annual rate of growth in earnings per share from continuing operations for the next several years.\n\n## DISCONTINUED OPERATIONS\n\nDuring 1997, the Company sold all of its natural gas operations. The financial statements presented herein reflect the Company's natural gas operations as discontinued operations for all periods presented. The financial statements also reflect an after-tax gain on disposal of these discontinued operations of $ .2 million, or $ .10 per basic and $ .09 per diluted share, in both 2003 and 2002, and $5.5 million, or $2.70 per basic and $2.42 per diluted share, in 2001.\n\nIn addition to the initial consideration received in 1997 upon the sale of the natural gas operations, certain annual contingent deferred payments of up to $250,000 per year were to be paid to the Company over an eight-year period which began in 1999, with the amount paid each year to be dependent upon revenues received by the purchaser from certain gas transportation contracts. 
The Company received deferred payments of $250,000 each, before tax, from the purchaser in April 2003, 2002 and 2001 which are reflected in each year as a gain from discontinued operations of $165,000, net of tax. The 2001 gain also includes a $5,327,000 non-cash gain from reversal of a reserve established when the Company disposed of its natural gas operations in 1997. This reversal in the third quarter of 2001 followed the resolution of an outstanding contingency related to the sale of those assets.", - "page_start": 26, - "page_end": 26, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "## RESULTS OF OPERATIONS\n\nThe Company's income from continuing operations was $4.9 million, or $2.86 per basic and $2.66 per diluted share, in 2003, compared to income from continuing operations of $4.1 million, or $2.37 per basic and $2.18 per diluted share, in 2002 and $4.3 million, or $2.10 per basic and $1.88 per diluted share, in 2001. Net income, including discontinued operations and cumulative effect of accounting change, totaled $5.1 million, or $2.96 per basic and $2.75 per diluted share, in 2003, compared with $2.6 million, or $1.51 per basic and $1.39 per diluted share, in 2002 and $9.8 million, or $4.80 per basic and $4.30 per diluted share, in 2001. The Company adopted Statement of Financial Accounting Standards ('SFAS') No. 142 effective January 1, 2002. The required adoption of SFAS No. 142 as discussed in Note 2 to the Company's Consolidated Financial Statements included herein is considered a change in accounting principle and the cumulative effect of adopting this standard resulted in a $1.6 million, or $ .96 per basic and $ .88 per diluted share, noncash, after-tax charge in 2002.\n\nOperating revenues were $62.8 million in 2003, compared with $59.5 million in 2002 and $57.6 million in 2001. These revenue increases are generally attributable to higher sales volumes. 
The 5 percent revenue increase in 2003 over the prior year is primarily attributable to an 8 percent increase in the revenues of the Company's ophthalmic products, an 8 percent increase in the revenues of the Company's cardiovascular products, a 3 percent increase in the Company's fluid delivery products and a 2 percent increase in the Company's other medical and non-medical products and services. The 3 percent revenue increase in 2002 over the prior year is primarily attributable to an 8 percent increase in the revenues of the Company's cardiovascular products, a 4 percent increase in the Company's fluid delivery products and a 4 percent increase in the Company's other medical and non-medical products and services.\n\nThe Company's cost of goods sold was $40.6 million in 2003, compared with $39.2 million in 2002 and $35.8 million in 2001. The increase in cost of goods sold for 2003 over 2002 was primarily related to the increase in revenues discussed above and increased insurance costs partially offset by an improvement in manufacturing variances resulting from increased production volumes. The increase in cost of goods sold for 2002 over 2001 was primarily related to a shift in product mix, which resulted in lower gross margins, and the increase in revenues discussed above.\n\nGross profit was $22.2 million in 2003, compared with $20.3 million in 2002 and $21.8 million in 2001. The Company's gross profit in 2003 was 35 percent of revenues compared with 34 percent of revenues in 2002 and 38 percent of revenues in 2001. 
The increase in gross profit percentage in 2003", - "page_start": 25, - "page_end": 25, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "Our expense related to doubtful accounts as a percentage of revenue for 2002, 2003 and 2004 was .5%, .4% and .3%, respectively.\n\nAs of December 31, 2004, accounts receivable were $268.7 million, net of allowance for doubtful accounts of $18.0 million, resulting in days sales outstanding of 35 days, or 22 days net of deferred revenue. In addition, at December 31, 2004, our trade receivables in excess of 90 days old totaled $16.4 million, or 5.7% of gross receivables outstanding.\n\nOur expense for self-insurance as a percentage of revenue for 2002, 2003 and 2004 was 5.8%, 7.5% and 6.1%, respectively. The increase in self-insurance expense from 2002 to 2003 related to existing claims and was attributable to the expansion of our operations and various changes in estimates as a result of continued negative trends through our 2003 policy year, based on recent actuarial claims experience, expected claims development and medical cost inflation.\n\n## Property and Equipment\n\nThe following tables reflect the activity in our property and equipment accounts for the years ended December 31, 2002, 2003 and 2004 (in millions):\n\n| | Gross Property and Equipment | Gross Property and Equipment | Gross Property and Equipment | Gross Property and Equipment | Gross Property and Equipment | Gross Property and Equipment |\n|---------------------------------------|---------------------------------|--------------------------------|--------------------------------|-----------------------------------|--------------------------------|---------------------------------|\n| | Balance as of December 31, 2001 | Capital Additions | Retirements | Acquisitions, Net of Divestitures | Transfers and Adjustments | Balance as of December 31, 2002 |\n| Other land | $ 94.3 | $ 5.1 | $ (2.5) | $(7.0) | $ (.2) | $ 89.7 |\n| 
Non-depletable landfill land | 50.5 | 3.3 | - | - | .2 | 54.0 |\n| Landfill development costs | 958.8 | 18.1 | - | 5.1 | 44.3 | 1,026.3 |\n| Vehicles and equipment | 1,153.2 | 220.8 | (47.8) | 15.1 | 15.5 | 1,356.8 |\n| Buildings and improvements | 256.4 | 12.6 | (2.3) | (3.4) | 7.6 | 270.9 |\n| Construction in progress - landfill | 17.6 | 56.0 | - | - | (41.3) | 32.3 |\n| Construction in progress - other | 23.5 | 15.3 | - | - | (29.7) | 9.1 |\n| Total | $2,554.3 | $331.2 | $(52.6) | $ 9.8 | $ (3.6) | $2,839.1 |", - "page_start": 51, - "page_end": 51, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## Retail Business Gross Profit\n\nThe following table summarizes the Retail Business gross profit:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n|-----------------------------------------|--------|--------|--------|\n| Retail gross profit 1 | $4,709 | $4,434 | $4,335 |\n| Retail gross profit as a % of net sales | 35.9% | 36.4% | 36.9% |\n| Ending inventory per square foot 2 | $64.05 | $58.84 | $53.77 |\n| Inventory turnover rate 3 | 4.67 | 5.07 | 5.37 |", - "page_start": 30, - "page_end": 30, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "In 2002, the Company liquidated its holdings of VeriSign, Inc., for proceeds of $2.8 million and a realized loss of $9.0 million. The VeriSign stock was valued at $38 per share at December 31, 2001, and declined over the ensuing months to approximately $6 per share in early July 2002. The Company liquidated all of its holdings in the stock early in the third quarter 2002. The Company's original investment in VeriSign's predecessor companies was approximately $1.0 million. Total proceeds from all sales of stock in VeriSign and its predecessor companies were $8.1 million, or more than eight times the original investment. 
\n\nThere were no gross realized gains on available-for-sale securities included in income in 2003 or 2002, while there were $17.7 million for 2001. Gross realized losses included in income in 2003, 2002 and 2001 were $3 thousand, $9.0 million and $3.0 million, respectively.\n\nChanges in the unrealized gains (losses) on available-for-sale securities during the years ended December 31, 2003, 2002 and 2001 reported as a separate component of shareholders' equity are as follows:", - "page_start": 25, - "page_end": 25, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "Goodwill for 2003 acquisitions totaled approximately $21.2 million. As of December 31, 2003, we had goodwill, net of accumulated amortization, of $1,558.1 million. 
$27.7 million of the total purchase price paid for acquisitions and contingent payments to former owners was allocated to landfill airspace.\n\nGoodwill for 2002 acquisitions totaled approximately $40.1 million. As of December 31, 2002, we had goodwill, net of accumulated amortization, of $1,544.2 million.\n\n## Consolidated Results of Operations\n\n## Years Ended December 31, 2004, 2003 and 2002\n\nOur income before cumulative effect of changes in accounting principles was $237.9 million for the year ended December 31, 2004, as compared to $215.4 million in 2003 and $239.6 million in 2002. Net income was $237.9 million for the year ended December 31, 2004, or $1.53 per diluted share, as compared to $177.6 million, or $1.10 per diluted share, in 2003 and $239.6 million, or $1.44 per diluted share, in 2002. Net income for the year ended December 31, 2003 includes an after-tax expense of $37.8 million (net of an income tax benefit of $23.1 million), or $.23 per share, as a cumulative effect of a change in accounting principle resulting from the adoption of Statement of Financial Accounting Standards No. 143, \"Accounting for Asset Retirement Obligations,\" and a change in accounting principle for our methane gas collection systems. See Note 1, Basis of Presentation, of the Notes to our Consolidated Financial Statements for further discussion of these changes in accounting principles. 
Our operating results for the year ended December 31, 2002 include other charges (income) described below.\n\nThe following table summarizes our costs and expenses in millions of dollars and as a percentage of our revenue for 2002 through 2004:\n\n| | 2004 | 2004 | 2003 | 2003 | 2002 | 2002 |\n|------------------------------------------------------------------------------------------|----------|--------|----------|--------|----------|--------|\n| | $ | % | $ | % | $ | % |\n| Revenue | $2,708.1 | 100.0% | $2,517.8 | 100.0% | $2,365.1 | 100.0% |\n| Cost of operations | 1,714.4 | 63.3 | 1,605.4 | 63.8 | 1,472.9 | 62.3 |\n| Depreciation, amortization and depletion of property and equipment | 252.4 | 9.3 | 233.8 | 9.3 | 193.5 | 8.2 |\n| Amortization of intangible assets | 7.0 | .3 | 5.3 | .2 | 6.1 | .2 |\n| Accretion | 13.7 | .5 | 12.7 | .5 | - | - |\n| Selling, general and administrative expenses | 268.3 | 9.9 | 247.9 | 9.8 | 238.7 | 10.1 |\n| Other charges (income) | - | - | - | - | (5.6) | (.2) |\n| Operating income | $ 452.3 | 16.7% | $ 412.7 | 16.4% | $ 459.5 | 19.4% |\n\nRevenue. Revenue was $2,708.1 million, $2,517.8 million and $2,365.1 million for the years ended December 31, 2004, 2003 and 2002, respectively. Revenue increased by $190.3 million, or 7.6%, from 2003 to", - "page_start": 41, - "page_end": 41, - "source_file": "NYSE_RSG_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_EEFT_2000.pdf", - "query": "What the name of the first bridge buildt over Danube ?", - "target_page": 16, - "target_passage": "he Chain Bridge was the first bridge over the Danube", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "\n\n## CHAIN BRIDGE, Budapest\n\nThe Chain Bridge, built from 1839 to 1849, was the first bridge over the Danube, linking the cities Buda and Pest. 
Measuring 380 meters long and 15.7 meters wide, it is supported by pillars shaped like antique triumphal arches.", - "page_start": 15, - "page_end": 15, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "- (vii) Kgalagadi District,\n - (viii) Kgatleng District,\n - (ix) Kweneng District,\n - (x) Ngwaketse in the Southern District,\n - (xi) North East District, and\n - (xii) Tlokweng in the South East District;\n - ( b ) five persons who shall be appointed by the President; and", - "page_start": 34, - "page_end": 34, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "See accompanying notes to consolidated financial statements.\n\n\n\n## BROOKLYN BRIDGE, New York\n\nThe Brooklyn Bridge, proudly standing over the East River and connecting the boroughs of Brooklyn and Manhattan, endures as one of the most famous bridges in America. When completed in May 1883, the 5989-foot-long Brooklyn Bridge was the largest suspension bridge in the world.", - "page_start": 28, - "page_end": 28, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "See accompanying notes to consolidated financial statements.\n\n\n\n## RIVER TYNE BRIDGES, Newcastle\n\nSix bridges dominate the Tyne between Newcastle and Gateshead, enabling innovative railway and roadway advances over the past two centuries. 
At the time of its completion in 1929, the Tyne Bridge was the world's longest single span bridge.", - "page_start": 27, - "page_end": 27, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "- ( a ) one person from each of the following areas, which person for the time being performs the functions of the office of Kgosi in respect of such areas-\n - (i) Barolong Farms in the Southern District,\n - (ii) Chobe in the North West District,\n - (iii) Ga Malete in the South East District,\n - (iv) Ga Mmangwato in the Central District,\n - (v) Ghanzi District,\n - (vi) Goo Tawana in the North West District,", - "page_start": 34, - "page_end": 34, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "Brooklyn Bridge, New York\n\n\n\n## BRIDGE OF DREAMS\n\nIf you can dream it, build it.\n\nJust as bridges shape the skylines of the world's cities, they also still deeply influence our cultures, our commerce and our lives. Today, consumers are demanding greater convenience, personalized transactions, up-to-the-minute information and privacy as never before through the Internet, wireless access and other exciting new technologies.\n\nNew types of bridges - electronic bridges are emerging to link consumers with these services in innovative ways that redefine the financial transactions process. Now as we face a world constantly on the go, our mission is to create and implement flexible, secure solutions to connect people with their personal information.", - "page_start": 14, - "page_end": 14, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "In accordance with Statement of Financial Accounting Standards No. 34, 'Capitalization of Interest Cost' ('SFAS 34'), interest cost associated with major development and construction projects is capitalized as part of the cost of the project. 
Interest is typically capitalized on amounts expended on the project using the weightedaverage cost of our outstanding borrowings, since we typically do not borrow funds directly related to a development project. Capitalization of interest starts when construction activities, as defined in SFAS 34, begin and ceases when construction is substantially complete or development activity is suspended for more than a brief period.\n\nWhether we capitalize interest on a project depends in part on management's actions. In October 2002, we announced the suspension of development activities on our wholly-owned project on the Renaissance Pointe land in Atlantic City. In connection with that announcement, we stopped capitalizing interest associated with the project. Interest capitalized on this project for the year ended December", - "page_start": 42, - "page_end": 42, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "Table 4 - Noninterest Expense (in thousands):", - "page_start": 47, - "page_end": 47, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "Capitalized interest. The interest cost associated with major development and construction projects is capitalized and included in the cost of the project. When no debt is incurred specifically for a project, interest is capitalized on amounts expended on the project using the weighted-average cost of the Company's outstanding borrowings. Capitalization of interest ceases when the project is substantially complete or development activity is suspended for more than a brief period.", - "page_start": 57, - "page_end": 57, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "- 158. Marek, Miroslav. \"Capet 40\" (http://genealogy.euweb.cz/capet/capet40.html). euweb.cz . Archived (https://web.archi ve.org/web/20131014145729/http://www.genealogy.euweb.cz/capet/capet40.html) from the original on 14 October 2013. Retrieved 13 February 2009.\n - 159. \"Suzanne de Mésenge\" (http://roglo.eu/roglo?lang=es;i=337437). 
roglo.eu . Archived (https://web.archive.org/web/ 20160411124656/http://roglo.eu/roglo?lang=es;i=337437) from the original on 11 April 2016. Retrieved 14 September 2009.\n\n## Works cited\n\nAnselme de Sainte-Marie, Père (1726). Histoire généalogique et chronologique de la maison royale de France (http s://gallica.bnf.fr/ark:/12148/bpt6k76026j) [ Genealogical and chronological history of the royal house of France ] (in French). Vol. 1 (3rd ed.). Paris: La compagnie des libraires. Archived (https://web.archive.org/web/2022033108174 8/https://gallica.bnf.fr/ark:/12148/bpt6k76026j) from the original on 31 March 2022. Retrieved 30 August 2018.\n\nAntoine, Michel (1989). Louis XV (in French). Paris: Fayard. ISBN 978-2-2130-2277-2.\n\nBailey, Gauvin Alexander (2018). Architecture and Urbanism in the French Atlantic Empire: State, Church and Society, 1604-1830 . Kingston, Ontario: McGill-Queen's University Press. ISBN 978-0-7735-5376-7.\n\nBarentine, John C. (2016). Uncharted Constellations: Asterisms, Single-Source and Rebrands . Springer Publishing. ISBN 978-3-3192-7619-9.\n\nBarnes, Linda L. (2005). Needles, Herbs, Gods, and Ghosts: China, Healing, and the West to 1848 . Harvard University Press. ISBN 978-0-6740-1872-3.\n\nBeem, Charles (2018). Queenship in Early Modern Europe (https://books.google.com/books?id=301GEAAAQBAJ). Red Globe Press. ISBN 978-1-1370-0506-9. Archived (https://web.archive.org/web/20231124053309/https://book s.google.com/books?id=301GEAAAQBAJ) from the original on 24 November 2023. Retrieved 30 October 2023.\n\nBély, Lucien (2001). The History of France . Paris: Editions Jean-Paul Gisserot. ISBN 978-2-8774-7563-1.\n\nBlack, Jeremy (2011). Beyond the Military Revolution: War in the Seventeenth Century World . Palgrave Macmillan. ISBN 978-0-2302-5156-4.\n\nBlanning, Tim (2008). The Pursuit of Glory: The Five Revolutions That Made Modern Europe . Penguin Books. ISBN 978-0-1431-1389-8.\n\nBluche, François (1986). Louis XIV\n\n(in French). 
Paris: Hachette Littératures. ISBN 978-2-0101-3174-5.\n\nBluche, François (1990). Louis XIV . Translated by Greengrass, Mark. New York: Franklin Watts. p. 11. ISBN 978-05311-5112-9.\n\nBluche, François (2005). Dictionnaire du Grand Siècle 1589-1715 (in French). Fayard. ISBN 978-2-2136-2144-9.\n\nBryant, Mark (2004). \"Partner, Matriarch, and Minister: Mme de Maintenon of France, Clandestine Consort, 16801715\". In Campbell Orr, Clarissa (ed.). Queenship in Europe 1660-1815: The Role of the Consort . Cambridge University Press. pp. 77-106. ISBN 978-0-5218-1422-5.\n\nBuckley, Veronica (2008). Madame de Maintenon: The Secret Wife of Louis XIV . London: Bloomsbury. ISBN 978-07475-8098-0.\n\nBurke, Peter (1992). \"The Fabrication of Louis XIV\". History Today . 42 (2).\n\nClaydon, Tony (2007). Europe and the Making of England, 1660-1760 . Cambridge University Press. ISBN 978-05218-5004-9.\n\nDelon, Michel (2013). Encyclopedia of the Enlightenment (https://books.google.com/books?id=QEpJAgAAQBAJ). Routledge. ISBN 978-1-1359-5998-2.\n\nDunlop, Ian (2000). Louis XIV . London: Pimlico. ISBN 978-0-7126-6709-8.\n\nDurant, Will; Durant, Ariel (1963). The Story of Civilization . Vol. 8: The Age of Louis XIV. Boston: Simon & Schuster.\n\nDvornik, Francis (1962). The Slavs in European History and Civilization (https://books.google.com/books?id=LACpYP -g1y8C). Rutgers University Press. ISBN 978-0-8135-0799-6. Archived (https://web.archive.org/web/20231017044 641/https://books.google.com/books?id=LACpYP-g1y8C) from the original on 17 October 2023. Retrieved 21 August 2021.\n\nEdmunds, Martha (2002). Piety and Politics . University of Delaware Press. 
ISBN 0-8741-3693-8.", - "page_start": 30, - "page_end": 30, - "source_file": "wikipedia5.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_EEFT_2000.pdf", - "query": "What was the total amount of operating expenses of 2000 by Network Wordwide in 2000 ?", - "target_page": 17, - "target_passage": "Total operating expenses increased to $88.1 million for the year ended December 31, 2000", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Operating Expenses Total operating expenses increased to $88.1 million for the year ended December 31, 2000 from $68.3 million for the year ended December 31, 1999 and from $34.5 million for the year ended December 31, 1998. The increase from 1999 to 2000 can be broken down\n\n\n\nby segment as follows: (1) a $3.5 million increase in Network Services Segment operating costs due to growth in the size of the network operations; (2) a $15.2 million increase in Software Services Segment due to write down of intangibles of $11.2 million and investment in personnel and resources; and (3) a $1.1 million increase in Corporate Services Segment operating costs due to the expanded operations. The increase from 1998 to 1999 can be broken down by segment as follows: (1) a $13.0 million increase in Network Services Segment operating costs, (2) the addition of $19.6 million of Software Solutions Segment operating costs, and (3) a $1.2 million increase in Corporate Services Segment operating costs. Operating expenses for the years ended December 31, 2000 and 1999 are discussed more fully in the Segment Results of Operations sections below.\n\nOperating Loss The Company generated an operating loss of $35.4 million for the year ended December 31, 2000 compared to $26.8 million for the year ended December 31, 1999 and $22.6 million for the year ended December 31, 1998. 
The increased operating loss from 1999 to 2000 is due to the net effect of three factors: (1) a $6.8 million decrease in the operating loss from the Company's Network Services Segment; (2) a $14.3 million increase in the operating loss from the Company's Software Solutions Segment; and (3) a $1.1 million increase in the operating loss from the Company's Corporate Services Segment. The increased operating loss from 1998 to 1999 is due to the net effect of three factors: (1) a $1.9 million decrease in operating losses from the Company's Network Services Segment; (2) the addition of $4.8 million in operating losses from the Company's Software Solutions Segment; and (3) a $1.3 million increase in operating losses from the Company's Corporate Services Segment.", - "page_start": 16, - "page_end": 16, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "The Company recorded an $800,000 write-down of certain ATM hardware assets associated with the purchase of the Budapest Bank ATM network in May 2000 and the Service Bank ATM network in March 1999 (see Note 10 to the Consolidated Financial Statements - Asset Write Down). In addition, the Company recorded a one-time gain in its Central European Sub-segment of $1.2 million. The gain is related to a change in Hungarian law that eliminates a major portion of the Company's liability for import taxes on ATM hardware to the Hungarian government. The gain is included as an element of direct operating costs.\n\nThe operating expenses for the Central European Sub-segment totaled $21.7 million for the year ended December 31, 2000 as compared to $20.7 million for the year ended December 31, 1999, an increase of 5%. 
The increase in operating expenses is largely the result of an increase in the number of ATMs operated by the Company from 1,203 at December 31, 1999 to 1,391 at December 31, 2000, and increased transaction volumes.\n\n\n\nThe operating expenses for the We s t e rn European Sub-segment totaled $18.9 million for the year ended December 31, 2000 as compared to $16.5 million for the year ended December 31, 1999, an increase of 15%. The increase in operating expenses is largely the result of an increase in the number of ATMs operated by the Company from 621 at December 31, 1999 to 787 at December 31, 2000, and increased transaction volumes.\n\nThe operating expenses for the Other ATM Operations Sub-segment were $2.4 million for the year ended December 31, 2000 as compared to $2.2 million for the year ended December 31, 1999, an increase of 9%. The operating expenses from this segment are the result of the acquisition of the Dash network located in the United States in August 1999 and the unallocated costs associated with the Company's processing facilities.\n\nD i rect operating costs in the Network Services Segment consist primarily of: ATM installation costs; ATM site rentals; and costs associated with maintaining ATMs, ATM telecommunications, interest on network cash and cash delivery and security services to ATMs. Such costs increased to $24.4 million for the year ended December 31, 2000 from $21.9 million for the year ended December 31, 1999. The increase in direct operating costs is primarily attributable to costs associated with operating the increased number of ATMs in the network during the periods. Also, i n t e rcompany allocations were made to charge the ATM operations with transaction switching and bank connection fees associated with the operations central processing center in Budapest. These allocations totalled $3.5 million and $2.9 million for the years ended December 31, 2000 and 1999, re s p e c t i v e l y. 
Direct operating costs for 2000 include a one-time gain of $1.2 million due to a change in Hungarian law that eliminates a major portion of the Company's liability for import taxes on ATM hard w a re. Direct operating costs also include a $657,000 gain realized in 1999 f rom the sale of the Croatian network assets. The components of direct operating costs for the years ended December 31, 2000 and 1999 were:", - "page_start": 18, - "page_end": 18, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "## To Our Shareholders\n\nI n our report to you last year, we noted that Euronet's success has been built in large part on the question 'Would you like another transaction?' The answer from our clients and their customers was a resounding 'Yes!'\n\nTo reflect the rapid changes taking place in financial transactions worldwide, even that question has evolved. So in 2000, we also began asking 'How would you like your next transaction?'\n\nIn 2000, Euronet Worldwide focused on providing ways people can access their financial accounts and transactions through various electronic touchpoints. New secure transaction types and touchpoints-ATMs, point-of-sale (POS) devices, the Internet and mobile phones-continued to fuel transaction growth every month. In 2000, we processed a record 52.7 million billable transactions, a 60% increase over 1999, and in December 2000, our transaction levels exceeded 5 million per month and continue to accelerate.\n\nTaken together, our transaction growth and expanding number of consumer touchpoints translated into an accelerating and recurring revenue stream, which greatly improved our bottom line. Our 2000 revenue of $52.7 million represented a 27% increase over the company's 1999 revenue of $41.5 million. 
Euronet's 2000 EBITDA also improved $2.4 million, or 14.5%, over 1999.\n\nThis year we continued to focus on our core business of ATM driving and transaction processing, and we pursued new transactions through our mobile and Internet banking solutions. We also implemented our bill payment initiative, starting with electronic payments for prepaid mobile airtime. We are pleased to report that in 2000 our Network Services business turned EBITDA positive and posted revenue of $36.9 million, an increase of 39% over 1999 revenue.\n\nAdditional milestones were reached through several new strategic partnerships we announced late in the year. Gemplus, Sila Communications and Aether Systems chose Euronet mobile products to supplement their product offerings, proving the strength of Euronet's mobile products. Teaming up with these partners will further increase the sales penetration of our suite of mobile payment solutions around the world.\n\n", - "page_start": 2, - "page_end": 2, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "on the Company's ATM network. In addition, the Company continues to invest in the on-going development of products that were re c e n t l y i n t roduced to the market. The Company's re s e a rch and development costs incurred for computer products to be sold, leased or otherw i s e marketed increased to $6.7 million for the year ended December 31, 2000 from $3.2 million for the year ended December 31, 1999. Of this total f i g u re, $1.0 million and $322,000 were capitalized, as at December 31, 2000 and 1999, re s p e c t i v e l y, in conjunction with the Company's accounting policy requiring the capitalization of development costs on a product by product basis once technological feasibility is established. 
Technological feasibility of computer software products is established when the Company has completed all planning, designing, coding, and testing activities that are necessary to establish that the product can be produced to meet its design specifications including functions, feature s , and technical perf o rmance re q u i rements.\n\nOperating Loss The Software Solutions Segment incurred an operating loss of $21.5 million for the year ended December 31, 2000 and $7.1 million for the year ended December 31, 1999 as a result of the factors discussed above\n\n## Corporate Services Segment\n\nOperating Expenses Operating expenses for the Corporate Services Segment increased to $7.9 million for the year ended December 31, 2000 f rom $6.8 million for the year ended December 31, 1999. The components of corporate services operating costs for the years ended December 31, 2000 and 1999 were:\n\n| (in thousands) | Years ending December 31, | Years ending December 31, |\n|-----------------------------------------|-----------------------------|-----------------------------|\n| | 2 0 0 0 | 1 9 9 9 |\n| Salaries and benefits | $ 3 , 8 1 3 | $ 3 , 3 3 5 |\n| Selling, general and administrative | 3 , 8 4 1 | 3 , 2 7 0 |\n| D e p reciation and amort i z a t i o n | 2 0 8 | 1 4 5 |\n| Total direct operating expenses | $ 7 , 8 6 2 | $ 6 , 7 5 0 |\n\nThe Company's expansion of its network infrastru c t u re, and increases in corporate and administrative capabilities are the primary reasons for these i n c reased expenditures.\n\n## Non-Operating Results for the Years Ended December 31, 2000 and 1999\n\nInterest Income I n t e rest income decreased to $1.1 million for the year ended December 31, 2000 from $2.0 million for the year ended December 31, 1999 and from $2.5 million for the year ended December 31, 1998. 
The decrease is the result of the decrease in investment securities and cash as a result of negative cash flow from operations and capital expenditure s .\n\nInterest Expense I n t e rest expense decreased to $10.8 million for the year ended December 31, 2000 from $10.9 million for the year ended December 31, 1999 and increased from $7.8 million for the year ended December 31, 1998. The decrease from 1999 to 2000 is due to exchange rate diff e rences as the majority of the debt is denominated in Deutsche Mark. The increase from 1998 to 1999 is the result of accretion of the C o m p a n y 's Notes Payable for a full year in 1999 in comparison to 6 months' accretion in 1998.", - "page_start": 20, - "page_end": 20, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "guarantees for financial instruments and as deposits with customs officials. The decrease resulted primarily from the settlement of the forw a rd f o reign exchange contracts using restricted cash and a release of restricted cash resulting from the posting of a surety bond with the Hungarian banking institution that supplies cash to the Company's ATM network in Hungary.\n\nTrade Accounts Trade accounts receivable increased to $9.5 million at December 31, 2000 from $7.9 million at December 31, 1999 due primarily to sales from the Software Solutions Segment and increased Network Services Segment revenues.\n\nP r o p e r t y, Plant and Equipment Net pro p e rt y, plant and equipment decreased to $31.7 million at December 31, 2000 from $36.7 million at December 31, 1999. This decrease is due primarily to a reduction in the rate of installation of ATMs and fixed asset additions. 
Fixed asset d e p reciation was in excess of fixed asset additions, and the write-off of $800,000 in ATM hard w a re further reduced the net fixed asset position.\n\nIntangible Assets The decrease in net intangible assets to $2.6 million at December 31, 2000 from $16.3 million at December 31, 1999 is due primarily to the $11.2 million write-down of goodwill and other identifiable intangible assets associated with the Software Solutions Segment (see Note 9 to the Consolidated Financial Statements - Intangibles). In addition, the decrease is the result of amortization of purchased intangibles a c q u i red in the Euronet USA acquisition in 1998, and the SBK and Dash acquisitions in 1999.\n\nCurrent Liabilities C u rrent liabilities decreased to $20.5 million at December 31, 2000 from $26.9 million at December 31, 1999. This decre a s e is due primarily to decreases in accrued expenses, billings in excess of costs and estimated earnings on software installation costs and settlement of the forw a rd foreign exchange contracts.\n\nCapital Lease Total capital lease obligations including current installments increased to $11.5 million at December 31, 2000 from $10.6 million at December 31, 1999. This increase is due primarily to additional capital leases resulting from the Company's purchase of Budapest Bank's AT M network, consisting of 147 ATMs on May 1, 2000.\n\nNotes Payable Notes payable increased to $77.2 million at December 31, 2000 from $72.8 million at December 31, 1999. This is the result of several transactions as follows:\n\n| | (in millions) |\n|--------------------------------------------------|-----------------|\n| Balance at December 31, 1999 | $ 7 2 . 8. |\n| U n realized foreign exchange gain (DEM vs. US$) | (4.4) |\n| A c c retion of bond intere s t | 8 . 8. |\n| Balance at December 31, 2000 | $ 7 7 . 2. |\n\nS t o c k h o l d e r's Deficit Stockholders' deficit increased to $44.8 million at December 31, 2000 from $9.5 million at December 31, 1999. 
This is due to the net loss for the year ended December 31, 2000 of $49.6 million which was offset by an increase in additional paid in capital of $14.4 million due to the sale of 1,882,723 shares of common stock for proceeds of $13.0 million, the issue of $400,000 of warrants and the exercise of 390,231 stock options for proceeds of $900,000.\n\n## Year 2000 Compliance\n\nThe Company's European and U.S. Year 2000 compliance teams re p o rted no material Year 2000 problems during the advent of the year 2000, either with Euro n e t 's own systems or the systems of its customers. The Company is unaware of any material Year 2000 complications to date.\n\n## Impact of New Accounting Pronouncements Not Yet Adopted", - "page_start": 22, - "page_end": 22, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "Operating Loss The total Network Services Segment operating loss decreased to $6.1 million for the year ended December 31, 2000 from $12.9 million for the year ended December 31, 1999, an improvement of 53%, as a result of the factors discussed above. The Central European Subsegment re c o rded an operating loss of $3.1 million for the year ended December 31, 2000 compared to a loss of $8.0 million for the year ended December 30, 1999, an improvement of 61%, as a result of the factors discussed above. The We s t e rn European Sub-segment operating loss d e c reased to $2.3 million for year ended December 31, 2000 compared to a loss of $3.8 million for the year ended December 31, 1999, an i m p rovement of 39%, as a result of the factors discussed above. 
The Other ATM Operations Sub-segment incurred an operating loss of $700,000 for the year ended December 31, 2000 compared to a loss of $1.0 million for the year ended December 31, 1999, an improvement of 30%, as a result of the factors discussed above.\n\n## Software Solutions Segment\n\nSoftware Solutions Revenue Revenues from the Software Solutions Segment totaled $16.0 million before inter-segment eliminations for the year ended December 31, 2000 as compared to revenue of $15.1 for the year ended December 31, 1999. Software revenues are grouped into four b road categories: software license fees, professional service fees, maintenance fees and hard w a re sales. Software license fees are the initial fees c h a rged by the Company for the licensing of its pro p r i e t a ry application software to customers. Professional service fees are charged for customization, installation and consulting services provided to customers. Software maintenance fees are the ongoing fees charged to customers for the maintenance of the software products. Hard w a re sales revenues are derived from the sale of computer products and are re p o rted net of cost of sales. The components of software solutions revenue for the years ended December 31, 2000 and 1999 were:\n\n| (in thousands) | Years ending December 31, | Years ending December 31, |\n|---------------------------------|-----------------------------|-----------------------------|\n| | 2 0 0 0 | 1 9 9 9 |\n| S o f t w a re license fees | $ 4 , 1 1 7 | $ 2 , 4 3 0 |\n| P rofessional service fees | 6 , 8 6 7 | 8 , 2 9 8 |\n| Maintenance fees | 4 , 4 8 7 | 4 , 0 5 1 |\n| H a rd w a re sales | 5 3 5 | 3 7 0 |\n| Total direct operating expenses | $ 1 6 , 0 0 6 | $ 1 5 , 1 4 9 |\n\nThe increases in software license fees from 1999 to 2000 can be attributed to an increased number of software sales contracts signed in 2000 as c o m p a red to 1999, primarily in the first half of the year 2000. 
Sales of the Company's core software products have dropped off substantially in the third and fourth quarter of 2000 and are expected to be soft again during 2001. The Company believes that revenues of the Software Solutions Segment will increasingly be derived from the Company's new set of software solutions, including its wireless banking solutions. The decreases in professional service fees from 1999 to 2000 can be attributed to increased efficiency in the installation of software.", - "page_start": 19, - "page_end": 19, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "\n\nEuronet Worldwide Annual Report 2000\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "Most of Euro n e t 's financial instruments (cash and cash equivalents, trade accounts receivable, investment securities, prepaid expenses and other current assets, trade accounts payable, accrued expenses and other current liabilities, advance payments on contracts, billings in excess of costs and estimated earnings on software installation contracts, costs and estimated earnings in excess of billings on software installation contracts) are short - t e rm in nature. Accord i n g l y, the carrying value of these instruments approximates their fair values. The fair value of notes payable was determined based on quoted market prices for the same issue and amounted to $37.5 million (carrying value of $77.2 million) at December 31, 2000 and $52.0 million (carrying value of $72.8 million) at December 31, 1999. 
See Note 14 for details of the Company's foreign exchange contracts.\n\n## (21) Reconciliation of Net Loss to Net Cash Used in Operating Activities\n\nThe reconciliation of net loss to net cash used in operating activities for the years ended December 31, 2000, 1999, and 1998 follows.", - "page_start": 44, - "page_end": 44, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "At December 31, 2000 the Company had cash and cash equivalents of $7.2 million and working capital of $3.6 million. The Company had $2.1 million of restricted cash held as security with respect to cash provided by banks participating in Euronet's ATM network, to cover guarantees on financial instruments and as deposits with customs officials (See Note 7 to the Consolidated Financial Statements - Restricted cash). In addition to the assets held on the balance sheet at December 31, 1999 the Company held repurchased notes payable with a face value of 48.4 million Deutsche Marks ($23.3 million as at December 31, 2000 based on a USD to DM rate of 1:2.08) and a fair market value at December 31, 2000 of $9.3 million (See Note 20 to the Consolidated Financial Statements - Financial instruments).\n\nOn June 28, 2000 the Company entered into an unsecured revolving credit agreement (the 'Credit Agreement') providing a facility of up to $4.0 million from three shareholders as follows: DST Systems in the amount of $2.4 million; Hungarian-American Enterprise Fund in the amount of $1.0 million; and Michael J. Brown in the amount of $600,000. The facility was available to be drawn upon until December 28, 2000, with repayment of any draws being due June 28, 2001. On December 28, 2000 the facility was amended and renewed for a further six months and is available to be drawn until June 28, 2001 with repayments of any draws being due December 28, 2001. Draws on the facility will accrue interest at 10 percent per annum, payable quarterly. 
A 'commitment' fee was paid for the initial facility of 100,000 warrants issued pro-rata to the lenders with a warrant strike price set at the average share price, as quoted on NASDAQ for 10 trading days prior to the warrant issue date, less 10 percent. An additional fee of 100,000 warrants, on the same terms, was paid for the subsequent extension of the facility. Warrants are to be issued on similar terms and conditions for each draw on the facility at the rate of 80,000 warrants for each $1.0 million of funds drawn. As of March 1, 2001, the Company had not made any draws under the Credit Agreement.\n\nOn February 25, 2000 the Company entered into two subscription agreements for the sale of an aggregate of 650,000 new common shares of the Company. Closing under those agreements took place on March 13, 2000. These agreements were signed with certain accredited investors in transactions exempt from registration under the exemptions provided in Section 4(2) and Regulation D of the Act. The purchase price of each share was $6.615, which represents ninety percent of the average closing price for the ten trading days prior to and including February 15, 2000. The aggregate amount of proceeds to the Company from the private placement was $4.3 million. 
Under each of the agreements, for each two shares of common stock purchased in the private placement, the purchasers were issued one warrant to purchase a share of Euronet common stock at an exercise price of $11.615, expiring in each case on the one year anniversary date of the subscription agreement.", - "page_start": 21, - "page_end": 21, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "## (22) Non-Cash Financing and Investing Activities\n\nCapital lease obligations of $5.1 million, $5.2 million and $3.9 million during the years ended December 31, 2000, 1999 and 1998, respectively, were incurred when the Company entered into leases primarily for new automated teller machines.\n\nDuring the years ended December 31, 2000, 1999 and 1998, the Company issued warrants to purchase common stock totaling $372,000, $0, and $1,725,000, respectively.", - "page_start": 44, - "page_end": 44, - "source_file": "NASDAQ_EEFT_2000.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_EEFT_2000.pdf", - "query": "What was the share of revenues of Network Worldwide made in Poland and Hungary in 2000 ?", - "target_page": 24, - "target_passage": "In 2000, 30% of the Company's revenues were generated in Poland and Hungary", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Table 5: Physical health risks, Sectors and exposures - EWCS 2015\n\nCountry colours: Romania aquamarine, Poland orange, Hungary blue.\n\nThe figure below illustrates country differences, based on data from the EWCS 2015: the values of Ireland (green), the EU28 level (blue) with numbers, and the values of Poland (orange). Poland had a relatively high share of employment in industry of 24%, for which Ireland has a share of 12%. 
The impact on working conditions can be seen in the share of workers reporting exposures to vibrations (Poland 27%, Ireland 16%) and loud noise (Poland 35%, Ireland 25%).\n\nFigure 17: Physical health risks compared (%) - EWCS 2015\n\n", - "page_start": 41, - "page_end": 41, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "E u ronet and its subsidiaries operate in two business segments: (1) a segment that provides an independent shared ATM network and other e l e c t ronic payment network services to banks, retail and financial institutions (the 'Network Services Segment'); and (2) a segment that p roduces application software and solutions for payment and transaction delivery systems (the 'Software Solutions Segment'). These business segments are supported by a corporate service segment which provides corporate and other administrative services which are not d i rectly identifiable with the two business segments, (the 'Corporate Services Segment'). The accounting policies of each segment are the same as those described in the summary of significant accounting policies. The Company evaluates perf o rmance based on profit or loss fro m operations before income taxes not including nonre c u rring gains and net loss. Prior period segment information has been restated to conform to the current period's presentation.\n\nAs the Network Services Segment continued to grow throughout 1999, the Company's management began to divide the internal org a n i z a t i o n of the segment into Sub-segments. Accord i n g l y, beginning in January 2000, the Company divided the Network Services Segment into thre e Sub-segments: 'Central European Sub-segment' (including Hungary, Poland, the Czech Republic, Croatia, Greece and Romania), 'We s t e rn E u ropean Sub-segment' (including Germ a n y, France, and the United Kingdom) and 'Other Operations Sub-segment' (including the United States and unallocated processing center costs). 
Where practical, certain amounts have been reclassified to reflect the change in intern a l re p o rting. The Company is unable to present Network Services Segment assets by Sub-segment as of December 31, 1999. Prior to January 1, 2000, certain assets that were used to provide support services to the Company as a whole were included in the assets in the balance sheet of the Company's wholly owned Hungarian subsidiary, Bank Tech. In order to segregate corporate assets from those of the Hungarian operations, these assets were transferred as of December 31, 1999, from Bank Tech to an existing Hungarian shell company, Administrative S e rvices. Those assets are now shown under the Other Operations Sub-segment.\n\nThe following tables present the segment results of the Company's operations for the years ended December 31, 2000, 1999 and 1998.", - "page_start": 42, - "page_end": 42, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "The subsidiaries of Euronet Services Inc., all of which are, directly or indire c t l y, wholly owned are:\n\n - - EFT Services Holding B.V., incorporated in the Netherlands\n - - Euronet Banktechnikai Szolgaltato Kft. ('Bank Tech'), incorporated in Hungary\n - - Euronet Adminisztracios Szolgaltato Kft. ('Administrative Services') (formerly SatComNet), incorporated in Hungary\n - - Bankomat 24/Euronet Sp. z o.o. ('Bankomat'), incorporated in Poland\n - - EFT-Usluge d o.o., incorporated in Croatia\n - - Euronet Services GmbH, incorporated in Germany\n - - EFT Services France SAS, incorporated in France\n - - Euronet Services spol. s.r.o., incorporated in the Czech Republic\n - - Euronet Services SRL, incorporated in Romania\n - - Euronet Services (UK) Limited, incorporated in the United Kingdom\n - - Euronet USA Inc. (formerly Arkansas Systems, Inc.) 
('Euronet USA') incorporated in Arkansas, United States of America\n - - EFT Network Services LLC ('Dash'), incorporated in Arkansas, United States of America\n - - Euronet Holding N.V., incorporated in the Netherlands Antilles (in liquidation)\n - - Euronet Eft Services Hellas, incorporated in Greece\n\n## ( 2 ) Financial Position and Basis of Preparation\n\nThe Company generated an operating loss of $35.4 million and negative cash flows from operations of $16.4 million for the year ended December 31, 2000, primarily due to the significant costs associated with its investment in delivery, support, re s e a rch and development in its s o f t w a re subsidiary which was acquired in December 1998. Based on the Company's current business plan and financial projections, the Company expects to reduce operating losses and net cash used in operating activities in 2001. In the Network Services Segment, the Company anticipates that increased transaction levels in its ATM network will result in additional revenues without a corresponding incre a s e in expenses. In addition, the Company expects to further expand its ATM outsourcing services and offer new value-added services, which will p rovide continued revenue growth without significantly increasing direct operating expenses or capital investments. In the Software Solutions Segment, the Company expects reduced operating expenses and improved operating perf o rmance due to a cost re s t ructuring pro g r a m i n t roduced in the first quarter of 2001. The Company believes that the credit facility (see note 13), certain asset sales and cash and cash equivalents at December 31, 2000 will provide the Company with sufficient cash re s o u rces until it achieves positive cash flow.\n\nBased on the above, management is confident that the Company will be able to continue as a going concern. 
Accord i n g l y, these consolidated financial statements have been pre p a red on a going concern basis which contemplates the continuation and expansion of trading activities as well as the realization of assets and liquidation of liabilities in the ord i n a ry course of business.\n\n## ( 3 ) S u m m a ry of Significant Accounting Policies and Practices\n\n## (a) Basis of presentation\n\nThe accompanying consolidated financial statements have been pre p a red in accordance with generally accepted accounting principles in the United States of America.\n\nAll significant intercompany balances and transactions have been eliminated.\n\n## (b) Foreign currencies\n\nF o reign currency transactions are re c o rded at the exchange rate prevailing on the date of the transactions. Assets and liabilitiesdenominated in foreign currencies are re m e a s u red at rates of exchange on the balance sheet date. Resulting gains and losses on f o reign currency transactions are included in the consolidated statement of operations and comprehensive loss.", - "page_start": 30, - "page_end": 30, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "## FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\nNotes to Consolidated Financial Statements\n\nDecember 31, 2002, 2001 and 2000\n\n## Stock Based Compensation\n\nThe Company grants stock options for a fixed number of shares to employees with an exercise price equal to the fair value of the shares at the date of grant. The Company accounts for stock option grants using the intrinsic value method prescribed by APB Opinion No. 25, 'Accounting for Stock Issued to Employees' ('APB 25'). Under APB 25, because the exercise price of the Company's employee stock options equals the market price of the underlying stock on the date of grant, no compensation expense is recognized. Had compensation cost for the plan been determined consistent with Statement of Financial Accounting Standards No. 
123, 'Accounting for Stock-Based Compensation,' the Company's net earnings and earnings per share would have been reduced by insignificant amounts on a pro forma basis for the years ended December 31, 2002, 2001 and 2000. Note 15 provides additional information on the Company's stock option plan.\n\n## Stock Repurchase\n\nOn July 25, 2000, the Company approved a stock repurchase plan, authorizing the repurchase of up to 740,690 shares of the Company's common stock. During the years ended December 31, 2001 and 2000, the Company repurchased 9,900 and 126,100 shares, respectively. The treasury shares were purchased for $4,240,119, which represented an average purchase price of $31.18 per share. The treasury shares were retired in 2001.\n\n## Per Share Data\n\nNet earnings per share ('EPS') are computed by dividing net earnings by the weighted average number of shares of common stock outstanding during the period. The Company calculates dilutive EPS assuming all outstanding options to purchase common stock have been exercised at the beginning of the year (or the time of issuance, if later.) The dilutive effect of the outstanding options is reflected by application of the treasury stock method, whereby the proceeds from the exercised options are assumed to be used to purchase common stock at the average market price during the period. 
The following table reconciles the computation of basic EPS to dilutive EPS:\n\n| | Net Earnings | Weighted Average Shares | Per Share Amount |\n|-------------------------------------------|-------------------|------------------------------|-----------------------|\n| For the year ended December 31, 2002: | | | |\n| Net earnings per share, basic | $33,952,550 | 12,359,966 | $ 2.75 |\n| Effect of stock options | - | 47,523 | |\n| Net earnings per share, assuming dilution | $33,952,550 | 12,409,489 | $ 2.74 |\n| For the year ended December 31, 2001: | | | |\n| Net earnings per share, basic | $29,354,505 | 12,318,346 | $ 2.38 |\n| Effect of stock options | - | 45,323 | |\n| Net earnings per share, assuming dilution | $29,354,505 | 12,363,669 | $ 2.37 |\n| For the year ended December 31, 2000: | | | |\n| Net earnings per share, basic | $28,316,047 | 12,426,344 | $ 2.28 |\n| Effect of stock options | - | 28,355 | |\n| Net earnings per share, assuming dilution | $28,316,047 | 12,454,699 | $ 2.27 |\n\n## Reclassifications", - "page_start": 75, - "page_end": 75, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## Bridging electronic payments in emerging markets\n\nNew business solutions are thriving as traditional banking environments transition rapidly from cash to electronic payments and transactions.\n\nhile credit is used for electronic transactions in Western Europe and North America, the model is quite different in many 'cash-based' economies around the world. 
And that's where Euronet continues to look for new opportunities - particularly in the emerging W\n\n\n\nThe Promise of Emerging Markets\n\nExpanding Poland's Payment Infrastructure\n\nAlthough still under-\n\ndeveloped compared to western economies, Poland is one of the most dynamic and promising markets in all of Europe.\n\nSince entering Poland in 1995, Euronet Worldwide has become one of the largest transaction processing service providers in the country, establishing a network of over 600 ATMs and providing software to eight major banks. Our agreement for electronic airtime distribution with all three mobile phone operators in the country - ERA GSM, Plus GSM and IDEA Centertel - further confirms that Euronet is embedded in the financial payments fabric in Poland.\n\nmarkets of Central Europe, the Middle East, Africa, Asia-Pacific, Latin America and the Caribbean.\n\nAlthough bank card use is just starting in these markets, the demand for non-cash payment is gaining momentum. The foundation for this marketplace is rapidly taking shape with greater technology support, well-designed infrastructure and rapidly growing networks, as well as a critical mass of users. So the shift to new electronic payment channels is on, and the number of electronic financial transactions has grown tremendously.\n\nEuronet Worldwide continuously monitors cash-based economies to identify their readiness to embrace electronic payment and transaction alternatives. With ATM, point-of-sale (POS), interactive voice response (IVR), Internet, mobile solutions and other innovative payment options, we can play a vital role in developing the electronic payments fabric of these countries.\n\nIn Greece, we are delivering ATM outsourcing solutions for a number of multinational banks with Greek operations. For Credigen Bank in Hungary, we are helping to open up the consumer credit market to a new base of shoppers who can perform POS and ATM transactions over Euronet's network. 
And in the Czech Republic we are providing outsourcing services for ABN AMRO's Visa Charge Card Program.\n\nLooking ahead, we see great potential for extending Euronet's brand into cash-based markets and for connecting a new world of users to dynamic transaction services.", - "page_start": 11, - "page_end": 11, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "## To Our Shareholders\n\nI n our report to you last year, we noted that Euronet's success has been built in large part on the question 'Would you like another transaction?' The answer from our clients and their customers was a resounding 'Yes!'\n\nTo reflect the rapid changes taking place in financial transactions worldwide, even that question has evolved. So in 2000, we also began asking 'How would you like your next transaction?'\n\nIn 2000, Euronet Worldwide focused on providing ways people can access their financial accounts and transactions through various electronic touchpoints. New secure transaction types and touchpoints-ATMs, point-of-sale (POS) devices, the Internet and mobile phones-continued to fuel transaction growth every month. In 2000, we processed a record 52.7 million billable transactions, a 60% increase over 1999, and in December 2000, our transaction levels exceeded 5 million per month and continue to accelerate.\n\nTaken together, our transaction growth and expanding number of consumer touchpoints translated into an accelerating and recurring revenue stream, which greatly improved our bottom line. Our 2000 revenue of $52.7 million represented a 27% increase over the company's 1999 revenue of $41.5 million. Euronet's 2000 EBITDA also improved $2.4 million, or 14.5%, over 1999.\n\nThis year we continued to focus on our core business of ATM driving and transaction processing, and we pursued new transactions through our mobile and Internet banking solutions. We also implemented our bill payment initiative, starting with electronic payments for prepaid mobile airtime. 
We are pleased to report that in 2000 our Network Services business turned EBITDA positive and posted revenue of $36.9 million, an increase of 39% over 1999 revenue.\n\nAdditional milestones were reached through several new strategic partnerships we announced late in the year. Gemplus, Sila Communications and Aether Systems chose Euronet mobile products to supplement their product offerings, proving the strength of Euronet's mobile products. Teaming up with these partners will further increase the sales penetration of our suite of mobile payment solutions around the world.\n\n", - "page_start": 2, - "page_end": 2, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "guarantees for financial instruments and as deposits with customs officials. The decrease resulted primarily from the settlement of the forward foreign exchange contracts using restricted cash and a release of restricted cash resulting from the posting of a surety bond with the Hungarian banking institution that supplies cash to the Company's ATM network in Hungary.\n\nTrade Accounts Trade accounts receivable increased to $9.5 million at December 31, 2000 from $7.9 million at December 31, 1999 due primarily to sales from the Software Solutions Segment and increased Network Services Segment revenues.\n\nProperty, Plant and Equipment Net property, plant and equipment decreased to $31.7 million at December 31, 2000 from $36.7 million at December 31, 1999. This decrease is due primarily to a reduction in the rate of installation of ATMs and fixed asset additions. 
Fixed asset depreciation was in excess of fixed asset additions, and the write-off of $800,000 in ATM hardware further reduced the net fixed asset position.\n\nIntangible Assets The decrease in net intangible assets to $2.6 million at December 31, 2000 from $16.3 million at December 31, 1999 is due primarily to the $11.2 million write-down of goodwill and other identifiable intangible assets associated with the Software Solutions Segment (see Note 9 to the Consolidated Financial Statements - Intangibles). In addition, the decrease is the result of amortization of purchased intangibles acquired in the Euronet USA acquisition in 1998, and the SBK and Dash acquisitions in 1999.\n\nCurrent Liabilities Current liabilities decreased to $20.5 million at December 31, 2000 from $26.9 million at December 31, 1999. This decrease is due primarily to decreases in accrued expenses, billings in excess of costs and estimated earnings on software installation costs and settlement of the forward foreign exchange contracts.\n\nCapital Lease Total capital lease obligations including current installments increased to $11.5 million at December 31, 2000 from $10.6 million at December 31, 1999. This increase is due primarily to additional capital leases resulting from the Company's purchase of Budapest Bank's ATM network, consisting of 147 ATMs on May 1, 2000.\n\nNotes Payable Notes payable increased to $77.2 million at December 31, 2000 from $72.8 million at December 31, 1999. This is the result of several transactions as follows:\n\n| | (in millions) |\n|--------------------------------------------------|-----------------|\n| Balance at December 31, 1999 | $72.8 |\n| Unrealized foreign exchange gain (DEM vs. US$) | (4.4) |\n| Accretion of bond interest | 8.8 |\n| Balance at December 31, 2000 | $77.2 |\n\nStockholders' Deficit Stockholders' deficit increased to $44.8 million at December 31, 2000 from $9.5 million at December 31, 1999. 
This is due to the net loss for the year ended December 31, 2000 of $49.6 million which was offset by an increase in additional paid in capital of $14.4 million due to the sale of 1,882,723 shares of common stock for proceeds of $13.0 million, the issue of $400,000 of warrants and the exercise of 390,231 stock options for proceeds of $900,000.\n\n## Year 2000 Compliance\n\nThe Company's European and U.S. Year 2000 compliance teams re p o rted no material Year 2000 problems during the advent of the year 2000, either with Euro n e t 's own systems or the systems of its customers. The Company is unaware of any material Year 2000 complications to date.\n\n## Impact of New Accounting Pronouncements Not Yet Adopted", - "page_start": 22, - "page_end": 22, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "## WHY INVEST IN ROGERS\n\nRogers Communications has excellent positions in growing markets, powerful brands that stand for innovation, proven management, a long record of driving growth and shareholder value, and the financial strength to continue to deliver long-term growth.\n\n## LEADER IN CANADIAN COMMUNICATIONS INDUSTRY\n\nCanada's largest wireless carrier and a leading cable television provider, offering a 'quadruple play' of wireless, Internet, television and telephony services to consumers and businesses.\n\n## SUPERIOR ASSET MIX\n\nMajority of revenue and cash flow is generated from wireless and broadband services, the fastest growing segments of the telecommunications industry.\n\n## PROVEN LEADERSHIP AND ENGAGED EMPLOYEE BASE\n\nExperienced, performance-oriented management and operating teams with solid industry expertise, supported by the spirit of innovation and an entrepreneurial culture.\n\n## MUST-HAVE PRODUCTS AND SERVICES\n\nA leading provider of communications and entertainment products and services that are increasingly becoming integrated necessities in today's world.\n\n## STRONG FRANCHISES AND POWERFUL BRANDS\n\nStrong franchises with 
nationally recognized and highly respected brands that stand solidly in Canada for innovation, choice and value.\n\n## FINANCIAL STRENGTH AND FLEXIBILITY\n\nFinancially strong with an investment grade balance sheet, conservative debt leverage, and significant available financial liquidity.\n\n## ANNUALIZED DIVIDENDS PER SHARE: 2008-2013\n\n\n\n## CATEGORY-LEADING MEDIA ASSETS\n\nUnique and complementary collection of leading broadcast radio and television, specialty TV, sports entertainment, publishing and digital media assets.\n\n## LEADING NETWORKS AND INNOVATIVE PRODUCTS\n\nLeading wireless and broadband network platforms that deliver the most innovative communications, information and entertainment services.\n\n## HEALTHY TRADING VOLUME AND GROWING DIVIDENDS\n\nRCI common stock actively trades on the TSX and NYSE, with average daily trading volume of approximately 1.6 million shares. Each share pays an annualized dividend of $1.83 per share in 2014.\n\n## ADJUSTED NET INCOME AND EARNINGS PER SHARE\n\n", - "page_start": 8, - "page_end": 8, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## NOTES TO AND FORMING PART OF THE FINANCIAL STATEMENTS FOR THE FINANCIAL YEAR ENDED 30 JUNE 2000\n\n## 22. 
SUBSEQUENT EVENTS\n\nOn 25 August 2000 the Company announced that it had reached two agreements for the placement of a total of 16,666,666 ordinary fully paid shares in the Company at an issue price of 30 cents each (Shares).\n\nThe first agreement was with Mr Mark Bradley, who agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, a further 3,441,666 within 7 days of that meeting.\n\nOn Mr Bradley being appointed a Director of the Company, in order to comply with the requirements of the Corporations Law and the ASX Listing Rules, the Company and Mr Bradley agreed to defer the first issue of Shares, making both issues conditional on shareholder approval.\n\nThe second agreement was with Clough Engineering Limited, pursuant to which it agreed to take a placement of 3,225,000 Shares by 29 September 2000, followed by, if approved of by shareholders at the Company's annual general meeting, 6,775,000 shares, within 7 days of that meeting.\n\nOn 15 June 2000 the Company announced that with effect from 1 July 2000 it acquired a 50% interest in OIS MOC Joint Venture Pty Ltd, to be paid for by the issue of 800,000 Shares in the Company. OIS MOC Joint Venture Pty Ltd owns the goodwill of a successful labour hire company. That company is to be renamed Mermaid Labour and Management Limited (MLML).\n\nMLML offers a full labour hire service inclusive of industrial relations consultancy, negotiating agreements and awards and were appropriate, provides ongoing management of the labour force.\n\nThe financial effect of the above events have not been reflected in these financial statements.\n\n## 23. 
EARNINGS PER SHARE\n\n| | 2000 Cents per Share | 1999 Cents per Share |\n|-----------------------------------------------------------------------------------------------------------|-------------------------|-------------------------|\n| Basic earnings per share | (0.62) | 8.09 |\n| Diluted earnings per share | (0.21) | 8.05 |\n| | 2000 | 1999 |\n| | No. | No. |\n| Weighted average number of ordinary shares on issue used in the calculation of basic earnings per share | 43,000,000 | 30,356,164 |\n\n", - "page_start": 56, - "page_end": 56, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## Bookmarks\n\nBookmarks are included in the PDF for headings or Word bookmarks depending on the option selected.\n\n## Availability\n\nThe information in this article is applicable to the following versions of Word.\n\n - Word for Windows Version 2408 and later.\n - Word for Mac Version 16.89 and later.\n - Word for iOS Version 2.89 and later.\n - Word for Android Build 16.0.18025.XXXXX or later.\n - Word for the web Build 16.0.18025.XXXXX or later.\n\nIt is available to customers with Office 2024 or Office LTSC 2024 and to customers with a Microsoft 365 subscription on Current Channel or Monthly Enterprise Channel. For customers with a Microsoft 365 subscription on Semi-Annual Enterprise Channel it will be available on January 14, 2025.", - "page_start": 60, - "page_end": 60, - "source_file": "office-pdf.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_AIT_2012.pdf", - "query": "Under which name was the Applied company initially fouded ?", - "target_page": 6, - "target_passage": "The Company was founded in 1923 by Joseph M. 
Bruening as The Ohio Ball Bearing Company", "chunk_present": { "presence": true, "index": 8 } }, "top_chunk": [ { "text": "## SELECTED CONSOLIDATED FINANCIAL DATA\n\nThe summary consolidated financial data set forth below have been derived from, and are qualified by reference to, the audited consolidated financial statements of the Company and the notes thereto, prepared in conformity with generally accepted accounting principles as applied in the United States ('U.S. GAAP'), which have been audited by KPMG Polska Sp. z o.o., independent public accountants. The Company believes that the period-to-period comparisons of its financial results are not necessarily meaningful due to its significant acquisitions in December 1998 and January 1999, and should not be relied upon as an indication of future performance. The following information should be read in conjunction with 'Management's Discussion and Analysis of Financial Condition and Results of Operations' included herein.\n\n## Consolidated Statements of Operations Data:", - "page_start": 14, - "page_end": 14, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "\n\n\n\nThis report contains statements that are forward-looking, as that term is defined by the Securities and Exchange Commission in its rules, regulations and releases. Applied intends that such forward-looking statements be subject to the safe harbors created thereby. All forward-looking statements are based on current expectations regarding important risk factors, including those identified on page 12 of this report and in our Annual Report on Form 10-K for the fiscal year ended June 30, 2012. 
Accordingly, actual results may differ materially from those expressed in the forward-looking statements, and the making of such statements should not be regarded as a representation by Applied or any other person that results expressed therein will be achieved.\n\nPURPOSE PRODUCT PERFORMANCE PEOPLE\n\nApplied Industrial Technologies is a leading industrial distributor that offers more than four million parts to serve the needs of MRO and OEM customers in virtually every industry. In addition, Applied ® provides engineering, design and systems integration for industrial and fluid power applications, as well as customized mechanical, fabricated rubber and fluid power shop services. Applied also offers maintenance training and inventory management solutions that provide added value to its customers.\n\n## Applied at a Glance\n\nHeadquarters:\n\nCleveland, Ohio, USA\n\nOperating Facilities: More than 500 in the United States, Canada, Mexico, Puerto Rico, Australia and New Zealand\n\nE-Commerce:\n\nwww.Applied.com\n\nDistribution Centers:\n\n9\n\n## Stock Keeping Units (SKUs) Available\n\nto Customers:\n\nMore than 4 million\n\nProduct Manufacturers:\n\nMore than 2,000\n\nStock Ticker Symbol:\n\nAIT, listed on the\n\nNew York Stock Exchange\n\nEmployee Associates:\n\nApproximately 4,900\n\nData current as of August 1, 2012", - "page_start": 1, - "page_end": 1, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "important both for its role in ending the war between France and Spain, because many of the claims and objectives of Louis's foreign policy for the next 50 years would be based upon this marriage, and because it was through this marriage that the Spanish throne would ultimately be delivered to the House of Bourbon. [32]\n\n## Personal reign and reforms\n\n## Coming of age and early reforms\n\nLouis XIV was declared to have reached the age of majority on the 7th of September 1651. 
On the death of Mazarin, in March 1661, Louis personally took the reins of government and astonished his court by declaring that he would rule without a chief minister: \"Up to this moment I have been pleased to entrust the government of my affairs to the late Cardinal. It is now time that I govern them myself. You [secretaries and ministers] will assist me with your counsels when I ask for them. I request and order you to seal no orders except by my command . . . I order you not to sign anything, not even a passport . . . without my command; to render account to me personally each day and to favor no one\". [33] Capitalizing on the widespread public yearning for peace and order after decades of foreign and civil strife, the young king consolidated central political authority at the expense of the feudal aristocracy. Praising his ability to choose and encourage men of talent, the historian Chateaubriand noted: \"it is the voice of genius of all kinds which sounds from the tomb of Louis\". [34]\n\nLouis began his personal reign with administrative and fiscal reforms. In 1661, the treasury verged on bankruptcy. To rectify the situation, Louis chose Jean-Baptiste Colbert as Controller-General of Finances in 1665. However, Louis first had to neutralize Nicolas Fouquet, the powerful Superintendent of Finances. Although Fouquet's financial indiscretions were not very different from Mazarin's before him or Colbert's after him, his ambition worried Louis. He lavishly entertained the king at the opulent château of Vaux-le-\n\nRoyal Monogram\n\n\n\nVicomte, flaunting a wealth which could hardly have accumulated except through embezzlement of government funds.\n\nFouquet appeared eager to succeed Mazarin and Richelieu in power, and he indiscreetly purchased and privately fortified the remote island of Belle Île. These acts sealed his doom. 
Fouquet was charged with embezzlement; the Parlement found him guilty and sentenced him to exile; and finally Louis altered the sentence to life imprisonment.\n\nFouquet's downfall gave Colbert a free hand to reduce the national debt through more efficient taxation. The principal taxes included the aides and douanes (both customs duties), the gabelle (salt tax), and the taille (land tax). The taille was reduced at first, and certain tax-collection contracts were auctioned instead of being sold privately to a favoured few. Financial officials were required to keep regular accounts, revising inventories and removing unauthorized exemptions: up to 1661 only 10 per cent of income from the royal domain reached the king. Reform had to overcome vested interests: the taille was collected by officers of the Crown who had purchased their post at a high price, and punishment of abuses necessarily lowered the value of the purchase. Nevertheless, Colbert achieved excellent results, with the deficit of 1661 turning into a surplus by 1666, with interest on the debt decreasing from 52 million to 24 million livres. The taille was reduced to 42 million in 1661 and 35 million in 1665, while revenue from indirect taxation\n\nMembers of the Académie des sciences with Louis in 1667; in the background appears the new Paris Observatory.\n\n", - "page_start": 4, - "page_end": 4, - "source_file": "wikipedia5.pdf" - }, - { - "text": "## HON INDUSTRIES Inc. and SUBSIDIARIES\n\nclaims. The Company currently has a claim for approximately $7.6 million pending against it arising out of the bankruptcy of a customer filed in 2001. The Company was named a critical vendor by the bankruptcy court and, accordingly, was paid in full for all outstanding receivables. The claim alleges that the Company received preferential payments from the customer during the ninety days before the customer filed for bankruptcy protection. The claim was brought in February 2003. 
The Company has recorded an accrual with respect to this contingency, in an amount substantially less than the full amount of the claim, which represents the best estimate within the range of likely exposure and intends to vigorously defend against the claim. Given the nature of this claim, it is possible that the ultimate outcome could differ from the recorded amount. It is our opinion, after consultation with legal counsel, that additional liabilities, if any, resulting from these matters, are not expected to have a material adverse effect on our financial condition, although such matters could have a material effect on our quarterly or annual operating results and cash flows when resolved in a future period.\n\n## Looking Ahead\n\nThe Company is encouraged by indications that the economy is recovering and is cautiously optimistic that the office furniture industry will begin to rebound in the second half of 2004. Global Insight, BIFMA's forecasting consultant, increased its estimate for the industry shipment growth from 2.4% to 5.6% in 2004, with first quarter flat and improving as the year progresses.\n\nThe hearth segment is impacted by the housing market, which may experience a slight decline from record high levels, but is expected to remain at healthy levels. Management believes its strong brand recognition and new innovative product introductions in addition to strengthening distribution will allow it to grow its hearth segment.\n\nOn January 5, 2004, the Company completed the acquisition of Paoli Inc., a leading provider of wood case goods and seating. 
The Company intends to continue to build on Paoli's strong position in the market and excellent selling capabilities while leveraging its lean enterprise practices to achieve greater cost efficiencies and improved customer performance.\n\nThe Company's strategy is to grow its business through aggressive investment in building its brands, enhancing its strong member-owner culture, and remaining focused on its rapid continuous improvement program to continue to build best total cost. The Company plans to reinvest a large portion of its cost savings from plant\n\nconsolidations and its rapid continuous improvement program to continue to build brands, product solutions, and selling models.\n\nBecause of the following factors, as well as other variables affecting the Company's operating results, past financial performance may not be a reliable indicator of future performance, and historical trends should not be used to anticipate results or trends in future periods:", - "page_start": 37, - "page_end": 37, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "FIN 46R is effective at the end of the first interim period ending after March 15, 2004. Entities that have adopted FIN 46 prior to this effective date can continue to apply the provision of FIN 46 until the effective date of FIN 46R. The Company adopted FIN 46 on January 3, 2004, and it did not have an impact on the Company's financial statements.\n\nThe Financial Accounting Standards Board finalized SFAS No. 150, 'Accounting for Certain Financial Instruments with Characteristics of both Liabilities and Equity,' effective for financial instruments entered into or modified after May 31, 2003, and otherwise is effective at the beginning of the first interim period beginning after June 15, 2003. The adoption of SFAS No. 150 did not have an impact on the Company's financial statements.\n\nDuring 2002, the Financial Accounting Standards Board finalized SFAS No. 
146, 'Accounting for Costs Associated with Exit or Disposal Activities' for exit and disposal activities that are initiated after December 31, 2002. This Statement requires that a liability for a cost associated with an exit or disposal activity be recognized when the liability is incurred. The Company applied this statement to its 2003 restructuring activities which resulted in a charge of $8.5 million during 2003.\n\nThe Financial Accounting Standards Board also issued Interpretation No. 45, 'Guarantor's Accounting and Disclosure Requirements for Guarantees, Including Indirect Guarantees of Indebtedness to Other.' FIN 45 clarifies the requirements of SFAS No. 5, 'Accounting for Contingencies' relating to the guarantor's accounting for and disclosure of the issuance of certain types of guarantees. The provisions for initial recognition and measurement are effective on a prospective basis for guarantees that are issued or modified after December 31, 2002. The adoption did not have a material impact on the Company's financial statements.\n\nIn December 2003, the Financial Accounting Standards Board issued a revised SFAS No. 132, 'Employers' Disclosures about Pensions and Other Postretirement Benefits.' In 2003, the Company adopted the revised disclosure requirements of this pronouncement.\n\n## RECLASSIFICATIONS\n\nCertain prior year amounts have been reclassified to conform to the 2003 presentation.\n\n## Restructuring Related Charges\n\nAs a result of the Company's business simplification and cost reduction strategies, the Company closed two office furniture facilities located in Milan, Tennessee, and Hazleton, Pennsylvania, and consolidated pro-\n\nduction into other U.S. manufacturing locations. 
Charges for the closures totaled $15.7 million, which consists of $6.7 million of accelerated depreciation of machinery and equipment which was recorded in cost of sales, $3.4 million of severance, and $5.6 million of facility exit, production relocation, and other costs which were recorded as restructuring costs. A total of 316 members were terminated and received severance due to these shutdowns. The closures and consolidation are substantially complete.\n\nThe Hazleton, Pennsylvania, facility is an owned facility and has been reclassified to current assets as it is currently being held as available for sale. It is included in the 'Prepaid expenses and other current assets' in the January 3, 2004, condensed consolidated balance sheet at its carrying value of $2.1 million. The Milan, Tennessee, facility is a leased facility that is no longer being used in the production of goods. The restructuring expense for 2003 included $1.4 million of costs that will continue to be incurred under the lease contract reduced by estimated sublease rentals that could be reasonably obtained.", - "page_start": 45, - "page_end": 45, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## HON INDUSTRIES Inc. and SUBSIDIARIES\n\n## PRODUCT DEVELOPMENT COSTS\n\nProduct development costs relating to the development of new products and processes, including significant improvements and refinements to existing products, are expensed as incurred. These costs include salaries, contractor fees, building costs, utilities, and administrative fees. The amounts charged against income were $25,791,000 in 2003, $25,849,000 in 2002, and $21,415,000 in 2001.\n\n## STOCK-BASED COMPENSATION\n\nThe Company accounts for its stock option plan using Accounting Principles Board Opinion No. 
25, 'Accounting for Stock Issued to Employees,' whereby stock-based employee compensation is reflected in net income as all options granted under the plan had an exercise price equal to the market value of the underlying common stock on the date of grant. SFAS No. 123, 'Accounting for Stock-Based Compensation' issued subsequent to APB No. 25 and amended by SFAS No. 148, 'Accounting for Stock-Based Compensation - Transition and Disclosure' defines a fair value-based method of accounting for employees' stock options but allows companies to continue to measure compensation cost for employee stock options using the intrinsic value-based method described in APB No. 25.\n\nThe following table illustrates the effect on net income and earnings per share if the Company had applied the fair value recognition provisions of SFAS No. 123, 'Accounting for Stock-Based Compensation,' as amended by SFAS No. 148 'Accounting for StockBased Compensation - Transition and Disclosure,' to stock-based employee compensation.\n\n| (In thousands) | 2003 | 2002 | 2001 |\n|-------------------------------------------------------------------------------------------------------------------------------------------------|---------|---------|---------|\n| Net income, as reported | $ 98.1 | $ 91.4 | $ 74.4 |\n| Deduct: Total stock-based employee compensation expense determined under fair value-based method for all awards, net of related tax effects | (3.0) | (2.2) | (1.4) |\n| Pro forma net income | $ 95.1 | $ 89.2 | $ 73.0 |\n| Earnings per share: | | | |\n| Basic - as reported | $ 1.69 | $ 1.55 | $ 1.26 |\n| Basic - pro forma | $ 1.64 | $ 1.52 | $ 1.24 |\n| Diluted - as reported | $ 1.68 | $ 1.55 | $ 1.26 |\n| Diluted - pro forma | $ 1.62 | $ 1.51 | $ 1.24 |\n\nIncrease in expense in 2003 is due to accelerated vesting upon the retirement of plan participants.\n\n## I NCOME TAXES\n\nThe Company accounts for income taxes under SFAS No. 109, 'Accounting for Income Taxes.' 
This Statement uses an asset and lia-", - "page_start": 44, - "page_end": 44, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## A N D A N A L Y S I S O F F I N A N C I A L C O N D I T I O N A N D R E S U L T S O F O P E R A T I O N S\n\nThe following discussion of the Company's financial condition and results of operations should be read together with the other financial information and consolidated financial statements included in this Annual Report. This discussion contains forward-looking statements that involve risks and uncertainties. The Company's actual results could differ materially from the results anticipated in the forward-looking statements as a result of a variety of factors, including those discussed in 'Forward Looking Statements' and elsewhere in this Annual Report.\n\n## OVERVIEW\n\nThe Company designs, develops, manufactures, markets, sells and distributes products and components, primarily for the medical and health care industry. The Company markets components to other equipment manufacturers for incorporation in their products and sells finished devices to physicians, hospitals, clinics and other treatment centers. The Company's products and services primarily range from ophthalmology and cardiovascular products to fluid delivery devices, contract manufacturing and kitting services. In 2003 approximately 26 percent of the Company's sales were outside the U.S.\n\nThe Company's products are used in a wide variety of applications by numerous customers, the largest of which accounted for approximately 14 percent of net sales in 2003. The Company encounters competition in all of its markets and competes primarily on the basis of product quality, price, engineering, customer service and delivery time.\n\nThe Company's strategy is to provide a broad selection of products and a high level of service in the areas in which it competes. 
The Company focuses its research and development efforts to improve current products and develop highly-engineered products that meet customer needs and have the potential for broad market applications and significant sales. Proposed new products may be subject to regulatory clearance or approval prior to commercialization and the time period for introducing a new product to the marketplace can be unpredictable. The Company is also focused on controlling costs. The Company does this by investing in modern manufacturing technologies and controlling purchasing processes. Over the past three years, the Company has continued to be faced with increasing costs associated with all lines of insurance, including group health benefits. The Company has been successful in consistently generating cash from operations and uses that cash to reduce indebtedness, to fund capital expenditures, to repurchase stock and, starting in 2003, to pay dividends. During 2003, the Company reduced debt by approximately $6.0 million.\n\nThe Company's strategic objective is to further enhance its position in its served markets by:\n\n - · Focusing on customer needs\n - · Expanding existing product lines and developing new products\n - · Maintaining a culture of controlling cost\n - · Preserving and fostering a collaborative, entrepreneurial management structure\n\nFor the year ended December 31, 2003, the Company reported revenues of $62.8 million, income from continuing operations of $4.9 million and net income of $5.1 million, up 5 percent, 20 percent and 95 percent, respectively, from 2002.\n\n## RESULTS OF OPERATIONS", - "page_start": 25, - "page_end": 25, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "As a result of the formation of the Company a portion of the stock compensation cost recorded in 1996 became a temporary difference for which the Company recognized a gross deferred tax asset of $1.4 million in 1997. 
A valuation allowance for this deferred tax asset was established. During 1997, certain of the stock options were exercised resulting in a tax deduction of $1.0 million. Because of the tax loss position of the Company in 1997 in the United States, this tax deduction was not utilized and increased the tax loss carryforward. The Company established a valuation allowance for the deferred tax asset resulting from the tax loss carryforward in the United States. This tax loss carryforward was utilized in 1998 and therefore, $951,553 of the tax benefit was recorded as an adjustment to additional paid in capital.",
      "page_start": 38,
      "page_end": 38,
      "source_file": "NASDAQ_EEFT_2000.pdf"
    },
    {
      "text": "## Long-Range Strategy: Translating Potential Into Results (continued)\n\nAs a leadership team, we have developed a long-range strategic plan to accelerate profitable growth. Our plan includes numerous growth opportunities across our business, and implementation is underway, including:\n\n - · Leveraging sales capabilities and existing CRM (Customer Relationship Management) processes to expand our value-add and reach new customers\n - · Strengthening our position in attractive vertical markets while growing in our core segments\n - · Expanding our products and solutions; growing our core bearings and power transmission business at a rate greater than the market, along with focused product expansion via logical extensions and enhanced local capabilities\n - · Building on our fluid power market leadership via strengthened product offerings and value-added services for OEM and MRO customers\n - · Enhancing our operational excellence by capturing the full benefits of our ERP system and driving continuous improvement with customers, suppliers and throughout our operations\n - · Accelerating strategic acquisitions by leveraging our cash generation and strong financial position to extend into new markets\n\nToday, nearly 90 years since our founding, we 
are well-positioned and committed to realizing our potential - a potential that builds upon a proud past and the dedication of our associates around the globe.\n\nAs we look ahead, we see a bright future with excellent opportunities for growth and increased profitability - organically, via acquisition, and through our technology investments. We are in exciting times, and we firmly believe our best days are ahead.\n\nThank you for your ongoing investment and support of Applied.\n\n\n\n\n\nNeil A. Schrimsher Chief Executive Officer\n\nBenjamin J. Mondics President & Chief Operating Officer\n\nAugust 15, 2012\n\n\n\nCelebrating 90 Years of Strength in Distribution\n\nIn January 2013, Applied Industrial Technologies will celebrate its 90th anniversary. The Company was founded in 1923 by Joseph M. Bruening as The Ohio Ball Bearing Company, a distributor of bearings to customers in Cleveland, Ohio. Over the years, the Company grew to become a regional distributor of bearings, then an international distributor of a wide range of industrial technologies and components. Today, nearly 90 years since our beginning, customers served by Applied benefit from our years of accumulated experience, expertise and exceptional ability to improve our customers' operations.\n\nJoin us as we kick-off a year-long celebration of our strength in distribution. We thank all of you, our stakeholders, for making it possible.", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "## Management's Discussion and Analysis of Financial Condition and Results of Operations\n\n## ACCOUNTING PRINCIPLES ADOPTED IN 2004\n\n## Taxation on Foreign Earnings\n\nIn December 2004, the staff of the Financial Accounting Standards Board ('FASB') issued FASB Staff Position 109-2, 'Accounting and Disclosure Guidance for the Foreign Repatriation Provision within the American Jobs Creation Act of 2004' ('FSP 109-2'). 
FSP 109-2 allows us additional time beyond the financial reporting period in which the Act was enacted to evaluate the effects of the Act on our plans for repatriation of unremitted earnings. Under SFAS 109, we did not historically record a provision for U.S. Federal or State income taxes on undistributed earnings of foreign subsidiaries because such earnings were considered to be indefinitely reinvested in the operations of foreign subsidiaries. Upon the sale of MGM Grand Australia, we did provide deferred taxes of $11 million on the basis that the proceeds would be repatriated without the benefit of the 85 percent one-time deduction provided by the Act. The Act may allow a special one-time deduction of 85 percent of certain repatriated foreign earnings; however, additional clarifying language is necessary to ensure we qualify for the deduction. The potential benefit to us of the repatriation provisions of the Act is $7 million.\n\n## Discontinued operations\n\nIn November 2004, the Emerging Issues Task Force ('EITF') of the FASB reached a consensus on Issue No. 03-13, 'Applying the Conditions in Paragraph 42 of FASB Statement No. 144, Accounting for the Impairment or Disposal of Long-Lived Assets , in Determining Whether to Report Discontinued Operations,' ('EITF 03-13'). EITF 03-13 requires us to analyze whether the cash flows of a disposed component have been eliminated from our ongoing operations and whether we retain a continuing involvement in the operations of the disposed component. If significant migration of customers occurs to our other operations, we would be precluded from classifying a sold or disposed operation as a 'discontinued' operation. EITF 03-13 is effective for components disposed of or classified as held for sale in periods beginning after\n\nDecember 15, 2004, with optional application to components disposed of or classified as held for sale within that fiscal year. 
We did not apply EITF 03-13 to our sale of MGM Grand Australia, but if we had applied EITF 03-13 we still would have classified MGM Grand Australia as a discontinued operations.\n\n## RECENTLY ISSUED ACCOUNTING STANDARDS\n\n## Stock-based Compensation\n\nIn December 2004, the FASB issued FASB Statement No. 123 (revised 2004), 'Share-Based Payment' ('SFAS 123(R)'). Under the original standard, SFAS No. 123, 'Accounting for Stock-Based Compensation' ('SFAS 123'), companies had the option of recording stock options issued to employees at fair value or intrinsic value, which generally leads to no expense being recorded. Most companies, including us, opted to use this intrinsic value method and make required disclosures of fair value expense. SFAS 123(R) eliminates this intrinsic value alternative. SFAS 123(R) is effective for us on July 1, 2005, at which time all future share-based payments must be recorded at fair value. Transition methods are discussed below.\n\nWe must make certain changes in the manner of valuation of options and must make certain decisions which will affect the amount and timing of expense recognition, as discussed below.", - "page_start": 45, - "page_end": 45, - "source_file": "NYSE_MGM_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_AIT_2012.pdf", - "query": "By how much does Applied company plan to contribute to its pension benefits between 2018 and 2022 ?", - "target_page": 36, - "target_passage": "2018 through 2022 15,200", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "## NOTE 22: PENSIONS\n\nWe have contributory and non-contributory defined benefit pension plans that are made available to most of our employees. The plans provide pensions based on years of service, years of contributions and earnings. We do not provide any non-pension post-retirement benefits. 
We also provide unfunded supplemental pension benefits to certain executives.\n\nThe assets of the defined benefit pension plans are held in segregated accounts isolated from our assets. We administer the defined benefit pension plans pursuant to applicable regulations, the Statement of Investment Policies and Procedures and to the mandate of the Pension Committee of the Board of Directors. The Pension Committee of the Board of Directors oversees our administration of the defined benefits pension plans, which includes the following principal areas:", - "page_start": 121, - "page_end": 121, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Retirement Benefits\n\nThe Company has defined contribution profit-sharing plans covering substantially all employees who are not participants in certain defined benefit plans. The Company's annual contribution to the defined contribution plans is based on employee eligible earnings and results of operations and amounted to $26,489,000, $23,524,000, and $24,826,000 in 2003, 2002, and 2001, respectively.\n\nThe Company sponsors defined benefit plans which include a limited number of salaried and hourly employees at certain subsidiaries. The Company's funding policy is generally to contribute annually the minimum actuarially computed amount. Net pension costs relating to these plans were $176,000; $0; and $0 for 2003, 2002, and 2001, respectively. The actuarial present value of obligations, less related plan assets at fair value, is not significant.\n\nThe Company also participates in a multiemployer plan, which provides defined benefits to certain of the Company's union\n\nemployees. Pension expense for this plan amounted to $309,000, $309,000, and $310,000 in 2003, 2002, and 2001, respectively.\n\n## Postretirement Health Care\n\nIn accordance with the guidelines of revised SFAS No. 
132, 'Employers' Disclosures about Pensions and other Postretirement Benefits,' the following table sets forth the funded status of the plan, reconciled to the accrued postretirement benefits cost recognized in the Company's balance sheet at:", - "page_start": 50, - "page_end": 50, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "\n\n## EMPLOYEE RETIREMENT AND BENEFIT PLANS\n\nA noncontributory defined benefit retirement plan is maintained for all regular employees of the Company except those of Quest Medical. This plan was amended effective January 1, 1998 to become a cash balance pension plan. The Company's funding policy is to make the annual contributions required by applicable regulations and recommended by its actuary. The Company uses a December 31 measurement date for the plan.\n\nThe changes in the plan's projected benefit obligation ('PBO') as of December 31, 2003 and 2002 are as follows (in thousands):\n\n| | 2003 | 2002 |\n|---------------------------------|---------|---------|\n| CHANGE IN BENEFIT OBLIGATION: | | |\n| Benefit obligation, January 1 | $ 4,170 | $ 4,599 |\n| Service cost | 214 | 320 |\n| Interest cost | 298 | 307 |\n| Amendments | -- | (616) |\n| Actuarial (gain)/loss | 529 | (93) |\n| Benefits paid | (333) | (347) |\n| Benefit obligation, December 31 | $ 4,878 | $ 4,170 |\n\nIn December 2002, the plan was amended to reduce benefit accruals for future service by plan participants by approximately 50 percent. 
This amendment caused a reduction in the PBO of approximately $616,000, and is reflected as a reduction in pension expense over the estimated employee service lives.\n\nThe changes in the fair value of plan assets, funded status of the plan and the status of the prepaid pension benefit recognized, which is included in the Company's balance sheets as of December 31, 2003 and 2002 are as follows (in thousands):\n\n| | 2003 | 2002 |\n|----------------------------------------|---------|---------|\n| CHANGE IN PLAN ASSETS: | | |\n| Fair value of plan assets, January 1 | $ 4,383 | $ 4,550 |\n| Actual return on plan assets | 963 | (750) |\n| Employer contributions | 400 | 930 |\n| Benefits paid | (333) | (347) |\n| Fair value of plan assets, December 31 | $ 5,413 | $ 4,383 |\n| Funded status of plan | $ 535 | $ 213 |\n| Unrecognized actuarial loss | 1,941 | 2,154 |\n| Unrecognized prior service cost | (502) | (539) |\n| Unrecognized net transition obligation | (88) | (132) |\n| Net amount recognized as other assets | $ 1,886 | $ 1,696 |", - "page_start": 21, - "page_end": 21, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "## NOTES TO CONSOLIDATED FINANCIAL STATEMENTS (Continued)\n\n(In thousands, except per share amounts)\n\n## Salary Continuation Benefits\n\nThe Company has agreements with certain retirees of acquired companies to pay monthly retirement benefits through fiscal 2020.\n\n## Retiree Health Care Benefits\n\nThe Company provides health care benefits to eligible retired associates who pay the Company a specified monthly premium. Premium payments are based upon current insurance rates for the type of coverage provided and are adjusted annually. Certain monthly health care premium payments are partially subsidized by the Company. 
Additionally, in conjunction with a fiscal 1998 acquisition, the Company assumed the obligation for a postretirement medical benefit plan which provides health care benefits to eligible retired associates at no cost to the individual.\n\nThe Company uses a June 30 measurement date for all plans.\n\nThe following table sets forth the changes in benefit obligations and plan assets during the year and the funded status for the postemployment plans at June 30:", - "page_start": 33, - "page_end": 33, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "## NOTES TO CONSOLIDATED FINANCIAL STATEMENTS (Continued)\n\n(In thousands, except per share amounts)\n\n## Plan Assets\n\nThe fair value of each major class of plan assets for the Company's Qualified Benefit Retirement Plan are valued using quoted market prices in active markets for identical instruments, or Level 1 in the fair value hierarchy. Following are the fair values and target allocation as of June 30:\n\n| | Target Allocation | Fair Value | Fair Value |\n|-------------------|---------------------|--------------|--------------|\n| | | 2012 | 2011 |\n| Asset Class: | | | |\n| Equity securities | 40 - 70% | $ 3,735 | $ 3,876 |\n| Debt securities | 20 - 50% | 2,382 | 1,756 |\n| Other | 0 - 20% | 322 | 424 |\n| Total | 100% | $ 6,439 | $ 6,056 |\n\nEquity securities do not include any Company common stock.\n\nThe Company has established an investment policy and regularly monitors the performance of the assets of the trust maintained in conjunction with the Qualified Defined Benefit Retirement Plan. The strategy implemented by the trustee of the Qualified Defined Benefit Retirement Plan is to achieve long-term objectives and invest the pension assets in accordance with ERISA and fiduciary standards. 
The long-term primary objectives are to provide for a reasonable amount of long-term capital, without undue exposure to risk; to protect the Qualified Defined Benefit Retirement Plan assets from erosion of purchasing power; and to provide investment results that meet or exceed the actuarially assumed long-term rate of return. The expected long-term rate of return on assets assumption was developed by considering the historical returns and the future expectations for returns of each asset class as well as the target asset allocation of the pension portfolio.\n\n## Cash Flows\n\n## Employer Contributions\n\nThe Company expects to contribute $6,000 to its pension benefit plans and $240 to its retiree health care benefit plans in 2013. Contributions do not equal estimated future payments as certain payments are made from plan assets.\n\n## Estimated Future Benefit Payments\n\nThe following benefit payments, which reflect expected future service, as applicable, are expected to be paid in each of the next five years and in the aggregate for the subsequent five years:\n\n| During Fiscal Years | Pension Benefits | Retiree Health Care Benefits |\n|-----------------------|--------------------|---------------------------------|\n| 2013 | $ 6,200 | $ 240 |\n| 2014 | 5,900 | 240 |\n| 2015 | 5,700 | 240 |\n| 2016 | 4,500 | 240 |\n| 2017 | 1,700 | 260 |\n| 2018 through 2022 | 15,200 | 1,420 |", - "page_start": 35, - "page_end": 35, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "## Key Executive Restoration Plan\n\nIn fiscal 2012, the Executive Organization & Compensation Committee of the Board of Directors adopted the Key Executive Restoration Plan (KERP), an unfunded, non-qualified deferred compensation plan, to replace the SERP. 
The Company recorded $128 of expense associated with this plan in fiscal 2012.\n\n## Qualified Defined Benefit Retirement Plan\n\nThe Company has a qualified defined benefit retirement plan that provides benefits to certain hourly associates at retirement. These associates do not participate in the Retirement Savings Plan. The benefits are based on length of service and date of retirement.", - "page_start": 32, - "page_end": 32, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "## Pension Obligations\n\nOur retiree pension plans had a funding deficit of approximately $172 million at December 31, 2013. We have been making special minimum monthly payments in addition to our regular contributions to eliminate the pension liability. During 2013, our funding deficit was reduced by $162 million.\n\nThe special payments, including contributions associated with benefits paid from the plans, were approximately $7 million in 2013. We expect our total estimated funding requirements to be $96 million in 2014 and to be adjusted annually thereafter, based on various market factors such as interest rates and expected returns and staffing assumptions.\n\nChanges in factors such as the discount rate, increase in compensation and the expected return on plan assets can affect the accrued benefit obligation, pension expense and the deficiency of plan assets over\n\naccrued obligations in the future. See Critical accounting estimates for more information.\n\n## Purchase of Annuities\n\nFrom time to time we have made additional lump-sum contributions to our pension plans, and the pension plans have purchased annuities from insurance companies to fund the pension benefit obligations for certain groups of retired employees in the plans. 
Purchasing the annuities relieves us of our primary responsibility for that portion of the accrued benefit obligations for the retired employees and eliminates the significant risk associated with the obligations.\n\nWe did not make any additional lump-sum contributions to our pension plans in 2013 or 2012, and the pension plans did not purchase additional annuities.\n\n## FINANCIAL RISK MANAGEMENT\n\nWe normally use three categories of derivative instruments to manage risks related to our business activities:\n\n| Categories | The risk it manages | Types of derivative instruments |\n|-------------------------|----------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|\n| Debt Derivatives | GLYPH<129> Impact of fluctuations in foreign exchange rates on principal and interest payments for US denominated long-term debt | GLYPH<129> Cross-currency interest rate exchange agreements GLYPH<129> Forward foreign exchange agreements (from time to time, as applicable) |\n| Expenditure Derivatives | GLYPH<129> Impact of fluctuations in foreign exchange rates on forecasted US dollar denominated expenditures | GLYPH<129> Forward foreign exchange agreements |\n| Equity Derivatives | GLYPH<129> Impact of fluctuations in share price on stock-based compensation expense | GLYPH<129> Total return swap agreements |\n\nWe also manage our exposure to fluctuating interest rates and we have fixed the interest rate on 95.3 % of our debt including short-term borrowings at December 31, 2013 (2012 - 100 % ).\n\n## Debt Derivatives\n\nWe use cross currency interest exchange agreements (Debt Derivatives), to hedge the foreign exchange risk on all of the principal and interest obligations of our US dollar denominated senior notes and debentures. 
At December 31, 2013 we used Debt Derivatives to hedge the foreign exchange risk on 100 % of the principal and interest obligations on all our US dollar denominated debt. We use Debt Derivatives for risk management purposes only.\n\nDuring 2013, we completed Debt Derivatives transactions as follows:", - "page_start": 65, - "page_end": 65, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Performance Grants\n\nIn fiscal 2009 and 2008, the Executive Organization and Compensation Committee made annual awards of three-year performance grants to key officers. A target payout was established at the beginning of each three-year performance period. The actual payout at the end of the period is calculated based upon the Company's achievement of sales growth, return on sales, and total shareholder return targets. All performance periods had expired by June 30, 2011. During fiscal 2011 and 2010, the Company recorded $1,020 and $(231), respectively, of compensation expense (income) for achievement relative to the total shareholder return-based goals of the Company's performance grants. The liability at June 30, 2011 was $1,558; this was paid in fiscal 2012.\n\n## NOTE 10: BENEFIT PLANS\n\n## Retirement Savings Plan\n\nSubstantially all U.S. associates participate in the Applied Industrial Technologies, Inc. Retirement Savings Plan. Participants may elect to contribute up to 50% of their compensation, subject to Internal Revenue Code maximums. The Company makes a discretionary profit-sharing contribution to the Retirement Savings Plan generally based upon a percentage of the Company's U.S. income before income taxes and before the amount of the contribution (5% for fiscal 2012, 2011 and 2010). The Company partially matches 401(k) contributions by participants; this match was suspended from January 1, 2009 to June 30, 2010. 
The Company's expense for profit sharing and matching of associates' 401(k) contributions was $10,866, $11,251 and $4,891 during fiscal 2012, 2011 and 2010, respectively.\n\n## Deferred Compensation Plans\n\nThe Company has deferred compensation plans that enable certain associates of the Company to defer receipt of a portion of their compensation and non-employee directors to defer receipt of director fees. The Company funds these deferred compensation liabilities by making contributions to rabbi trusts. Assets held in these rabbi trusts consist of investments in money market and mutual funds and Company common stock.\n\n## Postemployment Benefit Plans\n\nThe Company provides the following postemployment benefits which, except for the Qualified Defined Benefit Retirement Plan, are unfunded:\n\n## Supplemental Executive Retirement Benefits Plan\n\nThe Company has a non-qualified pension plan to provide supplemental retirement benefits to certain officers. Benefits are payable beginning at retirement and determinable at retirement based upon a percentage of the participant's historical compensation. On December 19, 2011, the Executive Organization and Compensation Committee of the Board of Directors froze participant benefits (credited service and final average earnings) and entry into the Supplemental Executive Retirement Benefits Plan (SERP) effective December 31, 2011. This action constituted a plan curtailment. The plan liability was remeasured in conjunction with the curtailment using a 3.5% discount rate and participant final average earnings through the curtailment date. 
The remeasurement in conjunction with the curtailment resulted in an actuarial loss (recorded in other comprehensive income (loss)) of $302 ($492 loss, net of income tax of $190).\n\nThe curtailment is reflected in the Company's consolidated balance sheets as: 1) a reduction to the overall SERP liability (included in postemployment benefits) of $8,860, 2) a reduction to deferred tax assets of $3,411 and 3) an increase in accumulated other comprehensive income (loss) of $5,449. Prior service costs previously recorded through accumulated other comprehensive income (loss) were reclassified into the statements of consolidated income ($3,117 gross expense, net of income tax of $1,200). The gross expense is recorded in selling, distribution and administrative expense in fiscal 2012.\n\n## Key Executive Restoration Plan", - "page_start": 32, - "page_end": 32, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "## Nordstrom, Inc.\n\n## Notes to Consolidated Financial Statements\n\nDollar and share amounts in millions except per share, per option and per unit amounts\n\n## NOTE 5: SELF-INSURANCE\n\nOur self-insurance reserves are summarized as follows:\n\n| | January 31, 2015 | February 1, 2014 |\n|------------------------------|--------------------|--------------------|\n| Workers' compensation | $70 | $66 |\n| Employee health and welfare | 23 | 23 |\n| General liability | 16 | 16 |\n| Total self-insurance reserve | $109 | $105 |\n\nOur workers' compensation policies have a retention per claim of $1 or less and no policy limits.\n\nWe are self-insured for the majority of our employee health and welfare coverage and we do not use stop-loss coverage. 
Participants contribute to the cost of their coverage through both premiums and out-of-pocket expenses and are subject to certain plan limits and deductibles.\n\nOur general liability policies, encompassing employment practices liability and commercial general liability, have a retention per claim of $3 or less and a policy limit up to $30 and $150, respectively.\n\n## NOTE 6: 401(k) PLAN\n\nWe provide a 401(k) plan for our employees that allows for employee elective contributions and discretionary company contributions. Employee elective contributions are funded through voluntary payroll deductions. Our discretionary company contribution is funded in an amount determined by our Board of Directors each year. Our expense related to company contributions totaled $77, $77 and $83 in 2014, 2013 and 2012.\n\n## NOTE 7: POSTRETIREMENT BENEFITS\n\nWe have an unfunded defined benefit Supplemental Executive Retirement Plan ('SERP'), which provides retirement benefits to certain officers and select employees. The SERP has different benefit levels depending on the participant's role in the company. At the end of 2014, we had 59 participants in the plan, including 27 officers and select employees eligible for SERP benefits, 31 retirees and 1 beneficiary. 
This plan is non-qualified and does not have a minimum funding requirement.", - "page_start": 61, - "page_end": 61, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "The cost of pensions is actuarially determined and takes into account the following assumptions and methods for pension accounting related to our defined benefit plans:\n\n - GLYPH<129> the expected rates of salary increases for calculating increases in future benefits\n - GLYPH<129> mortality rates for calculating the life expectancy of plan members, and\n - GLYPH<129> past service costs from plan amendments are immediately expensed in net income.\n\nWe recognize contributions to defined contribution plans as an employee benefit expense in operating costs in the consolidated statements of income in the periods the employees provide the related services.\n\nSee note 22 for more information about our pension plans.\n\n## Termination Benefits\n\nWe recognize termination benefits as an expense when we are committed to a formal detailed plan to terminate employment before the normal retirement date and it is not realistic that we will withdraw it.\n\n## Property, Plant and Equipment\n\nRecognition and Measurement\n\nWe recognize property, plant and equipment at cost, less accumulated depreciation and accumulated impairment losses.\n\nCost includes expenditures that are directly attributable to the acquisition of the asset. The cost of self-constructed assets also includes:", - "page_start": 101, - "page_end": 101, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_AIT_2012.pdf", - "query": "What does Applied has to say regarding the potential creadit risk it could be exposed to ?", - "target_page": 21, - "target_passage": "The Company has a broad customer base representing many diverse industries primarily across North America. 
As such, the Company does not believe that a significant concentration of credit risk exists in its accounts receivable", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## ILO 'List of Occupational Diseases Recommendation'\n\n## 2.4. Mental and behavioural disorders\n\n - · 2.4.1. Post-traumatic stress disorder\n - · 2.4.2. Other mental or behavioural disorders not mentioned in the preceding item where a direct link is established scientifically, or determined by methods appropriate to national conditions and practice, between the exposure to risk factors arising from work activities and the mental and behavioural disorder(s) contracted by the worker\n\nAnd there are also emerging and new risks where health data will not be available until a certain number of workers are exposed for quite a while . Some prominent examples are nanotechnologies, the significant increase of new chemically based technologies, vision impairment due to long hours of work under artificial light at the same distance with small digital equipment, 183 more exposure to 'global' biological agents due to more interactional tasks, and travel and transport between countries and continents. On that note, the Covid-19 pandemic could also be used as an example. In 2022, the Commission proposed an update of the Recommendation on the ESOD to recognise Covid-19 as an occupational disease for workers particularly concerned: health and social care, home help or where there is a proven risk of infection (during a pandemic) in other sectors 184 .\n\nIt adds to these difficulties that workers are often not only exposed to one disease causing exposure but to several exposures at the same time (exposure is understood here in a broad sense: ranging from long working hours over postures and movements to harassment and violence and to noise and chemical and biological substances, etc.). 
In theory, a single risk - if below the threshold limit values and in line with legislation and standards will not cause harm - given that it is the only exposure . The impact of this single exposure is not strong enough to generate a disease on the level of severity of a recognised occupational disease. A combination of several risks might add several exposures, worsen the impact and cause serious harm.\n\nQuite well studied is the increased prevalence of musculoskeletal diseases, if not only ergonomic risks but also high psychosocial risks are prevalent at the workplace. 185 Research has also found unexpected connections like the synergistic effect of noise and certain chemicals on hearing impairments. Such outcomes of multi-risk profiles are often particularly difficult to identify and understand. Obviously, most sectors and occupations involve workplaces with multi-risk profiles . Some prominent major risks in certain sectors or occupations are:", - "page_start": 75, - "page_end": 75, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- · Impact of international and global supply chains on OSH: Does it improve or worsen the working conditions in the EU? Research could try to estimate the risk-reducing impact of the shift of some high-risk productions to enterprises outside the EU, for example, mining, base chemicals, recycling and so on (export of risks), and to estimate the OSH impact of EU export production, for example, vehicles, specialty chemicals, machines for risks at work inside the EU (import of risks).\n - · It would also be a big step forward if research could achieve an agreed standard value or a standard range (as reliable as possible) for the attributable fraction of work to widespread diseases, that is, cardiovascular diseases, mental and behavioural disorders, musculoskeletal diseases and cancer.\n - · Compliance with and impact of legislation. 
Currently, there are data on the percentage of enterprises with a risk assessment but very limited information about the quality of these risk assessments and of implemented risk management and reduction measures . Previous studies indicate that in many cases the risk assessment is conducted by an enterprise just to comply with legal obligations (paper compliance). A possible approach could be an anonymous evaluation of the quality of a representative share of risk assessments.", - "page_start": 139, - "page_end": 139, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "\n\nIf a risk assessment is conducted just for compliance purposes , and not used appropriately for the successful management of OSH and reduction of accidents and occupational diseases, the risk assessment may lose its dynamic nature, and findings may be neither implemented nor communicated appropriately to employees.\n\nThe types of risks included in risk assessments are related to the risk profiles of different sectors, for example, it is likely that risk assessments in heavy industries and manual occupations focus more on safety risks. However, while sectoral risk profiles will naturally bias the identification of risks, smaller establishments seem to have less of a focus on MSDs or psychosocial risk factors , which would suggest that they are less well recognised or understood, in particular for MSEs. 415 Establishments also report that psychosocial risk factors are more difficult to manage than other OSH risks, while as business size grows, so does the proportion of respondents who perceive psychosocial risks as more difficult to manage than other OSH risks. 416\n\nESENER 2019 shows that a reluctance to talk openly about these issues seems to be the main difficulty for addressing psychosocial risks (60% of establishments in the EU27). 
This, as with all the other difficulties considered (lack of awareness among staff/management and lack of expertise or specialist support), is reported in all enterprise sizes but more frequently as establishment size grows.\n\nSpecifically, among those establishments that report having to deal with difficult customers, patients or pupils, 51% of those employing 20 or more workers report having a procedure in place to deal with possible cases of threats, abuse or assaults by clients, patients or other external persons. This share rises to 74% among establishments in human health and social work activities.\n\nThe development of concrete outputs such as measures to better manage risks that can result in musculoskeletal diseases has actually seen a decline between 2014 and 2019, as follows:\n\n - · 85% to 77% on the measure of 'provision of equipment to help with the lifting or moving of loads or other physical heavy work'; 417\n - · 73% to 67% concerning 'provision of ergonomic equipment'; and\n - · 66% to 60% regarding 'encouraging regular breaks for people in uncomfortable or static postures including prolonged sitting'. 418", - "page_start": 127, - "page_end": 127, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Particularly difficult is the assessment of the quality of risk assessments . A complete quality assessment would require specific knowledge of several aspects: of the specific topic, of the - real situation at the workplaces in an enterprise, and of the expected reduction of these risks by the proposed or recommended risk mitigation measures. This has rarely been done. Moreover, even inside one enterprise the quality of a risk assessment might differ depending on the topic , for example, between 'easier' topics as 'correct provision of warning signals' or 'adequate temperatures', and more complex topics like psychosocial, musculoskeletal, or chemical and biological risks. 
414", - "page_start": 126, - "page_end": 126, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "The EU treaties form the legal background for the development of specific EU legislation, related to working conditions in general and OSH in particular. In 1989, the EU agreed on the Framework Directive , a major step regarding OSH. 340 This directive introduced a distinguished preventive approach, based on a comprehensive risk assessment, as a dominant legal standard across all Member States. Its legal obligations prescribe several basic principles:\n\n - · the responsibility of employers for OSH, that is, 'the employer shall take the measures necessary for the safety and health protection of workers, including prevention of occupational risks and provision of information and training', 341 and the obligation of workers ' to take care as far as possible of his own safety and health and that of other persons affected …' ; 342\n - · the obligation to evaluate all risks (risk assessment);\n - · the preference of the risk elimination at source (combating the risk at source), a hierarchy of prevention measures, replacing the dangerous by the non- or the less dangerous;", - "page_start": 117, - "page_end": 117, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## Management's Discussion and Analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\n## PART VIII\n\n## Risk Management\n\nKillam faces a variety of risks, the majority of which are common to real estate entities. Real estate investments are generally subject to varying degrees of risk, depending on the nature of the property. 
These risks include (i) changes in general economic conditions, (ii) changes in local conditions (such as an oversupply of space or a reduction in demand for real estate in the area), (iii) changes to government regulations (such as new or revised residential tenant legislations), (iv) competition from others with available space, and (v) the ability of the landlord or owner to provide adequate maintenance economically.\n\nReal estate is relatively illiquid. Such illiquidity will tend to limit Killam's ability to rebalance its portfolio promptly in response to changing economic or investment conditions. In addition, financial difficulties of other property owners, resulting in distress sales, may depress real estate values in the markets in which the company operates.\n\nKillam's exposure to general risks associated with real estate investments is mitigated with both its geographic diversification, and investments in both apartments and mHcs.\n\nKillam is exposed to other risks, as outlined below:\n\n## Interest Rate Risk\n\nInterest risk is the risk that the Company would experience lower returns as the result of its exposure to a higher interest rate environment. The Company is exposed to interest rate risk as a result of its mortgages and loans payable, however this risk is mitigated through the Company's strategy to have the majority of its mortgages payable in fixed-term arrangements. The Company also structures its financings so as to stagger the maturities of its debt, minimizing the Company's exposure to interest rates in any one year.\n\nAs at December 31, 2013, no mortgages or vendor debt had floating interest rates except for four demand loans totaling $3.9 million. These loans have an interest rate of prime plus 1.0% - 2.0% (December 31, 2012 - prime plus 1.0% - 1.5%). 
Killam also has one construction loan of $14.8 million with a floating interest rate of prime plus 0.75% and consequently, Killam is exposed to short-term interest rate risk on these loans.\n\n## Liquidity Risk\n\nLiquidity risk is the risk that the Company may not have access to sufficient debt and equity capital to fund its growth program and/or refinance its debt obligations as they mature. Senior Management manages the Company's cash resources based on financial forecasts and anticipated cash flows. The maturities of the Company's long-term financial liabilities are set out in Notes 12 to 15 of the consolidated financial statements. The Company structures its financings so as to stagger the maturities of its debt, thereby minimizing the Company's exposure to liquidity risk in any one year. In addition, the Company's apartments qualify for CMHC insured debt, reducing the refinancing risk on mortgage maturities. The Company's MHCs do not qualify for CMHC insured debt, however, they continue to have access to mortgage debt.\n\n## Increased Supply Risk", - "page_start": 58, - "page_end": 58, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "We do not believe the adoption of SFAS 123(R) will have a material impact on our cash flows or financial position.\n\n## Market Risk\n\nMarket risk is the risk of loss arising from adverse changes in market rates and prices, such as interest rates, foreign currency exchange rates and commodity prices. Our primary exposure to market risk is interest rate risk associated with our", - "page_start": 46, - "page_end": 46, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## Enterprise Risk Management\n\nOur Enterprise Risk Management program seeks to ensure we identify, assess, manage, monitor and communicate risk consistently throughout the company and that we manage risk in a way that supports our strategic and business goals. 
This program supports the Audit Committee and the Board's responsibility for risk by facilitating a formal strategic risk assessment process.\n\nWe carry out an annual strategic risk assessment to identify our principal risks and their potential impact on our ability to achieve our strategic plans. This assessment includes reviewing risk reports, audit reports and industry benchmarks, and interviewing key risk owners. We also conduct a formal survey every two years to get management feedback on the key risks facing the organization and identify emerging risks. Then we prioritize the risks using standard risk assessment criteria. Enterprise Risk Management reports the results of the strategic risk assessment to the Executive Leadership Team and the Audit Committee.\n\nThe Executive Leadership Team is responsible for approving our enterprise risk policies and for identifying and assessing the key risks that affect our ability to meet our corporate objectives. It is also responsible for monitoring these key risks and our action plans to mitigate these risks.\n\nManagement develops risk management plans. They are responsible for identifying, assessing, managing and monitoring risks in the business units impacting our strategic and business plans, and reporting to the Executive Leadership Team and Enterprise Risk Management.", - "page_start": 75, - "page_end": 75, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## 27. Contingent liabilities\n\nThe Group had contingent liabilities at 30 June 2013 in respect of guarantees. Bank guarantees have been given by Kingsgate's controlled entities to participating banks in the syndicated loan facility and corporate loan facility as described in Note 16 as part of the security package. These guarantees may give rise to liabilities in the parent entity if the controlled entities do not meet their obligations under the terms of the loans subject to guarantees. 
No material losses are anticipated in respect of the above contingent liabilities.\n\nIncluded in non-current other asset is $1,838,000 relating to restricted cash deposits against bank guarantees supporting the rehabilitation bond requirements against the Group's mining operations.\n\n## 28. Financial risk management and instruments\n\n## Financial risk management\n\nThe Group's activities expose it to a variety of financial risks: market risk (including foreign currency risk, price risk, fair value risk, and interest rate risk), credit risk and liquidity risk.\n\nAt this point, the Directors believe that it is in the interest of shareholders to expose the Group to foreign currency risk, price risk and interest rate risk. Therefore, the Group does not employ any derivative hedging of foreign currency or interest rate risks though has entered into forward gold sale contracts to manage Australian gold price risk in respect of the forecast production from the Challenger Mine (refer 'commodity price risk' section below). The Directors and management monitor these risks, in particular market forecasts of future movements in foreign currency and prices movements and if it is to be believed to be in the interests of shareholders will implement risk management strategies to minimise potential adverse effects on the financial performance of the Group.\n\nRisk management is carried out by the senior executive team. 
The Board provides written principles for overall risk management, as well as policies covering specific areas, such as foreign exchange risk, interest rate risk, credit risk, use of derivative financial instruments and non-derivative financial instruments, and investment of excess liquidity.\n\nThe Group holds the following financial instruments:\n\n| | 2013 $'000 | 2012 $'000 |\n|-------------------------------------|---------------|---------------|\n| Financial assets | | |\n| Cash and cash equivalents | 32,987 | 90,623 |\n| Receivables | 9,431 | 12,226 |\n| Restricted cash | 5,474 | - |\n| Available-for-sale financial assets | 767 | 1,751 |\n| Other financial assets | 7,808 | 4,670 |\n| Total financial assets | 56,467 | 109,270 |\n| Financial liabilities | | |\n| Payables | ( 47,106) | (49,278) |\n| Borrowings | (202,565) | (157,544) |\n| Derivatives held for trading | (1,271) | (2,685) |\n| Total financial liabilities | (250,942) | (209,507) |\n\n## (a) Market risk\n\n## Foreign exchange risk\n\nThe Group operates internationally and is exposed to foreign exchange risk arising from currency exposures, primarily with respect to the US dollar and Thai Baht and as discussed earlier, no financial instruments are employed to mitigate the exposed risks. This is the Group's current policy and it is reviewed regularly including forecast movements in these currencies by management and the Board.\n\nCurrent year foreign exchange risks arise primarily from:", - "page_start": 100, - "page_end": 100, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "Acombination of the above questions is also relevant-how does the range of outcomes at 2°C compare to that at 1.5°C? This is also relevant to adaptation policy, as it can inform assessment on whether to adapt to potential impacts at 2°C or just 1.5°C. 
Putting in place adaptation measures to deal with potential impacts at 1.5°C and then increasing these to deal with 2°C later may be more expensive and difficult than adapting to potential risks at 2°C at the outset. On the other hand, because adaptation actions may themselves have consequences, unnecessary overadaptation may have undesirable effects which it may be preferable to avoid or at least delay until absolutely necessary.\n\nBoth questions require an appropriate assessment of uncertainty. There are considerable uncertainties in projections of regional climate change, with different climate models projecting regional climate changes that can differ in magnitude or even, in the case of precipitation and impacts quantities strongly related to this, differ in sign [5,6]. This may have important implications for regional impacts at specific levels of global warming. A common approach to exploring and presenting such uncertainties is to examine the ensemble mean and the level of consensus among the ensemble members on the sign of the change. While this can often be useful in informing an assessment of the level of confidence in future projections, it may not always be sufficient to fully inform decisions. Risk assessment approaches require consideration of a range of possible risks, not just the most likely. This paper explores a range of regional climate states and related impacts that occur at global warming of 2°C, and a range of differences with warming limited to 1.5°C.\n\nWe examine the implications of our new climate projections by applying some commonly used indices of climate extremes, and a further index quantifying relative vulnerability to food insecurity which combines climate extremes indices with information on a range of factors representing sensitivity and adaptability of food systems to climate hazards. 
We also use the climate projections to drive a global land surface model to simulate changes in run-off as an indicator of freshwater availability. We assess whether regional extremes are projected to increase or decrease at 2°C global warming, and whether the consequent impact on drought and vulnerability to food insecurity become greater or smaller. We also assess whether these changes are reduced by limiting global warming to 1.5°C. We explore some of the uncertainties in these projections, and, in particular, examine whether the use of ensemble-mean projections is a useful simple guide to impacts projections or whether this can lead to a misleading impression for some impacts. Regarding vulnerability to food insecurity, we consider the impacts of global warming at 1.5°C and 2°C alongside socio-economic influences that affect the sensitivity to climate change. Wealso consider our climate-change impacts results in comparison with other studies using older, lower-resolution climate projections.\n\nA large number of previous studies have assessed potential impacts of future climate change using the 5th Coupled Model Intercomparison Project (CMIP5) ensemble or subsets of this [7], and some have framed this in terms of impacts at global warming of 1.5°C and/or 2°C [8,9]. We also base our study on a subset of CMIP5 projections, but use a new, higher-resolution atmosphere model to provide greater spatial detail and improved representation of atmospheric processes.\n\n## 2. 
Methods and models\n\n## (a) Global climate simulations at 1.5 ° Cand2 ° Cglobalwarming", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed11.pdf" - } - ] - }, - { - "references": { - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf", - "query": "To what system of logic do OWL ontologies belong to ?", - "target_page": 7, - "target_passage": "OWL ontologies are an implementation of Description Logic (DL) which is a decidable subset of First Order Logic", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "next section. Which option you choose for your ontology will depend on the specific requirements you have as well as the standards established by your organization or organizations that you work with.\n\nFinally, another name related concept you should be aware of is the concept of a namespace. If you have worked with most modern programming languages such as Python or Java, you are already familiar with the concept of a namespace. The concept is identical in OWL. A namespace is used to avoid naming conflicts between different ontologies. For example, you may have a class called Network in an ontology about telecommunications. You might also have a class called Network in an ontology about graph theory. The two concepts are related but are different. Just as with programming languages you use namespace prefixes to determine what specific namespace a name refers to. E.g., in this example you might have the prefix tc for the Telecom ontology and gt for the Graph Theory ontology. Thus, when you referred to the Network class for the Telecom ontology you would use tc:Network and gt:Network for the graph theory class.\n\nNote that you already have some experience with other namespaces. The OWL namespace prefix is owl and is used to refer to classes such as owl:Thing and owl:Nothing . 
The Resource Description Framework Schema (RDFS) is a model that OWL is built on top of and thus some properties that ontologies use such as rdfs:label leverage this namespace.\n\nIn the bottom view of the Active ontology tab there is a tab called Ontology Prefixes. This tab shows all the current namespace mappings in your ontology. There are certain concepts from OWL, RDF, RDFS, XML and XSD that are required for every ontology, so those namespaces are by default mapped in every new Protégé ontology. There is also a mapping to the empty string for whatever the namespace is for your ontology. This allows you to display and refer to entities in your ontology without entering a namespace prefix. If you look at that tab now you should see a row where the first column is blank, and the second column has the base IRI for your ontology. It should be the same IRI as the Ontology IRI at the top of the Active ontology tab, except it also has a # sign at the end. E.g., the Pizza tutorial developed for this tutorial has an IRI of: http://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial and the row that has a blank first column in Ontology Prefixes has the IRI:\n\nhttp://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial#.", - "page_start": 61, - "page_end": 61, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "To understand what is going on you first need to understand that each SPARQL query consists of two parts. The first part at the beginning consists of several namespace prefixes. These statements consist of the prefix used for a particular namespace as well as the IRI associated with this namespace. Recall that these concepts were described in chapter 7. You may be wondering where all these prefixes came from since you didn't add them to your ontology. 
The answer is that every OWL ontology comes with a set of namespaces and prefixes that are required to define the ontology.\n\nAlso, to understand SPARQL you need to 'peak under the hood' of OWL. So far, we have been discussing concepts in purely logical and set theoretic terms, i.e., at the semantic level. However, like any language or database there is a lower level that describes how the concepts are mapped to actual data. In a relational database the fundamental construct to represent data is a table. In OWL the fundamental construct is a triple. OWL is actually built on top of RDFS which is a language built on top of RDF. RDF (Resource Description Framework) is a language to describe graphs (in the mathematical sense of the term). I.e., to describe nodes and links.\n\nThe foundation for RDF graphs are triples consisting of a subject, predicate, and object. This results in what is called an undirected or network graph because objects can be subjects and vice versa. Whenever you define a property in OWL you are defining a predicate. An individual can be a subject or an object (or both). E.g., in our ontology Customer1 purchasedPizza AmericanaHotPizza1 . In this example Customer1 is the subject, purchasedPizza is the predicate and AmericanaHotPizza1 is the object.\n\nHowever, classes and properties themselves are also represented as triples. So for example, when you create the class Pizza what Protégé does for you is to add the triple: Pizza rdf:type owl:Class to the ontology. I.e., the Pizza entity is of type (is an instance of) owl:Class . Similarly when you add NamedPizza as a subclass of Pizza , Protégé adds the triple: NamedPizza rdfs: s ubClassOf Pizza .\n\nHopefully, now you can make some sense of this initial query. The query is looking for all the entities that are the subjects of triples where the predicate is rdfs: s ubClassOf and the object is any other entity. The ? 
before a name indicates that the name is a wildcard that can match anything that fits with the rest of the pattern. This is part of the power of SPARQL, one can match a Subject, an Object, a Predicate or even all three. Making all 3 parts of the pattern wildcards would return every triple in the graph (in this case our entire Pizza ontology) being searched. You may notice that in some cases the object is simply the name of a class while in others it is a class expression with an orange circle in front of it. This is because when defining classes using DL axioms Protégé creates anonymous classes that correspond to various DL axioms.\n\nThe SELECT part of a SPARQL query determines what data to display. The WHERE part of a query determines what to match in the query. If you want to display everything matched in the WHERE clause you can just use a * for the SELECT clause. The initial default query in this tab is set up with no knowledge of the specific ontology. I.e., it will return all the classes that are subclasses of other classes regardless of the ontology. To get information about Pizzas the first thing we need to do is to add another prefix to the beginning of the query. In our case the Pizza ontology has been set up with a mapping to the prefix pizza (you can see this in the ontology prefixes tab in the Active ontology tab discussed in chapter 7). So, add the following to the SPARQL query after the last PREFIX statement:\n\n## PREFIX pizza: ", - "page_start": 68, - "page_end": 68, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "new formal systems have been proposed. There are disagreements about what makes a formal system a logic. [22] For example, it has been suggested that only logically complete systems, like first-order logic, qualify as logics. For such reasons, some theorists deny that higher-order logics are logics in the strict sense. 
[23]\n\n## Informal logic\n\n\n\nFormal logic needs to translate natural language arguments into a formal language, like first-order logic, to assess whether they are valid. In this example, the letter \"c\" represents Carmen while the letters \"M\" and \"T\" stand for \"Mexican\" and \"teacher\". The symbol \" ∧ \" has the meaning of \"and\".\n\nWhen understood in a wide sense, logic encompasses both formal and informal logic. [24] Informal logic uses non-formal criteria and standards to analyze and assess the correctness of arguments. Its main focus is on everyday discourse. [25] Its development was prompted by difficulties in applying the insights of formal logic to natural language arguments. [26] In this regard, it considers problems that formal logic on its own is unable to address. [27] Both provide criteria for assessing the correctness of arguments and distinguishing them from fallacies. [28]\n\nMany characterizations of informal logic have been suggested but there is no general agreement on its precise definition. [29] The most literal approach sees the terms \"formal\" and \"informal\" as applying to the language used to express arguments. On this view, informal logic studies arguments that are in informal or natural language. [30] Formal logic can only examine them indirectly by translating them first into a formal language while informal logic investigates them in their original form. [31] On this view, the argument \"Birds fly. Tweety is a bird. Therefore, Tweety flies.\" belongs to natural language and is examined by informal logic. But the formal translation \"(1) ; (2) ; (3) \" is studied by formal logic. [32] The study of natural language arguments comes with various difficulties. For example, natural language expressions are often ambiguous, vague, and context-dependent. [33] Another approach defines informal logic in a wide sense as the normative study of the standards, criteria, and procedures of argumentation. 
In this sense, it includes questions about the role of rationality, critical thinking, and the psychology of argumentation. [34]\n\nAnother characterization identifies informal logic with the study of non-deductive arguments. In this way, it contrasts with deductive reasoning examined by formal logic. [35] Non-deductive arguments make their conclusion probable but do not ensure that it is true. An example is the inductive argument from the empirical observation that \"all ravens I have seen so far are black\" to the conclusion \"all ravens are black\". [36]\n\nA further approach is to define informal logic as the study of informal fallacies. [37] Informal fallacies are incorrect arguments in which errors are present in the content and the context of the argument. [38] A false dilemma, for example, involves an error of content by excluding viable options. This is the case in the fallacy \"you are either with us or against us; you are not with us; therefore, you are against us\". [39] Some theorists state that formal logic studies the general form of arguments while informal logic studies particular instances of arguments. Another approach is to hold that formal logic only considers the role of", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia1.pdf" - }, - { - "text": "mathematics, it does not include logical vocabulary relevant to many other topics of philosophical importance. Examples of concepts it overlooks are the contrast between necessity and possibility and the problem of ethical obligation and permission. Similarly, it does not address the relations between past, present, and future. [119] Such issues are addressed by extended logics. They build on the basic intuitions of classical logic and expand it by introducing new logical vocabulary. This way, the exact logical approach is applied to fields like ethics or epistemology that lie beyond the scope of mathematics. 
[120]\n\n## Propositional logic\n\nPropositional logic comprises formal systems in which formulae are built from atomic propositions using logical connectives. For instance, propositional logic represents the conjunction of two atomic propositions and as the complex formula . Unlike predicate logic where terms and predicates are the smallest units, propositional logic takes full propositions with truth values as its most basic component. [121] Thus, propositional logics can only represent logical relationships that arise from the way complex propositions are built from simpler ones. But it cannot represent inferences that result from the inner structure of a proposition. [122]\n\n## First-order logic\n\nFirst-order logic includes the same propositional connectives as propositional logic but differs from it because it articulates the internal structure of propositions. This happens through devices such as singular terms, which refer to particular objects, predicates, which refer to properties and relations, and quantifiers, which treat notions like \"some\" and \"all\". [123] For example, to express the proposition \"this raven is black\", one may use the predicate for the property \"black\" and the singular term referring to the raven to form the expression . To express that some objects are black, the existential quantifier is combined\n\n\n\nGottlob Frege's Begriffschrift introduced the notion of quantifier in a graphical notation, which here represents the judgment that is true.\n\nwith the variable to form the proposition . First-order logic contains various rules of inference that determine how expressions articulated this way can form valid arguments, for example, that one may infer from . [124]\n\n## Extended\n\nExtended logics are logical systems that accept the basic principles of classical logic. They introduce additional symbols and principles to apply it to fields like metaphysics, ethics, and epistemology. 
[125]\n\n## Modal logic\n\nModal logic is an extension of classical logic. In its original form, sometimes called \"alethic modal logic\", it introduces two new symbols: expresses that something is possible while expresses that something is necessary. [126] For example, if the formula stands for the sentence \"Socrates is a banker\" then the formula articulates the sentence \"It is possible that Socrates is a banker\". [127] To include these symbols in the logical formalism, modal logic introduces new rules of inference that govern", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia1.pdf" - }, - { - "text": "propositions into account, like predicates and quantifiers. Extended logics accept the basic intuitions behind classical logic and apply it to other fields, such as metaphysics, ethics, and epistemology. Deviant logics, on the other hand, reject certain classical intuitions and provide alternative explanations of the basic laws of logic.\n\n## Definition\n\nThe word \"logic\" originates from the Greek word logos , which has a variety of translations, such as reason, discourse, or language. [4] Logic is traditionally defined as the study of the laws of thought or correct reasoning, [5] and is usually understood in terms of inferences or arguments. Reasoning is the activity of drawing inferences. Arguments are the outward expression of inferences. [6] An argument is a set of premises together with a conclusion. Logic is interested in whether arguments are correct, i.e. whether their premises support the conclusion. [7] These general characterizations apply to logic in the widest sense, i.e., to both formal and informal logic since they are both concerned with assessing the correctness of arguments. [8] Formal logic is the traditionally dominant field, and some logicians restrict logic to formal logic. [9]\n\n## Formal logic\n\nFormal logic is also known as symbolic logic and is widely used in mathematical logic. 
It uses a formal approach to study reasoning: it replaces concrete expressions with abstract symbols to examine the logical form of arguments independent of their concrete content. In this sense, it is topic-neutral since it is only concerned with the abstract structure of arguments and not with their concrete content. [10]\n\nFormal logic is interested in deductively valid arguments, for which the truth of their premises ensures the truth of their conclusion. This means that it is impossible for the premises to be true and the conclusion to be false. [11] For valid arguments, the logical structure of the premises and the conclusion follows a pattern called a rule of inference. [12] For example, modus ponens is a rule of inference according to which all arguments of the form \"(1) p , (2) if p then q , (3) therefore q \" are valid, independent of what the terms p and q stand for. [13] In this sense, formal logic can be defined as the science of valid inferences. An alternative definition sees logic as the study of logical truths. [14] A proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true in all possible worlds and under all interpretations of its non-logical terms, like the claim \"either it is raining, or it is not\". [15] These two definitions of formal logic are not identical, but they are closely related. For example, if the inference from p to q is deductively valid then the claim \"if p then q \" is a logical truth. [16]\n\nFormal logic uses formal languages to express and analyze arguments. [17] They normally have a very limited vocabulary and exact syntactic rules. These rules specify how their symbols can be combined to construct sentences, so-called well-formed formulas. [18] This simplicity and exactness of formal logic make it capable of formulating precise rules of inference. They determine whether a given argument is valid. 
[19] Because of the reliance on formal language, natural language arguments cannot be studied directly. Instead, they need to be translated into formal language before their validity can be assessed. [20]\n\nThe term \"logic\" can also be used in a slightly different sense as a countable noun. In this sense, a logic is a logical formal system. Distinct logics differ from each other concerning the rules of inference they accept as valid and the formal languages used to express them. [21] Starting in the late 19th century, many", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia1.pdf" - }, - { - "text": "incoming information. [154] Correct reasoning and the arguments it is based on follow the laws of probability, for example, the principle of conditionalization. Bad or irrational reasoning, on the other hand, violates these laws. [155]\n\n## Areas of research\n\nLogic is studied in various fields. In many cases, this is done by applying its formal method to specific topics outside its scope, like to ethics or computer science. [156] In other cases, logic itself is made the subject of research in another discipline. This can happen in diverse ways. For instance, it can involve investigating the philosophical assumptions linked to the basic concepts used by logicians. Other ways include interpreting and analyzing logic through mathematical structures as well as studying and comparing abstract properties of formal logical systems. [157]\n\n## Philosophy of logic and philosophical logic\n\nPhilosophy of logic is the philosophical discipline studying the scope and nature of logic. [59] It examines many presuppositions implicit in logic, like how to define its basic concepts or the metaphysical assumptions associated with them. [158] It is also concerned with how to classify logical systems and considers the ontological commitments they incur. [159] Philosophical logic is one of the areas within the philosophy of logic. 
It studies the application of logical methods to philosophical problems in fields like metaphysics, ethics, and epistemology. [160] This application usually happens in the form of extended or deviant logical systems. [161]\n\n## Metalogic\n\nMetalogic is the field of inquiry studying the properties of formal logical systems. For example, when a new formal system is developed, metalogicians may study it to determine which formulas can be proven in it. They may also study whether an algorithm could be developed to find a proof for each formula and whether every provable formula in it is a tautology. Finally, they may compare it to other logical systems to understand its distinctive features. A key issue in metalogic concerns the relation between syntax and semantics. The syntactic rules of a formal system determine how to deduce conclusions from premises, i.e. how to formulate proofs. The semantics of a formal system governs which sentences are true and which ones are false. This determines the validity of arguments since, for valid arguments, it is impossible for the premises to be true and the conclusion to be false. The relation between syntax and semantics concerns issues like whether every valid argument is provable and whether every provable argument is valid. Metalogicians also study whether logical systems are complete, sound, and consistent. They are interested in whether the systems are decidable and what expressive power they have. Metalogicians usually rely heavily on abstract mathematical reasoning when examining and formulating metalogical proofs. This way, they aim to arrive at precise and general conclusions on these topics. [162]\n\n## Mathematical logic\n\nThe term \"mathematical logic\" is sometimes used as a synonym of \"formal logic\". But in a more restricted sense, it refers to the study of logic within mathematics. Major subareas include model theory, proof theory, set theory, and computability theory. 
[164] Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic. However, it can also include attempts", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia1.pdf" - }, - { - "text": "\n\nIbn Sina (Avicenna) was the founder of Avicennian logic, which replaced Aristotelian logic as the dominant system of logic in the Islamic world. [189] It influenced Western medieval writers such as Albertus Magnus and William of Ockham. [190] Ibn Sina wrote on the hypothetical syllogism [191] and on the propositional calculus. [192] He developed an original \"temporally modalized\" syllogistic theory, involving temporal logic and modal logic. [193] He also made use of inductive logic, such as his methods of agreement, difference, and concomitant variation, which are critical to the scientific method. [191] Fakhr al-Din al-Razi was another influential Muslim logician. He criticized Aristotelian syllogistics and formulated an early system of inductive logic, foreshadowing the system of inductive logic developed by John Stuart Mill. [194]\n\nDuring the Middle Ages, many translations and interpretations of Aristotelian logic were made. The works of Boethius were particularly influential. Besides translating Aristotle's work into Latin, he also produced textbooks on logic. [195] Later, the works of Islamic philosophers such as Ibn Sina and Ibn Rushd (Averroes) were drawn on. This expanded the range of ancient works available to medieval Christian scholars since more Greek work was available to Muslim scholars that had been preserved in Latin commentaries. In 1323, William of Ockham's influential Summa Logicae was released. It is a comprehensive treatise on logic that discusses many basic concepts of logic and provides a systematic exposition of types of propositions and their truth conditions. 
[196]", - "page_start": 16, - "page_end": 16, - "source_file": "wikipedia1.pdf" - }, - { - "text": "relations, transitive relations, and many more. An understanding of the basic concepts of set theory will help the user get the most out of OWL but is not required. One of the benefits of Protégé is that it presents an intuitive GUI that enables domain experts to define models without a background in set theory. However, developers are encouraged to refresh their knowledge on logic and set theory. A good source is the first 3 chapters in Elements of the Theory of Computation by Lewis and Papadamitrious. Another good source is the PDF document Overview of Set Theory available at:\n\nhttps://www.michaeldebellis.com/post/owl-theoretical-basics\n\n## 3.1.1 Individuals\n\nIndividuals represent objects in the domain of interest. An important difference between OWL and most programming and knowledge representation languages is that OWL does not use the Unique Name Assumption (UNA). This means that two different names could actually refer to the same individual. For example, 'Queen Elizabeth', 'The Queen' and 'Elizabeth Windsor' might all refer to the same individual. In OWL, it must be explicitly stated that individuals are the same as each other, or different from each other. Figure 3.1 shows a representation of some individuals in a domain of people, nations, and relations - in this tutorial we represent individuals as diamonds.\n\nFigure 3.2: Representation of Properties\n\n\n\nIndividuals are also known as instances . Individuals can be referred to as instances of classes .\n\n", - "page_start": 7, - "page_end": 7, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "what role they play in inferences. One rule of inference states that, if something is necessary, then it is also possible. This means that follows from . Another principle states that if a proposition is necessary then its negation is impossible and vice versa. 
This means that is equivalent to . [128]\n\nOther forms of modal logic introduce similar symbols but associate different meanings with them to apply modal logic to other fields. For example, deontic logic concerns the field of ethics and introduces symbols to express the ideas of obligation and permission, i.e. to describe whether an agent has to perform a certain action or is allowed to perform it. [129] The modal operators in temporal modal logic articulate temporal relations. They can be used to express, for example, that something happened at one time or that something is happening all the time. [129] In epistemology, epistemic modal logic is used to represent the ideas of knowing something in contrast to merely believing it to be the case. [130]\n\n## Higher order logic\n\nHigher-order logics extend classical logic not by using modal operators but by introducing new forms of quantification. [131] Quantifiers correspond to terms like \"all\" or \"some\". In classical first-order logic, quantifiers are only applied to individuals. The formula \" \" ( some apples are sweet) is an example of the existential quantifier \" \" applied to the individual variable \" \". In higherorder logics, quantification is also allowed over predicates. This increases its expressive power. For example, to express the idea that Mary and John share some qualities, one could use the formula \" \". In this case, the existential quantifier is applied to the predicate variable \" \". [132] The added expressive power is especially useful for mathematics since it allows for more succinct formulations of mathematical theories. [43] But it has drawbacks in regard to its meta-logical properties and ontological implications, which is why first-order logic is still more commonly used. [133]\n\n## Deviant\n\nDeviant logics are logical systems that reject some of the basic intuitions of classical logic. Because of this, they are usually seen not as its supplements but as its rivals. 
Deviant logical systems differ from each other either because they reject different classical intuitions or because they propose different alternatives to the same issue. [134]\n\nIntuitionistic logic is a restricted version of classical logic. [135] It uses the same symbols but excludes some rules of inference. For example, according to the law of double negation elimination, if a sentence is not not true, then it is true. This means that follows from . This is a valid rule of inference in classical logic but it is invalid in intuitionistic logic. Another classical principle not part of intuitionistic logic is the law of excluded middle. It states that for every sentence, either it or its negation is true. This means that every proposition of the form is true. [135] These deviations from classical logic are based on the idea that truth is established by verification using a proof. Intuitionistic logic is especially prominent in the field of constructive mathematics, which emphasizes the need to find or construct a specific example to prove its existence. [136]", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia1.pdf" - }, - { - "text": "Bertrand Russell made various contributions to mathematical logic. [163]\n\n\n\nto use logic to analyze mathematical reasoning or to establish logic-based foundations of mathematics. [165] The latter was a major concern in early 20th-century mathematical logic, which pursued the program of logicism pioneered by philosopherlogicians such as Gottlob Frege, Alfred North Whitehead, and Bertrand Russell. Mathematical theories were supposed to be logical tautologies, and their program was to show this by means of a reduction of mathematics to logic. Many attempts to realize this program failed, from the crippling of Frege's project in his Grundgesetze by Russell's paradox, to the defeat of Hilbert's program by Gödel's incompleteness theorems. 
[166]\n\nSet theory originated in the study of the infinite by Georg Cantor, and it has been the source of many of the most challenging and important issues in mathematical logic. They include Cantor's theorem, the status of the Axiom of Choice, the question of the independence of the continuum hypothesis, and the modern debate on large cardinal axioms. [167]\n\nComputability theory is the branch of mathematical logic that studies effective procedures to solve calculation problems. One of\n\nits main goals is to understand whether it is possible to solve a given problem using an algorithm. For instance, given a certain claim about the positive integers, it examines whether an algorithm can be found to determine if this claim is true. Computability theory uses various theoretical tools and models, such as Turing machines, to explore this type of issue. [168]\n\n## Computational logic\n\nComputational logic is the branch of logic and computer science that studies how to implement mathematical reasoning and logical formalisms using computers. This includes, for example, automatic theorem provers, which employ rules of inference to construct a proof step by step from a set of premises to the intended conclusion without human intervention. [169] Logic programming languages are designed specifically to express facts using logical formulas and to draw inferences from these facts. For example, Prolog is a logic programming language based on predicate logic. [170] Computer scientists also apply concepts from logic to problems in computing. The works of Claude Shannon were influential in this regard. He showed how Boolean logic can be used to understand and implement computer circuits. [171] This can be achieved using electronic logic gates, i.e. electronic circuits with one or more inputs and usually one output. The truth values of propositions are represented by voltage levels. 
In this way, logic functions can be simulated by applying the corresponding voltages to the inputs of the circuit and determining the value of the function by measuring the voltage of the output. [172]\n\n## Formal semantics of natural language\n\nFormal semantics is a subfield of logic, linguistics, and the philosophy of language. The discipline of semantics studies the meaning of language. Formal semantics uses formal tools from the fields of symbolic logic and mathematics to give precise theories of the meaning of natural language expressions. It understands meaning usually in relation to truth conditions, i.e. it examines in which situations a", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia1.pdf" - } - ] - }, - { - "references": { - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf", - "query": "Concerning ontologies, what is an anonymous class ?", - "target_page": 30, - "target_passage": "They are created by the reasoner when you use class expressions. For example, if you define the range of a property to be PizzaTopping or PizzaBase then the reasoner will create an anonymous class representing the intersection of those two classes", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "The following are some examples of classes of individuals that we might want to define via property restrictions:\n\n - · The class of individuals with at least one hasChild relation.\n - · The class of individuals with 2 or more hasChild relations.\n - · The class of individuals that have at least one hasTopping relationship to individuals that are members of MozzarellaTopping - i.e. the class of things that have at least a mozzarella topping.\n - · The class of individuals that are Pizzas and only have hasTopping relations to instances of the class VegetableTopping (i.e., VegetarianPizza ).\n\nIn OWL we can describe all of the above classes using restrictions. 
OWL restrictions fall into three main categories:\n\n - 1. Quantifier restrictions. These describe that a property must have some or all values that are of a particular class.\n - 2. Cardinality restrictions. These describe the number of individuals that must be related to a class by a specific property.\n - 3. hasValue restrictions. These describe specific values that a property must have.\n\nWe will initially use quantifier restrictions. Quantifier restrictions can be further categorized as existential restrictions and universal restrictions 6 . Both types of restrictions will be illustrated with examples in this tutorial.\n\n - · Existential restrictions describe classes of individuals that participate in at least one relation along a specified property. For example, the class of individuals who have at least one (or some) hasTopping relation to instances of VegetableTopping . In OWL the keyword some is used to denote existential restrictions.\n - · Universal restrictions describe classes of individuals that for a given property only have relations along a property to individuals that are members of a specific class. For example, the class of individuals that only have hasTopping relations to instances of the class VegetableTopping . In OWL they keyword only is used for universal restrictions.\n\nLet's take a closer look at an example of an existential restriction. The restriction hasTopping some MozzarellaTopping is an existential restriction (as indicated by the some keyword), which restricts the hasTopping property, and has a filler MozzarellaTopping . This restriction describes the class of individuals that have at least one hasTopping relationship to an individual that is a member of the class MozzarellaTopping .\n\n\n\nA restriction always describes a class. Sometimes (as we will soon see) it can be a defined class. Other times it may be an anonymous class. 
In all cases the class contains all of the individuals that satisfy the restriction, i.e., all of the individuals that have the relationships required to be a member of the class. In section 9.2 one of our SPARQL queries will return several anonymous classes.", - "page_start": 30, - "page_end": 30, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "\n\nmost similar to the ones used in GPT-2's training data, i.e. documents linked to from Reddit [25], plus Wikipedia and a collection of books. While this was reportedly effective at filtering out documents that previous work characterized as 'unintelligible' [134], what is unmeasured (and thus unknown) is what else it filtered out. The Colossal Clean Crawled Corpus [107], used to train a trillion parameter LM in [43], is cleaned, inter alia , by discarding any page containing one of a list of about 400 'Dirty, Naughty, Obscene or Otherwise Bad Words' [p.6]. 14 This list is overwhelmingly words related to sex, with a handful of racial slurs and words related to white supremacy (e.g. swastika , white power ) included. While possibly effective at removing documents containing pornography (and the associated problematic stereotypes encoded in the language of such sites [125]) and certain kinds of hate speech, this approach will also undoubtedly attenuate, by suppressing such words as twink , the influence of online spaces built by and for LGBTQ people. 15 If we filter out the discourse of marginalized populations, we fail to provide training data that reclaims slurs and otherwise describes marginalized identities in a positive light.\n\nThus at each step, from initial participation in Internet fora, to continued presence there, to the collection and finally the filtering of training data, current practice privileges the hegemonic viewpoint. 
In accepting large amounts of web text as 'representative' of 'all' of humanity we risk perpetuating dominant viewpoints, increasing power imbalances, and further reifying inequality. We instead propose practices that actively seek to include communities underrepresented on the Internet. For instance, one can take inspiration from movements to decolonize education by moving towards oral histories due to the overrepresentation of colonial views in text [35, 76, 127], and curate training datasets through a thoughtful process of deciding what to put in, rather than aiming solely for scale and trying haphazardly to weed out, post-hoc, flotsam deemed 'dangerous', 'unintelligible', or 'otherwise bad'.\n\n## 4.2 Static Data/Changing Social Views\n\nA central aspect of social movement formation involves using language strategically to destabilize dominant narratives and call attention to underrepresented social perspectives. Social movements produce new norms, language, and ways of communicating. This adds challenges to the deployment of LMs, as methodologies reliant on LMs run the risk of 'value-lock', where the LM-reliant technology reifies older, less-inclusive understandings.\n\nFor instance, the Black Lives Matter movement (BLM) influenced Wikipedia article generation and editing such that, as the BLM movement grew, articles covering shootings of Black people increased in coverage and were generated with reduced latency [135]. Importantly, articles describing past shootings and incidents of police brutality were created and updated as articles for new events were created, reflecting how social movements make connections between events in time to form cohesive narratives [102]. More generally, Twyman et al. 
[135] highlight how social movements actively influence framings and reframings of minority narratives\n\nin the type of online discourse that potentially forms the data that underpins LMs.", - "page_start": 4, - "page_end": 4, - "source_file": "arxiv5_ccby4license.pdf" - }, - { - "text": "provide a language that is called Description Logic or DL for short. One of the key features of DL is that these superclass-subclass relationships (aka subsumption relationships) can be computed automatically by a reasoner - more on this later. Figure 3.3 shows a representation of some classes containing individuals classes are represented as ovals, like sets in Venn diagrams.\n\nIn OWL classes can be built up of descriptions that specify the conditions that must be satisfied by an individual for it to be a member of the class. How to formulate these descriptions will be explained as the tutorial progresses.", - "page_start": 9, - "page_end": 9, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "| System Overview It is the reference in the field as it defines a | Pros and cons Ontologies can be exported in | | Ontology based-system | | | Ontology | |", - "page_start": 0, - "page_end": 0, - "source_file": "infographic5.pdf" - }, - { - "text": "To understand what is going on you first need to understand that each SPARQL query consists of two parts. The first part at the beginning consists of several namespace prefixes. These statements consist of the prefix used for a particular namespace as well as the IRI associated with this namespace. Recall that these concepts were described in chapter 7. You may be wondering where all these prefixes came from since you didn't add them to your ontology. The answer is that every OWL ontology comes with a set of namespaces and prefixes that are required to define the ontology.\n\nAlso, to understand SPARQL you need to 'peak under the hood' of OWL. 
So far, we have been discussing concepts in purely logical and set theoretic terms, i.e., at the semantic level. However, like any language or database there is a lower level that describes how the concepts are mapped to actual data. In a relational database the fundamental construct to represent data is a table. In OWL the fundamental construct is a triple. OWL is actually built on top of RDFS which is a language built on top of RDF. RDF (Resource Description Framework) is a language to describe graphs (in the mathematical sense of the term). I.e., to describe nodes and links.\n\nThe foundation for RDF graphs are triples consisting of a subject, predicate, and object. This results in what is called an undirected or network graph because objects can be subjects and vice versa. Whenever you define a property in OWL you are defining a predicate. An individual can be a subject or an object (or both). E.g., in our ontology Customer1 purchasedPizza AmericanaHotPizza1 . In this example Customer1 is the subject, purchasedPizza is the predicate and AmericanaHotPizza1 is the object.\n\nHowever, classes and properties themselves are also represented as triples. So for example, when you create the class Pizza what Protégé does for you is to add the triple: Pizza rdf:type owl:Class to the ontology. I.e., the Pizza entity is of type (is an instance of) owl:Class . Similarly when you add NamedPizza as a subclass of Pizza , Protégé adds the triple: NamedPizza rdfs: s ubClassOf Pizza .\n\nHopefully, now you can make some sense of this initial query. The query is looking for all the entities that are the subjects of triples where the predicate is rdfs: s ubClassOf and the object is any other entity. The ? before a name indicates that the name is a wildcard that can match anything that fits with the rest of the pattern. This is part of the power of SPARQL, one can match a Subject, an Object, a Predicate or even all three. 
Making all 3 parts of the pattern wildcards would return every triple in the graph (in this case our entire Pizza ontology) being searched. You may notice that in some cases the object is simply the name of a class while in others it is a class expression with an orange circle in front of it. This is because when defining classes using DL axioms Protégé creates anonymous classes that correspond to various DL axioms.\n\nThe SELECT part of a SPARQL query determines what data to display. The WHERE part of a query determines what to match in the query. If you want to display everything matched in the WHERE clause you can just use a * for the SELECT clause. The initial default query in this tab is set up with no knowledge of the specific ontology. I.e., it will return all the classes that are subclasses of other classes regardless of the ontology. To get information about Pizzas the first thing we need to do is to add another prefix to the beginning of the query. In our case the Pizza ontology has been set up with a mapping to the prefix pizza (you can see this in the ontology prefixes tab in the Active ontology tab discussed in chapter 7). So, add the following to the SPARQL query after the last PREFIX statement:\n\n## PREFIX pizza: ", - "page_start": 68, - "page_end": 68, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "## Chapter 1 Introduction\n\nThis introduces Protégé 5 for creating OWL ontologies as well as various plugins. If you have questions specific to this tutorial, please feel free to email me directly: mdebellissf@gmail.com However, if you have general questions about Protégé, OWL, or plugins you should subscribe to and send an email to the User Support for Protégé and Web Protégé email list. This list has many people (including me) who monitor it and can contribute their knowledge to help you understand how to get the most out of this technology. 
To subscribe to the list, go to: https://protege.stanford.edu/support.php and click on the first orange Subscribe button. That will enable you to subscribe to the list and give you the email to send questions to.\n\nThis chapter covers licensing and describes conventions used in the tutorial. Chapter 2 covers the requirements for the tutorial and describes the Protégé user interface. Chapter 3 gives a brief overview of the OWL ontology language. Chapter 4 focuses on building an OWL ontology with classes and object properties. Chapter 4 also describes using a Description Logic Reasoner to check the consistency of the ontology and automatically compute the ontology class hierarchy.\n\nChapter 5 describes data properties. Chapter 6 describes design patterns and shows one design pattern: adding an order to an enumerated class. Chapter 7 describes the various concepts related to the name of an OWL entity.\n\nChapter 8 introduces an extended version of the Pizza tutorial developed in chapters 1-7. This ontology has a small number of instances and property values already created which can be used to illustrate the tools in the later chapters for writing rules, doing queries, and defining constraints.\n\nChapter 9 describes two tools for doing queries: Description Logic queries and SPARQL queries. Chapter 10 introduces the Semantic Web Rule Language (SWRL) and walks you through creating SWRL and SQWRL rules. Chapter 11 introduces the Shapes Constraint Language (SHACL) and discusses the difference between defining logical axioms in Description Logic and data integrity constraints in SHACL. Chapter 12 has some concluding thoughts and opinions and Chapter 13 provides a bibliography.\n\n## 1.1 Licensing\n\nThis document is freely available under the Creative Commons Attribution-ShareAlike 4.0 International Public License. I typically distribute it as a PDF but if you want to make your own version send me an email and I will send you the Word version. 
For details on licensing see:\n\nhttps://creativecommons.org/licenses/by-sa/4.0/legalcode\n\n## 1.2 Conventions\n\nClass, property, rule, and individual names are written in Consolas font like this . The term used for any such construct in Protégé and in this document is an Entity . Individuals and classes can also be referred to as objects.\n\nNames for user interface tabs, views, menu selections, buttons, and text entry are highlighted like this.\n\nAny time you see highlighted text such as File>Preferences or OK or PizzaTopping it refers to something that you should or optionally could view or enter into the user interface. If you ever aren't sure what to do to accomplish some task look for the highlighted text. Often, as with PizzaTopping the text you enter into a field in the Protégé UI will be the name of a class, property, etc. In those cases, where the", - "page_start": 4, - "page_end": 4, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "## 5. Examining approaches to building a books data commons\n\nThere are many possible permutations for building a books data commons. To structure our exploration, we focused on two particular tracks, discussed below. We chose these tracks mindful of the above legal issues, and because there are already existence proofs that help to illuminate tradeoffs, challenges and potential paths forward for each.\n\n## 5a. Public domain and permissively licensed books\n\n## Existing Project Example : The Pile v2 27\n\nIn 2020, the nonprofit research group EleutherAI constructed and released The Pile - a large, diverse, open dataset for AI training. EleutherAI developed it not only to support their own training of LLMs, but also to lower the barriers for others. 28\n\nAlong with data drawn from the web at large, The Pile included books from three datasets. The first dataset was the Books3 corpus referenced at the outset of this paper. 
The second and third books datasets were smaller: BookCorpus2, which is a collection of 17,868 books by otherwise unpublished authors; and a 28,752 books in the public domain and published prior to 1919, drawn from a volunteer effort to digitize public domain works called Project Gutenberg.\n\nAs the awareness about The Pile dataset grew, certain rightsholders began sending copyright notices to have the dataset taken down from various websites.\n\nDespite the takedown requests, the importance of books to EleutherAI and the broader community's AI research remained. In hoping to forge a path forward EleutherAI announced in 2024 that they would create a new version of the dataset, which they will call The Pile v2. 29 Among other things, v2 would 'have many more books than the original Pile had, for example, and more diverse representation of non-academic non-fiction domains.' At the same time, it would only seek to include public domain books and permissively licensed content. As before, this corpus focuses on English language books.", - "page_start": 12, - "page_end": 12, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "There are many other words in tweets besides hashtags to express the author's intention. Multiple approaches, such as LDA and STM [32,73], can help to extract topics from unstructured texts. But in this study, targeting on hashtags is more in line with our research question. Firstly, hashtags were invented spontaneously by users of Twitter in 2007 as a mechanism to categorize discussions [74]. Words with hashtags are recognized as topics and considered worthy of public discussion. Secondly, by attaching # to certain words in tweets, the users intentionally anchor their tweets to certain topics. The operator # explicitly reflects the author's emphasis, which can help us extract rather than infer the author's identification of the topic of the tweets. 
Our research question is to analyze and visualize the associations of topics in public climate discourse. Compared with other approaches, analyzing hashtags co-occurrence pattern has advantage in extracting the structure of public discussions.", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed10.pdf" - }, - { - "text": "next section. Which option you choose for your ontology will depend on the specific requirements you have as well as the standards established by your organization or organizations that you work with.\n\nFinally, another name related concept you should be aware of is the concept of a namespace. If you have worked with most modern programming languages such as Python or Java, you are already familiar with the concept of a namespace. The concept is identical in OWL. A namespace is used to avoid naming conflicts between different ontologies. For example, you may have a class called Network in an ontology about telecommunications. You might also have a class called Network in an ontology about graph theory. The two concepts are related but are different. Just as with programming languages you use namespace prefixes to determine what specific namespace a name refers to. E.g., in this example you might have the prefix tc for the Telecom ontology and gt for the Graph Theory ontology. Thus, when you referred to the Network class for the Telecom ontology you would use tc:Network and gt:Network for the graph theory class.\n\nNote that you already have some experience with other namespaces. The OWL namespace prefix is owl and is used to refer to classes such as owl:Thing and owl:Nothing . The Resource Description Framework Schema (RDFS) is a model that OWL is built on top of and thus some properties that ontologies use such as rdfs:label leverage this namespace.\n\nIn the bottom view of the Active ontology tab there is a tab called Ontology Prefixes. This tab shows all the current namespace mappings in your ontology. 
There are certain concepts from OWL, RDF, RDFS, XML and XSD that are required for every ontology, so those namespaces are by default mapped in every new Protégé ontology. There is also a mapping to the empty string for whatever the namespace is for your ontology. This allows you to display and refer to entities in your ontology without entering a namespace prefix. If you look at that tab now you should see a row where the first column is blank, and the second column has the base IRI for your ontology. It should be the same IRI as the Ontology IRI at the top of the Active ontology tab, except it also has a # sign at the end. E.g., the Pizza tutorial developed for this tutorial has an IRI of: http://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial and the row that has a blank first column in Ontology Prefixes has the IRI:\n\nhttp://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial#.", - "page_start": 61, - "page_end": 61, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "| Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. 
Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE |", - "page_start": 0, - "page_end": 0, - "source_file": "infographic5.pdf" - } - ] - }, - { - "references": { - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf", - "query": "When to use an enumerated class in OWL ontologies ?", - "target_page": 46, - "target_passage": "When a property has only a few possible values it can be useful to create a class to represent those values and to explicitly define the class by listing each possible value", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Figure 4.23 The Reasoner Inferred that Margherita and Soho Pizzas are subclasses of VegetarianPizza\n\n\n\n## 4.14 Defining an Enumerated Class\n\nA powerful tool in the object-oriented programming (OOP) community is the concept of design patterns. 
The idea of a design pattern is to capture a reusable model that is at a higher level of abstraction than a specific code library. One of the first and most common design patterns was the Model-View-Controller pattern first used in Smalltalk and now almost the default standard for good user interface design. Since there are significant differences between OWL and standard OOP the many excellent books on OOP design patterns don't directly translate into OWL design patterns. Also, since the use of OWL is more recent than OOP there does not yet exist the excellent documentation of OWL patterns that the OOP community has. However, there are already many design patterns that have been documented for OWL and that can provide users with ways to save time and to standardize their designs according to best practices.\n\nOne of the most common OWL design patterns is an enumerated class. When a property has only a few possible values it can be useful to create a class to represent those values and to explicitly define the class by listing each possible value. We will show an example of such an enumerated class by creating a new", - "page_start": 44, - "page_end": 44, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "next section. Which option you choose for your ontology will depend on the specific requirements you have as well as the standards established by your organization or organizations that you work with.\n\nFinally, another name related concept you should be aware of is the concept of a namespace. If you have worked with most modern programming languages such as Python or Java, you are already familiar with the concept of a namespace. The concept is identical in OWL. A namespace is used to avoid naming conflicts between different ontologies. For example, you may have a class called Network in an ontology about telecommunications. You might also have a class called Network in an ontology about graph theory. 
The two concepts are related but are different. Just as with programming languages you use namespace prefixes to determine what specific namespace a name refers to. E.g., in this example you might have the prefix tc for the Telecom ontology and gt for the Graph Theory ontology. Thus, when you referred to the Network class for the Telecom ontology you would use tc:Network and gt:Network for the graph theory class.\n\nNote that you already have some experience with other namespaces. The OWL namespace prefix is owl and is used to refer to classes such as owl:Thing and owl:Nothing . The Resource Description Framework Schema (RDFS) is a model that OWL is built on top of and thus some properties that ontologies use such as rdfs:label leverage this namespace.\n\nIn the bottom view of the Active ontology tab there is a tab called Ontology Prefixes. This tab shows all the current namespace mappings in your ontology. There are certain concepts from OWL, RDF, RDFS, XML and XSD that are required for every ontology, so those namespaces are by default mapped in every new Protégé ontology. There is also a mapping to the empty string for whatever the namespace is for your ontology. This allows you to display and refer to entities in your ontology without entering a namespace prefix. If you look at that tab now you should see a row where the first column is blank, and the second column has the base IRI for your ontology. It should be the same IRI as the Ontology IRI at the top of the Active ontology tab, except it also has a # sign at the end. 
E.g., the Pizza tutorial developed for this tutorial has an IRI of: http://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial and the row that has a blank first column in Ontology Prefixes has the IRI:\n\nhttp://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial#.", - "page_start": 61, - "page_end": 61, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "To understand what is going on you first need to understand that each SPARQL query consists of two parts. The first part at the beginning consists of several namespace prefixes. These statements consist of the prefix used for a particular namespace as well as the IRI associated with this namespace. Recall that these concepts were described in chapter 7. You may be wondering where all these prefixes came from since you didn't add them to your ontology. The answer is that every OWL ontology comes with a set of namespaces and prefixes that are required to define the ontology.\n\nAlso, to understand SPARQL you need to 'peak under the hood' of OWL. So far, we have been discussing concepts in purely logical and set theoretic terms, i.e., at the semantic level. However, like any language or database there is a lower level that describes how the concepts are mapped to actual data. In a relational database the fundamental construct to represent data is a table. In OWL the fundamental construct is a triple. OWL is actually built on top of RDFS which is a language built on top of RDF. RDF (Resource Description Framework) is a language to describe graphs (in the mathematical sense of the term). I.e., to describe nodes and links.\n\nThe foundation for RDF graphs are triples consisting of a subject, predicate, and object. This results in what is called an undirected or network graph because objects can be subjects and vice versa. Whenever you define a property in OWL you are defining a predicate. An individual can be a subject or an object (or both). 
E.g., in our ontology Customer1 purchasedPizza AmericanaHotPizza1 . In this example Customer1 is the subject, purchasedPizza is the predicate and AmericanaHotPizza1 is the object.\n\nHowever, classes and properties themselves are also represented as triples. So for example, when you create the class Pizza what Protégé does for you is to add the triple: Pizza rdf:type owl:Class to the ontology. I.e., the Pizza entity is of type (is an instance of) owl:Class . Similarly when you add NamedPizza as a subclass of Pizza , Protégé adds the triple: NamedPizza rdfs: s ubClassOf Pizza .\n\nHopefully, now you can make some sense of this initial query. The query is looking for all the entities that are the subjects of triples where the predicate is rdfs: s ubClassOf and the object is any other entity. The ? before a name indicates that the name is a wildcard that can match anything that fits with the rest of the pattern. This is part of the power of SPARQL, one can match a Subject, an Object, a Predicate or even all three. Making all 3 parts of the pattern wildcards would return every triple in the graph (in this case our entire Pizza ontology) being searched. You may notice that in some cases the object is simply the name of a class while in others it is a class expression with an orange circle in front of it. This is because when defining classes using DL axioms Protégé creates anonymous classes that correspond to various DL axioms.\n\nThe SELECT part of a SPARQL query determines what data to display. The WHERE part of a query determines what to match in the query. If you want to display everything matched in the WHERE clause you can just use a * for the SELECT clause. The initial default query in this tab is set up with no knowledge of the specific ontology. I.e., it will return all the classes that are subclasses of other classes regardless of the ontology. To get information about Pizzas the first thing we need to do is to add another prefix to the beginning of the query. 
In our case the Pizza ontology has been set up with a mapping to the prefix pizza (you can see this in the ontology prefixes tab in the Active ontology tab discussed in chapter 7). So, add the following to the SPARQL query after the last PREFIX statement:\n\n## PREFIX pizza: ", - "page_start": 68, - "page_end": 68, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "## Chapter 1 Introduction\n\nThis introduces Protégé 5 for creating OWL ontologies as well as various plugins. If you have questions specific to this tutorial, please feel free to email me directly: mdebellissf@gmail.com However, if you have general questions about Protégé, OWL, or plugins you should subscribe to and send an email to the User Support for Protégé and Web Protégé email list. This list has many people (including me) who monitor it and can contribute their knowledge to help you understand how to get the most out of this technology. To subscribe to the list, go to: https://protege.stanford.edu/support.php and click on the first orange Subscribe button. That will enable you to subscribe to the list and give you the email to send questions to.\n\nThis chapter covers licensing and describes conventions used in the tutorial. Chapter 2 covers the requirements for the tutorial and describes the Protégé user interface. Chapter 3 gives a brief overview of the OWL ontology language. Chapter 4 focuses on building an OWL ontology with classes and object properties. Chapter 4 also describes using a Description Logic Reasoner to check the consistency of the ontology and automatically compute the ontology class hierarchy.\n\nChapter 5 describes data properties. Chapter 6 describes design patterns and shows one design pattern: adding an order to an enumerated class. Chapter 7 describes the various concepts related to the name of an OWL entity.\n\nChapter 8 introduces an extended version of the Pizza tutorial developed in chapters 1-7. 
This ontology has a small number of instances and property values already created which can be used to illustrate the tools in the later chapters for writing rules, doing queries, and defining constraints.\n\nChapter 9 describes two tools for doing queries: Description Logic queries and SPARQL queries. Chapter 10 introduces the Semantic Web Rule Language (SWRL) and walks you through creating SWRL and SQWRL rules. Chapter 11 introduces the Shapes Constraint Language (SHACL) and discusses the difference between defining logical axioms in Description Logic and data integrity constraints in SHACL. Chapter 12 has some concluding thoughts and opinions and Chapter 13 provides a bibliography.\n\n## 1.1 Licensing\n\nThis document is freely available under the Creative Commons Attribution-ShareAlike 4.0 International Public License. I typically distribute it as a PDF but if you want to make your own version send me an email and I will send you the Word version. For details on licensing see:\n\nhttps://creativecommons.org/licenses/by-sa/4.0/legalcode\n\n## 1.2 Conventions\n\nClass, property, rule, and individual names are written in Consolas font like this . The term used for any such construct in Protégé and in this document is an Entity . Individuals and classes can also be referred to as objects.\n\nNames for user interface tabs, views, menu selections, buttons, and text entry are highlighted like this.\n\nAny time you see highlighted text such as File>Preferences or OK or PizzaTopping it refers to something that you should or optionally could view or enter into the user interface. If you ever aren't sure what to do to accomplish some task look for the highlighted text. Often, as with PizzaTopping the text you enter into a field in the Protégé UI will be the name of a class, property, etc. 
In those cases, where the", - "page_start": 4, - "page_end": 4, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "## 3.1.2 Properties\n\nProperties are binary relations between individuals. I.e., properties link two individuals together. For example, the property hasFriend might link the individual Biswanath to the individual Michael , or the property hasChild might link the individual Michael to the individual Oriana . Properties can have inverses. For example, the inverse of hasChild is hasParent . Properties can be limited to having a single value - i.e., to being functional. They can also be transitive or symmetric. These property characteristics are explained in detail in Section 4.8. Figure 3.2 shows a representation of some properties.\n\n\n\nProperties are similar to properties in Object-Oriented Programming (OOP). However, there are important differences between properties in OWL and OOP. The most important difference is that OWL properties are first class entities that exist independent of classes. OOP developers are encouraged to read: https://www.w3.org/2001/sw/BestPractices/SE/ODSD/\n\nFigure 3.3: Representation of Classes containing Individuals\n\n\n\n## 3.1.3 Classes\n\nOWL classes are sets that contain individuals. They are described using formal (mathematical) descriptions that rigorously define the requirements for membership of the class. For example, the class Cat would contain all the individuals that are cats in our domain of interest. 2 Classes may be organized into a superclass-subclass hierarchy, which is also known as a taxonomy. However, taxonomies are often trees. I.e., each node has only one parent node. Class hierarchies in OWL are not restricted to be trees and multiple inheritance can be a powerful tool to represent data in an intuitive manner.\n\nSubclasses specialize (aka are subsumed by ) their superclasses. 
For example, consider the classes Animal and Dog -Dog might be a subclass of Animal (so Animal is the superclass of Dog ). This says that All dogs are animals , All members of the class Dog are members of the class Animal . OWL and Protégé", - "page_start": 8, - "page_end": 8, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "I.e., it might be the case that we never get the data to satisfy every integrity constraint which would mean the reasoner is never of any use except to tell us that the ontology is not consistent.\n\nThus, SHACL provides a way to define data integrity constraints that overlap to some degree with what can be defined in OWL and SWRL. For example, both can define the number of values allowed for a specific property. E.g., that each instance of Employee must have one and only one social security number ( ssn ). If this were defined as a DL axiom, then the axiom would never fire for employees that had no ssn because of the OWA. On the other hand, if an Employee accidentally had 2 ssn values then the entire ontology would be inconsistent until one value was removed. SHACL on the other hand can handle both these examples and rather than making the entire ontology inconsistent it simply logs warnings at various levels of severity.\n\n## 11.3 Basic SHACL Concepts\n\nTo understand SHACL recall that the language underlying OWL is RDF which describes graphs as triples of the form: Subject Predicate Object. SHACL also works at the level of RDF because some developers may want to simply use that lower level for reasons of efficiency. Thus, RDF can validate an RDF graph as well as an OWL ontology. Fundamentally, SHACL consists of two components:\n\n - 1. An RDF vocabulary for defining data constraints on RDF graphs (which includes OWL since an OWL ontology is an RDF graph).\n - 2. 
A reasoner for applying the constraints defined in 1 to a specified data graph such as the Pizza ontology.\n\nOne of the most important classes in 1 is a SHACL Shape . An instance of the SHACL Shape class consists of a set of Targets and Constraints. A Target defines which nodes in the RDF graph that the data constraints apply to. For OWL ontologies this is typically the name of a class which indicates that the constraints apply to all instances of that class. The Constraints define the specific property for the constraint as well as the actual constraints such as the minimum or maximum number of values and the datatype. In the following example, a Target is the Employee class in the Pizza ontology. An example constraint is that the ssn property must have exactly one value. Another example constraint is that the format of the ssn value must be a string of the form: 'NNN-NN-NNNN' where each N must be an integer. For more on SHACL see the references in the bibliography.\n\n## 11.4 The Protégé SHACL Plug-In\n\nTo start go to Windows>Tabs and see if you have SHACL Editor as an option. If you don't then go to File>Check for plugins and select the SHACL4Protege Constraint Validator. You need to restart Protégé to see the new plugin so save your work and then quit and start Protégé and load the Pizza ontology with data.\n\nBecause editing SHACL is a bit more complex for this version of the tutorial we are only going to view some already written SHACL constraints and see how the validator processes them rather than writing additional constraints. First download the PizzaShapes.txt file to your local hard drive. This file can be found at: https://tinyurl.com/pizzatshapes Once you have downloaded the file open the SHACL Editor: Window>Tabs>SHACL Editor.\n\nYou will see an example shapes file in the editor when it opens but that isn't the shapes file you are looking for. 
From the editor click on the Open button at the top of the tab and navigate to the PizzaShapes.txt file you downloaded.", - "page_start": 77, - "page_end": 77, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "## Chapter 5 Datatype Properties\n\nSo far we have been describing object properties. These are properties that have a range that is some class. As with most other object-oriented languages OWL also has the capability to define properties with the range of a simple datatype such as a string or integer. Object purists will argue that everything should be an object. However, to borrow a quote from The Amazing Spiderman: 'with great power comes great overhead'. I.e., the extra capabilities that one has with a class and an instance also means that instances take up more space and can be slower to process than simple datatypes. For that reason, OWL comes with a large library of pre-existing datatypes that are mostly imported from XML. That is why many of the predefined datatypes in Protégé have a prefix of xsd for example xsd:string and xsd:integer . It is also possible to create new basic datatypes. However, for the majority of use cases, if one needs a datatype that doesn't map to one of the predefined types the best solution is to usually just define a class.\n\nA property with a range that is a simple datatype is known as a datatype property. This is analogous to the distinction between an association and an attribute in the Unified Modeling Language (UML) OOP modeling language. A UML association is similar to an OWL object property and a UML attribute is similar to an OWL datatype property. It is also analogous to the distinction between relations and attributes in entity-relation modeling. A relation in an E/R model is similar to an object property in OWL and an attribute is similar to a datatype property. 
Because datatypes don't have all the power of OWL objects, many of the capabilities for object properties described in section 4.8 such as having an inverse or being transitive aren't available for datatype properties.\n\n## 5.1 Defining a Data Property\n\nAs with other OWL entities, datatype properties can be defined either via the Data properties tab in the Entities tab or in the Data properties tab available via the Window>Tabs>Data properties option.\n\nWe will use datatype properties to describe the calorie content of pizzas. We will then use some numeric ranges to broadly classify particular pizzas as high or low calorie. In order to do this we need to complete the following steps:\n\n - 1. Create a datatype property hasCaloricContent , which will be used to state the calorie content of particular pizzas.\n - 2. Create several example Pizza individuals with specific calorie contents.\n - 3. Create two classes broadly categorizing pizzas as low or high calorie.\n\n## Exercise 27: Create a Datatype Property called hasCaloricContent\n\n\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_", - "page_start": 48, - "page_end": 48, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "relations, transitive relations, and many more. An understanding of the basic concepts of set theory will help the user get the most out of OWL but is not required. One of the benefits of Protégé is that it presents an intuitive GUI that enables domain experts to define models without a background in set theory. However, developers are encouraged to refresh their knowledge on logic and set theory. A good source is the first 3 chapters in Elements of the Theory of Computation by Lewis and Papadamitrious. 
Another good source is the PDF document Overview of Set Theory available at:\n\nhttps://www.michaeldebellis.com/post/owl-theoretical-basics\n\n## 3.1.1 Individuals\n\nIndividuals represent objects in the domain of interest. An important difference between OWL and most programming and knowledge representation languages is that OWL does not use the Unique Name Assumption (UNA). This means that two different names could actually refer to the same individual. For example, 'Queen Elizabeth', 'The Queen' and 'Elizabeth Windsor' might all refer to the same individual. In OWL, it must be explicitly stated that individuals are the same as each other, or different from each other. Figure 3.1 shows a representation of some individuals in a domain of people, nations, and relations - in this tutorial we represent individuals as diamonds.\n\nFigure 3.2: Representation of Properties\n\n\n\nIndividuals are also known as instances . Individuals can be referred to as instances of classes .\n\n", - "page_start": 7, - "page_end": 7, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "## Chapter 2 Requirements and the Protégé User Interface\n\nIn order to follow this tutorial, you must have Protégé 5, which is available from the Protégé website, 1 and some of the Protégé Plugins which will be described in more detail below. For now, just make sure you have the latest version of Protégé. At the time this is being written the latest version is 5.5 although the tutorial should work for later versions as well.\n\nThe Protégé user interface is divided up into a set of major tabs. These tabs can be seen in the Window>Tabs option. This option shows all the UI tabs that are currently loaded into the Protégé environment. Any tabs that are currently opened have a check mark next to them. To see a tab that is not visible just select it from the menu and it will be added to the top with the other major tabs and its menu item will now be checked. 
You can add additional major tabs to your environment by loading plugins. For example, when we load the SHACL4Protégé plugin the SHACLEditor will be added to the menu.\n\nEach major tab consists of various panes or as Protégé calls them views. Each view can be resized or closed using the icons in the top right corner of every view. The views can also be nested as sub-tabs within each major tab. When there could potentially be confusion between a tab that is a screen all its own (is under the Window>Tabs option) and a view that is a sub-tab we will call the screen tab a major tab. There are many views that are not in the default version of Protégé that can be added via the Window>Views option. The additional views are divided into various categories such as Window>Views>Individual views. Section 5.2 will show an example of adding a new view to a major tab.\n\n## Chapter 3 What are OWL Ontologies?\n\nOntologies are used to capture knowledge about some domain of interest. An ontology describes the concepts in the domain and also the relationships that hold between those concepts. Different ontology languages provide different facilities. The most recent development in standard ontology languages is OWL from the World Wide Web Consortium (W3C). A good primer on the basic concepts of OWL can be found at: https://www.w3.org/TR/owl2-primer/\n\nOWL makes it possible to describe concepts in an unambiguous manner based on set theory and logic. Complex concepts can be built up out of simpler concepts. The logical model allows the use of a reasoner which can check whether all of the statements and definitions in the ontology are mutually consistent and can also recognize which concepts fit under which definitions. The reasoner can therefore help to maintain the hierarchy correctly. This is particularly useful when dealing with cases where classes can have more than one parent. The reasoner can also infer additional information. 
For example, if two properties are inverses only one value needs to be asserted by the user and the inverse value will be automatically inferred by the reasoner.\n\n## 3.1 Components of OWL Ontologies\n\nAn OWL ontology consists of Classes, Properties, and Individuals. OWL ontologies are an implementation of Description Logic (DL) which is a decidable subset of First Order Logic. A class in OWL is a set, a property is a binary relation, and an individual is an element of a set. Other concepts from set theory are also implemented in OWL such as Disjoint sets, the Empty set ( owl:Nothing ), inverse", - "page_start": 6, - "page_end": 6, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "provide a language that is called Description Logic or DL for short. One of the key features of DL is that these superclass-subclass relationships (aka subsumption relationships) can be computed automatically by a reasoner - more on this later. Figure 3.3 shows a representation of some classes containing individuals classes are represented as ovals, like sets in Venn diagrams.\n\nIn OWL classes can be built up of descriptions that specify the conditions that must be satisfied by an individual for it to be a member of the class. 
How to formulate these descriptions will be explained as the tutorial progresses.", - "page_start": 9, - "page_end": 9, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - } - ] - }, - { - "references": { - "source_file": "sg246915.pdf", - "query": "Howcan I specify to Content Manager OnDemand to store the data on the server on which the program runs ?", - "target_page": 121, - "target_passage": "Local: Content Manager OnDemand stores data in a primary storage node on the server on which the data loading program runs", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "- /SM590000 If the data source is on a remote system, you can load the data into Content Manager OnDemand on the remote system and directly store the export data to the specified Content Manager OnDemand library and object server.\n - Or, if the data source is on a remote system, you also can upload the data to the specified Content Manager OnDemand server through FTP and then load the data on the selected Content Manager OnDemand system.", - "page_start": 326, - "page_end": 326, - "source_file": "sg246915.pdf" - }, - { - "text": "Figure 1-1 Content Manager OnDemand system overview\n\n\n\nContent Manager OnDemand Client programs provide authorized users with high-speed access to the archived data that runs on the user devices (workstations) that are attached to the network and communicate with the Content Manager OnDemand servers.\n\nA Content Manager OnDemand server consists of multiple components that can be installed on a single system or multiple systems. In all cases, the installation appears to the users as a single server. 
The installation and is administered by the Content Manager OnDemand administrator as a single system.\n\nThe Content Manager OnDemand server includes the following components:\n\n - /SM590000 A single library server: The library server manages a database that contains the information about the users of the system, and the reports and data that are stored on the system.\n - /SM590000 One or more object servers: The object servers manage the data on disk or tape storage devices.\n - /SM590000 One or more archive servers: The archive server stores the archived data objects. Depending on the operating system, the archive servers might be IBM Tivolifi Storage Manager, object access method (OAM), or Archive Storage Manager (ASM).\n\nThe library server and the object server can be packaged separately or as a single executable file.\n\n## Content Manager OnDemand Client programs\n\nContent Manager OnDemand Client programs operate on various environments, including personal computers that are running on Windows, web browsers, and mobile devices. By using the client program, users can search for and retrieve reports that are stored on the system. Specifically, users can construct queries and search for reports, retrieve documents from Content Manager OnDemand, view, print, and fax copies or pages of documents, and attach electronic notes to the pages of a document.", - "page_start": 28, - "page_end": 28, - "source_file": "sg246915.pdf" - }, - { - "text": "## 15.4.1 Base configuration in Content Manager OnDemand\n\nTo enable FTS in Content Manager OnDemand, FTS must be enabled for each of your Content Manager OnDemand instances. In Windows, you enable FTS for each of your Content Manager OnDemand instances in the Content Manager OnDemand Configurator by selecting the Enable Full Text Index and Search check box on the Server (Advanced Options) window.\n\nOn all other platforms, the ars.cfg file of your Content Manager OnDemand instance must be edited. 
You must add the following line:\n\nARS\\_SUPPORT\\_FULL\\_TEXT\\_INDEX=1", - "page_start": 365, - "page_end": 365, - "source_file": "sg246915.pdf" - }, - { - "text": "## 2.3 Implementing a Content Manager OnDemand instance on a multiplatform UNIX environment\n\nIn this section, we describe how to set up a single instance in a Content Manager OnDemand for a multiplatform UNIX environment. Always refer to the product documentation of your release for the specific steps to follow.\n\n## 2.3.1 Defining a single instance\n\nBy default, the initial instance on any library server is named archive . Creating a single instance can be summarized by the following steps:\n\n - 1. Creating a user\n - 2. Creating a DB2 instance\n - 3. Installing IBM Global Security Kit\n - 4. Setting up Secure Sockets Layer (SSL)\n - 5. Storing user IDs and passwords in a stash file\n - 6. Installing and configuring Tivoli Storage Manager\n - 7. Configuring the instance\n - 8. Creating a Content Manager OnDemand database\n - 9. Initializing the system log and system load facility\n\n## Creating a user\n\nNew installations (instances) of Content Manager OnDemand can be configured to run under a user other than the root user. If you plan to run an instance under a user other than root, complete the following steps:\n\n - 1. Create the user for the Content Manager OnDemand instance owner that is a member of the database owners group.\n - 2. Give the user administrator authority to the database.\n - 3. Set permissions for the cache storage file systems.\n - 4. Set permissions for the Content Manager OnDemand configuration and script files.\n - 5. Give the instance owner permission to write to the system console.\n - 6. 
Specify the instance owner in the ARS.INI file.\n\nIf you plan to run a distributed library and object server system, with one or more object servers on different workstations or nodes than the library server, you must also configure Content Manager OnDemand on the object servers.\n\nTo configure Content Manager OnDemand on the object servers, complete the following steps:", - "page_start": 42, - "page_end": 42, - "source_file": "sg246915.pdf" - }, - { - "text": "## 3.1 Report administration\n\nReport design and definition are key to a successful implementation of a Content Manager OnDemand system. Knowledge of the data that will be indexed, loaded, and retrieved, with knowledge of Content Manager OnDemand preferred practices, results in the most efficient and easy-to-use system possible. In this section, we consider the processes that are followed when you define a Content Manager OnDemand report. We present hints and tips to help in the design and implementation process.\n\nThe system components that are required for creating, retrieving, and viewing a Content Manager OnDemand report are a storage set, an application group, an application, and a folder. Optionally, cabinets might be used to organize and simplify folder access. These elements, in combination, allow the Content Manager OnDemand administrator to define and create a report definition that can then be used to index and load data into Content Manager OnDemand. Figure 3-1 illustrates the relationship of these elements in a typical Content Manager OnDemand system.\n\nFigure 3-1 Content Manager OnDemand system components relationship\n\n\n\nTo help you better understand how to perform report administration, we use the example company that is mentioned in 1.2.1, 'Background information of an example company' on page 6 with the Content Manager OnDemand Administrator Client running on Windows to create the required system components. 
We use the monthly credit card statements that are generated by AFinancial Co in our example. These statements are stored in a single application group in Content Manager OnDemand.\n\n## 3.1.1 Storage sets\n\nWhen you define a report, the first component to create is a storage set if one does not exist. A storage set is a named collection of primary storage nodes that support application groups with similar archive storage management requirements.", - "page_start": 69, - "page_end": 69, - "source_file": "sg246915.pdf" - }, - { - "text": "To configure Content Manager OnDemand on the object servers, complete the following steps:\n\n - 1. Create a group and user for the Content Manager OnDemand instance owner.\n - 2. Give ownership of the cache storage file systems that are listed in the ARS.CACHE file to the group and user for the Content Manager OnDemand instance owner.", - "page_start": 42, - "page_end": 42, - "source_file": "sg246915.pdf" - }, - { - "text": "## 6.5 Data security\n\nAccess to the Content Manager OnDemand data tables is secured through various methods. These methods include a secure data model, user authentication, SQL Query support, annotation security, and securing access to the Content Manager OnDemand commands. These methods are described in further detail in this section.\n\n## 6.5.1 Content Manager OnDemand object-owner model\n\nContent Manager OnDemand internal security is based on an object-owner model, which is illustrated in Figure 6-6. Details about the object-owner model are in the IBM Content Manager OnDemand for Multiplatforms, V9.5, Administration Guide , SC19-3352. In this context, a Content Manager OnDemand instance is an implementation of the library server, one or more object servers, the data access, and the storage model. The data access and storage are implemented in the form of objects. 
The following objects are all Content Manager OnDemand objects:", - "page_start": 161, - "page_end": 161, - "source_file": "sg246915.pdf" - }, - { - "text": "- - A network-connected workstation.\n - This situation simulates either a web server that connects to the Content Manager OnDemand server or a user that connects to the Content Manager OnDemand server.", - "page_start": 330, - "page_end": 330, - "source_file": "sg246915.pdf" - }, - { - "text": "Figure 17-1 Configuration setup for Content Federation Services for Content Manager OnDemand\n\n\n\nDisabling this configuration setting does not affect any existing documents that were placed on hold by Enterprise Records. Documents continue to be held until Content Manager OnDemand is notified by Enterprise Records that the documents must be deleted.\n\n## 17.2.2 Identify the application groups where Content Federation will be enabled\n\nFor each application group, specify whether you want to enable FileNet P8 Content Federation Services for Content Manager OnDemand by using the Content Manager OnDemand Administrator Client, as shown in Figure 17-2 on page 369.", - "page_start": 391, - "page_end": 391, - "source_file": "sg246915.pdf" - }, - { - "text": "- /SM590000 The recommended expiration type for Content Manager OnDemand is Load . 
Content Manager OnDemand supports the expiration type of Load with the use of ARSEXOAM for expiring the indexes in Content Manager OnDemand.\n - /SM590000 Storage Manager expiration is incompatible with Enhanced Retention Manager and Content Federation Services for Content Manager OnDemand.", - "page_start": 260, - "page_end": 260, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "sg246915.pdf", - "query": "Does the XML indexer of Content Manager OnDemand support large objects ?", - "target_page": 188, - "target_passage": "No", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## 15.4.1 Base configuration in Content Manager OnDemand\n\nTo enable FTS in Content Manager OnDemand, FTS must be enabled for each of your Content Manager OnDemand instances. In Windows, you enable FTS for each of your Content Manager OnDemand instances in the Content Manager OnDemand Configurator by selecting the Enable Full Text Index and Search check box on the Server (Advanced Options) window.\n\nOn all other platforms, the ars.cfg file of your Content Manager OnDemand instance must be edited. You must add the following line:\n\nARS\\_SUPPORT\\_FULL\\_TEXT\\_INDEX=1", - "page_start": 365, - "page_end": 365, - "source_file": "sg246915.pdf" - }, - { - "text": "## 6.5 Data security\n\nAccess to the Content Manager OnDemand data tables is secured through various methods. These methods include a secure data model, user authentication, SQL Query support, annotation security, and securing access to the Content Manager OnDemand commands. These methods are described in further detail in this section.\n\n## 6.5.1 Content Manager OnDemand object-owner model\n\nContent Manager OnDemand internal security is based on an object-owner model, which is illustrated in Figure 6-6. Details about the object-owner model are in the IBM Content Manager OnDemand for Multiplatforms, V9.5, Administration Guide , SC19-3352. 
In this context, a Content Manager OnDemand instance is an implementation of the library server, one or more object servers, the data access, and the storage model. The data access and storage are implemented in the form of objects. The following objects are all Content Manager OnDemand objects:", - "page_start": 161, - "page_end": 161, - "source_file": "sg246915.pdf" - }, - { - "text": "Special thanks to the following people for their content contribution:\n\nBen Boltz is a Senior Software Engineer. He has 32 years of experience in the Software Industry. He has worked on Content Manager OnDemand for Multiplatforms for over 20 years.\n\nDarrell Bryant joined IBM as a manufacturing engineer and worked as a Systems Engineer who specialized in S/36 and AS/400 systems. In 2000, Darrell joined the OnDemand team. He has performed a mix of activities, including services, education, support, and testing. Darrell is now the lead tester for OnDemand for i. He also develops and teaches workshops to clients and partners. He is the editor of the OnDemand Newsletter.\n\nNelson Chen is a Software Developer with Content Manager OnDemand. He has over 30 years of experience in software development, among them 27 years at IBM and last 20 years in OnDemand. His areas of expertise include ArsXML, Install, and Configurator.\n\nTrang Kim Duong is a Software Developer with Content Manager OnDemand. Among her 17 years of experience in software development, the last 11 years were in Content Manager OnDemand. Trang's areas of expertise include workflow for Space Vehicle Design, Content Manager OnDemand Report Distribution, Exporter utility for Content Federation Services for Content Manager OnDemand (CFS-CMOD), and the Content Manager OnDemand back-end database component.\n\nHubert Hwang is a Software Developer with Content Manager OnDemand. He is a Certified Solutions Expert for Content Manager OnDemand with over 10 years of experience with the product. 
His areas of expertise include the Content Manager OnDemand Web Enablement Kit Java application programming interfaces (APIs), Content Navigator, and software test automation. He has extensive experience troubleshooting all aspects of the product. He has authored over 200 technotes on topics, such as migration, data collection, and troubleshooting guides, for Content Manager OnDemand.\n\nVicki Miller is a Senior Certified Client Technical Professional at IBM, working in the technology industry for 33 years with a focus on Enterprise Content Manager (ECM) since 1999. Her area of expertise in the realm of ECM is focused on solution sales consulting and technical account leadership that revolves around the management, processing, and analysis of any type of electronic content to help organizations optimize and protect their business. Vicki has spoken at IBM conferences on critical ECM topics, contributed to the development of technical publications, and led groups within IBM and client organizations to drive the enhancement ECM solutions and products.\n\nPaula Muir is a Software Developer with Content Manager OnDemand for Multiplatforms in Boulder, Colorado. Her areas of expertise include indexing and loading data, and AFP and PDF architecture.\n\nNancy O'Brian started at IBM as an applications programmer, then transferred to a branch office where she performed her first Content Manager OnDemand (then known as R/DARS) implementation. After many more implementation service engagements, she joined the Content Manager OnDemand development team and continued to perform implementation services, training, support, testing, and technical writing. She currently focuses primarily on technical writing and testing.\n\nSandi Pond is a Software Developer with Content Manager OnDemand for Multiplatforms. She has 17 years of experience with Content Manager OnDemand, working in various areas of the development team. 
Her area of expertise is the OnDemand Web Enablement Kit (ODWEK).\n\nDebbie Wagner is a Senior Software Engineer at IBM and has over 22 years of experience in content management, specifically, Content Manager OnDemand for Multiplatforms. Her areas", - "page_start": 18, - "page_end": 18, - "source_file": "sg246915.pdf" - }, - { - "text": "\n\nChapter 12.\n\n## Scalability, reliability, and availability architectures\n\nIBM Content Manager OnDemand (Content Manager OnDemand) is a lightweight process, that is, the Content Manager OnDemand code itself does not require extensive system resources to perform the functions that are required of it. Content Manager OnDemand installations scale to handle both large quantities of data and many users. The total quantity of data that is stored or retrieved at any time is the main contributor to the resource consumption on the server. This chapter focuses on the scalability, reliability, and availability of Content Manager OnDemand systems.\n\nIn this chapter, we cover the following topics:", - "page_start": 306, - "page_end": 306, - "source_file": "sg246915.pdf" - }, - { - "text": "The OS/390 indexer supports three exits to assist with indexing and loading documents into Content Manager OnDemand:", - "page_start": 265, - "page_end": 265, - "source_file": "sg246915.pdf" - }, - { - "text": "## 15.1 Introduction to full text search in Content Manager OnDemand\n\nContent Manager OnDemand users primarily search on the metadata (extracted index values) that is associated with documents. By using FTS, you can intelligently search through actual document content. To enable FTS, the documents are first parsed and an index is built. This index can then be queried by a full text engine.\n\nThe FTS feature in Content Manager OnDemand comes with a new server, the Full Text Search Server (FTS Server), which handles the text extraction, indexing, and searching of the indexed data. 
This new server offloads the processing of full text data to a machine other than your Content Manager OnDemand library and object servers.\n\nThe full text engine is the same search services engine that is used by other IBM products, such as DB2 or IBM FileNet P8. It is based on the Lucene engine and allows advanced and flexible queries. Users can perform wildcard searches, fuzzy (or similar) searches, proximity searches, Boolean searches, and other complex queries.\n\nThe full text feature can handle many formats, including Microsoft Office documents, XML files, and typical Content Manager OnDemand formats, such as AFP, Line Data, and Adobe Portable Document File (PDF).\n\nThe FTS feature supports full text indexing of both new and existing data. For new data, the FTS Server is configured to index the newly loaded reports by using the Administrator Client. For existing data, indexing is invoked by using the Content Manager OnDemand command-line utilities or the Content Manager OnDemand Web Enablement Kit (ODWEK) Java application programming interface (API).\n\nFTS is enabled through the Content Manager OnDemand folder and allows all clients to take advantage of full text queries after the server configuration is complete. Several new Content Manager OnDemand folder field types are defined in support of FTS. Search score, highlight, and summary are returned, aiding the user in determining whether the document is a good match.\n\nNote: Before the release of the FTS option in Content Manager OnDemand, a document content-based search was possible by using the server-based text search functionality. However, this functionality is limited to AFP, Line, SCS, and PDF documents. It does not use an index, but instead the server retrieves the documents and then scans those documents for the index values. This method limits the capabilities of the functions to exact matches of a query string and might cause workload problems on the Content Manager OnDemand server. 
FTS eliminates these issues and limitations by introducing new processing components.\n\n## 15.2 Full text search architecture in Content Manager OnDemand\n\nThe process of full text indexing can be lengthy in terms of time and processor consumption.\n\nTherefore, an integration architecture, which decouples the full text engine from the Content Manager OnDemand server and keeps the different workloads separate, is required.\n\nThe components and their basic communication are shown in Figure 15-1 on page 337.", - "page_start": 359, - "page_end": 359, - "source_file": "sg246915.pdf" - }, - { - "text": "## Preface\n\nThis IBMfi Redbooksfi publication provides a practical guide to the design, installation, configuration, and maintenance of IBM Content Manager OnDemand Version 9.5.\n\nContent Manager OnDemand manages the high-volume storage and retrieval of electronic statements and provides efficient enterprise report management. Content Manager OnDemand transforms formatted computer output and printed reports, such as statements and invoices, into electronic information for easy report management. Content Manager OnDemand helps eliminate costly, high-volume print output by capturing, indexing, archiving, and presenting electronic information for improved customer service.\n\nThis publication covers the key areas of Content Manager OnDemand, some of which might not be known to the Content Manager OnDemand community or are misunderstood. The book covers various topics, including basic information in administration, database structure, storage management, and security. In addition, the book covers data indexing, loading, conversion, and expiration. Other topics include user exits, performance, retention management, records management, and many more.\n\nBecause many other resources are available that address subjects on different platforms, this publication is not intended as a comprehensive guide for Content Manager OnDemand. 
Rather, it is intended to complement the existing Content Manager OnDemand documentation and provide insight into the issues that might be encountered in the setup and use of Content Manager OnDemand. This book is intended for individuals who need to design, install, configure, and maintain Content Manager OnDemand.\n\nThis book was produced by a team of specialists from around the world working at the International Technical Support Organization, San Jose Center.\n\nWei-Dong Zhu is a Content Management Project Leader with the ITSO at IBM US, California. She is a Certified Solution Designer for IBM Content Manager. She has more than 10 years of software development experience in accounting, image workflow processing, and digital media distribution (DMD). Her development work in one of the DMD solutions contributed to a first-time ever win for IBM of an Emmy award in 2005. Jackie joined IBM in 1996. She holds a Master of Science degree in Computer Science from the University of Southern California.\n\nJim Ilardi is a Consulting Client Solution Professional in Carmel, New York. Jim has over 30 years of experience in IT and over 18 years working with Content Manager OnDemand. Jim started with IBM in Lab Services installing many OnDemand systems around the world. Today, Jim works in Pre-Sales Technical Sales covering Enterprise Content Manager in the New York Area.\n\nDeborah Matamoros is a Software Developer with Content Manager OnDemand. She has 26 years of development experience at IBM, with the last seven of those years in OnDemand. Her area of expertise is Report Distribution. Debbie holds a degree in Computer Science from the University of Oregon and currently resides in Park City, Utah.\n\n## Authors", - "page_start": 16, - "page_end": 16, - "source_file": "sg246915.pdf" - }, - { - "text": "In XML, the definition and syntax of the markup language are defined in a schema file . 
For the Content Manager OnDemand XML batch program, the schema file is called ondemand.xsd . It contains the definitions for the Content Manager OnDemand objects: users, groups, applications, application groups, storage sets, folders, printers, and others. Each Content Manager OnDemand object definition contains one or more child objects. For example, a user object has a child object for permissions, and a group object has a child object for users in the group. The schema file ( ondemand.xsd ) must not be changed in any way by the user.\n\nThe input XML file for the XML batch program is parsed to ensure that it is valid according to the schema file. Each object within the file is examined to ensure that the attributes are valid according to the object type. The XML batch program generates XML when Content Manager OnDemand objects are exported. The XML that is generated can be used as an input for the subsequent arsxml command.\n\nExample 3-1 shows a sample of the file exportusers.xml from the XML samples directory. You can change the names of the users to the users that you want to export.\n\nExample 3-1 Sample XML input file for exporting users\n\n```\n \n```\n\nYou can export objects by running arsxml export . The following command exports the users that are listed in the exportuser.xml file, from the server odserver1, to an output file named users.xml :\n\narsxml export -u oduser1 -p /my/stash/pwfile -h odserver1 -i exportusers.xml -o users.xml -v\n\nYou can import objects by running arsxml add . The following command imports the users from the users.xml file (which is generated from the previous command) to server odserver2:\n\narsxml add -u oduser2 -p /my/stash/pwfile -h odserver2 -i users.xml -v\n\nYou can delete objects by running arsxml delete . 
The following command deletes the users from odserver2, based on the users that are listed in the users.xml file:\n\narsxml delete -u oduser2 -p /my/stash/pwfile -h odserver2 -i users.xml -v\n\nFor deletion, you are prompted before each object in the XML is deleted, unless the -x parameter is used.", - "page_start": 96, - "page_end": 96, - "source_file": "sg246915.pdf" - }, - { - "text": "\n\nChapter 1.\n\n## Overview and concepts\n\nIn this chapter, we provide an overview of the IBM Content Manager OnDemand (Content Manager OnDemand) system. We describe how Content Manager OnDemand manages reports and index data. We also provide information to help you better understand how Content Manager OnDemand works.\n\nIn this chapter, we cover the following topics:\n\n - /SM590000 Overview of Content Manager OnDemand\n - /SM590000 Content Manager OnDemand concepts\n - /SM590000 Content Manager OnDemand server and its components\n\n1", - "page_start": 26, - "page_end": 26, - "source_file": "sg246915.pdf" - }, - { - "text": "## 8.3.1 Content Manager OnDemand Web Enablement Kit\n\nODWEK provides a Java API to access Content Manager OnDemand servers and their documents. It is the strategic client API that provides the largest feature set of any Content Manager OnDemand API. It is used by web clients, such as Content Navigator or WEBi, by abstraction layers, such as Information Integrator, or by API components, such as CMIS.\n\nThe ODWEK Java API and its use to develop Content Manager OnDemand clients are described in detail in IBM Content Manager OnDemand Web Enablement Kit Java APIs: The Basics and Beyond , SG24-7646. This section covers only a basic overview and focuses on client considerations about ODWEK. Developers are encouraged to read the referenced book before they plan a client development that is based on ODWEK.\n\n## Scope\n\nODWEK is a Content Manager OnDemand component that can be used by all Content Manager OnDemand customers. 
It is focused on typical client use cases, such as searching for and accessing data that is stored in a Content Manager OnDemand archive. It also has web viewers, such as the line data applet and Content Manager OnDemand AFP viewer.", - "page_start": 225, - "page_end": 225, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "sg246915.pdf", - "query": "Considering storage efficiency, should I store my AFP documents as PDF to distribute them over the web ?", - "target_page": 232, - "target_passage": "If a requirement exists to present AFP documents in the Portable Document Format (PDF) format over the web, from a storage perspective, it is more efficient to store the documents in their native format and then convert them to PDF at retrieval tim", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## 9.1.2 When to convert data streams\n\nThe decision of when to convert data streams relies mainly on the use of the system. Typically, converting data at load time requires more time to process the print stream file, and converting data at retrieval time causes the user retrieval to be a little slower. The decision might depend on how many documents are retrieved, compared to how many documents are loaded daily. It might also depend on legal requirements about the format of stored data.\n\n## AFP to PDF\n\nIf a requirement exists to present AFP documents in the Portable Document Format (PDF) format over the web, from a storage perspective, it is more efficient to store the documents in their native format and then convert them to PDF at retrieval time. 
AFP documents are stored more efficiently than PDF documents.\n\nThe PDF print stream, when it is divided into separate customer statements, is larger than AFP because each statement contains its own set of structures that are required by the PDF architecture to define a document.\n\nElapsed time and processor time are also essential factors in the decision-making process. The amount of time (elapsed and CPU) that is needed to convert the document depends on how large the document is and how many resources or fonts are associated with the document.", - "page_start": 231, - "page_end": 231, - "source_file": "sg246915.pdf" - }, - { - "text": "## 7.2 Getting started with PDF indexing\n\nPDF is a standard that is specified by Adobe Systems, Incorporated, for the electronic distribution of documents. PDF files are compact. They can be distributed globally through email, the web, intranets, or CD-ROM, and viewed with Adobe Reader.\n\nPDF is a data type or file format that is platform (hardware, operating system)-independent. A PDF file contains a complete PDF document that is composed of text, graphics, and the resources that are referenced by that document.\n\nTwo PDF file layouts are possible:\n\n - /SM590000 Non-Linear (not 'optimized')\n - This file layout is optimized for space savings. Storing a PDF file by using a Non-Linear layout consumes less disk space than storing the same PDF file linearly. It is slower to access or display this type of layout because portions of the data that is required to assemble pages of the document are scattered throughout the PDF file, so the whole PDF file must be downloaded and accessed before the file can be displayed.\n - /SM590000 Linear ('optimized' or 'web optimized')\n\nIn this file format, the PDF file is created in a linear (in page order) fashion. 
This file format allows the PDF viewer to start displaying the PDF document pages when they are downloading without waiting for the whole PDF file to be downloaded.", - "page_start": 188, - "page_end": 188, - "source_file": "sg246915.pdf" - }, - { - "text": "## 13.4.1 PDF data\n\nPortable Document Format (PDF) data is an increasingly common data type that can be archived within Content Manager OnDemand. The following key advantages are available by using this data type as a document format:\n\n - /SM590000 It is a read-only format that does not require any external resources, such as images or fonts. It is self-contained.\n - /SM590000 The viewer for PDF can be downloaded at no charge from the Adobe website and the browser plug-ins for PDF are also available at no charge.\n\nDuring PDF document creation, resources, such as images and custom fonts, are placed in the data stream once and then referenced many times from within the PDF file. If a large report is produced from many small documents, that report requires only one copy of the resources.\n\nHowever, when the PDF is indexed, the PDF Indexer creates many PDF documents from the input file. Each of these documents requires a certain number of PDF structures, which define a document. These documents are concatenated together in the .out file, and then loaded into Content Manager OnDemand as separate documents. Because the resources are extracted and placed into a separate resource file, they are not included in each document. For an illustration of the process, see Figure 13-3.\n\nFigure 13-3 PDF indexing\n\n", - "page_start": 331, - "page_end": 331, - "source_file": "sg246915.pdf" - }, - { - "text": "- - Resource collection for AFP and Portable Document Format (PDF).\n - - Document compressibility, which is a function of document data complexity and data type. 
Text (such as Line Data or SCS) is typically more compressible than AFP, which is typically more compressible than PDF.", - "page_start": 325, - "page_end": 325, - "source_file": "sg246915.pdf" - }, - { - "text": "- 4. Segments the report into 'documents'.\n - 5. Compresses the documents.\n - 6. Stores the compressed documents in storage objects (10 MB by default).", - "page_start": 324, - "page_end": 324, - "source_file": "sg246915.pdf" - }, - { - "text": "On Multiplatforms and z/OS, you can aggregate documents that are loaded from Content Manager OnDemand Web Enablement Kit (ODWEK) before you store them in the archive. The document is stored to cache where it is appended to the storage object until the object reaches 10 MB (defined storage object size), at which point it is migrated to a storage manager, such as Tivoli Storage Manager. For more information about this topic, see the following website:\n\nhttp://www.ibm.com/support/docview.wss?uid=swg21587507", - "page_start": 310, - "page_end": 310, - "source_file": "sg246915.pdf" - }, - { - "text": "## 5.1 Content Manager OnDemand cache storage\n\nContent Manager OnDemand has a built-in cache storage management that is used to store documents on locally mounted disk subsystems. These subsystems can be network-attached storage (NAS), storage area networks (SAN), or any type of locally addressable disk that is available to the supported operating system. The cache storage manager uses a list of directories or file systems that are available to determine where space is available for storing and maintaining documents.\n\nEach Content Manager OnDemand object server in the system has a defined set of cache storage devices on which you can maintain the report data for a period to provide the fastest access times for system users.\n\nCertain implementations of Content Manager OnDemand use an all cache system to maintain data for its full retention. Other implementations store to both cache and archive storage. 
Other implementations store only to the archive.\n\nYou can configure Content Manager OnDemand so that at load time one of the following methods of data storage occurs:\n\n - /SM590000 Data is stored in cache and later is automatically migrated from the cache subsystem to an archive system.\n - /SM590000 Data is stored to both local cache and archive storage.\n - /SM590000 Data is stored directly to archive storage.\n\nThese options are described in the following sections.\n\n## 5.2 IBM Tivoli Storage Manager for Multiplatforms\n\nContent Manager OnDemand for Multiplatforms integrates with Tivoli Storage Manager and a license for this usage is included with Content Manager OnDemand. Within Tivoli Storage Manager, documents can be archived on various media, such as disk, optical, tape, and content-addressable storage (CAS) devices. These archive storage devices must be defined to the Tivoli Storage Manager system. Content Manager OnDemand uses the archive application programming interface (API) that is provided by Tivoli Storage Manager to store and retrieve documents.\n\nTo store application group data to the Tivoli Storage Manager ASM, the application group must be configured within Content Manager OnDemand to a defined storage set. This storage set contains a storage node that is defined within Tivoli Storage Manager and points to a specific storage area or media.\n\nWith the application group definition, you can specify whether and when the data is migrated to archive storage. 
For example, you can specify that the data will be migrated to archive storage when the document is originally loaded into the system, or that the data migration occurs the next time that the migration maintenance process is run, or that the data migration occurs after a certain number of days pass from the date that the data was loaded; or never.", - "page_start": 113, - "page_end": 113, - "source_file": "sg246915.pdf" - }, - { - "text": "- /SM590000 Document: With this expiration type, a document at a time is deleted from the application group. Data that is stored in archive storage is deleted by the storage manager based on the archive expiration date. Storing documents with an expiration type of Document causes the expiration process to search through every document in the segment to determine whether the expiration date was reached, which results in long processing times.\n\nWhen the arsmaint expiration process is run, data is deleted only from the application group if the upper threshold for the size of the cache storage is reached. By default, the cache threshold is 80%. A lower threshold can be forced by the expiration command parameters. Unless a reason exists to clear cache, leaving data in cache improves retrieval performance.\n\n## 5.2.6 Advanced application group storage management\n\nBy using the advanced storage management settings (Figure 5-11), you can adjust the size of the load object and determine when report data, indexes, and resources are migrated to archive storage.\n\nFigure 5-11 Advanced application group storage management\n\n\n\n## Object Size\n\nThe Object Size parameter determines the size of a storage object in kilobytes (KB). Content Manager OnDemand, by default, segments and compresses stored data into 10 MB storage objects. The default of 10 MB is the most commonly used object size value.\n\nImportant: Be careful when you change the value for Object Size. 
Setting the value too small or too large can adversely affect load performance. However, increasing this value might be necessary if you load large files and run out of Object IDs during the loading process.\n\nNote: The object size that is defined here must be equal to or larger than the size of the compressed storage objects that are defined in any application that is assigned to the application group.", - "page_start": 126, - "page_end": 126, - "source_file": "sg246915.pdf" - }, - { - "text": "Selecting a cache-only storage set requires the creation of backup and data management systems that are external to the Content Manager OnDemand system.\n\nCache-only storage: If the storage set contains cache-only storage nodes, ensure that the Cache Data value and the Life of Data and Indexes value are the same. Otherwise, the add or update operation cannot be completed.\n\n## Scenario 2: Cache, then migration to storage, and then expiration\n\nIn this scenario, the storage object is first stored to cache for a short period, after which it is migrated to a storage manager for long-term storage.", - "page_start": 245, - "page_end": 245, - "source_file": "sg246915.pdf" - }, - { - "text": "## 7.2.1 Limitations\n\nThe maximum input file size that is supported by PDF Indexer is 4 GB. The amount of data that can be processed from an input file is also limited by the amount of memory that is available on the server on which you are running the PDF Indexer. The maximum size of a single document within the input file that can be loaded into Content Manager OnDemand is 2 GB; however, we suggest that the size of a single PDF document does not exceed 50 MB.\n\nSecure PDF documents are not supported. PDF Digital Signatures are not supported. If a PDF document contains a digital signature, after indexing, the .out file does not contain the digital signature. 
To load a file that contains a PDF Digital Signature, create a generic index file for it, and load the file as one document.\n\n## 7.3 Performance considerations\n\nThe best performance of the PDF Indexer is on the Windows platform. For the preferred performance practices, see 13.4.1, 'PDF data' on page 308.\n\n## 7.3.1 PDF fonts and output file size\n\nThe fonts that are used in a PDF document are one of the factors that determines the indexing's output file size.\n\n## The base 14 Type 1 fonts\n\nThe base 14 Type 1 fonts are a core set of fonts that are always available to the Acrobat program. Because they are available on the system, they are not embedded in the document. Therefore, documents that are created with these fonts are more compact. The base 14 fonts are listed:", - "page_start": 189, - "page_end": 189, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20200438_en.pdf", - "query": "Where can I consult a summary of the impact of the International tax compliance regulations ?", - "target_page": 3, - "target_passage": "A Tax Information and Impact Note covering the International Tax Compliance Regulations 2015 was published on 18th March 2015 and is available on the HMRC website at https://www.gov.uk/government/publications/tax-administration-regulations-to-implement-the- uks-automatic-exchange-of-information-agreements", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "## S T A T U T O R Y I N S T R U M E N T S\n\n## 2020 No. 438\n\n## TAXES\n\n## The International Tax Compliance (Amendment) Regulations 2020\n\nMade\n\n-\n\n-\n\n-\n\n-\n\n20th April 2020\n\nLaid before the House of Commons\n\n21st April 2020\n\nComing into force\n\n- -\n\n13th May 2020\n\nThe Treasury make these Regulations in exercise of the powers conferred by section 222 of the Finance Act 2013( a ):\n\n## Citation and commencement\n\n- 1. 
These Regulations may be cited as the International Tax Compliance (Amendment) Regulations 2020 and come into force on 13th May 2020.\n\n## Amendments to the International Tax Compliance Regulations 2015\n\n- 2. -(1) The International Tax Compliance Regulations 2015( b ) are amended as follows.\n- (2) In regulation 1(3)(b)(i), for '16th May 2019' substitute '19th April 2020'( c ).\n- (3) In regulation 3(4A)(a), at the beginning insert 'subject to regulation 24(3)'.\n- (4) In regulation 24-\n- (a) in the table in paragraph (2), in the column headed 'the CRS'-\n- (i) at the beginning of the entry for 'new account' insert 'subject to paragraph (3)', and\n- (ii) at the beginning of the entry for 'pre-existing account' insert 'subject to regulation 3(4A)(a) and paragraph (3)', and\n- (b) after paragraph (2) insert-\n- '(3) In respect of the accounts listed in paragraph (4)-", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20200438_en.pdf" - }, - { - "text": "## EXPLANATORY NOTE\n\n(This note is not part of the Regulations)\n\nThese Regulations make amendments to secondary legislation relating to special educational needs and disability in order to provide exceptions to time limits set out in that legislation where they cannot be met because of a reason relating to the incidence or transmission of coronavirus.\n\nRegulation 2 contains review and expiry provisions. The Secretary of State is required to review the effectiveness of the Regulations during the period in which they have effect. 
The Regulations cease to have effect on 25th September 2020.\n\nRegulations 3 to 14 amend the Special Educational Needs and Disability Regulations 2014 ('the SEND Regulations 2014').\n\nRegulation 5 inserts a glossing provision into the SEND Regulations 2014 which relaxes certain requirements in those Regulations for actions to be taken within specified time limits where it is not reasonably practicable for a person to meet those requirements for a reason relating to the incidence or transmission of coronavirus. Instead, any such requirement is to be read as a requirement for such action to be taken as soon as reasonably practicable.\n\nRegulations 6 to 14 make textual amendments to the SEND Regulations 2014 to relax time limits.\n\nRegulations 15 to 17 amend the Special Educational Needs (Personal Budgets) Regulations 2014 ('the Personal Budgets Regulations 2014').\n\nRegulation 17 inserts a similar glossing provision into the Personal Budgets Regulations 2014 as regulation 5 does in respect of the SEND Regulations 2014.\n\nRegulations 18 to 27 amend the Special Educational Needs and Disability (Detained Persons) Regulations 2015 ('the Detained Persons Regulations 2015').\n\nRegulation 20 inserts a glossing provision into the Detained Persons Regulations 2015 similar to the ones in regulations 5 and 17 in relation to the SEND Regulations 2014 and the Personal Budgets Regulations 2014 respectively.\n\nRegulations 21 to 27 make textual amendments to the Detained Persons Regulations 2015 to relax time limits.\n\nRegulations 28 to 30 amend the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017 ('the First-tier Tribunal Regulations 2017').\n\nRegulation 30 inserts a glossing provision into the First-tier Tribunal Regulations 2017 similar to those in regulations 5, 17 and 20.\n\nAn impact assessment has not been produced for this instrument as this is a temporary, emergency measure and no significant impact on business, 
charities or voluntary bodies is foreseen.\n\nAn Explanatory Memorandum is published alongside this instrument on www.legislation.gov.uk.\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 5, - "page_end": 5, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "accounts so that these terms are defined by reference to the date that those accounts ceased to be excluded accounts. Regulation 2(3) and (4)(a) make consequential amendments.\n\nRegulation 3 makes a transitional provision for the calendar year 2020 in relation to accounts which were previously excluded accounts.\n\nA Tax Information and Impact Note covering the International Tax Compliance Regulations 2015 was published on 18th March 2015 and is available on the HMRC website at https://www.gov.uk/government/publications/tax-administration-regulations-to-implement-theuks-automatic-exchange-of-information-agreements. It remains an accurate summary of the impacts that apply to this instrument.\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20200438_en.pdf" - }, - { - "text": "- For this Status Report, SLIC evaluations of the labour inspection systems in Member States were not taken into account, because most of them are confidential.\n - 351 DG Employment, Social Affairs and Inclusion, 2015: Evaluation of the Practical Implementation of the EU Occupational Safety and Health (OSH) Directives in EU Member States (p. 89).\n - 352 Ibid., p. 105. See also p. 
89: 'The Directives represent a mix of a goal-oriented approach - strongly expressed in the Framework Directive, but also mirrored in the individual Directives - and a prescriptive approach - which is, for instance, seen in the very detailed and specific requirements included in the annexes of some Directives.\n - 353 Ibid., p. 67.\n - 354 Ibid., p. 94.\n - 355 Graveling, 2018: Transposition, implementation and enforcement of EU OSH legislation - Thematic Discussion Paper\n - 356 EU-OSHA, 2021: Summary - Improving compliance with occupational safety and health regulations: an overarching review (p. 4).\n - 357 The authors explain the difference between 'substantive and rule compliance as follows: '... 'substantive compliance', which requires compliance with the collective goals underpinning the regulatory scheme (better OSH practice); and 'rule compliance', which envisages compliance with the content of legal standards only ' (p. 11). 358 EU-OSHA, 2021: Improving compliance with occupational safety and health regulations: an overarching review (p. 43).", - "page_start": 153, - "page_end": 153, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "\n\n## 6.3 Guidance and support\n\nSupervision is only one approach to implementing legislation. As mentioned, supervision by state authorities can only reach a small share of all enterprises, particularly not the many small ones and the self-employed. In addition to supervision and control, a broad variety of prevention-supporting activities has been developed during the past decades. 388\n\nThe authors of EU-OSHA's 'Supporting compliance' reports state a strong increase in 'compliance promotion strategies'. They write: 'The regulatory changes have been matched in more recent times by an increasingly diverse set of compliance promotion strategies. 
Not only has public regulation sought to engage and encourage duty holders in the pursuit of forms of regulated self-regulation, but … the discourse on regulation itself has sought a far broader understanding of its meaning and the role of the private and public regulatory actors and processes potentially involved in both defining and securing compliance.' 389\n\nOne important type of means are guidance and support tools for enterprises and workers to extend the reach and impact of legislation. Labour inspectorates and other state institutions produce these tools either themselves or in collaboration with social partners or professional organisations.\n\nProactive research and preventive guidelines , particularly in situations of new risks, have become a quite usual preventive activity (e.g. on nanotechnology, or on some developments in digitalisation). For very complex regulations, like REACH, national institutions installed helpdesks. European institutions also publish such guidance documents for EU-wide use, for example, the guidance on health and safety in agriculture, 390 the guidance regarding the implementation of the Machinery directive, 391 the guidance documents of EU-OSHA on COVID-19 392 and the European Commission guidance documents on seasonal workers and COVID-19. 393 Practically all EU and international OSH institutions published guidance documents on how to identify and reduce psychosocial risk at workplaces. 394\n\nA large amount of OSH guidance already exists in different formats, 395 starting with classical written guidance documents, increasingly complemented by audio-visual and interactive tools. 
EU-OSHA covers a large variety of workplaces with its digital risk assessment tool OiRA (Online interactive Risk", - "page_start": 124, - "page_end": 124, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "compliance harder to verify and, in the absence of that verification procedure, harder to enforce (especially in OSH cultures with a history of the prescriptive approach).' 353\n\nRegarding the level of compliance with the legal goals or prescriptions , the study authors assess it as 'moderate to good.' They see major differences depending on the topic and the size of the enterprises:\n\n'However, the collected data shows that overall compliance with the OSH acquis across the EU and across establishment sizes is moderate to good. There is no indication that compliance is measurably higher in the public sector compared to the private sector. Yet, in reality, compliance varies significantly from directive to directive, from MS to MS and across establishment sizes.\n\nMicro establishments: Cannot be assessed (limited evidence points to poor overall quantitative compliance)\n\n - · 10 to 19 employees: Poor overall quantitative compliance\n - · 20 to 49 employees: Moderate overall quantitative compliance\n - · 50 to 249 employees: Good overall quantitative compliance\n - · 250 to 499 employees: Good overall quantitative compliance\n - · 500+ employees: Very good overall quantitative compliance'. 354\n\nIn 2018, DG EMPL organised a peer review on 'The efficient transposition, implementation and enforcement of EU OSH legislation' for each EU Member State. 
355 The overall conclusion is positive but refers to the difference between formal (paper) compliance and 'real improvements' :\n\n'Although not uniform across employers (with evidence that smaller businesses in particular find some of the demands challenging and difficult to implement) indications are also that the transposed legislation is being implemented within workplaces. However, there are indications that the fact of implementation is not necessarily a true indicator of the quality of that action, with suggestions that 'compliance' is to some extent a paper exercise and is not always reflected in real improvements in working environments.'\n\nThe authors of EU-OSHA's 'Supporting compliance' report 356 note the same difference, using the terms 'substantive' versus 'rule compliance' . 357 This report and underlying literature review have specifically analysed reasons and context for compliance and non-compliance. They analysed the influence of:\n\n - · social norms and social reporting strategies, and corporate social responsibility;\n - · economic incentives and the business case for OSH;\n - · the role of supply chain relations in supporting OSH;\n - · prevention services; and\n - · strategies and practices adopted by OSH regulators.", - "page_start": 121, - "page_end": 121, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- (3) In regulation 4ZA-\n - (a) in the heading, for 'the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021';\n - (b) in paragraph (1)(a), for 'regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the 2020 Regulations')' substitute 'regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021 ('the International Travel and 
Operator Liability Regulations')';\n - (c) in paragraph (1)(c), for 'paragraph 7(1)(f) of Schedule 2C to the 2020 Regulations' substitute 'paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations';\n - (d) in paragraph (3), for 'paragraph 7(1)(f) of Schedule 2C to the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations'.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "18. In determining how many fixed penalty notices a person ('P') has received for the purposes of paragraph 8 (breach of requirement in regulation 9 to self-isolate etc), if P received more than one fixed penalty notice for that offence before 2nd October 2020, only one of those notices may be taken into account.\n\n## SCHEDULE 15\n\nRegulation 26(2)\n\n## Consequential Amendments\n\n1. -(1) The Health Protection (Notification) Regulations 2010( a ) are amended as follows.\n\n(2) In regulation 4(3D)(b), for 'regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.", - "page_start": 87, - "page_end": 87, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## Transitional provision\n\n - 1. Passenger information provided before 4.00 a.m. on 17th May 2021 by a person pursuant to regulation 3 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the 2020 Regulations') in advance of arrival in England is treated as passenger information provided for the purposes of these Regulations where the person arrives in England on or after that date.\n - 2. 
Confirmation given by the Foreign, Commonwealth and Development Office that a person is not required to comply with regulation 3B of the 2020 Regulations is treated as confirmation that the person is not required to comply with regulation 6 of these Regulations where the person arrives in England on or after 4.00 a.m. on 17th May 2021.\n - 3. A designation by the Secretary of State of a person as an authorised person under regulation 5(7) of the 2020 Regulations has effect as a designation of that person as an authorised person under of regulation 11(11)(c) of these Regulations.\n - 4. Regulation 5A of the 2020 Regulations continues to have effect in relation to a constable who exercises the powers in that regulation in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "18. Guidance issued by the Secretary of State pursuant to paragraph 4(2) of Schedule 2D to the 2020 Regulations has effect as guidance issued pursuant to paragraph 4(2) of Schedule 9 to these Regulations.\n\n## EXPLANATORY NOTE\n\n(This note is not part of the Regulations)\n\nThese Regulations replace the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the International Travel Regulations'), the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020 and the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021.\n\nThey impose requirements on certain categories of person to provide information upon arrival in England, to take coronavirus tests before and after arrival and to self-isolate in order to prevent the spread of infection or contamination from coronavirus or coronavirus disease. 
They also impose obligations on operators to ensure that passengers receive information and comply with the requirements.\n\nAn impact assessment has not been produced for this instrument. An explanatory memorandum has been published alongside this instrument at www.legislation.gov.uk.\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 90, - "page_end": 90, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed12.pdf", - "query": "What was the muscle volume of the knee flexors of the 2024 word's strongest man ?", - "target_page": 7, - "target_passage": "Knee flexors 3,060 ", - "chunk_present": { - "presence": true, - "index": 9 - } - }, - "top_chunk": [ - { - "text": "\n\nTable 2. Muscle volume of all muscles, 5 functional muscle groups, and 22 individual muscles/compartments of a World ' s Strongest Man and deadlift champion and comparative elite sprinters, subelite sprinters, and untrained control participants", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed12.pdf" - }, - { - "text": "\n\npredictions of skeletal muscle mass nor dual-energy X-ray absorptiometry provides detailed information on the size of speci /uniFB01 c individual muscles. Given the known importance of muscle size as a determinant of muscular strength (9 -11), pronounced muscle size seems likely to be critical to extreme human strength; however, the speci /uniFB01 c muscle size of extremely strong individuals remains unknown. 
Similarly, a large moment arm (e.g., of the patella tendon at the knee joint) could contribute to the expression of high muscular strength (10, 12), and a large tendon may mitigate the mechanical stress it experiences with very high muscular loads, and therefore, these characteristics may also be expected in individuals selected for exceptional strength.\n\nIn this paper, we present the /uniFB01 ndings from a unique opportunity to examine the laboratory function, muscle size, and distribution of muscle mass, as well as patellar tendon size and moment arm, of a World ' s Strongest Man and deadlift champion (WSM) in comparison with existing data on untrained individuals, power athletes (100-m-track sprinters), and long-term resistance-trained populations that we have assessed previously (10, 11, 13 -15).\n\n## MATERIALS AND METHODS\n\n## Participant\n\nThe WSM ' s achievements included one World ' sStrongest Man title (14 mo prior to measurement), /uniFB01 ve Britain ' s Strongest Man titles (the most recent 6 mo prior to measurement), twice being World Deadlift Champion and Deadlift WorldRecordholder(500kg;atthetimeofmeasurement), and second place at Europe ' s Strongest Man. Prior to agreeing to participate, the purpose of the research study and the testing procedures were explained to the participant along with the risks and bene /uniFB01 ts of taking part. The participant gave his written informed consent to participate in the study that was approved by the Loughborough University Ethical Advisory Committee (Ethics Number R18-P090). Included in the written consent was a statement providing permission for publication of the collected data and the likelihood that their identity may be evident based on their achievements and characteristics, despite anonymization.\n\n## Training History\n\nThe WSM had been continuously involved in systematic, regular upper- and lower-body resistance training for 15 yr at the time of testing. 
In the 12 mo prior to testing, the participant ' s resistance training consisted of the following typical exercises: lower body: squats, deadlifts, leg press, and knee extension; and upper body: bench press, shoulder press, dumbbell/barbell rows, and lat pull-down. The proportion of the participant ' s training within the following repetition ranges over the last 12 mo was as follows: near maximum loads [1 -5 repetition maximum (RM)]: 10%; heavy loads (6 -14 RM): 80%; and moderate loads ( /C21 15 RM): 10%. The participant reported only occasional ( < 1 /C2 /week) use of advanced resistance training practices (i.e., complex training and accommodating resistance method) but frequently ( > 3 /C2 / week) executed training repetitions with the intention to move the load as fast as possible. The WSM ' snutritional\n\nsupplement consumption included protein, branched-chain amino acids, and electrolytes.\n\n## Overview", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed12.pdf" - }, - { - "text": "\n\n## RESEARCH ARTICLE\n\n## Muscle and tendon morphology of a world strongman and deadlift champion\n\n- Thomas G. Balshaw, 1 Garry J. Massey, 1,2 Robert Miller, 1,3,4 Emmet J. McDermott, 1,5\n- Thomas M. Maden-Wilkinson, 6 and Jonathan P. 
Folland 1\n\n1 School of Sport, Exercise, and Health Sciences, Loughborough University, Loughborough, United Kingdom; 2 College of Life and Environmental Sciences, University of Exeter, Exeter, United Kingdom; 3 UK Athletics, Loughborough University, Loughborough, United Kingdom; 4 Department of Sport Science, Aspire Academy, Doha, Qatar; 5 Department of Physical Education and Sport Sciences, University of Limerick, Limerick, Ireland; and 6 Academy of Sport and Physical Activity, Faculty of Health and Wellbeing, Shef /uniFB01 eld Hallam University, Shef /uniFB01 eld, United Kingdom\n\n## Abstract\n\nThis study compared the muscle and tendon morphology of an extraordinarily strong individual, a World ' sStrongestMananddeadlift champion (WSM), with that of various other athletic, trained, and untrained populations. The WSM completed the following: 1 )3.0-T MRI scans, to determine the volume of 22 individual lower limb muscles, 5 functional muscle groups, patellar tendon (PT) cross-sectional area (CSA), and PT moment arm; and 2 ) countermovement jumps (CMJ) and isometric midthigh pull (IMTP) contractions. The WSM was compared with previously assessed groups from our laboratory (muscle and tendon) and the wider research literature (CMJ and IMTP). The WSM ' s CMJ peak power (9,866 W) and gross (9,171 N) and net (7,480 N) IMTP peak forces were higher than any previously published values. The WSM ' s overall measured leg muscle volume was approximately twice that of untrained controls ( þ 96%) but with pronounced anatomical variability in the extent of muscular development. The plantar /uniFB02 exor group ( þ 120%) and the guy rope muscles (sartorius, gracilis, and semitendinosus: þ 140% to þ 202%), which stabilize the pelvis and femur, demonstrated the largest differences relative to that of untrained controls. The WSM ' s pronounced quadriceps size (greater than or equal to twofold vs. 
untrained) was accompanied by modest PT moment arm differences and, notably, was not matched by an equivalent difference in PT CSA ( þ 30%). These results provide novel insight into the musculotendinous characteristics of an extraordinarily strong individual, which may be toward the upper limit of human variation, such that the WSM ' s very pronounced lower limb muscularity also exhibited distinct anatomical variability and with muscle size largely uncoupled from tendon size.\n\nNEW & NOTEWORTHY Lower-body muscle size of an extraordinarily strong individual, a World ' s Strongest Man and deadlift champion (WSM), was approximately twice that of controls but was underpinned by pronounced anatomical variability in the extent of muscular development ( þ 23 -202%): the plantar /uniFB02 exor group and guy rope muscles demonstrating the largest differences. The WSM ' s quadriceps size (more than or equal to twice that of controls) contrasted with modest differences in patella tendon moment arm ( þ 18%) and was uncoupled from patellar tendon size ( þ 30%).\n\nisometric force; magnetic resonance imaging; power; strength\n\n## INTRODUCTION", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed12.pdf" - }, - { - "text": "Individual measurements are the average of both sides/legs (i.e., unilateral). All muscles are the sum of muscle volumes from all the individual muscles/compartments listed. Muscle volume data are presented as group means ± SD, except for the WSM ( n ¼ 1). Untrained control participants from Miller et al. (13).\n\nassessed (Fig. 5 B ). BFsh volume (135 cm 3 )oftheWSMwasa modest 26% greater than that of our pool of untrained control participants (107 ± 31 cm 3 ; Fig. 5 E ) but smaller than that of both long-term resistance-trained individuals ( /C0 1%; 136±27 cm 3 ) and elite sprinters ( /C0 19%; 167 ± 26 cm 3 ; Fig. 
5 E ).\n\n## Patella Tendon Cross-Sectional Area and Moment Arm\n\nThe patellar tendon mean CSA of the WSM (133.8 mm 2 )was larger than that of average untrained ( þ 30%; 103.2±12.5 mm 2 ) and long-term resistance-trained individuals ( þ 27%; 105.4 ± 13.0 mm 2 ; Fig. 6 A )butwassmallerthanthelargest individual we have measured from these groups (149.5 mm 2 ). The WSM ' s patellar tendon moment arm (51.5 mm) was also larger than that of average untrained ( þ 18%; 43.8 ± 2.7 mm) or long-term resistance-trained groups ( þ 12%; 45.8 ± 2.5 mm; Fig. 6 B ) as well as being 3% greater than the highest individual moment arm we have previously assessed within these groups (49.9 mm).\n\n## DISCUSSION\n\nThis study is the /uniFB01 rst to document the lower-body muscle and tendon morphology of a World ' s Strongest Man and deadlift champion (i.e., an exceptionally strong individual), and these are presented alongside functional whole body assessments, which exceeded the highest IMTP force (gross\n\nand net) and CMJ power values previously reported by 54%, 100%, and 164%, respectively. The WSM had overall lowerbodymuscularityapproximatelytwicethatofuntrainedcontrols ( þ 96%) and 32% greater than that of elite 100-m sprinters. However, there was substantial anatomical variability in the magnitude of the differences, ranging from the plantar /uniFB02 exors ( þ 120% vs. untrained) to the hip /uniFB02 exors ( þ 65% vs. untrained). Similarly, some speci /uniFB01 c muscles, such as the guy rope muscles that stabilize the femur and pelvis, were 2.5 -3.0 times the volume of untrained individuals (gracilis þ 140%, semitendinosus þ 157%, and sartorius þ 202%) but others displayed more marginal differences (BFsh þ 23%, iliopsoas þ 32% vs. untrained). Considering the knee extensors, the WSM had both quadriceps femoris volume greater than or equal to twofold that of untrained controls and a greater patella tendon moment arm than we have previously measured ( þ 18% vs. 
untrained), which would be expected to combine to facilitate extraordinary strength. Furthermore, despite the WSM ' sextremelylargequadricepsfemoris,theirpatellartendonCSAwasonly30%greaterthanthatofuntrainedcontrols and not outside the range of tendons we have previously assessed. The results of this study provide novel insights into the muscle and tendon characteristics, as well as the strength and power capabilities, of an extraordinarily strong individual that may be toward the upper limit of human variation in these characteristics.", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed12.pdf" - }, - { - "text": "Values for comparative populations are means ± SD. CSA, cross-sectional area.\n\ntibialis anterior, extensor digitorum longus, and extensor hallucis longus. The lateral shank compartment included the peroneus longus and brevis. The deep posterior compartment consisted of plantaris, tibialis posterior, /uniFB02 exor digitorum longus, and /uniFB02 exor hallucis longus. All muscles were manually segmented in every other image (i.e., every 20 mm) starting from the most proximal image in which the muscle appeared, except the tensor fasciae latae, gluteus medius and minimus (combined), and popliteus, which were manually segmented in every slice (i.e., every 10 mm) due to their short length. The volume of each individual muscle ( V m) was calculated using previously outlined methods (16)asfollows:\n\nV m ¼ X n /C0 1 i ¼ 1 h 2 ð A m i þ A mi þ 1 Þ\n\nwhere A m represents the muscle CSA calculated from each image, i is the image number, n is the total number of images, and h is the distance between images. 
The volume of /uniFB01 ve functional muscle groups was calculated as the sum of the following muscles: hip extensors (gluteus maximus, adductor magnus, BFlh, SM, and ST), hip /uniFB02 exors (iliopsoas, RF, sartorius, and tensor fasciae latae), knee extensors (RF, VI, VM, and VL), knee /uniFB02 exors (gracilis, BFlh and BFsh, SM, ST, sartorius, popliteus, and medial and lateral gastrocnemius), and plantar /uniFB02 exors (medial and lateral gastrocnemius and soleus). The sum of all the measured lower-body muscles was also quanti /uniFB01 ed as the volume of ' all muscles. '\n\nOnce muscle MRI scanning had been completed, a /uniFB02 ex coil (GE Medical) was used to acquire unilateral T1-weighted axial (time of repetition/time to echo 650/9.476 ms, image matrix 512 /C2 512, /uniFB01 eld of view 180 /C2 180 mm, pixel size 0.3516 /C2 0.3516 mm, slice thickness 2 mm, and interslice gap 0 mm) and sagittal images (time of repetition/time to echo 606/9.512 ms, image matrix 512 /C2 512, /uniFB01 eld of view 180 /C2 180 mm, pixel size 0.3516 /C2 0.3516 mm, slice thickness 2 mm, and interslice gap ¼ 0 mm) from both knee joints. The axial images were obtained perpendicular to the line of the tendon from /C24 2 cm superior to the apex of the patella to /C24 2cm\n\ninferior to the patellar tendon ' s inferior insertion. Patellar tendon CSA was measured in each contiguous image along the length of the tendon (i.e., from the /uniFB01 rst image where the patella was no longer visible to the /uniFB01 nal image before the tibial insertion). The axial images of the patellar tendon were viewed in grayscale, sharpened, and the perimeter manually outlined. The average of all measured axial patellar tendon CSAs was calculated to produce a mean tendon CSA (mm 2 ) for each leg. 
The moment arm length of the patellar tendon for each leg was estimated from sagittal plane images as the perpendicular distance from the patellar tendon to the midpoint of tibiofemoral contact (17).\n\n## Countermovement Jump", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed12.pdf" - }, - { - "text": "isometric force; magnetic resonance imaging; power; strength\n\n## INTRODUCTION\n\nFeats of strength have fascinated man since the early stages of human civilization, as shown by the archeological evidence of inscribed heavy stones at Olympia and Thera in Greece, dated to the 6th century BC, detailing the way they were lifted by Bybon and Eumastus, respectively (1). Over the centuries, many types of strength competitions have existed; some of which have been codi /uniFB01 ed and endured within modern sporting competitions (e.g., weightlifting, powerlifting, and shot put).Inaddition,professionalstrongmancompetitions,such as the annually contested ' World ' s Strongest Man ' event, generate extensive global interest (2). Moreover, scienti /uniFB01 c understanding of muscular strength is important because of its role in athletic performance (3), injury prevention (4), and\n\n\n\nhealthy aging (5). However, our knowledge of extreme human strength is limited.\n\nTo date, there is little scienti /uniFB01 c information on the characteristics of extremely strong humans in terms of laboratorybased tests of strength and power, particularly the size and distribution of their muscle mass, as well as tendon size and joint mechanics (moment arm). Kraemer et al. (6)examinedthe body composition of elite strongman competitors using dualenergy X-ray absorptiometry scanning and found that they had a body mass (153±19 kg) and lean mass (118±12 kg) approximately twice that of an average untrained healthy young man. 
Whole body skeletal muscle mass of athletes from strength- and power-based sports has also been estimated using ultrasound measurements at a limited number of anatomical locations (7, 8). However, neither ultrasound-derived\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed12.pdf" - }, - { - "text": "Figure 5. Overall hamstrings (HAMS; A ), semimembranosus (SM; B ), semitendinosus (ST; C ), biceps femoris long head (BFlh; D ), and biceps femoris short head (BFsh; E ) muscle volume of a World ' s Strongest Man and deadlift champion (WSM) compared with long-term resistance trained [ n ¼ 16, from the work by Maden-Wilkinson et al. (10)], elite sprint runners [ n ¼ 5, from the work by Miller et al. (13)], subelite sprint runners [ n ¼ 26, from the work by Miller et al. (13)], and untrained control populations [ n ¼ 50, pooled population from the works by Miller et al. (13)( n ¼ 11) and Balshaw et al. (14) (pretest data n ¼ 39)].\n\n\n\n\n\n\n\n\n\n\n\npatellar tendon moment arm ( þ 18%). Therefore, of these two key strength determinants, muscle size, rather than joint leverage, appeared to be the predominant factor responsible for the WSM ' s extraordinary strength. Indeed, when we previously compared the muscle morphology and joint mechanics of individuals with distinct maximum strength capacity (long-term resistance-trained individuals vs. untrained controls), muscle size was the primary factor separating the groups with much more subtle differences in moment arm (10). The extreme exampleofmusclesizeprovidedbytheWSM ' squadriceps\n\nfemoris also gave the opportunity to investigate the scaling of tendon size to muscle size; extreme muscular size (greater than or equal to twice that for untrained controls) might be expected to be accompanied by comparable tendinous tissue size to effectively transmit high muscular forces to the skeleton. 
However, the WSM ' s patellar tendon CSA was only 30% larger than untrained controls and within the range of individuals we have previously measured (Fig. 6 A ). This observation supports the notion that tendon structure may be largely /uniFB01 xed by adulthood (40), with only slow/limited\n\n\n\nFigure 6. Patellar tendon mean cross-sectional area ( A ) and patellar tendon moment arm ( B )ofaWorld ' sStrongestManand deadlift champion (WSM) compared with long-term resistance trained [ n ¼ 16, from theworkbyMasseyetal.(15)] and untrained control populations [ n ¼ 39, from the work by Massey et al. (15)].\n\n", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed12.pdf" - }, - { - "text": "\n\nchanges in response to functional overload/resistance training. For example, we previously found patellar tendon CSA to show very subtle changes after 15 wk (45 training sessions) of heavy resistance training [ þ 1.4% (41)] and no differences between long-term resistance-trained individuals and untrained controls (15).\n\n## Limitations\n\nAlthough the current investigation provides a detailed assessment of an individual at/toward the upper limit of human strength performance, it is important to appreciate study limitations. First, the participant was not measured immediately before their World ' sStrongestManchampionship success or other landmark performances, and it is entirely possible the functional and structural characteristics we assessed mayhavebeenevenhigherdirectlypriortopeakperformances. Despite using a wide-bore MRI scanner, due to the size of the WSM ' s shoulders and arms, it was not possible to scan their upperbody.Thus,wewerenotabletoinvestigatethisaspectof the WSM ' s muscle morphology; although given that greater hypertrophy occurs in the upper body compared with the lower body (42), it is possible that the WSM ' s upper-body muscle size relative to untrained controls may have been even more pronounced than what we have documented for the lower body. 
In the current study to provide the most representative data on untrained control participants, the largest available untrained control populations were used for each category of measurements. Thus, different untrained control populations were used [e.g., comparison of quadricep and hamstring size ( n ¼ 102) vs. comparison of all the leg muscles ( n ¼ 11)], which led to some subtle discrepancies in the contrasts between these groups and the WSM [e.g., quadriceps femoris/knee extensors, þ 127% and þ 99% relative to our large pooled ( n ¼ 102) and smaller ( n ¼ 11) untrained control samples, respectively]. Importantly, however, this discrepancy does not appear to meaningfully affect the interpretation of the /uniFB01 ndings. There were subtle differences in the precise scanning and analysis approaches used with the reference populations featured in this study, including 1 )magnetic /uniFB01 eld strength [1.5 T (10, 11, 15) vs. 3.0 T, WSM and (13, 14)]; 2 ) the interslice distance used to quantify quadriceps femoris and hamstrings muscle volume [1.5 cm (10, 11, 14)vs.2.0cm,WSMand(13)]; 3 )thecalculation of muscle volume [area under the cubic spline ACSA-muscle length curve: (10, 11, 14) vs. the equation detailed earlier: WSM and (13)]; and 4 )theuseofunilateralMRImeasuresderived from one limb (10, 11, 14, 15) or collapsed across two limbs [WSM and (13)]. 
However, it seems likely that these subtle differences would have had at most a very minor effect on the /uniFB01 ndings.Finally,itisalsoimportanttohighlightthatthedifferences documented between the WSM and comparative populations for the various measures included in the current study cannot be assumed to be anything other than a combination of both innate (genetic) and environmental (training and nutrition) factors.\n\n## Conclusions\n\nIn conclusion, this novel investigation documented the muscle and tendon morphology and whole body strength and power characteristics of an exceptionally strong individual, relative to comparative athletic, trained, and untrained\n\npopulations. Overall leg muscle volume of the WSM was approximately twice that of untrained controls but with pronounced anatomical variability in the extent of muscular development. The plantar /uniFB02 exor muscle group and the guy rope muscles (sartorius, gracilis, and semitendinosus: þ 140 to þ 202%), which stabilize the pelvis and femur, demonstrated the largest differences. The pronounced quadriceps femoris size of the WSM (greater than or equal to twice that of untrained) was accompanied by a more modest difference in patella tendon moment arm ( þ 18%) and was not matched by a proportional difference in tendon size ( þ 30%).", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed12.pdf" - }, - { - "text": "A\n\nFigure 4. Quadriceps femoris (QF; A ), vastus medialis (VM; B ), vastus lateralis (VL; C ), vastus intermedius (VI; D ), and rectus femoris (RF; E ) muscle volume of a World ' s Strongest Man and deadlift champion (WSM) compared with long-term resistance-trained ( n ¼ 16, from the work by Maden-Wilkinson et al. (10)], elite sprint runners [ n ¼ 5, from the work by Miller et al. (13)], subelite sprint runners [ n ¼ 26, from the work by Miller et al. (13)], and untrained control populations [ n ¼ 102, pooled population from the works by Miller et al. (13)( n ¼ 11), Balshaw et al. 
(11) ( n ¼ 52), and Balshaw et al. (14)(pretest data n ¼ 39)].\n\n\n\n\n\n\n\n\n\nAlthough it was anticipated that the WSM would possess a larger total lower-body muscle volume/mass than untrained controls and other athletic/trained groups we have previously measured, the magnitude and pattern of the differences were unknown. The results indicated that the total volume of the measured muscles was almost twice that of average untrained participants and 32 -63%larger than subelite and elite sprinters. Pronounced development of the antigravity muscles (i.e., hip extensors, knee extensors, and plantar /uniFB02 exors) was perhaps not that surprising given the WSM ' s background in heavy lifting events (including being a double deadlift world champion and record holder). However, the hip /uniFB02 exors appear less important in these tasks, possibly explaining their more modest size, which was inferior to that of three elite 100-m sprinters we have previously assessed. The WSM ' splantar /uniFB02 exors were particularly large relative to untrained controls ( þ 120%). This could be due to the plantar /uniFB02 exors being the smallest of the antigravity muscle groups that may experience very high mechanical stress and, thus, a pronounced adaptive stimulus during heavy lifting, carrying, and pulling tasks. Furthermore, the very heavy and, therefore, low-velocity nature of these tasks may limit the contribution of the stretch-shortening cycle and tendon recoil to the positive/concentric work done by the plantar\n\n\n\n/uniFB02 exors, potentially placing a higher demand on the contractile apparatus than for running and jumping tasks.\n\nConsidering individual muscles/compartments, the muscular development of the WSM was distinctly nonuniform. It is striking that the largest muscles relative to the untrained control population were the three ' guy ropes ' (sartorius, gracilis, and semitendinosus: þ 140 -202%). 
These three muscles provide stability to the pelvis and femur by having origins at diverse points around the pelvis while sharing a common insertion onto the anteromedial tibia [via pes anserinus, the conjoined tendons of these three muscles (39)]. Large guy rope muscles likely enhance stabilization of the femur and pelvis and would be expected to be critical during heavy weight-bearing tasks. In contrast, the WSM ' s /uniFB01 ve smallest muscles (relative to untrained controls) consisted of two hip /uniFB02 exors (iliopsoas and RF) and two monoarticular knee /uniFB02 exors; actions that appear far less important for lifting, carrying, and pulling tasks.\n\nThe WSM ' s quadriceps volume and patellar tendon moment arm were both greater than that of untrained controls and indeed any individual we have previously measured. However, the magnitude of difference, relative to the untrained controls, was noticeably larger for quadriceps femoris volume (greater than or equal to twice as large) than for", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed12.pdf" - }, - { - "text": "| | Muscle Volume, cm 3 | Muscle Volume, cm 3 | Muscle Volume, cm 3 | Muscle Volume, cm 3 |\n|------------------------------------------|-----------------------|--------------------------|------------------------------|-----------------------|\n| Muscle Group/Muscle or Compartment | WSM | Elite Sprinters ( n 5 5) | Subelite Sprinters ( n 5 26) | Untrained ( n 5 11) |\n| All muscles | 14,922 | 11,323 ± 1,328 | 9,164 ± 1,207 | 7,628 ± 1,548 |\n| Hip /uniFB02 exors | 1,704 | 1,620 ± 200 | 1,314 ± 216 | 1,031 ± 151 |\n| Hip extensors | 4,724 | 4,002±489 | 3,029±422 | 2,257 ± 220 |\n| Knee /uniFB02 exors | 3,060 | 2,304 ± 178 | 1,859 ± 301 | 1,460 ± 196 |\n| Knee extensors | 4,386 | 3,218 ± 400 | 2,636±401 | 2,202±315 |\n| Plantar /uniFB02 exors | 1,888 | 1,112 ± 181 | 943±156 | 860±172 |\n| Iliopsoas | 681 | 702±97 | 618±101 | 514 ± 75 |\n| Sartorius | 429 | 306±46 | 209±50 | 142 ± 25 |\n| 
Tensor fasciae latae | 142 | 135 ± 41 | 86±25 | 73±24 |\n| Adductor magnus | 1,334 | 1,056 ± 83 | 828±128 | 624±81 |\n| Gracilis | 235 | 180±37 | 142 ± 37 | 98±23 |\n| Gluteus maximus | 1,980 | 1,797 ± 376 | 1,257 ± 197 | 931 ± 108 |\n| Gluteus medius and minimus | 1,172 | 626±129 | 575±97 | 583±76 |\n| Rectus femoris | 453 | 476±45 | 401±78 | 303±55 |\n| Vastus lateralis | 1,508 | 1,132 ± 180 | 925±156 | 743±98 |\n| Vastus intermedius | 1,336 | 962±145 | 789±140 | 680±115 |\n| Vastus medialis | 1,088 | 649±97 | 521±79 | 476±111 |\n| Semimembranosus | 392 | 359±60 | 327±59 | 262±18 |\n| Semitendinosus | 563 | 449±70 | 350±79 | 219 ± 39 |\n| Biceps femoris long head | 454 | 340±31 | 267±47 | 221±42 |\n| Biceps femoris short head | 135 | 167 ± 26 | 131 ± 34 | 110 ± 28 |\n| Popliteus | 27 | 23±5 | 17 ± 5 | 19 ± 6 156±41 |\n| Lateral gastrocnemius | 310 | 202±34 | 170 ± 37 | |", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed12.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed12.pdf", - "query": "What are the nutritionnal added components to the word's strongest man regime ?", - "target_page": 2, - "target_passage": "The WSM’s nutritional supplement consumption included protein, branched-chain amino acids, and electrolytes", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "\n\npredictions of skeletal muscle mass nor dual-energy X-ray absorptiometry provides detailed information on the size of speci /uniFB01 c individual muscles. Given the known importance of muscle size as a determinant of muscular strength (9 -11), pronounced muscle size seems likely to be critical to extreme human strength; however, the speci /uniFB01 c muscle size of extremely strong individuals remains unknown. 
Similarly, a large moment arm (e.g., of the patella tendon at the knee joint) could contribute to the expression of high muscular strength (10, 12), and a large tendon may mitigate the mechanical stress it experiences with very high muscular loads, and therefore, these characteristics may also be expected in individuals selected for exceptional strength.\n\nIn this paper, we present the /uniFB01 ndings from a unique opportunity to examine the laboratory function, muscle size, and distribution of muscle mass, as well as patellar tendon size and moment arm, of a World ' s Strongest Man and deadlift champion (WSM) in comparison with existing data on untrained individuals, power athletes (100-m-track sprinters), and long-term resistance-trained populations that we have assessed previously (10, 11, 13 -15).\n\n## MATERIALS AND METHODS\n\n## Participant\n\nThe WSM ' s achievements included one World ' sStrongest Man title (14 mo prior to measurement), /uniFB01 ve Britain ' s Strongest Man titles (the most recent 6 mo prior to measurement), twice being World Deadlift Champion and Deadlift WorldRecordholder(500kg;atthetimeofmeasurement), and second place at Europe ' s Strongest Man. Prior to agreeing to participate, the purpose of the research study and the testing procedures were explained to the participant along with the risks and bene /uniFB01 ts of taking part. The participant gave his written informed consent to participate in the study that was approved by the Loughborough University Ethical Advisory Committee (Ethics Number R18-P090). Included in the written consent was a statement providing permission for publication of the collected data and the likelihood that their identity may be evident based on their achievements and characteristics, despite anonymization.\n\n## Training History\n\nThe WSM had been continuously involved in systematic, regular upper- and lower-body resistance training for 15 yr at the time of testing. 
In the 12 mo prior to testing, the participant ' s resistance training consisted of the following typical exercises: lower body: squats, deadlifts, leg press, and knee extension; and upper body: bench press, shoulder press, dumbbell/barbell rows, and lat pull-down. The proportion of the participant ' s training within the following repetition ranges over the last 12 mo was as follows: near maximum loads [1 -5 repetition maximum (RM)]: 10%; heavy loads (6 -14 RM): 80%; and moderate loads ( /C21 15 RM): 10%. The participant reported only occasional ( < 1 /C2 /week) use of advanced resistance training practices (i.e., complex training and accommodating resistance method) but frequently ( > 3 /C2 / week) executed training repetitions with the intention to move the load as fast as possible. The WSM ' snutritional\n\nsupplement consumption included protein, branched-chain amino acids, and electrolytes.\n\n## Overview", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed12.pdf" - }, - { - "text": "\n\nTable 2. Muscle volume of all muscles, 5 functional muscle groups, and 22 individual muscles/compartments of a World ' s Strongest Man and deadlift champion and comparative elite sprinters, subelite sprinters, and untrained control participants", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed12.pdf" - }, - { - "text": "isometric force; magnetic resonance imaging; power; strength\n\n## INTRODUCTION\n\nFeats of strength have fascinated man since the early stages of human civilization, as shown by the archeological evidence of inscribed heavy stones at Olympia and Thera in Greece, dated to the 6th century BC, detailing the way they were lifted by Bybon and Eumastus, respectively (1). 
Over the centuries, many types of strength competitions have existed; some of which have been codi /uniFB01 ed and endured within modern sporting competitions (e.g., weightlifting, powerlifting, and shot put).Inaddition,professionalstrongmancompetitions,such as the annually contested ' World ' s Strongest Man ' event, generate extensive global interest (2). Moreover, scienti /uniFB01 c understanding of muscular strength is important because of its role in athletic performance (3), injury prevention (4), and\n\n\n\nhealthy aging (5). However, our knowledge of extreme human strength is limited.\n\nTo date, there is little scienti /uniFB01 c information on the characteristics of extremely strong humans in terms of laboratorybased tests of strength and power, particularly the size and distribution of their muscle mass, as well as tendon size and joint mechanics (moment arm). Kraemer et al. (6)examinedthe body composition of elite strongman competitors using dualenergy X-ray absorptiometry scanning and found that they had a body mass (153±19 kg) and lean mass (118±12 kg) approximately twice that of an average untrained healthy young man. Whole body skeletal muscle mass of athletes from strength- and power-based sports has also been estimated using ultrasound measurements at a limited number of anatomical locations (7, 8). However, neither ultrasound-derived\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed12.pdf" - }, - { - "text": "column name to create two matrices. One matrix was created for the climate change discourse, and we filled the cell whose column name and row name were among the top 50 list in the climate change discourse with the frequency at which the two hashtags were associated in this discourse, and the other cells were filled with 0. This was repeated for the global warming matrix. We thus obtained two matrices with the same row and column names but di GLYPH<11> erent values in the cells. 
Then, the two matrices were input to the quadratic assignment procedure (QAP) [85] analysis provided by UCINET software [86] to assess their correlation for each year.\n\n## 4. Results\n\n## 4.1. General Descriptions\n\nAssociation networks surrounding #climatechange and #globalwarming showed di GLYPH<11> erent properties. The climate change discourse included 38,821 hashtags, whereas the global warming discourse only contained 8788 hashtags. Table 1 displays the 50 most significant hashtags in the two discourses based on centrality. As some hashtags were used in the form of an abbreviation or phrase, explanations are provided in the table. Two networks shared 32 out of the 50 most significant words. Hashtags 'canada', 'cdnpoli', 'sdgs', 'biodiversity', 'education', 'environmental', 'cop24', 'sustainable', 'auspol', 'food', 'agriculture', 'cleanenergy', 'renewableenergy', 'renewables', 'emissions', 'coal', 'fossilfuels', and 'cop21' only showed up on the top 50 list of the 'climate change' network. Hashtags 'tcot', 'california', 'p2', 'nyc', 'snow', 'agw', 'summer', 'global', 'winter', 'india', 'planet', 'heatwave', 'hoax', 'nasa', 'algore', 'world', 'oil', and 'eco' were unique on the top 50 list of the global warming network. The two lists only shared three out of the top five hashtags. In the #climatechange network, 'climateaction' was ranked third place and 'sustainability' was ranked fourth place, whereas they were ranked significantly lower, 17th and 22nd, respectxively, in the #globalwarming network. In the #globalwarming network, 'earth' and 'weather' were among the top five nodes, whereas they were ranked 14th and 24th in the #climatechange network, respectively.\n\nTable 1. The top 50 central hashtags on Twitter surrounding #climatechange and #globalwarming from 2009 to 2018. The hashtag with * is explained in Appendix A in ascending alphabetical order.", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed10.pdf" - }, - { - "text": "- [5] F. 
Brochard-Wyart and J. Daillant, 'Drying of solids wetted by thin liquid films,' Can. J. Phys. 68 , 1084-1088 (1989).\n - [6] P. Muller-Buschbaum, 'Dewetting and pattern formation in thin polymer films as investigated in real and reciprocal space,' J. Phys.-Condes. Matter 15 , R1549-R1582 (2003).\n - [7] R. Seemann, S. Herminghaus, C. Neto, S. Schlagowski, D. Podzimek, R. Konrad, H. Mantz, and K. Jacobs, 'Dynamics and structure formation in thin polymer melt films,' J. Phys.-Condes. Matter 17 , S267-S290 (2005).\n - [8] U. Thiele, 'Structure formation in thin liquid films,' in S. Kalliadasis and U. Thiele, editors, 'Thin films of Soft Matter,' pages 25-93, Springer, Wien (2007).\n - [9] R. Xie, A. Karim, J. F. Douglas, C. C. Han, and R. A. Weiss, 'Spinodal dewetting of thin polymer films,' Phys. Rev. Lett. 81 , 1251-1254 (1998).\n - [10] R. Seemann, S. Herminghaus, and K. Jacobs, 'Dewetting patterns and molecular forces: A reconciliation,' Phys. Rev. Lett. 86 , 5534-5537 (2001).\n - [11] U. Thiele, M. G. Velarde, and K. Neuffer, 'Dewetting: Film rupture by nucleation in the spinodal regime,' Phys. Rev. Lett. 87 , 016104 (2001).\n - [12] M. Bestehorn and K. Neuffer, 'Surface patterns of laterally extended thin liquid films in three dimensions,' Phys. Rev. Lett. 87 , 046101 (2001).\n - [13] J. Becker, G. Grun, R. Seemann, H. Mantz, K. Jacobs, K. R. Mecke, and R. Blossey, 'Complex dewetting scenarios captured by thin-film models,' Nat. Mater. 2 , 59-63 (2003).\n - [14] C. Redon, F. Brochard-Wyart, and F. Rondelez, 'Dynamics of dewetting,' Phys. Rev. Lett. 66 , 715718 (1991).\n - [15] R. Seemann, S. Herminghaus, and K. Jacobs, 'Shape of a liquid front upon dewetting,' Phys. Rev. Lett. 87 , 196101 (2001).\n - [16] R. Fetzer, K. Jacobs, A. Munch, B. Wagner, and T. P. Witelski, 'New slip regimes and the shape of dewetting thin liquid films,' Phys. Rev. Lett. 95 , 127801 (2005).\n - [17] F. Brochard-Wyart and C. 
Redon, 'Dynamics of liquid rim instabilities,' Langmuir 8 , 2324-2329 (1992).\n - [18] G. Reiter and A. Sharma, 'Auto-optimization of dewetting rates by rim instabilities in slipping polymer films,' Phys. Rev. Lett. 87 , 166103 (2001).\n - [19] A. Munch and B. Wagner, 'Contact-line instability of dewetting thin films,' Physica D 209 , 178-190 (2005).", - "page_start": 25, - "page_end": 25, - "source_file": "1001.2669.pdf" - }, - { - "text": "Individual measurements are the average of both sides/legs (i.e., unilateral). All muscles are the sum of muscle volumes from all the individual muscles/compartments listed. Muscle volume data are presented as group means ± SD, except for the WSM ( n ¼ 1). Untrained control participants from Miller et al. (13).\n\nassessed (Fig. 5 B ). BFsh volume (135 cm 3 )oftheWSMwasa modest 26% greater than that of our pool of untrained control participants (107 ± 31 cm 3 ; Fig. 5 E ) but smaller than that of both long-term resistance-trained individuals ( /C0 1%; 136±27 cm 3 ) and elite sprinters ( /C0 19%; 167 ± 26 cm 3 ; Fig. 5 E ).\n\n## Patella Tendon Cross-Sectional Area and Moment Arm\n\nThe patellar tendon mean CSA of the WSM (133.8 mm 2 )was larger than that of average untrained ( þ 30%; 103.2±12.5 mm 2 ) and long-term resistance-trained individuals ( þ 27%; 105.4 ± 13.0 mm 2 ; Fig. 6 A )butwassmallerthanthelargest individual we have measured from these groups (149.5 mm 2 ). The WSM ' s patellar tendon moment arm (51.5 mm) was also larger than that of average untrained ( þ 18%; 43.8 ± 2.7 mm) or long-term resistance-trained groups ( þ 12%; 45.8 ± 2.5 mm; Fig. 
6B) as well as being 3% greater than the highest individual moment arm we have previously assessed within these groups (49.9 mm).\n\n## DISCUSSION\n\nThis study is the first to document the lower-body muscle and tendon morphology of a World's Strongest Man and deadlift champion (i.e., an exceptionally strong individual), and these are presented alongside functional whole body assessments, which exceeded the highest IMTP force (gross and net) and CMJ power values previously reported by 54%, 100%, and 164%, respectively. The WSM had overall lower-body muscularity approximately twice that of untrained controls (+96%) and 32% greater than that of elite 100-m sprinters. However, there was substantial anatomical variability in the magnitude of the differences, ranging from the plantar flexors (+120% vs. untrained) to the hip flexors (+65% vs. untrained). Similarly, some specific muscles, such as the guy rope muscles that stabilize the femur and pelvis, were 2.5-3.0 times the volume of untrained individuals (gracilis +140%, semitendinosus +157%, and sartorius +202%) but others displayed more marginal differences (BFsh +23%, iliopsoas +32% vs. untrained). Considering the knee extensors, the WSM had both quadriceps femoris volume greater than or equal to twofold that of untrained controls and a greater patella tendon moment arm than we have previously measured (+18% vs. untrained), which would be expected to combine to facilitate extraordinary strength. Furthermore, despite the WSM's extremely large quadriceps femoris, their patellar tendon CSA was only 30% greater than that of untrained controls and not outside the range of tendons we have previously assessed. 
The results of this study provide novel insights into the muscle and tendon characteristics, as well as the strength and power capabilities, of an extraordinarily strong individual that may be toward the upper limit of human variation in these characteristics.", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed12.pdf" - }, - { - "text": "Figure 5. Overall hamstrings (HAMS; A ), semimembranosus (SM; B ), semitendinosus (ST; C ), biceps femoris long head (BFlh; D ), and biceps femoris short head (BFsh; E ) muscle volume of a World ' s Strongest Man and deadlift champion (WSM) compared with long-term resistance trained [ n ¼ 16, from the work by Maden-Wilkinson et al. (10)], elite sprint runners [ n ¼ 5, from the work by Miller et al. (13)], subelite sprint runners [ n ¼ 26, from the work by Miller et al. (13)], and untrained control populations [ n ¼ 50, pooled population from the works by Miller et al. (13)( n ¼ 11) and Balshaw et al. (14) (pretest data n ¼ 39)].\n\n\n\n\n\n\n\n\n\n\n\npatellar tendon moment arm ( þ 18%). Therefore, of these two key strength determinants, muscle size, rather than joint leverage, appeared to be the predominant factor responsible for the WSM ' s extraordinary strength. Indeed, when we previously compared the muscle morphology and joint mechanics of individuals with distinct maximum strength capacity (long-term resistance-trained individuals vs. untrained controls), muscle size was the primary factor separating the groups with much more subtle differences in moment arm (10). The extreme exampleofmusclesizeprovidedbytheWSM ' squadriceps\n\nfemoris also gave the opportunity to investigate the scaling of tendon size to muscle size; extreme muscular size (greater than or equal to twice that for untrained controls) might be expected to be accompanied by comparable tendinous tissue size to effectively transmit high muscular forces to the skeleton. 
However, the WSM ' s patellar tendon CSA was only 30% larger than untrained controls and within the range of individuals we have previously measured (Fig. 6 A ). This observation supports the notion that tendon structure may be largely /uniFB01 xed by adulthood (40), with only slow/limited\n\n\n\nFigure 6. Patellar tendon mean cross-sectional area ( A ) and patellar tendon moment arm ( B )ofaWorld ' sStrongestManand deadlift champion (WSM) compared with long-term resistance trained [ n ¼ 16, from theworkbyMasseyetal.(15)] and untrained control populations [ n ¼ 39, from the work by Massey et al. (15)].\n\n", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed12.pdf" - }, - { - "text": "\n\n\n\nCompost adds organic material and nutrients to the soil, increases water-holding capacity and biological activity, and improves plant growth and health.", - "page_start": 0, - "page_end": 0, - "source_file": "CompostGuide.pdf" - }, - { - "text": "\n\n\n\n\n\n\n\n\n\n\n\n## 1 Introduction\n\n## 1.1 Purpose of the Document\n\nThe main purpose of this document is to present a User Manual for the main user functionalities of the Portal Version 4.3 , launched in production in May 2019. 
This document consists of an update of the User Manual for the Portal Version 3.0 published in November 2017[4].\n\n## 1.2 Reference Documents\n\nTable 1-1: Reference Documents\n\n| Id | Reference | Title | Version |\n|------|-------------|----------------------------------------------|-----------|\n| [1] | EDP\\_S1\\_MAN | EDP\\_S1\\_MAN\\_Portal-Version1-UserManual\\_v1.0 | 1 |\n| [2] | EDP\\_S1\\_MAN | EDP\\_S1\\_MAN\\_Portal-Version1.3-UserManual\\_v1.2 | 1.3 |\n| [3] | EDP\\_S1\\_MAN | EDP\\_S1\\_MAN\\_Portal-Version2.0-UserManual\\_v1.0 | 2 |\n| [4] | EDP\\_S1\\_MAN | EDP\\_S1\\_MAN\\_Portal-Version3.0-UserManual\\_v1.0 | 3 |\n\n## 1.3 Terminology\n\n| Acronym | Description |\n|-----------------|--------------------------------------------------------------------------------------------------|\n| API | Application Programmer Interface |\n| CKAN | (replaced by the ' Data Platform ' ) |\n| CSV | Comma separated values |\n| Data Platform | Single page web app for managing and displaying datasets |\n| DCAT-AP | DCAT Application Profile - Metadata specification based on the Data Catalogue vocabulary (DCAT) |\n| DRUPAL | Content Management System |\n| ECAS / EU-Login | EU user login page |\n| EDP | European Data Portal |\n| FME | Feature Manipulation Engine |\n| GUI | Graphical User Interface |\n| HTTP | Hypertext Transfer Protocol |\n| JSON | JavaScript Object Notation (a lightweight data-interchange format) |\n| maps.app | Geo-spatial data visualization application |\n| MQA | Metadata Quality Assistant |\n| RDF | Resource Description Framework |\n| SOLR | Search engine used for portal content search and dataset search |\n\n", - "page_start": 3, - "page_end": 3, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "\n\ncomparative populations drawn from the existing literature can be found in Supplemental Materials 1 (gross IMTP peak force and net IMTP peak force) and 2 (CMJ peak power and height).\n\n## Isometric Midthigh Pull 
and Countermovement Jump\n\nGross (including body weight) and net (above body weight) IMTP peak forces of the WSM were 9,171 N and 7,480 N, respectively. The WSM's gross IMTP peak force was 54% greater than the highest comparable group mean we located (subelite weightlifters: 5,942 ± 844 N (20); Fig. 2A). The WSM's net IMTP peak force was 100% greater than the highest comparable group mean value in the literature (collegiate soccer athletes: 3,740 ± 692 N (26); Fig. 2B).\n\nThe WSM's CMJ peak power and jump height were 9,866 W and 53.3 cm, respectively. The peak CMJ power of the WSM was >2.5-fold (164%) that of the mean of an untrained control group previously measured in our laboratory (3,735 ± 760 W; unpublished) and 51% greater than the highest comparable group mean value we located in the literature (professional basketball players: 6,518 ± 923 W (32); Fig. 2C). Not surprisingly, given the WSM's high body mass, his jump height was less exceptional, while still being 20% greater than that of a group of untrained control participants previously measured in our laboratory (44.3 ± 9.2 cm; unpublished). However, his jump height was 25% lower than the highest group mean CMJ height we are aware of in the published literature (elite international gymnasts: 71.3 ± 4.5 cm (37); Fig. 2D).\n\n## Leg Muscle Volumes\n\nThe total unilateral muscle volume of the 22 measured muscles/compartments of WSM (14,922 cm3) was nearly twice that of a relatively modest (n = 11) sample of untrained controls (7,628 ± 1,548 cm3; +96%; Fig. 3), while being 63% greater than subelite (9,164 ± 1,207 cm3) and 32% greater than elite 100-m sprinters (11,323 ± 1,328 cm3; Table 2). The muscle group differences were largest for the plantar flexors (+120% vs. untrained; +100% vs. subelite sprinters; +70% vs. elite sprinters) and smallest for the hip flexors (+65% vs. untrained; +30% vs. subelite sprinters; +5% vs. elite sprinters). 
The WSM had the highest values of any individual we have observed for four out of five muscle groups, but not the hip flexors, which were inferior to three of the elite 100-m sprinters (n = 5).\n\nCompared with untrained control participants (n = 11), all 22 of the WSM's individual muscles/compartments were larger than untrained controls (Table 2 and Fig. 3). However, the differences in muscle volume were extremely variable, with the biggest differences being for the 'guy ropes,' which were 2.5-3.0 times that of untrained controls (+140% gracilis; +157% ST; +202% sartorius), compared with more modest differences such as 23% (BFsh) and 32% (iliopsoas) greater.\n\n## Quadriceps Femoris and Hamstring Size\n\nOverall quadriceps femoris volume of the WSM (4,386 cm3) was 127% greater than a large, pooled population of untrained controls (1,932 ± 336; n = 102), 66% greater than subelite sprinters (2,636 ± 401 cm3), 53% greater than long-term resistance-trained individuals (2,876 ± 311 cm3), and 36% greater than elite\n\nFigure 3. Percentage differences in muscle volumes of all muscles, 5 functional muscle groups, and 23 individual muscles/compartments between the World's Strongest Man and deadlift champion (WSM; n = 1) and untrained control participants (n = 11) from the work by Miller et al. (13). A positive value indicates greater muscle volume of WSM relative to the group mean of the untrained controls. 
The functional muscle groups and individual muscles are ordered according to the magnitude of the percentage differences for absolute muscle volume.\n\n", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed12.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed12.pdf", - "query": "Why constraint made the scanning of the word's strongest man's upper body impossible using a MRI ?", - "target_page": 10, - "target_passage": "Despite using a wide-bore MRI scanner, due to the size of the WSM’s shoulders and arms, it was not possible to scan their upper body", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "isometric force; magnetic resonance imaging; power; strength\n\n## INTRODUCTION\n\nFeats of strength have fascinated man since the early stages of human civilization, as shown by the archeological evidence of inscribed heavy stones at Olympia and Thera in Greece, dated to the 6th century BC, detailing the way they were lifted by Bybon and Eumastus, respectively (1). Over the centuries, many types of strength competitions have existed; some of which have been codi /uniFB01 ed and endured within modern sporting competitions (e.g., weightlifting, powerlifting, and shot put).Inaddition,professionalstrongmancompetitions,such as the annually contested ' World ' s Strongest Man ' event, generate extensive global interest (2). Moreover, scienti /uniFB01 c understanding of muscular strength is important because of its role in athletic performance (3), injury prevention (4), and\n\n\n\nhealthy aging (5). However, our knowledge of extreme human strength is limited.\n\nTo date, there is little scienti /uniFB01 c information on the characteristics of extremely strong humans in terms of laboratorybased tests of strength and power, particularly the size and distribution of their muscle mass, as well as tendon size and joint mechanics (moment arm). Kraemer et al. 
(6)examinedthe body composition of elite strongman competitors using dualenergy X-ray absorptiometry scanning and found that they had a body mass (153±19 kg) and lean mass (118±12 kg) approximately twice that of an average untrained healthy young man. Whole body skeletal muscle mass of athletes from strength- and power-based sports has also been estimated using ultrasound measurements at a limited number of anatomical locations (7, 8). However, neither ultrasound-derived\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed12.pdf" - }, - { - "text": "## Materials & experimental systems\n\nn/a\n\nInvolved in the study\n\nAntibodies\n\nEukaryotic cell lines\n\nPalaeontology and archaeology\n\nAnimals and other organisms\n\nClinical data\n\nDual use research of concern\n\nPlants\n\n## Methods\n\nn/a\n\nInvolved in the study\n\nChIP-seq\n\nFlow cytometry\n\nMRI-based neuroimaging\n\n## Magnetic resonance imaging\n\n## Experimental design\n\nDesign type\n\nStructural & Diffusion MRI\n\nDesign specifications\n\nNo task-based fMRI used in this manuscript.\n\nBehavioral performance measures\n\nN/A; no performance metrics collected\n\n## Acquisition\n\nImaging type(s)\n\nStructural\n\nField strength\n\n3\n\nSequence & imaging parameters\n\nHigh-resolution anatomical scans were acquired using a T1-weighted (T1w) magnetization prepared rapid gradient echo (MPRAGE) sequence (TR = 2500 ms, TE = 2.31 ms, T1 = 934 ms, flip angle = 7°, 0.8 mm thickness) followed by a gradient echo fieldmap (TR = 758 ms; TE1 = 4.92 ms; TE2 = 7.38 ms; flip angle = 60°). 
A T2-weighted (T2w) turbo spin echo (TSE) scan was also acquired with an oblique coronal orientation positioned orthogonally to the main axis of the hippocampus (TR/TE = 9860/50 ms, flip angle = 122°, 0.4 × 0.4 mm2 in-plane resolution, 2 mm slice thickness, 38 interleaved slices with no gap, total acquisition time = 5:42 min).\n\nArea of acquisition\n\nT1-weighted and dMRI scans = whole-brain\n\nT2-weighted scan = high-resolution imaging of medial temporal lobe\n\nDiffusion MRI\n\nUsed\n\nNot used\n\nParameters TR = 4300 ms, echo time = 100.2 ms, 139 directions, b-max = 4990, FoV = 259 x 259 mm, 78 slices, 1.7986 x 1.7986 x 1.8 mm voxel resolution\n\n## Preprocessing\n\nPreprocessing software\n\nGray Matter Volume & Cortical Thickness: Advanced Normalization Tools (ANTs), version 2.1.0\n\nFreeSurfer, version 7\n\nT2-weighted MTL scans:\n\nAutomatic Segmentation of Hippocampal Subfields (ASHS), version 7/2018\n\nDiffusion imaging:\n\nQSIprep, version 0.15.3\n\nDSI Studio, version Chen-2022-07-31\n\nNormalization\n\nNormalization differed by modality due to inherent limitations of applicable processing pipelines.\n\nGray Matter Volume & Cortical Thickness:\n\nAll analyses were kept in native subject-space to limit the amount of warping and leverage the advantages of a precision imaging design.\n\nT2-weighted MTL scans:\n\nT2w images were registered to the segmentation template (see below) using ANTs deformable registration.\n\nDiffusion imaging:\n\nInitial preprocessing through QSIprep normalized diffusion images to the skull-stripped T1w images. Diffusion images were then reconstructed in MNI space using DSI studio's Q-space Diffeomorphic Reconstruction.", - "page_start": 15, - "page_end": 15, - "source_file": "pubmed4.pdf" - }, - { - "text": "\n\nchanges in response to functional overload/resistance training. 
For example, we previously found patellar tendon CSA to show very subtle changes after 15 wk (45 training sessions) of heavy resistance training [+1.4% (41)] and no differences between long-term resistance-trained individuals and untrained controls (15).\n\n## Limitations\n\nAlthough the current investigation provides a detailed assessment of an individual at/toward the upper limit of human strength performance, it is important to appreciate study limitations. First, the participant was not measured immediately before their World's Strongest Man championship success or other landmark performances, and it is entirely possible the functional and structural characteristics we assessed may have been even higher directly prior to peak performances. Despite using a wide-bore MRI scanner, due to the size of the WSM's shoulders and arms, it was not possible to scan their upper body. Thus, we were not able to investigate this aspect of the WSM's muscle morphology; although given that greater hypertrophy occurs in the upper body compared with the lower body (42), it is possible that the WSM's upper-body muscle size relative to untrained controls may have been even more pronounced than what we have documented for the lower body. In the current study to provide the most representative data on untrained control participants, the largest available untrained control populations were used for each category of measurements. Thus, different untrained control populations were used [e.g., comparison of quadricep and hamstring size (n = 102) vs. comparison of all the leg muscles (n = 11)], which led to some subtle discrepancies in the contrasts between these groups and the WSM [e.g., quadriceps femoris/knee extensors, +127% and +99% relative to our large pooled (n = 102) and smaller (n = 11) untrained control samples, respectively]. Importantly, however, this discrepancy does not appear to meaningfully affect the interpretation of the findings. 
There were subtle differences in the precise scanning and analysis approaches used with the reference populations featured in this study, including 1) magnetic field strength [1.5 T (10, 11, 15) vs. 3.0 T, WSM and (13, 14)]; 2) the interslice distance used to quantify quadriceps femoris and hamstrings muscle volume [1.5 cm (10, 11, 14) vs. 2.0 cm, WSM and (13)]; 3) the calculation of muscle volume [area under the cubic spline ACSA-muscle length curve: (10, 11, 14) vs. the equation detailed earlier: WSM and (13)]; and 4) the use of unilateral MRI measures derived from one limb (10, 11, 14, 15) or collapsed across two limbs [WSM and (13)]. However, it seems likely that these subtle differences would have had at most a very minor effect on the findings. Finally, it is also important to highlight that the differences documented between the WSM and comparative populations for the various measures included in the current study cannot be assumed to be anything other than a combination of both innate (genetic) and environmental (training and nutrition) factors.\n\n## Conclusions\n\nIn conclusion, this novel investigation documented the muscle and tendon morphology and whole body strength and power characteristics of an exceptionally strong individual, relative to comparative athletic, trained, and untrained populations. Overall leg muscle volume of the WSM was approximately twice that of untrained controls but with pronounced anatomical variability in the extent of muscular development. The plantar flexor muscle group and the guy rope muscles (sartorius, gracilis, and semitendinosus: +140 to +202%), which stabilize the pelvis and femur, demonstrated the largest differences. 
The pronounced quadriceps femoris size of the WSM (greater than or equal to twice that of untrained) was accompanied by a more modest difference in patella tendon moment arm ( þ 18%) and was not matched by a proportional difference in tendon size ( þ 30%).", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed12.pdf" - }, - { - "text": "supplement consumption included protein, branched-chain amino acids, and electrolytes.\n\n## Overview\n\nThe WSM reported for a single test session that involved the following assessments (listed in order): axial T1 weighted 3.0-T MRI scans from T12 to the lateral malleolus [to assess muscle size throughout the lower body (left and right sides)], axial and sagittal T1-weighted MRI scans of both knees [to assess patellar tendon cross-sectional area (CSA) and patellar tendon moment arm], maximum countermovement jumps (CMJ), and maximum isometric midthigh pulls (IMTPs). The muscle size, patellar tendon CSA, and patellar tendon moment arm of the WSM were compared with various populations measured within our laboratory, as indicated in Table 1,alongsideparticipantdescriptives(10, 11, 13 -15). In addition, the IMTP and CMJ measures were compared with existing published literature (included studies are summarized in Supplemental Materials 1 and 2, alongside participant descriptives).\n\n## MRI Measurement of Muscle Tendon Unit Morphology and Moment Arm\n\nThe participant reported for their MRI scan [3.0-T Discovery MR750W (70-cm-wide bore), GE Medical] having not completed any strenuous physical activity in /C21 24 h and had received prior instruction to arrive in a relaxed state having eaten and drunk normally. The participant sat quietly for 15 min prior to their scan. The participant lay supine for the MRI scan of the lower-body musculature from T12 to the lateral malleolus. 
A body coil (GE Medical) allowed axial T1-weighted images (time of repetition/time to echo 600/8.144 ms, image matrix 512 × 512, field of view 500 × 500 mm, pixel size 0.9766 × 0.9766 mm, slice thickness 5 mm, and interslice gap 5 mm) to be acquired in five overlapping blocks. Images of both sides of the body were acquired within a single scan for blocks 1 (T12 to pelvis), 4 (knee joint space to midshank), and 5 (midshank to lateral malleolus). However, due to the size of the participant's thighs, it was necessary to scan each thigh individually for blocks 2 (pelvis to midthigh) and 3 (midthigh to knee joint space); this involved the radiographer repositioning the field of view between scanning the first and the second thigh but not physically moving the coil or the participant. Oil-filled capsules were secured to the surface of the participant's skin with Transpore tape at intervals along the length of the lower body prior to the scan and in an offline analysis used to verify the alignment of the blocks (Horos software, Version 3.36, https://horosproject.org/).\n\nThe offline analysis was of the following muscles/compartments (Fig. 1): iliopsoas (psoas major and iliacus combined); sartorius; tensor fasciae latae; adductor magnus; gracilis; gluteus maximus; gluteus medius and minimus (combined, due to difficulty separating the two muscles); rectus femoris (RF); vastus lateralis (VL), medialis (VM), and intermedius (VI); semimembranosus (SM); semitendinosus (ST); biceps femoris long (BFlh) and short heads (BFsh); popliteus; lateral and medial gastrocnemius; soleus; and the anterior, lateral, and deep posterior compartments of the shank. 
The anterior shank compartment consisted of the", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed12.pdf" - }, - { - "text": "\n\npredictions of skeletal muscle mass nor dual-energy X-ray absorptiometry provides detailed information on the size of speci /uniFB01 c individual muscles. Given the known importance of muscle size as a determinant of muscular strength (9 -11), pronounced muscle size seems likely to be critical to extreme human strength; however, the speci /uniFB01 c muscle size of extremely strong individuals remains unknown. Similarly, a large moment arm (e.g., of the patella tendon at the knee joint) could contribute to the expression of high muscular strength (10, 12), and a large tendon may mitigate the mechanical stress it experiences with very high muscular loads, and therefore, these characteristics may also be expected in individuals selected for exceptional strength.\n\nIn this paper, we present the /uniFB01 ndings from a unique opportunity to examine the laboratory function, muscle size, and distribution of muscle mass, as well as patellar tendon size and moment arm, of a World ' s Strongest Man and deadlift champion (WSM) in comparison with existing data on untrained individuals, power athletes (100-m-track sprinters), and long-term resistance-trained populations that we have assessed previously (10, 11, 13 -15).\n\n## MATERIALS AND METHODS\n\n## Participant\n\nThe WSM ' s achievements included one World ' sStrongest Man title (14 mo prior to measurement), /uniFB01 ve Britain ' s Strongest Man titles (the most recent 6 mo prior to measurement), twice being World Deadlift Champion and Deadlift WorldRecordholder(500kg;atthetimeofmeasurement), and second place at Europe ' s Strongest Man. Prior to agreeing to participate, the purpose of the research study and the testing procedures were explained to the participant along with the risks and bene /uniFB01 ts of taking part. 
The participant gave his written informed consent to participate in the study that was approved by the Loughborough University Ethical Advisory Committee (Ethics Number R18-P090). Included in the written consent was a statement providing permission for publication of the collected data and the likelihood that their identity may be evident based on their achievements and characteristics, despite anonymization.\n\n## Training History\n\nThe WSM had been continuously involved in systematic, regular upper- and lower-body resistance training for 15 yr at the time of testing. In the 12 mo prior to testing, the participant ' s resistance training consisted of the following typical exercises: lower body: squats, deadlifts, leg press, and knee extension; and upper body: bench press, shoulder press, dumbbell/barbell rows, and lat pull-down. The proportion of the participant ' s training within the following repetition ranges over the last 12 mo was as follows: near maximum loads [1 -5 repetition maximum (RM)]: 10%; heavy loads (6 -14 RM): 80%; and moderate loads ( /C21 15 RM): 10%. The participant reported only occasional ( < 1 /C2 /week) use of advanced resistance training practices (i.e., complex training and accommodating resistance method) but frequently ( > 3 /C2 / week) executed training repetitions with the intention to move the load as fast as possible. The WSM ' snutritional\n\nsupplement consumption included protein, branched-chain amino acids, and electrolytes.\n\n## Overview", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed12.pdf" - }, - { - "text": "\n\nTable 2. Muscle volume of all muscles, 5 functional muscle groups, and 22 individual muscles/compartments of a World ' s Strongest Man and deadlift champion and comparative elite sprinters, subelite sprinters, and untrained control participants", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed12.pdf" - }, - { - "text": "Hippocampal segmentation . 
T1- and T2-weighted images ( n = 25) were submitted to the automatic segmentation of hippocampal subfields package (ASHS 67 , version July 2018) for parcellation of seven MTL subregions: CA1, CA2/CA3, dentate gyrus, subiculum, perirhinal cortex, entorhinal cortex and PHC (Supplementary Fig. 6b). The ASHS segmentation pipeline automatically segmented the hippocampus in the T2w MRI scans using a segmented population atlas, the Princeton Young Adult 3T ASHS Atlas template 68 ( n = 24, mean age = 22.5 years). A rigid-body transformation aligned each T2w image to the respective T1w scan for each day. Using ANTs deformable registration, the T1w was registered to the population atlas. The resulting deformation fields were used to resample the data into the space of the left and right template MTL ROI. Within each template ROI, each of the T2w scans of the atlas package was registered to that day's T2w scan. The manual atlas segmentations were then mapped into the space of the T2w scan, with segmentation of the T2w scan computed using joint\n\nlabel fusion 69 . Finally, the corrective learning classifiers contained in ASHS were applied to the consensus segmentation produced by joint label fusion. The output of this step is a corrected segmentation of the T2w scan. Further description of the ASHS protocol can be found here 67 . T2w scans and segmentations were first visually examined using ITK-SNAP 70 for quality assurance and then subjected to manual editing in native space using ITK-SNAP (v.3.8.0-b; C.M.T.). One session (scan 15, third trimester) was discarded due to erroneous scan orientation. The anterior extent of the segmented labels was anchored 4 mm (two slices) anterior to the appearance of the limen insulae, and the posterior extent was anchored to the disappearance of hippocampal gray matter from the trigone of the lateral ventricle. 
Boundaries between perirhinal, entorhinal and parahippocampal cortices were established in keeping with the Olsen-Amaral-Palombo (OAP) segmentation protocol 71 . In instances where automatic segmentation did not clearly correspond to the underlying neuroanatomy, such as when a certain label was missing several gray matter voxels, manual retouching allowed for individual voxels to be added or removed. All results are reported using the manually retouched subregion volumes to ensure the most faithful representation of the underlying neuroanatomy. Scans were randomized and segmentation was performed in a random order, blind to pregnancy stage. To assess intrarater reliability for the present analyses, two days underwent manual editing a second time. The generalized Dice similarity coefficient 72 across subregions was 0.87 and the intraclass correlation coefficient was 0.97, suggesting robust reliability in segmentation.", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed4.pdf" - }, - { - "text": "## Acknowledgements\n\nThe authors would like to thank M. Mendoza for his phlebotomy and MRI assistance at the UCSB Brain Imaging Center; C. Stark and R. Tain for MRI assistance at the UCI Facility for Imaging and Brain", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed4.pdf" - }, - { - "text": "were as follows: estradiol-1.0 pg ml -1 , 1-500 pg ml -1 , <5% relative s.d. (RSD); progesterone-0.05 ng ml -1 , 0.05-10 ng ml -1 , 9.33% RSD. Serological samples were not acquired in five sessions due to scheduling conflicts with UC Irvine's Center for Clinical Research.\n\nMRI acquisition . MRI scanning sessions at the University of California, Santa Barbara and Irvine were conducted on 3T Prisma scanners equipped with 64-channel phased-array head/neck coil (of which 50 coils are used for axial brain imaging). 
High-resolution anatomical scans were acquired using a T1-weighted (T1w) magnetization prepared rapid gradient echo (MPRAGE) sequence (repetition time (TR) = 2,500 ms, time to echo (TE) = 2.31 ms, inversion time (TI) = 934 ms, flip angle = 7°, 0.8 mm thickness) followed by a gradient echo field map (TR = 758 ms, TE1 = 4.92 ms, TE2 = 7.38 ms, flip angle = 60°). A T2-weighted (T2w) turbo spin echo scan was also acquired with an oblique coronal orientation positioned orthogonally to the main axis of the hippocampus (TR/ TE = 9,860/50 ms, flip angle = 122°, 0.4 × 0.4 mm 2 in-plane resolution, 2-mm slice thickness, 38 interleaved slices with no gap, total acquisition time = 5 min and 42 sec). The Diffusion Spectrum Imaging (DSI) protocol sampled the entire brain with the following parameters: single phase, TR = 4,300 ms, echo time = 100.2 ms, 139 directions, b -max = 4,990, FoV = 259 × 259 mm, 78 slices, 1.7986 × 1.7986 × 1.8 mm voxel resolution. These images were linearly registered to the whole-brain T1w MPRAGE image. A custom foam headcase was used to provide extra padding around the head and neck, as well as to minimize head motion. Additionally, a custom-built sound-absorbing foam girdle was placed around the participant's waist to attenuate sound near the fetus during second-trimester and third-trimester scanning.", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed4.pdf" - }, - { - "text": "\n\n\n\n\n\n\n\nFigure 2. Gross (including body weight) isometric midthigh pull (IMTP) peak force ( A ), net (above body weight) IMTP peak force ( B ), countermovement jump (CMJ) peak power ( C ), and CMJ height ( D ) of a World ' s Strongest Man and deadlift champion (WSM) displayed against comparative data from the existing research literature. CMJ was performed with an arm swing by WSM and within all comparative data included in the /uniFB01 gure. /C3 Athletes from different sports or disciplines featured within the sample. 
Descriptive information (age, height, and body mass) of the groups included as comparative data can be found in Supplemental Materials 1 (IMTP) and 2 (CMJ).\n\n\n\nthe load cells across the two platforms) was displayed in front of the participant during the IMTP efforts, and a horizontal marker was placed on the highest force obtained after the /uniFB01 rst maximum effort. In the of /uniFB02 ine analysis, the force signals were low pass /uniFB01 ltered (10 Hz using a fourth-order zero-lag Butterworth /uniFB01 lter) before summating the force output from the two platforms to derive overall force produced. The instantaneous highest force during maximum efforts was identi /uniFB01 ed as the measure of gross IMTP peak force (i.e., including body weight). Force while the WSM was standing upright on the platform at rest (i.e., body weight) was also subtracted from the peak instantaneous force to calculate net IMTP peak force.\n\n## Analysis and Comparative Data\n\nMuscle volumes, patellar tendon CSA, and patellar tendon moment arm measurements assessed on both legs of the WSM were averaged to provide unilateral criterion values; this facilitated comparisons with various untrained, resistance-trained, and athletic groups previously investigated in published works from our laboratory (10, 11, 13 -15; Table 1). IMTP and CMJ values were predominantly compared with existing research literature with the highest comparable male data [e.g., IMTP gross peak force: (18 -25); IMTP net peak force:\n\n(26 -31); CMJ performed with an arm swing on a force platform (32 -38)]. Where the numerical values (means and SD) from previously published studies were not reported, they were extracted using online software (WebPlotDigitizer, version 4.6, https://automeris.io/WebPlotDigitizer). For IMTP peak force in cases where it was not clearly stated that body weight was subtracted from gross IMTP peak force, measures were assumed to be gross IMTP peak force. 
Muscle and tendon morphology /uniFB01 gures display means ± SD as well as individual participant data for comparative populations, as these values are from published research from our laboratory. IMTP peak force and CMJ outcome /uniFB01 gures display only means ± SD values for comparative populations, as we relied on published values from the literature where individual participant values were not typically available.\n\n## RESULTS\n\n## Participant Descriptives and Anthropometrics\n\nThe WSM was 30.6 yr old and 1.90 m tall and his body mass was 172 kg upon reporting for the laboratory visit. The age, height, and body mass of participants from the comparative datasets featured in our previously published research are presented in Table 1. Age, height, and body mass for", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed12.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed13.pdf", - "query": "What is typical age at which multiple sclerosis is diagnosed ?", - "target_page": 2, - "target_passage": "Multiple sclerosis (MS) is a progressive inflammatory disease of the central nervous system (CNS) that is typically diagnosed at 30– 40 years of ag", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "- 39. Silveira SL, Cederberg KLJ, Jeng B, Sikes EM, Sandroff BM, Jones CD, et al. Do physical activity and social cognitive theory variable scores differ across symptom cluser severity groups in multiple sclerosis? Disabil Health J . (2021) 14(4):101163. doi: 10.1016/j.dhjo.2021.101163\n - 40. Learmonth YC, Motl RW. Exercise training for multiple sclerosis: a narrative review of history, bene /uniFB01 ts, safety, guidelines, and promotion. Int J Environ Res Public Health . (2021) 18(24):13245. doi: 10.3390/ijerph182413245\n - 41. Baird JF, Motl RW. Response heterogeneity with exercise training and physical activity interventions among persons with multiple sclerosis. Neurorehabil Neural Repair . 
(2019) 33(1):3 -14. doi: 10.1177/1545968318818904\n - 42. Sandroff BM, Baird JF, Silveira SL, Motl RW. Response heterogeneity in /uniFB01 tness, mobility and cognition with exercise-training in MS. Acta Neurol Scand . (2019) 139 (2):183 -91. doi: 10.1111/ane.13041\n - 43. Lahelle AF, Øberg GK, Normann B. Group dynamics in a group-based, individualized physiotherapy intervention for people with multiple sclerosis: a qualitative study. Physiother Res Int . (2019) 25(3):e1829. doi: 10.1002/pri.1829\n - 44. Normann B. Facilitation of movement: new perspectives provide expanded insights to guide clinical practice. Physiother Theory Pract . (2020) 36(7):769 -78. doi: 10.1080/09593985.2018.1493165\n - 45. Øberg GK, Normann B, Gallagher S. Embodied-enactive clinical reasoning in physical therapy. Physiother Theory Pract . (2015) 31(4):244 -52. doi: 10.3109/ 09593985.2014.1002873\n - 46. Anens E, Zetterberg L, Urell C, Emtner M, Hellström K. Self-reported physical activity correlates in Swedish adults with multiple sclerosis: a cross-sectional study. BMC Neurol . (2017) 17(1):204. doi: 10.1186/s12883-0170981-4\n - 47. Herring TE, Knowles LM, Alschuler KN. Outdoor adventure programs for persons with multiple sclerosis: a review and agenda for future research. Int J MS Care . (2021) 23(4):186 -92. doi: 10.7224/1537-2073.2020-066\n - 48. Creswell JW, Poth CN. Qualitative Inquiry & Research Design: Choosing Among Five Approaches . 4th ed. California: Sage (2018).", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed13.pdf" - }, - { - "text": "Figure 11: Number of recent (within two years) OCU initiates presenting to treatment in 2005 and 2013, by age of individual at first presentation.\n\n\n\nThe mode age of initiation has shifted from around 18 to around 25 and there is an older age profile throughout. Rises in average age of initiation have also been reported recently in cohorts of Australian injecting drug users (Horyniak et al., 2015). 
There appear to be two possible explanations.\n\n -  There is a genuine shift towards new initiates being older, and for them to present to treatment much faster than in previous years.\n -  There is a consistent, but small number of individuals who mis-report their age of onset when attending treatment i.e. who report that they have only been using opiates/crack for a short period when in fact they have been using for a far longer period, and that this is starting to really bias the numbers for recent cohorts because attendees from the original epidemic are becoming smaller.\n\nIt is possible then that the flattening we observe in the incidence trend is due to a small in-flux of older initiates, although mis-reporting may also explain that phenomenon. Either way though, as this analysis has made clear throughout, absolute numbers of new OCUs appear to be small probably fewer than 10,000 per annum and the numbers of those involved with crime will be smaller still. In addition, despite a flattening in the probable trend in new users, there is currently no sign that it is likely to tip upwards. If anything, the data suggest the downward trend is set to resume, though clearly it remains important to monitor the situation.", - "page_start": 28, - "page_end": 28, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "institutional requirements. The participants provided their written informed consent to participate in this study.\n\n## Author contributions\n\nSD: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Project administration, Resources, Visualization, Writing -original draft, Writing -review & editing. EA: Conceptualization, Formal Analysis, Methodology, Supervision, Writing -review & editing. 
BN: Conceptualization, Formal Analysis, Funding acquisition, Methodology, Project administration, Resources, Supervision, Writing -review & editing.\n\n## Funding\n\nThe author(s) declare that /uniFB01 nancial support was received for the research, authorship, and/or publication of this article.\n\nThe development of the CoreDISTparticipation and the RCT is funded by the Northern Norway Health Authority (Helse Nord RHF). This interview study was funded by Nord University (PhD salary).\n\n## References\n\n- 1. Walton C, King R, Rechtman L, Kaye W, Leray E, Marrie RA, et al. Rising prevalence of multiple sclerosis worldwide: insights from the Atlas of MS, third edition. Mult Scler . (2020) 26(14):1816 -21. doi: 10.1177/1352458520970841\n- 2. Casey B, Coote S, Galvin R, Donnelly A. Objective physical activity levels in people with multiple sclerosis: meta-analysis. Scand J Med Sci Sports . (2018) 28 (9):1960 -9. doi: 10.1111/sms.13214\n- 3. Kinnett-Hopkins D, Adamson B, Rougeau K, Motl RW. People with MS are less physically active than healthy controls but as active as those with other chronic diseases: an updated meta-analysis. Mult Scler Relat Disord . (2017) 13:38 -43. doi: 10.1016/j.msard.2017.01.016\n- 4. Hoang PD, Lord S, Gandevia S, Menant J. Exercise and sports science Australia (ESSA) position statement on exercise for people with mild to moderate multiple sclerosis. J Sci Med Sport . (2022) 25(2):146 -54. doi: 10.1016/j.jsams.2021.08.015\n- 5. Dalgas U, Langeskov-Christensen M, Stenager E, Riemenschneider M, Hvid LG. Exercise as medicine in multiple sclerosis -time for a paradigm shift: preventive, symptomatic, and disease-modifying aspects and perspectives. Curr Neurol Neurosci Rep . (2019) 19(11):1 -12. doi: 10.1007/s11910-019-1002-3\n- 6. Riemenschneider M, Hvid LG, Ringgaard S, Nygaard MKE, Eskildsen SF, Gaemelke T, et al. 
Investigating the potential disease-modifying and neuroprotective ef /uniFB01 cacy of exercise therapy early in the disease course of multiple sclerosis: the early multiple sclerosis exercise study (EMSES). Mult Scler . (2022) 28(10):1620 -9. doi: 10. 1177/13524585221079200\n- 7. Kalb R, Brown TR, Coote S, Costello K, Dalgas U, Garmon E, et al. Exercise and lifestyle physical activity recommendations for people with multiple sclerosis throughout the disease course. Mult Scler . (2020) 26(12):1459 -69. doi: 10.1177/ 1352458520915629\n- 8. Moreno-Navarro P, Manca A, Martinez G, Ventura L, Barbado D, Vera-García FJ, et al. Test-retest reliability and known-groups validity of trunk muscle tests in people with multiple sclerosis: a cross-sectional, case-control study. Phys Ther . (2021) 101 (5):1 -9. doi: 10.1093/ptj/ptzab049\n- 9. Raats J, Arntzen EC, Lamers I, Feys P, Normann B. What is the distribution of trunk impairments and its relationship with disability level in individuals with multiple sclerosis? Mul Scler Relat Disord . (2021) 57:103325. doi: 10.1016/j.msard. 2021.103325\n- 10. Normann B, Arntzen EC. What are the relationships between trunk control, balance and walking in individuals with multiple sclerosis with minor to moderate disability? Eur J Physiother . (2021) 23(6):377 -83. doi: 10.1080/21679169.2020.1772870\n\n## Acknowledgments\n\nThe authors would like to thank the participants in this study and the user representatives from Nordland MS Association for their valuable contributions. 
The authors also acknowledge philosopher of the mind and cognitive sciences Hanne De Jaegher for the valuable comments on the interpretations and discussions of the results.\n\n## Con /uniFB02 ict of interest", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed13.pdf" - }, - { - "text": "Even in the short period between 2013 and 2018 (the period covered by these pilot statistics) the data show an overall decline and a decline of several relevant occupational diseases. The strongest decrease - practically a halving - can be seen for hearing impairments (diseases of the inner ear). Pneumoconiosis, mesothelioma and selected occupational cancers went down between 7% and 14%. Asthma and some recognised MSDs are more or less stagnating, probably due to unchanged exposure to biological or chemical substances and no change regarding the health outcomes of ergonomic working conditions.\n\nIf work is one of some causative factors, a clear assignment of work to a health outcome is complex. Moreover, in many cases a quite long observation period is necessary simply due to the latency time between exposure at work, outbreak and detection of a disease , which is obviously very different from the clear and immediate consequence of an accident at work.\n\nThe detection of a disease and the correlation between work and this disease depends highly on the monitoring capacities of the health system and its ability, tradition and standards to connect diseases and work-related causes . In a study on 'Asbestos -related occupational diseases in Central and East European Countries' the authors refer to different policies for identifying workers formerly exposed to asbestos and conclude:\n\n'Consequently, large differences are observed from one country to another regarding the number of recognised asbestos-related cases. In Slovenia, for example, the annual asbestosis rate (cases of asbestosis/population) amounts to 14.9, in Croatia 5.3, and in Poland 2.1. 
Moreover, in Estonia, the incidence of asbestosis is unknown as there is no systematic collection of data.' 181\n\nFor example, until now very few occupational diseases have been recognised as outcomes of psychosocial risks at work. The ILO proposes in its 'List of Occupational Diseases Recommendation' a large number of very specific and 'classic' occupational diseases - a very broad definition of 'Mental and behavioural disorders' but leaving the responsibility to science and to 'national conditions'. 182 Similarly, the development of the European Schedule of Occupational Diseases (ESOD) aims to improve knowledge, step up prevention and provide assistance in linking occupational activities and diseases.\n\n", - "page_start": 74, - "page_end": 74, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "The strong differences in the expectations to do the job until 60 years of age are probably also caused by the circumstance that the labour market for physically demanding jobs is more rigid. For example, one serious musculoskeletal issue might mean being out of a manual job far before the pension age. For diseases caused by excessive psychosocial burden, other difficulties can be observed: the recognition as work-related is less accepted, work-related and private life causes are closely intertwined, and the diagnosis can be difficult.\n\n", - "page_start": 96, - "page_end": 96, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "[\n\n]\n\n## Impact of Dyspnea on Adults With Respiratory Symptoms Without a De /uniFB01 ned Diagnosis\n\n\n\n\n\nJared Bierbrier, BSc; Emily Gerstein; George A. Whitmore, PhD; Katherine L. Vandemheen, MScN; Celine Bergeron, MD; Louis-Philippe Boulet, MD; Andreanne Cote, MD; Stephen K. Field, MD; Erika Penz, MD; R. Andrew McIvor, MD; Catherine Lemière, MD; Samir Gupta, MD; Paul Hernandez, MD; Irvin Mayers, MD; Mohit Bhutani, MD; M. 
Diane Lougheed, MD; Christopher J. Licskai, MD; Tanweer Azher, MD; Nicole Ezer, MD; Martha Ainslie, MD; Gonzalo G. Alvarez, MD; Sunita Mulpuru, MD; and Shawn D. Aaron, MD\n\nBACKGROUND: We investigated dyspnea; its associated risk factors; and its impact on health care utilization, quality of life, and work productivity in adults with undiagnosed respiratory symptoms.\n\nRESEARCH QUESTION: What is the impact of dyspnea in adults with undiagnosed respiratory symptoms?\n\nSTUDY DESIGN AND METHODS: This population-based study included 2,857 adults who were experiencing respiratory symptoms. These individuals had not been previously diagnosed with any lung conditions and were recruited from 17 Canadian centers using random digit dialing. Each participant underwent spirometry testing both before and after using a bronchodilator to determine if they met the diagnostic criteria for COPD, asthma, or preserved ratio impaired spirometry (PRISm), or if their spirometry results were normal. An agematched control group (n ¼ 231) was similarly recruited using random digit dialing. A dyspnea impact assessment score from 0 to 100 was produced using questions from the COPD Assessment Test and St. George ' s Respiratory questionnaire.\n\nRESULTS: Individuals with PRISm (n ¼ 172) reported more impactful dyspnea (mean score, 63.0; 95% CI, 59.5-66.4) than those with undiagnosed asthma (n ¼ 265; mean score, 56.6; 95% CI, 53.9-59.3) or undiagnosed COPD (n ¼ 330; mean score, 57.5; 95% CI, 55.1-59.9). All groups reported signi /uniFB01 cantly more impactful dyspnea than the control group (mean score, 13.8; 95% CI, 11.8-15.7). Patient-speci /uniFB01 c risk factors including age, sex, BMI, smoking, and comorbidities explained 20.6% of the variation in dyspnea. An additional 12.4% of the variation was explained by disease classi /uniFB01 cation and another 1.7% by the severity of lung function impairment assessed with spirometry. 
After adjusting for age, sex, and BMI, greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\nINTERPRETATION: Our /uniFB01 ndings showed that in community-based adults with undiagnosed respiratory symptoms, those identi /uniFB01 ed with PRISm experienced the greatest impact of dyspnea. Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity. CHEST 2024; 166(6):1296-1308\n\nKEY WORDS: asthma; case /uniFB01 nding; COPD; dyspnea\n\nFOR EDITORIAL COMMENT, SEE PAGE 1259\n\n[\n\n]", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "[\n\n- assessed through inspiratory resistive loading. J Bras Pneumol . 2015;41(2): 143-150.\n- 25. Ekström M, Bornefalk H, Sköld M, et al. Validation of the Swedish Multidimensional Dyspnea Pro /uniFB01 le (MDP) in outpatients with cardiorespiratory disease. BMJ Open Respir Res . 2019;6: e000381.\n- 26. Yorke J, Russell AM, Swigris J, et al. Assessment of dyspnea in asthma: validation of The Dyspnea-12. J Asthma . 2011;48(6):602-608.\n- 27. Boulet LP, Boulay ME, Cote A, et al. Airway in /uniFB02 ammation and hyperresponsiveness in subjects with respiratory symptoms and normal spirometry. Eur Respir J . 2023;61(3): 2201194.\n- 28. Gerstein E, Bierbrier J, Whitmore GA, et al. Impact of undiagnosed chronic obstructive pulmonary disease and asthma on symptoms, quality of life, healthcare use, and work productivity. Am J Respir Crit Care Med . 2023;208(12):1271-1282.\n- 29. Aaron SD, Vandemheen K, Whitmore GA, et al. Early diagnosis and treatment of COPD and asthma: a randomized, controlled trial. N Engl J Med . 2024;390(22):2061-2073.\n- 30. Han MK, Ye W, Wang D, et al. Bronchodilators in tobacco-exposed persons with symptoms and preserved lung function. N Engl J Med . 2022;387(13): 1173-1184.\n- 31. Marott JL, Ingebrigtsen TS, Çolak Y, et al. 
Impact of the metabolic syndrome on cardiopulmonary morbidity and mortality in individuals with lung function impairment: a prospective cohort study of the Danish general population. Lancet Reg Health Eur . 2023;35:100759.\n- 32. Stefan MS, Priya A, Martin B, et al. How well do patients and providers agree on the severity of dyspnea? J Hosp Med . 2016;11(10):701-707.\n- 33. Cherian M, Magner KMA, Whitmore GA, et al. Patient and physician factors associated with symptomatic undiagnosed asthma or COPD. Eur Respir J . 2023;61(2): 2201721.\n\n]", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "one causal agent and relatively easy to identify. On the other hand, there are all sorts of disorders without strong or specific connections to occupation and with numerous possible causal agents.' 176\n\nSome professions and regular work tasks had and have very specific risks, for example, hearing disability through high noise levels, or musculoskeletal diseases caused by permanent repetition of a certain movement or posture, or specific cancers after exposure to carcinogenic chemical substances, infections in healthcare or work in laboratories, or allergies to natural substances in agriculture. Some examples are:\n\n## Occupation, work task, exposure\n\n## Occupational disease\n\n - · healthcare of infected persons\n - ► infection with the same disease\n - · highly repetitive hand and arm movements\n - ► epicondylitis\n - · quartz dust\n - ► silicosis\n - · working long hours in a kneeling position\n - ► bursitis\n - · extensive UV exposure\n - ► skin cancer\n - · aromatic amines\n - ► bladder cancer\n - · professional musicians\n - ► focal dystonia\n - · grain dust (agriculture)\n - ► allergies, asthma\n\nSpecific and strong connections between a risk and an outcome (risk pairs) are covered by occupational disease recognition schemes in the EU Member States. 
177 Some countries have opening options in their list systems, that is, in principle every disease with a dominant cause in working conditions can be recognised. However, many court cases about the recognition of occupational diseases demonstrate that a clear cause-effect relationship is not always evident, that is, due to missing workplace exposure data from the past or competing causes in private circumstances. All occupational diseases with a principally unambiguous relation between cause and consequence account only for a small percentage of all work-related diseases . 178\n\nWe can observe a decrease of some of the major recognised diseases , 179 either triggered by preventive measures or triggered by shifts of workforce to sectors with less recognised occupational diseases. The new experimental EODS Statistics of Eurostat 180 documents the following developments of recognised occupational diseases.\n\nTable 21: Development of recognised occupational diseases in the EU 2013-2019", - "page_start": 73, - "page_end": 73, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- 202 Mazeikaite et al., 2021: What Drives CrossCountry Health Inequality in the EU? 
Unpacking the Role of Socioeconomic Factors\n - 203 Eurostat: LFS 2020 Ad hoc module, here\n - 204 Eurostat: Persons reporting a work-related health problem by sex, age and occupation, here\n - 205 Murray & Lopez, 1996: The Global burden of disease : a comprehensive assessment of mortality and disability from diseases, injuries, and risk factors in 1990 and projected to 2020, here\n\nUpdate: GBD 2017 Risk Factor Collaborators, 2018: Global, regional, and national comparative risk assessment of 84 behavioural, environmental and occupational, and metabolic risks or clusters of risks for 195 countries and territories, 1990-2017: a systematic analysis for the Global Burden of Disease Study 2017, here\n\n## 206 European Burden of Disease Network\n\n - 207 WHO definition: 'One DALY represents the loss of the equivalent of one year of full health. DALYs for a disease or health condition are the sum of the years of life lost to due to premature mortality (YLLs) and the years lived with a disability (YLDs) due to prevalent cases of the disease or health condition in a population.' here\n - 208 Murray & Lopez, 1996:. 
The Global burden of disease : a comprehensive assessment of mortality and disability from diseases, injuries, and risk factors in 1990 and projected to 2020, here\n - 209 IHME/GDB: GDB Compare - Vizhub, Visualisation of global health data, here\n - 210 Takala et al., 2017: Comparative Analysis of the Burden of Injury and Illness at Work in Selected Countries and Regions\n\nEzzati et al., 2004: Comparative quantification of health risks: global and regional burden of disease attributable to selected major risk factors\n\nNelson et al., 2005: The global burden of selected occupational disease and injury risks: Methodology and summary", - "page_start": 148, - "page_end": 148, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- (v) the parent or carer of a domestic elite sportsperson under the age of 18;", - "page_start": 46, - "page_end": 46, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed13.pdf", - "query": "What was the average year of the group that participated to the study concerning the impact of outdoor pysiotherapy on patient with multiple sclerosis", - "target_page": 4, - "target_passage": "Age in years Mean 47.6", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "It was an added positive experience to use our city park and notice all the other people who were there … it is something about challenging our comfort-zone . (ID4, EDSS: 0)\n\nThe natural environment was also described as taking focus away from MS symptoms. Cold, rainy or snowy weather conditions required planning of adequate clothing; in addition, these conditions led some participants to use cautious behavior when the ground was slippery and led a few to omit sessions. 
However, mastering outdoor exercise was highlighted in positive terms, such as discovering new ways to become active.\n\n## 3.4 Professional leadership, tailoring and co-creation of enjoyment\n\nThe way the physiotherapists led the group and, in particular, interacted with each participant were regarded as helpful for improving their bodily functions and activity levels. Some participants reported being afraid to try out new activities or training at high intensities after being diagnosed with MS but felt safe to explore when supervised by the physiotherapist because of their trust in the relationship between them and in the physiotherapist ' s professional knowledge.\n\nHow the physiotherapist approached the participants individually was described as important from this perspective. In particular, bodily interactions in which the physiotherapist demonstrated with his or her own body or placed his or her hands on the participant ' s body to correct a movement were reported to be successful, as it helped to increase speed and gave participants a sense of performing better or for a longer duration. If they did an exercise in a suboptimal way, participants reported receiving precise supervision, or if they expressed pain or were injured, the physiotherapist was supportive, assessed them and", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed13.pdf" - }, - { - "text": "However, explained variance estimates in our models ranged from 34 to 61%, suggesting further research is necessary to identify additional factors contributing to healthcare utilization following physical therapy.\n\nThe primary limitation of the study is the high number of subjects lost to follow-up. We attempted to account for the bias introduced by loss to follow-up in our models with IPAW, which is a robust strategy for conducting analyses with missing data [41, 51]. We observed good concordance between results of complete case and weighted analyses, giving us confidence in our findings. 
However, important differences in age, race, education, symptom onset, baseline pain intensity, and baseline pain-related psychological distress were noted between those who did and did not complete follow-up. These differences suggest that the group lost to follow-up may represent a unique population to whom these results may not apply. Different factors may predict utilization outcomes for this unique population. As a result, readers should exercise caution when extending these findings to individuals and populations that substantially differ from the analytic sample in this study. Specifically, these predictive models may need to be adjusted for younger individuals of non-white race, with lower education levels, sudden onset of symptoms, and those with higher pain intensity and pain-associated distress.\n\nA second limitation is that we did not know about the subjects ' prior experiences with physical therapy, or whether they arrived at physical therapy through direct access or referral from another provider. These factors could be associated with treatment expectations, which have known effects on treatment outcomes [52, 53]. We also did not collect specific information on treatment. But by including changes in pain, disability, and pain-related psychological distress in the models, we were able to account for treatment response. The benefit of this approach is that models are generalizable for predicting utilization outcomes across ' real-world ' pragmatic physical therapy settings where treatment variation is expected. The drawback is that we are prohibited from making conclusions regarding which characteristics of the clinical encounter might influence subsequent pain-related healthcare utilization. Important characteristics to consider would include number of visits, type of interventions or whether patients completed their course of physical therapy. 
These have been proposed or identified as important contributors to downstream costs following physical therapy [54, 55] and may be a source of unexplained variance in our models. Characteristics of the clinical encounter should be considered in future studies to refine the prediction models developed in our analyses.\n\nThird, we were unable to adequately model the specific effects of worker ' s compensation, self-pay and some\n\ncommercial insurance coverage on utilization due to the low incidence of these forms of payment in our study sample. Modeling these separately would have created the potential for unreliable and imprecise effect estimates. Readers should consider the within-group heterogeneity caused by this approach and exercise caution when applying these results to individuals who do not have traditional public or private insurance coverage. Future studies should investigate the performance of the OSPRO tools in predicting outcomes for patients with Worker ' s Compensation.\n\nA final limitation is the use of patient recall to measure utilization. To mitigate recall bias, we used two follow-up points, at 6 and 12 months. However, underor over-reporting of utilization is often a concern with studies requiring subject recall [56 -58]. Medical record and claims data were not available for these subjects. Readers should consider our inability to independently confirm utilization when interpreting results.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed5.pdf" - }, - { - "text": "- 39. Silveira SL, Cederberg KLJ, Jeng B, Sikes EM, Sandroff BM, Jones CD, et al. Do physical activity and social cognitive theory variable scores differ across symptom cluser severity groups in multiple sclerosis? Disabil Health J . (2021) 14(4):101163. doi: 10.1016/j.dhjo.2021.101163\n - 40. Learmonth YC, Motl RW. Exercise training for multiple sclerosis: a narrative review of history, bene /uniFB01 ts, safety, guidelines, and promotion. 
Int J Environ Res Public Health . (2021) 18(24):13245. doi: 10.3390/ijerph182413245\n - 41. Baird JF, Motl RW. Response heterogeneity with exercise training and physical activity interventions among persons with multiple sclerosis. Neurorehabil Neural Repair . (2019) 33(1):3 -14. doi: 10.1177/1545968318818904\n - 42. Sandroff BM, Baird JF, Silveira SL, Motl RW. Response heterogeneity in /uniFB01 tness, mobility and cognition with exercise-training in MS. Acta Neurol Scand . (2019) 139 (2):183 -91. doi: 10.1111/ane.13041\n - 43. Lahelle AF, Øberg GK, Normann B. Group dynamics in a group-based, individualized physiotherapy intervention for people with multiple sclerosis: a qualitative study. Physiother Res Int . (2019) 25(3):e1829. doi: 10.1002/pri.1829\n - 44. Normann B. Facilitation of movement: new perspectives provide expanded insights to guide clinical practice. Physiother Theory Pract . (2020) 36(7):769 -78. doi: 10.1080/09593985.2018.1493165\n - 45. Øberg GK, Normann B, Gallagher S. Embodied-enactive clinical reasoning in physical therapy. Physiother Theory Pract . (2015) 31(4):244 -52. doi: 10.3109/ 09593985.2014.1002873\n - 46. Anens E, Zetterberg L, Urell C, Emtner M, Hellström K. Self-reported physical activity correlates in Swedish adults with multiple sclerosis: a cross-sectional study. BMC Neurol . (2017) 17(1):204. doi: 10.1186/s12883-0170981-4\n - 47. Herring TE, Knowles LM, Alschuler KN. Outdoor adventure programs for persons with multiple sclerosis: a review and agenda for future research. Int J MS Care . (2021) 23(4):186 -92. doi: 10.7224/1537-2073.2020-066\n - 48. Creswell JW, Poth CN. Qualitative Inquiry & Research Design: Choosing Among Five Approaches . 4th ed. California: Sage (2018).", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed13.pdf" - }, - { - "text": "## RESEARCH ARTICLE\n\n## Prediction of healthcare utilization following an episode of physical therapy for musculoskeletal pain\n\nTrevor A. Lentz 1* , Jason M. 
Beneciuk 2,3 and Steven Z. George 4\n\n## Abstract\n\nBackground: In the United States, value-based purchasing has created the need for healthcare systems to prospectively identify patients at risk for high healthcare utilization beyond a physical therapy episode for musculoskeletal pain. The purpose of this study was to determine predictors of pain-related healthcare utilization subsequent to an index episode of physical therapy for musculoskeletal pain.\n\nMethods: This study assessed data from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) longitudinal cohort study that recruited individuals with a primary complaint of neck, low back, knee or shoulder pain in physical therapy ( n = 440). Demographics, health-related information, review of systems, comorbidity and pain-related psychological distress measures were collected at baseline evaluation. Baseline to 4-week changes in pain intensity, disability, and pain-related psychological distress were measured as treatment response variables. At 6-months and 1-year after baseline evaluation, individuals reported use of opioids, injection, surgery, diagnostic tests or imaging, and emergency room visits for their pain condition over the follow-up period. Separate prediction models were developed for any subsequent care and service-specific utilization.\n\nResults: Subsequent pain-related healthcare utilization was reported by 43% ( n = 106) of the study sample that completed the 12-month follow-up ( n = 246). Baseline disability and 4-week change in pain intensity were important global predictors of subsequent healthcare utilization. 
Age, insurance status, comorbidity burden, baseline pain, and 4-week changes in pain intensity, disability and pain-related psychological distress predicted specific service utilization.\n\nConclusion: In those completing follow up measures, risk of additional pain-related healthcare utilization after physical therapy was best predicted by baseline characteristics and 4-week treatment response variables for pain intensity, disability and pain-related psychological distress. These findings suggest treatment monitoring of specific response variables could enhance identification of those at risk for future healthcare utilization in addition to baseline assessment. Further study is required to determine how specific characteristics of the clinical encounter influence future utilization.\n\nKeywords: Screening, Psychological distress, Multimorbidity, Value, Treatment monitoring\n\n## Background\n\nMusculoskeletal pain is a prevalent and costly health condition with far-reaching public health consequences including chronic pain, disability and opioid-related addiction [1]. Clinical practice guidelines now recommend non-pharmacological treatment as frontline management for musculoskeletal pain, which will lead\n\n1\n\nDuke Clinical Research Institute, Duke University, 2400 Pratt Street, Durham,\n\nNC 27705, USA\n\nFull list of author information is available at the end of the article\n\n\n\nto increased utilization of services such as physical therapy [1 -3]. Physical therapy is effective for improving disability and reducing costs associated with many musculoskeletal pain conditions [4 -9]. However, pain-related healthcare utilization beyond the physical therapy episode (e.g. subsequent use of surgery, injection, opioids, etc.) may indicate suboptimal treatment response, the presence of more complex needs, or unwarranted escalation of care. 
Downstream healthcare utilization is not often considered as an outcome of care or indication of treatment effectiveness for musculoskeletal pain. But the importance of\n\n\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed5.pdf" - }, - { - "text": "community healthcare in the two municipalities. The project team included three individuals representing users from the Nordland MS Association, along with an MS nurse and a neurologist from the MS-outpatient clinic, and three physiotherapists/ researchers.\n\n## 2.4 Research team and re /uniFB02 exivity\n\nAll researchers on the team are clinical specialists in neurological physiotherapy. BN and ECA developed the CoreDISTparticipation intervention, and SSHD contributed to the development of the outdoor part.\n\nThe researchers ' closeness to the intervention and the clinical /uniFB01 eld may have strengthened the depth and relevance of their interpretations in this study (27), as it was easy to understand what participants described and helped form follow-up questions during the interviews. However, closeness may also produce a risk of ' blind spots ' , as the researchers may prejudice participants ' experiences, omitting questions where the answers are believed to be obvious (27). Thus, throughout the process, trustworthiness and rigor were enhanced by discussing the methodology, /uniFB01 ndings, and interpretations with external researchers (including specialists in enactive theory), as well as user representatives. The presented theoretical framework (enactive theory) enhanced the distance to the material, as recommended in qualitative research (28).\n\n## 2.5 Recruitment and participants\n\nPrior to recruitment, the study was introduced to individuals with multiple sclerosis (pwMS) through a seminar hosted by the Nordland MS Association. Additionally, seminars were conducted for health professionals in community healthcare and at the regional hospital. 
Written information about this study (and the RCT) was sent from the MS clinic at the regional hospital by post to all eligible individuals af /uniFB01 liated with the hospital. Individuals who wished to participate signed the attached consent form and returned it in the pre-stamped envelope. The inclusion criteria were as follows: had been diagnosed with MS, had a score on the Expanded Disability Status Scale (EDSS) (29)of ≤ 3.5, was ≥ 18 years, was employed (10% -100% of full-time) and residential address in the two prede /uniFB01 ned municipalities. The exclusion criteria were as follows: pregnancy, exacerbation of symptoms within two weeks prior to enrollment and other serious conditions compromising balance, walking or work capacity. All participants in the intervention group of the RCT ( n = 15) were included (Table 3).\n\n## 2.6 Data collection\n\nThe interview guide (Table 4) was developed based on literature reviews, clinical experience and discussions within the research group and with user representatives. Two test interviews were\n\nTABLE 3 Participant demographic information.TABLE 4 Interview guide.\n\n| Variable | Total ( n =15) |\n|------------------------------------|-----------------------------------------------|\n| Age in years | Mean 47.6 (SD 6.04) |\n| Gender (women/men) | 12 woman/3 men (80%/20%) |\n| Type of MS | Relapsing remitting 15 (100%) |\n| EDSS | Mean 1.8 (SD 0.9) |\n| Years since diagnosis | Mean 10.4 (SD 7.8) |\n| Participation in the outdoor group | Mean 4.6 sessions/total mean attendance 57.3% |", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed13.pdf" - }, - { - "text": "given the heterogenic pathology and symptoms of MS (41, 42). 
However, our /uniFB01 ndings illuminate qualitative aspects of how to achieve tailored and meaningful intersubjective interactions in an exercise intervention.\n\nWe consider the instances of the physiotherapist running together with the participant, which were perceived as important for participants ' performance, to be an example of ' participatory sense-making ' (22) . As participants appreciated being guided or even pushed by the physiotherapists, it appears that the physiotherapists were trusted in directing this interaction. As such, we argue that the physiotherapists ' ability to adapt to participants ' movements, speech and gestures -tailoring the interaction to their needs -was important for this ability to be perceived as purposeful. This is supported by the few negative incidents described where the participant-physiotherapist interaction seemed to not be jointly coordinated and appeared to fail. The reported mutual in /uniFB02 uences of sensorimotor capabilities and interpersonal coordination, with the physiotherapists but also the group, are in accordance with sensorimotor capacities and intersubjective interactions being important for sensemaking in the world (35). The bene /uniFB01 ts of these individualized participant-physiotherapist interactions are also described in speci /uniFB01 c core-stability exercises in indoor groups (16, 43) and are in line with the theoretical framework of facilitation of movement through hands-on interaction previously proposed (44, 45). 
Our study informs new knowledge of physiotherapistparticipant interactions to achieve the recommended highintensity training and calls for physiotherapy clinical reasoning through bodily and verbal communication skills adapted to the participants ' responses in an ongoing and situated way.\n\nEnjoyment has previously been reported to promote PA in pwMS, and our study brings requested knowledge of what can constitute enjoyment in an exercise intervention (46): playful group-exercise tasks, a cheerful physiotherapist, and the outdoor environment.\n\nThe appreciation of being active outdoors in the study sample aligns with that in the general population (47). The outdoors provided a natural environment, which both invited participants to actively explore abilities thought of as left behind after their diagnosis with MS, such as running, and provided an appreciated break from focusing on MS symptoms. We also suggest that the positive experiences of mastering the challenging weather conditions and the added meaning of exercising among other people in the city park can be explained according to such terms. These positive experiences show how we are enmeshed in our history, context and social encounters (35) and how these aspects should also be accounted for when designing exercise interventions.\n\n## 4.3 Methodological considerations\n\nThe design and methods were adequate for deriving knowledge from individuals ' experiences. The participants selfreferred to the intervention and were recruited based on pre-set criteria. This approach yielded rich information from people with mild to moderate disabilities due to MS who were\n\nmotivated for physical activity (PA), employed, and residing in northern Norway. Ethnicity or socio-economic class were not recorded. However, considering that all these factors can in /uniFB02 uence PA engagement (46), it is possible that additional aspects of the phenomenon could be uncovered in a different sample (48). 
There was a higher percentage of women participating than men; however, this corresponds to the gender distribution in the MS population (1).\n\nThe use of enactive theory was innovative within the /uniFB01 eld and allowed for, in particular, new aspects of importance for selfef /uniFB01 cacy to be identi /uniFB01 ed. Transference of our results to similar populations can be achieved through theoretical generalization (28).\n\n## 4.4 Implications for clinical practice", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed13.pdf" - }, - { - "text": "routine pain-related psychological distress monitoring throughout the early phases of rehabilitation especially if the goal is to identify risk for subsequent pain-related healthcare utilization. The implications of these collective findings are that treatment pathways may provide greater value by 1) addressing modifiable health-related variables like pain, disability and pain-related psychological distress, 2) routine monitoring of these health-related variables and 3) offering treatment alternatives that safely escalate care if needed while minimizing risk of harm and unhelpful utilization.\n\nOpioids and diagnostic tests and imaging were the two most common subsequent healthcare services utilized following physical therapy. Of the individuals that completed follow up and had any subsequent healthcare utilization, approximately 42% reported opioid use and 70% reported use of diagnostic tests and imaging. An important health-related predictor of these services was level of comorbidity burden. For those with high comorbidity burden and inadequate treatment response to physical therapy, use of additional diagnostic tests and imaging or low-dose opioids may be appropriate in some cases. 
But given the growing public health concern over opioid use and the desire to avoid unnecessary treatment driven by imaging, our results suggest the importance of considering disease burden when developing treatment pathways and healthcare policy to mitigate risk for avoidable use of these services. Interestingly, neither versions of the OSPRO-ROS predicted utilization outcomes even though it has been linked to mental health, comorbidity, and persistent pain state in other analyses [20, 21]. Systemic symptom burden is a measure of patient complexity that is related to but distinct from comorbidity burden [36, 47]. In these analyses, the chronic condition measure (i.e. the CCI) was a better predictor of utilization than symptom burden (i.e. OSPRO-ROS). The reasons for this finding are unclear but may be related to providers and patients being more likely to pursue follow-up medical care for musculoskeletal pain when known co-existing conditions are present as opposed to reporting of symptoms alone. The distinction between symptom and disease burden in defining musculoskeletal patient complexity, and its influence on clinical decision-making and outcomes, should be the subject of future research particularly related to aging populations [48].\n\nUtilization outcomes benchmarks have not been established to determine how the percentage of subsequent healthcare use in this study compares to outcomes using other health services. Prior studies suggest physical therapy is associated with reduced incidence of additional healthcare use compared to not using physical therapy in patients with acute low back pain [10, 49]. Some", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed5.pdf" - }, - { - "text": "gave them advice for follow-up. Some participants said that when the physiotherapist conducted the exercises or ran/walked together with them, it made them increase their exercise intensity. 
One participant described this as follows:\n\nThe physiotherapists pushed me to perform beyond what I thought I was able to -and that was great! There is no doubt that if someone is running beside you and shouting ' come onwell done ' , you manage to push yourself further. (ID8, EDSS: 2)\n\nHowever, one participant described an incident where the interaction with the physiotherapists was not perceived as helpful:\n\nWhen I get tired, it gets dif /uniFB01 cult. I can only do one thing at a time, and then these physiotherapists came running, talking and trying to motivate at the same time. I got very tired, and my leg would not follow my commands to run. (ID7, EDSS: 3.5)\n\nParticipants reported that they appreciated that the physiotherapists made them engage in playful activities with a ball, run for beanbags, and sing and in general created an informal and nice atmosphere. The enjoyment created was described as important for adherence to the intervention and as encouraging participants ' physical effort during the session, as exercise felt easier when it was enjoyable. It was appreciated that the physiotherapists were perceived as both cheerful and serious about the intervention.\n\n## 4 Discussion\n\nThe main /uniFB01 ndings of this study are that (1) being supported to explore and push one ' s own physical capabilities by combining high-intensity running/walking with detailed exercises was meaningful and evoked strong emotions. Improving one ' s balance, walking, and running lead to increased beliefs in one ' s own possibilities. Some negative experiences were also described, particularly from the highintensity training. (2) An engaging outdoor group with tailored physiotherapist-participant interactions and the co-creation of enjoyment was perceived to be important for the success of the individual. 
These /uniFB01 ndings illustrate how the dynamic intertwining of the body and movement, context and intersubjective interactions create meaning and beliefs in one ' s own physical capabilities (19).\n\n## 4.1 Bodily experiences are inherent to beliefs in the mastery of physical activity\n\nThe meaningfulness of exploring the limits of training intensity that we identi /uniFB01 ed in our study corresponds with other studies of pwMS ' s experiences of interventions addressing intensity of activity (31, 32). The exercises emphasizing trunk control were reported to reduce movement impairments and are in line with a study of pwMS with higher\n\ndisabilities participating in an indoor group intervention (16). However, the perceived interlinking of improved sensorimotor functions and the ease of and ef /uniFB01 ciency in high-intensity walking/running have not been reported previously. It is likely that the detailed exercises prompted activations of the CNS and musculoskeletal systems, which are prerequisites for highintensity walking and running (33). Impairments in such systems commonly occur due to CNS lesions or secondary inactivity, and function can improve with increased use (18). Our results support the value of integrating such speci /uniFB01 city to optimize the capability to train at high intensity, even in individuals with low EDSS scores.", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed13.pdf" - }, - { - "text": "## Take-home Points\n\nStudy Question: How profoundly are adults with undiagnosed respiratory symptoms affected by dyspnea?\n\nResults: In community-based adults with undiagnosed respiratory symptoms, those identi /uniFB01 ed with preserved ratio impaired spirometry experienced the greatest impact of dyspnea, followed by those with undiagnosed asthma or COPD. 
Greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\nInterpretation: Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity.\n\nDyspnea refers to a subjective sensation of breathing discomfort. 1 In a study involving a community-based population aged > 70 years, the prevalence of dyspnea was found to be 32%. 2 Dyspnea can lead to limitations in daily activities, reduced exercise tolerance, and heightened mortality risks. 3\n\nDyspnea not only affects individuals with diagnosed respiratory conditions but also poses a signi /uniFB01 cant burden on those with undiagnosed conditions. In a systematic review by Müller et al, 4 the combined\n\n## Study Design and Methods\n\n## Recruitment of Undiagnosed Cases and Healthy\n\nControl Patients\n\nBetween June 2017 and January 2023, adults aged $ 18 years were recruited through a two-step process into the Undiagnosed COPD and Asthma Population (UCAP) study, a multicenter case /uniFB01 nding study. Approval for\n\nABBREVIATIONS: ASQ = Asthma Screening Questionnaire; BD = bronchodilator; CAT = COPD Assessment Test; PCA = principal component analysis; PRISm = preserved ratio impaired spirometry; SGRQ = St. George ' s Respiratory Questionnaire\n\nAFFILIATIONS: From The Ottawa Hospital Research Institute (J. B., E. G., K. L. V., G. G. A., S. M., and S. D. A.), University of Ottawa, Ottawa, ON; the Desautels Faculty of Management (G. A. W.), McGill University, Montreal, QC; the Department of Medicine (C. B.), The University of British Columbia, Vancouver, BC; the Centre de recherche (L.-P. B. and A. C.), Institut de cardiologie et de pneumologie de Québec, Université Laval, Quebec, QC; the Cumming School of Medicine (S. K. F.), University of Calgary, Calgary, AB; the Department of Medicine (E. P.), University of Saskatchewan, Regina, SK; the Firestone Institute for Respiratory Health (R. 
A. M.), McMaster University, Hamilton, ON; the Department of Medicine (C. L.), Université de Montreal, Montreal, QC; the Department of Medicine and the Li Ka Shing Knowledge Institute (S. G.), St. Michael ' s Hospital University of Toronto, Toronto, ON; the Department of Medicine\n\nprevalence of dyspnea in the adult general population across 11 studies was estimated to be 10%. Dyspnea can arise from a broad spectrum of underlying factors, including both respiratory and nonrespiratory conditions. Studies have revealed that dyspnea is not solely attributable to respiratory conditions but is also heavily in /uniFB02 uenced by cardiovascular deconditioning and by nonrespiratory factors, including psychosocial, social, and environmental determinants. 5,6\n\nDyspnea is a prevalent symptom with consequences that extend beyond its physiologic implications. A study in European patients with COPD explored the burden of dyspnea and identi /uniFB01 ed potential correlates. The study revealed that higher dyspnea impact correlated with lower health-related quality of life, increased work impairment, and a higher frequency of emergency department visits. 7", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "## Risk Factors Associated With Dyspnea\n\nPatient-related risk factors were considered /uniFB01 rst, and results of spirometry considered afterward. The spirometry risk factors chosen for the second stage analysis included the spirometry-based diagnosis of the patient (asthma, COPD, PRISm, or normal) and lung function results indicative of the severity of physiologic impairment. 
Severity was gauged by assessing three principal lung function measures: (1) post-BD FEV1 % predicted, (2) post-BD FEV1/FVC ratio, and (3) percentage reversal of FEV1 with BD.\n\n## Dyspnea Impact and Health Care Use, Quality of Life, and Work Productivity\n\nThe impact of dyspnea and its associations with health care use, quality of life, and work productivity were examined. Health care utilization was assessed through selfreported data. Quality of life was assessed using the 36Item Short Form Health Survey questionnaire, where higher scores indicate better health status. Work productivity was assessed using the Work Productivity and Activity Impairment questionnaire, where higher scores\n\n## Results\n\nFigure 1 illustrates the results of the case /uniFB01 nding approach, including the enrollment of the control group. Among 5,631 potentially eligible participants, 1,359\n\nindicate greater impairment in work productivity and daily activities.\n\n## Statistical Analysis\n\nBox plots were used to compare distribution patterns of dyspnea impact assessments among the disease groups. Pairwise comparison tests were conducted to evaluate mean dyspnea differences between groups. Multiple linear regression analysis was used to measure contributions to variability of dyspnea by selected patient-speci /uniFB01 c risk factors, spirometry disease classi /uniFB01 cation, and key lung function measures. The selected sets of risk factors were evaluated using successive regression analyses. Analysis of variance sums of squares from the successive regression analyses provided the cumulative percentage contributions to variability of dyspnea. Simple, multiple, and logistic regression analyses were used to study associations between dyspnea and health care utilization, quality of life, and work productivity outcomes. 
All statistical analyses were done using STATA 16 statistical software (StataCorp).\n\nparticipants (24%) did not meet the threshold of $ 6 points on the ASQ or $ 20 points on the COPDDiagnostic Questionnaire and were thus excluded, leaving 4,272 individuals deemed eligible for spirometry.\n\nFigure 1 -Study /uniFB02 ow diagram demonstrating the case /uniFB01 nding and control group recruitment and allocation. ASQ ¼ Asthma Screening Questionnaire; COPD-DQ ¼ COPD Diagnostic Questionnaire; CF ¼ cystic /uniFB01 brosis; MI ¼ myocardial infarction; PRISM ¼ preserved ratio impaired spirometry.\n\n", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed6_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed13.pdf", - "query": "What were the prerequisites allowing to be involved in the study concerning the impact of outdoor sport on patients witg multiple sclerosis ?", - "target_page": 4, - "target_passage": "The inclusion criteria were as follows: had been diagnosed with MS, had a score on the Expanded Disability Status Scale (EDSS) (29) of ≤3.5, was ≥18 years, was employed (10%–100% of full-time) and residential address in the two predefined municipalities", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "community healthcare in the two municipalities. The project team included three individuals representing users from the Nordland MS Association, along with an MS nurse and a neurologist from the MS-outpatient clinic, and three physiotherapists/ researchers.\n\n## 2.4 Research team and re /uniFB02 exivity\n\nAll researchers on the team are clinical specialists in neurological physiotherapy. 
BN and ECA developed the CoreDISTparticipation intervention, and SSHD contributed to the development of the outdoor part.\n\nThe researchers ' closeness to the intervention and the clinical /uniFB01 eld may have strengthened the depth and relevance of their interpretations in this study (27), as it was easy to understand what participants described and helped form follow-up questions during the interviews. However, closeness may also produce a risk of ' blind spots ' , as the researchers may prejudice participants ' experiences, omitting questions where the answers are believed to be obvious (27). Thus, throughout the process, trustworthiness and rigor were enhanced by discussing the methodology, /uniFB01 ndings, and interpretations with external researchers (including specialists in enactive theory), as well as user representatives. The presented theoretical framework (enactive theory) enhanced the distance to the material, as recommended in qualitative research (28).\n\n## 2.5 Recruitment and participants\n\nPrior to recruitment, the study was introduced to individuals with multiple sclerosis (pwMS) through a seminar hosted by the Nordland MS Association. Additionally, seminars were conducted for health professionals in community healthcare and at the regional hospital. Written information about this study (and the RCT) was sent from the MS clinic at the regional hospital by post to all eligible individuals af /uniFB01 liated with the hospital. Individuals who wished to participate signed the attached consent form and returned it in the pre-stamped envelope. The inclusion criteria were as follows: had been diagnosed with MS, had a score on the Expanded Disability Status Scale (EDSS) (29)of ≤ 3.5, was ≥ 18 years, was employed (10% -100% of full-time) and residential address in the two prede /uniFB01 ned municipalities. 
The exclusion criteria were as follows: pregnancy, exacerbation of symptoms within two weeks prior to enrollment and other serious conditions compromising balance, walking or work capacity. All participants in the intervention group of the RCT ( n = 15) were included (Table 3).\n\n## 2.6 Data collection\n\nThe interview guide (Table 4) was developed based on literature reviews, clinical experience and discussions within the research group and with user representatives. Two test interviews were\n\nTABLE 3 Participant demographic information.TABLE 4 Interview guide.\n\n| Variable | Total ( n =15) |\n|------------------------------------|-----------------------------------------------|\n| Age in years | Mean 47.6 (SD 6.04) |\n| Gender (women/men) | 12 woman/3 men (80%/20%) |\n| Type of MS | Relapsing remitting 15 (100%) |\n| EDSS | Mean 1.8 (SD 0.9) |\n| Years since diagnosis | Mean 10.4 (SD 7.8) |\n| Participation in the outdoor group | Mean 4.6 sessions/total mean attendance 57.3% |", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed13.pdf" - }, - { - "text": "institutional requirements. The participants provided their written informed consent to participate in this study.\n\n## Author contributions\n\nSD: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Project administration, Resources, Visualization, Writing -original draft, Writing -review & editing. EA: Conceptualization, Formal Analysis, Methodology, Supervision, Writing -review & editing. BN: Conceptualization, Formal Analysis, Funding acquisition, Methodology, Project administration, Resources, Supervision, Writing -review & editing.\n\n## Funding\n\nThe author(s) declare that /uniFB01 nancial support was received for the research, authorship, and/or publication of this article.\n\nThe development of the CoreDISTparticipation and the RCT is funded by the Northern Norway Health Authority (Helse Nord RHF). 
This interview study was funded by Nord University (PhD salary).\n\n## References\n\n- 1. Walton C, King R, Rechtman L, Kaye W, Leray E, Marrie RA, et al. Rising prevalence of multiple sclerosis worldwide: insights from the Atlas of MS, third edition. Mult Scler . (2020) 26(14):1816 -21. doi: 10.1177/1352458520970841\n- 2. Casey B, Coote S, Galvin R, Donnelly A. Objective physical activity levels in people with multiple sclerosis: meta-analysis. Scand J Med Sci Sports . (2018) 28 (9):1960 -9. doi: 10.1111/sms.13214\n- 3. Kinnett-Hopkins D, Adamson B, Rougeau K, Motl RW. People with MS are less physically active than healthy controls but as active as those with other chronic diseases: an updated meta-analysis. Mult Scler Relat Disord . (2017) 13:38 -43. doi: 10.1016/j.msard.2017.01.016\n- 4. Hoang PD, Lord S, Gandevia S, Menant J. Exercise and sports science Australia (ESSA) position statement on exercise for people with mild to moderate multiple sclerosis. J Sci Med Sport . (2022) 25(2):146 -54. doi: 10.1016/j.jsams.2021.08.015\n- 5. Dalgas U, Langeskov-Christensen M, Stenager E, Riemenschneider M, Hvid LG. Exercise as medicine in multiple sclerosis -time for a paradigm shift: preventive, symptomatic, and disease-modifying aspects and perspectives. Curr Neurol Neurosci Rep . (2019) 19(11):1 -12. doi: 10.1007/s11910-019-1002-3\n- 6. Riemenschneider M, Hvid LG, Ringgaard S, Nygaard MKE, Eskildsen SF, Gaemelke T, et al. Investigating the potential disease-modifying and neuroprotective ef /uniFB01 cacy of exercise therapy early in the disease course of multiple sclerosis: the early multiple sclerosis exercise study (EMSES). Mult Scler . (2022) 28(10):1620 -9. doi: 10. 1177/13524585221079200\n- 7. Kalb R, Brown TR, Coote S, Costello K, Dalgas U, Garmon E, et al. Exercise and lifestyle physical activity recommendations for people with multiple sclerosis throughout the disease course. Mult Scler . (2020) 26(12):1459 -69. doi: 10.1177/ 1352458520915629\n- 8. 
Moreno-Navarro P, Manca A, Martinez G, Ventura L, Barbado D, Vera-García FJ, et al. Test-retest reliability and known-groups validity of trunk muscle tests in people with multiple sclerosis: a cross-sectional, case-control study. Phys Ther. (2021) 101(5):1-9. doi: 10.1093/ptj/ptzab049\n- 9. Raats J, Arntzen EC, Lamers I, Feys P, Normann B. What is the distribution of trunk impairments and its relationship with disability level in individuals with multiple sclerosis? Mult Scler Relat Disord. (2021) 57:103325. doi: 10.1016/j.msard.2021.103325\n- 10. Normann B, Arntzen EC. What are the relationships between trunk control, balance and walking in individuals with multiple sclerosis with minor to moderate disability? Eur J Physiother. (2021) 23(6):377-83. doi: 10.1080/21679169.2020.1772870\n\n## Acknowledgments\n\nThe authors would like to thank the participants in this study and the user representatives from Nordland MS Association for their valuable contributions. The authors also acknowledge philosopher of the mind and cognitive sciences Hanne De Jaegher for the valuable comments on the interpretations and discussions of the results.\n\n## Conflict of interest", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed13.pdf" - }, - { - "text": "It was an added positive experience to use our city park and notice all the other people who were there … it is something about challenging our comfort-zone. (ID4, EDSS: 0)\n\nThe natural environment was also described as taking focus away from MS symptoms. Cold, rainy or snowy weather conditions required planning of adequate clothing; in addition, these conditions led some participants to use cautious behavior when the ground was slippery and led a few to omit sessions.
However, mastering outdoor exercise was highlighted in positive terms, such as discovering new ways to become active.\n\n## 3.4 Professional leadership, tailoring and co-creation of enjoyment\n\nThe way the physiotherapists led the group and, in particular, interacted with each participant was regarded as helpful for improving their bodily functions and activity levels. Some participants reported being afraid to try out new activities or training at high intensities after being diagnosed with MS but felt safe to explore when supervised by the physiotherapist because of their trust in the relationship between them and in the physiotherapist's professional knowledge.\n\nHow the physiotherapist approached the participants individually was described as important from this perspective. In particular, bodily interactions in which the physiotherapist demonstrated with his or her own body or placed his or her hands on the participant's body to correct a movement were reported to be successful, as it helped to increase speed and gave participants a sense of performing better or for a longer duration. If they did an exercise in a suboptimal way, participants reported receiving precise supervision, or if they expressed pain or were injured, the physiotherapist was supportive, assessed them and", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed13.pdf" - }, - { - "text": "- 39. Silveira SL, Cederberg KLJ, Jeng B, Sikes EM, Sandroff BM, Jones CD, et al. Do physical activity and social cognitive theory variable scores differ across symptom cluster severity groups in multiple sclerosis? Disabil Health J. (2021) 14(4):101163. doi: 10.1016/j.dhjo.2021.101163\n - 40. Learmonth YC, Motl RW. Exercise training for multiple sclerosis: a narrative review of history, benefits, safety, guidelines, and promotion. Int J Environ Res Public Health. (2021) 18(24):13245. doi: 10.3390/ijerph182413245\n - 41. Baird JF, Motl RW.
Response heterogeneity with exercise training and physical activity interventions among persons with multiple sclerosis. Neurorehabil Neural Repair. (2019) 33(1):3-14. doi: 10.1177/1545968318818904\n - 42. Sandroff BM, Baird JF, Silveira SL, Motl RW. Response heterogeneity in fitness, mobility and cognition with exercise-training in MS. Acta Neurol Scand. (2019) 139(2):183-91. doi: 10.1111/ane.13041\n - 43. Lahelle AF, Øberg GK, Normann B. Group dynamics in a group-based, individualized physiotherapy intervention for people with multiple sclerosis: a qualitative study. Physiother Res Int. (2019) 25(3):e1829. doi: 10.1002/pri.1829\n - 44. Normann B. Facilitation of movement: new perspectives provide expanded insights to guide clinical practice. Physiother Theory Pract. (2020) 36(7):769-78. doi: 10.1080/09593985.2018.1493165\n - 45. Øberg GK, Normann B, Gallagher S. Embodied-enactive clinical reasoning in physical therapy. Physiother Theory Pract. (2015) 31(4):244-52. doi: 10.3109/09593985.2014.1002873\n - 46. Anens E, Zetterberg L, Urell C, Emtner M, Hellström K. Self-reported physical activity correlates in Swedish adults with multiple sclerosis: a cross-sectional study. BMC Neurol. (2017) 17(1):204. doi: 10.1186/s12883-017-0981-4\n - 47. Herring TE, Knowles LM, Alschuler KN. Outdoor adventure programs for persons with multiple sclerosis: a review and agenda for future research. Int J MS Care. (2021) 23(4):186-92. doi: 10.7224/1537-2073.2020-066\n - 48. Creswell JW, Poth CN. Qualitative Inquiry & Research Design: Choosing Among Five Approaches. 4th ed.
California: Sage (2018).", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed13.pdf" - }, - { - "text": "## 4.4 Implications for clinical practice\n\nCombining high-intensity walking/running and detailed sensorimotor exercises was valued and provided meaningful embodied experiences, improving participants' ability to master PA and their beliefs of their own possibilities for being active in the future. However, the manner in which the content of an exercise intervention is delivered and the environment in which it is delivered should be accounted for, as these aspects were perceived to be of great importance in creating and shaping participants' experiences. In particular, tailored physiotherapist-participant bodily interactions and an engaging group and outdoor environment were perceived to be pertinent for exploring one's own potential.\n\nTo minimize negative incidents in future interventions, we suggest that (1) the effort required from one's leg muscles during the detailed exercises (in between the running/walking intervals) should be low to minimize the negative consequences of leg muscle fatigue prior to high-intensity running/walking, (2) the capacity for running/walking at high intensity should be explored in one-to-one physiotherapy assessment prior to group training to optimize individuals' capabilities and safety, and (3) homogenous and small-sized groups should be used to enable ongoing and tailored physiotherapist-participant interactions.\n\n## Data availability statement\n\nThe datasets presented in this article are not readily available because of ethical and legal restrictions. Requests to access the datasets should be directed to stine.s.dahl@nord.no.\n\n## Ethics statement\n\nThis study involving humans was approved by Regional Committee for Medical Research Ethics in North Norway (REK North: 174,837) and the Data Protection Officer at Nordlandssykehuset Hospital Trust, Norway.
This study was conducted in accordance with the local legislation and", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed13.pdf" - }, - { - "text": "given the heterogenic pathology and symptoms of MS (41, 42). However, our findings illuminate qualitative aspects of how to achieve tailored and meaningful intersubjective interactions in an exercise intervention.\n\nWe consider the instances of the physiotherapist running together with the participant, which were perceived as important for participants' performance, to be an example of 'participatory sense-making' (22). As participants appreciated being guided or even pushed by the physiotherapists, it appears that the physiotherapists were trusted in directing this interaction. As such, we argue that the physiotherapists' ability to adapt to participants' movements, speech and gestures - tailoring the interaction to their needs - was important for this ability to be perceived as purposeful. This is supported by the few negative incidents described where the participant-physiotherapist interaction seemed to not be jointly coordinated and appeared to fail. The reported mutual influences of sensorimotor capabilities and interpersonal coordination, with the physiotherapists but also the group, are in accordance with sensorimotor capacities and intersubjective interactions being important for sense-making in the world (35). The benefits of these individualized participant-physiotherapist interactions are also described in specific core-stability exercises in indoor groups (16, 43) and are in line with the theoretical framework of facilitation of movement through hands-on interaction previously proposed (44, 45).
Our study informs new knowledge of physiotherapist-participant interactions to achieve the recommended high-intensity training and calls for physiotherapy clinical reasoning through bodily and verbal communication skills adapted to the participants' responses in an ongoing and situated way.\n\nEnjoyment has previously been reported to promote PA in pwMS, and our study brings requested knowledge of what can constitute enjoyment in an exercise intervention (46): playful group-exercise tasks, a cheerful physiotherapist, and the outdoor environment.\n\nThe appreciation of being active outdoors in the study sample aligns with that in the general population (47). The outdoors provided a natural environment, which both invited participants to actively explore abilities thought of as left behind after their diagnosis with MS, such as running, and provided an appreciated break from focusing on MS symptoms. We also suggest that the positive experiences of mastering the challenging weather conditions and the added meaning of exercising among other people in the city park can be explained according to such terms. These positive experiences show how we are enmeshed in our history, context and social encounters (35) and how these aspects should also be accounted for when designing exercise interventions.\n\n## 4.3 Methodological considerations\n\nThe design and methods were adequate for deriving knowledge from individuals' experiences. The participants self-referred to the intervention and were recruited based on pre-set criteria. This approach yielded rich information from people with mild to moderate disabilities due to MS who were\n\nmotivated for physical activity (PA), employed, and residing in northern Norway. Ethnicity or socio-economic class were not recorded. However, considering that all these factors can influence PA engagement (46), it is possible that additional aspects of the phenomenon could be uncovered in a different sample (48).
There was a higher percentage of women participating than men; however, this corresponds to the gender distribution in the MS population (1).\n\nThe use of enactive theory was innovative within the field and allowed for, in particular, new aspects of importance for self-efficacy to be identified. Transference of our results to similar populations can be achieved through theoretical generalization (28).\n\n## 4.4 Implications for clinical practice", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed13.pdf" - }, - { - "text": "## Methods\n\n## Dataset and patient population\n\nThis study used data from the Orthopedic Physical Therapy Investigative Network's (OPT-IN) Optimal Screening for Prediction of Referral and Outcome (OSPRO) validation cohort study, a longitudinal prospective study of individuals with knee, shoulder, back or neck pain seeking Physical Therapy in the US. A convenience sample was recruited between December 2014 and December 2015 by participating OPT-IN clinics. The OPT-IN clinics that participated in data collection represented multiple geographic regions in the US including the Mideast, Southeast, Great Lakes, Rocky Mountain States and Far West, with an attempt to balance recruitment between urban and rural settings over the entire OPT-IN network. Physical therapists practicing in these clinics identified eligible participants at initial evaluation and directed them to a secure study website for the informed consent process and baseline self-report assessment. Eligibility criteria have been thoroughly reported elsewhere [19] and were intentionally broad to develop a cohort that was generalizable to those seeking physical therapy for common musculoskeletal conditions in the US. Participants completed follow-up self-reported assessments on the study website at 4 weeks, 6 months and 12 months.
Participants were notified of a pending assessment by an email that directed them back to the study website to complete their follow-up assessment. For additional details of the dataset and cohort, readers are directed to the published cohort profile [19].\n\nThe primary aim of the OSPRO cohort study was to develop and validate review of systems (i.e. evidence of systemic involvement) and yellow flag (i.e. pain-related psychological distress) screening tools for use in outpatient orthopedic physical therapy settings. These screening tools, once validated and refined for clinical decision making, may improve the value of care delivery by accurately identifying individuals who 1) are appropriate for referral to other providers for management of non-musculoskeletal symptoms, and/or 2) would benefit from enhanced, psychologically-informed physical therapy. Early identification of individuals most appropriate for these modified pathways of care has the potential to reduce wasteful downstream health care utilization, limit the risk of unwarranted and costly care escalation, and improve clinical outcomes. Results of the primary analyses examining the predictive ability of the OSPRO tools for pain, disability, health status, and comorbidity outcomes have been previously published [20]. Pre-planned secondary analyses included prediction of persistent pain state [21] and this current analysis predicting future healthcare utilization. All subjects consented to participation in the study and ethics approval was granted by the University of Florida Institutional Review Board.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed5.pdf" - }, - { - "text": "gave them advice for follow-up. Some participants said that when the physiotherapist conducted the exercises or ran/walked together with them, it made them increase their exercise intensity. 
One participant described this as follows:\n\nThe physiotherapists pushed me to perform beyond what I thought I was able to - and that was great! There is no doubt that if someone is running beside you and shouting 'come on - well done', you manage to push yourself further. (ID8, EDSS: 2)\n\nHowever, one participant described an incident where the interaction with the physiotherapists was not perceived as helpful:\n\nWhen I get tired, it gets difficult. I can only do one thing at a time, and then these physiotherapists came running, talking and trying to motivate at the same time. I got very tired, and my leg would not follow my commands to run. (ID7, EDSS: 3.5)\n\nParticipants reported that they appreciated that the physiotherapists made them engage in playful activities with a ball, run for beanbags, and sing and in general created an informal and nice atmosphere. The enjoyment created was described as important for adherence to the intervention and as encouraging participants' physical effort during the session, as exercise felt easier when it was enjoyable. It was appreciated that the physiotherapists were perceived as both cheerful and serious about the intervention.\n\n## 4 Discussion\n\nThe main findings of this study are that (1) being supported to explore and push one's own physical capabilities by combining high-intensity running/walking with detailed exercises was meaningful and evoked strong emotions. Improving one's balance, walking, and running led to increased beliefs in one's own possibilities. Some negative experiences were also described, particularly from the high-intensity training. (2) An engaging outdoor group with tailored physiotherapist-participant interactions and the co-creation of enjoyment was perceived to be important for the success of the individual.
These findings illustrate how the dynamic intertwining of the body and movement, context and intersubjective interactions create meaning and beliefs in one's own physical capabilities (19).\n\n## 4.1 Bodily experiences are inherent to beliefs in the mastery of physical activity\n\nThe meaningfulness of exploring the limits of training intensity that we identified in our study corresponds with other studies of pwMS's experiences of interventions addressing intensity of activity (31, 32). The exercises emphasizing trunk control were reported to reduce movement impairments and are in line with a study of pwMS with higher\n\ndisabilities participating in an indoor group intervention (16). However, the perceived interlinking of improved sensorimotor functions and the ease of and efficiency in high-intensity walking/running have not been reported previously. It is likely that the detailed exercises prompted activations of the CNS and musculoskeletal systems, which are prerequisites for high-intensity walking and running (33). Impairments in such systems commonly occur due to CNS lesions or secondary inactivity, and function can improve with increased use (18). Our results support the value of integrating such specificity to optimize the capability to train at high intensity, even in individuals with low EDSS scores.", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed13.pdf" - }, - { - "text": "However, explained variance estimates in our models ranged from 34 to 61%, suggesting further research is necessary to identify additional factors contributing to healthcare utilization following physical therapy.\n\nThe primary limitation of the study is the high number of subjects lost to follow-up. We attempted to account for the bias introduced by loss to follow-up in our models with IPAW, which is a robust strategy for conducting analyses with missing data [41, 51].
We observed good concordance between results of complete case and weighted analyses, giving us confidence in our findings. However, important differences in age, race, education, symptom onset, baseline pain intensity, and baseline pain-related psychological distress were noted between those who did and did not complete follow-up. These differences suggest that the group lost to follow-up may represent a unique population to whom these results may not apply. Different factors may predict utilization outcomes for this unique population. As a result, readers should exercise caution when extending these findings to individuals and populations that substantially differ from the analytic sample in this study. Specifically, these predictive models may need to be adjusted for younger individuals of non-white race, with lower education levels, sudden onset of symptoms, and those with higher pain intensity and pain-associated distress.\n\nA second limitation is that we did not know about the subjects' prior experiences with physical therapy, or whether they arrived at physical therapy through direct access or referral from another provider. These factors could be associated with treatment expectations, which have known effects on treatment outcomes [52, 53]. We also did not collect specific information on treatment. But by including changes in pain, disability, and pain-related psychological distress in the models, we were able to account for treatment response. The benefit of this approach is that models are generalizable for predicting utilization outcomes across 'real-world' pragmatic physical therapy settings where treatment variation is expected. The drawback is that we are prohibited from making conclusions regarding which characteristics of the clinical encounter might influence subsequent pain-related healthcare utilization.
Important characteristics to consider would include number of visits, type of interventions or whether patients completed their course of physical therapy. These have been proposed or identified as important contributors to downstream costs following physical therapy [54, 55] and may be a source of unexplained variance in our models. Characteristics of the clinical encounter should be considered in future studies to refine the prediction models developed in our analyses.\n\nThird, we were unable to adequately model the specific effects of worker's compensation, self-pay and some\n\ncommercial insurance coverage on utilization due to the low incidence of these forms of payment in our study sample. Modeling these separately would have created the potential for unreliable and imprecise effect estimates. Readers should consider the within-group heterogeneity caused by this approach and exercise caution when applying these results to individuals who do not have traditional public or private insurance coverage. Future studies should investigate the performance of the OSPRO tools in predicting outcomes for patients with Worker's Compensation.\n\nA final limitation is the use of patient recall to measure utilization. To mitigate recall bias, we used two follow-up points, at 6 and 12 months. However, under- or over-reporting of utilization is often a concern with studies requiring subject recall [56-58]. Medical record and claims data were not available for these subjects. Readers should consider our inability to independently confirm utilization when interpreting results.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed5.pdf" - }, - { - "text": "additional healthcare use is expected following physical therapy, especially among individuals that are on long-term pain management pathways due to chronic or persistent symptoms.
Yet with over 40% reporting subsequent pain-related healthcare among those completing follow-up, it is apparent that opportunities exist to improve pathway selection and/or the effectiveness of physical therapy for individuals with musculoskeletal pain. This finding is particularly notable given recent efforts to define physical therapy as an effective first line, non-pharmacological treatment option against more invasive or higher risk services, such as surgery or opioid use, respectively. Predictive variables identified in this analysis can be used to develop risk models that better inform pathway selection for those seeking physical therapy for musculoskeletal pain. The precise application of these risk models, and how they inform policy and practice should be the target of future study. However, physical therapy re-design might incorporate enhanced treatment monitoring to assess ongoing risk for downstream utilization, as well as physical therapist-led interventions to more thoroughly address important modifiable factors such as pain intensity, disability and pain-related psychological distress [38]. Improved pathway selection might entail the consideration of referral to or co-treatment with other providers to more adequately address non-modifiable characteristics. Collectively, these approaches could improve the value of physical therapy by minimizing risk for high downstream healthcare utilization and potentially unwarranted escalation of care.\n\nThe primary strength of the study is longitudinal follow-up at multiple time points following an episode of physical therapy for a variety of musculoskeletal pain conditions. Anatomical location of pain was not a significant predictor of healthcare use in all but one model, suggesting results are widely applicable across a spectrum of musculoskeletal pain conditions. Another strength of this cohort study is the assessment of various healthcare utilization outcomes of interest for establishing health policy. 
When considered alongside more traditional pain- or disability-related outcomes prediction models, these findings will improve the ability of healthcare systems and providers to make decisions in value-based purchasing environments. The consideration of multiple screening tools (i.e. yellow flags and review of systems) and treatment monitoring variables is also a strength of this study as screening and systematic treatment monitoring are not routine in clinical practice. A final strength is inclusion of multiple sociodemographic, health-related and psychosocial factors as potential predictors. Healthcare outcomes and utilization exhibit emergent properties that require the consideration of multiple, competing factors to fully explain [50].", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed5.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_STO_2004.pdf", - "query": "What was the sales revenue of Santos in 2004 ?", - "target_page": 12, - "target_passage": " Sales revenue was a record $1,501 million", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "## MALEO NEGOTIATIONS ADVANCED\n\nOutside Australia, Santos and its co-venturers have executed a Heads of Agreement for the sale of the entire gas reserves of the Maleo field offshore East Java, Indonesia. Santos continued negotiations with PT Perusahaan Gas Negara, Indonesia's stateowned gas distributor, on behalf of the joint venture to finalise the Gas Sales Agreement. The project is targeting first production in the first half of 2006 at rates of up to 100 mmcf/d for more than five years.\n\n## FIRST RETAIL GAS SALES WITH SANTOS DIRECT\n\nAs well as selling gas into the wholesale gas market, Santos secured a retail gas licence from the Victorian Government in 2004. 
This allows Santos to sell gas direct to industrial customers and into the Victorian spot market through a wholly-owned\n\nsubsidiary, Santos Direct Pty Ltd ('Santos Direct').\n\nSantos Direct will market Santos' 10% share of gas production from the Minerva field - around 15 TJ/d - in the offshore Otway Basin, which commenced production at the end of 2004.\n\nThe move to market and sell gas directly into the Victorian retail market is a first for Santos and leverages off Santos' position as one of Australia's largest gas producers, supplying wholesale gas to major industrial customers and specialist marketers in all mainland Australian states and territories.\n\n## LIQUIDS MARKETING ALLIANCE WITH BP\n\nAnother important marketing development during the year was the decision to outsource the marketing of crude oil and natural gas liquids to BP. The new marketing arrangements are in response to the significantly\n\nhigher volumes of crude oil that Santos will receive from the Mutineer-Exeter and Oyong projects, coming on stream in 2005, and the increasing globalisation of the liquids marketplace.\n\nThe validity of this approach has already been demonstrated by the sale of the first Mutineer-Exeter oil cargo at a premium to Tapis despite a discount for the uncertain delivery date.\n\nSantos continues to build an inventory of high quality options to provide a platform for production growth over the coming years. Santos is committed to a program of diversification while capitalising on the long-term Cooper Basin legacy asset. Most importantly, this involves leveraging the strengths of the core competencies built up over a number of years and Santos' well-positioned domestic gas franchise.\n\n\n\n\n\n'During 2004 we brought together everyone at Santos responsible for commercialisation into a single team. 
One of the outcomes from this was the introduction of gas swaps, where we were able to move gas between Santos assets in different states.'\n\n## RICK WILKINSON\n\nVice President Gas Marketing and Commercialisation\n\nThe alignment of joint venture interests in the John Brookes and East Spar fields has created an important production hub at Varanus Island, Carnarvon Basin, offshore Western Australia.", - "page_start": 22, - "page_end": 22, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## ENHANCING THE PORTFOLIO\n\nIn 2004, Santos continued its normal business of actively managing its portfolio through the divestment of non-core assets and the acquisition of assets that fit well with existing Santos assets or can add to the ability of the Company to meet its strategic goals.\n\nAs a result of this activity, Santos realised an after-tax profit of $47.4 million on oil and gas asset sales and will continue to high-grade its portfolio on an ongoing basis.\n\nSantos entered into an agreement with PT Medco during the first half of 2004 to acquire some of Novus Petroleum's Indonesian and Cooper Basin assets conditional on the success of PT Medco's takeover offer for Novus, which was ultimately successful.\n\nSpecifically, Santos announced in September 2004 that it had executed formal agreements to acquire an additional 4.75% of the South Australian Cooper Basin, 18% of the Brantas PSC and 9% of the Kakap PSC from Medco for US$110 million. 
On 31 December 2004, Santos paid Medco US$98 million for the majority of the assets, with payment for the remaining 2.75% of Kakap PSC expected to be made in the first quarter of 2005.\n\nThis acquisition was an important piece in the strategic puzzle to tie up access to follow-up potential from the successful exploration at Jeruk and to provide a production base for the newly established Indonesian core area.\n\nAlso during the first half of 2004, Santos divested its remaining 18.4% shareholding in Magellan\n\nPetroleum Australia Ltd, raising approximately $10.6 million.\n\nEarly in the second half of 2004, Santos concluded the sale of its non-core onshore Otway Basin interests to Origin Energy for $25.75 million. This sale resulted in an after-tax profit of $18 million that was booked in 2004.\n\nIn addition, an exploration joint venture was formed with ConocoPhillips in the NT/P61 block offshore Darwin, Northern Territory, to drill the Caldita well and provide Santos with access rights to a potential expansion of the Wickham Point LNG facility. This deal further enhances Santos' infrastructure strategy to leverage its position within vital infrastructure to improve shareholder value while reducing the risk profile of the wildcat exploration program.\n\nDuring the third quarter, Santos expanded its offshore Victorian gas interests to 50% in both the Patricia-Baleen and the Sole gas fields through the acquisition from Trinity Gas Resources of an additional 30% interest in the Patricia-Baleen gas field and associated processing facilities in eastern Victoria and an additional 15% interest in the Sole gas field.\n\nSantos earned its 30% additional equity in the Patricia-Baleen gas field by meeting Trinity's remaining share of drilling costs on the Baleen 4 well which was drilled successfully as a sidetrack well of Baleen 3. 
Santos will earn its 15% additional equity in the Sole gas field by meeting certain development costs on behalf of Trinity, if and when the Sole joint venture partners proceed to develop this gas resource.\n\nThe acquisition of these Victorian gas interests strengthens Santos' domestic gas and infrastructure strategy that was further enhanced by the OMV purchase announced early in 2005. Importantly, Santos is now the operator of the strategic Orbost gas processing facility.\n\nLate in the year, Santos sold its 18.02% share in the Carpentaria Gas Pipeline between Ballera and Mount Isa in Queensland to Australian Pipeline Trust for $59 million, resulting in a $21 million after-tax profit that was booked in the 2004 financial year.\n\n## BRANTAS PSC\n\n", - "page_start": 24, - "page_end": 24, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "\n\nSantos employees rehabilitating a section of the River Torrens in Adelaide, as part of Santos' three-year commitment to the Our Patch project.\n\nof opportunities to use fewer greenhouse-emitting or renewable sources of energy.\n\nTo achieve these commitments Santos is actively pursuing an emissions intensity reduction target (greenhouse emissions per unit of production) of 20% in the period from 2002 to 2008.\n\n## SUPPORTING COMMUNITIES\n\nSantos has relationships with a number of communities where it operates. Some have been longterm and others are just beginning. 
Relationships with communities outside Australia, such as Indonesia and the United States, are also emerging as Santos' business grows in these locations.\n\nSantos made contributions during 2004 to a wide variety of organisations and events through the sponsorship program as part of the Company's commitment to supporting the communities to which it belongs.\n\nPartnerships continued in 2004 with the Australian School of Petroleum, the Adelaide Symphony Orchestra, the State Opera Company of South Australia, the Art Gallery of South Australia and the Lloyd McDermott Foundation.\n\nOne of the highlights of the 2004 program was the establishment of the Santos Community Fund. It brings together all of the contributions Santos makes to community-based organisations and recognises and supports the efforts of Santos employees who choose to contribute their own time and resources to improving their communities.\n\nThe 'Our Patch' program was a recipient of this fund in 2004. This is a joint initiative of the Patawalonga and Torrens Catchment Management Boards which encourages the local community to assist with the rehabilitation and management of Adelaide's water catchment.\n\nSantos has adopted a patch of the River Torrens and employees are assisting with the remediation and revegetation of this area in a volunteering program.\n\n## CORPORATE GOVERNANCE\n\nFor the third year running, the integrity of Santos' corporate governance was recognised in 2004 with the maximum five-star rating in the Corporate Governance Research Report prepared by Horwath and the University of Newcastle.\n\nA more detailed overview of corporate governance at Santos follows on page 29 of this Annual Report.\n\nMore detailed information about sustainability at Santos is contained in the Sustainability Review and copies are available from the Company and via the Santos website www.santos.com.", - "page_start": 29, - "page_end": 29, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## ANALYSING 
FINANCIAL PERFORMANCE\n\n\n\n'The sound operating results achieved in 2004 underline the changing face of Santos towards a higher value, higher margin business. We ended the year with a strong financial position and our financial flexibility intact.'\n\n## PETER WASOW\n\nChief Financial Officer\n\n## 2004 WAS A YEAR OF GOOD OPERATING RESULTS\n\nOverall the increase in 2004 profit of 16% reflected a year of sound operating performance. Sales revenue was a record $1,501 million, up 2.5% on 2003, reflecting higher prices across most products and was achieved despite lower production as a result of the Moomba incident and declining output from late life fields.\n\nSantos benefited from higher world oil prices and realised US$51.83 per boe in 2004, an increase of 19% over 2003. The benefit of higher world oil prices substantially offset the impact of lower production volumes.\n\nSantos was also able to negotiate higher domestic gas prices (up 4% on average) and deliver new revenue streams from project start-ups and acquisitions during the year.\n\n## PRODUCTION HAMPERED BY MOOMBA INCIDENT\n\n2004 production was lower due to the Moomba incident, which reduced production by 4.6 million\n\nboe. Field decline reduced production by a further 5.0 million boe.\n\nOffsetting these factors, Santos' growth projects are starting to come on line and have begun to reverse the decline experienced over the past three years. Two projects were commissioned in 2004: the Bayu-Undan liquids project and the Minerva gas project. In addition, acquisitions contributed 0.8 million boe to production.\n\nFor 2005, production is expected to improve by around 15%, or 4% excluding the impact of the Moomba incident. Santos now expects production to be around 54 million boe in 2005. 
This increase is largely driven by the commissioning of Mutineer-Exeter in March 2005 and the John Brookes gas field in the middle of the year.\n\n## PRODUCTION COSTS UNDER CONTROL\n\nProduction costs in 2004 were $309 million, up $45 million or 17% on 2003. Analysis shows that Santos was able to continue\n\n## PRODUCTION AND SALES REVENUE\n\n\n\nto effectively control its costs in the face of significant external pressures in the form of rising services and materials prices.\n\nExamining production costs in detail reveals:\n\n - · the start-up of Bayu-Undan and acquisitions added $16 million to Santos' cost base\n - · changes in our accounting added a further $16 million to Santos' production costs\n - · higher insurance premiums ($8 million) and one-off stock write-offs ($5 million) were offset by $17 million in cost savings largely as a result of Santos' continuous improvement initiatives\n - · the Moomba incident resulted in $17 million of one-off costs in 2004.\n\nPiecing this together, the key themes in our financial performance were:\n\n - · cost savings in established production areas more than offset increases in the price of services and materials\n - · Santos' cost base rose as production from new developments and acquisitions were added to the Company's expanding portfolio of producing assets.", - "page_start": 11, - "page_end": 11, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## OPERATING CASH FLOW AND CAPITAL EXPENDITURE\n\n$ million\n\n\n\n## DEPRECIATION, DEPLETION AND AMORTISATION\n\nAll things being equal, DD&A could have been expected to be lower this year, as Santos produced lower volumes and had written off the Heytesbury plant in the onshore Otway Basin last year.\n\nHowever, two factors caused an increase in 2004 DD&A. 
Firstly, while reserve revisions were positive overall, negative revisions were predominantly in producing areas which increased depletion rates in 2004, while positive reserve revisions were in areas where Santos is not yet producing or where straight line depreciation is dominant; for example, Casino and John Brookes.\n\nSecondly, on the future development cost side, depletion is up partly because Santos is starting to factor in higher steel and service company costs into long-term economic models.\n\n## CASH FLOW LOWER\n\nWhile Santos had a strong profit year, this is not fully reflected in cash flows.\n\nThere were large movements in trade debtors between years, reflecting the timing of liftings and the payments for them.\n\nIn addition, Santos has not yet been paid for the insurance claim relating to the Moomba incident. A total of $117 million was recognised in sundry income, which represents an estimate of the amount receivable from insurers for lost revenue, additional costs and replacement plant and equipment. At year end the money was still owed and so is not shown as part of operating cash flow. The final quantification of the claim with insurers is progressing.\n\n## RECORD CAPITAL EXPENDITURE\n\nCapital expenditure ended right on target at $930 million a record year for Santos approaching a level which is double DD&A, reflecting how rapidly the portfolio is changing.\n\nSantos will continue with a high development expenditure in 2005, but expects to spend more in line with cash generation. Exploration spend is estimated to be about $150 million, while development spend is expected to be reduced to $530 million and delineation to $90 million. 
Other capital spending is expected to be reduced to $80 million.\n\nThis results in a total planned capital expenditure for 2005 of approximately $850 million.\n\n## FINANCIAL FLEXIBILITY INTACT\n\nSantos ended the year in a strong financial position with its financial flexibility intact, despite the record development spending.\n\nThe FUELS issue was successful and Santos' gearing increased only marginally, despite the large capital program in 2004.\n\nThis is important in Santos' business as the Company needs to be able to fund exploration success as it occurs, and our development projects are increasing in size.\n\n", - "page_start": 12, - "page_end": 12, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "The fair value per share for shares granted during the year and the consideration received by the Company per share is Market Value (as defined above) less, in the case of General Employee Participation, the discount of 5% referred to above.\n\nThe amounts recognised in the financial statements of the Santos Group and the Company in relation to the Santos Employee Share Purchase Plan during the year were:\n\n| | Consolidated | Consolidated | Santos Ltd | Santos Ltd |\n|-------------------------------|----------------|----------------|---------------|---------------|\n| | 2004 $million | 2003 $million | 2004 $million | 2003 $million |\n| Issued ordinary share capital | 0.9 | 1.0 | 0.9 | 1.0 |\n\nAt 31 December 2004, the total number of shares acquired under the Plan since its commencement was 930,112.", - "page_start": 65, - "page_end": 65, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "Guarantees provided by Santos Ltd for borrowings in respect of controlled entities are disclosed in note 15.\n\nSantos Ltd has provided parent company guarantees in respect of:\n\n - (a) the funding obligations of its subsidiary companies, Santos Timor Sea Pipeline Pty Ltd and Santos Darwin LNG Pty Ltd, relating to the construction of a pipeline from the Bayu-Undan 
Field to Wickham Point in Darwin and the construction of the LNG Plant in Darwin respectively, and has provided a funding commitment letter to these subsidiary companies together with Santos (JPDA 91-12) Pty Ltd. As at 31 December 2004 the expenditure commitments of Santos Timor Sea Pipeline Pty Ltd and Santos Darwin LNG Pty Ltd for the above mentioned projects totalled US$41.3 million (2003: US$107.6 million);\n\n", - "page_start": 84, - "page_end": 84, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## MANAGING FOR SUSTAINABLE GROWTH\n\n\n\n'The publication of our first Sustainability Review in 2004 was a major achievement for Santos. The next steps are to undertake projects to improve our performance - not just in Australia but worldwide - and to accurately collect, verify and report on a range of sustainability data.'\n\n## MARTYN EAMES\n\nVice President Corporate and People\n\n\n\nLate in 2004 Santos published First Steps: Sustainability Review , the Company's first standalone publication on this topic. It describes how Santos is implementing the principles of sustainability in the areas of corporate governance, the environment, social responsibility and economic performance.\n\nThis was a significant milestone for Santos as it represents a starting point for the collection of data and the ongoing measurement of performance in the area of sustainability.\n\nCommunicating with stakeholders is an important activity and the publication of the Sustainability Review is a further extension of Santos' commitment in this regard. Santos applies considerable resources to the communication effort and aims to present information in a clear and concise manner in order to generate a greater understanding of the business by its stakeholders.\n\nSantos has been recognised for its achievements in this area. Santos' 2003 Annual Report was featured as an example of best practice reporting in PricewaterhouseCoopers' Trends in Corporate Reporting 2004 publication. 
Reports from companies worldwide are considered in compiling this publication and they must meet specified criteria. This is the third time a Santos annual report has been featured. Santos was also awarded a 2004 Silver Award for Excellence in Annual Reporting for the 2002 Annual Report by the Australasian Reporting Awards.\n\nReceiving independent recognition for these activities serves as a reference point for Santos' desire to continually improve communication performance.\n\nSantos has been listed as an inaugural member of the Australian SAM Sustainability Index (AuSSI). The AuSSI tracks the performance of around 70 Australian companies that lead their industry in terms of economic, environmental and\n\n## TOTAL RECORDABLE CASE FREQUENCY RATE\n\nTRCFR per millions hours worked\n\n\n\nsocial criteria. The index is calculated daily by Dow Jones Indexes and published in The Australian newspaper.\n\nFollowing is an overview of progress and achievements in the area of sustainability for 2004.\n\n## SAFETY IMPROVING\n\nThe health and safety of employees is of paramount concern to Santos. 
Santos delivered another year of improvement in 2004 and achieved its lowest total recordable case frequency rate of 6.4.\n\nFurther improvements were also made with the implementation of the Environment, Health and Safety Management System standards, with Santos operations undergoing full assessments against standards for the first time.\n\nThe results demonstrated considerable improvement over the baseline assessments conducted in 2003 with steady progress in the implementation of the procedures, processes and tools needed to achieve the requirements of the standards.\n\nProcess safety capability which deals with plant and equipment integrity assurance, design and construction, and maintenance, is being developed through the formation of a new set of standards to be incorporated\n\ninto the health and safety management system.\n\nThe safety focus in 2005 will be on finalising a comprehensive set of hazard standards which outline the required controls to ensure that hazards encountered across Santos' operations and activities are well managed.\n\n## POSITIONING THE WORKFORCE FOR THE FUTURE\n\nSantos commenced a major company-wide transformational change program in late 2003. The program was designed to significantly improve Santos' performance in four areas: key business processes, financial performance, organisation structure and company culture.", - "page_start": 27, - "page_end": 27, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "| Santos (Warim) Pty Ltd | SA | Santos QNT Pty Ltd | QLD |\n| Santos Australian Hydrocarbons Pty Ltd | QLD | Controlled entities of Santos QNT Pty Ltd | |\n| Santos (BOL) Pty Ltd | NSW | Santos QNT (No. 1) Pty Ltd | QLD |\n| Controlled entity of Santos (BOL) Pty Ltd | | Controlled entities of Santos QNT (No. 
1) Pty Ltd | |\n| Bridge Oil Exploration Pty Limited | ACT | Santos Petroleum Management Pty Ltd | QLD |\n| Santos Darwin LNG Pty Ltd | ACT | Santos Petroleum Operations Pty Ltd | QLD |\n| Santos Direct Pty Ltd 3 | SA | TMOC Exploration Proprietary Limited | QLD |\n| Santos Facilities Pty Ltd | SA | Santos QNT (No. 2) Pty Ltd | QLD |\n| Santos Finance Ltd | NSW | Controlled entities of Santos QNT (No. 2) Pty Ltd | |\n| Santos Globe Pty Ltd (formerly Globex Far East Pty Ltd) | WA | Associated Petroleum Pty Ltd | QLD |\n| Santos International Holdings Pty Ltd | ACT | Moonie Oil Pty Ltd | QLD |\n| Controlled entities of Santos International Holdings Pty Ltd | | Petromin Pty Ltd | QLD |\n| Barracuda Limited | PNG | Santos (299) Pty Ltd | QLD |\n| Lavana Limited | PNG | Santos Exploration Pty Ltd | VIC |\n| Novus UK (Kakap 2) Limited 2 | UK | Santos Gnuco Pty Ltd | QLD |\n| Peko Offshore Ltd | BER | Transoil Pty Ltd | QLD |\n| Sanro Insurance Pte Ltd | SING | Santos Resources Pty Ltd | QLD |\n| Santos Americas and Europe Corporation | USA | Santos Timor Sea Pipeline Pty Ltd | NSW |\n| Controlled entity of Santos Americas and Europe Corporation | | Sesap Pty Ltd 2 | VIC |\n| Santos USA Corp | USA | Vamgas Pty Ltd | VIC |\n| Santos (Bawean) Pty Ltd | SA | | |", - "page_start": 71, - "page_end": 71, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "The financial impacts of the acquisitions on the Santos Group and the Company are summarised below:\n\n| | Consolidated | Consolidated | Santos Ltd | Santos Ltd |\n|------------------------------------------------------------------------------|------------------------------------------------------------------------------|----------------|---------------|---------------|\n| | 2004 $million | 2003 $million | 2004 $million | 2003 $million |\n| Fair value of net assets acquired | | | | |\n| Cash | (1.7) | 1.3 | (1.4) | 1.3 |\n| Other | (2.4) | 10.3 | (2.3) | 10.3 |\n| Exploration and development expenditure | 131.4 | 12.4 | 
95.9 | 12.4 |\n| | 127.3 | 24.0 | 92.2 | 24.0 |\n| Purchase consideration | | | | |\n| Cash consideration paid | 110.6 | 24.0 | 92.2 | 24.0 |\n| Amount payable after balance date | 16.7 | - | - | - |\n| | 127.3 | 24.0 | 92.2 | 24.0 |\n| During the financial year the following controlled entities were registered: | During the financial year the following controlled entities were registered: | | | |\n| Santos Direct Pty Ltd | Santos Brantas Pty Ltd | | | |\n| Santos Egypt Pty Ltd | Santos (Donggala) Pty Ltd | | | |\n\n", - "page_start": 72, - "page_end": 72, - "source_file": "ASX_STO_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "1002.2525.pdf", - "query": "How have been confirmed nonvanishing neutrino ?", - "target_page": 2, - "target_passage": "The nonvanishing neutrino masses have been confirmed by various neutrino oscillation phenomena and indicate the evidence of new physics beyond the Standard Model.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## I. INTRODUCTION\n\nThe nonvanishing neutrino masses have been confirmed by various neutrino oscillation phenomena and indicate the evidence of new physics beyond the Standard Model. The most attractive idea to naturally explain the tiny neutrino masses is the seesaw mechanism [1], in which the right-handed (RH) neutrinos singlet under the SM gauge group are introduced. The minimal gauged U (1) B -L model based on the gauge group SU (3) C × SU (2) L × U (1) Y × U (1) B -L [2] is an elegant and simple extension of the SM, in which the RH neutrinos of three generations are necessarily introduced because of the gauge and gravitational anomaly cancellations. In addition, the mass of RH neutrinos arises associated with the U (1) B -L gauge symmetry breaking.\n\nAlthough the scale of the B -L gauge symmetry breaking is basically arbitrary as long as phenomenological constraints are satisfied, one interesting option is to take it to be the TeV scale [3]. 
It has been recently pointed out [4] that when the classical conformal invariance is imposed on the minimal U (1) B -L model, the symmetry breaking scale appears to be the TeV scale naturally. If this is the case, all new particles, the Z ' gauge boson, the B -L Higgs boson H and the RH neutrinos appear at the TeV scale unless the U (1) B -L gauge coupling is extremely small, and they can be discovered at Large Hadron Collider [5-8]. Then we may be able to understand the relation between the gauge symmetry breaking and the origin of neutrino masses.\n\nAlthough such a TeV scale model is interesting and appealing, one might think that the absence of dark matter (DM) candidate is a shortcoming of this model. A sterile RH neutrino with mass of the order of MeV is one possibility [9]. In this paper, we propose a very simple idea to introduce the DM candidate in the minimal gauged U (1) B -L model. We introduce the Z 2 parity into the model and impose one of three RH neutrinos to be odd, while the others even. In this way, the Z 2 -odd RH neutrino becomes stable and the DM candidate. Note that two RH neutrinos are enough to reconcile with the observed neutrino oscillation data, with a prediction of one massless light neutrino. Therefore, without introducing any additional new dynamical degrees of freedom, the DM particle arises in the minimal gauged U (1) B -L model.\n\nThe paper is organized as follows. In the next section, we briefly describe our model. 
In section III, we estimate the thermal relic density of the RH neutrino and identify the model", - "page_start": 1, - "page_end": 1, - "source_file": "1002.2525.pdf" - }, - { - "text": "## Higgs portal dark matter in the minimal gauged U (1) B -L model\n\nNobuchika Okada ∗\n\nDepartment of Physics and Astronomy,\n\nUniversity of Alabama, Tuscaloosa, AL 35487, USA\n\nOsamu Seto †\n\nDepartment of Architecture and Building Engineering, Hokkai-Gakuen University, Sapporo 062-8605, Japan\n\n## Abstract\n\nWe propose a scenario of the right-handed neutrino dark matter in the context of the minimal gauged U (1) B -L model by introducing an additional parity which ensures the stability of dark matter particle. The annihilation of this right-handed neutrino takes place dominantly through the s -channel Higgs boson exchange, so that this model can be called Higgs portal dark matter model. We show that the thermal relic abundance of the right-handed neutrino dark matter with help of Higgs resonance can match the observed dark matter abundance. In addition we estimate the cross section with nucleon and show that the next generation direct dark matter search experiments can explore this model.\n\nPACS numbers:", - "page_start": 0, - "page_end": 0, - "source_file": "1002.2525.pdf" - }, - { - "text": "parameters to be consistent with the current observations. Next we calculate the scattering cross section between the DM particle and a proton and discuss the implication for the direct DM search experiments.\n\n## A. Thermal relic density\n\nThe DM RH neutrino interacts with the SM particles through couplings with B -L gauge and B -L Higgs bosons. Note that neutrino Dirac Yukawa interactions are absent because of the Z 2 parity. The most of annihilation of the RH neutrinos occurs via Z ' , H and h exchange processes in the s -channel. 
In practice, the dominant contributions come from the Higgs ( h and H ) exchange diagrams, because the Z ' exchange processes are suppressed by the inverse square of the B -L Higgs VEV v ' /greaterorsimilar 3 TeV. Thus, we obtain Higgs portal DM of RH neutrino effectively. The relevant annihilation modes are the annihilation into f ¯ f , W + W -, ZZ , and h ( H ) h ( H ). Since RH neutrino DM couples to only B -L Higgs Ψ while a SM particle does to SM Higgs Φ, the DM annihilation occurs only through the mixing between these two Higgs bosons. Although it is not so severe, the precision electroweak measurements [12] as well as the unitarity bound [13] give constraints on the mixing angle and mass spectrum of the Higgs bosons.\n\nThe thermal relic abundance of DM\n\nΩ N h 2 = 1 . 1 × 10 9 m N /T d √ g ∗ M P 〈 σv 〉 GeV -1 , (14)\n\nwith the Planck mass M P , the thermal averaged product of the annihilation cross section and the relative velocity 〈 σv 〉 , the total number of relativistic degrees of freedom in the thermal bath g ∗ , and the decoupling temperature T d , is evaluated by solving the Boltzmann equation for the number density of RH neutrino n N ;\n\ndn N dt +3 Hn N = -〈 σv 〉 ( n 2 N -n 2 EQ ) , (15)\n\nand the Friedmann equation\n\nH 2 ≡ ( ˙ a a ) 2 = 8 π 3 M 2 P ρ, (16)\n\nwith n EQ and a ( t ) being the equilibrium number density and the scale factor, under the radiation dominated Universe with the energy density ρ = ρ rad [14].", - "page_start": 4, - "page_end": 4, - "source_file": "1002.2525.pdf" - }, - { - "text": "From Eq. (19), one can see that σ ( p ) SI ∝ (sin 2 θ/v ' ) 2 for a given DM mass m N . Fig. 3 shows the spin-independent cross section of RH neutrino with a proton. The resultant cross section is found to be far below the current limits reported by XENON10 [24] and CDMSII [25]: σ SI /lessorsimilar 4 × 10 -8 -2 × 10 -7 pb, for a DM mass of 100 GeV-1 TeV. 
Future experiments such as XENON1T [26] can reach the cross section predicted in our model.\n\nFIG. 3: The spin independent scattering cross section with a proton. All parameters are same as those used in the previous section. The upper and lower lines correspond to sin θ = 0 . 7 and 0 . 3, respectively.\n\n\n\n## IV. SUMMARY\n\nWe have proposed a scenario of the RH neutrino dark matter in the context of the minimal gauged U (1) B -L model. We have introduced a discrete Z 2 parity in the model, so that one RH neutrino assigned as Z 2 -odd can be stable and, hence, the DM candidate, while the other two RH neutrinos account for neutrino masses and mixings through the seesaw mechanism. No additional degrees of freedom are necessary to be added. We have evaluated the relic density of the dark matter particle. The dominant annihilation modes are via the Higgs boson exchange processes in the s -channel and thus, our model can be called Higgs portal DM model. It has been found that the relic density consistent with the current observation", - "page_start": 7, - "page_end": 7, - "source_file": "1002.2525.pdf" - }, - { - "text": "The Higgs fields φ and ψ are obtained by expanding Φ and Ψ as\n\nΦ =   0 1 √ 2 ( v + φ )   , (5)\n\nΨ = 1 √ 2 ( v ' + ψ ) , (6)\n\naround the true vacuum with the vacuum expectation values v and v ' . These are related with the mass eigenstates h and H through\n\n  h H   =   cos θ -sin θ sin θ cos θ     φ ψ   , (7)\n\nwith θ being the mixing angle. Their masses are given by\n\nM 2 h = 2 λ 1 v 2 cos 2 θ +2 λ 2 v ' 2 sin 2 θ -2 λ 3 vv ' sin θ cos θ, (8)\n\nM 2 H = 2 λ 1 v 2 sin 2 θ +2 λ 2 v ' 2 cos 2 θ +2 λ 3 vv ' sin θ cos θ. (9)\n\nThe mass of the new neutral gauge boson Z ' arises by the U (1) B -L gauge symmetry breaking,\n\nM 2 Z ' = 4 g 2 B -L v ' 2 . (10)\n\nAssociated with the U (1) B -L gauge symmetry breaking, the RH neutrinos N i acquire masses\n\nM N i = -λ R i v ' √ 2 . 
(11)\n\nFrom LEP experiment, the current lower bound on the Z ' boson mass has been found to be [10, 11]\n\nM Z ' g B -L = 2 v ' /greaterorsimilar 6 -7 TeV . (12)\n\nTwo Z 2 -even RH neutrinos N 1 and N 2 are responsible for light neutrino masses via the seesaw mechanism,\n\nm ν αβ = -∑ i =1 , 2 y αi y iβ v 2 2 M N i . (13)\n\nNote that the rank of this mass matrix is two, so that the lightest neutrino is massless.\n\n## III. RIGHT-HANDED NEUTRINO DARK MATTER\n\nDue to the Z 2 parity, one of RH neutrino N 3 (we denote it as N hereafter) in our model can be the DM candidate. We first estimate its relic abundance and identify the model", - "page_start": 3, - "page_end": 3, - "source_file": "1002.2525.pdf" - }, - { - "text": "parameter to be consistent with the current observations. We also calculate the scattering cross section between the DM particle and nucleon and discuss the implication for the direct DM search experiments. We summarize our results in the section IV. Our notations and the formulas used in our analysis are listed in Appendix.\n\n## II. THE MINIMAL GAUGED U (1) B -L MODEL WITH Z 2 PARITY\n\nThe model is based on the gauge group SU (3) C × SU (2) L × U (1) Y × U (1) B -L . Additional fields besides the standard model fields are a gauge field Z ' µ of the U (1) B -L , a SM singlet B -L Higgs boson Ψ with two U (1) B -L charge, and three RH neutrinos N i which are necessary for the gauge and gravitational anomaly cancellations. 
In describing the RH neutrinos, we use the four component representation of RH neutrino constructed from the Weyl spinor ν R i ,\n\nN i ≡   ν R i /epsilon1 ν ∗ R i   , (1)\n\nFor the two RH neutrinos, N 1 and N 2 , we assign Z 2 parity even, while odd for N 3 , so that the RH neutrino N 3 is stable and, hence, the DM candidate.\n\nDue to the additional gauge symmetry U (1) B -L , the covariant derivative for each fields is given by\n\nD µ = D ( SM ) µ -iq B -L g B -L Z ' µ , (2)\n\nwhere D ( SM ) µ is the covariant derivative in the SM, and q B -L is the charge of each fields under the U (1) B -L with its gauge coupling g B -L .\n\nYukawa interactions relevant for the neutrino masses are given by\n\nL int = 3 ∑ α =1 2 ∑ i =1 y αi ¯ L α ˜ Φ N i -1 2 3 ∑ i =1 λ R i ¯ N i Ψ P R N i +h . c ., (3)\n\nwhere ˜ Φ = -iτ 2 Φ ∗ for Φ being the SM Higgs doublet, and without loss of generality we have worked out in the basis where the second term in the right-hand-side is in flavor diagonal for RH neutrinos. Because of the Z 2 parity, the DM candidate N 3 has no Yukawa couplings with the left-handed lepton doublets.\n\nThe general Higgs potential for the SU (2) L doublet Φ and a singlet B -L Higgs Ψ is generally given by\n\nV (Φ , Ψ) = m 2 1 | Φ | 2 + m 2 2 | Ψ | 2 + λ 1 | Φ | 4 + λ 2 | Ψ | 4 + λ 3 | Φ | 2 | Ψ | 2 . (4)", - "page_start": 2, - "page_end": 2, - "source_file": "1002.2525.pdf" - }, - { - "text": "In the expression of annihilation cross section, we used the following notations :\n\n∂ Φ ∂h = 1 √ 2 cos θ, ∂ Φ ∂H = 1 √ 2 sin θ, ∂ Ψ ∂h = -1 √ 2 sin θ, ∂ Ψ ∂H = 1 √ 2 cos θ. (A6)\n\n## Appendix B: Amplitude\n\nWe give explicit formulas of the invariant amplitude squared for the pair annihilation processes of the RH neutrinos.\n\n## 1. 
Annihilation into charged fermions\n\n|M| 2 = 32 ∣ ∣ ∣ ∣ g 2 B -L q f q N s -M 2 Z ' + iM Z ' Γ Z ' ∣ ∣ ∣ ∣ 2 ( s -4 m 2 N ) ( 3 8 s -1 2 ( s 2 -m 2 f ) + 1 2 ( s 4 -m 2 f ) cos 2 θ ) +16 λ 2 N ∣ ∣ ∣ ∣ y f ( ∂ Φ ∂h i s -M 2 h + iM h Γ h ∂ Ψ ∂h + ∂ Φ ∂H i s -M 2 H + iM H Γ H ∂ Ψ ∂H )∣ ∣ ∣ ∣ 2 ( s -4 m 2 N ) ( s 4 -m 2 f ) . (B1)\n\n## 2. Annihilation into neutrinos\n\n- a. Annihilation into ν a , ν a (light active-like neutrinos)\n\n|M| 2 = 32 ∣ ∣ ∣ ∣ g 2 B -L q f q N s -M 2 Z ' + iM Z ' Γ Z ' ∣ ∣ ∣ ∣ 2 ( s -4 m 2 N ) ( 3 8 s -1 2 ( s 2 + m 2 ν a ) + 1 2 ( s 4 + m 2 ν a ) cos 2 θ ) . (B2)", - "page_start": 9, - "page_end": 9, - "source_file": "1002.2525.pdf" - }, - { - "text": "Fig. 1 shows the relic density Ω N h 2 as a function of the DM mass m N for a set of parameters: ( v ' , M h , M H , M Z ' , sin θ ) = (4000 GeV , 120 GeV , 200 GeV , 1000 GeV , 0 . 7), for example. Willkinson Microwave Anisotropy Probe measured the value of DM abundance as Ω DM h 2 /similarequal 0 . 1 [15]. The figure shows that a desired DM relic abundance can be obtained for only near Higgs resonances, m N ≈ M h / 2 or M H / 2.\n\nFig. 2 shows the relic density Ω N h 2 as a function of the DM mass m N for a smaller Higgs mixing sin θ = 0 . 3 (others are the same as in Fig. 1). Compared with Fig. 1, for m N /lessorsimilar M W where the DM particles dominantly annihilate into f ¯ f , the relic density further increases because of the small mixing angle. When the DM is heavier, the annihilation mode into Higgs boson pairs is opened and the relic density slightly deceases, but the reduction is not enough to reach Ω N h 2 /similarequal 0 . 1.\n\nFIG. 1: The thermal relic density of RH neutrino DM as a function of its mass for a parameter set: ( v ' , M h , M H , M Z ' , sin θ ) = (3000 GeV , 120 GeV , 200 GeV , 1000 GeV , 0 . 7).\n\n\n\nOur model is quite analogous to the so-called gauge singlet scalar dark matter [16-18]. Some recent studies can be found in Refs. [19, 20]. 
In the gauge singlet scalar DM model, the thermal abundance is mainly controlled by the interactions between the SM Higgs boson and the DM particle. In our model, B -L Higgs VEV v ' can play the same role for m N < M W , namely a larger v ' corresponds to weaker coupling between DM and Higgs for a fixed DM mass. On the other hand, for m N > M W the difference appears. Even if the annihilation", - "page_start": 5, - "page_end": 5, - "source_file": "1002.2525.pdf" - }, - { - "text": "Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping. arXiv:2002.06305 [cs] .\n\nYanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. 2020. When Bert Forgets How To POS: Amnesic Probing of Linguistic Properties and MLM Predictions. arXiv:2006.00995 [cs] .\n\nKawin Ethayarajh. 2019. How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) , pages 55-65, Hong Kong, China. Association for Computational Linguistics.\n\nAllyson Ettinger. 2019. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. arXiv:1907.13528 [cs] .\n\nAngela Fan, Edouard Grave, and Armand Joulin. 2019. Reducing Transformer Depth on Demand with Structured Dropout. In International Conference on Learning Representations .\n\nMaxwell Forbes, Ari Holtzman, and Yejin Choi. 2019. Do Neural Language Representations Learn Physical Commonsense? In Proceedings of the 41st Annual Conference of the Cognitive Science Society (CogSci 2019) , page 7.\n\nJonathan Frankle and Michael Carbin. 2019. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. 
In International Conference on Learning Representations .\n\nPrakhar Ganesh, Yao Chen, Xin Lou, Mohammad Ali Khan, Yin Yang, Deming Chen, Marianne Winslett, Hassan Sajjad, and Preslav Nakov. 2020. Compressing large-scale transformerbased models: A case study on BERT. arXiv preprint arXiv:2002.11985 .\n\nSiddhant Garg, Thuy Vu, and Alessandro Moschitti. 2020. TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection. In AAAI .\n\nMichael Glass, Alfio Gliozzo, Rishav Chakravarti, Anthony Ferritto, Lin Pan, G P Shrivatsa Bhargav, Dinesh Garg, and Avi Sil. 2020. Span Selection Pre-training for Question Answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 2773-2782, Online. Association for Computational Linguistics.\n\nGoran Glavaš and Ivan Vuli'c. 2020. Is Supervised Syntactic Parsing Beneficial for Language Understanding? An Empirical Investigation. arXiv:2008.06788 [cs] .\n\nAdele Goldberg. 2006. Constructions at Work: The Nature of Generalization in Language . Oxford University Press, USA.\n\nYoav Goldberg. 2019. Assessing BERT's syntactic abilities. arXiv preprint arXiv:1901.05287 .\n\nLinyuan Gong, Di He, Zhuohan Li, Tao Qin, Liwei Wang, and Tieyan Liu. 2019. Efficient training of BERT by progressively stacking. In International Conference on Machine Learning , pages 2337-2346.\n\nMitchell A Gordon, Kevin Duh, and Nicholas Andrews. 2020. Compressing BERT: Studying the effects of weight pruning on transfer learning. arXiv preprint arXiv:2002.08307 .\n\nSaurabh Goyal, Anamitra Roy Choudhary, Venkatesan Chakaravarthy, Saurabh ManishRaje, Yogish Sabharwal, and Ashish Verma. 2020. Powerbert: Accelerating BERT inference for classification tasks. arXiv preprint arXiv:2001.08950 .\n\nFu-Ming Guo, Sijia Liu, Finlay S. Mungall, Xue Lin, and Yanzhi Wang. 2019. Reweighted Proximal Pruning for Large-Scale Language Representation. 
arXiv:1909.12486 [cs, stat] .\n\nKelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-Augmented Language Model PreTraining. arXiv:2002.08909 [cs] .\n\nYaru Hao, Li Dong, Furu Wei, and Ke Xu. 2019. Visualizing and Understanding the Effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) , pages 4143-4152, Hong", - "page_start": 14, - "page_end": 14, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "- [64] Welin D, Novikova LN, Wiberg M, Kellerth JO, Novikov LN. Survival and regeneration of cutaneous and muscular afferent neurons after peripheral nerve injury in adult rats. Exp Brain Res 2008;186:315-23.\n - [65] West CA, Davies KA, Hart AM, Wiberg M, Williams SR, Terenghi G. Volumetric magnetic resonance imaging of dorsal root ganglia for the objective quantitative assessment of neuron death after peripheral nerve injury. Exp Neurol 2007;203:22-33.\n - [66] West CA, Ljungberg C, Wiberg M, Hart A. Sensory neuron death after upper limb nerve injury and protective effect of repair: clinical evaluation using volumetric magnetic resonance imaging of dorsal root ganglia. Neurosurgery 2013;73:632-40.\n - [67] West SJ, Bonboire D, Bennett DL. StereoMate: 3D stereological automated analysis of biological structures. bioRxiv 2020:648337.\n - [68] Wiberg R, Novikova LN, Kingham PJ. Evaluation of apoptotic pathways in dorsal root ganglion neurons following peripheral nerve injury. Neuroreport 2018;29:779-85.\n - [69] Yu X, Liu H, Hamel KA, Morvan MG, Yu S, Leff J, Guan Z, Braz JM, Basbaum AI. Dorsal root ganglion macrophages contribute to both the initiation and persistence of neuropathic pain. Nat Commun 2020;11:264.\n - [70] Zheng J, Lu Y, Perl ER. Inhibitory neurones of the spinal substantia gelatinosa mediate interaction of signals from primary afferents. 
J Physiol 2010;588:2065-75.", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed2.pdf" - } - ] - }, - { - "references": { - "source_file": "1002.2525.pdf", - "query": "What are the dominant contributions in thermal relic density ?", - "target_page": 5, - "target_passage": "In practice, the dominant contributions come from the Higgs (h and H) exchange diagrams.", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "Fig. 1 shows the relic density Ω N h 2 as a function of the DM mass m N for a set of parameters: ( v ' , M h , M H , M Z ' , sin θ ) = (4000 GeV , 120 GeV , 200 GeV , 1000 GeV , 0 . 7), for example. Willkinson Microwave Anisotropy Probe measured the value of DM abundance as Ω DM h 2 /similarequal 0 . 1 [15]. The figure shows that a desired DM relic abundance can be obtained for only near Higgs resonances, m N ≈ M h / 2 or M H / 2.\n\nFig. 2 shows the relic density Ω N h 2 as a function of the DM mass m N for a smaller Higgs mixing sin θ = 0 . 3 (others are the same as in Fig. 1). Compared with Fig. 1, for m N /lessorsimilar M W where the DM particles dominantly annihilate into f ¯ f , the relic density further increases because of the small mixing angle. When the DM is heavier, the annihilation mode into Higgs boson pairs is opened and the relic density slightly deceases, but the reduction is not enough to reach Ω N h 2 /similarequal 0 . 1.\n\nFIG. 1: The thermal relic density of RH neutrino DM as a function of its mass for a parameter set: ( v ' , M h , M H , M Z ' , sin θ ) = (3000 GeV , 120 GeV , 200 GeV , 1000 GeV , 0 . 7).\n\n\n\nOur model is quite analogous to the so-called gauge singlet scalar dark matter [16-18]. Some recent studies can be found in Refs. [19, 20]. In the gauge singlet scalar DM model, the thermal abundance is mainly controlled by the interactions between the SM Higgs boson and the DM particle. 
In our model, B -L Higgs VEV v ' can play the same role for m N < M W , namely a larger v ' corresponds to weaker coupling between DM and Higgs for a fixed DM mass. On the other hand, for m N > M W the difference appears. Even if the annihilation", - "page_start": 5, - "page_end": 5, - "source_file": "1002.2525.pdf" - }, - { - "text": "parameters to be consistent with the current observations. Next we calculate the scattering cross section between the DM particle and a proton and discuss the implication for the direct DM search experiments.\n\n## A. Thermal relic density\n\nThe DM RH neutrino interacts with the SM particles through couplings with B -L gauge and B -L Higgs bosons. Note that neutrino Dirac Yukawa interactions are absent because of the Z 2 parity. The most of annihilation of the RH neutrinos occurs via Z ' , H and h exchange processes in the s -channel. In practice, the dominant contributions come from the Higgs ( h and H ) exchange diagrams, because the Z ' exchange processes are suppressed by the inverse square of the B -L Higgs VEV v ' /greaterorsimilar 3 TeV. Thus, we obtain Higgs portal DM of RH neutrino effectively. The relevant annihilation modes are the annihilation into f ¯ f , W + W -, ZZ , and h ( H ) h ( H ). Since RH neutrino DM couples to only B -L Higgs Ψ while a SM particle does to SM Higgs Φ, the DM annihilation occurs only through the mixing between these two Higgs bosons. Although it is not so severe, the precision electroweak measurements [12] as well as the unitarity bound [13] give constraints on the mixing angle and mass spectrum of the Higgs bosons.\n\nThe thermal relic abundance of DM\n\nΩ N h 2 = 1 . 
1 × 10 9 m N /T d √ g ∗ M P 〈 σv 〉 GeV -1 , (14)\n\nwith the Planck mass M P , the thermal averaged product of the annihilation cross section and the relative velocity 〈 σv 〉 , the total number of relativistic degrees of freedom in the thermal bath g ∗ , and the decoupling temperature T d , is evaluated by solving the Boltzmann equation for the number density of RH neutrino n N ;\n\ndn N dt +3 Hn N = -〈 σv 〉 ( n 2 N -n 2 EQ ) , (15)\n\nand the Friedmann equation\n\nH 2 ≡ ( ˙ a a ) 2 = 8 π 3 M 2 P ρ, (16)\n\nwith n EQ and a ( t ) being the equilibrium number density and the scale factor, under the radiation dominated Universe with the energy density ρ = ρ rad [14].", - "page_start": 4, - "page_end": 4, - "source_file": "1002.2525.pdf" - }, - { - "text": "From Eq. (19), one can see that σ ( p ) SI ∝ (sin 2 θ/v ' ) 2 for a given DM mass m N . Fig. 3 shows the spin-independent cross section of RH neutrino with a proton. The resultant cross section is found to be far below the current limits reported by XENON10 [24] and CDMSII [25]: σ SI /lessorsimilar 4 × 10 -8 -2 × 10 -7 pb, for a DM mass of 100 GeV-1 TeV. Future experiments such as XENON1T [26] can reach the cross section predicted in our model.\n\nFIG. 3: The spin independent scattering cross section with a proton. All parameters are same as those used in the previous section. The upper and lower lines correspond to sin θ = 0 . 7 and 0 . 3, respectively.\n\n\n\n## IV. SUMMARY\n\nWe have proposed a scenario of the RH neutrino dark matter in the context of the minimal gauged U (1) B -L model. We have introduced a discrete Z 2 parity in the model, so that one RH neutrino assigned as Z 2 -odd can be stable and, hence, the DM candidate, while the other two RH neutrinos account for neutrino masses and mixings through the seesaw mechanism. No additional degrees of freedom are necessary to be added. We have evaluated the relic density of the dark matter particle. 
The dominant annihilation modes are via the Higgs boson exchange processes in the s -channel and thus, our model can be called Higgs portal DM model. It has been found that the relic density consistent with the current observation", - "page_start": 7, - "page_end": 7, - "source_file": "1002.2525.pdf" - }, - { - "text": "## Higgs portal dark matter in the minimal gauged U (1) B -L model\n\nNobuchika Okada ∗\n\nDepartment of Physics and Astronomy,\n\nUniversity of Alabama, Tuscaloosa, AL 35487, USA\n\nOsamu Seto †\n\nDepartment of Architecture and Building Engineering, Hokkai-Gakuen University, Sapporo 062-8605, Japan\n\n## Abstract\n\nWe propose a scenario of the right-handed neutrino dark matter in the context of the minimal gauged U (1) B -L model by introducing an additional parity which ensures the stability of dark matter particle. The annihilation of this right-handed neutrino takes place dominantly through the s -channel Higgs boson exchange, so that this model can be called Higgs portal dark matter model. We show that the thermal relic abundance of the right-handed neutrino dark matter with help of Higgs resonance can match the observed dark matter abundance. In addition we estimate the cross section with nucleon and show that the next generation direct dark matter search experiments can explore this model.\n\nPACS numbers:", - "page_start": 0, - "page_end": 0, - "source_file": "1002.2525.pdf" - }, - { - "text": "If the effects of viscosity and compressibility are not of immediate importance, the remaining items can be combined for consideration. Since the major aerodynamic forces are the result of various pressures distributed on a surface, the surface area will be a major factor. Dynamic prcssurc of the airstream is another common denominator of aerodynamic forces and is a major factor since the magnitude of a pressure distribution depends on the source energy of the free stream. 
The remaining major factor is the relative peJJ#re dittribution", - "page_start": 39, - "page_end": 39, - "source_file": "00-80T-80.pdf" - }, - { - "text": "## Abstract\n\nWe review recent experiments on dewetting thin films of evaporating colloidal nanoparticle suspensions (nanofluids) and discuss several theoretical approaches to describe the ongoing processes including coupled transport and phase changes. These approaches range from microscopic discrete stochastic theories to mesoscopic continuous deterministic descriptions. In particular, we focus on (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nModels (i) and (ii) are employed to discuss the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor film' that remains behind a mesoscopic dewetting front. We highlight, in particular, the presence of a transverse instability in the evaporative dewetting front which results in highly branched fingering structures. The subtle interplay of decomposition in the film and contact line motion is discussed.\n\nFinally, we discuss a simple thin film model (iii) of the hydrodynamics on the mesoscale. We employ coupled evolution equations for the film thickness profile and mean particle concentration. The model is used to discuss the self-pinning and de-pinning of a contact line related to the 'coffee-stain' effect.\n\nIn the course of the review we discuss the advantages and limitations of the different theories, as well as possible future developments and extensions.\n\nThe paper is published in: J. Phys.-Cond. Mat. 21 , 264016 (2009), in the Volume 'Nanofluids on solid substrates' and can be obtained at http://dx.doi.org/10.1088/0953-8984/21/26/264016", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2669.pdf" - }, - { - "text": "on the model (see above). 
The purely two-dimensional character of the KMC was extended to a 'pseudo three-dimensional' one by making the effective chemical potential dependent on the mean liquid coverage [38]. As the latter is related to a mean film thickness, this corresponds to the introduction of a 'global' thickness-dependent disjoining pressure into the evaporation term without an explicit consideration of a film thickness. The amended model can reproduce bimodal structures that are beyond the scope of the purely two-dimensional model [38, 39]. Fully threedimensional models are also discussed in the literature [76, 77].\n\n## B. Dynamical Density Functional theory\n\nThe limitations of the kinetic Monte Carlo model introduced in the previous Section are related to its character as a two-dimensional lattice gas with only three states: gas, liquid or particle. This implies that (i) no liquid can be transported to a site on the surface already filled with liquid, i.e., diffusion of the liquid can not be incorporated in a sensible way and (ii) one is not able to distinguish between the influence of the short- and the long-range parts of the interactions with the substrate, as all such interactions are absorbed into the effective chemical potential.\n\nHowever, using dynamical density functional theory (DDFT) [78-83] one can develop a model for the processes in the ultrathin postcursor film without these limitations, although here we limit ourselves to developing the theory at the level of the KMC and solely discuss how to extend it to incorporate the influence of the liquid diffusion over the surface. Such a DDFT model describes the coupled dynamics of the density fields of the liquid ρ l and the nanoparticles ρ n . The densities ρ l and ρ n are defined as the probabilities of finding a given lattice site on the surface to be occupied by a film of liquid or by a nanoparticle, respectively. 
Note that the probability densities correspond to number densities as we use the lattice spacing σ = 1 as our unit of length.\n\nTo develop the DDFT, one must first derive the underlying free energy functional F [ ρ l , ρ n ] , and secondly, devise dynamical equations for both density fields that account for the conserved and the non-conserved aspects of their dynamics, i.e., transport and phase change processes, respectively. For a system governed by the hamiltonian (3), we may construct a mean-field (Bragg-Williams) approximation for the free energy of the system [78, 84] which contains an entropic contribution and contributions from the interactions between the different species (nanoparticles and liquid). The free energy is a semi-grand free energy, since the liquid is treated grand canonically (it is coupled to a reservoir with chemical potential µ ), whereas the nanoparticles are treated in the", - "page_start": 13, - "page_end": 13, - "source_file": "1001.2669.pdf" - }, - { - "text": "\n\nFIG. 8: (Colour online) Space-time plots are given for (left) the film thickness h and (right) the nanoparticle layer height h p = hφ . The plot corresponds to the complete evolution resulting in the ring profile of Fig. 6(b). In both panels bright [dark] parts denote high [low] regions. The prominent central dark-bright border in the left panel indicates the change of the position of the contact line in time. Over time, four regimes can be distinguished: (i) fast motion before pinning, (ii) nearly no front motion during self-pinning, (iii) slow motion after depinning, and (iv) final evaporation from the center.\n\n\n\nshould also be investigated further in the simple case presented here.\n\n## IV. CONCLUSION\n\nWe have discussed recent work on pattern formation processes in films and drops of evaporating suspensions/solutions of polymers and particles. 
After reviewing experiments on suspensions of thiol-coated gold nanoparticles in toluene we have focused on the modelling of the transport and phase change processes involved. A theoretical approach to the modelling of the hydrodynamics on the mesoscale has been described as well as more microscopic models for the dynamics in the observed nanoscopic 'postcursor' film. In particular, we have introduced (i) a microscopic kinetic Monte Carlo model, (ii) a dynamical density functional theory and (iii) a hydrodynamic thin film model.\n\nThe kinetic Monte Carlo model and the dynamical density functional theory can both be used to investigate and understand the formation of polygonal networks, spinodal and branched structures resulting from the dewetting of an ultrathin 'postcursor' film that remains behind the mesoscopic dewetting front. They are, however, not capable of describing the dynamical processes in a meso-", - "page_start": 22, - "page_end": 22, - "source_file": "1001.2669.pdf" - }, - { - "text": "## HOT! HOT! HOT!\n\nCompetitive, inventive, expansive by day … cozy, intimate, and warm by night. On one hand, a technology buff; while on the other hand, an incurable romantic. I'll bathe you in heat - and show you how to fill a room with a very special glow. Seek hearthwarming personality with a powerful appreciation for style and performance. Must have aspirational dreams and family values. My dream: someone to ignite my potential … someone to keep the home fires burning.\n\nHEARTH & HOME TECHNOLOGIES", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "TABLE OF CONTENTS\n\n| PREFACE.. ,., . | iii |\n|--------------------------------------------------------------------------------------------------------------------------------------------------|----------|\n| CHAPTER I: BASIC AERODYNAMICS | |\n| WING AND AIRFOIL FORCES | |\n| PROPERTIES OF THE ATMOSPHERE. 
Static pressure Temperature Density Viscosity Standard atmosphere Pressure altitude Density altitude | 1 |\n| BERNOULLI'S PRINCIPLE AND SUBSONIC AIRFLOW.. | 4 |\n| Bernoulli's equation, | 6 |\n| Incompressible tlow Variation of static pressure and velocity Kinetic and porcntial energy of flow Static and dynamic prcssurc, 4 | |\n| Airspeed measurement.. . . Stagnation prcssurc Measurement of dynamic pressure Pitot and static sources Indicated airspeed | 9 |\n| DEVELOPMENT OF AERODYNAMIC FORCES.. ....... | 14 |\n| Streamline pattern and pressure distribution. Generatioaoflift.......................................... ....... ....... | 14 |\n| Circulation Pressure distribution | 16 |\n| Airfoil terminology. Aerodynamic force coefficient . . Basic lift equation | ',: 2 3 |", - "page_start": 6, - "page_end": 6, - "source_file": "00-80T-80.pdf" - } - ] - }, - { - "references": { - "source_file": "1002.2525.pdf", - "query": "What happend to the annihilation and the relic density when the DM is heavier ?", - "target_page": 6, - "target_passage": "When the DM is heavier, the annihilation mode into Higgs boson pairs is opened and the relic density slightly deceases", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Fig. 1 shows the relic density Ω N h 2 as a function of the DM mass m N for a set of parameters: ( v ' , M h , M H , M Z ' , sin θ ) = (4000 GeV , 120 GeV , 200 GeV , 1000 GeV , 0 . 7), for example. Willkinson Microwave Anisotropy Probe measured the value of DM abundance as Ω DM h 2 /similarequal 0 . 1 [15]. The figure shows that a desired DM relic abundance can be obtained for only near Higgs resonances, m N ≈ M h / 2 or M H / 2.\n\nFig. 2 shows the relic density Ω N h 2 as a function of the DM mass m N for a smaller Higgs mixing sin θ = 0 . 3 (others are the same as in Fig. 1). Compared with Fig. 
1, for m N /lessorsimilar M W where the DM particles dominantly annihilate into f ¯ f , the relic density further increases because of the small mixing angle. When the DM is heavier, the annihilation mode into Higgs boson pairs is opened and the relic density slightly deceases, but the reduction is not enough to reach Ω N h 2 /similarequal 0 . 1.\n\nFIG. 1: The thermal relic density of RH neutrino DM as a function of its mass for a parameter set: ( v ' , M h , M H , M Z ' , sin θ ) = (3000 GeV , 120 GeV , 200 GeV , 1000 GeV , 0 . 7).\n\n\n\nOur model is quite analogous to the so-called gauge singlet scalar dark matter [16-18]. Some recent studies can be found in Refs. [19, 20]. In the gauge singlet scalar DM model, the thermal abundance is mainly controlled by the interactions between the SM Higgs boson and the DM particle. In our model, B -L Higgs VEV v ' can play the same role for m N < M W , namely a larger v ' corresponds to weaker coupling between DM and Higgs for a fixed DM mass. On the other hand, for m N > M W the difference appears. Even if the annihilation", - "page_start": 5, - "page_end": 5, - "source_file": "1002.2525.pdf" - }, - { - "text": "parameters to be consistent with the current observations. Next we calculate the scattering cross section between the DM particle and a proton and discuss the implication for the direct DM search experiments.\n\n## A. Thermal relic density\n\nThe DM RH neutrino interacts with the SM particles through couplings with B -L gauge and B -L Higgs bosons. Note that neutrino Dirac Yukawa interactions are absent because of the Z 2 parity. The most of annihilation of the RH neutrinos occurs via Z ' , H and h exchange processes in the s -channel. In practice, the dominant contributions come from the Higgs ( h and H ) exchange diagrams, because the Z ' exchange processes are suppressed by the inverse square of the B -L Higgs VEV v ' /greaterorsimilar 3 TeV. Thus, we obtain Higgs portal DM of RH neutrino effectively. 
The relevant annihilation modes are the annihilation into f ¯ f , W + W -, ZZ , and h ( H ) h ( H ). Since RH neutrino DM couples to only B -L Higgs Ψ while a SM particle does to SM Higgs Φ, the DM annihilation occurs only through the mixing between these two Higgs bosons. Although it is not so severe, the precision electroweak measurements [12] as well as the unitarity bound [13] give constraints on the mixing angle and mass spectrum of the Higgs bosons.\n\nThe thermal relic abundance of DM\n\nΩ N h 2 = 1 . 1 × 10 9 m N /T d √ g ∗ M P 〈 σv 〉 GeV -1 , (14)\n\nwith the Planck mass M P , the thermal averaged product of the annihilation cross section and the relative velocity 〈 σv 〉 , the total number of relativistic degrees of freedom in the thermal bath g ∗ , and the decoupling temperature T d , is evaluated by solving the Boltzmann equation for the number density of RH neutrino n N ;\n\ndn N dt +3 Hn N = -〈 σv 〉 ( n 2 N -n 2 EQ ) , (15)\n\nand the Friedmann equation\n\nH 2 ≡ ( ˙ a a ) 2 = 8 π 3 M 2 P ρ, (16)\n\nwith n EQ and a ( t ) being the equilibrium number density and the scale factor, under the radiation dominated Universe with the energy density ρ = ρ rad [14].", - "page_start": 4, - "page_end": 4, - "source_file": "1002.2525.pdf" - }, - { - "text": "From Eq. (19), one can see that σ ( p ) SI ∝ (sin 2 θ/v ' ) 2 for a given DM mass m N . Fig. 3 shows the spin-independent cross section of RH neutrino with a proton. The resultant cross section is found to be far below the current limits reported by XENON10 [24] and CDMSII [25]: σ SI /lessorsimilar 4 × 10 -8 -2 × 10 -7 pb, for a DM mass of 100 GeV-1 TeV. Future experiments such as XENON1T [26] can reach the cross section predicted in our model.\n\nFIG. 3: The spin independent scattering cross section with a proton. All parameters are same as those used in the previous section. The upper and lower lines correspond to sin θ = 0 . 7 and 0 . 3, respectively.\n\n\n\n## IV. 
SUMMARY\n\nWe have proposed a scenario of the RH neutrino dark matter in the context of the minimal gauged U (1) B -L model. We have introduced a discrete Z 2 parity in the model, so that one RH neutrino assigned as Z 2 -odd can be stable and, hence, the DM candidate, while the other two RH neutrinos account for neutrino masses and mixings through the seesaw mechanism. No additional degrees of freedom are necessary to be added. We have evaluated the relic density of the dark matter particle. The dominant annihilation modes are via the Higgs boson exchange processes in the s -channel and thus, our model can be called Higgs portal DM model. It has been found that the relic density consistent with the current observation", - "page_start": 7, - "page_end": 7, - "source_file": "1002.2525.pdf" - }, - { - "text": "## Turning off data retention protection\n\nWhen you turn off data retention protection, the following descriptions explain what happens when you use the creation-based object expiration policy and the event-based retention object expiration policy:\n\n - /SM590000 Creation-based object expiration policy: Content Manager OnDemand issues a delete object command through the Tivoli Storage Manager API. Objects are deleted during the next inventory expiration. If a Content Manager OnDemand application group is deleted, a delete filespace command is issued instead, and the objects are immediately deleted with the file space.\n - /SM590000 Event-based retention object expiration policy: Content Manager OnDemand issues an event trigger command through the Tivoli Storage Manager API. The status of the objects that are affected changes from PENDING to STARTED , and the objects are expired by Tivoli Storage Manager based on their retention parameters. If the retention parameters are set to NOLIMIT , the objects never expire. 
If a Content Manager OnDemand application group is deleted, a delete filespace command is issued instead, and the objects are immediately deleted with the file space.\n\n## Turning on data retention protection\n\nWhen you turn on data retention protection, the following descriptions explain what happens when you use creation-based object expiration policy and event-based retention object expiration policy:", - "page_start": 258, - "page_end": 258, - "source_file": "sg246915.pdf" - }, - { - "text": "- /SM590000 Creation-based object expiration policy: Content Manager OnDemand issues no commands to Tivoli Storage Manager. The objects are effectively orphaned by Content Manager OnDemand and are expired by Tivoli Storage Manager based on their retention parameters. If the retention parameters are set to NOLIMIT , the objects never expire.\n - /SM590000 Event-based retention object expiration policy: Content Manager OnDemand issues an event trigger command through the Tivoli Storage Manager API. The event status of the objects that are affected is changed from PENDING to STARTED, and the affected objects are expired by Tivoli Storage Manager based on their retention parameters. If the retention parameters are set to NOLIMIT , the objects never expire.", - "page_start": 258, - "page_end": 258, - "source_file": "sg246915.pdf" - }, - { - "text": "- /SM590000 Considerations and Comparisons between IBM SDD for Linux and DM-MPIO", - "page_start": 811, - "page_end": 811, - "source_file": "sg247938.pdf" - }, - { - "text": "Tivoli Storage Manager supports two retention policies:\n\n - /SM590000 In creation-based retention , the policy becomes active when the data is stored (created) on the Tivoli Storage Manager server. This policy is the default retention policy method and it is used with normal backup/archive clients.\n - /SM590000 In event-based retention , the policy becomes active when the client sends a retention event to the Tivoli Storage Manager server. 
The retention event can be sent to the server any time after the data is stored on the server. Until the retention event is received, the data is indefinitely stored on the Tivoli Storage Manager server. For Content Manager OnDemand, the retention event is the call to delete the data. A load, unload, application group delete, or expiration of data triggers the retention event.\n\nIf you decide to use these policies in Tivoli Storage Manager, the Content Manager OnDemand scenarios that are described in the rest of this section are supported.", - "page_start": 257, - "page_end": 257, - "source_file": "sg246915.pdf" - }, - { - "text": "## 10.1 Introduction\n\nFor this chapter, unless explicitly stated otherwise, the term 'data' is used to refer to the report data, the extracted documents or segments, and their related indexes and the extracted resources.\n\nA Content Manager OnDemand system logically stores data in application groups . An application group is defined by the Content Manager OnDemand administrator. It consists of data that has the same indexing, data storage, and expiration requirements. The application group definition also specifies where the report and document data are stored, how long the data is stored, and how the data expires. The method or methods that can be used to expire the data are a function of the application group parameters that are defined before the data is loaded into Content Manager OnDemand. In a Content Manager OnDemand system, data typically goes through a lifecycle of loading, storing, migration, and an expiration process.\n\n## 10.2 Loading and storing the data\n\nThe Content Manager OnDemand architecture allows the control and management of the data throughout its lifecycle. The data lifecycle begins with running an efficient load process. 
Each load process invocation ingests report data for a specified application group.\n\nDuring a load process, Content Manager OnDemand stores report (document) data, its resources, and index data, as shown in Figure 10-1.\n\nFigure 10-1 Data and index storage locations\n\n\n\nThe Content Manager OnDemand load process identifies, segments, and compresses groups of documents into storage objects that are then stored in the Content Manager OnDemand archive, as illustrated in Figure 10-1. To improve the efficiency of the storage process, Content Manager OnDemand aggregates the stored documents (typically a few kilobytes in size) into storage objects. This aggregation provides efficient, high-volume storage, retrieval, and expiration performance.", - "page_start": 243, - "page_end": 243, - "source_file": "sg246915.pdf" - }, - { - "text": "## Object Size\n\nThe Object Size parameter determines the size of a storage object in kilobytes. Content Manager OnDemand, by default, segments and compresses stored data into 10 MB storage objects. The default of 10 MB is the recommended object size value.\n\nImportant : Setting the value too small or too large can adversely affect load performance.\n\nNote: The object size, which is defined here, must be equal to or larger than the size of the compressed storage objects that are defined in any application that is assigned to the application group.\n\n## Migrate Data from Cache pane\n\nThis section of the Advanced Storage Management window determines when documents and resources are migrated to archive storage. A storage set that is associated with a migration policy that uses archive media must be selected to enable migration to archive storage. The possible values are listed:\n\n - /SM590000 No: Data is never migrated from cache. 
This option is unavailable when a storage set that is associated with archive storage is selected for the application group.\n - /SM590000 When Data is Loaded: Data is migrated to archive storage when the load process runs because of a store command, such as Add Report ( ADDRPTOND ), Start Monitor ( STRMONOND ), or ARSLOAD .\n - /SM590000 Next Cache Migration: Data is migrated to archive storage the next time that Disk Storage Manager is run.\n - /SM590000 After Days in Cache: This value specifies the number of days that data remains in cache storage. After the data reaches the prescribed number of days in cache storage, the data is copied to archive storage the next time that Disk Storage Manager is run.\n\nASM is started with the STRASMOND command. The command must be run only in batch. For more information about running the STRASMOND command, see the IBM Content Manager OnDemand for i - Common Server Administration Guide , SC19-2792.", - "page_start": 152, - "page_end": 152, - "source_file": "sg246915.pdf" - }, - { - "text": "- /SM590000 Document: With this expiration type, a document at a time is deleted from the application group. Data that is stored in archive storage is deleted by the storage manager based on the archive expiration date. Storing documents with an expiration type of Document causes the expiration process to search through every document in the segment to determine whether the expiration date was reached, which results in long processing times.\n\nWhen the arsmaint expiration process is run, data is deleted only from the application group if the upper threshold for the size of the cache storage is reached. By default, the cache threshold is 80%. A lower threshold can be forced by the expiration command parameters. 
Unless a reason exists to clear cache, leaving data in cache improves retrieval performance.\n\n## 5.2.6 Advanced application group storage management\n\nBy using the advanced storage management settings (Figure 5-11), you can adjust the size of the load object and determine when report data, indexes, and resources are migrated to archive storage.\n\nFigure 5-11 Advanced application group storage management\n\n\n\n## Object Size\n\nThe Object Size parameter determines the size of a storage object in kilobytes (KB). Content Manager OnDemand, by default, segments and compresses stored data into 10 MB storage objects. The default of 10 MB is the most commonly used object size value.\n\nImportant: Be careful when you change the value for Object Size. Setting the value too small or too large can adversely affect load performance. However, increasing this value might be necessary if you load large files and run out of Object IDs during the loading process.\n\nNote: The object size that is defined here must be equal to or larger than the size of the compressed storage objects that are defined in any application that is assigned to the application group.", - "page_start": 126, - "page_end": 126, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv1.pdf", - "query": "What is the aim of LLM routers ?", - "target_page": 1, - "target_passage": "LLM routers aim to balance quality and cost of generation by classifying queries and routing them to a cheaper or more expensive LLM depending on their complexity. ", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "In contrast to routers motivated by controlling costs, several LLM router designs focus solely on improving quality of responses [31, 45, 57, 58].\n\nThe LLM routers described thus far do not modify the queries or individual LLM responses. Other types of control planes do. 
Ensemble approaches such as mixture-of-expert (MoE) [29, 30, 52, 56] architectures select a subset of underlying models to apply to each token of a query and merge their responses. LLM synthesis [40] architectures operate similarly, but route the entire query to a subset of underlying LLMs and merge their responses. These approaches reduce inference costs by using fewer and/or less complex underlying models.\n\nApplications of LLM routers. A key use case for LLM routers is to help LLM-based application reduce cost. Several commercial routers, including Unify [12], Martian [5], NotDiamond [7], and others, offer this as a service. By replacing a few lines of code, the application can send user queries to a router service, rather than directly to some LLM provider. The service selects the optimal LLM and forwards the queries. Commercial router services claim that this results in significant cost savings: up to 98% in the case of Martian [5], and 10 × in the case of NotDiamond [7].\n\n## 3 LLMControl Plane Integrity\n\nIn this section, we define LLM control plane integrity . Informally, it means that decisions made about underlying LLM queries made by the control plane algorithms cannot be subverted by adversarial queries. Looking ahead, we will focus on one class of control plane: predictive LLM routing as used to manage cost.\n\nFormalizing control planes. An LLM control plane R ω is a potentially randomized algorithm. It is parameterized by a string ω , called the parameters. It utilizes some number n of LLMs denoted by M . We will mostly focus on the case of n = 2 , and, for reasons that will be clear in a moment, use M s ('strong') and M w ('weak') to denote the two underlying LLMs. Then inference on an input x ∈ X for some set X of allowed queries is performed by computing a response via y ← $ R M ω ( x ) . Here we use ← $ to denote running R with fresh random coins; we use ← when R is deterministic. 
We focus on inference for a single query, but it is straightforward to extend our abstraction for control planes to include sessions: the controller would maintain state across invocations, potentially adapting its behavior as a function of a sequence of queries and responses.\n\nLLM control planes should, in general, be relatively computationally lightweight, at least compared to the underlying LLMs. This is particularly so in the cost-motivated usage of control planes, as a computationally or financially expensive control plane would eat into cost savings incurred by utilizing cheaper underlying LLMs for some queries. For example, predictive binary routers use relatively simple classifiers to determine which of M s or M w should be used to respond to a query.\n\nInference flow. Given a set of LLMs M , a control plane R ω , and an input x , an LLM inference flow is the sequence of LLM invocations M i j ( z j ) for 1 ≤ j ≤ m and i j ∈ { w , s } made when executing R M ω ( x ) . Here m is the total number of LLM invocations, and z 1 , . . . , z m are the queries made to the underlying LLMs. Should R be randomized, the sequence and its length are random variables. An inference flow can be written as a transcript\n\nT = ( i 1 , z 1 ) , ( i 2 , z 2 ) , . . . , ( i m , z m )\n\nof pairs of model indexes i j ∈ { w , s } and model inputs z j . Note that for simplicity we ignore the potential for parallelization, assuming execution proceeds serially. For binary routers, we have m = 1 and T ∈ { ( w , x ) , ( s , x ) } . We write submitting a sequence of inferences ⃗x = ⃗x 1 , . . . , ⃗x q to a control plane as\n\nR M ω ( ⃗x ) = ( R M ω ( ⃗x 1 ) , . . . 
, R M ω ( ⃗x q ))", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv1.pdf" - }, - { - "text": "## REROUTING LLM ROUTERS\n\n## A PREPRINT\n\n| Avital Shafran | Roei Schuster | Thomas Ristenpart | Vitaly Shmatikov |\n|-----------------------|-----------------|---------------------|--------------------|\n| The Hebrew University | Wild Moose | Cornell Tech | Cornell Tech |\n\n## ABSTRACT\n\nLLM routers aim to balance quality and cost of generation by classifying queries and routing them to a cheaper or more expensive LLM depending on their complexity. Routers represent one type of what we call LLM control planes: systems that orchestrate use of one or more LLMs. In this paper, we investigate routers' adversarial robustness.\n\nWe first define LLM control plane integrity, i.e., robustness of LLM orchestration to adversarial inputs, as a distinct problem in AI safety. Next, we demonstrate that an adversary can generate queryindependent token sequences we call 'confounder gadgets' that, when added to any query, cause LLM routers to send the query to a strong LLM.\n\nOur quantitative evaluation shows that this attack is successful both in white-box and black-box settings against a variety of open-source and commercial routers, and that confounding queries do not affect the quality of LLM responses. Finally, we demonstrate that gadgets can be effective while maintaining low perplexity, thus perplexity-based filtering is not an effective defense. We finish by investigating alternative defenses.\n\n## 1 Introduction\n\nLarge language models (LLMs) exhibit remarkable capabilities on many tasks. Today, hundreds of open-source and proprietary LLMs are available at different prices, ranging from expensive, state-of-the-art models to cheaper, smaller, less capable ones. LLM operators typically provide API access to their models (especially higher-quality models) on a pay-per-query basis. 
This imposes non-trivial costs on LLM-based applications and systems.\n\nDevelopers who want to integrate LLMs into their applications must therefore consider both utility and cost. They want to maximize the quality of responses to their queries while minimizing the cost. The two objectives conflict with each other: larger models tend to generate higher-quality answers but charge more per query. For example, at the time of this writing, GPT-3.5-turbo costs $0 . 5 / $1 . 5 per 1M input/output tokens, GPT-4o-mini $0 . 15 / $0 . 6 , GPT-4o $2 . 5 / $10 , o1-preview $15 / $60 . The difference in quality between models is not uniform across queries. For some queries, even a cheap model can generate an acceptable response. More complex queries require an expensive model to obtain a quality answer.\n\nA natural solution to balancing performance and economic considerations is to take advantage of the availability of multiple LLMs at different price-performance points. Recently proposed LLM routing systems [5, 12, 27, 47, 53] orchestrate two or more LLMs and adaptively route each query to the cheapest LLM they deem likely to generate a response of sufficient quality. In the two-LLM case, let M s be an expensive, high-quality model and M w a weaker, lower-grade one. Given query q , the routing algorithm R ( · ) applies a classifier to q that outputs 0 if M w is sufficient for answering q , or 1 if M s is required. The system then routes q accordingly.\n\nLLMrouting is an example of a general class of systems we call LLM control planes, which orchestrate the use of multiple LLMs to process inputs, as further described in Section 2.\n\nOur contributions. First, we introduce LLM control plane integrity as a novel problem in AI safety. Recently proposed LLM control-plane algorithms are learned, calibrated classifiers (see Section 2). Their inputs are queries from potentially adversarial users. 
Robustness of control-plane algorithms to adversarial queries is a new problem, distinct from adversarial robustness of the underlying LLMs.", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv1.pdf" - }, - { - "text": "Attacks against MoE. Mixture-of-Experts (MoE) architectures enable using multiple expert modules for processing a given query with a lower computational cost by including an inner routing mechanism that in every layer routes different tokens to a small number of experts [29, 30, 52, 56]. This can be thought of as an internal router within a single LLM, rather than an external control plane that orchestrates multiple LLMs. MoE has increased in popularity as it allows to build larger models at a fixed compute budget-not all parameters are used at the same time.\n\nHayes et al. [34] identified a vulnerability in MoE that can be exploited for a denial-of-service attack against MoE. Thus control plane integrity issues appear to extend to the context of single-LLM MoE systems, and future work could explore this connection further.\n\nYona et al. [67] presented a side-channel attack on MoE that enables an attacker to reveal other users' prompts. We expect that side-channel attacks against LLM control planes exist as well, for example, to infer which models are used via timing of responses. Such attacks, which target confidentiality, are outside the scope of control plane integrity.\n\n## 10 Conclusion\n\nLLM routers balance quality and cost of LLM inference by routing different queries to different LLMs. 
They are an example of a broader, emerging class of systems we call 'LLM control planes' that aim to achieve various quality, efficiency, and cost objectives by orchestrating use of multiple LLMs to respond to a query.", - "page_start": 16, - "page_end": 16, - "source_file": "arxiv1.pdf" - }, - { - "text": "| Surrogate Target | R MF | ˆ R SW R CLS | R LLM | R SW | ˆ R MF R CLS | R LLM | R SW | ˆ R CLS S FM | R LLM | R SW | ˆ R LLM R MF | R CLS |\n|--------------------|------------|----------------|------------|------------|----------------|------------|------------|----------------|------------|------------|----------------|------------|\n| LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 |\n| MT-Bench | - 0 . 1 | - 0 . 1 | 0 . 0 | - 0 . 1 | - 0 . 1 | 0 . 0 | - 0 . 1 | 0 . 0 | 0 . 1 | - 0 . 2 | - 0 . 1 | - 0 . 2 |\n| MMLU | - 0 . 1 | 0 . 3 | - 0 . 2 | 4 . 8 | 1 . 0 | 0 . 5 | 2 . 5 | - 1 . 3 | - 0 . 8 | 2 . 6 | - 0 . 9 | 0 . 3 |\n| GSM8K | 14 . 9 | 9 . 6 | 15 . 2 | 18 . 6 | 13 . 8 | 14 . 7 | 13 . 4 | 6 . 8 | 12 . 6 | 13 . 6 | 11 . 3 | 10 . 4 |\n| LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 |\n| MT-Bench | - 0 . 1 | - 0 . 1 | - 0 . 1 | - 0 . 2 | - 0 . 2 | - 0 . 2 | - 0 . 1 | - 0 . 1 | 0 . 0 | - 0 . 2 | - 0 . 2 | - 0 . 2 |\n| MMLU | 1 . 6 | 4 . 0 | 4 . 2 | 7 . 9 | 5 . 0 | 4 . 4 | 5 . 0 | - 2 . 9 | 3 . 2 | 5 . 2 | - 0 . 9 | 3 . 8 |\n| GSM8K | 13 . 6 | 8 . 7 | 18 . 5 | 18 . 9 | 14 . 4 | 18 . 3 | 13 . 1 | 4 . 0 | 15 . 5 | 11 . 3 | 8 . 4 | 10 . 8 |\n| LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 |\n| MT-Bench | 0 . 2 | 0 . 0 | 0 . 1 | - 0 . 1 | - 0 . 1 | 0 . 0 | 0 . 0 | 0 . 2 | 0 . 2 | - 0 . 1 | 0 . 1 | - 0 . 
1 |\n| MMLU | 5 . 0 | 6 . 8 | 5 . 8 | 11 . 3 | 9 . 1 | 4 . 7 | 8 . 1 | - 3 . 7 | 4 . 8 | 7 . 8 | 0 . 1 | 7 . 2 |\n| GSM8K | 20 . 5 | 13 . 4 | 20 . 9 | 24 . 3 | 18 . 6 | 21 . 6 | 17 . 9 | 11 . 2 | 18 . 9 | 16 . 7 | 15 . 2 | 14 . 2 |\n\nTable 7: Differences between average benchmark specific scores of responses to the original and confounded queries, when the confounder gadget was generated for a different surrogate router than the target (black-box setting) for three LLM pairs. Positive values indicate a higher average score for responses to the confounded queries; higher values are better for the attacker. Results are averaged across gadgets. Standard errors were omitted for readability and are on average 0 . 1 , 0 . 8 , and 1 . 8 for MT-bench, MMLU and GSM8K, respectively. Aligned with the white-box setting, results show almost no decrease in performance, and improvement when there is a performance gap for the LLM pair.\n\nResults for LLM pair 4. As discussed in Section 5, we replace the strong model that was used by Ong et al. [47], GPT-41106-preview (rank 28 in the Chatbot Arena leaderboard [1, 21]), with the open-sourced Llama-3.1-8B (rank 58) to reduce the costs of our extensive set of evaluations. In this section we perform a smaller-scale evaluation of the quality-enhancing attack performance when using GPT as the strong model, i.e., LLM pair 4. We evaluate this setting using three of the n = 10 confounder gadgets for each router.\n\n## 7 Rerouting Commercial Routers", - "page_start": 11, - "page_end": 11, - "source_file": "arxiv1.pdf" - }, - { - "text": "Figure 1: LLM routers classify queries and route complex ones to an expensive/strong model, others to a cheaper/weak model. To control costs, LLM routers can be calibrated to maintain (for an expected workload) a specific ratio between queries sent to the strong and weak models.\n\n\n\nTo initiate the study of this problem, we show that existing LLM routing algorithms are not adversarially robust. 
We design, implement, and evaluate a method that generates query-independent adversarial token sequences we call 'confounder gadgets.' If a gadget is added to any query, this query is routed to the strong model with high probability. Next, we show that this attack is effective even in the transfer setting where the adversary does not have full knowledge of the target LLM router (it is black-box), but has access to another router (e.g., an internally trained surrogate). We also evaluate the integrity of commercial LLM routers, showing that they can be confounded as well.\n\nThird, we investigate defenses. Our basic method generates gadgets that have anomalously high perplexity. Confounded queries are thus easily distinguished from normal queries and can be filtered out by the routing system. Unfortunately, this defense can be evaded by an adversary who incorporates a low-perplexity objective into the gadget generation algorithm, producing gadgets that have low perplexity-and yet are effective at re-routing queries to the strong model. We also discuss higher-level defenses, such as identifying users whose queries are routed to the strong model with abnormal frequency.\n\nRouting attacks can be deployed for various adversarial objectives, e.g., to ensure that the adversary always obtains the highest-quality answer regardless of the target applications's internal routing policies and cost constraints, or to maliciously inflate the target's LLM costs. As LLM control planes grow in importance and sophistication, we hope that this work will motivate further research on their adversarial robustness.\n\n## 2 LLMControl Planes and Routing\n\nInference using large language models (LLMs) is traditionally monolithic: a single model is applied to an input or sequence of inputs. This methodology can be sub-optimal for various reasons. State-of-the-art models are often expensive, with API access to LLMs costing as much as several dollars for each query. 
Elsewhere, distinct LLMs may excel at different tasks, and selectively using them may improve overall quality on a diverse workload. Finally, combining multiple LLMs, even all trained for similar tasks, may become increasingly prevalent as performance improvements of individual LLMs plateaus [8-10].\n\nResearchers and practitioners are therefore now developing inference architectures that use multiple LLMs to answer queries. These LLMs are orchestrated by what we call an LLM control plane (borrowing the terminology from networking [13]). The control plane may route queries or parts of queries to different LLMs, derive new strings to query to underlying LLMs, combine answers from underlying LLMs, and more.\n\nLLM routers. A prominent example of this emerging class of LLM control planes are LLM routers [27, 41, 47, 53, 59]. LLM routers decide which of the two (or, sometimes, more) LLMs to use to answer a query. In prescriptive routing, the router applies some lightweight classifier to the input query that determines which underlying LLM to utilize for a response. The classifier is itself a learned function that scores the complexity of the query. Deployments can then configure a score threshold for when to route a query to the more expensive LLM. This threshold can be tuned using representative workloads to achieve a desired cost-performance trade-off. Figure 1 shows the basic workflow of binary LLM routers.", - "page_start": 1, - "page_end": 1, - "source_file": "arxiv1.pdf" - }, - { - "text": "Table 4: Average benchmark-specific scores of responses to the original and confounded queries with Mistral-7B-Instructv0.3 (LLM pair 2) or Llama-2-7B-chat-hf (LLM pair 3) as the weak model, in the white-box setting. 
Results further emphasize that the rerouting attack improves quality of responses when there is a significant gap between the weak and strong LLMs.\n\n| | R SW | R SW | R MF | R MF | R CLS | R CLS | R LLM | R LLM |\n|------------|------------|---------------|------------|---------------|------------|---------------|------------|---------------|\n| | Orig. | Conf. | Orig. | Conf. | Orig. | Conf. | Orig. | Conf. |\n| | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 |\n| MT-Bench | 8 . 5 | 8 . 3 ± 0 . 0 | 8 . 4 | 8 . 3 ± 0 . 1 | 8 . 4 | 8 . 4 ± 0 . 1 | 8 . 4 | 8 . 3 ± 0 . 1 |\n| MMLU | 55 | 64 ± 1 | 63 | 64 ± 0 | 58 | 66 ± 1 | 62 | 66 ± 0 |\n| GSM8K | 46 | 64 ± 1 | 51 | 67 ± 1 | 49 | 63 ± 1 | 38 | 63 ± 2 |\n| LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 |\n| MT-Bench | 8 . 4 | 8 . 3 ± 0 . 0 | 8 . 1 | 8 . 3 ± 0 . 1 | 8 . 3 | 8 . 4 ± 0 . 1 | 8 . 1 | 8 . 2 ± 0 . 1 |\n| MMLU | 51 | 64 ± 1 | 57 | 63 ± 1 | 52 | 66 ± 1 | 59 | 66 ± 1 |\n| GSM8K | 40 | 64 ± 1 | 44 | 67 ± 1 | 45 | 63 ± 1 | 37 | 64 ± 1 |\n\nTable 8 and Table 9 show the results for the white-box and black-box settings, respectively. (Here, percentage numbers are not averaged and there is no standard error since we used a single gadget per query.) The white-box results are nearly perfect; the black-box results are often better but sometimes somewhat worse than those for query-independent gadgets. 
We conjecture that this is due to some level of overfitting.\n\n| Surrogate Target | R MF | ˆ R SW R CLS | R LLM | R SW | ˆ R MF R CLS | R LLM | R SW | ˆ R CLS S FM | R LLM | R SW | ˆ R LLM R MF | R CLS |\n|--------------------|--------|----------------|---------|---------|----------------|---------|---------|----------------|---------|---------|----------------|---------|\n| MT-Bench | 99 ± 1 | 88 ± 5 | 45 ± 5 | 100 ± 0 | 96 ± 2 | 39 ± 3 | 100 ± 0 | 79 ± 9 | 51 ± 5 | 100 ± 0 | 83 ± 5 | 85 ± 7 |\n| MMLU | 66 ± 5 | 44 ± 11 | 81 ± 3 | 82 ± 4 | 56 ± 7 | 74 ± 2 | 64 ± 6 | 16 ± 7 | 80 ± 5 | 53 ± 4 | 20 ± 5 | 46 ± 11 |\n| GSM8K | 99 ± 1 | 72 ± 11 | 63 ± 4 | 92 ± 2 | 88 ± 3 | 62 ± 4 | 76 ± 6 | 60 ± 9 | 65 ± 8 | 60 ± 8 | 70 ± 7 | 73 ± 10 |\n\nTable 5: Average upgrade rates for our attack in the black-box setting. This is the average percentage of queries rerouted from the weak to strong model under the target router due to a confounder gadget generated using the surrogate. The average downgrade rate (i.e., strong-to-weak rerouting) is 1 . 2% across all routers. Upgrade rates are lower than in the white-box setting but still high, indicating that the attack transfers.\n\nabnormal about the query. Intuitively, this reflects the fact that while LLMs are built to be robust to noisy inputs, the router itself is not.\n\nIn summary, the attack is highly successful at rerouting queries from the weak to the strong model. Overall, quality improves if there is a significant gap between the strong and weak LLMs used by the router. Either way, confounding has no negative impact on the quality of responses.", - "page_start": 10, - "page_end": 10, - "source_file": "arxiv1.pdf" - }, - { - "text": "## 7 Rerouting Commercial Routers\n\nWe evaluate our rerouting attack on several commercial routers: Unify [12], NotDiamond [7], OpenRouter [11], and Martian [5]. These routers are available through black-box APIs. 
Therefore, we use our black-box attack with the 40 gadgets optimized for the open-sourced routers R SW , R MF , R CLS , and R LLM ( 10 per router). We perform this evaluation using the MT-bench benchmark.\n\nUnify. This router lets users specify a list of models from different providers and a metric configuration for routing decisions. The available metrics are quality, time to first token, inter-token latency, and cost. The user can specify the weight for each metric. Time, latency, and cost metrics are static and precomputed. The quality metric is computed for", - "page_start": 11, - "page_end": 11, - "source_file": "arxiv1.pdf" - }, - { - "text": "We introduced and defined a new safety property, LLM control plane integrity . Informally, this property holds if an adversarial user cannot influence routing decisions made by the control plane. To show that existing LLM routers do not satisfy this property, we designed, implemented, and evaluated a black-box optimization method for generating queryindependent 'confounder gadgets.' When added to any query, the confounder gadget confuses the router into routing the query to the adversary-chosen LLM.\n\nWe evaluated the efficacy of confounder gadgets on multiple open-source and commercial routers and demonstrated that they successfully reroute queries without a negative impact on the quality of responses. We also discussed defenses against these attacks and indicated directions for future research.\n\n## Acknowledgments\n\nThis research was supported in part by the Google Cyber NYC Institutional Research Program, the Israel Science Foundation (Grant No. 1336/22), and the European Union (ERC, FTRC, 101043243). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. 
Neither the European Union nor the granting authority can be held responsible for them.", - "page_start": 17, - "page_end": 17, - "source_file": "arxiv1.pdf" - }, - { - "text": "We have experimented with variations of this approach that don't work quite as well, for example adding c as a suffix instead of a prefix. See Appendix B for details.\n\n## 5 Open-Source Routers: Experimental Setup\n\nTo evaluate efficacy of confounder gadgets generated using the method from Section 4, we perform experiments with several LLM routers. This section explains our experimental setup for the open-source routers proposed in the research literature [47]; results of this evaluation appear in Section 6. In Section 7, we discuss experiments with proprietary, commercial routers. Figure 3 shows the summary of our experimental setup.", - "page_start": 5, - "page_end": 5, - "source_file": "arxiv1.pdf" - }, - { - "text": "an extra potentially expensive LLM invocation for each query processed by the router. Second, it may degrade the quality of responses from the destination LLMs, which are sensitive to the phrasing of queries and prompts.\n\nDetecting anomalous user workloads. Another possible defense requires the router to monitor individual user workloads, and identify those users whose queries are routed to the strongest model with an abnormally high frequency. The router can then impose a user-specific threshold. Of course such workloads may have a benign explanation, e.g., the user's queries may be unusually complex. Even so, routers could potentially be designed to perform user-specific routing. For example, one could imagine using per-user thresholds that are calibrated dynamically to attempt to maintain a consistent fraction of queries being routed to the strong model.\n\nSuch user-specific routing would complicate implementations, and would make inaccurate decisions for a user until there is sufficient data about their queries. 
The latter is relevant in adversarial settings, since such an approach would still be circumventable should attackers be able to mount Sybil attacks in which the attacker creates a new user for, in the limit, each query.\n\n## 9 Related Work\n\nEvasion attacks against ML systems. A large body of work has investigated evasion attacks against ML systems [25, 43, 60], also referred to as adversarial examples [32, 48, 49], and these attacks are now being explored in the context of multi-modal LLMs [28] as well as text-only LLMs (for just one example, see [22]). We discussed in Section 3 how our results compare: LLM control plane integrity is a distinct AI safety issue, but related in that: (1) control plane integrity attacks may use evasion-style techniques, and (2) control plane integrity attacks might be useful for performing evasion.\n\nPrompt injection against LLMs. Prompt injection is a class of attacks against LLMs in which the adversary manipulates the prompt, i.e., the textual input fed directly to the LLM, causing the LLM to generate outputs that satisfy some adversarial objective [50, 64]. Evasion attacks as discussed above can use prompt injection, jailbreaking attacks being a widely explored example in which the adversary aims to bypass some safety guardrail included in the LLM system, such as 'do not output expletives' [23, 42, 54, 66, 72, 73].\n\nPrompt injection is also used for extraction attacks that aim to infer some information from or about the model, for example, the system prompt [50, 54, 70], training data samples [46], or model parameters [18]. In indirect prompt injection attacks [33], the adversaries do not directly interact with the target LLM, and instead inject adversarial inputs into thirdparty data, which is then added to the LLM prompt (intentionally or unintentionally) by the victim application and/or its users. 
This relates to another category of attacks that target LLM-based applications, such as RAG systems, and invalidate their integrity by exploiting the weaknesses of the underlying LLM [19, 55].\n\nOur attacks also modify queries, but with a different aim than the above types of attacks: undermining the integrity of the control plane routing, rather than the LLM itself. Future work might investigate indirect control plane integrity attacks that, analogously to indirect prompt injection, serve to somehow trick users of a routing system into forming controlplane-confounding queries.", - "page_start": 16, - "page_end": 16, - "source_file": "arxiv1.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv1.pdf", - "query": "What is an LLM control plane ?", - "target_page": 3, - "target_passage": " An LLM control plane Rω is a potentially randomized algorithm.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "In contrast to routers motivated by controlling costs, several LLM router designs focus solely on improving quality of responses [31, 45, 57, 58].\n\nThe LLM routers described thus far do not modify the queries or individual LLM responses. Other types of control planes do. Ensemble approaches such as mixture-of-expert (MoE) [29, 30, 52, 56] architectures select a subset of underlying models to apply to each token of a query and merge their responses. LLM synthesis [40] architectures operate similarly, but route the entire query to a subset of underlying LLMs and merge their responses. These approaches reduce inference costs by using fewer and/or less complex underlying models.\n\nApplications of LLM routers. A key use case for LLM routers is to help LLM-based application reduce cost. Several commercial routers, including Unify [12], Martian [5], NotDiamond [7], and others, offer this as a service. 
By replacing a few lines of code, the application can send user queries to a router service, rather than directly to some LLM provider. The service selects the optimal LLM and forwards the queries. Commercial router services claim that this results in significant cost savings: up to 98% in the case of Martian [5], and 10 × in the case of NotDiamond [7].\n\n## 3 LLMControl Plane Integrity\n\nIn this section, we define LLM control plane integrity . Informally, it means that decisions made about underlying LLM queries made by the control plane algorithms cannot be subverted by adversarial queries. Looking ahead, we will focus on one class of control plane: predictive LLM routing as used to manage cost.\n\nFormalizing control planes. An LLM control plane R ω is a potentially randomized algorithm. It is parameterized by a string ω , called the parameters. It utilizes some number n of LLMs denoted by M . We will mostly focus on the case of n = 2 , and, for reasons that will be clear in a moment, use M s ('strong') and M w ('weak') to denote the two underlying LLMs. Then inference on an input x ∈ X for some set X of allowed queries is performed by computing a response via y ← $ R M ω ( x ) . Here we use ← $ to denote running R with fresh random coins; we use ← when R is deterministic. We focus on inference for a single query, but it is straightforward to extend our abstraction for control planes to include sessions: the controller would maintain state across invocations, potentially adapting its behavior as a function of a sequence of queries and responses.\n\nLLM control planes should, in general, be relatively computationally lightweight, at least compared to the underlying LLMs. This is particularly so in the cost-motivated usage of control planes, as a computationally or financially expensive control plane would eat into cost savings incurred by utilizing cheaper underlying LLMs for some queries. 
For example, predictive binary routers use relatively simple classifiers to determine which of M s or M w should be used to respond to a query.\n\nInference flow. Given a set of LLMs M , a control plane R ω , and an input x , an LLM inference flow is the sequence of LLM invocations M i j ( z j ) for 1 ≤ j ≤ m and i j ∈ { w , s } made when executing R M ω ( x ) . Here m is the total number of LLM invocations, and z 1 , . . . , z m are the queries made to the underlying LLMs. Should R be randomized, the sequence and its length are random variables. An inference flow can be written as a transcript\n\nT = ( i 1 , z 1 ) , ( i 2 , z 2 ) , . . . , ( i m , z m )\n\nof pairs of model indexes i j ∈ { w , s } and model inputs z j . Note that for simplicity we ignore the potential for parallelization, assuming execution proceeds serially. For binary routers, we have m = 1 and T ∈ { ( w , x ) , ( s , x ) } . We write submitting a sequence of inferences ⃗x = ⃗x 1 , . . . , ⃗x q to a control plane as\n\nR M ω ( ⃗x ) = ( R M ω ( ⃗x 1 ) , . . . , R M ω ( ⃗x q ))", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv1.pdf" - }, - { - "text": "| Surrogate Target | R MF | ˆ R SW R CLS | R LLM | R SW | ˆ R MF R CLS | R LLM | R SW | ˆ R CLS S FM | R LLM | R SW | ˆ R LLM R MF | R CLS |\n|--------------------|------------|----------------|------------|------------|----------------|------------|------------|----------------|------------|------------|----------------|------------|\n| LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 |\n| MT-Bench | - 0 . 1 | - 0 . 1 | 0 . 0 | - 0 . 1 | - 0 . 1 | 0 . 0 | - 0 . 1 | 0 . 0 | 0 . 1 | - 0 . 2 | - 0 . 1 | - 0 . 2 |\n| MMLU | - 0 . 1 | 0 . 3 | - 0 . 2 | 4 . 8 | 1 . 0 | 0 . 5 | 2 . 5 | - 1 . 3 | - 0 . 8 | 2 . 6 | - 0 . 9 | 0 . 3 |\n| GSM8K | 14 . 9 | 9 . 6 | 15 . 2 | 18 . 6 | 13 . 8 | 14 . 7 | 13 . 4 | 6 . 8 | 12 . 6 | 13 . 
6 | 11 . 3 | 10 . 4 |\n| LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 |\n| MT-Bench | - 0 . 1 | - 0 . 1 | - 0 . 1 | - 0 . 2 | - 0 . 2 | - 0 . 2 | - 0 . 1 | - 0 . 1 | 0 . 0 | - 0 . 2 | - 0 . 2 | - 0 . 2 |\n| MMLU | 1 . 6 | 4 . 0 | 4 . 2 | 7 . 9 | 5 . 0 | 4 . 4 | 5 . 0 | - 2 . 9 | 3 . 2 | 5 . 2 | - 0 . 9 | 3 . 8 |\n| GSM8K | 13 . 6 | 8 . 7 | 18 . 5 | 18 . 9 | 14 . 4 | 18 . 3 | 13 . 1 | 4 . 0 | 15 . 5 | 11 . 3 | 8 . 4 | 10 . 8 |\n| LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 |\n| MT-Bench | 0 . 2 | 0 . 0 | 0 . 1 | - 0 . 1 | - 0 . 1 | 0 . 0 | 0 . 0 | 0 . 2 | 0 . 2 | - 0 . 1 | 0 . 1 | - 0 . 1 |\n| MMLU | 5 . 0 | 6 . 8 | 5 . 8 | 11 . 3 | 9 . 1 | 4 . 7 | 8 . 1 | - 3 . 7 | 4 . 8 | 7 . 8 | 0 . 1 | 7 . 2 |\n| GSM8K | 20 . 5 | 13 . 4 | 20 . 9 | 24 . 3 | 18 . 6 | 21 . 6 | 17 . 9 | 11 . 2 | 18 . 9 | 16 . 7 | 15 . 2 | 14 . 2 |\n\nTable 7: Differences between average benchmark specific scores of responses to the original and confounded queries, when the confounder gadget was generated for a different surrogate router than the target (black-box setting) for three LLM pairs. Positive values indicate a higher average score for responses to the confounded queries; higher values are better for the attacker. Results are averaged across gadgets. Standard errors were omitted for readability and are on average 0 . 1 , 0 . 8 , and 1 . 8 for MT-bench, MMLU and GSM8K, respectively. Aligned with the white-box setting, results show almost no decrease in performance, and improvement when there is a performance gap for the LLM pair.\n\nResults for LLM pair 4. As discussed in Section 5, we replace the strong model that was used by Ong et al. 
[47], GPT-4-1106-preview (rank 28 in the Chatbot Arena leaderboard [1, 21]), with the open-sourced Llama-3.1-8B (rank 58) to reduce the costs of our extensive set of evaluations. In this section we perform a smaller-scale evaluation of the quality-enhancing attack performance when using GPT as the strong model, i.e., LLM pair 4. We evaluate this setting using three of the n = 10 confounder gadgets for each router.\n\n## 7 Rerouting Commercial Routers", - "page_start": 11, - "page_end": 11, - "source_file": "arxiv1.pdf" - }, - { - "text": "Attacks against MoE. Mixture-of-Experts (MoE) architectures enable using multiple expert modules for processing a given query with a lower computational cost by including an inner routing mechanism that in every layer routes different tokens to a small number of experts [29, 30, 52, 56]. This can be thought of as an internal router within a single LLM, rather than an external control plane that orchestrates multiple LLMs. MoE has increased in popularity as it allows building larger models at a fixed compute budget: not all parameters are used at the same time.\n\nHayes et al. [34] identified a vulnerability in MoE that can be exploited for a denial-of-service attack against MoE. Thus control plane integrity issues appear to extend to the context of single-LLM MoE systems, and future work could explore this connection further.\n\nYona et al. [67] presented a side-channel attack on MoE that enables an attacker to reveal other users' prompts. We expect that side-channel attacks against LLM control planes exist as well, for example, to infer which models are used via timing of responses. Such attacks, which target confidentiality, are outside the scope of control plane integrity.\n\n## 10 Conclusion\n\nLLM routers balance quality and cost of LLM inference by routing different queries to different LLMs. 
They are an example of a broader, emerging class of systems we call 'LLM control planes' that aim to achieve various quality, efficiency, and cost objectives by orchestrating use of multiple LLMs to respond to a query.", - "page_start": 16, - "page_end": 16, - "source_file": "arxiv1.pdf" - }, - { - "text": "Table 4: Average benchmark-specific scores of responses to the original and confounded queries with Mistral-7B-Instructv0.3 (LLM pair 2) or Llama-2-7B-chat-hf (LLM pair 3) as the weak model, in the white-box setting. Results further emphasize that the rerouting attack improves quality of responses when there is a significant gap between the weak and strong LLMs.\n\n| | R SW | R SW | R MF | R MF | R CLS | R CLS | R LLM | R LLM |\n|------------|------------|---------------|------------|---------------|------------|---------------|------------|---------------|\n| | Orig. | Conf. | Orig. | Conf. | Orig. | Conf. | Orig. | Conf. |\n| | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 |\n| MT-Bench | 8 . 5 | 8 . 3 ± 0 . 0 | 8 . 4 | 8 . 3 ± 0 . 1 | 8 . 4 | 8 . 4 ± 0 . 1 | 8 . 4 | 8 . 3 ± 0 . 1 |\n| MMLU | 55 | 64 ± 1 | 63 | 64 ± 0 | 58 | 66 ± 1 | 62 | 66 ± 0 |\n| GSM8K | 46 | 64 ± 1 | 51 | 67 ± 1 | 49 | 63 ± 1 | 38 | 63 ± 2 |\n| LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 |\n| MT-Bench | 8 . 4 | 8 . 3 ± 0 . 0 | 8 . 1 | 8 . 3 ± 0 . 1 | 8 . 3 | 8 . 4 ± 0 . 1 | 8 . 1 | 8 . 2 ± 0 . 1 |\n| MMLU | 51 | 64 ± 1 | 57 | 63 ± 1 | 52 | 66 ± 1 | 59 | 66 ± 1 |\n| GSM8K | 40 | 64 ± 1 | 44 | 67 ± 1 | 45 | 63 ± 1 | 37 | 64 ± 1 |\n\nTable 8 and Table 9 show the results for the white-box and black-box settings, respectively. (Here, percentage numbers are not averaged and there is no standard error since we used a single gadget per query.) 
The white-box results are nearly perfect; the black-box results are often better but sometimes somewhat worse than those for query-independent gadgets. We conjecture that this is due to some level of overfitting.\n\n| Surrogate Target | R MF | ˆ R SW R CLS | R LLM | R SW | ˆ R MF R CLS | R LLM | R SW | ˆ R CLS S FM | R LLM | R SW | ˆ R LLM R MF | R CLS |\n|--------------------|--------|----------------|---------|---------|----------------|---------|---------|----------------|---------|---------|----------------|---------|\n| MT-Bench | 99 ± 1 | 88 ± 5 | 45 ± 5 | 100 ± 0 | 96 ± 2 | 39 ± 3 | 100 ± 0 | 79 ± 9 | 51 ± 5 | 100 ± 0 | 83 ± 5 | 85 ± 7 |\n| MMLU | 66 ± 5 | 44 ± 11 | 81 ± 3 | 82 ± 4 | 56 ± 7 | 74 ± 2 | 64 ± 6 | 16 ± 7 | 80 ± 5 | 53 ± 4 | 20 ± 5 | 46 ± 11 |\n| GSM8K | 99 ± 1 | 72 ± 11 | 63 ± 4 | 92 ± 2 | 88 ± 3 | 62 ± 4 | 76 ± 6 | 60 ± 9 | 65 ± 8 | 60 ± 8 | 70 ± 7 | 73 ± 10 |\n\nTable 5: Average upgrade rates for our attack in the black-box setting. This is the average percentage of queries rerouted from the weak to strong model under the target router due to a confounder gadget generated using the surrogate. The average downgrade rate (i.e., strong-to-weak rerouting) is 1 . 2% across all routers. Upgrade rates are lower than in the white-box setting but still high, indicating that the attack transfers.\n\nabnormal about the query. Intuitively, this reflects the fact that while LLMs are built to be robust to noisy inputs, the router itself is not.\n\nIn summary, the attack is highly successful at rerouting queries from the weak to the strong model. Overall, quality improves if there is a significant gap between the strong and weak LLMs used by the router. 
Either way, confounding has no negative impact on the quality of responses.", - "page_start": 10, - "page_end": 10, - "source_file": "arxiv1.pdf" - }, - { - "text": "## REROUTING LLM ROUTERS\n\n## A PREPRINT\n\n| Avital Shafran | Roei Schuster | Thomas Ristenpart | Vitaly Shmatikov |\n|-----------------------|-----------------|---------------------|--------------------|\n| The Hebrew University | Wild Moose | Cornell Tech | Cornell Tech |\n\n## ABSTRACT\n\nLLM routers aim to balance quality and cost of generation by classifying queries and routing them to a cheaper or more expensive LLM depending on their complexity. Routers represent one type of what we call LLM control planes: systems that orchestrate use of one or more LLMs. In this paper, we investigate routers' adversarial robustness.\n\nWe first define LLM control plane integrity, i.e., robustness of LLM orchestration to adversarial inputs, as a distinct problem in AI safety. Next, we demonstrate that an adversary can generate queryindependent token sequences we call 'confounder gadgets' that, when added to any query, cause LLM routers to send the query to a strong LLM.\n\nOur quantitative evaluation shows that this attack is successful both in white-box and black-box settings against a variety of open-source and commercial routers, and that confounding queries do not affect the quality of LLM responses. Finally, we demonstrate that gadgets can be effective while maintaining low perplexity, thus perplexity-based filtering is not an effective defense. We finish by investigating alternative defenses.\n\n## 1 Introduction\n\nLarge language models (LLMs) exhibit remarkable capabilities on many tasks. Today, hundreds of open-source and proprietary LLMs are available at different prices, ranging from expensive, state-of-the-art models to cheaper, smaller, less capable ones. LLM operators typically provide API access to their models (especially higher-quality models) on a pay-per-query basis. 
This imposes non-trivial costs on LLM-based applications and systems.\n\nDevelopers who want to integrate LLMs into their applications must therefore consider both utility and cost. They want to maximize the quality of responses to their queries while minimizing the cost. The two objectives conflict with each other: larger models tend to generate higher-quality answers but charge more per query. For example, at the time of this writing, GPT-3.5-turbo costs $0 . 5 / $1 . 5 per 1M input/output tokens, GPT-4o-mini $0 . 15 / $0 . 6 , GPT-4o $2 . 5 / $10 , o1-preview $15 / $60 . The difference in quality between models is not uniform across queries. For some queries, even a cheap model can generate an acceptable response. More complex queries require an expensive model to obtain a quality answer.\n\nA natural solution to balancing performance and economic considerations is to take advantage of the availability of multiple LLMs at different price-performance points. Recently proposed LLM routing systems [5, 12, 27, 47, 53] orchestrate two or more LLMs and adaptively route each query to the cheapest LLM they deem likely to generate a response of sufficient quality. In the two-LLM case, let M s be an expensive, high-quality model and M w a weaker, lower-grade one. Given query q , the routing algorithm R ( · ) applies a classifier to q that outputs 0 if M w is sufficient for answering q , or 1 if M s is required. The system then routes q accordingly.\n\nLLM routing is an example of a general class of systems we call LLM control planes, which orchestrate the use of multiple LLMs to process inputs, as further described in Section 2.\n\nOur contributions. First, we introduce LLM control plane integrity as a novel problem in AI safety. Recently proposed LLM control-plane algorithms are learned, calibrated classifiers (see Section 2). Their inputs are queries from potentially adversarial users. 
Robustness of control-plane algorithms to adversarial queries is a new problem, distinct from adversarial robustness of the underlying LLMs.", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv1.pdf" - }, - { - "text": "e.g., cruise, climb, maneuvers, etc. The region of reversed command is encountered primarily in the low speed phases of flight during takeoff and landing. Because of the extensive low speed flight during carrier operations, the Naval Aviator will be more familiar with the region of reversed command than the ordinary pilot.\n\nThe characteristics of flight in the region of normal command are illustrated at point A on the second curve of figure 6.2. If the airplane is established in steady, level flight at point A, lift is equal to weight and the power available is set equal to the power required. When the airplane is disturbed to some airspeed slightly greater than point A, a power deficiency exists and, when the airplane is disturbed to some airspeed slightly lower than point A, a power excess exists. This relationship provides a tendency for the airplane to return to the equilibrium of point A and resume the original flight condition following a disturbance. Also, the static longitudinal stability of the airplane tends to return the airplane to the original trimmed CL and velocity corresponding to this CL. The phugoid usually has most satisfactory qualities at low values of CL, so the high speed of the region of normal command provides little tendency of the airplane's airspeed to vary or wander about.\n\nWith all factors considered, flight in the region of normal command is characterized by a relatively strong tendency of the airplane to maintain the trim speed quite naturally. However, flight in the region of normal command can lead to some unusual and erroneous impressions regarding proper flying technique. 
For example, if the airplane is established at point A in steady level flight, a controlled increase in airspeed without a change in power setting will create a deficiency of power and cause the airplane to descend. Similarly, a controlled decrease in airspeed without a change in power setting will create an excess of power and cause the airplane to climb. This fact, coupled with the transient motion of the airplane when the\n\n## NAVWEPS 00-80T-80 APPLICATION OF AERODYNAMICS TO SPECIFIC PROBLEMS OF FLYING\n\nangle of attack is changed rapidly, may lead to the impression that rate of climb and descent can be controlled by changes in angle of attack. While such is true in the region of normal command, for the conditions of steady flight, primary control of altitude remains the power setting and the primary control of airspeed remains the angle of attack. The impressions and habits that can be developed in the region of normal command can bring about disastrous consequences in the region of reversed command.\n\nThe characteristics of flight in the region of reversed command are illustrated at point B on the second curve of figure 6.2. If the airplane is established in steady, level flight at point B, lift is equal to weight and the power available is set equal to the power required. When the airplane is disturbed to some airspeed slightly greater than point B, an excess of power exists and, when the airplane is disturbed to some airspeed slightly lower than point B, a deficiency of power exists. This relationship is basically unstable because the variation of excess power to either side of point B tends to magnify any original disturbance. 
While the static longitudinal stability of the airplane tends to maintain the original trimmed CL and airspeed corresponding to that CL, the phugoid usually has the least satisfactory qualities at the high values of CL corresponding to low speed flight.", - "page_start": 372, - "page_end": 372, - "source_file": "00-80T-80.pdf" - }, - { - "text": "When all factors are considered, flight in the region of reversed command is characterized by a relatively weak tendency of the airplane to maintain the trim speed naturally. In fact it is likely that the airplane will exhibit no inherent tendency to maintain the trim speed in this regime of flight. For this reason, the pilot must give particular attention to precise control of airspeed when operating in the low flight speeds of the region of reversed command.\n\nWhile flight in the region of normal command may create doubt as to the primary control of airspeed and altitude, operation in the region of reversed command should leave little", - "page_start": 372, - "page_end": 372, - "source_file": "00-80T-80.pdf" - }, - { - "text": "We now discuss adversarial capabilities. We assume that our victim application's prompt includes a substring that can be controlled by the adversary. This represents many real-world apps such as chatbots, coding assistants, writing assistants, and others, that insert user inputs into an LLM prompt. In crafting adversarial portions of prompts, an adversary may have various levels of knowledge about the victim application's router. We consider the following knowledge settings:\n\n- · White-box setting : The adversary knows the control plane algorithm and its parameters ω .\n- · Black-box (transfer) setting : The adversary does not know the control plane algorithm R and ω for the target model, but knows instead another control plane algorithm R ' ω ' and its parameters. We refer to R ' ω ' as the surrogate . For example, this could arise if an adversary trains their own router using available data. 
In this setting our attacks are also zero-shot in that they do not require any interaction with the target control plane before the query that is being rerouted.\n\n## 4 Confounding Control Planes with Gadgets\n\nWe now turn to our main contribution: a methodology for attacking LLM control plane integrity. The key insight is that an adversary can modify queries to mislead or 'confound' the routing logic into routing these queries to an LLM of the adversary's choosing. Furthermore, we will demonstrate that these attacks can be black-box and query-independent , i.e., a single modification works for all queries and does not require advance knowledge of the specific router being attacked.", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv1.pdf" - }, - { - "text": "## NAVWEPS 00-80T-80 STABILITY AND CONTROL\n\nThe first four items can be effected only during design or by design changes. Some roll performance restriction is inevitable since all of the desirable characteristics are difficult to obtain without serious compromise elsewhere in the airplane design. The typical high speed airplane will have some sort of roll performance limitation provided by flight restrictions or automatic control devices to prevent reaching some critical condition from which recovery is impossible. Any roll restriction provided an airplane must be regarded as a principal flight operating limitation since the more severe motions can cause complete loss of control and structural failure.\n\n## HELICOPTER STABILITY AND CONTROL\n\nIn discussing many of the problems of stability and control that occur in high speed airplanes, one might be prone to believe that the slow flying helicopter does not have any such problems. Unfortunately, this is not the case. Flying qualities that would be considered totally unsatisfactory by fixed-wing standards are normal for helicopters. Helicopter pilots are living evidence that an unstable aircraft can be controlled. Also, they are evidence that 
control without stability requires constant attention and results in considerable pilot fatigue.\n\n'Inertia coupling' problems are relatively new to fixed-wing aircraft but a similar effect in the helicopter rotor has resulted in some of its most important characteristics. This aerodynamic-dynamic coupling effect is so important that it must be considered in discussing both stability and control. The helicopter derives both longitudinal and lateral control by tilting the main rotor and thus producing a pitching or rolling moment as indicated in figure 4.35. The magnitude of the rotor thrust, the angle of tilt, and the height of the rotor hub above the c.g. determine the control moment produced. It should be noted that low control effectiveness would result when the rotor thrust is low. Some helicopters", - "page_start": 336, - "page_end": 336, - "source_file": "00-80T-80.pdf" - }, - { - "text": "an extra potentially expensive LLM invocation for each query processed by the router. Second, it may degrade the quality of responses from the destination LLMs, which are sensitive to the phrasing of queries and prompts.\n\nDetecting anomalous user workloads. Another possible defense requires the router to monitor individual user workloads, and identify those users whose queries are routed to the strongest model with an abnormally high frequency. The router can then impose a user-specific threshold. Of course such workloads may have a benign explanation, e.g., the user's queries may be unusually complex. Even so, routers could potentially be designed to perform user-specific routing. For example, one could imagine using per-user thresholds that are calibrated dynamically to attempt to maintain a consistent fraction of queries being routed to the strong model.\n\nSuch user-specific routing would complicate implementations, and would make inaccurate decisions for a user until there is sufficient data about their queries. 
The latter is relevant in adversarial settings, since such an approach would still be circumventable should attackers be able to mount Sybil attacks in which the attacker creates a new user for, in the limit, each query.\n\n## 9 Related Work\n\nEvasion attacks against ML systems. A large body of work has investigated evasion attacks against ML systems [25, 43, 60], also referred to as adversarial examples [32, 48, 49], and these attacks are now being explored in the context of multi-modal LLMs [28] as well as text-only LLMs (for just one example, see [22]). We discussed in Section 3 how our results compare: LLM control plane integrity is a distinct AI safety issue, but related in that: (1) control plane integrity attacks may use evasion-style techniques, and (2) control plane integrity attacks might be useful for performing evasion.\n\nPrompt injection against LLMs. Prompt injection is a class of attacks against LLMs in which the adversary manipulates the prompt, i.e., the textual input fed directly to the LLM, causing the LLM to generate outputs that satisfy some adversarial objective [50, 64]. Evasion attacks as discussed above can use prompt injection, jailbreaking attacks being a widely explored example in which the adversary aims to bypass some safety guardrail included in the LLM system, such as 'do not output expletives' [23, 42, 54, 66, 72, 73].\n\nPrompt injection is also used for extraction attacks that aim to infer some information from or about the model, for example, the system prompt [50, 54, 70], training data samples [46], or model parameters [18]. In indirect prompt injection attacks [33], the adversaries do not directly interact with the target LLM, and instead inject adversarial inputs into thirdparty data, which is then added to the LLM prompt (intentionally or unintentionally) by the victim application and/or its users. 
This relates to another category of attacks that target LLM-based applications, such as RAG systems, and invalidate their integrity by exploiting the weaknesses of the underlying LLM [19, 55].\n\nOur attacks also modify queries, but with a different aim than the above types of attacks: undermining the integrity of the control plane routing, rather than the LLM itself. Future work might investigate indirect control plane integrity attacks that, analogously to indirect prompt injection, serve to somehow trick users of a routing system into forming control-plane-confounding queries.", - "page_start": 16, - "page_end": 16, - "source_file": "arxiv1.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv1.pdf", - "query": "What is a confounder gadget ?", - "target_page": 5, - "target_passage": " Given a query xi, we prepend a confounder gadget ci, which is a short sequence of adversarially chosen tokens.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Let B = { ˜ c 0 , . . . , ˜ c B } .\n\n - (3) Find the candidate that maximizes the score:\n\nc ( t +1) i ← arg max c ∈B S θ ( c ∥ x i ) . (1)\n\nThe final confounder c ( T ) i is used with query x i . We early abort if, after 25 iterations, there is no update to the confounder gadget. Technically, we could abort early if we find a confounder whose score exceeds τ . Running further can be useful when an adversary does not know τ .\n\nThe attack's runtime is dominated by T · B times the cost of executing S . In practice, S are designed to be fast (otherwise routers would significantly increase the latency of applications that use them). We report precise timings later; in summary, the attack is fast because we can set T to be relatively small and still find high-scoring confounders.\n\nDue to the randomness in index and token selection, the method converges to different, yet similarly effective, confounder gadgets on each run. 
Our evaluation will thus measure average performance over multiple gadgets.\n\nQuery-independent confounders. One downside of the per-query approach is that the adversary must repeat, for each query, the search for a good confounder. In practice, the adversary might prefer a query-independent attack. Our confounder gadget approach extends to this setting readily: perform the search routine above for an empty query. In other words, just ignore x i in the query-dependent attack above, replacing S θ ( c ∥ x i ) in Eq. 1 with S θ ( c ) . This finds a single query-independent confounder c that can be prefixed to all queries, i.e., ˆ x i = c ∥ x i . We will show that this works surprisingly well.\n\nIt is tempting to assume the reason a query-independent confounder works well is that a good scoring function should be roughly monotonic in query extensions, i.e., one might expect that S θ ( c ∥ x ) ≥ S θ ( c ) for almost any suffix x . This intuition is not correct. In our experiments, we found that S θ ( c ∥ x ) < S θ ( c ) for many x and some of the routers discussed below. Nevertheless, by ensuring that S θ ( c ) is pretty high (set the number of iterations T higher) the resulting query-independent confounder works well. That is, we at least get that S θ ( c ∥ x ) > S θ ( x ) .\n\nThe black-box setting: confounders that transfer. Finally, the attacks so far are in the white-box setting, where the attacker can optimize directly against S θ . While in some cases routing control planes will be public knowledge, in others, including the proprietary control planes we explore in Section 7, they are hidden. This gives rise to the black-box setting. While an attacker might seek to perform model extraction attacks [43, 65] to learn θ , we instead explore attacks that transfer from one router to another.\n\nIn more detail, we assume the adversary has access to a router R ' ω ' , called the surrogate , that is trained on data similar to that used for the target router. 
Then the attack is the same as above, except that we use the surrogate's scoring function S ' θ ' instead of the target's S θ . Again, we will see that this works surprisingly well: the query-independent confounders found for the surrogate transfer to successfully reroute queries against the target router.\n\nPutting it all together. In summary, our methodology for input adaptation attacks is:\n\n - (1) (Preprocessing) Develop a single query-independent confounder gadget c , using either the target router or surrogate to score the confounder.\n - (2) (Input adaptation) For each query x i , submit ˆ x i = c ∥ x i instead to obtain a response ˆ y i .\n\nThe confounder is applied to all queries, i.e., the adversary does not need to guess whether the original query would have been routed to the weak or strong model. In the rest of the paper, we demonstrate the confounders rarely result in 'downgrades,' i.e., rerouting of queries from the strong to weak model.", - "page_start": 5, - "page_end": 5, - "source_file": "arxiv1.pdf" - }, - { - "text": "We introduced and defined a new safety property, LLM control plane integrity . Informally, this property holds if an adversarial user cannot influence routing decisions made by the control plane. To show that existing LLM routers do not satisfy this property, we designed, implemented, and evaluated a black-box optimization method for generating queryindependent 'confounder gadgets.' When added to any query, the confounder gadget confuses the router into routing the query to the adversary-chosen LLM.\n\nWe evaluated the efficacy of confounder gadgets on multiple open-source and commercial routers and demonstrated that they successfully reroute queries without a negative impact on the quality of responses. 
We also discussed defenses against these attacks and indicated directions for future research.\n\n## Acknowledgments\n\nThis research was supported in part by the Google Cyber NYC Institutional Research Program, the Israel Science Foundation (Grant No. 1336/22), and the European Union (ERC, FTRC, 101043243). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.", - "page_start": 17, - "page_end": 17, - "source_file": "arxiv1.pdf" - }, - { - "text": "In Appendix C we evaluate optimization-free alternatives for generating our confounding gadgets, and show they significantly underperform our optimization-based approach.\n\nWhite-box confounder gadget generation. Following our attack framework described in Section 4, we construct a query-independent control-plane gadget designed to confuse each router. We start with the white-box setting, setting the batch size to B = 32 and the number of iterations to T = 100 , ignoring thresholds. We generate four sets of n = 10 gadgets, i.e., ten for each router. Examples of generated gadgets can be found in Appendix A.\n\nWhen reporting scores below, we therefore report the average over the n gadgets used with all 72 MT-bench queries, 100 randomly selected MMLU queries, and 100 randomly selected GSM8K queries. None of these testing queries were used in the training of the routers or their calibration.\n\nRuntime and convergence. Figure 4 shows the convergence rates for 10 different gadgets, against different routing algorithms. The overall average number of iterations before convergence is 58. Generation against R SW converges the", - "page_start": 7, - "page_end": 7, - "source_file": "arxiv1.pdf" - }, - { - "text": "\n\nand view content on demand. They can search content and control their PVR remotely from their smartphone. 
They can stream programming to their tablet anywhere in their home. A single Rogers Nextbox serves as a master PVR for the entire home enabling simultaneous viewing and recording of up to eight separate shows and storage of over 250 hours of high-definition programming. And customers can access television and movie content on-demand from anywhere by laptop, tablet or smartphone using the Rogers Anyplace TV app.\n\nTelevision has never been this good, this easy, or this simple to control. And it's even better when combined with innovative Rogers features, such as the ability to screen phone calls on their TV, listen to voicemail on their tablet, or receive talking text messages on their home phone. Wireless customers can also use Rogers One Number to switch calls\n\namong their computer, home phone and wireless device without interruption; manage e-mails; text messages and voicemail; hold live video chats; and combine and sync contacts from across multiple devices.\n\nWhen they're not at home, more and more customers also rely on Rogers Smart Home Monitoring, a complete monitoring, automation and security solution that includes the most innovative technology and features available. Smart Home Monitoring lets customers monitor, control and receive alerts by smartphone or online, staying connected to their home from almost anywhere, and enjoying the peace of mind that comes with having the most reliable monitoring solution available. Smart Home Monitoring also gives customers the ability to automate lights, appliances, thermostats and more, so they know their homes are not only secure but more energy-efficient and convenient, also.", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "To evaluate the effectiveness of such a defense against our attack, we compare the perplexity values of original and confounded queries. 
Figure 5 presents histograms of perplexity values for both the original evaluated GSM8K queries and their corresponding confounded versions, generated using one of the confounder gadgets, sampled uniformly at random. Additionally, the figure displays the ROC curve for the defense that detects confounded queries by checking if their perplexity exceeds a threshold. As can be seen, the confounded queries exhibit significantly higher perplexity values, making them readily distinguishable from the original queries. For instance, in the case of the R SW router, setting the threshold value at 55 yields a false-positive rate of 3% and a true-positive rate of 97% . Results are similar for other gadgets and benchmarks and were omitted due to space constraints.\n\nUnfortunately, this defense can be evaded if an adversary incorporates a perplexity constraint into the gadget generation process. To demonstrate the feasibility of this evasion strategy, we modify gadget generation to maximize the score of the routing algorithm R and simultaneously aligning the the gadget's perplexity to some predefined perplexity value. In more detail, in each iteration t ∈ [ T ] , we uniformly sample a target index j ∈ [1 , n ] and generate a set B of B +1 candidates as explained in Section 4. We then modify Eq. 1 such that we now find the candidate that maximizes the difference between the router's score and the perplexity constraint for the confounder:\n\nc ( t +1) ← arg max c ∈B ( S θ ( c ∥ x i ) -α · | PPL ( c ) -ρ | ) ,", - "page_start": 13, - "page_end": 13, - "source_file": "arxiv1.pdf" - }, - { - "text": "- [71] L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. Xing et al. , 'Judging LLMas-a-judge with MT-Bench and chatbot arena,' Advances in Neural Information Processing Systems (NeurIPS) , 2023.\n - [72] S. Zhu, R. Zhang, B. An, G. Wu, J. Barrow, Z. Wang, F. Huang, A. Nenkova, and T. 
Sun, 'AutoDAN: Automatic and interpretable adversarial attacks on large language models,' arXiv preprint arXiv:2310.15140 , 2023.\n - [73] A. Zou, Z. Wang, J. Z. Kolter, and M. Fredrikson, 'Universal and transferable adversarial attacks on aligned language models,' arXiv preprint arXiv:2307.15043 , 2023.\n\n## A Gadget Examples\n\nBelow are a few examples of the optimized confounder gadgets c we got, for the different routing algorithms:\n\n - · R SW :\n\nlegationbelongs967reglo'hui\n\n(DictionaryizedNameantal bidi.numberOf\n\ndemanding RESULTS! XVI-ioibili Written Hormchriftlatable\n\n - • R MF :\n\n041 personne.nama tomeTN Exact relac¸˜ao tinha seis.Description\n\nNUMAGEMENTMedian157 mut-camerauacao nije Hir Ter\n\n - • R CLS :\n\ncontiene DEM USERNAME fourteen chees781 consultants200 inici DOJ\n\n571:\n\nOrd:nth Norwegian Mercer docs Abr226 METADATA\n\n - • R LLM :\n\ndated:frameifyumi345 Kurdasciiuzeiphertext\n\nMidnightexecution431!784 below1 unwrap : / n / n\n\n## B Ablation Study\n\nIn this section, we evaluate the effect of different hyperparameters and design choices (in the white-box setting).\n\nPrefix vs. suffix. As described in Section 4, we prepend the confounder gadget to the query. An alternative is to append it. This is straightforward for MT-bench and GSM8K, but MMLU consists of multi-choice questions followed by a list of possible answers, and the term 'Answer:'. We insert the gadget at the end of the question text and before the possible answers. If we append it at the very end, after 'Answer:', the LLM assumes the query was answered and in many cases does not generate any output at all.\n\nTable 12 shows that average upgrade rates are similar regardless of whether the gadget was inserted as a prefix or a suffix. For MMLU, prefix works better. The downgrade rate is 0% in all cases.", - "page_start": 21, - "page_end": 21, - "source_file": "arxiv1.pdf" - }, - { - "text": "- 4. 
When the user needs the device repaired, please take the device to our company or our company's dealership.\n - 5. All functions of the device please refer to the actual product.\n\nPurchase date: IMEI code: Where to buy: Customer Signature: Signature of Store Clerk: Stamp of Store:\n\n## FCC Caution:\n\nThis device complies with part 15 of the FCC Rules. Operation is subject to the following two conditions: (1) This device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation.\n\nAny changes or modifications not expressly approved by the party responsible for compliance could void the user's authority to operate the equipment.\n\nNOTE: This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation. This equipment generates, uses and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation.\n\nIf this equipment does cause harmful interference to radio or television reception,\n\nwhich can be determined by turning the equipment off and on, the user is encouraged to try to correct the interference by one or more of the following measures:\n\n - -- Reorient or relocate the receiving antenna.\n - -- Increase the separation between the equipment and receiver.\n - -- Connect the equipment into an outlet on a circuit different\n\nfrom that to which the receiver is connected.\n\n - -- Consult the dealer or an experienced radio/TV technician for help.\n\nThe device has been evaluated to meet general RF exposure requirement. 
The device can be used in portable exposure condition without restriction.\n\nFCC ID:2A54U-DT3MATE", - "page_start": 9, - "page_end": 9, - "source_file": "6126797.pdf" - }, - { - "text": "## B . Bind to the APP\n\n## 1. APP download method\n\n## 1.1 Scan the QR code to download\n\n\n\n1.2 Search the application at App market and download For Android users:\n\nSearch for \"WearPro\" in the Google Play app store or any customized Android store to download, remember to check the pop-up box on your phone when installing, and agree to the permission. For iOS users:\n\nSearch for \"WearPro\" in the APP Store to download, remember to check the pop-up box on your phone when installing, and agree to the permission.\n\n\n\nAfter WearPro is installed, the app icon appears as\n\n.\n\n## 2.Bind Bluetooth\n\n\n\n## 2.1 Unconnected to the APP state:\n\nAfter the watch is turned on, the Bluetooth will be in the state of being searched. After open the APK/APP, go to Devices > Add Device > click to start searching, select and click the corresponding watch device name, and the watch will be successfully bound to the app.\n\n## 2.2 Connected to the APP state:\n\n\n\nWatch time synchronization: the time shown at the smartwatch and your mobile phone will synchronized after the smartwatch is bound to the APP successfully.\n\n2.3 Binding the audio/calling Bluetooth\n\nWhen the smartwatch is in the dial interface, you can find the audio/calling Bluetooth icon, and click it to turn it on, then go to the Bluetooth settings of your mobile phone and click the name of the audio/calling Bluetooth of the smartwatch to bind it.\n\n## 3. Find Watch\n\nAfter the smartwatch is bound to the APP, you click 'Find Watch' in the APP, the smartwatch will light up and vibrate for once.\n\n## 4. Camera", - "page_start": 5, - "page_end": 5, - "source_file": "6126797.pdf" - }, - { - "text": "Quality of attack responses. We now turn to evaluating the quality of the responses generated by the attack. 
Note that because we have calibrated the routers to target ϵ = 0 . 5 , our attacks can improve response quality by rerouting to the stronger model. In the other direction, our attacks add confounder gadgets which might degrade response quality.", - "page_start": 8, - "page_end": 8, - "source_file": "arxiv1.pdf" - }, - { - "text": "We have experimented with variations of this approach that don't work quite as well, for example adding c as a suffix instead of a prefix. See Appendix B for details.\n\n## 5 Open-Source Routers: Experimental Setup\n\nTo evaluate efficacy of confounder gadgets generated using the method from Section 4, we perform experiments with several LLM routers. This section explains our experimental setup for the open-source routers proposed in the research literature [47]; results of this evaluation appear in Section 6. In Section 7, we discuss experiments with proprietary, commercial routers. Figure 3 shows the summary of our experimental setup.", - "page_start": 5, - "page_end": 5, - "source_file": "arxiv1.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2670.pdf", - "query": "What is called bad-cavity Ramsey laser ?", - "target_page": 1, - "target_passage": "We considerthe case of a two-level atomic beam interacting with a single-mode Ramsey cavity of separated-oscillating-field resonators with the cavity mode linewidth is much wider than the atomic gain linewidth. Thus we call it bad-cavity Ramsey laser. ", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": ".\n\n## The Linewidth of Ramsey Laser with Bad Cavity\n\nYang Li, Wei Zhuang, Jinbiao Chen, ∗ and Hong Guo † CREAM Group, State Key Laboratory of Advanced Optical Communication Systems and Networks (Peking University) and Institute of Quantum Electronics, School of Electronics Engineering and Computer Science, and Center for Computational Science and Engineering (CCSE), Peking University, Beijing 100871, P. R. 
China (Dated: October 29, 2018)\n\nWe investigate a new laser scheme by using Ramsey separated-field technique with bad cavity. By studying the linewidth of the stimulated-emission spectrum of this kind of laser inside the cavity, we find its linewidth is more than two orders of magnitude narrower than atomic natural linewidth, and it is far superior to that of conventional optical Ramsey method and any other available subnatural linewidth spectroscopy at present. Since any cavity related noise is reduced to cavity-pulling e ff ect in bad cavity laser, this Ramsey laser provides the possibility of precision subnatural linewidth spectroscopy, which is critical for the next generation of optical clock and atom interferometers.\n\nPACS numbers: 42.55.Ah, 42.50.Ar, 42.60.Da, 32.30.-r\n\nIntroduction: Since the invention of the separated-field technique [1], it has played an important role in the field of precision spectroscopy due to its linewidth narrowing e ff ect via multiple coherent interaction. Atomic clocks based on this technique have greatly extended our ability for frequency measurement, further, almost all the atom interferometers are based on this technique [2].\n\nThough, the natural linewidth of quantum transition was regarded as the ultimate limit to high-resolution laser spectroscopy [4], several methods of subnatural linewidth spectroscopy have been proposed to gain subnatural linewidth [310]. However, in all these e ff orts, including optical Ramsey spectroscopy, subnatural line is realized at the expense of a quick reduction in signal-to-noise (SNR) ratio due to the exponential decaying of signal, thus all these schemes can only get the linewidth several times narrower than the atomic natural linewidth. In the past three decades, this situation does not change in the field of the precision laser spectroscopy. 
On the other hand, the thermal noise of the cavity mirrors is the main obstacle for further linewidth reduction of a laser [11, 12], and it is a challenge to substantially reduce this noise further[13]. Recently, a new scheme, called active optical clock [14-18], was proposed to substantially reduce the laser linewidth. With lattice trapped atoms, it is possible to reach mHz linewidth laser based on the mechanism of active optical clock [14, 15, 19]. The principal mechanism of active optical clock is to directly extract light emitted from the ultranarrow atomic transition with a cavity mode linewidth much wider than that of lasing. This bad cavity ensures that any frequency shift due to cavity noise reduces to cavity-pulling e ff ect [1517], then the thermal noise is not the major obstacle again for reducing the linewidth. This means the bad cavity can play an indispensable role in new subnatural linewidth spectroscopy.\n\nIn this Letter, we propose a new scheme called Ramsey laser with bad cavity. Distinct from any previous applications of conventional Ramsey separated oscillating fields method [1], which focuses on the absorption spectrum, we here fo-", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2670.pdf" - }, - { - "text": "cus on the stimulated emission spectrum via multiple coherent interactions inside the cavity. We find this Ramsey laser can provide a stimulated-emission spectrum with a linewidth much narrower than that of any conventional optical Ramsey seperated-field spectroscopy, which is commonly applied in optical atomic clock. Our results also show that a subnatural linewidth spectroscopy, superior to any other available subnatural spectroscopy technique at present [3-10], can be reached by this kind of laser, if a suitable atomic level structure is chosen. 
Thus, this method can provide an e ff ective subnatural spectroscopy, and the possibilities for the new optical clock scheme [15] and atom interferometers [2].\n\nTheoretical framework: We consider the case of a two-level atomic beam interacting with a single-mode Ramsey cavity of separated-oscillating-field resonators with the cavity mode linewidth is much wider than the atomic gain linewidth. Thus we call it bad-cavity Ramsey laser. All atoms are pumped onto the upper lasing state a before entering the first cavity of seperated field, and the lower lasing state is b . We assume all the atoms have the same velocities υ , that means what we consider here is a homogeneous laser system. And for the sake of simplicity, we consider the two-standing waves linear optical Ramsey configuration with a grid as spatial selector [20, 21]. Our treatment can be extended to other configurations as in [22-24]. The length of each oscillating part is l , and the length of the free drift region is L . The corresponding Hamiltonian is\n\nH = /planckover2pi1 ω ˆ a † ˆ a + /planckover2pi1 ∑ j [ ω j a ( t ) σ j a + ω j b ( t ) σ j b ] + /planckover2pi1 g ∑ j Γ j ( t )(ˆ a † ˆ σ j -e -i /vector k · /vector rj + ˆ σ j + ˆ ae i /vector k · /vector rj ) , (1)\n\nwhere ˆ a , ˆ a † are the annihilation and creation operators of the field mode inside the cavity, with the frequency ω , σ j a = ( | a 〉 〈 a | ) j and σ j b = ( | b 〉 〈 b | ) j are the projection operators for the jth atom corresponding to the upper and lower lasing levels,", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2670.pdf" - }, - { - "text": "Conclusion: In summary, we propose a new subnatural linewidth spectroscopy technique, which is a laser by using Ramsey seperated-field cavity to realize the output of stimulated-emission radiation via multiple coherent interaction with atomic beam. 
We find the linewidth of Ramsey laser is subnatural if we choose an appropriate atomic level, and the bad-cavity laser mechanism will dramatically reduce cavityrelated noise as discussed in active optical clock [15-19]. Our results show that this new subnatural linewidth spectroscopy is superior to conventional optical Ramsey seperated-field spectroscopy and any other available subnatural spectroscopy technique at present [3-10]. Considering one have to apply the separated-field method in any phase detection as in Ramsey-Bord e 'interferometer [2], to investigate the e ff ects of phase di ff erences between the two oscillating fields [31] in this stimulated separated-field method with such subnatural linewidth will be our next research aim.\n\nWe acknowledge Yiqiu Wang and Deshui Yu for fruitful discussions. This work is supported by MOST of China (grant 2005CB724500, National Natural Science Foundation of China (grant 60837004, 10874009), National Hi-Tech Research and Development (863) Program.\n\n- ∗ E-mail: jbchen@pku.edu.cn\n- † E-mail: hongguo@pku.edu.cn.\n- [1] N. F. Ramsey, Phys. Rev. 76 , 996 (1949).\n- [2] B. Dubetsky and P. R. Berman, In Atom Interferometry , edited by P. R. Berman (Academic Press, Cambridge, MA, 1997).\n- [3] M. M. Salour, Rev. Mod. Phys. 50 , 667 (1978).\n- [4] J. Wong and J. C. Garrison, Phys. Rev. Lett. 44 , 1254 (1980).\n- [5] P. L. Knight and P. E. Coleman, J. Phys. B: Atom. Molec. Phys. 13 4345 (1980).\n- [6] H. -W. Lee, P. Meystre, and M. O. Scully, Phys. Rev. A 24 , 1914 (1981).\n- [7] F. Shimizu, K. Shimizu, and H. Takuma, Phys. Rev. A 28 , 2248 (1983).\n- [8] W. Gawlik, J. Kowalski, F. Trager, and M. Vollmer, Phys.Rev.\n\n- Lett. 48 , 871 (1982).\n- [9] H. J. Carmichael, R. J. Brecha, M. G. Raizen, H. J. Kimble, and P. R. Rice, Phys. Rev. A 40 , 5516 (1989).\n- [10] U. W. Rathe, M. O. Scully, Letters in Mathematical Physics 34 , 297 (1995)\n- [11] K. Numata, A. Kemery, J. Camp, Phys Rev Lett, 93 , 250602 (2004).\n- [12] A. D. 
Ludlow et al. , Opt. Lett. 32 , 641 (2007).\n- [13] H. J. Kimble, B. L. Lev, and J. Ye, Phys. Rev. Lett. 101 , 260602 (2008).\n- [14] J. Chen, and X.Chen, In Proceedings of the 2005 IEEE International Frequency Control Symposium and Exposition , (IEEE, 2005), p.608.\n- [15] J. Chen, e-print arXiv:0512096 quant-ph; Chinese Science Bulletin 54 , 348 (2009).\n- [16] D. Yu and J. Chen, Phys. Rev. A 78 , 013846 (2008).\n- [17] J. Chen, In Frequency Standards and Metrology: Proceedings of the 7th Symposium , edited by Maleki Lute (World Scientific Publishing Company, 2009).\n- [18] Y. Wang, Chinese Science Bulletin 54 , 347 (2009).\n- [19] D. Meiser, J. Ye, D. R. Carlson, and M. J. Holland, Phys. Rev. Lett. 102 , 163601 (2009)\n- [20] F. Strumia, Metrologia 8 , 85 (1972).\n- [21] G. Kramer, J. Opt. Soc. Am. 68 , 1634 (1978).\n- [22] V. S. Letokhov and B. D. Pavlik, Opt. Spectrosc. USSR 32 , 455 (1972).\n- [23] Ye. V. Baklanov, B. Ya, Dubetsky, V. P. Chebotayev, Appl. Phys. 9 , 171 (1976).\n- [24] J. C. Bergquist, S. A. Lee, and L. L. Hall, Phys. Rev. Lett. 38 , 159 (1977).\n- [25] L. Davidovich, Rev. Mod. Phys. 68 , 127 (1996).\n- [26] M. I. Kolobov, L. Davidovich, E. Giacobino, and C. Fabre, Phys. Rev. A 47 , 1431 (1993).\n- [27] M. Sargent III, M. O. Scully, and W. E. Lamb, Laser Physics (Addition Wesley, Reading, MA, 1974).\n- [28] N. A. Abraham, P. Mandel, and L. M. Narducci, Dynamic Instabilities and Pulsations in Lasers , Progress in Optics XXV, edited by E. Wolf (Elsevier, Amsterdam, 1988).\n- [29] L. Pasternack, D. M. Silver, D. R. Yarkony, and P. J. Dagdigian, J. Phys. B 13 , 2231 (1980).\n- [30] K. An and M. S. Feld, Phys. Rev. A 56 , 1662(1997).\n- [31] N. F. Ramsey and H. B. Silsbee, Phys. Rev. 
84 , 506(1951).", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2670.pdf" - }, - { - "text": "Our method of Ramsey laser is suitable for any atoms with metastable energy level, as an example, we choose the transition from the metastable state 4 s 4 p 3 P 1 to the ground state 4 s 2 1 S 0 of 40 Ca to check the striking feature of this laser: subnatural linewidth. As mentioned in [29], the corresponding natural linewidth of the metastable state 4 s 4 p 3 P 1 is 320Hz. As in the recently proposed active optical clock with atomic beam [15], the velocity of the atoms in thermal atomic beam is about 500m / s, and the length of the interaction region is about 1mm, then the time for the atom to traverse each coherentinteraction region is on the order of magnitude of 1 µ s. If a bad cavity with κ is on the order of 10 7 Hz, the relation κ/ 2 /greatermuch τ -1 is satisfied. Then when g is on the order of the magnitude of kHz, which can be easily achieved for current technique [30], from the linewidth expression of Eq.(16) the order of magnitude of linewidth is below 1 Hz. This means the linewidth of a Ramsey laser can be more than two orders of magnitude narrower than the atomic natural linewidth, therefore our Ramsey method provides a new subnatural spectroscopy technique. And since it is stimulated-emission spectrum, it overcomes the di ffi culty in other subnatural linewidth spectroscopy schemes where the quick reduction of signal to noise ratio is a formidable limit. We should point out that this Ramsey laser does not escape the limitation of all active optical clock: in order to pump atoms to the excited state effectively and to be stimulated emit photon during the lifetime of a metastable state, this new method will only be applicable to some special transitions [17].", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2670.pdf" - }, - { - "text": "- [65] J. P. Burelbach, S. G. Bankoff, and S. H. 
Davis, 'Nonlinear stability of evaporating/condensing liquid films,' J. Fluid Mech. 195 , 463-494 (1988).\n - [66] A. Oron and S. G. Bankoff, 'Dewetting of a heated surface by an evaporating liquid film under conjoining/disjoining pressures,' J. Colloid Interface Sci. 218 , 152-166 (1999).\n - [67] L. W. Schwartz, R. V. Roy, R. R. Eley, and S. Petrash, 'Dewetting patterns in a drying liquid film,' J. Colloid Interface Sci. 214 , 363-374 (2001).\n - [68] K. Kargupta, R. Konnur, and A. Sharma, 'Spontaneous dewetting and ordered patterns in evaporating thin liquid films on homogeneous and heterogeneous substrates,' Langmuir 17 , 1294-1305 (2001).\n - [69] M. Bestehorn and D. Merkt, 'Regular surface patterns on Rayleigh-Taylor unstable evaporating films heated from below,' Phys. Rev. Lett. 97 , 127802 (2006).\n - [70] G. F. Teletzke, H. T. Davis, and L. E. Scriven, 'Wetting hydrodynamics,' Rev. Phys. Appl. 23 , 9891007 (1988).\n - [71] J. N. Israelachvili, Intermolecular and Surface Forces , Academic Press, London (1992).\n - [72] V. S. Mitlin, 'Dewetting of solid surface: Analogy with spinodal decomposition,' J. Colloid Interface Sci. 156 , 491-497 (1993).\n - [73] L. M. Pismen and Y. Pomeau, 'Disjoining potential and spreading of thin liquid layers in the diffuse interface model coupled to hydrodynamics,' Phys. Rev. E 62 , 2480-2492 (2000).\n - [74] L. Onsager, 'Crystal statistics. I. A two-dimensional model with an order-disorder transition,' Phys. Rev. 65 , 117-149 (1944).\n - [75] G. Reiter, 'Unstable thin polymer films: Rupture and dewetting processes,' Langmuir 9 , 1344-1351 (1993).\n - [76] C. G. Sztrum, O. Hod, and E. Rabani, 'Self-assembly of nanoparticles in three-dimensions: Formation of stalagmites,' J. Phys. Chem. B 109 , 6741-6747 (2005).\n - [77] G. Yosef and E. Rabani, 'Self-assembly of nanoparticles into rings: A lattice-gas model,' J. Phys. Chem. B 110 , 20965-20972 (2006).\n - [78] J. F. Gouyet, M. Plapp, W. Dieterich, and P. 
Maass, 'Description of far-from-equilibrium processes by mean-field lattice gas models,' Adv. Phys. 52 , 523-638 (2003).\n - [79] U. M. B. Marconi and P. Tarazona, 'Dynamic density functional theory of fluids,' J. Chem. Phys. 110 , 8032-8044 (1999).\n - [80] U. M. B. Marconi and P. Tarazona, 'Dynamic density functional theory of fluids,' J. Phys.-Condes. Matter 12 , A413-A418 (2000).", - "page_start": 29, - "page_end": 29, - "source_file": "1001.2669.pdf" - }, - { - "text": "- [19] T. Vo, Y.-D. Wu, R. Car, and M. Robert, 'Structures, interactions, and ferromagnetism of Fe-carbon nanotube systems', J. Phys. Chem. C 112 (22), 400 (May 2008), doi:10.1021/jp0761968.\n- [20] J. A. Furst, M. Brandbyge, A.-P. Jauho, and K. Stokbro, ' Ab initio study of spin-dependent transport in carbon nanotubes with iron and vanadium adatoms', Phys. Rev. B 78 (19), 195405 (Nov. 2008), doi:10.1103/PhysRevB.78.195405.\n- [21] A. V. Krasheninnikov, P. O. Lehtinen, A. S. Foster, P. Pyykko, and R. M. Nieminen, 'Embedding transitionmetal atoms in graphene: Structure, bonding, and magnetism', Phys. Rev. Lett. 102 (12), 126807 (Mar. 2009), doi:10.1103/PhysRevLett.102.126807.\n- [22] J. J. Mortensen, L. B. Hansen, and K. W. Jacobsen, 'Real-space grid implementation of the projector augmented wave method', Phys. Rev. B 71 (3), 035109 (Jan. 2005), doi:10.1103/PhysRevB.71.035109.\n- [23] J. P. Perdew, K. Burke, and M. Ernzerhof, 'Generalized gradient approximation made simple', Phys. Rev. Lett. 77 (18), 3865 (Oct. 1996), doi:10.1103/PhysRevLett.77.3865.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2538.pdf" - }, - { - "text": "## VERITAS Observations of Blazars\n\nW. Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. 
Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E > 100 GeV) γ -ray emission from astrophysical objects. VERITAS is currently the most sensitive VHE γ -ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼ 30 known to emit VHE photons. More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. These observations have resulted in the detection of VHE γ -rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n## 1. Introduction\n\nActive galactic nuclei are the most numerous class of identified VHE γ -ray sources. These objects emit non-thermal radiation across ∼ 20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ -ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ -rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH ( ∼ 2 min; [1]). VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. 
These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) obser-\n\nvations of VHE blazars, can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ -rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## 2. VERITAS", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "- [5] F. Brochard-Wyart and J. Daillant, 'Drying of solids wetted by thin liquid films,' Can. J. Phys. 68 , 1084-1088 (1989).\n - [6] P. 
Muller-Buschbaum, 'Dewetting and pattern formation in thin polymer films as investigated in real and reciprocal space,' J. Phys.-Condes. Matter 15 , R1549-R1582 (2003).\n - [7] R. Seemann, S. Herminghaus, C. Neto, S. Schlagowski, D. Podzimek, R. Konrad, H. Mantz, and K. Jacobs, 'Dynamics and structure formation in thin polymer melt films,' J. Phys.-Condes. Matter 17 , S267-S290 (2005).\n - [8] U. Thiele, 'Structure formation in thin liquid films,' in S. Kalliadasis and U. Thiele, editors, 'Thin films of Soft Matter,' pages 25-93, Springer, Wien (2007).\n - [9] R. Xie, A. Karim, J. F. Douglas, C. C. Han, and R. A. Weiss, 'Spinodal dewetting of thin polymer films,' Phys. Rev. Lett. 81 , 1251-1254 (1998).\n - [10] R. Seemann, S. Herminghaus, and K. Jacobs, 'Dewetting patterns and molecular forces: A reconciliation,' Phys. Rev. Lett. 86 , 5534-5537 (2001).\n - [11] U. Thiele, M. G. Velarde, and K. Neuffer, 'Dewetting: Film rupture by nucleation in the spinodal regime,' Phys. Rev. Lett. 87 , 016104 (2001).\n - [12] M. Bestehorn and K. Neuffer, 'Surface patterns of laterally extended thin liquid films in three dimensions,' Phys. Rev. Lett. 87 , 046101 (2001).\n - [13] J. Becker, G. Grun, R. Seemann, H. Mantz, K. Jacobs, K. R. Mecke, and R. Blossey, 'Complex dewetting scenarios captured by thin-film models,' Nat. Mater. 2 , 59-63 (2003).\n - [14] C. Redon, F. Brochard-Wyart, and F. Rondelez, 'Dynamics of dewetting,' Phys. Rev. Lett. 66 , 715718 (1991).\n - [15] R. Seemann, S. Herminghaus, and K. Jacobs, 'Shape of a liquid front upon dewetting,' Phys. Rev. Lett. 87 , 196101 (2001).\n - [16] R. Fetzer, K. Jacobs, A. Munch, B. Wagner, and T. P. Witelski, 'New slip regimes and the shape of dewetting thin liquid films,' Phys. Rev. Lett. 95 , 127801 (2005).\n - [17] F. Brochard-Wyart and C. Redon, 'Dynamics of liquid rim instabilities,' Langmuir 8 , 2324-2329 (1992).\n - [18] G. Reiter and A. 
Sharma, 'Auto-optimization of dewetting rates by rim instabilities in slipping polymer films,' Phys. Rev. Lett. 87 , 166103 (2001).\n - [19] A. Munch and B. Wagner, 'Contact-line instability of dewetting thin films,' Physica D 209 , 178-190 (2005).", - "page_start": 25, - "page_end": 25, - "source_file": "1001.2669.pdf" - }, - { - "text": "| /* */ /* *created -> 0 exit did not create the table space, */ /* OnDemand needs to create the table space */ |\n|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| /* 1) OnDemand will invoke the exit with action == 1 */ /* so that the exit can create the table space (tblsp\\_name) */ /* using (sql) */ |\n| /* 2) OnDemand will then invoke the exit with action == 2 */ /* so that the exit can create the table (table\\_name) */ /* inside of the table space (tblsp\\_name) using (sql) */ |\n| /* sql -> If sql is not an empty string, OnDemand */ /* will issue (sql) to the database */ |\n| /* so that the exit can drop the table space (tblsp\\_name) */ /* using (sql) */ /* *created -> 0 exit did not drop the table space, */ |\n| /* *created -> 0 exit did not create the table, */ /* OnDemand needs to create the table */ /* using (sql), which can be left unchanged */ /* or modified by the exit */ |\n| /* *created -> 1 exit created the table */ /* */ /* 3) OnDemand will then invoke the exit with action == 3 */ |\n| /* */ /* If ARS\\_DB\\_TABLESPACE\\_USEREXIT\\_EXTRA=1 is defined in */ |\n| /* ars.cfg, then the following actions will also be invoked */ /* when OnDemand needs to do further actions: */ |\n| /* */ /* 5) OnDemand will invoke the exit with action == 5 */ |\n| /* OnDemand needs to drop the table space */ |\n| /* using (sql), which can be left unchanged */ /* or modified by the exit */ 
/* *created -> 1 exit dropped the table space */ /* */ |", - "page_start": 296, - "page_end": 296, - "source_file": "sg246915.pdf" - }, - { - "text": "- [24] M. Strange, I. S. Kristensen, K. S. Thygesen, and K. W. Jacobsen, 'Benchmark density functional theory calculations for nanoscale conductance', J. Chem. Phys. 128 (11), 114714 (Mar. 2008), doi:10.1063/1.2839275.\n- [25] J. M. Soler, E. Artacho, J. D. Gale, A. Garcia, J. Junquera, P. Ordej'on, and D. S'anchez-Portal, 'The SIESTA method for ab initio ordern materials simulation', J. Phys.: Condens. Matter 14 (11), 2745 (Mar. 2002), doi:10.1088/0953-8984/14/11/302.\n- [26] J. S. Griffith, The Theory of Transition-Metal Ions (Cambridge University Press, London, 1961).\n- [27] P. Atkins and J. de Paula, Physical Chemistry , 8th ed. (Oxford University Press, London, 2006).\n- [28] D. Lide, Handbook of Chemistry and Physics , 87th ed. (CRCPress, 2006-2007).\n- [29] T. Markussen, R. Rurali, A.-P. Jauho, and M. Brandbyge, 'Scal-\n\n- ing theory put into practice: First-principles modeling of transport in doped silicon wires', Phys. Rev. Lett. 99 (7), 076803 (Aug. 2007), doi:10.1103/PhysRevLett.99.076803.\n- [30] M. Ushiro, K. Uno, T. Fujikawa, Y. Sato, K. Tohji, F. Watari, W.-J. Chun, Y. Koike, and K. Asakura, 'X-ray absorption fine structure (XAFS) analyses of Ni species trapped in graphene sheet of carbon nanofibers', Phys. Rev. B 73 (14), 144103 (Apr. 2006), doi:10.1103/PhysRevB.73.144103.\n- [31] C. Gomez-Navarro, P. J. de Pablo, J. Gomez-Herrero, B. Biel, F. J. Garcia-Vidal, A. Rubio, and F. Flores, 'Tuning the conductance of single-walled carbon nanotubes by ion irradiation in the Anderson localization regime', Nature Materials 4 , 534 (Jun. 
2005), doi:10.1038/nmat1414.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.2538.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2670.pdf", - "query": "How the steady-state solutions for the mean values of the field and atomic variables for laser operation are obtained ?", - "target_page": 2, - "target_passage": "The steady-state solutions for the mean values of the field and atomic variables for laser operation are obtained by dropping the noise terms of the c-number Langevin equations and setting the time derivatives equal to zero.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "where ˜ D ( i ) kl are the c-number Langevin di ff usion coe ffi cients, related to quantum Langevin di ff usion coe ffi cients D ( i ) kl as in [27].\n\nSteady-state solutions: The steady-state solutions for the mean values of the field and atomic variables for laser operation are obtained by dropping the noise terms of the cnumber Langevin equations and setting the time derivatives equal to zero. The analytical solutions are very complex, and one could numerically solve the steady-state equations. In this paper, we only care about the bad cavity limit γ max /lessmuch T -1 /lessmuch τ -1 /lessmuch κ/ 2. Since the atomic transit time is much shorter than the damping times of atomic variables, one could ignore the e ff ect of the spontaneous emission of the atom. By the standard way [25], We get the following steady-state values:\n\n∣ ∣ ∣ ˜ Ass ∣ ∣ ∣ 2 = R (1 -A 0 + A 1 -A 2) κ = R ( B 0 -B 1 + B 2) κ ,\n\n˜ Nass = R τ 2 [ 1 + C 0 -C 1 + C 2 g τ √ κ R ( B 0 -B 1 + B 2) ] ,", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2670.pdf" - }, - { - "text": "cus on the stimulated emission spectrum via multiple coherent interactions inside the cavity. 
We find this Ramsey laser can provide a stimulated-emission spectrum with a linewidth much narrower than that of any conventional optical Ramsey separated-field spectroscopy, which is commonly applied in optical atomic clocks. Our results also show that a subnatural linewidth spectroscopy, superior to any other available subnatural spectroscopy technique at present [3-10], can be reached by this kind of laser, if a suitable atomic level structure is chosen. Thus, this method can provide an effective subnatural spectroscopy, and the possibilities for the new optical clock scheme [15] and atom interferometers [2].\n\nTheoretical framework: We consider the case of a two-level atomic beam interacting with a single-mode Ramsey cavity of separated-oscillating-field resonators whose cavity mode linewidth is much wider than the atomic gain linewidth. Thus we call it a bad-cavity Ramsey laser. All atoms are pumped onto the upper lasing state a before entering the first cavity of the separated field, and the lower lasing state is b . We assume all the atoms have the same velocity υ , which means we consider a homogeneous laser system. For the sake of simplicity, we consider the two-standing-wave linear optical Ramsey configuration with a grid as spatial selector [20, 21]. Our treatment can be extended to other configurations as in [22-24]. The length of each oscillating part is l , and the length of the free drift region is L . 
The corresponding Hamiltonian is\n\nH = /planckover2pi1 ω ˆ a † ˆ a + /planckover2pi1 ∑ j [ ω j a ( t ) σ j a + ω j b ( t ) σ j b ] + /planckover2pi1 g ∑ j Γ j ( t )(ˆ a † ˆ σ j -e -i /vector k · /vector rj + ˆ σ j + ˆ ae i /vector k · /vector rj ) , (1)\n\nwhere ˆ a , ˆ a † are the annihilation and creation operators of the field mode inside the cavity, with the frequency ω , σ j a = ( | a 〉 〈 a | ) j and σ j b = ( | b 〉 〈 b | ) j are the projection operators for the jth atom corresponding to the upper and lower lasing levels,", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2670.pdf" - }, - { - "text": "- c. Click the Define a Field icon on the toolbar.\n - d. In the Add a Field window, complete the following steps:\n - i. On the Field Information tab, verify the attributes of the Index field. For example, the text string that you selected in the report window is displayed under Reference String and the trigger identifies the trigger on which the field is based. Click Help for assistance with the options and values that you can specify.\n - ii. On the Database Field Attributes tab, verify the attributes of the database field. In the Database Field Name field, enter the name of the application group field into which you want Content Manager OnDemand to store the index value. In the Folder Field Name field, enter the name of the folder field to display in the client search window. Click Help for assistance with the other options and values that you can specify.", - "page_start": 195, - "page_end": 195, - "source_file": "sg246915.pdf" - }, - { - "text": "Conclusion: In summary, we propose a new subnatural linewidth spectroscopy technique, which is a laser by using Ramsey seperated-field cavity to realize the output of stimulated-emission radiation via multiple coherent interaction with atomic beam. 
We find the linewidth of the Ramsey laser is subnatural if we choose an appropriate atomic level, and the bad-cavity laser mechanism will dramatically reduce cavity-related noise as discussed in active optical clock [15-19]. Our results show that this new subnatural linewidth spectroscopy is superior to conventional optical Ramsey separated-field spectroscopy and any other available subnatural spectroscopy technique at present [3-10]. Considering one has to apply the separated-field method in any phase detection, as in the Ramsey-Bordé interferometer [2], investigating the effects of phase differences between the two oscillating fields [31] in this stimulated separated-field method with such subnatural linewidth will be our next research aim.\n\nWe acknowledge Yiqiu Wang and Deshui Yu for fruitful discussions. This work is supported by MOST of China (grant 2005CB724500), National Natural Science Foundation of China (grant 60837004, 10874009), and National Hi-Tech Research and Development (863) Program.\n\n- ∗ E-mail: jbchen@pku.edu.cn\n- † E-mail: hongguo@pku.edu.cn\n- [1] N. F. Ramsey, Phys. Rev. 76 , 996 (1949).\n- [2] B. Dubetsky and P. R. Berman, in Atom Interferometry , edited by P. R. Berman (Academic Press, Cambridge, MA, 1997).\n- [3] M. M. Salour, Rev. Mod. Phys. 50 , 667 (1978).\n- [4] J. Wong and J. C. Garrison, Phys. Rev. Lett. 44 , 1254 (1980).\n- [5] P. L. Knight and P. E. Coleman, J. Phys. B: Atom. Molec. Phys. 13 , 4345 (1980).\n- [6] H.-W. Lee, P. Meystre, and M. O. Scully, Phys. Rev. A 24 , 1914 (1981).\n- [7] F. Shimizu, K. Shimizu, and H. Takuma, Phys. Rev. A 28 , 2248 (1983).\n- [8] W. Gawlik, J. Kowalski, F. Träger, and M. Vollmer, Phys. Rev. Lett. 48 , 871 (1982).\n- [9] H. J. Carmichael, R. J. Brecha, M. G. Raizen, H. J. Kimble, and P. R. Rice, Phys. Rev. A 40 , 5516 (1989).\n- [10] U. W. Rathe and M. O. Scully, Letters in Mathematical Physics 34 , 297 (1995).\n- [11] K. Numata, A. Kemery, and J. Camp, Phys. Rev. Lett. 93 , 250602 (2004).\n- [12] A. D. 
Ludlow et al. , Opt. Lett. 32 , 641 (2007).\n- [13] H. J. Kimble, B. L. Lev, and J. Ye, Phys. Rev. Lett. 101 , 260602 (2008).\n- [14] J. Chen, and X.Chen, In Proceedings of the 2005 IEEE International Frequency Control Symposium and Exposition , (IEEE, 2005), p.608.\n- [15] J. Chen, e-print arXiv:0512096 quant-ph; Chinese Science Bulletin 54 , 348 (2009).\n- [16] D. Yu and J. Chen, Phys. Rev. A 78 , 013846 (2008).\n- [17] J. Chen, In Frequency Standards and Metrology: Proceedings of the 7th Symposium , edited by Maleki Lute (World Scientific Publishing Company, 2009).\n- [18] Y. Wang, Chinese Science Bulletin 54 , 347 (2009).\n- [19] D. Meiser, J. Ye, D. R. Carlson, and M. J. Holland, Phys. Rev. Lett. 102 , 163601 (2009)\n- [20] F. Strumia, Metrologia 8 , 85 (1972).\n- [21] G. Kramer, J. Opt. Soc. Am. 68 , 1634 (1978).\n- [22] V. S. Letokhov and B. D. Pavlik, Opt. Spectrosc. USSR 32 , 455 (1972).\n- [23] Ye. V. Baklanov, B. Ya, Dubetsky, V. P. Chebotayev, Appl. Phys. 9 , 171 (1976).\n- [24] J. C. Bergquist, S. A. Lee, and L. L. Hall, Phys. Rev. Lett. 38 , 159 (1977).\n- [25] L. Davidovich, Rev. Mod. Phys. 68 , 127 (1996).\n- [26] M. I. Kolobov, L. Davidovich, E. Giacobino, and C. Fabre, Phys. Rev. A 47 , 1431 (1993).\n- [27] M. Sargent III, M. O. Scully, and W. E. Lamb, Laser Physics (Addition Wesley, Reading, MA, 1974).\n- [28] N. A. Abraham, P. Mandel, and L. M. Narducci, Dynamic Instabilities and Pulsations in Lasers , Progress in Optics XXV, edited by E. Wolf (Elsevier, Amsterdam, 1988).\n- [29] L. Pasternack, D. M. Silver, D. R. Yarkony, and P. J. Dagdigian, J. Phys. B 13 , 2231 (1980).\n- [30] K. An and M. S. Feld, Phys. Rev. A 56 , 1662(1997).\n- [31] N. F. Ramsey and H. B. Silsbee, Phys. Rev. 
84 , 506(1951).", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2670.pdf" - }, - { - "text": "Our method of Ramsey laser is suitable for any atoms with metastable energy level, as an example, we choose the transition from the metastable state 4 s 4 p 3 P 1 to the ground state 4 s 2 1 S 0 of 40 Ca to check the striking feature of this laser: subnatural linewidth. As mentioned in [29], the corresponding natural linewidth of the metastable state 4 s 4 p 3 P 1 is 320Hz. As in the recently proposed active optical clock with atomic beam [15], the velocity of the atoms in thermal atomic beam is about 500m / s, and the length of the interaction region is about 1mm, then the time for the atom to traverse each coherentinteraction region is on the order of magnitude of 1 µ s. If a bad cavity with κ is on the order of 10 7 Hz, the relation κ/ 2 /greatermuch τ -1 is satisfied. Then when g is on the order of the magnitude of kHz, which can be easily achieved for current technique [30], from the linewidth expression of Eq.(16) the order of magnitude of linewidth is below 1 Hz. This means the linewidth of a Ramsey laser can be more than two orders of magnitude narrower than the atomic natural linewidth, therefore our Ramsey method provides a new subnatural spectroscopy technique. And since it is stimulated-emission spectrum, it overcomes the di ffi culty in other subnatural linewidth spectroscopy schemes where the quick reduction of signal to noise ratio is a formidable limit. 
We should point out that this Ramsey laser does not escape the limitation of all active optical clock: in order to pump atoms to the excited state effectively and to be stimulated emit photon during the lifetime of a metastable state, this new method will only be applicable to some special transitions [17].", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2670.pdf" - }, - { - "text": ".\n\n## The Linewidth of Ramsey Laser with Bad Cavity\n\nYang Li, Wei Zhuang, Jinbiao Chen, ∗ and Hong Guo † CREAM Group, State Key Laboratory of Advanced Optical Communication Systems and Networks (Peking University) and Institute of Quantum Electronics, School of Electronics Engineering and Computer Science, and Center for Computational Science and Engineering (CCSE), Peking University, Beijing 100871, P. R. China (Dated: October 29, 2018)\n\nWe investigate a new laser scheme by using Ramsey separated-field technique with bad cavity. By studying the linewidth of the stimulated-emission spectrum of this kind of laser inside the cavity, we find its linewidth is more than two orders of magnitude narrower than atomic natural linewidth, and it is far superior to that of conventional optical Ramsey method and any other available subnatural linewidth spectroscopy at present. Since any cavity related noise is reduced to cavity-pulling e ff ect in bad cavity laser, this Ramsey laser provides the possibility of precision subnatural linewidth spectroscopy, which is critical for the next generation of optical clock and atom interferometers.\n\nPACS numbers: 42.55.Ah, 42.50.Ar, 42.60.Da, 32.30.-r\n\nIntroduction: Since the invention of the separated-field technique [1], it has played an important role in the field of precision spectroscopy due to its linewidth narrowing e ff ect via multiple coherent interaction. 
Atomic clocks based on this technique have greatly extended our ability for frequency measurement; further, almost all atom interferometers are based on this technique [2].\n\nThough the natural linewidth of a quantum transition was regarded as the ultimate limit to high-resolution laser spectroscopy [4], several methods of subnatural linewidth spectroscopy have been proposed to gain subnatural linewidth [3-10]. However, in all these efforts, including optical Ramsey spectroscopy, a subnatural line is realized at the expense of a quick reduction in signal-to-noise ratio (SNR) due to the exponential decay of the signal, thus all these schemes can only reach a linewidth several times narrower than the atomic natural linewidth. In the past three decades, this situation has not changed in the field of precision laser spectroscopy. On the other hand, the thermal noise of the cavity mirrors is the main obstacle for further linewidth reduction of a laser [11, 12], and it is a challenge to substantially reduce this noise further [13]. Recently, a new scheme, called active optical clock [14-18], was proposed to substantially reduce the laser linewidth. With lattice-trapped atoms, it is possible to reach a mHz-linewidth laser based on the mechanism of the active optical clock [14, 15, 19]. The principal mechanism of the active optical clock is to directly extract light emitted from the ultranarrow atomic transition with a cavity mode linewidth much wider than that of the lasing. This bad cavity ensures that any frequency shift due to cavity noise reduces to the cavity-pulling effect [15-17], so the thermal noise is no longer the major obstacle for reducing the linewidth. This means the bad cavity can play an indispensable role in new subnatural linewidth spectroscopy.\n\nIn this Letter, we propose a new scheme called a Ramsey laser with bad cavity. 
Distinct from any previous applications of conventional Ramsey separated oscillating fields method [1], which focuses on the absorption spectrum, we here fo-", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2670.pdf" - }, - { - "text": "```\n✞ # Give single observation to the agent observation = [1] action = single\\_in put! (agent, observation) # Give multiple observations to the agent and simulate actions observations = [1, 2, 1, 2, 3] actions = give\\_inputs!(agent, observations) ✝\n```\n\n```\n✞ # Get all current belief states states = get\\_states(agent) # Get a specific state, like the expected free energies only efe = get\\_states(agent, \"expected\\_free\\_energies\") # Get history for all states history = get\\_history(agent) # Get history of expected free energies only history\\_efe = get\\_history(agent, \"expected\\_free\\_energies\") ✝\n```\n\nAdditionally, a set of convenience functions can extract and set parameters and (histories of) beliefs. We briefly show how to extract the current or histories of past states:\n\nAnd how to change the parameters of a created agent:\n\n## 3.3. Model Fitting with ActionModels\n\n```\n✞ # Set individual parameter , alpha to 1.0 set\\_parameters!(agent, \"alpha\", 1.0) # Set multiple parameters by passing a dictionary new\\_parameters = Dict( \"alpha\" => 1.0, # Set alpha to 1.0 \"lr\\_pA\" => 0.5 # Set learning rate of A to 0.5 ) set\\_parameters!(agent, new\\_parameters) # Set new parameters to agent ✝\n```\n\nIn addition to simulating the behaviour and belief updating of agents, ActionModels also makes it possible to fit models to data and perform parameter estimation. This is used in general to form better models and theories of mental processes, as well as to find mechanistic differences (usually prior beliefs in AIF) between, for example, clinical populations or investigating how computational constructs, like Bayesian beliefs, relate to, for example, neuronal dynamics. 
This is performed in fields like cognitive modelling and mathematical psychology [34], as well as computational psychiatry [14,53]. In the following, we briefly describe the high-level functions needed to fit AIF models to empirical data with ActionModels .\n\nWe have our agent object defined as above. We then need to specify priors for the parameters we want to estimate. Here, we estimate the α parameter, and use a gamma distribution as prior:\n\nWe can now use the create\\_model function to instantiate a probabilistic model object with data. This takes the agent object, the priors, and a set of observations and actions as the arguments:\n\n```\n✞ # Load package for specifying distributions using Distributions # Initialize priors priors = Dict(\"alpha\" => Gamma(4,4)) ✝\n```\n\n☎\n\n✆\n\n☎\n\n✆\n\n☎\n\n✆\n\n☎\n\n✆", - "page_start": 14, - "page_end": 14, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "FIG. 1: Effective McMillan-Mayer short-range pair potentials extracted from explicit solvent simulations using the HNC closure. (a) Cation anion, (b) cation cation, (c) anion anion, (d) cation anion RDF obtained from explicit solvent MD and implicit solvent MC simulations.\n\n\n\npute all ion thermodynamic properties through implicit solvent MC simulations.\n\nThe second stage of our coarse-graining procedure consists in applying LPT, in order to deduce the best analytical model of electrolyte solutions which reproduces this molecular description. The principle of LPT is to describe the properties of a given system in terms of those of a well known reference system, with the difference between them treated as a perturbation in the reference potential. 
Assuming pairwise additive potentials, V_ij = V_ij^(0) + ∆V_ij , a first-order truncated expression for the free energy density of the system βf_v is obtained,\n\nβf_v ≲ βf_v^(0) + (β/2) ∑_{i,j} ρ_i ρ_j ∫ dr g_ij^(0)(r) ∆V_ij(r) (1)\n\nwhich depends only on the free-energy density f_v^(0) and RDF g^(0) of the reference fluid, with β = (k_B T)^(-1) and ρ_i the concentration of species i . The Gibbs-Bogoliubov inequality [15] ensures that the right-hand side of Eq. (1) is actually a strict upper bound. Once a reference system has been chosen, the expression on the right-hand side of Eq. (1) must be minimized with respect to the parameters defining the reference. This procedure yields the best first-order approximation to the free energy of the system under consideration.\n\nFor a system of charged particles in solution, the natural reference is the PM, defined in terms of the charge and diameter ( σ_i ) of each species. In this case, the perturbing potentials are just the short-range effective potentials computed above (∆V_ij = V_ij^SR). We use the MSA [3] solution to the PM, since it provides analytical expressions for both the free energy and the RDF. The perturbation term is evaluated using an exponential approximation to the RDF obtained within the MSA, g(r) = exp[g_MSA(r) - 1], which removes any unphysical negative regions and improves the comparison with HNC calculations.\n\nFIG. 2: (Color online) (a) Osmotic coefficient Φ in the McMillan-Mayer frame of reference. (diamond) MC simulations, (dot dashed) MSA2, (dot) Debye-Hückel Limiting law (DHLL), (cross) experiments (Ref. [18] with the McMillan-Mayer to Lewis-Randall conversion). (b) Minimization diameters. (dot dashed) MSA2 and (diamond) MSA-fit.\n\nWe first used LPT for a two-component system (Na+ and Cl- free ions) within the MSA (model MSA2), for concentrations ranging from 0.1 to 2.0 mol l^(-1) . 
The minimization leads to almost constant diameters on the whole range of concentration: σ 1 = 3 . 67 ˚ A and σ 2 = 4 . 78 ˚ A. As shown in Fig. 2, these parameters yield osmotic coefficients close to MC calculations only at very low concentration, i.e., c ≤ 0 . 1 moll -1 (experimental values are given for indicative purposes only, since a perfect model will exactly match the MC results). For molar solutions, the LPT results differ considerably from MC calculations. This discrepancy can easily be understood by comparing the diameters found within the MSA2 calculation with the effective potentials given in Fig. 1. The anion/cation contact distance obtained within the MSA2 calculation is 4 . 2 ˚ A, which is in the region of the second minimum of the effective potential and corresponds to the situation where there is a single layer of water molecules between the ions. The first minimum of the potential, which corresponds to the contact ion pair (CIP) is thus completely ignored by the MSA2 calculation. If the MSA diameters are directly fitted to reproduce the MC osmotic pressure, much smaller values are obtained. These MSA-fit hydrated diameters, which are compared to the MSA2 diameters in the bottom part of Fig. 2, are averages of the CIP and the solvent-separated ion pair.", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2648.pdf" - }, - { - "text": "dependence of different samples during the measurement stage. 
For each temperature we have usually performed three independent simulations, each one containing at least 2 × 10^5 measurements, taken after discarding up to 5 × 10^4 Monte Carlo steps in order to assure thermal equilibration.\n\nIn the proximity of the critical region the multiple histogram (MH) technique was also employed 21 , as it allows us to estimate the physical observables of interest over a whole temperature range in a substantially continuous way by interpolating results obtained from sets of simulations performed at some different temperatures.\n\nFor all the quantities of interest, the average value and the error estimate were obtained by the bootstrap resampling method 22 given that, as pointed out in Ref. 23, for a large enough number of measurements, this method turns out to be more accurate than the usual blocking technique. In our implementation, we pick out randomly a sizable number of measurements (typically, between 1 and 1 × 10^3 for the single simulation, and between 1 and 5 × 10^4 for the MH technique), and iterate the re-sampling at least one hundred times.\n\nThe thermodynamic observables we have investigated include the FM order parameter for each plane l :\n\nm_l = √((m_l^x)^2 + (m_l^y)^2) , (2)\n\nwhich is related to the SO(2) symmetry breaking. At the same time, the average order parameter of the film also turns out to be significant, defined as\n\nM = (1/n) ∑_{l=1}^{n} m_l . (3)\n\nTurning to the helical order, which is the relevant quantity for the Z_2 × SO(2) symmetry, we can explore it along two different directions. The first one is the introduction of the chirality order parameter 1,2\n\nκ = [1/(4(n-1)L^2 sin Q_z)] ∑_{⟨ij⟩} [S_i^x S_j^y - S_i^y S_j^x] , (4)\n\nwhere the sum refers to spins belonging to NN layers i and j , respectively, while Q_z is the bulk helical pitch vector along the z direction. 
The second possibility is that of looking at the integral of the structure factor:\n\nM HM = 1 K ∫ π 0 dq z S ( /vector q ) (5)\n\nwhere S ( /vector q ), with /vectorq = (0 , 0 , q z ), is the structure factor 24 (i.e. the Fourier transform of the spin correlation function) along the z-direction of the film, while the normalization factor K is the structure factor integral at T = 0. Although the use of the last observable can be seen as a suitable and elegant way to overcome the intrinsic difficulties met in defining a correct helical order parameter, free of any undue external bias (as the wave-vector Q z\n\nFIG. 2: (color online) Specific heat c v per spin vs. temperature for thickness n = 16 (for lateral dimension, see the legend inside the figure). Inset: Maximum of c v vs. L obtained through MH technique. The continuum red line is a power law fit.\n\n\n\nentering the definition of κ in Eq. (4)), we remind that such quantity has generally to be managed with particular care, as discussed in details in Refs. 14,15 , where it was shown that the presence of block structures prevents us to unambiguously relate the evolution of S ( /vectorq ) with the onset of helical order. 
However, for the specific case of the model under investigation such integrated quantity can still be considered a fairly significant order parameter, as no block structures emerge from the simulations (see below).\n\nIn order to get a clear picture of the critical region and to give an accurate estimate of the critical temperature, we look also at the following quantities\n\nc v = nL 2 β 2 ( 〈 e 2 〉 - 〈 e 〉 2 ) , (6)\n\n∂ β o = nL 2 ( 〈 oe 〉 - 〈 o 〉〈 e 〉 ) , (8)\n\nχ o = nL 2 β ( 〈 o 2 〉 - 〈 o 〉 2 ) , (7)\n\nu 4 ( o ) = 1 -〈 o 4 〉 3 〈 o 2 〉 2 , (9)", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0510.pdf" - }, - { - "text": "change in CNT resistivity may then be obtained from the calculated coverages and single impurity conductances.\n\nWe find that oxidation of the active metal site passivates the sensor in the case of doping by Ti, V, Cr, and Mn under standard conditions (room temperature and 1 bar of pressure). Among the remaining metals, we identify Ni as is the most promising candidate for CO detection. For this system the change in resistance per active site is generally significant ( > 1 Ω ) for small changes in CO concentration in the relevant range of around 0.1-10 ppm. Our approach is quite general and is directly applicable to other nanostructures than CNTs, other functionalizations than metal doping, and other backgrounds than atmospheric air.\n\nAll total energy calculations and structure optimizations have been performed with the real-space density functional theory (DFT) code GPAW [22] which is based on the projector augmented wave method. We use a grid spacing of 0.2 ˚ A for representing the density and wave functions and the PBE exchange correlation functional [23]. Transport calculations for the optimized structures have been performed using the nonequilibrium Green's function method [24] with an electronic Hamiltonian obtained from the SIESTA code [25] in a double zeta polarized (DZP) basis set. 
Spin polarization has been taken into account in all calculations.\n\nMetallic doping of a (6,6) CNT has been modeled in a supercell containing six repeated minimal unit cells along the CNT axis (dimensions: 15 ˚ A × 15 ˚ A × 14.622 ˚ A). For this size of supercell a Γ -point sampling of the Brillouin zone was found to be sufficient. The formation energy for creating a vacancy (VC) occupied by a transition metal atom (M) was calculated using the relation\n\nE form [ M @ VC ] = E [ M @ VC ] + nE [ C ] -E [ M@NT ] (1)\n\nwhere E [M@VC] is the total energy of a transition metal atom occupying a vacancy in the nanotube, n is the number of carbon atoms removed to form the vacancy, E [C] is the energy per carbon atom in a pristine nanotube, and E [M@NT]", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2538.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2670.pdf", - "query": "What are the consequences on the linewidth for regular and Poissonian injections ?", - "target_page": 3, - "target_passage": " For regular injection (p = 1), the linewidth is the narrowest, while for Poissonian injection (p = 0), the linewidth is the broadest.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": ".\n\n## The Linewidth of Ramsey Laser with Bad Cavity\n\nYang Li, Wei Zhuang, Jinbiao Chen, ∗ and Hong Guo † CREAM Group, State Key Laboratory of Advanced Optical Communication Systems and Networks (Peking University) and Institute of Quantum Electronics, School of Electronics Engineering and Computer Science, and Center for Computational Science and Engineering (CCSE), Peking University, Beijing 100871, P. R. China (Dated: October 29, 2018)\n\nWe investigate a new laser scheme by using Ramsey separated-field technique with bad cavity. 
By studying the linewidth of the stimulated-emission spectrum of this kind of laser inside the cavity, we find its linewidth is more than two orders of magnitude narrower than the atomic natural linewidth, and it is far superior to that of the conventional optical Ramsey method and any other available subnatural linewidth spectroscopy at present. Since any cavity-related noise is reduced to the cavity-pulling effect in a bad cavity laser, this Ramsey laser provides the possibility of precision subnatural linewidth spectroscopy, which is critical for the next generation of optical clocks and atom interferometers.\n\nPACS numbers: 42.55.Ah, 42.50.Ar, 42.60.Da, 32.30.-r\n\nIntroduction: Since the invention of the separated-field technique [1], it has played an important role in the field of precision spectroscopy due to its linewidth narrowing effect via multiple coherent interaction. Atomic clocks based on this technique have greatly extended our ability for frequency measurement; further, almost all atom interferometers are based on this technique [2].\n\nThough the natural linewidth of a quantum transition was regarded as the ultimate limit to high-resolution laser spectroscopy [4], several methods of subnatural linewidth spectroscopy have been proposed to gain subnatural linewidth [3-10]. However, in all these efforts, including optical Ramsey spectroscopy, a subnatural line is realized at the expense of a quick reduction in signal-to-noise ratio (SNR) due to the exponential decay of the signal, thus all these schemes can only reach a linewidth several times narrower than the atomic natural linewidth. In the past three decades, this situation has not changed in the field of precision laser spectroscopy. On the other hand, the thermal noise of the cavity mirrors is the main obstacle for further linewidth reduction of a laser [11, 12], and it is a challenge to substantially reduce this noise further [13]. 
Recently, a new scheme, called active optical clock [14-18], was proposed to substantially reduce the laser linewidth. With lattice trapped atoms, it is possible to reach mHz linewidth laser based on the mechanism of active optical clock [14, 15, 19]. The principal mechanism of active optical clock is to directly extract light emitted from the ultranarrow atomic transition with a cavity mode linewidth much wider than that of lasing. This bad cavity ensures that any frequency shift due to cavity noise reduces to cavity-pulling e ff ect [1517], then the thermal noise is not the major obstacle again for reducing the linewidth. This means the bad cavity can play an indispensable role in new subnatural linewidth spectroscopy.\n\nIn this Letter, we propose a new scheme called Ramsey laser with bad cavity. Distinct from any previous applications of conventional Ramsey separated oscillating fields method [1], which focuses on the absorption spectrum, we here fo-", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2670.pdf" - }, - { - "text": "Our method of Ramsey laser is suitable for any atoms with metastable energy level, as an example, we choose the transition from the metastable state 4 s 4 p 3 P 1 to the ground state 4 s 2 1 S 0 of 40 Ca to check the striking feature of this laser: subnatural linewidth. As mentioned in [29], the corresponding natural linewidth of the metastable state 4 s 4 p 3 P 1 is 320Hz. As in the recently proposed active optical clock with atomic beam [15], the velocity of the atoms in thermal atomic beam is about 500m / s, and the length of the interaction region is about 1mm, then the time for the atom to traverse each coherentinteraction region is on the order of magnitude of 1 µ s. If a bad cavity with κ is on the order of 10 7 Hz, the relation κ/ 2 /greatermuch τ -1 is satisfied. 
Then when g is on the order of the magnitude of kHz, which can be easily achieved for current technique [30], from the linewidth expression of Eq.(16) the order of magnitude of linewidth is below 1 Hz. This means the linewidth of a Ramsey laser can be more than two orders of magnitude narrower than the atomic natural linewidth, therefore our Ramsey method provides a new subnatural spectroscopy technique. And since it is stimulated-emission spectrum, it overcomes the di ffi culty in other subnatural linewidth spectroscopy schemes where the quick reduction of signal to noise ratio is a formidable limit. We should point out that this Ramsey laser does not escape the limitation of all active optical clock: in order to pump atoms to the excited state effectively and to be stimulated emit photon during the lifetime of a metastable state, this new method will only be applicable to some special transitions [17].", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2670.pdf" - }, - { - "text": "## Consequences of test results", - "page_start": 58, - "page_end": 58, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## 2.3. FastBlue tracer injections\n\nMice were briefly anesthetized during the procedure, induced with 3%to5%isoflurane, and then maintained at 1.5% to 2% as required. Hindlimbs were taped with the plantar surface of the paw facing up, and a custom, 26G removable needle with a 30˚ bevel, attached to a 25m L Hamilton syringe, was inserted between the 2 distal-most footpads, towards the medial aspect of the hindpaw. The needle wasthen rotated 90˚, so the bevel faced medially. Furthermore, 4m L FastBlue (FB; 2% in sterile phosphate-buffered saline (PBS); CAS# 73819-41-7; Polysciences, Inc, Warrington, PA) per paw was then slowly injected, and the needle was left in place for 10 seconds, before rotating and carefully retracting to avoid backflow of FB along the needle track. 
This prevented the FB bolus from contacting the sural innervation territory of the lateral hindpaw, restricting it largely to the tibial innervation territory of the glabrous hindpaw skin.\n\n## 2.4. Immunohistochemistry and image acquisition\n\nMice were anesthetized with an overdose of pentobarbital (20 mg) and transcardially perfused with a fixative containing 4%\n\n## Primary and secondary antibodies used in the study.\n\nTable 2", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed2.pdf" - }, - { - "text": "cus on the stimulated emission spectrum via multiple coherent interactions inside the cavity. We find this Ramsey laser can provide a stimulated-emission spectrum with a linewidth much narrower than that of any conventional optical Ramsey seperated-field spectroscopy, which is commonly applied in optical atomic clock. Our results also show that a subnatural linewidth spectroscopy, superior to any other available subnatural spectroscopy technique at present [3-10], can be reached by this kind of laser, if a suitable atomic level structure is chosen. Thus, this method can provide an e ff ective subnatural spectroscopy, and the possibilities for the new optical clock scheme [15] and atom interferometers [2].\n\nTheoretical framework: We consider the case of a two-level atomic beam interacting with a single-mode Ramsey cavity of separated-oscillating-field resonators with the cavity mode linewidth is much wider than the atomic gain linewidth. Thus we call it bad-cavity Ramsey laser. All atoms are pumped onto the upper lasing state a before entering the first cavity of seperated field, and the lower lasing state is b . We assume all the atoms have the same velocities υ , that means what we consider here is a homogeneous laser system. And for the sake of simplicity, we consider the two-standing waves linear optical Ramsey configuration with a grid as spatial selector [20, 21]. Our treatment can be extended to other configurations as in [22-24]. 
The length of each oscillating part is l , and the length of the free drift region is L . The corresponding Hamiltonian is\n\nH = /planckover2pi1 ω ˆ a † ˆ a + /planckover2pi1 ∑ j [ ω j a ( t ) σ j a + ω j b ( t ) σ j b ] + /planckover2pi1 g ∑ j Γ j ( t )(ˆ a † ˆ σ j -e -i /vector k · /vector rj + ˆ σ j + ˆ ae i /vector k · /vector rj ) , (1)\n\nwhere ˆ a , ˆ a † are the annihilation and creation operators of the field mode inside the cavity, with the frequency ω , σ j a = ( | a 〉 〈 a | ) j and σ j b = ( | b 〉 〈 b | ) j are the projection operators for the jth atom corresponding to the upper and lower lasing levels,", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2670.pdf" - }, - { - "text": "| Failback | When the failed node rejoins the system, all failed over IP addresses are failed back from the surviving node to the rejoined node, and volume access is restored through this node. |\n| linkbandwidthmbits | Aggregate bandwidth of all physical links between two sites in Mbps. |", - "page_start": 574, - "page_end": 574, - "source_file": "sg247938.pdf" - }, - { - "text": "- 169 'Presence' means according to the Eurostat ESAW methodology 2013: 'Presence of the victim or of a third person in itself creating a danger for oneself and possibly others' (p. 28).\n - 170 European Commission, 2009: Causes and circumstances of accidents at work in the EU (p. 106).\n - 171 The DIFR is defined as the total number of reported disabling and fatal injuries per 1 million hours worked. See: Government of Canada, 2021: 2019 Annual Report - Occupational Injuries amongst Employees Under Federal Jurisdiction (9.39 per 1 million hours worked).\n - 172 Safe Work Australia, 2021: Comparative performance monitoring report 23rd edition (p. 12ff) (3.6 claims per 1,000 employees).\n - 173 Franco, 2012: Bernardino Ramazzini and women workers' health in the second half of the XVIIth century 174 Ramazzini, 1713: De morbis artificum diatriba (p. 199ff). 
Latin original text: 'Infortunium ergo, quod huiusmodi\n - Artificibus ex suis opificiis, praeter vitae sedentaria incommoda, est Myopia, affectus nempe oculorum satis notus,\n - cum scilicet visibilia oculis propius admovere necesse est.'\n - 175 EU-OSHA, 2019: The value of occupational safety and health and the societal costs of work-related injuries and diseases\n - 176 ILO Encyclopaedia: Work-related Diseases and Occupational Diseases: The ILO International List\n - 177 European Commission, 2013: Report on the current situation in relation to occupational diseases' systems in EU Member States and EFTA/EEA countries, in particular relative to Commission Recommendation 2003/670/EC concerning the European Schedule of Occupational Diseases and gathering of data on relevant related aspects, here", - "page_start": 147, - "page_end": 147, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "We expect that this simulation model violates qpAdm assumptions of no (or limited) gene flow after admixture between sources and reference groups. Consistent with this idea, qpAdm models are rejected ( P = 4 × 10 -38 for migration rates of 0.001 and P = 5 × 10 -8 for migration rates of 0.005) when using Twigstats with a cut-off of 1,000 generations. However, these are not rejected using regular qpAdm, including", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed3.pdf" - }, - { - "text": "also shift the spinodal and binodal lines as compared to the locations of these lines in the phase diagram for the pure solvent [41]. As a consequence, the solute concentration influences the hole nucleation rate. More importantly, the solute particles may also destabilise the dewetting fronts. As a result, one may find strongly ramified structures in all three systems [23, 25, 40, 42]. 
A selection of images exhibiting some of the possible structures is displayed in Fig.1.\n\nFor volatile solvents, the contact lines retract even for wetting fluids. It has been found that such evaporatively receding contact lines may deposit very regular line or ring patterns parallel to the moving contact line [24, 43]. The deposition of a single ring of colloids from a evaporating drop of colloidal suspension is well known as the 'coffee stain effect' [44]. Detailed investigations reveal the emergence of rich structures including multiple irregular rings, networks, regular droplet patterns, sawtooth patterns, Sierpinski carpets, and - in the case of DNA - liquid crystalline structures [22, 30, 45-49]. The deposition of regularly spaced straight lines orthogonal to the moving contact line has also been reported [50]. Droplet patterns may as well be created employing solvent-induced dewetting of glassy polymer layers below the glass transition temperature [51-53].\n\nNote that the dewetting of pure volatile liquids has also been studied experimentally [54] and theoretically [55-58]. In this case, different contact line instabilities have been observed for evaporating liquid drops [59, 60].\n\nIn the present article we review and preview the experiments and in particular the various modelling approaches for dewetting suspensions of (nano-)particles in volatile partially wetting solvents. After reviewing the basic experimental results in Section II, we discuss in Section III several theoretical approaches. In particular, we present a kinetic Monte Carlo model in Section III A, a dynamic density functional theory in Section III B, and a thin film evolution equation in Section III C. Finally, we conclude in Section IV by discussing advantages and shortcomings of the individual approaches and future challenges to all of them.\n\n## II. 
EXPERIMENT WITH NANOPARTICLE SOLUTIONS\n\nWe focus on experiments that use monodisperse colloidal suspensions of thiol-passivated gold nanoparticles in toluene [33, 34, 37-40, 61]. The gold core of 2 - 3 nm diameter is coated by a layer of alkyl-thiol molecules. The length of the carbon backbone of the thiol used in the experiments ranges from 6 to 12 carbon atoms ( C 6 to C 12 ) [40]. By varying the chain length, one can control", - "page_start": 3, - "page_end": 3, - "source_file": "1001.2669.pdf" - }, - { - "text": "## NAVWEPS 00-8OT-80 HIGH SPEED AERODYNAMICS\n\nfigure 3.17. Structurd Complications Due to Sweephk\n\n", - "page_start": 250, - "page_end": 250, - "source_file": "00-80T-80.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2449.pdf", - "query": "Give me the advantages of Ferromagnetic semiconductors", - "target_page": 1, - "target_passage": "Ferromagnetic (FM) semiconductors offer the prospect of combining high-density storage and gate-controlled logic in a single material.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Exchange bias of a ferromagnetic semiconductor by a ferromagnetic metal\n\nK. Olejnik, 1, 2 P. Wadley, 3 J. Haigh, 3 K. W. Edmonds, 3 R. P. Campion, 3 A. W. Rushforth, 3 B. L. Gallagher, 3 C. T. Foxon, 3 T. Jungwirth, 2, 3 J. Wunderlich, 1, 2 S. S. Dhesi, 4 S. Cavill, 4 G. van der Laan, 4 and E. 
Arenholz 5\n\n1 Hitachi Cambridge Laboratory, Cambridge CB3 0HE, United Kingdom\n\n2 Institute of Physics ASCR, v.v.i., Cukrovarnicka 10, 16253 Praha 6, Czech Republic\n\n3 School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, United Kingdom 4 Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA\n\n5 (Dated: August 24, 2018)\n\nWe demonstrate an exchange bias in (Ga,Mn)As induced by antiferromagnetic coupling to a thin overlayer of Fe. Bias fields of up to 240 Oe are observed. Using element-specific x-ray magnetic circular dichroism measurements, we distinguish a strongly exchange coupled (Ga,Mn)As interface layer in addition to the biassed bulk of the (Ga,Mn)As film. The interface layer remains polarized at room temperature.\n\nPACS numbers: 75.70.Cn, 75.50.Pp, 75.50.Bb\n\nFerromagnetic (FM) semiconductors offer the prospect of combining high-density storage and gate-controlled logic in a single material. The realization of spin-valve devices from FM semiconductors requires the controlled switching of magnetization in adjacent layers between antiferromagnetic (AFM) and FM configurations. This has motivated several theoretical investigations of interlayer coupling in all-semiconductor devices 1 , and AFM coupling has recently been demonstrated in (Ga,Mn)As multilayers separated by p -type non-magnetic spacers 2 . However, the Curie temperature T C of (Ga,Mn)As is currently limited to 185 K in single layers 3 , and is typically much lower for layers embedded within a heterostructure 2 , which is an obstacle to the practical implementation of semiconductor spintronics.\n\nThe development of FM metal/FM semiconductor heterostructures has the potential to bring together the benefits of metal and semiconductor based spintronics, offering access to new functionalities and physical phenomena. 
Recent studies of MnAs/(Ga,Mn)As and NiFe/(Ga,Mn)As bilayer films have shown FM interlayer coupling and independent magnetization behavior, respectively 4,5 . Of particular interest is the Fe/(Ga,Mn)As system, since the growth of epitaxial Fe/GaAs(001) films is well-established 6 . Remarkably, a recent x-ray magnetic circular dichroism (XMCD) study has shown that Fe may induce a proximity polarization in the near-surface region of (Ga,Mn)As, antiparallel to the Fe moment and persisting even above room temperature 7 . Devices incorporating Fe/(Ga,Mn)As therefore offer the prospect of obtaining non-volatile room temperature spin-polarization in a semiconductor.\n\nUntil now, no information has been revealed about the coupling of Fe to (Ga,Mn)As layers away from the nearsurface region. At the surface, the (Ga,Mn)As layer may be highly non-stoichiometric and Mn-rich, due to its nonequilibrium nature 8,9 . Previously, Fe/(Ga,Mn)As layers were produced by a process including exposure to air followed by sputtering and annealing prior to Fe deposition,\n\nwhich may further disrupt the interface order. The origin of the interface magnetism then had to be inferred by comparison to a series of reference samples 7 . Demonstration of coupling between the bulk of the layers, i.e. , an exchange bias effect, would provide direct evidence of the interface magnetic order. Moreover, such coupling would offer new means of manipulating the FM semiconductor spin state and utilizing the proximity polarization effect in a spintronic device.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "Here, we demonstrate an antiferromagnetic coupling and exchange bias in Fe/(Ga,Mn)As bilayer films, by combining element-specific XMCD measurements and bulk-sensitive superconducting quantum interference device (SQUID) magnetometry. 
As with previous studies of FM metal/FM semiconductor bilayers 4,5 (and in contrast to AFM coupled FM metal/FM metal exchange bias structures 10,11 ) the layers are in direct contact without a non-magnetic spacer in between. We distinguish interface and bulk (Ga,Mn)As layers that are respectively strongly and weakly antiferromagnetically coupled to the Fe overlayer. In agreement with Ref. 7 , the interface layer remains polarized at room temperature.\n\nThe Fe and (Ga,Mn)As layers of the present study were both grown by molecular beam epitaxy in the same ultra-high vacuum system, in order to ensure a clean interface between them. The (Ga,Mn)As layer of thickness 10 to 50 nm was deposited on a GaAs(001) substrate at a temperature of 260 · C, using previously established methods 3,8 . A low Mn concentration of x ≈ 0 . 03 was chosen in order to avoid the formation of compensating Mn interstitials. The substrate temperature was then reduced to ∼ 0 · C, before depositing a 2 nm Fe layer, plus a 2 nm Al capping layer. In-situ reflection high energy electron diffraction and ex-situ x-ray reflectivity and diffraction measurements confirmed that the layers are single-crystalline with sub-nm interface roughness. SQUID magnetometry measurements were performed using a Quantum Design Magnetic Property Measurement System. Mn and Fe L 2 , 3 x-ray absorption and XMCD", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "measurements were performed on beamline I06 at the Diamond Light Source, and on beamline 4.0.2 at the Advanced Light Source. 
Total-electron yield (TEY) and fluorescence yield (FY) were monitored simultaneously using the sample drain current and the photocurrent of a diode mounted at 90 · to the incident beam, respectively.\n\nSQUID magnetometry measurements were first performed on control Fe/GaAs(001) and (Ga,Mn)As/GaAs(001) samples, grown under the same conditions as the bilayers, to determine the magnetic anisotropies of the individual layers and the Curie temperature of the (Ga,Mn)As layer. The Fe film has a uniaxial magnetic anisotropy with easy axis along the [110] orientation, similar to previous studies 6 . For the (Ga,Mn)As control sample, there is a competition between cubic and uniaxial magnetic anisotropies, with the former dominant at low temperatures and favoring easy axes along the in-plane 〈 100 〉 orientations, and the latter dominant close to T C ( ∼ 35 K) giving an easy axis along the [1 ¯ 10] orientation. Figure 1 shows [110] magnetization versus temperature curves and low temperature hysteresis loops for a bilayer film containing a 20 nm thick (Ga,Mn)As layer. The total remnant moment of the bilayer film decreases on cooling under zero magnetic field below the T C of the (Ga,Mn)As, indicating that this layer aligns antiparallel to the Fe magnetization at zero field. The hysteresis curve shows a two-step magnetization reversal, indicating different behavior of the Fe and (Ga,Mn)As layers, with the smaller loop attributed to the dilute moment (Ga,Mn)As film. The minor hysteresis loop shown in Fig. 1 clearly shows a shift from zero field by a bias field H E , indicating that the Fe layer induces an exchange bias in the magnetic semiconductor. The shape and size of the minor loop is in agreement with the hysteresis loop for the control (Ga,Mn)As sample, also shown in Fig. 1. 
This strongly indicates that the exchange bias affects the whole of the (Ga,Mn)As layer in the bilayer sample.\n\nSimilar behavior is observed for bilayer samples containing a 10 nm or 50 nm (Ga,Mn)As layer, with a bias field which is approximately inversely proportional to the thickness d of the ferromagnetic semiconductor layer (Fig. 1, inset). This 1/ d dependence of H E was found previously for MnAs/(Ga,Mn)As bilayers 4 , and is generally observed in exchanged-biased thin films 12 . From this dependence it is possible to describe the exchange bias in terms of an interface energy per unit area, ∆ E = M FS H E d = 0 . 003 erg/cm 2 . This value is rather small compared to typical exchange bias systems 12 , reflecting the low moment density M FS of the diluted FM semiconductor layer. However, the bias field for a given (Ga,Mn)As thickness is larger than is observed for MnO/(Ga,Mn)As structures 13 , while the reproducibility and flexibility of the present structures is much higher due to the single-crystalline ferromagnetic nature of the Fe layer.\n\nTo confirm the presence of AFM interlayer coupling, we performed XMCD measurements at the Mn and Fe", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2449.pdf" - }, - { - "text": "## Interplay among helical order, surface effects and range of interacting layers in ultrathin films.\n\nF. Cinti (1 , 2 , 3) , A. Rettori (2 , 3) , and A. Cuccoli (2)\n\n(1) Department of Physics, University of Alberta, Edmonton, Alberta, Canada T6G 2J1\n\n(2) CNISM and Department of Physics, University of Florence, 50019 Sesto Fiorentino (FI), Italy. and\n\n(3) CNR-INFM S 3 National Research Center, I-41100 Modena, Italy\n\n(Dated: June 8, 2022)\n\nThe properties of helical thin films have been thoroughly investigated by classical Monte Carlo simulations. 
The employed model assumes classical planar spins in a body-centered tetragonal lattice, where the helical arrangement along the film growth direction has been modeled by nearest neighbor and next-nearest neighbor competing interactions, the minimal requirement to get helical order. We obtain that, while the in-plane transition temperatures remain essentially unchanged with respect to the bulk ones, the helical/fan arrangement is stabilized at more and more low temperature when the film thickness, n , decreases; in the ordered phase, increasing the temperature, a softening of the helix pitch wave-vector is also observed. Moreover, we show also that the simulation data around both transition temperatures lead us to exclude the presence of a first order transition for all analyzed sizes. Finally, by comparing the results of the present work with those obtained for other models previously adopted in literature, we can get a deeper insight about the entwined role played by the number (range) of interlayer interactions and surface effects in non-collinear thin films.\n\nPACS numbers: 64.60.an,64.60.De,75.10.Hk,75.40.Cx,75.70.Ak.\n\n## I. INTRODUCTION\n\nThe study of low dimensional frustrated magnetic systems 1 still raises great interest, both in consequence of theoretical aspects, related to their peculiar critical properties 2 , and in view of possible technological applications 3 . Indeed, beside conventional ferromagnetic or antiferromagnetic phase transitions, in many new materials other nontrivial and unconventional forms of ordering have been observed 4,5 . A quantity of particular interest in this context is the spin chirality, an order parameter which turned out to be extremely relevant in, e.g., magnetoelectric materials 6 , itinerant MnSi 7 , binary compounds as FeGe 8 , glass transition of spins 9 , and XY helimagnets, as Holmium, Terbium or Dysprosium 10 . 
In the latter case, a new universality class was predicted because a Z 2 × SO (2) symmetry is spontaneously broken in the ordered phase 2 : In fact, when dealing with such systems, in addition to the SO (2) symmetry of the spin degrees of freedom /vector S i , one has to consider also the Z 2 symmetry of the spin chirality κ ij ∝ [ /vector S i × /vector S j ] z .\n\nFor these rare-earth elements, the development of new and sophisticated experimental methods 11 has allowed to obtain ultra-thin films where the non-collinear modulation is comparable with the film thickness. Under such conditions the lack of translational invariance due to the presence of surfaces results decisive in order to observe a drastic change of the magnetic structures 12 . Recent experimental data on ultra-thin Holmium films 13 have been lately interpreted and discussed 14,15 on the basis of detailed classical Monte Carlo (MC) simulations of a spin Hamiltonian, which is believed to give a realistic modeling of bulk Holmium. Such Hamiltonian, proposed by Bohr et al. 16 , allows for competitive middle-range in-", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0510.pdf" - }, - { - "text": "In summary, we have demonstrated antiferromagnetic coupling between Fe and (Ga,Mn)As layers in bilayer structures. A markedly different coupling is observed for the bulk of the (Ga,Mn)As layer and for Mn moments in the near-interface region. A thickness-dependent exchange bias field is observed to affect the whole of the bulk (Ga,Mn)As layer, which aligns antiparallel to the Fe layer at low fields, and switches to parallel when the external field is large enough to overcome the bias field and the magnetocrystalline anisotropy fields. In contrast, the interfacial Mn moments remain aligned antiparallel to the Fe layer even at 20 kOe, the largest field studied, and are polarized at temperatures well above the T C of the bulk (Ga,Mn)As layer. 
The latter observation confirms the recently reported result of Ref. 7, in which the Fe/(Ga,Mn)As bilayers were produced by a different method but showed qualitatively similar behavior of the interfacial moments. Our results shed new light on the magnetic coupling in Fe/(Ga,Mn)As hybrid layers which are of potential interest for room temperature spintronics, and also offer a means of controlling the spin orientation in a FM semiconductor.\n\nWe acknowledge support from EU grants SemiSpinNet-215368 and NAMASTE-214499, and STFC studentship grant CMPC07100. The Advanced Light Source is supported by the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. We thank Leigh Shelford for help during the Diamond beamtime.\n\n- Polesya, H. Ebert, U. Wurstbauer, M. Hochstrasser, G. Rossi, G. Woltersdorf, W. Wegscheider, and C. H. Back, Phys. Rev. Lett. 101 , 267201 (2008).\n- 8 R. P. Campion, K. W. Edmonds, L. X. Zhao, K. Y. Wang, C. T. Foxon, B. L. Gallagher, and C. R. Staddon, J. Crystal Growth 247 , 42 (2003).\n- 9 F. Maccherozzi, G. Panaccione, G. Rossi, M. Hochstrasser, M. Sperl, M. Reinwald, G. Woltersdorf, W. Wegscheider, and C. H. Back, Phys. Rev. B 74 , 104421 (2006).\n- 10 Ch. Binek, S. Polisetty, X. He and A. Berger, Phys. Rev. Lett. 96 , 067201 (2006).\n- 11 C. Won, Y.Z. Wu, E. Arenholz, J. Choi, J. Wu, and Z. Q. Qiu, Phys. Rev. Lett. 99 , 077203 (2007).\n- 12 J. Nogues and I. K. Schuller, J. Magn. Magn. Mater. 192 , 203 (1999).\n- 13 K. F. Eid, M. B. Stone, K. C. Ku, O. Maksimov, P. Schiffer, N. Samarth, T. C. Shih and C. J. Palmstrom, Appl. Phys. Lett. 85 , 1556 (2004).\n- 14 B. T. Thole, P. Carra, F. Sette, and G. van der Laan, Phys. Rev. Lett. 68 , 1943 (1992); P. Carra, B. T. Thole, M. Altarelli, and X. Wang, Phys. Rev. Lett. 70 , 694 (1993).\n- 15 T. Jungwirth, J. Masek, K. Y. Wang, K. W. 
Edmonds,", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2449.pdf" - }, - { - "text": "To confirm the presence of AFM interlayer coupling, we performed XMCD measurements at the Mn and Fe\n\nL 2 , 3 absorption edges in order to determine the magnetic response of the individual elements. In L 2 , 3 XMCD, electrons are excited from a 2 p core level to the unoccupied 3 d valence states of the element of interest by circularly polarized x-rays at the resonance energies of the transitions. The difference in absorption for opposite polarizations gives a direct and element-specific measurement of the projection of the 3 d magnetic moment along the xray polarization vector. The absorption cross-section is conventionally obtained by measuring the decay products - either fluorescent x-rays or electrons - of the photoexcited core hole. The type of decay product measured determines the probing depth of the technique. For Mn L 2 , 3 absorption, the probing depths for FY and TEY detection are λ FY ≈ 100 nm and λ TEY ≈ 3 nm. In the current experiment, the Mn XMCD measured using FY and TEY are thus sensitive to the bulk of the (Ga,Mn)As film and the near-interface layers, respectively.\n\nFigure 2(a)-(c) shows the magnetic field dependence of XMCD asymmetry, defined as ( I l -I r ) / ( I l + I r ) where I l ( r ) is the absorption for left- (right-) circularly polarized x-rays. This is measured at the Fe and Mn L 3 absorption peaks for a Fe(2 nm)/(Ga,Mn)As(10 nm) sample at 2 K. The external field is applied along the photon incidence direction, which is at 70 · to the surface normal with an in-plane projection along the [110] axis. The XMCD data show that the Fe film displays a square hysteresis loop with a single magnetization switch, as expected for a monocrystalline Fe film with strong uniaxial magnetic anisotropy. The Mn XMCD shows a more complicated loop due to the effect of the interlayer coupling. 
The projected Mn moment aligns antiparallel to the Fe moment at remanence, and undergoes a magnetization reversal of opposite sign to the Fe. With further increase of the external magnetic field, the Mn moment gradually rotates away from antiparallel alignment with the Fe layer, and into the field direction. Qualitatively similar behavior is observed for the Fe(2 nm)/(Ga,Mn)As(20 nm) sample: the (Ga,Mn)As layer is aligned antiparallel to the Fe layer at zero field, although the bias field is lower by approximately a factor of two.\n\nClear differences are observed between the Mn XMCD hysteresis loops obtained using TEY and FY detection modes. For FY the magnitude of the XMCD is similar (but of opposite sign) at remanence and at high magnetic fields, whereas for TEY at remanence it is approximately a factor of two larger than at 1000 Oe. The Mn L 2 , 3 XMCD spectra recorded at remanence and at 1000 Oe, shown in Fig. 3, confirm this result. At remanence the FY and TEY detected XMCD have similar magnitudes. However, under a large external field the XMCD is substantially smaller in TEY than in FY, confirming that the net magnetization of the Mn ions near the interface is significantly less than in the bulk of the (Ga,Mn)As film. This is the case even up to the highest field applied (20 kOe). By applying the XMCD sum rules 14 to the TEY data, and by comparing the spectra to previous measurements on well-characterized (Ga,Mn)As", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2449.pdf" - }, - { - "text": "## Realization of the Exactly Solvable Kitaev Honeycomb Lattice Model in a Spin Rotation Invariant System\n\nFa Wang 1\n\n1 Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA\n\nThe exactly solvable Kitaev honeycomb lattice model is realized as the low energy effect Hamiltonian of a spin-1/2 model with spin rotation and time-reversal symmetry. 
The mapping to low energy effective Hamiltonian is exact, without truncation errors in traditional perturbation series expansions. This model consists of a honeycomb lattice of clusters of four spin-1/2 moments, and contains short-range interactions up to six-spin(or eight-spin) terms. The spin in the Kitaev model is represented not as these spin-1/2 moments, but as pseudo-spin of the two-dimensional spin singlet sector of the four antiferromagnetically coupled spin-1/2 moments within each cluster. Spin correlations in the Kitaev model are mapped to dimer correlations or spin-chirality correlations in this model. This exact construction is quite general and can be used to make other interesting spin-1/2 models from spin rotation invariant Hamiltonians. We discuss two possible routes to generate the high order spin interactions from more natural couplings, which involves perturbative expansions thus breaks the exact mapping, although in a controlled manner.\n\nPACS numbers: 75.10.Jm, 75.10.Kt\n\n## Contents\n\n## I. Introduction.\n\n1\n\n- II. Formulation of the Pseudo-spin-1/2 from Four-spin Cluster.\n\n## III. Realization of the Kitaev Model.\n\n3\n\n- IV. Generate the High Order Physical Spin Interactions by Perturbative Expansion.\n- A. Generate the High Order Terms by Coupling to Optical Phonon.\n- B. Generate the High Order Terms by Magnetic Interactions between Clusters.\n\n## V. Conclusions.\n\n8\n\n## Acknowledgments\n\n8\n\n- A. Coupling between Distortions of a Tetrahedron and the Pseudo-spins\n- B. Derivation of the Terms Generated by Second Order Perturbation of Inter-cluster Magnetic Interactions\n\n8\n\n9\n\nReferences 10\n\n## I. INTRODUCTION.\n\nKitaev's exactly solvable spin-1/2 honeycomb lattice model 1 (noted as the Kitaev model hereafter) has inspired great interest since its debut, due to its exact solvability, fractionalized excitations, and the potential\n\n5\n\n5\n\n7\n\n2\n\nto realize non-Abelian anyons. 
The model simply reads\n\nH Kitaev = -∑ x -links J x τ x j τ x k -∑ y -links J y τ y j τ y k -∑ z -links J z τ z j τ z k (1)\n\nwhere τ x,y,z are Pauli matrices, and x, y, z -links are defined in FIG. 1. It was shown by Kitaev 1 that this spin1/2 model can be mapped to a model with one Majorana fermion per site coupled to Ising gauge fields on the links. And as the Ising gauge flux has no fluctuation, the model can be regarded as, under each gauge flux configuration, a free Majorana fermion problem. The ground state is achieved in the sector of zero gauge flux through each hexagon. The Majorana fermions in this sector have Dirac-like gapless dispersion resembling that of graphene, as long as | J x | , | J y | , and | J z | satisfy the triangular relation, sum of any two of them is greater than the third one 1 . It was further proposed by Kitaev 1 that opening of fermion gap by magnetic field can give the Ising vortices non-Abelian anyonic statistics, because the Ising vortex will carry a zero-energy Majorana mode, although magnetic field destroys the exact solvability.\n\nGreat efforts have been invested to better understand the properties of the Kitaev model. For example, several groups have pointed out that the fractionalized Majorana fermion excitations may be understood from the more familiar Jordan-Wigner transformation of 1D spin systems 2,3 . The analogy between the non-Abelian Ising vortices and vortices in p + ip superconductors has been raised in serveral works 4-7 . Exact diagonalization has been used to study the Kitaev model on small lattices 8 . And perturbative expansion methods have been developed to study the gapped phases of the Kitaev-type models 9 .", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0266.pdf" - }, - { - "text": "chirality interactions in cold atom optical lattices has been proposed 38 .\n\nOur model (8) is achieved at second order of the perturbation series. 
Higher order terms become truncation errors but may be controlled by small parameters λ x,y,z /J cluster ∼ √ | J x,y,z | /J cluster .\n\n## V. CONCLUSIONS.\n\nWe constructed the exactly solvable Kitaev honeycomb model 1 as the exact low energy effective Hamiltonian of a spin-1/2 model [equations (8) or (9)] with spin-rotation and time reversal symmetry. The spin in Kitaev model is represented as the pseudo-spin in the two-fold degenerate spin singlet subspace of a cluster of four antiferromagnetically coupled spin-1/2 moments. The physical spin model is a honeycomb lattice of such four-spin clusters, with certain inter-cluster interactions. The machinery for the exact mapping to pseudo-spin Hamiltonian was developed (see e.g. TABLE I), which is quite general and can be used to construct other interesting (exactly solvable) spin-1/2 models from spin rotation invariant systems.\n\nIn this construction the pseudo-spin correlations in the Kitaev model will be mapped to dimer or spin-chirality correlations in the physical spin system. The corresponding picture of the fractionalized Majorana fermion excitations and Ising vortices still remain to be clarified.\n\nThis exact construction contains high order physical spin interactions, which is undesirable for practical implementation. We described two possible approaches to reduce this problem: generating the high order spin interactions by perturbative expansion of the coupling to optical phonon, or the magnetic coupling between clusters. This perturbative construction will introduce truncation error of perturbation series, which may be controlled by small expansion parameters. Whether these constructions can be experimentally engineered is however beyond the scope of this study. 
It is conceivable that other perturbative expansion can also generate these high order spin interactions, but this possibility will be left for future works.\n\n## Acknowledgments\n\nThe author thanks Ashvin Vishwanath, Yong-Baek Kim and Arun Paramekanti for inspiring discussions, and Todadri Senthil for critical comments. The author is supported by the MIT Pappalardo Fellowship in Physics.\n\n## Appendix A: Coupling between Distortions of a Tetrahedron and the Pseudo-spins\n\nIn this Appendix we reproduce from Ref. 35 the couplings of all tetrahedron distortion modes to the spin\n\nsystem. And convert them to pseudo-spin notation in the physical spin singlet sector.\n\nConsider a general small distortion of the tetrahedron, the spin Hamiltonian becomes\n\nH cluster , SL = ( J cluster / 2)( ∑ /lscript S /lscript ) 2 + J ' ∑ /lscript\n\nFor each of the resources, various metrics are available and you can select which to be displayed. For example, as shown in Figure A-8, from the four available metrics for the MDisks view (Read, Write, Read latency, and Write latency) only Read and Write IOPS are selected.\n\nFigure A-8 Displaying performance counters\n\n\n\n## Performance data collection and IBM Spectrum Control\n\nAlthough you can obtain performance statistics in standard . xml files, the use of .xml files is a less practical and more complicated method to analyze the IBM Spectrum Virtualize performance statistics. 
IBM Spectrum Control is the supported IBM tool to collect and analyze Storwize V7000 performance statistics.", - "page_start": 773, - "page_end": 773, - "source_file": "sg247938.pdf" - }, - { - "text": "- (i) the number of tests they sold on that day, and\n - (ii) in relation to each test sold on that day-\n - (aa) the date of arrival in England of the person in respect of whom the test was sold, and\n - (bb) whether the person in respect of whom the test was sold is a category 1 arrival or not;\n - (h) if they arrange with another person ('X') for X to carry out any element of the single end-to-end testing service on their behalf, the test provider ensures that X complies with the following so far as relevant to the carrying out of that element-\n - (i) paragraph 3(1)(e) to (i) of Schedule 10 as applied by paragraph (a) of this subparagraph,\n - (ii) paragraph (b) to (g) of this sub-paragraph,", - "page_start": 63, - "page_end": 63, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (h) they have systems in place to identify any adverse incidents or quality control issues in relation to appropriate tests and be able to report them as soon as reasonably practicable to the Secretary of State;\n - (i) they administer or provide an appropriate test to P, on or after the fifth day after the day on which P arrived in England having received the information required by paragraph 4(b) and (c) (as appropriate); and\n - (j) if they arrange with another person ('X') for X to carry out any element of the single end-to-end testing service on their behalf, the test provider ensures that X complies with any of paragraphs (c) to (i) and 5(2), (3) and (5) as is relevant to the carrying out of that element.\n - (2) For the purposes of sub-paragraph (1)-\n - (a) 'point of care test' means a test processed outside a laboratory environment;", - "page_start": 69, - "page_end": 69, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "participant's first two baseline 
scans (that is, preconception) to derive within-participant variability estimates.\n\nBenchmarking our data in this way allows us to capture the degree of change expected due to factors such as image processing and instrumentation variability or other day-to-day changes that could potentially modulate brain size and shape (see ref. 80 for review). The percent change observed over pregnancy (baseline versus 36 weeks gestation) far exceeds the expected variability estimated using both the Day2Day dataset (Supplementary Fig. 11) and our within-participant control data. This was quantified by dividing the observed percent change in GMV metrics (baseline versus 36 weeks) by the global measure of GMV percent variability of each control group (that is, Day2Day, within-participant control), independently for cortex and subcortex.\n\n## Reporting summary\n\nFurther information on research design is available in the Nature Portfolio Reporting Summary linked to this article.\n\n## Data availability\n\nThe dataset consists of 26 MRI scans (T1w, T2w and diffusion scans) alongside state-dependent measures and serum assessments of ovarian sex hormones for each session. The raw data is publicly available at https://openneuro.org/datasets/ds005299. 
Source data are provided with this paper.\n\n## Code availability\n\nNo custom code was used.\n\n## References", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed4.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.2449.pdf", - "query": "What are the differences observed between the Mn XMCD hysteresis loops obtained using TEY and FY detection modes ?", - "target_page": 2, - "target_passage": "For FY the magnitude of the XMCD is similar (but of opposite sign) at remanence and at high mag netic fields, whereas for TEY at remanence it is approx imately a factor of two larger than at 1000 Oe.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "To confirm the presence of AFM interlayer coupling, we performed XMCD measurements at the Mn and Fe\n\nL 2 , 3 absorption edges in order to determine the magnetic response of the individual elements. In L 2 , 3 XMCD, electrons are excited from a 2 p core level to the unoccupied 3 d valence states of the element of interest by circularly polarized x-rays at the resonance energies of the transitions. The difference in absorption for opposite polarizations gives a direct and element-specific measurement of the projection of the 3 d magnetic moment along the xray polarization vector. The absorption cross-section is conventionally obtained by measuring the decay products - either fluorescent x-rays or electrons - of the photoexcited core hole. The type of decay product measured determines the probing depth of the technique. For Mn L 2 , 3 absorption, the probing depths for FY and TEY detection are λ FY ≈ 100 nm and λ TEY ≈ 3 nm. 
In the current experiment, the Mn XMCD measured using FY and TEY are thus sensitive to the bulk of the (Ga,Mn)As film and the near-interface layers, respectively.\n\nFigure 2(a)-(c) shows the magnetic field dependence of XMCD asymmetry, defined as ( I l -I r ) / ( I l + I r ) where I l ( r ) is the absorption for left- (right-) circularly polarized x-rays. This is measured at the Fe and Mn L 3 absorption peaks for a Fe(2 nm)/(Ga,Mn)As(10 nm) sample at 2 K. The external field is applied along the photon incidence direction, which is at 70 · to the surface normal with an in-plane projection along the [110] axis. The XMCD data show that the Fe film displays a square hysteresis loop with a single magnetization switch, as expected for a monocrystalline Fe film with strong uniaxial magnetic anisotropy. The Mn XMCD shows a more complicated loop due to the effect of the interlayer coupling. The projected Mn moment aligns antiparallel to the Fe moment at remanence, and undergoes a magnetization reversal of opposite sign to the Fe. With further increase of the external magnetic field, the Mn moment gradually rotates away from antiparallel alignment with the Fe layer, and into the field direction. Qualitatively similar behavior is observed for the Fe(2 nm)/(Ga,Mn)As(20 nm) sample: the (Ga,Mn)As layer is aligned antiparallel to the Fe layer at zero field, although the bias field is lower by approximately a factor of two.\n\nClear differences are observed between the Mn XMCD hysteresis loops obtained using TEY and FY detection modes. For FY the magnitude of the XMCD is similar (but of opposite sign) at remanence and at high magnetic fields, whereas for TEY at remanence it is approximately a factor of two larger than at 1000 Oe. The Mn L 2 , 3 XMCD spectra recorded at remanence and at 1000 Oe, shown in Fig. 3, confirm this result. At remanence the FY and TEY detected XMCD have similar magnitudes. 
However, under a large external field the XMCD is substantially smaller in TEY than in FY, confirming that the net magnetization of the Mn ions near the interface is significantly less than in the bulk of the (Ga,Mn)As film. This is the case even up to the highest field applied (20 kOe). By applying the XMCD sum rules 14 to the TEY data, and by comparing the spectra to previous measurements on well-characterized (Ga,Mn)As", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2449.pdf" - }, - { - "text": "samples 15 , the projected Mn 3 d magnetic moments are obtained as -1.4 µ B and +0.8 µ B per ion at remanence and 1000 Oe, respectively.\n\nThe difference between these values can be understood as being due to an interface layer which is strongly antiferromagnetically coupled to the Fe layer. At zero field, both the interfacial and bulk Mn are aligned antiparallel to the Fe layer. At high fields, the bulk of the (Ga,Mn)As layer away from the interface is re-oriented into the external field direction. However, the interfacial Mn remains antiparallel to the Fe layer and thus partially compensates the XMCD signal from the bulk of the (Ga,Mn)As. From the size of the remanent and 1000 Oe magnetic moments, it can be estimated that around 25-30% of the TEY XMCD signal can be ascribed to the interfacial Mn which is strongly coupled to the Fe moments.\n\nThe interfacial Mn moments are ascribed to the proximity polarization of the (Ga,Mn)As interface by the Fe layer, such as was shown previously by XMCD as well as ab initio theory 7 . Evidence for this can be observed from measurement of the Mn L 2 , 3 XMCD signal at temperatures above the (Ga,Mn)As T C . Similar to the previous study 7 , we observe a small but not negligible signal at room temperature (Fig. 3), with opposite sign to the Fe L 2 , 3 XMCD. 
Its spectral shape is characteristic of a localized electronic configuration close to d 5 , similar to bulk (Ga,Mn)As 7,9,15 but in contrast to Mn in more metallic environments such as Mn x Fe 1 -x 7 or MnAs 16 . A slight broadening is observed on the low energy side of the Mn L 3 peak, which may be due to the different screening induced by proximity to the Fe layer. Since the measured intensity is attenuated with distance z from the surface as I = I 0 exp( -z/λ TEY ), the thickness of the strongly coupled interface layer is estimated to be ∼ 0.7 nm or 2-3\n\n- 2 J.-H. Chung, S. J. Chung, S. Lee, B. J. Kirby, J. A. Borchers, Y. J. Cho, X. Liu, and J. K. Furdyna, Phys. Rev. Lett. 101 , 237202 (2008).\n- 3 M. Wang, R. P. Campion, A. W. Rushforth, K. W. Edmonds, C. T. Foxon, and R. P. Campion, Appl. Phys. Lett. 93 , 132103 (2008).\n- 4 M. Zhu, M. J. Wilson, B. L. Sheu, P. Mitra, P. Schiffer, and N. Samarth, Appl. Phys. Lett. 91 , 192503 (2007); M. Zhu, M. J. Wilson, P. Mitra, P. Schiffer, and N. Samarth, Phys. Rev. B 78 , 195307 (2008).\n- 5 S. Mark, C. Gould, K. Pappert, J. Wenisch, K. Brunner, G. Schmidt, and L. W. Molenkamp, Phys. Rev. Lett. 103 , 017204 (2009).\n- 6 G. Wastlbauer and J.A.C. Bland, Adv. Phys. 54 , 137 (2005).\n- 7 F. Maccherozzi, M. Sperl, G. Panaccione, J. Minar, S.\n\nmonolayers, assuming a uniform distribution of Mn ions and magnetic moments throughout the (Ga,Mn)As film. This is around a factor of three thinner than in Ref. 7 , which could be due to the lower Mn concentration or the different preparation method of the present samples.", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2449.pdf" - }, - { - "text": "/s32\n\nFIG. 3. (color online) (a) Polarization-averaged Mn L 2 , 3 spectrum for a Fe/(Ga,Mn)As film; (b) XMCD spectra measured in remanence at 2 K; (c) XMCD spectra measured under a 1000 Oe applied field at 2 K; (d) XMCD spectrum measured under a 2000 Oe applied field at 300 K. 
XMCD spectra are obtained using TEY (thick red lines) and FY (thin blue lines) detection.\n\n\n\n/s32", - "page_start": 5, - "page_end": 5, - "source_file": "1001.2449.pdf" - }, - { - "text": "measurements were performed on beamline I06 at the Diamond Light Source, and on beamline 4.0.2 at the Advanced Light Source. Total-electron yield (TEY) and fluorescence yield (FY) were monitored simultaneously using the sample drain current and the photocurrent of a diode mounted at 90 · to the incident beam, respectively.\n\nSQUID magnetometry measurements were first performed on control Fe/GaAs(001) and (Ga,Mn)As/GaAs(001) samples, grown under the same conditions as the bilayers, to determine the magnetic anisotropies of the individual layers and the Curie temperature of the (Ga,Mn)As layer. The Fe film has a uniaxial magnetic anisotropy with easy axis along the [110] orientation, similar to previous studies 6 . For the (Ga,Mn)As control sample, there is a competition between cubic and uniaxial magnetic anisotropies, with the former dominant at low temperatures and favoring easy axes along the in-plane 〈 100 〉 orientations, and the latter dominant close to T C ( ∼ 35 K) giving an easy axis along the [1 ¯ 10] orientation. Figure 1 shows [110] magnetization versus temperature curves and low temperature hysteresis loops for a bilayer film containing a 20 nm thick (Ga,Mn)As layer. The total remnant moment of the bilayer film decreases on cooling under zero magnetic field below the T C of the (Ga,Mn)As, indicating that this layer aligns antiparallel to the Fe magnetization at zero field. The hysteresis curve shows a two-step magnetization reversal, indicating different behavior of the Fe and (Ga,Mn)As layers, with the smaller loop attributed to the dilute moment (Ga,Mn)As film. The minor hysteresis loop shown in Fig. 1 clearly shows a shift from zero field by a bias field H E , indicating that the Fe layer induces an exchange bias in the magnetic semiconductor. 
The shape and size of the minor loop is in agreement with the hysteresis loop for the control (Ga,Mn)As sample, also shown in Fig. 1. This strongly indicates that the exchange bias affects the whole of the (Ga,Mn)As layer in the bilayer sample.\n\nSimilar behavior is observed for bilayer samples containing a 10 nm or 50 nm (Ga,Mn)As layer, with a bias field which is approximately inversely proportional to the thickness d of the ferromagnetic semiconductor layer (Fig. 1, inset). This 1/ d dependence of H E was found previously for MnAs/(Ga,Mn)As bilayers 4 , and is generally observed in exchanged-biased thin films 12 . From this dependence it is possible to describe the exchange bias in terms of an interface energy per unit area, ∆ E = M FS H E d = 0 . 003 erg/cm 2 . This value is rather small compared to typical exchange bias systems 12 , reflecting the low moment density M FS of the diluted FM semiconductor layer. However, the bias field for a given (Ga,Mn)As thickness is larger than is observed for MnO/(Ga,Mn)As structures 13 , while the reproducibility and flexibility of the present structures is much higher due to the single-crystalline ferromagnetic nature of the Fe layer.\n\nTo confirm the presence of AFM interlayer coupling, we performed XMCD measurements at the Mn and Fe", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2449.pdf" - }, - { - "text": "Here, we demonstrate an antiferromagnetic coupling and exchange bias in Fe/(Ga,Mn)As bilayer films, by combining element-specific XMCD measurements and bulk-sensitive superconducting quantum interference device (SQUID) magnetometry. As with previous studies of FM metal/FM semiconductor bilayers 4,5 (and in contrast to AFM coupled FM metal/FM metal exchange bias structures 10,11 ) the layers are in direct contact without a non-magnetic spacer in between. We distinguish interface and bulk (Ga,Mn)As layers that are respectively strongly and weakly antiferromagnetically coupled to the Fe overlayer. 
In agreement with Ref. 7 , the interface layer remains polarized at room temperature.\n\nThe Fe and (Ga,Mn)As layers of the present study were both grown by molecular beam epitaxy in the same ultra-high vacuum system, in order to ensure a clean interface between them. The (Ga,Mn)As layer of thickness 10 to 50 nm was deposited on a GaAs(001) substrate at a temperature of 260 · C, using previously established methods 3,8 . A low Mn concentration of x ≈ 0 . 03 was chosen in order to avoid the formation of compensating Mn interstitials. The substrate temperature was then reduced to ∼ 0 · C, before depositing a 2 nm Fe layer, plus a 2 nm Al capping layer. In-situ reflection high energy electron diffraction and ex-situ x-ray reflectivity and diffraction measurements confirmed that the layers are single-crystalline with sub-nm interface roughness. SQUID magnetometry measurements were performed using a Quantum Design Magnetic Property Measurement System. Mn and Fe L 2 , 3 x-ray absorption and XMCD", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "Figure 1: (Left) The preliminary significance measured from each of the 49 non-detected candidates using standard analysis cuts. The curve shows a Gaussian distribution, with mean zero and standard deviation one, normalized to the number of blazars. A similar result is obtained using analysis cuts optimized for soft-spectrum sources. (Right) The distribution of flux upper limits for the non-detected blazars in percentage of Crab Nebula flux above the observation threshold. The time-weighted average limit is less than ∼ 2% Crab flux.\n\n\n\n\n\nσ\n\nsince the launch of Fermi include LAT detections. In addition, several MWL campaigns on the well-studied VHE blazars Mkn 421 and Mkn 501 (please see the contributions of D. Gall and A. Konopelko in these proceedings) were also performed. 
Highlights of these campaigns include:\n\n - · 1ES 2344+514: A major (50% Crab) VHE flare, along with correlations of the VHE and X-ray flux were observed from this HBL. The VHE and X-ray spectra harden during bright states, and a synchrotron self-Compton (SSC) model can explain the observed SED in both the high and low states [26].\n - · 1ES 1218+304: This HBL flared during VERITAS MWL observations. Its unusually hard VHE spectrum strongly constrains the EBL. The observed flaring rules out kpc-scale jet emission as the explanation of the spectral hardness and places the EBL constraints on more solidfooting [27, 28].\n - · 1ES 0806+524: The observed SED of this new VHE HBL can be explained by an SSC model [16].\n - · W Comae: This IBL, the first discovered at VHE, flared twice in 2008 [14, 15]. Modeling of the SED is improved by including an externalCompton (EC) component in an SSC interpretation.\n - · 3C 66A: This IBL flared at VHE and MeV-GeV energies in 2008[17, 18]. Similar to W Comae and PKS 1424+240, modeling of observed SED suggests a strong EC component in addition to an SSC component.\n - · Mkn 421: This HBL exhibited major flaring behavior for several months in 2008. Correlations of the VHE and X-ray flux were observed, along with spectral hardening with increased flux in both bands [29].\n - · RGBJ0710+591: Modeling the SED of this HBL with an SSC model yields a good fit to the data. The inclusion of an external Compton component does not improve the fit.\n - · PKS1424+240: The broadband SED of this IBL (at unknown redshift) is well described by an SSC model favoring a redshift of less than 0.1 [21]. Using the photon index measured with Fermi-LAT in combination with recent EBL absorption models, the VERITAS data indicate that the redshift of PKS 1424+240 is less than 0.66.\n\n## 8. Conclusions\n\nThe first two years of the VERITAS blazar KSP were highly successful. 
Highlights include the detection of more than a 16 VHE blazars with the observations almost always having contemporaneous MWL data. Among these detections are 8 VHE blazar discoveries, including the first three IBLs known to emit VHE γ -rays. All but a handful of the blazars on the initial VERITAS discovery target list were observed, and the flux limits generated for those not VHE detected are generally the most-constraining ever. The excess seen in the stacked blazar analysis suggests that the initial direction of the VERITAS discovery program was well justified, and that follow-up observations of many of these initial targets will result in VHE discoveries. In addition, the Fermi-LAT is identifying many new compelling targets for the VERITAS blazar discovery program. These new candidates have already resulted in 3 VHE blazar discoveries. The future of the VERITAS blazar discovery program is clearly very bright.\n\nThe MWL aspect of the VERITAS blazar KSP has also been highly successful. Every VERITAS observation of a known, or newly discovered, VHE blazar has been accompanied by contemporaneous MWL observations. 
These data have resulted in the identifica-", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0770.pdf" - }, - { - "text": "## Materials & experimental systems\n\nn/a\n\nInvolved in the study\n\nAntibodies\n\nEukaryotic cell lines\n\nPalaeontology and archaeology\n\nAnimals and other organisms\n\nClinical data\n\nDual use research of concern\n\nPlants\n\n## Methods\n\nn/a\n\nInvolved in the study\n\nChIP-seq\n\nFlow cytometry\n\nMRI-based neuroimaging\n\n## Magnetic resonance imaging\n\n## Experimental design\n\nDesign type\n\nStructural & Diffusion MRI\n\nDesign specifications\n\nNo task-based fMRI used in this manuscript.\n\nBehavioral performance measures\n\nN/A; no performance metrics collected\n\n## Acquisition\n\nImaging type(s)\n\nStructural\n\nField strength\n\n3\n\nSequence & imaging parameters\n\nHigh-resolution anatomical scans were acquired using a T1-weighted (T1w) magnetization prepared rapid gradient echo (MPRAGE) sequence (TR = 2500 ms, TE = 2.31 ms, T1 = 934 ms, flip angle = 7°, 0.8 mm thickness) followed by a gradient echo fieldmap (TR = 758 ms; TE1 = 4.92 ms; TE2 = 7.38 ms; flip angle = 60°). 
A T2-weighted (T2w) turbo spin echo (TSE) scan was also acquired with an oblique coronal orientation positioned orthogonally to the main axis of the hippocampus (TR/TE = 9860/50 ms, flip angle = 122°, 0.4 × 0.4 mm2 in-plane resolution, 2 mm slice thickness, 38 interleaved slices with no gap, total acquisition time = 5:42 min).\n\nArea of acquisition\n\nT1-weighted and dMRI scans = whole-brain\n\nT2-weighted scan = high-resolution imaging of medial temporal lobe\n\nDiffusion MRI\n\nUsed\n\nNot used\n\nParameters TR = 4300 ms, echo time = 100.2 ms, 139 directions, b-max = 4990, FoV = 259 x 259 mm, 78 slices, 1.7986 x 1.7986 x 1.8 mm voxel resolution\n\n## Preprocessing\n\nPreprocessing software\n\nGray Matter Volume & Cortical Thickness: Advanced Normalization Tools (ANTs), version 2.1.0\n\nFreeSurfer, version 7\n\nT2-weighted MTL scans:\n\nAutomatic Segmentation of Hippocampal Subfields (ASHS), version 7/2018\n\nDiffusion imaging:\n\nQSIprep, version 0.15.3\n\nDSI Studio, version Chen-2022-07-31\n\nNormalization\n\nNormalization differed by modality due to inherent limitations of applicable processing pipelines.\n\nGray Matter Volume & Cortical Thickness:\n\nAll analyses were kept in native subject-space to limit the amount of warping and leverage the advantages of a precision imaging design.\n\nT2-weighted MTL scans:\n\nT2w images were registered to the segmentation template (see below) using ANTs deformable registration.\n\nDiffusion imaging:\n\nInitial preprocessing through QSIprep normalized diffusion images to the skull-stripped T1w images. Diffusion images were then reconstructed in MNI space using DSI studio's Q-space Diffeomorphic Reconstruction.", - "page_start": 15, - "page_end": 15, - "source_file": "pubmed4.pdf" - }, - { - "text": "were as follows: estradiol-1.0 pg ml -1 , 1-500 pg ml -1 , <5% relative s.d. (RSD); progesterone-0.05 ng ml -1 , 0.05-10 ng ml -1 , 9.33% RSD. 
Serological samples were not acquired in five sessions due to scheduling conflicts with UC Irvine's Center for Clinical Research.\n\nMRI acquisition . MRI scanning sessions at the University of California, Santa Barbara and Irvine were conducted on 3T Prisma scanners equipped with 64-channel phased-array head/neck coil (of which 50 coils are used for axial brain imaging). High-resolution anatomical scans were acquired using a T1-weighted (T1w) magnetization prepared rapid gradient echo (MPRAGE) sequence (repetition time (TR) = 2,500 ms, time to echo (TE) = 2.31 ms, inversion time (TI) = 934 ms, flip angle = 7°, 0.8 mm thickness) followed by a gradient echo field map (TR = 758 ms, TE1 = 4.92 ms, TE2 = 7.38 ms, flip angle = 60°). A T2-weighted (T2w) turbo spin echo scan was also acquired with an oblique coronal orientation positioned orthogonally to the main axis of the hippocampus (TR/ TE = 9,860/50 ms, flip angle = 122°, 0.4 × 0.4 mm 2 in-plane resolution, 2-mm slice thickness, 38 interleaved slices with no gap, total acquisition time = 5 min and 42 sec). The Diffusion Spectrum Imaging (DSI) protocol sampled the entire brain with the following parameters: single phase, TR = 4,300 ms, echo time = 100.2 ms, 139 directions, b -max = 4,990, FoV = 259 × 259 mm, 78 slices, 1.7986 × 1.7986 × 1.8 mm voxel resolution. These images were linearly registered to the whole-brain T1w MPRAGE image. A custom foam headcase was used to provide extra padding around the head and neck, as well as to minimize head motion. 
Additionally, a custom-built sound-absorbing foam girdle was placed around the participant's waist to attenuate sound near the fetus during second-trimester and third-trimester scanning.", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed4.pdf" - }, - { - "text": "space, so no transformation is needed.\n\n - iTargetPage Specifies the page number of the destination page within the document.\n - xtfvTarget Specifies the x-coordinate of the target location on the destination page. The unit of measure for this value is points.\n - ytfvTarget Specifies the y-coordinate of the target location on the destination page. The unit of measure for this value is points.\n - dytfTargetPage The height of the destination page in points. The offset specified by the ytfvTarget member is relative to the upper-left corner of the page. However, some fixed-format types use a coordinate system that is relative to the bottom-left corner of the page. For these types of documents, the page height is required to convert the offset.\n\n## DocExComment\\_ColorInfo\n\nThe DocExComment\\_ColorInfo structure specifies color-state information for the EMF. For more information about this structure, see the section Extended Color Support.\n\n```\nC++ struct DocExComment\\_ColorInfo { DWORD ident {}; DWORD iComment {}; COLORREF clr { 0 }; BOOL fForeColor {}; };\n```\n\nThe members of the DocExComment\\_ColorInfo structure are as follows:\n\n - ident Specifies the constant value, msodocexsignature, which identifies this EMF comment as containing semantic information.\n - iComment Specifies the MSODOCEXCOMMENT value, msodocexcommentColorInfo.\n - clr Specifies a color ID that represents a current color state in the EMF.\n - fForeColor Specifies whether the color ID in the clr member represents a foreground color or a background color. 
If this member has a value of true , the", - "page_start": 17, - "page_end": 17, - "source_file": "office-pdf.pdf" - }, - { - "text": "## IST MODE OR PHUGOID\n\n\n\n2ND\n\nMODE OR SHORT PERIOD\n\nOSCILLATION\n\nMOTION OCCURS AT ESSENTIALLY CONSTANT SPEED\n\nFigure 4.20. Longiitudinal Dynamic Sttxbility\n\n\n\n0", - "page_start": 297, - "page_end": 297, - "source_file": "00-80T-80.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_KCN_2013.pdf", - "query": "What is Kingsgate ?", - "target_page": 2, - "target_passage": "Kingsgate is a highly successful gold mining, development and exploration company with two operating gold mines and two advanced development projects.", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "The Board of Kingsgate is determined to reestablish the path to building shareholder wealth via profits and dividends despite a difficult external environment. Shareholders can look forward to a steady performance from Chatree and a turn-around at Challenger coupled with the completion of feasibility studies at the two major development projects over the coming year.\n\nI would also like to thank our Chief Executive Officer and Managing Director, Gavin Thomas, Kingsgate management and all of the Kingsgate, Akara and Challenger personnel and the project teams for their part in delivering the operational performance during what was a difficult year for your Company.", - "page_start": 3, - "page_end": 3, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "\n\n## Performance rights\n\nThe number of performance rights held during the financial year by each Director of Kingsgate and each of the specified executives of the Group, including their personally-related entities, are set out as follows:", - "page_start": 108, - "page_end": 108, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "| Kingsgate Peru SRL | Peru | Ordinary | 100 | 100 |\n| Minera Kingsgate Argentina S.A. 
| Argentina | Ordinary | 100 | 100 |", - "page_start": 95, - "page_end": 95, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "\n\n\n\nTHAI LA ND\n\nKingsgate is a highly successful gold mining, development and exploration company with two operating gold mines and two advanced development projects. Shareholders can look forward to the benefits of this strong operating and development platform, where Kingsgate aims to build value though operating, earnings and dividend growth for the benefit of all stakeholders.\n\nAUST RA LI\n\nA\n\n", - "page_start": 1, - "page_end": 1, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Senior Management\n\nKingsgate's executives have a comprehensive range of skills and experience including mine development and operations, exploration, finance and administration. They are supported by highly qualified specialists, whose backgrounds cover the full scope of mining resources activities.\n\nSenior members of Kingsgate's management team are:\n\n## Gavin Thomas\n\nBSc (Geology), FAusIMM\n\n## Managing Director and Chief Executive Officer\n\nGavin Thomas was appointed Chief Executive Officer of Kingsgate in 2004 and joined the Kingsgate Board on 16th November 2007. Gavin has had a successful career in developing mining companies from the exploration phase into mid-tier gold or copper producers. He has over 42 years of international experience in exploring for, evaluating, developing, operating and reclaiming mines in North and South America, Australia, the Southwest Pacific, Asia and Europe. Amongst Gavin's credits is the discovery of 'Lihir' in Papua New Guinea, one of the largest gold deposits in the world. In particular, he has extensive experience in Thailand and South America.\n\n## Duane Woodbury\n\nBEc (Hons)\n\n## Chief Financial Officer\n\nDuane Woodbury was appointed Chief Financial Officer of Kingsgate on 1 September 2011. 
Duane has a BEc (Hons) Degree and has worked in various financial, accounting and advisory roles during his career in a number of locations, including London, New York and Singapore. He has been assisting Kingsgate in its business development initiatives since August 2007 and brings over 20 years of experience in financial markets and corporate finance transactions, principally with the Macquarie Group.\n\n## Tim Benfield\n\nDip CSM (mining), MBA, MAusIMM\n\n## Chief Operating Officer\n\nTim Benfield joined Kingsgate in February 2012 as Chief Operating Officer. Tim is a mining engineer with over 21 years underground and open pit experience in the mining industry in both operational and corporate roles. He has operational and project development experience in Australia, Africa and Saudi Arabia. This includes 10 years with Barrick Gold of Australia where he provided support to four operating mines and two development projects. Tim was most recently General Manager of the Pajingo Gold mine in Queensland for Evolution Mining Limited.\n\n## Ross Coyle\n\nBA, FCPA, FCIS\n\n## General Manager Finance and Administration Company Secretary\n\nRoss Coyle joined Kingsgate in March 2011 following the Company's acquisition of Dominion Mining Limited and was with the Dominion group for over 25 years. He is a qualified accountant and has over 30 years experience in finance and accounting within the resource industry. He was Finance Director of Dominion from 1996. Ross was appointed Kingsgate's Company Secretary in September 2011.\n\n## Joel Forwood\n\nBsc (Hons) FFin\n\n## General Manager Corporate and Markets\n\nJoel Forwood joined Kingsgate in November 2010 and has over 27 years experience in the resource and investment industries covering investor relations, funds management and exploration. For over 12 years, he has been leading investor relations at a number of listed companies, most recently for Lihir Gold Limited. 
Prior to this he was a fund manager with Queensland Investment Corporation (QIC) following his early career in mineral exploration with BHP and corporate development with RGC.\n\n## Ronald James\n\nBSc (Geology), MAusIMM, MAIG\n\n## General Manager Exploration and Resource Development", - "page_start": 40, - "page_end": 40, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Deferred rights\n\nThe number of deferred rights held during the financial year by each Director of Kingsgate and each of the specified executives of the Group, including their personally-related entities, are set out as follows:", - "page_start": 108, - "page_end": 108, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "\n\n## Directors' Report\n\nYour Directors present their report on the Group consisting of Kingsgate Consolidated Limited and the entities it controlled at the end of, or during, the year ended 30 June 2013.\n\n## Directors\n\nThe following persons were Directors of Kingsgate Consolidated Limited during the whole of the financial year and up to the date of this report.\n\n - 〉 Ross Smyth-Kirk Chairman\n - 〉\n\nPeter Alexander Non-Executive Director\n\n - 〉\n\nCraig Carracher Non-Executive Director\n\n - 〉 Peter McAleer\n\nNon-Executive Director\n\n - 〉 Gavin Thomas\n\nExecutive Director\n\n## Principal activities\n\nThe principal activities of Kingsgate Consolidated Limited are mining and mineral exploration in Australia, South East Asia and South America.\n\n## Dividends\n\n## Review of operations and results\n\n## Operational performance\n\nKingsgate is a gold mining, development and exploration company based in Sydney, Australia. Kingsgate owns and operates two gold mines, the world class Chatree Mine in Thailand and the underground Challenger Mine in South Australia. 
In addition, the Company has two advanced development projects, the Nueva Esperanza Silver / Gold Project, in the highly prospective Maricunga Gold / Silver Belt in Chile, and the Bowdens Silver Project in New South Wales, Australia. From this operating and development platform, Kingsgate aims to build value for all shareholders.\n\nGroup gold production was 199,897 ounces, a decrease of 4% on the previous corresponding year. The contribution from Chatree was 133,681 ounces with 66,216 ounces from Challenger.\n\nDividends paid to members during the financial year were as follows:\n\n| | 2013 $'000 | 2012 $'000 |\n|-------------------------------------------------------------------------------------------------------------------|---------------|---------------|\n| Final dividend declared for the year ended 30 June 2012 of 10 cents per fully paid share paid on 1 October 2012 | 15,148 | 6,829 |\n| Interim dividend declared for the year ended 30 June 2013 of 5 cents per fully paid share paid on 12 April 2013 | 7,591 | 15,196 |\n| Total dividends | 22,739 | 22,025 |\n\nChatree gold production was 10% higher than the previous corresponding period as a result of an increase in throughput from the expanded Chatree process plant and access to higher grade oxide ore from Q Prospect.\n\nChallenger gold production was 24% lower than the previous corresponding year given additional dilution and depletion at Challenger Deeps and a shortfall in planned development. This resulted in lower ore tonnes from the mine that was supplemented by low grade stockpiled ore. Following the fall in the gold price a strategic review of Challenger was implemented that has resulted in a new mine plan to focus primarily on the higher grade Challenger West orebody. 
The new mine plan will be implemented during the first three months of the 2014 financial year.", - "page_start": 43, - "page_end": 43, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## (d) Equity instrument disclosures relating to key management personnel\n\n## Share holdings\n\nThe number of shares in the Company held during the financial year by each Director of Kingsgate and each of the other Key Management Personnel of the Group, including their personally-related entities are set out as follows:", - "page_start": 106, - "page_end": 106, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Financing Arrangements\n\n## Corporate loan facility\n\nKingsgate has a three year secured loan facility with Investec which was amended during the year. The amended facility has a limit of $40 million (30 June 2012: $50 million), of which $20 million has been drawn down as at 30 June 2013 (30 June 2012: $40 million).\n\n## Convertible loan facility\n\nKingsgate has a five year A$35 million convertible loan facility with Investec entered into in a prior period to provide funding for the Bowdens acquisition. Kingsgate has the option to make a prepayment against the facility with an issue of Kingsgate shares.\n\n## Restructure of corporate loan and convertible loan facilities\n\nAs indicated previously in the Preliminary Final report, at balance date it was the Group's intention to restructure and amalgamate these facilities in the next financial year. This relates to the potential for completion of the Initial Public Offering ('IPO') of Akara on the Stock Exchange of Thailand and the updated mine plan for Challenger. Any restructure would optimise the Group's anticipated balance sheet liquidity and operational cash flows. 
Accordingly, the Group classified the total amount drawn down under these facilities of $55 million as a current liability at 30 June 2013.\n\nSubsequent to the end of the financial year, the Group received from its lenders a credit approved term sheet (subject to formal documentation) for the restructure of the corporate loan and convertible loan facilities. Following completion of the restructure the total amount outstanding will be reduced to $40 million. This loan will be provided through a single senior corporate facility which will consist of two tranches:\n\n - 〉 Tranche one will be a $25 million Akara Pre IPO Bond with a maturity date of 31 July 2015. The current intention is for this tranche to be repaid as part of the Akara IPO, although at Kingsgate's election repayment can be made by either cash or in Kingsgate's shares.\n - 〉 Tranche two is an amortising facility with $5 million to be repaid during the 2014 financial year and the balance of $10 million repaid during the 2015 financial year.\n\n\n\n## Convertible revolving credit facility\n\nThe Group also has a three year $25 million Convertible Revolving Credit Facility available. As at the date of this report the facility is undrawn. Under the terms of this facility, Kingsgate has the option of repaying any funds drawn down under the facility through either cash or by issuing ordinary shares. It is intended that this facility will be utilised during the 2014 financial year for corporate and working capital purposes. It is the current intention of the company to repay any cash drawdown under the facility by the issuance of fully paid ordinary shares which would rank parri pasu with all existing ordinary shares, although this position will be reviewed at the appropriate time. The number of shares has not yet been determined and they will be issued at a 2.5% discount to VWAP over a period by reference to the draw down date. 
Shareholder approval is not required.\n\n## Multi-currency and syndicated loan facilities\n\nKingsgate's Thai operating subsidiary, Akara, established a six year amortising multi-currency loan facility equivalent to US$125 million (fully drawn as at period end) and an additional Thai Baht denominated working capital facility equivalent to US$15 million (undrawn as at year end) during the period. The proceeds from these borrowings were used to fully repay the outstanding balance on the US$100 million Baht denominated syndicated loan facility in existence at the beginning of the period as well as to repay part of the corporate loan facility noted above.\n\n\n\n## Financial Position\n\nShareholders' equity at 30 June 2013 was $474 million (2012: $776 million). The decrease of $302 million reflects the year's loss together with dividends paid.\n\n## Dividends", - "page_start": 11, - "page_end": 11, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "\n\n## 110\n\nNotes to the Financial Statements\n\n## 32. Parent entity financial information continued\n\n## Contingent liabilities of the parent entity\n\nBank guarantees have been given by Kingsgate's controlled entities to participating banks in the syndicated loan facility and revolving loan facility as described in Note 16 as part of the security package.\n\nThese guarantees may give rise to liabilities in the parent entity if the controlled entities do not meet their obligations under the terms of the loans subject to guarantees. No material losses are anticipated in respect of the above contingent liabilities.\n\n## 33. 
Sale of exploration assets\n\nOn 28 March 2013, the Group sold its exploration assets in Western Australia and Queensland through the sale of shares in its subsidiary company, Quadrio Resources Limited, to Caravel Minerals Limited ('Caravel'), an Australian company listed on the ASX.\n\nKingsgate received 135,000,000 fully paid ordinary shares in the issued capital of Caravel and 20,000,000 unlisted options to acquire Caravel shares exercisable at 10 cents on or before three years from the date of issue. Subsequent to the sale, Kingsgate became the largest shareholder in Caravel with 35.54% held at 30 June 2013. Kingsgate's holding in Caravel reduced to 27.04% post 30 June 2013 following a rights issue by Caravel that Kingsgate did not participate in.\n\nThe financial impact of the sale transaction as at the date of disposal is summarised below:\n\n| Fair value of consideration | 2013 $'000 |\n|------------------------------------------------|---------------|\n| 135,000,000 Caravel shares at $0.025 per share | 3,375 |\n| 20,000,000 unlisted Caravel options | - |\n| Total consideration | 3,375 |\n| Carry value of the exploration assets sold | 20,084 |\n| Loss on sale | 16,709 |\n\n", - "page_start": 111, - "page_end": 111, - "source_file": "ASX_KCN_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_KCN_2013.pdf", - "query": "What does demonstatre the feasibility study on the Nueva Esperanza Project ?", - "target_page": 6, - "target_passage": "The study demonstrated that open pit mining at two million tonnes per year and processing by milling and agitation leaching in cyanide was technically feasible, although high capital and power costs negatively impacted project economic returns. ", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "\n\n\n\nu\n\n## Nueva Esperanza Project\n\n## Chile\n\n## Summary\n\nThe Nueva Esperanza Project is 100% owned by Kingsgate since February 2012. 
Nueva Esperanza is located in the Maricunga Gold Belt near Copiapó, a regional mining centre in Northern Chile. The silver-rich mineralisation is hosted by the Esperanza high-sulphidation epithermal alteration system associated with the Cerros Bravos volcanic complex.\n\nThe project consists of three well-defined mineralised deposits and a number of undeveloped exploration targets. The main deposits are Arqueros, Chimberos and Teterita. Arqueros was previously mined on a limited scale by underground methods and Chimberos was exploited as an open pit mine, delivering about 40 million ounces of silver in 1998/99. All three deposits currently have a combined Mineral Resources of about 93 million ounces of silver equivalent or 1.6 million ounces of gold equivalent (EQ60) 1 .\n\nA feasibility study for a decision to mine the Arqueros portion of Nueva Esperanza was completed in late 2012, demonstrating that open pit mining at two million tonnes per year and processing by milling and agitation leaching in cyanide was technically feasible. Work remained to integrate the Teterita and Chimberos deposits into the project, as well as to test lower cost options for processing. Continued metallurgical testwork has shown that mineralisation from all three deposits by heap leaching is technically and economically feasible and the preferred alternative for development.\n\nEnvironmental approvals to commence construction and mining at Nueva Esperanza were granted in July 2013 for the original Arqueros project. Work is underway to modify and update the environmental assessment to incorporate the heap leach process.\n\n - 1 Equivalence is based on gold/silver price ratio of 60. 
Gold equivalence = gold content plus (silver content divided by 60), whereas Silver equivalent silver content plus (gold content multiplied by 60).", - "page_start": 29, - "page_end": 29, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "\n\n4\n\nManaging Director and CEO's Report\n\n## Development Projects\n\n## Bowdens\n\nThe Bowdens Project continued to advance during the year with field programs supporting the ongoing feasibility and environmental studies. Sterilisation drilling and additional metallurgical sampling were undertaken with the resource evaluation drilling completed in October 2012.\n\nDuring 2013, the process design and engineering work for the Definitive Feasibility Study ('DFS') progressed to a point where the draft study was close to completion as at 30 June 2013. The study encompassed detailed process design based on using the most recent metallurgical test results, capital and operating cost estimates, project water and power supply, infrastructure requirements and mine optimisation.\n\nThe preparation for lodgement of an Environmental Impact Statement ('EIS') to the NSW Department of Planning continues. It is envisaged that the EIS will be completed and lodged in 2014. Data for flora and fauna, surface water, groundwater, meteorology, ambient noise and dust levels are collected routinely. Further investigations of cultural heritage, social-economic impact, traffic impact, soil type and agricultural suitability have also been undertaken.\n\nWith the fall in metal prices in late 2013, work and expenditure on the DFS and EIS have been phased to coordinate and synchronise the timing of the two programs with completion and lodgement now not expected before mid-2014.\n\n## Nueva Esperanza\n\nThe Nueva Esperanza Project was advanced during the year with the completion of a draft feasibility study. This study included a decision to mine the Arqueros and Teterita portions of Nueva Esperanza. 
The study demonstrated that open pit mining at two million tonnes per year and processing by milling and agitation leaching in cyanide was technically feasible, although high capital and power costs negatively impacted project economic returns.\n\nAs a consequence, feasibility work has transitioned to assess a lower capital cost and lower power requirement options, namely the potential for heap leach processing. Metallurgical testwork recently completed demonstrated that processing of mineralisation from all three deposits by heap leaching has the potential to be technically and economically feasible and as a consequence may become the preferred alternative for development.\n\nEnvironmental approval for the original Arqueros Project was granted in July 2013.\n\n## Financials\n\nKingsgate made an after tax loss of $323.7 million for the full year to 30 June 2013 compared to an after tax profit of $75.0 million for the previous corresponding year. The result for the year reflected an impairment of $311.9 million pre-tax ($291.3 million post-tax) against the Challenger Mine and associated assets and an impairment of $20.4 million against greenfield exploration projects in Australia and Thailand.\n\n| Financial Summary | 2013 $000 | 2012 $000 |\n|----------------------------------------|---------------|--------------|\n| Total sales revenue | 329,282 | 357,372 |\n| EBITDA before significant items | 115,845 | 168,583 |\n| (Loss) / profit before tax | ( 339,615) | 91,277 |\n| Income tax benefit / (expense) | 15,889 | (16,271) |\n| (Loss) / profit after income after tax | (323,726) | 75,006 |\n| Dividend declared (¢/share) | 5 | 20 |\n\n\n\n", - "page_start": 5, - "page_end": 5, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "A lower gold price and industry wide cost pressures had a negative impact on the underlying earnings of the Group which contributed to a major impairment to the carrying value of a number of Group assets, particularly assets relating to the 
Challenger Gold Operations. Impairments totalling $332,808,000 were the major contributor to the after tax loss of $323,726,000 for the year.\n\nThe development projects continued to advance during the year. At Nueva Esperanza, the feasibility work shifted to focus on identifying the lowest cost and lowest power consumption development alternatives. This included reviewing a heap leach process option with on-site power generation. Further work is expected to be completed in the December quarter 2013. At Bowdens, the feasibility work has confirmed the optimum process route. Completion of the technical feasibility study including mine planning, infrastructure and metallurgy, and lodging of the Environmental Impact Statement ('EIS') are scheduled for 2014.\n\n", - "page_start": 43, - "page_end": 43, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "The environmental permitting process for the original Arqueros project has been completed, with approval to commence construction and mining granted by the Chilean authorities. A modification of the environmental assessment is being prepared to have the approvals modified for heap leaching and on-site power generation.\n\nExtensive community consultation has been undertaken with positive outcomes, and relationships with indigenous rural and urban communities remain a priority.\n\n\n\n", - "page_start": 30, - "page_end": 30, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "- 10.Run the oc new-project command to create a project that is called project1 , as shown in Example A-12.\n\nExample A-12 Creating a project\n\n$ oc new-project project1\n\nNow using project \"project1\" on server \"https://localhost:8443\". You can add applications to this project with the 'new-app' command. 
For example, try: oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git to build a new example application in Ruby.", - "page_start": 218, - "page_end": 218, - "source_file": "sg248459.pdf" - }, - { - "text": "- 10.Use the oc new-project command to create a project that is called project2 , as shown in Example B-12.\n\n## Example B-12 Creating a project\n\n$ oc new-project project2\n\nNow using project \"project2\" on server \"https://localhost:8443\". You can add applications to this project with the 'new-app' command. For example, try: oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git to build a new example application in Ruby.", - "page_start": 238, - "page_end": 238, - "source_file": "sg248459.pdf" - }, - { - "text": "- 5. The Results window is displayed and shows the progress of the deployment (see Figure A-7). To continue, click Close .\n\nFigure A-7 Results window\n\n\n\n - 6. In the Application Console view, browse to the project1 project page by selecting the project from the project list, as shown in Figure A-8. Scroll down the list to find your project. Click project1 .\n\nFigure A-8 Application Console view\n\n", - "page_start": 224, - "page_end": 224, - "source_file": "sg248459.pdf" - }, - { - "text": "## a) Dividend reinvestment plan\n\nUnder the dividend reinvestment plan 761,448 fully paid ordinary shares were issued during the year (2012: 412,835).\n\n## b) Capital risk management\n\nThe Group's objectives when managing capital are to safeguard the Group's ability to continue as a going concern, so as to maintain a strong capital base sufficient to maintain future exploration and development of its projects. In order to maintain or adjust the capital structure, the Group may return capital to shareholders, issue new shares or sell assets to reduce debt. 
The Group's focus has been to utilise surplus cash from operations and raise additional funds from debt capital markets to fund capital investment at Chatree and Challenger, working capital and exploration and evaluation activities, including the Nueva Esperanza Project in Chile and Bowdens Silver Project in New South Wales.\n\nu", - "page_start": 92, - "page_end": 92, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "As a consequence, feasibility work has transitioned to assess a lower capital cost and lower power requirement option, namely the potential for heap leach processing. Recently completed metallurgical testwork demonstrated that processing of mineralisation from all three deposits by heap leaching has the potential to be technically and economically feasible and, as a consequence, may become the preferred alternative for development.\n\nEnvironmental approval for the original Arqueros Project was granted in July 2013.\n\n## Bowdens Silver Project\n\nThe Bowdens Project continued to advance during the year with field programs supporting the feasibility and environmental studies ongoing. Sterilisation drilling and additional metallurgical sampling were undertaken with the resource evaluation drilling completed in October 2012.\n\n\n\nDuring 2013, the process design and engineering work for the Definitive Feasibility Study ('DFS') progressed to a point where the study was close to draft completion as at 30 June 2013. The study encompassed detailed process design based on using the most recent metallurgical test results, capital and operating cost estimates, project water and power supply, infrastructure requirements and mine optimisation.\n\nThe preparation for lodgement of an EIS to the NSW Department of Planning continues. It is envisaged that the EIS will be completed and lodged in 2014. Data for flora and fauna, surface water, groundwater, meteorology, ambient noise and dust levels are collected routinely. 
Further investigations of cultural heritage, social-economic impact, traffic impact, soil type and agricultural suitability have also been undertaken.\n\nWith the fall in metal prices in late 2013, work and expenditure on the DFS and EIS have been phased to coordinate the two programs with completion and submission now not expected before mid-2014.\n\n## Exploration\n\nThe Group has a portfolio of exploration tenements and applications in Thailand, Chile and Lao PDR. Following the sale of exploration tenements to Caravel (refer below), exploration in Australia is currently only conducted in the vicinity of the Challenger Mine in South Australia and the Bowdens Silver Project in New South Wales.\n\n## Sale of Exploration Assets\n\nOn 28 March 2013, the Group sold its exploration assets in Western Australia and Queensland through the sale of shares in its subsidiary company, Quadrio Resources Limited, to Caravel Minerals Limited ('Caravel'), an Australian company listed on the ASX.\n\nKingsgate received 135,000,000 fully paid ordinary shares in the issued capital of Caravel and 20,000,000 unlisted options to acquire Caravel shares exercisable at 10 cents on or before three years from the date of issue. Subsequent to the sale, Kingsgate became the largest shareholder in Caravel with 35.54% held at 30 June 2013. Kingsgate's holding in Caravel reduced to 27.04% post 30 June 2013 following a rights issue by Caravel that Kingsgate did not participate in.\n\nu", - "page_start": 44, - "page_end": 44, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "community healthcare in the two municipalities. The project team included three individuals representing users from the Nordland MS Association, along with an MS nurse and a neurologist from the MS-outpatient clinic, and three physiotherapists/ researchers.\n\n## 2.4 Research team and re /uniFB02 exivity\n\nAll researchers on the team are clinical specialists in neurological physiotherapy. 
BN and ECA developed the CoreDISTparticipation intervention, and SSHD contributed to the development of the outdoor part.\n\nThe researchers ' closeness to the intervention and the clinical /uniFB01 eld may have strengthened the depth and relevance of their interpretations in this study (27), as it was easy to understand what participants described and helped form follow-up questions during the interviews. However, closeness may also produce a risk of ' blind spots ' , as the researchers may prejudice participants ' experiences, omitting questions where the answers are believed to be obvious (27). Thus, throughout the process, trustworthiness and rigor were enhanced by discussing the methodology, /uniFB01 ndings, and interpretations with external researchers (including specialists in enactive theory), as well as user representatives. The presented theoretical framework (enactive theory) enhanced the distance to the material, as recommended in qualitative research (28).\n\n## 2.5 Recruitment and participants\n\nPrior to recruitment, the study was introduced to individuals with multiple sclerosis (pwMS) through a seminar hosted by the Nordland MS Association. Additionally, seminars were conducted for health professionals in community healthcare and at the regional hospital. Written information about this study (and the RCT) was sent from the MS clinic at the regional hospital by post to all eligible individuals af /uniFB01 liated with the hospital. Individuals who wished to participate signed the attached consent form and returned it in the pre-stamped envelope. The inclusion criteria were as follows: had been diagnosed with MS, had a score on the Expanded Disability Status Scale (EDSS) (29)of ≤ 3.5, was ≥ 18 years, was employed (10% -100% of full-time) and residential address in the two prede /uniFB01 ned municipalities. 
The exclusion criteria were as follows: pregnancy, exacerbation of symptoms within two weeks prior to enrollment and other serious conditions compromising balance, walking or work capacity. All participants in the intervention group of the RCT ( n = 15) were included (Table 3).\n\n## 2.6 Data collection\n\nThe interview guide (Table 4) was developed based on literature reviews, clinical experience and discussions within the research group and with user representatives. Two test interviews were\n\nTABLE 3 Participant demographic information.TABLE 4 Interview guide.\n\n| Variable | Total ( n =15) |\n|------------------------------------|-----------------------------------------------|\n| Age in years | Mean 47.6 (SD 6.04) |\n| Gender (women/men) | 12 woman/3 men (80%/20%) |\n| Type of MS | Relapsing remitting 15 (100%) |\n| EDSS | Mean 1.8 (SD 0.9) |\n| Years since diagnosis | Mean 10.4 (SD 7.8) |\n| Participation in the outdoor group | Mean 4.6 sessions/total mean attendance 57.3% |", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed13.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_KCN_2013.pdf", - "query": "What is the Kingsgate net cash outflows from finiancing activities in 2013 ?", - "target_page": 11, - "target_passage": " Net cash outflows from financing activities was $1.7 million", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "\n\n## Performance rights\n\nThe number of performance rights held during the financial year by each Director of Kingsgate and each of the specified executives of the Group, including their personally-related entities, are set out as follows:", - "page_start": 108, - "page_end": 108, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Financing Arrangements\n\n## Corporate loan facility\n\nKingsgate has a three year secured loan facility with Investec which was amended during the year. 
The amended facility has a limit of $40 million (30 June 2012: $50 million), of which $20 million has been drawn down as at 30 June 2013 (30 June 2012: $40 million).\n\n## Convertible loan facility\n\nKingsgate has a five year A$35 million convertible loan facility with Investec entered into in a prior period to provide funding for the Bowdens acquisition. Kingsgate has the option to make a prepayment against the facility with an issue of Kingsgate shares.\n\n## Restructure of corporate loan and convertible loan facilities\n\nAs indicated previously in the Preliminary Final report, at balance date it was the Group's intention to restructure and amalgamate these facilities in the next financial year. This relates to the potential for completion of the Initial Public Offering ('IPO') of Akara on the Stock Exchange of Thailand and the updated mine plan for Challenger. Any restructure would optimise the Group's anticipated balance sheet liquidity and operational cash flows. Accordingly, the Group classified the total amount drawn down under these facilities of $55 million as a current liability at 30 June 2013.\n\nSubsequent to the end of the financial year, the Group received from its lenders a credit approved term sheet (subject to formal documentation) for the restructure of the corporate loan and convertible loan facilities. Following completion of the restructure the total amount outstanding will be reduced to $40 million. This loan will be provided through a single senior corporate facility which will consist of two tranches:\n\n - 〉 Tranche one will be a $25 million Akara Pre IPO Bond with a maturity date of 31 July 2015. 
The current intention is for this tranche to be repaid as part of the Akara IPO, although at Kingsgate's election repayment can be made by either cash or in Kingsgate's shares.\n - 〉 Tranche two is an amortising facility with $5 million to be repaid during the 2014 financial year and the balance of $10 million repaid during the 2015 financial year.\n\n\n\n## Convertible revolving credit facility\n\nThe Group also has a three year $25 million Convertible Revolving Credit Facility available. As at the date of this report the facility is undrawn. Under the terms of this facility, Kingsgate has the option of repaying any funds drawn down under the facility through either cash or by issuing ordinary shares. It is intended that this facility will be utilised during the 2014 financial year for corporate and working capital purposes. It is the current intention of the company to repay any cash drawdown under the facility by the issuance of fully paid ordinary shares which would rank parri pasu with all existing ordinary shares, although this position will be reviewed at the appropriate time. The number of shares has not yet been determined and they will be issued at a 2.5% discount to VWAP over a period by reference to the draw down date. Shareholder approval is not required.\n\n## Multi-currency and syndicated loan facilities\n\nKingsgate's Thai operating subsidiary, Akara, established a six year amortising multi-currency loan facility equivalent to US$125 million (fully drawn as at period end) and an additional Thai Baht denominated working capital facility equivalent to US$15 million (undrawn as at year end) during the period. 
The proceeds from these borrowings were used to fully repay the outstanding balance on the US$100 million Baht denominated syndicated loan facility in existence at the beginning of the period as well as to repay part of the corporate loan facility noted above.\n\n\n\n## Financial Position\n\nShareholders' equity at 30 June 2013 was $474 million (2012: $776 million). The decrease of $302 million reflects the year's loss together with dividends paid.\n\n## Dividends", - "page_start": 11, - "page_end": 11, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "The Board of Kingsgate is determined to reestablish the path to building shareholder wealth via profits and dividends despite a difficult external environment. Shareholders can look forward to a steady performance from Chatree and a turn-around at Challenger coupled with the completion of feasibility studies at the two major development projects over the coming year.\n\nI would also like to thank our Chief Executive Officer and Managing Director, Gavin Thomas, Kingsgate management and all of the Kingsgate, Akara and Challenger personnel and the project teams for their part in delivering the operational performance during what was a difficult year for your Company.", - "page_start": 3, - "page_end": 3, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Senior Management\n\nKingsgate's executives have a comprehensive range of skills and experience including mine development and operations, exploration, finance and administration. They are supported by highly qualified specialists, whose backgrounds cover the full scope of mining resources activities.\n\nSenior members of Kingsgate's management team are:\n\n## Gavin Thomas\n\nBSc (Geology), FAusIMM\n\n## Managing Director and Chief Executive Officer\n\nGavin Thomas was appointed Chief Executive Officer of Kingsgate in 2004 and joined the Kingsgate Board on 16th November 2007. 
Gavin has had a successful career in developing mining companies from the exploration phase into mid-tier gold or copper producers. He has over 42 years of international experience in exploring for, evaluating, developing, operating and reclaiming mines in North and South America, Australia, the Southwest Pacific, Asia and Europe. Amongst Gavin's credits is the discovery of 'Lihir' in Papua New Guinea, one of the largest gold deposits in the world. In particular, he has extensive experience in Thailand and South America.\n\n## Duane Woodbury\n\nBEc (Hons)\n\n## Chief Financial Officer\n\nDuane Woodbury was appointed Chief Financial Officer of Kingsgate on 1 September 2011. Duane has a BEc (Hons) Degree and has worked in various financial, accounting and advisory roles during his career in a number of locations, including London, New York and Singapore. He has been assisting Kingsgate in its business development initiatives since August 2007 and brings over 20 years of experience in financial markets and corporate finance transactions, principally with the Macquarie Group.\n\n## Tim Benfield\n\nDip CSM (mining), MBA, MAusIMM\n\n## Chief Operating Officer\n\nTim Benfield joined Kingsgate in February 2012 as Chief Operating Officer. Tim is a mining engineer with over 21 years underground and open pit experience in the mining industry in both operational and corporate roles. He has operational and project development experience in Australia, Africa and Saudi Arabia. This includes 10 years with Barrick Gold of Australia where he provided support to four operating mines and two development projects. Tim was most recently General Manager of the Pajingo Gold mine in Queensland for Evolution Mining Limited.\n\n## Ross Coyle\n\nBA, FCPA, FCIS\n\n## General Manager Finance and Administration Company Secretary\n\nRoss Coyle joined Kingsgate in March 2011 following the Company's acquisition of Dominion Mining Limited and was with the Dominion group for over 25 years. 
He is a qualified accountant and has over 30 years experience in finance and accounting within the resource industry. He was Finance Director of Dominion from 1996. Ross was appointed Kingsgate's Company Secretary in September 2011.\n\n## Joel Forwood\n\nBsc (Hons) FFin\n\n## General Manager Corporate and Markets\n\nJoel Forwood joined Kingsgate in November 2010 and has over 27 years experience in the resource and investment industries covering investor relations, funds management and exploration. For over 12 years, he has been leading investor relations at a number of listed companies, most recently for Lihir Gold Limited. Prior to this he was a fund manager with Queensland Investment Corporation (QIC) following his early career in mineral exploration with BHP and corporate development with RGC.\n\n## Ronald James\n\nBSc (Geology), MAusIMM, MAIG\n\n## General Manager Exploration and Resource Development", - "page_start": 40, - "page_end": 40, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "- 〉 Tranche one will be a $25,000,000 Akara Pre IPO Bond with a maturity date of 31 July 2015. The current intention is for this tranche to be repaid as part of the Akara IPO although at Kingsgate's election repayment can be made by either cash or in Kingsgate's shares.\n - 〉 Tranche two is an amortising facility with $5,000,000 to be repaid during the 2014 financial year and the balance of $10,000,000 repaid during the 2015 financial year.\n\n## Convertible revolving credit facility\n\nThe Group also has a three year $25,000,000 Convertible Revolving Credit Facility available. At the date of this report the facility is undrawn. Under the terms of this facility, Kingsgate has the option of repaying any funds drawn down under the facility through either cash or by issuing ordinary shares. It is intended that this facility will be utilised during the 2014 financial year for corporate and working capital purposes. 
It is the current intention of the Company to repay any cash drawdown under the facility by the issuance of fully paid ordinary shares which\n\nwould rank parri pasu with all existing ordinary shares, although this position will be reviewed at the appropriate time. The number of shares has not yet been determined and they will be issued at a 2.5% discount to VWAP over a period by reference to the draw down date. Shareholder approval is not required.\n\n## Multi-currency and syndicated loan facilities\n\nKingsgate's Thai operating subsidiary, Akara, established a six year amortising multi-currency loan facility equivalent to US$125,000,000 (fully drawn as at year end) and an additional Thai Baht denominated working capital facility equivalent to US$15,000,000 (undrawn as at year end) during the period. The proceeds from these borrowings were used to fully repay the outstanding balance on the US$100,000,000 Baht denominated syndicated loan facility in existence at the beginning of the year as well as to repay part of the corporate loan facility noted above. Finance costs include the write off of the balance of capitalised borrowing fees of $1,800,000 following the Akara refinancing.\n\n## Significant change in the state of affairs\n\nThere were no significant changes in the state of affairs of the Group that occurred during the financial year not otherwise disclosed in this report or the consolidated financial statements.\n\n## Matters subsequent to the end of the financial year\n\nKingsgate has received from its lender a credit approved term sheet (subject to formal documentation) for the restructure of the existing corporate loan facility which is drawn to $20,000,000 and the existing convertible loan facility which is drawn to $35,000,000.\n\nSubsequent to the end of the financial year, the Group has received from its lenders a credit approved term sheet (subject to formal documentation) for the restructure of the corporate loan and convertible loan facilities. 
Following completion of the restructure the total amount outstanding will be reduced to $40,000,000. This loan will be provided through a single senior corporate facility which will consist of two tranches:", - "page_start": 47, - "page_end": 47, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## LIQUIDITY AND CAPITAL RESOURCES\n\nWe strive to maintain a level of liquidity sufficient to allow us to cover our seasonal cash needs and to maintain appropriate levels of shortterm borrowings. We believe that our operating cash flows, available credit facilities and potential future borrowings are sufficient to finance our cash requirements for the next 12 months and beyond.\n\nOver the long term, we manage our cash and capital structure to maximize shareholder return, maintain our financial position, manage refinancing risk and allow flexibility for strategic initiatives. We regularly assess our debt and leverage levels, capital expenditure requirements, debt service payments, dividend payouts, potential share repurchases and other future investments. We believe that as of January 31, 2015, our existing cash and cash equivalents on-hand of $827, available credit facilities of $800 and potential future operating cash flows and borrowings will be sufficient to fund these scheduled future payments and potential long-term initiatives. Additionally, if an agreement is reached and a transaction is consummated in regards to our credit card receivables, it could result in additional cash flows to further support our capital requirements and strategic initiatives.\n\n## Operating Activities\n\nNet cash provided by operating activities was $1,220 in 2014, $1,320 in 2013 and $1,110 in 2012. The majority of our operating cash inflows are derived from sales. We also receive cash payments for property incentives from developers. 
Our operating cash outflows generally consist of payments to our merchandise vendors (net of vendor allowances), payments to our employees for wages, salaries and other employee benefits and payments to our landlords for rent. Operating cash outflows also include payments for income taxes and interest payments on our short-term and long-term borrowings.\n\nCash provided by operating activities decreased in 2014 compared with 2013, which was primarily due to higher state tax payments made in 2014 compared with 2013, as well as changes in working capital in 2014.\n\nCash provided by operating activities increased in 2013 compared with 2012, resulting from less state tax payments made in 2013 due to additional payments made in 2012 as a result of the 53rd week, along with increased property incentives received from developers and changes in working capital.\n\n## Investing Activities\n\nNet cash used in investing activities was $889 in 2014, $822 in 2013 and $369 in 2012. Our investing cash flows primarily consist of capital expenditures, changes in restricted cash accumulated for debt maturities and changes in credit card receivables associated with cardholder purchases outside of Nordstrom using our Nordstrom Visa credit cards.\n\n## Capital Expenditures\n\nOur capital expenditures over the last three years totaled $2,177, with $861 in 2014, $803 in 2013 and $513 in 2012. 
Capital expenditures increased in 2014 compared with 2013 primarily due to ongoing store expansion and increased technology investments.\n\nCapital expenditures increased in 2013 compared with 2012 as we continued to make progress executing our customer strategy through increased investments in technology, ecommerce, remodels and new stores, including Nordstrom Rack and our Manhattan full-line store.\n\nThe following table summarizes our store count and square footage activity:", - "page_start": 38, - "page_end": 38, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "| Kingsgate Peru SRL | Peru | Ordinary | 100 | 100 |\n| Minera Kingsgate Argentina S.A. | Argentina | Ordinary | 100 | 100 |", - "page_start": 95, - "page_end": 95, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "\n\n## 106\n\nNotes to the Financial Statements\n\n## 29. Key management personnel disclosures continued\n\n## Option holdings\n\nThe number of options over ordinary shares in the Company held during the financial year by each Director of Kingsgate Consolidated Limited and each of the specified executives of the Group, including their personally-related entities, are set out as follows:", - "page_start": 107, - "page_end": 107, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "The contractual cash flows presented above in respect of 30 June 2013 and the increase in the one year or less time category of $46,132,000 when compared to 30 June 2012 mainly relates to classification of the corporate loan facility of $20,000,000 and the convertible loan facility of $35,000,000 as current liability at 30 June 2013. These facilities were mainly included in the one to two years and two to five years' time category at 30 June 2012. 
As indicated in Note 16, these facilities have been classified as current liabilities at 30 June 2013 on the basis that at balance sheet date it was the Group's intention to restructure and amalgamate these facilities in the next financial year.\n\nSubsequent to the end of the financial year, the Group has received from its lenders a credit approved term sheet (subject to formal documentation) for the restructure of the corporate loan and convertible loan facilities. Following completion of the restructure the total amount outstanding will be reduced to $40,000,000. This loan will be provided through a single senior corporate facility which will consist of two tranches:\n\n - 〉 Tranche one will be a $25,000,000 Akara Pre IPO Bond with a maturity date of 31 July 2015. The current intention is for this tranche to be repaid as part of the Akara IPO although at Kingsgate's election repayment can be made by either cash or in Kingsgate's shares.\n - 〉 Tranche two is an amortising facility with $5,000,000 to be repaid during the 2014 financial year and the balance of $10,000,000 repaid during the 2015 financial year.\n\nThe Group also has a three year $25,000,000 Convertible Revolving Credit Facility available. At the date of this report the facility is undrawn. Under the terms of this facility, Kingsgate has the option of repaying any funds drawn down under the facility through either cash or by issuing ordinary shares. It is intended that this facility will be utilised during the 2014 financial year for corporate and working capital purposes. It is the current intention of the Company to repay any cash drawdown under the facility by issuance of fully paid ordinary shares which would rank parri pasu with all existing ordinary shares, although this position will be reviewed at the appropriate time. The number of shares has not yet been determined and they will be issued at a 2.5% discount to VWAP over a period by reference to the draw down date. 
Shareholder approval is not required.\n\nAs indicated in Note 16, Kingsgate's Thai operating subsidiary, Akara, established a six year amortising multi-currency loan facility equivalent to US$125,000,000 (fully drawn as at year end) and an additional Thai Baht denominated working capital facility equivalent to US$15,000,000 (undrawn as at year end) during the period. The proceeds from these borrowings were used to fully repay the outstanding balance on the US$100,000,000 Baht denominated syndicated loan facility in existence at the beginning of the period as well as to repay part of the corporate loan facility noted above.\n\n## (d) Fair value measurements\n\nThe carrying values of financial assets and liabilities of the Group approximate their fair values. Fair values of financial assets and liabilities have been determined for measurement and / or disclosure purposes.\n\n## Fair value hierarchy", - "page_start": 104, - "page_end": 104, - "source_file": "ASX_KCN_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20210538_en.pdf", - "query": "To which countries do the marriage regulations extend?", - "target_page": 1, - "target_passage": "These Regulations extend to England and Wales. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## S T A T U T O R Y I N S T R U M E N T S\n\n## 2021 No. 
538\n\n## MARRIAGE, ENGLAND AND WALES\n\nThe Marriage (Keeping of Records in Churches and Chapels) Regulations 2021\n\nMade\n\n-\n\n-\n\n-\n\n-\n\n29th April 2021\n\nComing into force - -\n\n4th May 2021\n\nThe Registrar General makes these Regulations with the approval of the Secretary of State in exercise of the powers conferred by section 74(1)(c)(v), (1A)(a) and (3) of the Marriage Act 1949( a ).\n\n## Citation, commencement, extent and interpretation\n\n- 1. -(1) These Regulations may be cited as the Marriage (Keeping of Records in Churches and Chapels) Regulations 2021.\n- (2) These Regulations come into force on 4th May 2021.\n- (3) These Regulations extend to England and Wales.\n- (4) In these Regulations, 'chapel' does not include a chapel to which Part 5 of the Marriage Act 1949 (marriages in naval, military and air force chapels) applies( b ).\n\n## Duty of parochial church councils to provide registers of marriage services\n\n- 2. -(1) The parochial church council of a parish must provide books for the purpose of making records under regulation 3 to each church and chapel of the Church of England( c ) in that parish in which banns of matrimony may be published.\n- (2) Books provided under paragraph (1) are to be known as 'registers of marriage services'.\n- (3) A register of marriage services provided under paragraph (1) must meet the requirements of paragraphs (4) and (5).\n- (4) The register must be made of durable material.\n- (5) For the purposes of enabling a record to be made in the register under regulation 3 in respect of a marriage, the register must be printed in such a way that it-", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "29th April 2021\n\nKevin Foster Parliamentary Under Secretary of State Home Office\n\n## EXPLANATORY NOTE\n\n(This note is not part of the Regulations)\n\nThese Regulations provide for records of marriages to be kept in churches and chapels of the Church of England and 
the Church in Wales, other than chapels to which Part 5 of the Marriage Act 1949 applies (naval, military and air force chapels).\n\nRegulation 2 requires parochial church councils to provide books known as 'registers of marriage services' to churches and chapels in their parish in which banns of matrimony may be published, for the purposes of keeping the records required by regulation 3. Regulation 2 also imposes requirements relating to the durability and pre-printed content of these registers, and provides that they belong to the parochial church council.\n\nRegulation 3 requires specified information to be recorded in a register of marriage services when a marriage has been solemnized on or after 4th May 2021 according to the rites of the Church of England or Church in Wales in a church or chapel in which banns of matrimony may be published. The record must be made and signed by the member of the clergy by whom the marriage was solemnized.\n\nRegulation 4 imposes requirements relating to the keeping of registers of marriage services provided under regulation 2.\n\nA full impact assessment has not been produced for this instrument because no, or no significant, impact on the private, public or voluntary sector is foreseen.\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "## EXPLANATORY NOTE\n\n(This note is not part of the Regulations)\n\nThese Regulations make amendments to secondary legislation relating to special educational needs and disability in order to provide exceptions to time limits set out in that legislation where they cannot be met because of a reason relating to the incidence or transmission of coronavirus.\n\nRegulation 2 contains review and expiry provisions. 
The Secretary of State is required to review the effectiveness of the Regulations during the period in which they have effect. The Regulations cease to have effect on 25th September 2020.\n\nRegulations 3 to 14 amend the Special Educational Needs and Disability Regulations 2014 ('the SEND Regulations 2014').\n\nRegulation 5 inserts a glossing provision into the SEND Regulations 2014 which relaxes certain requirements in those Regulations for actions to be taken within specified time limits where it is not reasonably practicable for a person to meet those requirements for a reason relating to the incidence or transmission of coronavirus. Instead, any such requirement is to be read as a requirement for such action to be taken as soon as reasonably practicable.\n\nRegulations 6 to 14 make textual amendments to the SEND Regulations 2014 to relax time limits.\n\nRegulations 15 to 17 amend the Special Educational Needs (Personal Budgets) Regulations 2014 ('the Personal Budgets Regulations 2014').\n\nRegulation 17 inserts a similar glossing provision into the Personal Budgets Regulations 2014 as regulation 5 does in respect of the SEND Regulations 2014.\n\nRegulations 18 to 27 amend the Special Educational Needs and Disability (Detained Persons) Regulations 2015 ('the Detained Persons Regulations 2015').\n\nRegulation 20 inserts a glossing provision into the Detained Persons Regulations 2015 similar to the ones in regulations 5 and 17 in relation to the SEND Regulations 2014 and the Personal Budgets Regulations 2014 respectively.\n\nRegulations 21 to 27 make textual amendments to the Detained Persons Regulations 2015 to relax time limits.\n\nRegulations 28 to 30 amend the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017 ('the First-tier Tribunal Regulations 2017').\n\nRegulation 30 inserts a glossing provision into the First-tier Tribunal Regulations 2017 similar to those in regulations 5, 17 and 20.\n\nAn impact 
assessment has not been produced for this instrument as this is a temporary, emergency measure and no significant impact on business, charities or voluntary bodies is foreseen.\n\nAn Explanatory Memorandum is published alongside this instrument on www.legislation.gov.uk.\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 5, - "page_end": 5, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- (a) indicates the descriptions of information required by each of sub-paragraphs (a) to (h) of regulation 3(2) in relation to the marriage, and\n - (b) provides corresponding spaces for recording information required by each of those subparagraphs in relation to the marriage.\n - (6) A register of marriage services provided under paragraph (1) by a parochial church council belongs to that parochial church council.\n\n## Duty to record information about marriages solemnized according to the rites of the Church of England or Church in Wales\n\n - 3. 
-(1) Paragraphs (2), (3) and (4) apply where a marriage has been solemnized according to the rites of the Church of England in a church or chapel in which banns of matrimony may be published.\n - (2) As soon as practicable after the marriage has been solemnized, the clergyman by whom the marriage was solemnized must make a record of the following information in relation to that marriage in a register of marriage services provided to the church or chapel under regulation 2(1)-\n - (a) the date and place of the marriage;\n - (b) the name and surname of each party;\n - (c) the date of birth of each party;\n - (d) the occupation (if any) of each party;\n - (e) the address of each party at the time of the marriage;\n - (f) the names and surnames of each party's parents, so far as those names and surnames are known to the clergyman who solemnized the marriage;\n - (g) the name and surname of each of the witnesses in whose presence the marriage was solemnized;\n - (h) the name and surname of the clergyman by whom the marriage was solemnized.\n - (3) The clergyman must record the information required by paragraph (2) in English, and may also record information required by that paragraph in Welsh where the church or chapel is situated in Wales.\n - (4) After making a record under paragraph (2) the clergyman must sign it.\n - (5) This regulation does not apply in relation to a marriage solemnized before 4th May 2021.\n\n## Requirements about the keeping of registers of marriage services\n\n - 4. 
-(1) The rector, vicar or curate in charge of a church or chapel to which a register of marriage services has been provided under regulation 2(1) must-\n - (a) ensure that the register is kept in that church or chapel, and\n - (b) do everything that is reasonably practicable to ensure that the register is protected against theft, loss or damage.\n - (2) Where there is no rector, vicar or curate in charge of a church or chapel to which a register of marriage services has been provided under regulation 2(1), the obligations under paragraph (1) in respect of that register fall on the churchwardens of the parish in which the church or chapel is situated.\n\nGiven under my hand on 29th April 2021\n\nAbi Tierney Registrar General", - "page_start": 1, - "page_end": 1, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "## PART 6\n\n## Final provisions\n\n## Review of need for requirements\n\n24. The Secretary of State must review the need for the requirements imposed by these Regulations by 14th June 2021 and at least once every 28 days thereafter.\n\n## Expiry of Regulations\n\n25. These Regulations expire at the end of 16th May 2022.\n\n## Revocations, transitional provision consequential amendments and savings\n\n26. -(1) The following Regulations are revoked-\n\n - (a) the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020( a );\n - (b) the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the International Travel Regulations')( b ); and\n - (c) the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021( c ).\n - (2) Schedule 15 makes consequential amendments to other instruments specified in that Schedule.\n - (3) Schedule 16 makes transitional provisions.\n - (4) Nothing in these Regulations applies in relation to a person who arrived in England before 4.00 a.m. 
on 17th May 2021 (and accordingly, the regulations mentioned in paragraph (1) continue to have effect in relation to such a person).\n\nSigned by authority of the Secretary of State\n\nAt 10.32 a.m. on 14th May 2021\n\nRobert Courts Parliamentary Under Secretary of State Department for Transport", - "page_start": 30, - "page_end": 30, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## Transitional provision\n\n - 1. Passenger information provided before 4.00 a.m. on 17th May 2021 by a person pursuant to regulation 3 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the 2020 Regulations') in advance of arrival in England is treated as passenger information provided for the purposes of these Regulations where the person arrives in England on or after that date.\n - 2. Confirmation given by the Foreign, Commonwealth and Development Office that a person is not required to comply with regulation 3B of the 2020 Regulations is treated as confirmation that the person is not required to comply with regulation 6 of these Regulations where the person arrives in England on or after 4.00 a.m. on 17th May 2021.\n - 3. A designation by the Secretary of State of a person as an authorised person under regulation 5(7) of the 2020 Regulations has effect as a designation of that person as an authorised person under of regulation 11(11)(c) of these Regulations.\n - 4. Regulation 5A of the 2020 Regulations continues to have effect in relation to a constable who exercises the powers in that regulation in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "\n\n## 6.3 Guidance and support\n\nSupervision is only one approach to implementing legislation. As mentioned, supervision by state authorities can only reach a small share of all enterprises, particularly not the many small ones and the self-employed. 
In addition to supervision and control, a broad variety of prevention-supporting activities has been developed during the past decades. 388\n\nThe authors of EU-OSHA's 'Supporting compliance' reports state a strong increase in 'compliance promotion strategies'. They write: 'The regulatory changes have been matched in more recent times by an increasingly diverse set of compliance promotion strategies. Not only has public regulation sought to engage and encourage duty holders in the pursuit of forms of regulated self-regulation, but … the discourse on regulation itself has sought a far broader understanding of its meaning and the role of the private and public regulatory actors and processes potentially involved in both defining and securing compliance.' 389\n\nOne important type of means are guidance and support tools for enterprises and workers to extend the reach and impact of legislation. Labour inspectorates and other state institutions produce these tools either themselves or in collaboration with social partners or professional organisations.\n\nProactive research and preventive guidelines , particularly in situations of new risks, have become a quite usual preventive activity (e.g. on nanotechnology, or on some developments in digitalisation). For very complex regulations, like REACH, national institutions installed helpdesks. European institutions also publish such guidance documents for EU-wide use, for example, the guidance on health and safety in agriculture, 390 the guidance regarding the implementation of the Machinery directive, 391 the guidance documents of EU-OSHA on COVID-19 392 and the European Commission guidance documents on seasonal workers and COVID-19. 393 Practically all EU and international OSH institutions published guidance documents on how to identify and reduce psychosocial risk at workplaces. 
394\n\nA large amount of OSH guidance already exists in different formats, 395 starting with classical written guidance documents, increasingly complemented by audio-visual and interactive tools. EU-OSHA covers a large variety of workplaces with its digital risk assessment tool OiRA (Online interactive Risk", - "page_start": 124, - "page_end": 124, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- 2. -(1) The Health Protection (Coronavirus, Restrictions) (Self-Isolation) (England) Regulations 2020( a ) are amended as follows.\n - (2) In regulation 2D(1)(c), for 'regulation 4 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'regulation 9 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.\n - (3) In regulation 6(1)-\n - (a) in the definitions of 'designated place', 'isolation requirements' and 'self-isolating worker', for 'regulation 4' substitute 'regulation 9';", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- For this Status Report, SLIC evaluations of the labour inspection systems in Member States were not taken into account, because most of them are confidential.\n - 351 DG Employment, Social Affairs and Inclusion, 2015: Evaluation of the Practical Implementation of the EU Occupational Safety and Health (OSH) Directives in EU Member States (p. 89).\n - 352 Ibid., p. 105. See also p. 89: 'The Directives represent a mix of a goal-oriented approach - strongly expressed in the Framework Directive, but also mirrored in the individual Directives - and a prescriptive approach - which is, for instance, seen in the very detailed and specific requirements included in the annexes of some Directives.\n - 353 Ibid., p. 67.\n - 354 Ibid., p. 
94.\n - 355 Graveling, 2018: Transposition, implementation and enforcement of EU OSH legislation - Thematic Discussion Paper\n - 356 EU-OSHA, 2021: Summary - Improving compliance with occupational safety and health regulations: an overarching review (p. 4).\n - 357 The authors explain the difference between 'substantive and rule compliance as follows: '... 'substantive compliance', which requires compliance with the collective goals underpinning the regulatory scheme (better OSH practice); and 'rule compliance', which envisages compliance with the content of legal standards only ' (p. 11). 358 EU-OSHA, 2021: Improving compliance with occupational safety and health regulations: an overarching review (p. 43).", - "page_start": 153, - "page_end": 153, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "18. In determining how many fixed penalty notices a person ('P') has received for the purposes of paragraph 8 (breach of requirement in regulation 9 to self-isolate etc), if P received more than one fixed penalty notice for that offence before 2nd October 2020, only one of those notices may be taken into account.\n\n## SCHEDULE 15\n\nRegulation 26(2)\n\n## Consequential Amendments\n\n1. 
-(1) The Health Protection (Notification) Regulations 2010( a ) are amended as follows.\n\n(2) In regulation 4(3D)(b), for 'regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.", - "page_start": 87, - "page_end": 87, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20210538_en.pdf", - "query": "What the parochial church council must provide to make marriage records ?", - "target_page": 1, - "target_passage": " The parochial church council of a parish must provide books for the purpose of making records under regulation 3 to each church and chapel of the Church of England(c) in that parish in which banns of matrimony may be published.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "- (a) indicates the descriptions of information required by each of sub-paragraphs (a) to (h) of regulation 3(2) in relation to the marriage, and\n - (b) provides corresponding spaces for recording information required by each of those subparagraphs in relation to the marriage.\n - (6) A register of marriage services provided under paragraph (1) by a parochial church council belongs to that parochial church council.\n\n## Duty to record information about marriages solemnized according to the rites of the Church of England or Church in Wales\n\n - 3. 
-(1) Paragraphs (2), (3) and (4) apply where a marriage has been solemnized according to the rites of the Church of England in a church or chapel in which banns of matrimony may be published.\n - (2) As soon as practicable after the marriage has been solemnized, the clergyman by whom the marriage was solemnized must make a record of the following information in relation to that marriage in a register of marriage services provided to the church or chapel under regulation 2(1)-\n - (a) the date and place of the marriage;\n - (b) the name and surname of each party;\n - (c) the date of birth of each party;\n - (d) the occupation (if any) of each party;\n - (e) the address of each party at the time of the marriage;\n - (f) the names and surnames of each party's parents, so far as those names and surnames are known to the clergyman who solemnized the marriage;\n - (g) the name and surname of each of the witnesses in whose presence the marriage was solemnized;\n - (h) the name and surname of the clergyman by whom the marriage was solemnized.\n - (3) The clergyman must record the information required by paragraph (2) in English, and may also record information required by that paragraph in Welsh where the church or chapel is situated in Wales.\n - (4) After making a record under paragraph (2) the clergyman must sign it.\n - (5) This regulation does not apply in relation to a marriage solemnized before 4th May 2021.\n\n## Requirements about the keeping of registers of marriage services\n\n - 4. 
-(1) The rector, vicar or curate in charge of a church or chapel to which a register of marriage services has been provided under regulation 2(1) must-\n - (a) ensure that the register is kept in that church or chapel, and\n - (b) do everything that is reasonably practicable to ensure that the register is protected against theft, loss or damage.\n - (2) Where there is no rector, vicar or curate in charge of a church or chapel to which a register of marriage services has been provided under regulation 2(1), the obligations under paragraph (1) in respect of that register fall on the churchwardens of the parish in which the church or chapel is situated.\n\nGiven under my hand on 29th April 2021\n\nAbi Tierney Registrar General", - "page_start": 1, - "page_end": 1, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "29th April 2021\n\nKevin Foster Parliamentary Under Secretary of State Home Office\n\n## EXPLANATORY NOTE\n\n(This note is not part of the Regulations)\n\nThese Regulations provide for records of marriages to be kept in churches and chapels of the Church of England and the Church in Wales, other than chapels to which Part 5 of the Marriage Act 1949 applies (naval, military and air force chapels).\n\nRegulation 2 requires parochial church councils to provide books known as 'registers of marriage services' to churches and chapels in their parish in which banns of matrimony may be published, for the purposes of keeping the records required by regulation 3. Regulation 2 also imposes requirements relating to the durability and pre-printed content of these registers, and provides that they belong to the parochial church council.\n\nRegulation 3 requires specified information to be recorded in a register of marriage services when a marriage has been solemnized on or after 4th May 2021 according to the rites of the Church of England or Church in Wales in a church or chapel in which banns of matrimony may be published. 
The record must be made and signed by the member of the clergy by whom the marriage was solemnized.\n\nRegulation 4 imposes requirements relating to the keeping of registers of marriage services provided under regulation 2.\n\nA full impact assessment has not been produced for this instrument because no, or no significant, impact on the private, public or voluntary sector is foreseen.\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "## S T A T U T O R Y I N S T R U M E N T S\n\n## 2021 No. 538\n\n## MARRIAGE, ENGLAND AND WALES\n\nThe Marriage (Keeping of Records in Churches and Chapels) Regulations 2021\n\nMade\n\n-\n\n-\n\n-\n\n-\n\n29th April 2021\n\nComing into force - -\n\n4th May 2021\n\nThe Registrar General makes these Regulations with the approval of the Secretary of State in exercise of the powers conferred by section 74(1)(c)(v), (1A)(a) and (3) of the Marriage Act 1949( a ).\n\n## Citation, commencement, extent and interpretation\n\n- 1. -(1) These Regulations may be cited as the Marriage (Keeping of Records in Churches and Chapels) Regulations 2021.\n- (2) These Regulations come into force on 4th May 2021.\n- (3) These Regulations extend to England and Wales.\n- (4) In these Regulations, 'chapel' does not include a chapel to which Part 5 of the Marriage Act 1949 (marriages in naval, military and air force chapels) applies( b ).\n\n## Duty of parochial church councils to provide registers of marriage services\n\n- 2. 
-(1) The parochial church council of a parish must provide books for the purpose of making records under regulation 3 to each church and chapel of the Church of England( c ) in that parish in which banns of matrimony may be published.\n- (2) Books provided under paragraph (1) are to be known as 'registers of marriage services'.\n- (3) A register of marriage services provided under paragraph (1) must meet the requirements of paragraphs (4) and (5).\n- (4) The register must be made of durable material.\n- (5) For the purposes of enabling a record to be made in the register under regulation 3 in respect of a marriage, the register must be printed in such a way that it-", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "\n\n - Conseil des dépêches (\"Council of Messages\", concerning notices and administrative reports from the provinces).\n - Conseil de Conscience (\"Council of Conscience\", concerning religious affairs and episcopal appointments).\n - Conseil royal des finances (\"Royal Council of Finances\") headed by the \"chef du conseil des finances\" (an honorary post in most cases)-this was one of the few posts in the council available to the high aristocracy. [38]\n\n## Early wars in the Low Countries\n\n## Spain\n\nThe death of Louis's maternal uncle King Philip IV of Spain in 1665 precipitated the War of Devolution. In 1660, Louis had married Philip IV's eldest daughter, Maria Theresa, as one of the provisions of the 1659 Treaty of the Pyrenees. [39] The marriage treaty specified that Maria Theresa was to renounce all claims to Spanish territory for herself and all her descendants. [39] Mazarin", - "page_start": 5, - "page_end": 5, - "source_file": "wikipedia5.pdf" - }, - { - "text": "## Edict of Fontainebleau\n\nLouis decided to persecute Protestants and revoke the 1598 Edict of Nantes, which awarded Huguenots political and religious freedom. 
He saw the persistence of Protestantism as a disgraceful reminder of royal powerlessness. After all, the Edict was the pragmatic concession of his grandfather Henry IV to end the longstanding French Wars of Religion. An additional factor in Louis's thinking was the prevailing contemporary European principle to assure socio-political stability, cuius regio, eius religio (\"whose realm, his religion\"), the idea that the religion of the ruler should be the religion of the realm (as originally confirmed in central Europe in the Peace of Augsburg of 1555). [67]\n\nResponding to petitions, Louis initially excluded Protestants from office, constrained the meeting of synods, closed churches outside of Edict-stipulated areas, banned Protestant outdoor preachers, and prohibited domestic Protestant migration. He also disallowed Protestant-Catholic intermarriages to which third parties objected, encouraged missions to the Protestants, and", - "page_start": 9, - "page_end": 9, - "source_file": "wikipedia5.pdf" - }, - { - "text": "## 12 ASX CORPORATE GOVERNANCE COUNCIL BEST PRACTICE RECOMMENDATIONS", - "page_start": 36, - "page_end": 36, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "The division of the Metropolis of Lyon in large electoral wards often grouping various communes and dividing the commune of Lyon into six wards was criticized by the suburban mayors, as it ended the rule of 'one commune, one metropolitan councilor'. The goal of this electoral division of the metropolis was to focus metropolitan elections more on metropolitan issues than parochial communal issues, and ensure the 'one person, one vote' rule be respected, by creating electoral wards of more homogeneous population sizes. 
Opponents said it diluted the voice of the small suburban communes, which are now part of large electoral wards and do not each possess a representative in the metropolitan council anymore.\n\n## Presidents of the Metropolitan Council\n\nThe two first presidents of the Metropolis of Lyon's metropolitan council were chosen by indirectly elected metropolitan councilors. The current president since July 2020 was elected by new metropolitan councilors following their election by universal suffrage in March (1st round) and June (2nd round) 2020, the first direct election of a metropolitan council in France.\n\n| President of the Metropolitan Council | Term start | Term end | Party |\n|-----------------------------------------|----------------|--------------|---------|\n| Gérard Collomb | 1 January 2015 | 10 July 2017 | PS |\n| David Kimelfeld | 10 July 2017 | 2 July 2020 | LREM |\n| Bruno Bernard | 2 July 2020 | Incumbent | EELV |\n\n## Main sights\n\n## Antiquity\n\n - The Roman ruins on the hillside near the Fourvière Basilica, with the Ancient Theatre of Fourvière, the Odeon of Lyon and the accompanying Gallo-Roman museum\n - Amphitheatre of the Three Gauls - ruins of a Roman amphitheatre.\n\n\n\n\n\nAncient Theatre of Fourvière\n\n\n\nOdeon of Lyon\n\nAmphitheatre of the Three Gauls\n\n## Middle Ages and Renaissance\n\n - Cathedral of St. 
John, a medieval church with architectural elements of the 13th, 14th and 15th centuries, also the principal religious structure in the city and the seat of the Archbishop of Lyon\n - Basilica of St-Martin-d'Ainay, one of the rare surviving Romanesque basilica-style churches in Lyon\n - Église Saint-Paul, Romanesque (12th and 13th century) and Gothic (15th-16th century) church\n - Église Saint-Bonaventure, 14th- and 15th-century Gothic church\n - Église Saint-Nizier, Gothic church from the 15th century, having a doorway carved in the 16th century by Philibert Delorme\n - Vieux Lyon (English: Old Lyon) area, Medieval and Renaissance quarter of the town, with shops, dining and cobbled streets\n - The many Renaissance hôtels particuliers of the Old Lyon quarter, such as the Hôtel de Bullioud , were also built by Philibert Delorme\n\nMap showing the 14 electoral wards of the Metropolis of Lyon\n\n", - "page_start": 9, - "page_end": 9, - "source_file": "wikipedia4.pdf" - }, - { - "text": "## II.8. Confidentiality\n\n - II.8.1. The contracting authority and the contractor must treat with confidentiality any information or documents, in any format, disclosed in writing or orally, relating to the implementation of the FWC and identified in writing as confidential.\n\n## II.8.2. 
Each party must:\n\n - (a) not use confidential information or documents for any purpose other than to perform its obligations under the FWC or a specific contract without the prior written agreement of the other party;\n - (b) ensure the protection of such confidential information or documents with the same level of protection as its own confidential information or documents and in any case with due diligence;\n - (c) not disclose, directly or indirectly, confidential information or documents to third parties without the prior written agreement of the other party.\n - II.8.3 The confidentiality obligations set out in this Article are binding on the contracting authority and the contractor during the implementation of the FWC and for as long as the information or documents remain confidential unless:\n - (a) the disclosing party agrees to release the receiving party from the confidentiality obligation earlier;\n - (b) the confidential information or documents become public through other means than a breach of the confidentiality obligation;\n - (c) the applicable law requires the disclosure of the confidential information or documents .\n - II.8.4 The contractor must obtain from any natural person with the power to represent it or take decisions on its behalf, as well as from third parties involved in the implementation of the FWC a commitment that they will comply with this Article. At the request of the contracting authority, the contractor must provide a document providing evidence of this commitment.\n\n## II.9. Processing of personal data\n\n## II.9.1 Processing of personal data by the contracting authority\n\nAny personal data included in or relating to the FWC, including its implementation, shall be processed in accordance with Regulation (EU) No 2018/1725. 
Such data shall be processed solely for the purposes of the implementation, management and monitoring of the FWC by the data controller.\n\nThe contractor or any other person whose personal data is processed by the data controller in relation to this FWC has specific rights as a data subject under Chapter III (Articles 1425) of Regulation (EU) No 2018/1725, in particular the right to access, rectify or erase their personal data and the right to restrict or, where applicable, the right to object to processing or the right to data portability.\n\nShould the contractor or any other person whose personal data is processed in relation to this FWC have any queries concerning the processing of its personal data, it shall address", - "page_start": 19, - "page_end": 19, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "Louis XIV in 1685, the year he revoked the Edict of Nantes\n\n\n\nrewarded converts to Catholicism. [68] This discrimination did not encounter much Protestant resistance, and a steady conversion of Protestants occurred, especially among the noble elites.\n\nIn 1681, Louis dramatically increased his persecution of Protestants. The principle of cuius regio, eius religio generally also meant that subjects who refused to convert could emigrate, but Louis banned emigration and effectively insisted that all Protestants must be converted. Secondly, following the proposal of René de Marillac and the Marquis of Louvois, he began quartering dragoons in Protestant homes. Although this was within his legal rights, the dragonnades inflicted severe financial strain on Protestants and atrocious abuse. Between 300,000 and 400,000 Huguenots converted, as this entailed financial rewards and exemption from the dragonnades . [69]\n\nOn 15 October 1685, Louis issued the Edict of Fontainebleau, which cited the redundancy of privileges for Protestants given their scarcity after the extensive conversions. 
The Edict of Fontainebleau revoked the Edict of Nantes and repealed all the privileges that arose therefrom. [4] By his edict, Louis no longer tolerated the existence of Protestant groups, pastors, or churches in France.\n\nNo further churches were to be constructed, and those already existing were to be demolished. Pastors could choose either exile or secular life. Those Protestants who had resisted conversion were now to be baptised forcibly into the established church. [70]\n\nHistorians have debated Louis's reasons for issuing the Edict of Fontainebleau. He may have been seeking to placate Pope Innocent XI, with whom relations were tense and whose aid was necessary to determine the outcome of a succession crisis in the Electorate of Cologne. He may also have acted to upstage Emperor Leopold I and regain international prestige after the latter defeated the Turks without Louis's help. Otherwise, he may simply\n\nProtestant peasants rebelled against the officially sanctioned dragonnades (conversions enforced by dragoons, labeled \"missionaries in boots\") that followed the Edict of Fontainebleau.\n\n\n\nhave desired to end the remaining divisions in French society dating to the Wars of Religion by fulfilling his coronation oath to eradicate heresy. [71][72]\n\nMany historians have condemned the Edict of Fontainebleau as gravely harmful to France. [73] In support, they cite the emigration of about 200,000 highly skilled Huguenots (roughly one quarter of the Protestant population, or 1% of the French population) who defied royal decrees and fled France for various Protestant states, weakening the French economy and enriching that of Protestant states. On the other hand, some historians view this as an exaggeration. They argue that most of France's preeminent Protestant businessmen and industrialists converted to Catholicism and remained. [74]\n\nWhat is certain is that the reaction to the Edict was mixed. 
Even while French Catholic leaders exulted, Pope Innocent XI still argued with Louis over Gallicanism and criticized the use of violence. Protestants across Europe were horrified at the treatment of their co-religionists, but most Catholics in France applauded the move. Nonetheless, it is indisputable that Louis's public image in most of Europe, especially in Protestant regions, was dealt a severe blow.", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia5.pdf" - }, - { - "text": "- (i) where the results are or include logos or subject-matter which could be registered as a trademark: the right to register such logo or subject-matter as a trademark and to further exploit and use it;\n - (j) where the results are or include know-how: the right to use such know-how as is necessary to make use of the results to the full extent provided for by this FWC, and the right to make it available to contractors or subcontractors acting on behalf of the contracting authority, subject to their signing of adequate confidentiality undertakings where necessary;\n - (k) where the results are documents:\n - (i) the right to authorise the reuse of the documents in conformity with the Commission Decision of 12 December 2011 on the reuse of Commission documents (2011/833/EU), to the extent it is applicable and the documents fall within its scope and are not excluded by any of its provisions; for the sake of this provision, \"reuse\" and \"document\" have the meaning given to them by this Decision;\n - (ii) the right to store and archive the results in line with the document management rules applicable to the contracting authority, including digitisation or converting the format for preservation or new use purposes;\n - (l) where the results are or incorporate software, including source code, object code and, where relevant, documentation, preparatory materials and manuals, in addition to the other rights mentioned in this Article:\n - (i) end-user rights, for all uses by 
the contracting authority or by subcontractors which result from this FWC and from the intention of the parties;\n - (ii) the rights to receive both the source code and the object code;\n - (m) the right to license to third parties any of the exclusive rights or of the modes of exploitation set out in this FWC; however, for pre-existing materials which are only licensed to the contracting authority, the right to sub-license does not apply, except in the two cases foreseen by Article II.13.2.;\n - (n) to the extent that the contractor may invoke moral rights, the right for the contracting authority, except where otherwise provided in this FWC, to publish the results with or without mentioning the creator (s)' name(s), and the right to decide when and whether the results may be disclosed and published.", - "page_start": 24, - "page_end": 24, - "source_file": "EN-Draft FWC for services 0142.pdf" - } - ] - }, - { - "references": { - "source_file": "legal4_opengouvernementlicense.pdf", - "query": "What is the prison population grew in average by year between 1993 and 2008 ?", - "target_page": 8, - "target_passage": "The prison population grew rapidly between 1993 to 2008, at an average of 4% a year.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## 2. Recent trends in the population\n\nThe 'Story of the Prison Population 1993 to 2012' is an in-depth look at what happened to the prison population between 1993 and 2012 and the major factors contributing to the changes. 4\n\nThe prison population grew rapidly between 1993 to 2008, at an average of 4% a year. 
This rapid rise was driven by:\n\n -  increased numbers of people sentenced to immediate custody from 1993 to 2002;\n -  increases in the average custodial sentence length and increased use of indeterminate sentences; and\n -  an increase in numbers recalled to prison following breaches of the conditions of licence and these offenders spending longer in prison once recalled.\n\nThe rise in the prison population slowed considerably from the summer of 2008, in part due to the introduction of the Criminal Justice and Immigration Act (CJIA) 2008 5 which changed sentencing and offender management in ways which helped to reduce growth in the prison population.\n\nThis flatter trend continued until the public disorder seen in UK cities from 6 to 9 August 2011 which had an immediate but temporary impact on the prison population.\n\nDuring 2012 and into 2013, the prison population began to fall due to a falling remand population and a continued decline in the number of under 18s in custody. The falling remand population during 2012 reflected falling volumes going through the courts plus the introduction, in December 2012, of measures restricting the use of remand for all offenders who would be unlikely to receive a custodial sentence. 6\n\nFrom the end of August 2013 to the end of October 2013, the remand\n\npopulation rose sharply, driving an overall increase in the prison population. This was being driven by an increase in demand in the Crown Courts, especially among more serious tri-able either way cases. The total population has continued to rise since the beginning of 2014 and reached 85,925 7 on the", - "page_start": 7, - "page_end": 7, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "## Key points\n\nThis bulletin presents projections of the prison population in England and Wales from November 2014 to December 2020. 
The prison population projections are based on assumptions about future custodial convictions and incorporate the anticipated impacts of agreed policy and procedural initiatives.\n\nThe 'Central Scenario' estimates that the prison population will increase from the current position 85,925 1 to 87,700 by June 2015. By the end of June 2020 the prison population is projected to be 90,200. This Central Scenario is our best estimate based on the available information. The projected prison population under our Central Scenario is shown in Chart 1.\n\nThe prison population projections are produced using a model of flows of offenders into and out of prison which counts the resulting prison population each month.\n\nChart 1: Projected prison population (Central Scenario)\n\n\n\nThe Central Scenario has been modelled assuming custodial convictions are broadly in line with recent trends and average length of sentence to be flat based on recent trends.\n\nThe projections do not attempt to estimate the impact of any future Government policy that is yet to achieve Royal Assent, and therefore become less certain over time.", - "page_start": 3, - "page_end": 3, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "## 4. 
Results\n\nThe Central Scenario estimates that the prison population will rise to 87,700 by the end of June 2015 and to 90,200 by the end of June 2020.\n\nChart 2 presents Prison population projections from November 2014 to December 2020.\n\nChart 2: Projected monthly prison population (all scenarios)\n\n\n\nIllustrative Scenario 1 estimates that the prison population will rise to 87,100 by the end of June 2015 and then fall to 81,400 by the end of June 2020.\n\nIllustrative Scenario 2 estimates that the prison population will rise to 88,900 by the end of June 2015 and to 98,900 by the end of June 2020.\n\nThe projected trends reflect the cumulative impacts of the various sentencing, legislative and procedural assumptions that are used to generate the projections. The seasonal pattern reflects the dip in the prison population which is always seen around the Christmas period.\n\nIn the Central Scenario, the prison population is expected to rise to 90,200 by June 2020. The projected population increase is largely due to the recent trends in case mix where we have seen more serious cases come before the courts. This results in offenders receiving longer custodial sentence lengths, which in turn places an upward pressure on the prison population. The growth in this scenario is largely driven by the rise in the determinate population which is projected to grow to 60,200 by June 2020. This is partially due to the", - "page_start": 12, - "page_end": 12, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "## 3a) Producing prison population projections\n\nPrison population projections are produced using the Prison Population StockFlow Model. The principal sub-populations in prison - determinate sentence, life sentence, imprisonment for public protection (IPP) and remand - are modelled using stock-flow structures based on the generic structure shown in Figure B2. 
The stock-flow structures model the flow of offenders into and out of prison and count the resulting prison population at the end of each month.\n\nFigure B2: Generic stock-flow structure in the Prison Population Stock-Flow Model\n\n\n\nFor the determinate population, the monthly inflows to prison are based on the custodial convictions projections described above. These custodial convictions include offenders that may already be serving a sentence for a previous crime or those who would serve their whole custodial sentence on remand, meaning that they would not be a new reception to prison. To convert from custodial convictions to prison receptions we apply a conversion ratio derived from the historical proportions of custodial convictions to prison receptions for each sub-population averaged over the last twelve months of historical data (April 2013 to March 2014 inclusive).\n\nMonthly outflows for the determinate population are based on observed custodial sentence lengths and the observed percentage of sentence length served taken from October 2013 to April 2014. Each projected offender that enters the model is given a custodial sentence length that is randomly selected from the relevant distribution. These distributions are populated with custodial sentence lengths from actual offender receptions who share the same characteristics of offence, gender and age group in the observed time period. 
The percent of custodial sentence length served is derived in the same manner, except that the observed distribution is made up of discharged offenders further disaggregated by custodial sentence length band.\n\nFor offenders who receive the new EDS sentence an adjustment is made to the percent of custodial length served to reflect that these offenders will spend a greater proportion of their sentence in custody than standard determinate sentenced offenders discharged to date.\n\nProjected prison receptions are sub-divided by age category (Juvenile, Young Adult, Adult) with the exact age of the offender attributed in the same manner as the custodial sentence lengths. This allows the model to explicitly age the offenders whilst in prison (e.g. move from Juvenile to Young Adult categories).", - "page_start": 26, - "page_end": 26, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "## Appendix B: Detail of models, scenarios and assumptions\n\n## The updated modelling approach\n\nThe prison projections form part of the Ministry of Justice's wider work to develop a consistent and coherent suite of models of the criminal courts and offender management, driven by common projections of demand for the Ministry of Justice's services.\n\nThe prisons model used to generate the 2014 projections has not changed substantially from that used in the 2013 projections. As in the 2013 projections custodial sentence lengths used in the model are disaggregated by gender, age of the offender and offence type. The total time to be served in prison by projected future prisoners is assigned by matching their gender and age characteristics to relevant distributions of (i) custodial sentence lengths and (ii) the percentage of custodial sentence served. These distributions are derived from data for the period October 2013 to April 2014. 
This allows us to:\n\n -  understand the Criminal Justice System factors which contribute to change in the prison population, including sentences lengths issued, the percentage of sentence served in custody, trial court and sentencing court changes, or shifts in the demographic characteristics of defendants;\n -  model the impact on the prison population of specific Ministry of Justice and other Criminal Justice Agency policy changes; and\n -  quantify the impact of uncertainty around the time a defendant serves in prison on the prison population.\n\n## Overview of the modelling approach\n\nCentral to the modelling approach is the Prison Population Stock-Flow model. Projections of future custodial convictions are fed into this model and outputs are adjusted to account for the impact of changes in legislation and process on the prison population, as shown in Figure B1, and described below.", - "page_start": 22, - "page_end": 22, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "The approach for the other sub-populations is similar and has not been substantially revised since the 2013 publication. The methodology applied to each is briefly outlined below.\n\nThe recall population is projected going forward based on time-series data available to October 2014.\n\nFor remand prisoners the average time served on remand is calculated from the ratio of the remand population to remand receptions. The modelled stock of prisoners is calibrated to historical actuals by varying levels of receptions. The remand population is generated in two parts both using this approach untried remand and unsentenced remand populations being treated separately.\n\nIPP and life sentence prisoners have an extra section in the stock-flow structure which models the indeterminate nature of their sentence lengths. Outflows for IPP and life sentence prisoners depend on the tariff lengths they receive and on the frequency and outcome of Parole Board hearings. 
The values of these parameters are set and calibrated to reflect the most recent data on Parole Board outcomes.\n\nNOMS have made an agreement with the Home Office to hold an increased number of immigration detainees, which are only seen in the final two periods of historical data. The projected size of the non-criminal population is therefore set equal to the average size of the non-criminal population over the last two months of available data. This ensures that the non-criminal projections reflect the latest and most accurate count of the non-criminal population.\n\nThe population in prison at the end of each modelled month is aggregated into the categories defined by gender, current age group and, for determinate sentence prisoners, sentence length band, to produce raw, unadjusted prison population projections.\n\n## 3b) Accounting for the impacts of circumstance, legislation, and for seasonal effects\n\nThe raw, unadjusted prison population projections are subject to model adjustments to show the impact of certain provisions in the Offender Rehabilitation Act 2014, changes at the Verne and the ROTL review. Model adjustments are also used to account for seasonal variation in the population. Model adjustments have been applied equally to all the scenarios modelled.\n\nThe Home Office is to gain access to all 580 places at the Verne IRC by January 2015. 
The estimated impacts have been applied to the non-criminal projection in the model.\n\nProvisions in the Offender Rehabilitation Act 2014 will mean that offenders sentenced to custodial sentences of less than 12 months will be released subject to licence (in the same way as offenders currently released from", - "page_start": 27, - "page_end": 27, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "At the core of the method is a model of flows of offenders into and out of prison which counts the resulting prison population each month for sentenced, recall and remand prisoners.\n\nInputs to the prison projections model include projections of future custodial convictions. These are generated from time series projections of numbers of defendants entering the criminal courts and take into account the age, gender and offence of defendants entering the system, the flow of cases through the courts and the sentences which concluded cases attract.\n\nThe prison projections model monitors the sizes of the sentenced, recall and remand prison populations. These populations depend on the inflows defined above and the outflows. These outflows are defined by observed distributions of custodial sentence lengths, and the proportion of custodial sentences served for subsets of these populations. The model also simulates the ageing of the prison population over time.\n\nThe projection model is based on data up to June 2014 from various sources including court proceedings and performance data, sentencing data and prison receptions and population data.\n\nThe results of the prison projections model are supplemented with an estimate of the future non-criminal and fine defaulter populations, which is based on the latest available data to September 2014.\n\nThree scenarios have been modelled. 
These scenarios track the impact of three different incremental changes in sentencing behaviour:\n\n -  The Central Scenario assumes custodial convictions are broadly in line with recent trends. The average length of sentence is assumed to be flat based on recent trends in sentence lengths. This broadly reflects the assumptions for Scenario 2 in the November 2013 projections.\n\nWe also consider two illustrative scenarios\n\n -  Scenario 1 assumes that custodial convictions will fall against recent trends. The average length of sentence is assumed to be lower than what has been observed in recent trends in sentence lengths.\n -  Scenario 2 assumes a rise in custodial convictions when compared to recent trends. Also the average length of sentence is assumed to be higher than what has been observed in recent trends in sentence lengths.\n\nThe three scenarios also incorporate the impact of:\n\n -  trends in the age, gender and offence of defendants entering the system and in the flow of cases through the courts;", - "page_start": 10, - "page_end": 10, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "longer custodial terms). During the licence period, offenders are under probation supervision and can be subject to various conditions for the purposes of rehabilitation and public protection. The 2014 Act will also introduce a new post-sentence supervision period that follows licence for offenders released from custodial sentences of less than 2 years. Breaches of these licence or supervision periods could result in the offender being recalled or committed to custody, impacting the prison population. The estimated impacts have been applied to the recall populations in the model.\n\nThe impact of the ROTL review has also been included as a post model adjustment. 
The review decided that all offenders who have previously absconded will no longer be allowed to return to the open estate or be released on temporary licence except in exceptional circumstances. Alongside protecting the public this may have the impact of delaying the release decision for such offenders impacting the prison population. The estimated impacts have been applied to the determinate population with sentences of greater than 12 months and the indeterminate population.\n\nOther ongoing changes within the system - included in previous published projections as model adjustments - are assumed to be captured in the past data and the trends detected therein.\n\nCustodial conviction projections for each sub-population were smoothed using a centred 12 month average. No seasonality in prison receptions and discharges was modelled explicitly. Seasonality was measured in the historical prison population and applied as a series of percentage adjustments to the final population projections. Seasonal factors for a set of sub-population categories (Remand, Determinate by sentence length band and Recall) were identified for each month by measuring statistically significant deviations from a centred 12 month average.", - "page_start": 28, - "page_end": 28, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "The assumptions used are based on consultation with policy and operational experts at the Ministry of Justice and the National Offender Management Service. They also take into account observed data trends:\n\n-  These projections represent a change from last year where the 2013 Scenario 2 (central) saw the population gradually falling over the six year lifetime of the projection. The Central Scenario in the projections this year shows the population rising over the next six years. 
This change arises from the fact that the latest projections capture a recent upward trend in prosecutions of more serious offences.\n-  Despite the fact that overall crime is falling there has been an increase in recorded crime for certain offence types:\n- o Prosecutions for sexual offences are the highest in the decade and increased by 19% in the 12 months ending June 2014, in line with a 21% increase in recorded crime. Offenders sentenced for sexual offences had an Average Custodial Sentence Length (ASCL) of 59.7 months, a rise of 2.4 months, compared with year ending June 2013.\n- o Violence against the person proceedings for indictable offences have increased by 7% in the 12 months ending June 2014. This is in line with an 11% increase in recorded crime.\n\nFurther statistics and commentary on the changes seen in Court proceedings and sentencing over the last year is presented in the Criminal Justice System Statistics Quarterly publication. This is available online on GOV.UK at: www.gov.uk/government/collections/criminal-justice-statistics-quarterly", - "page_start": 4, - "page_end": 4, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "## 1. Central Scenario\n\nThis bulletin presents prison population projections for England and Wales from November 2014 to December 2020. The central projection is produced to aid development, capacity planning and resource allocation within the Criminal Justice System (CJS) and the National Offender Management Service (NOMS). The latest published useable operational capacity (21 November 2014) is 88,015 2 .\n\nThe Central Scenario estimates that the prison population will rise to 87,700 by the end of June 2015 and to 90,200 by the end of June 2020.\n\nThe Central Scenario tracks the impact of current trends in sentencing on custodial convictions, custodial sentence lengths and hence on the resulting prison population. These assumptions have been agreed through a consultative process. 
Government policy is only included in these projections when it has received Royal Assent. These projections also take into account other drivers including:\n\n -  trends in the age, gender and offence of defendants entering the system and in the flow of cases through the courts;\n -  assumptions regarding future parole hearing frequency and expected outcomes for indeterminate (Life and Indeterminate for the Public Protection) sentences;\n -  the Home Office gaining access to all 580 places at the Verne Immigration Removal Centre (IRC) by January 2015;\n -  the impacts of the Offender Rehabilitation Act 2014 3 which achieved Royal Assent on 13 March 2014 meaning offenders sentenced to custodial sentences of less than 12 months will be released subject to licence. There will also be a new post-sentence supervision period following licence for offenders released from custodial sentences of less than 2 years;\n -  the impacts of the Release on Temporary Licence (ROTL) review deciding that all offenders who have previously absconded will no longer be allowed to return to the open estate or be released on temporary licence except in exceptional circumstances.", - "page_start": 5, - "page_end": 5, - "source_file": "legal4_opengouvernementlicense.pdf" - } - ] - }, - { - "references": { - "source_file": "legal4_opengouvernementlicense.pdf", - "query": "Do you know the prison population estimation for the and of June 2020 ?", - "target_page": 13, - "target_passage": "The Central Scenario estimates that the prison population will rise to 87,700 by the end of June 2015 and to 90,200 by the end of June 2020. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## 4. 
Results\n\nThe Central Scenario estimates that the prison population will rise to 87,700 by the end of June 2015 and to 90,200 by the end of June 2020.\n\nChart 2 presents Prison population projections from November 2014 to December 2020.\n\nChart 2: Projected monthly prison population (all scenarios)\n\n\n\nIllustrative Scenario 1 estimates that the prison population will rise to 87,100 by the end of June 2015 and then fall to 81,400 by the end of June 2020.\n\nIllustrative Scenario 2 estimates that the prison population will rise to 88,900 by the end of June 2015 and to 98,900 by the end of June 2020.\n\nThe projected trends reflect the cumulative impacts of the various sentencing, legislative and procedural assumptions that are used to generate the projections. The seasonal pattern reflects the dip in the prison population which is always seen around the Christmas period.\n\nIn the Central Scenario, the prison population is expected to rise to 90,200 by June 2020. The projected population increase is largely due to the recent trends in case mix where we have seen more serious cases come before the courts. This results in offenders receiving longer custodial sentence lengths, which in turn places an upward pressure on the prison population. The growth in this scenario is largely driven by the rise in the determinate population which is projected to grow to 60,200 by June 2020. This is partially due to the", - "page_start": 12, - "page_end": 12, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "## Key points\n\nThis bulletin presents projections of the prison population in England and Wales from November 2014 to December 2020. The prison population projections are based on assumptions about future custodial convictions and incorporate the anticipated impacts of agreed policy and procedural initiatives.\n\nThe 'Central Scenario' estimates that the prison population will increase from the current position 85,925 1 to 87,700 by June 2015. 
By the end of June 2020 the prison population is projected to be 90,200. This Central Scenario is our best estimate based on the available information. The projected prison population under our Central Scenario is shown in Chart 1.\n\nThe prison population projections are produced using a model of flows of offenders into and out of prison which counts the resulting prison population each month.\n\nChart 1: Projected prison population (Central Scenario)\n\n\n\nThe Central Scenario has been modelled assuming custodial convictions are broadly in line with recent trends and average length of sentence to be flat based on recent trends.\n\nThe projections do not attempt to estimate the impact of any future Government policy that is yet to achieve Royal Assent, and therefore become less certain over time.", - "page_start": 3, - "page_end": 3, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "## 3. Modelling methodology and projection scenarios\n\nThe prison projections model is part of wider work within the Ministry of Justice to develop a consistent and coherent suite of models of the criminal courts and offender management, driven by common projections of demand for the Ministry of Justice's services.\n\nThe custodial convictions model uses projections of numbers of defendants entering the criminal courts. 
In order to project volumes of defendants being given a custodial sentence, it also takes into account:\n\n -  the age, gender and offence of defendants entering the system;\n -  the flow of cases through the courts; and\n -  the sentences which concluded cases attract.\n\nThe prison population projections model takes projections of custodial convictions, converts them to projections of prison receptions and then models the amount of time that offenders spend in prison to calculate the resulting prison population.\n\nThe benefits of this method are that it allows us to:\n\n -  explicitly project custodial convictions (rather than just convictions);\n -  understand the Criminal Justice System factors which contribute to change in the prison population, such as time served, sentences given, trial and sentencing court changes or shifts in defendant demographics; and\n -  more easily model the impact on the prison population of specific Ministry of Justice and other Criminal Justice Agency policy changes relating to specific offences or specific sentences.\n\nAppendix B provides details of the methods used to produce the prison population projections and the assumptions behind them.\n\nThe assumptions informing these projections, and therefore the projections themselves, are subject to significant uncertainty. This is represented by the three scenarios, with each scenario being only as likely as the assumptions which inform it.\n\nThe method used for generating projections of the prison population in England and Wales for the 2014-2020 projections is consistent with the approach used to generate the 2013-2019 projections published on 7 November 2013.", - "page_start": 9, - "page_end": 9, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "## 2. 
Recent trends in the population\n\nThe 'Story of the Prison Population 1993 to 2012' is an in-depth look at what happened to the prison population between 1993 and 2012 and the major factors contributing to the changes. 4\n\nThe prison population grew rapidly between 1993 to 2008, at an average of 4% a year. This rapid rise was driven by:\n\n -  increased numbers of people sentenced to immediate custody from 1993 to 2002;\n -  increases in the average custodial sentence length and increased use of indeterminate sentences; and\n -  an increase in numbers recalled to prison following breaches of the conditions of licence and these offenders spending longer in prison once recalled.\n\nThe rise in the prison population slowed considerably from the summer of 2008, in part due to the introduction of the Criminal Justice and Immigration Act (CJIA) 2008 5 which changed sentencing and offender management in ways which helped to reduce growth in the prison population.\n\nThis flatter trend continued until the public disorder seen in UK cities from 6 to 9 August 2011 which had an immediate but temporary impact on the prison population.\n\nDuring 2012 and into 2013, the prison population began to fall due to a falling remand population and a continued decline in the number of under 18s in custody. The falling remand population during 2012 reflected falling volumes going through the courts plus the introduction, in December 2012, of measures restricting the use of remand for all offenders who would be unlikely to receive a custodial sentence. 6\n\nFrom the end of August 2013 to the end of October 2013, the remand\n\npopulation rose sharply, driving an overall increase in the prison population. This was being driven by an increase in demand in the Crown Courts, especially among more serious tri-able either way cases. 
The total population has continued to rise since the beginning of 2014 and reached 85,925 7 on the", - "page_start": 7, - "page_end": 7, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "## Appendix B: Detail of models, scenarios and assumptions\n\n## The updated modelling approach\n\nThe prison projections form part of the Ministry of Justice's wider work to develop a consistent and coherent suite of models of the criminal courts and offender management, driven by common projections of demand for the Ministry of Justice's services.\n\nThe prisons model used to generate the 2014 projections has not changed substantially from that used in the 2013 projections. As in the 2013 projections custodial sentence lengths used in the model are disaggregated by gender, age of the offender and offence type. The total time to be served in prison by projected future prisoners is assigned by matching their gender and age characteristics to relevant distributions of (i) custodial sentence lengths and (ii) the percentage of custodial sentence served. These distributions are derived from data for the period October 2013 to April 2014. This allows us to:\n\n -  understand the Criminal Justice System factors which contribute to change in the prison population, including sentences lengths issued, the percentage of sentence served in custody, trial court and sentencing court changes, or shifts in the demographic characteristics of defendants;\n -  model the impact on the prison population of specific Ministry of Justice and other Criminal Justice Agency policy changes; and\n -  quantify the impact of uncertainty around the time a defendant serves in prison on the prison population.\n\n## Overview of the modelling approach\n\nCentral to the modelling approach is the Prison Population Stock-Flow model. 
Projections of future custodial convictions are fed into this model and outputs are adjusted to account for the impact of changes in legislation and process on the prison population, as shown in Figure B1, and described below.", - "page_start": 22, - "page_end": 22, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "The approach for the other sub-populations is similar and has not been substantially revised since the 2013 publication. The methodology applied to each is briefly outlined below.\n\nThe recall population is projected going forward based on time-series data available to October 2014.\n\nFor remand prisoners the average time served on remand is calculated from the ratio of the remand population to remand receptions. The modelled stock of prisoners is calibrated to historical actuals by varying levels of receptions. The remand population is generated in two parts both using this approach untried remand and unsentenced remand populations being treated separately.\n\nIPP and life sentence prisoners have an extra section in the stock-flow structure which models the indeterminate nature of their sentence lengths. Outflows for IPP and life sentence prisoners depend on the tariff lengths they receive and on the frequency and outcome of Parole Board hearings. The values of these parameters are set and calibrated to reflect the most recent data on Parole Board outcomes.\n\nNOMS have made an agreement with the Home Office to hold an increased number of immigration detainees, which are only seen in the final two periods of historical data. The projected size of the non-criminal population is therefore set equal to the average size of the non-criminal population over the last two months of available data. 
This ensures that the non-criminal projections reflect the latest and most accurate count of the non-criminal population.\n\nThe population in prison at the end of each modelled month is aggregated into the categories defined by gender, current age group and, for determinate sentence prisoners, sentence length band, to produce raw, unadjusted prison population projections.\n\n## 3b) Accounting for the impacts of circumstance, legislation, and for seasonal effects\n\nThe raw, unadjusted prison population projections are subject to model adjustments to show the impact of certain provisions in the Offender Rehabilitation Act 2014, changes at the Verne and the ROTL review. Model adjustments are also used to account for seasonal variation in the population. Model adjustments have been applied equally to all the scenarios modelled.\n\nThe Home Office is to gain access to all 580 places at the Verne IRC by January 2015. The estimated impacts have been applied to the non-criminal projection in the model.\n\nProvisions in the Offender Rehabilitation Act 2014 will mean that offenders sentenced to custodial sentences of less than 12 months will be released subject to licence (in the same way as offenders currently released from", - "page_start": 27, - "page_end": 27, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "\n\n\n\n## Prison Population Projections 2014 - 2020 England and Wales\n\nMinistry of Justice Statistics Bulletin\n\nPublished 27th November 2014", - "page_start": 0, - "page_end": 0, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "At the core of the method is a model of flows of offenders into and out of prison which counts the resulting prison population each month for sentenced, recall and remand prisoners.\n\nInputs to the prison projections model include projections of future custodial convictions. 
These are generated from time series projections of numbers of defendants entering the criminal courts and take into account the age, gender and offence of defendants entering the system, the flow of cases through the courts and the sentences which concluded cases attract.\n\nThe prison projections model monitors the sizes of the sentenced, recall and remand prison populations. These populations depend on the inflows defined above and the outflows. These outflows are defined by observed distributions of custodial sentence lengths, and the proportion of custodial sentences served for subsets of these populations. The model also simulates the ageing of the prison population over time.\n\nThe projection model is based on data up to June 2014 from various sources including court proceedings and performance data, sentencing data and prison receptions and population data.\n\nThe results of the prison projections model are supplemented with an estimate of the future non-criminal and fine defaulter populations, which is based on the latest available data to September 2014.\n\nThree scenarios have been modelled. These scenarios track the impact of three different incremental changes in sentencing behaviour:\n\n -  The Central Scenario assumes custodial convictions are broadly in line with recent trends. The average length of sentence is assumed to be flat based on recent trends in sentence lengths. This broadly reflects the assumptions for Scenario 2 in the November 2013 projections.\n\nWe also consider two illustrative scenarios\n\n -  Scenario 1 assumes that custodial convictions will fall against recent trends. The average length of sentence is assumed to be lower than what has been observed in recent trends in sentence lengths.\n -  Scenario 2 assumes a rise in custodial convictions when compared to recent trends. 
Also the average length of sentence is assumed to be higher than what has been observed in recent trends in sentence lengths.\n\nThe three scenarios also incorporate the impact of:\n\n -  trends in the age, gender and offence of defendants entering the system and in the flow of cases through the courts;", - "page_start": 10, - "page_end": 10, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "## 1. Central Scenario\n\nThis bulletin presents prison population projections for England and Wales from November 2014 to December 2020. The central projection is produced to aid development, capacity planning and resource allocation within the Criminal Justice System (CJS) and the National Offender Management Service (NOMS). The latest published useable operational capacity (21 November 2014) is 88,015 2 .\n\nThe Central Scenario estimates that the prison population will rise to 87,700 by the end of June 2015 and to 90,200 by the end of June 2020.\n\nThe Central Scenario tracks the impact of current trends in sentencing on custodial convictions, custodial sentence lengths and hence on the resulting prison population. These assumptions have been agreed through a consultative process. Government policy is only included in these projections when it has received Royal Assent. These projections also take into account other drivers including:\n\n -  trends in the age, gender and offence of defendants entering the system and in the flow of cases through the courts;\n -  assumptions regarding future parole hearing frequency and expected outcomes for indeterminate (Life and Indeterminate for the Public Protection) sentences;\n -  the Home Office gaining access to all 580 places at the Verne Immigration Removal Centre (IRC) by January 2015;\n -  the impacts of the Offender Rehabilitation Act 2014 3 which achieved Royal Assent on 13 March 2014 meaning offenders sentenced to custodial sentences of less than 12 months will be released subject to licence. 
There will also be a new post-sentence supervision period following licence for offenders released from custodial sentences of less than 2 years;\n -  the impacts of the Release on Temporary Licence (ROTL) review deciding that all offenders who have previously absconded will no longer be allowed to return to the open estate or be released on temporary licence except in exceptional circumstances.", - "page_start": 5, - "page_end": 5, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "longer custodial terms). During the licence period, offenders are under probation supervision and can be subject to various conditions for the purposes of rehabilitation and public protection. The 2014 Act will also introduce a new post-sentence supervision period that follows licence for offenders released from custodial sentences of less than 2 years. Breaches of these licence or supervision periods could result in the offender being recalled or committed to custody, impacting the prison population. The estimated impacts have been applied to the recall populations in the model.\n\nThe impact of the ROTL review has also been included as a post model adjustment. The review decided that all offenders who have previously absconded will no longer be allowed to return to the open estate or be released on temporary licence except in exceptional circumstances. Alongside protecting the public this may have the impact of delaying the release decision for such offenders impacting the prison population. The estimated impacts have been applied to the determinate population with sentences of greater than 12 months and the indeterminate population.\n\nOther ongoing changes within the system - included in previous published projections as model adjustments - are assumed to be captured in the past data and the trends detected therein.\n\nCustodial conviction projections for each sub-population were smoothed using a centred 12 month average. 
No seasonality in prison receptions and discharges was modelled explicitly. Seasonality was measured in the historical prison population and applied as a series of percentage adjustments to the final population projections. Seasonal factors for a set of sub-population categories (Remand, Determinate by sentence length band and Recall) were identified for each month by measuring statistically significant deviations from a centred 12 month average.", - "page_start": 28, - "page_end": 28, - "source_file": "legal4_opengouvernementlicense.pdf" - } - ] - }, - { - "references": { - "source_file": "legal4_opengouvernementlicense.pdf", - "query": "What is the phone number of the Ministry of Justice press office ?", - "target_page": 30, - "target_passage": "Press enquiries should be directed to the Ministry of Justice press office, telephone: 020 3334 3536 ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Contact Points for further information\n\nCurrent and previous editions of this publication are available for download from www.justice.gov.uk/publications/statistics-and-data/index.htm\n\nPress enquiries should be directed to the Ministry of Justice press office, telephone: 020 3334 3536\n\nOther enquiries about these statistics should be directed to:\n\nJustice Statistics Analytical Services Ministry of Justice 7th Floor 102 Petty France London SW1H 9AJ\n\nGeneral enquiries about the statistical work of the Ministry of Justice can be emailed to: statistics.enquiries@justice.gsi.gov.uk\n\nGeneral information about the official statistics system of the UK is available from www.statistics.gov.uk", - "page_start": 29, - "page_end": 29, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "Alternative format versions of this report are available on request from the Ministry of Justice at statistics.enquiries@justice.gsi.gov.uk\n\n© Crown copyright Produced by the Ministry of Justice", - "page_start": 30, - 
"page_end": 30, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "\n\n\n\n## Prison Population Projections 2014 - 2020 England and Wales\n\nMinistry of Justice Statistics Bulletin\n\nPublished 27th November 2014", - "page_start": 0, - "page_end": 0, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "\n\n## Corporate Information\n\nKingsgate Consolidated Limited ABN 42 000 837 472\n\n## Directors\n\nRoss Smyth-Kirk (Chairman)\n\nGavin Thomas (Managing Director)\n\nPeter Alexander\n\nCraig Carracher\n\nPeter McAleer\n\n## Company Secretary\n\nRoss Coyle\n\n## Chief Executive Officer\n\nGavin Thomas\n\n## Stock Exchange Listing\n\nKingsgate Consolidated Limited is a Company limited by shares, listed on the Australian Stock Exchange under the code KCN. The Company's shares also trade in the United States of America over-the-counter (OTC) as an American Depository Receipt (ADR) under the code OTC: KSKGY.\n\n## Registered Office & Principal Business Address\n\nKingsgate Consolidated Limited\n\nSuite 801, Level 8, 14 Martin Place Sydney NSW 2000 Australia\n\nTel:\n\n+61 2 8256 4800\n\nFax:\n\n+61 2 8256 4810\n\nEmail: info@kingsgate.com.au\n\n## Bangkok Office\n\nAkara Resources Public Company Limited\n\n19th Floor, Sathorn Thani Building 2 No. 92/54-55 North Sathorn Road Kwaeng Silom, Khet Bangrak Bangkok 10500 Thailand\n\nTel:\n\n+66 2 233 9469\n\nFax:\n\n+66 2 236 5512\n\n## Chatree Mine Office\n\nAkara Resources Public Company Limited\n\nNo. 
99 Moo 9, Tambon Khao Luk Amphur Thap Khlo Phichit 66230 Thailand Tel: +66 56 614 500 Fax: +66 56 614 195\n\n## Thailand Exploration Office\n\nIssara Mining Limited\n\n156/9-10 Moo 11, Tambol Dong Khui Amphur Chon Daen Phetchabun 67190\n\nThailand\n\nTel:\n\n+66 56 649 253\n\nFax:\n\n+66 56 649 082\n\n## Challenger Mine\n\nChallenger Gold Operations Pty Ltd\n\nC/- 14 Lum Street Export Park SA 5950 Australia\n\nTel:\n\n+61 8 8450 0100\n\nFax:\n\n+61 8 8234 3956\n\n## Chile Office\n\nLaguna Resources Chile Ltda\n\nSan Pio X 2460 oficina 508 Providencia, Santiago Chile\n\nTel:\n\n+56 2 2231 7565\n\n\n\n## Share Registry\n\nSecurity Transfer Registrars Pty Ltd\n\n770 Canning Highway Applecross WA 6153 PO Box 535\n\nApplecross WA 6953\n\nAustralia\n\nTel:\n\n+61 8 9315 2333\n\nFax:\n\n+61 8 9315 2233\n\nEmail: registrar@securitytransfer.com.au Website: www.securitytransfer.com.au\n\n## ADR Depository\n\n(American Depository Receipts)\n\nThe Bank of New York Mellon ADR Division 101 Barclay Street, 22nd Floor New York, NY 10286 USA Tel: +1 212 815 2293\n\n## Auditor\n\nPricewaterhouseCoopers\n\n201 Sussex Street Sydney NSW 2000\n\nAustralia\n\nTel:\n\n+61 2 8266 0000\n\nFax:\n\n+61 2 8266 9999", - "page_start": 117, - "page_end": 117, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "than tw o other m em bers, w ho hold or have held high judicial office;\n\n - ( b ) the tribunal shall enquire into the m atter and report on the facts thereof to the President and advise the P resident w hether the judge ought to be rem oved from office under this section for inability as aforesaid or for m isbehaviour.\n\n(4) W here a tribunal, appointed under subsection (3) of this section, advises the President that a judge of the C ourt of A ppeal ought to be rem oved from office for inability as aforesaid or for m isbehaviour, the P resident shall rem ove such judge from office.\n\n(5) If the question of rem oving a judge of the C ourt of A ppeal from office has been referred to a 
tribunal under subsection (3) of this section, the P resident m ay suspend the judge from perform ing the functions of his or her office, and any such suspension m ay at any tim e be revoked by the P resident and shall in any case cease to have effect if the tribunal advises the P resident that the judge ought not to be rem oved from office.\n\n## 102. O aths to be taken by judges of C ourt of A ppeal\n\nA judge of the C ourt of A ppeal shall not enter upon the duties of his or her office unless he or she has taken and subscribed such oath for the due execution of his or her office as m ay be prescribed by P arliam ent.\n\n## P A R T III\n\n## Judicial S ervice C om m ission (ss 103-104)\n\n## 103. C om position and procedure\n\n(1) There shall be a Judicial S ervice C om m ission for B otsw ana w hich shall consist of-", - "page_start": 44, - "page_end": 44, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "## C H A P TE R IX M iscellaneous (ss 125-127)\n\n## 125. R esignations\n\n - (1) A ny person w ho is appointed or elected to any office established by this C onstitution m ay resign from that office by w riting under his or her hand addressed to the person or authority by w hom he or she w as appointed or elected:\n\nProvided that in the case of a person w ho holds office as P resident his or her resignation from that office shall be addressed to the C hief Justice, in the case of a person w ho holds office as S peaker or D eputy S peaker of the N ational A ssem bly his or her resignation from that office shall be addressed to the A ssem bly, in the case of an", - "page_start": 52, - "page_end": 52, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "\n\n\n\n## Corporate Headquarters\n\n110 SE 6th Street, 28th Floor, Fort Lauderdale, Florida 33301 Phone: (954) 769-2400 · Fax: (954) 769-2664 · www.republicservices.com", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## Board of 
Directors\n\n## Michael J. Brown\n\nChairman and Chief Executive Officer Euronet Services Inc.\n\n## Daniel R. Henry\n\nChief Operating Officer\n\nEuronet Services Inc.\n\n## Thomas A. McDonnell\n\nPresident and Chief Executive Officer DST Systems, Inc. (information processing and computer software company)\n\n## Steven J. Buckley\n\nManaging Partner Innova Capital LLC (advisor to Poland Partners venture capital fund)\n\n## Eriberto R. Scocimara\n\nPresident and Chief Executive Officer Hungarian-American Enterprise Fund (private investment company, funded by US Government)\n\n## M. Jeannine Strandjord\n\nSenior Vice President, Finance Long Distance Division Sprint Corporation\n\n## Executive Officers\n\n## Michael J. Brown\n\nPresident and Chief Executive Officer\n\n## Daniel R. Henry\n\nChief Operating Officer\n\n## Jeffery B. Newman\n\nExecutive Vice President and General Counsel\n\n## Miro Bergman\n\nExecutive Vice President, Managing Director EMEA\n\n## Ronald G. Ferguson\n\nExecutive Vice President, Managing Director North America\n\n## Thierry Michel\n\nVice President, Managing Director New Technologies\n\n## James P. Jerome\n\nVice President, Managing Director Software Solutions\n\n## Corporate Information\n\n## Professional Offices\n\nIndependent Auditors KPMG LIM Center, IX Floor Al. Jerozolimskie 65/79 00-697 Wa r s a w, Poland\n\n## Transfer Agent\n\nE q u i S e r v e P.O. Box 9187 Canton, Massachusetts 02021-9187 Shareholder Inquiries: Tel: 877-282-1169 (within the US) Tel: 781-575-3226 (outside the US)\n\n## Investor Information\n\nCopies of Euronet Services Inc.'s Form 10-K, as filed with the Securities and Exchange Commission, are available from the Company at no charge. Requests for copies of Form 10-K and other investor information should be addressed to:\n\n## James McCroy\n\nManaging Director Investor Relations Euronet Services Inc. Tel: 913-327-4232 Fax: 913-327-1921\n\nj m c c r o y @ e u r o n e t w o r l d w i d e . 
c o m\n\nCorporate Offices\n\nCorporate Headquarters\n\n4601 College Boulevard, Suite 300 Leawood, Kansas 66211 Tel: 913-327-4200 Fax: 913-327-1921\n\n## European Headquarters\n\nHorvát u. 14-24. 1027 Budapest, Hungary Tel: 36-1-224-1000\n\nFax: 36-1-224-1013\n\nSoftware Division\n\n17500 Chenal Parkway Little Rock, Arkansas 72223-9138 Tel: 501-218-7300\n\nFax: 501-218-7302\n\nGlobal Sales Offices\n\n## C r o a t i a\n\nZelinska 3/5 10000 Zagreb, Croatia Tel: 385-1-63-26-777\n\nFax: 385-1-63-26-778\n\n## Czech Republic\n\nIBC - Pobrezní 3 186 00 Prague 8, Czech Republic Tel: 420-2-2483-2252 Fax: 420-2-2323-954\n\n## F r a n c e\n\n120 avenue Charles de Gaulle 92200 Neuilly-sur-Seine, France Tel: 33-1-41-92-95-55\n\nFax: 33-1-47-22-32-82\n\n## G e r m a n y\n\nCharlottenstrasse 18 10117 Berlin, Germany Tel: 49-30-2039-6800 Fax: 49-30-2039-6855\n\n## G r e e c e\n\n90, Kifissias Av e n u e 15125 Marousi Athens, Greece Tel: 301-809-9688 Fax: 301-809-9700\n\n## H u n g a r y\n\nHorvát u. 14-24. 1027 Budapest, Hungary Tel: 36-1-224-1000\n\nFax: 36-1-224-1013\n\n## Middle East\n\n11 Gamal El Dine Abou El Mahasen Garden City - Cairo - Egypt Tel: 20-10-136-6774 Fax: 44-845-127-4748\n\n## P o l a n d\n\nul. Emilii Plater 28 00-688 Wa r s a w, Poland Tel: 48-22-690-5100\n\nFax: 48-22-690-5101\n\n## R o m a n i a\n\n9 Alexandru Ioan Cuza Blvd. Sector 1, Bucharest, Romania Tel: 401-310-3363 Fax: 401-310-3383\n\n## Tu r k e y\n\nBeybi Giz Plaza Kat 26 Meydan Sokak No 28 Maslak 80670 Istanbul, Tu r k e y\n\n## United Kingdom\n\n3A The Courtyard, Alban Park St. 
Albans, Hertfordshire AL4 0LA United Kingdom Tel: 44-1727-799870\n\nFax: 44-1727-799880\n\n## U S A\n\n4601 College Boulevard, Suite 300 Leawood, Kansas 66211 Tel: 913-327-4200 Fax: 913-327-1921\n\n17500 Chenal Parkway Little Rock, Arkansas 72223-9138 Tel: 501-218-7300 Fax: 501-218-7302\n\nTel: 90-212-335-2512 or 2513\n\n## Web Site\n\nFor further information, visit:\n\nw w w.euronetworldwide.com\n\n## Common Stock Information\n\nThe table below sets forth the high and low closing sales prices for the stock as reported by Nasdaq.", - "page_start": 46, - "page_end": 46, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "- ( c ) he or she is qualified to practise as an advocate or attorney and he or she has had the experience in the teaching of law in a recognised university for not less than ten years; or\n - ( d ) he or she is a C hief M agistrate w ho has held that office for not less than five years.\n - (4) In com puting, for the purposes of subsection (3) of this section, the period during w hich any person has been qualified to practise as an advocate or attorney any period during w hich he or she has held judicial office after becom ing so qualified shall be included.\n - (5) If the office of C hief Justice is vacant or if the C hief Justice is for any reason unable to perform the functions of his or her office, then, until a person has been appointed to and has assum ed the functions of that office or until the C hief Justice has resum ed those functions, as the case m ay be, those functions shall be perform ed by such one of the judges of the H igh C ourt or such other person qualified for appointm ent as a judge of the H igh C ourt as the P resident m ay appoint for that purpose:", - "page_start": 40, - "page_end": 40, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "Allsteel Inc. provides high quality office furniture solutions with advanced functionality and lifetime durability for the contract market. 
Products are distributed through a national network of aligned, independent contract dealers as well as our sales force, targeting corporate, government, and institutional markets.\n\n## HIGHLIGHTS/AWARDS:\n\n- · Major product introductions - Get Set TM and Terrace ® 2.6 have been well received by the market, winning industry awards.\n- · Get Set TM - 2003 Editor's Choice, Top Pick Annual Award by Buildings Magazine and the Chicago Atheneum Good Design Award.\n- · Terrace ® 2.6 - recognized among top products of 2003 by Architectural Record magazine.\n- · The #19 ® chair, introduced in 2002, continues to receive numerous awards including the California IIDA Acclaim Award and the Best of Category Award by I.D. magazine.\n- · Office Furniture Dealers Alliance (OFDA), 2003 Dealers Choice award for Management.\n- · General Services Administration's (GSA) 2003 'Evergreen Furniture and Furnishings Award' for environmental stewardship.\n\nW W W . A L L S T E E L O F F I C E . C O M\n\n## HON INDUSTRIES 2003\n\n## OFFICE FURNITURE AT-A-GLANCE\n\n\n\nThe Gunlocke Company L.L.C. is one of America's oldest and most respected producers of quality wood office furniture. The company handcrafts executive case goods, as well as a wide range of executive seating, lounge furniture, and conference tables. Known for more than a century for crafting elegantly tailored solutions for distinctive business and government clients, Gunlocke focuses primarily on the contract market and furniture specifying communities.\n\n## HIGHLIGHTS/AWARDS:\n\n- · Aggressive 2003 product launch of nine new seating lines: Amalfi TM , Valor TM , Porter TM , Tiara TM , Raffaella TM , Napoli TM , Sirmione TM , Fitzgerald TM , and Debonair TM .\n- · Launched Mantra TM , a new modular and contemporary case good line. 
Using mixed materials - from wood to brushed aluminum and glass - the line focuses on the integration of technology into today's executive office environments.\n- · The Amalfi TM line won the Silver Award at NeoCon.\n- · Experienced record operational performance.\n\nWWW.GUNLOCKE.COM\n\n\n\nThe HON Company is North America's leading manufacturer and marketer of office solutions for small and medium-sized workplaces. Its strong distribution channel of independent dealers, wholesalers, and retailers supports the broadest mid-market product offering in the industry.\n\n## HIGHLIGHTS/AWARDS:\n\n- · Launched contemporary Perpetual ® collection targeting the 18- to 35-year-old segment.\n- · 2003 Shingo Award for Excellence in Manufacturing.\n- · Office Furniture Dealers Alliance (OFDA), 2003 Dealers' Choice Manufacturer of the Year, Best Support, Service, and Training, and Best Management.\n- · General Services Administration's (GSA) 2003 'Evergreen Furniture and Furnishings Award' for environmental stewardship.\n- · The Chicago Athenaeum: Museum of Architecture and Design Award for the Olson Flex Stacker TM Chair and Perpetual ® desking.\n- · Buildings Magazine' s Innovations Award and Editor's Top 100 - Perpetual ® desking.\n- · Today's Facilities Manager Readers' Choice Award - Non-task seating, storage, and conference room furnishings.\n\nWWW.HON.COM", - "page_start": 27, - "page_end": 27, - "source_file": "NYSE_HNI_2003.pdf" - } - ] - }, - { - "references": { - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf", - "query": "What is SOLR ?", - "target_page": 4, - "target_passage": "Search engine used for portal content search and dataset search ", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Figure 9.1 The DL Query Tab\n\n\n\n## 9.2 SPARQL Queries\n\nSPARQL is a powerful language, and one could write a whole book about it. In fact, there are books written about it. 
The best one I have seen is the O'Reilly book Learning SPARQL by Bob DuCharme. This is an excellent book that not only goes into SPARQL but into topics such as RDF/RDFS and how triples are used to represent all information in OWL. I will only touch on those issues here, there is much more to say about them and DuCharme's book is a great place to learn more. If some of the following is a bit hard to understand don't be discouraged. This is just an attempt to give a very high level introduction to something that requires significant study to really understand.\n\nEssentially SPARQL is to the Semantic Web and Knowledge Graphs as SQL is to relational databases. Just as SQL can do more than just query, it can also assert new information into a database, so SPARQL can as well. The current SPARQL plugins for Protégé are somewhat limited and don't support the statements such as INSERT for entering new data so we will just cover the basics of using SPARQL as a query language but keep in mind there is a lot more to it than what we briefly cover here.\n\n## 9.21 Some SPARQL Pizza Queries\n\nTo start with go to the SPARQL Query tab. If it isn't already there you can as always add it using Window>Tabs>SPARQL Query. This tab consists of two views, the top which holds the query and the bottom which holds the results. There should be some text already there. It may look confusing, but we'll explain it. Just to start with hit the Execute button at the bottom of the tab. You should see a bunch of classes and class expressions returned.", - "page_start": 67, - "page_end": 67, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "--> Found Docker image 3414ee2 (4 weeks old) from registry.access.redhat.com for \"registry.access.redhat.com/rhscl/mongodb-36-rhel7\"\n\nMongoDB 3.6\n\n-----------\n\nMongoDB (from humongous) is a free and open-source cross-platform document-oriented database program. 
Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. This container image contains programs to run mongod server.\n\nTags: database, mongodb, rh-mongodb36", - "page_start": 181, - "page_end": 181, - "source_file": "sg248459.pdf" - }, - { - "text": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n## 3.7.4 SPARQL Queries\n\n\n\nThe third tab displays all queries set to 'public'. Upon creat ing or modifying a query the user may set the attribute 'public', which will mak e the query accessible to everyone.\n\nOnce the user is logged-in, another list with all private queries including the owned public queries will be displayed. Besides 'Query name ' and 'Query comment ', the attribute 'Enabled' visualizes if the query is currently running on a recurring time interval.\n\nAll attributes may be changed upon se lecting 'Details' if the logg ed-in user is the owner of selected query.", - "page_start": 56, - "page_end": 56, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n## 3.7 SPARQL Manager\n\nThe SPARQL Manager provides a graphical user interface (GUI) for sending user defined queries to the Virtuoso SPARQL query engine.\n\nThe powerful SPARQL Protocol and RDF Query Language are primarily aimed at professionals for querying metadata as Linked Data. A basic knowledge of the DCAT-AP specification is highly recommended.\n\nIn the future, users of the SPARQL Manager will be able to save their queries for scheduled execution. 
Additionally a notification will be send to the user when a result has changed.\n\nClicking the info icon in the upper right corner will display a step-by-step walkthrough of all components with a short info about their function.\n\nThis is possible in both of modes of the SPARQL Manager, the search and the assistant mode, which will be described in the following sections.\n\n## 3.7.1 SPARQL Search\n\n\n\nIn this mode you can load some predefined example queries from the right side into the editable text area to introduce yourself with the very basic SPARQL syntax. Limiting the number of returned results is possible by selecting a value from the Limit-dropdown or by editing the query directly. Furthermore the format for the result can be selected. After clicking the Search-Button the result is displayed in Result data preview area below. The preview may be truncated depending on the size of the result. The complete result could always be downloaded as a file by clicking the Download-link on the right side.", - "page_start": 53, - "page_end": 53, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "## Server Based Sorting option\n\nThe Server Based Sorting option (Figure 3-8 on page 56) is used to sort the document hit list on the server before it is returned to the client.\n\nImportant: Sorting might still occur on the client if any of the following items are true:\n\n - /SM590000 Multiple application groups are searched. (The folder contains multiple application groups.)\n - /SM590000 The search query is too long or too complex for a single SQL statement.\n - /SM590000 The user specifies the Append option.\n\n## Text Search\n\nText Search (Figure 3-9) is used to search documents that contain a specific word or phrase before the document hit list is built. Only documents that contain the specified word or phrase are returned as part of the hit list. 
The search takes place on the server.\n\nFigure 3-9 shows the Text Search option in the Field Definition tab of the Add a Folder window.\n\nFigure 3-9 Text Search\n\n\n\nBy using Text Search, a user can further qualify a search without adding the processing that is associated with adding and maintaining additional index fields to the database. Text search is performed on the documents that match the criteria for the other query fields. For example, if the other query fields are date and account number, a text search is performed on the documents that match the specified date and account number. If the document contains the text search string, it is returned as part of the hit list. Text search fields are not mapped to database fields.", - "page_start": 80, - "page_end": 80, - "source_file": "sg246915.pdf" - }, - { - "text": "Table 8: Prompts used for the evaluation of e5-mistral-7b-instruct .\n\n| Task type | Prompt |\n|---------------------|-------------------------------------------------------|\n| Classification | \"Classify the following task: \" |\n| Clustering | \"Identify the topic or theme based on the text: \" |\n| Retrieval | \"Retrieve semantically similar text: \" |\n| Reranking | \"Re-rank the following text: \" |\n| Pair Classification | \"Classify the following pair of text: \" |\n| STS | \"Determine the similarity between the following text: |\n| Summarization | \"Summarize the following text: \" |\n| Bitext Mining | \"Translate the following text: \" |", - "page_start": 19, - "page_end": 19, - "source_file": "arxiv4.pdf" - }, - { - "text": "## ITEM 7. MANAGEMENT'S DISCUSSION AND ANALYSIS OF FINANCIAL CONDITION AND RESULTS OF OPERATIONS\n\nYou should read the following discussion in conjunction with our Consolidated Financial Statements and their Notes contained in this Annual Report on Form 10-K.\n\n## Overview of Our Business\n\nWe are a leading provider of non-hazardous solid waste collection and disposal services in the United States. 
We provide solid waste collection services for commercial, industrial, municipal and residential customers through 140 collection companies in 22 states. We also own or operate 96 transfer stations, 58 solid waste landÑlls and 35 recycling facilities.\n\nWe generate revenue primarily from our solid waste collection operations. Our remaining revenue is from other services including landÑll disposal, recycling, compost, mulch and soil operations.\n\nThe following table reÖects our revenue by source for the years ended December 31, 2004, 2003 and 2002 (in millions):\n\n| | Years Ended December 31, | Years Ended December 31, | Years Ended December 31, | Years Ended December 31, | Years Ended December 31, | Years Ended December 31, |\n|--------------------------------------|----------------------------|----------------------------|----------------------------|----------------------------|----------------------------|----------------------------|\n| | 2004 | 2004 | 2003 | 2003 | 2002 | 2002 |\n| Collection: | | | | | | |\n| Residential ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | $ 655.2 | | 24.2% $ 601.2 | | 23.9% $ 530.7 | 22.4% |\n| Commercial ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | 737.9 | 27.2 | 706.0 | 28.0 | 696.7 | 29.5 |\n| Industrial ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | 558.1 | 20.6 | 523.0 | 20.8 | 501.6 | 21.2 |\n| Other ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | 62.2 | 2.3 | 50.9 | 2.0 | 50.8 | 2.1 |\n| Total collection ÏÏÏÏÏÏÏÏÏÏ | 2,013.4 | 74.3 | 1,881.1 | 74.7 | 1,779.8 | 75.2 |\n| Transfer and disposal ÏÏÏÏÏÏÏÏÏÏÏÏÏ | 1,031.0 | | 967.5 | | 854.1 | |\n| Less: IntercompanyÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | (519.8) | | (493.7) | | (428.5) | |\n| Transfer and disposal, net ÏÏÏÏÏÏÏÏÏ | 511.2 | 18.9 | 473.8 | 18.8 | 425.6 | 18.0 |\n| Other ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | 183.5 | 6.8 | 162.9 | 6.5 | 159.7 | 6.8 |\n| Revenue ÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏÏ | $2,708.1 | | 100.0% $2,517.8 | | 100.0% $2,365.1 | 100.0% |", - "page_start": 30, - "page_end": 30, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "| ##---------------------## | 
##---------------------## |\n| # NOTE: read initial identities in htpasswd format from /root/htpasswd.openshift openshift\\_master\\_identity\\_providers=[{'name': 'htpasswd\\_auth', 'login': 'True', 'challenge': 'True', 'kind': 'HTPasswdPasswordIdentityProvider'}] | # NOTE: read initial identities in htpasswd format from /root/htpasswd.openshift openshift\\_master\\_identity\\_providers=[{'name': 'htpasswd\\_auth', 'login': 'True', 'challenge': 'True', 'kind': 'HTPasswdPasswordIdentityProvider'}] |\n| # To define initial users directly in the inventory file: | # To define initial users directly in the inventory file: |\n| # Note: | # Note: |\n| https://docs.openshift.com/container-platform/3.3/admin\\_solutions/master\\_node\\_config.html#h | https://docs.openshift.com/container-platform/3.3/admin\\_solutions/master\\_node\\_config.html#h |\n| tpasswd | tpasswd |\n| openshift\\_master\\_htpasswd\\_users={'admin':'$apr1$hYehsOQ6$DQWSmGhPdS2LzS5cDJuU21','developer | openshift\\_master\\_htpasswd\\_users={'admin':'$apr1$hYehsOQ6$DQWSmGhPdS2LzS5cDJuU21','developer |\n| ':'$apr1$I0a9K2v0$ZLPrXnQseMlwTJIYzM8Hd.'} | ':'$apr1$I0a9K2v0$ZLPrXnQseMlwTJIYzM8Hd.'} |", - "page_start": 138, - "page_end": 138, - "source_file": "sg248459.pdf" - }, - { - "text": "You can add persistent volumes later by running 'volume dc/mongodb-36-rhel7 --add\n\n...'\n\n```\n--> Creating resources ... imagestream.image.openshift.io \"mongodb-36-rhel7\" created\n```", - "page_start": 181, - "page_end": 181, - "source_file": "sg248459.pdf" - }, - { - "text": "## TESLA, INC.\n\n## FORM 10-Q FOR THE QUARTER ENDED SEPTEMBER 30, 2024\n\n## INDEX\n\n| | | Page |\n|-------------------------------|---------------------------------------------------------------------------------------|--------|\n| PART I. FINANCIAL INFORMATION | PART I. FINANCIAL INFORMATION | |\n| Item 1. 
| Financial Statements | 4 |\n| | Consolidated Balance Sheets | 4 |\n| | Consolidated Statements of Operations | 5 |\n| | Consolidated Statements of Comprehensive Income | 6 |\n| | Consolidated Statements of Redeemable Noncontrolling Interests and Equity | 7 |\n| | Consolidated Statements of Cash Flows | 9 |\n| | Notes to Consolidated Financial Statements | 10 |\n| Item 2. | Management's Discussion and Analysis of Financial Condition and Results of Operations | 26 |\n| Item 3. | Quantitative and Qualitative Disclosures about Market Risk | 35 |\n| Item 4. | Controls and Procedures | 35 |\n| PART II. OTHER INFORMATION | PART II. OTHER INFORMATION | |\n| Item 1. | Legal Proceedings | 36 |\n| Item 1A. | Risk Factors | 36 |\n| Item 2. | Unregistered Sales of Equity Securities and Use of Proceeds | 36 |\n| Item 3. | Defaults Upon Senior Securities | 36 |\n| Item 4. | Mine Safety Disclosures | 36 |\n| Item 5. | Other Information | 36 |\n| Item 6. | Exhibits | 37 |\n| Signatures | Signatures | 38 |", - "page_start": 2, - "page_end": 2, - "source_file": "tesla_form_10q.pdf" - } - ] - }, - { - "references": { - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf", - "query": "What is the function of the Graphical Data Visualisation Tool module ?", - "target_page": 6, - "target_passage": "How to visualize graphical data from a dataset resource ", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n## 3.4 Graphical Data Visualisation Tool\n\nThis section describes the features of the graphical visualisation tool for numeric data. 
The features are currently available for XLS (Excel) and CSV files, except for the selection of the sheet name which is applicable only for Excel files.\n\nMost GUI elements from th e 'Graph' tab (records selection, search box, filters and fields buttons) are al so available on the 'Grid' tab and work in the same way.\n\n## 3.4.1 How to visualize graphical data from a dataset resource\n\nAs a result of a dataset search, the system displays on th e 'Dataset' tab all distributions (resource/data files) that are part of the selected dataset. Each XLS or CSV distribution of the dataset can be further explored by clicking on ' Open Visualization ' under the ' Options ' button -if available.\n\n", - "page_start": 42, - "page_end": 42, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n## 3.3 Visualization of Geo-Spatial Data (map.apps)\n\nThe visualization of geo-spatial data within the European Data Portal provides previewing functionality for spatial open data. The aim is to allow the user to assess if a dataset meets specific requirements in terms of spatial and thematic coverage. The functionality that is provided in the header (links to disclaimers and language switching) is consistent in the entire portal.\n\n## 3.3.1 How to visualize geo-spatial data from a dataset resource\n\nAccessing the geo-spatial visualization is achieved via the Data Platform interface. A user searches for specific data, enters the dataset view of reasonable results and displays the available distributions (see Section 3.2.5). If a dataset distribution is supported by the geo-spatial visualization, a globe button is displayed (see Figure 3). This is the entry point into the map viewer application. Supported formats are OGC Web Map Service (WMS) and GeoJSON. 
If the user visits the geo-spatial visualization for the first time, an interactive user tutorial is provided to guide the use through specific functions of the user interface, similar to this written user manual.\n\nFigure 3 -Dataset Resource Page with Link to Geo-Spatial Visualisation.\n\n", - "page_start": 37, - "page_end": 37, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "| 4 | Graphical Data Visualisation Tool | How to visualize graphical data from a dataset resource |\n| 5 | Help Desk | How to contact The Portal's Help Desk |\n| 6 | Metadata Quality Assurance (MQA) | Monitoring tool for the metadata quality: - The Global Dashboard View - The Catalogue details view |\n| 7 | SPARQL Manager | How to run SPARQL Queries using: - SPARQL Search |", - "page_start": 5, - "page_end": 5, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "\n\n - · Data modeling - build new data models, or design models based on existing data models.\n - · Data visualization - map queries and visualize the access patterns (facets) of the application without writing code. Every facet corresponds to a different access pattern in DynamoDB. You can manually add data to your data model.\n - · Operation builder - use the operation builder to develop and test queries, and query live datasets. You can also build and perform data plane operations, including creating projection and condition expressions, and generating sample code in multiple languages.\n\nYou can also run a local instance of DynamoDB on your workstation. 
Combined with NoSQL workbench, this can provide a fast local setup for experimentation and learning.\n\n## Related resources:\n\n - · NoSQL Workbench & Building data models with NoSQL Workbench - model and query data with a desktop tool\n - · Setting up DynamodDB local (downloadable version)", - "page_start": 83, - "page_end": 83, - "source_file": "serverless-core.pdf" - }, - { - "text": "You can also display a Node Comparison by selecting the same information as for the cluster, and then switching the button, as shown in Figure A-1 and Figure A-2.\n\nFigure A-1 IBM Spectrum Virtualize Dashboard displaying System performance overview\n\n\n\nFigure A-2 shows the display after switching the button.\n\nFigure A-2 IBM Spectrum Virtualize Dashboard displaying Nodes performance overview\n\n\n\nYou can also use real-time statistics to monitor CPU utilization, volume, interface, and MDisk bandwidth of your system and nodes. Each graph represents 5 minutes of collected statistics and provides a means of assessing the overall performance of your system.\n\nThe real-time statistics are available from the IBM Spectrum Virtualize GUI. Click Monitoring → Performance (as shown in Figure A-3) to open the Performance Monitoring window.\n\nFigure A-3 Selecting performance pane in the monitoring menu\n\n", - "page_start": 770, - "page_end": 770, - "source_file": "sg247938.pdf" - }, - { - "text": "Ensure that no existing application has the same application ID in the target application group. 
For more information, see the section 'Adding items to a server' in the IBM Content Manager OnDemand for Multiplatforms, V9.5, Administration Guide , SC19-3352.\n\n## Selecting font by line data graphical indexer\n\nThe font that is used by the line data graphical indexer to display a document can be changed from within the line data graphical indexer at the Content Manager OnDemand Administrator Client.", - "page_start": 76, - "page_end": 76, - "source_file": "sg246915.pdf" - }, - { - "text": "On any of these views, you can select any point by using your cursor to know the exact value and when it occurred. When you place your cursor over the timeline, it becomes a dotted line with the various values gathered, as shown in Figure A-7.\n\nFigure A-7 Viewing performance with details\n\n\n\nFor each of the resources, various metrics are available and you can select which to be displayed. For example, as shown in Figure A-8, from the four available metrics for the MDisks view (Read, Write, Read latency, and Write latency) only Read and Write IOPS are selected.\n\nFigure A-8 Displaying performance counters\n\n\n\n## Performance data collection and IBM Spectrum Control\n\nAlthough you can obtain performance statistics in standard . xml files, the use of .xml files is a less practical and more complicated method to analyze the IBM Spectrum Virtualize performance statistics. IBM Spectrum Control is the supported IBM tool to collect and analyze Storwize V7000 performance statistics.", - "page_start": 773, - "page_end": 773, - "source_file": "sg247938.pdf" - }, - { - "text": "IBM STAT can be downloaded from this IBM Support web page.\n\nYou can download the Storage Tier Advisor Tool and install it on your Windows-based computer. The tool is packaged as an ISO file that must be extracted to a temporary location.\n\nThe tool installer is at temporary\\_location\\IMAGES\\STAT\\Disk1\\InstData\\NoVM\\ . 
By default, the Storage Tier Advisor Tool is installed in C:\\Program Files\\IBM\\STAT\\ .\n\nOn IBM Storwize V7000, the heat data files are found in the /dumps/easytier directory on the configuration node and are named dpa\\_heat.node\\_panel\\_name.time\\_stamp.data . Any heat data file is erased when it exists for longer than 7 days.\n\nHeat files must be offloaded and Storage Tier Advisor Tool started from a Windows command prompt console with the file specified as a parameter, as shown in Example 10-6.\n\nExample 10-6 Running STAT in Windows command prompt\n\nC:\\Program Files (x86)\\IBM\\STAT>stat dpa\\_heat.7822DFF-1.181028.073824.data\n\nThe Storage Tier Advisor Tool creates a set of .html and .csv files that can be used for Easy Tier analysis.\n\nTo download a heat data file, open Settings → Support → Support Package → Download Support Package → Download Existing Package , as shown in Figure 10-8.\n\nFigure 10-8 Download Easy Tier heat file: Download Support Package\n\n", - "page_start": 435, - "page_end": 435, - "source_file": "sg247938.pdf" - }, - { - "text": "## PRACTICE EXERCISE\n\n## The Quick Analysis Tools\n\n## Tasks:\n\n## Completed:\n\nBefore starting this exercise you MUST have completed all of the topics in the chapter The Quick Analysis Tools…\n\n -  Open the workbook PE\\_Quick Analysis.xlsx (it can be found in the same folder as the student files)\n\n\n\n -  Use the Quick Analysis tools to apply a colour scale to the data in the worksheet\n\n\n\n -  Use the Quick Analysis tools to create a chart for the Overheads data. This chart should be a clustered column chart that has the column headings as the x axis, and displays the legend at the bottom of the chart. 
Make the chart title Cost of Overheads .\n\n\n\n -  Reposition the chart below the data\n\n\n\n -  Use the Quick Analysis tools to create Sparklines for the Qtr1 to Qtr4 and Total columns for Overheads\n\nYour worksheet should appear as shown on the following page…\n\n\n\n -  Use the Save As command to save the workbook as PE\\_Quick Analysis (Completed).xlsx\n\n\n\n",
                "page_start": 41,
                "page_end": 41,
                "source_file": "Excel Training Manual 1.pdf"
            },
            {
                "text": "Figure 7-3 Capturing text with the PDF graphical indexer\n\n",
                "page_start": 194,
                "page_end": 194,
                "source_file": "sg246915.pdf"
            }
        ]
    },
    {
        "references": {
            "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf",
            "query": "How to view “Tweets” on the EDP ?",
            "target_page": 20,
            "target_passage": "The Home Page displays the latest tweets on the European Data Portal in the “Tweets” panel on the right hand side. ‐ ‐ Click on any of the tweets to display the complete tweet on twitter. Scroll vertically to see previous tweets. ",
            "chunk_present": {
                "presence": true,
                "index": 0
            }
        },
        "top_chunk": [
            {
                "text": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n## 3.1.5 How to view 'Tweets' on the EDP\n\nThe Home Page displays the latest tweets on the European Data Portal in the 'Tweets' panel on the right hand side.\n\n - -Click on any of the tweets to display the complete tweet on twitter.\n - -Scroll vertically to see previous tweets.\n\n",
                "page_start": 19,
                "page_end": 19,
                "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf"
            },
            {
                "text": "Given our goal of exploring the difference between the two discourses, the 615,816 tweets containing both hashtags simultaneously were excluded to differentiate between the two datasets following [67,80]. A total of 6,662,478 tweets were retained, of which 5,774,747 contained #climatechange, and 887,731 contained '#globalwarming'. 
The number of qualified tweets containing #climatechange and #globalwarming in each year is displayed in Figure 1a.\n\nTo collect these tweets, we used a Python-based crawler to send requests to the Twitter server to select hashtags, language, start date, and end date as inputs. Once the first request was completed, the server responded with a file in json format and the first 20 qualified tweets in a time-descending order. By parsing the json file, we obtained a string for the crawler to build the next request and obtain the next 20 tweets. Thus, a loop was written to keep the crawler sending requests and the crawler was automatically terminated when all the qualified tweets publicly available were collected. 
Our crawler respected Twitter's robot.txt and we did not collect, analyze or display any user information in our study.\n\nFigure 1. The number of tweets containing #climatechange or #globalwarming, and their ratio from 2009 to 2018 ( a ). The number of hashtags contained in the 'climate change' or 'global warming' datasets, and their ratio from 2009 to 2018 ( b ).\n\n",
                "page_start": 4,
                "page_end": 4,
                "source_file": "pubmed10.pdf"
            },
            {
                "text": "Int. J. Environ. Res. Public Health 2020, xx, 5\n\n5 of 22\n\n## 3. Methods\n\n## 3.1. Data Source\n\nAs Twitter has been recognized as a popular discussion forum [75] and a social activity platform [76] for climate issues, we followed the literature [5,8,18] and used tweets to investigate distinct perceptions of climate issues and evolution on social media. 
Although Twitter's ecosystem has been changing in terms of the number of active users, user demographics, and tweeting conventions in the past years [77,78], the problem is unavoidable for all the information ecosystems on the Internet. As Twitter is one of the most popular social websites, we defined our study as characterizing the perception of climate issues among social media users rather than all the netizens or the whole population.\n\n## 3.2. Data\n\nIn this research, we were interested in tweets containing either #climatechange or #globalwarming, as these two hashtags exactly correspond to climate change and global warming, respectively, the two competing definitions of climate issues. We did not follow [79] to include #AGW (anthropogenic global warming) as query hashtags in our research because we think that this refers to global warming in a defined category so cannot be regarded in parallel with the two considered hashtags. We limited the scope of the search to English-language tweets generated between 1 January 2009 and 31 December 2018. 
We only collected tweets containing either of the two hashtags in the body of the tweets rather than those containing these hashtags in the retweeted or quoted text, as we think that retweeted text or quoted texts cannot directly represent the tweeter's usage pattern of the two terminologies.",
                "page_start": 4,
                "page_end": 4,
                "source_file": "pubmed10.pdf"
            },
            {
                "text": "There are many other words in tweets besides hashtags to express the author's intention. Multiple approaches, such as LDA and STM [32,73], can help to extract topics from unstructured texts. But in this study, targeting on hashtags is more in line with our research question. Firstly, hashtags were invented spontaneously by users of Twitter in 2007 as a mechanism to categorize discussions [74]. Words with hashtags are recognized as topics and considered worthy of public discussion. Secondly, by attaching # to certain words in tweets, the users intentionally anchor their tweets to certain topics. The operator # explicitly reflects the author's emphasis, which can help us extract rather than infer the author's identification of the topic of the tweets. Our research question is to analyze and visualize the associations of topics in public climate discourse. Compared with other approaches, analyzing hashtags co-occurrence pattern has advantage in extracting the structure of public discussions.",
                "page_start": 3,
                "page_end": 3,
                "source_file": "pubmed10.pdf"
            },
            {
                "text": "- 53. Bruns, A.; Stieglitz, S. Quantitative approaches to comparing communication patterns on Twitter. J. Technol. Hum. Serv. 2012 , 30 , 160-185. [CrossRef]\n - 54. Yang, G. Narrative agency in hashtag activism: The case of #BlackLivesMatter. Media Commun. 2016 , 4 , 13.\n - 55. Bruns, A.; Burgess, J.E. The use of Twitter hashtags in the formation of ad hoc publics. In Proceedings of the 6th European Consortium for Political Research (ECPR) General Conference 2011, Reykjavík, Iceland, 25-27 August 2011.\n - 56. 
Rzeszotarski, J.M.; Spiro, E.S.; Matias, J.N.; Monroy-Hernández, A.; Morris, M.R. Is anyone out there?: Unpacking Q&A hashtags on twitter. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Toronto, ON, Canada, 26 April-1 May 2014; pp. 2755-2758.\n - 57. Tsur, O.; Rappoport, A. What's in a hashtag?: Content based prediction of the spread of ideas in microblogging communities. In Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, Seattle, WA, USA, 8-12 February 2012; pp. 643-652.",
                "page_start": 18,
                "page_end": 18,
                "source_file": "pubmed10.pdf"
            },
            {
                "text": "All the hashtags in the tweets were automatically extracted with the Regular Expression Library in Python. Hashtags were transformed to lowercase letters, and clear synonyms were stemmed (e.g., #trump, #DonaldTrump, #donaldtrump). As all the tweets in the 'climate change' dataset contained the #climatechange hashtag and all the tweets in the 'global warming' dataset contained the #globalwarming hashtag, we did not document these two hashtags when processing data. The number of hashtags contained in the two discourses in each year is displayed in Figure 1b. Hashtags whose frequency was lower than ten times are excluded in the network analysis. As hashtags are intended to be a topic anchor [52], extremely low frequency means that the hashtag is not recognized socially, and excluding them helps researchers focus on meaningful rather than occasional associations.\n\n## 3.3. Measurement\n\n## 3.3.1. Hashtag Co-Occurrence Network\n\nThe co-occurrence patterns of hashtags in tweets from two datasets were documented to build semantic networks for climate change and global warming. For instance, for '#climatechange redistributes #fish species at high latitudes. 
@\_OScience @AarhusUni #Arctic', a tweet in the climate change dataset, hashtags #fish and #arctic were documented as co-occurring and their associations plus one in the semantic network of climate change. In the semantic network, nodes represent hashtags and the weight of edge refers to the frequency at which two hashtags co-occurred.\n\nWe visualized the network using Gephi software [81]. Following the established literature [60,61,82], only the most prominent hashtags were included in the visualization to concentrate our analysis on the most important hashtags. In this research, the top 50 hashtags with the highest centrality in each network were selected for visualization. Modularity analysis was then analyzed to identify the clusters of hashtags in each semantic network, and hashtags belonging to the same cluster were drawn in the same color. The network spatialization was conducted with Gephi's built-in force-directed layout algorithm proposed by Fruchterman and Reingold [83], where the more associated the hashtags, the closer they are to each other in the spatial layout.\n\n## 3.3.2. Temporal Analysis\n\nA temporal analysis was introduced to understand the evolution of the two climate discourses over a long period. We first examined how the two semantic networks evolved in the past years. All the nodes once ranked top 50 in any of the 10 years were gathered to form a union set for each dataset. Then, they were clustered according to the strength of their associations in the whole dataset and mapped with a force-directed layout algorithm in Gephi to produce a graph of nodes. With the dynamic network function supplied by Gephi, we then added the associations between the nodes ranked on the top 50 list in 2009 to the graph of nodes and obtained the relationship of the top 50 nodes for 2009. 
Similarly, we produced a total of 10 graphs from 2009 to 2018, where the positions of the nodes on the 10 maps are the same, but the strengths of their associations are different to represent the changes in the associations of key hashtags for each discourse.",
                "page_start": 5,
                "page_end": 5,
                "source_file": "pubmed10.pdf"
            },
            {
                "text": "- 58. Yang, L.; Sun, T.; Zhang, M.; Mei, Q. We know what @you #tag: Does the dual role affect hashtag adoption? In Proceedings of the 21st international conference on World Wide Web, Lyon, France, 16-20 April 2012; pp. 261-270.\n - 59. Weller, K.; Dröge, E.; Puschmann, C. Citation Analysis in Twitter: Approaches for Defining and Measuring Information Flows within Tweets during Scientific Conferences. In Proceedings of the Making Sense of Microposts 2011, Heraklion, Greece, 30 May 2011; pp. 1-12.\n - 60. Meraz, S. Hashtag wars and networked framing: The private / public networked protest repertoires of occupy on twitter. In Between the Public and Private in Mobile Communication ; Routledge: Abingdon, UK, 2017; pp. 303-323.\n - 61. Meraz, S.; Papacharissi, Z. Networked gatekeeping and networked framing on #Egypt. Int. J. Press. 2013 , 18 , 138-166.\n - 62. Papacharissi, Z.; de Fatima Oliveira, M. Affective news and networked publics: The rhythms of news storytelling on #Egypt. J. Commun. 2012 , 62 , 266-282.\n - 63. Wang, X.; Wei, F.; Liu, X.; Zhou, M.; Zhang, M. Topic sentiment analysis in twitter: A graph-based hashtag sentiment classification approach. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management, Scotland, UK, 24-28 October 2011; pp. 1031-1040.\n - 64. Laniado, D.; Mika, P. Making sense of twitter. In Proceedings of the International Semantic Web Conference 2010, Shanghai, China, 7-11 November 2010; pp. 470-485.\n - 65. González-Ibánez, R.; Muresan, S.; Wacholder, N. Identifying sarcasm in Twitter: A closer look. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers-Volume 2, Portland, OR, USA, 19-24 June 2011; pp. 581-586.\n - 66. Conover, M.D.; Ratkiewicz, J.; Francisco, M.; Gonçalves, B.; Menczer, F.; Flammini, A. Political polarization on twitter. In Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, Barcelona, Spain, 17-21 July 2011.\n - 67. Kitzie, V.; Ghosh, D. #Criming and #Alive: Network and content analysis of two sides of a story on twitter. In Proceedings of the 78th ASIS&T Annual Meeting: Information Science with Impact: Research in and for the Community, St. Louis, MO, USA, 6-10 October; 2015; p. 41.\n - 68. Burgess, J.; Galloway, A.; Sauter, T. Hashtag as hybrid forum: The case of #agchatoz. In Hashtag Publics. The Power and Politics of Discursive Networks ; Peter Lang: New York, NY, USA, 2015; pp. 61-76.\n - 69. Rushkoff, D. 17. Permanent revolution: Occupying democracy. In The Playful Citizen ; Amsterdam University Press: Amsterdam, The Netherlands, 2013; p. 335.\n - 70. Grundberg, M.D.; Lindgren, S. Translocal frame extensions in a networked protest: Situating the #IdleNoMore hashtag. IC Rev. Científica De Inf. Y Comun. 2015 , 11 , 49-57.\n - 71. Bruns, A.; Burgess, J.E. #ausvotes: How Twitter covered the 2010 Australian federal election. Commun. Politics Cult. 2011 , 44 , 37-56.\n - 72. Pearce, W.; Holmberg, K.; Hellsten, I.; Nerlich, B. Climate change on Twitter: Topics, communities and conversations about the 2013 IPCC Working Group 1 report. PLoS ONE 2014 , 9 , e94785. [CrossRef]\n - 73. Zhao, W.X.; Jiang, J.; Weng, J.; He, J.; Lim, E.P.; Yan, H.; Li, X. Comparing twitter and traditional media using topic models. In Proceedings of the European Conference on Information Retrieval, Dublin, Ireland, 18-21 April 2011; pp. 338-349.\n - 74. Doctor, V. Hashtag History: When and What Started It? 
Available online: https://www.hashtags.org/featured/hashtag-history-when-and-what-started-it/ (accessed on 16 January 2020).",
                "page_start": 19,
                "page_end": 19,
                "source_file": "pubmed10.pdf"
            },
            {
                "text": "- 75. Newman, T.P. Tracking the release of IPCC AR5 on Twitter: Users, comments, and sources following the release of the Working Group I Summary for Policymakers. Public Underst. Sci. 2017 , 26 , 815-825. [CrossRef]\n - 76. Segerberg, A.; Bennett, W.L. Social media and the organization of collective action: Using Twitter to explore the ecologies of two climate change protests. Commun. Rev. 2011 , 14 , 197-215. [CrossRef]\n - 77. Statista. Number of Monthly Active Twitter Users Worldwide from 1st Quarter 2010 to 1st Quarter 2019 (in Millions). 2019. Available online: https://www.statista.com/statistics/282087/number-of-monthly-activetwitter-users/ (accessed on 10 October 2019).\n - 78. Liu, Y.; Kliman-Silver, C.; Mislove, A. The tweets they are a-changin': Evolution of Twitter users and behavior. In Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media, Ann Arbor, MI, USA, 1-4 June 2014.",
                "page_start": 19,
                "page_end": 19,
                "source_file": "pubmed10.pdf"
            },
            {
                "text": "\n\nmost similar to the ones used in GPT-2's training data, i.e. 
documents linked to from Reddit [25], plus Wikipedia and a collection of books. While this was reportedly effective at filtering out documents that previous work characterized as 'unintelligible' [134], what is unmeasured (and thus unknown) is what else it filtered out. The Colossal Clean Crawled Corpus [107], used to train a trillion parameter LM in [43], is cleaned, inter alia , by discarding any page containing one of a list of about 400 'Dirty, Naughty, Obscene or Otherwise Bad Words' [p.6]. 14 This list is overwhelmingly words related to sex, with a handful of racial slurs and words related to white supremacy (e.g. swastika , white power ) included. While possibly effective at removing documents containing pornography (and the associated problematic stereotypes encoded in the language of such sites [125]) and certain kinds of hate speech, this approach will also undoubtedly attenuate, by suppressing such words as twink , the influence of online spaces built by and for LGBTQ people. 15 If we filter out the discourse of marginalized populations, we fail to provide training data that reclaims slurs and otherwise describes marginalized identities in a positive light.\n\nThus at each step, from initial participation in Internet fora, to continued presence there, to the collection and finally the filtering of training data, current practice privileges the hegemonic viewpoint. In accepting large amounts of web text as 'representative' of 'all' of humanity we risk perpetuating dominant viewpoints, increasing power imbalances, and further reifying inequality. We instead propose practices that actively seek to include communities underrepresented on the Internet. 
For instance, one can take inspiration from movements to decolonize education by moving towards oral histories due to the overrepresentation of colonial views in text [35, 76, 127], and curate training datasets through a thoughtful process of deciding what to put in, rather than aiming solely for scale and trying haphazardly to weed out, post-hoc, flotsam deemed 'dangerous', 'unintelligible', or 'otherwise bad'.\n\n## 4.2 Static Data/Changing Social Views\n\nA central aspect of social movement formation involves using language strategically to destabilize dominant narratives and call attention to underrepresented social perspectives. Social movements produce new norms, language, and ways of communicating. This adds challenges to the deployment of LMs, as methodologies reliant on LMs run the risk of 'value-lock', where the LM-reliant technology reifies older, less-inclusive understandings.\n\nFor instance, the Black Lives Matter movement (BLM) influenced Wikipedia article generation and editing such that, as the BLM movement grew, articles covering shootings of Black people increased in coverage and were generated with reduced latency [135]. Importantly, articles describing past shootings and incidents of police brutality were created and updated as articles for new events were created, reflecting how social movements make connections between events in time to form cohesive narratives [102]. More generally, Twyman et al. 
[135] highlight how social movements actively influence framings and reframings of minority narratives in the type of online discourse that potentially forms the data that underpins LMs.",
                "page_start": 4,
                "page_end": 4,
                "source_file": "arxiv5_ccby4license.pdf"
            }
        ]
    },
    {
        "references": {
            "source_file": "welcome_to_word_template.pdf",
            "query": "Where can we open a document saved on OneDrive ?",
            "target_page": 2,
            "target_passage": "When you save this document in OneDrive, you’ll be able to open it anywhere: on your computer, tablet, or phone. Your changes will be saved automatically.",
            "chunk_present": {
                "presence": true,
                "index": 0
            }
        },
        "top_chunk": [
            {
                "text": "## Count on Word to count your words\n\nTry it: Hit return after this line and type some words.\n\nThe status bar at the bottom of the window keeps a running count of the number of words in the document.\n\n\n\n## Save this for later, access it anywhere\n\nWhen you save this document in OneDrive, you'll be able to open it anywhere: on your computer, tablet, or phone. Your changes will be saved automatically.\n\nTry it: Select File > Save As , and then select OneDrive and give this document a name.\n\n\n\nIf you sign in to Office 365 on another device, this document will be in your list of recent files. You can pick up where you left off… even if you left the document open on the computer you're using now.",
                "page_start": 1,
                "page_end": 1,
                "source_file": "welcome_to_word_template.pdf"
            },
            {
                "text": "## Word\n\n## Create something\n\nBegin with a Blank document to get right to work. Or start with a template to save yourself time and steps. Just select File > New , and then select or search for the template you want.\n\n\n\n\n\n## Access files anywhere\n\nNeed to work on the go and across different devices? 
Click File > Account to sign in with your Microsoft account and access your recently used files anywhere, on any device, through seamless integration between Office, OneDrive, OneDrive for Business, and SharePoint.\n\n\n\n## Discover related options\n\nWhen you select objects in your document, options related to your selection will appear. For example, selecting a table displays the Table Design and Layout tabs, which offer additional options.\n\n\n\n## Find recent files\n\nWhether you only work with files stored on your PC's local hard drive or you store files in multiple shared locations, selecting File > Open takes you to your recently used documents and any files that you may have pinned to your list.", - "page_start": 1, - "page_end": 1, - "source_file": "Word QS.pdf" - }, - { - "text": "- 1. Open the Cloud Volumes window.", - "page_start": 530, - "page_end": 530, - "source_file": "sg247938.pdf" - }, - { - "text": "## Share and collaborate\n\nWith this document saved in OneDrive, you can share it with others. They don't even need Word to open it.\n\nTry it: Select Share , and send a link to this document. (keyboard shortcut - Alt+F+Z or Alt+Z+S)\n\nYou can send the link by typing someone's email address or by copying the link and pasting it into a message or chat. If you want them to read the document but not edit it, set their permission to view-only.\n\nIf they don't have Word, the document will open in their web browser, in Word Online.\n\n## Add visuals with pictures from the web\n\n\n\nWord works with Bing to give you access to thousands of pictures you can use in your documents.\n\nTry it: Hit enter after this line to make a blank line:\n\n- 1. With your cursor in the blank space above, go to the Insert tab, select Online Pictures , and then search for something, like puppy clip art .\n- 2. 
Select the picture you want, and select Insert .", - "page_start": 2, - "page_end": 2, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "If the system is authenticated, it can then access cloud storage to copy data to the cloud storage or restore data that is copied to cloud storage back to the system. The system supports one cloud account to a single cloud service provider. Migration between providers is not supported.", - "page_start": 199, - "page_end": 199, - "source_file": "sg247938.pdf" - }, - { - "text": "You might be tempted to resolve this thread (note the Resolve link at the top of the initial comment) however, we aren't really done. Remember that we need to not just create the class but also define the axiom that a ChicagoPizza must have a DeepPanBase . Since we can't add axioms in Web Protégé we need to export our ontology back to Protégé. Typically, we would collect many more comments and changes before exporting but we want to demonstrate how round-trip editing works between Protégé and Web Protégé. We could of course just export the ontology from Web Protégé to Protégé and then create another new Project, but it would be cumbersome to have to constantly create new projects every time you want to make a change in Protégé and if we did this, we would lose our audit trail of comments and changes. Luckily, there is a better way to do it.\n\nTo start we need to export the ontology to a file. Note that one of the tabs at the top is History. Select that tab. This tab shows a list of each version of the ontology. There should be 2 versions labelled R1 and R2 (in the right corner of each version). The most recent version is always at the top since that is typically what you want although it is also possible to roll back changes to previous versions. We want to export the latest version R2. Click on the R2 icon. This should give you a drop-down menu with two options: Revert changes in revision 2 and Download revision 2. 
Select Download revision 2. This will prompt you with the standard file browser for your OS to save a zip file with the new ontology. The ontology is saved with a zip file because ontologies can be large and since Web Protégé is working over a network we may want to limit the network traffic for large ontologies. Select the appropriate place to save the Zip archive file on the machine where you have Protégé. Do the standard things you would do to unzip the file and load it into Protégé. Note that when you unzip the file it will create a directory as well, so the file won't be directly under whatever directory you save it to. Instead, there will be a directory titled something like pizza-with-data-ontologies-owl-REVISION-2 that the OWL file will be in.\n\nLoad the downloaded file into Protégé. Go to the Class hierarchy tab and navigate to the new ChicagoPizza class under NamedPizza. Add the axiom (refer back to chapter 4 if you need to remember how to add axioms to classes) hasBase some DeepPanBase. Save the file. Now go back to Web Protégé and your version of the Pizza ontology there. Note that in the upper right corner of the window there are links (drop down menus) such as Display and Project. Select Project and from the drop down menu select Apply External Edits. This will give you a small dialog titled Upload ontologies with a little button to Choose File. Click on Choose File. That will give you the standard OS dialog for selecting a file. Navigate to the file you saved from Protégé and select that then choose OK. That should result in a new pop-up window titled Merge ontologies where you will see the changes (in this case only the addition of the ChicagoPizza axiom) and a text box where you can describe the changes. Add an appropriate Commit message or just take the default and select OK. You should get a message that says the changes were successfully applied.\n\nIf you navigate back to ChicagoPizza you should see that it now has that axiom. 
You can also navigate back to NamedPizza. In the right most column, you should see the comments about needing to add ChicagoPizza as a subclass. Now that this has been done you can click on the Resolve link in the upper right corner of the comment thread and the comments will be removed from NamedPizza .", - "page_start": 87, - "page_end": 87, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "If the storage node identifies a client node in OAM or Virtual Storage Access Method (VSAM), the storage manager copies the resources to archive storage.\n\n## Document Data\n\nFor Document Data, the following selections are valid:\n\n - /SM590000 Yes for Cache Data: You can cache document data and resource data or only resource data.\n - /SM590000 No Cache : Document data is not stored in cache.\n - /SM590000 Cache Document Data for xxx Days : Document data is stored in cache for xxx number of days before the data expires.\n\n## 10.3 Configuring for migration and expiration\n\nMany customers choose to expire their document data and indexes somewhere in the range of 5 - 10 years. In one extreme, document and index data might expire daily. In another extreme, document and index data might never expire.\n\nFour typical lifecycle scenarios are common. The Content Manager OnDemand administrator selects the scenario to implement through various parameters (as shown in this section), which are on the Storage Management tab of the Application Group window. 
The four scenarios are illustrated in Figure 10-2 on page 222.\n\n## 10.3.1 Migrating index data", - "page_start": 248, - "page_end": 248, - "source_file": "sg246915.pdf" - }, - { - "text": "Note: For the best indexing results, select a monospacing font with the line data graphical indexer.\n\nIf the font is changed by using the Administrator Client, the selected font is also used by the Windows client the next time that the Windows client is started and a line data document is viewed.\n\nFor more information, see Technote 1215957, which is available at the following web address:\n\nhttp://www.ibm.com/support/docview.wss?uid=swg21215957\n\n## 3.1.4 Folders\n\nA folder is the interface that allows a user to search for reports and documents that are stored in the Content Manager OnDemand system. One or more application groups can be defined to a folder. The user enters index search criteria into the folder search fields. In the background, an SQL search is issued for each included application group. The results of the queries are accumulated, and a document hit list is constructed and returned to the user. The folder can be customized to provide the look and feel that is wanted for the users of the Content Manager OnDemand system. The Content Manager OnDemand administrator can also grant specific permissions for users and groups to use the folders.\n\nFigure 3-7 shows the Add a Folder window.\n\nFigure 3-7 Folder general information\n\n\n\n## Display Document Location\n\nThe Display Document Location setting (Figure 3-7) determines whether the client shows the storage location of each document in the document list by placing an icon next to each entry. The possible locations are cache storage (on the library server or an object server) or archive storage.", - "page_start": 77, - "page_end": 77, - "source_file": "sg246915.pdf" - }, - { - "text": "- 4. When finished making changes, click Save to apply them. 
The editing window closes.", - "page_start": 380, - "page_end": 380, - "source_file": "sg247938.pdf" - }, - { - "text": "On Multiplatforms and z/OS, you can aggregate documents that are loaded from Content Manager OnDemand Web Enablement Kit (ODWEK) before you store them in the archive. The document is stored to cache where it is appended to the storage object until the object reaches 10 MB (defined storage object size), at which point it is migrated to a storage manager, such as Tivoli Storage Manager. For more information about this topic, see the following website:\n\nhttp://www.ibm.com/support/docview.wss?uid=swg21587507", - "page_start": 310, - "page_end": 310, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "welcome_to_word_template.pdf", - "query": "What is the bold keyboard shortcut on word ?", - "target_page": 4, - "target_passage": "Bold (keyboard shortcut: Ctrl+B)", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## Make your meaning more visual by formatting text\n\n\n\nTo format text, select it, and then select a button in the Font or Paragraph area on the Home tab.\n\nTry it: Select text in the lines below and choose formatting options so that the text is an example of the formatting it's describing:\n\n\n\nPro tip: If you selected whole words for this exercise, did you notice that Word popped up a little toolbar, with the font formatting options?\n\n\n\nBetween that and keyboard shortcuts like Ctrl+B and Ctrl+I, you save time by not having to go up to the Home tab all the time.", - "page_start": 3, - "page_end": 3, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "\n\n\n\n## Handy to Know…\n\n -  You can jump directly to a font. For example, if you want to preview Garamond , click on the name of the font in the Font command and press . Excel will jump to the fonts that start with G and Live Preview will display the text temporarily. 
Keep typing the name until you reach the required font.\n\n", - "page_start": 21, - "page_end": 21, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## NAVIGATING IN A FILE\n\nArrow Keys\n\nMove one cell to the right, left, up or down\n\nTab\n\nMove once cell to the right\n\nCtrl+Home\n\nTo beginning file\n\nCtrl+End\n\nTo end of typed information\n\nHome\n\nBeginning of a line\n\nEnd\n\nEnd of a line\n\nPage Down\n\nDown one screen\n\nPage Up\n\nUp one screen\n\nF5\n\nTo a specific page\n\nScroll bars\n\nAppear at the right and on the bottom of the screen. You may click the scroll arrows, drag the scroll box or click the scroll bar to move through the document.", - "page_start": 5, - "page_end": 5, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## CHANGING FONTS\n\nThe appearance that you choose for your text is referred to as the font or typeface . Font traditionally refers to a combination of typeface, style and size in points (e.g. Arial Bold 12 pt).\n\nIn Excel 2007, font just refers to the typeface or shape of the letters. Typical classic fonts include Times New Roman , Arial, Century Gothic and Copperplate .\n\n## Try This Yourself:\n\n\n\nContinue using the previous file with this exercise, or open the file E722 Font Formatting\\_1.xls...\n\n -  Click in cell A1 to make the cell with the main heading the active cell\n -  Click on the drop arrow next to the Font command in the Font group on the Home tab to display a gallery of available fonts\n -  Point to Arial Narrow , then Book Antiqua , Garamond and Gill Sans MT\n\nIf you don't have these fonts, try different ones. 
As you point to each font, the preview will change...\n\n -  Scroll to and click on\n - Comics Sans MS , or another font of your choice if you don't have this one\n\n\n\nThis time the font formatting has changed in the cell and is no longer just a preview - it won't change again unless you make another font selection.\n\n## For Your Reference…\n\n## To apply font formatting :\n\n - 1. Select the text\n - 2. Click on the drop arrow\n\nfor Font\n\n - 3. Point to a font to preview it\n - 4. Click on the font to apply it\n\n\n\n\n\n## Handy to Know…", - "page_start": 21, - "page_end": 21, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## Make magic: use Heading styles\n\nThe heading for this part ('Make magic: use Heading styles') looks the same as the other headings in this document, but it's not as useful. It's formatted with font settings (font, size, and color), while the other headings are formatted with a Heading style (Heading 1, to be exact).\n\n\n\nSee the little triangle when you mouse over those other headings?\n\nYou can collapse and expand everything under a heading, like an outline. But this one's not working. Let's fix it.\n\n## Try it: Apply the Heading 1 style:\n\n - 1. Put your cursor somewhere in the heading above ('Make magic: use Heading styles') don't select anything.\n - 2. On the Home tab, find Styles , and select Heading 1 (keyboard shortcut Ctrl+Alt+1).\n\nTa-da! 
Now it looks like a heading, and acts like one too.", - "page_start": 4, - "page_end": 4, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "\n\n## Up button:\n\nShort press to light up or turn off the screen; one press to go back the dial interface; long press to reactivate the watch.\n\n## Button down:\n\nShort press to enter multi-sport mode.\n\nIn addition, when the watch is in the off-screen state, you can light up the screen by pressing any buttons.\n\n## Charging instructions:\n\nWireless charging, as shown in the picture below.\n\n\n\n## 1.1 Shortcut function:\n\n- 1) Swipe to the left till you find the \"+\" icon, click the icon to add part of the functions in the shortcut.\n- 2) Scroll down the screen when the watch is in the dial interface, you can find Bluetooth connection status, time, power, brightness adjustment and other functions.", - "page_start": 1, - "page_end": 1, - "source_file": "6126797.pdf" - }, - { - "text": "Figure B-2 Generate keys\n\n\n\nTo generate keys : The blank area that is indicated by the message is the large blank rectangle on the GUI inside the section of the GUI labeled Key. Continue to move the mouse pointer over the blank area until the progress bar reaches the far right. 
This action generates random characters to create a unique key pair.", - "page_start": 779, - "page_end": 779, - "source_file": "sg247938.pdf" - }, - { - "text": "- /SM590000 Helvetica-Bold\n - /SM590000 Helvetica-BoldOblique\n - /SM590000 Helvetica-Oblique\n - /SM590000 Times-Roman\n - /SM590000 Times-Bold", - "page_start": 189, - "page_end": 189, - "source_file": "sg246915.pdf" - }, - { - "text": "## PRACTICE EXERCISE\n\n## Font Formatting\n\n## Tasks:\n\nBefore starting this exercise you MUST have completed all of the topics in the chapter Font Formatting…\n\n -  Open the workbook called PE\\_Font Formatting.xlsx (it can be found in the same folder as the student files)\n -  Format the heading in cell A1 as Cambria , 36 pt , bold , Orange Accent 2\n -  Format the other headings as bold, italic or underline as shown on the following page\n -  Use Orange, Accent 2, Lighter 80% to fill the area behind the headings as shown on the following page\n -  Add the superscript 1 in cell H3 and in cell B27 with the following comment\n - 1 Fee may be reduced as the result of Government Assistance\n\nYour completed worksheet should appear as shown on the following page...\n\n - \n\nUse the Save As command to save the workbook as PE\\_Font Formatting (Completed).xlsx\n\n\n\n", - "page_start": 26, - "page_end": 26, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## CHANGING FONT SIZE\n\nOne way that text can be emphasised is by changing the size of the font. For example, if your normal text is 11 pt, you may like to make the headings 13 pt or larger. Font size may also\n\nbe changed for small detailed items, such as comments or a caption. 
Main headings in a worksheet usually appear in a slightly larger font size compared to the rest of the data.\n\n## Try This Yourself:\n\n\n\nContinue using the previous file with this exercise, or open the file E722 Font Formatting\\_2.xlsx...\n\n -  Click in cell A1 to make the cell with the main heading the active cell\n - \n - Click on the drop arrow next to the Font Size command in the Font group on the Home tab to display a gallery of available sizes\n -  Point to various sizes and notice how Live Preview shows you how the heading will look\n -  Click on 16 to change the heading to 16 pt\n - You can also change the font size of parts of a document, and you can use the Mini toolbar...\n -  Click in cell A2\n -  Click with the right-mouse button to display the minitoolbar and the shortcut menu\n -  Click on the drop arrow next to Font Size and click on 14\n -  Click in cell A3 to hide the toolbar", - "page_start": 22, - "page_end": 22, - "source_file": "Excel Training Manual 1.pdf" - } - ] - }, - { - "references": { - "source_file": "welcome_to_word_template.pdf", - "query": "What is the advise to make the style sets and themes work well ? ", - "target_page": 6, - "target_passage": "They work best when your document is formatted with styles", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Give your doc an instant makeover\n\n\n\nStyle sets and themes let you completely change the look of your document in an instant. They work best when your document is formatted with styles (so it's good that we fixed that Heading style, above).\n\nTry it: Explore style sets and themes:\n\n - 1. On the Design tab, select Themes , and choose a theme from the drop-down. Notice that the gallery of style sets updates to reflect the theme you picked.\n - 2. 
Select any theme you like from the drop-down and click to apply.", - "page_start": 5, - "page_end": 5, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "## Make magic: use Heading styles\n\nThe heading for this part ('Make magic: use Heading styles') looks the same as the other headings in this document, but it's not as useful. It's formatted with font settings (font, size, and color), while the other headings are formatted with a Heading style (Heading 1, to be exact).\n\n\n\nSee the little triangle when you mouse over those other headings?\n\nYou can collapse and expand everything under a heading, like an outline. But this one's not working. Let's fix it.\n\n## Try it: Apply the Heading 1 style:\n\n - 1. Put your cursor somewhere in the heading above ('Make magic: use Heading styles') don't select anything.\n - 2. On the Home tab, find Styles , and select Heading 1 (keyboard shortcut Ctrl+Alt+1).\n\nTa-da! Now it looks like a heading, and acts like one too.", - "page_start": 4, - "page_end": 4, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "## 3.4 Mobile work, home as workplace and domestic work\n\nThe place of work - premises of the employer or any other place - is another major characteristic of working conditions, which significantly influences the risks and the preventive measures. This chapter takes a closer look at three types of work, that is, mobile work, private homes as workplace and domestic work, All pose - in a broad sense - similar challenges for OSH. 77\n\nFor OSH, the major question for all mobile and non-stationery work is: to what degree does the OSH level at these workplaces' deviate from the OSH level at stationary workplaces ? 
Current OSH legislation illustrates these difficulties: The Workplace Directive 78 excludes several types of mobile work, and the Display screen equipment directive 79 was issued in 1990 and does not reflect the variety and specific OSH issues of digital equipment development of the past 30 years. Both directives are currently under revision.\n\nMobile work is a standard characteristic of work in the construction and transport sector , extreme for workers in the maritime and other long-distance and international transport sectors, often in tourism and also for certain categories of sales personnel , and often standard for qualified craft workers during service or construction of plants and installations and during maintenance. 80\n\nTriggered by developments in digital and communication technologies, several new types of mobile work have developed. In principle, the place of work can be anywhere, in a car, train, hotel, at the premises of other employers, at remote office-like locations, or at the client's workplace or at private homes of clients; it is not 'place-bound'. Most of this mobile work still takes place in the contractual form of regular employment, but mobile work is also a major field for many new forms of new work contracts, triggered by the technological possibilities.\n\nTraditional home-based work consists of the production of small goods that - from a technical point of view - can be produced in private homes (clothes, artisan work and very repetitive work like sorting). This work is performed for an enterprise or a person contracted by the enterprise for the organisation of home-based work and is located at the homes of the workers. It might require extra technical equipment, but sometimes usual private equipment is sufficient. The traditional home-based work very probably has decreased to a low level, the quantity of this type of home-based work is not monitored at EU level. 
81 Regulation of OSH for such home-based work has a long tradition in OSH legislation, mostly aimed at achieving working conditions as similar as possible to the other employees in an enterprise, regarding wages, social protection, and safety and health.\n\nWork at, from and in homes. We can distinguish major types: work at (own) home , either as independent work (self-employed) or classical home-based work; work from private home embedded in daily routine work processes in an enterprise or institution; and work in homes of others . Long-term care work, domestic work and teaching are large categories of work in homes; the work is performed in the private homes of clients. Regarding work that is done at home, from home and in homes , the application of some basic OSH standards has to take into account the dominantly private character of a home. This triggers the question of responsibility and supervision : Who is responsible for risk assessment and prevention measures? Is a supervision of compliance by state authorities in private homes legally possible?", - "page_start": 48, - "page_end": 48, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "respect to a proposed appointee to the Board and the workings of the Board and its Committees are conveyed in interviews with the Chairman and induction procedures include access to appropriate executives in relation to details of the business of the Company.\n\nThe Chairman of the Board is the Chairman of the Nomination Committee. The current members of the Nomination Committee, all of whom are independent non-executive Directors, are Mr S Gerlach (Chairman), Mr P C Barnett and Mr G W McGregor.\n\n## 3. 
REVIEW OF BOARD AND EXECUTIVE PERFORMANCE\n\nThe Board Guidelines provide that:\n\n - · non-executive Directors are to be appointed on the basis that their nomination for re-election as a Director is subject to review and support by the Board;\n - · there should be appropriate circumstances justifying reelection after a specified period of service as a Director; and\n - · the contribution of the Board and of individual Directors is the subject of formal review and discussion on a biennial and annual basis, respectively.\n\nAs the biennial review of the Board and of its Committees was conducted by an independent consultant in 2003, no formal performance appraisal of the Board was conducted in 2004.\n\nPerformance evaluation of key executives is undertaken on a quarterly and annual basis by the CEO and summarised in presentation to the\n\nRemuneration Committee of the\n\nBoard, both specifically for determination of remuneration and generally in relation to management succession planning for review by the Board.\n\n## 4. INDEMNITY, ACCESS TO INFORMATION AND INDEPENDENT PROFESSIONAL ADVICE\n\nInformation in respect to indemnity and insurance arrangements for Directors and senior executives appears in the Directors' Statutory Report on page 49 of this Annual Report.\n\nThe Board Guidelines set out the circumstances and procedures pursuant to which a Director, in furtherance of his or her duties, may seek independent professional advice at the Company's expense. 
Those procedures require prior consultation with, and approval by, the Chairman and assurances as to the qualifications and reasonableness of the fees of the relevant expert and, under normal circumstances, the provision of the expert's advice to the Board.\n\nPursuant to a deed executed by the Company and each Director, a Director also has the right to have access to all documents which have been presented to meetings of the Board or to any Committee of the Board or otherwise made available to the Director whilst in office. This right continues for a term of seven years after ceasing to be a Director or such longer period as is necessary to determine relevant legal proceedings that commenced during that term.\n\n## 5. REMUNERATION\n\nThe role, responsibilities and composition of the Remuneration Committee and details of\n\nthe Company's remuneration objectives and principles, nonexecutive Director remuneration and executive remuneration are set out on pages 37 to 40 of this Annual Report in the Directors' and Executives' Remuneration section, as well as in the Directors' Statutory Report and in Notes 18 and 26 of the Financial Statements.\n\nDetails of the nature and amount of the remuneration of:\n\n - · the Directors; and\n - · the Specified Executives;\n\nare set out on pages 37 to 40 of this Annual Report.\n\n## 6. AUDIT COMMITTEE", - "page_start": 32, - "page_end": 32, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "\n\n## Introduction\n\nThis Remuneration Report forms part of the Directors' Report. It outlines the Remuneration Policy and framework applied by the Company as well as details of the remuneration paid to Key Management Personnel. 
Key Management Personnel are defined as those persons having the authority and responsibility for planning, directing and controlling the activities of the Company, directly or indirectly, including Directors and members of the Executive Management group.\n\nThe information provided in this report has been prepared in accordance with s300A and audited as required by section 308 (3c) of the Corporations Act 2001 .\n\nThe objective of the Company's remuneration philosophy is to ensure that Directors and senior staff are remunerated fairly and responsibly at a level that is competitive, reasonable and appropriate, in order to attract and retain suitably skilled and experienced people.\n\nDuring the year the Company introduced a STI Plan that is based on Key Management Personnel individual performance measures and a LongTerm Incentive ('LTI') Executive Rights Plan that provides performance-based remuneration to members of management through the issue of Deferred Rights and Performance Rights vesting over a period of three years. These new plans are discussed in further detail later in this report.\n\n## Voting and comments made at the Company's 2012 AGM\n\nThe table below provides a summary of the Board's action and / or comments in response to concerns raised by shareholders at the 2012 AGM in relation to remuneration.\n\n## Concern\n\nKey issues raised were:\n\n - 〉 t he granting of deferred rights;\n - 〉 definition of what compromises 'fixed pay'; and\n - 〉 a lack of understanding of the TSR Alpha™ concept recommended as the LTI performance assessment process.\n\n\n\n## Remuneration Policy\n\nThe Remuneration Policy has been designed to align the interests of shareholders, Directors, and employees. 
This is achieved by setting a framework to:", - "page_start": 51, - "page_end": 51, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "\n\n## Remuneration Policy\n\nThe Remuneration Policy has been designed to align the interests of shareholders, Directors, and employees. This is achieved by setting a framework to:\n\n - 〉 help ensure an applicable balance of fixed and at-risk remuneration, with the at-risk component linking incentive and performance measures to both Group and individual performance;\n - 〉 provide an appropriate reward for Directors and Executive Management to manage and lead the business successfully and to drive strong, long-term growth in line with the Company's strategy and business objectives;\n - 〉 encourage executives to strive for superior performance;\n - 〉 facilitate transparency and fairness in executive remuneration policy and practices;\n - 〉 be competitive and cost effective in the current employment market; and\n - 〉 contribute to appropriate attraction and retention strategies for Directors and executives.\n\nIn consultation with external remuneration consultants, the Group has structured an executive remuneration framework that is market competitive and complimentary to the business strategy of the organisation.\n\nThe framework is intended to provide a mix of fixed and variable remuneration, with a blend of short and long-term incentives as appropriate. As executives gain seniority within the Group, the balance of this mix shifts to a higher proportion of 'at risk' rewards (refer to chart Remuneration Reward Mix on the following page).\n\n## Remuneration Governance\n\n## Role of the Remuneration Committee\n\nThe Remuneration Committee is a committee of the Board and has responsibility for setting policy for determining the nature and amount of emoluments of Board members and senior executives. 
The Committee makes recommendations to the Board concerning:\n\n - 〉 Non-Executive Director fees;\n - 〉 remuneration levels of Executive Directors and other Key Management Personnel;\n - 〉 the executive remuneration framework and operation of the incentive plan; and\n - 〉 key performance indicators and performance hurdles for the executive team.\n\nIn forming its recommendations the Committee takes into consideration the Group's stage of development, remuneration in the industry and performance. The Corporate Governance Statement provides further information on the role of this committee.\n\n## Remuneration Consultants", - "page_start": 51, - "page_end": 51, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Claw Back Provisions\n\nThe Board, in its sole discretion, shall reserve the right to claw back any incentive awards issued if any of the following conditions apply:\n\n - · The Company's financial statements are required to be restated due to material non-compliance with any financial reporting requirements under the federal securities laws (other than a restatement due to a change in accounting rules); and\n - o As a result of such restatement, a performance measure which was a material factor in determining the award is restated, and\n - o In the discretion of the Board, a lower payment would have been made to the executive officer based upon the restated financial results;\n - · Should it subsequently be found that the information or assumptions are materially erroneous;\n - · In the event that there is evidence of fraud by any employee resulting in material adverse change in the Company's financial statements;\n - · In the event that there is a material adverse change in the circumstances of the Company.\n\n## E. 
Remuneration Policy and Framework\n\n## The Remuneration and Nominations Committee\n\nThe Remuneration and Nominations Committee makes recommendations to our board of directors in relation to total remuneration of directors and executives and reviews their remuneration annually. The Committee members are all independent directors, and independent external advice is sought when required.\n\n## Remuneration Consultant\n\nGiven the unique structure of being traded on the ASX but having a U.S.-based management team and operations, the Remuneration and Nominations Committee retained Meridian Compensation Partners, LLC (Meridian) as its independent remuneration consultant for the 2014 fiscal year. Meridian was retained to provide executive and director remuneration consulting services to the Committee, including advice regarding the design and implementation of remuneration programs that are competitive and common among the U.S. oil and gas exploration and production industry, competitive market information, comparison advice with Australian companies and practice, regulatory updates and analyses and trends on executive base salary,", - "page_start": 33, - "page_end": 33, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "## 2.2 Remuneration and Nominations Committee\n\nThe remuneration and nominations committee is structured so that it:\n\n - · Consists of a majority of independent Directors;\n - · Is chaired by an independent Director; and\n - · Has at least three members.\n\nThe responsibilities of the committee include recommendations to the Board about:\n\n - · Remuneration practices and levels of Executives and Non-executive Directors;\n - · The necessary and desirable competencies of Directors;\n - · Review of board succession plans;\n - · The development of a process for evaluation of the performance of the board, its committees and Directors; and,\n - · The appointment and re-election of Directors.\n\nThe combined Remuneration and Nominations Committee 
consists of three independent Non-Executive Directors and reports its recommendations to the Board for approval. Formal minutes are kept of each meeting and submitted to the Board for review. The members of the Remuneration and Nominations Committee is listed on page 26 of the Directors' Report. A Remuneration and Nominations Committee charter is published on the Company's website.\n\nThe Board reviews the composition and skill sets of the Committee on a regular basis, and considers that the current composition, size and skills of the Committee to be appropriate.\n\nCurrently no formal description of the procedure for the selection and appointment of new Directors or the re-election of incumbent Directors exists as it is considered that due to the size of the Company that this process is effectively managed by the Board. However, this activity is discussed by the Committee from time to time.\n\n## 2.3 Director Performance Review and Evaluation\n\nIn fiscal year 2014, Sundance's Board regularly met, both formally and informally, to discuss Board matters and to ensure that the Board acts in an effective way. The Board is provided with information that allows it to discharge its duties effectively, and Non-Executive Directors can and do request additional information as necessary to make informed decisions. The skills, experience and expertise relevant to the position of Director held by each director in office at the date of the annual report can be found in the Directors' Report on pages 23 to 25.\n\nNo formal process exists for Directors to access continuing education, as this is not considered practicable for the size of the Company and the financial resources available. However the four Non-Executive Directors have wide experience of directors' duties and are involved in a variety of outside business and professional activities that add to their knowledge and professionalism.\n\nThe Company Secretary is D Connor. 
He is accountable to the Board through the Chairman and accessible to all Directors. The appointment and removal of the Company Secretary is a matter for decision by the Board as a whole.\n\n## Principle 3: Promote Ethical and Responsible Decision-making\n\n## 3.1 Code of Conduct", - "page_start": 51, - "page_end": 51, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "Figure 7: Psychosocial risk factors - Differences between skill groups (Skill discretion)\n\n\n\nFor 'Decision authority' and 'Skill discretion', the authors found a stable situation since 1995, even a small rise of skill discretion for manual workers after 2010. Regarding 'Psychological demands' and 'Job strain', the major increase for all groups took place between 1995 and 2005. This growth decelerated after 2005, this observation is also valid for other working conditions, like work intensity.\n\n## 3.1.1 Working time in hours and at atypical times\n\nToo many hours of working time and/or working hours at atypical or unsocial times can put the mental and the physical health of humans at risk. It is also regarded as a major contributing factor to work accidents , due to fatigue or exhaustion. 16\n\nThe main indicator to describe working time is the number of the weekly average working hours of full-time employees. However, regarding its impact on health and safety, other aspects of working time are of the same relevance :\n\n - · How long is the average working day?\n - · At which times and days is this work done (typical, atypical times)?\n - · How often do long working hours take place?\n - · Is the work split between two jobs?\n - · How flexible are start and end?\n - · How intense is the work during this time (breaks, deadlines)?\n - · Which groups of workers have standard working times and which do not (e.g. depending on the sector or the type of contract, e.g. 
sub-contracted workers or self-employed)?\n\nThere is a slight trend towards fewer working hours for full-time employees (not 'Employed persons') in the EU27; between 2006 and 2019 the average weekly working time dropped from 40.2 to 39.9 hours, a decrease of approximately 15 minutes. 17\n\nRegarding the weekly hours, there are no striking differences between the EU27 Member States. In 2019, Cyprus, Austria and Malta with a high share of workers in the sector of tourism (accommodation) had the highest number of working hours per week (above 41 hours), and Denmark, the Netherlands and Italy the lowest number (39 or fewer) (full-time, employees, 15-64 years, all NACE codes). 18", - "page_start": 28, - "page_end": 28, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## CHAPTER 10:\n\n## LANGUAGE SKILLS AT WORK HOW TO WRITE A COVER LETTER\n\n\n\nIf you've ever applied for a job, you'll know that writing the cover letter is the most difficult part of almost any job application. Your cover letter creates the first impression, and often determines whether an employer will even look at your CV.\n\nYou need to use this opportunity to introduce yourself and your skills, and to set yourself apart from all the other candidates. You can also use this opportunity to explain any gaps in your CV, and to motivate why you are the right person for the job.", - "page_start": 44, - "page_end": 44, - "source_file": "basic-english-language-skills.PDF" - } - ] - }, - { - "references": { - "source_file": "1001.0770.pdf", - "query": "Where are the peaks of the VHE blazars ?", - "target_page": 1, - "target_passage": " VHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "## 3. 
VERITAS Blazar KSP\n\nVERITAS observes for ∼ 750 h and ∼ 250 h each year during periods of astronomical darkness and partial moonlight, respectively. The moonlight observations are almost exclusively used for a blazar discovery program, and a large fraction of the dark time is used for the blazar KSP, which consists of:\n\n- · A VHE blazar discovery program ( ∼ 200 h / yr): Each year ∼ 10 targets are selected to receive ∼ 10 h of observations each during astronomical darkness. These data are supplemented by discovery observations during periods of partial moonlight.\n- · A target-of-opportunity (ToO) observation program ( ∼ 50 h / yr): VERITAS blazar observations can be triggered by either a VERITAS blazar discovery, a VHE flaring alert ( > 2 Crab) from the blazar monitoring program of the Whipple 10-m telescope or from another VHE instrument, or a lower-energy flaring alert (optical, X-ray or Fermi-LAT). Should the guaranteed allocation be exhausted, further time can be requested from a pool of director's discretionary time.\n- · Multi-wavelength (MWL) studies of VHE blazars ( ∼ 50 h / yr + ToO): Each year one blazar receives a deep exposure in a pre-planned campaign of extensive, simultaneous MWL (Xray, optical, radio) measurements. ToO observation proposals for MWL measurements are also submitted to lower-energy observatories (e.g. Swift) and are triggered by a VERITAS discovery or flaring alert.\n- · Distant VHE blazar studies to constrain the extragalactic background light (EBL): Here distant targets are given a higher priority in the blazar discovery program, as well as for the MWL observations of known VHE blazars, particularly those with hard VHE spectra.\n\n## 4. Blazar Discovery Program\n\nThe blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. 
However, the program also includes IBLs (intermediatepeaked) and LBLs (low-peaked), as well as flat spectrum radio quasars (FSRQs), in an attempt to increase the types of blazars known to emit VHE γ -rays. The observed targets are drawn from a target list containing objects visible to the telescopes at reasonable zenith angles ( -8 · < δ < 72 · ), without a previously published VHE limit below 1.5% Crab, and with a measured redshift z < 0 . 3. To further the study of the\n\nEBL a few objects having a large ( z > 0 . 3) are also included in the target list. The target list includes:\n\n- · All nearby ( z < 0 . 3) HBL and IBL recommended as potential VHE emitters in [5, 6, 7].\n- · The X-ray brightest HBL ( z < 0 . 3) in the recent Sedentary [8] and ROXA [9] surveys.\n- · Four distant ( z > 0 . 3) BL Lac objects recommended by [5, 10].\n- · Several FSRQ recommended as potential VHE emitters in [6, 11].\n- · All nearby ( z < 0 . 3) blazars detected by EGRET [12].\n- · All nearby ( z < 0 . 3) blazars contained in the Fermi-LAT Bright AGN Sample [13].\n- · All sources ( | b | > 10 · ) detected by Fermi-LAT where extrapolations of their MeV-GeV γ -ray spectrum (including EBL absorption; assuming z = 0.3 if the redshift is unknown) indicates a possible VERITAS detection in less than 20 h. This criteria is the focus of the 2009-10 VERITAS blazar discovery program.\n\n## 5. VERITAS AGN Detections\n\nVERITAS has detected VHE γ -ray emission from 16 AGN (15 blazars), including 8 VHE discoveries. These AGN are shown in Table I, and each has been detected by the Large Area Telescope (LAT) instrument aboard the Fermi Gamma-ray Space Telescope. Every blazar discovered by VERITAS was the subject of ToO MWL observations to enable modeling of its simultaneously-measured SED. The known VHE blazars detected by VERITAS were similarly the targets of MWL observations.\n\n## 5.1. 
Recent VERITAS Blazar Discoveries", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "## 6. Blazars Upper Limits\n\nMore than 50 VHE blazar candidates were observed by VERITAS between September 2007 and June 2009. The total exposure on the 49 non-detected candidates is ∼ 305 h live time (average of 6.2 h per candidate). Approximately 55% of the total exposure is split amongst the 27 observed HBL. The remainder is divided amongst the 8 IBL (26%), 5 LBL (6%), and 9 FSRQ (13%). There are no clear indications of significant VHE γ -ray emission from any of these 49 blazars [25]. However, the observed significance distribution is clearly skewed towards positive values (see Figure 1). A stacking analysis performed on the entire data sample shows an overall excess of 430 γ -rays, corresponding to a statistical significance of 4.8 σ , observed from the directions of the candidate blazars. The IBL and HBL targets make up 96% of the observed excess. Observations of these objects also comprise ∼ 80% of the total exposure. An identical stacked analysis of all the extragalactic non-blazar targets observed, but not clearly detected ( > 5 σ ), by VERITAS does not show a significant excess ( ∼ 120 h exposure). The stacked excess persists using alternate methods for estimating the background at each blazar location, and with different event selection criteria (e.g. soft cuts optimized for sources with Γ VHE > 4). The distribution of VHE flux upper limits is shown in Figure 1. These 49 VHE flux upper limits are generally the most-constraining ever reported for these objects.\n\n## 7. Multi-wavelength Studies of VHE Blazars\n\nDuring the first three seasons of VERITAS observations, pre-planned extensive MWL campaigns were organized for three blazars 1ES 2344+514 (2007-08), 1ES 1218+304 (2008-09) and 1ES 0229+200 (200910 - ongoing). In addition, numerous ToO MWLobservation campaigns were performed. 
These include campaigns for every blazar/AGN discovered by VERITAS, and all include Swift (XRT and UVOT) data. All MWL campaigns on the VHE blazars discovered", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0770.pdf" - }, - { - "text": "## VERITAS Observations of Blazars\n\nW. Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E > 100 GeV) γ -ray emission from astrophysical objects. VERITAS is currently the most sensitive VHE γ -ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼ 30 known to emit VHE photons. More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. These observations have resulted in the detection of VHE γ -rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n## 1. Introduction\n\nActive galactic nuclei are the most numerous class of identified VHE γ -ray sources. These objects emit non-thermal radiation across ∼ 20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ -ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. 
Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ -rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH ( ∼ 2 min; [1]). VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) obser-\n\nvations of VHE blazars, can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ -rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. 
Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## 2. VERITAS", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "## 5.1. Recent VERITAS Blazar Discoveries\n\nPrior to the launch of Fermi VERITAS had discovered VHE emission from 2 blazars. These included the first VHE-detected IBL, W Comae [14, 15], and the HBL 1ES0806+524 [16]. VERITAS has discovered 6 VHE blazars since the launch of Fermi. Three of these were initially observed by VERITAS prior to the release of Fermi-LAT results, due to the X-ray brightness of the synchrotron peaks of their SEDs.\n\nVHEemission from 3C66A was discovered by VERITAS in September 2008 [17] during a flaring episode that was also observed by the Fermi-LAT [18]. The observed flux above 200 GeV was 6% of the Crab Nebula flux and the measured VHE spectrum was very soft (Γ VHE ∼ 4 . 1). RGBJ0710+591 was detected", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "tion of correlated VHE and X-ray flux variability, as well as correlated spectral hardening in both the VHE and X-ray bands. The VHE MWL observations were performed in both 'quiescent' and flaring states for some of the observed blazars. For the observed HBL objects, the SEDs can be well described by a simple SSC model in both high and low states. However, an additional external Compton component is necessary to adequately fit the SEDs of the IBL objects.\n\nThe Fermi-LAT is already having a significant impact on the blazar KSP. In future seasons, the VERITAS blazar discovery program will focus its discovery program on hard-spectrum blazars detected by Fermi-LAT, and will likely have a greater focus on high-risk/high-reward objects at larger redshifts (0 . 3 < z < 0 . 7). 
In addition, the number of VHE blazars studied in pre-planned MWL campaigns will increase as data from the Fermi-LAT will be publicly available. In particular, the extensive pre-planned MWL campaigns will focus on objects that are noteworthy for the impact their data may have on understanding the EBL. The simultaneous observations of blazars by VERITAS and Fermi-LAT will completely resolve the higher-energy SED peak, often for the first time, enabling unprecedented constraints on the underlying blazar phenomena to be derived.\n\n## Acknowledgments\n\nThis research is supported by grants from the US Department of Energy, the US National Science Foundation, and the Smithsonian Institution, by NSERC in Canada, by Science Foundation Ireland, and by STFC in the UK. We acknowledge the excellent work of the technical support staff at the FLWO and the collab-\n\norating institutions in the construction and operation of the instrument.\n\n## References\n\n - [1] F. Aharonian et al. 2007, ApJ , 664 , L71\n - [2] F. Aharonian et al. 2006, Nature , 440 , 1018\n - [3] F. Aharonian et al. 2007, A&A , 475 , L9\n - [4] J. Holder, et al. 2008, AIPC , 1085 , 657\n - [5] L. Costamante & G. Ghisellini 2002, A&A , 384 , 56\n - [6] E.S. Perlman 2000, AIPC , 515 , 53\n - [7] F.W. Stecker et al. 1996, ApJ , 473 , L75\n - [8] P. Giommi et al. 2005, A&A , 434 , 385\n - [9] S. Turriziani et al. 2007, A&A , 472 , 699\n - [10] L. Costamante 2006, arXiv:0612709\n - [11] P. Padovani et al. 2002, ApJ , 581 , 895\n - [12] R. Muhkerjee et al. 2001, AIPC , 558 , 324\n - [13] A.A. Abdo et al. 2009, ApJ , 700 , 597\n - [14] V.A. Acciari et al. 2008, ApJ , 684 , L73\n - [15] V.A. Acciari et al. 2009, ApJ , 707 , 612\n - [16] V.A. Acciari et al. 2009, ApJ , 690 , L126\n - [17] V.A. Acciari et al. 2009, ApJ , 693 , L104\n - [18] L.C. Reyes 2009, arXiv:0907.5175\n - [19] R.A. Ong 2009, ATel , 1941\n - [20] R.A. Ong et al. 2009, ATel , 2272\n - [21] V.A. Acciari et al. 2009, ApJ , 708 , L100\n - [22] R.A. 
Ong et al. 2009, ATel , 2301\n - [23] R.A. Ong et al. 2009, ATel , 2260\n - [24] R.A. Ong et al. 2009, ATel , 2309\n - [25] W. Benbow 2009, arXiv:0908.1412\n - [26] V.A. Acciari et al. 2009, ApJ , submitted\n - [27] V.A. Acciari et al. 2009, ApJ , 695 , 1370\n - [28] V.A. Acciari et al. 2009, ApJ , in press\n - [29] J. Grube 2009, arXiv:0907.4862", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 1: (Left) The preliminary significance measured from each of the 49 non-detected candidates using standard analysis cuts. The curve shows a Gaussian distribution, with mean zero and standard deviation one, normalized to the number of blazars. A similar result is obtained using analysis cuts optimized for soft-spectrum sources. (Right) The distribution of flux upper limits for the non-detected blazars in percentage of Crab Nebula flux above the observation threshold. The time-weighted average limit is less than ∼ 2% Crab flux.\n\n\n\n\n\nσ\n\nsince the launch of Fermi include LAT detections. In addition, several MWL campaigns on the well-studied VHE blazars Mkn 421 and Mkn 501 (please see the contributions of D. Gall and A. Konopelko in these proceedings) were also performed. Highlights of these campaigns include:\n\n - · 1ES 2344+514: A major (50% Crab) VHE flare, along with correlations of the VHE and X-ray flux were observed from this HBL. The VHE and X-ray spectra harden during bright states, and a synchrotron self-Compton (SSC) model can explain the observed SED in both the high and low states [26].\n - · 1ES 1218+304: This HBL flared during VERITAS MWL observations. Its unusually hard VHE spectrum strongly constrains the EBL. 
The observed flaring rules out kpc-scale jet emission as the explanation of the spectral hardness and places the EBL constraints on more solidfooting [27, 28].\n - · 1ES 0806+524: The observed SED of this new VHE HBL can be explained by an SSC model [16].\n - · W Comae: This IBL, the first discovered at VHE, flared twice in 2008 [14, 15]. Modeling of the SED is improved by including an externalCompton (EC) component in an SSC interpretation.\n - · 3C 66A: This IBL flared at VHE and MeV-GeV energies in 2008[17, 18]. Similar to W Comae and PKS 1424+240, modeling of observed SED suggests a strong EC component in addition to an SSC component.\n - · Mkn 421: This HBL exhibited major flaring behavior for several months in 2008. Correlations of the VHE and X-ray flux were observed, along with spectral hardening with increased flux in both bands [29].\n - · RGBJ0710+591: Modeling the SED of this HBL with an SSC model yields a good fit to the data. The inclusion of an external Compton component does not improve the fit.\n - · PKS1424+240: The broadband SED of this IBL (at unknown redshift) is well described by an SSC model favoring a redshift of less than 0.1 [21]. Using the photon index measured with Fermi-LAT in combination with recent EBL absorption models, the VERITAS data indicate that the redshift of PKS 1424+240 is less than 0.66.\n\n## 8. Conclusions\n\nThe first two years of the VERITAS blazar KSP were highly successful. Highlights include the detection of more than a 16 VHE blazars with the observations almost always having contemporaneous MWL data. Among these detections are 8 VHE blazar discoveries, including the first three IBLs known to emit VHE γ -rays. All but a handful of the blazars on the initial VERITAS discovery target list were observed, and the flux limits generated for those not VHE detected are generally the most-constraining ever. 
The excess seen in the stacked blazar analysis suggests that the initial direction of the VERITAS discovery program was well justified, and that follow-up observations of many of these initial targets will result in VHE discoveries. In addition, the Fermi-LAT is identifying many new compelling targets for the VERITAS blazar discovery program. These new candidates have already resulted in 3 VHE blazar discoveries. The future of the VERITAS blazar discovery program is clearly very bright.\n\nThe MWL aspect of the VERITAS blazar KSP has also been highly successful. Every VERITAS observation of a known, or newly discovered, VHE blazar has been accompanied by contemporaneous MWL observations. These data have resulted in the identifica-", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0770.pdf" - }, - { - "text": "## Submillimeter Variability and the Gamma-ray Connection in Fermi Blazars\n\nA. Strom Univ. of Arizona, AZ 85721, USA A. Siemiginowska, M. Gurwell, B. Kelly\n\nCfA, MA 02138, USA\n\nWe present multi-epoch observations from the Submillimeter Array ( SMA ) for a sample of 171 bright blazars, 43 of which were detected by Fermi during the first three months of observations. We explore the correlation between their gamma-ray properties and submillimeter observations of their parsec-scale jets, with a special emphasis on spectral index in both bands and the variability of the synchrotron component. Subclass is determined using a combination of Fermi designation and the Candidate Gamma-Ray Blazar Survey (CGRaBS), resulting in 35 BL Lac objects and 136 flat-spectrum radio quasars (FSRQs) in our total sample. We calculate submillimeter energy spectral indices using contemporaneous observations in the 1 mm and 850 micron bands during the months August-October 2008. The submillimeter light curves are modeled as first-order continuous autoregressive processes, from which we derive characteristic timescales. 
Our blazar sample exhibits no differences in submillimeter variability amplitude or characteristic timescale as a function of subclass or luminosity. All of the the light curves are consistent with being produced by a single process that accounts for both low and high states, and there is additional evidence that objects may be transitioning between blazar class during flaring epochs.\n\n## 1. INTRODUCTION\n\nThe timescales on which high-amplitude flaring events occur in blazars indicate that much of the energy is being produced deep within the jet on small, sub-parsec scales [1, 2]. Understanding if/how emission differs between blazar subclasses (i.e., BL Lacs objects and flat-spectrum radio quasars (FSRQs)) may offer important insight into the similarity between blazars and, furthermore, can provide constraints on the formation and acceleration of the jets themselves.\n\nFor the synchrotron component of blazar spectra, the low-frequency spectral break due to synchrotron self-absorption moves to higher frequencies as one measures closer to the base of the jet [2]. This often places the peak of the spectrum in the millimeter and submillimeter bands, where the emission is optically-thin and originates on parsec and sub-parsec scales [3], allowing direct observation of the most compact regions near the central engine. The high energy γ -ray emission originates as a Compton process, typically a combination of synchrotron-self-Compton (SSC) and external-radiation-Compton (ERC). Depending on the source properties, the synchrotron photons or external photons are upscattered by the same population of electrons that emit the millimeter and submillimeter spectra. Therefore the submillimeter and γ -ray emission are closely linked and give the full information about the source emission.\n\nA systematic study of the submillimeter properties of the entire sample of Fermi blazars has yet to be conducted and is one of the primary goals of our work. 
We present here preliminary analysis of the submillimeter properties of Fermi blazars detected by the Submil-\n\nlimeter Array 1 ( SMA ) at 1mm and 850 µ m, including an investigation of variable behavior and the determination of submillimeter energy spectral indices. In addition, we consider the connection to the observed γ -ray indices and luminosities.\n\n## 2. SMA BLAZARS", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "Table I VERITAS AGN Detections. The only non-blazar object is the radio galaxy M 87. The blazars discovered at VHE by VERITAS are marked with a dagger.\n\n| Object | | Class Redshift |\n|----------------|------|------------------|\n| M87 | FR I | 0.004 |\n| Mkn421 | HBL | 0.030 |\n| Mkn501 | HBL | 0.034 |\n| 1ES2344+514 | HBL | 0.044 |\n| 1ES1959+650 | HBL | 0.047 |\n| WComae † | IBL | 0.102 |\n| RGBJ0710+591 † | HBL | 0.125 |\n| H1426+428 | HBL | 0.129 |\n| 1ES0806+524 † | HBL | 0.138 |\n| 1ES0229+200 | HBL | 0.139 |\n| 1ES1218+304 | HBL | 0.182 |\n| RBS0413 † | HBL | 0.190 |\n| 1ES0502+675 † | HBL | 0.341 |\n| 3C66A † | IBL | 0.444? |\n| PKS1424+240 † | IBL | ? |\n| VERJ0521+211 † | ? | ? |\n\n( ∼ 5.5 σ ; 3% Crab flux above 300 GeV; Γ VHE ∼ 2 . 7) during VERITAS observations from December 2008 to March 2009. The initial announcement of the VHE discovery [19] led to its discovery above 1 GeV in the Fermi-LAT data using a special analysis. RBS 0413, a relatively distant HBL (z=0.19), was observed for 16 h good-quality live time in 2008-09 2 . These data resulted in the discovery of VHE gamma-rays ( > 270 γ , ∼ 6 σ ) at a flux ( > 200 GeV) of ∼ 2% of the Crab Nebula flux. The discovery [20] was announced simultaneously with the LAT MeV-GeV detection. The VHE and other MWL observations, including Fermi-LAT data, for each of these three sources will be the subject of a joint publication involving both the VERITAS and LAT collaborations.\n\n## 5.2. 
Discoveries Motivated by Fermi-LAT\n\nThe successful VHE discovery observations by VERITAS of three blazars was motivated primarily by results from the first year of LAT data taking. In particular, the VHE detections of PKS 1424+240 [21] and 1ES0502+675 [22] were the result of VERITAS observations triggered by the inclusion of these objects in the Fermi-LAT Bright AGN List [13]. The former is only the third IBL known to emit VHE gammarays, and the latter is the most distant BL Lac object\n\n( z = 0 . 341) detected in the VHE band. In addition, VERJ0521+211, likely associated with the radio-loud AGN RGBJ0521.8+2112, was detected by VERTAS in ∼ 4 h of observations in October 2009 [23]. These observations were motivated by its identification as a > 30 GeV γ -ray source in the public Fermi-LAT data. Its VHE flux is 5% of the Crab Nebula flux, placing it among the brightest VHE blazars detected in recent years. VERITAS later observed even brighter VHE flaring from VERJ0521+211 in November 2009 [24], leading to deeper VHE observations.\n\n## 6. Blazars Upper Limits", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0770.pdf" - }, - { - "text": "## 2. SMA BLAZARS\n\nThe Submillimeter Array [4] consists of eight 6 m antennas located near the summit of Mauna Kea. The SMA is used in a variety of baseline configurations and typically operates in the 1mm and 850 µ m windows, achieving spatial resolution as fine as 0.25' at 850 µ m. The sources used as phase calibrators for the array are compiled in a database known as the SMA Calibrator List 2 [5]. Essentially a collection of bright objects (stronger than 750 mJy at 230 GHz and 1 Jy at 345 GHz), these sources are monitored regularly, both during science observations and dedicated observing tracks.\n\nTo select our sample, we identified objects in the calibrator list that were also classified as BL Lacs or FSRQs by the Candidate Gamma-Ray Blazar Survey [6, CGRaBS]. 
Of the 243 total objects in the calibrator list, 171 (35 BL Lacs and 136 FSRQs) have positive blazar class identifications, although there are three sources (J0238+166, J0428-379, and", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 1: The SMA light curves for 3C 454.3. The open circles represent the 850 µ m observations, and the open triangles represent the 1mm observations.\n\n\n\nJ1751+096) which have conflicting classifications between Fermi and CGRaBS. Some blazars found in the calibrator list have been studied extensively (e.g., 3C 279 and 3C 454.3) but the SMA blazars have not been studied collectively.\n\nForty-four of the objects in our total blazar sample were detected by Fermi and can be found in the catalog of LAT Bright AGN Sources (LBAS) from Abdo et al. [7]. J0050-094 has no redshift in either the LBAS catalog or CGRaBS and is not included in our study. Of the 43 remaining sources, 14 are BL Lac objects and 29 are FSRQs, with 0 . 03 ≤ z ≤ 2 . 19.\n\nWe examined submillimeter light curves for all of the SMA blazars, with observations beginning in approximately 2003 (see Figure 1). Typically, the 1mm band is much more well-sampled in comparison to the 850m band, but visual inspection reveals that the regularity and quality of observations vary greatly from source to source. Many of the objects exhibit nonperiodic variability, either in the form of persistent, low-amplitude fluctuations or higher amplitude flaring behavior.\n\n## 2.1. Submillimeter Properties\n\nSubmillimeter Luminosities. Since we are primarily concerned with comparisons to Fermi observations, we note that only 129 of the SMA blazars (23 BL Lacs and 106 FSRQs) were observed by the SMA in either band during the three months August-October 2008. 
For these objects, submillimeter luminosities are calculated in the standard way:\n\nν e L ν e = 4 πD 2 L ν obs F obs 1 + z , (1)\n\nwhere D L is the luminosity distance, ν obs is the frequency of the observed band, and F obs is the average\n\nFigure 2: Variability index for our sample (top: 1mm, bottom: 850 µ m), with FSRQs as the hatched distribution and BL Lacs as the solid distribution. There is no signicant difference in the class distributions in either band; the 'tail' to the left is populated by objects with errors larger than the intrinsic variability.\n\n\n\nflux (in erg cm -2 s -1 Hz -1 ) over the three month period. We adopt a lambda cold dark matter cosmology with values of H 0 = 71 km s -1 Mpc -1 , Ω M = 0 . 27, and Λ = 0 . 73.\n\nEnergy Spectral Indices. We derive submillimeter spectral energy indices from observations quasisimultaneous with the Fermi observations. To be consistent with the use of α γ , we define spectral energy index as νF ν = ν -α S and calculate α S from the average of the energy spectral indices over the corresponding three months. We only calculate α S for the 16 objects (8 BL Lacs and 35 FSRQs) with observations at both 1mm and 850 µ m during this time frame.\n\n## 3. VARIABILITY ANALYSIS\n\n## 3.1. Variability Index\n\nWe roughly characterize the level of variability of each source using the variability index from Hovatta et al. [8]:\n\nV = ( F max -σ F max ) -( F min + σ F min ) ( F max -σ F max ) + ( F min + σ F min ) (2)\n\nFigure 2 shows the distribution for the SMA blazars. Objects with V ≤ 0 are typically unsuitable for more", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0806.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0770.pdf", - "query": "What are the blazars observed in the discovery program ?", - "target_page": 2, - "target_passage": "The blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. 
How ever, the program also includes IBLs (intermediate peaked) and LBLs (low-peaked), as well as flat spec trum radio quasars (FSRQs), in an attempt to in crease the types of blazars known to emit VHE γ-rays.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## 3. VERITAS Blazar KSP\n\nVERITAS observes for ∼ 750 h and ∼ 250 h each year during periods of astronomical darkness and partial moonlight, respectively. The moonlight observations are almost exclusively used for a blazar discovery program, and a large fraction of the dark time is used for the blazar KSP, which consists of:\n\n- · A VHE blazar discovery program ( ∼ 200 h / yr): Each year ∼ 10 targets are selected to receive ∼ 10 h of observations each during astronomical darkness. These data are supplemented by discovery observations during periods of partial moonlight.\n- · A target-of-opportunity (ToO) observation program ( ∼ 50 h / yr): VERITAS blazar observations can be triggered by either a VERITAS blazar discovery, a VHE flaring alert ( > 2 Crab) from the blazar monitoring program of the Whipple 10-m telescope or from another VHE instrument, or a lower-energy flaring alert (optical, X-ray or Fermi-LAT). Should the guaranteed allocation be exhausted, further time can be requested from a pool of director's discretionary time.\n- · Multi-wavelength (MWL) studies of VHE blazars ( ∼ 50 h / yr + ToO): Each year one blazar receives a deep exposure in a pre-planned campaign of extensive, simultaneous MWL (Xray, optical, radio) measurements. ToO observation proposals for MWL measurements are also submitted to lower-energy observatories (e.g. 
Swift) and are triggered by a VERITAS discovery or flaring alert.\n- · Distant VHE blazar studies to constrain the extragalactic background light (EBL): Here distant targets are given a higher priority in the blazar discovery program, as well as for the MWL observations of known VHE blazars, particularly those with hard VHE spectra.\n\n## 4. Blazar Discovery Program\n\nThe blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. However, the program also includes IBLs (intermediatepeaked) and LBLs (low-peaked), as well as flat spectrum radio quasars (FSRQs), in an attempt to increase the types of blazars known to emit VHE γ -rays. The observed targets are drawn from a target list containing objects visible to the telescopes at reasonable zenith angles ( -8 · < δ < 72 · ), without a previously published VHE limit below 1.5% Crab, and with a measured redshift z < 0 . 3. To further the study of the\n\nEBL a few objects having a large ( z > 0 . 3) are also included in the target list. The target list includes:\n\n- · All nearby ( z < 0 . 3) HBL and IBL recommended as potential VHE emitters in [5, 6, 7].\n- · The X-ray brightest HBL ( z < 0 . 3) in the recent Sedentary [8] and ROXA [9] surveys.\n- · Four distant ( z > 0 . 3) BL Lac objects recommended by [5, 10].\n- · Several FSRQ recommended as potential VHE emitters in [6, 11].\n- · All nearby ( z < 0 . 3) blazars detected by EGRET [12].\n- · All nearby ( z < 0 . 3) blazars contained in the Fermi-LAT Bright AGN Sample [13].\n- · All sources ( | b | > 10 · ) detected by Fermi-LAT where extrapolations of their MeV-GeV γ -ray spectrum (including EBL absorption; assuming z = 0.3 if the redshift is unknown) indicates a possible VERITAS detection in less than 20 h. This criteria is the focus of the 2009-10 VERITAS blazar discovery program.\n\n## 5. VERITAS AGN Detections\n\nVERITAS has detected VHE γ -ray emission from 16 AGN (15 blazars), including 8 VHE discoveries. 
These AGN are shown in Table I, and each has been detected by the Large Area Telescope (LAT) instrument aboard the Fermi Gamma-ray Space Telescope. Every blazar discovered by VERITAS was the subject of ToO MWL observations to enable modeling of its simultaneously-measured SED. The known VHE blazars detected by VERITAS were similarly the targets of MWL observations.\n\n## 5.1. Recent VERITAS Blazar Discoveries", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "## 6. Blazars Upper Limits\n\nMore than 50 VHE blazar candidates were observed by VERITAS between September 2007 and June 2009. The total exposure on the 49 non-detected candidates is ∼ 305 h live time (average of 6.2 h per candidate). Approximately 55% of the total exposure is split amongst the 27 observed HBL. The remainder is divided amongst the 8 IBL (26%), 5 LBL (6%), and 9 FSRQ (13%). There are no clear indications of significant VHE γ -ray emission from any of these 49 blazars [25]. However, the observed significance distribution is clearly skewed towards positive values (see Figure 1). A stacking analysis performed on the entire data sample shows an overall excess of 430 γ -rays, corresponding to a statistical significance of 4.8 σ , observed from the directions of the candidate blazars. The IBL and HBL targets make up 96% of the observed excess. Observations of these objects also comprise ∼ 80% of the total exposure. An identical stacked analysis of all the extragalactic non-blazar targets observed, but not clearly detected ( > 5 σ ), by VERITAS does not show a significant excess ( ∼ 120 h exposure). The stacked excess persists using alternate methods for estimating the background at each blazar location, and with different event selection criteria (e.g. soft cuts optimized for sources with Γ VHE > 4). The distribution of VHE flux upper limits is shown in Figure 1. 
These 49 VHE flux upper limits are generally the most-constraining ever reported for these objects.\n\n## 7. Multi-wavelength Studies of VHE Blazars\n\nDuring the first three seasons of VERITAS observations, pre-planned extensive MWL campaigns were organized for three blazars 1ES 2344+514 (2007-08), 1ES 1218+304 (2008-09) and 1ES 0229+200 (200910 - ongoing). In addition, numerous ToO MWLobservation campaigns were performed. These include campaigns for every blazar/AGN discovered by VERITAS, and all include Swift (XRT and UVOT) data. All MWL campaigns on the VHE blazars discovered", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0770.pdf" - }, - { - "text": "tion of correlated VHE and X-ray flux variability, as well as correlated spectral hardening in both the VHE and X-ray bands. The VHE MWL observations were performed in both 'quiescent' and flaring states for some of the observed blazars. For the observed HBL objects, the SEDs can be well described by a simple SSC model in both high and low states. However, an additional external Compton component is necessary to adequately fit the SEDs of the IBL objects.\n\nThe Fermi-LAT is already having a significant impact on the blazar KSP. In future seasons, the VERITAS blazar discovery program will focus its discovery program on hard-spectrum blazars detected by Fermi-LAT, and will likely have a greater focus on high-risk/high-reward objects at larger redshifts (0 . 3 < z < 0 . 7). In addition, the number of VHE blazars studied in pre-planned MWL campaigns will increase as data from the Fermi-LAT will be publicly available. In particular, the extensive pre-planned MWL campaigns will focus on objects that are noteworthy for the impact their data may have on understanding the EBL. 
The simultaneous observations of blazars by VERITAS and Fermi-LAT will completely resolve the higher-energy SED peak, often for the first time, enabling unprecedented constraints on the underlying blazar phenomena to be derived.\n\n## Acknowledgments\n\nThis research is supported by grants from the US Department of Energy, the US National Science Foundation, and the Smithsonian Institution, by NSERC in Canada, by Science Foundation Ireland, and by STFC in the UK. We acknowledge the excellent work of the technical support staff at the FLWO and the collab-\n\norating institutions in the construction and operation of the instrument.\n\n## References\n\n - [1] F. Aharonian et al. 2007, ApJ , 664 , L71\n - [2] F. Aharonian et al. 2006, Nature , 440 , 1018\n - [3] F. Aharonian et al. 2007, A&A , 475 , L9\n - [4] J. Holder, et al. 2008, AIPC , 1085 , 657\n - [5] L. Costamante & G. Ghisellini 2002, A&A , 384 , 56\n - [6] E.S. Perlman 2000, AIPC , 515 , 53\n - [7] F.W. Stecker et al. 1996, ApJ , 473 , L75\n - [8] P. Giommi et al. 2005, A&A , 434 , 385\n - [9] S. Turriziani et al. 2007, A&A , 472 , 699\n - [10] L. Costamante 2006, arXiv:0612709\n - [11] P. Padovani et al. 2002, ApJ , 581 , 895\n - [12] R. Muhkerjee et al. 2001, AIPC , 558 , 324\n - [13] A.A. Abdo et al. 2009, ApJ , 700 , 597\n - [14] V.A. Acciari et al. 2008, ApJ , 684 , L73\n - [15] V.A. Acciari et al. 2009, ApJ , 707 , 612\n - [16] V.A. Acciari et al. 2009, ApJ , 690 , L126\n - [17] V.A. Acciari et al. 2009, ApJ , 693 , L104\n - [18] L.C. Reyes 2009, arXiv:0907.5175\n - [19] R.A. Ong 2009, ATel , 1941\n - [20] R.A. Ong et al. 2009, ATel , 2272\n - [21] V.A. Acciari et al. 2009, ApJ , 708 , L100\n - [22] R.A. Ong et al. 2009, ATel , 2301\n - [23] R.A. Ong et al. 2009, ATel , 2260\n - [24] R.A. Ong et al. 2009, ATel , 2309\n - [25] W. Benbow 2009, arXiv:0908.1412\n - [26] V.A. Acciari et al. 2009, ApJ , submitted\n - [27] V.A. Acciari et al. 2009, ApJ , 695 , 1370\n - [28] V.A. Acciari et al. 
2009, ApJ , in press\n - [29] J. Grube 2009, arXiv:0907.4862", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0770.pdf" - }, - { - "text": "## VERITAS Observations of Blazars\n\nW. Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E > 100 GeV) γ -ray emission from astrophysical objects. VERITAS is currently the most sensitive VHE γ -ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼ 30 known to emit VHE photons. More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. These observations have resulted in the detection of VHE γ -rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n## 1. Introduction\n\nActive galactic nuclei are the most numerous class of identified VHE γ -ray sources. These objects emit non-thermal radiation across ∼ 20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ -ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. 
Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ -rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH ( ∼ 2 min; [1]). VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) obser-\n\nvations of VHE blazars, can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ -rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. 
Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## 2. VERITAS", "page_start": 0, "page_end": 0, "source_file": "1001.0770.pdf" }, { "text": "## Submillimeter Variability and the Gamma-ray Connection in Fermi Blazars\n\nA. Strom Univ. of Arizona, AZ 85721, USA A. Siemiginowska, M. Gurwell, B. Kelly\n\nCfA, MA 02138, USA\n\nWe present multi-epoch observations from the Submillimeter Array ( SMA ) for a sample of 171 bright blazars, 43 of which were detected by Fermi during the first three months of observations. We explore the correlation between their gamma-ray properties and submillimeter observations of their parsec-scale jets, with a special emphasis on spectral index in both bands and the variability of the synchrotron component. Subclass is determined using a combination of Fermi designation and the Candidate Gamma-Ray Blazar Survey (CGRaBS), resulting in 35 BL Lac objects and 136 flat-spectrum radio quasars (FSRQs) in our total sample. We calculate submillimeter energy spectral indices using contemporaneous observations in the 1 mm and 850 micron bands during the months August-October 2008. The submillimeter light curves are modeled as first-order continuous autoregressive processes, from which we derive characteristic timescales. Our blazar sample exhibits no differences in submillimeter variability amplitude or characteristic timescale as a function of subclass or luminosity. All of the light curves are consistent with being produced by a single process that accounts for both low and high states, and there is additional evidence that objects may be transitioning between blazar class during flaring epochs.\n\n## 1. 
INTRODUCTION\n\nThe timescales on which high-amplitude flaring events occur in blazars indicate that much of the energy is being produced deep within the jet on small, sub-parsec scales [1, 2]. Understanding if/how emission differs between blazar subclasses (i.e., BL Lacs objects and flat-spectrum radio quasars (FSRQs)) may offer important insight into the similarity between blazars and, furthermore, can provide constraints on the formation and acceleration of the jets themselves.\n\nFor the synchrotron component of blazar spectra, the low-frequency spectral break due to synchrotron self-absorption moves to higher frequencies as one measures closer to the base of the jet [2]. This often places the peak of the spectrum in the millimeter and submillimeter bands, where the emission is optically-thin and originates on parsec and sub-parsec scales [3], allowing direct observation of the most compact regions near the central engine. The high energy γ -ray emission originates as a Compton process, typically a combination of synchrotron-self-Compton (SSC) and external-radiation-Compton (ERC). Depending on the source properties, the synchrotron photons or external photons are upscattered by the same population of electrons that emit the millimeter and submillimeter spectra. Therefore the submillimeter and γ -ray emission are closely linked and give the full information about the source emission.\n\nA systematic study of the submillimeter properties of the entire sample of Fermi blazars has yet to be conducted and is one of the primary goals of our work. We present here preliminary analysis of the submillimeter properties of Fermi blazars detected by the Submil-\n\nlimeter Array 1 ( SMA ) at 1mm and 850 µ m, including an investigation of variable behavior and the determination of submillimeter energy spectral indices. In addition, we consider the connection to the observed γ -ray indices and luminosities.\n\n## 2. 
SMA BLAZARS", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "## 5.1. Recent VERITAS Blazar Discoveries\n\nPrior to the launch of Fermi VERITAS had discovered VHE emission from 2 blazars. These included the first VHE-detected IBL, W Comae [14, 15], and the HBL 1ES0806+524 [16]. VERITAS has discovered 6 VHE blazars since the launch of Fermi. Three of these were initially observed by VERITAS prior to the release of Fermi-LAT results, due to the X-ray brightness of the synchrotron peaks of their SEDs.\n\nVHEemission from 3C66A was discovered by VERITAS in September 2008 [17] during a flaring episode that was also observed by the Fermi-LAT [18]. The observed flux above 200 GeV was 6% of the Crab Nebula flux and the measured VHE spectrum was very soft (Γ VHE ∼ 4 . 1). RGBJ0710+591 was detected", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 1: The SMA light curves for 3C 454.3. The open circles represent the 850 µ m observations, and the open triangles represent the 1mm observations.\n\n\n\nJ1751+096) which have conflicting classifications between Fermi and CGRaBS. Some blazars found in the calibrator list have been studied extensively (e.g., 3C 279 and 3C 454.3) but the SMA blazars have not been studied collectively.\n\nForty-four of the objects in our total blazar sample were detected by Fermi and can be found in the catalog of LAT Bright AGN Sources (LBAS) from Abdo et al. [7]. J0050-094 has no redshift in either the LBAS catalog or CGRaBS and is not included in our study. Of the 43 remaining sources, 14 are BL Lac objects and 29 are FSRQs, with 0 . 03 ≤ z ≤ 2 . 19.\n\nWe examined submillimeter light curves for all of the SMA blazars, with observations beginning in approximately 2003 (see Figure 1). 
Typically, the 1mm band is much more well-sampled in comparison to the 850 µ m band, but visual inspection reveals that the regularity and quality of observations vary greatly from source to source. Many of the objects exhibit non-periodic variability, either in the form of persistent, low-amplitude fluctuations or higher amplitude flaring behavior.\n\n## 2.1. Submillimeter Properties\n\nSubmillimeter Luminosities. Since we are primarily concerned with comparisons to Fermi observations, we note that only 129 of the SMA blazars (23 BL Lacs and 106 FSRQs) were observed by the SMA in either band during the three months August-October 2008. For these objects, submillimeter luminosities are calculated in the standard way:\n\nν_e L_{ν_e} = 4π D_L^2 ν_obs F_obs / (1 + z) , (1)\n\nwhere D_L is the luminosity distance, ν_obs is the frequency of the observed band, and F_obs is the average\n\nFigure 2: Variability index for our sample (top: 1mm, bottom: 850 µ m), with FSRQs as the hatched distribution and BL Lacs as the solid distribution. There is no significant difference in the class distributions in either band; the 'tail' to the left is populated by objects with errors larger than the intrinsic variability.\n\n\n\nflux (in erg cm^-2 s^-1 Hz^-1) over the three month period. We adopt a lambda cold dark matter cosmology with values of H_0 = 71 km s^-1 Mpc^-1, Ω_M = 0.27, and Λ = 0.73.\n\nEnergy Spectral Indices. We derive submillimeter spectral energy indices from observations quasi-simultaneous with the Fermi observations. To be consistent with the use of α_γ, we define spectral energy index as νF_ν = ν^(-α_S) and calculate α_S from the average of the energy spectral indices over the corresponding three months. We only calculate α_S for the 16 objects (8 BL Lacs and 35 FSRQs) with observations at both 1mm and 850 µ m during this time frame.\n\n## 3. VARIABILITY ANALYSIS\n\n## 3.1. 
Variability Index\n\nWe roughly characterize the level of variability of each source using the variability index from Hovatta et al. [8]:\n\nV = [(F_max - σ_{F_max}) - (F_min + σ_{F_min})] / [(F_max - σ_{F_max}) + (F_min + σ_{F_min})] (2)\n\nFigure 2 shows the distribution for the SMA blazars. Objects with V ≤ 0 are typically unsuitable for more", "page_start": 1, "page_end": 1, "source_file": "1001.0806.pdf" }, { "text": "## 2. SMA BLAZARS\n\nThe Submillimeter Array [4] consists of eight 6 m antennas located near the summit of Mauna Kea. The SMA is used in a variety of baseline configurations and typically operates in the 1mm and 850 µ m windows, achieving spatial resolution as fine as 0.25' at 850 µ m. The sources used as phase calibrators for the array are compiled in a database known as the SMA Calibrator List 2 [5]. Essentially a collection of bright objects (stronger than 750 mJy at 230 GHz and 1 Jy at 345 GHz), these sources are monitored regularly, both during science observations and dedicated observing tracks.\n\nTo select our sample, we identified objects in the calibrator list that were also classified as BL Lacs or FSRQs by the Candidate Gamma-Ray Blazar Survey [6, CGRaBS]. Of the 243 total objects in the calibrator list, 171 (35 BL Lacs and 136 FSRQs) have positive blazar class identifications, although there are three sources (J0238+166, J0428-379, and", "page_start": 0, "page_end": 0, "source_file": "1001.0806.pdf" }, { "text": "Figure 5: Ratio of γ -ray luminosity to submillimeter luminosity in the 1mm band. The location of an object in this plot should be directly correlated with its blazar 'state', with FSRQs occupying the upper right and BL Lacs the lower left. 
Flat-spectrum radio quasar 3C 454.3 is the object with the highest submillimeter luminosity in this plot.\n\n\n\n - · BL Lacs and FSRQs do not exhibit significant differences in amplitude of submillimeter variability or characteristic timescale, but our sample of BL Lacs may be dominated by highpeaked BL Lacs (HBLs), which exhibit observational similarities with FSRQs.\n - · Blazar submillimeter light curves are consistent with being produced by a single process that accounts for both high and low states, with characteristic timescales 10 < τ rest < 500 days.\n - · The blazars detected by Fermi have synchrotron peaks at higher frequencies, regardless of submillimeter luminosity.\n - · FSRQs exhibit higher ratios of γ -ray to submillimeter luminosity than BL Lacs (Figure 5), but all objects inhabit a region of parameter space suggesting transitions between states during flaring epochs.\n\nAs Fermi continues to observe fainter sources, the sample of objects for which we can perform this type of analysis will increase and provide better limits on our results. To understand the physical relevance of these results, however, it is important to be able to distinguish between the difference in variability between BL\n\nLacs and FSRQs. One avenue for exploring this difference is to monitor changing submillimeter energy spectral index and the ratio of γ -ray to submillimeter luminosity as functions of time. The full meaning of the results of our autoregressive method is not yet clear, and will require better-sampled blazar light curves and the comparison between τ rest with physical timescales such as the synchrotron cooling timescale. These analyses would allow us to place constraints on the processes occurring near the base of the jet in blazars and further understand the intimate connection between them.\n\n## Acknowledgments\n\nThis work was supported in part by the NSF REU and DoD ASSURE programs under Grant no. 0754568 and by the Smithsonian Institution. 
Partial support was also provided by NASA contract NAS8-39073 and NASA grant NNX07AQ55G. We have made use of the SIMBAD database, operated at CDS, Strasbourg, France, and the NASA/IPAC Extragalactic Database (NED) which is operated by the JPL, Caltech, under contract with NASA.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0806.pdf" - }, - { - "text": "Table I VERITAS AGN Detections. The only non-blazar object is the radio galaxy M 87. The blazars discovered at VHE by VERITAS are marked with a dagger.\n\n| Object | | Class Redshift |\n|----------------|------|------------------|\n| M87 | FR I | 0.004 |\n| Mkn421 | HBL | 0.030 |\n| Mkn501 | HBL | 0.034 |\n| 1ES2344+514 | HBL | 0.044 |\n| 1ES1959+650 | HBL | 0.047 |\n| WComae † | IBL | 0.102 |\n| RGBJ0710+591 † | HBL | 0.125 |\n| H1426+428 | HBL | 0.129 |\n| 1ES0806+524 † | HBL | 0.138 |\n| 1ES0229+200 | HBL | 0.139 |\n| 1ES1218+304 | HBL | 0.182 |\n| RBS0413 † | HBL | 0.190 |\n| 1ES0502+675 † | HBL | 0.341 |\n| 3C66A † | IBL | 0.444? |\n| PKS1424+240 † | IBL | ? |\n| VERJ0521+211 † | ? | ? |\n\n( ∼ 5.5 σ ; 3% Crab flux above 300 GeV; Γ VHE ∼ 2 . 7) during VERITAS observations from December 2008 to March 2009. The initial announcement of the VHE discovery [19] led to its discovery above 1 GeV in the Fermi-LAT data using a special analysis. RBS 0413, a relatively distant HBL (z=0.19), was observed for 16 h good-quality live time in 2008-09 2 . These data resulted in the discovery of VHE gamma-rays ( > 270 γ , ∼ 6 σ ) at a flux ( > 200 GeV) of ∼ 2% of the Crab Nebula flux. The discovery [20] was announced simultaneously with the LAT MeV-GeV detection. The VHE and other MWL observations, including Fermi-LAT data, for each of these three sources will be the subject of a joint publication involving both the VERITAS and LAT collaborations.\n\n## 5.2. 
Discoveries Motivated by Fermi-LAT\n\nThe successful VHE discovery observations by VERITAS of three blazars was motivated primarily by results from the first year of LAT data taking. In particular, the VHE detections of PKS 1424+240 [21] and 1ES0502+675 [22] were the result of VERITAS observations triggered by the inclusion of these objects in the Fermi-LAT Bright AGN List [13]. The former is only the third IBL known to emit VHE gamma-rays, and the latter is the most distant BL Lac object\n\n( z = 0 . 341) detected in the VHE band. In addition, VERJ0521+211, likely associated with the radio-loud AGN RGBJ0521.8+2112, was detected by VERITAS in ∼ 4 h of observations in October 2009 [23]. These observations were motivated by its identification as a > 30 GeV γ -ray source in the public Fermi-LAT data. Its VHE flux is 5% of the Crab Nebula flux, placing it among the brightest VHE blazars detected in recent years. VERITAS later observed even brighter VHE flaring from VERJ0521+211 in November 2009 [24], leading to deeper VHE observations.\n\n## 6. Blazars Upper Limits", "page_start": 2, "page_end": 2, "source_file": "1001.0770.pdf" } ] }, { "references": { "source_file": "1001.0770.pdf", "query": "How many VHE blazar candidates were observed by VERITAS between September 2007 and June 2009 ?", "target_page": 3, "target_passage": "More than 50 VHE blazar candidates were observed by VERITAS between September 2007 and June 2009.", "chunk_present": { "presence": true, "index": 0 } }, "top_chunk": [ { "text": "## 6. Blazars Upper Limits\n\nMore than 50 VHE blazar candidates were observed by VERITAS between September 2007 and June 2009. The total exposure on the 49 non-detected candidates is ∼ 305 h live time (average of 6.2 h per candidate). Approximately 55% of the total exposure is split amongst the 27 observed HBL. The remainder is divided amongst the 8 IBL (26%), 5 LBL (6%), and 9 FSRQ (13%). 
There are no clear indications of significant VHE γ -ray emission from any of these 49 blazars [25]. However, the observed significance distribution is clearly skewed towards positive values (see Figure 1). A stacking analysis performed on the entire data sample shows an overall excess of 430 γ -rays, corresponding to a statistical significance of 4.8 σ , observed from the directions of the candidate blazars. The IBL and HBL targets make up 96% of the observed excess. Observations of these objects also comprise ∼ 80% of the total exposure. An identical stacked analysis of all the extragalactic non-blazar targets observed, but not clearly detected ( > 5 σ ), by VERITAS does not show a significant excess ( ∼ 120 h exposure). The stacked excess persists using alternate methods for estimating the background at each blazar location, and with different event selection criteria (e.g. soft cuts optimized for sources with Γ VHE > 4). The distribution of VHE flux upper limits is shown in Figure 1. These 49 VHE flux upper limits are generally the most-constraining ever reported for these objects.\n\n## 7. Multi-wavelength Studies of VHE Blazars\n\nDuring the first three seasons of VERITAS observations, pre-planned extensive MWL campaigns were organized for three blazars 1ES 2344+514 (2007-08), 1ES 1218+304 (2008-09) and 1ES 0229+200 (200910 - ongoing). In addition, numerous ToO MWLobservation campaigns were performed. These include campaigns for every blazar/AGN discovered by VERITAS, and all include Swift (XRT and UVOT) data. All MWL campaigns on the VHE blazars discovered", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0770.pdf" - }, - { - "text": "## 5.1. Recent VERITAS Blazar Discoveries\n\nPrior to the launch of Fermi VERITAS had discovered VHE emission from 2 blazars. These included the first VHE-detected IBL, W Comae [14, 15], and the HBL 1ES0806+524 [16]. VERITAS has discovered 6 VHE blazars since the launch of Fermi. 
Three of these were initially observed by VERITAS prior to the release of Fermi-LAT results, due to the X-ray brightness of the synchrotron peaks of their SEDs.\n\nVHEemission from 3C66A was discovered by VERITAS in September 2008 [17] during a flaring episode that was also observed by the Fermi-LAT [18]. The observed flux above 200 GeV was 6% of the Crab Nebula flux and the measured VHE spectrum was very soft (Γ VHE ∼ 4 . 1). RGBJ0710+591 was detected", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "## VERITAS Observations of Blazars\n\nW. Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E > 100 GeV) γ -ray emission from astrophysical objects. VERITAS is currently the most sensitive VHE γ -ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼ 30 known to emit VHE photons. More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. These observations have resulted in the detection of VHE γ -rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n## 1. Introduction\n\nActive galactic nuclei are the most numerous class of identified VHE γ -ray sources. These objects emit non-thermal radiation across ∼ 20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. 
A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ -ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ -rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH ( ∼ 2 min; [1]). VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) obser-\n\nvations of VHE blazars, can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. 
They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ -rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## 2. VERITAS", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "## 3. VERITAS Blazar KSP\n\nVERITAS observes for ∼ 750 h and ∼ 250 h each year during periods of astronomical darkness and partial moonlight, respectively. The moonlight observations are almost exclusively used for a blazar discovery program, and a large fraction of the dark time is used for the blazar KSP, which consists of:\n\n- · A VHE blazar discovery program ( ∼ 200 h / yr): Each year ∼ 10 targets are selected to receive ∼ 10 h of observations each during astronomical darkness. These data are supplemented by discovery observations during periods of partial moonlight.\n- · A target-of-opportunity (ToO) observation program ( ∼ 50 h / yr): VERITAS blazar observations can be triggered by either a VERITAS blazar discovery, a VHE flaring alert ( > 2 Crab) from the blazar monitoring program of the Whipple 10-m telescope or from another VHE instrument, or a lower-energy flaring alert (optical, X-ray or Fermi-LAT). Should the guaranteed allocation be exhausted, further time can be requested from a pool of director's discretionary time.\n- · Multi-wavelength (MWL) studies of VHE blazars ( ∼ 50 h / yr + ToO): Each year one blazar receives a deep exposure in a pre-planned campaign of extensive, simultaneous MWL (Xray, optical, radio) measurements. 
ToO observation proposals for MWL measurements are also submitted to lower-energy observatories (e.g. Swift) and are triggered by a VERITAS discovery or flaring alert.\n- · Distant VHE blazar studies to constrain the extragalactic background light (EBL): Here distant targets are given a higher priority in the blazar discovery program, as well as for the MWL observations of known VHE blazars, particularly those with hard VHE spectra.\n\n## 4. Blazar Discovery Program\n\nThe blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. However, the program also includes IBLs (intermediatepeaked) and LBLs (low-peaked), as well as flat spectrum radio quasars (FSRQs), in an attempt to increase the types of blazars known to emit VHE γ -rays. The observed targets are drawn from a target list containing objects visible to the telescopes at reasonable zenith angles ( -8 · < δ < 72 · ), without a previously published VHE limit below 1.5% Crab, and with a measured redshift z < 0 . 3. To further the study of the\n\nEBL a few objects having a large ( z > 0 . 3) are also included in the target list. The target list includes:\n\n- · All nearby ( z < 0 . 3) HBL and IBL recommended as potential VHE emitters in [5, 6, 7].\n- · The X-ray brightest HBL ( z < 0 . 3) in the recent Sedentary [8] and ROXA [9] surveys.\n- · Four distant ( z > 0 . 3) BL Lac objects recommended by [5, 10].\n- · Several FSRQ recommended as potential VHE emitters in [6, 11].\n- · All nearby ( z < 0 . 3) blazars detected by EGRET [12].\n- · All nearby ( z < 0 . 3) blazars contained in the Fermi-LAT Bright AGN Sample [13].\n- · All sources ( | b | > 10 · ) detected by Fermi-LAT where extrapolations of their MeV-GeV γ -ray spectrum (including EBL absorption; assuming z = 0.3 if the redshift is unknown) indicates a possible VERITAS detection in less than 20 h. This criteria is the focus of the 2009-10 VERITAS blazar discovery program.\n\n## 5. 
VERITAS AGN Detections\n\nVERITAS has detected VHE γ -ray emission from 16 AGN (15 blazars), including 8 VHE discoveries. These AGN are shown in Table I, and each has been detected by the Large Area Telescope (LAT) instrument aboard the Fermi Gamma-ray Space Telescope. Every blazar discovered by VERITAS was the subject of ToO MWL observations to enable modeling of its simultaneously-measured SED. The known VHE blazars detected by VERITAS were similarly the targets of MWL observations.\n\n## 5.1. Recent VERITAS Blazar Discoveries", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "tion of correlated VHE and X-ray flux variability, as well as correlated spectral hardening in both the VHE and X-ray bands. The VHE MWL observations were performed in both 'quiescent' and flaring states for some of the observed blazars. For the observed HBL objects, the SEDs can be well described by a simple SSC model in both high and low states. However, an additional external Compton component is necessary to adequately fit the SEDs of the IBL objects.\n\nThe Fermi-LAT is already having a significant impact on the blazar KSP. In future seasons, the VERITAS blazar discovery program will focus its discovery program on hard-spectrum blazars detected by Fermi-LAT, and will likely have a greater focus on high-risk/high-reward objects at larger redshifts (0 . 3 < z < 0 . 7). In addition, the number of VHE blazars studied in pre-planned MWL campaigns will increase as data from the Fermi-LAT will be publicly available. In particular, the extensive pre-planned MWL campaigns will focus on objects that are noteworthy for the impact their data may have on understanding the EBL. 
The simultaneous observations of blazars by VERITAS and Fermi-LAT will completely resolve the higher-energy SED peak, often for the first time, enabling unprecedented constraints on the underlying blazar phenomena to be derived.\n\n## Acknowledgments\n\nThis research is supported by grants from the US Department of Energy, the US National Science Foundation, and the Smithsonian Institution, by NSERC in Canada, by Science Foundation Ireland, and by STFC in the UK. We acknowledge the excellent work of the technical support staff at the FLWO and the collab-\n\norating institutions in the construction and operation of the instrument.\n\n## References\n\n - [1] F. Aharonian et al. 2007, ApJ , 664 , L71\n - [2] F. Aharonian et al. 2006, Nature , 440 , 1018\n - [3] F. Aharonian et al. 2007, A&A , 475 , L9\n - [4] J. Holder, et al. 2008, AIPC , 1085 , 657\n - [5] L. Costamante & G. Ghisellini 2002, A&A , 384 , 56\n - [6] E.S. Perlman 2000, AIPC , 515 , 53\n - [7] F.W. Stecker et al. 1996, ApJ , 473 , L75\n - [8] P. Giommi et al. 2005, A&A , 434 , 385\n - [9] S. Turriziani et al. 2007, A&A , 472 , 699\n - [10] L. Costamante 2006, arXiv:0612709\n - [11] P. Padovani et al. 2002, ApJ , 581 , 895\n - [12] R. Muhkerjee et al. 2001, AIPC , 558 , 324\n - [13] A.A. Abdo et al. 2009, ApJ , 700 , 597\n - [14] V.A. Acciari et al. 2008, ApJ , 684 , L73\n - [15] V.A. Acciari et al. 2009, ApJ , 707 , 612\n - [16] V.A. Acciari et al. 2009, ApJ , 690 , L126\n - [17] V.A. Acciari et al. 2009, ApJ , 693 , L104\n - [18] L.C. Reyes 2009, arXiv:0907.5175\n - [19] R.A. Ong 2009, ATel , 1941\n - [20] R.A. Ong et al. 2009, ATel , 2272\n - [21] V.A. Acciari et al. 2009, ApJ , 708 , L100\n - [22] R.A. Ong et al. 2009, ATel , 2301\n - [23] R.A. Ong et al. 2009, ATel , 2260\n - [24] R.A. Ong et al. 2009, ATel , 2309\n - [25] W. Benbow 2009, arXiv:0908.1412\n - [26] V.A. Acciari et al. 2009, ApJ , submitted\n - [27] V.A. Acciari et al. 2009, ApJ , 695 , 1370\n - [28] V.A. Acciari et al. 
2009, ApJ , in press\n - [29] J. Grube 2009, arXiv:0907.4862", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 1: (Left) The preliminary significance measured from each of the 49 non-detected candidates using standard analysis cuts. The curve shows a Gaussian distribution, with mean zero and standard deviation one, normalized to the number of blazars. A similar result is obtained using analysis cuts optimized for soft-spectrum sources. (Right) The distribution of flux upper limits for the non-detected blazars in percentage of Crab Nebula flux above the observation threshold. The time-weighted average limit is less than ∼ 2% Crab flux.\n\n\n\n\n\nσ\n\nsince the launch of Fermi include LAT detections. In addition, several MWL campaigns on the well-studied VHE blazars Mkn 421 and Mkn 501 (please see the contributions of D. Gall and A. Konopelko in these proceedings) were also performed. Highlights of these campaigns include:\n\n - · 1ES 2344+514: A major (50% Crab) VHE flare, along with correlations of the VHE and X-ray flux were observed from this HBL. The VHE and X-ray spectra harden during bright states, and a synchrotron self-Compton (SSC) model can explain the observed SED in both the high and low states [26].\n - · 1ES 1218+304: This HBL flared during VERITAS MWL observations. Its unusually hard VHE spectrum strongly constrains the EBL. The observed flaring rules out kpc-scale jet emission as the explanation of the spectral hardness and places the EBL constraints on more solidfooting [27, 28].\n - · 1ES 0806+524: The observed SED of this new VHE HBL can be explained by an SSC model [16].\n - · W Comae: This IBL, the first discovered at VHE, flared twice in 2008 [14, 15]. Modeling of the SED is improved by including an externalCompton (EC) component in an SSC interpretation.\n - · 3C 66A: This IBL flared at VHE and MeV-GeV energies in 2008[17, 18]. 
Similar to W Comae and PKS 1424+240, modeling of observed SED suggests a strong EC component in addition to an SSC component.\n - · Mkn 421: This HBL exhibited major flaring behavior for several months in 2008. Correlations of the VHE and X-ray flux were observed, along with spectral hardening with increased flux in both bands [29].\n - · RGBJ0710+591: Modeling the SED of this HBL with an SSC model yields a good fit to the data. The inclusion of an external Compton component does not improve the fit.\n - · PKS1424+240: The broadband SED of this IBL (at unknown redshift) is well described by an SSC model favoring a redshift of less than 0.1 [21]. Using the photon index measured with Fermi-LAT in combination with recent EBL absorption models, the VERITAS data indicate that the redshift of PKS 1424+240 is less than 0.66.\n\n## 8. Conclusions\n\nThe first two years of the VERITAS blazar KSP were highly successful. Highlights include the detection of more than a 16 VHE blazars with the observations almost always having contemporaneous MWL data. Among these detections are 8 VHE blazar discoveries, including the first three IBLs known to emit VHE γ -rays. All but a handful of the blazars on the initial VERITAS discovery target list were observed, and the flux limits generated for those not VHE detected are generally the most-constraining ever. The excess seen in the stacked blazar analysis suggests that the initial direction of the VERITAS discovery program was well justified, and that follow-up observations of many of these initial targets will result in VHE discoveries. In addition, the Fermi-LAT is identifying many new compelling targets for the VERITAS blazar discovery program. These new candidates have already resulted in 3 VHE blazar discoveries. The future of the VERITAS blazar discovery program is clearly very bright.\n\nThe MWL aspect of the VERITAS blazar KSP has also been highly successful. 
Every VERITAS observation of a known, or newly discovered, VHE blazar has been accompanied by contemporaneous MWL observations. These data have resulted in the identifica-", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0770.pdf" - }, - { - "text": "## 2. VERITAS\n\nVERITAS, a stereoscopic array of four 12-m atmospheric-Cherenkov telescopes located in Arizona, is used to study VHE γ -rays from a variety of astrophysical sources [4]. VERITAS began scientific observations with a partial array in September 2006 and has routinely observed with the full array since September 2007. The performance metrics of VERITAS include an energy threshold of ∼ 100 GeV, an energy resolution of ∼ 15%, an angular resolution of ∼ 0.1 · , and a sensitivity yielding a 5 σ detection of a 1% Crab Nebula flux object in < 30 hours 1 . VERITAS has an active maintenance program (e.g. frequent mirror recoating and alignment) to ensure its continued high performance over time, and an upgrade improving both the camera (higher quantum-efficiency PMTs) and the trigger system has been proposed to the funding agencies.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "Table I VERITAS AGN Detections. The only non-blazar object is the radio galaxy M 87. The blazars discovered at VHE by VERITAS are marked with a dagger.\n\n| Object | | Class Redshift |\n|----------------|------|------------------|\n| M87 | FR I | 0.004 |\n| Mkn421 | HBL | 0.030 |\n| Mkn501 | HBL | 0.034 |\n| 1ES2344+514 | HBL | 0.044 |\n| 1ES1959+650 | HBL | 0.047 |\n| WComae † | IBL | 0.102 |\n| RGBJ0710+591 † | HBL | 0.125 |\n| H1426+428 | HBL | 0.129 |\n| 1ES0806+524 † | HBL | 0.138 |\n| 1ES0229+200 | HBL | 0.139 |\n| 1ES1218+304 | HBL | 0.182 |\n| RBS0413 † | HBL | 0.190 |\n| 1ES0502+675 † | HBL | 0.341 |\n| 3C66A † | IBL | 0.444? |\n| PKS1424+240 † | IBL | ? |\n| VERJ0521+211 † | ? | ? |\n\n( ∼ 5.5 σ ; 3% Crab flux above 300 GeV; Γ VHE ∼ 2 . 
7) during VERITAS observations from December 2008 to March 2009. The initial announcement of the VHE discovery [19] led to its discovery above 1 GeV in the Fermi-LAT data using a special analysis. RBS 0413, a relatively distant HBL (z=0.19), was observed for 16 h good-quality live time in 2008-09 2 . These data resulted in the discovery of VHE gamma-rays ( > 270 γ , ∼ 6 σ ) at a flux ( > 200 GeV) of ∼ 2% of the Crab Nebula flux. The discovery [20] was announced simultaneously with the LAT MeV-GeV detection. The VHE and other MWL observations, including Fermi-LAT data, for each of these three sources will be the subject of a joint publication involving both the VERITAS and LAT collaborations.\n\n## 5.2. Discoveries Motivated by Fermi-LAT\n\nThe successful VHE discovery observations by VERITAS of three blazars was motivated primarily by results from the first year of LAT data taking. In particular, the VHE detections of PKS 1424+240 [21] and 1ES0502+675 [22] were the result of VERITAS observations triggered by the inclusion of these objects in the Fermi-LAT Bright AGN List [13]. The former is only the third IBL known to emit VHE gammarays, and the latter is the most distant BL Lac object\n\n( z = 0 . 341) detected in the VHE band. In addition, VERJ0521+211, likely associated with the radio-loud AGN RGBJ0521.8+2112, was detected by VERTAS in ∼ 4 h of observations in October 2009 [23]. These observations were motivated by its identification as a > 30 GeV γ -ray source in the public Fermi-LAT data. Its VHE flux is 5% of the Crab Nebula flux, placing it among the brightest VHE blazars detected in recent years. VERITAS later observed even brighter VHE flaring from VERJ0521+211 in November 2009 [24], leading to deeper VHE observations.\n\n## 6. 
Blazars Upper Limits", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0770.pdf" - }, - { - "text": "Note: The detectmdisk command does not return any response.\n\nIf the zoning was implemented correctly, any new WWPNs are discovered by the Storwize V7000 system after running the detectmdisk command.\n\n - 2. List the candidate WWPNs and identify the WWPNs belonging to the new host, as shown in Example 8-15.\n\nExample 8-15 Available WWPNs\n\n```\nIBM\\_Storwize:ITSO-V7000:superuser>lsfcportcandidate fc\\_WWPN 2100000E1E09E3E9 2100000E1E30E5E8 2100000E1E30E60F 2100000E1EC2E5A2 2100000E1E30E597 2100000E1E30E5EC\n```", - "page_start": 395, - "page_end": 395, - "source_file": "sg247938.pdf" - }, - { - "text": "```\nIBM\\_Storwize:ITSO-V7000:superuser>lsfcportcandidate fc\\_WWPN 2100000E1E09E3E9 2100000E1E30E5E8 2100000E1E30E60F 2100000E1EC2E5A2 2100000E1E30E597 2100000E1E30E5EC\n```\n\n - 3. Run the mkhost command with the required parameters, as shown in Example 8-16.\n\nExample 8-16 Host creation\n\n```\nIBM\\_Storwize:ITSO-V7000:superuser> mkhost -name ITSO-VMHOST-03 -fcwwpn 2100000E1E30E597:2100000E1E30E5EC Host, id [3], successfully created IBM\\_Storwize:ITSO-V7000:superuser>\n```\n\n## Creating iSCSI hosts\n\nBefore you create an iSCSI host in Storwize V7000, the iSCSI qualified name (IQN) address of the host must be known. 
See your host operating system-specific documentation to find the IQN of the host.\n\nCreate a host by completing the following steps:", - "page_start": 395, - "page_end": 395, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed7_cc4.pdf", - "query": "For which language have been introduced the ActiveInference.jl library ?", - "target_page": 1, - "target_passage": " We introduce a new software package for the Julia programming language, the library ActiveInference.jl.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "\n\n\n\nArticle\n\n## Introducing ActiveInference.jl : A Julia Library for Simulation and Parameter Estimation with Active Inference Models\n\n\n\nSamuel William Nehrer 1,† , Jonathan Ehrenreich Laursen 1,† , Conor Heins 2,3, * , Karl Friston 3,4 ,\n\nChristoph Mathys 5 and Peter Thestrup Waade 5\n\n- 1 School of Culture and Communication, Aarhus University, 8000 Aarhus, Denmark; 202204724@post.au.dk (S.W.N.); 202204836@post.au.dk (J.E.L.)\n- 2 Department of Collective Behaviour, Max Planck Institute of Animal Behavior, D-78457 Konstanz, Germany\n- 3 VERSES Research Lab., Los Angeles, CA 90016, USA; k.friston@ucl.ac.uk\n- 4 Queen Square Institute of Neurology, University College London, London WC1N 3BG, UK\n- 5 Interacting Minds Centre, Aarhus University, 8000 Aarhus, Denmark; chmathys@cas.au.dk (C.M.); ptw@cas.au.dk (P.T.W.)\n- * Correspondence: cheins@ab.mpg.de\n- † These authors contributed equally to this work.\n\nAbstract: We introduce a new software package for the Julia programming language, the library ActiveInference.jl . To make active inference agents with Partially Observable Markov Decision Process (POMDP) generative models available to the growing research community using Julia, we re-implemented the pymdp library for Python. 
ActiveInference.jl is compatible with cutting-edge Julia libraries designed for cognitive and behavioural modelling, as it is used in computational psychiatry, cognitive science and neuroscience. This means that POMDP active inference models can now be easily fit to empirically observed behaviour using sampling, as well as variational methods. In this article, we show how ActiveInference.jl makes building POMDP active inference models straightforward, and how it enables researchers to use them for simulation, as well as fitting them to data or performing a model comparison.\n\nKeywords: active inference; free energy principle; predictive processing; Markov decision process; cognitive modelling; Julia\n\nPACS: 87.15.Aa\n\nMSC: 91-08\n\nJEL Classification: C63\n\n## 1. Introduction\n\nWe introduce a novel software library for Julia, ActiveInference , which lets users produce the simulated behaviour of agents and their internal belief states with active inference (AIF) models, as well as fit such models to empirically observed behaviour. AIF [1-3] is a generally applicable formal framework for understanding and simulating intelligent behaviour that is based in neurobiology and first principles from statistical physics [4-8]. AIF treats action and perception as unified under a joint imperative: to minimise the variational free energy ( VFE ), which quantifies how well the agent's internal generative model explains incoming sensory observations. It is an upper bound on the the surprise from sensory observations, making AIF formally related to prediction error\n\n\n\nAcademic Editor: Astero Provata\n\nReceived: 25 October 2024 Revised: 2 January 2025 Accepted: 7 January 2025\n\nPublished: 12 January 2025\n\nCitation: Nehrer, S.W.; Ehrenreich Laursen, J.; Heins, C.; Friston, K.; Mathys, C.; Thestrup Waade, P. Introducing ActiveInference.jl : A Julia Library for Simulation and Parameter Estimation with Active Inference Models. Entropy 2025 , 27 , 62. 
https://doi.org/10.3390/e27010062\n\nCopyright: ©2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/ licenses/by/4.0/).", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "Θ is then described by a Dirichlet distribution parametrised by a set of concentration parameters θ :\n\np ( Θ ) = Dir ( Θ | θ ) (19)\n\nThe concentration parameter of a Dirichlet distribution is essentially a non-negative count of how many times the given category (be it a type of observation or state transition) has occurred. The distribution of concentration parameter counts will determine the shape of the estimated categorical probability distribution, while the scale of the concentration parameters will determine the certainty per precision of the belief. Updating beliefs about Θ (the parameters in the matrices) then corresponds to updating these concentration parameters θ with the following update equation:\n\nθ t + 1 = ω ∗ θ t + η ∗ χ t (20)\n\nThe updated value for the concentration parameter ( θ t + 1 ) is found by adding the previous concentration parameter θ t multiplied by a forgetting rate ω to the observed data count χ (either the observation in the case of A learning, or the inferred state or state transition for other matrices) multiplied by a learning rate η . With this relatively simple update equation-which, in essence, amounts to just counting the occurrences of categories-an AIF agent can update its beliefs about the various matrices it uses to make inferences about environmental states. For more details on parameter learning with POMDPs, see [23,33,52].\n\n## 3. Using ActiveInference.jl\n\nIn this section, we provide an overview of the various functions a user will need to operate ActiveInference . 
This includes functionalities for creating POMDP agents, for simulating behaviour and for fitting the models to data. In the next section, we demonstrate how to use the package on a concrete worked example. ActiveInference is under continual development, and the newest version of the package, including documentation for how to use it, can be found at github.com/ilabcode/ActiveInference.jl.\n\n## 3.1. Creating and Using a POMDP\n\nThe general structure of ActiveInference.jl is heavily inspired by pymdp [23], a Python library for implementing simulations of AIF in discrete state spaces. Those already acquainted with pymdp should find the syntax here familiar. ActiveInference can be installed as normal from the official Julia General Registry using the Julia's native package manager Pkg:\n\nIt can then be loaded into the current project environment:\n\n☎\n\n✆\n\n☎\n\nCentral to the package is the AIF object. This is a structure containing all the components of the generative model, as well as the dynamic belief states and the various settings needed to perform AIF, and is used in conjunction with most of the high-level functions of the package. An AIF object can be created with the init\\_aif function, which takes as arguments the components of the generative model and a dictionary of various settings and parameters:\n\n✆\n\n```\n✞ using Pkg Pkg.add(ActiveInference) ✝\n```\n\n```\n✞ using ActiveInference ✝\n```", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "In this paper, we introduce ActiveInference.jl , a new software library for Julia [28] that aims to provide easy-to-use tools for model fitting with AIF models and to introduce AIF to the growing community of researchers using Julia for computational psychiatry and cognitive modelling. Julia is a free and open-source high-level programming language that retains an easy user interface reminiscent of that in MATLAB and Python. 
Simultaneously,", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "| Surrogate Target | R MF | ˆ R SW R CLS | R LLM | R SW | ˆ R MF R CLS | R LLM | R SW | ˆ R CLS S FM | R LLM | R SW | ˆ R LLM R MF | R CLS |\n|--------------------|------------|----------------|------------|------------|----------------|------------|------------|----------------|------------|------------|----------------|------------|\n| LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 | LLM pair 1 |\n| MT-Bench | - 0 . 1 | - 0 . 1 | 0 . 0 | - 0 . 1 | - 0 . 1 | 0 . 0 | - 0 . 1 | 0 . 0 | 0 . 1 | - 0 . 2 | - 0 . 1 | - 0 . 2 |\n| MMLU | - 0 . 1 | 0 . 3 | - 0 . 2 | 4 . 8 | 1 . 0 | 0 . 5 | 2 . 5 | - 1 . 3 | - 0 . 8 | 2 . 6 | - 0 . 9 | 0 . 3 |\n| GSM8K | 14 . 9 | 9 . 6 | 15 . 2 | 18 . 6 | 13 . 8 | 14 . 7 | 13 . 4 | 6 . 8 | 12 . 6 | 13 . 6 | 11 . 3 | 10 . 4 |\n| LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 | LLM pair 2 |\n| MT-Bench | - 0 . 1 | - 0 . 1 | - 0 . 1 | - 0 . 2 | - 0 . 2 | - 0 . 2 | - 0 . 1 | - 0 . 1 | 0 . 0 | - 0 . 2 | - 0 . 2 | - 0 . 2 |\n| MMLU | 1 . 6 | 4 . 0 | 4 . 2 | 7 . 9 | 5 . 0 | 4 . 4 | 5 . 0 | - 2 . 9 | 3 . 2 | 5 . 2 | - 0 . 9 | 3 . 8 |\n| GSM8K | 13 . 6 | 8 . 7 | 18 . 5 | 18 . 9 | 14 . 4 | 18 . 3 | 13 . 1 | 4 . 0 | 15 . 5 | 11 . 3 | 8 . 4 | 10 . 8 |\n| LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 | LLM pair 3 |\n| MT-Bench | 0 . 2 | 0 . 0 | 0 . 1 | - 0 . 1 | - 0 . 1 | 0 . 0 | 0 . 0 | 0 . 2 | 0 . 2 | - 0 . 1 | 0 . 1 | - 0 . 1 |\n| MMLU | 5 . 0 | 6 . 8 | 5 . 8 | 11 . 3 | 9 . 1 | 4 . 7 | 8 . 1 | - 3 . 7 | 4 . 8 | 7 . 8 | 0 . 1 | 7 . 2 |\n| GSM8K | 20 . 5 | 13 . 4 | 20 . 9 | 24 . 3 | 18 . 6 | 21 . 6 | 17 . 9 | 11 . 2 | 18 . 
9 | 16 . 7 | 15 . 2 | 14 . 2 |\n\nTable 7: Differences between average benchmark specific scores of responses to the original and confounded queries, when the confounder gadget was generated for a different surrogate router than the target (black-box setting) for three LLM pairs. Positive values indicate a higher average score for responses to the confounded queries; higher values are better for the attacker. Results are averaged across gadgets. Standard errors were omitted for readability and are on average 0 . 1 , 0 . 8 , and 1 . 8 for MT-bench, MMLU and GSM8K, respectively. Aligned with the white-box setting, results show almost no decrease in performance, and improvement when there is a performance gap for the LLM pair.\n\nResults for LLM pair 4. As discussed in Section 5, we replace the strong model that was used by Ong et al. [47], GPT-41106-preview (rank 28 in the Chatbot Arena leaderboard [1, 21]), with the open-sourced Llama-3.1-8B (rank 58) to reduce the costs of our extensive set of evaluations. In this section we perform a smaller-scale evaluation of the quality-enhancing attack performance when using GPT as the strong model, i.e., LLM pair 4. We evaluate this setting using three of the n = 10 confounder gadgets for each router.\n\n## 7 Rerouting Commercial Routers", - "page_start": 11, - "page_end": 11, - "source_file": "arxiv1.pdf" - }, - { - "text": "Julia uses its 'just-in-time' (JIT) compilations via the LLVM framework to approach the speed of languages like C without relying on external compilers [36]. Julia is also natively auto-differentiable, which means it can solve what is called the two-language problem (i.e., that high-level languages often have to rely on lower-level languages, either for performance or for auto-differentiability; this is the case with standard tools for cognitive modelling, where languages like R [37] must rely on external languages like STAN [38] for Bayesian model fitting). 
This means that ActiveInference , in conjunction with Turing [39], Julia's powerful library for Bayesian model fitting, and its newly developed extension for behavioural modelling, ActionModels , makes it possible to use cutting-edge Markov Chain Monte Carlo [40] methods, as well as variational methods [35], for Bayesian model fitting with AIF. Crucially, this allows researchers to not only simulate AIF in a fast programming language, but to also fit them to empirical behaviour, as is performed in cognitive modelling and computational psychiatry. Importantly, this also places AIF models in an ecosystem of other models for computational psychiatry so that it can easily be compared with models, like Hierarchical Gaussian Filters [41], and reinforcement learning models, like the classic Rescorla-Wagner model [42]. As part of making ActiveInference.jl available to the scientific community, and to the larger software ecosystem within computational psychiatry, it is implemented as part of the Translational Algorithms for Psychiatry-Advancing Science (TAPAS) ecosystem [43].\n\nIn the next section, we provide a conceptual and formal introduction to AIF, particularly in the context of using POMDP generative models. In Section 3, we demonstrate how to use the package in practice, both for simulation and parameter estimation. In Section 4, we give a fully worked example of how ActiveInference can be used with a concrete simulated dataset. Finally, we discuss potential applications and future directions for developing the package.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "On Reading-comprehension. arXiv preprint arXiv:1912.06638 .\n\nShubham Toshniwal, Haoyue Shi, Bowen Shi, Lingyu Gao, Karen Livescu, and Kevin Gimpel. 2020. A Cross-Task Analysis of Text Span Representations. In Proceedings of the 5th Workshop on Representation Learning for NLP , pages 166-176, Online. 
Association for Computational Linguistics.\n\nHenry Tsai, Jason Riesa, Melvin Johnson, Naveen Arivazhagan, Xin Li, and Amelia Archer. 2019. Small and Practical BERT Models for Sequence Labeling. arXiv preprint arXiv:1909.00100 .\n\nIulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation. arXiv preprint arXiv:1908.08962 .\n\nMarten van Schijndel, Aaron Mueller, and Tal Linzen. 2019. Quantity doesn't buy quality syntax with neural language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) , pages 5831-5837, Hong Kong, China. Association for Computational Linguistics.\n\nAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in neural information processing systems , pages 59986008.\n\nJesse Vig. 2019. Visualizing Attention in Transformer-Based Language Representation Models. arXiv:1904.02679 [cs, stat] .\n\nJesse Vig and Yonatan Belinkov. 2019. Analyzing the Structure of Attention in a Transformer Language Model. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP , pages 63-76, Florence, Italy. Association for Computational Linguistics.\n\nDavid Vilares, Michalina Strzyz, Anders Søgaard, and Carlos Gómez-Rodríguez. 2020. Parsing as pretraining. In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20) .\n\nElena Voita, Rico Sennrich, and Ivan Titov. 2019a. The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) , pages 4387-4397.\n\nElena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019b. Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned. arXiv preprint arXiv:1905.09418 .\n\nElena Voita and Ivan Titov. 2020. InformationTheoretic Probing with Minimum Description Length. arXiv:2003.12298 [cs] .\n\nEric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019a. Universal Adversarial Triggers for Attacking and Analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP) , pages 2153-2162, Hong Kong, China. Association for Computational Linguistics.\n\nEric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019b. Do NLP Models Know Numbers? Probing Numeracy in Embeddings. arXiv preprint arXiv:1909.07940 .\n\nAlex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP , pages 353-355, Brussels, Belgium. Association for Computational Linguistics.\n\nRuize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2020a. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. arXiv:2002.01808 [cs] .\n\nWei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Liwei Peng, and Luo Si. 2019a. 
StructBERT: Incorporating Language Structures into", - "page_start": 20, - "page_end": 20, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "in [21, 93] and direct resources away from efforts that would facilitate long-term progress towards natural language understanding, without using unfathomable training data.\n\nFurthermore, the tendency of human interlocutors to impute meaning where there is none can mislead both NLP researchers and the general public into taking synthetic text as meaningful. Combined with the ability of LMs to pick up on both subtle biases and overtly abusive language patterns in training data, this leads to risks of harms, including encountering derogatory language and experiencing discrimination at the hands of others who reproduce racist, sexist, ableist, extremist or other harmful ideologies reinforced through interactions with synthetic language. We explore these potential harms in §6 and potential paths forward in §7.\n\nWe hope that a critical overview of the risks of relying on everincreasing size of LMs as the primary driver of increased performance of language technology can facilitate a reallocation of efforts towards approaches that avoid some of these risks while still reaping the benefits of improvements to language technology.\n\n## 2 BACKGROUND\n\nSimilar to [14], we understand the term language model (LM) to refer to systems which are trained on string prediction tasks: that is, predicting the likelihood of a token (character, word or string) given either its preceding context or (in bidirectional and masked LMs) its surrounding context. Such systems are unsupervised and when deployed, take a text as input, commonly outputting scores or string predictions. Initially proposed by Shannon in 1949 [117], some of the earliest implemented LMs date to the early 1980s and were used as components in systems for automatic speech recognition (ASR), machine translation (MT), document classification, and more [111]. 
In this section, we provide a brief overview of the general trend of language modeling in recent years. For a more in-depth survey of pretrained LMs, see [105].\n\nBefore neural models, n-gram models also used large amounts of data [20, 87]. In addition to ASR, these large n-gram models of English were developed in the context of machine translation from another source language with far fewer direct translation examples. For example, [20] developed an n-gram model for English with a total of 1.8T n-grams and noted steady improvements in BLEU score on the test set of 1797 Arabic translations as the training data was increased from 13M tokens.\n\nThe next big step was the move towards using pretrained representations of the distribution of words (called word embeddings ) in other (supervised) NLP tasks. These word vectors came from systems such as word2vec [85] and GloVe [98] and later LSTM models such as context2vec [82] and ELMo [99] and supported state of the art performance on question answering, textual entailment, semantic role labeling (SRL), coreference resolution, named entity recognition (NER), and sentiment analysis, at first in English and later for other languages as well. While training the word embeddings required a (relatively) large amount of data, it reduced the amount of labeled data necessary for training on the various supervised tasks. For example, [99] showed that a model trained with ELMo reduced the necessary amount of training data needed to achieve similar results on SRL compared to models without, as shown in one instance where a model trained with ELMo reached\n\nTable 1: Overview of recent large language models", - "page_start": 1, - "page_end": 1, - "source_file": "arxiv5_ccby4license.pdf" - }, - { - "text": "In recent years there has been an accelerated interest in using LLMs to automate clinical tasks in an effort to unburden physicians and reduce burnout. 
22 Computer-generated text within clinical notes using natural language processing (NLP) have been overall shown to improve note completion rates, physician satisfaction, and patient outcomes. 23 Since 2018, NLP has made rapid advancements in health care with the discovery of the transformer model architecture, the building block of large language models (LLMs). LLMs can automate workflows such as discharge summaries, 24 radiology reports, 25 patient messaging, 26 after-visit summaries, 27 and ambient dictation 28 with various levels of perceived quality in each workflow. 29 LLMs are particularly effective at summarizing large unstructured clinical datasets, such as ED patient medical records. 30 Acommonconcern of LLMs is their ability to hallucinate data, or LLMs generating output text that is not factually consistent with the original source content. 31 Much work has been done in health care to reduce hallucinations through building larger-parameter models trained on trillions of datasets, and then instruction finetuning the LLM on smaller, well-curated datasets. 32,33 LLMs can also be designed with explainability by citing inferred content back to the reference source notes. 34 For short-context length notes, using few-shot prompt engineering approaches with large language models like GPT-4 can produce summaries that outperform standard physician documentation in completeness and error frequency. 35 However, factual inconsistencies in the summaries produced by LLMs increase as the context length increases, 36 and for medium- to long-context tasks, fine-tuning an open-source model has been shown to perform better than a prompt-learning approach. 37 In prior work, members of this study team demonstrated 62% of LLM-generated hospital course summaries met standard-of-care for a formal inpatient discharge summary. 
24 However, recently published clinical\n\n\n\n(Reprinted)", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed8.pdf" - }, - { - "text": "## 12.2.2 Horizontal scalability: Library server\n\nEven though Content Manager OnDemand allows a single library server for each instance, this library server can be scaled horizontally. The library server is scaled horizontally by using one or both of the following methods:", - "page_start": 311, - "page_end": 311, - "source_file": "sg246915.pdf" - }, - { - "text": "- 31. van de Laar, T.; ¸Senöz, ˙ I.; Özçelikkale, A.; Wymeersch, H. Chance-Constrained Active Inference. Neural Comput. 2021 , 33 , 2710-2735. [CrossRef]\n - 32. Busemeyer, J.R.; Diederich, A. Cognitive Modeling ; SAGE: Thousand Oaks, CA, USA, 2010; Google-Books-ID: R7KDF35g5LQC.\n - 33. Smith, R.; Friston, K.J.; Whyte, C.J. A step-by-step tutorial on active inference and its application to empirical data. J. Math. Psychol. 2022 , 107 , 102632. [CrossRef] [PubMed]\n - 34. Lee, M.D.; Wagenmakers, E.J. Bayesian Cognitive Modeling: A Practical Course , 1st ed.; Cambridge University Press: Cambridge, UK, 2014. [CrossRef]\n - 35. Blei, D.M.; Kucukelbir, A.; McAuliffe, J.D. Variational Inference: A Review for Statisticians. J. Am. Stat. Assoc. 2017 , 112 , 859-877. [CrossRef]\n - 36. Lattner, C.; Adve, V. LLVM: A compilation framework for lifelong program analysis & transformation. In Proceedings of the International Symposium on Code Generation and Optimization, 2004, CGO 2004, Palo Alto, CA, USA, 20-24 March 2004; pp. 75-86. [CrossRef]\n - 37. R Core Team. R: A Language and Environment for Statistical Computing ; R Foundation for Statistical Computing: Vienna, Austria, 2021.\n - 38. Carpenter, B.; Gelman, A.; Hoffman, M.D.; Lee, D.; Goodrich, B.; Betancourt, M.; Brubaker, M.; Guo, J.; Li, P.; Riddell, A. Stan: A Probabilistic Programming Language. J. Stat. Softw. 2017 , 76 , 1-32. [CrossRef] [PubMed]\n - 39. Ge, H.; Xu, K.; Ghahramani, Z. 
Turing: A Language for Flexible Probabilistic Inference. In Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics (PMLR), Playa Blanca, Lanzarote, 9-11 April 2018; pp. 1682-1690. ISSN: 2640-3498.", - "page_start": 30, - "page_end": 30, - "source_file": "pubmed7_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed7_cc4.pdf", - "query": "To which system does the AIF apply ?", - "target_page": 2, - "target_passage": "AIF was argued to be applicable to any self organising system that actively maintains a stable boundary that defines its integrity [10], a broad category that includes cells and plants [11], as well as humans [2] and even collectives [12].", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "\n\n## Artificial intelligence\n\nArtificial intelligence ( AI ), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. [1] Such machines may be called AIs.\n\nHigh-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). 
However, many AI applications are not perceived as AI: \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.\" [2][3]\n\nVarious subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics. [a] General intelligence-the ability to complete any task performed by a human on an at least equal level-is among the field's long-term goals. [4] To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. [b] AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields. [5]\n\nArtificial intelligence was founded as an academic discipline in 1956, [6] and the field went through multiple cycles of optimism throughout its history, [7][8] followed by periods of disappointment and loss of funding, known as AI winters. [9][10] Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques. [11] This growth accelerated further after 2017 with the transformer architecture, [12] and by the early 2020s many billions of dollars were being invested in AI and the field experienced rapid ongoing progress in what has become known as the AI boom. 
The emergence of advanced generative AI in the midst of the AI boom and its ability to create and modify content exposed several unintended consequences and harms in the present and raised concerns about the risks of AI and its long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.\n\n## Goals", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Some authors have suggested in practice, that the definition of AI is vague and difficult to define, with contention as to whether classical algorithms should be categorised as AI, [367] with many companies during the early 2020s AI boom using the term as a marketing buzzword, often even if they did \"not actually use AI in a material way\". [368]\n\n## Evaluating approaches to AI\n\nNo established unifying theory or paradigm has guided AI research for most of its history. [aa] The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term \"artificial intelligence\" to mean \"machine learning with neural networks\"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers.\n\n## Symbolic AI and its limits\n\nSymbolic AI (or \"GOFAI\") [370] simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. They were highly successful at \"intelligent\" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: \"A physical symbol system has the necessary and sufficient means of general intelligent action.\" [371]\n\nHowever, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. 
Moravec's paradox is the discovery that high-level \"intelligent\" tasks were easy for AI, but low level \"instinctive\" tasks were extremely difficult. [372] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a \"feel\" for the situation, rather than explicit symbolic knowledge. [373] Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him. [ab][16]\n\nThe issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence, [375][376] in part because subsymbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.\n\n## Neat vs. scruffy\n\n\"Neats\" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). \"Scruffies\" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s, [377] but eventually was seen as irrelevant. Modern AI has elements of both.\n\n## Soft vs. hard computing", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Artificial intelligent (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. 
AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks. [175][176][177]\n\nVincent van Gogh in watercolour created by generative AI software\n\n\n\n## Other industry-specific tasks\n\nThere are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated \"AI\" in some offerings or processes. [178] A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management.\n\nAI applications for evacuation and disaster management are growing. AI has been used to investigate if and how people evacuated in large scale and small scale evacuations using historical data from GPS, videos or social media. Further, AI can provide real time information on the real time evacuation conditions. [179][180][181]\n\nIn agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increasing yield. Agronomists use AI to conduct research and development. 
AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water.\n\nArtificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for \"classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights.\" For example, it is used for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. Additionally, it could be used for activities in space, such as space exploration, including the analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers. [300]\n\nThe UK AI Safety Institute released in 2024 a testing toolset called 'Inspect' for AI safety evaluations available under a MIT open-source licence which is freely available on GitHub and can be improved with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities. [301]\n\n## Regulation\n\nThe regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms. 
[302] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. [303] According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone. [304][305] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI. [306] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and\n\nThe first global AI Safety Summit was held in 2023 with a declaration calling for international cooperation.\n\n\n\nVietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia. [306] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. [306] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI. [307] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years. [308] In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, governments officials and academics. [309] In 2024, the Council of Europe created the first international legally binding treaty on AI, called the \"Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law\". It was adopted by the European Union, the United States, the United Kingdom, and other signatories. 
[310]\n\nIn a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that \"products and services using AI have more benefits than drawbacks\". [304] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. [311] In a 2023 Fox News poll, 35% of Americans thought it \"very important\", and an additional 41% thought it \"somewhat important\", for the federal government to regulate AI, versus 13% responding \"not very important\" and 8% responding \"not at all important\". [312][313]", - "page_start": 20, - "page_end": 20, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Alternatively, dedicated models for mathematical problem solving with higher precision for the outcome including proof of theorems have been developed such as Alpha Tensor , Alpha Geometry and Alpha Proof all from Google DeepMind, [157] Llemma from eleuther [158] or Julius . [159]\n\nWhen natural language is used to describe mathematical problems, converters transform such prompts into a formal language such as Lean to define mathematical tasks.\n\nSome models have been developed to solve challenging problems and reach good results in benchmark tests, others to serve as educational tools in mathematics. [160]\n\n## Finance\n\nFinance is one of the fastest growing sectors where applied AI tools are being deployed: from retail online banking to investment advice and insurance, where automated \"robot advisers\" have been in use for some years. 
[161]\n\nWorld Pensions experts like Nicolas Firzli insist it may be too early to see the emergence of highly innovative AI-informed financial products and services: \"the deployment of AI tools will simply further automatise things: destroying tens of thousands of jobs in banking, financial planning, and pension advice in the process, but I'm not sure it will unleash a new wave of [e.g., sophisticated] pension innovation.\" [162]\n\n## Military\n\nVarious countries are deploying AI military applications. [163] The main applications enhance command and control, communications, sensors, integration and interoperability. [164] Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles. [163] AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, coordination and deconfliction of distributed Joint Fires between networked combat vehicles involving manned and unmanned teams. [164]\n\nAI has been used in military operations in Iraq, Syria, Israel and Ukraine. [163][165][166][167]\n\n## Generative AI\n\nIn the early 2020s, generative AI gained widespread prominence. GenAI is AI capable of generating text, images, videos, or other data using generative models, [168][169] often in response to prompts. [170][171]\n\nIn March 2023, 58% of U.S. adults had heard about ChatGPT and 14% had tried it. [172] The increasing realism and ease-of-use of AI-based text-to-image generators such as Midjourney, DALL-E, and Stable Diffusion sparked a trend of viral AI-generated photos. Widespread attention was gained by a fake photo of Pope Francis wearing a white puffer coat, the fictional arrest of Donald Trump, and a hoax of an attack on the Pentagon, as well as the usage in professional creative arts. 
[173][174]\n\n## Agents", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia3.pdf" - }, - { - "text": "show that even a computer capable of perfectly simulating human behavior would not have a mind. [387]\n\n## AI welfare and rights\n\nIt is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree. [388] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals. [389][390] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights. [389] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society. [391]\n\nIn 2017, the European Union considered granting \"electronic personhood\" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities. [392] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part to society on their own. [393][394]\n\nProgress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited. [390][389]\n\n## Future\n\n## Superintelligence and the singularity\n\nA superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. 
[379] If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an \"intelligence explosion\" and Vernor Vinge called a \"singularity\". [395]\n\nHowever, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do. [396]\n\n## Transhumanism\n\nRobot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger. [397]", - "page_start": 26, - "page_end": 26, - "source_file": "wikipedia3.pdf" - }, - { - "text": "\n\n## ISSUE\n\nDecember 2024\n\n## CATEGORIES\n\nTechnology & Cybersecurity\n\nEditor's Picks Finance - Personal Home - Interior\n\n\n\n## The top AI-powered tech trends in 2025\n\n\n\n(NC) As we look ahead to 2025, artificial intelligence (AI) continues to revolutionize our lives. From enhancing our daily routines to transforming entire industries, AI's impact is undeniable.\n\nThese five innovations are set to shape our future, offering unprecedented convenience, efficiency and personalization.\n\n## AI-powered computing\n\nAI-powered computing, such as Intel-powered laptops - or AI PC - is at the forefront of technological advancement. But what, exactly, is an AI PC? They're computers that have AI built into their processors - also known as the brain of the computer - which optimizes performance, enhances security and provides a more personalized experience as they learn from your usage patterns. 
For consumers, this means faster, smarter and more secure computing tailored to your individual needs.\n\n## Smart home automation\n\nSmart home automation has been around for a while, but AI is taking it to the next level. Imagine a home that not only follows your commands, but also anticipates your needs. Enhanced smart home systems can learn your daily routines and adjust settings accordingly, from lighting and temperature to security and entertainment, making your home smarter and more responsive than ever before.\n\n## Health and wellness\n\nThe health-care industry is seeing significant transformation. AI-driven health and wellness applications can monitor vital signs, predict potential health issues, and even provide personalized fitness and nutrition plans. Wearable devices equipped with this technology can offer real-time health insights, helping individuals make informed decisions about their well-being.\n\n## Financial services\n\nAI is also making waves in the financial sector, offering smarter and more secure ways to manage money. From AI-driven investment platforms that provide personalized financial advice to fraud detection systems that protect against cyber threats, AI can analyze vast amounts of data to identify trends and make more informed financial decisions.\n\n## Enhanced education\n\nIn education, enhanced learning tools provide personalized learning experiences that adapt to each student's strengths and weaknesses. This technology can offer real-time feedback, helping students improve their skills more effectively. 
Additionally, AI can assist educators by automating administrative tasks and providing insights into student performance, allowing for more focused and effective teaching.\n\nLearn more at intel.com/aipc.\n\nwww.newscanada.com\n\nWord Count: 346\n\n\n\n\n\n\n\n\n\nRADIO\n\n\n\n\n\n\n\n\n\nEN", - "page_start": 0, - "page_end": 0, - "source_file": "news4.pdf" - }, - { - "text": "A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision. [o] Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction. [248] Even when used in conventional warfare, it is unlikely that they will be unable to reliably choose targets and could potentially kill an innocent person. [248] In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons, however the United States and others disagreed. [249] By 2015, over fifty countries were reported to be researching battlefield robots. [250]\n\nAI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware. [251] All these technologies have been available since 2020 or earlier-AI facial recognition systems are already being used for mass surveillance in China. 
[252][253]\n\nThere many other ways that AI is expected to help bad actors, some of which can not be foreseen. For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours. [254]\n\n## Technological unemployment\n\nEconomists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment. [255]\n\nIn the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that \"we're in uncharted territory\" with AI. [256] A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in longterm unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. [257] Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at \"high risk\" of potential automation, while an OECD report classified only 9% of U.S. jobs as \"high risk\". [p][259] The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies. [255] In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence. [260][261]\n\nUnlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that \"the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution\" is \"worth taking seriously\". [262] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. 
[263]", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- 160. Alex McFarland: 7 Best AI for Math Tools. (https://www.unite.ai/best-ai-for-math-tools/) Archived (https://web.archive.org/web/20240911125615/https://www.unite.ai/best-ai-for-mat h-tools/) 11 September 2024 at the Wayback Machine unite.ai. Retrieved 2024-08-07\n - 161. Matthew Finio & Amanda Downie: IBM Think 2024 Primer, \"What is Artificial Intelligence (AI) in Finance?\" 8 Dec. 2023\n - 162. M. Nicolas, J. Firzli: Pensions Age/European Pensions magazine, \"Artificial Intelligence: Ask the Industry\" May June 2024 https://videovoice.org/ai-in-finance-innovationentrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligence-act-wont-work-asintended/ Archived (https://web.archive.org/web/20240911125502/https://videovoice.org/ai-i n-finance-innovation-entrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligenceact-wont-work-as-intended/) 11 September 2024 at the Wayback Machine.\n - 163. Congressional Research Service (2019). Artificial Intelligence and National Security (https://f as.org/sgp/crs/natsec/R45178.pdf) (PDF). Washington, DC: Congressional Research Service.PD-notice\n - 164. Slyusar, Vadym (2019). Artificial intelligence as the basis of future control networks (Preprint). doi:10.13140/RG.2.2.30247.50087 (https://doi.org/10.13140%2FRG.2.2.30247.5 0087).\n - 165. Iraqi, Amjad (3 April 2024). \" 'Lavender': The AI machine directing Israel's bombing spree in Gaza\" (https://www.972mag.com/lavender-ai-israeli-army-gaza/). +972 Magazine . Retrieved 6 April 2024.\n - 166. Davies, Harry; McKernan, Bethan; Sabbagh, Dan (1 December 2023). \" 'The Gospel': how Israel uses AI to select bombing targets in Gaza\" (https://www.theguardian.com/world/2023/ dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets). The Guardian . Retrieved 4 December 2023.\n - 167. Marti, J Werner (10 August 2024). 
\"Drohnen haben den Krieg in der Ukraine revolutioniert, doch sie sind empfindlich auf Störsender - deshalb sollen sie jetzt autonom operieren\" (http s://www.nzz.ch/international/die-ukraine-setzt-auf-drohnen-die-autonom-navigieren-und-toet en-koennen-ld.1838731). Neue Zürcher Zeitung (in German). Retrieved 10 August 2024.\n - 168. Newsom, Gavin; Weber, Shirley N. (6 September 2023). \"Executive Order N-12-23\" (https:// www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-\\_-GGN-Signed.pdf) (PDF). Executive Department, State of California. Archived (https://web.archive.org/web/202402212 22035/https://www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-\\_-GGN-Signed.pd f) (PDF) from the original on 21 February 2024. Retrieved 7 September 2023.\n - 169. Pinaya, Walter H. L.; Graham, Mark S.; Kerfoot, Eric; Tudosiu, Petru-Daniel; Dafflon, Jessica; Fernandez, Virginia; Sanchez, Pedro; Wolleb, Julia; da Costa, Pedro F.; Patel, Ashay (2023). \"Generative AI for Medical Imaging: extending the MONAI Framework\". arXiv:2307.15208 (https://arxiv.org/abs/2307.15208) [eess.IV (https://arxiv.org/archive/eess.I V)].\n - 170. Griffith, Erin; Metz, Cade (27 January 2023). \"Anthropic Said to Be Closing In on $300 Million in New A.I. Funding\" (https://www.nytimes.com/2023/01/27/technology/anthropic-ai-fu nding.html). The New York Times . Archived (https://web.archive.org/web/20231209074235/h ttps://www.nytimes.com/2023/01/27/technology/anthropic-ai-funding.html) from the original on 9 December 2023. Retrieved 14 March 2023.\n - 171. Lanxon, Nate; Bass, Dina; Davalos, Jackie (10 March 2023). \"A Cheat Sheet to AI Buzzwords and Their Meanings\" (https://news.bloomberglaw.com/tech-and-telecom-law/a-c heat-sheet-to-ai-buzzwords-and-their-meanings-quicktake). Bloomberg News . Archived (http s://web.archive.org/web/20231117140835/https://news.bloomberglaw.com/tech-and-telecom -law/a-cheat-sheet-to-ai-buzzwords-and-their-meanings-quicktake) from the original on 17 November 2023. 
Retrieved 14 March 2023.", - "page_start": 38, - "page_end": 38, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Gleick, James, \"The Fate of Free Will\" (review of Kevin J. Mitchell, Free Agents: How Evolution Gave Us Free Will , Princeton University Press, 2023, 333 pp.), The New York Review of Books , vol. LXXI, no. 1 (18 January 2024), pp. 27-28, 30. \"Agency is what distinguishes us from machines. For biological creatures, reason and purpose come from acting in the world and experiencing the consequences. Artificial intelligences - disembodied, strangers to blood, sweat, and tears - have no occasion for that.\" (p. 30.)\n\nHalpern, Sue, \"The Coming Tech Autocracy\" (review of Verity Harding, AI Needs You: How We Can Change AI's Future and Save Our Own , Princeton University Press, 274 pp.; Gary Marcus, Taming Silicon Valley: How We Can Ensure That AI Works for Us , MIT Press, 235 pp.; Daniela Rus and Gregory Mone, The Mind's Mirror: Risk and Reward in the Age of AI , Norton, 280 pp.; Madhumita Murgia, Code Dependent: Living in the Shadow of AI , Henry Holt, 311 pp.), The New York Review of Books , vol. LXXI, no. 17 (7 November 2024), pp. 44-46. \"'We can't realistically expect that those who hope to get rich from AI are going to have the interests of the rest of us close at heart,' ... writes [Gary Marcus]. 'We can't count on governments driven by campaign finance contributions [from tech companies] to push back.'... Marcus details the demands that citizens should make of their governments and the tech companies. They include transparency on how AI systems work; compensation for individuals if their data [are] used to train LLMs (large language model)s and the right to consent to this use; and the ability to hold tech companies liable for the harms they cause by eliminating Section 230, imposing cash penalties, and passing stricter product liability laws... Marcus also suggests... 
that a new, AI-specific federal agency, akin to the FDA, the FCC, or the FTC, might provide the most robust oversight.... [T]he Fordham law professor Chinmayi Sharma... suggests... establish[ing] a professional licensing regime for engineers that would function in a similar way to medical licenses, malpractice suits, and the Hippocratic oath in medicine. 'What if, like doctors,' she asks..., 'AI engineers also vowed to do no harm?'\" (p. 46.)\n\nHenderson, Mark (24 April 2007). \"Human rights for robots? We're getting carried away\" (http:// www.thetimes.co.uk/tto/technology/article1966391.ece). The Times Online . London. Archived (https://web.archive.org/web/20140531104850/http://www.thetimes.co.uk/tto/techn ology/article1966391.ece) from the original on 31 May 2014. Retrieved 31 May 2014.\n\nHughes-Castleberry, Kenna, \"A Murder Mystery Puzzle: The literary puzzle Cain's Jawbone , which has stumped humans for decades, reveals the limitations of natural-languageprocessing algorithms\", Scientific American , vol. 329, no. 4 (November 2023), pp. 81-82. \"This murder mystery competition has revealed that although NLP (natural-language processing) models are capable of incredible feats, their abilities are very much limited by the amount of context they receive. This [...] could cause [difficulties] for researchers who hope to use them to do things such as analyze ancient languages. In some cases, there are few historical records on long-gone civilizations to serve as training data for such a purpose.\" (p. 82.)\n\nImmerwahr, Daniel, \"Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?\", The New Yorker , 20 November 2023, pp. 54-59. \"If by 'deepfakes' we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren't deep, and the deeps aren't fake. [...] 
A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of cartoons, especially smutty ones.\" (p. 59.)\n\nJohnston, John (2008) The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI , MIT Press.", - "page_start": 67, - "page_end": 67, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed7_cc4.pdf", - "query": "What is the definition of POMDP ?", - "target_page": 4, - "target_passage": " The Partially Observable Markov Decision Process is a type of flexible generative model that is widely used in the AIF literature. In discrete time and usually a discrete state space, this model type is parametrised to fit a given task by a set matrices containing probability distributions.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "Alibrary of pre-made canonical POMDP models could be created so that users can easily implement them directly. Alternatives to the fixed-point iteration method for updating posteriors over environmental states could be included, like the marginal message passing algorithm. There are various ways in which the package can be made more computationally efficient, and it could be compared with other software implementations. There are plenty of utility and plotting functions that could be added to the package to make it easier to use and to facilitate integration with the model-fitting packages it relies on; for example, to allow for combining the models with linear regressions to compare parameters values of different populations in a single model. More complex types of POMDP models can also be added, like hierarchical and temporally deep POMDPs. Model structure learning could be considered, where different model structures are compared and chosen between by evaluating their free energies. 
Sophisticated inference, where predictions are also made about changes in one's own beliefs-depending on expected action-dependent observations in the future-could also be implemented [58]. Finally, the package could be extended to other types of generative models than POMDPs, including other universal models, like generalised filtering [17] and Hierarchical Gaussian Filter models [41], as well as custom", - "page_start": 28, - "page_end": 28, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "Θ is then described by a Dirichlet distribution parametrised by a set of concentration parameters θ :\n\np ( Θ ) = Dir ( Θ | θ ) (19)\n\nThe concentration parameter of a Dirichlet distribution is essentially a non-negative count of how many times the given category (be it a type of observation or state transition) has occurred. The distribution of concentration parameter counts will determine the shape of the estimated categorical probability distribution, while the scale of the concentration parameters will determine the certainty per precision of the belief. Updating beliefs about Θ (the parameters in the matrices) then corresponds to updating these concentration parameters θ with the following update equation:\n\nθ t + 1 = ω ∗ θ t + η ∗ χ t (20)\n\nThe updated value for the concentration parameter ( θ t + 1 ) is found by adding the previous concentration parameter θ t multiplied by a forgetting rate ω to the observed data count χ (either the observation in the case of A learning, or the inferred state or state transition for other matrices) multiplied by a learning rate η . With this relatively simple update equation-which, in essence, amounts to just counting the occurrences of categories-an AIF agent can update its beliefs about the various matrices it uses to make inferences about environmental states. For more details on parameter learning with POMDPs, see [23,33,52].\n\n## 3. 
Using ActiveInference.jl\n\nIn this section, we provide an overview of the various functions a user will need to operate ActiveInference . This includes functionalities for creating POMDP agents, for simulating behaviour and for fitting the models to data. In the next section, we demonstrate how to use the package on a concrete worked example. ActiveInference is under continual development, and the newest version of the package, including documentation for how to use it, can be found at github.com/ilabcode/ActiveInference.jl.\n\n## 3.1. Creating and Using a POMDP\n\nThe general structure of ActiveInference.jl is heavily inspired by pymdp [23], a Python library for implementing simulations of AIF in discrete state spaces. Those already acquainted with pymdp should find the syntax here familiar. ActiveInference can be installed as normal from the official Julia General Registry using the Julia's native package manager Pkg:\n\nIt can then be loaded into the current project environment:\n\n☎\n\n✆\n\n☎\n\nCentral to the package is the AIF object. This is a structure containing all the components of the generative model, as well as the dynamic belief states and the various settings needed to perform AIF, and is used in conjunction with most of the high-level functions of the package. An AIF object can be created with the init\\_aif function, which takes as arguments the components of the generative model and a dictionary of various settings and parameters:\n\n✆\n\n```\n✞ using Pkg Pkg.add(ActiveInference) ✝\n```\n\n```\n✞ using ActiveInference ✝\n```", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "## Core Concepts\n\nAIF\n\nVFE\n\nEFE\n\nGenerative model\n\nPOMDP\n\nA ctive i nference is a formal framework for modelling behaviour and cognition. 
Perception and action are cast as minimising free energy-the VFE and EFE , respectively-given a generative model of the environment.\n\nThe v ariational f ree e nergy F quantifies how well a generative model explains incoming sensory observations. It can be rewritten as the negative log model evidence (called surprise) upper-bounded by the divergence from the optimal posterior p ( s | o ) . Perception as inference is accomplished by selecting the approximate posterior q ( s ) with the lowest associated VFE .\n\nF [ q ( s ) , o ] ≜ D KL [ q ( s ) ∥ p ( o , s )] = D KL [ q ( s ) ∥ p ( s | o )] ︸ ︷︷ ︸ Divergence -ln p ( o ) ︸ ︷︷ ︸ Surprise\n\nThe e xpected f ree e nergy G quantifies the expected future free energy under an action policy π . It consists of an information gain term and a pragmatic value term that provide a natural balance between exploratory and goal-seeking behaviour. Action as inference is accomplished by selecting the action policy with the lowest associated EFE .\n\nG π = -E q ( ˜ o , ˜ s | π ) [ ln q ( ˜ s | ˜ o , π ) -ln q ( ˜ s | π )] ︸ ︷︷ ︸ Information gain -E q ( ˜ o | π ) [ ln p ( ˜ o | C )] ︸ ︷︷ ︸ Pragmatic value\n\nThe generative model is an agent's formal assumptions about the structure and dynamics of its environment, based on which perceptual and active inferences are carried out. Many types of generative models exist that are suitable for different environments and tasks.\n\nThe P artially O bservable M arkov D ecision P rocess is a type of flexible generative model that is widely used in the AIF literature. In discrete time and usually a discrete state space, this model type is parametrised to fit a given task by a set matrices containing probability distributions.\n\n## 2. Active Inference with POMDPs\n\nIn this section, we briefly describe the core concepts of AIF and POMDPs. This should familiarise the reader with the vernacular used in the later sections regarding the functionalities of the package. 
While various extensions, such as structure learning, which enables an agent to learn the structure or shape of its environment through model comparison [44-47], or hierarchical and temporally deep POMDPs [48,49], are relevant for future work, describing these in detail is beyond the scope of this foundational paper.\n\nAt the core of AIF lies the minimisation of a variational free energy upper bound on surprise for perception, as well as action. This is motivated by the free energy principle [4-8], which states that self-organising systems can be described as minimising the variational free energy of their sensory states. The minimisation of free energy generally takes two", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "- [20] C. Tomlinson, 'On the motion of certain liquids on the surface of water,' Phil. Mag. Ser. 4 39 , 32-48 (1870).\n - [21] C. G. Marangoni, 'Ueber die Ausbreitung der Tropfen einer Flussigkeit auf der Oberflache einer anderen,' Ann. Phys. (Poggendorf) 143 , 337-354 (1871).\n - [22] O. Karthaus, L. Grasjo, N. Maruyama, and M. Shimomura, 'Formation of ordered mesoscopic polymer arrays by dewetting,' Chaos 9 , 308-314 (1999).\n - [23] X. Gu, D. Raghavan, J. F. Douglas, and A. Karim, 'Hole-growth instability in the dewetting of evaporating polymer solution films,' J. Polym. Sci. Pt. B-Polym. Phys. 40 , 2825-2832 (2002).\n - [24] S. W. Hong, J. F. Xia, and Z. Q. Lin, 'Spontaneous formation of mesoscale polymer patterns in an evaporating bound solution,' Adv. Mater. 19 , 1413-1417 (2007).\n - [25] G. Liu, C. F. Zhang, J. Zhao, and Y. X. Zhu, 'Study of the morphology of the three-phase contact line and its evolution by morphological examination after droplet evaporation of aqueous polymer solutions,' Langmuir 24 , 7923-7930 (2008).\n - [26] M. Mertig, U. Thiele, J. Bradt, G. Leibiger, W. Pompe, and H. 
Wendrock, 'Scanning force microscopy and geometrical analysis of two-dimensional collagen network formation,' Surface and Interface Analysis 25 , 514-521 (1997).\n - [27] M. Mertig, U. Thiele, J. Bradt, D. Klemm, and W. Pompe, 'Dewetting of thin collagenous precursor films,' Appl. Phys. A 66 , S565-S568 (1998).\n - [28] U. Thiele, M. Mertig, and W. Pompe, 'Dewetting of an evaporating thin liquid film: Heterogeneous nucleation and surface instability,' Phys. Rev. Lett. 80 , 2869-2872 (1998).\n - [29] H. Maeda, 'An atomic force microscopy study of ordered molecular assemblies and concentric ring patterns from evaporating droplets of collagen solutions,' Langmuir 15 , 8505-8513 (1999).\n - [30] I. I. Smalyukh, O. V. Zribi, J. C. Butler, O. D. Lavrentovich, and G. C. L. Wong, 'Structure and dynamics of liquid crystalline pattern formation in drying droplets of DNA,' Phys. Rev. Lett. 96 , 177801 (2006).\n - [31] L. Zhang, S. Maheshwari, H. C. Chang, and Y. X. Zhu, 'Evaporative self-assembly from complex DNA-colloid suspensions,' Langmuir 24 , 3911-3917 (2008).\n - [32] M. Maillard, L. Motte, A. T. Ngo, and M. P. Pileni, 'Rings and hexagons made of nanocrystals: A Marangoni effect,' J. Phys. Chem. B 104 , 11871-11877 (2000).\n - [33] G. L. Ge and L. Brus, 'Evidence for spinodal phase separation in two-dimensional nanocrystal selfassembly,' J. Phys. Chem. 
B 104 , 9573-9575 (2000).", - "page_start": 26, - "page_end": 26, - "source_file": "1001.2669.pdf" - }, - { - "text": "\n\n\n\nArticle\n\n## Introducing ActiveInference.jl : A Julia Library for Simulation and Parameter Estimation with Active Inference Models\n\n\n\nSamuel William Nehrer 1,† , Jonathan Ehrenreich Laursen 1,† , Conor Heins 2,3, * , Karl Friston 3,4 ,\n\nChristoph Mathys 5 and Peter Thestrup Waade 5\n\n- 1 School of Culture and Communication, Aarhus University, 8000 Aarhus, Denmark; 202204724@post.au.dk (S.W.N.); 202204836@post.au.dk (J.E.L.)\n- 2 Department of Collective Behaviour, Max Planck Institute of Animal Behavior, D-78457 Konstanz, Germany\n- 3 VERSES Research Lab., Los Angeles, CA 90016, USA; k.friston@ucl.ac.uk\n- 4 Queen Square Institute of Neurology, University College London, London WC1N 3BG, UK\n- 5 Interacting Minds Centre, Aarhus University, 8000 Aarhus, Denmark; chmathys@cas.au.dk (C.M.); ptw@cas.au.dk (P.T.W.)\n- * Correspondence: cheins@ab.mpg.de\n- † These authors contributed equally to this work.\n\nAbstract: We introduce a new software package for the Julia programming language, the library ActiveInference.jl . To make active inference agents with Partially Observable Markov Decision Process (POMDP) generative models available to the growing research community using Julia, we re-implemented the pymdp library for Python. ActiveInference.jl is compatible with cutting-edge Julia libraries designed for cognitive and behavioural modelling, as it is used in computational psychiatry, cognitive science and neuroscience. This means that POMDP active inference models can now be easily fit to empirically observed behaviour using sampling, as well as variational methods. 
In this article, we show how ActiveInference.jl makes building POMDP active inference models straightforward, and how it enables researchers to use them for simulation, as well as fitting them to data or performing a model comparison.\n\nKeywords: active inference; free energy principle; predictive processing; Markov decision process; cognitive modelling; Julia\n\nPACS: 87.15.Aa\n\nMSC: 91-08\n\nJEL Classification: C63\n\n## 1. Introduction\n\nWe introduce a novel software library for Julia, ActiveInference , which lets users produce the simulated behaviour of agents and their internal belief states with active inference (AIF) models, as well as fit such models to empirically observed behaviour. AIF [1-3] is a generally applicable formal framework for understanding and simulating intelligent behaviour that is based in neurobiology and first principles from statistical physics [4-8]. AIF treats action and perception as unified under a joint imperative: to minimise the variational free energy ( VFE ), which quantifies how well the agent's internal generative model explains incoming sensory observations. It is an upper bound on the the surprise from sensory observations, making AIF formally related to prediction error\n\n\n\nAcademic Editor: Astero Provata\n\nReceived: 25 October 2024 Revised: 2 January 2025 Accepted: 7 January 2025\n\nPublished: 12 January 2025\n\nCitation: Nehrer, S.W.; Ehrenreich Laursen, J.; Heins, C.; Friston, K.; Mathys, C.; Thestrup Waade, P. Introducing ActiveInference.jl : A Julia Library for Simulation and Parameter Estimation with Active Inference Models. Entropy 2025 , 27 , 62. https://doi.org/10.3390/e27010062\n\nCopyright: ©2025 by the authors. Licensee MDPI, Basel, Switzerland. 
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/ licenses/by/4.0/).", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "quantities as its target: the variational free energy ( VFE ) in the case of perception and the expected free energy ( EFE ) in the case of action. The VFE is the free energy associated with a given sensory observation and is resolved perceptually by updating beliefs about the environment. The EFE is the free energy that is expected in the future, contingent on a given policy or course of action. Choosing action policies associated with a low EFE lead to reducing uncertainty about the environment, as well as making preferred observations more likely.\n\n## 2.1. POMDPs in Active Inference\n\nIn AIF, the POMDP is one of the most common families of generative models used to make inferences about the environment. It is a Markovian discrete state-space model, where employing it means representing the environment and observations as inhabiting one among a set of possible (possibly multidimensional) states, and that the changes in these states can only depend on the system's previous state and the agent's actions. Environmental states are not directly observable, so they have to be inferred based on incoming sensory observations. In AIF for POMDPs and other generative models in general, both perception and action are cast as Bayesian inferences (see Sections 2.2 and 2.3), as well as the learning of parameters of the generative model (see Section 2.4). 
Crucially, an agent's generative model does not a priori have to be isomorphic to the true environment (i.e., the data-generating process), although this will generally lead to a successful inference, and that the generative model will therefore often come to resemble the environment through learning.\n\nAdiscrete state-space POMDP in AIF is conventionally defined by five main sets of parameters: A , B , C , D and E [1,33], see Figure 1. Together, these parametrise the agent's prior beliefs about the prior probability of different states in the environment, how states of the environment change and how they generate observations. Typically, they will be vectors, matrices or tensors; however, henceforth we denote them by their corresponding letter in bold. These make up the components needed for the agent to perform AIF.\n\nA , also called the observation model , represents the state-to-observation likelihood model. This describes how observations depend on or are generated by states of the environment. It is structured as a matrix with a column for each possible environmental state s , and a row for each possible observation o . Each column is then a categorical probability distribution over the observations that will occur given the environmental state (meaning that each column must contain non-negative values that sum to 1). If the observations are multidimensional (i.e., multiple observations are made at each time point), there is a matrix for each observation modality. If two or more states determine the observation, the likelihood model then becomes a tensor. 
If A is imprecise (i.e., the probabilities are highly entropic and evenly distributed), observations are taken to carry less information about the environment, in many cases leading to more uncertain inferences, and vice versa.", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "```\nimport com.ibm.edms.od.*; public class CustomTransform { public static HashMap transformData(HashMap odMap) throws Exception { System.out.println(\"Inside transformData method\"); // List this transform name from the XML file System.out.println(\" Transform name: \" + (String)odMap.get(ODTransform.TXFRM\\_REQ\\_NAME)); // List the property keys and values ODWEK read from the transform XML // file and provided to this Custom Class System.out.println(\" Transform properties:\"); Properties gtProps = (Properties)odMap.get(ODTransform.TXFRM\\_REQ\\_PROPS); Enumeration enumeration = gtProps.keys(); List list = new ArrayList(); while (enumeration.hasMoreElements()) { list.add((String)enumeration.nextElement()); } Collections.sort(list); for (String key : list) System.out.println(String.format(\"%25s = %-25s\", key, gtProps.getProperty(key))); // Retrieve the native document from ODWEK byte[] inDoc = (byte [])odMap.get(ODTransform.TXFRM\\_REQ\\_DATA); System.out.println(\" Native document size: \" + (inDoc == null ? null: inDoc.length)); // Retrieve the document resources from ODWEK byte[] inRes = (byte [])odMap.get(ODTransform.TXFRM\\_REQ\\_RES); System.out.println(\" Native doc resource size: \" + (inRes == null ? null: inRes.length)); // Normally this is where you do the transform or do something with the byte data. 
// Let's just concat the resources if there are any to the doc byte[] transformedDoc; if (inRes != null) { transformedDoc = new byte[inRes.length + inDoc.length]; System.arraycopy(inRes, 0, transformedDoc, 0, inRes.length); System.arraycopy(inDoc, 0, transformedDoc, inRes.length, inDoc.length); } else transformedDoc = inDoc; System.out.println(\" Concatenated resources to doc size: \" + transformedDoc.length); // Send the transformed data back to ODWEK HashMap rtnMap = new HashMap(); rtnMap.put(ODTransform.TXFRM\\_RESP\\_DATA, transformedDoc); return rtnMap; } }\n```\n\nExample 9-4 on page 214 shows how to set up the HashMap to pass document byte arrays in and out of this custom interface, and how to define a custom Java class that contains the transformData() method.", - "page_start": 238, - "page_end": 238, - "source_file": "sg246915.pdf" - }, - { - "text": "distance between particle clusters resulting from the demixing process that occurs already in the bulk liquid and is not related to the front instability at all. Note that one finds a similar sequence of regimes (i) to (iv) when increasing the particle-particle interaction strengths for fixed ε nl (see Ref. [41]) for further details.\n\nFIG. 3: (Colour online) Dependence of the mean finger number left behind by the unstable dewetting front on the particle-liquid interaction strength ε nl . The regions marked (i) to (iv) are discussed in the main text. The insets display typical snapshots obtained in the four different regions. Particles are black, liquid is grey (green online) and the empty substrate is white. The remaining parameters are kT = 0 . 2 , M = 20 , µ = -2 . 2 , ρ av n = 0 . 1 , glyph[epsilon1] nn = 2 . 0 , domain size 1200 × 1200 . For the insets, from left to right, glyph[epsilon1] nl = 1 . 2 , 1 . 4 , 1 . 45 , 1 . 8 .\n\n\n\nnl\n\nWe note also that the fingering process may be viewed as self-optimising the front motion - i.e. 
the front keeps its average velocity constant by expelling particles into the fingers. A similar effect exists for dewetting polymer films [18], where liquid is expelled from the growing moving rim which collects the dewetted polymer. There, the surplus liquid is left on the surface as a droplet pattern.\n\nThe kinetic Monte Carlo model is a very useful tool that helps one to understand the pattern formation in drying nanoparticle suspensions. One has, however, to keep in mind the restrictions", - "page_start": 12, - "page_end": 12, - "source_file": "1001.2669.pdf" - }, - { - "text": "- /SM590000 cleardumps", - "page_start": 747, - "page_end": 747, - "source_file": "sg247938.pdf" - }, - { - "text": "Figure 14-10 Building the SQL query\n\n\n\nAdditionally, users can specify a wildcard with a substring in the SQL statement. On execution, ODF will substitute the correct portion of the recipient or recipient list name.\n\nThe format of the wildcard is shown:\n\n - /SM590000 $ODF\\_RECIPIENT( start pos:length ) where start pos is the number of the characters to start and length is the number of characters to use. ( start pos:length ) is optional.\n - /SM590000 $ODF\\_RECIPLIST( start pos:length ) where start pos is the number of the characters to start and length is the number of characters to use. ( start pos:length ) is optional.\n\n## Job Name, Location, Dataset Name, and Print Options\n\nThese fields can be used to override the values that are specified in the distribution definition. Use this capability to specify the values at the distribution level that apply to most of your report bundles and still customize for individual report bundles.\n\n## 14.3 Defining the objects by using batch administration\n\nARSXML provides a batch interface to add, update, delete, or export a list of ODF objects. 
We show the arsxml command and a sample XML file that is used to create each of the objects that we added earlier.\n\n## 14.3.1 Recipient\n\nRun the following command to add a recipient:", - "page_start": 350, - "page_end": 350, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed6_cc4.pdf", - "query": "What is dyspnea ?", - "target_page": 2, - "target_passage": "Dyspnea refers to a subjective sensation of breathing discomfort.", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "## Assessment of the Impact of Participants ' Dyspnea\n\nAlthough neither the CAT nor the SGRQ are dyspneaspeci /uniFB01 c tools, both are recommended by the Global Initiative for Chronic Obstructive Lung Disease to evaluate symptoms, including dyspnea, 20 and both yield a richer assessment of dyspnea than the modi /uniFB01 ed Medical Research Council breathlessness scale. 20 Fifteen questions were taken from the CAT and SGRQ questionnaires that referred to individuals ' experiences with dyspnea, and a composite measure of dyspnea impact using a weighted sum of the responses to the 15 questions was constructed. Questions were coded so that larger values indicate more impactful dyspnea. Weights used for question responses in calculating the dyspnea impact assessment measure were those of the /uniFB01 rst component of a principal component analysis (PCA) based on the covariance matrix of question responses. Questions with multiple responses and ordinal structure are individually more informative and thus were accorded higher weight than individual true-false questions. 
No additional PCA component was anticipated a priori to be material for our investigation, and an eigenvalue analysis of the PCA was conducted to verify this assumption.\n\nThe composite dyspnea impact measure was scaled so its minimum value was 0 if the response to each of the 15 questions was 0, and the maximum value was scaled to 100 if the individual responses for all 15 questions represented the most severe dyspnea response.\n\n[\n\n]", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "The prevalence of individuals who were obese and morbidly obese in the PRISm group partially explains the between-group difference in dyspnea. The excess dyspnea seen in the PRISm group when compared with the normal spirometry group is partly explained by patient-specific risk factors, including BMI, which shrink the mean dyspnea differential between the groups from 11.2 to 5.5 points (Tables 3-6). The remaining 5.5-point difference indicates that PRISm patients have excess dyspnea relative to symptomatic individuals with normal spirometry for additional reasons other than obesity.\n\n[\n\n]", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "## TABLE 2 ] (Continued)\n\nTable 4 presents the association of dyspnea with patient-specific risk factors. Dyspnea impact increased with younger age, being female, higher BMI, higher smoking and smoke exposure history, and total work", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "## Take-home Points\n\nStudy Question: How profoundly are adults with undiagnosed respiratory symptoms affected by dyspnea?\n\nResults: In community-based adults with undiagnosed respiratory symptoms, those identified with preserved ratio impaired spirometry experienced the greatest impact of dyspnea, followed by those with undiagnosed asthma or COPD. 
Greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\nInterpretation: Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity.\n\nDyspnea refers to a subjective sensation of breathing discomfort. 1 In a study involving a community-based population aged > 70 years, the prevalence of dyspnea was found to be 32%. 2 Dyspnea can lead to limitations in daily activities, reduced exercise tolerance, and heightened mortality risks. 3\n\nDyspnea not only affects individuals with diagnosed respiratory conditions but also poses a signi /uniFB01 cant burden on those with undiagnosed conditions. In a systematic review by Müller et al, 4 the combined\n\n## Study Design and Methods\n\n## Recruitment of Undiagnosed Cases and Healthy\n\nControl Patients\n\nBetween June 2017 and January 2023, adults aged $ 18 years were recruited through a two-step process into the Undiagnosed COPD and Asthma Population (UCAP) study, a multicenter case /uniFB01 nding study. Approval for\n\nABBREVIATIONS: ASQ = Asthma Screening Questionnaire; BD = bronchodilator; CAT = COPD Assessment Test; PCA = principal component analysis; PRISm = preserved ratio impaired spirometry; SGRQ = St. George ' s Respiratory Questionnaire\n\nAFFILIATIONS: From The Ottawa Hospital Research Institute (J. B., E. G., K. L. V., G. G. A., S. M., and S. D. A.), University of Ottawa, Ottawa, ON; the Desautels Faculty of Management (G. A. W.), McGill University, Montreal, QC; the Department of Medicine (C. B.), The University of British Columbia, Vancouver, BC; the Centre de recherche (L.-P. B. and A. C.), Institut de cardiologie et de pneumologie de Québec, Université Laval, Quebec, QC; the Cumming School of Medicine (S. K. F.), University of Calgary, Calgary, AB; the Department of Medicine (E. P.), University of Saskatchewan, Regina, SK; the Firestone Institute for Respiratory Health (R. 
A. M.), McMaster University, Hamilton, ON; the Department of Medicine (C. L.), Université de Montreal, Montreal, QC; the Department of Medicine and the Li Ka Shing Knowledge Institute (S. G.), St. Michael ' s Hospital University of Toronto, Toronto, ON; the Department of Medicine\n\nprevalence of dyspnea in the adult general population across 11 studies was estimated to be 10%. Dyspnea can arise from a broad spectrum of underlying factors, including both respiratory and nonrespiratory conditions. Studies have revealed that dyspnea is not solely attributable to respiratory conditions but is also heavily in /uniFB02 uenced by cardiovascular deconditioning and by nonrespiratory factors, including psychosocial, social, and environmental determinants. 5,6\n\nDyspnea is a prevalent symptom with consequences that extend beyond its physiologic implications. A study in European patients with COPD explored the burden of dyspnea and identi /uniFB01 ed potential correlates. The study revealed that higher dyspnea impact correlated with lower health-related quality of life, increased work impairment, and a higher frequency of emergency department visits. 7", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "[\n\n]\n\n## Impact of Dyspnea on Adults With Respiratory Symptoms Without a De /uniFB01 ned Diagnosis\n\n\n\n\n\nJared Bierbrier, BSc; Emily Gerstein; George A. Whitmore, PhD; Katherine L. Vandemheen, MScN; Celine Bergeron, MD; Louis-Philippe Boulet, MD; Andreanne Cote, MD; Stephen K. Field, MD; Erika Penz, MD; R. Andrew McIvor, MD; Catherine Lemière, MD; Samir Gupta, MD; Paul Hernandez, MD; Irvin Mayers, MD; Mohit Bhutani, MD; M. Diane Lougheed, MD; Christopher J. Licskai, MD; Tanweer Azher, MD; Nicole Ezer, MD; Martha Ainslie, MD; Gonzalo G. Alvarez, MD; Sunita Mulpuru, MD; and Shawn D. 
Aaron, MD\n\nBACKGROUND: We investigated dyspnea; its associated risk factors; and its impact on health care utilization, quality of life, and work productivity in adults with undiagnosed respiratory symptoms.\n\nRESEARCH QUESTION: What is the impact of dyspnea in adults with undiagnosed respiratory symptoms?\n\nSTUDY DESIGN AND METHODS: This population-based study included 2,857 adults who were experiencing respiratory symptoms. These individuals had not been previously diagnosed with any lung conditions and were recruited from 17 Canadian centers using random digit dialing. Each participant underwent spirometry testing both before and after using a bronchodilator to determine if they met the diagnostic criteria for COPD, asthma, or preserved ratio impaired spirometry (PRISm), or if their spirometry results were normal. An agematched control group (n ¼ 231) was similarly recruited using random digit dialing. A dyspnea impact assessment score from 0 to 100 was produced using questions from the COPD Assessment Test and St. George ' s Respiratory questionnaire.\n\nRESULTS: Individuals with PRISm (n ¼ 172) reported more impactful dyspnea (mean score, 63.0; 95% CI, 59.5-66.4) than those with undiagnosed asthma (n ¼ 265; mean score, 56.6; 95% CI, 53.9-59.3) or undiagnosed COPD (n ¼ 330; mean score, 57.5; 95% CI, 55.1-59.9). All groups reported signi /uniFB01 cantly more impactful dyspnea than the control group (mean score, 13.8; 95% CI, 11.8-15.7). Patient-speci /uniFB01 c risk factors including age, sex, BMI, smoking, and comorbidities explained 20.6% of the variation in dyspnea. An additional 12.4% of the variation was explained by disease classi /uniFB01 cation and another 1.7% by the severity of lung function impairment assessed with spirometry. 
After adjusting for age, sex, and BMI, greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\nINTERPRETATION: Our /uniFB01 ndings showed that in community-based adults with undiagnosed respiratory symptoms, those identi /uniFB01 ed with PRISm experienced the greatest impact of dyspnea. Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity. CHEST 2024; 166(6):1296-1308\n\nKEY WORDS: asthma; case /uniFB01 nding; COPD; dyspnea\n\nFOR EDITORIAL COMMENT, SEE PAGE 1259\n\n[\n\n]", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "TABLE 9 ] Unadjusted and Adjusted Dyspnea Associations With Work Productivity (WPAI)\n\n| | Unadjusted | Unadjusted | Adjusted | Adjusted |\n|-----------------------------------------------|--------------------------------------|--------------|--------------------------------------|------------|\n| Measure | Dyspnea OR (95% CI) | P Value | Dyspnea OR (95% CI) | P Value |\n| Are you currently employed (working for pay)? | 0.995 (0.992-0.998) | .002 | 0.993 (0.990-0.997) | < .001 |\n| Measure a | Dyspnea Coef /uniFB01 cient (95% CI) | P Value | Dyspnea Coef /uniFB01 cient (95% CI) | P Value |\n| Absenteeism | 0.061 (0.040-0.083) | < .001 | 0.066 (0.044-0.089) | < .001 |\n| Presenteeism | 0.334 (0.293-0.375) | < .001 | 0.349 (0.306-0.392) | < .001 |\n| Work productivity loss | 0.368 (0.323-0.413) | < .001 | 0.383 (0.336-0.430) | < .001 |\n| Activity impairment | 0.503 (0.463-0.544) | < .001 | 0.501 (0.458-0.544) | < .001 |\n\nORs and regression coef /uniFB01 cients are presented with 95% CIs and P values. Adjusted coef /uniFB01 cients are adjusted for age, sex, and BMI. 
WPAI = Work Productivity and Activity Impairment questionnaire.\n\n[\n\n]", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "## Risk Factors Associated With Dyspnea\n\nPatient-related risk factors were considered first, and results of spirometry considered afterward. The spirometry risk factors chosen for the second stage analysis included the spirometry-based diagnosis of the patient (asthma, COPD, PRISm, or normal) and lung function results indicative of the severity of physiologic impairment. Severity was gauged by assessing three principal lung function measures: (1) post-BD FEV1 % predicted, (2) post-BD FEV1/FVC ratio, and (3) percentage reversal of FEV1 with BD.\n\n## Dyspnea Impact and Health Care Use, Quality of Life, and Work Productivity\n\nThe impact of dyspnea and its associations with health care use, quality of life, and work productivity were examined. Health care utilization was assessed through self-reported data. Quality of life was assessed using the 36-Item Short Form Health Survey questionnaire, where higher scores indicate better health status. Work productivity was assessed using the Work Productivity and Activity Impairment questionnaire, where higher scores\n\n## Results\n\nFigure 1 illustrates the results of the case finding approach, including the enrollment of the control group. Among 5,631 potentially eligible participants, 1,359\n\nindicate greater impairment in work productivity and daily activities.\n\n## Statistical Analysis\n\nBox plots were used to compare distribution patterns of dyspnea impact assessments among the disease groups. Pairwise comparison tests were conducted to evaluate mean dyspnea differences between groups. Multiple linear regression analysis was used to measure contributions to variability of dyspnea by selected patient-specific risk factors, spirometry disease classification, and key lung function measures. 
The selected sets of risk factors were evaluated using successive regression analyses. Analysis of variance sums of squares from the successive regression analyses provided the cumulative percentage contributions to variability of dyspnea. Simple, multiple, and logistic regression analyses were used to study associations between dyspnea and health care utilization, quality of life, and work productivity outcomes. All statistical analyses were done using STATA 16 statistical software (StataCorp).\n\nparticipants (24%) did not meet the threshold of $ 6 points on the ASQ or $ 20 points on the COPDDiagnostic Questionnaire and were thus excluded, leaving 4,272 individuals deemed eligible for spirometry.\n\nFigure 1 -Study /uniFB02 ow diagram demonstrating the case /uniFB01 nding and control group recruitment and allocation. ASQ ¼ Asthma Screening Questionnaire; COPD-DQ ¼ COPD Diagnostic Questionnaire; CF ¼ cystic /uniFB01 brosis; MI ¼ myocardial infarction; PRISM ¼ preserved ratio impaired spirometry.\n\n", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "bronchial challenge testing into a case /uniFB01 nding strategy identi /uniFB01 ed asthma in 26% of symptomatic individuals who had normal spirometry and no response to BD. 27\n\nIndividuals with undiagnosed respiratory symptoms, determined to have asthma or COPD through spirometry, experience poor health status. 28 Therefore, the implementation of known treatment approaches for asthma or COPD is important to improve their conditions. 29 In contrast, those with normal spirometry or PRISm face unclear treatment approaches. Longacting BD therapy in symptomatic individuals with tobacco exposure with normal spirometry is not effective. 30 Weight management programs may be useful for individuals who are obese with PRISm-related dyspnea; however, this awaits de /uniFB01 nitive clinical trials. 
31\n\nDyspnea was severe and prevalent within our study group; however, it remained undiagnosed. A study conducted by Stefan et al 32 revealed that physicians underestimated their patients ' dyspnea 37.9% of the time, whereas nurses underestimated it 3.5% of the time. Moreover, many patients limit their physical activities, which lead them to downplay the extent of their dyspnea. 19 Patient underreporting of symptoms, coupled\n\n## Acknowledgments\n\nAuthor contributions: S. D. A. and G. A. W. contributed to conception and design. J. B., E. G., G. A. W., K. L. V., and S. D. A. contributed to analysis and interpretation. J. B., E. G., G. A. W., K. L. V., S. D. A., C. B., C. L., L.-P. B., A. C., E. P., S. K. F., S. G., R. A. M., I. M., M. B., P. H., M. D. L., M. A., C. J. L., T. A., N. E., G. G. A., and S. M. contributed to drafting the manuscript for important intellectual content. All authors had access to and participated in the interpretation of the data and provided input into the preparation and submission of the manuscript. The authors vouch for the accuracy and completeness of the data.\n\nRole of sponsors: The sponsor had no role in the design of the study, the collection and analysis of the data, or the preparation of the manuscript.\n\nOther contributions: We thank the following individuals from the Canadian study sites: Ottawa Hospital Research Institute, Ottawa, Ontario: Taylor Poulin; Susan Deveau, RRT; Victoria Thompson; Meredith McCleery; Angelina Tohme; Vicky Panteleakos, RRT; Geneviève Longtin, RRT; Joanne Cassidy, RRT; Amanda Bergeron, MSc; Jennifer Biggs, RN; Jessica Bergeron; and Elisabet White; Vancouver General Hospital, Vancouver, British Columbia: Shelley Abercromby, BSc; Jana Caine; David\n\nwith inadequate physician-led investigations of symptoms, may explain why dyspnea often goes undiagnosed in the population. 
33\n\nIn conclusion, our study measured dyspnea impact in individuals with no preexisting diagnosis of lung disease who reported respiratory symptoms as part of a purposeful case /uniFB01 nding strategy. Individuals with PRISm exhibited the greatest impact of dyspnea, even higher than those newly diagnosed with asthma or COPD. After adjusting for patient factors, comorbidities, pulmonary diseases, and severity of lung physiologic impairment, most of the variability in dyspnea remained unexplained. We also showed that dyspnea was associated with increased health care utilization, impaired quality of life, and work productivity.\n\n## Funding/Support\n\nThis study is supported by the Canadian Institutes of Health Research [FDN Grant 154322].\n\n## Financial/Non /uniFB01 nancial Disclosures\n\nNone declared.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "TABLE 8 ] Unadjusted and Adjusted Dyspnea Associations With Health Care Use\n\n| | Unadjusted | Unadjusted | Adjusted | Adjusted |\n|---------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------|--------------|---------------------|------------|\n| Measure | Dyspnea OR (95% CI) | P Value | Dyspnea OR (95% CI) | P Value |\n| In the past 12 mo, did you visit your general practitioner or a nurse practitioner or another physician at a walk-in clinic for any breathing problems? | 1.011 (1.007-1.014) | < .001 | 1.011 (1.007-1.014) | < .001 |\n| In the past 12 mo, did you visit an emergency department for any breathing problems? | 1.015 (1.009-1.021) | < .001 | 1.015 (1.009-1.022) | < .001 |\n| In the past 12 mo, were you hospitalized for any breathing problems or respiratory illness? | 1.021 (1.006-1.037) | .006 | 1.023 (1.007-1.039) | .005 |\n\nData are presented as OR (95% CI) with P values. 
Adjusted values are adjusted for age, sex, and BMI.\n\noutpatients with cardiorespiratory disease 25 and the Dyspnea-12 in patients with asthma 26 and found that the affective aspect of dyspnea can significantly influence the impact of dyspnea on health status, irrespective of the intensity of breathlessness.\n\nIn those with PRISm, there was a strong, positive association between higher values for the FEV1/FVC ratio and dyspnea. For the PRISm group, a higher FEV1/FVC ratio may reflect diminished lung compliance due to interstitial lung disease and/or respiratory system restriction due to obesity, which could contribute to worse dyspnea. Conversely, the association of dyspnea with the FEV1/FVC ratio was in the opposite direction for those with asthma or COPD, and a lower FEV1/FVC ratio correlated with worse dyspnea, as expected.\n\nOur study complements the literature by focusing on adults with undiagnosed respiratory symptoms who were randomly selected and recruited through active case finding in the community. This increases the generalizability of our results to a broader population. Our dyspnea questions were derived from widely used\n\nand validated respiratory health questionnaires, and our dyspnea assessment measure is a weighted average of responses to these validated questions. Consequently, the measure has an immediate interpretation in terms of the lived day-to-day experience of individuals.\n\nOur study has limitations. We did not undertake reliability/reproducibility testing of our questionnaire. The dyspnea impact assessment score was statistically associated with increased health care utilization, lower quality of life, and reduced work productivity; therefore, by virtue of this analysis, our questionnaire has construct validity. However, further attempts at external validation of the questionnaire using an independent data set would be important. 
Health care utilization during the preceding 12 months was assessed on entry into the study, and there is potential for impaired recall of events. Our study may have missed asthma in some participants because bronchial challenge testing was not conducted on those who tested negative for air /uniFB02 ow obstruction or BD responsiveness. A previous study showed that an additional diagnostic step incorporating\n\nTABLE 9 ] Unadjusted and Adjusted Dyspnea Associations With Work Productivity (WPAI)", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "[\n\n- assessed through inspiratory resistive loading. J Bras Pneumol . 2015;41(2): 143-150.\n- 25. Ekström M, Bornefalk H, Sköld M, et al. Validation of the Swedish Multidimensional Dyspnea Pro /uniFB01 le (MDP) in outpatients with cardiorespiratory disease. BMJ Open Respir Res . 2019;6: e000381.\n- 26. Yorke J, Russell AM, Swigris J, et al. Assessment of dyspnea in asthma: validation of The Dyspnea-12. J Asthma . 2011;48(6):602-608.\n- 27. Boulet LP, Boulay ME, Cote A, et al. Airway in /uniFB02 ammation and hyperresponsiveness in subjects with respiratory symptoms and normal spirometry. Eur Respir J . 2023;61(3): 2201194.\n- 28. Gerstein E, Bierbrier J, Whitmore GA, et al. Impact of undiagnosed chronic obstructive pulmonary disease and asthma on symptoms, quality of life, healthcare use, and work productivity. Am J Respir Crit Care Med . 2023;208(12):1271-1282.\n- 29. Aaron SD, Vandemheen K, Whitmore GA, et al. Early diagnosis and treatment of COPD and asthma: a randomized, controlled trial. N Engl J Med . 2024;390(22):2061-2073.\n- 30. Han MK, Ye W, Wang D, et al. Bronchodilators in tobacco-exposed persons with symptoms and preserved lung function. N Engl J Med . 2022;387(13): 1173-1184.\n- 31. Marott JL, Ingebrigtsen TS, Çolak Y, et al. 
Impact of the metabolic syndrome on cardiopulmonary morbidity and mortality in individuals with lung function impairment: a prospective cohort study of the Danish general population. Lancet Reg Health Eur . 2023;35:100759.\n- 32. Stefan MS, Priya A, Martin B, et al. How well do patients and providers agree on the severity of dyspnea? J Hosp Med . 2016;11(10):701-707.\n- 33. Cherian M, Magner KMA, Whitmore GA, et al. Patient and physician factors associated with symptomatic undiagnosed asthma or COPD. Eur Respir J . 2023;61(2): 2201721.\n\n]", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed6_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed6_cc4.pdf", - "query": "What are the criterion to be control patient in the dyspnea study ?", - "target_page": 3, - "target_passage": "Control patients reported no respiratory symptoms in the preceding 6 months and obtained a score of 0 on the ASQ.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## Assessment of the Impact of Participants ' Dyspnea\n\nAlthough neither the CAT nor the SGRQ are dyspneaspeci /uniFB01 c tools, both are recommended by the Global Initiative for Chronic Obstructive Lung Disease to evaluate symptoms, including dyspnea, 20 and both yield a richer assessment of dyspnea than the modi /uniFB01 ed Medical Research Council breathlessness scale. 20 Fifteen questions were taken from the CAT and SGRQ questionnaires that referred to individuals ' experiences with dyspnea, and a composite measure of dyspnea impact using a weighted sum of the responses to the 15 questions was constructed. Questions were coded so that larger values indicate more impactful dyspnea. Weights used for question responses in calculating the dyspnea impact assessment measure were those of the /uniFB01 rst component of a principal component analysis (PCA) based on the covariance matrix of question responses. 
Questions with multiple responses and ordinal structure are individually more informative and thus were accorded higher weight than individual true-false questions. No additional PCA component was anticipated a priori to be material for our investigation, and an eigenvalue analysis of the PCA was conducted to verify this assumption.\n\nThe composite dyspnea impact measure was scaled so its minimum value was 0 if the response to each of the 15 questions was 0, and the maximum value was scaled to 100 if the individual responses for all 15 questions represented the most severe dyspnea response.\n\n[\n\n]", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "The prevalence of individuals who were obese and morbidly obese in the PRISm group partially explains the between-group difference in dyspnea. The excess dyspnea seen in the PRISm group when compared with the normal spirometry group is partly explained by patient-speci /uniFB01 c risk factors, including BMI, which shrink the mean dyspnea differential between the groups from 11.2 to 5.5 points (Tables 3-6). The remaining 5.5point difference indicates that PRISm patients have excess dyspnea relative to symptomatic individuals with normal spirometry for additional reasons other than obesity.\n\n[\n\n]", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "## Take-home Points\n\nStudy Question: How profoundly are adults with undiagnosed respiratory symptoms affected by dyspnea?\n\nResults: In community-based adults with undiagnosed respiratory symptoms, those identi /uniFB01 ed with preserved ratio impaired spirometry experienced the greatest impact of dyspnea, followed by those with undiagnosed asthma or COPD. 
Greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\nInterpretation: Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity.\n\nDyspnea refers to a subjective sensation of breathing discomfort. 1 In a study involving a community-based population aged > 70 years, the prevalence of dyspnea was found to be 32%. 2 Dyspnea can lead to limitations in daily activities, reduced exercise tolerance, and heightened mortality risks. 3\n\nDyspnea not only affects individuals with diagnosed respiratory conditions but also poses a signi /uniFB01 cant burden on those with undiagnosed conditions. In a systematic review by Müller et al, 4 the combined\n\n## Study Design and Methods\n\n## Recruitment of Undiagnosed Cases and Healthy\n\nControl Patients\n\nBetween June 2017 and January 2023, adults aged $ 18 years were recruited through a two-step process into the Undiagnosed COPD and Asthma Population (UCAP) study, a multicenter case /uniFB01 nding study. Approval for\n\nABBREVIATIONS: ASQ = Asthma Screening Questionnaire; BD = bronchodilator; CAT = COPD Assessment Test; PCA = principal component analysis; PRISm = preserved ratio impaired spirometry; SGRQ = St. George ' s Respiratory Questionnaire\n\nAFFILIATIONS: From The Ottawa Hospital Research Institute (J. B., E. G., K. L. V., G. G. A., S. M., and S. D. A.), University of Ottawa, Ottawa, ON; the Desautels Faculty of Management (G. A. W.), McGill University, Montreal, QC; the Department of Medicine (C. B.), The University of British Columbia, Vancouver, BC; the Centre de recherche (L.-P. B. and A. C.), Institut de cardiologie et de pneumologie de Québec, Université Laval, Quebec, QC; the Cumming School of Medicine (S. K. F.), University of Calgary, Calgary, AB; the Department of Medicine (E. P.), University of Saskatchewan, Regina, SK; the Firestone Institute for Respiratory Health (R. 
A. M.), McMaster University, Hamilton, ON; the Department of Medicine (C. L.), Université de Montreal, Montreal, QC; the Department of Medicine and the Li Ka Shing Knowledge Institute (S. G.), St. Michael ' s Hospital University of Toronto, Toronto, ON; the Department of Medicine\n\nprevalence of dyspnea in the adult general population across 11 studies was estimated to be 10%. Dyspnea can arise from a broad spectrum of underlying factors, including both respiratory and nonrespiratory conditions. Studies have revealed that dyspnea is not solely attributable to respiratory conditions but is also heavily in /uniFB02 uenced by cardiovascular deconditioning and by nonrespiratory factors, including psychosocial, social, and environmental determinants. 5,6\n\nDyspnea is a prevalent symptom with consequences that extend beyond its physiologic implications. A study in European patients with COPD explored the burden of dyspnea and identi /uniFB01 ed potential correlates. The study revealed that higher dyspnea impact correlated with lower health-related quality of life, increased work impairment, and a higher frequency of emergency department visits. 7", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "## TABLE 2 ] (Continued)\n\nTable 4 presents the association of dyspnea with patient-speci /uniFB01 c risk factors. Dyspnea impact increased with younger age, being female, higher BMI, higher smoking and smoke exposure history, and total work", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "## Risk Factors Associated With Dyspnea\n\nPatient-related risk factors were considered /uniFB01 rst, and results of spirometry considered afterward. The spirometry risk factors chosen for the second stage analysis included the spirometry-based diagnosis of the patient (asthma, COPD, PRISm, or normal) and lung function results indicative of the severity of physiologic impairment. 
Severity was gauged by assessing three principal lung function measures: (1) post-BD FEV1 % predicted, (2) post-BD FEV1/FVC ratio, and (3) percentage reversal of FEV1 with BD.\n\n## Dyspnea Impact and Health Care Use, Quality of Life, and Work Productivity\n\nThe impact of dyspnea and its associations with health care use, quality of life, and work productivity were examined. Health care utilization was assessed through selfreported data. Quality of life was assessed using the 36Item Short Form Health Survey questionnaire, where higher scores indicate better health status. Work productivity was assessed using the Work Productivity and Activity Impairment questionnaire, where higher scores\n\n## Results\n\nFigure 1 illustrates the results of the case /uniFB01 nding approach, including the enrollment of the control group. Among 5,631 potentially eligible participants, 1,359\n\nindicate greater impairment in work productivity and daily activities.\n\n## Statistical Analysis\n\nBox plots were used to compare distribution patterns of dyspnea impact assessments among the disease groups. Pairwise comparison tests were conducted to evaluate mean dyspnea differences between groups. Multiple linear regression analysis was used to measure contributions to variability of dyspnea by selected patient-speci /uniFB01 c risk factors, spirometry disease classi /uniFB01 cation, and key lung function measures. The selected sets of risk factors were evaluated using successive regression analyses. Analysis of variance sums of squares from the successive regression analyses provided the cumulative percentage contributions to variability of dyspnea. Simple, multiple, and logistic regression analyses were used to study associations between dyspnea and health care utilization, quality of life, and work productivity outcomes. 
All statistical analyses were done using STATA 16 statistical software (StataCorp).\n\nparticipants (24%) did not meet the threshold of $ 6 points on the ASQ or $ 20 points on the COPDDiagnostic Questionnaire and were thus excluded, leaving 4,272 individuals deemed eligible for spirometry.\n\nFigure 1 -Study /uniFB02 ow diagram demonstrating the case /uniFB01 nding and control group recruitment and allocation. ASQ ¼ Asthma Screening Questionnaire; COPD-DQ ¼ COPD Diagnostic Questionnaire; CF ¼ cystic /uniFB01 brosis; MI ¼ myocardial infarction; PRISM ¼ preserved ratio impaired spirometry.\n\n", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "[\n\n]\n\n## Impact of Dyspnea on Adults With Respiratory Symptoms Without a De /uniFB01 ned Diagnosis\n\n\n\n\n\nJared Bierbrier, BSc; Emily Gerstein; George A. Whitmore, PhD; Katherine L. Vandemheen, MScN; Celine Bergeron, MD; Louis-Philippe Boulet, MD; Andreanne Cote, MD; Stephen K. Field, MD; Erika Penz, MD; R. Andrew McIvor, MD; Catherine Lemière, MD; Samir Gupta, MD; Paul Hernandez, MD; Irvin Mayers, MD; Mohit Bhutani, MD; M. Diane Lougheed, MD; Christopher J. Licskai, MD; Tanweer Azher, MD; Nicole Ezer, MD; Martha Ainslie, MD; Gonzalo G. Alvarez, MD; Sunita Mulpuru, MD; and Shawn D. Aaron, MD\n\nBACKGROUND: We investigated dyspnea; its associated risk factors; and its impact on health care utilization, quality of life, and work productivity in adults with undiagnosed respiratory symptoms.\n\nRESEARCH QUESTION: What is the impact of dyspnea in adults with undiagnosed respiratory symptoms?\n\nSTUDY DESIGN AND METHODS: This population-based study included 2,857 adults who were experiencing respiratory symptoms. These individuals had not been previously diagnosed with any lung conditions and were recruited from 17 Canadian centers using random digit dialing. 
Each participant underwent spirometry testing both before and after using a bronchodilator to determine if they met the diagnostic criteria for COPD, asthma, or preserved ratio impaired spirometry (PRISm), or if their spirometry results were normal. An age-matched control group (n = 231) was similarly recruited using random digit dialing. A dyspnea impact assessment score from 0 to 100 was produced using questions from the COPD Assessment Test and St. George's Respiratory questionnaire.\n\nRESULTS: Individuals with PRISm (n = 172) reported more impactful dyspnea (mean score, 63.0; 95% CI, 59.5-66.4) than those with undiagnosed asthma (n = 265; mean score, 56.6; 95% CI, 53.9-59.3) or undiagnosed COPD (n = 330; mean score, 57.5; 95% CI, 55.1-59.9). All groups reported significantly more impactful dyspnea than the control group (mean score, 13.8; 95% CI, 11.8-15.7). Patient-specific risk factors including age, sex, BMI, smoking, and comorbidities explained 20.6% of the variation in dyspnea. An additional 12.4% of the variation was explained by disease classification and another 1.7% by the severity of lung function impairment assessed with spirometry. After adjusting for age, sex, and BMI, greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\nINTERPRETATION: Our findings showed that in community-based adults with undiagnosed respiratory symptoms, those identified with PRISm experienced the greatest impact of dyspnea. Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity. 
CHEST 2024; 166(6):1296-1308\n\nKEY WORDS: asthma; case finding; COPD; dyspnea\n\nFOR EDITORIAL COMMENT, SEE PAGE 1259\n\n[\n\n]", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "bronchial challenge testing into a case finding strategy identified asthma in 26% of symptomatic individuals who had normal spirometry and no response to BD. 27\n\nIndividuals with undiagnosed respiratory symptoms, determined to have asthma or COPD through spirometry, experience poor health status. 28 Therefore, the implementation of known treatment approaches for asthma or COPD is important to improve their conditions. 29 In contrast, those with normal spirometry or PRISm face unclear treatment approaches. Long-acting BD therapy in symptomatic individuals with tobacco exposure with normal spirometry is not effective. 30 Weight management programs may be useful for individuals who are obese with PRISm-related dyspnea; however, this awaits definitive clinical trials. 31\n\nDyspnea was severe and prevalent within our study group; however, it remained undiagnosed. A study conducted by Stefan et al 32 revealed that physicians underestimated their patients' dyspnea 37.9% of the time, whereas nurses underestimated it 3.5% of the time. Moreover, many patients limit their physical activities, which lead them to downplay the extent of their dyspnea. 19 Patient underreporting of symptoms, coupled\n\n## Acknowledgments\n\nAuthor contributions: S. D. A. and G. A. W. contributed to conception and design. J. B., E. G., G. A. W., K. L. V., and S. D. A. contributed to analysis and interpretation. J. B., E. G., G. A. W., K. L. V., S. D. A., C. B., C. L., L.-P. B., A. C., E. P., S. K. F., S. G., R. A. M., I. M., M. B., P. H., M. D. L., M. A., C. J. L., T. A., N. E., G. G. A., and S. M. contributed to drafting the manuscript for important intellectual content. 
All authors had access to and participated in the interpretation of the data and provided input into the preparation and submission of the manuscript. The authors vouch for the accuracy and completeness of the data.\n\nRole of sponsors: The sponsor had no role in the design of the study, the collection and analysis of the data, or the preparation of the manuscript.\n\nOther contributions: We thank the following individuals from the Canadian study sites: Ottawa Hospital Research Institute, Ottawa, Ontario: Taylor Poulin; Susan Deveau, RRT; Victoria Thompson; Meredith McCleery; Angelina Tohme; Vicky Panteleakos, RRT; Geneviève Longtin, RRT; Joanne Cassidy, RRT; Amanda Bergeron, MSc; Jennifer Biggs, RN; Jessica Bergeron; and Elisabet White; Vancouver General Hospital, Vancouver, British Columbia: Shelley Abercromby, BSc; Jana Caine; David\n\nwith inadequate physician-led investigations of symptoms, may explain why dyspnea often goes undiagnosed in the population. 33\n\nIn conclusion, our study measured dyspnea impact in individuals with no preexisting diagnosis of lung disease who reported respiratory symptoms as part of a purposeful case finding strategy. Individuals with PRISm exhibited the greatest impact of dyspnea, even higher than those newly diagnosed with asthma or COPD. After adjusting for patient factors, comorbidities, pulmonary diseases, and severity of lung physiologic impairment, most of the variability in dyspnea remained unexplained. We also showed that dyspnea was associated with increased health care utilization, impaired quality of life, and work productivity.\n\n## Funding/Support\n\nThis study is supported by the Canadian Institutes of Health Research [FDN Grant 154322].\n\n## Financial/Nonfinancial Disclosures\n\nNone declared.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "[\n\n- assessed through inspiratory resistive loading. J Bras Pneumol . 2015;41(2): 143-150.\n- 25. 
Ekström M, Bornefalk H, Sköld M, et al. Validation of the Swedish Multidimensional Dyspnea Profile (MDP) in outpatients with cardiorespiratory disease. BMJ Open Respir Res . 2019;6: e000381.\n- 26. Yorke J, Russell AM, Swigris J, et al. Assessment of dyspnea in asthma: validation of The Dyspnea-12. J Asthma . 2011;48(6):602-608.\n- 27. Boulet LP, Boulay ME, Cote A, et al. Airway inflammation and hyperresponsiveness in subjects with respiratory symptoms and normal spirometry. Eur Respir J . 2023;61(3): 2201194.\n- 28. Gerstein E, Bierbrier J, Whitmore GA, et al. Impact of undiagnosed chronic obstructive pulmonary disease and asthma on symptoms, quality of life, healthcare use, and work productivity. Am J Respir Crit Care Med . 2023;208(12):1271-1282.\n- 29. Aaron SD, Vandemheen K, Whitmore GA, et al. Early diagnosis and treatment of COPD and asthma: a randomized, controlled trial. N Engl J Med . 2024;390(22):2061-2073.\n- 30. Han MK, Ye W, Wang D, et al. Bronchodilators in tobacco-exposed persons with symptoms and preserved lung function. N Engl J Med . 2022;387(13): 1173-1184.\n- 31. Marott JL, Ingebrigtsen TS, Çolak Y, et al. Impact of the metabolic syndrome on cardiopulmonary morbidity and mortality in individuals with lung function impairment: a prospective cohort study of the Danish general population. Lancet Reg Health Eur . 2023;35:100759.\n- 32. Stefan MS, Priya A, Martin B, et al. How well do patients and providers agree on the severity of dyspnea? J Hosp Med . 2016;11(10):701-707.\n- 33. Cherian M, Magner KMA, Whitmore GA, et al. Patient and physician factors associated with symptomatic undiagnosed asthma or COPD. Eur Respir J . 
2023;61(2): 2201721.\n\n]", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "TABLE 9 ] Unadjusted and Adjusted Dyspnea Associations With Work Productivity (WPAI)\n\n| | Unadjusted | Unadjusted | Adjusted | Adjusted |\n|-----------------------------------------------|--------------------------------------|--------------|--------------------------------------|------------|\n| Measure | Dyspnea OR (95% CI) | P Value | Dyspnea OR (95% CI) | P Value |\n| Are you currently employed (working for pay)? | 0.995 (0.992-0.998) | .002 | 0.993 (0.990-0.997) | < .001 |\n| Measure a | Dyspnea Coefficient (95% CI) | P Value | Dyspnea Coefficient (95% CI) | P Value |\n| Absenteeism | 0.061 (0.040-0.083) | < .001 | 0.066 (0.044-0.089) | < .001 |\n| Presenteeism | 0.334 (0.293-0.375) | < .001 | 0.349 (0.306-0.392) | < .001 |\n| Work productivity loss | 0.368 (0.323-0.413) | < .001 | 0.383 (0.336-0.430) | < .001 |\n| Activity impairment | 0.503 (0.463-0.544) | < .001 | 0.501 (0.458-0.544) | < .001 |\n\nORs and regression coefficients are presented with 95% CIs and P values. Adjusted coefficients are adjusted for age, sex, and BMI. WPAI = Work Productivity and Activity Impairment questionnaire.\n\n[\n\n]", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "Data are presented as mean (SD) for Q1, Q2, and Q3 (total), and Q3 to Q15 were presented to participants as yes or no questions, where percentages of participants who answered yes are shown. Question weights (principal component analysis scoring coefficients) used for calculating the dyspnea assessment are shown below individual questions. CAT = COPD Assessment Test; PRISm = preserved ratio impaired spirometry; Q = question; SGRQ = St. George's Respiratory Questionnaire.\n\nHowever, 1,415 either did not attend or were unable to complete adequate spirometry. 
Ultimately, 2,857 (67%) of those eligible underwent both pre- and post-BD spirometry.\n\nOf these 2,857 participants, 2,090 (73.2%) had normal spirometry, 265 (9.3%) had undiagnosed asthma, 330 (11.5%) had undiagnosed COPD, and 172 (6.0%) had PRISm based on post-BD spirometry. Of the 595 individuals with spirometric evidence of asthma or COPD, 253 were independently assessed by a pulmonologist. In 245 of these 253 cases (97%), the independent physician diagnosis agreed with the study diagnosis of asthma or COPD.\n\nIndividuals in the COPD group were generally older and more likely to be male compared with all other study groups (Table 1). All groups, including healthy control participants, had mean BMIs in the overweight or obese ranges. The PRISm group was heaviest with an average BMI of 34.7, and 22% of PRISm patients met BMI criteria for morbid obesity. Compared with all other groups, those with COPD were the most likely to have active or previous tobacco use, with the highest average total pack-years of 32.7. The control group had the lowest number of people with active or previous tobacco use.\n\nTable 2 shows mean responses to the 15 dyspnea questions for each disease classification and presents question weights (PCA scoring coefficients) used for calculating the dyspnea impact assessment.\n\nIndividuals with PRISm reported the highest dyspnea impact, with a significantly greater mean score (63.0; 95% CI, 59.5-66.4) than those with undiagnosed asthma or COPD (Table 3). Those with undiagnosed asthma or COPD had similar mean scores (56.6; 95% CI, 53.9-59.3 and 57.5; 95% CI, 55.1-59.9, respectively), followed by those with normal spirometry (51.8; 95% CI, 50.7-52.8). All four groups reported significantly more impactful dyspnea than the control group (mean score, 13.8; 95% CI, 11.8-15.7). Table 3 shows between-group differences in mean dyspnea impact assessments for each pair of disease outcomes. 
Figure 2 compares box plots of the dyspnea impact assessment values across disease classifications.\n\n[\n\n]", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed6_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_RSG_2004.pdf", - "query": "What is the revenue of Republic Services in 2002 ?", - "target_page": 2, - "target_passage": " $ 2,365.1", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## Note 11. Major Customers\n\nThe Company has one major customer and relationship that is a significant source of revenue. In 2003, as during the past number of years, the Company's relationship with Sprint continued to increase, due to growth in the PCS business segment. Approximately 61.2% of total revenues in 2003 were generated by or through Sprint and its customers using the Company's portion of Sprint's nationwide PCS network. This was compared to 57.6% in 2002, and 47.1% of total revenue in 2001. No other customer relationship on a stand-alone basis generates more than 2.5% of the Company's total revenue for 2003, 2002 and 2001.\n\n■", - "page_start": 34, - "page_end": 34, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## REPUBLIC SERVICES, INC. AND SUBSIDIARIES\n\n## CONSOLIDATED STATEMENTS OF CASH FLOWS\n\n(in millions)", - "page_start": 63, - "page_end": 63, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "Income tax expense in 2003 totaled $1.9 million, compared with $1.4 million in 2002 and $1.8 million in 2001. The effective tax rates for 2003, 2002 and 2001 were 27.8 percent, 25.7 percent and 29.7 percent, respectively. Benefits from tax incentives for exports and R&D expenditures totaled $350,000 in 2003, $408,000 in 2002 and $404,000 in 2001. The higher effective tax rate in 2003 is primarily a result of benefits from tax incentives for exports and R&D expenditures being a lesser percentage of taxable income in 2003 than in 2002. 
The lower effective tax rate in 2002 is primarily a result of benefits from tax incentives for exports and R&D expenditures being a larger percentage of taxable income in 2002 than in 2001 and the utilization of capital loss carryforwards in 2002.\n\nThe Company believes that 2004 revenues will be higher than 2003 revenues and that the cost of goods sold, gross profit, operating income and income from continuing operations will each be higher in 2004 than in 2003. The Company further believes that it will have continuing volume growth in most of its product lines in 2004, complemented by the introduction of new products, and that it will achieve a double-digit annual rate of growth in earnings per share from continuing operations for the next several years.\n\n## DISCONTINUED OPERATIONS\n\nDuring 1997, the Company sold all of its natural gas operations. The financial statements presented herein reflect the Company's natural gas operations as discontinued operations for all periods presented. The financial statements also reflect an after-tax gain on disposal of these discontinued operations of $ .2 million, or $ .10 per basic and $ .09 per diluted share, in both 2003 and 2002, and $5.5 million, or $2.70 per basic and $2.42 per diluted share, in 2001.\n\nIn addition to the initial consideration received in 1997 upon the sale of the natural gas operations, certain annual contingent deferred payments of up to $250,000 per year were to be paid to the Company over an eight-year period which began in 1999, with the amount paid each year to be dependent upon revenues received by the purchaser from certain gas transportation contracts. The Company received deferred payments of $250,000 each, before tax, from the purchaser in April 2003, 2002 and 2001 which are reflected in each year as a gain from discontinued operations of $165,000, net of tax. 
The 2001 gain also includes a $5,327,000 non-cash gain from reversal of a reserve established when the Company disposed of its natural gas operations in 1997. This reversal in the third quarter of 2001 followed the resolution of an outstanding contingency related to the sale of those assets.", - "page_start": 26, - "page_end": 26, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "Business Solutions generates revenue from services and equipment sales.\n\nNext generation revenue is generated by the provision of high-speed, high-reliability data and voice communications, provided on Rogers advanced IP and Ethernet and Cloud platforms and mainly over the extensive Rogers fibre, cable and wireless networks. Next generation revenue also includes Data Centre services revenue from the 2013 dates of business acquisitions.\n\nLegacy revenue is generated mainly by long distance, switched voice services and lower speed data communications, provided over TDM and end of life data platforms with client access primarily delivered through the use of third-party networks and tariffed ILEC services.\n\nBusiness Solutions continues to focus mainly on next generation IP-based services, and on leveraging higher margin on-net and near-net service revenue opportunities, using existing network facilities to expand offerings to the medium and large sized enterprise, public sector and carrier markets. 
Next generation services now represent 59% of total service revenue.\n\nRevenue from the lower margin off-net legacy business generally includes local and long-distance voice services and legacy data services which often use facilities that are leased rather than owned.\n\nFollowing our recent data centre business acquisitions, Business Solutions is now also focused on data centre colocation, hosting, cloud and disaster recovery services.\n\n## Higher Operating Revenue\n\nOperating revenue was 7% higher this year compared to last year, the net result of:\n\n - higher revenue from next generation services, which grew by 31%, reflecting the impact of our acquisitions of Blackiron and Pivot Data Centres\n - continued execution of our plan to grow higher margin on-net and next generation IP-based services revenue\n - partially offset by ongoing decline in the legacy voice and data business, a trend management expects to continue as customers move to faster and more reliable IP services.\n\n## Higher Operating Expenses\n\nWe assess Business Solutions operating expenses in two categories:\n\n - the cost of operating and maintaining telecom and data networking equipment\n - all other expenses involved in day-to-day operations, to service existing subscriber relationships and attract new subscribers.\n\nOperating expenses were higher this year, the net result of:", - "page_start": 49, - "page_end": 49, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "| Advertising revenue | Record revenue in the period the advertising airs on our radio or television stations, is featured in our publications or displayed on our digital properties |\n| Monthly subscription revenues received by television stations for subscriptions from cable and satellite providers | Record revenue in the month the services are delivered to cable and satellite providers' subscribers |\n| Toronto Blue Jays' revenue from 
home game admission and concessions | Recognize revenue as the related games are played during the baseball season and goods are sold |\n| Toronto Blue Jays' revenue from the Major League Baseball Revenue Sharing Agreement which redistributes funds between member clubs based on each club's relative revenues | Recognize revenue when it can be determined |\n| Revenue from Toronto Blue Jays, radio and television broadcast agreements | Record revenue at the time the related games are aired |\n| Interest income on credit card receivables | Record revenue as earned using the effective interest rate method |", - "page_start": 98, - "page_end": 98, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## CONSENT OF INDEPENDENT REGISTERED PUBLIC ACCOUNTING FIRM\n\nWe consent to the incorporation by reference in the Registration Statements (Form S-8 Nos. 333-81801, 333-78125, 333-45542 and 333-104048) pertaining to the Republic Services 401(k) Plan, 1998 Stock Incentive Plan, Republic Services, Inc. Amended and Restated Employee Stock Purchase Plan, and Republic Services, Inc. Amended and Restated 1998 Stock Incentive Plan, respectively, of our reports dated February 24, 2005, with respect to the consolidated financial statements and schedule of Republic Services, Inc., Republic Services, Inc. 
management's assessment of the effectiveness of internal control over financial reporting, and the effectiveness of internal control over financial reporting of Republic Services, Inc., included in this Annual Report (Form 10-K) for the year ended December 31, 2004.\n\n/s/ ERNST & YOUNG LLP Certified Public Accountants\n\nFort Lauderdale, Florida February 24, 2005", - "page_start": 102, - "page_end": 102, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## SHENANDOAH TELECOMMUNICATIONS COMPANY AND SUBSIDIARIES MANAGEMENT'S DISCUSSION AND ANALYSIS OF FINANCIAL CONDITION AND RESULTS OF OPERATIONS\n\nFacility lease revenue contributed $5.5 million to wireline revenues, a decrease of $0.2 million or 3.5%. The decrease was primarily the result of the prolonged decline of lease rates associated with competitive pricing pressures and the economic downturn in the telecommunications industry. During 2002 the Company completed a second, diverse fiber route to its existing interconnection point in the Dulles airport area of Northern Virginia. This fiber route provides increased reliability for customers in the event of fiber cuts or breaks, and extends the availability of the Company's fiber network to additional market locations but to date has not added additional revenue to the Company's operation.\n\nBilling and collection services and other revenues contributed $0.4 million to wireline revenues, which was the same as 2002 results. Revenues from this service had declined in recent years, with interexchange carriers now issuing a greater proportion of their bills directly to their customers.\n\nWireline revenues from cable television services were $4.4 million, an increase of $0.1 million or 1.7%. The number of subscribers and service plan prices remained relatively constant during 2003.\n\nOther revenues, primarily consisting of Internet and 511Virginia service revenues were $5.8 million in 2003, an increase of $0.7 million or 13.5%. 
The Company had 17,420 dial-up Internet subscribers at December 31, 2003, compared to 18,050 at the end of the previous year. During 2003, the Company's DSL high-speed Internet access subscriber count increased to 1,298 from 646. Total Internet service revenue was $4.5 million, an increase of $0.3 million or 10.7%. The 511Virginia contract with the Virginia Department of Transportation contributed $1.3 million to other revenues, an increase of $0.4 million or 41.3%. Telecommunications equipment sales, services and lease revenues were $1.1 million, which reflects a $0.1 million decrease from 2002 results.\n\nTotal operating expenses were $87.2 million, an increase of $3.6 million or 4.3%. The primary driver in the increase in operating expenses is continued growth in the PCS operation somewhat offset by a significant decline in bad debt expense compared to 2002.\n\nLate in 2003, the Company made an employee benefits policy change, which eliminated the requirement for the Company to accrue a vacation liability in advance of the year in which the benefit was used. The result of this change was a reduction of benefit expense of $0.5 million for the year compared to 2002. Benefit expenses impact all operating departments based on the amount of direct labor charged to the department. The change has a one-time impact on the financial statements of the Company. The benefits policy now provides that employees earn and use their paid time off in the same period. In the future, under this policy, unused hours can be banked but only used for extended illness, not carried over for use as vacation.", - "page_start": 48, - "page_end": 48, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## Results of Continuing Operations\n\n## 2003 compared to 2002\n\nTotal revenue was $105.9 million in 2003, an increase of $12.9 million or 13.9%. 
Total revenues included $70.0 million of wireless revenues, an increase of $12.0 million or 20.7%; wireline revenues of $29.0 million, an increase of $0.3 million or 0.9%; and other revenues of $7.0 million, an increase of $0.6 million or 9.7%.\n\nWithin wireless revenues, the PCS operation contributed $69.8 million, an increase of $11.6 million, or 20.8%. PCS service revenues were $44.4 million, an increase of $10.9 million or 32.4%. Service revenue growth was driven by the increase in subscribers, totaling 85,139 at December 31, 2003, an increase of 17,297 or 25.5%, compared to 67,842 subscribers at year-end 2002. The company had churn of 2.1% in 2003 compared to 2.8% in 2002. The decline in the churn rate is the result of tightening the credit screening for new subscribers as well as continued efforts to improve the after sales support. Competition in the wireless industry continues to have a significant impact on the results of the Company's PCS operation.\n\nPCS travel revenue, including reseller revenue, which is compensation between Sprint and its PCS Affiliates for use of the other party's network, was $16.8 million, an increase of $0.3 million or 1.8%. Travel revenue is impacted by the geographic size of the Company's network service area, the overall number of Sprint wireless customers, their travel patterns and the travel exchange rate. The rate received on travel was $0.058 per minute in 2003, compared to $0.10 per minute in 2002. 
As a part of the amended management agreement signed on January 30, 2004, Sprint and the Company agreed to maintain the travel rate at $0.058 per minute through December 31, 2006.\n\n\n\n■", - "page_start": 46, - "page_end": 46, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "carrier accounts, but due to the telecommunication industry down-turn of the last few years, the Company experienced write-offs in this area of the business totaling $0.5 million in 2002, due to bankruptcy filings of several significant telecommunications companies. In 2003, the inter-carrier segment of the business improved and the Company recovered $240 thousand of bad debt from the sale of certain accounts that were previously written-off.\n\nBad Debt expense summary, net of recoveries for the three years ended December 31, 2003:\n\nIn thousands\n\n| | 2003 | 2002 | 2001 |\n|--------------------------------|-------------|-------------|-------------|\n| PCS subscribers | $1,716 | $ 3,744 | $ 1,241 |\n| Interexchange carriers | 48 | 488 | - |\n| Other subscribers and entities | 71 | 170 | 82 |\n| Total bad debt expense | $1,835 | $ 4,402 | $ 1,323 |\n\n## Revenue Recognition\n\nThe Company recognizes revenues when persuasive evidence of an arrangement exists, services have been rendered or products have been delivered, the price to the buyer is fixed and determinable, and collectibility is reasonably assured. The Company's revenue recognition policies are consistent with the guidance in Staff Accounting Bulletin (\"SAB\") No. 101, Revenue Recognition in Financial Statements promulgated by the Securities and Exchange Commission, and the Emerging Issues Task Force ('EITF') 00-21, 'Revenue Arrangements with Multiple Deliverables' ('EITF 00-21'). Effective July 1, 2003 the Company adopted EITF 00-21. 
The EITF guidance addresses how to account for arrangements that may involve multiple revenue-generating activities, i.e., the delivery or performance of multiple products, services, and/or rights to use assets. In applying this guidance, separate contracts with the same party, entered into at or near the same time, will be presumed to be a bundled transaction, and the consideration will be measured and allocated to the separate units based on their relative fair values. The consensus guidance was applicable to new PCS service agreements entered into for quarters beginning July 1, 2003. The adoption of EITF 00-21 required evaluation of each arrangement entered into by the Company for each sales channel. The Company will continue to monitor arrangements with its sales channels to determine if any changes in revenue recognition will need to be made in the future. The adoption of EITF 00-21 has resulted in substantially all of the PCS activation fee revenue generated through Company-owned retail stores and associated direct costs being recognized at the time the related wireless handset is sold and it is classified as equipment revenue and cost of equipment, respectively. Upon adoption of EITF 00-21, previously deferred PCS revenue and costs will continue to be amortized over the remaining estimated life of a subscriber, not to exceed 30 months. PCS revenue and costs for activations at other retail locations and through other sales channels will continue to be deferred and amortized over their estimated lives as prescribed by SAB 101. The adoption of EITF 00-21 had the effect of increasing equipment revenue by $68 thousand and increasing costs of equipment by $23 thousand, which otherwise would have been deferred and amortized.", - "page_start": 45, - "page_end": 45, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "| 10.1 | Separation and Distribution Agreement dated June 30, 1998 by and between Republic Services, Inc. and AutoNation, Inc. 
(then known as Republic Industries, Inc.) (incorporated by reference to Exhibit 10.1 of the Company's Quarterly Report on Form 10-Q for the period ended June 30, 1998). |\n| 10.2 | Tax Indemnification and Allocation Agreement dated June 30, 1998 by and between Republic Services, Inc. and AutoNation, Inc. (then known as Republic Industries, Inc.) (incorporated by reference to Exhibit 10.4 of the Company's Quarterly Report on Form 10-Q for the period ended June 30, 1998). |\n| 10.3 | Republic Services, Inc. 1998 Stock Incentive Plan (as amended and restated March 6, 2002) (incorporated by reference to Exhibit 10.1 of the Company's Quarterly Report on Form 10-Q for the period ended March 31, 2002).* |\n| 10.4 | Employment Agreement dated October 25, 2000 by and between James E. O'Connor and Republic Services, Inc. (incorporated by reference to Exhibit 10.7 of the Company's Annual Report on Form 10-K for the year ended December 31, 2000).* |\n| 10.5 | Employment Agreement dated October 25, 2000 by and between Tod C. Holmes and Republic Services, Inc. (incorporated by reference to Exhibit 10.9 of the Company's Annual Report on Form 10-K for the year ended December 31, 2000).* |", - "page_start": 98, - "page_end": 98, - "source_file": "NYSE_RSG_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_RSG_2004.pdf", - "query": "Who is the Vice Chairmain of the Board of Republic Services ?", - "target_page": 5, - "target_passage": " Harris W. Hudson1 Vice Chairman of the Board", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Board of Directors\n\n\n\n\n\nJames E. O'Connor 1 Chairman & Chief Executive Officer\n\n\n\n\n\nW. Lee Nutter 2, 3, 4 Chairman, Compensation Committee Chairman, President & Chief Executive Officer Rayonier, Inc.\n\nHarris W. Hudson 1 Vice Chairman of the Board\n\n\n\n\n\n\n\nRamon A. 
Rodriguez 2, 3, 4 Chairman, Audit Committee President & Chief Executive Officer Madsen, Sapp, Mena, Rodriguez & Co. (a public accounting firm)\n\nAllan C. Sorensen 2, 3, 4 Presiding Director President & Chief Executive Officer Interim Health Care, Inc. (a provider of temporary labor to the healthcare industry)\n\nMichael W. Wickham 2, 3, 4 Retired Chairman, President & Chief Executive Officer, Roadway Corporation\n\n(a forest products company)\n\n1 Member, Executive Committee · 2 Member, Audit Committee · 3 Member, Compensation Committee · 4 Member, Nominating and Corporate Governance Committee\n\n## Officers\n\nJames E. O'Connor Chairman & Chief Executive Officer Michael J. Cordesman President & Chief Operating Officer David A. Barclay Senior Vice President & General Counsel Tod C. Holmes Senior Vice President & Chief Financial Officer Lee V. Twyford Senior Vice President & Chief Information Officer Brian A. Bales Vice President, Corporate Development Kenneth M. Baylor Vice President, Employee & Labor Relations Tim M. Benter Vice President & Associate General Counsel Jerry S. Clark Vice President & Controller Paul J. Connealy Vice President, Tax Matthew E. Davies Vice President, Environmental Engineering & Compliance Arthur J. Dudzinski\n\nRegional Vice President - Western Region\n\nWilliam C. Flower Vice President, Communications Matthew D. Katz Vice President & Associate General Counsel Ronald R. Krall Regional Vice President - Eastern Region Edward A. Lang III Vice President, Finance & Treasurer Thomas E. Miller Regional Vice President - Southwest Region Craig J. Nichols Vice President, Human Resources Charles F. Serianni Vice President & Chief Accounting Officer Robert N. Shepard Regional Vice President - Southern Region Gary L. Sova Vice President, Marketing & Sales Kevin C. Walbridge Regional Vice President - Central Region Gerard W. Wickett\n\nVice President, Purchasing & Maintenance\n\nJohn W. 
Croghan 2, 3, 4 Chairman, Nominating and Corporate Governance Committee Chairman, Rail-Splitter Capital Management, LLC (an investment management firm)", - "page_start": 4, - "page_end": 4, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## BOARD OF DIRECTORS\n\n## STEPHEN GERLACH\n\n## LLB\n\nAge 59. Director since 5 September 1989 and Chairman since 4 May 2001. Chairman of Santos Finance Ltd and of the Environmental and Safety Committee, Finance Committee and Nomination Committee and member of the Remuneration Committee of the Board. Chairman of Futuris Corporation Ltd and Challenger Beston Limited and a Director of Southcorp Ltd. Former Managing Partner of the Adelaide legal firm, Finlaysons. Former Chairman of Amdel Ltd and Equitoral Mining Ltd.\n\n## JOHN CHARLES ELLICE-FLINT BSc (Hons)\n\nAge 54. Managing Director since 19 December 2000, member of the Environmental and Safety Committee of the Board, Director of Santos Finance Ltd and also Chairman of other Santos Ltd subsidiary companies. Thirty years' experience in the international oil and gas industry including twenty six years with Unocal, including as Senior Vice President: Global Exploration and Technology and Vice President: Corporate Planning and Economics. Member and Chair of the South Australian Museum Board.\n\n## PETER CHARLES BARNETT FCPA\n\nAge 64. Director since 31 October 1995 and member of the Environmental and Safety Committee, Nomination Committee, Finance Committee and Remuneration Committee of the Board. Director of AMCIL Ltd and Opis Capital Ltd. Former Managing Director and Chief Executive Officer of Pasminco Ltd (1988-1995) and Chief Executive Officer of EZ Industries Ltd. Former director of Mayne Group Ltd.\n\n## KENNETH ALFRED DEAN\n\n## FCPA, MAICD\n\nAge 52. Independent nonexecutive Director effective 23 February 2005. Extensive financial\n\nexperience in the international petroleum industry, having held the position of Chief Executive Officer, Shell Financial Services. 
During his 30-year career with Shell, held several other senior executive positions in treasury, audit, accounting, IT and financial and corporate services. Fellow of the Australian Society of Certified Practising Accountants and member of the Australian Institute of Company Directors.\n\n## RICHARD MICHAEL HARDING MSc\n\nAge 55. Director since 1 March 2004 and member of the Audit Committee of the Board. Former President and General Manager of BP Developments Australia Limited and former Vice-Chairman and Council member of the Australian Petroleum Production and Exploration Association. Chairman of the Ministry of Defence Command Support, Training and Simulation Project Governance Board and Director of Arc Energy Ltd.\n\n## GRAEME WILLIAM MCGREGOR\n\nAO, BEc, FCPA, FAIM, FAICD Age 66. Director since\n\n3 September 1999. Chairman of the Audit Committee and member of the Finance Committee and Nomination Committee of the Board. Director of Santos Finance Ltd. Director of Foster's Group Ltd, Nufarm Ltd, WMC Resources Ltd and Goldman Sachs JB Were Managed Funds Limited. Member of the Financial Reporting Council. Former Executive Director Finance of The Broken Hill Proprietary Company Limited and former Director of Community Foundation Network Ltd.\n\n## MICHAEL ANTHONY O'LEARY\n\nDipMinE, BSc, FAusIMM, FAIM, FAICD\n\nAge 69. Director since 15 October 1996 and member of the Environmental and Safety Committee of the Board. Director of Newcrest Mining Ltd. Former Chairman of Hamersley Iron, Argyle Diamonds, Dampier Salt, former Deputy Chairman of Bank of Western Australia Ltd and former Director of Rio Tinto Ltd and Rio Tinto plc.\n\n## CHRISTOPHER JOHN RECNY\n\nBSc, MSc, MBA\n\nAge 51. Independent non-\n\nexecutive Director effective 23 February 2005. Extensive international management and project management experience, including as global head of international consultancy L.E.K.'s natural resources practice - a company he helped establish in the 1980s. 
Regional head of Asia-Pacific for L.E.K. and previously spent eight years with Fluor Corporation as a project manager on, and undertaking feasibility studies for, major resource developments.\n\n## PROFESSOR JUDITH SLOAN", - "page_start": 42, - "page_end": 42, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## DIRECTORS\n\nJOHN F. MEIER (3, 4) Age 64 Former Chairman and Chief Executive Officer Libbey Inc. (Tableware Products) Chairman of the Board of Directors\n\nWILLIAM G. BARES (4) Age 71\n\nFormer Chairman and Chief Executive Officer The Lubrizol Corporation (Specialty Chemical Products)\n\nTHOMAS A. COMMES (1, 3) Age 70 Former President and Chief Operating Officer The Sherwin-Williams Company (Paints and Coatings)\n\nPETER A. DORSMAN (2) Age 57 Executive Vice President & Chief Quality Officer NCR Corporation (Self-Service Technology Solutions)\n\nL. THOMAS HILTZ (2, 3) Age 66\n\nAttorney\n\nEDITH KELLY-GREEN (2) Age 59\n\nFormer Vice President and Chief Sourcing Officer FedEx Express (Express Transportation)\n\nDAN P. KOMNENOVICH (2) Age 60 President and Chief Executive Officer Aviall, Inc. (Aviation Parts, Related Aftermarket Operations)\n\nJ. MICHAEL MOORE (1) Age 69\n\nPresident Oak Grove Consulting Group, Inc. (Management Consulting) Former Chairman and Chief Executive Officer Invetech Company (Industrial Distributor)\n\nVINCENT K. PETRELLA (1) Age 52 Senior Vice President, Chief Financial Officer and Treasurer Lincoln Electric Holdings, Inc. (Welding, Brazing Products Manufacturer)\n\nNEIL A. SCHRIMSHER (3) Age 48\n\nChief Executive Officer Applied Industrial Technologies, Inc.\n\nJERRY SUE THORNTON, Ph.D. (1) Age 65 President Cuyahoga Community College\n\n(Two-Year Educational Institution)\n\nPETER C. WALLACE (3, 4) Age 58\n\nPresident and Chief Executive Officer Robbins & Myers, Inc. (Equipment Manufacturer)\n\n## Committees of The Board\n\n(1) Audit Committee\n\nChairman: Thomas A. Commes\n\n - (2) Corporate Governance Committee\n\nChairman: L. 
Thomas Hiltz\n\n - (3) Executive Committee\n\nChairman: John F. Meier\n\n(4) Executive Organization and Compensation\n\nCommittee\n\nChairman: Peter C. Wallace\n\n## OFFICERS\n\nNEIL A. SCHRIMSHER Age 48\n\nChief Executive Officer\n\nBENJAMIN J. MONDICS Age 54\n\nPresident & Chief Operating Officer\n\nTHOMAS E. ARMOLD Age 57\n\nVice President - Marketing and Strategic Accounts\n\nTODD A. BARLETT Age 57\n\nVice President - Acquisitions and Global Business Development\n\nFRED D. BAUER Age 46\n\nVice President - General Counsel & Secretary\n\nMICHAEL L. COTICCHIA Age 49\n\nVice President - Chief Human Resources Officer\n\nMARK O. EISELE Age 55\n\nVice President - Chief Financial Officer & Treasurer DANIEL T. BREZOVEC Age 51 Corporate Controller JODY A. CHABOWSKI Age 52 Assistant Controller\n\n## OTHER KEY MANAGEMENT\n\nDARREN B. 'BEN' PADD Age 39\n\nVice President - Midwest Area IVAN J. BATISTA Age 39 General Director - Rafael Benitez Carrillo, Inc. (Puerto Rico) ROBERT E. CURLEY Age 52 Vice President - Southeast Area BARBARA D. EMERY Age 53 Vice President - Human Resources\n\nWARREN E. 'BUD' HOFFNER Age 52\n\nVice President, General Manager - Fluid Power JAMES A. JEFFIERS Age 38 Vice President - Central States Area LONNY D. LAWRENCE Age 49 Vice President - Information Technology JOHN M. LEYO Age 61 Vice President - North Atlantic Area\n\nSERGIO H. NEVÁREZ Age 54 General Director - Applied Mexico\n\nJILL A. OLSEN Age 54\n\nVice President - Project Genesis\n\nRONALD A. SOWINSKI Age 51\n\nPresident & Chief Operating Officer - Applied Industrial Technologies Ltd. (Canada)\n\nKURT J. WEINHEIMER Age 56 Vice President - Western Area", - "page_start": 45, - "page_end": 45, - "source_file": "NYSE_AIT_2012.pdf" - }, - { - "text": "## Executive Officers\n\n## Teri Bariquit, 49 Executive Vice President, Nordstrom Merchandising Group\n\n## Kirk Beardsley, 46\n\nPresident of Nordstrom Credit, Inc.\n\nExecutive Vice President,\n\nOnline Merchandising\n\n## Scott A. 
Meden, 52\n\nSupply Chain\n\n## Terence Boyle, 42\n\nExecutive Vice President,\n\nNordstromrack.com|HauteLook\n\n## Brian K. Dennehy, 49\n\nExecutive Vice President and\n\nChief Marketing Officer\n\nSouthern California\n\n## Geevy S. K. Thomas, 50\n\nJames A. Howell, 49 Executive Vice President, Finance and Treasurer\n\n## Michael G. Koppel, 58\n\nExecutive Vice President and Chief Financial Officer\n\n## Gemma Lionello, 49\n\nExecutive Vice President and General Merchandise Manager, Cosmetics Division\n\nErik B. Nordstrom, 51 Executive Vice President and President, Nordstrom.com\n\nDaniel F. Little, 53 Executive Vice President and\n\nChief Information Officer\n\nLisa Luther, 46 Executive Vice President of Finance and Operations,\n\nNordstrom.com\n\n## James F. Nordstrom, Jr., 42\n\nExecutive Vice President and President, Stores\n\n## Peter E. Nordstrom, 53\n\nExecutive Vice President and President, Merchandising\n\nBrian Saltzman, 47 Executive Vice President, User Experience and Optimization\n\n## Margaret Myers, 68\n\nPresident, Nordstrom Rack\n\nExecutive Vice President and General Merchandise Manager, Accessories and Women's Specialized Divisions\n\n## Blake W. Nordstrom,\n\n54\n\nPresident\n\n## Mark J. Tritton, 51\n\nExecutive Vice President and President, Nordstrom Product Group\n\n## David M. Witman, 56\n\nExecutive Vice President and General Merchandise Manager, Men's Apparel\n\nKenneth J. Worzel, 50 Executive Vice President,\n\nStrategy and Development\n\nPaige L. Thomas, 43 Executive Vice President and General Merchandise Manager, Nordstrom Rack\n\nExecutive Vice President and\n\nExecutive Vice President and General Merchandise Manager, Shoe Division\n\n## Robert J. Middlemas, 58\n\nExecutive Vice President and Regional Manager,\n\nTricia D. Smith, 43 Executive Vice President and General Merchandise Manager, Designer, Women's and Kids' Apparel\n\n## Steven C. 
Mattics, 46\n\nExecutive Vice President;\n\nChairman and Chief Executive Officer of\n\nGeneral Counsel and Secretary\n\nNordstrom fsb,\n\nMichael Sato, 48 Executive Vice President,\n\nRobert B. Sari, 58 Executive Vice President,", - "page_start": 91, - "page_end": 91, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "of, non-executive, independent Directors, except for the Environmental and Safety Committee, which includes the CEO as a member.\n\nThe Board Guidelines prescribe that the Board is to meet at least eight times a year, including a strategy meeting of two days duration. The number of meetings of the Board and of each of its Committees and the names of attendees at those meetings are set out on page 47 of this Annual Report. Board Meetings are structured in two separate sessions, without management present for one of those sessions. The agenda for meetings is prepared by the Company Secretary in conjunction with the Chairman and CEO, with periodic input from the Board. Comprehensive Board papers are distributed to Directors in advance of scheduled meetings. Board meetings take place both at the Company's head office and at key operating sites, to assist the Board in its understanding of operational issues.\n\nExecutive management attend Board and Committee meetings, at which they report to Directors within their respective areas of responsibility. This assists the Board in maintaining its understanding of the Company's business and assessing the executive management team. 
Where appropriate, advisors to the Company attend meetings of the Board and of its Committees.\n\n## 2.3 Composition of the Board\n\nThe composition of the Board is determined in accordance with the Company's Constitution and the Board Guidelines which, among other things, require that:\n\n - · the Board is to comprise a minimum of five and a maximum of ten Directors (exclusive of the CEO);\n - · the Board should comprise a substantial majority of independent, non-executive Directors;\n - · there should be a separation of the roles of Chairman and Chief Executive Officer of the Company; and\n - · the Chairman of the Board should be an independent, non-executive Director.\n\nUnder the Company's Constitution approximately onethird of Directors retire by rotation each year and Directors appointed during the year are required to submit themselves for election by shareholders at the Company's next Annual General Meeting. The Board Guidelines encourage Directors to retire at the first Annual General Meeting after reaching the age of 72 years and not seek reappointment.\n\nCurrently, the Board comprises eight non-executive Directors and one executive Director. The Board has adopted the definition set out in the ASX Best Practice Recommendations and as defined in the 2002 guidelines of the Investment and Financial Services Association Limited and considers all current nonexecutive Directors, including the Chairman, to be independent directors.\n\nGenerally, the Board considers a Director to be independent if he or she is not a member of management and is free of any business or other relationship that could materially interfere with, or could reasonably be\n\nperceived to materially interfere with, the Director's ability to act in the best interests of the Company. The Board will assess the materiality of any given relationship that may affect independence on a case by case basis and has adopted materiality guidelines to assist in that assessment. 
Under these guidelines, the following interests are regarded as material in the absence of any mitigating factors:", - "page_start": 31, - "page_end": 31, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "respect to a proposed appointee to the Board and the workings of the Board and its Committees are conveyed in interviews with the Chairman and induction procedures include access to appropriate executives in relation to details of the business of the Company.\n\nThe Chairman of the Board is the Chairman of the Nomination Committee. The current members of the Nomination Committee, all of whom are independent non-executive Directors, are Mr S Gerlach (Chairman), Mr P C Barnett and Mr G W McGregor.\n\n## 3. REVIEW OF BOARD AND EXECUTIVE PERFORMANCE\n\nThe Board Guidelines provide that:\n\n - · non-executive Directors are to be appointed on the basis that their nomination for re-election as a Director is subject to review and support by the Board;\n - · there should be appropriate circumstances justifying reelection after a specified period of service as a Director; and\n - · the contribution of the Board and of individual Directors is the subject of formal review and discussion on a biennial and annual basis, respectively.\n\nAs the biennial review of the Board and of its Committees was conducted by an independent consultant in 2003, no formal performance appraisal of the Board was conducted in 2004.\n\nPerformance evaluation of key executives is undertaken on a quarterly and annual basis by the CEO and summarised in presentation to the\n\nRemuneration Committee of the\n\nBoard, both specifically for determination of remuneration and generally in relation to management succession planning for review by the Board.\n\n## 4. 
INDEMNITY, ACCESS TO INFORMATION AND INDEPENDENT PROFESSIONAL ADVICE\n\nInformation in respect to indemnity and insurance arrangements for Directors and senior executives appears in the Directors' Statutory Report on page 49 of this Annual Report.\n\nThe Board Guidelines set out the circumstances and procedures pursuant to which a Director, in furtherance of his or her duties, may seek independent professional advice at the Company's expense. Those procedures require prior consultation with, and approval by, the Chairman and assurances as to the qualifications and reasonableness of the fees of the relevant expert and, under normal circumstances, the provision of the expert's advice to the Board.\n\nPursuant to a deed executed by the Company and each Director, a Director also has the right to have access to all documents which have been presented to meetings of the Board or to any Committee of the Board or otherwise made available to the Director whilst in office. This right continues for a term of seven years after ceasing to be a Director or such longer period as is necessary to determine relevant legal proceedings that commenced during that term.\n\n## 5. REMUNERATION\n\nThe role, responsibilities and composition of the Remuneration Committee and details of\n\nthe Company's remuneration objectives and principles, nonexecutive Director remuneration and executive remuneration are set out on pages 37 to 40 of this Annual Report in the Directors' and Executives' Remuneration section, as well as in the Directors' Statutory Report and in Notes 18 and 26 of the Financial Statements.\n\nDetails of the nature and amount of the remuneration of:\n\n - · the Directors; and\n - · the Specified Executives;\n\nare set out on pages 37 to 40 of this Annual Report.\n\n## 6. 
AUDIT COMMITTEE", - "page_start": 32, - "page_end": 32, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "- Audit Committee - reviews our accounting policies and practices, the integrity of our financial reporting processes and procedures and the financial statements and other relevant disclosure for release to the public. It also assists the Board in its oversight of our compliance with legal and regulatory requirements for financial reporting, and assesses our internal accounting and financial control systems and the qualifications, independence and work of our internal and external auditors.\n - Corporate Governance Committee - assists the Board so it has appropriate systems and procedures for carrying out its responsibilities. This committee develops governance policies and practices and recommends them to the board for approval, and leads the Board in its periodic review of board and committee performance.\n - Nominating Committee - identifies prospective candidates to serve on our Board. Nominated directors are either elected by shareholders at a meeting, or appointed by the Board. The committee also recommends nominees for each Board committee, including each committee chair.\n - Human Resources Committee - assists the Board in monitoring, reviewing and approving compensation and benefit policies and practices. 
It is also responsible for recommending the compensation of senior management and monitoring the senior executive succession plan.", - "page_start": 74, - "page_end": 74, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "- · a holding of 5% or more of the Company's voting shares or a direct association with an entity that holds more than 5% of the Company's voting shares;\n - · an affiliation with an entity which accounts for 5% or more or the revenue or expense of the Company.\n\nThe Board has determined that there should not be any arbitrary length of tenure that should be considered to materially interfere with a Director's ability to act in the best interests of the Company, as it believes this assessment must be made on a case by case basis with reference to the length of service of all members of the Board.\n\nEach Director's independence is assessed by the Board on an individual basis, with reference to the above materiality guidelines and focussing on an assessment of each Director's capacity to bring independence of judgment to Board decisions. In this context, as mentioned below, Directors are required to promptly disclose their interests in contracts and other directorships and offices held.\n\nThe names and details of the experience, qualifications, special\n\nresponsibilities, and term of office of each Director of the Company are set out on page 41 of this Annual Report. Details of each Director's attendance at Board and Committee Meetings and their shareholdings are also set out on page 47 of this Annual Report.\n\n## 2.4 Nomination Committee\n\nThe role, responsibilities and membership requirements of the Nomination Committee are documented in the Board Guidelines and in a separate Charter, approved by the Board.\n\nUnder the Board Guidelines, it is the responsibility of the Nomination Committee to devise the criteria for, and review membership of, and nominations to, the Board. 
The primary criteria adopted in selection of suitable Board candidates is their capacity to contribute to the ongoing development of the Company having regard to the location and nature of the Company's significant business interests and to the candidates' age and experience by reference to the attributes of existing Board members.\n\nWhen a Board vacancy exists or where it is considered that the Board would benefit from the services of a new Director with particular skills, the Nomination Committee has responsibility for proposing candidates for consideration by the Board and, where appropriate, engages the services of external consultants.\n\nPrior to appointment, each Director is provided with a letter of appointment which encloses a copy of the Company's Constitution and of the relevant policies. Additionally, the expectations of the Board in", - "page_start": 31, - "page_end": 31, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## BOARD OF DIRECTORS AND OFFICERS\n\n## BOARD OF DIRECTORS\n\n## Stan A. Askren\n\nPresident, HON INDUSTRIES Inc.\n\n## Gary M. Christensen\n\nRetired President and\n\nChief Executive Officer,\n\nPella Corporation\n\n## Cheryl A. Francis\n\nAdvisor/Consultant Former Executive Vice President and Chief Financial Officer,\n\nRR Donnelley & Sons\n\n## Robert L. Katz\n\nPresident,\n\nRobert L. Katz and Associates\n\n## Dennis J. Martin\n\nChairman, President and\n\nChief Executive Officer,\n\nGeneral Binding Corporation\n\n## Jack D. Michaels\n\nChairman and Chief Executive Officer, HON INDUSTRIES Inc.\n\n## Joseph Scalzo\n\nVice President and President, Personal Care Products,\n\nThe Gillette Company\n\n## Abbie J. Smith\n\nChaired Professor,\n\nThe University of Chicago\n\nGraduate School of Business\n\n## Richard H. Stanley\n\nVice Chairman, HON INDUSTRIES Inc.\n\nChairman, SC Companies, Inc.\n\nChairman, Stanley Consultants, Inc.\n\n## Brian E. 
Stern\n\nPresident,\n\nXerox Supplies Technology Enterprises\n\nXerox Corporation\n\n## Ronald V. Waters, III\n\nChief Operating Officer,\n\nWm. Wrigley Jr. Company\n\n## COMMITTEES OF THE BOARD\n\nAUDIT\n\nCheryl A. Francis, Chairperson\n\nDennis J. Martin\n\nRonald V. Waters, III\n\n## HUMAN RESOURCES AND COMPENSATION\n\nGary M. Christensen, Chairperson\n\nRobert L. Katz\n\nAbbie J. Smith\n\n## PUBLIC POLICY AND CORPORATE GOVERNANCE\n\nRichard H. Stanley, Chairperson\n\nJoseph Scalzo\n\nBrian E. Stern\n\n## HON INDUSTRIES INC. OFFICERS\n\nJack D. Michaels\n\nChairman and Chief Executive Officer\n\n## Stan A. Askren\n\nPresident\n\nPeter R. Atherton\n\nVice President and Chief Technology Officer\n\nJerald K. Dittmer\n\nVice President and Chief Financial Officer\n\nRobert J. Driessnack\n\nVice President, Controller\n\n## Melinda C. Ellsworth\n\nVice President, Treasurer and\n\nInvestor Relations\n\n## Jeffrey D. Fick\n\nVice President, Member and\n\nCommunity Relations\n\nMalcolm C. Fields\n\nVice President and Chief Information Officer\n\nJames I. Johnson\n\nVice President, General Counsel and Secretary\n\nTimothy R. Summers\n\nVice President, Lean Enterprise\n\n## SUBSIDIARIES\n\nDavid C. Burdakin\n\nExecutive Vice President, HON INDUSTRIES, Inc.\n\nPresident, The HON Company\n\n## Brad D. Determan\n\nPresident,\n\nHearth and Home Technologies Inc.\n\n## Thomas D. Head\n\nVice President,\n\nGeneral Manager, Holga Inc.\n\nEric K. Jungbluth\n\nPresident, Allsteel Inc.\n\nDonald T. Mead\n\nPresident, The Gunlocke Company L.L.C.\n\n## Marco V. Molinari\n\nPresident, International and Business\n\nDevelopment\n\nJean M. Reynolds\n\nPresident, Maxon Furniture Inc.\n\n## Thomas A. Tolone\n\nPresident, Paoli Inc.", - "page_start": 61, - "page_end": 61, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## PROFESSOR JUDITH SLOAN\n\nBA (Hons), MA, MSc Age 50. Director since 5 September 1994. 
Chairperson of the Remuneration Committee and member of the Audit Committee of the Board. Deputy Chair of the Australian Broadcasting Corporation and Part-time Commissioner of the Productivity Commission. Former Professor of Labour Studies at the Flinders University of South Australia and Director of the National Institute of Labour Studies. Former Chairperson of SGIC Holdings Ltd and Director of Mayne Group Ltd.\n\nSantos Board of Directors during November 2004 Board meeting held at Moomba, Cooper Basin. Left to right: Graeme McGregor, John Ellice-Flint, Peter Barnett, Stephen Gerlach, Michael Harding, Judith Sloan, Michael O'Leary and Frank Conroy (who retired in December 2004). Kenneth Dean and Christopher Recny subsequently joined the Board in February 2005.\n\n", - "page_start": 42, - "page_end": 42, - "source_file": "ASX_STO_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_STO_2004.pdf", - "query": "How much did the Moomba incident cost Santos in 2004?", - "target_page": 12, - "target_passage": " the Moomba incident resulted in $17 million of one-off costs in 2004.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## ANALYSING FINANCIAL PERFORMANCE\n\n\n\n'The sound operating results achieved in 2004 underline the changing face of Santos towards a higher value, higher margin business. We ended the year with a strong financial position and our financial flexibility intact.'\n\n## PETER WASOW\n\nChief Financial Officer\n\n## 2004 WAS A YEAR OF GOOD OPERATING RESULTS\n\nOverall the increase in 2004 profit of 16% reflected a year of sound operating performance. 
Sales revenue was a record $1,501 million, up 2.5% on 2003, reflecting higher prices across most products and was achieved despite lower production as a result of the Moomba incident and declining output from late life fields.\n\nSantos benefited from higher world oil prices and realised US$51.83 per boe in 2004, an increase of 19% over 2003. The benefit of higher world oil prices substantially offset the impact of lower production volumes.\n\nSantos was also able to negotiate higher domestic gas prices (up 4% on average) and deliver new revenue streams from project start-ups and acquisitions during the year.\n\n## PRODUCTION HAMPERED BY MOOMBA INCIDENT\n\n2004 production was lower due to the Moomba incident, which reduced production by 4.6 million\n\nboe. Field decline reduced production by a further 5.0 million boe.\n\nOffsetting these factors, Santos' growth projects are starting to come on line and have begun to reverse the decline experienced over the past three years. Two projects were commissioned in 2004: the Bayu-Undan liquids project and the Minerva gas project. In addition, acquisitions contributed 0.8 million boe to production.\n\nFor 2005, production is expected to improve by around 15%, or 4% excluding the impact of the Moomba incident. Santos now expects production to be around 54 million boe in 2005. This increase is largely driven by the commissioning of Mutineer-Exeter in March 2005 and the John Brookes gas field in the middle of the year.\n\n## PRODUCTION COSTS UNDER CONTROL\n\nProduction costs in 2004 were $309 million, up $45 million or 17% on 2003. 
Analysis shows that Santos was able to continue\n\n## PRODUCTION AND SALES REVENUE\n\n\n\nto effectively control its costs in the face of significant external pressures in the form of rising services and materials prices.\n\nExamining production costs in detail reveals:\n\n - · the start-up of Bayu-Undan and acquisitions added $16 million to Santos' cost base\n - · changes in our accounting added a further $16 million to Santos' production costs\n - · higher insurance premiums ($8 million) and one-off stock write-offs ($5 million) were offset by $17 million in cost savings largely as a result of Santos' continuous improvement initiatives\n - · the Moomba incident resulted in $17 million of one-off costs in 2004.\n\nPiecing this together, the key themes in our financial performance were:\n\n - · cost savings in established production areas more than offset increases in the price of services and materials\n - · Santos' cost base rose as production from new developments and acquisitions were added to the Company's expanding portfolio of producing assets.", - "page_start": 11, - "page_end": 11, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "\n\nSantos employees rehabilitating a section of the River Torrens in Adelaide, as part of Santos' three-year commitment to the Our Patch project.\n\nof opportunities to use fewer greenhouse-emitting or renewable sources of energy.\n\nTo achieve these commitments Santos is actively pursuing an emissions intensity reduction target (greenhouse emissions per unit of production) of 20% in the period from 2002 to 2008.\n\n## SUPPORTING COMMUNITIES\n\nSantos has relationships with a number of communities where it operates. Some have been longterm and others are just beginning. 
Relationships with communities outside Australia, such as Indonesia and the United States, are also emerging as Santos' business grows in these locations.\n\nSantos made contributions during 2004 to a wide variety of organisations and events through the sponsorship program as part of the Company's commitment to supporting the communities to which it belongs.\n\nPartnerships continued in 2004 with the Australian School of Petroleum, the Adelaide Symphony Orchestra, the State Opera Company of South Australia, the Art Gallery of South Australia and the Lloyd McDermott Foundation.\n\nOne of the highlights of the 2004 program was the establishment of the Santos Community Fund. It brings together all of the contributions Santos makes to community-based organisations and recognises and supports the efforts of Santos employees who choose to contribute their own time and resources to improving their communities.\n\nThe 'Our Patch' program was a recipient of this fund in 2004. This is a joint initiative of the Patawalonga and Torrens Catchment Management Boards which encourages the local community to assist with the rehabilitation and management of Adelaide's water catchment.\n\nSantos has adopted a patch of the River Torrens and employees are assisting with the remediation and revegetation of this area in a volunteering program.\n\n## CORPORATE GOVERNANCE\n\nFor the third year running, the integrity of Santos' corporate governance was recognised in 2004 with the maximum five-star rating in the Corporate Governance Research Report prepared by Horwath and the University of Newcastle.\n\nA more detailed overview of corporate governance at Santos follows on page 29 of this Annual Report.\n\nMore detailed information about sustainability at Santos is contained in the Sustainability Review and copies are available from the Company and via the Santos website www.santos.com.", - "page_start": 29, - "page_end": 29, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## MALEO 
NEGOTIATIONS ADVANCED\n\nOutside Australia, Santos and its co-venturers have executed a Heads of Agreement for the sale of the entire gas reserves of the Maleo field offshore East Java, Indonesia. Santos continued negotiations with PT Perusahaan Gas Negara, Indonesia's stateowned gas distributor, on behalf of the joint venture to finalise the Gas Sales Agreement. The project is targeting first production in the first half of 2006 at rates of up to 100 mmcf/d for more than five years.\n\n## FIRST RETAIL GAS SALES WITH SANTOS DIRECT\n\nAs well as selling gas into the wholesale gas market, Santos secured a retail gas licence from the Victorian Government in 2004. This allows Santos to sell gas direct to industrial customers and into the Victorian spot market through a wholly-owned\n\nsubsidiary, Santos Direct Pty Ltd ('Santos Direct').\n\nSantos Direct will market Santos' 10% share of gas production from the Minerva field - around 15 TJ/d - in the offshore Otway Basin, which commenced production at the end of 2004.\n\nThe move to market and sell gas directly into the Victorian retail market is a first for Santos and leverages off Santos' position as one of Australia's largest gas producers, supplying wholesale gas to major industrial customers and specialist marketers in all mainland Australian states and territories.\n\n## LIQUIDS MARKETING ALLIANCE WITH BP\n\nAnother important marketing development during the year was the decision to outsource the marketing of crude oil and natural gas liquids to BP. 
The new marketing arrangements are in response to the significantly\n\nhigher volumes of crude oil that Santos will receive from the Mutineer-Exeter and Oyong projects, coming on stream in 2005, and the increasing globalisation of the liquids marketplace.\n\nThe validity of this approach has already been demonstrated by the sale of the first Mutineer-Exeter oil cargo at a premium to Tapis despite a discount for the uncertain delivery date.\n\nSantos continues to build an inventory of high quality options to provide a platform for production growth over the coming years. Santos is committed to a program of diversification while capitalising on the long-term Cooper Basin legacy asset. Most importantly, this involves leveraging the strengths of the core competencies built up over a number of years and Santos' well-positioned domestic gas franchise.\n\n\n\n\n\n'During 2004 we brought together everyone at Santos responsible for commercialisation into a single team. One of the outcomes from this was the introduction of gas swaps, where we were able to move gas between Santos assets in different states.'\n\n## RICK WILKINSON\n\nVice President Gas Marketing and Commercialisation\n\nThe alignment of joint venture interests in the John Brookes and East Spar fields has created an important production hub at Varanus Island, Carnarvon Basin, offshore Western Australia.", - "page_start": 22, - "page_end": 22, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "is also located in relatively shallow water with infrastructure nearby, creating options for early production.\n\nAt Santos, we are proud that an Australian company took on that challenge and succeeded, and I congratulate the exploration and drilling teams on a great effort. With the Jeruk discovery behind us, Indonesia is at the forefront of our international exploration efforts. 
With eight wells planned in the region for 2005, Santos is currently the most active explorer in Indonesia.\n\n## A STRONG FINANCIAL PERFORMANCE\n\nIt was pleasing that Santos was able to conclude 2004 on a higher note than it started.\n\nWe achieved record annual revenue thanks to higher oil and gas prices combined with the return of full production at Moomba to produce a 21.5% jump in second half sales: the best result for any six-month period in Santos' history.\n\nThe average realised price for crude oil was up nearly 19% to A$51.83 per barrel.\n\nThese results have left Santos well positioned to continue its strong investment program which saw capital expenditure peak at $930 million in 2004.\n\nIn 2005 we expect to invest around $850 million of new capital in projects and our strategy is to plan for firm developments based on affordability at relatively low oil prices. If higher prices continue and some projects mature quickly and can be given the green light, our overall capital expenditure may be higher.\n\nProduction is expected to rise in 2005 when, as usual, our financial performance will be subject to oil prices, exchange rates and interest rates. These factors have a significant effect on our bottom line. A US$1 per barrel change in the oil price equates to a A$16 million change in net profit after tax in 2005.\n\nA one US cent movement in the Australia-US dollar exchange rate would produce a change in profit after tax of A$8 million, and a 1% change in interest rates equates to a change in net profit after tax of A$9 million.\n\n2004 has also been an important period for shareholders, with a significant improvement in the Santos share price combined with an increase in the dividend.\n\n## PRODUCTION TO REBOUND\n\nWhile we expected lower production overall in 2004, our output was obviously curtailed further by the incident at the Moomba plant. 
The good news is that several projects emerged from the development pipeline during the year and made positive contributions to our expanding suite of oil and gas facilities.\n\nProduction is forecast to increase by 15% in 2005, or by 4% after excluding the effect of the Moomba downtime, to about 54 million boe. We expect this positive forward trend to be followed by further production growth of more than 10% in 2006.\n\nThe Bayu-Undan liquids project came on line in April 2004 and, at its increased design throughput of just over one billion cubic feet of gas per day, produced liquids at a rate of 100,000 barrels per day.\n\nBayu-Undan is currently stripping liquids and re-injecting the gas pending tie-in of the pipeline to Darwin in May 2005 for future LNG production. The onshore LNG facilities are more than two-thirds complete. With a gross production of 19 million barrels, 22% above expectations for the year, we were pleased with the performance of Bayu-Undan and look forward to a full year contribution from this exciting project in 2005.\n\nThe Minerva gas field off Victoria's western coast started production in December 2004 and is ramping up to full field production of around 150 TJ per day. 
Our share in this project is 10%, and is significant because it represents our first foray into marketing gas directly to customers or into the Victorian spot market through our sales vehicle, Santos Direct, aimed at delivering higher prices.\n\n## RECORD EXPLORATION EFFORT AHEAD\n\nExploration is a great way to increase shareholder value so I am pleased to be able to report that in 2004, Santos drilled 16 wildcat wells resulting in seven hydrocarbon discoveries.", - "page_start": 6, - "page_end": 6, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## OPERATING CASH FLOW AND CAPITAL EXPENDITURE\n\n$ million\n\n\n\n## DEPRECIATION, DEPLETION AND AMORTISATION\n\nAll things being equal, DD&A could have been expected to be lower this year, as Santos produced lower volumes and had written off the Heytesbury plant in the onshore Otway Basin last year.\n\nHowever, two factors caused an increase in 2004 DD&A. Firstly, while reserve revisions were positive overall, negative revisions were predominantly in producing areas which increased depletion rates in 2004, while positive reserve revisions were in areas where Santos is not yet producing or where straight line depreciation is dominant; for example, Casino and John Brookes.\n\nSecondly, on the future development cost side, depletion is up partly because Santos is starting to factor in higher steel and service company costs into long-term economic models.\n\n## CASH FLOW LOWER\n\nWhile Santos had a strong profit year, this is not fully reflected in cash flows.\n\nThere were large movements in trade debtors between years, reflecting the timing of liftings and the payments for them.\n\nIn addition, Santos has not yet been paid for the insurance claim relating to the Moomba incident. A total of $117 million was recognised in sundry income, which represents an estimate of the amount receivable from insurers for lost revenue, additional costs and replacement plant and equipment. 
At year end the money was still owed and so is not shown as part of operating cash flow. The final quantification of the claim with insurers is progressing.\n\n## RECORD CAPITAL EXPENDITURE\n\nCapital expenditure ended right on target at $930 million a record year for Santos approaching a level which is double DD&A, reflecting how rapidly the portfolio is changing.\n\nSantos will continue with a high development expenditure in 2005, but expects to spend more in line with cash generation. Exploration spend is estimated to be about $150 million, while development spend is expected to be reduced to $530 million and delineation to $90 million. Other capital spending is expected to be reduced to $80 million.\n\nThis results in a total planned capital expenditure for 2005 of approximately $850 million.\n\n## FINANCIAL FLEXIBILITY INTACT\n\nSantos ended the year in a strong financial position with its financial flexibility intact, despite the record development spending.\n\nThe FUELS issue was successful and Santos' gearing increased only marginally, despite the large capital program in 2004.\n\nThis is important in Santos' business as the Company needs to be able to fund exploration success as it occurs, and our development projects are increasing in size.\n\n", - "page_start": 12, - "page_end": 12, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "| Santos (Warim) Pty Ltd | SA | Santos QNT Pty Ltd | QLD |\n| Santos Australian Hydrocarbons Pty Ltd | QLD | Controlled entities of Santos QNT Pty Ltd | |\n| Santos (BOL) Pty Ltd | NSW | Santos QNT (No. 1) Pty Ltd | QLD |\n| Controlled entity of Santos (BOL) Pty Ltd | | Controlled entities of Santos QNT (No. 1) Pty Ltd | |\n| Bridge Oil Exploration Pty Limited | ACT | Santos Petroleum Management Pty Ltd | QLD |\n| Santos Darwin LNG Pty Ltd | ACT | Santos Petroleum Operations Pty Ltd | QLD |\n| Santos Direct Pty Ltd 3 | SA | TMOC Exploration Proprietary Limited | QLD |\n| Santos Facilities Pty Ltd | SA | Santos QNT (No. 
2) Pty Ltd | QLD |\n| Santos Finance Ltd | NSW | Controlled entities of Santos QNT (No. 2) Pty Ltd | |\n| Santos Globe Pty Ltd (formerly Globex Far East Pty Ltd) | WA | Associated Petroleum Pty Ltd | QLD |\n| Santos International Holdings Pty Ltd | ACT | Moonie Oil Pty Ltd | QLD |\n| Controlled entities of Santos International Holdings Pty Ltd | | Petromin Pty Ltd | QLD |\n| Barracuda Limited | PNG | Santos (299) Pty Ltd | QLD |\n| Lavana Limited | PNG | Santos Exploration Pty Ltd | VIC |\n| Novus UK (Kakap 2) Limited 2 | UK | Santos Gnuco Pty Ltd | QLD |\n| Peko Offshore Ltd | BER | Transoil Pty Ltd | QLD |\n| Sanro Insurance Pte Ltd | SING | Santos Resources Pty Ltd | QLD |\n| Santos Americas and Europe Corporation | USA | Santos Timor Sea Pipeline Pty Ltd | NSW |\n| Controlled entity of Santos Americas and Europe Corporation | | Sesap Pty Ltd 2 | VIC |\n| Santos USA Corp | USA | Vamgas Pty Ltd | VIC |\n| Santos (Bawean) Pty Ltd | SA | | |", - "page_start": 71, - "page_end": 71, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "Guarantees provided by Santos Ltd for borrowings in respect of controlled entities are disclosed in note 15.\n\nSantos Ltd has provided parent company guarantees in respect of:\n\n - (a) the funding obligations of its subsidiary companies, Santos Timor Sea Pipeline Pty Ltd and Santos Darwin LNG Pty Ltd, relating to the construction of a pipeline from the Bayu-Undan Field to Wickham Point in Darwin and the construction of the LNG Plant in Darwin respectively, and has provided a funding commitment letter to these subsidiary companies together with Santos (JPDA 91-12) Pty Ltd. 
As at 31 December 2004 the expenditure commitments of Santos Timor Sea Pipeline Pty Ltd and Santos Darwin LNG Pty Ltd for the above mentioned projects totalled US$41.3 million (2003: US$107.6 million);\n\n", - "page_start": 84, - "page_end": 84, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## ENHANCING THE PORTFOLIO\n\nIn 2004, Santos continued its normal business of actively managing its portfolio through the divestment of non-core assets and the acquisition of assets that fit well with existing Santos assets or can add to the ability of the Company to meet its strategic goals.\n\nAs a result of this activity, Santos realised an after-tax profit of $47.4 million on oil and gas asset sales and will continue to high-grade its portfolio on an ongoing basis.\n\nSantos entered into an agreement with PT Medco during the first half of 2004 to acquire some of Novus Petroleum's Indonesian and Cooper Basin assets conditional on the success of PT Medco's takeover offer for Novus, which was ultimately successful.\n\nSpecifically, Santos announced in September 2004 that it had executed formal agreements to acquire an additional 4.75% of the South Australian Cooper Basin, 18% of the Brantas PSC and 9% of the Kakap PSC from Medco for US$110 million. On 31 December 2004, Santos paid Medco US$98 million for the majority of the assets, with payment for the remaining 2.75% of Kakap PSC expected to be made in the first quarter of 2005.\n\nThis acquisition was an important piece in the strategic puzzle to tie up access to follow-up potential from the successful exploration at Jeruk and to provide a production base for the newly established Indonesian core area.\n\nAlso during the first half of 2004, Santos divested its remaining 18.4% shareholding in Magellan\n\nPetroleum Australia Ltd, raising approximately $10.6 million.\n\nEarly in the second half of 2004, Santos concluded the sale of its non-core onshore Otway Basin interests to Origin Energy for $25.75 million. 
This sale resulted in an after-tax profit of $18 million that was booked in 2004.\n\nIn addition, an exploration joint venture was formed with ConocoPhillips in the NT/P61 block offshore Darwin, Northern Territory, to drill the Caldita well and provide Santos with access rights to a potential expansion of the Wickham Point LNG facility. This deal further enhances Santos' infrastructure strategy to leverage its position within vital infrastructure to improve shareholder value while reducing the risk profile of the wildcat exploration program.\n\nDuring the third quarter, Santos expanded its offshore Victorian gas interests to 50% in both the Patricia-Baleen and the Sole gas fields through the acquisition from Trinity Gas Resources of an additional 30% interest in the Patricia-Baleen gas field and associated processing facilities in eastern Victoria and an additional 15% interest in the Sole gas field.\n\nSantos earned its 30% additional equity in the Patricia-Baleen gas field by meeting Trinity's remaining share of drilling costs on the Baleen 4 well which was drilled successfully as a sidetrack well of Baleen 3. Santos will earn its 15% additional equity in the Sole gas field by meeting certain development costs on behalf of Trinity, if and when the Sole joint venture partners proceed to develop this gas resource.\n\nThe acquisition of these Victorian gas interests strengthens Santos' domestic gas and infrastructure strategy that was further enhanced by the OMV purchase announced early in 2005. 
Importantly, Santos is now the operator of the strategic Orbost gas processing facility.\n\nLate in the year, Santos sold its 18.02% share in the Carpentaria Gas Pipeline between Ballera and Mount Isa in Queensland to Australian Pipeline Trust for $59 million, resulting in a $21 million after-tax profit that was booked in the 2004 financial year.\n\n## BRANTAS PSC\n\n", - "page_start": 24, - "page_end": 24, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## DELIVERING ON THE STRATEGY\n\n\n\nDear Shareholder,\n\nI am pleased to report that in 2004 Santos continued to deliver on its strategy to transform the Company into a truly international exploration and production business with world-class operations.\n\nWhile the year saw many positives in terms of development and exploration success, it did not get off to a good start with the incident on New Year's Day at the Moomba processing facility in central Australia.\n\nImportantly, Santos was able to work effectively with its key stakeholders, including customers, joint venturers and government departments, to minimise the commercial impacts.\n\nNatural gas supplies were quickly restored, in part by recovering processed gas from underground storage reservoirs. 
Liquids processing facilities were progressively reinstated allowing further increases to gas production and sales volumes, with the ramp-up to full liquids production achieved by August as planned.\n\nA large proportion of the costs and foregone revenues associated with the repair of the damaged plant and the reduced oil and gas production volumes are being recovered under insurance policies.\n\nDue to the long cycle times inherent in the oil and gas business, it had been recognised that 2004 would be a year in which production was marginally below the previous year, with subsequent increases in 2005 and beyond driven by new development projects.\n\nIn this light, it is pleasing to report that the Minerva gas and Bayu-Undan liquids projects commenced production during the year as planned, while first oil from Mutineer-Exeter and several other key growth projects are progressing to plan.\n\nIndonesia matured into a core area during 2004, through a strategy of prudent acquisition, portfolio management and exploration. In particular, the Jeruk discovery has the potential to add significant value, with further evaluation activities underway.\n\nEven with the large effort expended on the Moomba incident, Santos was able to deliver strong results for 2004, reflecting higher average prices across most products.\n\nGroup sales revenue increased by 2.5% to a record $1,501 million, earnings before interest and tax improved by 23% to $574 million and net profit after tax rose by 16% to $380 million.\n\nThis strong financial performance, combined with the confidence that Santos will continue to grow earnings in the future, enabled the Board to increase the final dividend on ordinary shares by 20% from 15 cents to 18 cents per share, fully franked. For the full year, dividends increased by 10% to 33 cents per share, compared with 30 cents per share\n\nin each of the four previous years. 
On a grossed up basis, this represents a yield of over 5%.\n\nIn response to increasing interest and enquiry from shareholders, the Dividend Reinvestment Plan has been reintroduced and applied to the final dividend paid during March 2005.\n\nSantos continued its proactive approach to capital management with the redemption and buyback of the outstanding Preference Shares and the issue of FUELS (Franked Unsecured Equity Listed Securities). This initiative was driven by the alignment of Australian accounting standards with international requirements, and closed oversubscribed, raising $600 million in new equity.\n\nThe total shareholder return for the year, including share price appreciation and dividends paid, was 28% - an excellent result.\n\nIn addition to our focus on shareholder value, Santos takes its corporate social responsibilities seriously and is committed to sustainability as a core value in all operations. The Company's first Sustainability Review was released during the year.\n\nSantos continues to be recognised for the high quality of its corporate governance, receiving a measure of five out of five for corporate governance for the third successive year in an independent report prepared by leading accounting and management firm, Horwath, and the University of Newcastle.", - "page_start": 3, - "page_end": 3, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "The financial impacts of the acquisitions on the Santos Group and the Company are summarised below:\n\n| | Consolidated | Consolidated | Santos Ltd | Santos Ltd |\n|------------------------------------------------------------------------------|------------------------------------------------------------------------------|----------------|---------------|---------------|\n| | 2004 $million | 2003 $million | 2004 $million | 2003 $million |\n| Fair value of net assets acquired | | | | |\n| Cash | (1.7) | 1.3 | (1.4) | 1.3 |\n| Other | (2.4) | 10.3 | (2.3) | 10.3 |\n| Exploration and development 
expenditure | 131.4 | 12.4 | 95.9 | 12.4 |\n| | 127.3 | 24.0 | 92.2 | 24.0 |\n| Purchase consideration | | | | |\n| Cash consideration paid | 110.6 | 24.0 | 92.2 | 24.0 |\n| Amount payable after balance date | 16.7 | - | - | - |\n| | 127.3 | 24.0 | 92.2 | 24.0 |\n| During the financial year the following controlled entities were registered: | During the financial year the following controlled entities were registered: | | | |\n| Santos Direct Pty Ltd | Santos Brantas Pty Ltd | | | |\n| Santos Egypt Pty Ltd | Santos (Donggala) Pty Ltd | | | |\n\n", - "page_start": 72, - "page_end": 72, - "source_file": "ASX_STO_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_STO_2004.pdf", - "query": "What is the main focus of the Santos 2005 program ?", - "target_page": 19, - "target_passage": " Oil is the main focus of the 2005 program", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "\n\nSantos employees rehabilitating a section of the River Torrens in Adelaide, as part of Santos' three-year commitment to the Our Patch project.\n\nof opportunities to use fewer greenhouse-emitting or renewable sources of energy.\n\nTo achieve these commitments Santos is actively pursuing an emissions intensity reduction target (greenhouse emissions per unit of production) of 20% in the period from 2002 to 2008.\n\n## SUPPORTING COMMUNITIES\n\nSantos has relationships with a number of communities where it operates. Some have been longterm and others are just beginning. 
Relationships with communities outside Australia, such as Indonesia and the United States, are also emerging as Santos' business grows in these locations.\n\nSantos made contributions during 2004 to a wide variety of organisations and events through the sponsorship program as part of the Company's commitment to supporting the communities to which it belongs.\n\nPartnerships continued in 2004 with the Australian School of Petroleum, the Adelaide Symphony Orchestra, the State Opera Company of South Australia, the Art Gallery of South Australia and the Lloyd McDermott Foundation.\n\nOne of the highlights of the 2004 program was the establishment of the Santos Community Fund. It brings together all of the contributions Santos makes to community-based organisations and recognises and supports the efforts of Santos employees who choose to contribute their own time and resources to improving their communities.\n\nThe 'Our Patch' program was a recipient of this fund in 2004. This is a joint initiative of the Patawalonga and Torrens Catchment Management Boards which encourages the local community to assist with the rehabilitation and management of Adelaide's water catchment.\n\nSantos has adopted a patch of the River Torrens and employees are assisting with the remediation and revegetation of this area in a volunteering program.\n\n## CORPORATE GOVERNANCE\n\nFor the third year running, the integrity of Santos' corporate governance was recognised in 2004 with the maximum five-star rating in the Corporate Governance Research Report prepared by Horwath and the University of Newcastle.\n\nA more detailed overview of corporate governance at Santos follows on page 29 of this Annual Report.\n\nMore detailed information about sustainability at Santos is contained in the Sustainability Review and copies are available from the Company and via the Santos website www.santos.com.", - "page_start": 29, - "page_end": 29, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## MALEO 
NEGOTIATIONS ADVANCED\n\nOutside Australia, Santos and its co-venturers have executed a Heads of Agreement for the sale of the entire gas reserves of the Maleo field offshore East Java, Indonesia. Santos continued negotiations with PT Perusahaan Gas Negara, Indonesia's stateowned gas distributor, on behalf of the joint venture to finalise the Gas Sales Agreement. The project is targeting first production in the first half of 2006 at rates of up to 100 mmcf/d for more than five years.\n\n## FIRST RETAIL GAS SALES WITH SANTOS DIRECT\n\nAs well as selling gas into the wholesale gas market, Santos secured a retail gas licence from the Victorian Government in 2004. This allows Santos to sell gas direct to industrial customers and into the Victorian spot market through a wholly-owned\n\nsubsidiary, Santos Direct Pty Ltd ('Santos Direct').\n\nSantos Direct will market Santos' 10% share of gas production from the Minerva field - around 15 TJ/d - in the offshore Otway Basin, which commenced production at the end of 2004.\n\nThe move to market and sell gas directly into the Victorian retail market is a first for Santos and leverages off Santos' position as one of Australia's largest gas producers, supplying wholesale gas to major industrial customers and specialist marketers in all mainland Australian states and territories.\n\n## LIQUIDS MARKETING ALLIANCE WITH BP\n\nAnother important marketing development during the year was the decision to outsource the marketing of crude oil and natural gas liquids to BP. 
The new marketing arrangements are in response to the significantly\n\nhigher volumes of crude oil that Santos will receive from the Mutineer-Exeter and Oyong projects, coming on stream in 2005, and the increasing globalisation of the liquids marketplace.\n\nThe validity of this approach has already been demonstrated by the sale of the first Mutineer-Exeter oil cargo at a premium to Tapis despite a discount for the uncertain delivery date.\n\nSantos continues to build an inventory of high quality options to provide a platform for production growth over the coming years. Santos is committed to a program of diversification while capitalising on the long-term Cooper Basin legacy asset. Most importantly, this involves leveraging the strengths of the core competencies built up over a number of years and Santos' well-positioned domestic gas franchise.\n\n\n\n\n\n'During 2004 we brought together everyone at Santos responsible for commercialisation into a single team. One of the outcomes from this was the introduction of gas swaps, where we were able to move gas between Santos assets in different states.'\n\n## RICK WILKINSON\n\nVice President Gas Marketing and Commercialisation\n\nThe alignment of joint venture interests in the John Brookes and East Spar fields has created an important production hub at Varanus Island, Carnarvon Basin, offshore Western Australia.", - "page_start": 22, - "page_end": 22, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "\n\nSantos is investing in the future of Australia's petroleum industry through the funding of the Australian School of Petroleum at the University of Adelaide.\n\nbe working in business operations with a lean and efficient corporate and services group.\n\nWith the exception of a small number of project teams, all non-award based positions in the Company were declared vacant and a selection process commenced around a set of criteria designed to ensure that people with the right skills and the ability to successfully 
grow Santos were appointed. As is often the case with transformational change initiatives, not everyone was re-appointed and, as a result, the workforce was reduced by 9%.\n\n## CULTURE CHANGE\n\nThe need to develop a culture that supports the newly designed business processes was another of the major outcomes of the change program. A Santos-wide culture change program led by employees is currently underway.\n\nThis long-term program is designed to ensure that the way employees work together enhances Santos' ability to be successful.\n\nOne of the first tasks undertaken was a voluntary employee survey to identify the gaps between the existing culture and the desired culture. The outcomes of the survey will assist in the development of programs and activities that will better align work practices with Santos' strategic goals.\n\n## TRAINING AND DEVELOPING PEOPLE\n\nMaking sure training and development supports current and future business requirements, and provides opportunities for people to develop their skills to achieve optimum performance, are key aspects of Santos' human resources strategy.\n\nSantos has a number of long-term projects underway which will optimise the substantial investment the Company makes in training people. 
Importantly, these projects will deliver programs that are targeted to meet business and individual needs and to support culture change initiatives.\n\n## BANKSIA AWARDS\n\nSantos was selected in 2004 as a finalist in the Banksia Environmental Awards for the work undertaken in the Companyled initiative to protect the world-renowned Coongie Lakes, resulting in the area being declared a new National Park by the South Australian Government.\n\nAs a finalist for this award Santos was recognised for its leadership role in bringing together a group of disparate parties to develop a Memorandum of Understanding recommending further protection for the Coongie Lakes.\n\n## WASTE MANAGEMENT\n\nSantos trialled innovative waste management techniques during 2004 to reduce the volume of hydrocarbon waste generated from Cooper Basin operations. Preliminary results indicate that these waste volumes can be reduced to 3-5% of their original volume, which is a significant achievement.\n\nThis technology will be implemented where possible\n\n## OIL SPILL VOLUMES\n\n3\n\n\n\nm\n\nacross Santos operations. The long-term environmental and financial benefits of using this technology are expected to be considerable.\n\n## REDUCED OIL SPILLS\n\nThere was a substantial reduction in the volume of hydrocarbons released to the environment in 2004, with uncontained hydrocarbons spilt reducing from 1,943 cubic metres to 83 cubic metres and Santos continues to focus on reducing oil spills.\n\n## GREENHOUSE POLICY\n\nSantos released its Greenhouse Policy in 2004 to drive performance improvements in this area through reducing emissions and producing oil and gas more efficiently.\n\nSantos' Greenhouse Policy is being rolled out across the organisation through crossfunctional greenhouse gas teams that have the right skill sets and responsibilities to progress this initiative. These teams will manage Greenhouse Policy and regulation, carbon management and trading opportunities, and energy efficiency. 
A key internal driver for emissions reduction will be the promotion of energy efficiency.\n\nSantos is committed to achieving effective emission reduction targets, to the pursuit of energy efficiency strategies and to the identification and implementation", - "page_start": 28, - "page_end": 28, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## HIGH IMPACT DRILLING IN 2005\n\nThe 2005 exploration program has the highest resource potential of any program undertaken at Santos.\n\nSantos is planning a large, high impact drilling campaign that is already well underway.\n\nSantos plans to drill 25 wells and will invest $150 million testing prospects within its expanding domestic and international exploration portfolio - up 19% from the $126 million spent on exploration in 2004.\n\nOil is the main focus of the 2005 program with most activity in the Kutei and East Java Basins offshore Indonesia, the Gulf of\n\nSuez in Egypt, the Bonaparte Basin in the Timor Sea and the Carnarvon Basin offshore Western Australia.\n\nThe 2005 program reflects the increasing materiality of Santos' exploration portfolio and continues the emphasis on more globally-focused exploration as an important part of the Company's growth strategy.\n\nSantos has already had drilling success early in 2005 with the Hiu Aman 1 well - the first to be drilled by Santos in the Donggala PSC. Hiu Aman 1 has indicated the presence of a prolific hydrocarbon system in this area. The discovery should add other lower risk prospects to Santos'\n\n## 2005 WILDCAT EXPLORATION PROGRAM\n\n\n\nexploration portfolio. A multi-well drilling program will be undertaken in Santos' Kutei Basin PSCs during 2005.\n\nAnother gas discovery has been made at Hurricane 1 in the Carnarvon Basin, offshore Western Australia. 
While both wells were discoveries, they require further evaluation to determine their commercial significance.", - "page_start": 18, - "page_end": 18, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## ENHANCING THE PORTFOLIO\n\nIn 2004, Santos continued its normal business of actively managing its portfolio through the divestment of non-core assets and the acquisition of assets that fit well with existing Santos assets or can add to the ability of the Company to meet its strategic goals.\n\nAs a result of this activity, Santos realised an after-tax profit of $47.4 million on oil and gas asset sales and will continue to high-grade its portfolio on an ongoing basis.\n\nSantos entered into an agreement with PT Medco during the first half of 2004 to acquire some of Novus Petroleum's Indonesian and Cooper Basin assets conditional on the success of PT Medco's takeover offer for Novus, which was ultimately successful.\n\nSpecifically, Santos announced in September 2004 that it had executed formal agreements to acquire an additional 4.75% of the South Australian Cooper Basin, 18% of the Brantas PSC and 9% of the Kakap PSC from Medco for US$110 million. On 31 December 2004, Santos paid Medco US$98 million for the majority of the assets, with payment for the remaining 2.75% of Kakap PSC expected to be made in the first quarter of 2005.\n\nThis acquisition was an important piece in the strategic puzzle to tie up access to follow-up potential from the successful exploration at Jeruk and to provide a production base for the newly established Indonesian core area.\n\nAlso during the first half of 2004, Santos divested its remaining 18.4% shareholding in Magellan\n\nPetroleum Australia Ltd, raising approximately $10.6 million.\n\nEarly in the second half of 2004, Santos concluded the sale of its non-core onshore Otway Basin interests to Origin Energy for $25.75 million. 
This sale resulted in an after-tax profit of $18 million that was booked in 2004.\n\nIn addition, an exploration joint venture was formed with ConocoPhillips in the NT/P61 block offshore Darwin, Northern Territory, to drill the Caldita well and provide Santos with access rights to a potential expansion of the Wickham Point LNG facility. This deal further enhances Santos' infrastructure strategy to leverage its position within vital infrastructure to improve shareholder value while reducing the risk profile of the wildcat exploration program.\n\nDuring the third quarter, Santos expanded its offshore Victorian gas interests to 50% in both the Patricia-Baleen and the Sole gas fields through the acquisition from Trinity Gas Resources of an additional 30% interest in the Patricia-Baleen gas field and associated processing facilities in eastern Victoria and an additional 15% interest in the Sole gas field.\n\nSantos earned its 30% additional equity in the Patricia-Baleen gas field by meeting Trinity's remaining share of drilling costs on the Baleen 4 well which was drilled successfully as a sidetrack well of Baleen 3. Santos will earn its 15% additional equity in the Sole gas field by meeting certain development costs on behalf of Trinity, if and when the Sole joint venture partners proceed to develop this gas resource.\n\nThe acquisition of these Victorian gas interests strengthens Santos' domestic gas and infrastructure strategy that was further enhanced by the OMV purchase announced early in 2005. 
Importantly, Santos is now the operator of the strategic Orbost gas processing facility.\n\nLate in the year, Santos sold its 18.02% share in the Carpentaria Gas Pipeline between Ballera and Mount Isa in Queensland to Australian Pipeline Trust for $59 million, resulting in a $21 million after-tax profit that was booked in the 2004 financial year.\n\n## BRANTAS PSC\n\n", - "page_start": 24, - "page_end": 24, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## MANAGING FOR SUSTAINABLE GROWTH\n\n\n\n'The publication of our first Sustainability Review in 2004 was a major achievement for Santos. The next steps are to undertake projects to improve our performance - not just in Australia but worldwide - and to accurately collect, verify and report on a range of sustainability data.'\n\n## MARTYN EAMES\n\nVice President Corporate and People\n\n\n\nLate in 2004 Santos published First Steps: Sustainability Review , the Company's first standalone publication on this topic. It describes how Santos is implementing the principles of sustainability in the areas of corporate governance, the environment, social responsibility and economic performance.\n\nThis was a significant milestone for Santos as it represents a starting point for the collection of data and the ongoing measurement of performance in the area of sustainability.\n\nCommunicating with stakeholders is an important activity and the publication of the Sustainability Review is a further extension of Santos' commitment in this regard. Santos applies considerable resources to the communication effort and aims to present information in a clear and concise manner in order to generate a greater understanding of the business by its stakeholders.\n\nSantos has been recognised for its achievements in this area. Santos' 2003 Annual Report was featured as an example of best practice reporting in PricewaterhouseCoopers' Trends in Corporate Reporting 2004 publication. 
Reports from companies worldwide are considered in compiling this publication and they must meet specified criteria. This is the third time a Santos annual report has been featured. Santos was also awarded a 2004 Silver Award for Excellence in Annual Reporting for the 2002 Annual Report by the Australasian Reporting Awards.\n\nReceiving independent recognition for these activities serves as a reference point for Santos' desire to continually improve communication performance.\n\nSantos has been listed as an inaugural member of the Australian SAM Sustainability Index (AuSSI). The AuSSI tracks the performance of around 70 Australian companies that lead their industry in terms of economic, environmental and\n\n## TOTAL RECORDABLE CASE FREQUENCY RATE\n\nTRCFR per millions hours worked\n\n\n\nsocial criteria. The index is calculated daily by Dow Jones Indexes and published in The Australian newspaper.\n\nFollowing is an overview of progress and achievements in the area of sustainability for 2004.\n\n## SAFETY IMPROVING\n\nThe health and safety of employees is of paramount concern to Santos. 
Santos delivered another year of improvement in 2004 and achieved its lowest total recordable case frequency rate of 6.4.\n\nFurther improvements were also made with the implementation of the Environment, Health and Safety Management System standards, with Santos operations undergoing full assessments against standards for the first time.\n\nThe results demonstrated considerable improvement over the baseline assessments conducted in 2003 with steady progress in the implementation of the procedures, processes and tools needed to achieve the requirements of the standards.\n\nProcess safety capability which deals with plant and equipment integrity assurance, design and construction, and maintenance, is being developed through the formation of a new set of standards to be incorporated\n\ninto the health and safety management system.\n\nThe safety focus in 2005 will be on finalising a comprehensive set of hazard standards which outline the required controls to ensure that hazards encountered across Santos' operations and activities are well managed.\n\n## POSITIONING THE WORKFORCE FOR THE FUTURE\n\nSantos commenced a major company-wide transformational change program in late 2003. The program was designed to significantly improve Santos' performance in four areas: key business processes, financial performance, organisation structure and company culture.", - "page_start": 27, - "page_end": 27, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## OPERATING CASH FLOW AND CAPITAL EXPENDITURE\n\n$ million\n\n\n\n## DEPRECIATION, DEPLETION AND AMORTISATION\n\nAll things being equal, DD&A could have been expected to be lower this year, as Santos produced lower volumes and had written off the Heytesbury plant in the onshore Otway Basin last year.\n\nHowever, two factors caused an increase in 2004 DD&A. 
Firstly, while reserve revisions were positive overall, negative revisions were predominantly in producing areas which increased depletion rates in 2004, while positive reserve revisions were in areas where Santos is not yet producing or where straight line depreciation is dominant; for example, Casino and John Brookes.\n\nSecondly, on the future development cost side, depletion is up partly because Santos is starting to factor in higher steel and service company costs into long-term economic models.\n\n## CASH FLOW LOWER\n\nWhile Santos had a strong profit year, this is not fully reflected in cash flows.\n\nThere were large movements in trade debtors between years, reflecting the timing of liftings and the payments for them.\n\nIn addition, Santos has not yet been paid for the insurance claim relating to the Moomba incident. A total of $117 million was recognised in sundry income, which represents an estimate of the amount receivable from insurers for lost revenue, additional costs and replacement plant and equipment. At year end the money was still owed and so is not shown as part of operating cash flow. The final quantification of the claim with insurers is progressing.\n\n## RECORD CAPITAL EXPENDITURE\n\nCapital expenditure ended right on target at $930 million a record year for Santos approaching a level which is double DD&A, reflecting how rapidly the portfolio is changing.\n\nSantos will continue with a high development expenditure in 2005, but expects to spend more in line with cash generation. Exploration spend is estimated to be about $150 million, while development spend is expected to be reduced to $530 million and delineation to $90 million. 
Other capital spending is expected to be reduced to $80 million.\n\nThis results in a total planned capital expenditure for 2005 of approximately $850 million.\n\n## FINANCIAL FLEXIBILITY INTACT\n\nSantos ended the year in a strong financial position with its financial flexibility intact, despite the record development spending.\n\nThe FUELS issue was successful and Santos' gearing increased only marginally, despite the large capital program in 2004.\n\nThis is important in Santos' business as the Company needs to be able to fund exploration success as it occurs, and our development projects are increasing in size.\n\n", - "page_start": 12, - "page_end": 12, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## ANALYSING FINANCIAL PERFORMANCE\n\n\n\n'The sound operating results achieved in 2004 underline the changing face of Santos towards a higher value, higher margin business. We ended the year with a strong financial position and our financial flexibility intact.'\n\n## PETER WASOW\n\nChief Financial Officer\n\n## 2004 WAS A YEAR OF GOOD OPERATING RESULTS\n\nOverall the increase in 2004 profit of 16% reflected a year of sound operating performance. Sales revenue was a record $1,501 million, up 2.5% on 2003, reflecting higher prices across most products and was achieved despite lower production as a result of the Moomba incident and declining output from late life fields.\n\nSantos benefited from higher world oil prices and realised US$51.83 per boe in 2004, an increase of 19% over 2003. The benefit of higher world oil prices substantially offset the impact of lower production volumes.\n\nSantos was also able to negotiate higher domestic gas prices (up 4% on average) and deliver new revenue streams from project start-ups and acquisitions during the year.\n\n## PRODUCTION HAMPERED BY MOOMBA INCIDENT\n\n2004 production was lower due to the Moomba incident, which reduced production by 4.6 million\n\nboe. 
Field decline reduced production by a further 5.0 million boe.\n\nOffsetting these factors, Santos' growth projects are starting to come on line and have begun to reverse the decline experienced over the past three years. Two projects were commissioned in 2004: the Bayu-Undan liquids project and the Minerva gas project. In addition, acquisitions contributed 0.8 million boe to production.\n\nFor 2005, production is expected to improve by around 15%, or 4% excluding the impact of the Moomba incident. Santos now expects production to be around 54 million boe in 2005. This increase is largely driven by the commissioning of Mutineer-Exeter in March 2005 and the John Brookes gas field in the middle of the year.\n\n## PRODUCTION COSTS UNDER CONTROL\n\nProduction costs in 2004 were $309 million, up $45 million or 17% on 2003. Analysis shows that Santos was able to continue\n\n## PRODUCTION AND SALES REVENUE\n\n\n\nto effectively control its costs in the face of significant external pressures in the form of rising services and materials prices.\n\nExamining production costs in detail reveals:\n\n - · the start-up of Bayu-Undan and acquisitions added $16 million to Santos' cost base\n - · changes in our accounting added a further $16 million to Santos' production costs\n - · higher insurance premiums ($8 million) and one-off stock write-offs ($5 million) were offset by $17 million in cost savings largely as a result of Santos' continuous improvement initiatives\n - · the Moomba incident resulted in $17 million of one-off costs in 2004.\n\nPiecing this together, the key themes in our financial performance were:\n\n - · cost savings in established production areas more than offset increases in the price of services and materials\n - · Santos' cost base rose as production from new developments and acquisitions were added to the Company's expanding portfolio of producing assets.", - "page_start": 11, - "page_end": 11, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## 
UNLOCKING THE VALUE OF STRATEGIC ASSETS\n\n\n\n'Our objective is to derive value from undeveloped assets which have been outside of Santos' base business.'\n\n## BRUCE WOOD\n\nVice President Strategic Projects\n\nSantos' Strategic Projects team focuses on assets that have proven difficult to commercialise or that need to be considered in a regional context rather than on an individual basis.\n\nThe other key activity for this team has been to lead Santos' continuous improvement focus.\n\n## UNITED STATES GAS\n\nThe US gas business was a major focus in 2004 for a number of reasons, not the least of which are the higher gas prices in the US compared with the domestic Australian market, and the ability to rapidly commercialise new discoveries.\n\nAn ongoing development and delineation program was carried out during the year, yielding better than planned production. The exploration initiative also continued to seek higher risk but more material prospects, aimed at enhancing the move into the shallow water area of the Gulf of Mexico. Exploration results in this area during 2005 will shape Santos' future strategy in the US.\n\n## TIGHT GAS\n\nHydrocarbons contained in traps with poor permeability are known as 'tight gas'. Large tight gas resources are known to exist in the Cooper Basin. 
Under current circumstances, this gas cannot be economically developed but, with the combination of improved production techniques and better commercial terms, could prove attractive.\n\nSantos assessed the resources and potential technologies that could be applied to unlock these resources during 2004 and is now\n\nworking up a range of possible evaluation projects to be undertaken in 2005.\n\n## NORTHERN AUSTRALIA GAS\n\nSantos has a significant existing gas resource base and some promising exploration acreage in the waters offshore Darwin, where it intends to drill a gas exploration well later this year.\n\nThe Company currently operates the Mereenie gas field in the Amadeus Basin in central Australia, which supplies gas to Darwin. Santos' first offshore gas production in northern Australia begins in 2006, sending BayuUndan gas to Darwin for conversion to LNG. Santos plans to build upon its growing position in the region to target further development which could ensure long-term gas supplies for the current market, or an expanded Northern Territory domestic market, or for export.\n\n## PAPUA NEW GUINEA GAS\n\nSantos is in active discussions with the PNG Gas Project participants to potentially re-enter the PNG Gas Project. 
Santos has a significant interest in a large part of the liquids-rich Hides gas field which is integral to the development of the Project.\n\n2004 CONTINGENT RESOURCES (TOTAL 1,443 mmboe)\n\n\n\n - Northern Australia 709 mmboe\n\nWestern Australia\n\n71 mmboe\n\nCentral Australia 240 mmboe\n\n - Southern Australia 32 mmboe\n - Papua New Guinea 391 mmboe\n\n\n\n\n\n", - "page_start": 23, - "page_end": 23, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "Guarantees provided by Santos Ltd for borrowings in respect of controlled entities are disclosed in note 15.\n\nSantos Ltd has provided parent company guarantees in respect of:\n\n - (a) the funding obligations of its subsidiary companies, Santos Timor Sea Pipeline Pty Ltd and Santos Darwin LNG Pty Ltd, relating to the construction of a pipeline from the Bayu-Undan Field to Wickham Point in Darwin and the construction of the LNG Plant in Darwin respectively, and has provided a funding commitment letter to these subsidiary companies together with Santos (JPDA 91-12) Pty Ltd. As at 31 December 2004 the expenditure commitments of Santos Timor Sea Pipeline Pty Ltd and Santos Darwin LNG Pty Ltd for the above mentioned projects totalled US$41.3 million (2003: US$107.6 million);\n\n", - "page_start": 84, - "page_end": 84, - "source_file": "ASX_STO_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed5.pdf", - "query": "What is the primary aim of the OSPRO cohort study ?", - "target_page": 2, - "target_passage": " The primary aim of the OSPRO cohort study was to de velop and validate review of systems (i.e. evidence of sys temic involvement) and yellow flag (i.e. 
pain-related psychological distress) screening tools for use in out patient orthopedic physical therapy settings", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Methods\n\n## Dataset and patient population\n\nThis study used data from the Orthopedic Physical Therapy -Investigative Network ' s (OPT-IN) Optimal Screening for Prediction of Referral and Outcome (OSPRO) validation cohort study, a longitudinal prospective study of individuals with knee, shoulder, back or neck pain seeking Physical Therapy in the US. A convenience sample was recruited from December 2014 and December 2015 by participating OPT-IN clinics. The OPT-IN clinics that participated in data collection represented multiple geographic regions in the US including the Mideast, Southeast, Great Lakes, Rocky Mountain States and Far West, with an attempt to balance recruitment between urban and rural settings over the entire OPT-IN network. Physical therapists practicing in these clinics identified eligible participants at initial evaluation and directed them to a secure study website for the informed consent process and baseline self-report assessment. Eligibility criteria have been thoroughly reported elsewhere [19] and were intentionally broad to develop a cohort that was generalizable to those seeking physical therapy for common musculoskeletal conditions in the US. Participants completed follow-up self-reported assessments on the study website at 4 weeks, 6 months and 12 months. Participants were notified of a pending assessment by an email that directed them back to the study website to complete their follow-up assessment. For additional details of the dataset and cohort, readers are directed to the published cohort profile [19].\n\nThe primary aim of the OSPRO cohort study was to develop and validate review of systems (i.e. evidence of systemic involvement) and yellow flag (i.e. 
pain-related psychological distress) screening tools for use in outpatient orthopedic physical therapy settings. These screening tools, once validated and refined for clinical decision making, may improve the value of care delivery by accurately identifying individuals who 1) are appropriate for referral to other providers for management of non-musculoskeletal symptoms, and/or 2) would benefit from enhanced, psychologically-informed physical therapy. Early identification of individuals most appropriate for these modified pathways of care has the potential to reduce wasteful downstream health care utilization, limit the risk of unwarranted and costly care escalation, and improve clinical outcomes. Results of the primary analyses examining the predictive ability of the OSPRO tools for pain, disability, health status, and comorbidity outcomes have been previously published [20]. Pre-planned secondary analyses included prediction of persistent pain state [21] and this current analysis predicting future healthcare utilization. All subjects consented to participation in the study and ethics approval was granted by the University of Florida Institutional Review Board.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed5.pdf" - }, - { - "text": "shown to identify approximately 95% of positive red-flag responders. For statistical analyses, the ' yes ' responses were added for each version and included in each model as a continuous independent variable.\n\n## OSPRO Yellow Flag tool (OSPRO-YF)\n\nThe OSPRO-YF is a yellow flag assessment tool that includes items from pain vulnerability domains (negative affect and fear-avoidance) and pain resilience domains (positive affect and self-efficacy) to aid with identification of pain-related psychological distress in outpatient orthopedic physical therapy settings [37]. 
The OSPRO-YF has good concurrent validity with pain intensity and region-specific disability [37] and is capable of predicting pain intensity, disability, quality of life and persistent pain 12 months following physical therapy in patients with musculoskeletal pain [20, 21]. The full-length OSPRO-YF has 17-items, however a shortened 10-item version is also available with an acceptable trade-off in accuracy. Like the OSPRO-ROS, the OSPRO-YF is designed for implementation into electronic medical record (EMR) systems to quickly and accurately identify risk for a variety of clinical outcomes [19]. For statistical analyses, a summary score was derived for each version by adding the item responses after reverse-scoring items 2, 13, 14, 15 and 17 so that higher scores indicate higher pain-related psychological distress. The summary score was then included in each model as a continuous independent variable.\n\n## Intervention\n\nAll physical therapy treatment was provided at the discretion of the treating clinician. The duration of the episode, the number of physical therapy visits, and individual treatment parameters (type, intensity, duration, frequency) were not collected for pragmatic reasons. In particular, clinical and utilization data are not commonly collected in a standardized format and would need to be extracted from disparate medical record databases across different health care systems to assess treatment. This was not feasible given the scope and design of this multisite survey-based study. However, instead of coding treatment type we included baseline-to-4 week change in pain intensity, region-specific disability, and OSPRO-YF scores in each model as measures of treatment response. 
In that manner the individual effects of the treatment received were included in the predictive models, without directly accounting for the type of treatment.\n\n## Healthcare utilization outcomes\n\nSelf-reported health care utilization was assessed at 6- and 12-months following initial evaluation by online assessment. Questions were derived from previous population-based studies involving musculoskeletal pain that have used survey methods for follow-up assessment [22, 23]. Study\n\nparticipants were asked whether they used any of the following healthcare services for their primary musculoskeletal pain complaint in the time following their physical therapy treatment:\n\n- 1. Opioid painkillers (eg. Vicodin, Lortab, Hydrocodone, Fentanyl, Percocet, Oxycontin, Oxycodone, tramadol, Ultram, Diludid, etc)\n- 2. Injections\n- 3. Surgery\n- 4. Diagnostic tests or Imaging (eg. xray, MRI, CT scan, nerve conduction test, etc.)\n- 5. Emergency room visits", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed5.pdf" - }, - { - "text": "In future studies, we will embed the OSPRO tools into electronic medical record (EMR) databases to refine and test outcomes prediction models at the health care systems level. Importantly, we will collect clinical encounter data through the EMR and combine it with administrative or billing data to confirm the results of this study with more objective measures of health care use. These studies will also allow us to provide better guidance on how to use the OSPRO tools to identify serious psychiatric involvement or systemic sources of pain that require medical referral. Finally, we will explore alternative scoring strategies for the tools, such as weighted scoring for the OSPRO-ROS and use of predicted full-length psychological questionnaire scores for the OSPRO-YF. 
Healthcare providers could then use the collective information from these studies to build learning health systems that facilitate effective, real-time clinical decision-making support to improve value of care for patients with musculoskeletal pain.\n\n## Conclusion\n\nBaseline disability and change in pain intensity were important predictors of any subsequent pain-related healthcare utilization, while predictors of individual service utilization were outcome-specific. Identification of risk is improved through treatment monitoring for pain and, in some cases, disability and pain-related psychological distress. Comorbidity burden was an important predictor of subsequent utilization of opioids and diagnostic tests and imaging, both of which have been recent targets of healthcare policy to constrain their unnecessary use. Future research is needed to refine these predictor variables and incorporate them into risk models that support clinical decision-making so that treatment effectiveness and efficiency are optimized in value-based systems.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed5.pdf" - }, - { - "text": "## Comorbidities\n\n## Charlson comorbidity index (CCI)\n\nThe Charlson Comorbidity Index was used to measure the presence of chronic comorbid medical conditions [34]. It lists 19 medical conditions that participants are asked to indicate whether they ' have ever been diagnosed with by a physician ' . Conditions are weighted and added for an overall measure of comorbidity burden. 
The CCI has demonstrated good test-retest reliability (0.91) and positive but weak to modest correlations with medication use, hospitalizations, length of stay, total charges, and pharmacy and laboratory charges for older adults in general medical care and surgical care settings [35].\n\n## Assessment tools\n\n## OSPRO Review of Systems tool (OSPRO-ROS)\n\nThe OSPRO-ROS is a review-of-systems screening tool for use in outpatient orthopedic physical therapy settings [36]. The OSPRO-ROS has demonstrated good concurrent validity with depression and a comprehensive 97-item battery of non-musculoskeletal symptoms (i.e., red flags). [36] Moderate to strong predictive capabilities of the OSPRO-ROS have been reported for persistence of pain, quality of life, and change in comorbidity 12 months following physical therapy in patients with musculoskeletal pain [20, 21]. The OSPRO-ROS includes standard symptom descriptors to aid with identification of systemic or non-musculoskeletal origins of musculoskeletal pain. It includes questions related to symptoms of the cardiovascular, gastrointestinal, endocrine, nervous, integumentary, pulmonary, and musculoskeletal systems. The full-length 23-item version of the OSPRO-ROS is capable of identifying 100% of positive red-flag responders (i.e. indicating ' yes ' to at least one systemic symptom on a questionnaire) in outpatient orthopedic physical therapy settings. [36] A shorter, 10-item version is also available that has been", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed5.pdf" - }, - { - "text": "Block 2: 10-item OSPRO-YF and 10-item OSPRO-ROS at baseline.\n\nBlock 3: Remaining items from the OSPRO-YF (+ 7 items) and OSPRO-ROS (+ 13 items). These were included to determine whether full-length versions of the tools provided better prediction over shortened versions.\n\nBlock 4: Baseline-to-4 week change in pain intensity, region-specific disability, and OSPRO-YF scores. 
Early changes in these variables may be associated with improved prediction of outcomes over baseline variables alone [38]. This approach modeled change in these variables as a measure of treatment response and allowed us to assess the relative value of treatment monitoring for the prediction of healthcare utilization outcomes.\n\nFor the first analysis, binary logistic regression was used to determine predictors of any healthcare utilization following physical therapy, with the dependent variable defined as reporting one or more utilization events for any of the potential healthcare services over the entire follow-up period. For analyses of specific services, utilization was dichotomized for each service. Specific service utilization over early (through 6 months) and late (6 months to 12 months) phases following physical therapy were collapsed to create a single dichotomous utilization indicator for each service over the entire study follow-up period. Any utilization of the service over that period was categorized as YES. Separate multivariate binary logistic regression models were then fitted for the dichotomous utilization indicator (i.e. YES or NO) of each healthcare service (e.g. opioid use, injection, imaging, surgery, and emergency room visits).\n\nFor all analyses, full hierarchical multivariate models were first fit to assess the unique contributions of each block. This approach allowed us to determine the relative contributions of baseline demographic and health-related variables, the newly developed OSPRO-ROS and OSPRO-YF tools, and response to treatment via time varying variables (e.g., pain intensity and region specific function). However, since our primary aim was to develop concise and accurate utilization prediction models for efficient assessment of risk, we then separately developed stepwise models using backward selection for each dependent variable to derive parsimonious prediction item sets. 
Parsimonious models were chosen as a more conservative approach to identifying individual predictors given the potential for overfitting full multivariate models because of high subject attrition. For stepwise models, the p -value threshold was 0.05 for entry and 0.10 for removal. Overall fit for each model was examined with Hosmer & Lemeshow test, chi-square and pseudo-R 2 values (e.g. Nagelkerke) when\n\nappropriate. Comparison of adjusted odds ratios (OR) and 95% confidence interval (CI) were used to determine the relative strength of each predictor in parsimonious models. Multicollinearity was assessed using variance inflation factor (VIF) and tolerance, where VIFs < 10 and tolerances > 0.1 suggested no significant collinearity among independent variables [39].\n\n## Planned sensitivity analyses for missing data", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed5.pdf" - }, - { - "text": "Table 2 Baseline health-related information for the full cohort, and for those with complete and incomplete follow-up", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed5.pdf" - }, - { - "text": "## Abbreviations\n\nCCI: Charlson comorbidity index; OSPRO: Optimal Screening for Prediction of Referral and Outcome; OSPRO-ROS: Review of systems screening tool from OSPRO cohort study; OSPRO-YF: Pain-related psychological distress screening tool from OSPRO cohort study\n\n## Acknowledgements\n\nThe authors wish to acknowledge Dr. Roger B. Fillingim and Dr. Nicole M. Marlow for their input on study design and analysis. 
OPT-IN Network Participants included: University of Florida: Joel Bialosky; UF Health: Giorgio Zeppieri, Jr., Daniel Broome, Marty Huegel, Debi Jones, Steve Emery, Mike Hodges, Derek Miles, Jodi Davis, Charlene Stubbington, Mike Darcy; ATI Physical Therapy: Ellen Shanley, Thomas Denninger, Jenna Bartsokas, Elise Harris, Jordan Floyd, Wade Harrell; University of Southern California: Lori Michener, Amy Pomrantz, Brooks Rehabilitation: Raine Osborne, Nata Salvatori, John Leschitz, Brian Hagist, Laura Langer, Tim Shreve, Nando Malaman, Michael Bourassa, Justin Zych, Tasha Mouton Shanklin; University of Illinois at Chicago: Aaron Keil, Brad Myers, Deb Davey, Justin Payette, Adam Wielechowski, Richard Severin, Erik Martinez; Indiana State University: Ryan Hanigan, Carolina Valencia, Danielle Jena, Nicole Woodard; Arcadia University: Angela Tate; Life ' s Work Physical Therapy: Sandra Stryker, Aaron Leonard, Erin Courtney, Brandon Little, Kathryn Jankord, Brad Simpson, Charleen Hall, Paige Nixon, Julia Neufeld; University of Colorado, Denver: Paul Mintken, Virginia Arnette, Andrea Barsch.\n\n## Funding\n\nThis project was supported by the 2013 Clinical Research Network grant from the Orthopaedic Section, American Physical Therapy Association. The funding body had no role in the design of the study or collection, analysis, and interpretation of the data or in writing the manuscript. TAL received additional support from the Foundation for Physical Therapy with Promotion of Doctoral Studies I & II (PODS I& II) Awards. SZG and JMB received additional support from Brooks Rehabilitation while designing this study. 
JMB received support from the American National Institutes of Health (NIH) Rehabilitation Research Career Development Program (K12-HD055929).\n\n## Availability of data and materials\n\nThe data that support the findings of this study are available from the corresponding author upon reasonable request.\n\n## Authors ' contributions\n\nTAL provided input on study design and analysis plan, drafted the manuscript and approved final version of the manuscript. SZG secured funding, provided overall design, gave input on the analysis plan and approved final version of the manuscript. JMB provided input on design and analysis plan and approved final version of the manuscript.\n\n## Ethics approval and consent to participate\n\nEthics approval for this study was granted by the University of Florida Institutional Review Board-01 (Study #: 525 -2012). All participants provided written consent to participate in the study.\n\n## Consent for publication\n\nNot applicable.\n\n## Competing interests\n\nThe authors declare that they have no competing interests.\n\n## Publisher ' s Note\n\nSpringer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n## Author details\n\n1 Duke Clinical Research Institute, Duke University, 2400 Pratt Street, Durham, NC 27705, USA. 2 Department of Physical Therapy, College of Public Health & Health Professions, University of Florida, Box 100154, UFHSC, Gainesville, FL 32610-0154, USA. 3 Brooks Rehabilitation Clinical Research Center, 3901 University Blvd. South, Suite 103, Jacksonville, FL 32216, USA. 4 Duke Clinical Research Institute, Department of Orthopaedic Surgery, Duke University, 2400 Pratt Street, Durham, NC 27705, USA.\n\nReceived: 9 November 2017 Accepted: 14 August 2018\n\n## References", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed5.pdf" - }, - { - "text": "reproduced in an independent sample [42, 43]. 
With 18 potential predictors, a sample of n = 180 reporting healthcare utilization at follow-up would be sufficient for the proposed analyses. However, this estimate is conservative. Other methods for determining sample size for prediction analyses suggest an overall sample size of N > 50 + 8* m (where m = number of independent variables) [44] or N > 104 + number of independent predictors [45, 46]. For these less conservative estimates, the projected study sample size is sufficient for the proposed analyses.\n\n## Results\n\nFour hundred and forty subjects were recruited at initial evaluation. Follow-up at 4 weeks was 75.0% ( n =330), at 6 months was 69.0% ( n =304) and at 12 months was 65.2% ( n =287). Baseline demographics and health-related characteristics for the full cohort, as well as those who did and did not complete all follow-up are presented in Tables 1, 2 and 3. Those who did not complete follow-up were younger, more likely to be non-white, had less than a college degree, were more likely to have had sudden symptom onset, had higher baseline pain intensity, and had higher baseline pain-related psychological distress measured by the OSPRO-YF. Only those with complete\n\nfollow-up data at each time point were considered for prediction analyses ( n = 246, 55.9%).\n\nOverall, 43.1% ( n = 106/246) of those with complete follow-up data utilized at least one healthcare service following the physical therapy episode. Distribution of utilization for specific services is provided in Table 4. For multivariate analyses, all VIFs were less than 10 and tolerance values greater than 0.1 suggesting no significant multicollinearity among independent variables.\n\n## Full multivariate model performance\n\nOverall performance for each full multivariate model is listed in Table 5. 
Block 1 (Demographic, clinical and comorbidity) consistently contributed to prediction of healthcare utilization and accounted for the greatest amount of variation in utilization outcome for all models. Block 4 (change scores for pain, disability, and OSPRO-YF) provided statistically significant contributions in all models except prediction of injection. Blocks including baseline OSPRO-YF and OSPRO-ROS, both short and long forms, did not predict utilization outcomes. Weighted models consistently outperformed their complete case analysis model counterparts with overall model pseudo-R 2 values ranging from .337 (Any care) to .611 (Emergency room).\n\nTable 1 Demographic information for the full cohort, and for those with complete and incomplete follow-up", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed5.pdf" - }, - { - "text": "- /SM590000 Creating or adding comments to the BDT Structured Field\n - /SM590000 Creating or adding group names to the BNG - ENG Structured Fields\n - /SM590000 Changing obsolete Structured Fields to current Structured Fields (for example, MCF1 to MCF2, or PTD1 to PTD2)\n\n## 7.5 OS/390 indexer on z/OS and AIX\n\nThe OS/390 indexer is supported on both the z/OS and AIX implementations of Content Manager OnDemand. The indexing parameters are the same for both implementations. If you are migrating from z/OS to AIX, or from AIX to z/OS, you can continue to use the OS/390 indexer and not change your indexing parameters.\n\nYou can use the OS/390 indexer to extract index data from line data and AFP reports. In addition, other data types, such as TIFF images, can be captured by using the ANYSTORE exit (ANYEXIT is described in 11.3, 'OS/390 indexer exits' on page 248).\n\nThe OS/390 indexer is a single pass indexer. (It does not create an intermediate file.) It therefore provides better performance than ACIF. 
The COBOL Runtime Library is required on AIX to run the OS/390 indexer, and it is included in the Content Manager OnDemand Multiplatform software.", - "page_start": 202, - "page_end": 202, - "source_file": "sg246915.pdf" - }, - { - "text": "## Data\n\nPolicy information about availability of data\n\nAll manuscripts must include a data availability statement. This statement should provide the following information, where applicable:\n\n - - Accession codes, unique identifiers, or web links for publicly available datasets\n - - A description of any restrictions on data availability\n - - For clinical datasets or third party data, please ensure that the statement adheres to our policy\n\nThe dataset consists of 26 MRI scans (T1w, T2w, and diffusion scans) alongside state-dependent measures and serum assessments of ovarian sex hormones for each session. The data is publicly available on https://openneuro.org/datasets/ds005299.\n\n## Research involving human participants, their data, or biological material\n\nPolicy information about studies with human participants or human data. See also policy information about sex, gender (identity/presentation), and sexual orientation and race, ethnicity and racism.\n\nReporting on sex and gender\n\nOur study focused on a single female participant to explore how pregnancy shapes the human brain.\n\nReporting on race, ethnicity, or other socially relevant groupings\n\nThe subject was white.\n\nPopulation characteristics\n\nThis was a precision imaging study of one 38-year old primiparous woman.\n\nRecruitment\n\nOur participant (corresponding author E.R.C.) was a healthy primiparous woman who underwent in-vitro fertilization (IVF) to achieve pregnancy. The project was conceived by E.R.C. and she wished to use herself as the participant, as has been done in previous \"dense-sampling\" studies (cf. 
Poldrack et al., 2015; Pritschet et al., 2020).\n\nEthics oversight\n\nThe participant gave written informed consent and the study was approved by the University of California, Irvine Human Subjects Committee.\n\nNote that full information on the approval of the study protocol must also be provided in the manuscript.\n\n## Field-specific reporting\n\nPlease select the one below that is the best fit for your research. If you are not sure, read the appropriate sections before making your selection.\n\nLife sciences\n\nBehavioural & social sciences\n\nEcological, evolutionary & environmental sciences\n\nFor a reference copy of the document with all sections, see nature.com/documents/nr-reporting-summary-flat.pdf\n\n## Life sciences study design\n\nAll studies must disclose on these points even when the disclosure is negative.\n\nSample size\n\nWe used precision imaging to deeply-phenotype, densely-sample an individual over the gestational window. As this study was the first of it's kind, our sample size was an N=1 design. Although this limits the generalizability of our findings, this project serves as a proof-of-concept, showcasing the value and feasibility of studying a woman's brain during the transition to motherhood.\n\nData exclusions\n\nno history of neuropsychiatric diagnosis, endocrine disorders, prior head trauma or history of smoking\n\nReplication\n\nThis is the first study of it's kind; therefore, there are no study replications as of yet. However, to reproduce our results internally across software packages, we also ran the T1w data through the longitudinal FreeSurfer cortical thickness pipeline (Dale et al., 1999), which corroborated our finding that gray matter volume declines throughout gestation (e.g., successful internal replication). 
This pattern of results not only held across software packages, but also brain parcellations (e.g., Schaefer 400-cortical atlas and Desikan-Killiany cortical atlas).\n\nRandomization\n\nThis was an observational study design, and therefore not randomized.\n\nBlinding\n\nFor medial temporal lobe segmentation, scans were randomized and segmentation was performed in a random order, blind to pregnancy stage. No other blinding was applicable, given the observational study of brain changes in response to advancing gestational week.\n\n## Reporting for specific materials, systems and methods", - "page_start": 14, - "page_end": 14, - "source_file": "pubmed4.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed5.pdf", - "query": "What is the range of the pain rating scale ?", - "target_page": 3, - "target_passage": "Pain intensity was assessed by the numerical pain rating scale (NPRS) ranging from “0” (no pain) to “10” (worst pain imaginable)", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Healthcare utilization predictors\n\nWe collected potential predictors by self-reported questionnaires at initial evaluation using an online study website. Participants were directed back to the study website 4 weeks following initial evaluation to again complete questions on pain intensity, disability, and pain-related psychological distress. Change in pain intensity, disability, and pain-related psychological distress from baseline to 4 weeks were modeled as treatment response variables and included as potential predictors.\n\n## Sociodemographic and health-related information\n\nParticipants completed a standard intake questionnaire form previously used in our clinical studies that assessed age, sex, race, and insurance provider type. 
This questionnaire also assessed health-related variables included anatomical region of primary pain complaint (low back, neck, shoulder, or knee) and whether the patient had undergone surgery for their primary pain complaints (yes or no). Due to small cell sizes for certain categories, race was dichotomized as white or non-white. For insurance type, participants were asked to choose one of the following options: private, public (Medicare and/or Medicaid), uninsured/self-pay, worker ' s compensation, and other/commercial insurance. Among the study sample, we observed few with no insurance ( n = 7) or worker ' s compensation ( n = 14). The study also included relatively few with ' other/commercial insurance ' ( n = 45). Within this group, informal assessment of these various plans suggested high heterogeneity of plan characteristics and coverage. Due to the small number of subjects in these individual insurance strata and to improve interpretability of results, we collapsed those reporting no insurance, worker ' s compensation and other/commercial insurance into a single category (i.e., ' Other ' ). Therefore, insurance type was categorized as private, public, or other (no insurance, worker ' s compensation, or other/commercial insurance) for purposes of analysis.\n\n## Pain-related clinical variables\n\nPain status was determined using established definitions that account for the duration of pain and activity limitations [22, 23] using the following two questions: 1) ' How long have you been experiencing your current painful symptoms? ' and 2) ' Have you experienced ANY pain and activity limitations every day for the past 3 months? 
' Responses to question 1 of ' greater than 90 days ' or responses to question 2 of ' Yes ' were used to classify patients as having persistent pain at initial evaluation.\n\n## Pain intensity\n\nPain intensity was assessed by the numerical pain rating scale (NPRS) ranging from ' 0 ' (no pain) to ' 10 ' (worst\n\npain imaginable) [24 -26]. Participants rated their current pain intensity, as well as their best (lowest) and worst (highest) pain intensity over the past 24 h. Current, best and worst pain ratings were averaged for purposes of analysis.\n\n## Region-specific disability\n\nSelf-reported region-specific disability was assessed with the Neck Disability Index [27, 28], Oswestry Disability Questionnaire [29, 30], Quick Disability of Arm Shoulder and Hand [31] or International Knee Documentation Committee Subjective Knee Form [32] for cervical, low back, shoulder and knee pain, respectively. Region-specific disability measures were z-transformed for purposes of analysis, consistent with our prior work involving multiple anatomical regions [33].\n\n## Comorbidities\n\n## Charlson comorbidity index (CCI)", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed5.pdf" - }, - { - "text": "Credit ratings are not recommendations for investors to purchase, hold or sell the rated securities, nor are they a comment on market price or investor suitability. There is no assurance that a rating will remain in effect for a given period of time, or that a rating will not be revised or withdrawn entirely by a rating agency if it believes circumstances warrant it. The ratings on our senior debt provided by Standard & Poor's, Fitch and Moody's are investment grade ratings.\n\n", - "page_start": 64, - "page_end": 64, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## RESEARCH ARTICLE\n\n## Prediction of healthcare utilization following an episode of physical therapy for musculoskeletal pain\n\nTrevor A. Lentz 1* , Jason M. Beneciuk 2,3 and Steven Z. 
George 4\n\n## Abstract\n\nBackground: In the United States, value-based purchasing has created the need for healthcare systems to prospectively identify patients at risk for high healthcare utilization beyond a physical therapy episode for musculoskeletal pain. The purpose of this study was to determine predictors of pain-related healthcare utilization subsequent to an index episode of physical therapy for musculoskeletal pain.\n\nMethods: This study assessed data from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) longitudinal cohort study that recruited individuals with a primary complaint of neck, low back, knee or shoulder pain in physical therapy ( n = 440). Demographics, health-related information, review of systems, comorbidity and pain-related psychological distress measures were collected at baseline evaluation. Baseline to 4-week changes in pain intensity, disability, and pain-related psychological distress were measured as treatment response variables. At 6-months and 1-year after baseline evaluation, individuals reported use of opioids, injection, surgery, diagnostic tests or imaging, and emergency room visits for their pain condition over the follow-up period. Separate prediction models were developed for any subsequent care and service-specific utilization.\n\nResults: Subsequent pain-related healthcare utilization was reported by 43% ( n = 106) of the study sample that completed the 12-month follow-up ( n = 246). Baseline disability and 4-week change in pain intensity were important global predictors of subsequent healthcare utilization. 
Age, insurance status, comorbidity burden, baseline pain, and 4-week changes in pain intensity, disability and pain-related psychological distress predicted specific service utilization.\n\nConclusion: In those completing follow up measures, risk of additional pain-related healthcare utilization after physical therapy was best predicted by baseline characteristics and 4-week treatment response variables for pain intensity, disability and pain-related psychological distress. These findings suggest treatment monitoring of specific response variables could enhance identification of those at risk for future healthcare utilization in addition to baseline assessment. Further study is required to determine how specific characteristics of the clinical encounter influence future utilization.\n\nKeywords: Screening, Psychological distress, Multimorbidity, Value, Treatment monitoring\n\n## Background\n\nMusculoskeletal pain is a prevalent and costly health condition with far-reaching public health consequences including chronic pain, disability and opioid-related addiction [1]. Clinical practice guidelines now recommend non-pharmacological treatment as frontline management for musculoskeletal pain, which will lead\n\n1\n\nDuke Clinical Research Institute, Duke University, 2400 Pratt Street, Durham,\n\nNC 27705, USA\n\nFull list of author information is available at the end of the article\n\n\n\nto increased utilization of services such as physical therapy [1 -3]. Physical therapy is effective for improving disability and reducing costs associated with many musculoskeletal pain conditions [4 -9]. However, pain-related healthcare utilization beyond the physical therapy episode (e.g. subsequent use of surgery, injection, opioids, etc.) may indicate suboptimal treatment response, the presence of more complex needs, or unwarranted escalation of care. 
Downstream healthcare utilization is not often considered as an outcome of care or indication of treatment effectiveness for musculoskeletal pain. But the importance of\n\n\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed5.pdf" - }, - { - "text": "- 21. Beneciuk JM, Lentz TA, He Y, Wu SS, George SZ. Prediction of persistent musculoskeletal pain at 12 months: a secondary analysis of the Optimal Screening for Prediction of Referral and Outcome (OSPRO) validation cohort study. Phys Ther. 2018;98:290 -301.\n - 22. Freburger JK, Holmes GM, Agans RP, Jackman AM, Darter JD, Wallace AS, et al. The rising prevalence of chronic low back pain. Arch Intern Med. 2009; 169:251 -8.\n - 23. Carey TS, Freburger JK, Holmes GM, Jackman A, Knauer S, Wallace A, et al. Race, care seeking, and utilization for chronic back and neck pain: population perspectives. J Pain Off J Am Pain Soc. 2010;11:343 -50.\n - 24. Jensen MP, Turner JA, Romano JM, Fisher LD. Comparative reliability and validity of chronic pain intensity measures. Pain. 1999;83:157 -62.\n - 25. Bolton JE. Accuracy of recall of usual pain intensity in back pain patients. Pain. 1999;83:533 -9.\n - 26. Childs JD, Piva SR, Fritz JM. Responsiveness of the numeric pain rating scale in patients with low back pain. Spine. 2005;30:1331 -4.\n - 27. Vernon H. The neck disability index: state-of-the-art, 1991-2008. J Manip Physiol Ther. 2008;31:491 -502.\n - 28. Vernon H, Mior S. The neck disability index: a study of reliability and validity. J Manip Physiol Ther. 1991;14:409 -15.\n - 29. Hudson-Cook N, Tomes-Nicholson K, Breen A. A revised Oswestry disability questionnaire. In: Roland M, Jenner J, editors. Back pain: new approaches to rehabilitation and education. New York: Manchester University Press; 1989. p. 187 -204.\n - 30. Fritz JM, Irrgang JJ. A comparison of a modified Oswestry low back pain disability questionnaire and the Quebec back pain disability scale. Phys Ther. 2001;81:776 -88.\n - 31. 
Beaton DE, Wright JG, Katz JN, Upper Extremity Collaborative Group. Development of the QuickDASH: comparison of three item-reduction approaches. J Bone Joint Surg Am. 2005;87:1038 -46.\n - 32. Irrgang JJ, Anderson AF, Boland AL, Harner CD, Kurosaka M, Neyret P, et al. Development and validation of the international knee documentation committee subjective knee form. Am J Sports Med. 2001;29:600 -13.\n - 33. Butera KA, Lentz TA, Beneciuk JM, George SZ. Preliminary evaluation of a modified STarT back screening tool across different musculoskeletal pain conditions. Phys Ther. 2016;96:1251 -61.\n - 34. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40:373 -83.\n - 35. Katz JN, Chang LC, Sangha O, Fossel AH, Bates DW. Can comorbidity be measured by questionnaire rather than medical record review? Med Care. 1996;34:73 -84.\n - 36. George SZ, Beneciuk JM, Bialosky JE, Lentz TA, Zeppieri G, Pei Q, et al. Development of a review-of-systems screening tool for orthopaedic physical therapists: results from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) cohort. J Orthop Sports Phys Ther. 2015;45: 512 -26.\n - 37. Lentz TA, Beneciuk JM, Bialosky JE, Zeppieri G, Dai Y, Wu SS, et al. Development of a yellow flag assessment tool for orthopaedic physical therapists: results from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) cohort. J Orthop Sports Phys Ther. 2016;46:327 -43.\n - 38. Beneciuk JM, Fritz JM, George SZ. The STarT back screening tool for prediction of 6-month clinical outcomes: relevance of change patterns in outpatient physical therapy settings. J Orthop Sports Phys Ther. 2014;44: 656 -64.", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed5.pdf" - }, - { - "text": "shown to identify approximately 95% of positive red-flag responders. 
For statistical analyses, the ' yes ' responses were added for each version and included in each model as a continuous independent variable.\n\n## OSPRO Yellow Flag tool (OSPRO-YF)\n\nThe OSPRO-YF is a yellow flag assessment tool that includes items from pain vulnerability domains (negative affect and fear-avoidance) and pain resilience domains (positive affect and self-efficacy) to aid with identification of pain-related psychological distress in outpatient orthopedic physical therapy settings [37]. The OSPRO-YF has good concurrent validity with pain intensity and region-specific disability [37] and is capable of predicting pain intensity, disability, quality of life and persistent pain 12 months following physical therapy in patients with musculoskeletal pain [20, 21]. The full-length OSPRO-YF has 17-items, however a shortened 10-item version is also available with an acceptable trade-off in accuracy. Like the OSPRO-ROS, the OSPRO-YF is designed for implementation into electronic medical record (EMR) systems to quickly and accurately identify risk for a variety of clinical outcomes [19]. For statistical analyses, a summary score was derived for each version by adding the item responses after reverse-scoring items 2, 13, 14, 15 and 17 so that higher scores indicate higher pain-related psychological distress. The summary score was then included in each model as a continuous independent variable.\n\n## Intervention\n\nAll physical therapy treatment was provided at the discretion of the treating clinician. The duration of the episode, the number of physical therapy visits, and individual treatment parameters (type, intensity, duration, frequency) were not collected for pragmatic reasons. In particular, clinical and utilization data are not commonly collected in a standardized format and would need to be extracted from disparate medical record databases across different health care systems to assess treatment. 
This was not feasible given the scope and design of this multisite survey-based study. However, instead of coding treatment type we included baseline-to-4 week change in pain intensity, region-specific disability, and OSPRO-YF scores in each model as measures of treatment response. In that manner the individual effects of the treatment received were included in the predictive models, without directly accounting for the type of treatment.\n\n## Healthcare utilization outcomes\n\nSelf-reported health care utilization was assessed at 6- and 12-months following initial evaluation by online assessment. Questions were derived from previous population-based studies involving musculoskeletal pain that have used survey methods for follow-up assessment [22, 23]. Study\n\nparticipants were asked whether they used any of the following healthcare services for their primary musculoskeletal pain complaint in the time following their physical therapy treatment:\n\n- 1. Opioid painkillers (eg. Vicodin, Lortab, Hydrocodone, Fentanyl, Percocet, Oxycontin, Oxycodone, tramadol, Ultram, Diludid, etc)\n- 2. Injections\n- 3. Surgery\n- 4. Diagnostic tests or Imaging (eg. xray, MRI, CT scan, nerve conduction test, etc.)\n- 5. Emergency room visits", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed5.pdf" - }, - { - "text": "## Emergency room\n\nModels for emergency room use had the highest pseudo-R 2 values of any individual service (0.48 -0.50), but also had the largest number of predictors (8 -9). Agreement between complete case and weighted models was moderate. The models converged on the following predictors: age (OR = 0.91 -0.94, p < 0.05), insurance (OR = 8.99 -13.15, p <0.05), baseline disability (OR = 3.33 -4.88, p < 0.001), and change in pain (OR = 1.59 -1.77, p < 0.05). 
Higher utilization was associated with younger age, other insurance (e.g., self-pay,\n\nWorker ' s Compensation, or other commercial insurance) compared to private insurance, higher baseline disability and worsening of pain. In the weighted analysis, subjects with knee pain were less likely to utilize the emergency room than those with low back pain. However, this relationship was not significant ( p = .06) in the complete case analysis. Of the significant predictors in both models, insurance status was the strongest individual predictor of subsequent emergency room use.\n\n## Discussion\n\nThis study identified novel predictors for pain-related utilization outcomes following an episode of physical therapy for a primary complaint of musculoskeletal pain. The most robust finding from these analyses was that baseline disability and change in pain intensity over the first 4 weeks following physical therapy evaluation were consistent predictors of subsequent pain-related healthcare utilization in those participants that completed all follow up. Aside from these robust predictors, other individual predictors of utilization were highly outcome-specific. High model specificity for utilization outcomes observed in this study is consistent with a recent systematic review that found similar levels of model specificity for more traditional outcomes like pain intensity, disability and work absenteeism [14]. Across models, health-related variables were generally stronger predictors than sociodemographic factors, which is also supported by prior research [15, 16]. Additionally, there were cases when prediction models were improved for specific services (e.g. surgery, use of opioids) when considering change in pain, disability or pain-related psychological distress. A notable finding is that the OSPRO-YF had the greatest utility when used to measure change in pain-related psychological distress. 
Current risk prediction paradigms in musculoskeletal pain consider only baseline pain-related psychological distress. However, these results underscore the importance of", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed5.pdf" - }, - { - "text": "subsequently evaluated 2 ED-to-inpatient handoff notes for each patient: (1) the physician-written note and (2) the LLM-generated note.\n\nOnaLikert scale of 1 to 5, where 1 is unacceptable and 5 is excellent, the 3 physicians rated the completeness, curation, readability, and correctness of the summary as shown in eTable 1 in Supplement 1. Physicians rated the usefulness of the summary, defined as the capability of the summary being incorporated into a workflow where a physician would make edits before final completion, mitigating potential future self-referential learning loops and the downstream adverse consequences. 51 Likewise, the raters assessed potential patient safety implications of unmitigated model errors using a scale from 1 to 5, where 1 denotes life-threatening risks and 5 denotes no identified patient safety risk for completeness, curation, readability, and the 4 subcategories within correctness (hallucination, faulty logic, knowledge gap, and bias), as well as the overall patient safety risk. 45 Evaluators arrived at prestudy consensus that a usefulness Likert score of at least a 3 out of 5 indicated that the LLM-generated summary likely demonstrated baseline acceptability for such a workflow. To extrapolate a theoretical worst case scenario, the physicians rated the safety of the LLM-generated summary as defined as the capability of the summary to fully replace a physicianwritten note (unmitigated).\n\nTo improve consistency and agreement, the 3 reviewers met to familiarize themselves with the framework and evaluated 10 separate cases from the test dataset that were not included in the clinical evaluation results. 
Additionally, after independently scoring the summaries, they met to ensure consensus interpretation of the multidimensional scoring framework. Interrater reliability was calculated using intraclass correlation coefficient (ICC), using a 2-way random effects model for consistency with the Pingouin statistical package version 0.5.4 in Python (Python Software Foundation). The ICC measures the similarity of the 3 raters to confirm the consistency and validity of the evaluation protocol; the scores are from 0 to 1, where 1 indicates unanimous agreement and 0 represents no agreement. 52 Data were analyzed from October 2023 to March 2024.\n\n## Results\n\n## AutomatedTasks\n\nOf 1600 patients, the mean (SD) age was 59.8 (18.9) years and 832 (52%) were female. In Table 2 , ROUGE and BERTScore compare the summaries with the testing set from our annotations, and SCALE score compares the summaries with the source notes. From automatic evaluation results, we observed that LLM-generated summaries had better scores than the physician summaries, such that ROUGE-2 was 0.322 vs 0.088, BERT-precision was 0.859 vs 0.796, and SCALE was 0.691 vs 0.456, suggesting the LLM-generated summaries were more similar and more detailed than the physician summaries.\n\n## Clinical Evaluation Tasks\n\nThe clinical evaluation results for LLM-generated summaries and physician-written summaries are shown in Table 3 and Table 4 . The mean clinical quality scores of the automated summaries are in a comparable range (4-5) to those of the physician summaries. However, the automated summaries were observed to be of lower quality compared with the physician-written summaries with regards to mean (SD) usefulness (4.04 [0.85] vs 4.36 [0.71]), completeness (4.00 [0.88] vs 4.16 [0.84]),\n\nTable 2. 
Automated Evaluation Scores, Large Language Model (LLM)-Generated and Physician-Written\n\n| Summary type | R-1 a | R-2 a | R-L a | BERT-p | BERT-r | SCALE |\n|-------------------|---------|---------|---------|----------|----------|---------|\n| LLM-generated | 0.494 | 0.322 | 0.391 | 0.859 | 0.876 | 0.691 |\n| Physician-written | 0.251 | 0.088 | 0.154 | 0.796 | 0.827 | 0.456 |", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed8.pdf" - }, - { - "text": "## Comorbidities\n\n## Charlson comorbidity index (CCI)\n\nThe Charlson Comorbidity Index was used to measure the presence of chronic comorbid medical conditions [34]. It lists 19 medical conditions that participants are asked to indicate whether they ' have ever been diagnosed with by a physician ' . Conditions are weighted and added for an overall measure of comorbidity burden. The CCI has demonstrated good test-retest reliability (0.91) and positive but weak to modest correlations with medication use, hospitalizations, length of stay, total charges, and pharmacy and laboratory charges for older adults in general medical care and surgical care settings [35].\n\n## Assessment tools\n\n## OSPRO Review of Systems tool (OSPRO-ROS)\n\nThe OSPRO-ROS is a review-of-systems screening tool for use in outpatient orthopedic physical therapy settings [36]. The OSPRO-ROS has demonstrated good concurrent validity with depression and a comprehensive 97-item battery of non-musculoskeletal symptoms (i.e., red flags). [36] Moderate to strong predictive capabilities of the OSPRO-ROS have been reported for persistence of pain, quality of life, and change in comorbidity 12 months following physical therapy in patients with musculoskeletal pain [20, 21]. The OSPRO-ROS includes standard symptom descriptors to aid with identification of systemic or non-musculoskeletal origins of musculoskeletal pain. 
It includes questions related to symptoms of the cardiovascular, gastrointestinal, endocrine, nervous, integumentary, pulmonary, and musculoskeletal systems. The full-length 23-item version of the OSPRO-ROS is capable of identifying 100% of positive red-flag responders (i.e. indicating ' yes ' to at least one systemic symptom on a questionnaire) in outpatient orthopedic physical therapy settings. [36] A shorter, 10-item version is also available that has been", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed5.pdf" }, { "text": "weighted analytic models for each type of healthcare service.\n\n## Any healthcare\n\nThe final parsimonious models for any healthcare utilization differed slightly between complete case and weighted analyses (Table 6). Included in the models were chronicity of symptoms, CCI, baseline pain, baseline disability, and change in pain from baseline to 4-week follow-up. However, only baseline disability (OR = 1.48 -2.47, p < 0.05) and change in pain (OR = 1.28 -1.45, p < 0.05) were significant predictors in both models, with greater baseline disability and worsening pain associated with higher odds of any utilization.\n\n## Utilization of individual services\n\n## Opioids\n\nComorbidity index score (i.e. CCI), baseline pain and change in pain were consistent predictors between the models of opioid utilization. In these models, higher pain (OR = 1.70 -1.76, p < 0.001), CCI (OR = 1.54 -1.60, p < 0.001) and increase in pain (OR = 1.70 -1.71, p <0.001) were associated with higher odds of opioid utilization. These models explained approximately 30% of the variation in opioid use.\n\n## Injection\n\nA combination of race, chronicity and baseline disability explained slightly more than 20% of the variance in\n\nTable 5 Overall performance of full logistic multivariate regression models ( n =246)", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed5.pdf" }, { "text": "- 17. Pérez C, Navarro A, Saldaña MT, Wilson K, Rejas J. Modeling the predictive value of pain intensity on costs and resources utilization in patients with peripheral neuropathic pain. Clin J Pain. 2015;31:273 -9.\n - 18. Hill JC, Fritz JM. Psychosocial influences on low back pain, disability, and response to treatment. Phys Ther. 2011;91:712 -21.\n - 19. George SZ, Beneciuk JM, Lentz TA, Wu SS. The Optimal Screening for Prediction of Referral and Outcome (OSPRO) in patients with musculoskeletal pain conditions: a longitudinal validation cohort from the USA. BMJ Open. 2017;7:e015188.\n - 20. George SZ, Beneciuk JM, Lentz TA, Wu SS, Dai Y, Bialosky JE, Zeppieri G Jr. Optimal Screening for Prediction of Referral and Outcome (OSPRO) for Musculoskeletal Pain Conditions: Results From the Validation Cohort. J Orthop Sports Phys Ther. 2018;48(6):460 -75.", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed5.pdf" } ] }, { "references": { "source_file": "pubmed5.pdf", "query": "What are the health consequences of musculoskeletal pain ?", "target_page": 1, "target_passage": "Musculoskeletal pain is a prevalent and costly health condition with far-reaching public health consequences including chronic pain, disability and opioid-related addiction [1].", "chunk_present": { "presence": true, "index": 1 } }, "top_chunk": [ { "text": "In a similar way, the levels of ergonomic risks are related with the sectoral structure of a country, determining the type of occupations and work tasks. EU-OSHA provided a detailed analysis of the prevalence of musculoskeletal disorders (MSDs) and the related risk factors in several studies on musculoskeletal diseases, for example, 'Work-related musculoskeletal disorders: why are they still so prevalent?' 58\n\nAn example of the interrelation between sectors and risks is the connection between the sector aggregate 'Trade, transport, food/accommodation and recreation activities' and three major indicators of ergonomic burden, that is, 'Painful, tiring positions', 'Repetitive hand or arm movements', and 'Carrying or moving heavy loads'.\n\nSeven countries have a share of employees in this sector of more than 30% (Cyprus, Greece, Spain, Malta, Bulgaria, Croatia and Latvia), and many of them are present in two or three lists of countries with the highest number of responses regarding the indicators.\n\n", - "page_start": 42, - "page_end": 42, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" }, { "text": "## RESEARCH ARTICLE\n\n## Prediction of healthcare utilization following an episode of physical therapy for musculoskeletal pain\n\nTrevor A. Lentz 1* , Jason M. Beneciuk 2,3 and Steven Z. George 4\n\n## Abstract\n\nBackground: In the United States, value-based purchasing has created the need for healthcare systems to prospectively identify patients at risk for high healthcare utilization beyond a physical therapy episode for musculoskeletal pain. The purpose of this study was to determine predictors of pain-related healthcare utilization subsequent to an index episode of physical therapy for musculoskeletal pain.\n\nMethods: This study assessed data from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) longitudinal cohort study that recruited individuals with a primary complaint of neck, low back, knee or shoulder pain in physical therapy ( n = 440). Demographics, health-related information, review of systems, comorbidity and pain-related psychological distress measures were collected at baseline evaluation. Baseline to 4-week changes in pain intensity, disability, and pain-related psychological distress were measured as treatment response variables.
At 6-months and 1-year after baseline evaluation, individuals reported use of opioids, injection, surgery, diagnostic tests or imaging, and emergency room visits for their pain condition over the follow-up period. Separate prediction models were developed for any subsequent care and service-specific utilization.\n\nResults: Subsequent pain-related healthcare utilization was reported by 43% ( n = 106) of the study sample that completed the 12-month follow-up ( n = 246). Baseline disability and 4-week change in pain intensity were important global predictors of subsequent healthcare utilization. Age, insurance status, comorbidity burden, baseline pain, and 4-week changes in pain intensity, disability and pain-related psychological distress predicted specific service utilization.\n\nConclusion: In those completing follow up measures, risk of additional pain-related healthcare utilization after physical therapy was best predicted by baseline characteristics and 4-week treatment response variables for pain intensity, disability and pain-related psychological distress. These findings suggest treatment monitoring of specific response variables could enhance identification of those at risk for future healthcare utilization in addition to baseline assessment. Further study is required to determine how specific characteristics of the clinical encounter influence future utilization.\n\nKeywords: Screening, Psychological distress, Multimorbidity, Value, Treatment monitoring\n\n## Background\n\nMusculoskeletal pain is a prevalent and costly health condition with far-reaching public health consequences including chronic pain, disability and opioid-related addiction [1]. 
Clinical practice guidelines now recommend non-pharmacological treatment as frontline management for musculoskeletal pain, which will lead\n\n1\n\nDuke Clinical Research Institute, Duke University, 2400 Pratt Street, Durham,\n\nNC 27705, USA\n\nFull list of author information is available at the end of the article\n\n\n\nto increased utilization of services such as physical therapy [1 -3]. Physical therapy is effective for improving disability and reducing costs associated with many musculoskeletal pain conditions [4 -9]. However, pain-related healthcare utilization beyond the physical therapy episode (e.g. subsequent use of surgery, injection, opioids, etc.) may indicate suboptimal treatment response, the presence of more complex needs, or unwarranted escalation of care. Downstream healthcare utilization is not often considered as an outcome of care or indication of treatment effectiveness for musculoskeletal pain. But the importance of\n\n\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed5.pdf" - }, - { - "text": "additional healthcare use is expected following physical therapy, especially among individuals that are on long-term pain management pathways due to chronic or persistent symptoms. Yet with over 40% reporting subsequent pain-related healthcare among those completing follow-up, it is apparent that opportunities exist to improve pathway selection and/or the effectiveness of physical therapy for individuals with musculoskeletal pain. This finding is particularly notable given recent efforts to define physical therapy as an effective first line, non-pharmacological treatment option against more invasive or higher risk services, such as surgery or opioid use, respectively. Predictive variables identified in this analysis can be used to develop risk models that better inform pathway selection for those seeking physical therapy for musculoskeletal pain. 
The precise application of these risk models, and how they inform policy and practice should be the target of future study. However, physical therapy re-design might incorporate enhanced treatment monitoring to assess ongoing risk for downstream utilization, as well as physical therapist-led interventions to more thoroughly address important modifiable factors such as pain intensity, disability and pain-related psychological distress [38]. Improved pathway selection might entail the consideration of referral to or co-treatment with other providers to more adequately address non-modifiable characteristics. Collectively, these approaches could improve the value of physical therapy by minimizing risk for high downstream healthcare utilization and potentially unwarranted escalation of care.\n\nThe primary strength of the study is longitudinal follow-up at multiple time points following an episode of physical therapy for a variety of musculoskeletal pain conditions. Anatomical location of pain was not a significant predictor of healthcare use in all but one model, suggesting results are widely applicable across a spectrum of musculoskeletal pain conditions. Another strength of this cohort study is the assessment of various healthcare utilization outcomes of interest for establishing health policy. When considered alongside more traditional pain- or disability-related outcomes prediction models, these findings will improve the ability of healthcare systems and providers to make decisions in value-based purchasing environments. The consideration of multiple screening tools (i.e. yellow flags and review of systems) and treatment monitoring variables is also a strength of this study as screening and systematic treatment monitoring are not routine in clinical practice. A final strength is inclusion of multiple sociodemographic, health-related and psychosocial factors as potential predictors. 
Healthcare outcomes and utilization exhibit emergent properties that require the consideration of multiple, competing factors to fully explain [50].", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed5.pdf" - }, - { - "text": "routine pain-related psychological distress monitoring throughout the early phases of rehabilitation especially if the goal is to identify risk for subsequent pain-related healthcare utilization. The implications of these collective findings are that treatment pathways may provide greater value by 1) addressing modifiable health-related variables like pain, disability and pain-related psychological distress, 2) routine monitoring of these health-related variables and 3) offering treatment alternatives that safely escalate care if needed while minimizing risk of harm and unhelpful utilization.\n\nOpioids and diagnostic tests and imaging were the two most common subsequent healthcare services utilized following physical therapy. Of the individuals that completed follow up and had any subsequent healthcare utilization, approximately 42% reported opioid use and 70% reported use of diagnostic tests and imaging. An important health-related predictor of these services was level of comorbidity burden. For those with high comorbidity burden and inadequate treatment response to physical therapy, use of additional diagnostic tests and imaging or low-dose opioids may be appropriate in some cases. But given the growing public health concern over opioid use and the desire to avoid unnecessary treatment driven by imaging, our results suggest the importance of considering disease burden when developing treatment pathways and healthcare policy to mitigate risk for avoidable use of these services. Interestingly, neither versions of the OSPRO-ROS predicted utilization outcomes even though it has been linked to mental health, comorbidity, and persistent pain state in other analyses [20, 21]. 
Systemic symptom burden is a measure of patient complexity that is related to but distinct from comorbidity burden [36, 47]. In these analyses, the chronic condition measure (i.e. the CCI) was a better predictor of utilization than symptom burden (i.e. OSPRO-ROS). The reasons for this finding are unclear but may be related to providers and patients being more likely to pursue follow-up medical care for musculoskeletal pain when known co-existing conditions are present as opposed to reporting of symptoms alone. The distinction between symptom and disease burden in defining musculoskeletal patient complexity, and its influence on clinical decision-making and outcomes, should be the subject of future research particularly related to aging populations [48].\n\nUtilization outcomes benchmarks have not been established to determine how the percentage of subsequent healthcare use in this study compares to outcomes using other health services. Prior studies suggest physical therapy is associated with reduced incidence of additional healthcare use compared to not using physical therapy in patients with acute low back pain [10, 49]. Some", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed5.pdf" - }, - { - "text": "## Comorbidities\n\n## Charlson comorbidity index (CCI)\n\nThe Charlson Comorbidity Index was used to measure the presence of chronic comorbid medical conditions [34]. It lists 19 medical conditions that participants are asked to indicate whether they ' have ever been diagnosed with by a physician ' . Conditions are weighted and added for an overall measure of comorbidity burden. 
The CCI has demonstrated good test-retest reliability (0.91) and positive but weak to modest correlations with medication use, hospitalizations, length of stay, total charges, and pharmacy and laboratory charges for older adults in general medical care and surgical care settings [35].\n\n## Assessment tools\n\n## OSPRO Review of Systems tool (OSPRO-ROS)\n\nThe OSPRO-ROS is a review-of-systems screening tool for use in outpatient orthopedic physical therapy settings [36]. The OSPRO-ROS has demonstrated good concurrent validity with depression and a comprehensive 97-item battery of non-musculoskeletal symptoms (i.e., red flags). [36] Moderate to strong predictive capabilities of the OSPRO-ROS have been reported for persistence of pain, quality of life, and change in comorbidity 12 months following physical therapy in patients with musculoskeletal pain [20, 21]. The OSPRO-ROS includes standard symptom descriptors to aid with identification of systemic or non-musculoskeletal origins of musculoskeletal pain. It includes questions related to symptoms of the cardiovascular, gastrointestinal, endocrine, nervous, integumentary, pulmonary, and musculoskeletal systems. The full-length 23-item version of the OSPRO-ROS is capable of identifying 100% of positive red-flag responders (i.e. indicating ' yes ' to at least one systemic symptom on a questionnaire) in outpatient orthopedic physical therapy settings. [36] A shorter, 10-item version is also available that has been", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed5.pdf" - }, - { - "text": "Table 6: Physical health risks, Ergonomics - EWCS 2015\n\nCountry colours: Cyprus aquamarine, Greece orange, Spain blue.\n\nThe exposure to painful and tiring positions and hand/arm movements are highest in southern and eastern EU Member States. They also seem to be closely correlated to the sector aggregate 'Trade, transport, food/accommodation and recreation activities'. 
Moving and carrying heavy loads is also connected to the sectors agriculture, manufacturing and construction - Romania, Latvia, Slovakia and Spain are part of the top seven. Looking at the countries for the share of workers who are lifting or moving people, Romania, Sweden and Ireland are the countries with the highest shares (15%, 14% and 13%).\n\nRegarding occupations, manual workers - craft workers, plant and machine operators, and agricultural workers have the highest score of posture-related risks and ambient ergonomic risks.\n\nTable 7: Physical health risks, Ergonomics - EWCS 2015 59", - "page_start": 43, - "page_end": 43, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" }, { "text": "- 21. Beneciuk JM, Lentz TA, He Y, Wu SS, George SZ. Prediction of persistent musculoskeletal pain at 12 months: a secondary analysis of the Optimal Screening for Prediction of Referral and Outcome (OSPRO) validation cohort study. Phys Ther. 2018;98:290 -301.\n - 22. Freburger JK, Holmes GM, Agans RP, Jackman AM, Darter JD, Wallace AS, et al. The rising prevalence of chronic low back pain. Arch Intern Med. 2009; 169:251 -8.\n - 23. Carey TS, Freburger JK, Holmes GM, Jackman A, Knauer S, Wallace A, et al. Race, care seeking, and utilization for chronic back and neck pain: population perspectives. J Pain Off J Am Pain Soc. 2010;11:343 -50.\n - 24. Jensen MP, Turner JA, Romano JM, Fisher LD. Comparative reliability and validity of chronic pain intensity measures. Pain. 1999;83:157 -62.\n - 25. Bolton JE. Accuracy of recall of usual pain intensity in back pain patients. Pain. 1999;83:533 -9.\n - 26. Childs JD, Piva SR, Fritz JM. Responsiveness of the numeric pain rating scale in patients with low back pain. Spine. 2005;30:1331 -4.\n - 27. Vernon H. The neck disability index: state-of-the-art, 1991-2008. J Manip Physiol Ther. 2008;31:491 -502.\n - 28. Vernon H, Mior S. The neck disability index: a study of reliability and validity. J Manip Physiol Ther. 1991;14:409 -15.\n - 29. Hudson-Cook N, Tomes-Nicholson K, Breen A. A revised Oswestry disability questionnaire. In: Roland M, Jenner J, editors. Back pain: new approaches to rehabilitation and education. New York: Manchester University Press; 1989. p. 187 -204.\n - 30. Fritz JM, Irrgang JJ. A comparison of a modified Oswestry low back pain disability questionnaire and the Quebec back pain disability scale. Phys Ther. 2001;81:776 -88.\n - 31. Beaton DE, Wright JG, Katz JN, Upper Extremity Collaborative Group. Development of the QuickDASH: comparison of three item-reduction approaches. J Bone Joint Surg Am. 2005;87:1038 -46.\n - 32. Irrgang JJ, Anderson AF, Boland AL, Harner CD, Kurosaka M, Neyret P, et al. Development and validation of the international knee documentation committee subjective knee form. Am J Sports Med. 2001;29:600 -13.\n - 33. Butera KA, Lentz TA, Beneciuk JM, George SZ. Preliminary evaluation of a modified STarT back screening tool across different musculoskeletal pain conditions. Phys Ther. 2016;96:1251 -61.\n - 34. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40:373 -83.\n - 35. Katz JN, Chang LC, Sangha O, Fossel AH, Bates DW. Can comorbidity be measured by questionnaire rather than medical record review? Med Care. 1996;34:73 -84.\n - 36. George SZ, Beneciuk JM, Bialosky JE, Lentz TA, Zeppieri G, Pei Q, et al. Development of a review-of-systems screening tool for orthopaedic physical therapists: results from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) cohort. J Orthop Sports Phys Ther. 2015;45: 512 -26.\n - 37. Lentz TA, Beneciuk JM, Bialosky JE, Zeppieri G, Dai Y, Wu SS, et al. Development of a yellow flag assessment tool for orthopaedic physical therapists: results from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) cohort. J Orthop Sports Phys Ther. 2016;46:327 -43.\n - 38. Beneciuk JM, Fritz JM, George SZ. The STarT back screening tool for prediction of 6-month clinical outcomes: relevance of change patterns in outpatient physical therapy settings. J Orthop Sports Phys Ther. 2014;44: 656 -64.", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed5.pdf" }, { "text": "identifying risk for additional utilization has emerged due to the growth of cost-sharing and capitated payment models, particularly in the United States (US). As a result, many US health care services organizations have begun to prioritize early identification of individuals at risk for downstream healthcare use at the onset of treatment [10, 11]. Early risk assessment allows systems to deliver greater value by 1) focusing limited health care resources towards patients who are most in need, and 2) identifying those who may require coordination of multiple providers and services to optimize outcomes.\n\nProspective identification of risk for high subsequent healthcare utilization is a different approach to outcomes prediction for musculoskeletal pain [12, 13] and one that has not been evaluated in physical therapy settings in the US. Most existing outcomes prediction models focus on pain and disability endpoints [12 -14]. They also concentrate on condition-specific and psychological predictors, with less attention to factors that could influence healthcare utilization more directly [15 -17]. These factors include insurance, comorbidities, symptoms unrelated to the pain condition, and treatment response. As a result, predictors of pain-related healthcare utilization beyond physical therapy are unknown. A better understanding of these predictors will have significant implications for future healthcare pathway development.
For instance, an influence of modifiable factors like pain-related psychological distress might imply the need to build clinical pathways that address those factors directly through physical therapist provided intervention. Additionally, understanding the relative predictive capabilities of baseline versus change estimates for modifiable factors would clarify whether prediction is improved by routinely assessing outcomes during the course of treatment (i.e. treatment monitoring) [18].\n\nThis study was undertaken in a nationwide, US cohort of patients receiving outpatient physical therapy for a primary complaint of knee, shoulder, back or neck pain. The primary aim of the analysis was to predict incidence of additional pain-related healthcare utilization in the year following the episode of physical therapy for musculoskeletal pain. We considered factors not commonly assessed in outcomes prediction for musculoskeletal pain, like insurance, comorbidities, and treatment response, as well as those more often associated with pain-related outcomes (e.g. psychological distress). This project will lead to the development of potentially novel outcome prediction models for this population in a common, non-pharmacological US healthcare setting. The results of this study will be particularly important in value-based payment settings where enhanced clinical decision-making drives treatment effectiveness and system efficiency.\n\n## Methods\n\n## Dataset and patient population", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed5.pdf" - }, - { - "text": "Looking at the data, it is quite obvious that the northern and the central European countries are underrepresented in the group of countries with the highest share of physical and ergonomic risks. The central European countries (Austria, Germany, the Netherlands, Belgium, Luxembourg and France, and the two northern European countries Denmark and Sweden) are practically not present in these lists. 
The picture changes if it is about lifting or moving of people, a consequence of the relatively larger relevance of care work in these countries.\n\nPhysical inactivity and permanent or prolonged sitting or standing is a specific ergonomic risk with health impacts for the musculoskeletal system but also contributing to other health impacts like cardiovascular diseases, tendency to overweight and so on. 60 According to ESENER 2019, the second most frequently reported risk factor in the EU27 was prolonged sitting . By sector, it was most frequently reported by enterprises in financial and insurance activities (92% of establishments in the sector in the EU28), information and communication (92%), and public administration (89%). On average, three to four hours of this sedentary behaviour occurs at work. In the EU, 28% of workers report that their work involves sitting almost all the time and a further 30% report sitting a quarter to three quarters of the time, and throughout Europe 18% of the workers sit more than 7.5 hours a day.\n\nAs mentioned in previous chapters, there exists a share of workers exposed to physical risks that is prevalent in spite of all structural and sectoral changes. Some of the structural changes of the economy, for example, from industrial production to maintenance and repair, 61 might even cause higher ergonomic risks; in general it will be more difficult to use technical help tools in varying maintenance and repair situations, compared to more homogenous tasks in industry. 
Growing sectors, for example, home care of ill or elderly people, involve ergonomic risks due to transport and moving of patients and/or tiring positions.\n\nOSH Barometer - Physical risks:\n\n\n\nhttps://visualisation.osha.europa.eu/osh-barometer/working-conditions-preventions/physicalrisk/vibrations-loud-noise-and-temperature\n\n## ESENER - Data visualisation:\n\nhttps://visualisation.osha.europa.eu/esener/en/survey/datavisualisation/2019\n\nEU-OSHA Themes - Musculoskeletal disorders:\n\nhttps://osha.europa.eu/en/themes/musculoskeletal-disorders\n\n## 3.3 Contract types and work locations\n\nThe chapter deals with the impact of non-standard types of work on working conditions in comparison to standard work, focusing on the impact of the 'Conditions of employment' on OSH.\n\nMost studies that dealt with the connection between the employment forms and health outcomes and in particular safety and health aspects found significant correlations. 62 A census-based study from Belgium on non-standard forms of work and mortality from Belgium concluded (2021):\n\n' Our study, which to our knowledge is the first one to assess associations between forms of nonstandard employment and mortality using population-wide data, revealed considerable mortality inequalities within the salaried employee population in Belgium. Over the subsequent 13 years and three months of follow-up, certain non-standard workers were at increased risk of death compared to permanently employed workers.' 63\n\nThe conventional non-standard types of work start with widespread temporary (or fixed-term) work, seasonal work, casual work, remote work in different forms (at home or other places), self-employed work, family work, mobile work in transport and often in construction, domestic work, care and craft work at the places of clients, plus several types of less regular and undeclared work.", - "page_start": 44, - "page_end": 44, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" }, { "text": "## Healthcare utilization predictors\n\nWe collected potential predictors by self-reported questionnaires at initial evaluation using an online study website. Participants were directed back to the study website 4 weeks following initial evaluation to again complete questions on pain intensity, disability, and pain-related psychological distress. Change in pain intensity, disability, and pain-related psychological distress from baseline to 4 weeks were modeled as treatment response variables and included as potential predictors.\n\n## Sociodemographic and health-related information\n\nParticipants completed a standard intake questionnaire form previously used in our clinical studies that assessed age, sex, race, and insurance provider type. This questionnaire also assessed health-related variables included anatomical region of primary pain complaint (low back, neck, shoulder, or knee) and whether the patient had undergone surgery for their primary pain complaints (yes or no). Due to small cell sizes for certain categories, race was dichotomized as white or non-white. For insurance type, participants were asked to choose one of the following options: private, public (Medicare and/or Medicaid), uninsured/self-pay, worker ' s compensation, and other/commercial insurance. Among the study sample, we observed few with no insurance ( n = 7) or worker ' s compensation ( n = 14).
The study also included relatively few with ' other/commercial insurance ' ( n = 45). Within this group, informal assessment of these various plans suggested high heterogeneity of plan characteristics and coverage. Due to the small number of subjects in these individual insurance strata and to improve interpretability of results, we collapsed those reporting no insurance, worker ' s compensation and other/commercial insurance into a single category (i.e., ' Other ' ). Therefore, insurance type was categorized as private, public, or other (no insurance, worker ' s compensation, or other/commercial insurance) for purposes of analysis.\n\n## Pain-related clinical variables\n\nPain status was determined using established definitions that account for the duration of pain and activity limitations [22, 23] using the following two questions: 1) ' How long have you been experiencing your current painful symptoms? ' and 2) ' Have you experienced ANY pain and activity limitations every day for the past 3 months? ' Responses to question 1 of ' greater than 90 days ' or responses to question 2 of ' Yes ' were used to classify patients as having persistent pain at initial evaluation.\n\n## Pain intensity\n\nPain intensity was assessed by the numerical pain rating scale (NPRS) ranging from ' 0 ' (no pain) to ' 10 ' (worst\n\npain imaginable) [24 -26]. Participants rated their current pain intensity, as well as their best (lowest) and worst (highest) pain intensity over the past 24 h. Current, best and worst pain ratings were averaged for purposes of analysis.\n\n## Region-specific disability\n\nSelf-reported region-specific disability was assessed with the Neck Disability Index [27, 28], Oswestry Disability Questionnaire [29, 30], Quick Disability of Arm Shoulder and Hand [31] or International Knee Documentation Committee Subjective Knee Form [32] for cervical, low back, shoulder and knee pain, respectively. 
Region-specific disability measures were z-transformed for purposes of analysis, consistent with our prior work involving multiple anatomical regions [33].\n\n## Comorbidities\n\n## Charlson comorbidity index (CCI)", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed5.pdf" - } - ] - }, - { - "references": { - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf", - "query": "What is Creative Commons ?", - "target_page": 2, - "target_passage": "Creative Commons (CC) is the global nonprofit organization behind the CC Licenses and public domain tools, which power open sharing on popular platforms like Wikipedia, Flickr, YouTube, Medium, Vimeo, and Khan Academy.", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "\n\nThis is a frame from 'Twenty Years of Creative Commons (in Sixty Seconds)' by Ryan Junell and Glenn Otis Brown for Creative Commons licensed under CC BY 4.0. It includes adaptations of multiple open and public domain works. View full licensing and attribution information about all works included in the video on Flickr.\n\n## Creative Commons\n\nPO Box 1866 Mountain View CA 94042 USA +1 415 429 6753 info@creativecommons.org\n\n", - "page_start": 11, - "page_end": 11, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate\n\ncredit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. 
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.\n\n© The Author(s) 2025", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed3.pdf" - }, - { - "text": "Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional a/ff.shortiliations.\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. 
org/licenses/by/4.0/.\n\n© The Author(s) 2024", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed4.pdf" - }, - { - "text": "## Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work - on conditions of your choice. CC licenses let you change your copyright terms from the default of 'all rights reserved' to 'some rights reserved.'\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\n\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n\n\nPublic domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark . Creative Commons copyright licenses help authors manage their copyright on terms they choose. 
Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n## Where public domain tools fit in the copyright spectrum\n\n\n\n## The CC0 Public Domain Dedication\n\nUse this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.\n\n\n\n\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. 
Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.\n\n## What is the di/fference between CC0 and the Public Domain Mark?\n\n\n\nCC0 ('CC Zero') is intended for use only by authors or holders of copyright and related rights (including database rights), in connection with works that are still subject to those rights in one or more countries.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "\n\n\"great colors of nature\" by marcostetter is published under Public Domain Mark 1.0.\n\n## About Us\n\nCreative Commons (CC) is the global nonprofit organization behind the CC Licenses and public domain tools, which power open sharing on popular platforms like Wikipedia, Flickr, YouTube, Medium, Vimeo, and Khan Academy. Since 2002, the CC Licenses have served as an alternative to traditional copyright, providing a simple, standardized, and legal way for individuals and institutions to freely share images, music, research, educational resources, and cultural artifacts.\n\n## Chief Executive Officer\n\nAnna Tumadóttir\n\nGeneral Counsel Kat Walsh\n\n## Board of Directors\n\nMarta Belcher Glenn Otis Brown Delia Browne James Grimmelmann\n\nLawrence Lessig * Emeritus\n\nAngela Oduor Lungati Bilal Randeree Alek Tarkowski Jeni Tennison Luis Villa\n\nExcept where otherwise noted, 'Annual Report 2023' by Creative Commons is licensed under CC BY 4.0.\n\n", - "page_start": 1, - "page_end": 1, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "## Author contributions\n\nK.L. designed the framework of the article and analyzed the yield results and the maize price under future scenarios. J.P. simulated the climate data from 5 climate models recommended by ISI-MIP under 4 RCP scenarios. W.X. simulated the maize yields in whole world under di/fferent scenarios. W.X. simulated the market price of maize at national and global levels. T.A. 
helped the revision of language.\n\n## Funding\n\nFunding was provided by the National Key Research and Development program of China (Grant Nos. 2019YFA0607403 and 2017YFD0300301) and National Natural Science Foundation of China (Grant Nos. 41961124007 and 41871026).\n\n## Competing interests\n\n/T\\_he authors declare no competing interests.\n\n## Additional information\n\nCorrespondence and requests for materials should be addressed to K.L.\n\nReprints and permissions information is available at www.nature.com/reprints.\n\nPublisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional a/ffiliations.\n\n\n\nOpen Access /T\\_his article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. /T\\_he images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.\n\n© /T\\_he Author(s) 2022\n\nVol:.(1234567890)", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed9.pdf" - }, - { - "text": "## A Note from Leadership\n\nCC staff photos are licensed under CC BY 4.0.\n\n\n\n2023 was a busy year at Creative Commons. Our Open Culture program and Open Climate Campaign entered their third and second years, respectively. We hosted our first in-person CC Global Summit since 2019 in Mexico City. 
We held critical consultations and open panels on AI, copyright, and the CC Licenses, cultural heritage, education, and science; and we launched our Open Infrastructure Circle in an effort to ensure the CC Licenses are funded well into the future.\n\nWe also marked transitions in leadership. At the end of December, Catherine Stihler concluded her time as Chief Executive Officer (CEO) at Creative Commons, and I transitioned in as Interim. In March 2024, I was appointed CC's permanent CEO. I look forward to working closely with our Board of Directors, staff, and larger community on the critical work that awaits us in 2024 .\n\n## Anna Tumadóttir, CEO\n\n\n\n", - "page_start": 2, - "page_end": 2, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "content repositories, like libraries, with that of AI developers. A 'books data commons' needs to be both responsibly managed, and useful for developers of AI models.\n\nWe use 'commons' here in the sense of a resource that is broadly shared and accessible, and thus obviates the need for each individual actor to acquire, digitize, and format their own corpus of books for AI training. This resource could be collectively and intentionally managed, though we do not mean to select a particular form of governance in this paper. 4\n\nThis paper is descriptive, rather than prescriptive, mapping possible paths to building a books data commons as defined above and key questions relevant to developers, repositories, and other stakeholders, building on our workshop discussions. We first explain why books matter for AI training and how broader access could be beneficial. We then summarize two tracks that might be considered for developing such a resource, highlighting existing projects that help foreground both the potential and challenges. Finally, we present several key design choices, and next steps that could advance further development of this approach. 
5", - "page_start": 2, - "page_end": 2, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## Acknowledgements\n\nAuthored by Alek Tarkowski and Paul Keller (Open Future), Derek Slater and Betsy Masiello (Proteus Strategies) in collaboration with Creative Commons.\n\nWe are grateful to participants in the workshops, including Luis Villa, Tidelift and openml.fyi; Jonathan Band; Peter Brantley, UC Davis; Aaron Gokaslan, Cornell; Lila Bailey, Internet Archive; Jennifer Vinopal, HathiTrust Digital Library; Jennie Rose Halperin, Library Futures/ NYU Engelberg Center, Nicholas P. Garcia, Public Knowledge; Sayeed Choudhury; Erik Stallman, UC Berkeley School of Law. The paper represents the views of the authors, however, and should not be attributed to the workshop as a whole. All mistakes or errors are the authors'.\n\n\n\nThis report is published under the terms of the Creative Commons Attribution License.", - "page_start": 21, - "page_end": 21, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## 7. Conclusion\n\nThis paper is a snapshot of an idea that is as underexplored as it is rooted in decades of existing work. The concept of mass digitization of books, including to support text and data mining, of which AI is a subset, is not new. But AI training is newly of the zeitgeist, and its transformative use makes questions about how we digitize, preserve, and make accessible knowledge and cultural heritage salient in a distinct way.\n\nAs such, efforts to build a books data commons need not start from scratch; there is much to glean from studying and engaging existing and previous efforts. Those learnings might inform substantive decisions about how to build a books data commons for AI training. 
For instance, looking at the design decisions of HathiTrust may inform how the technical infrastructure and data management practices for AI training might be designed, as well as how to address challenges to building a comprehensive, diverse, and useful corpus. In addition, learnings might inform the process by which we get to a books data commons for example, illustrating ways to attend to the interests of those likely to be impacted by the dataset's development. 41\n\nWhile this paper does not prescribe a particular path forward, we do think finding a path (or paths) to extend access to books for AI training is critical. In the status quo, large swaths of knowledge contained in books are effectively locked up and inaccessible to most everyone. Google is an exception - it can reap the benefits of their 40 million books dataset for research, development, and deployment of AI models. Large, well-resourced entities could theoretically try to replicate Google's digitization efforts, although it would be incredibly expensive, impractical, and largely duplicative for each entity to individually pursue their own efforts. Even then, it isn't clear how everyone else - independent researchers, entrepreneurs, and smaller entities - will have access. The controversy around the Books3 dataset discussed at the outset should not, then, be an argument in favor of preserving the status quo. 
Instead, it should highlight the urgency of building a books data commons to support an AI ecosystem that provides broad benefits beyond the privileged few.", - "page_start": 20, - "page_end": 20, - "source_file": "creative_common_ai.pdf" - } - ] - }, - { - "references": { - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf", - "query": "When was the first CC licence created?", - "target_page": 4, - "target_passage": "The first CC License was created in 2002.", - "chunk_present": { - "presence": true, - "index": 5 - } - }, - "top_chunk": [ - { - "text": "## Training in how to use CC Licenses is key to their adoption.\n\nWe offer a ten-week CC Certificate program that is now tailored not only to the education and library sectors, but also galleries, archives, libraries, and museums and available in 10 languages .\n\nAs of 2023, we've certified:\n\n\n\n1,705 Graduates\n\n\n\n65 Countries\n\n## In 2023, we greatly expanded our CC Licenses training and education offerings:\n\n## 19 Workshops & Trainings\n\nwith institutions like ALA, Connecticut Humanities & State University of New York, Digital Research Alliance of Canada, and WikiConf North America.\n\n## 2 Week-Long CC Certificate Bootcamps\n\nfor California Community Colleges.\n\n## 27 Webinars\n\non topics like the basics of Open Culture, the possibilties of Open Educational Resources (OER) for business-university cooperation, and the future of CC Licenses in digital and online education.\n\n## 12 CC Legal Open Office Hours\n\nhosted by our legal team, providing a personalized opportunity for the CC community to ask questions about CC Licenses, open access, and sharing.\n\n", - "page_start": 4, - "page_end": 4, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "with. The vast majority of in-copyright books are out-of-print or out-of-commerce, and most are not actively managed by their rightsholders. 
There is no official registry of copyrighted works and their owners, and existing datasets can be incomplete or erroneous. 16\n\nAs a result, there may be no way to license the vast majority of in-copyright books, especially those that have or have had limited commercial value. Put differently, the barrier to using 17 most books is not simply to pay publishers; even if one had significant financial resources, licensing would not enable access to most works.\n\n## Permissively licensed works\n\nThere are books that have been permissively licensed in an easily identifiable way, such as works placed under Creative Commons (CC) licenses. Such works explicitly allow particular uses of works subject to various responsibilities (e.g., requiring attribution by the user in their follow-on use).\n\nWhile such works could be candidates for inclusion in a books data commons, their inclusion depends on whether the license's terms can be complied with in the context of AI training. For instance, in the context of CC licensed works, there are requirements for proper attribution across all licenses (the CC tools Public Domain Dedication (CC0) and Public Domain Mark (PDM) are not licenses and do not require attribution). 
18", - "page_start": 9, - "page_end": 9, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "\n\n## Creative Commons license\n\n## Understanding\n\nbefore licensing your work\n\n## THREE-LAYER DESIGN\n\nCreative Commons (CC) license has three layers:\n\n- \"Legal Code\" (base layer): contains terms and conditions to be used by lawyers and legally applicable in court.\n- \"Human Readable\" (commons deeds): contain the summary of the legal code and key terms.\n- \"Machine Readable\": contains HTML or codes for machines to recognize a work is available under a Creative Commons license.\n\n\n\n## FOUR ELEMENTS\n\n- BY (\"Attribution\"): users must credit the author of the work they are using.\n- SA (\"ShareAlike\"): adaptations based on this work must be licensed under the same license.\n- NC (\"NonCommercial\"): the work is only available to be used for noncommercial purposes.\n- ND (\"NoDerivative\"): reusers making cannot share adaptations of the work.\n\n\n\n## SIX LICENSES\n\n- CC BY (\"Attribution\") allows people to use the work for any purpose (even commercially and even in modified form) as long as they give attribution to the creator.\n- CC BY-SA (\"Attribution-ShareAlike\") allows people to use the work for any purpose (even commercially and even in modified form), as long as they give attribution to the creator and make any adaptations they share with others available under the same or a compatible license.\n- CC BY-NC (\"Attribution-NonCommercial\") allows people to use the work for noncommercial purposes only, and only as long as they give attribution to the creator.\n- CC BY-NC-SA (\"Attribution-NonCommercial-ShareAlike\") allows people to use the work for noncommercial purposes only, and only as long as they give attribution to the creator and make any adaptations they share with others available under the same or a compatible license.\n- CC BY-ND (\"Attribution-NoDerivative\") allows people to use the unadapted work for any purpose (even 
commercially), as long as they give attribution to the creator.\n- CC BY-NC-ND (\"Attribution-NonCommercial-NoDerivative\") allows people to use the unadapted work for noncommercial purposes only, and only as long as they give attribution to the licensor.\n\n## REMIND THAT…\n\nCC license only applicable to the work that is within the scope of copyright law. CC license can be used when …\n\n- you want to give others permissions to freely copy and redistribute your work, and\n- you want to give others permission to freely transform, alter, or otherwise create derivative works based on your work.\n\n\n\n\n\n## CC LICENSE CAN'T BE USED FOR …\n\nfair use, fair dealing, or some other limitation and exception to copyright applies the the work.\n\n## ALSO FOR …\n\nthe work that is already in the Public Domain.\n\nFor those who want to waive their rights from copyright protection, use CC0 (\"CC Zero\").\n\n## NOW, SHARE YOUR WORK!\n\nhttps://creativecommons.org/choose/\n\n\n\n\n\nBY\n\n\n\nSA\n\n\n\nND\n\nNC", - "page_start": 0, - "page_end": 0, - "source_file": "Understanding_Creative_Commons_license_(infographic).pdf" - }, - { - "text": "## Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work - on conditions of your choice. CC licenses let you change your copyright terms from the default of 'all rights reserved' to 'some rights reserved.'\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. 
When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\n\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n\n\nPublic domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark . Creative Commons copyright licenses help authors manage their copyright on terms they choose. Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n## Where public domain tools fit in the copyright spectrum\n\n\n\n## The CC0 Public Domain Dedication\n\nUse this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.\n\n\n\n\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. 
When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.\n\n## What is the di/fference between CC0 and the Public Domain Mark?\n\n\n\nCC0 ('CC Zero') is intended for use only by authors or holders of copyright and related rights (including database rights), in connection with works that are still subject to those rights in one or more countries.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n## 3.2.6 How to view licensing information\n\nLicensing information is available for all datasets associated with common licences, which are supported by the Licence Assistant. 
When available a link to the assistant is provided on left side of a dataset page.\n\nBy clicking on the licence name (here: cc-by), the Licence Assistant tool is opened in a new window, displaying relevant information for this particular licence.\n\n", - "page_start": 33, - "page_end": 33, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "\n\nThe first CC License was created in 2002. Today, we boast six CC Licenses and two public domain tools, setting a global standard for sharing.\n\n## We've estimated that over 2.5 billion pieces of content were CC Licensed by the end of 2023.\n\n\n\n\n\n\"The great growling engine of change - technology. Alvin Toffler\" by katerha is licensed under CC BY 2.0.\n\nOur legal and technology staff continued to make key infrastructure updates and manage daily maintenance to ensure these Licenses work for everyone.\n\n## In 2023, we launched the Open Infrastructure Circle (OIC) to ensure consistent funding for this work.\n\nWe're grateful to the early supporters of the OIC, including the William + Flora Hewlett Foundation, Bill & Melinda Gates Foundation, Filecoin Foundation for the Decentralized Web, Robert Wood Johnson Foundation, Chan Zuckerberg Initiative, Endless, Siegel Family Endowment, Flickr, Microsoft, and Paul and Iris Brest.\n\n", - "page_start": 3, - "page_end": 3, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional a/ff.shortiliations.\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. 
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/.\n\n© The Author(s) 2024", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed4.pdf" - }, - { - "text": "Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate\n\ncredit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.\n\n© The Author(s) 2025", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed3.pdf" - }, - { - "text": "Combined, these limits can enable effective foreign control of up to 46.7 % .\n\nThe chief executive officer and 80 % of the members of the Board of Directors of the operating licensee must be resident Canadians. 
There are no restrictions on the number of non-voting shares that may be held by non-Canadians at either the holding-company or licenseecompany level. Neither the Canadian carrier nor its parent may be otherwise controlled in fact by non-Canadians. Subject to appeal to the federal Cabinet, the CRTC has the jurisdiction to determine as a question of fact whether a given licensee is controlled by nonCanadians.\n\nPursuant to the Telecommunications Act and associated regulations, the same rules also apply to Canadian telecommunications carriers such as Wireless, except that there is no requirement that the chief executive officer be a resident Canadian. We believe we are in compliance with the foregoing foreign ownership and control requirements.\n\nOn June 29, 2012, Bill C-38 amending the Telecommunications Act passed into law. The amendments exempt telecommunications companies with less than 10 % of total Canadian telecommunications market measured by revenue from foreign investment restrictions. Companies that are successful in growing their market shares in excess of 10 % of total Canadian telecommunications market revenues other than by way of merger or acquisitions will continue to be exempt from the restrictions.\n\n## WIRELESS\n\n## Consultation on the Renewal of Cellular and Personal Communications S ervices (PC S ) S pectrum Licences\n\nIn March 2011, Industry Canada released its decisions about the renewal process for cellular and PCS licences that began expiring at that time. Key things to note:\n\n - GLYPH<129> At the end of the current licence term, new cellular and PCS licences with a 20-year term will be issued to licensees that are in compliance with all licence conditions.\n - GLYPH<129> The previously existing annual fee of $0.0351 per MHz per population of the licenced area will continue to apply to all cellular and PCS licences, including those initially assigned by auction. 
The Minister of Industry Canada may review and amend the fees during the licence term after further consultation with licensees.\n - GLYPH<129> A determination regarding existing research and development conditions of licence was not released at that time and will be released separately. A decision has not been made to date, and until such a time, the current conditions of licence remain in effect.", - "page_start": 71, - "page_end": 71, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Licensing\n\nThe base license that is provided with your system includes the use of its basic functions. However, the extra licenses can be purchased to expand the capabilities of your system. Administrators are responsible for purchasing extra licenses and configuring the systems within the license agreement, which includes configuring the settings of each licensed function on the system.\n\nThe IBM Storwize V7000 supports enclosure-based licensing, which allows the use of certain licensed functions that are based on the number of enclosures that are indicated in the license.\n\nComplete the following steps to view or configure the licensing settings:\n\n - 1. From the main Settings pane, point to Settings and click System .\n - 2. 
In the left column, select Licensed Functions , as shown in Figure 5-69.\n\nFigure 5-69 Licensing window\n\n", - "page_start": 194, - "page_end": 194, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf", - "query": "To what subjects Creative Commons expand its work in 2023 ?", - "target_page": 8, - "target_passage": "We expanded our work in biodiversity, climate, and life sciences focused on ensuring that science research and data are open", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "\n\nThis is a frame from 'Twenty Years of Creative Commons (in Sixty Seconds)' by Ryan Junell and Glenn Otis Brown for Creative Commons licensed under CC BY 4.0. It includes adaptations of multiple open and public domain works. View full licensing and attribution information about all works included in the video on Flickr.\n\n## Creative Commons\n\nPO Box 1866 Mountain View CA 94042 USA +1 415 429 6753 info@creativecommons.org\n\n", - "page_start": 11, - "page_end": 11, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate\n\ncredit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. 
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.\n\n© The Author(s) 2025", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed3.pdf" - }, - { - "text": "Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.\n\n© The Author(s) 2024", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed4.pdf" - }, - { - "text": "## Author contributions\n\nK.L. designed the framework of the article and analyzed the yield results and the maize price under future scenarios. J.P. simulated the climate data from 5 climate models recommended by ISI-MIP under 4 RCP scenarios. W.X. simulated the maize yields in whole world under different scenarios. W.X. simulated the market price of maize at national and global levels. T.A. 
helped the revision of language.\n\n## Funding\n\nFunding was provided by the National Key Research and Development program of China (Grant Nos. 2019YFA0607403 and 2017YFD0300301) and National Natural Science Foundation of China (Grant Nos. 41961124007 and 41871026).\n\n## Competing interests\n\nThe authors declare no competing interests.\n\n## Additional information\n\nCorrespondence and requests for materials should be addressed to K.L.\n\nReprints and permissions information is available at www.nature.com/reprints.\n\nPublisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.\n\n© The Author(s) 2022\n\nVol:.(1234567890)", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed9.pdf" - }, - { - "text": "## A Note from Leadership\n\nCC staff photos are licensed under CC BY 4.0.\n\n\n\n2023 was a busy year at Creative Commons. Our Open Culture program and Open Climate Campaign entered their third and second years, respectively. We hosted our first in-person CC Global Summit since 2019 in Mexico City. 
We held critical consultations and open panels on AI, copyright, and the CC Licenses, cultural heritage, education, and science; and we launched our Open Infrastructure Circle in an effort to ensure the CC Licenses are funded well into the future.\n\nWe also marked transitions in leadership. At the end of December, Catherine Stihler concluded her time as Chief Executive Officer (CEO) at Creative Commons, and I transitioned in as Interim. In March 2024, I was appointed CC's permanent CEO. I look forward to working closely with our Board of Directors, staff, and larger community on the critical work that awaits us in 2024 .\n\n## Anna Tumadóttir, CEO\n\n\n\n", - "page_start": 2, - "page_end": 2, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "## Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work - on conditions of your choice. CC licenses let you change your copyright terms from the default of 'all rights reserved' to 'some rights reserved.'\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. 
When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\n\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n\n\nPublic domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark . Creative Commons copyright licenses help authors manage their copyright on terms they choose. Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n## Where public domain tools fit in the copyright spectrum\n\n\n\n## The CC0 Public Domain Dedication\n\nUse this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.\n\n\n\n\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. 
When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the process. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.\n\n## What is the difference between CC0 and the Public Domain Mark?\n\n\n\nCC0 ('CC Zero') is intended for use only by authors or holders of copyright and related rights (including database rights), in connection with works that are still subject to those rights in one or more countries.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "\n\n\"great colors of nature\" by marcostetter is published under Public Domain Mark 1.0.\n\n## About Us\n\nCreative Commons (CC) is the global nonprofit organization behind the CC Licenses and public domain tools, which power open sharing on popular platforms like Wikipedia, Flickr, YouTube, Medium, Vimeo, and Khan Academy. 
Since 2002, the CC Licenses have served as an alternative to traditional copyright, providing a simple, standardized, and legal way for individuals and institutions to freely share images, music, research, educational resources, and cultural artifacts.\n\n## Chief Executive Officer\n\nAnna Tumadóttir\n\nGeneral Counsel Kat Walsh\n\n## Board of Directors\n\nMarta Belcher Glenn Otis Brown Delia Browne James Grimmelmann\n\nLawrence Lessig * Emeritus\n\nAngela Oduor Lungati Bilal Randeree Alek Tarkowski Jeni Tennison Luis Villa\n\nExcept where otherwise noted, 'Annual Report 2023' by Creative Commons is licensed under CC BY 4.0.\n\n", - "page_start": 1, - "page_end": 1, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "## Our Impact\n\nCC believes that opening up knowledge is key to addressing the world's most pressing challenges. Today, we steer campaigns, programming, and training in many areas:\n\n## Open Culture\n\n2023 was quite a year for the CC Open Culture Program, thanks to generous funding from Arcadia . We grew our Open Culture team from one to two and a half staff, rolling out new initiatives like TAROC (Towards a Recommendation on Open Culture) and Open Culture Live: A Webinar Series . We invite you to read ' What did Creative Commons do for Open Culture in 2023? ' to learn more.\n\n## Open Journalism\n\nThanks to generous funding from the John D. and Catherine T. MacArthur Foundation , CC hosted its very first Open Journalism track at the CC Global Summit, including eight presentations, lightning talks, panel discussions, and workshops as well as a keynote by Anya Kamenetz .\n\nRepresentatives from 33 news outlets and digital rights-focused organizations attended the CC Summit sessions. The Open Journalism track built on numerous collaborations and workshops throughout 2023.\n\n## Open Education\n\nWe delivered workshops and presentations on CC Licenses and Open Educational Resources at over 16 conferences and events. 
The CC Open Education Platform also funded six global projects, including work to advance the UNESCO Recommendation on OER.\n\n\"Follow the Color Brick Road\" by Bert Kaufmann is licensed under CC BY-SA 2.0.\n\n\n\n", - "page_start": 6, - "page_end": 6, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "content repositories, like libraries, with that of AI developers. A 'books data commons' needs to be both responsibly managed, and useful for developers of AI models.\n\nWe use 'commons' here in the sense of a resource that is broadly shared and accessible, and thus obviates the need for each individual actor to acquire, digitize, and format their own corpus of books for AI training. This resource could be collectively and intentionally managed, though we do not mean to select a particular form of governance in this paper. 4\n\nThis paper is descriptive, rather than prescriptive, mapping possible paths to building a books data commons as defined above and key questions relevant to developers, repositories, and other stakeholders, building on our workshop discussions. We first explain why books matter for AI training and how broader access could be beneficial. We then summarize two tracks that might be considered for developing such a resource, highlighting existing projects that help foreground both the potential and challenges. Finally, we present several key design choices, and next steps that could advance further development of this approach. 5", - "page_start": 2, - "page_end": 2, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## 7. Conclusion\n\nThis paper is a snapshot of an idea that is as underexplored as it is rooted in decades of existing work. The concept of mass digitization of books, including to support text and data mining, of which AI is a subset, is not new. 
But AI training is newly of the zeitgeist, and its transformative use makes questions about how we digitize, preserve, and make accessible knowledge and cultural heritage salient in a distinct way.\n\nAs such, efforts to build a books data commons need not start from scratch; there is much to glean from studying and engaging existing and previous efforts. Those learnings might inform substantive decisions about how to build a books data commons for AI training. For instance, looking at the design decisions of HathiTrust may inform how the technical infrastructure and data management practices for AI training might be designed, as well as how to address challenges to building a comprehensive, diverse, and useful corpus. In addition, learnings might inform the process by which we get to a books data commons for example, illustrating ways to attend to the interests of those likely to be impacted by the dataset's development. 41\n\nWhile this paper does not prescribe a particular path forward, we do think finding a path (or paths) to extend access to books for AI training is critical. In the status quo, large swaths of knowledge contained in books are effectively locked up and inaccessible to most everyone. Google is an exception - it can reap the benefits of their 40 million books dataset for research, development, and deployment of AI models. Large, well-resourced entities could theoretically try to replicate Google's digitization efforts, although it would be incredibly expensive, impractical, and largely duplicative for each entity to individually pursue their own efforts. Even then, it isn't clear how everyone else - independent researchers, entrepreneurs, and smaller entities - will have access. The controversy around the Books3 dataset discussed at the outset should not, then, be an argument in favor of preserving the status quo. 
Instead, it should highlight the urgency of building a books data commons to support an AI ecosystem that provides broad benefits beyond the privileged few.", - "page_start": 20, - "page_end": 20, - "source_file": "creative_common_ai.pdf" - } - ] - }, - { - "references": { - "source_file": "TSX_KMP_2013.pdf", - "query": "From which country does Killam Properties Inc originate ?", - "target_page": 3, - "target_passage": "Killam Properties Inc. is a growth oriented Canadian real estate company.", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "\n\nKillam properties inc 2013 annual report", - "page_start": 0, - "page_end": 0, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## management's Discussion and analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\n## PART II\n\n## Business Overview\n\nKillam Properties Inc., based in Halifax, Nova Scotia, is one of Canada's largest residential landlords, owning, operating, managing and developing multi-family residential and Manufactured Home Community ('MHC') properties. Killam's 164 apartment properties are located in Atlantic Canada's six largest urban centres and in Ontario. The Company's 35 MHCs are located in Ontario and Atlantic Canada. The value of Killam's real estate assets at December 31, 2013, was $1.5 billion. Killam is focused on growing its portfolio, maximizing the value of its properties and increasing FFo per share.\n\nKillam was founded in 2000, based on the recognition of an opportunity to create value through the consolidation of apartments in Atlantic Canada and MHCs across Canada. Killam's first apartment was purchased in 2002 and its first MHC was purchased in 2003. From 2002 to 2009, Killam's apartment portfolio grew through the acquisition of properties in Atlantic Canada's six largest cities, namely Halifax, Moncton, Saint John, Fredericton, St. John's and Charlottetown. 
Killam is now Atlantic Canada's largest residential landlord, with a 14.2% market share of the multi-family rental units in these core markets. Killam entered the Ontario apartment market in 2010, and today owns twelve properties in the province, including assets in Toronto, Ottawa, London and Cambridge. Killam plans to expand its presence in Ontario with additional acquisitions and developments. The apartment business is Killam's largest business segment, accounting for 86% of the Company's NOI from property operations and equity income in 2013. At December 31, 2013, Killam's apartment portfolio consisted of 12,647 units.\n\nKillam complements its acquisition program with the construction of apartment buildings. During 2013, Killam completed the development of four projects totalling 282 units and commenced two additional projects in the second half of the year. Management does not expect developments to exceed 5% of the total asset base in any given year.\n\nIn addition, the Company owns MHCs, also known as land-lease communities or trailer parks. Killam owns the land and infrastructure supporting each community and leases the lots to tenants, who own their own homes and pay Killam a monthly site rent. Killam owns 35 communities which accounted for 14% of Killam's NOI in 2013. During the year Killam sold ten MHC properties located in New Brunswick, allowing the Company to crystallize the value of the properties at attractive cap-rates and use the funds to continue to grow the apartment portfolio.\n\n## Key Performance Indicators (KPIs)\n\nManagement measures Killam's performance based on the following KPIs:", - "page_start": 22, - "page_end": 22, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## a Diversified portfolio\n\nKillam has a diverse portfolio of both apartments and manufactured home communities. 
The apartment portfolio represents 86% of Killam's earnings and includes a variety of property types, such as high-rises, mid-rises and walk-ups, in nine urban centres across five provinces. With a wide selection of properties and price points in each city, Killam caters to a broad tenant base. Killam's 35 manufactured home communities represent 14% of earnings and are located primarily in Nova Scotia and Ontario. The manufactured home communities complement the apartment business, providing stable and predictable cash flows.\n\n", - "page_start": 12, - "page_end": 12, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## management's Discussion and analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\n## Killam's NOI by Province\n\nCombining apartment and MHC's, the following chart highlights the percentage of Killam's forward-looking NOI by province based on ownership interest at December 31, 2013:\n\n## NOI by Province\n\n\n\n## The Multi-family Market Leader in Atlantic Canada\n\nAtlantic Canada is home to 2.3 million people, approximately 43% of whom live in the six largest cities, representing Killam's core markets in the region. Killam has a 14.2% market share of apartment units in these six largest centres. The chart below highlights the apartment NOI generated from each of the key urban markets in Atlantic Canada in 2013, and Killam's market share in each.\n\n", - "page_start": 30, - "page_end": 30, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## about Killam properties inc.\n\nKillam Properties Inc. is a growth oriented Canadian real estate company. We own, manage and develop multi-family residential properties in Atlantic Canada and Ontario. Since our first acquisition in 2002, our real estate portfolio has grown to $1.5 billion and includes 12,647 apartment units and 5,164 manufactured home community (MHC) sites. 
We are committed to growing Killam's earnings by maximizing the returns from our existing portfolio and expanding through acquisitions and development.\n\n## our mission\n\nTo have a team of caring staff deliver clean, safe, quality housing to tenants who are proud to call our properties home.\n\n## our core Values\n\nBuild Community\n\nCurb Appeal\n\nDo the Right Thing\n\npresident's letter\n\n9\n\nasset portfolio\n\n18\n\nMD&a\n\n21\n\nFinancial Statements\n\n66\n\nFive-Year Summary\n\n96\n\nStrong Customer Relationships\n\n\n\n180 mill street, london, ontario", - "page_start": 2, - "page_end": 2, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## Management's Discussion and Analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\n## Business Strategy\n\n## Maximize NOI from Existing Portfolio\n\nManagement is focused on increasing the value of its real estate portfolio by maximizing revenue and operating efficiencies. To achieve NOI growth, Killam must address three critical factors; occupancy, rental rates, and operating costs. The Company focuses on customer service, investing in its properties, leasing and marketing initiatives, and training its employees to maximize these outcomes.\n\nManagement is able to directly control approximately 40% of operating expenses, including labour costs, repairs and maintenance and property general and administrative expenses. The remaining operating costs, including utilities and property taxes, are less controllable. Killam's apartments are currently heated with a combination of natural gas, electricity and oil. Volatile oil and natural gas prices have an impact on Killam's operating costs. To mitigate this volatility, the Company is active in energy conservation initiatives and regularly monitors its energy usage.\n\n## Growth through Acquisitions\n\nKillam is expanding its portfolio by acquiring newer, centrally located buildings and is focused on Ontario. 
During 2013 Killam completed $121.1 million in acquisitions, including properties in Toronto, Ottawa, Moncton and Prince Edward Island.\n\n## Growth through Development\n\nKillam enhances its portfolio growth opportunities by developing properties. Killam started apartment developments in 2010 and has completed five properties to-date, including four in 2013. Building new properties directly allows Killam to control the quality and features of the buildings, maximizes the use of excess land and eliminates the seller's profit, generating higher returns than through acquisitions. Management expects to limit development projects to approximately 5% of the balance sheet on an annual basis.\n\n## Investment in New Properties\n\nIn addition to developing new properties, Killam also acquires newly constructed assets. Management believes that increasing Killam's ownership in new, high-quality buildings will result in above-market and long-term demand for the Company's assets from an aging population, reduce annual capital requirements for deferred maintenance, and transform Killam's portfolio, over time, into one of the highest quality portfolios in Canada.\n\nDemand by renters for newly constructed rental apartments is strong, with high occupancy rates and above-average rents. CMHC's Fall 2013 Halifax Rental Market Report reported 97.3% occupancy for properties built in 2000 or later, compared to 96.8% for all rental markets in the city. The average rent for a two-bedroom unit in these newer buildings was $1,320 per month, compared to a market average two-bedroom rent of $976.\n\nThe new properties added to Killam's portfolio are condo quality, providing tenants with features and amenities traditionally associated with ownership. The Company believes that demand for this type of rental accommodation will grow given an increasing number of homeowners reaching retirement age and looking for alternatives to home ownership. 
Killam is also attracted to the low capital spend requirements from new assets compared to older buildings, which often include significant capital investment to address deferred maintenance. Generally, the amount of annual capital to maintain a property increases as the building ages. In addition, with energy efficient features, the NOI margins are generally higher in newer buildings.\n\nWith strong demand for the acquisition of apartments over the last three years, cap-rates have declined and the pricing differential between older and newer buildings has reduced. This enables Killam to increase the amount of newer apartments in its portfolio without paying a significant premium for quality assets.\n\n## Geographic Diversification", - "page_start": 28, - "page_end": 28, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## increasing Geographic Diversification\n\nWith a home base in Halifax, Killam's roots are in Atlantic Canada and the company has successfully grown by consolidating the residential real estate market in the region's urban centres. In order to meet its long-term growth targets and increase its investment in Canada's most dynamic real estate markets, Killam has been actively expanding its apartment portfolio in Ontario and is exploring investment opportunities in Western Canada. Since 2010, Killam has expanded its apartment target markets to include specific cities in Ontario, and has invested approximately $200 million in real estate assets in the province. Approximately 15% of Killam's 2014 net operating income is expected to be earned in Ontario. The company has set a long-term target to earn 50% of its net operating income outside Atlantic Canada.", - "page_start": 16, - "page_end": 16, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## opportunities for Growth\n\nKillam's growth opportunities include increasing earnings of its existing portfolio and expanding the portfolio through acquisitions and development. 
Acquisitions have been an important part of Killam's growth, having completed over $1.1 billion in acquisitions since the first property was acquired in 2002. Killam began development as a complement to its acquisition program in 2010, and to-date has invested approximately $90 million in new developments. 2013 was Killam's largest year for growth since 2005, adding $191 million of properties to the portfolio, including $121 million in acquisitions and $70 million in new developments. Looking ahead to 2014, Killam has targeted a minimum of $75 million in acquisitions, and the development of two new apartment buildings totaling approximately $46 million.\n\n", - "page_start": 13, - "page_end": 13, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## Increased Supply Risk\n\nIncreased supply risk is the risk of loss from increased competition from the addition of new rental units in Killam's core markets. Numerous other residential developers and apartment owners compete for potential tenants. Although it is Killam's strategy to own multifamily residential properties in premier locations in each market in which it operates, some of the apartments or MHCs of Killam's competitors may be newer, better located or offer lower rents. An increase in alternative housing could have a material adverse effect on Killam's ability to lease units and in the rents charged and could adversely affect Killam's revenues and ability to meet its obligations. To mitigate against this risk Killam has a geographically diverse asset base. Management is expanding this diversification by increasing Killam's investment in apartment markets outside Atlantic Canada.\n\n## Credit Risk\n\nCredit risk arises from the possibility that tenants may experience financial difficulty and be unable to fulfill their lease term commitments. The Company mitigates the risk of credit loss through the diversification of its existing portfolio and limiting its exposure to any one tenant. 
Credit assessments are conducted with respect to all new leasing and the Company also obtains a security deposit to assist in potential recovery requirements. In addition, the receivable balances are monitored on an ongoing basis with the result that the Company's exposure to bad debt is not significant. The Company's bad debt expense experience has historically been less than 0.4% of revenues. None of Killam's tenants account for more than 1% of tenant receivables.\n\n## Development Risk\n\nDevelopment risk is the risk that costs of developments will exceed original estimates, unforeseen delays occur and/or units will not be leased in the timeframe and/or at rents anticipated. Killam minimizes its exposure to development risk by limiting the amount of development underway at any one time. To reduce the Company's exposure to price increases, Killam enters into fixed-rate contracts when possible. To reduce the lease-up risk, Killam does extensive market research in advance of each development to support expected rental rates, and pre-markets its properties early on in the process, to increase demand for the new developments.", - "page_start": 58, - "page_end": 58, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## Geographic Diversification\n\nGeographic diversification in the apartment segment is a priority for Killam. With a 14.2% market share in its core markets in Atlantic Canada, Killam is the region's largest residential landlord. The maximum market share Management foresees Killam reaching in Atlantic Canada is between 15%-18%. With Atlantic Canada representing only 4.9% of the Canadian rental market, Killam's growth opportunities increase significantly when considering assets outside Atlantic Canada.\n\nWith its strong operating platform, Killam can support a larger and more geographically diverse portfolio. 
An increased investment in Ontario, and potentially Western Canada, will increase the Company's diversification and exposure in high growth centres in Canada. Based on the Company's portfolio at year-end, 15% of Killam's 2014 NOI will be generated in Ontario. Management has set a long-term target of growing the amount of NOI generated outside of Atlantic Canada to 50%.\n\nIn 2013, Killam sold a portfolio of ten MHCs in New Brunswick that allowed Killam to crystallize the increased value of this portfolio at attractive cap-rates. This creates moderate short-term dilution but it provides the Company with funds to continue its geographic diversification by accretively growing its apartment portfolio in Ontario.", - "page_start": 28, - "page_end": 28, - "source_file": "TSX_KMP_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "TSX_KMP_2013.pdf", - "query": "How Killam Properties Inc does increase its geographic diversification ? ", - "target_page": 5, - "target_passage": "We are increasing our geographic diversification by expanding our apartment ownership outside Atlantic Canada. ", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "\n\nKillam properties inc 2013 annual report", - "page_start": 0, - "page_end": 0, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## Geographic Diversification\n\nGeographic diversification in the apartment segment is a priority for Killam. With a 14.2% market share in its core markets in Atlantic Canada, Killam is the region's largest residential landlord. The maximum market share Management foresees Killam reaching in Atlantic Canada is between 15%-18%. With Atlantic Canada representing only 4.9% of the Canadian rental market, Killam's growth opportunities increase significantly when considering assets outside Atlantic Canada.\n\nWith its strong operating platform, Killam can support a larger and more geographically diverse portfolio. 
The Company is actively building a portfolio in targeted Ontario markets, including Ottawa, the Greater Toronto Area, and Southwestern Ontario. An increased investment in Ontario, and potentially Western Canada, will increase the Company's diversification and exposure in high growth centres in Canada. Based on the Company's portfolio at year-end, 15% of Killam's 2014 NOI will be generated in Ontario. Management has set a long-term target of growing the amount of NOI generated outside of Atlantic Canada to 50%.\n\nIn 2013, Killam sold a portfolio of ten MHCs in New Brunswick that allowed Killam to crystallize the increased value of this portfolio at attractive cap-rates. This creates moderate short-term dilution but it provides the Company with funds to continue its geographic diversification by accretively growing its apartment portfolio in Ontario.", - "page_start": 28, - "page_end": 28, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## management's Discussion and analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\n## PART II\n\n## Business Overview\n\nKillam Properties Inc., based in Halifax, Nova Scotia, is one of Canada's largest residential landlords, owning, operating, managing and developing multi-family residential and Manufactured Home Community ('MHC') properties. Killam's 164 apartment properties are located in Atlantic Canada's six largest urban centres and in Ontario. The Company's 35 MHCs are located in Ontario and Atlantic Canada. The value of Killam's real estate assets at December 31, 2013, was $1.5 billion. Killam is focused on growing its portfolio, maximizing the value of its properties and increasing FFO per share.\n\nKillam was founded in 2000, based on the recognition of an opportunity to create value through the consolidation of apartments in Atlantic Canada and MHCs across Canada. Killam's first apartment was purchased in 2002 and its first MHC was purchased in 2003. 
From 2002 to 2009, Killam's apartment portfolio grew through the acquisition of properties in Atlantic Canada's six largest cities, namely Halifax, Moncton, Saint John, Fredericton, St. John's and Charlottetown. Killam is now Atlantic Canada's largest residential landlord, with a 14.2% market share of the multi-family rental units in these core markets. Killam entered the Ontario apartment market in 2010, and today owns twelve properties in the province, including assets in Toronto, Ottawa, London and Cambridge. Killam plans to expand its presence in Ontario with additional acquisitions and developments. The apartment business is Killam's largest business segment, accounting for 86% of the Company's NOI from property operations and equity income in 2013. At December 31, 2013, Killam's apartment portfolio consisted of 12,647 units.\n\nKillam complements its acquisition program with the construction of apartment buildings. During 2013, Killam completed the development of four projects totalling 282 units and commenced two additional projects in the second half of the year. Management does not expect developments to exceed 5% of the total asset base in any given year.\n\nIn addition, the Company owns MHCs, also known as land-lease communities or trailer parks. Killam owns the land and infrastructure supporting each community and leases the lots to tenants, who own their own homes and pay Killam a monthly site rent. Killam owns 35 communities which accounted for 14% of Killam's NOI in 2013. 
During the year Killam sold ten MHC properties located in New Brunswick, allowing the Company to crystallize the value of the properties at attractive cap-rates and use the funds to continue to grow the apartment portfolio.\n\n## Key Performance Indicators (KPIs)\n\nManagement measures Killam's performance based on the following KPIs:", - "page_start": 22, - "page_end": 22, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## opportunities for Growth\n\nKillam's growth opportunities include increasing earnings of its existing portfolio and expanding the portfolio through acquisitions and development. Acquisitions have been an important part of Killam's growth, having completed over $1.1 billion in acquisitions since the first property was acquired in 2002. Killam began development as a complement to its acquisition program in 2010, and to-date has invested approximately $90 million in new developments. 2013 was Killam's largest year for growth since 2005, adding $191 million of properties to the portfolio, including $121 million in acquisitions and $70 million in new developments. Looking ahead to 2014, Killam has targeted a minimum of $75 million in acquisitions, and the development of two new apartment buildings totaling approximately $46 million.\n\n", - "page_start": 13, - "page_end": 13, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## a Diversified portfolio\n\nKillam has a diverse portfolio of both apartments and manufactured home communities. The apartment portfolio represents 86% of Killam's earnings and includes a variety of property types, such as high-rises, mid-rises and walk-ups, in nine urban centres across five provinces. With a wide selection of properties and price points in each city, Killam caters to a broad tenant base. Killam's 35 manufactured home communities represent 14% of earnings and are located primarily in Nova Scotia and Ontario. 
The manufactured home communities complement the apartment business, providing stable and predictable cash flows.\n\n", - "page_start": 12, - "page_end": 12, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## Management's Discussion and Analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\n## Business Strategy\n\n## Maximize NOI from Existing Portfolio\n\nManagement is focused on increasing the value of its real estate portfolio by maximizing revenue and operating efficiencies. To achieve NOI growth, Killam must address three critical factors: occupancy, rental rates, and operating costs. The Company focuses on customer service, investing in its properties, leasing and marketing initiatives, and training its employees to maximize these outcomes.\n\nManagement is able to directly control approximately 40% of operating expenses, including labour costs, repairs and maintenance and property general and administrative expenses. The remaining operating costs, including utilities and property taxes, are less controllable. Killam's apartments are currently heated with a combination of natural gas, electricity and oil. Volatile oil and natural gas prices have an impact on Killam's operating costs. To mitigate this volatility, the Company is active in energy conservation initiatives and regularly monitors its energy usage.\n\n## Growth through Acquisitions\n\nKillam is expanding its portfolio by acquiring newer, centrally located buildings and is focused on Ontario. During 2013 Killam completed $121.1 million in acquisitions, including properties in Toronto, Ottawa, Moncton and Prince Edward Island.\n\n## Growth through Development\n\nKillam enhances its portfolio growth opportunities by developing properties. Killam started apartment developments in 2010 and has completed five properties to-date, including four in 2013. 
Building new properties directly allows Killam to control the quality and features of the buildings, maximizes the use of excess land and eliminates the seller's profit, generating higher returns than through acquisitions. Management expects to limit development projects to approximately 5% of the balance sheet on an annual basis.\n\n## Investment in New Properties\n\nIn addition to developing new properties, Killam also acquires newly constructed assets. Management believes that increasing Killam's ownership in new, high-quality buildings will result in above-market and long-term demand for the Company's assets from an aging population, reduce annual capital requirements for deferred maintenance, and transform Killam's portfolio, over time, into one of the highest quality portfolios in Canada.\n\nDemand by renters for newly constructed rental apartments is strong, with high occupancy rates and above-average rents. CMHC's Fall 2013 Halifax Rental Market Report reported 97.3% occupancy for properties built in 2000 or later, compared to 96.8% for all rental markets in the city. The average rent for a two-bedroom unit in these newer buildings was $1,320 per month, compared to a market average two-bedroom rent of $976.\n\nThe new properties added to Killam's portfolio are condo quality, providing tenants with features and amenities traditionally associated with ownership. The Company believes that demand for this type of rental accommodation will grow given an increasing number of homeowners reaching retirement age and looking for alternatives to home ownership. Killam is also attracted to the low capital spend requirements from new assets compared to older buildings, which often include significant capital investment to address deferred maintenance. Generally, the amount of annual capital to maintain a property increases as the building ages. 
In addition, with energy efficient features, the NOI margins are generally higher in newer buildings.\n\nWith strong demand for the acquisition of apartments over the last three years, cap-rates have declined and the pricing differential between older and newer buildings has reduced. This enables Killam to increase the amount of newer apartments in its portfolio without paying a significant premium for quality assets.\n\n## Geographic Diversification", - "page_start": 28, - "page_end": 28, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## increasing Geographic Diversification\n\nWith a home base in Halifax, Killam's roots are in Atlantic Canada and the Company has successfully grown by consolidating the residential real estate market in the region's urban centres. In order to meet its long-term growth targets and increase its investment in Canada's most dynamic real estate markets, Killam has been actively expanding its apartment portfolio in Ontario and is exploring investment opportunities in Western Canada. Since 2010, Killam has expanded its apartment target markets to include specific cities in Ontario, and has invested approximately $200 million in real estate assets in the province. Approximately 15% of Killam's 2014 net operating income is expected to be earned in Ontario. 
The Company has set a long-term target to earn 50% of its net operating income outside Atlantic Canada.", - "page_start": 16, - "page_end": 16, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## management's Discussion and analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\n## Killam's NOI by Province\n\nCombining apartment and MHCs, the following chart highlights the percentage of Killam's forward-looking NOI by province based on ownership interest at December 31, 2013:\n\n## NOI by Province\n\n\n\n## The Multi-family Market Leader in Atlantic Canada\n\nAtlantic Canada is home to 2.3 million people, approximately 43% of whom live in the six largest cities, representing Killam's core markets in the region. Killam has a 14.2% market share of apartment units in these six largest centres. The chart below highlights the apartment NOI generated from each of the key urban markets in Atlantic Canada in 2013, and Killam's market share in each.\n\n", - "page_start": 30, - "page_end": 30, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## Increased Supply Risk\n\nIncreased supply risk is the risk of loss from increased competition from the addition of new rental units in Killam's core markets. Numerous other residential developers and apartment owners compete for potential tenants. Although it is Killam's strategy to own multifamily residential properties in premier locations in each market in which it operates, some of the apartments or MHCs of Killam's competitors may be newer, better located or offer lower rents. An increase in alternative housing could have a material adverse effect on Killam's ability to lease units and in the rents charged and could adversely affect Killam's revenues and ability to meet its obligations. To mitigate against this risk Killam has a geographically diverse asset base. 
Management is expanding this diversification by increasing Killam's investment in apartment markets outside Atlantic Canada.\n\n## Credit Risk\n\nCredit risk arises from the possibility that tenants may experience financial difficulty and be unable to fulfill their lease term commitments. The Company mitigates the risk of credit loss through the diversification of its existing portfolio and limiting its exposure to any one tenant. Credit assessments are conducted with respect to all new leasing and the Company also obtains a security deposit to assist in potential recovery requirements. In addition, the receivable balances are monitored on an ongoing basis with the result that the Company's exposure to bad debt is not significant. The Company's bad debt expense experience has historically been less than 0.4% of revenues. None of Killam's tenants account for more than 1% of tenant receivables.\n\n## Development Risk\n\nDevelopment risk is the risk that costs of developments will exceed original estimates, unforeseen delays occur and/or units will not be leased in the timeframe and/or at rents anticipated. Killam minimizes its exposure to development risk by limiting the amount of development underway at any one time. To reduce the Company's exposure to price increases, Killam enters into fixed-rate contracts when possible. To reduce the lease-up risk, Killam does extensive market research in advance of each development to support expected rental rates, and pre-markets its properties early on in the process, to increase demand for the new developments.", - "page_start": 58, - "page_end": 58, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "\n\n## president's letter\n\n## Dear Shareholders,\n\nI am pleased to review Killam's 2013 performance with you, and outline our strategy and plans for the future. We are progressing nicely with our priorities to increase the quality of our portfolio and expand geographically. 
In addition, we are focused on three key areas of growth for the Company: increase the value of our existing portfolio, acquire accretively and develop profitably.\n\nDuring the past year we expanded communication of our corporate strategy to reach the broader Killam community with the introduction of Killam's Core Values. These values have been inherent in the Company since our first acquisition in 2002, but had not been broadly promoted until this past year. Our Core Values (Curb Appeal, Build Community, Strong Customer Relationships, Do the Right Thing and Creative Solutions)\n\nare represented in the colourful squares you will see throughout this year's report. Killam employees across the Company demonstrate these values in their daily work, distinguishing Killam as a high-quality landlord. The introduction of a quarterly awards program, which recognizes employees who exemplify Killam's\n\nCore Values, enables us to celebrate these values. I have been impressed by both the number and quality of nominations. We truly have a remarkable group of employees who go above and beyond in providing exceptional service to our tenants.\n\n## A Look Back at 2013\n\nI would summarize 2013 as a mixed year for Killam. We were successful in achieving many of the objectives and targets we had set for ourselves, as summarized in the adjacent chart, but faced challenges that impacted our financial performance. We added $191 million in new assets to our portfolio through acquisitions and the completion of four new developments. We also enhanced our leasing and marketing programs, which allowed us to realize gains in occupancy in the second half of the year and improve our position for 2014. We further benefited from both interest and administrative cost savings in the year. 
These improvements were mitigated somewhat by large increases in natural gas costs in Atlantic Canada and a more competitive rental market in the Maritimes, which resulted in increased year-over-year vacancy. The challenges we faced in 2013 resulted in funds from operations (FFO) per share of $0.72, the same as Killam's 2012 FFO per share.\n\n## Growing the Cash Flow from our Properties\n\nWe expect to generate, on average, between 2% and 4% in net operating income (NOI) growth through our same store portfolio on an annual basis. Our same store portfolio represents properties we have owned for equivalent periods year-over-year. Due to commodity price volatility, we experienced an unexpected spike in natural gas prices in Nova Scotia and New Brunswick throughout the 2013 heating season that increased same store utility and fuel expenses by 14%. We were able to partially offset this unprecedented increase by managing controllable expenses to a modest 0.3% increase in the year; however, overall same store operating costs grew by 5.0%. These higher expenses more than offset a 1.8% growth in revenue, resulting in a disappointing 0.4% decline in same store NOI for the year.", - "page_start": 8, - "page_end": 8, - "source_file": "TSX_KMP_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "TSX_KMP_2013.pdf", - "query": "What is the Killam Properties Inc 2013 performance about the Geographic Diversification objective ?", - "target_page": 8, - "target_passage": "Target achieved. Killam acquired $55 million in Ontario real estate in 2013, representing 45% of its acquisition program in the year. Assets acquired included a 102-unit property in Ottawa, a newly built, 179-unit, mixed-used property in downtown Toronto and a 5.2 acre parcel of land for development in Cambridge, Ontario. 
", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "\n\nKillam properties inc 2013 annual report", - "page_start": 0, - "page_end": 0, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## management's Discussion and analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\n## PART II\n\n## Business Overview\n\nKillam Properties Inc., based in Halifax, Nova Scotia, is one of Canada's largest residential landlords, owning, operating, managing and developing multi-family residential and Manufactured Home Community ('MHC') properties. Killam's 164 apartment properties are located in Atlantic Canada's six largest urban centres and in Ontario. The Company's 35 MHCs are located in Ontario and Atlantic Canada. The value of Killam's real estate assets at December 31, 2013, was $1.5 billion. Killam is focused on growing its portfolio, maximizing the value of its properties and increasing FFO per share.\n\nKillam was founded in 2000, based on the recognition of an opportunity to create value through the consolidation of apartments in Atlantic Canada and MHCs across Canada. Killam's first apartment was purchased in 2002 and its first MHC was purchased in 2003. From 2002 to 2009, Killam's apartment portfolio grew through the acquisition of properties in Atlantic Canada's six largest cities, namely Halifax, Moncton, Saint John, Fredericton, St. John's and Charlottetown. Killam is now Atlantic Canada's largest residential landlord, with a 14.2% market share of the multi-family rental units in these core markets. Killam entered the Ontario apartment market in 2010, and today owns twelve properties in the province, including assets in Toronto, Ottawa, London and Cambridge. Killam plans to expand its presence in Ontario with additional acquisitions and developments. 
The apartment business is Killam's largest business segment, accounting for 86% of the Company's NOI from property operations and equity income in 2013. At December 31, 2013, Killam's apartment portfolio consisted of 12,647 units.\n\nKillam complements its acquisition program with the construction of apartment buildings. During 2013, Killam completed the development of four projects totalling 282 units and commenced two additional projects in the second half of the year. Management does not expect developments to exceed 5% of the total asset base in any given year.\n\nIn addition, the Company owns MHCs, also known as land-lease communities or trailer parks. Killam owns the land and infrastructure supporting each community and leases the lots to tenants, who own their own homes and pay Killam a monthly site rent. Killam owns 35 communities which accounted for 14% of Killam's NOI in 2013. During the year Killam sold ten MHC properties located in New Brunswick, allowing the Company to crystallize the value of the properties at attractive cap-rates and use the funds to continue to grow the apartment portfolio.\n\n## Key Performance Indicators (KPIs)\n\nManagement measures Killam's performance based on the following KPIs:", - "page_start": 22, - "page_end": 22, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## Geographic Diversification\n\nGeographic diversification in the apartment segment is a priority for Killam. With a 14.2% market share in its core markets in Atlantic Canada, Killam is the region's largest residential landlord. The maximum market share Management foresees Killam reaching in Atlantic Canada is between 15%-18%. With Atlantic Canada representing only 4.9% of the Canadian rental market, Killam's growth opportunities increase significantly when considering assets outside Atlantic Canada.\n\nWith its strong operating platform, Killam can support a larger and more geographically diverse portfolio. 
The Company is actively building a portfolio in targeted Ontario markets, including Ottawa, the Greater Toronto Area, and Southwestern Ontario. An increased investment in Ontario, and potentially Western Canada, will increase the Company's diversification and exposure in high growth centres in Canada. Based on the Company's portfolio at year-end, 15% of Killam's 2014 NOI will be generated in Ontario. Management has set a long-term target of growing the amount of NOI generated outside of Atlantic Canada to 50%.\n\nIn 2013, Killam sold a portfolio of ten MHCs in New Brunswick that allowed Killam to crystallize the increased value of this portfolio at attractive cap-rates. This creates moderate short-term dilution but it provides the Company with funds to continue its geographic diversification by accretively growing its apartment portfolio in Ontario.", - "page_start": 28, - "page_end": 28, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "\n\n## president's letter\n\n## Dear Shareholders,\n\nI am pleased to review Killam's 2013 performance with you, and outline our strategy and plans for the future. We are progressing nicely with our priorities to increase the quality of our portfolio and expand geographically. In addition, we are focused on three key areas of growth for the Company: increase the value of our existing portfolio, acquire accretively and develop profitably.\n\nDuring the past year we expanded communication of our corporate strategy to reach the broader Killam community with the introduction of Killam's Core Values. These values have been inherent in the Company since our first acquisition in 2002, but had not been broadly promoted until this past year. Our Core Values (Curb Appeal, Build Community, Strong Customer Relationships, Do the Right Thing and Creative Solutions)\n\nare represented in the colourful squares you will see throughout this year's report. 
Killam employees across the Company demonstrate these values in their daily work, distinguishing Killam as a high-quality landlord. The introduction of a quarterly awards program, which recognizes employees who exemplify Killam's\n\nCore Values, enables us to celebrate these values. I have been impressed by both the number and quality of nominations. We truly have a remarkable group of employees who go above and beyond in providing exceptional service to our tenants.\n\n## A Look Back at 2013\n\nI would summarize 2013 as a mixed year for Killam. We were successful in achieving many of the objectives and targets we had set for ourselves, as summarized in the adjacent chart, but faced challenges that impacted our financial performance. We added $191 million in new assets to our portfolio through acquisitions and the completion of four new developments. We also enhanced our leasing and marketing programs, which allowed us to realize gains in occupancy in the second half of the year and improve our position for 2014. We further benefited from both interest and administrative cost savings in the year. These improvements were mitigated somewhat by large increases in natural gas costs in Atlantic Canada and a more competitive rental market in the Maritimes, which resulted in increased year-over-year vacancy. The challenges we faced in 2013 resulted in funds from operations (FFO) per share of $0.72, the same as Killam's 2012 FFO per share.\n\n## Growing the Cash Flow from our Properties\n\nWe expect to generate, on average, between 2% and 4% in net operating income (NOI) growth through our same store portfolio on an annual basis. Our same store portfolio represents properties we have owned for equivalent periods year-over-year. Due to commodity price volatility, we experienced an unexpected spike in natural gas prices in Nova Scotia and New Brunswick throughout the 2013 heating season that increased same store utility and fuel expenses by 14%. 
We were able to partially offset this unprecedented increase by managing controllable expenses to a modest 0.3% increase in the year; however, overall same store operating costs grew by 5.0%. These higher expenses more than offset a 1.8% growth in revenue, resulting in a disappointing 0.4% decline in same store NOI for the year.", - "page_start": 8, - "page_end": 8, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## Management's Discussion and Analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\n## Business Strategy\n\n## Maximize NOI from Existing Portfolio\n\nManagement is focused on increasing the value of its real estate portfolio by maximizing revenue and operating efficiencies. To achieve NOI growth, Killam must address three critical factors: occupancy, rental rates, and operating costs. The Company focuses on customer service, investing in its properties, leasing and marketing initiatives, and training its employees to maximize these outcomes.\n\nManagement is able to directly control approximately 40% of operating expenses, including labour costs, repairs and maintenance and property general and administrative expenses. The remaining operating costs, including utilities and property taxes, are less controllable. Killam's apartments are currently heated with a combination of natural gas, electricity and oil. Volatile oil and natural gas prices have an impact on Killam's operating costs. To mitigate this volatility, the Company is active in energy conservation initiatives and regularly monitors its energy usage.\n\n## Growth through Acquisitions\n\nKillam is expanding its portfolio by acquiring newer, centrally located buildings and is focused on Ontario. During 2013 Killam completed $121.1 million in acquisitions, including properties in Toronto, Ottawa, Moncton and Prince Edward Island.\n\n## Growth through Development\n\nKillam enhances its portfolio growth opportunities by developing properties. 
Killam started apartment developments in 2010 and has completed five properties to-date, including four in 2013. Building new properties directly allows Killam to control the quality and features of the buildings, maximizes the use of excess land and eliminates the seller's profit, generating higher returns than through acquisitions. Management expects to limit development projects to approximately 5% of the balance sheet on an annual basis.\n\n## Investment in New Properties\n\nIn addition to developing new properties, Killam also acquires newly constructed assets. Management believes that increasing Killam's ownership in new, high-quality buildings will result in above-market and long-term demand for the Company's assets from an aging population, reduce annual capital requirements for deferred maintenance, and transform Killam's portfolio, over time, into one of the highest quality portfolios in Canada.\n\nDemand by renters for newly constructed rental apartments is strong, with high occupancy rates and above-average rents. CMHC's Fall 2013 Halifax Rental Market Report reported 97.3% occupancy for properties built in 2000 or later, compared to 96.8% for all rental markets in the city. The average rent for a two-bedroom unit in these newer buildings was $1,320 per month, compared to a market average two-bedroom rent of $976.\n\nThe new properties added to Killam's portfolio are condo quality, providing tenants with features and amenities traditionally associated with ownership. The Company believes that demand for this type of rental accommodation will grow given an increasing number of homeowners reaching retirement age and looking for alternatives to home ownership. Killam is also attracted to the low capital spend requirements from new assets compared to older buildings, which often include significant capital investment to address deferred maintenance. Generally, the amount of annual capital to maintain a property increases as the building ages. 
In addition, with energy efficient features, the NOI margins are generally higher in newer buildings.\n\nWith strong demand for the acquisition of apartments over the last three years, cap-rates have declined and the pricing differential between older and newer buildings has reduced. This enables Killam to increase the amount of newer apartments in its portfolio without paying a significant premium for quality assets.\n\n## Geographic Diversification", - "page_start": 28, - "page_end": 28, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## a Diversified portfolio\n\nKillam has a diverse portfolio of both apartments and manufactured home communities. The apartment portfolio represents 86% of Killam's earnings and includes a variety of property types, such as high-rises, mid-rises and walk-ups, in nine urban centres across five provinces. With a wide selection of properties and price points in each city, Killam caters to a broad tenant base. Killam's 35 manufactured home communities represent 14% of earnings and are located primarily in Nova Scotia and Ontario. The manufactured home communities complement the apartment business, providing stable and predictable cash flows.\n\n", - "page_start": 12, - "page_end": 12, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## about Killam properties inc.\n\nKillam Properties Inc. is a growth oriented Canadian real estate company. We own, manage and develop multi-family residential properties in Atlantic Canada and Ontario. Since our first acquisition in 2002, our real estate portfolio has grown to $1.5 billion and includes 12,647 apartment units and 5,164 manufactured home community (MHC) sites. 
We are committed to growing Killam's earnings by maximizing the returns from our existing portfolio and expanding through acquisitions and development.\n\n## our mission\n\nTo have a team of caring staff deliver clean, safe, quality housing to tenants who are proud to call our properties home.\n\n## our core Values\n\nBuild Community\n\nCurb Appeal\n\nDo the Right Thing\n\npresident's letter\n\n9\n\nasset portfolio\n\n18\n\nMD&a\n\n21\n\nFinancial Statements\n\n66\n\nFive-Year Summary\n\n96\n\nStrong Customer Relationships\n\n\n\n180 mill street, london, ontario", - "page_start": 2, - "page_end": 2, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## opportunities for Growth\n\nKillam's growth opportunities include increasing earnings of its existing portfolio and expanding the portfolio through acquisitions and development. Acquisitions have been an important part of Killam's growth, having completed over $1.1 billion in acquisitions since the first property was acquired in 2002. Killam began development as a complement to its acquisition program in 2010, and to-date has invested approximately $90 million in new developments. 2013 was Killam's largest year for growth since 2005, adding $191 million of properties to the portfolio, including $121 million in acquisitions and $70 million in new developments. 
looking ahead to 2014, Killam has targeted a minimum of $75 million in acquisitions, and the development of two new apartment buildings totaling approximately $46 million.\n\n", - "page_start": 13, - "page_end": 13, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## management's Discussion and analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\n## Killam's NOI by Province\n\nCombining apartment and MHC's, the following chart highlights the percentage of Killam's forward-looking NOI by province based on ownership interest at December 31, 2013:\n\n## NOI by Province\n\n\n\n## The Multi-family Market Leader in Atlantic Canada\n\nAtlantic Canada is home to 2.3 million people, approximately 43% of whom live in the six largest cities, representing Killam's core markets in the region. Killam has a 14.2% market share of apartment units in these six largest centres. The chart below highlights the apartment NOI generated from each of the key urban markets in Atlantic Canada in 2013, and Killam's market share in each.\n\n", - "page_start": 30, - "page_end": 30, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## Increased Supply Risk\n\nIncreased supply risk is the risk of loss from increased competition from the addition of new rental units in Killam's core markets. Numerous other residential developers and apartment owners compete for potential tenants. Although it is Killam's strategy to own multifamily residential properties in premier locations in each market in which it operates, some of the apartments or MHCs of Killam's competitors may be newer, better located or offer lower rents. An increase in alternative housing could have a material adverse effect on Killam's ability to lease units and in the rents charged and could adversely affect Killam's revenues and ability to meet its obligations. To mitigate against this risk Killam has a geographically diverse asset base. 
Management is expanding this diversification by increasing Killam's investment in apartment markets outside Atlantic Canada.\n\n## Credit Risk\n\nCredit risk arises from the possibility that tenants may experience financial difficulty and be unable to fulfill their lease term commitments. The Company mitigates the risk of credit loss through the diversification of its existing portfolio and limiting its exposure to any one tenant. Credit assessments are conducted with respect to all new leasing and the Company also obtains a security deposit to assist in potential recovery requirements. In addition, the receivable balances are monitored on an ongoing basis with the result that the Company's exposure to bad debt is not significant. The Company's bad debt expense experience has historically been less than 0.4% of revenues. None of Killam's tenants account for more than 1% of tenant receivables.\n\n## Development Risk\n\nDevelopment risk is the risk that costs of developments will exceed original estimates, unforeseen delays occur and/or units will not be leased in the timeframe and/or at rents anticipated. Killam minimizes its exposure to development risk my limiting the amount of development underway at any one time. To reduce the Company's exposure to price increases, Killam enters into fixed-rate contracts when possible. To reduce the lease-up risk, Killam does extensive market research in advance of each development to support expected rental rates, and pre-markets its properties early on in the process, to increase demand for the new developments.", - "page_start": 58, - "page_end": 58, - "source_file": "TSX_KMP_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv2_taclccby4_license.pdf", - "query": "What is the conventional workflow for BERT ?", - "target_page": 1, - "target_passage": "The conventional workflow for BERT consists of two stages: pre-training and fine-tuning. 
", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## A Primer in BERTology: What We Know About How BERT Works\n\n## Anna Rogers\n\nCenter for Social Data Science University of Copenhagen arogers@sodas.ku.dk\n\n## Olga Kovaleva\n\nUniversity of Massachusetts Lowell\n\nDept. of Computer Science okovalev@cs.uml.edu\n\n## Abstract\n\nTransformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.\n\n## 1 Introduction\n\nSince their introduction in 2017, Transformers (Vaswani et al., 2017) have taken NLP by storm, offering enhanced parallelization and better modeling of long-range dependencies. The best known Transformer-based model is BERT (Devlin et al., 2019); it obtained state-of-the-art results in numerous benchmarks and is still a must-have baseline.\n\nWhile it is clear that BERT works remarkably well, it is less clear why , which limits further hypothesis-driven improvement of the architecture. Unlike CNNs, the Transformers have little cognitive motivation, and the size of these models limits our ability to experiment with pre-training and perform ablation studies. This explains a large number of studies over the past year that attempted to understand the reasons behind BERT's performance.\n\nIn this paper, we provide an overview of what has been learned to date, highlighting the questions which are still unresolved. 
We first consider the linguistic aspects of it, i.e., the current evidence regarding the types of linguistic and world knowledge learned by BERT, as well as where and how this knowledge may be stored in the model. We then turn to the technical aspects of the model and provide an overview of the current proposals to\n\n## Anna Rumshisky\n\nDept. of Computer Science University of Massachusetts Lowell\n\narum@cs.uml.edu\n\nimprove BERT's architecture, pre-training and finetuning. We conclude by discussing the issue of overparameterization, the approaches to compressing BERT, and the nascent area of pruning as a model analysis technique.\n\n## 2 Overview of BERT architecture\n\nFundamentally, BERT is a stack of Transformer encoder layers (Vaswani et al., 2017) which consist of multiple self-attention \"heads\". For every input token in a sequence, each head computes key, value and query vectors, used to create a weighted representation. The outputs of all heads in the same layer are combined and run through a fully-connected layer. Each layer is wrapped with a skip connection and followed by layer normalization.\n\nThe conventional workflow for BERT consists of two stages: pre-training and fine-tuning. Pretraining uses two self-supervised tasks: masked language modeling (MLM, prediction of randomly masked input tokens) and next sentence prediction (NSP, predicting if two input sentences are adjacent to each other). In fine-tuning for downstream applications, one or more fully-connected layers are typically added on top of the final encoder layer.\n\nThe input representations are computed as follows: each word in the input is first tokenized into wordpieces (Wu et al., 2016), and then three embedding layers (token, position, and segment) are combined to obtain a fixed-length vector. 
Special token [CLS] is used for classification predictions, and [SEP] separates input segments.\n\nGoogle 1 and HuggingFace (Wolf et al., 2020) provide many variants of BERT, including the original \"base\" and \"large\" versions. They vary in the number of heads, layers, and hidden state size.\n\ngoogle-research/bert", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "| | Compression Performance Speedup | | | Model | Evaluation |\n|----------------------------------------------------------------------------|-----------------------------------|-------------|-------------|-------------------|---------------------------------------------------|\n| BERT-base (Devlin et al., 2019) | × 1 | 100% | × 1 | BERT12 | All GLUE tasks, SQuAD |\n| BERT-small | × 3.8 | 91% | - | BERT4 † | All GLUE tasks |\n| DistilBERT (Sanh et al., 2019a) BERT6-PKD (Sun et al., 2019a) | × 1.5 × 1.6 | 90% § 98% | × 1.6 × 1.9 | BERT6 BERT6 | All GLUE tasks, SQuAD No WNLI, CoLA, STS-B; RACE |\n| BERT3-PKD (Sun et al., 2019a) | × 2.4 | 92% | × 3.7 | BERT3 | No WNLI, CoLA, STS-B; RACE |\n| Aguilar et al. (2019), Exp. 
3 | × 1.6 | 93% | - | BERT6 | CoLA, MRPC, QQP, RTE |\n| | | 87% | | | |\n| BERT-48 (Zhao et al., 2019) | × 62 | | × 77 | BERT12 ∗† | MNLI, MRPC, SST-2 |\n| BERT-192 (Zhao et al., 2019) | × 5.7 | 93% | × 22 | BERT12 ∗† | MNLI, MRPC, SST-2 |\n| Distillation TinyBERT (Jiao et al., 2019) | × 7.5 | 96% | × 9.4 | BERT4 † | No WNLI; SQuAD |\n| MobileBERT (Sun et al., 2020) | × 4.3 | 100% | × 4 ‡ | BERT24 † † | No WNLI; SQuAD No WNLI, CoLA and STS-B |\n| PD (Turc et al., 2019) | × 1.6 | 98% 93% | × 2.5 × 9 | BERT6 BERT8 †‖ | SQuAD |\n| WaLDORf (Tian et al., 2019) MiniLM (Wang et al., 2020b) | × 4.4 × 1.65 | 99% | × 2 | | |\n| | ∗∗ | | × | BERT6 | No WNLI, STS-B, MNLImm; SQuAD |\n| MiniBERT(Tsai et al., 2019) | × 6 | 98% | 27 ∗∗ | mBERT3 † | CoNLL-18 POS and morphology |\n| BiLSTM-soft (Tang et al., 2019) | × 110 × | 91% ¶ | × 434 ‡ - | | BiLSTM1 MNLI, QQP, SST-2 |\n| Quanti-zation Q-BERT-MP (Shen et al., 2019) BERT-QAT (Zafrir et al., 2019) | 13 × 4 | 98% 99% | - | BERT12 BERT12 | MNLI, SST-2, CoNLL-03, SQuAD No WNLI, MNLI; SQuAD |", - "page_start": 10, - "page_end": 10, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "\n\nFigure 5: Pre-trained weights help BERT find wider optima in fine-tuning on MRPC (right) than training from scratch (left) (Hao et al., 2019)\n\n\n\nbeddings as input for training BERT, while Poerner et al. (2019) adapt entity vectors to BERT representations. As mentioned above, Wang et al. (2020c) integrate knowledge not through entity embeddings, but through additional pre-training objective of knowledge base completion. Sun et al. (2019b,c) modify the standard MLM task to mask named entities rather than random words, and Yin et al. (2020) train with MLM objective over both text and linearized table data. Wang et al. 
(2020a) enhance RoBERTa with both linguistic and factual knowledge with task-specific adapters.\n\nPre-training is the most expensive part of training BERT, and it would be informative to know how much benefit it provides. On some tasks, a randomly initialized and fine-tuned BERT obtains competitive or higher results than the pre-trained BERT with the task classifier and frozen weights (Kovaleva et al., 2019). The consensus in the community is that pre-training does help in most situations, but the degree and its exact contribution requires further investigation. Prasanna et al. (2020) found that most weights of pre-trained BERT are useful in fine-tuning, although there are \"better\" and \"worse\" subnetworks. One explanation is that pre-trained weights help the fine-tuned BERT find wider and flatter areas with smaller generalization error, which makes the model more robust to overfitting (see Figure 5 from Hao et al. (2019)).\n\nGiven the large number and variety of proposed modifications, one would wish to know how much impact each of them has. However, due to the overall trend towards large model sizes, systematic ablations have become expensive. Most new models claim superiority on standard benchmarks, but gains are often marginal, and estimates of model stability and significance testing are very rare.\n\n## 5.4 Fine-tuning BERT\n\nPre-training + fine-tuning workflow is a crucial part of BERT. The former is supposed to provide task-independent knowledge, and the latter would presumably teach the model to rely more on the representations useful for the task at hand.\n\nKovaleva et al. (2019) did not find that to be the case for BERT fine-tuned on GLUE tasks 5 : during fine-tuning, the most changes for 3 epochs occurred in the last two layers of the models, but those changes caused self-attention to focus on [SEP] rather than on linguistically interpretable patterns. It is understandable why fine-tuning would increase the attention to [CLS] , but not [SEP] . 
If Clark et al. (2019) are correct that [SEP] serves as \"noop\" indicator, fine-tuning basically tells BERT what to ignore.\n\nSeveral studies explored the possibilities of improving the fine-tuning of BERT:\n\n - · Taking more layers into account : learning a complementary representation of the information in deep and output layers (Yang and Zhao, 2019), using a weighted combination of all layers instead of the final one (Su and Cheng, 2019; Kondratyuk and Straka, 2019), and layer dropout (Kondratyuk and Straka, 2019).\n - · Two-stage fine-tuning introduces an intermediate supervised training stage between pre-training and fine-tuning (Phang et al., 2019; Garg et al., 2020; Arase and Tsujii, 2019; Pruksachatkun et al., 2020; Glavaš and Vuli'c, 2020). Ben-David et al. (2020) propose a pivot-based variant of MLM to fine-tune BERT for domain adaptation.\n - · Adversarial token perturbations improve robustness of the model (Zhu et al., 2019).\n - · Adversarial regularization in combination with Bregman Proximal Point Optimization helps alleviate pre-trained knowledge forgetting and therefore prevents BERT from overfitting to downstream tasks (Jiang et al., 2019a).\n - · Mixout regularization improves the stability of BERT fine-tuning even for a small number of training examples (Lee et al., 2019).", - "page_start": 8, - "page_end": 8, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "## 3.3 World knowledge\n\nThe bulk of evidence about commonsense knowledge captured in BERT comes from practitioners using it to extract such knowledge. One direct probing study of BERT reports that BERT struggles with pragmatic inference and role-based event knowledge (Ettinger, 2019). 
BERT also struggles with abstract attributes of objects, as well as visual and perceptual properties that are likely to be assumed rather than mentioned (Da and Kasai, 2019).\n\nThe MLM component of BERT is easy to adapt for knowledge induction by filling in the\n\nKG\n\nDante\n\nborn-in\n\nFlorence\n\nFigure 1:\n\n\n\nQuerying knowledge bases (KB) and lan-\n\nguage models (LM) for factual knowledge. Figure 2: BERT world knowledge (Petroni et al., 2019)\n\nvast amounts of linguistic knowledge (Peters et al., 2018b; Goldberg, 2019; Tenney et al., 2019) useful for downstream tasks. This knowledge is usually accessed either by conditioning on latent context representations produced by the original model or by using the original model weights to initialize a task-specific model which is then further fine-tuned. This type of knowledge transfer is crucial for current state-of-the-art results on a wide range of tasks. In contrast, knowledge bases are e ective soblanks (e.g. \"Cats like to chase [\\_\\_\\_]\"). Petroni et al. (2019) showed that, for some relation types, vanilla BERT is competitive with methods relying on knowledge bases (Figure 2), and Roberts et al. (2020) show the same for open-domain QA using T5 model (Raffel et al., 2019). Davison et al. (2019) suggest that it generalizes better to unseen data. In order to retrieve BERT's knowledge, we need good template sentences, and there is work on their automatic extraction and augmentation (Bouraoui et al., 2019; Jiang et al., 2019b).\n\nff lutions for accessing annotated gold-standard relational data by enabling queries such as (D ante , born-in , X ). However, in practice we often need to extract relational data from text or other modalities to populate these knowledge bases. This requires complex NLP pipelines involving entity extraction, coreference resolution, entity linking and relation extraction (Surdeanu and Ji, 2014)components that often need supervised data and fixed schemas. 
Moreover, errors can easily propagate and accumulate throughout the pipeline. Instead, we could attempt to query neural language models for relational data by asking them to fill in masked tokens in sequences like 'Dante was born However, BERT cannot reason based on its world knowledge . Forbes et al. (2019) show that BERTcan \"guess\" the affordances and properties of many objects, but can not reason about the relationship between properties and affordances. For example, it 'knows\" that people can walk into houses, and that houses are big, but it cannot infer that houses are bigger than people. Zhou et al. (2020) and Richardson and Sabharwal (2019) also show that the performance drops with the number of necessary inference steps. Some of BERT's world knowledge success comes from learning stereotypical associations (Poerner et al., 2019), e.g., a person with an Italian-sounding name is predicted to be Italian, even when it is incorrect.\n\n## 3.4 Limitations\n\nMultiple probing studies in section 3 and section 4 report that BERT possesses a surprising amount of syntactic, semantic, and world knowledge. However, Tenney et al. (2019a) remarks, 'the fact that a linguistic pattern is not observed by our probing classifier does not guarantee that it is not there, and the observation of a pattern does not tell us how it is used.\" There is also the issue of how complex a probe should be allowed to be (Liu et al., 2019a). If a more complex probe recovers more information, to what extent are we still relying on the original model?\n\nFurthermore, different probing methods may lead to complementary or even contradictory conclusions, which makes a single test (as in most stud-\n\n(\n\nDante\n\n,\n\nborn-in\n\n,\n\nX\n\n)\n\nSymbolic\n\nMemory Access\n\nFlorence", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "| GOBO(Zadeh and Moshovos, 2020) | × 9 . 8 | 99% | - | BERT12 | MNLI |\n| McCarley et al. 
(2020), ff2 RPP (Guo et al., 2019) | × 2.2 ‡ × 1.7 ‡ | 98% ‡ 99% ‡ | × 1.9 ‡ - | BERT24 | SQuAD, Natural Questions |\n| Pruning Soft MvP (Sanh et al., 2020) | × 33 | 94% ¶ | - | BERT24 | No WNLI, STS-B; SQuAD |\n| | × | 94-100% | | BERT12 | MNLI, QQP, SQuAD |\n| IMP (Chen et al., 2020), rewind 50% | 1.4-2.5 | | - | BERT12 | No MNLI-mm; SQuAD |\n| ALBERT-base (Lan et al., 2020b) ALBERT-xxlarge (Lan et al., 2020b) | × 9 × 0.47 | 97% 107% | - - | BERT12 † BERT12 † | MNLI, SST-2 MNLI, SST-2 |\n| Other BERT-of-Theseus (Xu et al., 2020) | × 1.6 | 98% | × 1.9 | BERT6 | No WNLI |\n| PoWER-BERT (Goyal et al., 2020) | | 99% | × 2-4.5 | BERT12 | No WNLI; RACE |\n| | N/A | | | | |", - "page_start": 10, - "page_end": 10, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "0\n\n2\n\np\n\ne\n\nS\n\n4\n\n]\n\nL\n\nC\n\n.\n\ns\n\nc\n\n[\n\n2\n\nv\n\n6\n\n6\n\n0\n\n1\n\n0\n\n.\n\n9\n\n0\n\n9\n\n1\n\n:\n\nv\n\ni\n\nX\n\nr\n\na\n\nRecent progress in pretraining language mod-\n\nels on large textual corpora led to a surge\n\nof improvements for downstream NLP tasks.\n\nWhilst learning linguistic knowledge, these\n\nreport that an intermediate fine-tuning step with supervised parsing does not make much difference for downstream task performance. models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as 'fillin-the-blank' cloze statements. Language\n\nmodels have many advantages over structured\n\n## 3.2 Semantic knowledge knowledge bases: they require no schema en-\n\nTo date, more studies have been devoted to BERT's knowledge of syntactic rather than semantic phenomena. However, we do have evidence from an MLMprobing study that BERT has some knowledge of semantic roles (Ettinger, 2019). BERT even displays some preference for the incorrect fillers for semantic roles that are semantically related to the correct ones, as opposed to those that are unrelated (e.g. 
\"to tip a chef\" is better than \"to tip a robin\", but worse than \"to tip a waiter\"). gineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-theart pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answer-\n\nBERTstruggles with representations of numbers. Addition and number decoding tasks showed that BERT does not form good representations for floating point numbers and fails to generalize away from the training data (Wallace et al., 2019b). A part of the problem is BERT's wordpiece tokenization, since numbers of similar values can be divided up into substantially different word chunks. call factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at https: //github.com/facebookresearch/LAMA . 1 Introduction Recently, pretrained high-capacity language models such as ELMo (Peters et al., 2018a) and BERT\n\nTenney et al. (2019b) showed that BERT encodes information about entity types, relations, semantic roles, and proto-roles , since this information can be detected with probing classifiers. ing against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to re-\n\nOut-of-the-box BERTis surprisingly brittle to named entity replacements : e.g. replacing names in the coreference task changes 85% of predictions (Balasubramanian et al., 2020). 
This suggests that the model does not actually form a generic idea of named entities, although its F1 scores on NER probing tasks are high (Tenney et al., 2019a). Broscheit (2019) find that fine-tuning BERT on Wikipedia entity linking \"teaches\" it additional entity knowledge, which would suggest that it did not absorb all the relevant entity information during pre-training on Wikipedia. (Devlin et al., 2018a) have become increasingly important in NLP. They are optimised to either predict the next word in a sequence or some masked word anywhere in a given sequence ( e.g. 'Dante was born in [M ask ] in the year 1265.'). The parameters of these models appear to store\n\n## 3.3 World knowledge", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "be successfully approximated with adapter modules. They achieve competitive performance on 26 classification tasks at a fraction of the computational cost. Adapters in BERT were also used for multi-task learning (Stickland and Murray, 2019) and cross-lingual transfer (Artetxe et al., 2019). An alternative to fine-tuning is extracting features from frozen representations, but fine-tuning works better for BERT (Peters et al., 2019b).\n\nA big methodological challenge in the current NLP is that the reported performance improvements of new models may well be within variation induced by environment factors (Crane, 2018). BERT is not an exception. Dodge et al. (2020) report significant variation for BERT fine-tuned on GLUE tasks due to both weight initialization and training data order. They also propose early stopping on the less-promising seeds.\n\nAlthough we hope that the above observations may be useful for the practitioners, this section does not exhaust the current research on fine-tuning and its alternatives. 
For example, we do not cover such topics as Siamese architectures, policy gradient training, automated curriculum learning, and others.\n\n## 6 How big should BERT be?\n\n## 6.1 Overparameterization\n\nTransformer-based models keep growing by orders of magnitude: the 110M parameters of base BERT are now dwarfed by 17B parameters of Turing-NLG (Microsoft, 2020), which is dwarfed by 175B of GPT-3 (Brown et al., 2020). This trend raises concerns about computational complexity of self-attention (Wu et al., 2019a), environmental issues (Strubell et al., 2019; Schwartz et al., 2019), fair comparison of architectures (Aßenmacher and Heumann, 2020), and reproducibility.\n\nHuman language is incredibly complex, and would perhaps take many more parameters to describe fully, but the current models do not make good use of the parameters they already have. Voita et al. (2019b) showed that all but a few Transformer heads could be pruned without significant losses in performance . For BERT, Clark et al. (2019) observe that most heads in the same layer show similar self-attention patterns (perhaps related to the fact that the output of all self-attention heads in a layer is passed through the same MLP), which explains why Michel et al. (2019) were able to reduce most layers to a single head.\n\nDepending on the task, some BERT heads/layers are not only redundant (Kao et al., 2020), but also harmful to the downstream task performance. Positive effect from head disabling was reported for machine translation (Michel et al., 2019), abstractive summarization (Baan et al., 2019), and GLUE tasks (Kovaleva et al., 2019). Additionally, Tenney et al. (2019a) examine the cumulative gains of their structural probing classifier, observing that in 5 out of 8 probing tasks some layers cause a drop in scores (typically in the final layers). Gordon et al. 
(2020) find that 30-40% of the weights can be pruned without impact on downstream tasks.\n\nIn general, larger BERT models perform better (Liu et al., 2019a; Roberts et al., 2020), but not always: BERT-base outperformed BERT-large on subject-verb agreement (Goldberg, 2019) and sentence subject detection (Lin et al., 2019). Given the complexity of language, and amounts of pretraining data, it is not clear why BERT ends up with redundant heads and layers. Clark et al. (2019) suggest that one possible reason is the use of attention dropouts, which causes some attention weights to be zeroed-out during training.\n\n## 6.2 Compression techniques\n\nGiven the above evidence of overparameterization, it does not come as a surprise that BERT can be efficiently compressed with minimal accuracy loss , which would be highly desirable for real-world applications. Such efforts to date are summarized in Table 1. The main approaches are knowledge distillation, quantization, and pruning.", - "page_start": 9, - "page_end": 9, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "this strategy often requires compatible hardware.\n\nAs discussed in section 6, individual selfattention heads and BERT layers can be disabled without significant drop in performance (Michel et al., 2019; Kovaleva et al., 2019; Baan et al., 2019). Pruning is a compression technique that takes advantage of that fact, typically reducing the amount of computation via zeroing out of certain parts of the large model. In structured pruning, architecture blocks are dropped, as in LayerDrop (Fan et al., 2019). In unstructured, the weights in the entire model are pruned irrespective of their location, as in magnitude pruning (Chen et al., 2020) or movement pruning (Sanh et al., 2020).\n\nPrasanna et al. (2020) and Chen et al. (2020) explore BERT from the perspective of the lottery ticket hypothesis (Frankle and Carbin, 2019), looking specifically at the \"winning\" subnetworks in pre-trained BERT. 
They independently find that such subnetworks do exist, and that transferability between subnetworks for different tasks varies.\n\nIf the ultimate goal of training BERT is compression, Li et al. (2020) recommend training larger\n\nmodels and compressing them heavily rather than compressing smaller models lightly.\n\nOther techniques include decomposing BERT's embedding matrix into smaller matrices (Lan et al., 2020a), progressive module replacing (Xu et al., 2020) and dynamic elimination of intermediate encoder outputs (Goyal et al., 2020). See Ganesh et al. (2020) for a more detailed discussion of compression methods.\n\n## 6.3 Pruning and model analysis\n\nThere is a nascent discussion around pruning as a model analysis technique. The basic idea is that a compressed model a priori consists of elements that are useful for prediction; therefore by finding out what they do we may find out what the whole network does. For instance, BERT has heads that seem to encode frame-semantic relations, but disabling them might not hurt downstream task performance Kovaleva et al. (2019); this suggests that this knowledge is not actually used.\n\nFor the base Transformer, Voita et al. (2019b) identify the functions of self-attention heads and", - "page_start": 10, - "page_end": 10, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "Table 1: Comparison of BERT compression studies. Compression, performance retention, inference time speedup figures are given with respect to BERTbase, unless indicated otherwise. Performance retention is measured as a ratio of average scores achieved by a given model and by BERTbase. The subscript in the model description reflects the number of layers used. ∗ Smaller vocabulary used. † The dimensionality of the hidden layers is reduced. ‖ Convolutional layers used. ‡ Compared to BERTlarge. ∗∗ Compared to mBERT. § As reported in (Jiao et al., 2019). 
¶ In comparison to the dev set.", - "page_start": 10, - "page_end": 10, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "Pre-Training for Deep Language Understanding. arXiv:1908.04577 [cs] .\n\nWenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020b. MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers. arXiv preprint arXiv:2002.10957 .\n\nXiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2020c. KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation. arXiv:1911.06136 [cs] .\n\nYile Wang, Leyang Cui, and Yue Zhang. 2020d. How Can BERT Help Lexical Semantics Tasks? arXiv:1911.02929 [cs] .\n\nZihan Wang, Stephen Mayhew, Dan Roth, et al. 2019b. Cross-Lingual Ability of Multilingual BERT: An Empirical Study. arXiv preprint arXiv:1912.07840 .\n\nAlex Warstadt and Samuel R. Bowman. 2020. Can neural networks acquire a structural bias from raw linguistic data? In Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society , Online.\n\nAlex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, et al. 2019. Investigating BERT's Knowledge of Language: Five Analysis Methods with NPIs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) , pages 2870-2880.\n\nGregor Wiedemann, Steffen Remus, Avi Chawla, and Chris Biemann. 2019. Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings. arXiv preprint arXiv:1909.10430 .\n\nSarah Wiegreffe and Yuval Pinter. 2019. Attention is not not Explanation. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) , pages 1120, Hong Kong, China. Association for Computational Linguistics.\n\nThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2020. HuggingFace's Transformers: State-of-the-Art Natural Language Processing. arXiv:1910.03771 [cs] .\n\nFelix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. 2019a. Pay Less Attention with Lightweight and Dynamic Convolutions. In International Conference on Learning Representations .\n\nXing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019b. Conditional BERT Contextual Augmentation. In ICCS 2019: Computational Science - ICCS 2019 , pages 84-95. Springer.\n\nYonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv preprint arXiv:1609.08144 .\n\nZhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. 2020. Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 4166-4176, Online. Association for Computational Linguistics.\n\nCanwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, and Ming Zhou. 2020. BERT-of-Theseus: Compressing BERT by Progressive Module Replacing. arXiv preprint arXiv:2002.02925 .\n\nJunjie Yang and Hai Zhao. 2019. Deepening Hidden Representations from Pre-Trained Language Models for Natural Language Understanding. arXiv:1911.01940 [cs] .\n\nZhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. 
XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv:1906.08237 [cs] .\n\nPengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. TaBERT: Pretraining for Joint Understanding of Textual and Tabular", - "page_start": 21, - "page_end": 21, - "source_file": "arxiv2_taclccby4_license.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv2_taclccby4_license.pdf", - "query": "Is syntaxis encoded with Bert model ?", - "target_page": 2, - "target_passage": " As far as how syntaxis represented, it seems that syntactic structure is not directly encoded in self-attention weights.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "0\n\n2\n\np\n\ne\n\nS\n\n4\n\n]\n\nL\n\nC\n\n.\n\ns\n\nc\n\n[\n\n2\n\nv\n\n6\n\n6\n\n0\n\n1\n\n0\n\n.\n\n9\n\n0\n\n9\n\n1\n\n:\n\nv\n\ni\n\nX\n\nr\n\na\n\nRecent progress in pretraining language mod-\n\nels on large textual corpora led to a surge\n\nof improvements for downstream NLP tasks.\n\nWhilst learning linguistic knowledge, these\n\nreport that an intermediate fine-tuning step with supervised parsing does not make much difference for downstream task performance. models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as 'fillin-the-blank' cloze statements. Language\n\nmodels have many advantages over structured\n\n## 3.2 Semantic knowledge knowledge bases: they require no schema en-\n\nTo date, more studies have been devoted to BERT's knowledge of syntactic rather than semantic phenomena. However, we do have evidence from an MLMprobing study that BERT has some knowledge of semantic roles (Ettinger, 2019). BERT even displays some preference for the incorrect fillers for semantic roles that are semantically related to the correct ones, as opposed to those that are unrelated (e.g. \"to tip a chef\" is better than \"to tip a robin\", but worse than \"to tip a waiter\"). 
gineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-theart pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answer-\n\nBERTstruggles with representations of numbers. Addition and number decoding tasks showed that BERT does not form good representations for floating point numbers and fails to generalize away from the training data (Wallace et al., 2019b). A part of the problem is BERT's wordpiece tokenization, since numbers of similar values can be divided up into substantially different word chunks. call factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at https: //github.com/facebookresearch/LAMA . 1 Introduction Recently, pretrained high-capacity language models such as ELMo (Peters et al., 2018a) and BERT\n\nTenney et al. (2019b) showed that BERT encodes information about entity types, relations, semantic roles, and proto-roles , since this information can be detected with probing classifiers. ing against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to re-\n\nOut-of-the-box BERTis surprisingly brittle to named entity replacements : e.g. replacing names in the coreference task changes 85% of predictions (Balasubramanian et al., 2020). 
This suggests that the model does not actually form a generic idea of named entities, although its F1 scores on NER probing tasks are high (Tenney et al., 2019a). Broscheit (2019) find that fine-tuning BERT on Wikipedia entity linking \"teaches\" it additional entity knowledge, which would suggest that it did not absorb all the relevant entity information during pre-training on Wikipedia. (Devlin et al., 2018a) have become increasingly important in NLP. They are optimised to either predict the next word in a sequence or some masked word anywhere in a given sequence ( e.g. 'Dante was born in [M ask ] in the year 1265.'). The parameters of these models appear to store\n\n## 3.3 World knowledge", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "\n\nFigure 5: Pre-trained weights help BERT find wider optima in fine-tuning on MRPC (right) than training from scratch (left) (Hao et al., 2019)\n\n\n\nbeddings as input for training BERT, while Poerner et al. (2019) adapt entity vectors to BERT representations. As mentioned above, Wang et al. (2020c) integrate knowledge not through entity embeddings, but through additional pre-training objective of knowledge base completion. Sun et al. (2019b,c) modify the standard MLM task to mask named entities rather than random words, and Yin et al. (2020) train with MLM objective over both text and linearized table data. Wang et al. (2020a) enhance RoBERTa with both linguistic and factual knowledge with task-specific adapters.\n\nPre-training is the most expensive part of training BERT, and it would be informative to know how much benefit it provides. On some tasks, a randomly initialized and fine-tuned BERT obtains competitive or higher results than the pre-trained BERT with the task classifier and frozen weights (Kovaleva et al., 2019). 
The consensus in the community is that pre-training does help in most situations, but the degree and its exact contribution requires further investigation. Prasanna et al. (2020) found that most weights of pre-trained BERT are useful in fine-tuning, although there are \"better\" and \"worse\" subnetworks. One explanation is that pre-trained weights help the fine-tuned BERT find wider and flatter areas with smaller generalization error, which makes the model more robust to overfitting (see Figure 5 from Hao et al. (2019)).\n\nGiven the large number and variety of proposed modifications, one would wish to know how much impact each of them has. However, due to the overall trend towards large model sizes, systematic ablations have become expensive. Most new models claim superiority on standard benchmarks, but gains are often marginal, and estimates of model stability and significance testing are very rare.\n\n## 5.4 Fine-tuning BERT\n\nPre-training + fine-tuning workflow is a crucial part of BERT. The former is supposed to provide task-independent knowledge, and the latter would presumably teach the model to rely more on the representations useful for the task at hand.\n\nKovaleva et al. (2019) did not find that to be the case for BERT fine-tuned on GLUE tasks 5 : during fine-tuning, the most changes for 3 epochs occurred in the last two layers of the models, but those changes caused self-attention to focus on [SEP] rather than on linguistically interpretable patterns. It is understandable why fine-tuning would increase the attention to [CLS] , but not [SEP] . If Clark et al. 
(2019) are correct that [SEP] serves as \"noop\" indicator, fine-tuning basically tells BERT what to ignore.\n\nSeveral studies explored the possibilities of improving the fine-tuning of BERT:\n\n - · Taking more layers into account : learning a complementary representation of the information in deep and output layers (Yang and Zhao, 2019), using a weighted combination of all layers instead of the final one (Su and Cheng, 2019; Kondratyuk and Straka, 2019), and layer dropout (Kondratyuk and Straka, 2019).\n - · Two-stage fine-tuning introduces an intermediate supervised training stage between pre-training and fine-tuning (Phang et al., 2019; Garg et al., 2020; Arase and Tsujii, 2019; Pruksachatkun et al., 2020; Glavaš and Vuli'c, 2020). Ben-David et al. (2020) propose a pivot-based variant of MLM to fine-tune BERT for domain adaptation.\n - · Adversarial token perturbations improve robustness of the model (Zhu et al., 2019).\n - · Adversarial regularization in combination with Bregman Proximal Point Optimization helps alleviate pre-trained knowledge forgetting and therefore prevents BERT from overfitting to downstream tasks (Jiang et al., 2019a).\n - · Mixout regularization improves the stability of BERT fine-tuning even for a small number of training examples (Lee et al., 2019).", - "page_start": 8, - "page_end": 8, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "## A Primer in BERTology: What We Know About How BERT Works\n\n## Anna Rogers\n\nCenter for Social Data Science University of Copenhagen arogers@sodas.ku.dk\n\n## Olga Kovaleva\n\nUniversity of Massachusetts Lowell\n\nDept. of Computer Science okovalev@cs.uml.edu\n\n## Abstract\n\nTransformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. 
We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.\n\n## 1 Introduction\n\nSince their introduction in 2017, Transformers (Vaswani et al., 2017) have taken NLP by storm, offering enhanced parallelization and better modeling of long-range dependencies. The best known Transformer-based model is BERT (Devlin et al., 2019); it obtained state-of-the-art results in numerous benchmarks and is still a must-have baseline.\n\nWhile it is clear that BERT works remarkably well, it is less clear why , which limits further hypothesis-driven improvement of the architecture. Unlike CNNs, the Transformers have little cognitive motivation, and the size of these models limits our ability to experiment with pre-training and perform ablation studies. This explains a large number of studies over the past year that attempted to understand the reasons behind BERT's performance.\n\nIn this paper, we provide an overview of what has been learned to date, highlighting the questions which are still unresolved. We first consider the linguistic aspects of it, i.e., the current evidence regarding the types of linguistic and world knowledge learned by BERT, as well as where and how this knowledge may be stored in the model. We then turn to the technical aspects of the model and provide an overview of the current proposals to\n\n## Anna Rumshisky\n\nDept. of Computer Science University of Massachusetts Lowell\n\narum@cs.uml.edu\n\nimprove BERT's architecture, pre-training and finetuning. 
We conclude by discussing the issue of overparameterization, the approaches to compressing BERT, and the nascent area of pruning as a model analysis technique.\n\n## 2 Overview of BERT architecture\n\nFundamentally, BERT is a stack of Transformer encoder layers (Vaswani et al., 2017) which consist of multiple self-attention \"heads\". For every input token in a sequence, each head computes key, value and query vectors, used to create a weighted representation. The outputs of all heads in the same layer are combined and run through a fully-connected layer. Each layer is wrapped with a skip connection and followed by layer normalization.\n\nThe conventional workflow for BERT consists of two stages: pre-training and fine-tuning. Pretraining uses two self-supervised tasks: masked language modeling (MLM, prediction of randomly masked input tokens) and next sentence prediction (NSP, predicting if two input sentences are adjacent to each other). In fine-tuning for downstream applications, one or more fully-connected layers are typically added on top of the final encoder layer.\n\nThe input representations are computed as follows: each word in the input is first tokenized into wordpieces (Wu et al., 2016), and then three embedding layers (token, position, and segment) are combined to obtain a fixed-length vector. Special token [CLS] is used for classification predictions, and [SEP] separates input segments.\n\nGoogle 1 and HuggingFace (Wolf et al., 2020) provide many variants of BERT, including the original \"base\" and \"large\" versions. They vary in the number of heads, layers, and hidden state size.\n\ngoogle-research/bert", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "layers are more transferable (Liu et al., 2019a). 
In fine-tuning, it explains why the final layers change the most (Kovaleva et al., 2019), and why restoring the weights of lower layers of fine-tuned BERT to their original values does not dramatically hurt the model performance (Hao et al., 2019).\n\nTenney et al. (2019a) suggest that while syntactic information appears early in the model and can be localized, semantics is spread across the entire model , which explains why certain non-trivial examples get solved incorrectly at first but correctly at the later layers. This is rather to be expected: semantics permeates all language, and linguists debate whether meaningless structures can exist at all (Goldberg, 2006, p.166-182). But this raises the question of what stacking more Transformer layers in BERT actually achieves in terms of the spread of semantic knowledge, and whether that is beneficial. Tenney et al. compared BERT-base and BERT-large, and found that the overall pattern of cumulative score gains is the same, only more spread out in the larger model.\n\nNote that Tenney et al. (2019a)'s experiments concern sentence-level semantic relations; Cui et al. (2020) report that the encoding of ConceptNet semantic relations is the worst in the early layers and increases towards the top. Jawahar et al. (2019) place \"surface features in lower layers, syntactic features in middle layers and semantic features in higher layers\", but their conclusion is surprising, given that only one semantic task in this study actually topped at the last layer, and three others peaked around the middle and then considerably degraded by the final layers.\n\n## 5 Training BERT\n\nThis section reviews the proposals to optimize the training and architecture of the original BERT.\n\n## 5.1 Model architecture choices\n\nTo date, the most systematic study of BERT architecture was performed by Wang et al. (2019b), who experimented with the number of layers, heads, and model parameters, varying one option and freezing the others. 
They concluded that the number of heads was not as significant as the number of layers . That is consistent with the findings of Voita et al. (2019b) and Michel et al. (2019) (section 6), and also the observation by Liu et al. (2019a) that the middle layers were the most transferable. Larger hidden representation size was con-\n\nsistently better, but the gains varied by setting.\n\nAll in all, changes in the number of heads and layers appear to perform different functions . The issue of model depth must be related to the information flow from the most task-specific layers closer to the classifier (Liu et al., 2019a), to the initial layers which appear to be the most task-invariant (Hao et al., 2019), and where the tokens resemble the input tokens the most (Brunner et al., 2020) (see subsection 4.3). If that is the case, a deeper model has more capacity to encode information that is not task-specific.\n\nOn the other head, many self-attention heads in vanilla BERT seem to naturally learn the same patterns (Kovaleva et al., 2019). This explains why pruning them does not have too much impact. The question that arises from this is how far we could get with intentionally encouraging diverse self-attention patterns: theoretically, this would mean increasing the amount of information in the model with the same number of weights. Raganato et al. (2020) show for Transformer-based machine translation we can simply pre-set the patterns that we already know the model would learn, instead of learning them from scratch.\n\nVanilla BERT is symmetric and balanced in terms of self-attention and feed-forward layers, but it may not have to be. For the base Transformer, Press et al. 
(2020) report benefits from more selfattention sublayers at the bottom and more feedforward sublayers at the top.\n\n## 5.2 Improvements to the training regime", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "Table 1: Comparison of BERT compression studies. Compression, performance retention, inference time speedup figures are given with respect to BERTbase, unless indicated otherwise. Performance retention is measured as a ratio of average scores achieved by a given model and by BERTbase. The subscript in the model description reflects the number of layers used. ∗ Smaller vocabulary used. † The dimensionality of the hidden layers is reduced. ‖ Convolutional layers used. ‡ Compared to BERTlarge. ∗∗ Compared to mBERT. § As reported in (Jiao et al., 2019). ¶ In comparison to the dev set.", - "page_start": 10, - "page_end": 10, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "## 3.3 World knowledge\n\nThe bulk of evidence about commonsense knowledge captured in BERT comes from practitioners using it to extract such knowledge. One direct probing study of BERT reports that BERT struggles with pragmatic inference and role-based event knowledge (Ettinger, 2019). BERT also struggles with abstract attributes of objects, as well as visual and perceptual properties that are likely to be assumed rather than mentioned (Da and Kasai, 2019).\n\nThe MLM component of BERT is easy to adapt for knowledge induction by filling in the\n\nKG\n\nDante\n\nborn-in\n\nFlorence\n\nFigure 1:\n\n\n\nQuerying knowledge bases (KB) and lan-\n\nguage models (LM) for factual knowledge. Figure 2: BERT world knowledge (Petroni et al., 2019)\n\nvast amounts of linguistic knowledge (Peters et al., 2018b; Goldberg, 2019; Tenney et al., 2019) useful for downstream tasks. 
This knowledge is usually accessed either by conditioning on latent context representations produced by the original model or by using the original model weights to initialize a task-specific model which is then further fine-tuned. This type of knowledge transfer is crucial for current state-of-the-art results on a wide range of tasks. In contrast, knowledge bases are e ective soblanks (e.g. \"Cats like to chase [\\_\\_\\_]\"). Petroni et al. (2019) showed that, for some relation types, vanilla BERT is competitive with methods relying on knowledge bases (Figure 2), and Roberts et al. (2020) show the same for open-domain QA using T5 model (Raffel et al., 2019). Davison et al. (2019) suggest that it generalizes better to unseen data. In order to retrieve BERT's knowledge, we need good template sentences, and there is work on their automatic extraction and augmentation (Bouraoui et al., 2019; Jiang et al., 2019b).\n\nff lutions for accessing annotated gold-standard relational data by enabling queries such as (D ante , born-in , X ). However, in practice we often need to extract relational data from text or other modalities to populate these knowledge bases. This requires complex NLP pipelines involving entity extraction, coreference resolution, entity linking and relation extraction (Surdeanu and Ji, 2014)components that often need supervised data and fixed schemas. Moreover, errors can easily propagate and accumulate throughout the pipeline. Instead, we could attempt to query neural language models for relational data by asking them to fill in masked tokens in sequences like 'Dante was born However, BERT cannot reason based on its world knowledge . Forbes et al. (2019) show that BERTcan \"guess\" the affordances and properties of many objects, but can not reason about the relationship between properties and affordances. For example, it 'knows\" that people can walk into houses, and that houses are big, but it cannot infer that houses are bigger than people. 
Zhou et al. (2020) and Richardson and Sabharwal (2019) also show that the performance drops with the number of necessary inference steps. Some of BERT's world knowledge success comes from learning stereotypical associations (Poerner et al., 2019), e.g., a person with an Italian-sounding name is predicted to be Italian, even when it is incorrect.\n\n## 3.4 Limitations\n\nMultiple probing studies in section 3 and section 4 report that BERT possesses a surprising amount of syntactic, semantic, and world knowledge. However, Tenney et al. (2019a) remarks, 'the fact that a linguistic pattern is not observed by our probing classifier does not guarantee that it is not there, and the observation of a pattern does not tell us how it is used.\" There is also the issue of how complex a probe should be allowed to be (Liu et al., 2019a). If a more complex probe recovers more information, to what extent are we still relying on the original model?\n\nFurthermore, different probing methods may lead to complementary or even contradictory conclusions, which makes a single test (as in most stud-\n\n(\n\nDante\n\n,\n\nborn-in\n\n,\n\nX\n\n)\n\nSymbolic\n\nMemory Access\n\nFlorence", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "be successfully approximated with adapter modules. They achieve competitive performance on 26 classification tasks at a fraction of the computational cost. Adapters in BERT were also used for multi-task learning (Stickland and Murray, 2019) and cross-lingual transfer (Artetxe et al., 2019). An alternative to fine-tuning is extracting features from frozen representations, but fine-tuning works better for BERT (Peters et al., 2019b).\n\nA big methodological challenge in the current NLP is that the reported performance improvements of new models may well be within variation induced by environment factors (Crane, 2018). BERT is not an exception. Dodge et al. 
(2020) report significant variation for BERT fine-tuned on GLUE tasks due to both weight initialization and training data order. They also propose early stopping on the less-promising seeds.\n\nAlthough we hope that the above observations may be useful for the practitioners, this section does not exhaust the current research on fine-tuning and its alternatives. For example, we do not cover such topics as Siamese architectures, policy gradient training, automated curriculum learning, and others.\n\n## 6 How big should BERT be?\n\n## 6.1 Overparameterization\n\nTransformer-based models keep growing by orders of magnitude: the 110M parameters of base BERT are now dwarfed by 17B parameters of Turing-NLG (Microsoft, 2020), which is dwarfed by 175B of GPT-3 (Brown et al., 2020). This trend raises concerns about computational complexity of self-attention (Wu et al., 2019a), environmental issues (Strubell et al., 2019; Schwartz et al., 2019), fair comparison of architectures (Aßenmacher and Heumann, 2020), and reproducibility.\n\nHuman language is incredibly complex, and would perhaps take many more parameters to describe fully, but the current models do not make good use of the parameters they already have. Voita et al. (2019b) showed that all but a few Transformer heads could be pruned without significant losses in performance . For BERT, Clark et al. (2019) observe that most heads in the same layer show similar self-attention patterns (perhaps related to the fact that the output of all self-attention heads in a layer is passed through the same MLP), which explains why Michel et al. (2019) were able to reduce most layers to a single head.\n\nDepending on the task, some BERT heads/layers are not only redundant (Kao et al., 2020), but also harmful to the downstream task performance. Positive effect from head disabling was reported for machine translation (Michel et al., 2019), abstractive summarization (Baan et al., 2019), and GLUE tasks (Kovaleva et al., 2019). 
Additionally, Tenney et al. (2019a) examine the cumulative gains of their structural probing classifier, observing that in 5 out of 8 probing tasks some layers cause a drop in scores (typically in the final layers). Gordon et al. (2020) find that 30-40% of the weights can be pruned without impact on downstream tasks.\n\nIn general, larger BERT models perform better (Liu et al., 2019a; Roberts et al., 2020), but not always: BERT-base outperformed BERT-large on subject-verb agreement (Goldberg, 2019) and sentence subject detection (Lin et al., 2019). Given the complexity of language, and amounts of pretraining data, it is not clear why BERT ends up with redundant heads and layers. Clark et al. (2019) suggest that one possible reason is the use of attention dropouts, which causes some attention weights to be zeroed-out during training.\n\n## 6.2 Compression techniques\n\nGiven the above evidence of overparameterization, it does not come as a surprise that BERT can be efficiently compressed with minimal accuracy loss , which would be highly desirable for real-world applications. Such efforts to date are summarized in Table 1. The main approaches are knowledge distillation, quantization, and pruning.", - "page_start": 9, - "page_end": 9, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "Figure 3: Attention patterns in BERT (Kovaleva et al., 2019)\n\n\n\nies) insufficient (Warstadt et al., 2019). A given method might also favor one model over another, e.g., RoBERTa trails BERT with one tree extraction method, but leads with another (Htut et al., 2019). The choice of linguistic formalism also matters (Kuznetsov and Gurevych, 2020).\n\nIn view of all that, the alternative is to focus on identifying what BERT actually relies on at inference time. This direction is currently pursued both at the level of architecture blocks (to be discussed in detail in subsection 6.3), and at the level of information encoded in model weights. 
Amnesic probing (Elazar et al., 2020) aims to specifically remove certain information from the model and see how it changes performance, finding, for example, that language modeling does rely on part-of-speech information.\n\nAnother direction is information-theoretic probing. Pimentel et al. (2020) operationalize probing as estimating mutual information between the learned representation and a given linguistic property, which highlights that the focus should be not on the amount of information contained in a representation, but rather on how easily it can be extracted from it. Voita and Titov (2020) quantify the amount of effort needed to extract information from a given representation as minimum description length needed to communicate both the probe size and the amount of data required for it to do well on a task.\n\n## 4 Localizing linguistic knowledge\n\n## 4.1 BERT embeddings\n\nIn studies of BERT, the term \"embedding\" refers to the output of a Transformer layer (typically, the final one). Both conventional static embeddings (Mikolov et al., 2013) and BERT-style embeddings can be viewed in terms of mutual information maximization (Kong et al., 2019), but the latter are contextualized . Every token is represented by a vector dependent on the particular context of occurrence, and contains at least some information about that context (Miaschi and Dell'Orletta, 2020).\n\nSeveral studies reported that distilled contextualized embeddings better encode lexical semantic information (i.e. they are better at traditional word-level tasks such as word similarity). The methods to distill a contextualized representation into static include aggregating the information across multiple contexts (Akbik et al., 2019; Bommasani et al., 2020), encoding \"semantically bleached\" sentences that rely almost exclusively on the meaning of a given word (e.g. 
\"This is <>\") (May et al., 2019), and even using contextualized embeddings to train static embeddings (Wang et al., 2020d).\n\nBut this is not to say that there is no room for improvement. Ethayarajh (2019) measure how similar the embeddings for identical words are in every layer, reporting that later BERT layers produce more context-specific representations 3 . They also find that BERT embeddings occupy a narrow cone in the vector space, and this effect increases from the earlier to later layers. That is, two random words will on average have a much higher cosine similarity than expected if embeddings were directionally uniform (isotropic) . Since isotropy was shown to be beneficial for static word embeddings (Mu and Viswanath, 2018), this might be a fruitful direction to explore for BERT.\n\nSince BERT embeddings are contextualized, an interesting question is to what extent they capture phenomena like polysemy and homonymy. There is indeed evidence that BERT's contextualized embeddings form distinct clusters corresponding to word senses (Wiedemann et al., 2019; Schmidt and Hofmann, 2020), making BERT successful at word sense disambiguation task. However, Mickus et al. (2019) note that the representations of the same word depend on the position of the sentence in which it occurs , likely due to the NSP objective. This is not desirable from the linguistic point of view, and could be a promising", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "We now outline the routing methods considered in this work. See Ong et al. [47] for their full implementation details.\n\nSimilarity-weighted ranking: The first method is based on the Bradley-Terry (BT) model [17]. For a given user query, this model derives a function to compute the probability of the weak model being preferred over the strong model. 
The probability-function expressions all share parameters, which are optimized to minimize the sum of cross-entropy losses over the training-set queries, where each element in the sum is weighted by the respective query's similarity with the user's query (computed as embeddings cosine similarity, with the embedding derived using OpenAI's text-embedding-3small [6]). We denote this method as R SW .\n\nMatrix factorization: The second method is based on matrix factorization. The training queries are used to train a bilinear function mapping a model's embedding and a query's embedding to a score corresponding to how well the model performs on the query. Routing is done by computing the score of the input query for each model, and choosing the highest-scoring model. We denote this method as R MF .\n\nBERT classifier: The third method involves fine-tuning a classifier, based on the BERT-base architecture [26], to predict which of the two models produces a better response for the given query or whether they do equally well (a tie). The routing decision is based on the probability of the weak model providing a better response versus the strong model or the tie. We denote this method as R CLS .\n\nLLM classifier: The last method is based on asking an LLM to provide a score in the range 1 -5 of how an AI expert would struggle to respond to a given query based on the query's complexity. For this, Ong et al. fine-tuned a Llama-3-8B model [4] using their reference set of queries and corresponding scores. We denote this method as R LLM .\n\nUnderlying LLMs. In [47], Ong et al. trained the routers with GPT-4-1106-preview [14] as the strong model and Mixtral 8x7B [39] as the weak model. 
They report successful generalization between the underlying LLMs, stating that their routers trained for a particular strong-weak LLM pair can be used with other strong-weak LLM pairs.\n\nTo allow our evaluation to scale, we use as the strong model M s the open-sourced Llama-3.1-8B [3] and as M w the 4-bit quantized version of Mixtral 8x7B (for efficiency reasons). This reduced the cost of our experiments by avoiding expensive GPT API calls and lowering the computational costs of Mixtral. Unless mentioned otherwise, all of our results", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv1.pdf" - }, - { - "text": "tion and, in practical applications, the underlying storage and compute costs. We selected models with embedding dimensions ranging from 384 to 4096.\n\n - · Sequence length: Being the number of tokens that a model can consider as input, the sequence length is important as it impacts the unit that can be encoded (sentence, paragraph, document). However, encoding overly long sequences requires efficiently storing the relevant information into a single vector. Among the selected methods, this criterion varies from 128 tokens to 32768.\n - · Model parameters: Often correlated with the two first characteristics, parameter count is important for practical applications as it affects usability on resource-efficient machines. The selected models have a number of parameters ranging from 20 million ( ∼ 100Mb in float32) to 7 billion ( ∼ 28Gb).\n - · Language: This is a major feature of language models. Some are monolingual, and others are multilingual. Language is usually acquired during pre-training, but sometimes, models familiarize themselves with new languages at tuning. For the benchmark, we selected French models, as well as bilingual or multilingual models. We also included a few ones that claimed to be English (e.g. allMiniLM-L12-v2 9 ).\n - · Model types: There are several strategies to generate text embeddings such as aggregating (e.g. 
with average pooling) token-level embeddings from raw pre-trained models, or adding an extra contrastive learning step on a sentence similarity task with, optionally, additional transformation layers. We included models of all types in our benchmark, summarizing the model type information under two relevant criteria: finetuned vs pretrained, and trained for sentence similarity or not.\n\nThe selected models are visible in Figure 1, and all of their characteristics are summarized in appendix Table 7. Overall, the selection includes the best models from the sentence transformers framework (Reimers and Gurevych, 2019), the most popular French NLP models (Le et al., 2020; Martin\n\net al., 2019), their variants optimized for semantic similarity (Reimers and Gurevych, 2019), numerous multilingual models performing at the top on MTEB (e.g E5 and T5 ), Bloom variants (Zhang et al., 2023), models based on very recent powerful LLMs (Wang et al., 2023; Faysse et al., 2024) and finally the proprietary models of OpenAI, Cohere and Voyage. Certain models were selected in multiple sizes to isolate the dimensionality effect effectively. We provide information on the models' licenses as reported in the Hugging Face hub 10 . However, we encourage readers to conduct further research before utilizing a model.\n\n## 3.3 Evaluation\n\nFor the sake of homogeneity, models are evaluated using the same metrics per task as in MTEB (Muennighoff et al., 2022): Classification (Accuracy), Bitext mining (F1 score), Pair classification (AP), Clustering (V measure), Reranking (MAP), Retrieval (NDCG@10), Summarization and STS (Spearman correlation based on cosine similarity). BitextMining tasks are excluded from the average performance scores and therefore the figures, as this task evaluates 2 languages instead of one, and this benchmark focuses only on one language (French). 
We present the results for both DiaBlaBitextMining and FloresBitextMining in Table 12.\n\nUsing the overall benchmark results, our goal will be to answer the following research questions: Q1: Is a model outstanding on all tasks?\n\nAs we are trying to find out whether one embedding model is statistically better than the others for French, the objective will also be to analyze the performance of the models by tasks to facilitate model choice for specific applications.\n\nQ2: Are there any links between the model characteristics and performance?", - "page_start": 4, - "page_end": 4, - "source_file": "arxiv4.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv2_taclccby4_license.pdf", - "query": "Is BERT good with numbers representations ?", - "target_page": 3, - "target_passage": " BERTstruggles with representations of numbers. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "0\n\n2\n\np\n\ne\n\nS\n\n4\n\n]\n\nL\n\nC\n\n.\n\ns\n\nc\n\n[\n\n2\n\nv\n\n6\n\n6\n\n0\n\n1\n\n0\n\n.\n\n9\n\n0\n\n9\n\n1\n\n:\n\nv\n\ni\n\nX\n\nr\n\na\n\nRecent progress in pretraining language mod-\n\nels on large textual corpora led to a surge\n\nof improvements for downstream NLP tasks.\n\nWhilst learning linguistic knowledge, these\n\nreport that an intermediate fine-tuning step with supervised parsing does not make much difference for downstream task performance. models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as 'fillin-the-blank' cloze statements. Language\n\nmodels have many advantages over structured\n\n## 3.2 Semantic knowledge knowledge bases: they require no schema en-\n\nTo date, more studies have been devoted to BERT's knowledge of syntactic rather than semantic phenomena. However, we do have evidence from an MLMprobing study that BERT has some knowledge of semantic roles (Ettinger, 2019). 
BERT even displays some preference for the incorrect fillers for semantic roles that are semantically related to the correct ones, as opposed to those that are unrelated (e.g. \"to tip a chef\" is better than \"to tip a robin\", but worse than \"to tip a waiter\"). gineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-theart pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answer-\n\nBERTstruggles with representations of numbers. Addition and number decoding tasks showed that BERT does not form good representations for floating point numbers and fails to generalize away from the training data (Wallace et al., 2019b). A part of the problem is BERT's wordpiece tokenization, since numbers of similar values can be divided up into substantially different word chunks. call factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at https: //github.com/facebookresearch/LAMA . 1 Introduction Recently, pretrained high-capacity language models such as ELMo (Peters et al., 2018a) and BERT\n\nTenney et al. (2019b) showed that BERT encodes information about entity types, relations, semantic roles, and proto-roles , since this information can be detected with probing classifiers. ing against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. 
The surprisingly strong ability of these models to re-\n\nOut-of-the-box BERTis surprisingly brittle to named entity replacements : e.g. replacing names in the coreference task changes 85% of predictions (Balasubramanian et al., 2020). This suggests that the model does not actually form a generic idea of named entities, although its F1 scores on NER probing tasks are high (Tenney et al., 2019a). Broscheit (2019) find that fine-tuning BERT on Wikipedia entity linking \"teaches\" it additional entity knowledge, which would suggest that it did not absorb all the relevant entity information during pre-training on Wikipedia. (Devlin et al., 2018a) have become increasingly important in NLP. They are optimised to either predict the next word in a sequence or some masked word anywhere in a given sequence ( e.g. 'Dante was born in [M ask ] in the year 1265.'). The parameters of these models appear to store\n\n## 3.3 World knowledge", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "## A Primer in BERTology: What We Know About How BERT Works\n\n## Anna Rogers\n\nCenter for Social Data Science University of Copenhagen arogers@sodas.ku.dk\n\n## Olga Kovaleva\n\nUniversity of Massachusetts Lowell\n\nDept. of Computer Science okovalev@cs.uml.edu\n\n## Abstract\n\nTransformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. 
We then outline directions for future research.\n\n## 1 Introduction\n\nSince their introduction in 2017, Transformers (Vaswani et al., 2017) have taken NLP by storm, offering enhanced parallelization and better modeling of long-range dependencies. The best known Transformer-based model is BERT (Devlin et al., 2019); it obtained state-of-the-art results in numerous benchmarks and is still a must-have baseline.\n\nWhile it is clear that BERT works remarkably well, it is less clear why , which limits further hypothesis-driven improvement of the architecture. Unlike CNNs, the Transformers have little cognitive motivation, and the size of these models limits our ability to experiment with pre-training and perform ablation studies. This explains a large number of studies over the past year that attempted to understand the reasons behind BERT's performance.\n\nIn this paper, we provide an overview of what has been learned to date, highlighting the questions which are still unresolved. We first consider the linguistic aspects of it, i.e., the current evidence regarding the types of linguistic and world knowledge learned by BERT, as well as where and how this knowledge may be stored in the model. We then turn to the technical aspects of the model and provide an overview of the current proposals to\n\n## Anna Rumshisky\n\nDept. of Computer Science University of Massachusetts Lowell\n\narum@cs.uml.edu\n\nimprove BERT's architecture, pre-training and finetuning. We conclude by discussing the issue of overparameterization, the approaches to compressing BERT, and the nascent area of pruning as a model analysis technique.\n\n## 2 Overview of BERT architecture\n\nFundamentally, BERT is a stack of Transformer encoder layers (Vaswani et al., 2017) which consist of multiple self-attention \"heads\". For every input token in a sequence, each head computes key, value and query vectors, used to create a weighted representation. 
The outputs of all heads in the same layer are combined and run through a fully-connected layer. Each layer is wrapped with a skip connection and followed by layer normalization.\n\nThe conventional workflow for BERT consists of two stages: pre-training and fine-tuning. Pretraining uses two self-supervised tasks: masked language modeling (MLM, prediction of randomly masked input tokens) and next sentence prediction (NSP, predicting if two input sentences are adjacent to each other). In fine-tuning for downstream applications, one or more fully-connected layers are typically added on top of the final encoder layer.\n\nThe input representations are computed as follows: each word in the input is first tokenized into wordpieces (Wu et al., 2016), and then three embedding layers (token, position, and segment) are combined to obtain a fixed-length vector. Special token [CLS] is used for classification predictions, and [SEP] separates input segments.\n\nGoogle 1 and HuggingFace (Wolf et al., 2020) provide many variants of BERT, including the original \"base\" and \"large\" versions. They vary in the number of heads, layers, and hidden state size.\n\ngoogle-research/bert", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "\n\nFigure 5: Pre-trained weights help BERT find wider optima in fine-tuning on MRPC (right) than training from scratch (left) (Hao et al., 2019)\n\n\n\nbeddings as input for training BERT, while Poerner et al. (2019) adapt entity vectors to BERT representations. As mentioned above, Wang et al. (2020c) integrate knowledge not through entity embeddings, but through additional pre-training objective of knowledge base completion. Sun et al. (2019b,c) modify the standard MLM task to mask named entities rather than random words, and Yin et al. (2020) train with MLM objective over both text and linearized table data. Wang et al. 
(2020a) enhance RoBERTa with both linguistic and factual knowledge with task-specific adapters.\n\nPre-training is the most expensive part of training BERT, and it would be informative to know how much benefit it provides. On some tasks, a randomly initialized and fine-tuned BERT obtains competitive or higher results than the pre-trained BERT with the task classifier and frozen weights (Kovaleva et al., 2019). The consensus in the community is that pre-training does help in most situations, but the degree and its exact contribution requires further investigation. Prasanna et al. (2020) found that most weights of pre-trained BERT are useful in fine-tuning, although there are \"better\" and \"worse\" subnetworks. One explanation is that pre-trained weights help the fine-tuned BERT find wider and flatter areas with smaller generalization error, which makes the model more robust to overfitting (see Figure 5 from Hao et al. (2019)).\n\nGiven the large number and variety of proposed modifications, one would wish to know how much impact each of them has. However, due to the overall trend towards large model sizes, systematic ablations have become expensive. Most new models claim superiority on standard benchmarks, but gains are often marginal, and estimates of model stability and significance testing are very rare.\n\n## 5.4 Fine-tuning BERT\n\nPre-training + fine-tuning workflow is a crucial part of BERT. The former is supposed to provide task-independent knowledge, and the latter would presumably teach the model to rely more on the representations useful for the task at hand.\n\nKovaleva et al. (2019) did not find that to be the case for BERT fine-tuned on GLUE tasks 5 : during fine-tuning, the most changes for 3 epochs occurred in the last two layers of the models, but those changes caused self-attention to focus on [SEP] rather than on linguistically interpretable patterns. It is understandable why fine-tuning would increase the attention to [CLS] , but not [SEP] . 
If Clark et al. (2019) are correct that [SEP] serves as \"noop\" indicator, fine-tuning basically tells BERT what to ignore.\n\nSeveral studies explored the possibilities of improving the fine-tuning of BERT:\n\n - · Taking more layers into account : learning a complementary representation of the information in deep and output layers (Yang and Zhao, 2019), using a weighted combination of all layers instead of the final one (Su and Cheng, 2019; Kondratyuk and Straka, 2019), and layer dropout (Kondratyuk and Straka, 2019).\n - · Two-stage fine-tuning introduces an intermediate supervised training stage between pre-training and fine-tuning (Phang et al., 2019; Garg et al., 2020; Arase and Tsujii, 2019; Pruksachatkun et al., 2020; Glavaš and Vuli'c, 2020). Ben-David et al. (2020) propose a pivot-based variant of MLM to fine-tune BERT for domain adaptation.\n - · Adversarial token perturbations improve robustness of the model (Zhu et al., 2019).\n - · Adversarial regularization in combination with Bregman Proximal Point Optimization helps alleviate pre-trained knowledge forgetting and therefore prevents BERT from overfitting to downstream tasks (Jiang et al., 2019a).\n - · Mixout regularization improves the stability of BERT fine-tuning even for a small number of training examples (Lee et al., 2019).", - "page_start": 8, - "page_end": 8, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "| | Compression Performance Speedup | | | Model | Evaluation |\n|----------------------------------------------------------------------------|-----------------------------------|-------------|-------------|-------------------|---------------------------------------------------|\n| BERT-base (Devlin et al., 2019) | × 1 | 100% | × 1 | BERT12 | All GLUE tasks, SQuAD |\n| BERT-small | × 3.8 | 91% | - | BERT4 † | All GLUE tasks |\n| DistilBERT (Sanh et al., 2019a) BERT6-PKD (Sun et al., 2019a) | × 1.5 × 1.6 | 90% § 98% | × 1.6 × 1.9 | BERT6 BERT6 | All GLUE tasks, SQuAD No 
WNLI, CoLA, STS-B; RACE |\n| BERT3-PKD (Sun et al., 2019a) | × 2.4 | 92% | × 3.7 | BERT3 | No WNLI, CoLA, STS-B; RACE |\n| Aguilar et al. (2019), Exp. 3 | × 1.6 | 93% | - | BERT6 | CoLA, MRPC, QQP, RTE |\n| | | 87% | | | |\n| BERT-48 (Zhao et al., 2019) | × 62 | | × 77 | BERT12 ∗† | MNLI, MRPC, SST-2 |\n| BERT-192 (Zhao et al., 2019) | × 5.7 | 93% | × 22 | BERT12 ∗† | MNLI, MRPC, SST-2 |\n| Distillation TinyBERT (Jiao et al., 2019) | × 7.5 | 96% | × 9.4 | BERT4 † | No WNLI; SQuAD |\n| MobileBERT (Sun et al., 2020) | × 4.3 | 100% | × 4 ‡ | BERT24 † † | No WNLI; SQuAD No WNLI, CoLA and STS-B |\n| PD (Turc et al., 2019) | × 1.6 | 98% 93% | × 2.5 × 9 | BERT6 BERT8 †‖ | SQuAD |\n| WaLDORf (Tian et al., 2019) MiniLM (Wang et al., 2020b) | × 4.4 × 1.65 | 99% | × 2 | | |\n| | ∗∗ | | × | BERT6 | No WNLI, STS-B, MNLImm; SQuAD |\n| MiniBERT(Tsai et al., 2019) | × 6 | 98% | 27 ∗∗ | mBERT3 † | CoNLL-18 POS and morphology |\n| BiLSTM-soft (Tang et al., 2019) | × 110 × | 91% ¶ | × 434 ‡ - | | BiLSTM1 MNLI, QQP, SST-2 |\n| Quanti-zation Q-BERT-MP (Shen et al., 2019) BERT-QAT (Zafrir et al., 2019) | 13 × 4 | 98% 99% | - | BERT12 BERT12 | MNLI, SST-2, CoNLL-03, SQuAD No WNLI, MNLI; SQuAD |", - "page_start": 10, - "page_end": 10, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "| GOBO(Zadeh and Moshovos, 2020) | × 9 . 8 | 99% | - | BERT12 | MNLI |\n| McCarley et al. 
(2020), ff2 RPP (Guo et al., 2019) | × 2.2 ‡ × 1.7 ‡ | 98% ‡ 99% ‡ | × 1.9 ‡ - | BERT24 | SQuAD, Natural Questions |\n| Pruning Soft MvP (Sanh et al., 2020) | × 33 | 94% ¶ | - | BERT24 | No WNLI, STS-B; SQuAD |\n| | × | 94-100% | | BERT12 | MNLI, QQP, SQuAD |\n| IMP (Chen et al., 2020), rewind 50% | 1.4-2.5 | | - | BERT12 | No MNLI-mm; SQuAD |\n| ALBERT-base (Lan et al., 2020b) ALBERT-xxlarge (Lan et al., 2020b) | × 9 × 0.47 | 97% 107% | - - | BERT12 † BERT12 † | MNLI, SST-2 MNLI, SST-2 |\n| Other BERT-of-Theseus (Xu et al., 2020) | × 1.6 | 98% | × 1.9 | BERT6 | No WNLI |\n| PoWER-BERT (Goyal et al., 2020) | | 99% | × 2-4.5 | BERT12 | No WNLI; RACE |\n| | N/A | | | | |", - "page_start": 10, - "page_end": 10, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "Table 1: Comparison of BERT compression studies. Compression, performance retention, inference time speedup figures are given with respect to BERTbase, unless indicated otherwise. Performance retention is measured as a ratio of average scores achieved by a given model and by BERTbase. The subscript in the model description reflects the number of layers used. ∗ Smaller vocabulary used. † The dimensionality of the hidden layers is reduced. ‖ Convolutional layers used. ‡ Compared to BERTlarge. ∗∗ Compared to mBERT. § As reported in (Jiao et al., 2019). ¶ In comparison to the dev set.", - "page_start": 10, - "page_end": 10, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "## 3.3 World knowledge\n\nThe bulk of evidence about commonsense knowledge captured in BERT comes from practitioners using it to extract such knowledge. One direct probing study of BERT reports that BERT struggles with pragmatic inference and role-based event knowledge (Ettinger, 2019). 
BERT also struggles with abstract attributes of objects, as well as visual and perceptual properties that are likely to be assumed rather than mentioned (Da and Kasai, 2019).\n\nThe MLM component of BERT is easy to adapt for knowledge induction by filling in the\n\nKG\n\nDante\n\nborn-in\n\nFlorence\n\nFigure 1:\n\n\n\nQuerying knowledge bases (KB) and lan-\n\nguage models (LM) for factual knowledge. Figure 2: BERT world knowledge (Petroni et al., 2019)\n\nvast amounts of linguistic knowledge (Peters et al., 2018b; Goldberg, 2019; Tenney et al., 2019) useful for downstream tasks. This knowledge is usually accessed either by conditioning on latent context representations produced by the original model or by using the original model weights to initialize a task-specific model which is then further fine-tuned. This type of knowledge transfer is crucial for current state-of-the-art results on a wide range of tasks. In contrast, knowledge bases are e ective soblanks (e.g. \"Cats like to chase [\\_\\_\\_]\"). Petroni et al. (2019) showed that, for some relation types, vanilla BERT is competitive with methods relying on knowledge bases (Figure 2), and Roberts et al. (2020) show the same for open-domain QA using T5 model (Raffel et al., 2019). Davison et al. (2019) suggest that it generalizes better to unseen data. In order to retrieve BERT's knowledge, we need good template sentences, and there is work on their automatic extraction and augmentation (Bouraoui et al., 2019; Jiang et al., 2019b).\n\nff lutions for accessing annotated gold-standard relational data by enabling queries such as (D ante , born-in , X ). However, in practice we often need to extract relational data from text or other modalities to populate these knowledge bases. This requires complex NLP pipelines involving entity extraction, coreference resolution, entity linking and relation extraction (Surdeanu and Ji, 2014)components that often need supervised data and fixed schemas. 
Moreover, errors can easily propagate and accumulate throughout the pipeline. Instead, we could attempt to query neural language models for relational data by asking them to fill in masked tokens in sequences like 'Dante was born However, BERT cannot reason based on its world knowledge . Forbes et al. (2019) show that BERTcan \"guess\" the affordances and properties of many objects, but can not reason about the relationship between properties and affordances. For example, it 'knows\" that people can walk into houses, and that houses are big, but it cannot infer that houses are bigger than people. Zhou et al. (2020) and Richardson and Sabharwal (2019) also show that the performance drops with the number of necessary inference steps. Some of BERT's world knowledge success comes from learning stereotypical associations (Poerner et al., 2019), e.g., a person with an Italian-sounding name is predicted to be Italian, even when it is incorrect.\n\n## 3.4 Limitations\n\nMultiple probing studies in section 3 and section 4 report that BERT possesses a surprising amount of syntactic, semantic, and world knowledge. However, Tenney et al. (2019a) remarks, 'the fact that a linguistic pattern is not observed by our probing classifier does not guarantee that it is not there, and the observation of a pattern does not tell us how it is used.\" There is also the issue of how complex a probe should be allowed to be (Liu et al., 2019a). If a more complex probe recovers more information, to what extent are we still relying on the original model?\n\nFurthermore, different probing methods may lead to complementary or even contradictory conclusions, which makes a single test (as in most stud-\n\n(\n\nDante\n\n,\n\nborn-in\n\n,\n\nX\n\n)\n\nSymbolic\n\nMemory Access\n\nFlorence", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "layers are more transferable (Liu et al., 2019a). 
In fine-tuning, it explains why the final layers change the most (Kovaleva et al., 2019), and why restoring the weights of lower layers of fine-tuned BERT to their original values does not dramatically hurt the model performance (Hao et al., 2019).\n\nTenney et al. (2019a) suggest that while syntactic information appears early in the model and can be localized, semantics is spread across the entire model , which explains why certain non-trivial examples get solved incorrectly at first but correctly at the later layers. This is rather to be expected: semantics permeates all language, and linguists debate whether meaningless structures can exist at all (Goldberg, 2006, p.166-182). But this raises the question of what stacking more Transformer layers in BERT actually achieves in terms of the spread of semantic knowledge, and whether that is beneficial. Tenney et al. compared BERT-base and BERT-large, and found that the overall pattern of cumulative score gains is the same, only more spread out in the larger model.\n\nNote that Tenney et al. (2019a)'s experiments concern sentence-level semantic relations; Cui et al. (2020) report that the encoding of ConceptNet semantic relations is the worst in the early layers and increases towards the top. Jawahar et al. (2019) place \"surface features in lower layers, syntactic features in middle layers and semantic features in higher layers\", but their conclusion is surprising, given that only one semantic task in this study actually topped at the last layer, and three others peaked around the middle and then considerably degraded by the final layers.\n\n## 5 Training BERT\n\nThis section reviews the proposals to optimize the training and architecture of the original BERT.\n\n## 5.1 Model architecture choices\n\nTo date, the most systematic study of BERT architecture was performed by Wang et al. (2019b), who experimented with the number of layers, heads, and model parameters, varying one option and freezing the others. 
They concluded that the number of heads was not as significant as the number of layers . That is consistent with the findings of Voita et al. (2019b) and Michel et al. (2019) (section 6), and also the observation by Liu et al. (2019a) that the middle layers were the most transferable. Larger hidden representation size was con-\n\nsistently better, but the gains varied by setting.\n\nAll in all, changes in the number of heads and layers appear to perform different functions . The issue of model depth must be related to the information flow from the most task-specific layers closer to the classifier (Liu et al., 2019a), to the initial layers which appear to be the most task-invariant (Hao et al., 2019), and where the tokens resemble the input tokens the most (Brunner et al., 2020) (see subsection 4.3). If that is the case, a deeper model has more capacity to encode information that is not task-specific.\n\nOn the other head, many self-attention heads in vanilla BERT seem to naturally learn the same patterns (Kovaleva et al., 2019). This explains why pruning them does not have too much impact. The question that arises from this is how far we could get with intentionally encouraging diverse self-attention patterns: theoretically, this would mean increasing the amount of information in the model with the same number of weights. Raganato et al. (2020) show for Transformer-based machine translation we can simply pre-set the patterns that we already know the model would learn, instead of learning them from scratch.\n\nVanilla BERT is symmetric and balanced in terms of self-attention and feed-forward layers, but it may not have to be. For the base Transformer, Press et al. 
(2020) report benefits from more selfattention sublayers at the bottom and more feedforward sublayers at the top.\n\n## 5.2 Improvements to the training regime", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "be successfully approximated with adapter modules. They achieve competitive performance on 26 classification tasks at a fraction of the computational cost. Adapters in BERT were also used for multi-task learning (Stickland and Murray, 2019) and cross-lingual transfer (Artetxe et al., 2019). An alternative to fine-tuning is extracting features from frozen representations, but fine-tuning works better for BERT (Peters et al., 2019b).\n\nA big methodological challenge in the current NLP is that the reported performance improvements of new models may well be within variation induced by environment factors (Crane, 2018). BERT is not an exception. Dodge et al. (2020) report significant variation for BERT fine-tuned on GLUE tasks due to both weight initialization and training data order. They also propose early stopping on the less-promising seeds.\n\nAlthough we hope that the above observations may be useful for the practitioners, this section does not exhaust the current research on fine-tuning and its alternatives. For example, we do not cover such topics as Siamese architectures, policy gradient training, automated curriculum learning, and others.\n\n## 6 How big should BERT be?\n\n## 6.1 Overparameterization\n\nTransformer-based models keep growing by orders of magnitude: the 110M parameters of base BERT are now dwarfed by 17B parameters of Turing-NLG (Microsoft, 2020), which is dwarfed by 175B of GPT-3 (Brown et al., 2020). 
This trend raises concerns about computational complexity of self-attention (Wu et al., 2019a), environmental issues (Strubell et al., 2019; Schwartz et al., 2019), fair comparison of architectures (Aßenmacher and Heumann, 2020), and reproducibility.\n\nHuman language is incredibly complex, and would perhaps take many more parameters to describe fully, but the current models do not make good use of the parameters they already have. Voita et al. (2019b) showed that all but a few Transformer heads could be pruned without significant losses in performance . For BERT, Clark et al. (2019) observe that most heads in the same layer show similar self-attention patterns (perhaps related to the fact that the output of all self-attention heads in a layer is passed through the same MLP), which explains why Michel et al. (2019) were able to reduce most layers to a single head.\n\nDepending on the task, some BERT heads/layers are not only redundant (Kao et al., 2020), but also harmful to the downstream task performance. Positive effect from head disabling was reported for machine translation (Michel et al., 2019), abstractive summarization (Baan et al., 2019), and GLUE tasks (Kovaleva et al., 2019). Additionally, Tenney et al. (2019a) examine the cumulative gains of their structural probing classifier, observing that in 5 out of 8 probing tasks some layers cause a drop in scores (typically in the final layers). Gordon et al. (2020) find that 30-40% of the weights can be pruned without impact on downstream tasks.\n\nIn general, larger BERT models perform better (Liu et al., 2019a; Roberts et al., 2020), but not always: BERT-base outperformed BERT-large on subject-verb agreement (Goldberg, 2019) and sentence subject detection (Lin et al., 2019). Given the complexity of language, and amounts of pretraining data, it is not clear why BERT ends up with redundant heads and layers. Clark et al. 
(2019) suggest that one possible reason is the use of attention dropouts, which causes some attention weights to be zeroed-out during training.\n\n## 6.2 Compression techniques\n\nGiven the above evidence of overparameterization, it does not come as a surprise that BERT can be efficiently compressed with minimal accuracy loss , which would be highly desirable for real-world applications. Such efforts to date are summarized in Table 1. The main approaches are knowledge distillation, quantization, and pruning.", - "page_start": 9, - "page_end": 9, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "Figure 3: Attention patterns in BERT (Kovaleva et al., 2019)\n\n\n\nies) insufficient (Warstadt et al., 2019). A given method might also favor one model over another, e.g., RoBERTa trails BERT with one tree extraction method, but leads with another (Htut et al., 2019). The choice of linguistic formalism also matters (Kuznetsov and Gurevych, 2020).\n\nIn view of all that, the alternative is to focus on identifying what BERT actually relies on at inference time. This direction is currently pursued both at the level of architecture blocks (to be discussed in detail in subsection 6.3), and at the level of information encoded in model weights. Amnesic probing (Elazar et al., 2020) aims to specifically remove certain information from the model and see how it changes performance, finding, for example, that language modeling does rely on part-of-speech information.\n\nAnother direction is information-theoretic probing. Pimentel et al. (2020) operationalize probing as estimating mutual information between the learned representation and a given linguistic property, which highlights that the focus should be not on the amount of information contained in a representation, but rather on how easily it can be extracted from it. 
Voita and Titov (2020) quantify the amount of effort needed to extract information from a given representation as minimum description length needed to communicate both the probe size and the amount of data required for it to do well on a task.\n\n## 4 Localizing linguistic knowledge\n\n## 4.1 BERT embeddings\n\nIn studies of BERT, the term \"embedding\" refers to the output of a Transformer layer (typically, the final one). Both conventional static embeddings (Mikolov et al., 2013) and BERT-style embeddings can be viewed in terms of mutual information maximization (Kong et al., 2019), but the latter are contextualized . Every token is represented by a vector dependent on the particular context of occurrence, and contains at least some information about that context (Miaschi and Dell'Orletta, 2020).\n\nSeveral studies reported that distilled contextualized embeddings better encode lexical semantic information (i.e. they are better at traditional word-level tasks such as word similarity). The methods to distill a contextualized representation into static include aggregating the information across multiple contexts (Akbik et al., 2019; Bommasani et al., 2020), encoding \"semantically bleached\" sentences that rely almost exclusively on the meaning of a given word (e.g. \"This is <>\") (May et al., 2019), and even using contextualized embeddings to train static embeddings (Wang et al., 2020d).\n\nBut this is not to say that there is no room for improvement. Ethayarajh (2019) measure how similar the embeddings for identical words are in every layer, reporting that later BERT layers produce more context-specific representations 3 . They also find that BERT embeddings occupy a narrow cone in the vector space, and this effect increases from the earlier to later layers. That is, two random words will on average have a much higher cosine similarity than expected if embeddings were directionally uniform (isotropic) . 
Since isotropy was shown to be beneficial for static word embeddings (Mu and Viswanath, 2018), this might be a fruitful direction to explore for BERT.\n\nSince BERT embeddings are contextualized, an interesting question is to what extent they capture phenomena like polysemy and homonymy. There is indeed evidence that BERT's contextualized embeddings form distinct clusters corresponding to word senses (Wiedemann et al., 2019; Schmidt and Hofmann, 2020), making BERT successful at word sense disambiguation task. However, Mickus et al. (2019) note that the representations of the same word depend on the position of the sentence in which it occurs , likely due to the NSP objective. This is not desirable from the linguistic point of view, and could be a promising", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv2_taclccby4_license.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_FFIN_2002.pdf", - "query": "How many affiliate banks has First Financial Bankshares ?", - "target_page": 4, - "target_passage": "The corporation has 10 affiliate banks, which provide services from 28 full-service locations in the Central, West and High Plains regions of Texas. ", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "## FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\nNotes to Consolidated Financial Statements\n\nDecember 31, 2002, 2001 and 2000", - "page_start": 88, - "page_end": 88, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\nNotes to Consolidated Financial Statements\n\nDecember 31, 2002, 2001 and 2000", - "page_start": 77, - "page_end": 77, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "\n\nFirst Financial Bankshares, Inc. is a financial holding company\n\nheadquartered in Abilene, Texas, with consolidated assets of $2.0 billion as of December 31, 2002. 
The corporation has 10 affiliate banks, which provide services from 28 full-service locations in the Central, West and High Plains regions of Texas. The common stock of First Financial Bankshares, Inc. is held by more than 3,500 shareholders and is listed on The NASDAQ Stock Market® under the symbol FFIN.\n\n'Our 10 affiliate banks provide services from 28 full-service locations in the Central, West and High Plains regions of Texas.'", - "page_start": 3, - "page_end": 3, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\nNotes to Consolidated Financial Statements December 31, 2002, 2001 and 2000\n\n## 1. SUMMARY OF SIGNIFICANT ACCOUNTING POLICIES:\n\n## Nature of Operations\n\nFirst Financial Bankshares, Inc. (a Texas corporation) ('Bankshares') is a financial holding company which owns (through its wholly-owned Delaware subsidiary) all of the capital stock of ten banks located in Texas as of December 31, 2002. Those subsidiary banks are First National Bank of Abilene; Hereford State Bank; First National Bank, Sweetwater; Eastland National Bank; First Financial Bank, National Association, Cleburne; Stephenville Bank & Trust Co.; San Angelo National Bank; Weatherford National Bank; First Financial Bank, National Association, Southlake and City National Bank, Mineral Wells. Each subsidiary bank's primary source of revenue is providing loans and banking services to consumers and commercial customers in the market area in which the subsidiary is located.\n\nA summary of significant accounting policies of Bankshares and subsidiaries (collectively, the 'Company') applied in the preparation of the accompanying consolidated financial statements follows. 
The accounting principles followed by the Company and the methods of applying them are in conformity with both accounting principles generally accepted in the United States of America and prevailing practices of the banking industry.\n\n## Use of Estimates in Preparation of Financial Statements\n\nThe preparation of financial statements in conformity with accounting principles generally accepted in the United States of America requires management to make estimates and assumptions that affect the reported amounts of assets and liabilities and disclosure of contingent assets and liabilities at the date of the financial statements and reported amounts of revenues and expenses during the reporting period. Actual results could differ from those estimates. Material estimates that are particularly susceptible to significant change in the near term relate to the determination of the allowance for loan losses, the valuations of foreclosed real estate, deferred income tax assets, and the fair value of financial instruments.\n\n## Consolidation\n\nThe accompanying consolidated financial statements include the accounts of Bankshares and its subsidiaries, all of which are wholly-owned. All significant intercompany accounts and transactions have been eliminated.\n\n## Investment Securities\n\nManagement classifies debt and equity securities as held-to-maturity, available-for-sale, or trading based on its intent. Debt securities that management has the positive intent and ability to hold to maturity are classified as heldto-maturity and recorded at cost, adjusted for amortization of premiums and accretion of discounts, which are recognized as adjustments to interest income using the interest method. Securities not classified as held-to-maturity or trading are classified as available-for-sale and recorded at estimated fair value, with unrealized gains and losses, net of deferred income taxes, excluded from earnings and reported in a separate component of shareholders' equity. 
Securities classified as trading are recorded at estimated fair value, with unrealized gains and losses included in earnings. The Company had no trading securities at December 31, 2002, 2001, or 2000.\n\n## Loans and Allowance for Loan Losses", - "page_start": 72, - "page_end": 72, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\nConsolidated Balance Sheets\n\nDecember 31, 2002 and 2001", - "page_start": 67, - "page_end": 67, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\nConsolidated Statements of Cash Flows\n\nDecember 31, 2002, 2001 and 2000", - "page_start": 71, - "page_end": 71, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "range of services to individuals, associations, and corporations. These services include administering estates, testamentary trusts, various types of living trusts, and agency accounts. In addition, First National Bank of Abilene, First Financial Bank, Cleburne, San Angelo National Bank and First Financial Bank, National Association, Southlake, Texas provide securities brokerage services through arrangements with various third parties.\n\nWe have filed an application with the office of the Comptroller of the Currency to form a limited purpose national bank under which we will consolidate the management of our current trust departments. The new entity will operate as a subsidiary of our subsidiary holding company, First Financial Bankshares of Delaware, Inc. We believe that with this structure we can more effectively manage our current trust operations and provide trust services to customers of our banks that do not currently have trust departments. 
We anticipate that the new trust company will begin operations in the latter part of 2003.\n\n## Competition\n\nCommercial banking in Texas is highly competitive, and because we hold less than 1% of the state's deposits, we represent only a minor segment of the industry. To succeed in this industry, our management believes that our banks must have the capability to compete in the areas of (1) interest rates paid or charged; (2) scope of services offered; and (3) prices charged for such services. Our subsidiary banks compete in their respective service areas against highly competitive banks, thrifts, savings and loan associations, small loan companies, credit unions, mortgage companies, and brokerage firms, all of which are engaged in providing financial products and services and some of which are larger than our subsidiary banks in terms of capital, resources and personnel.\n\nOur business does not depend on any single customer or any few customers, the loss of any one of which would have a materially adverse effect upon our business. Although we have a broad base of customers that are not related to us, our customers also occasionally include our officers and directors, as well as other entities with which we are affiliated. With our subsidiary banks we may make loans to officers and directors, and entities with which we are affiliated, in the ordinary course of business. We make these loans on substantially the same terms, including interest rates and collateral, as those prevailing at the time for comparable transactions with other persons. Loans to directors, officers and their affiliates are also subject to numerous restrictions under federal and state banking laws which we describe in greater detail below.\n\n## Employees\n\nWith our subsidiary banks we employed approximately 750 full-time equivalent employees at February 1, 2003. 
Our management believes that our employee relations have been and will continue to be good.\n\n## Supervision and Regulation\n\nBoth federal and state laws extensively regulate bank holding companies, financial holding companies and banks. These laws (and the regulations promulgated thereunder) are primarily intended to protect depositors and the deposit insurance fund of the Federal Deposit Insurance Corporation, or FDIC, although shareholders may also benefit. The following information describes particular laws and regulatory provisions relating to financial holding companies and banks. This discussion is qualified in its entirety by reference to the particular laws and regulatory provisions. A change in any of these laws or regulations may have a material effect on our business and the business of our subsidiary banks.\n\n## Bank Holding Companies and Financial Holding Companies", - "page_start": 30, - "page_end": 30, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\nConsolidated Statements of Earnings\n\nDecember 31, 2002, 2001 and 2000", - "page_start": 68, - "page_end": 68, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "the parent's general unsecured creditors. If a depository institution fails to submit an acceptable capital restoration plan, it shall be treated as if it is significantly undercapitalized. 'Significantly undercapitalized' depository institutions may be subject to a number of requirements and restrictions, including orders to sell sufficient voting stock to become 'adequately capitalized,' requirements to reduce total assets, and cessation of receipt of deposits from correspondent banks. 'Critically undercapitalized' institutions are subject to the appointment of a receiver or conservator. Finally, FDICIA requires the various regulatory agencies to set forth certain standards that do not relate to capital. 
Such standards relate to the safety and soundness of operations and management and to asset quality and executive compensation, and permit regulatory action against a financial institution that does not meet such standards.\n\nIf an insured bank fails to meet its capital guidelines, it may be subject to a variety of other enforcement remedies, including a prohibition on the taking of brokered deposits and the termination of deposit insurance by the FDIC. Bank regulators continue to indicate their desire to raise capital requirements beyond their current levels.\n\nIn addition to FDICIA capital standards, Texas-chartered banks must also comply with the capital requirements imposed by the Texas Banking Department. Neither the Texas Finance Code nor its regulations specify any minimum capital-to-assets ratio that must be maintained by a Texas-chartered bank. Instead, the Texas Banking Department determines the appropriate ratio on a bank by bank basis, considering factors such as the nature of a bank's business, its total revenue, and the bank's total assets. As of December 31, 2002, all of our Texas-chartered banks exceeded the minimum ratios applied to them.\n\n## Our Support of Our Subsidiary Banks\n\nUnder Federal Reserve Board policy, we are expected to commit resources to act as a source of strength to support each of our subsidiary banks. This support may be required at times when, absent such Federal Reserve Board policy, we would not otherwise be required to provide it. In addition, any loans we make to our subsidiary banks would be subordinate in right of payment to deposits and to other indebtedness of our banks. 
In the event of a bank holding company's bankruptcy, any commitment by the bank holding company to a federal bank regulatory agency to maintain the capital of a subsidiary bank will be assumed by the bankruptcy trustee and be subject to a priority of payment.\n\nUnder the National Bank Act, if the capital stock of a national bank is impaired by losses or otherwise, the OCC is authorized to require the bank's shareholders to pay the deficiency on a pro-rata basis. If any shareholder refuses to pay the pro-rata assessment after three months notice, then the bank's board of directors must sell an appropriate amount of the shareholder's stock at a public auction to make up the deficiency. To the extent necessary, if a deficiency in capital still exists and the bank refuses to go into liquidation, then a receiver may be appointed to wind up the bank's affairs. Additionally, under the Federal Deposit Insurance Act, in the event of a loss suffered or anticipated by the FDIC (either as a result of the default of a banking subsidiary or related to FDIC assistance provided to a subsidiary in danger of default) our other banking subsidiaries may be assessed for the FDIC's loss.\n\n## Interstate Banking and Branching Act", - "page_start": 35, - "page_end": 35, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\n## Consolidated Statements of Shareholders' Equity December 31, 2002, 2001 and 2000", - "page_start": 70, - "page_end": 70, - "source_file": "NASDAQ_FFIN_2002.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_FFIN_2002.pdf", - "query": "What was the net income of First Financial Bankshares in 1995 ?", - "target_page": 14, - "target_passage": " 16,355", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## FIRST FINANCIAL BANKSHARES, INC. 
AND SUBSIDIARIES\n\nNotes to Consolidated Financial Statements\n\nDecember 31, 2002, 2001 and 2000", - "page_start": 77, - "page_end": 77, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\nNotes to Consolidated Financial Statements\n\nDecember 31, 2002, 2001 and 2000", - "page_start": 88, - "page_end": 88, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\nConsolidated Balance Sheets\n\nDecember 31, 2002 and 2001", - "page_start": 67, - "page_end": 67, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\nConsolidated Statements of Earnings\n\nDecember 31, 2002, 2001 and 2000", - "page_start": 68, - "page_end": 68, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\nNotes to Consolidated Financial Statements December 31, 2002, 2001 and 2000\n\n## 1. SUMMARY OF SIGNIFICANT ACCOUNTING POLICIES:\n\n## Nature of Operations\n\nFirst Financial Bankshares, Inc. (a Texas corporation) ('Bankshares') is a financial holding company which owns (through its wholly-owned Delaware subsidiary) all of the capital stock of ten banks located in Texas as of December 31, 2002. Those subsidiary banks are First National Bank of Abilene; Hereford State Bank; First National Bank, Sweetwater; Eastland National Bank; First Financial Bank, National Association, Cleburne; Stephenville Bank & Trust Co.; San Angelo National Bank; Weatherford National Bank; First Financial Bank, National Association, Southlake and City National Bank, Mineral Wells. 
Each subsidiary bank's primary source of revenue is providing loans and banking services to consumers and commercial customers in the market area in which the subsidiary is located.\n\nA summary of significant accounting policies of Bankshares and subsidiaries (collectively, the 'Company') applied in the preparation of the accompanying consolidated financial statements follows. The accounting principles followed by the Company and the methods of applying them are in conformity with both accounting principles generally accepted in the United States of America and prevailing practices of the banking industry.\n\n## Use of Estimates in Preparation of Financial Statements\n\nThe preparation of financial statements in conformity with accounting principles generally accepted in the United States of America requires management to make estimates and assumptions that affect the reported amounts of assets and liabilities and disclosure of contingent assets and liabilities at the date of the financial statements and reported amounts of revenues and expenses during the reporting period. Actual results could differ from those estimates. Material estimates that are particularly susceptible to significant change in the near term relate to the determination of the allowance for loan losses, the valuations of foreclosed real estate, deferred income tax assets, and the fair value of financial instruments.\n\n## Consolidation\n\nThe accompanying consolidated financial statements include the accounts of Bankshares and its subsidiaries, all of which are wholly-owned. All significant intercompany accounts and transactions have been eliminated.\n\n## Investment Securities\n\nManagement classifies debt and equity securities as held-to-maturity, available-for-sale, or trading based on its intent. 
Debt securities that management has the positive intent and ability to hold to maturity are classified as heldto-maturity and recorded at cost, adjusted for amortization of premiums and accretion of discounts, which are recognized as adjustments to interest income using the interest method. Securities not classified as held-to-maturity or trading are classified as available-for-sale and recorded at estimated fair value, with unrealized gains and losses, net of deferred income taxes, excluded from earnings and reported in a separate component of shareholders' equity. Securities classified as trading are recorded at estimated fair value, with unrealized gains and losses included in earnings. The Company had no trading securities at December 31, 2002, 2001, or 2000.\n\n## Loans and Allowance for Loan Losses", - "page_start": 72, - "page_end": 72, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\nConsolidated Statements of Cash Flows\n\nDecember 31, 2002, 2001 and 2000", - "page_start": 71, - "page_end": 71, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\n## Consolidated Statements of Shareholders' Equity December 31, 2002, 2001 and 2000", - "page_start": 70, - "page_end": 70, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## REPORT OF INDEPENDENT AUDITORS\n\nTo the Board of Directors and Shareholders of\n\nFirst Financial Bankshares, Inc.\n\nWe have audited the accompanying consolidated balance sheet of First Financial Bankshares, Inc. (a Texas corporation) and subsidiaries as of December 31, 2002, and the related consolidated statements of earnings, comprehensive earnings, shareholders' equity, and cash flows for the year then ended. These financial statements are the responsibility of the Company's management. Our responsibility is to express an opinion on these financial statements based on our audit. 
The consolidated financial statements of First Financial Bankshares, Inc. and subsidiaries as of December 31, 2001 and for each of the two years then ended, were audited by other auditors who have ceased operations and whose report dated January 11, 2002, expressed an unqualified opinion on those statements.\n\nWe conducted our audit in accordance with auditing standards generally accepted in the United States. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement. An audit includes examining, on a test basis, evidence supporting the amounts and disclosures in the financial statements. An audit also includes assessing the accounting principles used and significant estimates made by management, as well as evaluating the overall financial statement presentation. We believe that our audit provides a reasonable basis for our opinion.\n\nIn our opinion, the financial statements referred to above present fairly, in all material respects, the financial position of First Financial Bankshares, Inc. and subsidiaries at December 31, 2002, and the consolidated results of their operations and their cash flows for the year then ended in conformity with accounting principles generally accepted in the United States.\n\nAs discussed above, the financial statements of First Financial Bankshares, Inc. as of December 31, 2001 and the two years then ended were audited by other auditors who have ceased operations. As described in Note 1, these financial statements have been revised to include the transitional disclosures required by Statement of Financial Accounting Standards No. 142, Goodwill and Other Intangible Assets , which was adopted by the Company as of January 1, 2002. 
Our audit procedures with respect to the disclosures in Note 1 with respect to 2001 and 2000 included (a) agreeing the previously reported net income to the previously issued financial statements and the adjustments to reported net income representing amortization expense including related tax effects recognized in those periods related to goodwill to the Company's underlying records obtained from management, and (b) testing the mathematical accuracy of the reconciliation of adjusted net income to reported net income, and the related earnings per share amounts. In our opinion, the disclosures for 2001 and 2000 are appropriate. However, we were not engaged to audit, review, or apply any procedures to the 2001 and 2000 financial statements of the Company other than with respect to such disclosures and, accordingly, we do not express an opinion or any other form of assurance on the 2001 and 2000 financial statements taken as a whole.\n\nErnst & Young LLP\n\nDallas, Texas January 14, 2003", - "page_start": 64, - "page_end": 64, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\nNotes to Consolidated Financial Statements\n\nDecember 31, 2002, 2001 and 2000\n\nQuantitative measures established by regulation to ensure capital adequacy require Bankshares and each of its subsidiaries to maintain minimum amounts and ratios (set forth in the table below) of total and Tier I capital (as defined in the regulations) to risk-weighted assets (as defined), and of Tier I capital (as defined), to average assets (as defined). 
Management believes as of December 31, 2002 and 2001, that Bankshares and each of its subsidiaries meet all capital adequacy requirements to which they are subject.\n\nAs of December 31, 2002 and 2001, the most recent notification from each respective subsidiaries' primary regulator categorized each of Bankshares' subsidiaries as well-capitalized under the regulatory framework for prompt corrective action. To be categorized as well capitalized, the subsidiaries must maintain minimum total risk-based, Tier I risk-based, and Tier I leverage ratios as set forth in the table.\n\nThere are no conditions or events since that notification that management believes have changed the institutions' categories. Bankshares' and its significant subsidiaries' actual capital amounts and ratios are presented in the table below:", - "page_start": 87, - "page_end": 87, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "amounted to 49.1%, 48.9% and 45.2% of net earnings, respectively, in 2002, 2001 and 2000. Given our current strong capital position and projected earnings and asset growth rates, we do not anticipate any change in our current dividend policy.\n\nEach state bank that is a member of the Federal Reserve System and each national banking association is required by federal law to obtain the prior approval of the Federal Reserve Board and the OCC, respectively, to declare and pay dividends if the total of all dividends declared in any calendar year would exceed the total of (1) such bank's net profits (as defined and interpreted by regulation) for that year plus (2) its retained net profits (as defined and interpreted by regulation) for the preceding two calendar years, less any required transfers to surplus. 
In addition, these banks may only pay dividends to the extent that retained net profits (including the portion transferred to surplus) exceed bad debts (as defined by regulation).\n\nTo pay dividends, we and our subsidiary banks must maintain adequate capital above regulatory guidelines. In addition, if the applicable regulatory authority believes that a bank under its jurisdiction is engaged in or is about to engage in an unsafe or unsound practice (which, depending on the financial condition of the bank, could include the payment of dividends), the authority may require, after notice and hearing, that such bank cease and desist from the unsafe practice. The Federal Reserve Board and the OCC have each indicated that paying dividends that deplete a bank's capital base to an inadequate level would be an unsafe and unsound banking practice. The Federal Reserve Board, the OCC and the FDIC have issued policy statements that recommend that bank holding companies and insured banks should generally only pay dividends out of current operating earnings.\n\n## ITEM 7A. QUANTITATIVE AND QUALITATIVE DISCLOSURES ABOUT MARKET RISK\n\nOur management considers interest rate risk to be a significant market risk for us. See 'Item 7-Management's Discussion and Analysis of Financial Condition and Results of Operations-Balance Sheet Review-Interest Rate Risk' for disclosure regarding this market risk.", - "page_start": 55, - "page_end": 55, - "source_file": "NASDAQ_FFIN_2002.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_FFIN_2002.pdf", - "query": "What is the address of the San Angelo National Bank main office ?", - "target_page": 21, - "target_passage": "Main Office 301 W. Beauregard San Angelo, Texas 76903 Chartered 1997 ", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "National Banking Associations . 
Banks that are organized as national banking associations under the National Bank Act are subject to regulation and examination by the Office of the Comptroller of the Currency, or OCC. The OCC supervises, regulates and regularly examines the First National Bank of Abilene, First National Bank, Sweetwater, First Financial Bank, National Association, Cleburne, Eastland National Bank, San Angelo National Bank, Weatherford National Bank, First Financial Bank, National Association, Southlake and City National Bank, Mineral Wells. The OCC's supervision and regulation of banks is primarily intended to protect the interests of depositors. The National Bank Act:\n\n - · requires each national banking association to maintain reserves against deposits,\n - · restricts the nature and amount of loans that may be made and the interest that may be charged, and\n - · restricts investments and other activities.", - "page_start": 31, - "page_end": 31, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "- · Eastland National Bank, Eastland, Texas;\n - · Stephenville Bank and Trust Co., Stephenville, Texas;\n - · First Financial Bank, National Association, Cleburne, Texas;\n - · San Angelo National Bank, San Angelo, Texas;\n - · First Financial Bank, National Association, Southlake, Texas; and\n - · Weatherford National Bank, Weatherford, Texas;\n - · City National Bank, Mineral Wells, Texas.\n\nAs described in more detail below, we elected to be treated as a financial holding company in September 2001.\n\nOur service centers are located primarily in North Central and West Texas. 
Considering the branches and locations of all our subsidiary banks, as of December 31, 2002, we had 28 financial centers across Texas, with seven locations in Abilene, two locations in Cleburne, two locations in Stephenville, two locations in San Angelo, three locations in Weatherford, and one location each in Mineral Wells, Hereford, Sweetwater, Eastland, Southlake, Aledo, Alvarado, Burleson, Keller, Trophy Club, Roby, and Trent.\n\nInformation on our revenues, profits and losses and total assets appears in the discussion of our Results of Operations contained in Item 7 hereof.\n\n## First Financial Bankshares, Inc.\n\nWe provide management and technical resources and policy direction to our subsidiary banks, which enables them to improve or expand their banking services while continuing their local activity and identity. Each of our subsidiary banks operates under the day-to-day management of its own board of directors and officers, with substantial authority in making decisions concerning their own investments, loan policies, interest rates, and service charges. We provide resources and policy direction in, among other things, the following areas:\n\n - · asset and liability management;\n - · accounting, budgeting, planning and insurance;\n - · capitalization; and\n - · regulatory compliance.\n\nIn particular, we assist our subsidiary banks with, among other things, decisions concerning major capital expenditures, employee fringe benefits, including pension plans and group insurance, dividend policies, and appointment of officers and directors and their compensation. We also perform, through corporate staff groups or by outsourcing to third parties, internal audits and loan reviews of our subsidiary banks. 
Through First National Bank of Abilene, we provide advice and specialized services for our banks related to lending, investing, purchasing, advertising, public relations, and computer services.\n\nWhile we have no specific acquisition agreements in place or commitments to expand our branch network, we periodically evaluate various potential financial institution acquisition opportunities and also periodically evaluate potential locations for new branch offices. We anticipate that funding for any acquisitions or expansions would be provided from our existing cash balances, available dividends from subsidiary banks, utilization of available lines of credit and future debt or equity offerings.\n\n## Services Offered by Our Subsidiary Banks\n\nEach of our subsidiary banks is a separate legal entity that operates under the day-to-day management of its own board of directors and officers. Each of our subsidiary banks provides general commercial banking services, which include accepting and holding checking, savings and time deposits, making loans, automated teller machines, drivein and night deposit services, safe deposit facilities, transmitting funds, and performing other customary commercial banking services. Certain of our subsidiary banks also administer pension plans, profit sharing plans and other employee benefit plans. First National Bank of Abilene, First National Bank, Sweetwater, Stephenville Bank and Trust Co. and San Angelo National Bank have active trust departments. The trust departments offer a complete", - "page_start": 29, - "page_end": 29, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## Bob Housley appreciates loyalty.\n\nHis company, Housley Communications, is a thriving business with a staff of 225 and contracting relationships with over 700 firms. The company provides engineering and implementation of advanced telecommunications systems. 
'We provide everything a company needs to go from zero to 100 percent.'\n\nSuccess hasn't necessarily been easy. 'We had some difficult times when we were starting out in the '80s,' says Housley. 'San Angelo National Bank worked very diligently to help me get where I am today. They stuck with me and were always team players.'\n\nHousley is a demanding customer - a trait to which he credits much of his success. 'I am very customer service-oriented. It's how I built my business. I appreciate that I can get that same type of dedication from San Angelo National Bank, and I see it reflected throughout the First Financial Bankshares organization.'\n\nHousley the shareholder is no less demanding, but he's had good reason to be pleased with his returns from First Financial Bankshares. 'First Financial's expansion strategy is excellent - they do their research and find banks with good opportunity. Their operations are sound, and their growth is well-managed. I believe they are one of the best mid-size banking organizations around.'\n\nBob Housley President Housley Communications San Angelo, Texas\n\n## 'They stuck with me and were always team players.'\n\n", - "page_start": 10, - "page_end": 10, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## San Angelo National Bank\n\n## Main Office\n\n301 W. Beauregard San Angelo, Texas 76903 Chartered 1997\n\n## Branch\n\n3471 Knickerbocker San Angelo, Texas 76904\n\n## Senior Officers\n\nMichael L. Boyd\n\nPresident and Chief Executive Officer\n\nDavid Byrd\n\nExecutive Vice President and Trust Officer\n\nRobert Pate\n\nExecutive Vice President\n\nKatherine Reeves\n\nExecutive Vice President and Cashier\n\nMichael L. Boyd President and Chief Executive Officer\n\n\n\n## Directors\n\nDal DeWees\n\nChairman of the Board\n\nGeorge Alexander\n\nPartner, Alexander Construction Company\n\nMichael L. Boyd\n\nPresident and Chief Executive Officer\n\nW. Dan Cravy, M.D.\n\nPhysician\n\nDavid B. Drake\n\nInvestment Advisor\n\nF. 
Scott Dueser\n\nFirst Financial Bankshares, Inc.\n\nDoug Eakman\n\nOwner, Pecos Street Pharmacy\n\nJoe Henderson\n\nPresident, Porter Henderson Implement Company, Inc.\n\nRobert D. Housley President and Owner, Housley Communications\n\nJim Johnson\n\nShannon, Porter, Johnson, Pfluger, Davis & Joynton, LLP\n\nDavid F. Lupton\n\nPresident, Angelo Glass & Mirror\n\nCompany, Inc.\n\nKenneth T. Murphy\n\nFirst Financial Bankshares, Inc.\n\nBill Pfluger\n\nRancher\n\nRichard W. Salmon Investments\n\n\n\nJohn E. Schwartz, Sr. Farmer/Rancher\n\nF.L. (Steve) Stephens Retired Chairman and Chief Executive Officer, Town & Country Food Stores, Inc.\n\n| IN THOUSANDS | December 31, 2002 | December 31, 2001 |\n|--------------------------|---------------------|---------------------|\n| Assets | $303,124 | $299,808 |\n| Loans | 115,450 | 110,685 |\n| Deposits | 251,931 | 257,212 |\n| Equity | 30,634 | 27,986 |\n| Net Income | 4,917 | 4,167 |\n| Trust Assets | 144,047 | 129,471 |\n| Return on Average Assets | 1.70% | 1.46% |\n| Return on Average Equity | 16.48 | 15.13 |\n\nTom Green County Deposit Market Share\n\nSan Angelo\n\n2\n\n4\n\n%", - "page_start": 20, - "page_end": 20, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "range of services to individuals, associations, and corporations. These services include administering estates, testamentary trusts, various types of living trusts, and agency accounts. In addition, First National Bank of Abilene, First Financial Bank, Cleburne, San Angelo National Bank and First Financial Bank, National Association, Southlake, Texas provide securities brokerage services through arrangements with various third parties.\n\nWe have filed an application with the office of the Comptroller of the Currency to form a limited purpose national bank under which we will consolidate the management of our current trust departments. 
The new entity will operate as a subsidiary of our subsidiary holding company, First Financial Bankshares of Delaware, Inc. We believe that with this structure we can more effectively manage our current trust operations and provide trust services to customers of our banks that do not currently have trust departments. We anticipate that the new trust company will begin operations in the latter part of 2003.\n\n## Competition\n\nCommercial banking in Texas is highly competitive, and because we hold less than 1% of the state's deposits, we represent only a minor segment of the industry. To succeed in this industry, our management believes that our banks must have the capability to compete in the areas of (1) interest rates paid or charged; (2) scope of services offered; and (3) prices charged for such services. Our subsidiary banks compete in their respective service areas against highly competitive banks, thrifts, savings and loan associations, small loan companies, credit unions, mortgage companies, and brokerage firms, all of which are engaged in providing financial products and services and some of which are larger than our subsidiary banks in terms of capital, resources and personnel.\n\nOur business does not depend on any single customer or any few customers, the loss of any one of which would have a materially adverse effect upon our business. Although we have a broad base of customers that are not related to us, our customers also occasionally include our officers and directors, as well as other entities with which we are affiliated. With our subsidiary banks we may make loans to officers and directors, and entities with which we are affiliated, in the ordinary course of business. We make these loans on substantially the same terms, including interest rates and collateral, as those prevailing at the time for comparable transactions with other persons. 
Loans to directors, officers and their affiliates are also subject to numerous restrictions under federal and state banking laws which we describe in greater detail below.\n\n## Employees\n\nWith our subsidiary banks we employed approximately 750 full-time equivalent employees at February 1, 2003. Our management believes that our employee relations have been and will continue to be good.\n\n## Supervision and Regulation\n\nBoth federal and state laws extensively regulate bank holding companies, financial holding companies and banks. These laws (and the regulations promulgated thereunder) are primarily intended to protect depositors and the deposit insurance fund of the Federal Deposit Insurance Corporation, or FDIC, although shareholders may also benefit. The following information describes particular laws and regulatory provisions relating to financial holding companies and banks. This discussion is qualified in its entirety by reference to the particular laws and regulatory provisions. A change in any of these laws or regulations may have a material effect on our business and the business of our subsidiary banks.\n\n## Bank Holding Companies and Financial Holding Companies", - "page_start": 30, - "page_end": 30, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "the parent's general unsecured creditors. If a depository institution fails to submit an acceptable capital restoration plan, it shall be treated as if it is significantly undercapitalized. 'Significantly undercapitalized' depository institutions may be subject to a number of requirements and restrictions, including orders to sell sufficient voting stock to become 'adequately capitalized,' requirements to reduce total assets, and cessation of receipt of deposits from correspondent banks. 'Critically undercapitalized' institutions are subject to the appointment of a receiver or conservator. 
Finally, FDICIA requires the various regulatory agencies to set forth certain standards that do not relate to capital. Such standards relate to the safety and soundness of operations and management and to asset quality and executive compensation, and permit regulatory action against a financial institution that does not meet such standards.\n\nIf an insured bank fails to meet its capital guidelines, it may be subject to a variety of other enforcement remedies, including a prohibition on the taking of brokered deposits and the termination of deposit insurance by the FDIC. Bank regulators continue to indicate their desire to raise capital requirements beyond their current levels.\n\nIn addition to FDICIA capital standards, Texas-chartered banks must also comply with the capital requirements imposed by the Texas Banking Department. Neither the Texas Finance Code nor its regulations specify any minimum capital-to-assets ratio that must be maintained by a Texas-chartered bank. Instead, the Texas Banking Department determines the appropriate ratio on a bank by bank basis, considering factors such as the nature of a bank's business, its total revenue, and the bank's total assets. As of December 31, 2002, all of our Texas-chartered banks exceeded the minimum ratios applied to them.\n\n## Our Support of Our Subsidiary Banks\n\nUnder Federal Reserve Board policy, we are expected to commit resources to act as a source of strength to support each of our subsidiary banks. This support may be required at times when, absent such Federal Reserve Board policy, we would not otherwise be required to provide it. In addition, any loans we make to our subsidiary banks would be subordinate in right of payment to deposits and to other indebtedness of our banks. 
In the event of a bank holding company's bankruptcy, any commitment by the bank holding company to a federal bank regulatory agency to maintain the capital of a subsidiary bank will be assumed by the bankruptcy trustee and be subject to a priority of payment.\n\nUnder the National Bank Act, if the capital stock of a national bank is impaired by losses or otherwise, the OCC is authorized to require the bank's shareholders to pay the deficiency on a pro-rata basis. If any shareholder refuses to pay the pro-rata assessment after three months notice, then the bank's board of directors must sell an appropriate amount of the shareholder's stock at a public auction to make up the deficiency. To the extent necessary, if a deficiency in capital still exists and the bank refuses to go into liquidation, then a receiver may be appointed to wind up the bank's affairs. Additionally, under the Federal Deposit Insurance Act, in the event of a loss suffered or anticipated by the FDIC (either as a result of the default of a banking subsidiary or related to FDIC assistance provided to a subsidiary in danger of default) our other banking subsidiaries may be assessed for the FDIC's loss.\n\n## Interstate Banking and Branching Act", - "page_start": 35, - "page_end": 35, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "Assets managed by the Trust Departments at First National Bank of Abilene, San Angelo National Bank, Stephenville Bank & Trust Co. and First National Bank, Sweetwater, increased $27.3 million during the past year to a December 31, 2002 book value of $986.2 million. However, due to depressed stock market values and volumes, trust department revenue declined in 2002. Trust combined revenues for the year were down slightly from $5.89 million in 2001 to $5.83 million for 2002. 
In 2003, we anticipate a return to improved income growth.\n\nThe performance of the stock market the past three years has been a challenge that our trust investment professionals have managed well. Not since 1939-1941 have we seen the S&P 500 drop 35% in a three-year period. Our portfolio managers outperformed their indices in Large Cap stocks by 83 basis points and Fixed Income securities by 168 basis points. This performance bodes well for the present and future of our client accounts.\n\nDuring 2002, we saw a successful conversion of Stephenville Bank & Trust to the SEI Corporation accounting system. In March 2003, we will be converting First National Bank, Sweetwater, to this system as well. This will provide all First Financial Bankshares trust clients with the strength and advantages of a uniform accounting system. Other operational systems have been examined and consistent practices and procedures have been implemented.\n\nTo further enhance our risk management assessments in 2003, we will be introducing an Operational Peer Review Team similar to the successful peer review teams used in the Personal Trust areas of our four locations.\n\nRobert S. Patterson First National Bank of Abilene\n\nPerry Elliott Stephenville Bank & Trust Co.\n\n\n\n\n\nJanis McDowell First National Bank, Sweetwater\n\n\n\nPlans for the formation of a First Financial Bankshares trust company are moving forward with regulatory approval anticipated in late Spring or early Summer. This will permit your Company to provide quality, locally delivered trust services to additional markets.\n\nWith skilled trust professionals offering a complete range of financial products and services, the future of our trust departments look bright. 
Through dedication to individualized portfolio design and personalized service, our trust departments stand ready to meet the needs of our present and future clients.\n\n\n\nSenior Vice President, Trust Services\n\n## TRUST ASSETS in millions\n\n\n\n\n\nDavid Byrd San Angelo National Bank\n\n", - "page_start": 14, - "page_end": 14, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## FIRST FINANCIAL BANKSHARES, INC. AND SUBSIDIARIES\n\nNotes to Consolidated Financial Statements December 31, 2002, 2001 and 2000\n\n## 1. SUMMARY OF SIGNIFICANT ACCOUNTING POLICIES:\n\n## Nature of Operations\n\nFirst Financial Bankshares, Inc. (a Texas corporation) ('Bankshares') is a financial holding company which owns (through its wholly-owned Delaware subsidiary) all of the capital stock of ten banks located in Texas as of December 31, 2002. Those subsidiary banks are First National Bank of Abilene; Hereford State Bank; First National Bank, Sweetwater; Eastland National Bank; First Financial Bank, National Association, Cleburne; Stephenville Bank & Trust Co.; San Angelo National Bank; Weatherford National Bank; First Financial Bank, National Association, Southlake and City National Bank, Mineral Wells. Each subsidiary bank's primary source of revenue is providing loans and banking services to consumers and commercial customers in the market area in which the subsidiary is located.\n\nA summary of significant accounting policies of Bankshares and subsidiaries (collectively, the 'Company') applied in the preparation of the accompanying consolidated financial statements follows. 
The accounting principles followed by the Company and the methods of applying them are in conformity with both accounting principles generally accepted in the United States of America and prevailing practices of the banking industry.\n\n## Use of Estimates in Preparation of Financial Statements\n\nThe preparation of financial statements in conformity with accounting principles generally accepted in the United States of America requires management to make estimates and assumptions that affect the reported amounts of assets and liabilities and disclosure of contingent assets and liabilities at the date of the financial statements and reported amounts of revenues and expenses during the reporting period. Actual results could differ from those estimates. Material estimates that are particularly susceptible to significant change in the near term relate to the determination of the allowance for loan losses, the valuations of foreclosed real estate, deferred income tax assets, and the fair value of financial instruments.\n\n## Consolidation\n\nThe accompanying consolidated financial statements include the accounts of Bankshares and its subsidiaries, all of which are wholly-owned. All significant intercompany accounts and transactions have been eliminated.\n\n## Investment Securities\n\nManagement classifies debt and equity securities as held-to-maturity, available-for-sale, or trading based on its intent. Debt securities that management has the positive intent and ability to hold to maturity are classified as heldto-maturity and recorded at cost, adjusted for amortization of premiums and accretion of discounts, which are recognized as adjustments to interest income using the interest method. Securities not classified as held-to-maturity or trading are classified as available-for-sale and recorded at estimated fair value, with unrealized gains and losses, net of deferred income taxes, excluded from earnings and reported in a separate component of shareholders' equity. 
Securities classified as trading are recorded at estimated fair value, with unrealized gains and losses included in earnings. The Company had no trading securities at December 31, 2002, 2001, or 2000.\n\n## Loans and Allowance for Loan Losses", - "page_start": 72, - "page_end": 72, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## Pending and Proposed Legislation\n\nNew regulations and statutes are regularly proposed containing wide-ranging proposals for altering the structures, regulations and competitive relationships of financial institutions operating in the United States. We cannot predict whether or in what form any proposed regulation or statute will be adopted or the extent to which our business may be affected by any new regulation or statute.\n\n## Enforcement Powers of Federal Banking Agencies\n\nThe Federal Reserve and other state and federal banking agencies and regulators have broad enforcement powers, including the power to terminate deposit insurance, issue cease-and-desist orders, impose substantial fees and other civil and criminal penalties and appoint a conservator or receiver. Our failure to comply with applicable laws, regulations and other regulatory pronouncements could subject us, as well as our officers and directors, to administrative sanctions and potentially substantial civil penalties.\n\n## Available Information\n\nWe file annual, quarterly and special reports, proxy statements and other information with the Securities and Exchange Commission. You may read and copy any document we file at the Securities and Exchange Commission's Public Reference Room at 450 Fifth Street, N.W., Washington, D.C. 20549. Please call the Securities and Exchange Commission at 1-800-SEC-0330 for further information on the public reference room. Our SEC filings are also available to the public at the Securities and Exchange Commission's web site at http://www.sec.gov. No information from this web page is incorporated by reference herein. 
Our web site is http://www.ffin.com. You may also obtain copies of our annual, quarterly and special reports, proxy statements and certain other information filed with the SEC, as well as amendments thereto, free of charge from our web site. These documents are posted to our web site as soon as reasonably practicable after we have filed them with the SEC.\n\n## ITEM 2. PROPERTIES\n\nOur principal office is located in the First National Bank Building at 400 Pine Street in downtown Abilene, Texas. We lease two spaces in a building owned by First National Bank of Abilene. The lease for approximately 2,300 square feet of space expires December 31, 2004. The lease for approximately 1,100 square feet of space expires May 31, 2006. Our subsidiary banks collectively own 22 banking facilities, some of which are detached drive-ins, and they also lease six banking facilities. Our management considers all of our existing locations to be well-suited for conducting the business of banking. We believe that our existing facilities are adequate to meet our requirements and our subsidiary banks' requirements for the foreseeable future.\n\n## ITEM 3. LEGAL PROCEEDINGS\n\nFrom time to time we and our subsidiary banks are parties to lawsuits arising in the ordinary course of our banking business. However, there are no material pending legal proceedings to which we, our subsidiary banks or our other direct and indirect subsidiaries, or any of their properties, are currently subject. Other than regular, routine examinations by state and federal banking authorities, there are no proceedings pending or known to be contemplated by any governmental authorities.\n\n## ITEM 4. 
SUBMISSION OF MATTERS TO A VOTE OF SECURITY HOLDERS\n\nNo matters were submitted to a vote of our security holders during the fourth quarter of our fiscal year ended December 31, 2002.", - "page_start": 38, - "page_end": 38, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## Target Markets\n\n## Clients\n\n - ■ Banks\n - ■ Credit unions\n - ■ Independent ATM owners\n - ■ Mobile operators\n - ■ Payment associations\n - ■ Retailers and merchants\n - ■ Banks\n - ■ Credit unions\n - ■ EFT networks\n - ■ Independent ATM owners\n - ■ Resellers\n - ■ Retailers and merchants\n - ■ Mobile phone operators\n - ■ Third-party prepaid suppliers for mobile phone operators\n - ■ Banks\n - ■ Brokerages\n - ■ Credit card issuers\n - ■ Credit unions\n - ■ Investment community\n - ■ Retailers and merchants\n - ■ Banks\n - ■ Credit unions\n - ■ Independent ATM owners\n - ■ Retailers\n - ■ Bank Austria/Creditanstalt (CZE)\n - ■ Budapest Bank (HUN)\n - ■ Citibank (GRC, HUN, POL, CZE)\n - ■ Deutsche Bank (HUN, POL)\n - ■ DiBa (DEU)\n - ■ Dillards Inc. (USA)\n - ■ ABN Amro (HUN, CUR)\n - ■ Banco Comercial Português (MOZ)\n - ■ Banco de Oro, Unibank (PHL)\n - ■ Bank Slaski (POL)\n - ■ Century Bank (ZWE)\n - ■ Cayman National Bank (CYM)\n - ■ Commercial Bank of Romania (ROM)\n - ■ ALLTEL (USA)\n - ■ Centertel (POL)\n - ■ Eurotel (CZE)\n - ■ ERA GSM (POL)\n - ■ Bank of Cyprus (GRC, GBR)\n - ■ Commercial Bank of Ceylon (LKA)\n - ■ First Federal Savings Bank of LaCrosse (USA)\n - ■ Fortis Bank (POL)\n - ■ ING/Bank Slaski (POL)\n - ■ Oyak Bank (TUR)\n - ■ Metropolitan National Bank (USA)\n - ■ Millennium Bank (POL)\n - ■ Raiffeisenbank (HRV)\n - ■ Saks Inc. (USA)\n - ■ Maduro and Curiel's Bank N.V. (CUR)\n - ■ Nova Bank (GRC)\n - ■ Old National Service Corp. (USA)\n - ■ Seylan Bank (LKA)\n - ■ VIFI Card Services (USA)\n - ■ WestPac Banking Corp. (FJI, PNG)\n - ■ Old National Service Corp. (USA)\n - ■ Plus GSM (POL)\n - ■ VIPnet (HRV)\n - ■ Maduro and Curiel's Bank N.V. 
(CUR)\n - ■ National Bank of Kuwait-Lebanon (LBN)\n - ■ Splitska Banka (HRV)\n - ■ Union Bank Ltd. (PAK)\n - ■ Union Banka (CZE)", - "page_start": 8, - "page_end": 8, - "source_file": "NASDAQ_EEFT_2000.pdf" - } - ] - }, - { - "references": { - "source_file": "news3.pdf", - "query": "What kind of scholarship programs are available to start a financial career?", - "target_page": 1, - "target_passage": "Some are offered directly through colleges and universities that have financial planning degree and certificate programs. Others are available through nonprofits and organizations like the CFP Board Center for Financial Planning", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "\n\nHome / Money / 3 Great Resources to Kick-Start Your Financial Planning Career\n\n\n\nMONEY\n\n## 3 Great Resources to Kick-Start Your Financial Planning Career\n\n11/23/2022\n\n(NewsUSA) - Finding a rewarding career that offers growth potential, work-life balance and the satisfaction of helping others is a key priority for many job seekers. With those goals in mind, a career in financial planning should be a top contender, whether you are just starting out or looking to make a career change. But once you have decided that financial planning is the field for you, how do you get started? Here are three resources that can help you launch a successful financial planning career.\n\n- 1. Guide to Careers in Financial Planning. Based on interviews with leading financial services firms, this guide introduces you to the wide range of career opportunities in the financial planning profession. It identifies typical entry points and career tracks, explores the types of companies that hire financial planners and provides information on how to find financial planning career opportunities. It also includes resources such as a list of recommended questions to ask in a job interview.\n- 2. Scholarship Programs. 
Dozens of scholarship programs are available to support you on your professional journey. Some are offered directly through colleges and universities that have financial planning degree and certificate programs. Others are available through nonprofits and organizations like the CFP Board Center for Financial Planning, which administers 16 scholarship programs that help pay for the education and exam requirements to become a CERTIFIED FINANCIAL PLANNERTM professional. Financial services firms may offer scholarships or tuition reimbursements to employees to cover the costs of obtaining professional designations and credentials such as CFP® certification -- some of which may be required to advance within the company.\n- 3. Career Fairs. In-person and virtual career fairs provide valuable opportunities to connect with prospective employers. CFP Board's spring and fall career fairs are some of the most popular hiring events in the profession, with dozens of firms participating in these online exhibitions. Job seekers can visit employers' virtual exhibit booths and view open jobs and internships, apply for open positions and interact with employers through one-on-one video meetings and messaging. You can also visit the CFP Board Career Center to browse current job and internship opportunities in financial planning, as well as a collection of articles providing career guidance.\n\nOther top resources include career offices at your college or university, financial services companies' career websites and professional organizations that may have a local chapter near you.\n\nMaking the most of these resources will not only help you find a financial planning job, but also support your growth and development as a future financial planning professional. 
To learn more about CFP® certification, visit the CFP Board website.\n\nArticle Link\n\nhttps://about.newsusa.com/3-great-resources-to-kick-start-your-financial-planni…", - "page_start": 0, - "page_end": 0, - "source_file": "news3.pdf" - }, - { - "text": "to selected students pursuing careers in finance, economics, accounting, marketing, business administration, computer science and information technology. In addition, scholars will take part in a Chesapeake Presidential Leadership Course facilitated by faculty members in coordination with designated Chesapeake leadership coaches, including a Chesapeake senior vice president and OCU alumni.\n\nIn 2007 Chesapeake launched a scholarship program in Texas with an initial $1.25 million contribution, challenging the cities of Fort Worth and Dallas to match its gift within a year. The cities responded and matched the gift, so Chesapeake in 2008 added another $1.25 million to the fund, bringing the total to $3.75 million. The Chesapeake Scholarship Fund currently funds the cost of higher education for 48 minority students. The fund provides each student $20,000 a year for up to four years at the school of their choice. To date more than $1.0 million has been distributed to deserving local students.\n\nTo help ensure the training of qualified geologists, engineers, landmen and energy lawyers in the next generation, we award scholarships to students pursuing energy-related degrees. We also help mentor them through Chesapeake's Peak Program. Junior- and senior-level scholarship recipients are paired with Chesapeake employee mentors who help develop students' knowledge and provide career advice. There are currently 25 mentors and 40 scholarship recipients participating in the Peak Program.\n\nOur recruiting team also initiated a strategic military recruitment effort during the past two years to hire former military personnel to work in a variety of leadership and crew positions. This effort earned Chesapeake an honor from G.I. 
JOBS magazine when we were named a 2011 Top 100 Military-Friendly Employer. Chesapeake currently employs 37 men and women who formerly served as junior military officers and more than 100 former servicemen and servicewomen who joined the company through a program called Troops 2 Roughnecks.\n\nIn addition to our specific scholarship programs, one-time educational donations and recruitment efforts, in 2010 we gave more than $1.8 million to fund higher education for nearly 400 other students in 12 states through our Chesapeake Scholars program. Chesapeake's scholarships help recruit the best and brightest students and provide educational opportunities in communities where we operate. In Oklahoma City, more than 400 employees volunteer for up to an hour a week on company time at four local public schools. Chesapeake's program has grown to become the largest corporate mentoring program in Oklahoma.\n\n## Community Impact\n\nChesapeake employees have been enriching their hometowns as volunteers for many years. We formalized those efforts in 2009 by establishing an official employee volunteer program, the H.E.L.P. (Helping Energize Local Progress) Initiative, wherein employees are invited to volunteer each month for a variety of organizations from food pantries to animal shelters. Through that program, employees donated more than 26,000 hours to their communities in 2009.\n\nIn the summer of 2010, Chesapeake took the H.E.L.P. Initiative to a higher level through the launch of Operation Blue. From Memorial Day through Labor Day, each employee was given four hours of company time to complete the volunteer project of their choice. Our employees eagerly accepted the challenge, and in three months more than 4,900 employees donated 30,900 hours of service to 519 organizations in more than 96 communities across the country. 
Operation Blue is now an annual\n\nvolunteer program in which employees roll up their sleeves in the communities they call home.", - "page_start": 26, - "page_end": 26, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "## NOTES TO AND FORMING PART OF THE FINANCIAL STATEMENTS FOR THE FINANCIAL YEAR ENDED 30 JUNE 2000", - "page_start": 50, - "page_end": 50, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## NOTES TO AND FORMING PART OF THE FINANCIAL STATEMENTS FOR THE FINANCIAL YEAR ENDED 30 JUNE 2000", - "page_start": 58, - "page_end": 58, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## DEVOTION TO SERVICE\n\nor us, it is a measure of responsible corporate citizenship. The MGM MIRAGE Corporate Charitable Giving Program is the principal source of financial donations to community and social initiatives. Funded by a percentage of the company's net profits, the Corporate Charitable Giving Program supports various community efforts impacting four critical areas: F\n\nCHILDHOOD DEVELOPMENT Community-based programs that focus on the overall development and well-being of children.\n\nCOMMUNITY DEVELOPMENT Programs that focus on low-income or socio-economically disadvantaged communities.\n\nDIVERSITY Programs which are inclusive receive priority in funding. This includes efforts that encourage economic development and enhance individual and community resources.\n\nEDUCATION Programs and efforts to strengthen public education from kindergarten through higher education.\n\nThrough various education partnerships with institutions such as the University of Nevada, we award scholarships to help students achieve their educational goals and to encourage their interest in our business. Additionally, scholarship programs assist the children of our employees with their higher education aspirations\n\n\n\n\n\n## MGM GRAND DETROIT\n\nPresident George Boyer epitomizes the company's commitment to corporate social responsibility. 
Boyer reads to a child at the Northwest Community Center in Detroit during an after-school mentoring program funded by the Voice Foundation.\n\n\n\nMGM MIRAGE supports a variety of programs to further educational aspirations of both students and employees, including tuition reimbursement for employees, scholarships for children of employees, and on-site GED, naturalization and English-asa-second-language (ESL) classes.\n\nGiving back to the communities in which MGM MIRAGE operates its businesses and where our employees live, work, and care for their families is a serious and dedicated commitment.\n\nMGM MIRAGE employee Christina Fuentes embraces a child during an event to benefit the Variety Day Home's Emergency Childcare Assistance Program in Las Vegas, one of the many programs supported by MGM MIRAGE to support the well-being of children. The program helps underwrite childcare assistance for low-income working parents.\n\n\n\n## In 2004, MGM MIRAGE\n\nemployees raised nearly $3 million for the Voice Foundation. 
Companywide, Aid for AIDS of Nevada (AFAN) was among one of the leading nonprofit agencies to receive the most funding support from the Voice Foundation.", - "page_start": 21, - "page_end": 21, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "The statements of financial position are to be read in conjunction with the notes to the financial statements.", - "page_start": 52, - "page_end": 52, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## Financial Information", - "page_start": 55, - "page_end": 55, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "## Financial Information", - "page_start": 31, - "page_end": 31, - "source_file": "NYSE_HIG_2001.pdf" - }, - { - "text": "\n\n## NOTES TO AND FORMING PART OF THE FINANCIAL STATEMENTS FOR THE FINANCIAL YEAR ENDED 30 JUNE 2000", - "page_start": 61, - "page_end": 61, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## NOTES TO AND FORMING PART OF THE FINANCIAL STATEMENTS FOR THE FINANCIAL YEAR ENDED 30 JUNE 2000\n\n4.\n\n## 5. INCOME TAX", - "page_start": 46, - "page_end": 46, - "source_file": "ASX_MRM_2000.pdf" - } - ] - }, - { - "references": { - "source_file": "news3.pdf", - "query": "what are career fairs for?", - "target_page": 1, - "target_passage": " In-person and virtual career fairs provide valuable opportunities to connect with prospective employers.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "\n\nHome / Money / 3 Great Resources to Kick-Start Your Financial Planning Career\n\n\n\nMONEY\n\n## 3 Great Resources to Kick-Start Your Financial Planning Career\n\n11/23/2022\n\n(NewsUSA) - Finding a rewarding career that offers growth potential, work-life balance and the satisfaction of helping others is a key priority for many job seekers. With those goals in mind, a career in financial planning should be a top contender, whether you are just starting out or looking to make a career change. 
But once you have decided that financial planning is the field for you, how do you get started? Here are three resources that can help you launch a successful financial planning career.\n\n- 1. Guide to Careers in Financial Planning. Based on interviews with leading financial services firms, this guide introduces you to the wide range of career opportunities in the financial planning profession. It identifies typical entry points and career tracks, explores the types of companies that hire financial planners and provides information on how to find financial planning career opportunities. It also includes resources such as a list of recommended questions to ask in a job interview.\n- 2. Scholarship Programs. Dozens of scholarship programs are available to support you on your professional journey. Some are offered directly through colleges and universities that have financial planning degree and certificate programs. Others are available through nonprofits and organizations like the CFP Board Center for Financial Planning, which administers 16 scholarship programs that help pay for the education and exam requirements to become a CERTIFIED FINANCIAL PLANNERTM professional. Financial services firms may offer scholarships or tuition reimbursements to employees to cover the costs of obtaining professional designations and credentials such as CFP® certification -- some of which may be required to advance within the company.\n- 3. Career Fairs. In-person and virtual career fairs provide valuable opportunities to connect with prospective employers. CFP Board's spring and fall career fairs are some of the most popular hiring events in the profession, with dozens of firms participating in these online exhibitions. Job seekers can visit employers' virtual exhibit booths and view open jobs and internships, apply for open positions and interact with employers through one-on-one video meetings and messaging. 
You can also visit the CFP Board Career Center to browse current job and internship opportunities in financial planning, as well as a collection of articles providing career guidance.\n\nOther top resources include career offices at your college or university, financial services companies' career websites and professional organizations that may have a local chapter near you.\n\nMaking the most of these resources will not only help you find a financial planning job, but also support your growth and development as a future financial planning professional. To learn more about CFP® certification, visit the CFP Board website.\n\nArticle Link\n\nhttps://about.newsusa.com/3-great-resources-to-kick-start-your-financial-planni…", - "page_start": 0, - "page_end": 0, - "source_file": "news3.pdf" - }, - { - "text": "## Nordstrom, Inc.\n\n## Notes to Consolidated Financial Statements\n\nDollar and share amounts in millions except per share, per option and per unit amounts\n\n## NOTE 9: FAIR VALUE MEASUREMENTS\n\nWe disclose our financial assets and liabilities that are measured at fair value in our Consolidated Balance Sheets by level within the fair value hierarchy as defined by applicable accounting standards:\n\n - Level 1: Quoted market prices in active markets for identical assets or liabilities\n - Level 2: Other observable market-based inputs or unobservable inputs that are corroborated by market data\n - Level 3: Unobservable inputs that cannot be corroborated by market data that reflect the reporting entity's own assumptions\n\nWe did not have any financial assets or liabilities that were measured at fair value on a recurring basis as of January 31, 2015 or February 1, 2014.\n\nFinancial instruments not measured at fair value on a recurring basis include cash and cash equivalents, accounts receivable and accounts payable and approximate fair value due to their short-term nature. 
We estimate the fair value of long-term debt using quoted market prices of the same or similar issues and, as such, this is considered a Level 2 fair value measurement. The following table summarizes the carrying value and fair value estimate of our long-term debt, including current maturities:\n\n| | January 31, 2015 | February 1, 2014 |\n|------------------------------------|--------------------|--------------------|\n| Carrying value of long-term debt 1 | $3,131 | $3,113 |\n| Fair value of long-term debt | 3,693 | 3,511 |\n\nWe also measure certain non-financial assets at fair value on a nonrecurring basis, primarily goodwill and long-lived tangible and intangible assets, in connection with periodic evaluations for potential impairment. See Note 1: Nature of Operations and Summary of Significant Accounting Policies for additional information related to goodwill, intangible assets and long-lived assets. We recorded no material impairment charges for these assets in 2014, 2013 and 2012. We estimate the fair value of goodwill and long-lived tangible and intangible assets using primarily unobservable inputs and, as such, these are considered Level 3 fair value measurements.\n\n## NOTE 10: LEASES\n\nWe lease the land or the land and buildings at many of our stores. Additionally, we lease office facilities, warehouses and equipment. Most of these leases are classified as operating leases and they expire at various dates through 2080. The majority of our fixed, non-cancelable lease terms are 15 to 30 years for Nordstrom full-line stores and 10 to 15 years for Nordstrom Rack stores. Many of our leases include options that allow us to extend the lease term beyond the initial commitment period, subject to terms agreed to at lease inception. 
Most of our leases also provide for payment of operating expenses, such as common area charges, real estate taxes and other executory costs, and some leases require additional payments based on sales, referred to as 'percentage rent.'\n\nFuture minimum lease payments as of January 31, 2015 are as follows:", - "page_start": 65, - "page_end": 65, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## NOTES TO THE CONSOLIDATED FINANCIAL STATEMENTS\n\n## NOTE 14 - FAIR VALUE MEASUREMENT\n\nThe following table presents financial assets and liabilities measured at fair value in the consolidated statement of financial position in accordance with the fair value hierarchy. This hierarchy groups financial assets and liabilities into three levels based on the significance of inputs used in measuring the fair value of the financial assets and liabilities. The fair value hierarchy has the following levels:\n\n - Level 1: quoted prices (unadjusted) in active markets for identical assets or liabilities;\n - Level 2: inputs other than quoted prices included within Level 1 that are observable for the asset or liability, either directly (i.e. as prices) or indirectly (i.e. derived from prices); and\n - Level 3: inputs for the asset or liability that are not based on observable market data (unobservable inputs).\n\nThe Level within which the financial asset or liability is classified is determined based on the lowest level of significant input to the fair value measurement. 
The financial assets and liabilities measured at fair value in the statement of financial position are grouped into the fair value hierarchy as follows:\n\n## Consolidated 31 December 2014\n\n| (US$'000) | Level 1 | Level 2 | Level 3 | Total |\n|------------------------------------|-----------|-----------|-----------|---------|\n| Assets measured at fair value | | | | |\n| Derivative commodity contracts | - | 9,476 | - | 9,476 |\n| Interest rate swap contracts | - | 107 | - | 107 |\n| Development and production | - | - | 455,084 | 455,084 |\n| Liabilities measured at fair value | | | | |\n| Interest rate swap contracts | - | (130) | - | (130) |\n| Net fair value | - | 9,453 | 455,084 | 464,537 |\n\n## Consolidated 31 December 2013\n\n| (US$'000) | Level 1 | Level 2 | Level 3 | Total |\n|------------------------------------|-----------|-----------|-----------|---------|\n| Assets measured at fair value | | | | |\n| Interest rate swap contract | - | 176 | - | 176 |\n| Liabilities measured at fair value | | | | |\n| Derivative commodity contracts | - | (219) | - | (219) |\n| Interest rate swap contracts | - | (147) | - | (147) |\n| Net fair value | - | (190) | - | (190) |\n\nDuring the years ended 31 December 2014 and 2013, respectively, there were no transfers between level 1 and level 2 fair value measurements, and no transfer into or out of level 3 fair value measurements.", - "page_start": 82, - "page_end": 82, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "## NOTES TO AND FORMING PART OF THE FINANCIAL STATEMENTS FOR THE FINANCIAL YEAR ENDED 30 JUNE 2000\n\n## 29. 
FINANCIAL INSTRUMENTS (continued)\n\n## (c) Net fair values\n\nThe aggregrate net fair values of financial assets and liabilities are identical to the carrying amount in the balance sheet.\n\nThe following methods and assumptions are used to determine the net fair values of financial assets and liabilities:\n\n## Cash and cash equivalents\n\nThe carrying amount approximates fair value because of their short term to maturity.\n\n## Trade debtors, other debtors and loans\n\nThe carrying amount approximates fair value.\n\n## Investments\n\nFor investments where there is no quoted market price, a reasonable estimate of the fair value is calculated based on the underlying net asset base of the investment.\n\n## Trade creditors, other creditors and accruals\n\nThe carrying amount approximates fair value.\n\n## (d) Credit risk exposures\n\nThe economic entity's maximum exposure to credit risk at balance date in relation to each class of recognised financial assets is the carrying amount of those assets as indicated in the balance sheet.\n\nCompany\n\n2000\n\n1999\n\n$\n\n$\n\n18,447,843\n\n15,578,523\n\n## 30. CONTINGENT LIABILITIES\n\nAs detailed in Note 11, the company has entered into a deed of cross-guarantee with certain whollyowned controlled entities. The total liabilities of these wholly-owned controlled entities (excluding amounts owed to the parent entity) for which the Company is potentially liable are:\n\n", - "page_start": 64, - "page_end": 64, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## notes to the consolidated Financial statements\n\nDollar amounts are in thousands of Canadian dollars (except share and per share amounts)\n\n## 5. Investment Properties (continued)\n\nThe investment property segment defined as Other consists of one commercial property of which the Company has a 50% ownership. 
The property has a fair value of $2.2 million (December 31, 2012 - $2.1 million).\n\nIPUC includes land held for future development, which is recorded at a fair value of $2.8 million (December 31, 2012 - $5.2 million) and properties under construction of $21.6 million (December 31, 2012 - $52.7 million). Fair value cannot be reliably determined for properties under construction as the projects are in the early stages of development and therefore IPUC is recorded at cost.\n\n## Sensitivity Analysis\n\nThe significant unobservable inputs used in the fair value measurements categorized within Level 3 of the fair value hierarchy include cap-rates, vacancy rates and management fee rates. Investment property valuations are most sensitive to changes in the cap-rate. The cap-rate assumptions for the investment properties are included in the following table:", - "page_start": 79, - "page_end": 79, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "The fair values of our Equity Derivatives are based on the quoted market value of RCI's Class B Non-Voting shares.\n\nFair value estimates are made at a specific point in time based on relevant market information and information about the financial instruments. 
The estimates are subjective in nature and involve uncertainties and matters of judgment.\n\nOur disclosure of the three-level hierarchy reflects the significance of the inputs used in measuring fair value:\n\n - GLYPH<129> Financial assets and financial liabilities in Level 2 include valuations using inputs based on observable market data, either directly or indirectly, other than the quoted prices.\n - GLYPH<129> Level 3 valuations are based on inputs that are not based on observable market data.\n\nThere were no material financial instruments categorized in Level 3 as at December 31, 2013 and 2012.\n\n - GLYPH<129> We determine fair value of financial assets and financial liabilities in Level 1 by referring to quoted prices in active markets for identical assets and liabilities.\n\nThe table below shows the financial instruments carried at fair value by valuation method as at December 31, 2013 and 2012.", - "page_start": 120, - "page_end": 120, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## management's Discussion and analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\nIPUC is also valued at fair value, except if such values cannot be reliably determined. In the case when a fair value cannot be reliably determined, such property is recorded at cost. The fair value of IPUC is determined using the capitalization of net income method.\n\nThe determination of the fair value of investment property requires the use of estimates such as future cash flows from assets and cap-rates applicable to those assets. In addition, development risks (such as construction and leasing risks) are also taken into consideration when determining the fair value of IPUC. These estimates are based on local market conditions existing at the reporting date. In arriving at their estimates of market values, the external valuator uses their market knowledge and professional judgment and does not rely solely on historical transaction comparables. 
The critical estimates and assumptions underlying the valuation of investment properties and developments are set out in note 5.\n\n## Fair Value of Financial Instruments\n\nWhere the fair value of financial assets and financial liabilities recorded in the Notes to the Consolidated Financial Statements cannot be derived from active markets, they are determined using valuation techniques, including the discounted cash flow model. Inputs to these models are taken from observable markets where possible, but where this is not feasible a degree of judgment is required in establishing fair values. The judgments include considerations of inputs such as liquidity risk, credit risk and volatility. Changes in assumptions about these factors could affect the reported fair value of financial instruments.\n\n## Changes in Accounting Policies\n\nThe accounting policies applied during the year ended December 31, 2013, are consistent with those used in the audited consolidated financial statements for the year ended December 31, 2012, except for the following new and amended IFRS and International Financial Reporting Interpretations Committee ('IFRIC') interpretations which were effective for periods beginning on or after July 1, 2012, and January 1, 2013:\n\n## IAS 1 - Financial Statement Presentation ('IAS 1') - Presentation of Items of Other Comprehensive Income ('OCI')\n\nThe amendments to IAS 1 change the grouping of items presented in OCI. Items that could be reclassified (or recycled) to profit or loss at a future point in time (for example, upon derecognition or settlement) would be presented separately from items that will never be reclassified. 
The adoption of this standard did not have an impact on the Company's financial position or performance.\n\n## IFRS 10 - Consolidated Financial Statements ('IFRS 10')\n\nIFRS 10 replaces the portion of IAS 27 - Consolidated and Separate Financial Statements ('IAS 27') that addresses the accounting for consolidated financial statements. IFRS 10 establishes a single control model that applies to all entities including special purpose entities. The changes introduced by IFRS 10 require Management to exercise significant judgment to determine which entities are controlled, and therefore, are required to be consolidated by a parent, compared with the requirements that were in IAS 27. The adoption of this standard did not have an impact on the Company's financial position or performance.\n\n## IFRS 11 - Joint Arrangements ('IFRS 11')", - "page_start": 61, - "page_end": 61, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "Estimated fair values have been determined by the Company using the best available data, as generally provided in the Company's regulatory reports, and an estimation methodology suitable for each category of financial instruments. For those loans and deposits with floating interest rates, it is presumed that estimated fair values generally approximate the carrying value.", - "page_start": 82, - "page_end": 82, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "The determination of the fair value of investment property requires the use of estimates such as future cash flows from assets and cap-rates applicable to those assets. In addition, development risks (such as construction and leasing risks) are also taken into consideration when determining the fair value of IPUC. These estimates are based on local market conditions existing at the reporting date. In arriving at their estimates of market values, the External Valuator uses their market knowledge and professional judgment and does not rely solely on historical transaction comparables. 
The critical estimates and assumptions underlying the valuation of investment properties and developments are set out in note 5.\n\n## Fair Value of Financial Instruments\n\nWhere the fair value of financial assets and financial liabilities recorded in the Notes to the Consolidated Financial Statements cannot be derived from active markets, they are determined using valuation techniques, including the discounted cash flow model. Inputs to these models are taken from observable markets where possible, but where this is not feasible a degree of judgment is required in establishing fair values. The judgments include considerations of inputs such as liquidity risk, credit risk and volatility. Changes in assumptions about these factors could affect the reported fair value of financial instruments.\n\n## Changes in Accounting Policies\n\nThe accounting policies applied during the year ended December 31, 2013 are consistent with those used in the audited consolidated financial statements for the year ended December 31, 2012, except for the following new and amended IFRS and International Financial Reporting Interpretations Committee ('IFRIC') interpretations which were effective for periods beginning on or after July 1, 2012, and January 1, 2013:\n\n - IAS 1 - Financial Statement Presentation ('IAS 1') - Presentation of Items of Other Comprehensive Income ('OCI')\n\nThe amendments to IAS 1 change the grouping of items presented in OCI. Items that could be reclassified (or recycled) to profit or loss at a future point in time (for example, upon derecognition or settlement) would be presented separately from items that will never be reclassified. 
The adoption of this standard did not have an impact on the Company's financial position or performance.", - "page_start": 75, - "page_end": 75, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "In 2013, we repurchased for cancellation a total of 546,674 (2012 9,637,230) Class B Non-Voting shares for total proceeds of $22 million (2012 - $350 million), resulting in a reduction to Class B Non-Voting share capital, share premium and retained earnings of $1 million, nil and $21 million (2012 - $10 million, $243 million and $97 million), respectively. All of the 2013 purchases were made in June 2013 and were carried out through the facilities of the TSX. In 2013, we cancelled 43,993 Class B Non-Voting shares that related to old employee share plans for proceeds of nil.\n\n## Available-forS ale Financial Assets Reserve\n\nWe carry available-for-sale investments at fair value on the consolidated statements of financial position, and record changes in fair value in the available-for-sale financial assets reserve as a component of equity, through other comprehensive income, until the investments are disposed of or impaired, at which time we record the change in fair value in net income.\n\n## Hedging Reserve\n\nWe measure all derivatives at fair value on the consolidated statements of financial position, and record changes in fair value of cash flow hedging derivatives in the fair value reserve as a component of equity through other comprehensive income, if the derivatives are effective and until we recognize the hedged asset or liability in net income.\n\n## Defined Benefit Pension Plans\n\nOur defined benefit pension plan obligation is actuarially determined at the end of the year, and we recognize remeasurements in other comprehensive income and retained earnings.", - "page_start": 123, - "page_end": 123, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "news3.pdf", - "query": "What are the priorities for job seekers ?", - 
"target_page": 1, - "target_passage": " Finding a rewarding career that offers growth potential, work-life balance and the satisfaction of helping others is a key priority for many job seekers.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "\n\nHome / Money / 3 Great Resources to Kick-Start Your Financial Planning Career\n\n\n\nMONEY\n\n## 3 Great Resources to Kick-Start Your Financial Planning Career\n\n11/23/2022\n\n(NewsUSA) - Finding a rewarding career that offers growth potential, work-life balance and the satisfaction of helping others is a key priority for many job seekers. With those goals in mind, a career in financial planning should be a top contender, whether you are just starting out or looking to make a career change. But once you have decided that financial planning is the field for you, how do you get started? Here are three resources that can help you launch a successful financial planning career.\n\n- 1. Guide to Careers in Financial Planning. Based on interviews with leading financial services firms, this guide introduces you to the wide range of career opportunities in the financial planning profession. It identifies typical entry points and career tracks, explores the types of companies that hire financial planners and provides information on how to find financial planning career opportunities. It also includes resources such as a list of recommended questions to ask in a job interview.\n- 2. Scholarship Programs. Dozens of scholarship programs are available to support you on your professional journey. Some are offered directly through colleges and universities that have financial planning degree and certificate programs. Others are available through nonprofits and organizations like the CFP Board Center for Financial Planning, which administers 16 scholarship programs that help pay for the education and exam requirements to become a CERTIFIED FINANCIAL PLANNERTM professional. 
Financial services firms may offer scholarships or tuition reimbursements to employees to cover the costs of obtaining professional designations and credentials such as CFP® certification -- some of which may be required to advance within the company.\n- 3. Career Fairs. In-person and virtual career fairs provide valuable opportunities to connect with prospective employers. CFP Board's spring and fall career fairs are some of the most popular hiring events in the profession, with dozens of firms participating in these online exhibitions. Job seekers can visit employers' virtual exhibit booths and view open jobs and internships, apply for open positions and interact with employers through one-on-one video meetings and messaging. You can also visit the CFP Board Career Center to browse current job and internship opportunities in financial planning, as well as a collection of articles providing career guidance.\n\nOther top resources include career offices at your college or university, financial services companies' career websites and professional organizations that may have a local chapter near you.\n\nMaking the most of these resources will not only help you find a financial planning job, but also support your growth and development as a future financial planning professional. 
To learn more about CFP® certification, visit the CFP Board website.\n\nArticle Link\n\nhttps://about.newsusa.com/3-great-resources-to-kick-start-your-financial-planni…", - "page_start": 0, - "page_end": 0, - "source_file": "news3.pdf" - }, - { - "text": "## Diversity\n\nThe Company has a policy to improve the diversity of its workforce over time by identifying women and individuals from under-represented backgrounds for recruitment, and by rewarding and promoting employees on the basis of performance.\n\nHowever, at this stage of its development, the Company has a small Board of Directors, and a small management team which is geographically dispersed and because of the industry in which the Company operates, the Board does not consider it to be practicable to set measurable objectives to achieve greater gender diversity at this time.\n\nIn addition, the Board acknowledges the benefits of seeking to improve gender diversity at all levels in the Company over time and will keep this issue under review.\n\nThe Company aims to foster continuous improvement in the area of diversity; building on achievement realised through the implementation of historical diversity initiatives, by applying principles successfully used at our leading operation in this area, to other parts of the business.\n\nOur flagship 'Chatree' Mine in Thailand boasts the enviable statistic of having equal representation by women on the senior management team. Recruitment, training and promotion principles employed at Chatree are currently being applied to our 'Challenger' Mine in Australia, where we currently have 14% representation of women across the senior management and professional categories and to other parts of the business.\n\nThere is currently no representation by women on our Board of Directors. 
Whilst this is in part reflective of the relatively small size of the Board and stage of development of key elements of the business, it forms part of an overall business review process to consider the issue of gender diversity at this level and will be the subject of ongoing review.\n\nThe Company considers that it will benefit from its ongoing commitment to promote a diverse workforce with treatment of employees and future employees on the basis of merit, abilities and potential, regardless of gender, colour, ethnic or national origin, race, disability, age, sexual orientation, gender reassignment, socioeconomic background, religious or political belief, non / trade union membership, family circumstances or other irrelevant distinction.\n\nThe Company has set various criteria and procedures in order to support equality and diversity in the workforce and applies these principles to:\n\n - 〉 Provide fair access to workplace opportunities and benefits, including internal promotion, leadership development, flexible work practices and fair and comparable wages;\n - 〉 Attracting and retaining a skilled and diverse workforce;\n - 〉 Creating an inclusive workplace culture where discriminatory behaviour is unacceptable; and\n - 〉 Providing an effective grievance mechanism for employees.\n\n## Current Proportion of Women Employees\n\n| Board | 0.0% |\n|-------------------|--------|\n| Senior Executives | 0.0% |\n| Senior Managers | 1.8% |\n| Managers | 1.0% |\n| Professionals | 8.6% |\n| Non-professionals | 6.4% |\n| Total Workforce | 17.8% |\n\n## Share Trading Policy\n\nIn the interests of shareholder confidence and compliance with insider trading laws, the Company has formal policies governing the trading of the Company's securities by Directors, officers and employees. 
Details of Directors' shareholdings are disclosed in the Directors' Report.", - "page_start": 38, - "page_end": 38, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "\n\nF or over 100 years Shenandoah Telecommunications Company has been committed to providing outstanding service to our customers. Our employees take that same dedication after hours to make a difference in their community.\n\nWe take this opportunity to share with you, our shareholders, the stories of just a few of your dedicated employees.\n\nPatty Pomeroy\n\n\n\nVolunteerism is in Patty Pomeroy's blood. Her grandfather was a dispatcher for the rescue squad in Middletown, VA for 25 years and her grandmother was in the ladies auxiliary. Her father was a charter member of the Middletown Rescue Squad. In 1997, Patty, a customer service representative at Shentel for four years, continued the family tradition by earning her Emergency Medical Technician certification and going to 'work' for the Strasburg Rescue Squad. Patty is the administrator of membership recruitment and retention for the squad and is the liaison coordinator for junior squad members under 18. It is her job to make sure that new members are brought in to the squad and current members stay active.\n\n'There is a great satisfaction that comes from knowing that what you can do will help people.'\n\nJeff Beard has been an installer repairman with Shentel for almost five years. Two years ago, Jeff helped start Project Isaiah 58, a faith-based recovery ministry that reaches out to people who are struggling with addiction. Project Isaiah 58 has weekly group meetings in Winchester, Woodstock and Warrenton, VA. 
Jeff, who lives in Winchester, participates in the group meetings and also makes time to meet one-on-one with people who need personal attention.\n\n'I feel the need to reach out to people who are suffering.'\n\nJeff Beard\n\n\n\nJohn Gardner has been with Shentel for two years as a PCS technician in Central Pennsylvania, but for almost a year of that time he was on Naval Reserve duty in Sasebo, Japan. John joined the Reserves after serving 10 years of active duty. In October 2002, he was activated under Noble Eagle-Enduring Freedom as part of the increase in security at bases around the world. John worked on Motorola radios and repeater systems while stationed in Japan. It was tough for the serviceman to be away from his wife and children, but John believes very strongly in serving his country.\n\n'Being in the Reserves is a way for me to be a civilian and still serve my country.'\n\nJohn Gardner\n\nAt Shentel, George Brinkley, the store manager in Front Royal, VA, is known for being one of the biggest fund-raisers for the Shenandoah County American Cancer Society Relay for Life event. In his six years at the Company, George has raised nearly $20,000. In 2003, he raised $4,246 and was recognized as the top individual fund-raiser for the entire event.\n\nIn 2002, George was chairman of the parade committee for the Woodstock, VA 250th anniversary celebration. Under George's leadership, the 26-member committee worked for a year preparing for the parade, which was the largest in the town's history.\n\n'I just have a knack for volunteering. 
I want to make my community better any way I can.'\n\nGeorge Brinkley\n\n\n\n■", - "page_start": 4, - "page_end": 4, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "\n\n## CHAIRMAN'S REPORT\n\nLabour hire is heavily dependent upon the quality of the personnel database and our intention has been announced to offer training at Dampier, Broome and Darwin for those who live in the North West and wish to work in the offshore industry there. Planning for this new initiative is well advanced and we expect to be running courses for prospective offshore employees in coming months. Although the training program is not directed to any particular community group, it has been encouraging to have active support from Aboriginal leaders in the Kimberley region.\n\nWorld prospects for energy, the need for Australia to add value to its resources, Government initiatives for the support of these activities and environmental imperatives, heavily favour gas, giving every indication that Mermaid Marine's development push has been extremely timely.\n\nIt is also important to draw attention to increased efforts in terms of health, safety and environmental protection. Our workplace is largely at sea, where operations involve natural dangers and the safety of our people is paramount. We also work in a setting where the tasks in which we are involved cast us in the role of environmental caretakers of the sea and coastline.\n\nOver the past twelve months, we have worked even more closely with producers to take this side of our business to the highest possible standard. We are proud of the achievement and at the time of this report, despite the inherent dangers involved in the work, our employees have accrued a record 348 days free of Lost Time Injuries, a tremendous effort.\n\nAverage turnover for the last two years was $20 million, our target in the near term is to achieve earnings of at least $100million, with appropriate levels of accompanying profit. 
That will be addressed through our policy of strategic positioning and development in the North West of Australia, and also by acquisition where merger or purchase will add to our earnings and strengths. Mermaid Marine Australia Limited is in excellent shape, with confidence that we are well able to pursue and secure our ambitious program.\n\nAlan Birchmore\n\nChairman", - "page_start": 9, - "page_end": 9, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "example, in Sweden. 378 Meanwhile, the spectrum of guidance developed regarding work-related psychosocial risks is very wide; it covers aspects such as job satisfaction (overall level of wellbeing), engagement, performance and work-related stress, 379 and also discrimination, harassment, aggression and violence. 380\n\n## 6.2 EU and national OSH strategies\n\nThe EU and many Member States applied and apply strategic approaches , based on EU or national evidence of the state of OSH. OSH strategies are a steering instrument to focus the activities of all actors on major recognised deficits of OSH infrastructures or processes. 381\n\nThe newest EU Strategic Framework on Health and Safety at Work 2021-2027 puts the focus on change, with the title 'Occupational safety and health in a changing world of work' . 382 Consequently, the strategic framework focuses on three key objectives for these years:\n\n - · anticipating and managing change in the new world of work brought about by the green, digital and demographic transitions;\n - · improving prevention of workplace accidents and illnesses;\n - · increasing preparedness for any potential future health crises.\n\nThe proposed focus areas and actions are related to these three objectives. Under the first key objective there are actions like 'Modernising and simplifying EU OSH rules in the context of the green and digital transitions'; a special focus is on psychosocial and ergonomic risks. 
The second objective promotes a vision zero approach to work-related deaths, particularly referring to hazardous substances and cardiovascular diseases, the promotion of health at work and inclusive workplaces for all. 383\n\nThe third objective responds to the impact of the pandemic situation in 2020 and 2021. It includes the development of emergency procedures for future similar situations ('Health crisis'). The Strategic Framework repeats and corroborates the value of research and data-based evidence by stating: 'Research and data collection, both at EU and national level, are a pre-condition for the prevention of work-related diseases and accidents. Scientific advice and the latest technological developments feed into OSH legislation and policy.'\n\nAlso, many Member States have agreed on provision of better data as an objective in their national strategies. 384 The EU strategy often gives orientation for the development of national OSH strategies. Under the last strategy period, 24 of the 27 Member States had applied a strategy. Many national OSH strategies contained similar targets. EU-OSHA published an overview report on national strategies, and the OSH Barometer contains as one indicator a harmonised overview on the aspects of national strategies. 385\n\nOSH strategies are regarded as an important and innovative policy area, a chance for better collaboration, and also a very relevant joint national OSH activity. Those strategies help in priority setting and focused action on weaknesses. Strategies were often agreed in social dialogue processes, and many strategy actors also developed new and better monitoring instruments and indicators. 386 Labour inspections play an important or essential role in most of these strategies. 
387\n\n\n\nOSH Barometer Steering of OSH, National strategies:\n\nhttps://visualisation.osha.europa.eu/osh-barometer/osh-steering/national-strategies\n\nOSHWiki: Section 'OSH System at national level', descriptions of the OSH Systems of the EU Member States: https://oshwiki.eu/wiki/Category:OSH\\_systems\\_at\\_national\\_level", - "page_start": 123, - "page_end": 123, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## Ronald James\n\nBSc (Geology), MAusIMM, MAIG\n\n## General Manager Exploration and Resource Development\n\nRon James has 30 years of experience in exploration and mining at management level inclusive of setting up gold mines and exploration projects from their earliest stages through to development and sustainability. Before joining Kingsgate, he was Chief Mine Geologist at the Gold Ridge Mine in the Solomon Islands and later Group Exploration Manager for Ross Mining NL. Ron is familiar with the technical and operating requirements for emerging projects in a variety of terrains and environments and has a strong focus on maximising returns from ore bodies through optimum waste and ore classification as well as increasing reserves from nearmine resource development.\n\nu\n\nSenior Management", - "page_start": 40, - "page_end": 40, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "## Table of Contents\n\n## Forward-Looking Statements\n\nThe discussions in this Quarterly Report on Form 10-Q contain forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are based on assumptions with respect to the future and management's current expectations, involve certain risks and uncertainties and are not guarantees. 
These forward-looking statements include, but are not limited to, statements concerning supply chain constraints, our strategy, competition, future operations and production capacity, future financial position, future revenues, projected costs, profitability, expected cost reductions, capital adequacy, expectations regarding demand and acceptance for our technologies, growth opportunities and trends in the markets in which we operate, prospects and plans and objectives of management. The words 'anticipates,' 'believes,' 'could,' 'estimates,' 'expects,' 'intends,' 'may,' 'plans,' 'projects,' 'will,' 'would,' 'predicts' and similar expressions are intended to identify forward-looking statements, although not all forward-looking statements contain these identifying words. We may not actually achieve the plans, intentions or expectations disclosed in our forward-looking statements and you should not place undue reliance on our forward-looking statements. Future results may differ materially from the plans, intentions and expectations disclosed in the forward-looking statements that we make. These forward-looking statements involve risks and uncertainties that could cause our actual results to differ materially from those in the forwardlooking statements, including, without limitation, the risks set forth in Part I, Item 1A, 'Risk Factors' of the Annual Report on Form 10-K for the fiscal year ended December 31, 2023 and that are otherwise described or updated from time to time in our other filings with the Securities and Exchange Commission (the 'SEC'). The discussion of such risks is not an indication that any such risks have occurred at the time of this filing. 
We do not assume any obligation to update any forward-looking statements.", - "page_start": 3, - "page_end": 3, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "As one of Japan's leading financial services groups, the SMFG Group is taking the lead in aggressively addressing the four priority issues we have identified as significantly impacting the nation.\n\n## Ensuring peace of mind for the future\n\n## Shrinking and aging population\n\nCurrently, the proportion of people aged 65 or over in Japan has reached 23.4%*. SMFG will help create frameworks enabling the elderly to enjoy a vibrant lifestyle with peace of mind, through support for life-cycle planning and other measures. The SMFG Group aims to create systems and a corporate culture that foster a sound balance between work and care needs, given that many group employees will later need to nurse ailing relatives. 
\n\n*Estimates by the Statistics Bureau, Ministry of Internal Affairs and Communications (October 1, 2011)\n\n## Further measures needed\n\n - ● Support businesses involved in health, medical and nursing care\n - ● Expand range of financial products and services for the elderly (planning for asset management for old age)\n - ● Foster a better work-life balance\n\n\n\nSymbiosis and diversity\n\n## Global challenges\n\nIn anticipation of further global expansion, the SMFG Group is aggressively internationalizing its operations both in Japan and overseas. Initiatives include aggressive development of advisory services for infrastructure upgrades in emerging economies, a cross-departmental endeavor, as well as contributions to the international community and the environmental business, chiefly through branches and representative offices overseas.\n\nWe will continue to discuss and review various approaches to issues facing the international community so as to build up trust internationally as a global player. 
\n\n\n\n## Further measures needed\n\n - ● Share expertise in corporate social responsibility with the international community\n - ● Improve financial services in preparation for the globalization of operations in Japan (multilingual support)\n - ● Promote diversity\n\n", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "\n\n## SOCIAL RESPONSIBILITY\n\n\n\nATTAINING LEADERSHIP IN OUR INDUSTRY AND THE PRIVILEGE OF BEING CANADIANS' COMPANY-OF-CHOICE IS ABOUT DELIVERING THE BEST INNOVATIVE SERVICES WHILE BEING A RESPONSIBLE BUSINESS - AIMS THAT ARE DEEPLY CONNECTED.\n\nEach year we work hard to build a more sustainable business and contribute to building a more sustainable world. Applying social and environmental responsibility throughout Rogers' daily operations - and beyond our own walls to our supply chain and communities - helps us attract customers, enhance employee recruitment and retention, mitigate risks and provide value to all of our stakeholders.\n\nTo create a great workplace, we focus on all aspects of the employee experience - investing millions in employee training and development, providing attractive compensation and benefits, and developing a", - "page_start": 19, - "page_end": 19, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Exploration\n\nWith the approvals of the Special Prospecting Licence ('SPL') applications in Thailand still awaiting the Minister of Industry's consent, exploration attention over the past 12 months has focused on new exploration opportunities and Mineral Resource enhancement targets within the Mining Leases. 
This exploration formed part of a strategic exploration program within the mining leases at Chatree that commenced in late 2012. The program has successfully defined several new areas of mineralisation within the Mining Lease, most notably at Q and A North Prospects, and has also upgraded several larger areas of Inferred Resources to the Measured and Indicated Mineral Resource category.\n\n## Looking Ahead\n\nOver the current financial year and beyond, Kingsgate remains focused on optimising production within an uncertain metal price environment, continuing to build resources and reserves and advancing the development project pipeline of Nueva Esperanza and Bowdens. These initiatives are designed to grow earnings per share for the benefit of all shareholders.\n\nIn late September, Kingsgate's Thai subsidiary, Akara Resources Public Company Limited ('Akara') has submitted its listing application and draft Prospectus to the Thai Securities Exchange Commission (SEC) and the Stock Exchange of Thailand (SET) for an initial public offering of its shares on the SET.\n\nThe SEC and SET will review the draft Prospectus in the coming months in order to approve the listing of Akara. The decision to list Akara will depend on market conditions and other factors at the time of approval.\n\nGroup gold production for the full year to 30 June 2014 is expected to be in the range of 190,000 to 210,000 ounces. This includes 120,000 to 130,000 ounces from Chatree and 70,000 to 80,000 ounces from Challenger.", - "page_start": 6, - "page_end": 6, - "source_file": "ASX_KCN_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "Understanding_Creative_Commons_license_(infographic).pdf", - "query": "What does ShareAlike mean in terms of licencing ?", - "target_page": 1, - "target_passage": "adaptations based on this work must be licensed under the same license.", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "- 42. 
-(1) A person ordinarily resident in the United Kingdom and who pursues a work-related activity in another country to which they usually travel at least once a week which is certified by their employer, or in the case of a self-employed person certified by them, as being-\n - (a) an activity that cannot be done remotely; and\n - (b) critical.\n - (2) For the purposes of sub-paragraph (1), an activity is critical if-\n - (a) it would be defined as critical, or equivalent terminology, in legislation or guidance in use in that country; or\n - (b) if the country has no such definition, if a person is pursuing an activity which would fall under one of the other paragraphs in this Schedule if it were carried out in the United Kingdom.\n - 43. -(1) A person who has an offer of employment for seasonal work to carry out specified activities in edible horticulture on a specified farm.\n - (2) For the purposes of sub-paragraph (1)-\n - (a) 'seasonal work' is employment which fluctuates or is restricted due to the season or time of the year;", - "page_start": 45, - "page_end": 45, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "\n\n## Creative Commons license\n\n## Understanding\n\nbefore licensing your work\n\n## THREE-LAYER DESIGN\n\nCreative Commons (CC) license has three layers:\n\n- \"Legal Code\" (base layer): contains terms and conditions to be used by lawyers and legally applicable in court.\n- \"Human Readable\" (commons deeds): contain the summary of the legal code and key terms.\n- \"Machine Readable\": contains HTML or codes for machines to recognize a work is available under a Creative Commons license.\n\n\n\n## FOUR ELEMENTS\n\n- BY (\"Attribution\"): users must credit the author of the work they are using.\n- SA (\"ShareAlike\"): adaptations based on this work must be licensed under the same license.\n- NC (\"NonCommercial\"): the work is only available to be used for noncommercial purposes.\n- ND (\"NoDerivative\"): reusers cannot share 
adaptations of the work.\n\n\n\n## SIX LICENSES\n\n- CC BY (\"Attribution\") allows people to use the work for any purpose (even commercially and even in modified form) as long as they give attribution to the creator.\n- CC BY-SA (\"Attribution-ShareAlike\") allows people to use the work for any purpose (even commercially and even in modified form), as long as they give attribution to the creator and make any adaptations they share with others available under the same or a compatible license.\n- CC BY-NC (\"Attribution-NonCommercial\") allows people to use the work for noncommercial purposes only, and only as long as they give attribution to the creator.\n- CC BY-NC-SA (\"Attribution-NonCommercial-ShareAlike\") allows people to use the work for noncommercial purposes only, and only as long as they give attribution to the creator and make any adaptations they share with others available under the same or a compatible license.\n- CC BY-ND (\"Attribution-NoDerivative\") allows people to use the unadapted work for any purpose (even commercially), as long as they give attribution to the creator.\n- CC BY-NC-ND (\"Attribution-NonCommercial-NoDerivative\") allows people to use the unadapted work for noncommercial purposes only, and only as long as they give attribution to the licensor.\n\n## REMIND THAT…\n\nCC license only applicable to the work that is within the scope of copyright law. 
CC license can be used when …\n\n- you want to give others permissions to freely copy and redistribute your work, and\n- you want to give others permission to freely transform, alter, or otherwise create derivative works based on your work.\n\n\n\n\n\n## CC LICENSE CAN'T BE USED FOR …\n\nfair use, fair dealing, or some other limitation and exception to copyright applies the the work.\n\n## ALSO FOR …\n\nthe work that is already in the Public Domain.\n\nFor those who want to waive their rights from copyright protection, use CC0 (\"CC Zero\").\n\n## NOW, SHARE YOUR WORK!\n\nhttps://creativecommons.org/choose/\n\n\n\n\n\nBY\n\n\n\nSA\n\n\n\nND\n\nNC", - "page_start": 0, - "page_end": 0, - "source_file": "Understanding_Creative_Commons_license_(infographic).pdf" - }, - { - "text": "| Induced drag coefficient, | 68 |\n| Effect of lift coefficient Effect of aspect ratio | |\n| Effectoflift........................................................ EffectofsPeed...................................................... | 68 |\n| Effea of altitude.. | |\n| | 2; |\n| Effect of aspect ratio. | 71 |\n| Lift and dra Influcncc of ow aspxt ratio configurations f characteristics | |\n| EFFECT StiEEPtiACK. | 74 |\n| Spanwise lift distribution localinducedflow................................................. | 74 |\n| | 76 |\n| Effect on lift and drag characteristics. .', | 76 |\n| STALL PATI'ERNS. | 77 |\n| Pnvorablestallpattern.............................................. EffeaofpIanform.................................................. | :: |\n| Taper | |\n| Sweepback | 86 |\n| Modifications for stall characteristics. | |", - "page_start": 8, - "page_end": 8, - "source_file": "00-80T-80.pdf" - }, - { - "text": "| | Figure 2: EDP Home Page (lower part) .................................................................................................... 9 |\n| Figure 3 | - Dataset Resource Page with Link to Geo-Spatial Visualisation. ........................................... 
38 |\n| Figure 4 | - Selection of layers................................................................................................................. 39 |\n| Figure 5 | - Feature Info tool. .................................................................................................................. 40 |\n| Figure 6 | - Legend tool. .......................................................................................................................... 40 |\n| Figure 7 - Disclaimer and tutorial buttons. ........................................................................................... 41 | Figure 7 - Disclaimer and tutorial buttons. ........................................................................................... 41 |\n| Figure 8 - Error message dialog. ........................................................................................................... 42 | Figure 8 - Error message dialog. ........................................................................................................... 42 |", - "page_start": 2, - "page_end": 2, - "source_file": "edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf" - }, - { - "text": "| Figure 14: Employed persons and percentage of working time under pressure - Eurostat LFS Ad hoc 2019 ....................................................................................................................................................... 35 |\n| Figure 15: Percentage of employed persons with working time under pressure (per country, sum of responses 'Always' and 'Often') - LFS Ad hoc 2019 ............................................................................ 36 |\n| Figure 16: Exposure to physical risks - ESENER, EWCS and LFS ..................................................... 39 |\n| Figure 17: Physical health risks compared (%) - EWCS 2015 ............................................................. 
42 |\n| Figure 18: Employment types in EU27, development 2005 to 2022 - Eurostat .................................. 47 |\n| Figure 19: Employed persons by main place of work - Eurostat .......................................................... 51 |\n| Figure 20: Employees working mostly from home (in % of employed persons) - Eurostat .................. 52 |\n| Figure 21: Development of the total number of non-fatal accidents at work and incidence rates (accidents per 100,000 workers), 1998 and 2019 - Eurostat ................................................................................. 65 |\n| Figure 22: Share of people reporting any accident and accidents resulting in time off work by country, 2020 ....................................................................................................................................................... 70 |\n| Figure 23: Comparison of the average incidence rate of fatal accidents in two periods: 2010-2014 and 2015-2020 ............................................................................................................................................. 71 |\n| Figure 24: Main causes of mortality 2019, EU27 .................................................................................. 79 |\n| Figure 25: Work-related deaths - estimates by WHO/ILO and ICOH for EU27 ................................... 83 |\n| Figure 26: Work-related DALYs - estimates by WHO/ILO and ICOH for the EU27 ............................. 84 |\n| Figure 27: Prevalence of musculoskeletal diseases - EWCS 2015 ..................................................... 88 |", - "page_start": 4, - "page_end": 4, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "| Chapter 13 Conclusion: Some Personal Thoughts and Opinions ............................................................... 
88 |\n| Chapter 14 Bibliography ............................................................................................................................. 89 |\n| 14.1 W3C Documents ............................................................................................................................. 89 |\n| 14.2 Web Sites, Tools, And Presentations. ............................................................................................. 89 |\n| 14.3 Papers .............................................................................................................................................. 89 |\n| 14.4 Books .............................................................................................................................................. 90 |\n| 14.5 Vendors |", - "page_start": 3, - "page_end": 3, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "| Next steps .................................................................................................................................................... 29 |\n| Focusing on core services ............................................................................................................. 30 |\n| Common serverless services .................................................................................................................... 30 |\n| Networking & content delivery ......................................................................................................... 32 |\n| Front-end web & mobile ..................................................................................................................... 32 |\n| Application integration ........................................................................................................................ 32 |\n| Database & storage .............................................................................................................................. 
32 |\n| Compute .................................................................................................................................................. 33 |\n| Security, identity & compliance ......................................................................................................... 33 |\n| Management & governance ................................................................................................................ 33 |\n| Developer tools and code instrumentation ..................................................................................... 34 |\n| Streaming & batch processing ........................................................................................................... 34 |\n| Typical microservice example .................................................................................................................. 34 |", - "page_start": 2, - "page_end": 2, - "source_file": "serverless-core.pdf" - }, - { - "text": "| Equity in earnings of unconsolidated subsidiaries and affiliates ...................................................................................... | Equity in earnings of unconsolidated subsidiaries and affiliates ...................................................................................... | (1.9) | (0.6) |\n| Adjustments in deferred tax assets and liabilities due to change in tax rate ......................................................... | Adjustments in deferred tax assets and liabilities due to change in tax rate ......................................................... | - | - |\n| Other .................................................................................................................................................................................................................................. 
| Other .................................................................................................................................................................................................................................. | 0.1 | 0.2 |\n| Effective tax rates .................................................................................................................................................................................................................. | Effective tax rates .................................................................................................................................................................................................................. | 32.5% | 29.7% |", - "page_start": 85, - "page_end": 85, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "| Other .............................................................................................................................................................................................. | 3,735 | - | 34,907 |\n| Lease obligation ................................................................................................................................................................... | 154,876 | 134,643 | 1,447,439 |\n| ..................................................................................................................................................................................................................... | 2,858,050 | 2,756,127 | 26,710,748 |\n| Less current portion ................................................................................................................................................................... | 894,877 | 1,061,334 | 8,363,337 |\n| ..................................................................................................................................................................................................................... 
| | ¥1,694,793 | |\n| | ¥1,963,173 | | $18,347,411 |", - "page_start": 82, - "page_end": 82, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "| overstress, effect on service life. ................ | 344 |\n| parasite area, equivalent. .... .............. | 89 |\n| parasitedrag ................................. | 87 |\n| performance, Airplane Performance, Chapter II. | 95 |\n| pilot induced oscillarion. ..................... | 314 |\n| pitching moment airfoil. .................................... | 47 |\n| longitudinal. .................. ........ | 249, 251 |\n| pitch-up ..................................... | 313 |\n| pitot-static system. ............... ........... | |\n| planform effects. ............................. | 61 |\n| power effects on stability. ..................... | 259 |\n| power off stability ... ........................ | 259 |\n| power required. ................... .......... | 96 |\n| power settling. ................... ........ | 4c3 |\n| preignition, ........... .... .... | 140 |\n| pressure altitude. ......... .... ... ... | .. 4 |\n| pressure distribution. .. ......... | ..... 
14 |", - "page_start": 432, - "page_end": 432, - "source_file": "00-80T-80.pdf" - } - ] - }, - { - "references": { - "source_file": "Understanding_Creative_Commons_license_(infographic).pdf", - "query": "What is the most restrictive Creative Commons licence?", - "target_page": 1, - "target_passage": "CC BY-NC-ND (\"Attribution-NonCommercial-NoDerivative\") allows people to use the unadapted work for noncommercial purposes only, and only as long as they give attribution to the licensor.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.
org/licenses/by/4.0/.\n\n© The Author(s) 2024", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed4.pdf" - }, - { - "text": "Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate\n\ncredit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.\n\n© The Author(s) 2025", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed3.pdf" - }, - { - "text": "with. The vast majority of in-copyright books are out-of-print or out-of-commerce, and most are not actively managed by their rightsholders. There is no official registry of copyrighted works and their owners, and existing datasets can be incomplete or erroneous. 16\n\nAs a result, there may be no way to license the vast majority of in-copyright books, especially those that have or have had limited commercial value. 
Put differently, the barrier to using 17 most books is not simply to pay publishers; even if one had significant financial resources, licensing would not enable access to most works.\n\n## Permissively licensed works\n\nThere are books that have been permissively licensed in an easily identifiable way, such as works placed under Creative Commons (CC) licenses. Such works explicitly allow particular uses of works subject to various responsibilities (e.g., requiring attribution by the user in their follow-on use).\n\nWhile such works could be candidates for inclusion in a books data commons, their inclusion depends on whether the license's terms can be complied with in the context of AI training. For instance, in the context of CC licensed works, there are requirements for proper attribution across all licenses (the CC tools Public Domain Dedication (CC0) and Public Domain Mark (PDM) are not licenses and do not require attribution). 18", - "page_start": 9, - "page_end": 9, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "Combined, these limits can enable effective foreign control of up to 46.7 % .\n\nThe chief executive officer and 80 % of the members of the Board of Directors of the operating licensee must be resident Canadians. There are no restrictions on the number of non-voting shares that may be held by non-Canadians at either the holding-company or licenseecompany level. Neither the Canadian carrier nor its parent may be otherwise controlled in fact by non-Canadians. Subject to appeal to the federal Cabinet, the CRTC has the jurisdiction to determine as a question of fact whether a given licensee is controlled by nonCanadians.\n\nPursuant to the Telecommunications Act and associated regulations, the same rules also apply to Canadian telecommunications carriers such as Wireless, except that there is no requirement that the chief executive officer be a resident Canadian. 
We believe we are in compliance with the foregoing foreign ownership and control requirements.\n\nOn June 29, 2012, Bill C-38 amending the Telecommunications Act passed into law. The amendments exempt telecommunications companies with less than 10 % of total Canadian telecommunications market measured by revenue from foreign investment restrictions. Companies that are successful in growing their market shares in excess of 10 % of total Canadian telecommunications market revenues other than by way of merger or acquisitions will continue to be exempt from the restrictions.\n\n## WIRELESS\n\n## Consultation on the Renewal of Cellular and Personal Communications S ervices (PC S ) S pectrum Licences\n\nIn March 2011, Industry Canada released its decisions about the renewal process for cellular and PCS licences that began expiring at that time. Key things to note:\n\n - GLYPH<129> At the end of the current licence term, new cellular and PCS licences with a 20-year term will be issued to licensees that are in compliance with all licence conditions.\n - GLYPH<129> The previously existing annual fee of $0.0351 per MHz per population of the licenced area will continue to apply to all cellular and PCS licences, including those initially assigned by auction. The Minister of Industry Canada may review and amend the fees during the licence term after further consultation with licensees.\n - GLYPH<129> A determination regarding existing research and development conditions of licence was not released at that time and will be released separately. A decision has not been made to date, and until such a time, the current conditions of licence remain in effect.", - "page_start": 71, - "page_end": 71, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "\n\nThis is a frame from 'Twenty Years of Creative Commons (in Sixty Seconds)' by Ryan Junell and Glenn Otis Brown for Creative Commons licensed under CC BY 4.0. It includes adaptations of multiple open and public domain works. 
View full licensing and attribution information about all works included in the video on Flickr.\n\n## Creative Commons\n\nPO Box 1866 Mountain View CA 94042 USA +1 415 429 6753 info@creativecommons.org\n\n", - "page_start": 11, - "page_end": 11, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "## Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work - on conditions of your choice. CC licenses let you change your copyright terms from the default of 'all rights reserved' to 'some rights reserved.'\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\n\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n\n\nPublic domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark . 
Creative Commons copyright licenses help authors manage their copyright on terms they choose. Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n## Where public domain tools fit in the copyright spectrum\n\n\n\n## The CC0 Public Domain Dedication\n\nUse this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.\n\n\n\n\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. 
Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.\n\n## What is the di/fference between CC0 and the Public Domain Mark?\n\n\n\nCC0 ('CC Zero') is intended for use only by authors or holders of copyright and related rights (including database rights), in connection with works that are still subject to those rights in one or more countries.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "Regulatory changes or decisions can adversely affect our consolidated results of operations.\n\nOur costs of providing services may increase from time to time as we comply with industry or legislative initiatives to address consumer protection concerns or Internet-related issues like copyright infringement, unsolicited commercial e-mail, cybercrime and lawful access.\n\nGenerally, our spectrum and broadcast licences are granted for a specified term and are subject to conditions for maintaining these licences. The regulators can modify these licensing conditions at any time, and they can decide not to renew a licence when it expires. If we do not comply with the conditions, a licence may be forfeited or revoked, or we may be fined.\n\nThe licences have conditions that require us, amongst other things, to comply with Canadian ownership restrictions of the applicable legislation, and we are currently in compliance with them. 
If we violate the requirements, we would be subject to various penalties and it could include losing a licence in extreme cases.\n\nCable, wireless and broadcasting licences generally cannot be transferred without regulatory approval.\n\n## Canadian Broadcasting Operations\n\nOur Canadian broadcasting operations - including our cable television systems, radio and television stations, and specialty services - are licenced (or operated under an exemption order) and regulated by the CRTC under the Broadcasting Act.\n\nThe CRTC is responsible for regulating and supervising all aspects of the Canadian broadcasting system. It is also responsible under the Telecommunications Act for the regulation of telecommunications carriers, including:\n\n - GLYPH<129> Wireless' mobile voice and data operations\n - GLYPH<129> Cable's Internet and telephone services.\n\nOur cable and telecommunications retail services are not subject to price regulation, because the CRTC believes there is enough competition for these services provided by other carriers to protect the interests of users, so has forborne from regulating them. Regulations\n\ncan and do, however, affect the terms and conditions under which we offer these services.\n\n## S pectrum Licences\n\nIndustry Canada sets technical standards for telecommunications under the Radiocommunication Act (Canada) (Radiocommunication Act) and the Telecommunications Act. It licences and oversees:", - "page_start": 70, - "page_end": 70, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Author contributions\n\nK.L. designed the framework of the article and analyzed the yield results and the maize price under future scenarios. J.P. simulated the climate data from 5 climate models recommended by ISI-MIP under 4 RCP scenarios. W.X. simulated the maize yields in whole world under di/fferent scenarios. W.X. simulated the market price of maize at national and global levels. T.A. 
helped the revision of language.\n\n## Funding\n\nFunding was provided by the National Key Research and Development program of China (Grant Nos. 2019YFA0607403 and 2017YFD0300301) and National Natural Science Foundation of China (Grant Nos. 41961124007 and 41871026).\n\n## Competing interests\n\nThe authors declare no competing interests.\n\n## Additional information\n\nCorrespondence and requests for materials should be addressed to K.L.\n\nReprints and permissions information is available at www.nature.com/reprints.\n\nPublisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.\n\n© The Author(s) 2022\n\nVol:.(1234567890)", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed9.pdf" - }, - { - "text": "## 4.3.4 Working life perspective - health\n\nThis EWCS 2015 question on the working life perspective ( 'Will you be able to do this or a similar job at 60 years of age?' 
) gives quite a good hint to the individual long-term prospects, which might even be more valuable than the question on currently affected health because it is a personal assessment of the overall status of health.\n\nDifferences between countries are significant but not as significant as between other categories, for example, between sectors and occupations . The EU average of 'No' responses to the question 'Do you think you will be able to do your current job or a similar one until you are 60 years old?' is at 27%; the eight countries with the highest rates of 'No' responses (between 44% and 33%) are France, Slovenia, Poland, Slovakia, Croatia, Belgium, Malta and Bulgaria. Under 25% of 'No' responses were given in eight countries, starting from Portugal (16%) over Germany, Denmark, Ireland, Sweden, Italy, Estonia and Lithuania (24%). 263\n\nFigure 35: Opinion on work until the age of 60 - EWCS 2015\n\n\n\nYoung workers under 35 are much more sceptic than those over 50; 38% say that they will not be able, a much higher percentage than the 22% of workers aged over 50. The employment status is also very important; 26% of the permanently employed respond with a 'No' compared to 39% of those with 'Other arrangements'. Remarkably, only 19% of the self-employed do not believe that they will be able to do their job at 60 years.\n\nLarge differences can be seen between occupation levels. 37% per cent of the low-skilled manual workers respond with 'No', and 30% of the highly skilled manual workers respond 'No', as do 27% of the low-skilled clerical workers and only 21% of the high-skilled clerical workers, a 16% difference between high-skilled clerical workers and low-skilled manual workers. 
In some countries only 10% to 15% of the highly skilled clerical workers respond with 'No' while in a number of countries more than 50% of the low-skilled manual workers respond with 'No', for example, in Slovenia, Croatia, Slovakia and Czechia.\n\nThe authors of the Senior Working Life study describe these differences as follows: 264\n\n'For ISCO groups 1-4 (seated work) main expected reasons for retiring were freedom to choose and desire for more leisure time, but many would consider staying longer if there were better possibilities for additional senior days, longer vacations and flexible working hours. For ISCO groups 5-9 (physical work), poor physical health and not being capable of doing the job were common expected reasons for retiring, but many would consider staying longer if the work were less physically demanding and there were more senior days. Possibility for pension was a general expected reason for retiring. Expected reasons differed to a less extent between genders than between ISCO groups, e.g. economic factors were more important for men and high work demands more important for women.", - "page_start": 95, - "page_end": 95, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Generally, our licences are granted for a specified term and are subject to conditions on the maintenance of these licences. These licencing conditions may be modified at any time by the regulators. The regulators may decide not to renew a licence when it expires, and any failure by us to comply with the conditions on the maintenance of a licence could result in a revocation or forfeiture of any of our licences or the imposition of fines.\n\nThe licences include conditions requiring us to comply with Canadian ownership restrictions of the applicable legislation. We are currently in compliance with all of these Canadian ownership and control requirements. 
However, if these requirements are violated, we would be subject to various penalties, possibly including, in the extreme case, the loss of a licence.\n\n## The Wireless Code\n\nThe CRTC's decision to implement its wireless consumer code of conduct, among other things, effectively requires Canadian wireless carriers to move away from offering three-year service contracts and instead offer two-year contracts, and this could change our customer acquisition and retention costs and subscriber churn. The Wireless Code also sets billing caps on data roaming and domestic data overage charges, creates a prohibition on requiring customers to provide 30days' notice of cancellation, and requires the payment of interest on security deposits, which could also reduce our results of operations.\n\nOur wireless business could be materially adversely affected if laws, regulation or customer behaviour makes it difficult for us to impose term commitments or early cancellation fees on customers or receive the service revenues we anticipate from the term commitments.\n\n## S pectrum\n\nRadio spectrum is one of the fundamental assets required to carry on the wireless business. Our ability to continue to offer and improve current services and to offer new services depends on, among other factors, continued access to and deployment of adequate spectrum, including both the ability to renew current spectrum licenses and acquire new spectrum licenses.\n\nIf we cannot acquire and retain needed spectrum, we may not be able to continue to offer and improve our current services and deploy new services on a timely basis including providing competitive data speeds that customers want. As a result, our ability to attract and retain customers could be materially adversely affected. 
In addition, an inability to acquire and retain needed spectrum could affect network quality and result in higher capital expenditures, as a consequence of network densification and other related network upgrades.\n\n## Spectrum Fees\n\nChanges to government spectrum fees could significantly increase our payments and therefore materially reduce our operating profit. Spectrum licences are an indefinite life intangible asset and we do not amortize them, however, any potential increases in spectrum licence fees may affect our current accounting policies.", - "page_start": 78, - "page_end": 78, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "Understanding_Creative_Commons_license_(infographic).pdf", - "query": "In which cases can a CC licence not be used?", - "target_page": 1, - "target_passage": "fair use, fair dealing, or some other limitation and exception to copyright applies the the work.", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "with. The vast majority of in-copyright books are out-of-print or out-of-commerce, and most are not actively managed by their rightsholders. There is no official registry of copyrighted works and their owners, and existing datasets can be incomplete or erroneous. 16\n\nAs a result, there may be no way to license the vast majority of in-copyright books, especially those that have or have had limited commercial value. Put differently, the barrier to using 17 most books is not simply to pay publishers; even if one had significant financial resources, licensing would not enable access to most works.\n\n## Permissively licensed works\n\nThere are books that have been permissively licensed in an easily identifiable way, such as works placed under Creative Commons (CC) licenses. 
Such works explicitly allow particular uses of works subject to various responsibilities (e.g., requiring attribution by the user in their follow-on use).\n\nWhile such works could be candidates for inclusion in a books data commons, their inclusion depends on whether the license's terms can be complied with in the context of AI training. For instance, in the context of CC licensed works, there are requirements for proper attribution across all licenses (the CC tools Public Domain Dedication (CC0) and Public Domain Mark (PDM) are not licenses and do not require attribution). 18", - "page_start": 9, - "page_end": 9, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "\n\n## Creative Commons license\n\n## Understanding\n\nbefore licensing your work\n\n## THREE-LAYER DESIGN\n\nCreative Commons (CC) license has three layers:\n\n- \"Legal Code\" (base layer): contains terms and conditions to be used by lawyers and legally applicable in court.\n- \"Human Readable\" (commons deeds): contain the summary of the legal code and key terms.\n- \"Machine Readable\": contains HTML or codes for machines to recognize a work is available under a Creative Commons license.\n\n\n\n## FOUR ELEMENTS\n\n- BY (\"Attribution\"): users must credit the author of the work they are using.\n- SA (\"ShareAlike\"): adaptations based on this work must be licensed under the same license.\n- NC (\"NonCommercial\"): the work is only available to be used for noncommercial purposes.\n- ND (\"NoDerivative\"): reusers making cannot share adaptations of the work.\n\n\n\n## SIX LICENSES\n\n- CC BY (\"Attribution\") allows people to use the work for any purpose (even commercially and even in modified form) as long as they give attribution to the creator.\n- CC BY-SA (\"Attribution-ShareAlike\") allows people to use the work for any purpose (even commercially and even in modified form), as long as they give attribution to the creator and make any adaptations they share with others available under 
the same or a compatible license.\n- CC BY-NC (\"Attribution-NonCommercial\") allows people to use the work for noncommercial purposes only, and only as long as they give attribution to the creator.\n- CC BY-NC-SA (\"Attribution-NonCommercial-ShareAlike\") allows people to use the work for noncommercial purposes only, and only as long as they give attribution to the creator and make any adaptations they share with others available under the same or a compatible license.\n- CC BY-ND (\"Attribution-NoDerivative\") allows people to use the unadapted work for any purpose (even commercially), as long as they give attribution to the creator.\n- CC BY-NC-ND (\"Attribution-NonCommercial-NoDerivative\") allows people to use the unadapted work for noncommercial purposes only, and only as long as they give attribution to the licensor.\n\n## REMIND THAT…\n\nCC license only applicable to the work that is within the scope of copyright law. CC license can be used when …\n\n- you want to give others permissions to freely copy and redistribute your work, and\n- you want to give others permission to freely transform, alter, or otherwise create derivative works based on your work.\n\n\n\n\n\n## CC LICENSE CAN'T BE USED FOR …\n\nfair use, fair dealing, or some other limitation and exception to copyright applies the the work.\n\n## ALSO FOR …\n\nthe work that is already in the Public Domain.\n\nFor those who want to waive their rights from copyright protection, use CC0 (\"CC Zero\").\n\n## NOW, SHARE YOUR WORK!\n\nhttps://creativecommons.org/choose/\n\n\n\n\n\nBY\n\n\n\nSA\n\n\n\nND\n\nNC", - "page_start": 0, - "page_end": 0, - "source_file": "Understanding_Creative_Commons_license_(infographic).pdf" - }, - { - "text": "Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional a/ff.shortiliations.\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, 
sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/.\n\n© The Author(s) 2024", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed4.pdf" - }, - { - "text": "Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate\n\ncredit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.\n\n© The Author(s) 2025", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed3.pdf" - }, - { - "text": "## Author contributions\n\nK.L. 
designed the framework of the article and analyzed the yield results and the maize price under future scenarios. J.P. simulated the climate data from 5 climate models recommended by ISI-MIP under 4 RCP scenarios. W.X. simulated the maize yields in whole world under different scenarios. W.X. simulated the market price of maize at national and global levels. T.A. helped the revision of language.\n\n## Funding\n\nFunding was provided by the National Key Research and Development program of China (Grant Nos. 2019YFA0607403 and 2017YFD0300301) and National Natural Science Foundation of China (Grant Nos. 41961124007 and 41871026).\n\n## Competing interests\n\nThe authors declare no competing interests.\n\n## Additional information\n\nCorrespondence and requests for materials should be addressed to K.L.\n\nReprints and permissions information is available at www.nature.com/reprints.\n\nPublisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.\n\n© The Author(s) 2022\n\nVol:.(1234567890)", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed9.pdf" - }, - { - "text": "Regulatory changes or decisions can adversely affect our consolidated results of operations.\n\nOur costs of providing services may increase from time to time as we comply with industry or legislative initiatives to address consumer protection concerns or Internet-related issues like copyright infringement, unsolicited commercial e-mail, cybercrime and lawful access.\n\nGenerally, our spectrum and broadcast licences are granted for a specified term and are subject to conditions for maintaining these licences. The regulators can modify these licensing conditions at any time, and they can decide not to renew a licence when it expires. If we do not comply with the conditions, a licence may be forfeited or revoked, or we may be fined.\n\nThe licences have conditions that require us, amongst other things, to comply with Canadian ownership restrictions of the applicable legislation, and we are currently in compliance with them. If we violate the requirements, we would be subject to various penalties and it could include losing a licence in extreme cases.\n\nCable, wireless and broadcasting licences generally cannot be transferred without regulatory approval.\n\n## Canadian Broadcasting Operations\n\nOur Canadian broadcasting operations - including our cable television systems, radio and television stations, and specialty services - are licenced (or operated under an exemption order) and regulated by the CRTC under the Broadcasting Act.\n\nThe CRTC is responsible for regulating and supervising all aspects of the Canadian broadcasting system. 
It is also responsible under the Telecommunications Act for the regulation of telecommunications carriers, including:\n\n - GLYPH<129> Wireless' mobile voice and data operations\n - GLYPH<129> Cable's Internet and telephone services.\n\nOur cable and telecommunications retail services are not subject to price regulation, because the CRTC believes there is enough competition for these services provided by other carriers to protect the interests of users, so has forborne from regulating them. Regulations\n\ncan and do, however, affect the terms and conditions under which we offer these services.\n\n## S pectrum Licences\n\nIndustry Canada sets technical standards for telecommunications under the Radiocommunication Act (Canada) (Radiocommunication Act) and the Telecommunications Act. It licences and oversees:", - "page_start": 70, - "page_end": 70, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work - on conditions of your choice. CC licenses let you change your copyright terms from the default of 'all rights reserved' to 'some rights reserved.'\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. 
When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\n\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n\n\nPublic domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark . Creative Commons copyright licenses help authors manage their copyright on terms they choose. Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n## Where public domain tools fit in the copyright spectrum\n\n\n\n## The CC0 Public Domain Dedication\n\nUse this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.\n\n\n\n\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. 
When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.\n\n## What is the di/fference between CC0 and the Public Domain Mark?\n\n\n\nCC0 ('CC Zero') is intended for use only by authors or holders of copyright and related rights (including database rights), in connection with works that are still subject to those rights in one or more countries.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "Combined, these limits can enable effective foreign control of up to 46.7 % .\n\nThe chief executive officer and 80 % of the members of the Board of Directors of the operating licensee must be resident Canadians. There are no restrictions on the number of non-voting shares that may be held by non-Canadians at either the holding-company or licenseecompany level. Neither the Canadian carrier nor its parent may be otherwise controlled in fact by non-Canadians. 
Subject to appeal to the federal Cabinet, the CRTC has the jurisdiction to determine as a question of fact whether a given licensee is controlled by nonCanadians.\n\nPursuant to the Telecommunications Act and associated regulations, the same rules also apply to Canadian telecommunications carriers such as Wireless, except that there is no requirement that the chief executive officer be a resident Canadian. We believe we are in compliance with the foregoing foreign ownership and control requirements.\n\nOn June 29, 2012, Bill C-38 amending the Telecommunications Act passed into law. The amendments exempt telecommunications companies with less than 10 % of total Canadian telecommunications market measured by revenue from foreign investment restrictions. Companies that are successful in growing their market shares in excess of 10 % of total Canadian telecommunications market revenues other than by way of merger or acquisitions will continue to be exempt from the restrictions.\n\n## WIRELESS\n\n## Consultation on the Renewal of Cellular and Personal Communications S ervices (PC S ) S pectrum Licences\n\nIn March 2011, Industry Canada released its decisions about the renewal process for cellular and PCS licences that began expiring at that time. Key things to note:\n\n - GLYPH<129> At the end of the current licence term, new cellular and PCS licences with a 20-year term will be issued to licensees that are in compliance with all licence conditions.\n - GLYPH<129> The previously existing annual fee of $0.0351 per MHz per population of the licenced area will continue to apply to all cellular and PCS licences, including those initially assigned by auction. The Minister of Industry Canada may review and amend the fees during the licence term after further consultation with licensees.\n - GLYPH<129> A determination regarding existing research and development conditions of licence was not released at that time and will be released separately. 
A decision has not been made to date, and until such a time, the current conditions of licence remain in effect.", - "page_start": 71, - "page_end": 71, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Training in how to use CC Licenses is key to their adoption.\n\nWe offer a ten-week CC Certificate program that is now tailored not only to the education and library sectors, but also galleries, archives, libraries, and museums and available in 10 languages .\n\nAs of 2023, we've certified:\n\n\n\n1,705 Graduates\n\n\n\n65 Countries\n\n## In 2023, we greatly expanded our CC Licenses training and education offerings:\n\n## 19 Workshops & Trainings\n\nwith institutions like ALA, Connecticut Humanities & State University of New York, Digital Research Alliance of Canada, and WikiConf North America.\n\n## 2 Week-Long CC Certificate Bootcamps\n\nfor California Community Colleges.\n\n## 27 Webinars\n\non topics like the basics of Open Culture, the possibilties of Open Educational Resources (OER) for business-university cooperation, and the future of CC Licenses in digital and online education.\n\n## 12 CC Legal Open Office Hours\n\nhosted by our legal team, providing a personalized opportunity for the CC community to ask questions about CC Licenses, open access, and sharing.\n\n", - "page_start": 4, - "page_end": 4, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "When CC0 is applied to a work, copyright and related rights are relinquished worldwide, making the work free from those restrictions to the greatest extent possible.\n\n\n\nThe Public Domain Mark (PDM) is used to label works that are already free of known copyright restrictions. 
Unlike CC0, PDM doesn't change the copyright status of a work.\n\nPDM can be used by anyone, and is intended for use with works that are already free of known copyright restrictions throughout the world.\n\n## Public Domain Mark\n\nUse this tool if you have identified a work that is free of known copyright restrictions.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_RSG_2004.pdf", - "query": "In how many regions the Republic Services operations are organized ?", - "target_page": 9, - "target_passage": "As of December 31, 2004, our operations were organized into five regions whose boundaries may change from time to time: Eastern, Central, Southern, Southwestern and Western.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "E u ronet and its subsidiaries operate in two business segments: (1) a segment that provides an independent shared ATM network and other e l e c t ronic payment network services to banks, retail and financial institutions (the 'Network Services Segment'); and (2) a segment that p roduces application software and solutions for payment and transaction delivery systems (the 'Software Solutions Segment'). These business segments are supported by a corporate service segment which provides corporate and other administrative services which are not d i rectly identifiable with the two business segments, (the 'Corporate Services Segment'). The accounting policies of each segment are the same as those described in the summary of significant accounting policies. The Company evaluates perf o rmance based on profit or loss fro m operations before income taxes not including nonre c u rring gains and net loss. 
Prior period segment information has been restated to conform to the current period's presentation.\n\nAs the Network Services Segment continued to grow throughout 1999, the Company's management began to divide the internal org a n i z a t i o n of the segment into Sub-segments. Accord i n g l y, beginning in January 2000, the Company divided the Network Services Segment into thre e Sub-segments: 'Central European Sub-segment' (including Hungary, Poland, the Czech Republic, Croatia, Greece and Romania), 'We s t e rn E u ropean Sub-segment' (including Germ a n y, France, and the United Kingdom) and 'Other Operations Sub-segment' (including the United States and unallocated processing center costs). Where practical, certain amounts have been reclassified to reflect the change in intern a l re p o rting. The Company is unable to present Network Services Segment assets by Sub-segment as of December 31, 1999. Prior to January 1, 2000, certain assets that were used to provide support services to the Company as a whole were included in the assets in the balance sheet of the Company's wholly owned Hungarian subsidiary, Bank Tech. In order to segregate corporate assets from those of the Hungarian operations, these assets were transferred as of December 31, 1999, from Bank Tech to an existing Hungarian shell company, Administrative S e rvices. Those assets are now shown under the Other Operations Sub-segment.\n\nThe following tables present the segment results of the Company's operations for the years ended December 31, 2000, 1999 and 1998.", - "page_start": 42, - "page_end": 42, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "## REPUBLIC SERVICES, INC. 
AND SUBSIDIARIES\n\n## CONSOLIDATED STATEMENTS OF CASH FLOWS\n\n(in millions)", - "page_start": 63, - "page_end": 63, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## MA N A G E M E N T'S DI S C U S S I O N A N D AN A LY S I S O F FI N A N C I A L CO N D I T I O N A N D RE S U LT S O F OP E R AT I O N S\n\n## General Overv i e w\n\nE u ronet Worldwide is a leading provider of secure electronic financial transaction solutions. The Company provides financial payment m i d d l e w a re, financial network gateways, outsourcing, and consulting services to financial institutions, retailers and mobile operators. The Company operates an independent automated teller machine ('ATM') network of over 2,600 ATMs in Europe and the United States, and t h rough its software subsidiary, Euronet USA Inc. (form e r l y, Arkansas Systems, Inc.)('Euronet USA'), offers a suite of integrated software solutions for electronic payment and transaction delivery systems. Euronet Worldwide thus offers comprehensive electronic payment solutions consisting of ATM network participation, outsourced ATM management solutions and software solutions. Its principal customers are banks and other companies such as retail outlets that re q u i re transaction processing services. With eleven offices in Europe and three in the United States, the Company offers its solutions in more than 60 countries around the world.\n\nE u ronet Worldwide and its subsidiaries operate in two business segments: (1) a segment providing secure processing of financial transactions (the 'Network Services Segment'); and (2) a segment producing application software for the processing of secure electronic financial transaction (the ' S o f t w a re Solutions Segment'). 
In addition, the Company's management divides the Network Services Segment into three sub-segments: 'Central E u ropean Sub-segment' (including Hungary, Poland, the Czech Republic, Croatia, Greece and Romania), 'We s t e rn European Sub-segment' (including Germ a n y, France and the United Kingdom) and 'Other Operations Sub-segment' (including the United States and unallocated p rocessing center costs). These business segments, and their sub-segments, are supported by a corporate service segment, which pro v i d e s corporate and other administrative services that are not directly identifiable with the two business segments (the 'Corporate Services Segment'). The accounting policies of each segment are the same as those described in the summary of significant accounting policies. The Company evaluates perf o rmance based on profit or loss from operations before income taxes not including nonre c u rring gains and net loss. Prior period segment information has been restated to conform to the current period's presentation. (See Note 19 to the Consolidated Financial Statements Business segment information.)\n\n## Comparison of Results of Operations for the Years Ended December 31, 2000, 1999 and 1998\n\nRevenues The Company's total revenues increased to $52.7 million for the year ended December 31, 2000 from $41.5 million for the year ended December 31, 1999 and $11.9 million for the year ended December 31, 1998. The increase in revenues from 1999 to 2000 is primarily due to two factors: (1) a $10.4 million increase in Network Services Segment revenues resulting from the i n c rease in transaction volumes in the Company owned ATMs and an increase in the number of AT M s operated by the Company during this period; and (2) an increase of $800,000 in Software Solutions Segment revenues. 
The increase in revenues from 1998 to 1999 is primarily due to two factors: (1) a $15.0 million increase in Network Services Segment revenues resulting from the increase in transaction volume attributable to an increase in the number of ATMs operated by the Company during this period; and (2) the addition of $14.6 million of Software Solutions Segment re v e n u e s . Revenues for the years ended December 31, 2000 and 1999 are discussed more fully in the Segment Results of Operations sections below.", - "page_start": 16, - "page_end": 16, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "## CONSENT OF INDEPENDENT REGISTERED PUBLIC ACCOUNTING FIRM\n\nWe consent to the incorporation by reference in the Registration Statements (Form S-8 Nos. 333-81801, 333-78125, 333-45542 and 333-104048) pertaining to the Republic Services 401(k) Plan, 1998 Stock Incentive Plan, Republic Services, Inc. Amended and Restated Employee Stock Purchase Plan, and Republic Services, Inc. Amended and Restated 1998 Stock Incentive Plan, respectively, of our reports dated February 24, 2005, with respect to the consolidated Ñnancial statements and schedule of Republic Services, Inc., Republic Services, Inc. management's assessment of the eÅectiveness of internal control over Ñnancial reporting, and the eÅectiveness of internal control over Ñnancial reporting of Republic Services, Inc., included in this Annual Report (Form 10-K) for the year ended December 31, 2004.\n\n/s/ ERNST & YOUNG LLP CertiÑed Public Accountants\n\nFort Lauderdale, Florida February 24, 2005", - "page_start": 102, - "page_end": 102, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "- , Decentralized Management Structure. We maintain a relatively small corporate headquarters staÅ, relying on a decentralized management structure to minimize administrative overhead costs and to manage our day-to-day operations more eÇciently. 
Our local management has extensive industry experience in growing, operating and managing solid waste companies and has substantial experience in their local geographic markets. In early 2001, we added a sales, maintenance and operations manager to each of our regional management teams, which previously consisted of a regional vice president and a regional controller. We believe that strengthening our regional management teams allows us to more eÅectively and eÇciently drive our company's initiatives and helps ensure consistency throughout our organization. Our regional management teams and our area presidents have extensive authority, responsibility and autonomy for operations within their respective geographic markets. Compensation for regional and area management teams is primarily based on the improvement in operating income produced and the free cash Öow and return on invested capital generated in each manager's geographic area of responsibility. In addition, through long-term incentive programs, including stock options, we believe we have one of the lowest turnover levels in the industry for our local management teams. As a result of retaining experienced managers with extensive knowledge of and involvement in their local communities, we are proactive in anticipating our customers' needs and adjusting to changes in our markets. We also seek to implement the best practices of our various regions and areas throughout our operations to improve operating margins.\n - , Integrated Operations. We seek to achieve a high rate of internalization by controlling waste streams from the point of collection through disposal. We expect that our fully integrated markets generally will have a lower cost of operations and more favorable cash Öows than our non-integrated markets. Through acquisitions and other market development activities, we create market-speciÑc, integrated operations typically consisting of one or more collection companies, transfer stations and landÑlls. 
We consider acquiring companies that own or operate landÑlls with signiÑcant permitted disposal capacity and appropriate levels of waste volume. We also seek to acquire solid waste collection companies in markets in which we own or operate landÑlls. In addition, we generate internal growth in our disposal operations by developing new landÑlls and expanding our existing landÑlls from time to time in markets in which we have signiÑcant collection operations or in markets that we determine lack suÇcient disposal capacity. During the three months ended December 31, 2004, approximately 54% of the total volume of waste that we collected was disposed of at landÑlls we own or operate. In a number of our larger markets, we and our competitors are required to take waste to government-controlled disposal facilities. This provides us with an opportunity to eÅectively compete in these markets without investing in landÑll capacity. Because we do not have landÑll facilities or government-controlled disposal facilities for all markets in which we provide collection services, we believe that through landÑll and transfer station acquisitions and development we have the opportunity to increase our waste internalization rate and further integrate our operations. By further integrating operations in existing markets through acquisitions and development of landÑlls and transfer stations, we may be able to reduce our disposal costs.", - "page_start": 11, - "page_end": 11, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "\n\nFirst Financial Bankshares, Inc. is a financial holding company\n\nheadquartered in Abilene, Texas, with consolidated assets of $2.0 billion as of December 31, 2002. The corporation has 10 affiliate banks, which provide services from 28 full-service locations in the Central, West and High Plains regions of Texas. The common stock of First Financial Bankshares, Inc. 
is held by more than 3,500 shareholders and is listed on The NASDAQ Stock Market ¤ under the symbol FFIN.\n\n'Our 10 affiliate banks provide services from 28 full-service locations in the Central, West and High Plains regions of Texas.'", - "page_start": 3, - "page_end": 3, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "## REPUBLIC SERVICES, INC. AND SUBSIDIARIES\n\n## NOTES TO CONSOLIDATED FINANCIAL STATEMENTS (All tables in millions, except per share data) Ì (Continued)", - "page_start": 76, - "page_end": 76, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "The Company participates in the telecommunications industry, which requires substantial investment in fixed assets or plant. This significant capital requirement may preclude profitability during the initial years of operation. The strategy of the Company is to grow and diversify the business by adding services and geographic areas that can leverage the existing plant, but to do so within the opportunities and constraints presented by the industry. For many years the Company focused on reducing reliance on the regulated telephone operation, which up until 1981 was the primary business within the Company. This initial diversification was concentrated in other wireline businesses, such as the cable television and regional fiber facility businesses, but in 1990 the Company made its first significant investment in the wireless sector through its former investment in the Virginia 10 RSA Limited partnership. By 1998, revenues of the regulated telephone operation had decreased to 59.2% of total revenues. In that same year more than 76.6% of the Company's total revenue was generated by wireline operations, and initiatives were already underway to make wireless a more significant contributor to total revenues.\n\nDuring the 1990's significant investments were made in the cellular and PCS (wireless) businesses. 
The VA 10 RSA cellular operation, in which the Company held a 66% interest and was the general partner, experienced rapid revenue growth and excellent margins in the late 1990's. The cellular operation covered only six counties, and became increasingly dependent on roaming revenues. Management believed the roaming revenues and associated margins would be unsustainable as other wireless providers increasingly offered nationally-branded services with significantly reduced usage charges. To position it to participate in the newer, more advanced, digital wireless services, in 1995 the Company entered the PCS business through an affiliation with American Personal Communications (APC), initiating service along the Interstate 81 corridor from Harrisonburg, Virginia to Chambersburg, Pennsylvania. This territory was a very close match to the Company's fiber network, thereby providing economic integration that might not be available to other wireless carriers. In 1999, the Company entered a new affiliation arrangement with Sprint, the successor to APC (which introduced the Company to a nationally-branded wireless service) and expanded the PCS footprint further into Central Pennsylvania. The Company's combined capital investment in 2000 and 2001 in the PCS operation was $45.1 million.\n\n■", - "page_start": 40, - "page_end": 40, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## REPUBLIC SERVICES, INC. AND SUBSIDIARIES\n\n## CONSOLIDATED BALANCE SHEETS\n\n(in millions, except share data)", - "page_start": 60, - "page_end": 60, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## REPUBLIC SERVICES, INC. 
AND SUBSIDIARIES\n\n## NOTES TO CONSOLIDATED FINANCIAL STATEMENTS\n\n(All tables in millions, except per share data) Ì (Continued)\n\nThe unaudited pro forma results of operations are presented for informational purposes only and may not necessarily reÖect the future results of operations of the Company or what the results of operations would have been had the Company owned and operated these businesses as of the beginning of the periods presented.\n\n## 5. DEBT\n\nNotes payable and long-term debt are as follows:", - "page_start": 82, - "page_end": 82, - "source_file": "NYSE_RSG_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_MGM_2004.pdf", - "query": "What was one of the seminal moment of 2004 for MGM MIRAGE ?", - "target_page": 12, - "target_passage": "The announcement of the merger between MGM MIRAGE and Mandalay Resort Group was one of the seminal moments of 2004", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "MGM MIRAGE 2004 ANNUAL REPORT\n\n## defining momentum\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "\n\nRecently, we opened the SKYLOFTS, a new level of luxury for guests atop MGM Grand Las Vegas.\n\nWe'll follow the success of these new resort features with a category-defining new nightclub at The Mirage, two fabulous restaurants by Joël Robuchon at MGM Grand Las Vegas and gaming upgrades company-wide. Second, we are doubling down on Las Vegas by merging with Mandalay, a company we have long admired. The Mandalay merger represents a tremendous opportunity to build on the momentum established by Mike Ensign and his team. And third, we are dreaming of a not-so-distant future, when\n\n\n\nAL FACCINTO President, MGM MIRAGE International Marketing\n\n\n\nALAN FELDMAN Senior VP Public Affairs, MGM MIRAGE\n\nBRUCE GEBHARDT Senior VP, MGM MIRAGE Global Security\n\nWILLIAM J. 
HORNBUCKLE President & COO, MGM MIRAGE Europe\n\nPHYLLIS JAMES Senior VP & Senior Counsel, MGM MIRAGE\n\nProject CityCenter will literally redefine the Las Vegas Strip and change the face of Las Vegas forever.\n\n## Mandalay in Motion\n\nWe are incredibly excited to begin our journey with the talented people of Mandalay, as we work to maximize the value of Mandalay's instantly recognized brands and worldclass resorts. Long a fixture in Las Vegas, Mandalay's resorts will add to our premium portfolio and allow us to accelerate the pace of our growth. Our hotel people will be able to market a wider range of rooms and benefit from a world-class\n\n\n\n\n\nconvention center. Our casino marketing people will be able to offer their customers wonderful new amenities to expand our market reach. And our development people will be able to maximize the potential of priceless Las Vegas Strip land.\n\nThe Mandalay merger represents another defining moment for MGM MIRAGE, much like the Mirage Resorts transaction in 2000, at a time when Las Vegas is in a state of astounding metamorphosis. No company is better positioned to help shape the future of Las Vegas than MGM MIRAGE. We employ more people, invest more money and hold more prime real estate than any other company in Las Vegas. The\n\n\n\n", - "page_start": 24, - "page_end": 24, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## FINANCIAL OVERVIEW\n\n## ACHIEVING MOMENTOUS RESULTS\n\nJAMES J. MURREN President, CFO & Treasurer\n\n\n\n\n\n\n\n\n\nGAMAL AZIZ President, MGM Grand\n\n\n\nGLENN BONNER Senior VP & CIO, MGM MIRAGE Information Systems\n\nGEORGE R. BOYER III President, MGM Grand Detroit\n\nJOSEPH BRUNINI President, MGM Grand Resorts National Marketing\n\nJEFF DAHL President, Beau Rivage\n\no some, momentum is intangible - a product of fortune, a power that cannot be harnessed, and typically a short-lived sensation. Others wonder how they lost their momentum. 
At MGM MIRAGE, we are constantly thinking of better ways to maximize it. We believe momentum is a product of effort and excellence, a force which can be observed and measured, and something that can be a lasting and defining quality of a great company. Our 2004 results are a clear reminder of the power of moving forward. Our financial policies have long been designed to create and maintain momentum. By investing in our best assets and thinking of new ways to add value to our shareholders, we are able to redefine our Company's place in history every year - and 2004 was a defining time even by our exacting standards. T\n\nSo how did we get here? Last year, we discussed the importance of focus, and the laser-like precision with which we operated our resorts in 2004 affirms the power of our single-minded dedication to excellence. The hard work of our 40,000 employees resulted in a record year in almost every regard. Net revenues increased 10% over 2003 to a record $4.2 billion, with 12% REVPAR growth at our Las Vegas resorts; property-level EBITDA was an all-time record, nearly $1.5 billion, and 23% higher than the prior year. We exceeded the expectations of every market observer, and significantly beat our forecasts. And 2004 will not be a zenith year for your company - rather, we expect to continue our excellent operating performance, re-invest the resulting cash flow to stimulate future growth and move forward to new defining moments.\n\nHow do we re-define a company that is already at the top of its industry? First, we continue to execute on our vision for our existing resorts - to continually evolve and increase the 'Wow!' factor for our guests. This strategy requires investment, and we will ensure that our resorts are not only world-class, but best-in-class. 
Examples include the beautiful Spa Tower at Bellagio and KÀ , the latest spectacular creation in collaboration with Cirque du Soleil.\n\n", - "page_start": 23, - "page_end": 23, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## TO OUR SHAREHOLDERS\n\nBELLAGIO underwent a significant expansion during 2004 resulting in the opening of the Spa Tower and several important new amenities at this AAA Five Diamond property. Bellagio remains Las Vegas' first and only hotel-casino to receive this prestigious recognition. These new additions add dimension and depth to the world-famous experience awaiting guests at Bellagio.\n\nMGM GRAND LAS VEGAS completed a transformation, begun in 2003, of its food and beverage and entertainment offerings. MGM Grand is one of the must-see attractions of Las Vegas, with Cirque du Soleil's newest production, KA ' TM , and several of the Strip's finest restaurants and hottest nightspots. 18 .0 %\n\nTI 's transformation was no less extensive, as the property's management team conceived and implemented a program to enliven the property with new restaurants and nightlife.\n\nTHE MIRAGE was the site of a revolution in Las Vegas' history as the venerable buffet was given new life as a top dining establishment, Cravings. Others may follow this lead, but The Mirage was the first property to breathe new life into what remained of the last bastion of 'old' Las Vegas.\n\n## EXPANDING WITH EXCELLENCE\n\nThese investments in your company's future paid dividends even before the year was out. 
We established a new record for net revenues posting $4.2 billion, a 10% increase over 2003.\n\nYour company's resorts produced record EBITDA of $1.46 billion, an increase of 23% over 2003, while operating income was $951 million, an increase of 36%, with record results at Bellagio, MGM Grand Las Vegas and Beau Rivage.\n\n## Defining Momentum in the Community\n\nI've spent 27 years in this profession and the incredible generosity of our employees never ceases to amaze me. Shortly after the merger with Mirage Resorts in 2000, we established the Voice Foundation. This allows employees to express themselves in the communities we serve by providing them a mechanism to raise monies for worthy causes. It's their money and they decide where it goes. Your company provides the marketing and administrative support. .6% .5 %\n\nIn each year since we established the program, employees have given record amounts to support a\n\n\n\n## 2004 Revenue Mix\n\n\n\n\n\n\n\n\n\nCasino\n\nRooms\n\nFood & Beverage\n\nEntertainment, Retail,\n\n& Other\n\nSKYLOFTS MGM Grand A private sanctuary of sleek, elegant two-story accommodations, offering discerning guests the quintessential loft environment - harmonizing design, décor, ambiance and unparalleled vistas.\n\nBELLAGIO SPA Unique design elements, combined with an international array of innovative treatments and specially trained therapists, provide the ultimate indulgent experience.\n\nTEATRO MGM Grand A new genre of Las Vegas nightlife where European club influences permeate. DJs spin jazz/ house throughout the evening, giving way to an energetic after-hours vibe with live catwalk entertainment.\n\n\n\nKÀ The most spectacular production ever, by a troupe renowned for its pageantry. Cirque du Soleil's KÀ debuted at a new theatre at MGM Grand in the fourth quarter of 2004.\n\n\n\nWhat exactly is a defining moment? 
Try a multi-billion dollar project centered in the heart of Las Vegas.\n\n\n\n", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "wide array of community needs. From homeless shelters to after-school programs, MGM MIRAGE employees have generously donated more than $8 million since 2001.\n\nYour company also sets aside a portion of its profits each year to be given to important programs intended to build stronger communities. Since 2001, your company has given more than $18 million to support such programs.\n\n## Defining Momentum in Our Family\n\nOur momentum is driven from within by acknowledging the contributions of each and every one of our employees, business partners and customers. Our commitment to diversity is recognition of the fact that in today's everchanging marketplace, we must reflect that which we see in the world around us.\n\nThis commitment should be seen as a commonsense business decision. That said, we are proud of the recognition our Diversity program has received, including accolades from prestigious media such as Fortune and DiversityInc. magazines.\n\nSince formalizing our program only four years ago, we've made enormous strides. There is still progress to be made and your company has the momentum to remain at the forefront on diversity initiatives, providing yet another advantage for sustaining performance in the long term.\n\nSENSI BELLAGIO An eclectic menu features diverse cuisines in an earthy arena replete with waterfalls and chrome. A bold wine list complements Chef Martin Heierling's sumptuous work.\n\n\n\nJEAN-PHILIPPE PATISSERIE BELLAGIO A mesmerizing fountain of cascading liquid chocolate showcases a splendid selection of chocolates, cakes, crêpes, salads and sandwiches.\n\nISLA TI Designed by Jeffrey Beers, Isla brightens all the senses. 
Chef Richard Sandoval gives an innovative and modern interpretation of traditional Mexican cuisine.\n\n\n\n(from left to right) KENNETH ROSEVEAR President, MGM MIRAGE Development; JOHN T. REDMOND President & CEO, MGM Grand Resorts, LLC; J. TERRENCE LANNI Chairman & CEO, MGM MIRAGE; ROBERT H. BALDWIN President & CEO, Mirage Resorts, Incorporated & President, Project CityCenter; GARY N. JACOBS Executive Vice President, General Counsel & Secretary, MGM MIRAGE; JAMES J. MURREN President, CFO & Treasurer, MGM MIRAGE\n\n## Defining Momentum in the Future\n\nYour company achieved many business goals in 2004 and set in motion plans for future growth. These initiatives will provide unmatched returns. We have also created unrivaled opportunities for our employees and will continue our rich history of strengthening the communities in which we do business.\n\n\n\nAs exciting as 2004 was, our momentum will carry us to even greater achievements in 2005 and beyond.\n\nJ. TERRENCE LANNI Chairman of the Board & Chief Executive Officer March 31, 2005\n\n\n\n", - "page_start": 8, - "page_end": 8, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "(from left to right) ROBERT C. SELWOOD Senior Vice PresidentAccounting; JAMES J. MURREN President, CFO & Treasurer; BRYAN L. WRIGHT Senior Vice President - Assistant General Counsel & Assistant Secretary; DANIEL J. D'ARRIGO Senior Vice President-Finance\n\n\n\n## No company is better positioned to help shape the future of Las Vegas than MGM MIRAGE.\n\n\n\nCYNTHIA KISER MURPHEY Senior VP, MGM MIRAGE Human Resources\n\n\n\nWILLIAM MCBEATH President, The Mirage\n\nROBERT V. MOON Chairman, MGM MIRAGE Marketing\n\nFELIX D. 
RAPPAPORT President, New York-New York\n\n\n\nPUNAM MATHUR Senior VP, MGM MIRAGE Diversity/Community Relations\n\n\n\ncombination of Mandalay's assets with our financial strength and industry-leading financial discipline will yield significant returns for all of our stakeholders.\n\nWe are currently planning the integration of the two companies, and over time, we expect to realize the full potential of cost and revenue synergies. We will report on our progress throughout the coming year.\n\n## The Next Moment - A City is Born\n\nWhat makes a great city? Las Vegas has long been recognized as the leisure capital of the world. The resorts in our valley have been the innovative leaders in the hospitality industry and have driven the tremendous growth in visitor volume, high occupancy rates and surging food, beverage, entertainment and gaming volumes. But there is another Las Vegas - a community of two million residents on its way to three million by the end of the decade. Las Vegas is leading the U.S. migration to the Southwest. Our newcomers are attracted by the lifestyle, weather, cost of living and economic opportunity. Many have come from cities in the East, West and Midwest and take elements of established communities for granted, such as medical, educational and cultural excellence and diversity.\n\nThe people of Las Vegas today have great aspirations and\n\nexpect and demand more of our community. We are a city without a proper city, and that is about to change. Ambitious plans are underway to revitalize Downtown Las Vegas, centered around a beautiful performing arts center and an academic medical center; UNLV is in the midst of a major capital campaign to enhance the Midtown section of Las Vegas; and your company has embarked on the most comprehensive project to date - Project CityCenter, at the heart of the Las Vegas Strip.\n\nThe Las Vegas Strip has no sense of city now - but we believe it can. 
The future of Las Vegas is centered around our great resorts and our future development. There are many reasons we believe Project CityCenter is the right project for our Las Vegas Strip development. We believe there is a social imperative that Las Vegas mature as a city, not just a conglomeration of suburbs. A city deserves a center - a center for living, working and playing. We want to be an integral part in defining the Las Vegas of the future.\n\nAnd there is a business motivation. Companies in the gaming industry have historically not been valued on par with other hospitality companies and mixed-use real estate companies. We plan to break out of the gaming mold, and define a company based on extensive holdings in multiple businesses. Project CityCenter will include major residential, retail and entertainment components. We will partner with boutique\n\n\n\nSCOTT SIBELLA President, TI\n\n", - "page_start": 25, - "page_end": 25, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## POINTS IN TIME\n\n19\n\n93\n\n\n\n## DEFINING MOMENTS OF MGM MIRAGE\n\n96 19\n\n\n\nTHE NEW YORK-NEW YORK SKYLINE BECOMES\n\nA TOWERING PRESENCE IN THE PORTFOLIO. We acquired Primadonna Resorts to gain full ownership of the spectacular New York-New York as well as three hotel-casinos on the Nevada state line and two championship golf courses.\n\nIT ALL BEGINS WITH MGM GRAND. MGM Grand, the largest hotel-casino in the world, opened to great fanfare. 'The City of Entertainment' redefined the urban resort and provided the foundation for our company's momentous growth.", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "A portion of our tax reserves was assumed in the Mirage Acquisition. The IRS audit of the tax returns of Mirage through the merger date was settled in August 2003, resulting in a payment to the IRS of $45 million, including interest. 
These matters had been previously reserved for, so the settlement had no impact on our income tax provision or our results of operations. Any future adjustments to the acquired Mirage tax reserves will be recorded as an adjustment to goodwill.", - "page_start": 44, - "page_end": 44, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## TO OUR SHAREHOLDERS\n\n## MGM MIRAGE DEFINES MOMENTUM\n\n## 'Your company has undergone several defining moments throughout its history.'\n\nrom its roots some 35 years ago with the opening of the International Hotel, we have played a leading role in continuously redefining the Las Vegas experience. F\n\nWe announced two significant initiatives in 2004 that, taken together, give your company unrivaled momentum to set industry standards for creativity, performance and responsibility for decades to come.\n\n## Defining Momentum for Las Vegas\n\nOur merger agreement with Mandalay Resort Group and our plans to develop Project CityCenter on the Las Vegas Strip are among the most significant announcements in Las Vegas history. As this fabled city begins its second hundred years, MGM MIRAGE is positioned like no other company to take advantage of unsurpassed growth opportunities in the most dynamic gaming and entertainment market in the world.\n\nProject CityCenter will uniquely re-position Las Vegas like no other project before it. Far more than simply another casino-hotel, Project CityCenter encompasses a\n\nBELLAGIO SPA TOWER The quintessential luxury hotel is now even more opulent. 
This expansion includes 928 rooms and suites, 80,000 square feet of convention space, retail outlets, and restaurants.\n\n\n\nmyriad of elements that will propel Las Vegas into a new generation of urban sophistication.\n\nWhile additional details of this extraordinary development will come in the months ahead, I am pleased to tell you that we have secured the services of the internationally acclaimed architect Cesar Pelli to design our anchor resort at the heart of Project CityCenter.\n\nCesar Pelli & Associates has worked with corporate, government and private clients to design major public spaces, museums, airports, research centers, performing arts centers, academic buildings, hotels, office and residential towers and mixed-use projects.\n\nThe work of Cesar Pelli is not constrained by a personal style or a signature that would limit his architecture; instead, it celebrates the unique characteristics of each project. Using this approach, he has designed several exceptional buildings in the United States and abroad.\n\nWe are very excited about our partnership with Mr. Pelli and his colleagues and believe they will deliver for MGM MIRAGE and the residents of Southern Nevada a building of iconic stature around the world.\n\nSHIBUYA MGM GRAND Designed by superstar team Yabu Pushelberg, Shibuya features stellar sushi and the widest sake selection this side of the Pacific, all served in a sleek, airy ambiance.\n\n\n\nCRAVINGS THE MIRAGE The zenith of all-you-can-eat. Designed by Adam Tihany, Cravings boasts 11 cooking stations, a street of unique restaurants, and an array of temptations in what's unquestionably the ultimate buffet dining experience.J. TERRENCE LANNI Chairman & Chief Executive Officer\n\n\n\n", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## Overall Outlook\n\nWe have invested heavily in our existing operations in 2003 and 2004, and expect to continue to do so on a targeted basis in 2005. 
Our Las Vegas Strip resorts require ongoing capital investment to maintain their competitive advantages. We believe the investments in additional non-gaming amenities we made in 2003 and 2004 have enhanced our ability to generate increased visitor volume and allowed us to charge premium prices for our amenities.\n\nThe most likely significant factors affecting operating results at our existing resorts in 2005 will be the expected continued strength of the leisure and convention travel segments, the expansion of Bellagio and the opening of KÀ and other amenities at MGM Grand Las Vegas, and new competition from Wynn Las Vegas on the Las Vegas Strip. Various lodging market observers, such as PricewaterhouseCoopers and Smith Travel Research, are forecasting mid-single digit percentage growth in REVPAR in 2005, with greater REVPAR gains in full service hotels. Our REVPAR growth, and REVPAR growth in Las Vegas in general, has outpaced that of the national market, and we expect that trend to continue.\n\nThe Bellagio expansion opened in late 2004 and added over 30% to the resort's room base. In addition, we added new meeting, retail and dining space and significantly expanded the spa and salon. KÀ opened in late November 2004 at MGM Grand Las Vegas, which had been without a featured production show for almost two years. Along with the numerous restaurant and other entertainment additions at MGM Grand Las Vegas, KÀ will enhance our ability to generate visitor traffic and capture a greater share of our guests' spending.\n\nWynn Las Vegas will add room capacity to the Las Vegas market, with its 2,700 rooms representing a 2% increase in Las Vegas room supply. Wynn Las Vegas will also feature numerous upscale restaurants and generally target customers who might otherwise choose Bellagio, MGM Grand Las Vegas or The Mirage. 
We believe there\n\nwill be some impact on these resorts from Wynn Las Vegas, but also believe that the breadth of amenities in our portfolio of resorts and our loyalty and other marketing programs will help minimize these competitive pressures. The proximity of Wynn Las Vegas to TI and The Mirage, along with pedestrian bridges linking TI with the Fashion Show Mall and Venetian, will also benefit these resorts.\n\n## Mandalay Merger\n\nOn June 16, 2004, we announced that we had entered into a definitive merger agreement with Mandalay Resort Group ('Mandalay'), a publicly traded company, under which we will acquire Mandalay for $71.00 in cash for each share of common stock of Mandalay. Mandalay owns and operates eleven properties in Nevada, including Mandalay Bay, Luxor, Excalibur, Circus Circus, and Slots-A-Fun in Las Vegas, Circus Circus-Reno in Reno, Colorado Belle and Edgewater in Laughlin, Gold Strike and Nevada Landing in Jean, and Railroad Pass in Henderson. Mandalay also owns and operates Gold Strike, a hotel/casino in Tunica County, Mississippi. In addition, Mandalay owns a 50% interest in Silver Legacy in Reno, a 50% interest in Monte Carlo in Las Vegas, a 50% interest in Grand Victoria, a riverboat in Elgin, Illinois, and a 53.5% interest in MotorCity in Detroit, Michigan. The total consideration is approximately $8.1 billion, including equity value of approximately $4.8 billion, convertible debentures with a redemption value of approximately $574 million, the assumption or repayment of other outstanding Mandalay debt with a fair value of approximately $2.6 billion as of December 31, 2004, and $100 million of estimated transaction costs. The transaction is structured as a merger of one of our wholly-owned subsidiaries with and into Mandalay. 
The transaction will be accounted for as a purchase and is anticipated to close during the first quarter of 2005.", - "page_start": 30, - "page_end": 30, - "source_file": "NYSE_MGM_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_MGM_2004.pdf", - "query": " What are the most significant piece of undeveloped land remaining on the Las Vegas Strip ?", - "target_page": 21, - "target_passage": "W RESIDENTIAL In lofts, brown stones and high-rise buildings, residential options abound to populate the new city and ener gize the surrounding areas. e have been working for some time on con ceiving the best use of the 66 acres between Monte Carlo and Bellagio, the most significant piece of undeveloped land remaining on the Las Vegas Strip.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "\n\nRESIDENTIAL In lofts, brownstones and high-rise buildings, residential options abound to populate the new city and energize the surrounding areas.\n\nENTERTAINMENT From street performers to Broadway shows, our entertainment will evoke the best of New York or London.\n\n\n\ne have been working for some time on conceiving the best use of the 66 acres between Monte Carlo and Bellagio, the most significant piece of undeveloped land remaining on the Las Vegas Strip. We certainly could have come up with a spectacular casino-hotel. But, the truth is, Las Vegas is ready for so much more. W\n\nAs the city eclipses two million residents on its way to passing three million by the end of the decade, and with land prices on the Strip soaring, it has become clear that there is a much better and higher use for this location. As Las Vegas marks its Centennial, Project CityCenter stands as a defining moment for development in this fabled city.\n\nProject CityCenter represents a new era of the urban complex, one that encompasses tourism, entertainment, gaming, retail and residential elements. 
Only MGM MIRAGE has the momentum - financially, intellectually and professionally - to effectively develop such a project.\n\nThe signature building within Project CityCenter is the 4,000-room hotel-casino. The internationally acclaimed architect Cesar Pelli has been commissioned to design this iconic structure. Pelli's initial concept drawing defines a new generation of urban landscape for the Las Vegas Strip, one which includes gaming at its economic center but not as an emotional centerpiece.\n\nProject CityCenter will provide the momentum for the next era of amazing growth for your company and Las Vegas.\n\nTHE SITE Located in the heart of the Las Vegas Strip, Project CityCenter will dwarf every development that preceded it. Its 66 acres will include a 4,000-room hotel-casino and three boutique hotels.\n\n\n\n", - "page_start": 20, - "page_end": 20, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## Overall Outlook\n\nWe have invested heavily in our existing operations in 2003 and 2004, and expect to continue to do so on a targeted basis in 2005. Our Las Vegas Strip resorts require ongoing capital investment to maintain their competitive advantages. We believe the investments in additional non-gaming amenities we made in 2003 and 2004 have enhanced our ability to generate increased visitor volume and allowed us to charge premium prices for our amenities.\n\nThe most likely significant factors affecting operating results at our existing resorts in 2005 will be the expected continued strength of the leisure and convention travel segments, the expansion of Bellagio and the opening of KÀ and other amenities at MGM Grand Las Vegas, and new competition from Wynn Las Vegas on the Las Vegas Strip. Various lodging market observers, such as PricewaterhouseCoopers and Smith Travel Research, are forecasting mid-single digit percentage growth in REVPAR in 2005, with greater REVPAR gains in full service hotels. 
Our REVPAR growth, and REVPAR growth in Las Vegas in general, has outpaced that of the national market, and we expect that trend to continue.\n\nThe Bellagio expansion opened in late 2004 and added over 30% to the resort's room base. In addition, we added new meeting, retail and dining space and significantly expanded the spa and salon. KÀ opened in late November 2004 at MGM Grand Las Vegas, which had been without a featured production show for almost two years. Along with the numerous restaurant and other entertainment additions at MGM Grand Las Vegas, KÀ will enhance our ability to generate visitor traffic and capture a greater share of our guests' spending.\n\nWynn Las Vegas will add room capacity to the Las Vegas market, with its 2,700 rooms representing a 2% increase in Las Vegas room supply. Wynn Las Vegas will also feature numerous upscale restaurants and generally target customers who might otherwise choose Bellagio, MGM Grand Las Vegas or The Mirage. We believe there\n\nwill be some impact on these resorts from Wynn Las Vegas, but also believe that the breadth of amenities in our portfolio of resorts and our loyalty and other marketing programs will help minimize these competitive pressures. The proximity of Wynn Las Vegas to TI and The Mirage, along with pedestrian bridges linking TI with the Fashion Show Mall and Venetian, will also benefit these resorts.\n\n## Mandalay Merger\n\nOn June 16, 2004, we announced that we had entered into a definitive merger agreement with Mandalay Resort Group ('Mandalay'), a publicly traded company, under which we will acquire Mandalay for $71.00 in cash for each share of common stock of Mandalay. Mandalay owns and operates eleven properties in Nevada, including Mandalay Bay, Luxor, Excalibur, Circus Circus, and Slots-A-Fun in Las Vegas, Circus Circus-Reno in Reno, Colorado Belle and Edgewater in Laughlin, Gold Strike and Nevada Landing in Jean, and Railroad Pass in Henderson. 
Mandalay also owns and operates Gold Strike, a hotel/casino in Tunica County, Mississippi. In addition, Mandalay owns a 50% interest in Silver Legacy in Reno, a 50% interest in Monte Carlo in Las Vegas, a 50% interest in Grand Victoria, a riverboat in Elgin, Illinois, and a 53.5% interest in MotorCity in Detroit, Michigan. The total consideration is approximately $8.1 billion, including equity value of approximately $4.8 billion, convertible debentures with a redemption value of approximately $574 million, the assumption or repayment of other outstanding Mandalay debt with a fair value of approximately $2.6 billion as of December 31, 2004, and $100 million of estimated transaction costs. The transaction is structured as a merger of one of our wholly-owned subsidiaries with and into Mandalay. The transaction will be accounted for as a purchase and is anticipated to close during the first quarter of 2005.", - "page_start": 30, - "page_end": 30, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## Management's Discussion and Analysis of Financial Condition and Results of Operations\n\n## RESULTS OF OPERATIONS\n\nAt December 31, 2004, our operations consisted of 11 wholly-owned casino resorts and 50% investments in two other casino resorts, including:\n\nLas Vegas, Nevada:\n\nOther:\n\nBellagio, MGM Grand Las Vegas, The Mirage, TI, New YorkNew York, Boardwalk, and Monte Carlo (50% owned).\n\nThe Primm Valley Resorts (Buffalo Bill's, Primm Valley Resort and Whiskey Pete's) in Primm, Nevada; Beau Rivage in Biloxi, Mississippi; MGM Grand Detroit; Borgata (50% owned) in Atlantic City, New Jersey.\n\nWe operate in one segment, the operation of casino resorts, which includes offering gaming, hotel, dining, entertainment, retail and other resort amenities. 
Slightly over half of our net revenues are derived from gaming activities, a lower percentage than many of our competitors, as our operating philosophy is to provide a complete resort experience for our guests, including non-gaming amenities which command premium prices based on their quality.\n\nWe generate a majority of our net revenues and operating income from our Las Vegas Strip resorts. In 2004, over 75% of our net revenues and operating income was generated by wholly-owned Las Vegas Strip resorts. We believe that we own the premier casino resorts on the Las Vegas Strip, and a main focus of our strategy is to continually reinvest in these resorts to maintain that competitive advantage. Our concentration on the Las Vegas Strip exposes us to certain risks outside of our control, such as competition from other Las Vegas Strip resorts as well as new or expanded resorts in Las Vegas, including Wynn Las Vegas expected to open in 2005, and the impact from potential expansion of gaming in California. This concentration also exposes us to risks related to tourism and the general economy, including national and global economic conditions and terrorist attacks or other global events.\n\n## Key Performance Indicators\n\nAs a resort-based company, our operating results are highly dependent on the volume of customers at our resorts, which in turn impacts the price we can charge for our hotel rooms and other amenities. We also generate a significant portion of our operating income from the high-end gaming segment, which can cause variability in our results. Key performance indicators related to revenue are:\n\n - · Gaming revenue indicators - table games drop and slot handle (volume indicators); 'win' or 'hold' percentage, which is not fully controllable by us. 
Our normal table games win percentage is in the range of 18% to 22% of table games drop and our normal slot win percentage is in the range of 6% to 7% of slot handle;\n - · Hotel revenue indicators - hotel occupancy (volume indicator); average daily rate ('ADR', price indicator); revenue per available room ('REVPAR'), a summary measure of hotel results, combining ADR and occupancy rate.\n\nMost of our revenue is essentially cash-based, through customers wagering with cash or paying for non-gaming services with cash or credit cards. Our resorts, like many in the industry, generate significant operating cash flow. Our industry is capital intensive and we rely heavily on the ability of our resorts to generate operating cash flow to repay debt financing, fund maintenance capital expenditures and provide excess cash for future development.\n\nOur results of operations do not tend to be seasonal in nature, though a variety of factors can affect the results of any interim period, including the timing of major Las Vegas conventions, the amount and timing of marketing and special events for our high-end customers, and the level of play during major holidays, including New Year and Chinese New Year.", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "\n\nRecently, we opened the SKYLOFTS, a new level of luxury for guests atop MGM Grand Las Vegas.\n\nWe'll follow the success of these new resort features with a category-defining new nightclub at The Mirage, two fabulous restaurants by Joël Robuchon at MGM Grand Las Vegas and gaming upgrades company-wide. Second, we are doubling down on Las Vegas by merging with Mandalay, a company we have long admired. The Mandalay merger represents a tremendous opportunity to build on the momentum established by Mike Ensign and his team. 
And third, we are dreaming of a not-so-distant future, when\n\n\n\nAL FACCINTO President, MGM MIRAGE International Marketing\n\n\n\nALAN FELDMAN Senior VP Public Affairs, MGM MIRAGE\n\nBRUCE GEBHARDT Senior VP, MGM MIRAGE Global Security\n\nWILLIAM J. HORNBUCKLE President & COO, MGM MIRAGE Europe\n\nPHYLLIS JAMES Senior VP & Senior Counsel, MGM MIRAGE\n\nProject CityCenter will literally redefine the Las Vegas Strip and change the face of Las Vegas forever.\n\n## Mandalay in Motion\n\nWe are incredibly excited to begin our journey with the talented people of Mandalay, as we work to maximize the value of Mandalay's instantly recognized brands and worldclass resorts. Long a fixture in Las Vegas, Mandalay's resorts will add to our premium portfolio and allow us to accelerate the pace of our growth. Our hotel people will be able to market a wider range of rooms and benefit from a world-class\n\n\n\n\n\nconvention center. Our casino marketing people will be able to offer their customers wonderful new amenities to expand our market reach. And our development people will be able to maximize the potential of priceless Las Vegas Strip land.\n\nThe Mandalay merger represents another defining moment for MGM MIRAGE, much like the Mirage Resorts transaction in 2000, at a time when Las Vegas is in a state of astounding metamorphosis. No company is better positioned to help shape the future of Las Vegas than MGM MIRAGE. We employ more people, invest more money and hold more prime real estate than any other company in Las Vegas. The\n\n\n\n", - "page_start": 24, - "page_end": 24, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "(from left to right) ROBERT C. SELWOOD Senior Vice PresidentAccounting; JAMES J. MURREN President, CFO & Treasurer; BRYAN L. WRIGHT Senior Vice President - Assistant General Counsel & Assistant Secretary; DANIEL J. 
D'ARRIGO Senior Vice President-Finance\n\n\n\n## No company is better positioned to help shape the future of Las Vegas than MGM MIRAGE.\n\n\n\nCYNTHIA KISER MURPHEY Senior VP, MGM MIRAGE Human Resources\n\n\n\nWILLIAM MCBEATH President, The Mirage\n\nROBERT V. MOON Chairman, MGM MIRAGE Marketing\n\nFELIX D. RAPPAPORT President, New York-New York\n\n\n\nPUNAM MATHUR Senior VP, MGM MIRAGE Diversity/Community Relations\n\n\n\ncombination of Mandalay's assets with our financial strength and industry-leading financial discipline will yield significant returns for all of our stakeholders.\n\nWe are currently planning the integration of the two companies, and over time, we expect to realize the full potential of cost and revenue synergies. We will report on our progress throughout the coming year.\n\n## The Next Moment - A City is Born\n\nWhat makes a great city? Las Vegas has long been recognized as the leisure capital of the world. The resorts in our valley have been the innovative leaders in the hospitality industry and have driven the tremendous growth in visitor volume, high occupancy rates and surging food, beverage, entertainment and gaming volumes. But there is another Las Vegas - a community of two million residents on its way to three million by the end of the decade. Las Vegas is leading the U.S. migration to the Southwest. Our newcomers are attracted by the lifestyle, weather, cost of living and economic opportunity. Many have come from cities in the East, West and Midwest and take elements of established communities for granted, such as medical, educational and cultural excellence and diversity.\n\nThe people of Las Vegas today have great aspirations and\n\nexpect and demand more of our community. We are a city without a proper city, and that is about to change. 
Ambitious plans are underway to revitalize Downtown Las Vegas, centered around a beautiful performing arts center and an academic medical center; UNLV is in the midst of a major capital campaign to enhance the Midtown section of Las Vegas; and your company has embarked on the most comprehensive project to date - Project CityCenter, at the heart of the Las Vegas Strip.\n\nThe Las Vegas Strip has no sense of city now - but we believe it can. The future of Las Vegas is centered around our great resorts and our future development. There are many reasons we believe Project CityCenter is the right project for our Las Vegas Strip development. We believe there is a social imperative that Las Vegas mature as a city, not just a conglomeration of suburbs. A city deserves a center - a center for living, working and playing. We want to be an integral part in defining the Las Vegas of the future.\n\nAnd there is a business motivation. Companies in the gaming industry have historically not been valued on par with other hospitality companies and mixed-use real estate companies. We plan to break out of the gaming mold, and define a company based on extensive holdings in multiple businesses. Project CityCenter will include major residential, retail and entertainment components. We will partner with boutique\n\n\n\nSCOTT SIBELLA President, TI\n\n", - "page_start": 25, - "page_end": 25, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## TO OUR SHAREHOLDERS\n\nBELLAGIO underwent a significant expansion during 2004 resulting in the opening of the Spa Tower and several important new amenities at this AAA Five Diamond property. Bellagio remains Las Vegas' first and only hotel-casino to receive this prestigious recognition. These new additions add dimension and depth to the world-famous experience awaiting guests at Bellagio.\n\nMGM GRAND LAS VEGAS completed a transformation, begun in 2003, of its food and beverage and entertainment offerings. 
MGM Grand is one of the must-see attractions of Las Vegas, with Cirque du Soleil's newest production, KA ' TM , and several of the Strip's finest restaurants and hottest nightspots. 18 .0 %\n\nTI 's transformation was no less extensive, as the property's management team conceived and implemented a program to enliven the property with new restaurants and nightlife.\n\nTHE MIRAGE was the site of a revolution in Las Vegas' history as the venerable buffet was given new life as a top dining establishment, Cravings. Others may follow this lead, but The Mirage was the first property to breathe new life into what remained of the last bastion of 'old' Las Vegas.\n\n## EXPANDING WITH EXCELLENCE\n\nThese investments in your company's future paid dividends even before the year was out. We established a new record for net revenues posting $4.2 billion, a 10% increase over 2003.\n\nYour company's resorts produced record EBITDA of $1.46 billion, an increase of 23% over 2003, while operating income was $951 million, an increase of 36%, with record results at Bellagio, MGM Grand Las Vegas and Beau Rivage.\n\n## Defining Momentum in the Community\n\nI've spent 27 years in this profession and the incredible generosity of our employees never ceases to amaze me. Shortly after the merger with Mirage Resorts in 2000, we established the Voice Foundation. This allows employees to express themselves in the communities we serve by providing them a mechanism to raise monies for worthy causes. It's their money and they decide where it goes. Your company provides the marketing and administrative support. 
.6% .5 %\n\nIn each year since we established the program, employees have given record amounts to support a\n\n\n\n## 2004 Revenue Mix\n\n\n\n\n\n\n\n\n\nCasino\n\nRooms\n\nFood & Beverage\n\nEntertainment, Retail,\n\n& Other\n\nSKYLOFTS MGM Grand A private sanctuary of sleek, elegant two-story accommodations, offering discerning guests the quintessential loft environment - harmonizing design, décor, ambiance and unparalleled vistas.\n\nBELLAGIO SPA Unique design elements, combined with an international array of innovative treatments and specially trained therapists, provide the ultimate indulgent experience.\n\nTEATRO MGM Grand A new genre of Las Vegas nightlife where European club influences permeate. DJs spin jazz/ house throughout the evening, giving way to an energetic after-hours vibe with live catwalk entertainment.\n\n\n\nKÀ The most spectacular production ever, by a troupe renowned for its pageantry. Cirque du Soleil's KÀ debuted at a new theatre at MGM Grand in the fourth quarter of 2004.\n\n\n\nWhat exactly is a defining moment? Try a multi-billion dollar project centered in the heart of Las Vegas.\n\n\n\n", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## Notes to Consolidated Financial Statements\n\n## NOTE 1 - ORGANIZATION\n\nMGM MIRAGE (the 'Company'), formerly MGM Grand, Inc., is a Delaware corporation, incorporated on January 29, 1986. As of December 31, 2004 approximately 58% of the outstanding shares of the Company's common stock were owned by Tracinda Corporation, a Nevada corporation wholly owned by Kirk Kerkorian. MGM MIRAGE acts largely as a holding company and, through wholly-owned subsidiaries, owns and/or operates casino resorts.\n\nThe Company owns and operates the following casino resorts on the Las Vegas Strip in Las Vegas, Nevada: Bellagio, MGM Grand Las Vegas, The Mirage, Treasure Island ('TI'), New York-New York and the Boardwalk Hotel and Casino. 
The Company owns a 50% interest in the joint venture that owns and operates the Monte Carlo Resort & Casino, also located on the Las Vegas Strip.\n\nThe Company owns three resorts in Primm, Nevada at the California/Nevada state line - Whiskey Pete's, Buffalo Bill's and the Primm Valley Resort - as well as two championship golf courses located near the resorts. The Company also owns Shadow Creek, an exclusive world-class golf course located approximately ten miles north of its Las Vegas Strip resorts.\n\nThe Company, through its wholly owned subsidiary, MGM Grand Detroit, Inc., and its local partners formed MGM Grand Detroit, LLC, to develop a hotel, casino and entertainment complex in Detroit, Michigan. MGM Grand Detroit, LLC operates a casino in an interim facility in downtown Detroit. See Note 10 for discussion of the revised development agreement with the City of Detroit and plans for a permanent casino resort.\n\nThe Company owns and operates Beau Rivage, a beachfront resort located in Biloxi, Mississippi. The Company also owns a 50% interest in a limited liability company that owns Borgata, a casino resort at Renaissance Pointe, located in the Marina area\n\nof Atlantic City, New Jersey. Boyd Gaming Corporation owns the other 50% of Borgata and also operates the resort. Borgata opened in July 2003. The Company owns approximately 95 developable acres adjacent to Borgata, a portion of which consists of common roads, landscaping and master plan improvements which the Company designed and developed as required under the agreement with Boyd.\n\nUntil July 2004, the Company owned and operated MGM Grand Australia and until January 2004, the Company owned and operated the Golden Nugget Las Vegas in downtown Las Vegas and the Golden Nugget Laughlin in Laughlin, Nevada (the 'Golden Nugget Subsidiaries'). Until June 2003, the Company operated PLAYMGMMIRAGE.com, the Company's online gaming website based in the Isle of Man. 
See Note 3 for further information regarding these discontinued operations. In the second quarter of 2002, the Company received proceeds of $11 million upon termination of management agreements covering four casinos in the Republic of South Africa. Prior to the termination, the Company managed three permanent casinos and one interim casino and received management fees from its partner, Tsogo Sun Gaming & Entertainment. The termination fee was recorded as part of other revenues in the accompanying consolidated statements of income.\n\nThe Company is actively seeking future development opportunities in the United Kingdom. In May 2003, the Company acquired a 25% interest in Metro Casinos Limited, a United Kingdom gaming company which operates a casino in Bristol. See Note 10 for discussion of other potential developments in the United Kingdom.\n\nIn June 2004, the Company entered into a joint venture agreement to develop, build and operate a hotel-casino resort in Macau S.A.R. The agreement is subject to, among other things, the approval of the government of Macau S.A.R., and other regulatory approvals, as well as the entry into a subconcession agreement with the holder of one of the existing concessions.", - "page_start": 55, - "page_end": 55, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## TO OUR SHAREHOLDERS\n\n## MGM MIRAGE DEFINES MOMENTUM\n\n## 'Your company has undergone several defining moments throughout its history.'\n\nrom its roots some 35 years ago with the opening of the International Hotel, we have played a leading role in continuously redefining the Las Vegas experience. 
F\n\nWe announced two significant initiatives in 2004 that, taken together, give your company unrivaled momentum to set industry standards for creativity, performance and responsibility for decades to come.\n\n## Defining Momentum for Las Vegas\n\nOur merger agreement with Mandalay Resort Group and our plans to develop Project CityCenter on the Las Vegas Strip are among the most significant announcements in Las Vegas history. As this fabled city begins its second hundred years, MGM MIRAGE is positioned like no other company to take advantage of unsurpassed growth opportunities in the most dynamic gaming and entertainment market in the world.\n\nProject CityCenter will uniquely re-position Las Vegas like no other project before it. Far more than simply another casino-hotel, Project CityCenter encompasses a\n\nBELLAGIO SPA TOWER The quintessential luxury hotel is now even more opulent. This expansion includes 928 rooms and suites, 80,000 square feet of convention space, retail outlets, and restaurants.\n\n\n\nmyriad of elements that will propel Las Vegas into a new generation of urban sophistication.\n\nWhile additional details of this extraordinary development will come in the months ahead, I am pleased to tell you that we have secured the services of the internationally acclaimed architect Cesar Pelli to design our anchor resort at the heart of Project CityCenter.\n\nCesar Pelli & Associates has worked with corporate, government and private clients to design major public spaces, museums, airports, research centers, performing arts centers, academic buildings, hotels, office and residential towers and mixed-use projects.\n\nThe work of Cesar Pelli is not constrained by a personal style or a signature that would limit his architecture; instead, it celebrates the unique characteristics of each project. Using this approach, he has designed several exceptional buildings in the United States and abroad.\n\nWe are very excited about our partnership with Mr. 
Pelli and his colleagues and believe they will deliver for MGM MIRAGE and the residents of Southern Nevada a building of iconic stature around the world.\n\nSHIBUYA MGM GRAND Designed by superstar team Yabu Pushelberg, Shibuya features stellar sushi and the widest sake selection this side of the Pacific, all served in a sleek, airy ambiance.\n\n\n\nCRAVINGS THE MIRAGE The zenith of all-you-can-eat. Designed by Adam Tihany, Cravings boasts 11 cooking stations, a street of unique restaurants, and an array of temptations in what's unquestionably the ultimate buffet dining experience.J. TERRENCE LANNI Chairman & Chief Executive Officer\n\n\n\n", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "- · The Bellagio expansion completed in December 2004;\n - · The theatre for KÀ at MGM Grand Las Vegas, completed in November 2004.\n\nSpending on these two projects totaled approximately $325 million. Other capital expenditures were made for maintenance capital activities, including room remodel projects at New York-New York and MGM Grand Las Vegas and new restaurant and entertainment amenities at several resorts. Capital expenditures in 2003 were significantly higher than 2002, due largely to major projects at our existing resorts, including projects described above which began in 2003, the Zumanity theatre at New York-New York, the Bellagio room remodel and slot technology improvements. 
Capital expenditures in 2002 included general property improvements at our resorts, such as a room remodel project at The Mirage, new restaurant and nightclub development at several of our resorts, and various other remodeling projects.", - "page_start": 36, - "page_end": 36, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## SETTING THE FUTURE IN MOTION\n\n\n\nMGM GRAND MACAU Our joint venture has secured a prime location to develop and construct an exciting addition to this dynamic gaming destination.\n\n\n\nhile the international opportunities for growth remain to be fully defined, in 2004 MGM MIRAGE entered into a joint venture agreement with Pansy Ho Chiu-king to develop, build and operate a major hotel-casino resort in Macau S.A.R. No other international market has shown its ability to sustain improved growth even as the government takes important steps to modernize its regulatory structure. We have methodically moved through the regulatory process and look forward to initiating construction in 2005 and opening in 2007. W\n\nWe continue to monitor and pursue opportunities as they arise in the United Kingdom. The bill modernizing British gaming law has moved steadily through the legislative process throughout the year. Several key issues are yet to be resolved, but we remain hopeful that Great Britain will become one of the world's leading jurisdictions with significant growth opportunities for decades to come.\n\nWe are also excited about the emergence of possible new jurisdictions in the Far East. We plan to pursue additional development opportunities as they become available, as we believe that the Far East holds considerable promise as a growing gaming market.\n\nDomestically, we are selectively expanding our presence as well, moving into markets and business lines where our superior brands and assets can provide the best returns. In Las Vegas we will maximize the use of our vast land holdings, beginning with The Residences at MGM Grand. 
This unique venture is a breakthrough combination of a hotel and condominiums - the first of its kind in Las Vegas. In Atlantic City, we own an exceptional site for future development. The already successful Borgata is prepared to grow bigger and better. Expansion plans include more casino space, a new hotel tower, more restaurants, retail outlets and an expanded spa.\n\n\n\nTHE RESIDENCES AT MGM GRAND Our joint venture with Turnberry Associates to build luxury condo/hotels ignited a flurry of development in Las Vegas.\n\n", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_MGM_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_MGM_2004.pdf", - "query": "Which events negatively impacted leisure travel and MCM Mirage high-end gaming business in late 2002 and early 2003 ?", - "target_page": 32, - "target_passage": "The war with Iraq and the outbreak of SARS in Asia, both of which negatively impacted leisure travel and our high-end gaming business in late 2002 and early 2003", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "On a consolidated basis, the most important factors and trends contributing to our operating performance over the last three years have been:\n\n - · The war with Iraq and the outbreak of SARS in Asia, both of which negatively impacted leisure travel and our high-end gaming business in late 2002 and early 2003;\n - · The new labor contract covering our Las Vegas Strip employees since mid-2002, which calls for significant annual wage and benefits increases through 2007;\n - · The current economic recovery in the United States, which began to impact our operations in the latter half of 2003 and continued to positively affect our results in 2004.", - "page_start": 31, - "page_end": 31, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## Management's Discussion and Analysis of Financial Condition and Results of Operations\n\n## RESULTS OF OPERATIONS\n\nAt December 31, 
2004, our operations consisted of 11 wholly-owned casino resorts and 50% investments in two other casino resorts, including:\n\nLas Vegas, Nevada:\n\nOther:\n\nBellagio, MGM Grand Las Vegas, The Mirage, TI, New YorkNew York, Boardwalk, and Monte Carlo (50% owned).\n\nThe Primm Valley Resorts (Buffalo Bill's, Primm Valley Resort and Whiskey Pete's) in Primm, Nevada; Beau Rivage in Biloxi, Mississippi; MGM Grand Detroit; Borgata (50% owned) in Atlantic City, New Jersey.\n\nWe operate in one segment, the operation of casino resorts, which includes offering gaming, hotel, dining, entertainment, retail and other resort amenities. Slightly over half of our net revenues are derived from gaming activities, a lower percentage than many of our competitors, as our operating philosophy is to provide a complete resort experience for our guests, including non-gaming amenities which command premium prices based on their quality.\n\nWe generate a majority of our net revenues and operating income from our Las Vegas Strip resorts. In 2004, over 75% of our net revenues and operating income was generated by wholly-owned Las Vegas Strip resorts. We believe that we own the premier casino resorts on the Las Vegas Strip, and a main focus of our strategy is to continually reinvest in these resorts to maintain that competitive advantage. Our concentration on the Las Vegas Strip exposes us to certain risks outside of our control, such as competition from other Las Vegas Strip resorts as well as new or expanded resorts in Las Vegas, including Wynn Las Vegas expected to open in 2005, and the impact from potential expansion of gaming in California. 
This concentration also exposes us to risks related to tourism and the general economy, including national and global economic conditions and terrorist attacks or other global events.\n\n## Key Performance Indicators\n\nAs a resort-based company, our operating results are highly dependent on the volume of customers at our resorts, which in turn impacts the price we can charge for our hotel rooms and other amenities. We also generate a significant portion of our operating income from the high-end gaming segment, which can cause variability in our results. Key performance indicators related to revenue are:\n\n - · Gaming revenue indicators - table games drop and slot handle (volume indicators); 'win' or 'hold' percentage, which is not fully controllable by us. Our normal table games win percentage is in the range of 18% to 22% of table games drop and our normal slot win percentage is in the range of 6% to 7% of slot handle;\n - · Hotel revenue indicators - hotel occupancy (volume indicator); average daily rate ('ADR', price indicator); revenue per available room ('REVPAR'), a summary measure of hotel results, combining ADR and occupancy rate.\n\nMost of our revenue is essentially cash-based, through customers wagering with cash or paying for non-gaming services with cash or credit cards. Our resorts, like many in the industry, generate significant operating cash flow. 
Our industry is capital intensive and we rely heavily on the ability of our resorts to generate operating cash flow to repay debt financing, fund maintenance capital expenditures and provide excess cash for future development.\n\nOur results of operations do not tend to be seasonal in nature, though a variety of factors can affect the results of any interim period, including the timing of major Las Vegas conventions, the amount and timing of marketing and special events for our high-end customers, and the level of play during major holidays, including New Year and Chinese New Year.", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "MGM MIRAGE 2004 ANNUAL REPORT\n\n## defining momentum\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "\n\nRecently, we opened the SKYLOFTS, a new level of luxury for guests atop MGM Grand Las Vegas.\n\nWe'll follow the success of these new resort features with a category-defining new nightclub at The Mirage, two fabulous restaurants by Joël Robuchon at MGM Grand Las Vegas and gaming upgrades company-wide. Second, we are doubling down on Las Vegas by merging with Mandalay, a company we have long admired. The Mandalay merger represents a tremendous opportunity to build on the momentum established by Mike Ensign and his team. And third, we are dreaming of a not-so-distant future, when\n\n\n\nAL FACCINTO President, MGM MIRAGE International Marketing\n\n\n\nALAN FELDMAN Senior VP Public Affairs, MGM MIRAGE\n\nBRUCE GEBHARDT Senior VP, MGM MIRAGE Global Security\n\nWILLIAM J. 
HORNBUCKLE President & COO, MGM MIRAGE Europe\n\nPHYLLIS JAMES Senior VP & Senior Counsel, MGM MIRAGE\n\nProject CityCenter will literally redefine the Las Vegas Strip and change the face of Las Vegas forever.\n\n## Mandalay in Motion\n\nWe are incredibly excited to begin our journey with the talented people of Mandalay, as we work to maximize the value of Mandalay's instantly recognized brands and worldclass resorts. Long a fixture in Las Vegas, Mandalay's resorts will add to our premium portfolio and allow us to accelerate the pace of our growth. Our hotel people will be able to market a wider range of rooms and benefit from a world-class\n\n\n\n\n\nconvention center. Our casino marketing people will be able to offer their customers wonderful new amenities to expand our market reach. And our development people will be able to maximize the potential of priceless Las Vegas Strip land.\n\nThe Mandalay merger represents another defining moment for MGM MIRAGE, much like the Mirage Resorts transaction in 2000, at a time when Las Vegas is in a state of astounding metamorphosis. No company is better positioned to help shape the future of Las Vegas than MGM MIRAGE. We employ more people, invest more money and hold more prime real estate than any other company in Las Vegas. The\n\n\n\n", - "page_start": 24, - "page_end": 24, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "- · The ongoing capital investments in upscale amenities at our resorts, which we believe is allowing us to market more effectively to visitors, capture a greater share of these visitors' increased travel budgets, and generate premium pricing for our resorts' rooms and other amenities.\n\nAs a result of the above trends, our net revenues increased 10% in 2004, while increasing only 3% in 2003. Net revenues at MGM Grand Las Vegas increased 14% in 2004, due to the addition of several new restaurants, bars and other amenities, and in spite of fewer rooms in service due to room remodel activity. 
Net revenues at New York-New York increased 26% as the resort continues to benefit from Zumanity and Nine Fine Irishmen, both of which opened in summer 2003. Net revenues at The Mirage decreased 2% as the resort was without the Siegfried & Roy show and the buffet was closed for a portion of the year while Cravings was constructed.\n\nOur operating income in 2004 increased 36%, due primarily to the strong revenue trends and a full year of Borgata's results. The increase in income from unconsolidated affiliates is responsible for approximately one-third of the increase in operating income, while improvements at our operating resorts, particularly Bellagio, MGM Grand Las Vegas and New York-New York, make up the rest of the increase. Operating income at MGM Grand Detroit was essentially flat year-overyear, despite an increase in the gaming tax rate from 18% to 24% effective September 2004. Several other factors largely offset: Higher corporate expense due to increased development costs; lower bad debt expense due to improved collections; lower preopening expenses due to Borgata preopening expenses in 2003; and higher property transactions, net due to a $37 million gain on sale of land in 2003.\n\nIn 2003, our operating income decreased by 6%. While revenues grew especially in the second half of 2003, expense growth, particularly in payroll, outpaced revenues.\n\n## Operating Results - Detailed Revenue Information\n\n## The following table presents details of our net revenues:\n\n## (In thousands)\n\nTable games revenues increased as a result of the improvements in the U.S. economy and the general economy worldwide, as well as increased attendance at targeted marketing events, including the New Years period. Total table games volume for the year was up 9%, with particular strength in baccarat volume, up 18%. These are the most significant increases in table games volumes since 2000. 
Table games revenues decreased in 2003, as a slightly lower hold percentage and the impact of the Iraq war and SARS outbreak in early 2003 were not fully offset by strong volume levels over the latter half of 2003. Table games win percentages were within our normal range for all periods presented.", - "page_start": 32, - "page_end": 32, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## TO OUR SHAREHOLDERS\n\nBELLAGIO underwent a significant expansion during 2004 resulting in the opening of the Spa Tower and several important new amenities at this AAA Five Diamond property. Bellagio remains Las Vegas' first and only hotel-casino to receive this prestigious recognition. These new additions add dimension and depth to the world-famous experience awaiting guests at Bellagio.\n\nMGM GRAND LAS VEGAS completed a transformation, begun in 2003, of its food and beverage and entertainment offerings. MGM Grand is one of the must-see attractions of Las Vegas, with Cirque du Soleil's newest production, KA ' TM , and several of the Strip's finest restaurants and hottest nightspots. 18 .0 %\n\nTI 's transformation was no less extensive, as the property's management team conceived and implemented a program to enliven the property with new restaurants and nightlife.\n\nTHE MIRAGE was the site of a revolution in Las Vegas' history as the venerable buffet was given new life as a top dining establishment, Cravings. Others may follow this lead, but The Mirage was the first property to breathe new life into what remained of the last bastion of 'old' Las Vegas.\n\n## EXPANDING WITH EXCELLENCE\n\nThese investments in your company's future paid dividends even before the year was out. 
We established a new record for net revenues posting $4.2 billion, a 10% increase over 2003.\n\nYour company's resorts produced record EBITDA of $1.46 billion, an increase of 23% over 2003, while operating income was $951 million, an increase of 36%, with record results at Bellagio, MGM Grand Las Vegas and Beau Rivage.\n\n## Defining Momentum in the Community\n\nI've spent 27 years in this profession and the incredible generosity of our employees never ceases to amaze me. Shortly after the merger with Mirage Resorts in 2000, we established the Voice Foundation. This allows employees to express themselves in the communities we serve by providing them a mechanism to raise monies for worthy causes. It's their money and they decide where it goes. Your company provides the marketing and administrative support. .6% .5 %\n\nIn each year since we established the program, employees have given record amounts to support a\n\n\n\n## 2004 Revenue Mix\n\n\n\n\n\n\n\n\n\nCasino\n\nRooms\n\nFood & Beverage\n\nEntertainment, Retail,\n\n& Other\n\nSKYLOFTS MGM Grand A private sanctuary of sleek, elegant two-story accommodations, offering discerning guests the quintessential loft environment - harmonizing design, décor, ambiance and unparalleled vistas.\n\nBELLAGIO SPA Unique design elements, combined with an international array of innovative treatments and specially trained therapists, provide the ultimate indulgent experience.\n\nTEATRO MGM Grand A new genre of Las Vegas nightlife where European club influences permeate. DJs spin jazz/ house throughout the evening, giving way to an energetic after-hours vibe with live catwalk entertainment.\n\n\n\nKÀ The most spectacular production ever, by a troupe renowned for its pageantry. Cirque du Soleil's KÀ debuted at a new theatre at MGM Grand in the fourth quarter of 2004.\n\n\n\nWhat exactly is a defining moment? 
Try a multi-billion dollar project centered in the heart of Las Vegas.\n\n\n\n", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## Management's Discussion and Analysis of Financial Condition and Results of Operations\n\nSlot revenues increased substantially in both 2003 and 2004. Improvements were the result of strong customer visitation, enhanced marketing programs, the impact of our Players Club rewards program, and the implementation of cashless gaming technology in 2003. Slot win percentages were consistent among all three periods.\n\nNon-casino revenue increased in 2004 primarily due to the enhanced amenities at our resorts. In addition, we were able to increase the pricing for our rooms and other non-gaming amenities. Our hotel results began to improve notably in the latter half of 2003, particularly at our Las Vegas Strip resorts. For the year ended December 31, 2004 REVPAR at our Las Vegas Strip resorts was $141 compared to $126 in 2003, an increase of 12%. Company-wide REVPAR was $121, an increase of 10% over 2003. This increase was largely rate driven, as occupancy increased from 91% to 92% and ADR increased from $121 to $132. In 2003, company-wide REVPAR increased 6% from $104 to $110, with most of the gains coming in the second half of the year.\n\n## Operating Results - Details of Certain Charges\n\nPre-opening and start-up expenses consisted of the following:\n\n| Year Ended December 31 (In thousands) | 2004 | 2003 | 2002 |\n|--------------------------------------------------------------------------------|---------|----------|----------|\n| Bellagio expansion . . . . . . . . . . . . . . . . . . . . . . . . . | $ 3,805 | $ - | $ - |\n| KÀ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 3,655 | - | - |\n| Borgata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
| - | 19,326 | 7,757 |\n| New York-New York ( Zumanity, Nine Fine Irishmen) | - | 4,310 | - |\n| Players Club . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | - | 3,051 | 5,117 |\n| Other . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 2,816 | 2,579 | 1,267 |\n| | $10,276 | $ 29,266 | $ 14,141 |\n\nPre-opening and start-up expenses related to Borgata represent our share of the operating results of Borgata prior to its July 2003 opening.\n\n## Restructuring costs (credit) consisted of the following:\n\n| Year Ended December 31 (In thousands) | 2004 | 2003 | 2002 |\n|-------------------------------------------------------------------------------|---------|---------|------------|\n| Contract termination costs . . . . . . . . . . . . . . . . . . . . | $ 3,693 | $ 4,049 | $ 3,257 |\n| Reversal of certain September 11 charges . . . . . . . . | - | - | (10,421) |\n| Siegfried & Roy show closure - The Mirage . . . . . . . | - | 1,623 | - |\n| Reversal of 2000 contract termination costs . . . . . . | - | - | (9,857) |\n| Other . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
| 1,932 | 925 | - |\n| | $ 5,625 | $ 6,597 | $ (17,021) |\n\nIn 2004, restructuring costs include $3 million for contract termination costs related to the Aqua restaurant at Bellagio and $2 million of workforce reduction costs at MGM Grand Detroit as a result of our efforts to minimize the impact of a gaming tax increase in Michigan.\n\nIn 2003, our primary restructuring activities included closing two marketing offices and terminating the related leases, terminating a lease agreement with a restaurant tenant at MGM Grand Las Vegas, and closing the Siegfried & Roy show, which resulted in a charge for employee severance costs.", - "page_start": 33, - "page_end": 33, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## Overall Outlook\n\nWe have invested heavily in our existing operations in 2003 and 2004, and expect to continue to do so on a targeted basis in 2005. Our Las Vegas Strip resorts require ongoing capital investment to maintain their competitive advantages. We believe the investments in additional non-gaming amenities we made in 2003 and 2004 have enhanced our ability to generate increased visitor volume and allowed us to charge premium prices for our amenities.\n\nThe most likely significant factors affecting operating results at our existing resorts in 2005 will be the expected continued strength of the leisure and convention travel segments, the expansion of Bellagio and the opening of KÀ and other amenities at MGM Grand Las Vegas, and new competition from Wynn Las Vegas on the Las Vegas Strip. Various lodging market observers, such as PricewaterhouseCoopers and Smith Travel Research, are forecasting mid-single digit percentage growth in REVPAR in 2005, with greater REVPAR gains in full service hotels. Our REVPAR growth, and REVPAR growth in Las Vegas in general, has outpaced that of the national market, and we expect that trend to continue.\n\nThe Bellagio expansion opened in late 2004 and added over 30% to the resort's room base. 
In addition, we added new meeting, retail and dining space and significantly expanded the spa and salon. KÀ opened in late November 2004 at MGM Grand Las Vegas, which had been without a featured production show for almost two years. Along with the numerous restaurant and other entertainment additions at MGM Grand Las Vegas, KÀ will enhance our ability to generate visitor traffic and capture a greater share of our guests' spending.\n\nWynn Las Vegas will add room capacity to the Las Vegas market, with its 2,700 rooms representing a 2% increase in Las Vegas room supply. Wynn Las Vegas will also feature numerous upscale restaurants and generally target customers who might otherwise choose Bellagio, MGM Grand Las Vegas or The Mirage. We believe there\n\nwill be some impact on these resorts from Wynn Las Vegas, but also believe that the breadth of amenities in our portfolio of resorts and our loyalty and other marketing programs will help minimize these competitive pressures. The proximity of Wynn Las Vegas to TI and The Mirage, along with pedestrian bridges linking TI with the Fashion Show Mall and Venetian, will also benefit these resorts.\n\n## Mandalay Merger\n\nOn June 16, 2004, we announced that we had entered into a definitive merger agreement with Mandalay Resort Group ('Mandalay'), a publicly traded company, under which we will acquire Mandalay for $71.00 in cash for each share of common stock of Mandalay. Mandalay owns and operates eleven properties in Nevada, including Mandalay Bay, Luxor, Excalibur, Circus Circus, and Slots-A-Fun in Las Vegas, Circus Circus-Reno in Reno, Colorado Belle and Edgewater in Laughlin, Gold Strike and Nevada Landing in Jean, and Railroad Pass in Henderson. Mandalay also owns and operates Gold Strike, a hotel/casino in Tunica County, Mississippi. 
In addition, Mandalay owns a 50% interest in Silver Legacy in Reno, a 50% interest in Monte Carlo in Las Vegas, a 50% interest in Grand Victoria, a riverboat in Elgin, Illinois, and a 53.5% interest in MotorCity in Detroit, Michigan. The total consideration is approximately $8.1 billion, including equity value of approximately $4.8 billion, convertible debentures with a redemption value of approximately $574 million, the assumption or repayment of other outstanding Mandalay debt with a fair value of approximately $2.6 billion as of December 31, 2004, and $100 million of estimated transaction costs. The transaction is structured as a merger of one of our wholly-owned subsidiaries with and into Mandalay. The transaction will be accounted for as a purchase and is anticipated to close during the first quarter of 2005.", - "page_start": 30, - "page_end": 30, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "The increase in aggregate dollars in all periods presented is primarily a result of the expansion of our operations through internal growth and acquisitions.\n\nThe increase in cost of operations as a percentage of revenue from 2002 to 2003 and the decrease in cost of operations as a percentage of revenue from 2003 to 2004 is primarily attributable to higher self-insurance expense in 2003. Self-insurance expense was $165.3 million, $189.5 million and $138.1 million for the years ended December 31, 2004, 2003 and 2002, respectively. The increase in self-insurance expense in 2003 related to existing claims and was attributable to the expansion of our operations and various changes in estimates as a result of continued negative trends through the 2003 policy year.\n\nExcluding self-insurance expense, cost of operations as a percentage of revenue increased during the year ended December 31, 2004 versus the comparable 2003 period. 
This increase is primarily attributable to increased fuel prices, labor costs and subcontracting costs associated with the long-haul transport of waste by third-party vendors. Excluding self-insurance expense, cost of operations as a percentage of revenue decreased in 2003 versus the comparable 2002 period due to the elimination of closure and post-closure expense as a component of cost of operations in accordance with SFAS 143 in 2003 and the termination of our operating lease facility in July 2002. This decrease was partially offset by increased fuel prices, an increase in waste taxes levied on landfill volumes in certain states, an increase in revenue generated by lines of business that produce lower operating margins and an increase in the long-haul transport of waste by third-party vendors.\n\nTo date in 2005, we have experienced a significant increase in fuel prices. We believe that cost of operations as a percentage of revenue may continue to remain high depending upon the cost of fuel, health insurance, risk insurance and other key components of our cost structure and general economic conditions.\n\nDepreciation, Amortization and Depletion of Property and Equipment. Depreciation, amortization and depletion expenses for property and equipment were $252.4 million, $233.8 million and $193.5 million, or, as a percentage of revenue, 9.3%, 9.3% and 8.2%, for the years ended December 31, 2004, 2003 and 2002, respectively. The increase in aggregate dollars from 2003 to 2004 is primarily due to the expansion of our operations through internal growth and acquisitions. The increase in aggregate dollars and as a percentage of revenue from 2002 to 2003 is primarily due to an increase in landfill amortization associated with the adoption of SFAS 143. 
The remaining increase from 2002 to 2003 is due to increased depreciation expense resulting from capital expenditures, acquisitions and the purchase of equipment originally placed into service pursuant to an operating lease.\n\nAmortization of Intangible Assets. Intangible assets consist primarily of cost in excess of fair value of net assets acquired, but also includes values assigned to long-term contracts, covenants not to compete and customer relationships. Expenses for amortization of intangible assets were $7.0 million, $5.3 million and $6.1 million, or, as a percentage of revenue, .3%, .2% and .2%, for the years ended December 31, 2004, 2003 and 2002, respectively. The increase in such expenses in aggregate dollars and as a percentage of revenue from 2003 to 2004 is primarily due to amortization expense on amounts that were recorded in other intangible assets during the three months ended September 30, 2004 resulting from an extensive internal review of all recent acquisitions. The increase in amortization of intangible assets in aggregate dollars is also due to the amortization of intangible assets associated with businesses acquired during 2004.", - "page_start": 43, - "page_end": 43, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "hotel, retail and residential companies, companies previously serving only major urban hubs. And CityCenter will ensure the greatest possible return on our investment on this Las Vegas Strip land.\n\nAs always, we are working on many growth opportunities to increase our momentum and become a company with a global scale. We are excited about the expansion projects underway at Borgata, the rapid sales pace at The Residences at MGM Grand Las Vegas, and the development of a hotel-casino in Macau. And we are exploring additional development opportunities in the Far East and the United Kingdom. 
All of these endeavors will be handled with the clear intent of expanding prudently and ensuring financial stability, as our capital allocation focus in 2005 will be to reduce debt and continue to invest in our resorts.\n\n## Defining Value\n\nIt has become a custom to include our financial core values in this letter to our owners. We believe that management's most important role is to most effectively manage assets and allocate capital. We hire the best casino resort operators in the world, and they provide us the fuel we need, operating cash flow, to propel us forward. That cash flow generates real value for shareholders in several ways.\n\n\n\nWILLIAM SMITH President, MGM MIRAGE Design Group\n\n\n\nRICHARD A. STURM President, MGM MIRAGE Sports & Entertainment\n\nFRANK VISCONTI President, MGM MIRAGE Retail\n\nRENEE WEST President, Primadonna Resorts\n\nFORREST WOODWARD President, Boardwalk\n\nFirst, we can re-invest in our resorts, as we have done over the past several years and will continue to do so in 2005 and beyond. These investments create the impetus for increased guest spending, and the relationship is not linear. We are capturing an increased share of guests and an increased share of each guest's spending budget. Since 2000, we have invested over $2.0 billion in capital in our resorts and our unconsolidated affiliates, which helped drive EBITDA from $1.1 billion to $1.5 billion in 2004, with significant cash flow-producing assets just coming on line in late 2004. Second, we can return capital to the shareholders. In 2004, we repurchased eight million shares of common stock for $349 million bringing the total since May 2000 to 30 million shares for $1.0 billion. Third, we can reduce debt, and maintain a low cost of borrowing for the future. 
In 2004, we repaid almost $100 million in net debt, bringing total debt reduction since May 2000 to $1.1 billion.\n\nOur financial strength allowed us to issue over $1.5 billion in fixed rate debt in 2004 at historically low interest rates, as well as securing a $7 billion credit facility to fund the Mandalay acquisition, the largest ever for a gaming company. And the recent redemptions of certain of our Senior Notes means our assets are no longer securing our remaining senior debt, including the new credit facility.\n\n\n\n\n\n\n\n## Always in Motion\n\nWe would love to look back at 2004 forever, given that it was our company's best year ever. But our work is only beginning. New history is still to be made; records are waiting to be broken; and we must vigilantly maintain our momentum. As stewards of your company, our goals are to continue to perform at peak levels and manage our growth initiatives to ensure maximum value for our shareholders. I hope to report on new defining moments in next year's Annual Report.\n\nJAMES J. MURREN President, Chief Financial Officer & Treasurer\n\n\n\n\n\nSTOCK PRICE HISTORY (2002-2004)
Due to the increasing complexity of modern aircraft, this natural interest must be applied to develop a sound understanding of basic engineering principles and an appreciation of some of the more advanced problems of aerodynamics and engineering. The safety and effectiveness of flying operations will depend greatly on the understanding and appreciation of how and why an airplane flies. The principles of aerodynamics will provide the foundations for developing exacting and precise flying techniques and operational procedures.\n\nThe content of this textbook has been arranged to provide as complete as possible a reference for all phases of flying in Naval Aviation. Hence, the text material is applicable to the problems of flight training, transition training, and general flying operations. The manner of presentation throughout the text has been designed to provide the elements of both theory and application and will allow either directed or unassisted study. As a result, the text material will be applicable to supplement formal class lectures and briefings and provide reading material as a background for training and flying operations.\n\nMuch of the specialized mathematical detail of aerodynamics has been omitted wherever it was considered unnecessary in the field of flying operations. Also, many of the basic assumptions and limitations of certain parts of aerodynamic theory have been omitted for the sake of simplicity and clarity of presentation. In order to contend with these specific shortcomings, the Naval Aviator should rely on the assistance of certain specially qualified individuals within Naval Aviation. For example, graduate aeronautical engineers, graduates of the Test Pilot Training School at the Naval Air Test Center, graduates of the Naval Aviation Safety Officers Course, and technical representatives of the manufacturers are qualified to assist in interpreting and applying the more difficult parts of aerodynamics and aeronautical engineering. 
To be sure, the specialized qualifications of these individuals should be utilized wherever possible.", "page_start": 4, "page_end": 4, "source_file": "00-80T-80.pdf" }, { "text": "## AERODYNAMICS FOR NAVAL AVIATORS\n\nBY\n\nH. H. HURT, JR. UNIVERSITY OF SOUTHERN CALIFORNIA\n\n\n\nDISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. DESTRUCTION NOTICE - For unclassified, limited documents, destroy by any method that will prevent disclosure of contents or reconstruction of the document.\n\nPUBLISHED BY DIRECTION OF COMMANDER, NAVAL AIR SYSTEMS COMMAND", "page_start": 0, "page_end": 0, "source_file": "00-80T-80.pdf" }, { "text": "\n\n## Chapter 6\n\n## APPLICATION OF AERODYNAMICS TO SPECIFIC PROBLEMS OF FLYING\n\nWhile the previous chapters have presented the detailed parts of the general field of aerodynamics, there remain various problems of flying which require the application of principles from many parts of aerodynamics. The application of aerodynamics to these various problems of flying will assist the Naval Aviator in understanding these problems and developing good flying techniques.\n\n## PRIMARY CONTROL OF AIRSPEED AND ALTITUDE\n\nFor the conditions of steady flight, the airplane must be in equilibrium. Equilibrium will be achieved when there is no unbalance of force or moment acting on the airplane. If it is assumed that the airplane is trimmed so that no unbalance of pitching, yawing, or rolling moments exists, the principal concern is for", "page_start": 366, "page_end": 366, "source_file": "00-80T-80.pdf" }, { "text": "\n\nThe performance of an aircraft is the most important feature which defines its suitability for specific missions. The principal items of airplane performance deserve detailed consideration in order to better understand and appreciate the capabilities of each airplane. 
Knowledge of the various items of airplane performance will provide the Naval Aviator with a more complete appreciation of the operating limitations and insight to obtain the design performance of his aircraft. The performance section of the flight handbook provides the specific information regarding the capabilities and limitations of each airplane. Every Naval Aviator must rely upon these handbook data as the guide to safe and effective operation of his aircraft.", "page_start": 112, "page_end": 112, "source_file": "00-80T-80.pdf" }, { "text": "\n\n## Chapter 5\n\n## OPERATING STRENGTH LIMITATIONS\n\nThe weight of the structural components of an aircraft is an extremely important factor in the development of an efficient aircraft configuration. In no other field of mechanical design is there such necessary importance assigned to structural weight. The efficient aircraft and powerplant structure is the zenith of highly refined minimum weight design. In order to obtain the required service life from his aircraft, the Naval Aviator must understand, appreciate, and observe the operating strength limitations. Failure to do so will incur excessive maintenance costs and a high incidence of failure during the service life of an aircraft.", "page_start": 342, "page_end": 342, "source_file": "00-80T-80.pdf" }, { "text": "\n\n## Chapter 1 BASIC AERODYNAMICS\n\nIn order to understand the characteristics of his aircraft and develop precision flying techniques, the Naval Aviator must be familiar with the fundamentals of aerodynamics. There are certain physical laws which describe the behavior of airflow and define the various aerodynamic forces and moments acting on a surface. 
These principles of aerodynamics provide the foundations for good, precise flying techniques.\n\n## WING AND AIRFOIL FORCES\n\n## PROPERTIES OF THE ATMOSPHERE\n\nThe aerodynamic forces and moments acting on a surface are due in great part to the properties of the air mass in which the surface is operating. The composition of the earth's atmosphere by volume is approximately 78 percent nitrogen, 21 percent oxygen, and 1", "page_start": 18, "page_end": 18, "source_file": "00-80T-80.pdf" }, { "text": "| 100 | 1.000 | 1.30 | 20.0° |\n|------|-------|------|-------|\n| 110 | .826 | 1.24 | 15.9° |\n| 120 | .694 | 1.04 | 12.7° |\n| 150 | .444 | .61 | 8.2° |\n| 200 | .250 | .38 | 4.6° |\n| 300 | .111 | .17 | 2.1° |\n| 400 | .0625 | .09 | 1.1° |\n| 500 | .040 | .06 | .7° |\n| 600 | .028 | .04 | .5° |\n\nNote that for the conditions of steady flight, each airspeed requires a specific angle of attack and lift coefficient. This fact provides a fundamental concept of flying technique: Angle of attack is the primary control of airspeed in steady flight. Of course, the control stick or wheel allows the pilot to control the angle of attack and, thus, control the airspeed in steady flight. In the same sense, the throttle controls the output of the powerplant and allows the pilot to control rate of climb and descent at various airspeeds.\n\nThe real believers of these concepts are professional instrument pilots, LSO's, and glider pilots. The glider pilot (or flameout enthusiast) has no recourse but to control airspeed by angle of attack and accept whatever rate of descent is incurred at the various airspeeds. The LSO must become quite proficient at judging the flight path and angle of attack of the airplane in the pattern. 
The more complete visual reference field available to the LSO allows him to judge the angle of attack of the airplane more accurately than the pilot. When the airplane approaches the LSO, the precise judgment of airspeed is by the angle of attack rather than the rate of closure. If the LSO sees the airplane on the desired flight path but with too low an angle of attack, the airspeed is too high; if the angle of attack is too high, the airspeed is too low and the airplane is approaching the stall. The mirror landing system coupled with an angle of attack indicator is an obvious refinement. The mirror indicates the desired flight path and the angle of attack indicator allows precision control of the airspeed. The accomplished instrument pilot is the devotee of 'attitude' flying technique-his creed being 'attitude plus power equals performance.' During a GCA approach, the professional instrument pilot controls airspeed with stick (angle of attack) and rate of descent with power adjustment.\n\nManeuvering flight and certain transient conditions of flight tend to complicate the relationship of angle of attack and airspeed. However, the majority of flight and, certainly, the most critical regime of flight (takeoff, approach, and landing), is conducted in essentially steady flight condition.", "page_start": 44, "page_end": 44, "source_file": "00-80T-80.pdf" }, { "text": "## NAVWEPS 00-80T-80 PREFACE\n\nThe majority of aircraft accidents are due to some type of error of the pilot. This fact has been true in the past and, unfortunately, most probably will be true in the future. Each Naval Aviator should strive to arm himself with knowledge, training, and exacting, professional attitudes and techniques. The fundamentals of aerodynamics as presented in this text will provide the knowledge and background for safe and effective flying operations. 
The flight handbooks for the aircraft will provide the particular techniques, procedures, and operating data which are necessary for each aircraft. Diligent study and continuous training are necessary to develop the professional skills and techniques for successful flying operations.\n\nThe author takes this opportunity to express appreciation to those who have assisted in the preparation of the manuscript. In particular, thanks are due to Mr. J. E. Fairchild for his assistance with the portions dealing with helicopter aerodynamics and roll coupling phenomena. Also, thanks are due to Mr. J. F. Detwiler and Mr. E. Dimitruk for their review of the text material.\n\nHUGH HARRISON HURT, Jr.\n\nAugust 1959 University of Southern California Los Angeles, Calif.", "page_start": 5, "page_end": 5, "source_file": "00-80T-80.pdf" }, { "text": "## NAVWEPS 00-80T-80 HIGH SPEED AERODYNAMICS\n\nChapter 3 HIGH SPEED AERODYNAMICS\n\n\n\nDevelopments in aircraft and powerplants have produced high performance airplanes with capabilities for very high speed flight. The study of aerodynamics at these very high flight speeds has many significant differences from the study of classical low speed aerodynamics. Therefore, it is quite necessary that the Naval Aviator be familiar with the nature of high speed airflow and the characteristics of high performance airplane configurations.\n\n## GENERAL CONCEPTS AND SUPERSONIC FLOW PATTERNS\n\n## NATURE OF COMPRESSIBILITY\n\nAt low flight speeds the study of aerodynamics is greatly simplified by the fact that air may experience relatively small changes in pressure with only negligible changes in density. This airflow is termed incompressible since the air may undergo changes", "page_start": 218, "page_end": 218, "source_file": "00-80T-80.pdf" }, { "text": "Another important form of direct interference is common when the two airplanes are in a trail position and stepped down. 
As shown in figure 6.10, the single airplane in flight develops upwash ahead of the wing and downwash behind and any restriction accorded the flow can alter the distribution and magnitude of the upwash and downwash. When the trailing airplane is in close proximity aft and below the leading airplane a mutual interference takes place between the two airplanes. The leading airplane above will experience an effect which would be somewhat similar to encountering ground effect, i.e., a reduction in induced drag, a reduction in downwash at the tail, and a change in pitching moment nose down. The trailing airplane below will experience an effect which is generally the opposite of the airplane above. In other words, the airplane below will experience an increase in induced drag, an increase in downwash at the tail, and a change in pitching moment nose up. Thus, when the airplanes are in close proximity, a definite collision possibility exists because of the trim change experienced by each airplane. The magnitude of the trim change is greatest when the airplanes are operating at high lift coefficients, e.g., low speed flight, and when the airplanes are in close proximity.\n\nIn formation flying, this sort of interference must be appreciated and anticipated. In crossing under another airplane, care must be taken to anticipate the trim change and adequate clearance must be maintained, otherwise a collision may result. The pilot of the leading aircraft will know of the presence of the trailing airplane by the trim change experienced. Obviously, some anticipation is necessary and adequate separation is necessary to prevent a disturbing magnitude of the trim change. In a close diamond formation the leader will be able to 'feel' the presence of the slot man even though the airplane is not within view. 
Obviously, the slot man will have a difficult job during formation maneuvers because of the unstable trim changes\n\n## NAVWEPS 00-80T-80 APPLICATION OF AERODYNAMICS\n\n## TO SPECIFIC PROBLEMS OF FLYING\n\nand greater power changes required to hold position.\n\nA common collision problem is the case of an airplane with a malfunctioning landing gear. If another airplane is called to inspect the malfunctioning landing gear, great care must be taken to maintain adequate separation and preserve orientation. Many instances such as this have resulted in a collision when the pilot of the trailing airplane became disoriented and did not maintain adequate separation.\n\nDuring inflight refueling, essentially the same problems of interference exist. As the receiver approaches the tanker from behind and below, the receiver will encounter the downwash from the tanker and require a slight, gradual increase in power and pitch attitude to continue approach to the receiving position. While the receiver may not be visible to the pilot of the tanker, he will anticipate the receiver coming into position by the slight reduction in power required and nose down change in pitching moment. Adequate clearance and proper position must be maintained by the pilot of the receiver for a collision possibility is enhanced by the relative positions of the airplanes. A hazardous condition exists if the pilot of the receiver has excessive speed and runs under the tanker in close proximity. The trim change experienced by both airplanes may be large and unexpected and it may be difficult to avoid a collision.", "page_start": 402, "page_end": 402, "source_file": "00-80T-80.pdf" } ] }, { "references": { "source_file": "00-80T-80.pdf", "query": "What is the static pressure of the aire at standard sea level ?", "target_page": 20, "target_passage": "At standard sea level conditions the static pressure of the air is 2,116 psf (or 14.7 psi, 29.92 in. Hg, etc.) 
", "chunk_present": { "presence": true, "index": 0 } }, "top_chunk": [ { "text": "## NAVWEPS 00-80T-80 BASIC AERODYNAMICS\n\npercent water vapor, argon, carbon dioxide, etc. For the majority of all aerodynamic considerations air is considered as a uniform mixture of these gases. The usual quantities used to define the properties of an air mass are as follows:\n\nSTATIC PRESSURE. The absolute static pressure of the air is a property of primary importance. The static pressure of the air at any altitude results from the mass of air supported above that level. At standard sea level conditions the static pressure of the air is 2,116 psf (or 14.7 psi, 29.92 in. Hg, etc.) and at 40,000 feet altitude this static pressure decreases to approximately 19 percent of the sea level value. The shorthand notation for the ambient static pressure is 'p' and the standard sea level static pressure is given the subscript '0' for zero altitude, p0. A more usual reference in aerodynamics and performance is the proportion of the ambient static pressure and the standard sea level static pressure. This static pressure ratio is assigned the shorthand notation of δ (delta).\n\nAltitude pressure ratio = Ambient static pressure / Standard sea level static pressure\n\nδ = p/p0\n\nMany items of gas turbine engine performance are directly related to some parameter involving the altitude pressure ratio.\n\nTEMPERATURE. The absolute temperature of the air is another important property. The ordinary temperature measurement by the Centigrade scale has a datum at the freezing point of water but absolute zero temperature is obtained at a temperature of -273° Centigrade. Thus, the standard sea level temperature of 15° C. is an absolute temperature of 288°. This scale of absolute temperature using the Centigrade increments is the Kelvin scale, e.g., °K. The shorthand notation for the ambient air temperature is 'T' and the standard sea level air temperature of 288° K. is signified by T0. The more usual reference is,
The more usual reference is,\n\nthe proportion of the ambient air temperature and the standard sea level air temperature. This temperature ratio is assigned the shorthand notation of 0 (theta).\n\nTemperature ratio\n\n## Ambient air temperature\n\n=Standard sea level air temperature @=TITtl ,+273 288\n\nMany items of compressibility effects and jet engine performance involve consideration of the temperature ratio.\n\nDENSITY. The density of the air is a property of greatest importance in the study of aerodynamics. The density of air is simply the mass of air per~cubic foot of volume and is a direct measure of the quantity of matter in each cubic foot of air. Air at standard sea lcvcl conditions weighs 0.0765 pounds per cubic foot and has a density of 0.002378 slugs per cubic foot. At an altitude of 40,000 feet the air density is approximately 25 percent of the sea level value.\n\nThe shorthand notation used for air density is p (rho) and the standard sea level air density is then pO. In many parts of aerodynamics it is very convenient to consider the proportion of the ambient air density and standard sea level air density. This density ratio is assigned the shorthand notation of c (sigma).\n\ndensity ratio= ambient air density standard sea level air density a = PIP0\n\nA general gas law defines the relationship of pressure temperature, and density when there is no change of state or heat transfer. Simply stated this would be 'density varies directly with pressure, inversely with temperature.' Using the properties previously defined,\n\ndensity ratio= Pressure rat'o. temperature rat10", - "page_start": 19, - "page_end": 19, - "source_file": "00-80T-80.pdf" - }, - { - "text": "\n\nAIRSTREAM AHEAD HAS AMBIENT STATIC PRESSURE AND DYNAMIC PRESSURE\n\nSTAGNATION PRESSURE IS AIRSTREAM TOTAL PRESSURE P+q\n\nFtgure 1.4. Flow Pattern on a Symmetrical Object\n\nsurface anflow continues to the aft stagnation point where the local velocity is again zero. 
The important point of this example of aerodynamic flow is the existence of the stagnation point. The change in airflow static pressure which takes place at the stagnation point is equal to the free stream dynamic pressure, q.\n\nThe measurement of free stream dynamic pressure is fundamental to the indication of airspeed. In fact, airspeed indicators are simply pressure gauges which measure dynamic pressure related to various airspeeds. Typical airspeed measuring systems are illustrated in figure 1.5. The pitot head has no internal flow velocity and the pressure in the pitot tube is equal to the total pressure of the airstream. The purpose of the static ports is to sense the true static pressure of the free airstream. The total pressure and static pressure lines are attached to a differential pressure gauge and the net pressure indicated is the dynamic pressure, q. The pressure gauge is then calibrated to indicate flight speed in the standard sea level air mass. For example, a dynamic pressure of 305 psf would be realized at a sea level flight speed of 300 knots.\n\nActually there can be many conditions of flight where the airspeed indicator does not truly reflect the actual velocity through the air mass. The corrections that must be applied are many and listed in sequence below:\n\n- (1) The indicated airspeed (IAS) is the actual instrument indication for some given flight condition. Factors such as an altitude other than standard sea level, errors of the instrument and errors due to the installation, compressibility, etc. 
may create great variance between this instrument indication and the actual flight speed.\n- (2) The calibrated airspeed (CAS) is the result of correcting IAS for errors of the", "page_start": 27, "page_end": 27, "source_file": "00-80T-80.pdf" }, { "text": "If the potential energy is represented by the static pressure, p, the sum of the potential and kinetic energy is the total pressure of the airstream.\n\nH = p + ½ρV² where H = total pressure, psf (sometimes referred to as 'head' pressure), p = static pressure, psf, ρ = density, slugs per cu. ft., V = velocity, ft./sec.\n\nThis equation is the Bernoulli equation for incompressible flow. It is important to appreciate that the term ½ρV² has the units of pressure, psf. This term is one of the most important in all aerodynamics and appears so frequently that it is given the name 'dynamic pressure' and the shorthand notation 'q'.\n\nq = dynamic pressure, psf = ½ρV²\n\nWith this definition it could be said that the sum of static and dynamic pressure in the flow tube remains constant.\n\nFigure 1.3 illustrates the variation of static, dynamic, and total pressure of air flowing through a closed tube. Note that the total pressure is constant throughout the length and any change in dynamic pressure produces the same magnitude change in static pressure.\n\nThe dynamic pressure of a free airstream is the one 'common denominator' of all aerodynamic forces and moments. Dynamic pressure represents the kinetic energy of the free airstream and is a factor relating the capability for producing changes in static pressure on a surface. As defined, the dynamic pressure varies directly as the density and the square of the velocity. Typical values of dynamic pressure, q, are shown in table 1-1 for various true airspeeds in the standard atmosphere. Notice that the dynamic pressure at some fixed velocity varies directly with the density ratio at any altitude. 
Also, appreciate the fact that at an altitude of 40,000 feet (where the density ratio, σ, is 0.2462) it is necessary to have a true air velocity twice that at sea level in order to produce the same dynamic pressure.\n\nTABLE 1-1. Effect of Speed and Altitude on Dynamic Pressure\n\n| True air speed (ft./sec.) |\n|---------------------------|\n| 169 |\n| 338 |\n| 507 |\n| 676 |\n| 845 |\n| 1,013 |\n\nAIRSPEED MEASUREMENT. If a symmetrically shaped object were placed in a moving airstream, the flow pattern typical of figure 1.4 would result. The airstream at the very nose of the object would stagnate and the relative flow velocity at this point would be zero. The airflow ahead of the object possesses some certain dynamic pressure and ambient static pressure. At the very nose of the object the local velocity will drop to zero and the airstream dynamic pressure will be converted into an increase in static pressure at the stagnation point. In other words, there will exist a static pressure at the stagnation point which is equal to the airstream total pressure-ambient static pressure plus dynamic pressure.\n\nAround the surface of the object the airflow will divide and the local velocity will increase from zero at the stagnation point to some maximum on the sides of the object. If friction and viscosity effects are neglected, the", "page_start": 26, "page_end": 26, "source_file": "00-80T-80.pdf" }, { "text": "## PITOT-STATIC SYSTEM\n\n## PITOT WITH SEPARATE STATIC SOURCE\n\nPRESSURE INDICATED BY GAUGE IS DIFFERENCE BETWEEN TOTAL AND STATIC PRESSURE, H - p = q\n\n\n\nFigure 1.5. Airspeed Measurement\n\ninstrument and errors due to position or location of the installation. The instrument error must be small by design of the equipment and is usually negligible in equipment which is properly maintained and cared for. 
The position error of the installation must be small in the range of airspeeds involving critical performance conditions. Position errors are most usually confined to the static source in that the actual static pressure sensed at the static port may be different from the free airstream static pressure. When the aircraft is operated through a large range of angles of attack, the static pressure distribution varies quite greatly and it becomes quite difficult to minimize the static source error. In most instances a compensating group of static sources may be combined to reduce the position error. In order to appreciate the magnitude of this problem, at flight speed near 100 knots a\n\n0.05 psi position error is an airspeed error of 10 knots. A typical variation of airspeed system position error is illustrated in figure 1.6.\n\n(3) The equivalent airspeed (EAS) is the result of correcting the CAS for compressibility effects. At high flight speeds the stagnation pressure recovered in the pitot tube is not representative of the airstream dynamic pressure due to a magnification by compressibility. Compressibility of the airflow produces a stagnation pressure in the pitot which is greater than if the flow were incompressible. As a result, the airspeed indication is given an erroneous magnification. The standard airspeed indicator is calibrated to read correctly at standard sea level conditions and thus has a compressibility correction appropriate for these conditions. However, when the aircraft is operating above standard sea level altitude,", "page_start": 28, "page_end": 28, "source_file": "00-80T-80.pdf" }, { "text": "the inherent compensation is inadequate and additional correction must be applied. The subtractive corrections that must be applied to CAS depend on pressure altitude and CAS and are shown on figure 1.6 for the subsonic flight range. 
The equivalent airspeed (EAS) is the flight speed in the standard sea level air mass which would produce the same free stream dynamic pressure as the actual flight condition.\n\n(4) The true airspeed (TAS) results when the EAS is corrected for density altitude. Since the airspeed indicator is calibrated for the dynamic pressures corresponding to airspeeds at standard sea level conditions, variations in air density must be accounted for. To relate EAS and TAS requires consideration that the EAS coupled with standard sea level density produces the same dynamic pressure as the TAS coupled with the actual air density of the flight condition. From this reasoning, it can be shown that:\n\n(TAS)²ρ = (EAS)²ρ₀\n\nor, TAS = EAS √(ρ₀/ρ), i.e., TAS = EAS/√σ\n\nwhere TAS = true airspeed; EAS = equivalent airspeed; ρ = actual air density; ρ₀ = standard sea level air density; σ = altitude density ratio, ρ/ρ₀\n\nThe result shows that the TAS is a function of EAS and density altitude. Figure 1.6 shows a chart of density altitude as a function of pressure altitude and temperature. Each particular density altitude fixes the proportion between TAS and EAS. The use of a navigation computer requires setting appropriate values of pressure altitude and temperature on the scales which then fixes the proportion between the scales of TAS and EAS (or TAS and CAS when compressibility corrections are applicable).\n\nThus, the airspeed indicator system measures dynamic pressure and will relate true flight velocity when instrument, position, compressibility, and density corrections are applied. These corrections are quite necessary for accurate determination of true airspeed and accurate navigation.\n\nBernoulli's principle and the concepts of static, dynamic, and total pressure are the basis of aerodynamic fundamentals. 
The pressure distribution caused by the variation of local static and dynamic pressures on a surface is the source of the major aerodynamic forces and moments.\n\n## DEVELOPMENT OF AERODYNAMIC FORCES\n\nThe typical airflow patterns exemplify the relationship of static pressure and velocity defined by Bernoulli. Any object placed in an airstream will have the air impact or stagnate at some point near the leading edge. The pressure at this point of stagnation will be an absolute static pressure equal to the total pressure of the airstream. In other words, the static pressure at the stagnation point will be greater than the atmospheric pressure by the amount of the dynamic pressure of the airstream. As the flow divides and proceeds around the object, the increases in local velocity produce decreases in static pressure. This procedure of flow is best illustrated by the flow patterns and pressure distributions of figure 1.7.\n\nSTREAMLINE PATTERN AND PRESSURE DISTRIBUTION. The flow pattern of the cylinder of figure 1.7 is characterized by the streamlines which denote the local flow direction. Velocity distribution is noted by the streamline pattern since the streamlines effect a boundary of flow, and the airflow between the streamlines is similar to flow in a closed tube. When the streamlines contract and are close together, high local velocities exist; when the streamlines expand and are far apart, low local velocities exist. At the", "page_start": 31, "page_end": 31, "source_file": "00-80T-80.pdf" }, { "text": "This relationship has great application in aerodynamics and is quite fundamental and necessary in certain parts of airplane performance.\n\nVISCOSITY. The viscosity of the air is important in scale and friction effects. The coefficient of absolute viscosity is the proportion between the shearing stress and velocity gradient for a fluid flow. 
The viscosity of gases is unusual in that the viscosity is generally a function of temperature alone and an increase in temperature increases the viscosity. The coefficient of absolute viscosity is assigned the shorthand notation μ (mu). Since many parts of aerodynamics involve consideration of viscosity and density, a more usual form of viscosity measure is the proportion of the coefficient of absolute viscosity and density. This combination is termed the 'kinematic viscosity' and is noted by ν (nu).\n\nkinematic viscosity = coefficient of absolute viscosity / density\n\nν = μ/ρ\n\nThe kinematic viscosity of air at standard sea level conditions is 0.0001576 square feet per second. At an altitude of 40,000 feet the kinematic viscosity is increased to 0.0005059 square feet per second.\n\nIn order to provide a common denominator for comparison of various aircraft, a standard atmosphere has been adopted. The standard atmosphere actually represents the mean or average properties of the atmosphere. Figure 1.1 illustrates the variation of the most important properties of the air throughout the standard atmosphere. Notice that the lapse rate is constant in the troposphere and the stratosphere begins with the isothermal region.\n\nSince all aircraft performance is compared and evaluated in the environment of the standard atmosphere, all of the aircraft instrumentation is calibrated for the standard atmosphere.\n\nThus, certain corrections must apply to the instrumentation as well as the aircraft performance if the operating conditions do not fit the standard atmosphere. In order to properly account for the nonstandard atmosphere certain terms must be defined. Pressure altitude is the altitude in the standard atmosphere corresponding to a particular pressure. The aircraft altimeter is essentially a sensitive barometer calibrated to indicate altitude in the standard atmosphere. If the altimeter is set for 29.92 in. 
Hg the altitude indicated is the pressure altitude-the altitude in the standard atmosphere corresponding to the sensed pressure. Of course, this indicated pressure altitude may not be the actual height above sea level due to variations in temperature, lapse rate, atmospheric pressure, and possible errors in the sensed pressure.\n\nThe more appropriate term for correlating aerodynamic performance in the nonstandard atmosphere is density altitude-the altitude in the standard atmosphere corresponding to a particular value of air density. The computation of density altitude must certainly involve consideration of pressure (pressure altitude) and temperature. Figure 1.6 illustrates the manner in which pressure altitude and temperature combine to produce a certain density altitude. This chart is quite standard in use and is usually included in the performance section of the flight handbook. Many subject areas of aerodynamics and aircraft performance will emphasize density altitude and temperature as the most important factors requiring consideration.\n\n## BERNOULLI'S PRINCIPLE AND SUBSONIC AIRFLOW\n\nAll of the external aerodynamic forces on a surface are the result of air pressure or air friction. Friction effects are generally confined to a thin layer of air in the immediate vicinity of the surface and friction forces are not the predominating aerodynamic forces. Therefore,", "page_start": 21, "page_end": 21, "source_file": "00-80T-80.pdf" }, { "text": "## NAVWEPS 00-80T-80 BASIC AERODYNAMICS\n\nthe pressure forces created on an aerodynamic surface can be studied in a simple form which at first neglects the effect of friction and viscosity of the airflow. The most appropriate means of visualizing the effect of airflow and the resulting aerodynamic pressures is to study the fluid flow within a closed tube.\n\nSuppose a stream of air is flowing through the tube shown in figure 1.2. 
The airflow at station 1 in the tube has a certain velocity, static pressure, and density. As the airstream approaches the constriction at station 2 certain changes must take place. Since the airflow is enclosed within the tube, the mass flow at any point along the tube must be the same and the velocity, pressure, or density must change to accommodate this continuity of flow.\n\nBERNOULLI'S EQUATION. A distinguishing feature of subsonic airflow is that changes in pressure and velocity take place with small and negligible changes in density. For this reason the study of subsonic airflow can be simplified by neglecting the variation of density in the flow and assuming the flow to be incompressible. Of course, at high flow speeds which approach the speed of sound, the flow must be considered as compressible and 'compressibility effects' taken into account. However, if the flow through the tube of figure 1.2 is considered subsonic, the density of the airstream is essentially constant at all stations along the length.\n\nIf the density of the flow remains constant, static pressure and velocity are the variable quantities. As the flow approaches the constriction of station 2 the velocity must increase to maintain the same mass flow. As the velocity increases the static pressure will decrease and the decrease in static pressure which accompanies the increase in velocity can be verified in two ways:\n\n(1) Newton's laws of motion state the requirement of an unbalanced force to produce an acceleration (velocity change). If the airstream experiences an increase in velocity approaching the constriction, there must be an unbalance of force to provide the acceleration. Since there is only air within the tube, the unbalance of force is provided by the static pressure at station 1 being greater than the static pressure at the constriction, station 2.\n\n(2) The total energy of the air stream in the tube is unchanged. However, the airstream energy may be in two forms. 
The airstream may have a potential energy which is related by the static pressure and a kinetic energy by virtue of mass and motion. As the total energy is unchanged, an increase in velocity (kinetic energy) will be accompanied by a decrease in static pressure (potential energy). This situation is analogous to a ball rolling along a smooth surface. As the ball rolls downhill, the potential energy due to position is exchanged for kinetic energy of motion. If friction were negligible, the change of potential energy would equal the change in kinetic energy. This is also the case for the airflow within the tube.\n\nThe relationship of static pressure and velocity is maintained throughout the length of the tube. As the flow moves past the constriction toward station 3, the velocity decreases and the static pressure increases.\n\nThe Bernoulli equation for incompressible flow is most readily explained by accounting for the energy of the airflow within the tube. As the airstream has no energy added or subtracted at any point, the sum of the potential and kinetic energy must be constant. The kinetic energy of an object is found by:\n\nKE = ½MV²\n\nwhere KE = kinetic energy, ft.-lbs.\n\nM = mass, slugs", "page_start": 23, "page_end": 23, "source_file": "00-80T-80.pdf" }, { "text": "TABLE OF CONTENTS\n\n| PREFACE | iii |\n|--------------------------------------------------------------------------------------------------------------------------------------------------|----------|\n| CHAPTER I: BASIC AERODYNAMICS | |\n| WING AND AIRFOIL FORCES | |\n| PROPERTIES OF THE ATMOSPHERE. Static pressure Temperature Density Viscosity Standard atmosphere Pressure altitude Density altitude | 1 |\n| BERNOULLI'S PRINCIPLE AND SUBSONIC AIRFLOW | 4 |\n| Bernoulli's equation | 6 |\n| Incompressible flow Variation of static pressure and velocity Kinetic and potential energy of flow Static and dynamic pressure, q | |\n| Airspeed measurement. 
Stagnation pressure Measurement of dynamic pressure Pitot and static sources Indicated airspeed | 9 |\n| DEVELOPMENT OF AERODYNAMIC FORCES | 14 |\n| Streamline pattern and pressure distribution. Generation of lift | 14 |\n| Circulation Pressure distribution | 16 |\n| Airfoil terminology. Aerodynamic force coefficient. Basic lift equation | |", "page_start": 6, "page_end": 6, "source_file": "00-80T-80.pdf" }, { "text": "## NAVWEPS 00-80T-80 HIGH SPEED AERODYNAMICS\n\nin pressure without apparent changes in density. Such a condition of airflow is analogous to the flow of water, hydraulic fluid, or any other incompressible fluid. However, at high flight speeds the pressure changes that take place are quite large and significant changes in air density occur. The study of airflow at high speeds must account for these changes in air density and must consider that the air is compressible and that there will be 'compressibility effects.'\n\nA factor of great importance in the study of high speed airflow is the speed of sound. The speed of sound is the rate at which small pressure disturbances will be propagated through the air and this propagation speed is solely a function of air temperature. The accompanying table illustrates the variation of the speed of sound in the standard atmosphere.\n\nTABLE 3-1. Variation of Temperature and Speed of Sound with Altitude in the Standard Atmosphere\n\n| °F | °C | Knots |\n|--------|--------|-------|\n| 59.0 | 15.0 | 661.7 |\n| 41.1 | 5.1 | 650.3 |\n| 23.3 | -4.8 | 638.6 |\n| 5.5 | -14.7 | 626.7 |\n| -12.3 | -24.6 | 614.6 |\n| -30.2 | -34.5 | 602.2 |\n| -48.0 | -44.4 | 589.6 |\n| -65.8 | -54.3 | 576.6 |\n| -69.7 | -56.5 | 573.8 |\n| -69.7 | -56.5 | 573.8 |\n| -69.7 | -56.5 | 573.8 |\n\nAs an object moves through the air mass, velocity and pressure changes occur which create pressure disturbances in the airflow surrounding the object. 
Of course, these pressure disturbances are propagated through the air at the speed of sound. If the object is travelling at low speed the pressure disturbances are propagated ahead of the object and the airflow immediately ahead of the object is influenced by the pressure field on the object. Actually, these pressure disturbances are transmitted in all directions and extend indefinitely in all directions. Evidence of this 'pressure warning' is seen in the typical subsonic flow pattern of figure 3.1 where there is upwash and flow direction change well ahead of the leading edge. If the object is travelling at some speed above the speed of sound the airflow ahead of the object will not be influenced by the pressure field on the object since pressure disturbances cannot be propagated ahead of the object. Thus, as the flight speed nears the speed of sound a compression wave will form at the leading edge and all changes in velocity and pressure will take place quite sharply and suddenly. The airflow ahead of the object is not influenced until the air particles are suddenly forced out of the way by the concentrated pressure wave set up by the object. Evidence of this phenomenon is seen in the typical supersonic flow pattern of figure 3.1.\n\nThe analogy of surface waves on the water may help clarify these phenomena. Since a surface wave is simply the propagation of a pressure disturbance, a ship moving at a speed much less than the wave speed will not form a 'bow wave.' As the ship's speed nears the wave propagation speed the bow wave will form and become stronger as speed is increased beyond the wave speed.\n\nAt this point it should become apparent that all compressibility effects depend upon the relationship of airspeed to the speed of sound. The term used to describe this relationship is the Mach number, M, and this term is the ratio of the true airspeed to the speed of sound. 
M = V/a\n\nwhere M = Mach number; V = true airspeed, knots; a = speed of sound, knots = a₀√θ; a₀ = speed of sound at standard sea level conditions, 661 knots; θ = temperature ratio = T/T₀", "page_start": 219, "page_end": 219, "source_file": "00-80T-80.pdf" }, { "text": "## NAVWEPS 00-80T-80 BASIC AERODYNAMICS\n\nforward stagnation point the local velocity is zero and the maximum positive pressure results. As the flow proceeds from the forward stagnation point the velocity increases as shown by the change in streamlines. The local velocities reach a maximum at the upper and lower extremities and a peak suction pressure is produced at these points on the cylinder. (NOTE: Positive pressures are pressures above atmospheric and negative or suction pressures are less than atmospheric.) As the flow continues aft from the peak suction pressure, the diverging streamlines indicate decreasing local velocities and increasing local pressures. If friction and compressibility effects are not considered, the velocity would decrease to zero at the aft stagnation point and the full stagnation pressure would be recovered. The pressure distribution for the cylinder in perfect fluid flow would be symmetrical and no net force (lift or drag) would result. Of course, the relationship between static pressure and velocity along the surface is defined by Bernoulli's equation.\n\nThe flow pattern for the cylinder in an actual fluid demonstrates the effect of friction or viscosity. The viscosity of air produces a thin layer of retarded flow immediately adjacent to the surface. The energy expended in this 'boundary layer' can alter the pressure distribution and destroy the symmetry of the pattern. The force unbalance caused by the change in pressure distribution creates a drag force which is in addition to the drag due to skin friction.\n\nThe streamline pattern for the symmetrical airfoil of figure 1.7 again provides the basis for the velocity and pressure distribution. 
At the leading edge the streamlines are widely diverged in the vicinity of the positive pressures. The maximum local velocities and suction (or negative) pressures exist where the streamlines are the closest together. One notable difference between the flow on the cylinder and the airfoil is that the maximum velocity and minimum pressure points on the airfoil do not necessarily occur at the point of maximum thickness. However, a similarity does exist in that the minimum pressure points correspond to the points where the streamlines are closest together and this condition exists when the streamlines are forced to the greatest curvature.\n\nGENERATION OF LIFT. An important phenomenon associated with the production of lift by an airfoil is the 'circulation' imparted to the airstream. The best practical illustration of this phenomenon is shown in figure 1.8 by the streamlines and pressure distributions existing on cylinders in an airstream. The cylinder without circulation has a symmetrical streamline pattern and a pressure distribution which creates no net lift. If the cylinder is given a clockwise rotation and induces a rotational or circulatory flow, a distinct change takes place in the streamline pattern and pressure distribution. The velocities due to the vortex of circulatory flow cause increased local velocity on the upper surface of the cylinder and decreased local velocity on the lower surface of the cylinder. 
Also, the circulatory flow produces an upwash immediately ahead and downwash immediately behind the cylinder and both fore and aft stagnation points are lowered.", "page_start": 33, "page_end": 33, "source_file": "00-80T-80.pdf" } ] }, { "references": { "source_file": "00-80T-80.pdf", "query": "What is the phenomenon associated with the production of lift by an airfoil ?", "target_page": 34, "target_passage": "An important phenomenon associated with the production of lift by an airfoil is the “circulation” parted to the airstream. ", "chunk_present": { "presence": false, "index": null } }, "top_chunk": [ { "text": "## NAVWEPS 00-80T-80 BASIC AERODYNAMICS\n\nrotation will be quite a 'curve ball artist'; the golfer that cannot control the lateral motion of the club face striking the golf ball will impart an uncontrollable spin and have trouble with a 'hook' or 'slice.'\n\nWhile a rotating cylinder can produce a net lift from the circulatory flow, the method is relatively inefficient and only serves to point out the relationship between lift and circulation. An airfoil is capable of producing lift with relatively high efficiency and the process is illustrated in figure 1.8. If a symmetrical airfoil is placed at zero angle of attack to the airstream, the streamline pattern and pressure distribution give evidence of zero lift. However, if the airfoil is given a positive angle of attack, changes occur in the streamline pattern and pressure distribution similar to changes caused by the addition of circulation to the cylinder. The positive angle of attack causes increased velocity on the upper surface with an increase in upper surface suction while the decreased velocity on the lower surface causes a decrease in lower surface suction. Also, upwash is generated ahead of the airfoil, the forward stagnation point moves under the leading edge, and a downwash is evident aft of the airfoil. 
The pressure distribution on the airfoil now provides a net force perpendicular to the airstream-lift.\n\nThe generation of lift by an airfoil is dependent upon the airfoil being able to create circulation in the airstream and develop the lifting pressure distribution on the surface. In all cases, the generated lift will be the net force caused by the distribution of pressure over the upper and lower surfaces of the airfoil. At low angles of attack, suction pressures usually will exist on both upper and lower surfaces, but the upper surface suction must be greater for positive lift. At high angles of attack near that for maximum lift, a positive pressure will exist on the lower surface but this will account for approximately one-third the net lift.\n\nThe effect of free stream density and velocity is a necessary consideration when studying the development of the various aerodynamic forces. Suppose that a particular shape of airfoil is fixed at a particular angle to the airstream. The relative velocity and pressure distribution will be determined by the shape of the airfoil and the angle to the airstream. The effect of varying the airfoil size, air density and airspeed is shown in figure 1.9. If the same airfoil shape is placed at the same angle to an airstream with twice as great a dynamic pressure the magnitude of the pressure distribution will be twice as great but the relative shape of the pressure distribution will be the same. With twice as great a pressure existing over the surface, all aerodynamic forces and moments will double. If a half-size airfoil is placed at the same angle to the original airstream, the magnitude of the pressure distribution is the same as the original airfoil and again the relative shape of the pressure distribution is identical. The same pressure acting on the half-size surface would reduce all aerodynamic forces to one-half that of the original. 
This similarity of flow patterns means that the stagnation point occurs at the same place, the peak suction pressure occurs at the same place, and the actual magnitude of the aerodynamic forces and moments depends upon the airstream dynamic pressure and the surface area. This concept is extremely important when attempting to separate and analyze the most important factors affecting the development of aerodynamic forces.", "page_start": 37, "page_end": 37, "source_file": "00-80T-80.pdf" }, { "text": "Next, consider the cambered airfoil of figure 1.21 at zero lift. To produce zero lift, the upper and lower surface lifts must be equal. One difference noted from the symmetrical airfoil is that the upper and lower surface lifts are not opposite one another. While no net lift exists on the airfoil, the couple produced by the upper and lower surface lifts creates a nose down moment. As the angle of attack is increased, the upper surface lift increases while the lower surface lift decreases. While a change in lift has taken place, no change in moment takes place about the point where the lift change occurs. Since the moment about the aerodynamic center is the product of a force (lift at the c.p.) and a lever arm (distance from c.p. to a.c.), an increase in lift moves the center of pressure toward the aerodynamic center.\n\nIt should be noted that the symmetrical airfoil at zero lift has no pitching moment about the aerodynamic center because the upper and", "page_start": 64, "page_end": 64, "source_file": "00-80T-80.pdf" }, { "text": "AIRFOIL TERMINOLOGY. Since the shape of an airfoil and the inclination to the airstream are so important in determining the pressure distribution, it is necessary to properly define the airfoil terminology. 
Figure 1.10 shows a typical airfoil and illustrates the various items of airfoil terminology.\n\n - (1) The chord line is a straight line connecting the leading and trailing edges of the airfoil.", "page_start": 37, "page_end": 37, "source_file": "00-80T-80.pdf" }, { "text": "AIRFOIL LIFT CHARACTERISTICS. Airfoil section properties differ from wing or airplane properties because of the effect of the planform. Actually, the wing may have various airfoil sections from root to tip with taper, twist, sweepback and local flow components in a spanwise direction. The resulting aerodynamic properties of the wing are determined by the action of each section along the span and the three-dimensional flow. Airfoil section properties are derived from the basic shape or profile in two-dimensional flow and the force coefficients are given a notation of lower case letters. For example, a wing or airplane lift coefficient is CL while an airfoil section lift coefficient is termed cl. Also, wing angle of attack is α while section angle of attack is differentiated by the use of α₀. The study of section properties allows an objective consideration of the effects of camber, thickness, etc.\n\nThe lift characteristics of five illustrative airfoil sections are shown in figure 1.12. The section lift coefficient, cl, is plotted versus section angle of attack, α₀, for five standard NACA airfoil profiles. One characteristic feature of all airfoil sections is that the slope of the various lift curves is essentially the same. At low lift coefficients, the section lift coefficient increases approximately 0.1 for each degree increase in angle of attack. For each of the airfoils shown, a 5° change in angle of", "page_start": 44, "page_end": 44, "source_file": "00-80T-80.pdf" }, { "text": "and high power, the dynamic pressure in the shaded area can be much greater than the free stream and this causes considerably greater lift than at zero thrust. 
At high power conditions the induced flow also causes an effect similar to boundary layer control and increases the maximum lift angle of attack. The typical four-engine propeller driven airplane may have 60 to 80 percent of the wing area affected by the induced flow and power effects on stall speeds may be considerable. Also, the lift of the airplane at a given angle of attack and airspeed will be greatly affected. Suppose the airplane shown is in the process of landing flare from a power-on approach. If there is a sharp, sudden reduction of power, the airplane may drop suddenly because of the reduced lift.\n\nThe typical jet aircraft does not experience the induced flow velocities encountered in propeller driven airplanes, thus the only significant factor is the vertical component of thrust. Since this vertical component contributes to supporting the airplane, less aerodynamic lift is required to hold the airplane in flight. If the thrust is small and the thrust inclination is slight at maximum lift angle, only negligible changes in stall speed will result. On the other hand, if the thrust is very great and is given a large inclination at maximum lift angle, the effect on stall speed can be very large. One important relationship remains-since there is very little induced flow from the jet, the angle of attack at stall is essentially the same power-on or power-off.\n\n## DEVELOPMENT OF AERODYNAMIC PITCHING MOMENTS\n\nThe distribution of pressure over a surface is the source of the aerodynamic moments as well as the aerodynamic forces. A typical example of this fact is the pressure distribution acting on the cambered airfoil of figure 1.21. The upper surface has pressures distributed which produce the upper surface lift; the lower surface has pressures distributed which produce the lower surface lift. Of course, the\n\n## NAVWEPS 00-80T-80 BASIC AERODYNAMICS\n\nnet lift produced by the airfoil is the difference between the lifts on the upper and lower surfaces. 
The point along the chord where the distributed lift is effectively concentrated is termed the 'center of pressure, c.p.' The center of pressure is essentially the 'center of gravity' of the distributed lift pressure and the location of the c.p. is a function of camber and section lift coefficient.\n\nAnother aerodynamic reference point is the 'aerodynamic center, a.c.' The aerodynamic center is defined as the point along the chord where all changes in lift effectively take place. To visualize the existence of such a point, notice the change in pressure distribution with angle of attack for the symmetrical airfoil of figure 1.21. When at zero lift, the upper and lower surface lifts are equal and located at the same point. With an increase in angle of attack, the upper surface lift increases while the lower surface lift decreases. The change of lift has taken place with no change in the center of pressure-a characteristic of symmetrical airfoils.", "page_start": 64, "page_end": 64, "source_file": "00-80T-80.pdf" }, { "text": "## HIGH LIFT DEVICES\n\nThere are many different types of high lift devices used to increase the maximum lift coefficient for low speed flight. The high lift devices applied to the trailing edge of a section consist of a flap which is usually 15 to 25 percent of the chord. The deflection of a flap produces the effect of a large amount of camber added well aft on the chord. The principal types of flaps are shown applied to a basic section of airfoil. The effect of a 30° deflection of a 25 percent chord flap is shown on the lift and drag curves of figure 1.17.", "page_start": 56, "page_end": 56, "source_file": "00-80T-80.pdf" }, { "text": "However, if high speed flight is the primary consideration, the airfoil must be chosen to have the highest practical critical Mach number.\n\nCritical Mach number has been defined as the flight Mach number which produces first evidence of local sonic flow. 
Thus, the airfoil shape and lift coefficient-which determine the pressure and velocity distribution-will have a profound effect on critical Mach number. Conventional, low speed airfoil shapes have relatively poor compressibility characteristics because of the high local velocities near the leading edge. These high local velocities are inevitable if both the maximum thickness and camber are well forward on the chord. An improvement of the compressibility characteristics can be obtained by moving the points of maximum camber and thickness aft on the chord. This would distribute the pressure and velocity more evenly along the chord and produce a lower peak velocity for the same lift coefficient. Fortunately, the airfoil shape to provide extensive laminar flow and low profile drag in low speed, subsonic flight will provide a pressure distribution which is favorable for high speed flight. Figure 3.12 illustrates the pressure distributions and variation of critical Mach number with lift coefficient for a conventional low speed airfoil and a high speed section.\n\nIn order to obtain a high critical Mach number from an airfoil at some low lift coefficient the section must have:\n\n - (a) Low thickness ratio. The point of maximum thickness should be aft to smooth the pressure distribution.\n - (b) Low camber. The mean camber line should be shaped to help minimize the local velocity peaks.\n\nIn addition, the higher the required lift coefficient the lower the critical Mach number and more camber is required of the airfoil. If supersonic flight is a possibility the thickness ratio and leading edge radius must be small to decrease wave drag.\n\n## NAVWEPS 00-80T-80 HIGH SPEED AERODYNAMICS\n\nFigure 3.13 shows the flow patterns for two basic supersonic airfoil sections and provides the approximate equations for lift, drag, and lift curve slope. 
Since the wave drag is the only factor of difference between the two airfoil sections, notice the configuration factors which affect the wave drag. For the same thickness ratio, the circular arc airfoil would have a larger wedge angle formed between the upper and lower surfaces at the leading edge. At the same flight Mach number the larger angle at the leading edge would form the stronger shock wave at the nose and cause a greater pressure change on the circular arc airfoil. This same principle applies when investigating the effect of airfoil thickness. Notice that the wave drag coefficients for both airfoils vary as the SQUARE of the thickness ratio, e.g., if the thickness ratio were doubled, the wave drag coefficient would be four times as great. If the thickness were increased, the airflow at the leading edge will experience a greater change in direction and a stronger shock wave will be formed. This powerful variation of wave drag with thickness ratio necessitates the use of very thin airfoils with sharp leading edges for supersonic flight. An additional consideration is that thin airfoil sections favor the use of low aspect ratios and high taper to obtain lightweight structures and preserve stiffness and rigidity.", "page_start": 240, "page_end": 240, "source_file": "00-80T-80.pdf" }, { "text": "basic section. The effect of a fixed slot on the lift characteristics is shown in figure 1.18. Slots and slats can produce significant increases in clmax but the increased angle of attack for maximum lift can be a disadvantage. If slots were the only high lift device on the wing, the high take off and landing angles of attack may complicate the design of the landing gear. For this reason slots or slats are usually used in conjunction with flaps since the flaps provide reduction in the maximum lift angle of attack. 
The use of a slot has two important advantages: there is only a negligible change in the pitching moment due to the slot and no significant change in section drag at low angles of attack. In fact, the slotted section will have less drag than the basic section near the maximum lift angle for the basic section.\n\nThe slot-slat device finds great application in modern airplane configurations. The tailless airplane configuration can utilize only the high lift devices which have negligible effect on the pitching moments. The slot and slat are often used to increase the clmax in high speed flight when compressibility effects are considerable. The small change in twisting moment is a favorable feature for any high lift device to be used at high speed. Leading edge high lift devices are more effective on the highly swept wing than trailing edge flaps since slats are quite powerful in controlling the flow pattern. Small amounts of local camber added to the leading edge as a high lift device is most effective on wings of very low thickness and sharp leading edges. Most usually the slope of the leading edge high lift device is used to control the spanwise lift distribution on the wing.\n\nBoundary layer control devices are additional means of increasing the maximum lift coefficient of a section. The thin layer of airflow adjacent to the surface of an airfoil shows reduced local velocities from the effect of skin friction. When at high angles of attack this boundary layer on the upper surface tends to\n\n## NAVWEPS 00-80T-80 BASIC AERODYNAMICS\n\nstagnate and come to a stop. If this happens the airflow will separate from the surface and stall occurs. Boundary layer control for high lift applications features various devices to maintain high velocity in the boundary layer to allay separation of the airflow. This control of the boundary layer kinetic energy can be accomplished in two ways. 
One method is the application of a suction through ports to draw off low energy boundary layer and replace it with high velocity air from outside the boundary layer. The effect of surface suction boundary layer control on lift characteristics is typified by figure 1.18. Increasing surface suction produces greater maximum lift coefficients which occur at higher angles of attack. The effect is similar to that of a slot because the slot is essentially a boundary layer control device ducting high energy air to the upper surface.", "page_start": 60, "page_end": 60, "source_file": "00-80T-80.pdf" }, { "text": "AIRFOIL SECTIONS. It should be obvious that airfoils for high speed subsonic flight should have high critical Mach numbers since critical Mach number defines the lower limit for shock wave formation and subsequent force divergence. An additional complication to airfoil selection in this speed range is that the airfoil should have a high maximum lift coefficient and sufficient thickness to allow application of high lift devices. Otherwise an excessive wing area would be required to provide maneuverability and reasonable takeoff and landing speeds.", "page_start": 237, "page_end": 237, "source_file": "00-80T-80.pdf" }, { "text": "attack would produce an approximate 0.5 change in lift coefficient. Evidently, lift curve slope is not a factor important in the selection of an airfoil.\n\nAn important lift property affected by the airfoil shape is the section maximum lift coefficient, clmax. The effect of airfoil shape on clmax can be appreciated by comparison of the lift curves for the five airfoils of figure 1.12. The NACA airfoils 63-006, 63-009, and 631-012 are symmetrical sections of a basic thickness distribution but maximum thicknesses of 6, 9, and 12 percent respectively. The effect of thickness on clmax is obvious from an inspection of these curves:\n\n| Section | clmax | α for clmax |\n|------------------------|---------|-------|\n| NACA 63-006 | 0.82 | 9.0° |\n| NACA 63-009 | 1.1 | 10.5° |\n| NACA 631-012 | 1.4 | 13.8° |\n\nThe 12-percent section has a clmax approximately 70 percent greater than the 6-percent thick section. In addition, the thicker airfoils have greater benefit from the use of various high lift devices.\n\nThe effect of camber is illustrated by the lift curves of the NACA 4412 and 631-412 sections. The NACA 4412 section is a 12 percent thick airfoil which has 4 percent maximum camber located at 40 percent of the chord. The NACA 631-412 airfoil has the same thickness and thickness distribution as the 631-012 but camber added to give a 'design' lift coefficient (cl for minimum section drag) of 0.4. The lift curves for these two airfoils show that camber has a beneficial effect on clmax.\n\n| Section | clmax | α for clmax |\n|---------------------------|-------|--------------|\n| NACA 631-012 (symmetrical) | 1.4 | 13.8° |\n| NACA 631-412 (cambered) | 1.73 | 15.2° |\n\nAn additional effect of camber is the change in zero lift angle. While the symmetrical sections have zero lift at zero angle of attack, the sections with positive camber have negative angles for zero lift.\n\nThe importance of maximum lift coefficient is obvious. If the maximum lift coefficient is high, the stall speed will be low. However, the high thickness and camber necessary for high section maximum lift coefficients may produce low critical Mach numbers and large twisting moments at high speed. In other words, a high maximum lift coefficient is just one of the many features desired of an airfoil section.\n\nDRAG CHARACTERISTICS. Drag is the net aerodynamic force parallel to the relative wind and its source is the pressure distribution and skin friction on the surface. Large, thick bluff bodies in an airstream show a predominance of form drag due to the unbalanced pressure distribution. However, streamlined bodies with smooth contours show a predominance of drag due to skin friction. 
In a fashion similar to other aerodynamic forces, drag forces may be considered in the form of a coefficient which is independent of dynamic pressure and surface area. The basic drag equation is as follows:\n\nD = CD qS\n\nwhere\n\nD = drag, lbs.\n\nCD = drag coefficient\n\nq = dynamic pressure, psf = σV²/295 (V in knots, TAS)\n\nS = wing surface area, sq. ft.\n\nThe force of drag is shown as the product of dynamic pressure, surface area, and drag coefficient, CD. The drag coefficient in this equation is similar to any other aerodynamic force coefficient-it is the ratio of drag pressure to dynamic pressure. If the drag coefficient of a conventional airplane were plotted versus angle of attack, the result would be typical of the graph shown in figure 1.13. At low angles of attack the drag coefficient is low and small changes in angle of attack create only slight changes in drag coefficient. At", "page_start": 46, "page_end": 46, "source_file": "00-80T-80.pdf" } ] }, { "references": { "source_file": "BD-EN_calendrier-Lauzun-2024.pdf", "query": "What are the recyclable waste ?", "target_page": 3, "target_passage": "All types of paper and cardboard, Metal packaging, even the smallest ones, Plastic bottles and flasks, All other packaging", "chunk_present": { "presence": false, "index": null } }, "top_chunk": [ { "text": "Also, we currently provide recycling services in certain markets primarily to comply with local laws or obligations under our franchise agreements. These services include the curbside collection of residential recyclable waste and the provision of a variety of recycling services to commercial and industrial customers.", "page_start": 14, "page_end": 14, "source_file": "NYSE_RSG_2004.pdf" }, { "text": "Transfer and Disposal Services. We own or operate 96 transfer stations. 
We deposit waste at these stations, as do other private haulers and municipal haulers, for compaction and transfer to trailers for transport to disposal sites or recycling facilities. As of December 31, 2004, we owned or operated 58 landfills, which had approximately 8,904 permitted acres and total available permitted and probable expansion disposal capacity of approximately 1.7 billion in-place cubic yards. The in-place capacity of our landfills is subject to change based on engineering factors, requirements of regulatory authorities and the ability to expand our sites successfully. Some of our landfills accept non-hazardous special waste, including utility ash, asbestos and contaminated soils. See "Properties."\n\nMost of our existing landfill sites have the potential for expanded disposal capacity beyond the currently permitted acreage. We monitor the availability of permitted disposal capacity at each of our landfills and evaluate whether to pursue expansion at a given landfill based on estimated future waste volumes and prices, market needs, remaining capacity and likelihood of obtaining an expansion. To satisfy future disposal demand, we are currently seeking to expand permitted capacity at certain of our landfills, although no assurances can be made that all future expansions will be permitted as designed.\n\nOther Services. We have 35 materials recovery facilities and other recycling operations, which are generally required to fulfill our obligations under long-term municipal contracts for residential collection services. These facilities sort recyclable paper, aluminum, glass and other materials. Most of these recyclable materials are internally collected by our residential collection operations. In some areas, we receive commercial and industrial solid waste that is sorted at our facilities into recyclable materials and nonrecyclable waste. The recyclable materials are salvaged, repackaged and sold to third parties and the nonrecyclable waste is disposed of at landfills or incinerators. Wherever possible, our strategy is to reduce our exposure to fluctuations in recyclable commodity prices by utilizing third party recycling facilities, thereby minimizing our recycling investment.\n\nWe provide remediation and other heavy construction services primarily through our subsidiary located in Missouri.\n\nWe also have a Texas-based compost, mulch and soil business at which yard, mill and other waste is processed, packaged and sold as various products.\n\n## Sales and Marketing\n\nWe seek to provide quality services that will enable our company to maintain high levels of customer satisfaction. We derive our business from a broad customer base which we believe will enable our company to experience stable growth. We focus our marketing efforts on continuing and expanding business with existing customers, as well as attracting new customers.\n\nWe employ approximately 500 sales and marketing employees. Our sales and marketing strategy is to provide high-quality, comprehensive solid waste collection, recycling, transfer and disposal services to our customers at competitive prices. We target potential customers of all sizes, from small quantity generators to large "Fortune 500" companies and municipalities.\n\nMost of our marketing activity is local in nature. However, in 2000 we initiated a national accounts program in response to our customers' needs.\n\nWe generally do not change the tradenames of the local businesses we acquire, and therefore we do not operate nationally under any one mark or tradename. Rather, we rely on the goodwill associated with the acquired companies' local tradenames as used in each geographic market in which we operate.\n\n## Customers", "page_start": 15, "page_end": 15, "source_file": "NYSE_RSG_2004.pdf" }, { "text": "## Compost Questions and Answers\n\n## What is compost?\n\nCompost is a natural humus-like soil amendment that results from the controlled aerobic (with oxygen) decomposition of organic materials. Compost is not soil - it should be mixed with soil. It is not fertilizer, although it contains many slowly released nutrients.\n\n## What materials ('feedstocks') are used to make compost?\n\nCompost facilities in Washington recycle a variety of organic materials, including yard debris, food scraps, manure, biosolids, forest residuals like sawdust and bark, construction wood, and agricultural residues. All of these materials can be used to produce high quality compost. Your supplier can tell you which materials they compost.\n\n## How do I know I'm getting safe, quality compost?\n\nFortunately, in Washington we have strict permitting and production standards for compost facilities, that include both time and temperature requirements and contaminant limits.\n\n## What about weed seeds, plant diseases or pesticide residues?\n\nThe controlled time, aeration, and temperature process required in Washington has been shown to kill weed seeds and plant diseases. That same process breaks down most pesticide residues. 
There are a few agricultural pesticides that are not easily broken down, and permitted Washington compost manufacturers carefully watch their feedstocks to keep those materials out of the composting process.\n\n## Compost Beginnings\n\nThe yard debris or food scraps* that you place into your home compost bin, take to a drop-off site, or set out for curbside collection could become the compost that you later use on your garden, lawn, and flowerbeds.\n\nIt is essential to place only quality organic material into the composting process. Here are some tips:\n\n - The products you use or spray in your yard can end up in the compost process. Carefully read the labels of pesticide and herbicide products you use. (See page 9.)\n - Please keep yard debris free of:\n   - Garbage\n   - Plastic of any sort\n     - Plastic plant pots\n     - Plastic plant tabs\n     - Plastic bags (if you want to bag your yard debris, use paper garden bags - available at most garden centers)\n   - Rock, brick, or masonry\n   - Glass or metal\n   - Pet waste.\n\n* Many localities now collect food scraps and food-soiled paper along with yard debris for composting. Call your local collection service to find out what is collected in your area.\n\n", "page_start": 4, "page_end": 4, "source_file": "CompostGuide.pdf" }, { "text": "transportation, treatment, storage and disposal of hazardous and non-hazardous solid waste, and require states to develop programs to ensure the safe disposal of solid waste in sanitary landfills.\n\nSubtitle D of RCRA establishes a framework for regulating the disposal of municipal solid waste. 
Regulations under Subtitle D currently include minimum comprehensive solid waste management criteria and guidelines, including location restrictions, facility design and operating criteria, closure and post-closure requirements, financial assurance standards, groundwater monitoring requirements and corrective action standards, many of which had not commonly been in effect or enforced in the past in connection with municipal solid waste landfills. Each state was required to submit to the U.S. EPA a permit program designed to implement Subtitle D regulations by April 9, 1993. All of the states in which we operate have implemented permit programs pursuant to RCRA and Subtitle D. These state permit programs may include landfill requirements which are more stringent than those of Subtitle D.\n\nAll of our planned landfill expansions or new landfill development projects have been engineered to meet or exceed Subtitle D requirements. Operating and design criteria for existing operations have been modified to comply with these new regulations. Compliance with Subtitle D regulations has resulted in increased costs and may in the future require substantial additional expenditures in addition to other costs normally associated with our waste management activities.\n\n - (2) The Comprehensive Environmental Response, Compensation and Liability Act of 1980, as amended. CERCLA, among other things, provides for the cleanup of sites from which there is a release or threatened release of a hazardous substance into the environment. CERCLA may impose strict joint and several liability for the costs of cleanup and for damages to natural resources upon current owners and operators of the site, parties who were owners or operators of the site at the time the hazardous substances were disposed of, parties who transported the hazardous substances to the site and parties who arranged for the disposal of the hazardous substances at the site. Under the authority of CERCLA and its implementing regulations, detailed requirements apply to the manner and degree of investigation and remediation of facilities and sites where hazardous substances have been or are threatened to be released into the environment. Liability under CERCLA is not dependent upon the existence or disposal of only "hazardous wastes" but can also be based upon the existence of small quantities of more than 700 "substances" characterized by the U.S. EPA as "hazardous," many of which may be found in common household waste.\n\nAmong other things, CERCLA authorizes the federal government to investigate and remediate sites at which hazardous substances have been or are threatened to be released into the environment or to order (or offer an opportunity to) persons potentially liable for the cleanup of the hazardous substances to do so. In addition, the U.S. EPA has established a National Priorities List of sites at which hazardous substances have been or are threatened to be released and which require investigation or cleanup.", "page_start": 17, "page_end": 17, "source_file": "NYSE_RSG_2004.pdf" }, { "text": "## Competition\n\nWe operate in a highly competitive industry. Entry into our business and the ability to operate profitably in the industry requires substantial amounts of capital and managerial experience.\n\nCompetition in the non-hazardous solid waste industry comes from a few large, national publicly-owned companies, including Waste Management and Allied Waste Industries, several regional publicly- and privately-owned solid waste companies, and thousands of small privately-owned companies. Some of our competitors have significantly larger operations, and may have significantly greater financial resources, than we do. In addition to national and regional firms and numerous local companies, we compete with municipalities that maintain waste collection or disposal operations. These municipalities may have financial advantages due to the availability of tax revenues and tax-exempt financing.\n\nWe compete for collection accounts primarily on the basis of price and the quality of our services. From time to time, our competitors may reduce the price of their services in an effort to expand market share or to win a competitively bid municipal contract. This may have an impact on our future revenue and profitability.\n\nIn each market in which we own or operate a landfill, we compete for landfill business on the basis of disposal costs, geographical location and quality of operations. Our ability to obtain landfill business may be limited by the fact that some major collection companies also own or operate landfills to which they send their waste. There also has been an increasing trend at the state and local levels to mandate waste reduction at the source and to prohibit the disposal of certain types of waste, such as yard waste, at landfills. This may result in the volume of waste going to landfills being reduced in certain areas, which may affect our ability to operate our landfills at their full capacity and/or affect the prices that we can charge for landfill disposal services. In addition, most of the states in which we operate landfills have adopted plans or requirements that set goals for specified percentages of certain solid waste items to be recycled.\n\n## Regulation\n\nOur facilities and operations are subject to a variety of federal, state and local requirements that regulate the environment, public health, safety, zoning and land use. Operating and other permits, licenses and other approvals are generally required for landfills and transfer stations, certain solid waste collection vehicles, fuel storage tanks and other facilities that we own or operate, and these permits are subject to revocation, modification and renewal in certain circumstances. 
Federal, state and local laws and regulations vary, but generally govern wastewater or stormwater discharges, air emissions, the handling, transportation, treatment, storage and disposal of hazardous and non-hazardous waste, and the remediation of contamination associated with the release or threatened release of hazardous substances. These laws and regulations provide governmental authorities with strict powers of enforcement, which include the ability to obtain injunctions and/or impose fines or penalties in the case of violations, including criminal penalties. The U.S. Environmental Protection Agency and various other federal, state and local environmental, public and occupational health and safety agencies and authorities administer these regulations, including the Occupational Safety and Health Administration of the U.S. Department of Labor.", "page_start": 16, "page_end": 16, "source_file": "NYSE_RSG_2004.pdf" }, { "text": "\n\nRecycling yields approximately 0.1mg of rare earth product per expired card.\n\nRare earths are special metals, unobtainable in Japan, which are essential to PCs and cellphones, electric vehicles and solar power generators. Given that Japan is dependent on imports for nearly its entire supply, we believe recycling rare earths is a worthwhile endeavor in terms of national energy policy.\n\nCard microcircuits that have become unusable due to changes in card design are collected from cards with IC chips, which are separated\n\nExpired credit cards with IC chips\n\nRecovery\n\nfrom cards without IC chips. Both types are pulverized at the company's Shimura Center in Tokyo and sealed separately in recycling bags, under supervision of a company official. The bags are then sent off for processing by an outside company, which analyzes and purifies the contents and then extracts the rare earths.\n\n - * After intermediate processing, waste materials other than the rare earths and the cards with no IC chips are both sent off for final disposal, in conformity with established procedures.\n\nAnalysis and purification\n\nRare earth product\n\nBase metals, alloys, chemical products, etc.\n\n## Sumitomo Mitsui Finance & Leasing: Promoting recycling and reuse\n\nAs part of its core leasing operations, Sumitomo Mitsui Finance & Leasing is helping reduce customers' environmental\n\nRecycling and reuse of old equipment and machinery\n\nload through measures such as 'carbon neutral leases' (with carbon credits allocated in proportion to emission volumes of leased assets) and leasing of environment-friendly and energy-saving equipment.\n\nLikewise, by trading used machinery and semiconductor-manufacturing equipment, Sumitomo Mitsui Finance & Leasing is supporting more efficient capital investment by its customers, while itself evolving into a recycling-oriented, environment-friendly company.\n\n## Recycling of rare earths used in smart cards\n\nAt Sumitomo Mitsui Card, rare earths extracted from IC chips from expired credit cards are recycled.", "page_start": 11, "page_end": 11, "source_file": "NYSE_SMFG_2011.pdf" }, { "text": "A project of the Washington Organic Recycling Council, with support from the Washington State Department of Ecology's Public Participation Grant program.\n\nThis product was partly funded through a grant from the Washington Department of Ecology. 
While these materials were reviewed for grant consistency, this does not necessarily constitute endorsement by the department.\n\nSpecial thanks: the original version of this brochure in 2003 was created by the Washington County, Oregon Solid Waste and Recycling Program in cooperation with the Washington Organic Recycling Council and the Composting Council of Oregon.\n\nwww.compostwashington.org\n\nwww.soilsforsalmon.org\n\noriginal artwork provided by:\n\n## Tips to Remember:\n\n- Don't put plants into 100% compost. Mix compost thoroughly into existing soil before planting.\n- When transplanting, it's better to amend the whole bed, not just planting holes, to promote root growth.\n- Ask your compost supplier which compost product is best for your intended use.\n- Use compost at the recommended application rate.\n- To maintain healthy soil, reapply compost or mulch every 1-2 years.\n- Many composts are rich in plant nutrients, so you may be able to reduce fertilizer use after applying compost.\n- Compost can also reduce your lawn and garden's summer irrigation needs.\n- Compost-amended soil and mulching slow runoff, reduce erosion, and break down pollutants. When you use compost, you're helping to protect our precious streams, rivers, lakes, and marine waters.", "page_start": 1, "page_end": 1, "source_file": "CompostGuide.pdf" }, { "text": "The other corporate officers with responsibility for our operations have an average of over 23 years of management experience in the solid waste industry. Our five regional vice presidents and our 23 area presidents have an average of 24 years of experience in the industry.\n\nIn addition, Harris W. 
Hudson, who has served as our Vice Chairman since our initial public offering, has over 40 years of experience in the solid waste industry, including 11 years with Waste Management and 19 years with private waste collection companies.", "page_start": 11, "page_end": 11, "source_file": "NYSE_RSG_2004.pdf" }, { "text": "## Resources\n\n## Compost Organizations\n\n## Washington Organic Recycling Council\n\nFind a compost producer in your area www.compostwashington.org\n\n## US Composting Council\n\nSeal of Testing Assurance (STA) program www.compostingcouncil.org/programs/sta/\n\n## Restoring the Soil to Protect our Waterways\n\nwww.soilsforsalmon.org\n\nCompost amendment and erosion control during construction: information for builders www.buildingsoil.org\n\n## Natural Lawn & Garden Care, Soils, and Home Composting\n\nCity of Seattle\n\nwww.seattle.gov/util/services/yard\n\nKing County\n\nwww.kingcounty.gov/soils\n\nWashington State University\n\nwww.puyallup.wsu.edu/soilmgmt/\n\n## The Beauty of Your Lawn and Garden Blossoms from the Soil\n\nThank you for your interest in compost.\n\nCompost is a versatile product with many benefits. It enhances soil quality, helps save water, and supports your community's efforts to recycle organic debris. All this helps to conserve our natural resources and reduces the amount of material sent to the landfill.\n\nCompost-amended soil also helps break down pollutants and absorb stormwater runoff. By making nutrients slowly available to plants and enhancing plant health, compost can reduce the need for chemical fertilizers and pesticides. All these benefits help protect our lakes, rivers, and marine waters from pollution and excessive runoff.\n\nCompost is a natural amendment for your lawn or garden, and can be used regularly to enrich your soil. 
This guide is designed to help you get the most from the compost that you buy.", - "page_start": 2, - "page_end": 2, - "source_file": "CompostGuide.pdf" - }, - { - "text": "Liability under CERCLA is not dependent upon the intentional disposal of hazardous waste or hazardous substances. It can be founded upon the release or threatened release, even as a result of unintentional, non-negligent or lawful action, of thousands of hazardous substances, including very small quantities of such substances. Thus, even if our landÑlls have never knowingly received hazardous waste as such, it is possible that one or more hazardous substances may have been deposited or \"\"released'' at our landÑlls or at other properties which we currently own or operate or may have owned or operated. Therefore, we could be liable under CERCLA for the cost of cleaning up such hazardous substances at such sites and for damages to natural resources, even if those substances were deposited at our facilities before we acquired or operated them. The costs of a CERCLA cleanup can be very expensive. Given the diÇculty of obtaining insurance for environmental impairment liability, such liability could have a material impact on our business and Ñnancial condition. For a further discussion, see \"\"Ì Liability Insurance and Bonding.''\n\n - (3) The Federal Water Pollution Control Act of 1972, as amended. This Act regulates the discharge of pollutants from a variety of sources, including solid waste disposal sites, into streams, rivers and other waters of the United States. 
Point source runoÅ from our landÑlls and transfer stations that is discharged into surface waters must be covered by discharge permits that generally require us to conduct", - "page_start": 17, - "page_end": 17, - "source_file": "NYSE_RSG_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "BD-EN_calendrier-Lauzun-2024.pdf", - "query": "What is the day of the black container in Lachapelle ?", - "target_page": 4, - "target_passage": "LACHAPELLE MONDAY green weeks", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## HOW DOES IT WORK?\n\n## When to put my garbage container outside?\n\nThe evening before the waste collection day.\n\n## Who is responsible for the maintenance of the containers?\n\nYou will have to keep them in a clean working state (periodical washing).\n\n## Container stolen: What to do?\n\nIn case of theft, your container will be replaced on presentation of a theft report effected at your local police station.\n\n## Out container = full container\n\nPut your rubbish container out only when full.\n\nAttention ! Black garbage bags left on the ground will no longer be collected.\n\nPlease be respectful with the agents.\n\n## HOW TO GET A COMPOST KIT?\n\nBuy your own compost kit and get\n\ntips for good composting practice.\n\nOnly during opening hours every wednesday from 2 pm to 4 pm at the old recycling centre impasse Elie Teyssier-Miramont. (In case of unavailability, please contact the environment department).\n\n30 minute workshops/awarenessraising sessions are regularly organised (starting at 4pm). 
It is possible to leave with a composter during these workshops ** .\n\nRegistration and information with the service.\n\n| Compost kit | Plastic | Wood |\n|---------------|-----------|--------|\n| 300 L | 20 € | 30 € |\n| 400 L | 25 € | 35 € |\n\n- * Only payment by cheque made payable to the\n- 'Tresor Public' are accepted\n- ** Specific condition of acquisition apply accor-\n- ding to your municipality of residence\n\n\n\n\n\n| Town | Black container | Yellow container |\n|------------------------|------------------------|------------------------|\n| TUESDAY white weeks | THURSDAY green weeks | AGNAC |\n| MONDAY green weeks | WEDNESDAY white weeks | ALLEMANS-DU-DROPT |\n| TUESDAY white weeks | THURSDAY green weeks | ARMILLAC |\n| WEDNESDAY green weeks | FRIDAY white weeks | BOURGOUGNAGUE |\n| MONDAY green weeks | WEDNESDAY white weeks | CAMBES |\n| MONDAY green weeks | THURSDAY white weeks | LACHAPELLE |\n| TUESDAY white weeks | WEDNESDAY green weeks | LAPERCHE |\n| TUESDAY white weeks | THURSDAY green weeks | LA-SAUVETAT-DU-DROPT |\n| MONDAY green weeks | FRIDAY white weeks | LAUZUN |\n| TUESDAY white weeks | THURSDAY green weeks | LAVERGNE |\n| TUESDAY green weeks | THURSDAY white weeks | MIRAMONT-DE-GUYENNE |\n| WEDNESDAY white weeks | WEDNESDAY green weeks | MONTIGNAC-DE-LAUZUN |\n| TUESDAY white weeks | THURSDAY green weeks | MONTIGNAC-TOUPINERIE |\n| WEDNESDAY green weeks | WEDNESDAY white weeks | MOUSTIER |\n| MONDAY green weeks | THURSDAY white weeks | PEYRIÈRE |\n| MONDAY green weeks | WEDNESDAY white weeks | PUYSSERAMPION |\n| MONDAY white weeks | THURSDAY green weeks | ROUMAGNE |\n| WEDNESDAY white weeks | WEDNESDAY green weeks | SAINT-COLOMB-DE-LAUZUN |\n| MONDAY white weeks | FRIDAY green weeks | SAINT-PARDOUX-ISAAC |\n| WEDNESDAY white weeks | WEDNESDAY green weeks | SEGALAS |\n\n## MORE QUESTIONS ?\n\nWebsite:\n\nwww.ccpl47.fr\n\n/ Section En Pratique > Environnement > Gestion des déchets\n\nEnvironnement Service :\n\n12 rue du Renfort 47410 
LAUZUN\n\n05 53 94 11 23 / secretariat.environnement@ccpl47.fr\n\nComposting : anim.biodechets@ccpl47.fr / 06 33 72 84 18\n\nRecycling centre access, registration or modification : iris@ccpl47.fr / 05 53 64 12 26\n\nOn the CCPL website\n\n\n\nEverything you need to know about sorting\n\n\n\n\n\neepik\n\nr\n\ntock - F\n\nS\n\ndobe\n\nto : A\n\nédits pho\n\nr\n\nC", - "page_start": 3, - "page_end": 3, - "source_file": "BD-EN_calendrier-Lauzun-2024.pdf" - }, - { - "text": "Figure 11.3 Gruff Visualization of the EmployeeShape\n\n\n\nFigure 11.4 Gruff Visualization of the CustomerShape\n\n", - "page_start": 81, - "page_end": 81, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "- (i) paragraph 3(1)(e) to (h) of Schedule 10 as applied by paragraph (a) of this subparagraph,\n - (ii) paragraph (c) to (l) of this sub-paragraph,\n - (iii) paragraph 11(2), (3) and (4).\n - (2) For the purposes of sub-paragraph (1)(m), 'single end-to-end testing service' has the meaning given in paragraph 3(2)(c) of Schedule 10.", - "page_start": 61, - "page_end": 61, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "Contract number: ECHA/2019/355\n\naudits and investigations.", - "page_start": 38, - "page_end": 38, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "In Figure 5, the ratio between L γ and νL ν, 1mm reflects the division between BL Lacs and FSRQs as well", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0806.pdf" - }, - { - "text": "- (i) the number of tests they sold on that day, and\n - (ii) in relation to each test sold on that day-\n - (aa) the date of arrival in England of the person in respect of whom the test was sold, and\n - (bb) whether the person in respect of whom the test was sold is a category 1 arrival or not;\n - (h) if they arrange with another person ('X') for X to carry out any element of the single end-to-end testing service on their behalf, the test provider ensures that X complies with the 
following so far as relevant to the carrying out of that element-\n - (i) paragraph 3(1)(e) to (i) of Schedule 10 as applied by paragraph (a) of this subparagraph,\n - (ii) paragraph (b) to (g) of this sub-paragraph,", - "page_start": 63, - "page_end": 63, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "The Docker architecture includes the following components:\n\n - /SM590000 Docker Server Daemon\n - Daemon is the Docker process that runs as background process and listen for API requests. It also manages Dockers objects, such as images, containers, networks, and volumes.\n - /SM590000 Docker Registry\n - A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use. Docker is configured to look for images on Docker Hub by default.\n - /SM590000 Docker Objects:\n - - Images: This template is read-only with instruction to build a Docker container. An image can be layer on another image with specific changes. An images library is available from the Docker registry.\n - A Dockerfile contains the configuration information that is needed to build and run an image. Based on instructions that are defined in the Dockerfile, layers are created for an image. During the build process, only the changed layer is rebuilt; therefore, Docker remains lightweight, which makes it small and fast.\n - - Container: A container is an executable instance that is built from the image, which can be started, stopped, moved, or deleted by using Docker API or CLI. Containers can be connected to one or many networks. Storage can be added and a new container can be built by using a container.\n - - Services: Services allow you to scale containers across multiple Docker daemons, which all work together as a swarm with multiple managers and workers. 
Each member of a swarm is a Docker daemon, and the daemons all communicate by using the Docker API.\n - - NameSpace: In context of Docker, NameSpace is the technology that provides isolated workspaces for a container (see Table 2-1). Each container encapsulates all its features within the namespace that is associated with that specific container.\n\nTable 2-1 NameSpace\n\n| Namespace | Description |\n|-------------|--------------------------------------------------------------------------|\n| PID | Process isolation (PID: Process ID) |\n| NET | Managing network interfaces (NET: Networking) |\n| IPC | Managing access to IPC resources (IPC: InterProcess Communication) |\n| MNT | Managing file system mount points (MNT: Mount) |\n| UTS | Isolating kernel and version identifiers. (UTS: UNIX Timesharing System) |", - "page_start": 38, - "page_end": 38, - "source_file": "sg248459.pdf" - }, - { - "text": "## Requirement to undertake workforce tests\n\n7. -(1) This regulation applies to a person ('P'), to whom regulation 5(3) or (4) applies.\n\n - (2) Subject to paragraph (7)-\n - (a) where P is a person to whom regulation 5(3) applies, P must undertake a workforce test for day 2, day 5 and day 8 in accordance with paragraph (6) in relation to each category of test;\n - (b) where P is a person to whom regulation 5(4) applies, P must undertake a workforce test for day 2 in accordance with paragraph (6)(c).", - "page_start": 9, - "page_end": 9, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## 4.4 OpenShift registry\n\nOpenShift Container Platform can use any server that implements the container image registry API as a source of images, including the Docker Hub, private registries that are run by third parties, and the integrated OpenShift Container Platform registry.\n\n## 4.4.1 Integrated OpenShift Container Registry\n\nOpenShift Container Platform provides an integrated container image registry called OpenShift Container Registry (OCR). 
This registry that adds the ability to automatically provision new image repositories on demand. This feature provides users with a built-in location for their application builds to push the resulting images.\n\nWhenever a new image is pushed to OCR, the registry notifies OpenShift Container Platform about the new image, passing along all the information about it, such as the namespace, name, and image metadata. Different components of OpenShift Container Platform react to new images, creating builds and deployments.\n\nOCR can also be deployed as a stand-alone component that acts solely as a container image registry, without the build and deployment integration.\n\n## 4.4.2 Third-party registries\n\nOpenShift Container Platform can create containers by using images from third-party registries. However, these registries do not offer the same image notification support as the integrated OpenShift Container Platform registry. In this situation, OpenShift Container Platform fetches tags from the remote registry upon imagestream creation. Refreshing the fetched tags is as simple as running the oc import-image command. When new images are detected, the build that was described in 4.4.1, 'Integrated OpenShift Container Registry' and deployment reactions occur.\n\n## 4.5 Managing OpenShift resources\n\nAll OpenShift resources, images, containers, pods, services, builders, templates, and so on, are stored on etcd and can be managed by the OpenShift CLI, web console, or REST API. 
These resources also are defined in text files in JSON or YAML format and can be changed by editing those files and shared on an SCM system, such as GIT.\n\nOpenShift can even retrieve these resource definitions directly from an external SCM.", - "page_start": 84, - "page_end": 84, - "source_file": "sg248459.pdf" - }, - { - "text": "## 2.4.3 Kubernetes operating environment, objects, and basic operations\n\nThis section describes the Kubernetes operating environment, including its objects and basic operations.\n\n## Master node\n\nThis node runs multiple controllers that are responsible for the health of the cluster, replication, scheduling, endpoints (linking Services and Pods), Kubernetes API. It interacts with the underlying cloud providers and others. Generally, it ensures that everything is running and monitors worker nodes.\n\n## Worker node\n\nThis node runs the Kubernetes agent that is responsible for running Pod containers by way of Docker or rkt, requests secrets or configurations, mounts required Pod volumes, performs health checks, and reports the status of Pods and the node to the rest of the system.\n\n## Pod\n\nWithin a cluster, a pod encapsulates an application that is composed of one or more processes from one and at time multiple containers. Every pod includes dedicated I/O resources, such as storage, a unique IP, and a set of configuration properties for the runtime environment. These features make pod the smallest unit of deployment and basic unit of execution.\n\nDocker is the most popular container run time that is used for Kubernetes Pod 1 . Depending on associated containers, pods are available in the following types:\n\n - /SM590000 Pod with a single container: This configuration is the most common.\n - /SM590000 Pod with multiple containers: Must be colocated containers to serve a functional requirement.\n - /SM590000 Networking: Each pod shares its namespace, IP, and port. 
However, for optimal performance, containers in same Pod communicates with the localhost identity.\n - /SM590000 Storage: A pod specifies shared storage volume. All containers in a pod can share persistent data through this volume.\n\nAfter a pod is created and is scheduled to run on a node, it persists until one of the following actions occurs:\n\n - /SM590000 The process is ended.\n - /SM590000 The pod objected is deleted.\n - /SM590000 The pod is evicted for lack of resources.\n - /SM590000 The node fails.\n\nA pod alone is not self-healing, which means that during any failure, a pod does not attempt to restart. A pod is the encapsulation of containers, which primarily are executable entities. Therefore, to 'run a pod' means running an application and service through containers.", - "page_start": 41, - "page_end": 41, - "source_file": "sg248459.pdf" - } - ] - }, - { - "references": { - "source_file": "BD-EN_calendrier-Lauzun-2024.pdf", - "query": "What to do if my container is stolen ?", - "target_page": 4, - "target_passage": "Container stolen: What to do? In case of theft, your container will be replaced on presentation of a theft report effected at your local police station.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## HOW DOES IT WORK?\n\n## When to put my garbage container outside?\n\nThe evening before the waste collection day.\n\n## Who is responsible for the maintenance of the containers?\n\nYou will have to keep them in a clean working state (periodical washing).\n\n## Container stolen: What to do?\n\nIn case of theft, your container will be replaced on presentation of a theft report effected at your local police station.\n\n## Out container = full container\n\nPut your rubbish container out only when full.\n\nAttention ! 
Black garbage bags left on the ground will no longer be collected.\n\nPlease be respectful with the agents.\n\n## HOW TO GET A COMPOST KIT?\n\nBuy your own compost kit and get\n\ntips for good composting practice.\n\nOnly during opening hours every wednesday from 2 pm to 4 pm at the old recycling centre impasse Elie Teyssier-Miramont. (In case of unavailability, please contact the environment department).\n\n30 minute workshops/awarenessraising sessions are regularly organised (starting at 4pm). It is possible to leave with a composter during these workshops ** .\n\nRegistration and information with the service.\n\n| Compost kit | Plastic | Wood |\n|---------------|-----------|--------|\n| 300 L | 20 € | 30 € |\n| 400 L | 25 € | 35 € |\n\n- * Only payment by cheque made payable to the\n- 'Tresor Public' are accepted\n- ** Specific condition of acquisition apply accor-\n- ding to your municipality of residence\n\n\n\n\n\n| Town | Black container | Yellow container |\n|------------------------|------------------------|------------------------|\n| TUESDAY white weeks | THURSDAY green weeks | AGNAC |\n| MONDAY green weeks | WEDNESDAY white weeks | ALLEMANS-DU-DROPT |\n| TUESDAY white weeks | THURSDAY green weeks | ARMILLAC |\n| WEDNESDAY green weeks | FRIDAY white weeks | BOURGOUGNAGUE |\n| MONDAY green weeks | WEDNESDAY white weeks | CAMBES |\n| MONDAY green weeks | THURSDAY white weeks | LACHAPELLE |\n| TUESDAY white weeks | WEDNESDAY green weeks | LAPERCHE |\n| TUESDAY white weeks | THURSDAY green weeks | LA-SAUVETAT-DU-DROPT |\n| MONDAY green weeks | FRIDAY white weeks | LAUZUN |\n| TUESDAY white weeks | THURSDAY green weeks | LAVERGNE |\n| TUESDAY green weeks | THURSDAY white weeks | MIRAMONT-DE-GUYENNE |\n| WEDNESDAY white weeks | WEDNESDAY green weeks | MONTIGNAC-DE-LAUZUN |\n| TUESDAY white weeks | THURSDAY green weeks | MONTIGNAC-TOUPINERIE |\n| WEDNESDAY green weeks | WEDNESDAY white weeks | MOUSTIER |\n| MONDAY green weeks | THURSDAY white weeks | PEYRIÈRE 
|\n| MONDAY green weeks | WEDNESDAY white weeks | PUYSSERAMPION |\n| MONDAY white weeks | THURSDAY green weeks | ROUMAGNE |\n| WEDNESDAY white weeks | WEDNESDAY green weeks | SAINT-COLOMB-DE-LAUZUN |\n| MONDAY white weeks | FRIDAY green weeks | SAINT-PARDOUX-ISAAC |\n| WEDNESDAY white weeks | WEDNESDAY green weeks | SEGALAS |\n\n## MORE QUESTIONS ?\n\nWebsite:\n\nwww.ccpl47.fr\n\n/ Section En Pratique > Environnement > Gestion des déchets\n\nEnvironnement Service :\n\n12 rue du Renfort 47410 LAUZUN\n\n05 53 94 11 23 / secretariat.environnement@ccpl47.fr\n\nComposting : anim.biodechets@ccpl47.fr / 06 33 72 84 18\n\nRecycling centre access, registration or modification : iris@ccpl47.fr / 05 53 64 12 26\n\nOn the CCPL website\n\n\n\nEverything you need to know about sorting\n\n\n\n\n\neepik\n\nr\n\ntock - F\n\nS\n\ndobe\n\nto : A\n\nédits pho\n\nr\n\nC", - "page_start": 3, - "page_end": 3, - "source_file": "BD-EN_calendrier-Lauzun-2024.pdf" - }, - { - "text": "## 4.4 OpenShift registry\n\nOpenShift Container Platform can use any server that implements the container image registry API as a source of images, including the Docker Hub, private registries that are run by third parties, and the integrated OpenShift Container Platform registry.\n\n## 4.4.1 Integrated OpenShift Container Registry\n\nOpenShift Container Platform provides an integrated container image registry called OpenShift Container Registry (OCR). This registry that adds the ability to automatically provision new image repositories on demand. This feature provides users with a built-in location for their application builds to push the resulting images.\n\nWhenever a new image is pushed to OCR, the registry notifies OpenShift Container Platform about the new image, passing along all the information about it, such as the namespace, name, and image metadata. 
Different components of OpenShift Container Platform react to new images, creating builds and deployments.\n\nOCR can also be deployed as a stand-alone component that acts solely as a container image registry, without the build and deployment integration.\n\n## 4.4.2 Third-party registries\n\nOpenShift Container Platform can create containers by using images from third-party registries. However, these registries do not offer the same image notification support as the integrated OpenShift Container Platform registry. In this situation, OpenShift Container Platform fetches tags from the remote registry upon imagestream creation. Refreshing the fetched tags is as simple as running the oc import-image command. When new images are detected, the build that was described in 4.4.1, 'Integrated OpenShift Container Registry' and deployment reactions occur.\n\n## 4.5 Managing OpenShift resources\n\nAll OpenShift resources, images, containers, pods, services, builders, templates, and so on, are stored on etcd and can be managed by the OpenShift CLI, web console, or REST API. These resources also are defined in text files in JSON or YAML format and can be changed by editing those files and shared on an SCM system, such as GIT.\n\nOpenShift can even retrieve these resource definitions directly from an external SCM.", - "page_start": 84, - "page_end": 84, - "source_file": "sg248459.pdf" - }, - { - "text": "Consider the following basic concepts for aggregated logging:\n\n - /SM590000 Cluster: A set of Elasticsearch nodes that distribute the workload.\n - /SM590000 Node: A container that is running an instance of Elasticsearch, which is part of the cluster.\n - /SM590000 Index: Collection of documents (container logs).\n - /SM590000 Shards and Replicas: Indexes can be divided into sets of data that contain the primary copy of the documents that are stored (primary shards) or backups of that primary copies (replica shards). 
Sharding allows the application to horizontally scale the information and distributed/paralellized operations. Replication instead provides HA and also better search throughput because searches are also run on replicas.\n\nNote: Using NFS storage as a volume or a persistent volume (or by way of an NAS such as Gluster) is not supported for Elasticsearch storage because Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur.\n\nRed Hat OpenShift Container Platform can gather metrics from kubelet and store the values in Heapster. Red Hat OpenShift Container Platform Metrics provide the ability to view CPU, memory, and network-based metrics and display the values in the user interface. These metrics can allow for the horizontal autoscaling of pods based on parameters that are provided by a Red Hat OpenShift Container Platform user. It is important to understand capacity planning when metrics are deployed into an Red Hat OpenShift Container Platform environment.\n\nRed Hat OpenShift Container Platform metrics is composed by the following pods that are running on the Red Hat OpenShift Container Platform environment:\n\n - /SM590000 Heapster: Heapster scrapes the metrics for CPU, memory, and network usage on every Pod. Then, it exports them into Hawkular Metrics.", - "page_start": 112, - "page_end": 112, - "source_file": "sg248459.pdf" - }, - { - "text": "Figure 2-2 IBM PowerVC\n\n\n\nAround 2011, Container technology started to be a strong player in the cloud arena, which is a method to package an application in a box so it can be run with its dependencies, isolated from other applications. 
For more information, see 2.3, 'Containers' on page 19.\n\nA year later, Docker Containers exploded in popularity, but one thing was missing: the thorough view and management of the entire environment.", - "page_start": 28, - "page_end": 28, - "source_file": "sg248459.pdf" - }, - { - "text": "\n\nChapter 12.\n\n## Encryption\n\nEncryption protects against the potential exposure of sensitive user data that is stored on discarded, lost, or stolen storage devices. Storwize V7000 supports optional encryption of data at-rest.\n\nThis chapter includes the following topics:", - "page_start": 624, - "page_end": 624, - "source_file": "sg247938.pdf" - }, - { - "text": "## Registry\n\nOpenShift can build container images from source code, deploy them, and manage their lifecycle. To enable this process, OpenShift provides an internal, integrated registry that can be deployed in the OpenShift environment to manage images.\n\nThe registry stores images and metadata. For production environments, persistent storage must be used for the registry; otherwise, any images that were built or pushed into the registry disappear if the pod restarts.\n\n## Aggregated logging\n\nOne of the Red Hat OpenShift Container Platform optional components is named Red Hat OpenShift Container Platform aggregated logging. This component collects and aggregates logs from the pods that are running in the Red Hat OpenShift Container Platform cluster and /var/log/messages on nodes. 
This configuration enables Red Hat OpenShift Container Platform users to view the logs of projects that they can view by using a web interface.\n\nRed Hat OpenShift Container Platform aggregated logging component is a modified version of the ELK stack, which is composed of a few pods that are running on the Red Hat OpenShift Container Platform environment:\n\n - /SM590000 Elasticsearch: An object store where all logs are stored.\n - /SM590000 Kibana: A web UI for Elasticsearch.\n - /SM590000 Curator: Elasticsearch maintenance operations that are performed automatically on a per-project basis.\n - /SM590000 Fluentd: Gathers logs from nodes and containers and feeds them to Elasticsearch.\n\nConsider the following basic concepts for aggregated logging:", - "page_start": 112, - "page_end": 112, - "source_file": "sg248459.pdf" - }, - { - "text": "If the RESOURCE\\_NAME parameter is omitted, all resources of the specified RESOURCE\\_TYPE are summarized, as shown in Example 6-12.\n\nExample 6-12 oc get pod\n\n| # oc get pod NAME READY STATUS RESTARTS AGE |\n|-----------------------------------------------------------------------------|\n| docker-registry-3-4flql 1/1 Running 2 1d |\n| router-2-4gnmj 1/1 Running 3 1d |\n| router-2-cp5sf 1/1 Running 3 1d |\n| router-2-slkjf 1/1 Running 3 1d |\n\nUse the oc types command for a quick refresher on the concepts of the available RESOURCE\\_TYPES, as shown in Example 6-13.\n\n## Example 6-13 oc types\n\n## # oc types\n\nCommand \"types\" is deprecated, refer to official documentation instead Concepts and Types\n\nKubernetes and OpenShift help developers and operators build, test, and deploy applications in a containerized cloud\n\nenvironment. 
Applications may be composed of all of the components below, although most\n\ndevelopers will be concerned\n\nwith Services, Deployments, and Builds for delivering changes.\n\n## Concepts:\n\n## * Containers:\n\nA definition of how to run one or more processes inside of a portable Linux environment. Containers are started from an Image and are usually isolated from other containers on the same machine.\n\n## * Image:\n\nA layered Linux filesystem that contains application code, dependencies, and any supporting operating system libraries. An image is identified by a name that can be local to the current cluster or point to a remote Docker registry (a storage server for images).\n\n## * Pods [pod]:\n\nA set of one or more containers that are deployed onto a Node together and share a unique IP and Volumes (persistent storage). Pods also define the security and runtime policy for each container.\n\n## * Labels:\n\nLabels are key value pairs that can be assigned to any resource in the system for grouping and selection. Many resources use labels to identify sets of other resources.\n\n## * Volumes:\n\nContainers are not persistent by default - on restart their contents are cleared. Volumes are mounted filesystems available to Pods and their containers which may be backed by a number of host-local or network attached storage endpoints. The simplest volume type is EmptyDir, which is a temporary directory on a single machine. Administrators may also allow you to request a Persistent Volume that is automatically attached to your pods.\n\n - * Nodes [node]:", - "page_start": 163, - "page_end": 163, - "source_file": "sg248459.pdf" - }, - { - "text": "## Ephemeral storage\n\nContainer images are stored locally on the nodes running Red Hat OpenShift Container Platform pods.\n\nWhen Docker run time is used, the /var/lib/docker mount point is used by active containers and pods. 
This local storage is where the node maintains a copy of container images that are pulled from a container registry. This mount point is managed by docker-storage and it uses the following naming format: /var/lib/docker/overlay2/ and /var/lib/docker/containers/ .\n\n## Persistent storage\n\nPersistent Volume Claims (PVC) are used to store the application data. These claims can be added to the environment manually or provisioned dynamically by using a StorageClass object.\n\n## Storage classes\n\nThe StorageClass resource object describes and classifies different types of storage that can be requested. It also provides a means for passing parameters to the backend for dynamically provisioned storage on demand.\n\nStorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators (cluster-admin) or Storage Administrators (storage-admin) define and create the StorageClass objects that users can use without needing any intimate knowledge about the underlying storage volume sources. Therefore, the naming of the storage class that is defined in the StorageClass object must be useful in understanding the type of storage it maps, whether that is storage from PowerVC Cinder or from other storage provider.\n\n## Persistent Volumes\n\nPersistentVolumes (PV) objects provide pods with non-ephemeral storage by configuring and encapsulating underlying storage sources. A PersistentVolumeClaim (PVC) abstracts an underlying PV to provide provider-independent storage to OpenShift resources. When successfully fulfilled by the system, a PVC mounts the persistent storage to a specific directory (mountPath) within one or more pods. From the container perspective, the mountPath is connected to the underlying storage mount points by a regular bind mount.\n\n## FlexVolumes\n\nFlexVolume is known as an out-of-tree plug-in interface because it is developed outside the main Kubernetes source code. 
The FlexVolume interface enables users to write their own drivers. These drivers can be written in any programming or scripting language.\n\nWhen an application that is running on OpenShift needs a persistent volume, it submits a persistent volume claim to the PowerVC FlexVolume driver. The PowerVC FlexVolume call is translated into a Cinder API call to create a volume. When the volume is ready, it is presented back to OpenShift and attached to the requesting pod.\n\nThe persistent volume claim needs to include only the volume size and access mode. The backend implementation information about how and where the volume is created are handled by PowerVC. The OpenShift API abstracts them from the user that is making the resource claim.\n\nNote: For more information about the PowerVC FlexVolume, see this web page.", - "page_start": 111, - "page_end": 111, - "source_file": "sg248459.pdf" - }, - { - "text": "- - Control groups: A control group (cgroup) is the technology that limits an application to a specific set of resources. This feature allows Docker Engine to share available hardware resources to containers and optionally enforce limits and constraints.\n - - Union file system: Union file systems (UnionFS) are file systems that operate by creating layers, which makes them lightweight and fast. Docker Engine uses UnionFS to provide the building blocks for containers.\n - - Container format: Container format is the wrapper around NameSpaces, control groups, and UnionFS. 
The default container format is libcontainer.\n", - "page_start": 38, - "page_end": 38, - "source_file": "sg248459.pdf" - }, - { - "text": "The following timeline highlights the major shifts in the development of containers to date (see Figure 2-9):\n\n - 2000 FreeBSD Jails: FreeBSD Jails enabled computer systems to be partitioned into multiple servers that were independent subsystems, named Jails, each with a unique IP address.\n - 2001 Linux-VServer: Similar to FreeBSD Jails, Linux also developed a feature for operating system virtualization where a file system, memory, and network can be shared among independent systems.\n - 2004 Solaris Containers: Solaris Containers combined system resource controls and boundary separation that was provided by zones to take advantage of features, such as snapshots and cloning from ZFS.\n - 2006 Google process containers: Process Containers was designed for limiting, accounting, and isolating resource usage (CPU, memory, disk I/O, and network) of a collection of processes. Later, this was renamed Control Groups (cgroups) and merged into Linux kernel 2.6.24.\n - 2008 LXC (Linux Containers): Linux Containers (LXC) was the first, most complete implementation of a Linux container manager. It was implemented in 2008 by using cgroups and Linux namespaces.\n - 2013 Let Me Contain That For You (LMCTFY): LMCTFY started in 2013 as an open source version of Google's container stack. Applications can be made container aware, which creates and manages their own subcontainers.\n - 2013 Docker: Docker emerged, which made container services even more popular. Docker and containers grew together.\n - 2016 Security and DevOps: Container security was enhanced, and DevOps evolved as the preferred container application process.\n - 2017 Containers became more mature with CNCF and Kubernetes.\n\nFigure 2-9 Containers timeline\n\n", - "page_start": 36, - "page_end": 36, - "source_file": "sg248459.pdf" - }, - { - "text": "## Assessment of the Impact of Participants' Dyspnea\n\nAlthough neither the CAT nor the SGRQ is a dyspnea-specific tool, both are recommended by the Global Initiative for Chronic Obstructive Lung Disease to evaluate symptoms, including dyspnea, 20 and both yield a richer assessment of dyspnea than the modified Medical Research Council breathlessness scale. 20 Fifteen questions were taken from the CAT and SGRQ questionnaires that referred to individuals' experiences with dyspnea, and a composite measure of dyspnea impact using a weighted sum of the responses to the 15 questions was constructed. Questions were coded so that larger values indicate more impactful dyspnea. Weights used for question responses in calculating the dyspnea impact assessment measure were those of the first component of a principal component analysis (PCA) based on the covariance matrix of question responses. Questions with multiple responses and ordinal structure are individually more informative and thus were accorded higher weight than individual true-false questions. 
No additional PCA component was anticipated a priori to be material for our investigation, and an eigenvalue analysis of the PCA was conducted to verify this assumption.\n\nThe composite dyspnea impact measure was scaled so its minimum value was 0 if the response to each of the 15 questions was 0, and the maximum value was scaled to 100 if the individual responses for all 15 questions represented the most severe dyspnea response.", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "The prevalence of individuals who were obese and morbidly obese in the PRISm group partially explains the between-group difference in dyspnea. The excess dyspnea seen in the PRISm group when compared with the normal spirometry group is partly explained by patient-specific risk factors, including BMI, which shrink the mean dyspnea differential between the groups from 11.2 to 5.5 points (Tables 3-6). The remaining 5.5-point difference indicates that PRISm patients have excess dyspnea relative to symptomatic individuals with normal spirometry for additional reasons other than obesity.", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "## Take-home Points\n\nStudy Question: How profoundly are adults with undiagnosed respiratory symptoms affected by dyspnea?\n\nResults: In community-based adults with undiagnosed respiratory symptoms, those identified with preserved ratio impaired spirometry experienced the greatest impact of dyspnea, followed by those with undiagnosed asthma or COPD. Greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\nInterpretation: Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity.\n\nDyspnea refers to a subjective sensation of breathing discomfort. 1 In a study involving a community-based population aged > 70 years, the prevalence of dyspnea was found to be 32%. 2 Dyspnea can lead to limitations in daily activities, reduced exercise tolerance, and heightened mortality risks. 3\n\nDyspnea not only affects individuals with diagnosed respiratory conditions but also poses a significant burden on those with undiagnosed conditions. In a systematic review by Müller et al, 4 the combined\n\n## Study Design and Methods\n\n## Recruitment of Undiagnosed Cases and Healthy Control Patients\n\nBetween June 2017 and January 2023, adults aged ≥ 18 years were recruited through a two-step process into the Undiagnosed COPD and Asthma Population (UCAP) study, a multicenter case-finding study. Approval for\n\nABBREVIATIONS: ASQ = Asthma Screening Questionnaire; BD = bronchodilator; CAT = COPD Assessment Test; PCA = principal component analysis; PRISm = preserved ratio impaired spirometry; SGRQ = St. George's Respiratory Questionnaire\n\nAFFILIATIONS: From The Ottawa Hospital Research Institute (J. B., E. G., K. L. V., G. G. A., S. M., and S. D. A.), University of Ottawa, Ottawa, ON; the Desautels Faculty of Management (G. A. W.), McGill University, Montreal, QC; the Department of Medicine (C. B.), The University of British Columbia, Vancouver, BC; the Centre de recherche (L.-P. B. and A. C.), Institut de cardiologie et de pneumologie de Québec, Université Laval, Quebec, QC; the Cumming School of Medicine (S. K. F.), University of Calgary, Calgary, AB; the Department of Medicine (E. P.), University of Saskatchewan, Regina, SK; the Firestone Institute for Respiratory Health (R. A. M.), McMaster University, Hamilton, ON; the Department of Medicine (C. L.), Université de Montreal, Montreal, QC; the Department of Medicine and the Li Ka Shing Knowledge Institute (S. G.), St. Michael's Hospital University of Toronto, Toronto, ON; the Department of Medicine\n\nprevalence of dyspnea in the adult general population across 11 studies was estimated to be 10%. Dyspnea can arise from a broad spectrum of underlying factors, including both respiratory and nonrespiratory conditions. Studies have revealed that dyspnea is not solely attributable to respiratory conditions but is also heavily influenced by cardiovascular deconditioning and by nonrespiratory factors, including psychosocial, social, and environmental determinants. 5,6\n\nDyspnea is a prevalent symptom with consequences that extend beyond its physiologic implications. A study in European patients with COPD explored the burden of dyspnea and identified potential correlates. The study revealed that higher dyspnea impact correlated with lower health-related quality of life, increased work impairment, and a higher frequency of emergency department visits. 7", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "## TABLE 2 ] (Continued)\n\nTable 4 presents the association of dyspnea with patient-specific risk factors. Dyspnea impact increased with younger age, being female, higher BMI, higher smoking and smoke exposure history, and total work", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "## Impact of Dyspnea on Adults With Respiratory Symptoms Without a Defined Diagnosis\n\nJared Bierbrier, BSc; Emily Gerstein; George A. Whitmore, PhD; Katherine L. Vandemheen, MScN; Celine Bergeron, MD; Louis-Philippe Boulet, MD; Andreanne Cote, MD; Stephen K. Field, MD; Erika Penz, MD; R. Andrew McIvor, MD; Catherine Lemière, MD; Samir Gupta, MD; Paul Hernandez, MD; Irvin Mayers, MD; Mohit Bhutani, MD; M. Diane Lougheed, MD; Christopher J. Licskai, MD; Tanweer Azher, MD; Nicole Ezer, MD; Martha Ainslie, MD; Gonzalo G. Alvarez, MD; Sunita Mulpuru, MD; and Shawn D. 
Aaron, MD\n\nBACKGROUND: We investigated dyspnea; its associated risk factors; and its impact on health care utilization, quality of life, and work productivity in adults with undiagnosed respiratory symptoms.\n\nRESEARCH QUESTION: What is the impact of dyspnea in adults with undiagnosed respiratory symptoms?\n\nSTUDY DESIGN AND METHODS: This population-based study included 2,857 adults who were experiencing respiratory symptoms. These individuals had not been previously diagnosed with any lung conditions and were recruited from 17 Canadian centers using random digit dialing. Each participant underwent spirometry testing both before and after using a bronchodilator to determine if they met the diagnostic criteria for COPD, asthma, or preserved ratio impaired spirometry (PRISm), or if their spirometry results were normal. An age-matched control group (n = 231) was similarly recruited using random digit dialing. A dyspnea impact assessment score from 0 to 100 was produced using questions from the COPD Assessment Test and St. George's Respiratory questionnaire.\n\nRESULTS: Individuals with PRISm (n = 172) reported more impactful dyspnea (mean score, 63.0; 95% CI, 59.5-66.4) than those with undiagnosed asthma (n = 265; mean score, 56.6; 95% CI, 53.9-59.3) or undiagnosed COPD (n = 330; mean score, 57.5; 95% CI, 55.1-59.9). All groups reported significantly more impactful dyspnea than the control group (mean score, 13.8; 95% CI, 11.8-15.7). Patient-specific risk factors including age, sex, BMI, smoking, and comorbidities explained 20.6% of the variation in dyspnea. An additional 12.4% of the variation was explained by disease classification and another 1.7% by the severity of lung function impairment assessed with spirometry. After adjusting for age, sex, and BMI, greater dyspnea impact was associated with increased health care utilization, lower quality of life, and reduced work productivity.\n\nINTERPRETATION: Our findings showed that in community-based adults with undiagnosed respiratory symptoms, those identified with PRISm experienced the greatest impact of dyspnea. Dyspnea imposes burdens on the health care system and is associated with impaired quality of life and work productivity. CHEST 2024; 166(6):1296-1308\n\nKEY WORDS: asthma; case finding; COPD; dyspnea\n\nFOR EDITORIAL COMMENT, SEE PAGE 1259", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "## Risk Factors Associated With Dyspnea\n\nPatient-related risk factors were considered first, and results of spirometry considered afterward. The spirometry risk factors chosen for the second-stage analysis included the spirometry-based diagnosis of the patient (asthma, COPD, PRISm, or normal) and lung function results indicative of the severity of physiologic impairment. Severity was gauged by assessing three principal lung function measures: (1) post-BD FEV1 % predicted, (2) post-BD FEV1/FVC ratio, and (3) percentage reversal of FEV1 with BD.\n\n## Dyspnea Impact and Health Care Use, Quality of Life, and Work Productivity\n\nThe impact of dyspnea and its associations with health care use, quality of life, and work productivity were examined. Health care utilization was assessed through self-reported data. Quality of life was assessed using the 36-Item Short Form Health Survey questionnaire, where higher scores indicate better health status. Work productivity was assessed using the Work Productivity and Activity Impairment questionnaire, where higher scores\n\n## Results\n\nFigure 1 illustrates the results of the case-finding approach, including the enrollment of the control group. 
Among 5,631 potentially eligible participants, 1,359\n\nindicate greater impairment in work productivity and daily activities.\n\n## Statistical Analysis\n\nBox plots were used to compare distribution patterns of dyspnea impact assessments among the disease groups. Pairwise comparison tests were conducted to evaluate mean dyspnea differences between groups. Multiple linear regression analysis was used to measure contributions to variability of dyspnea by selected patient-specific risk factors, spirometry disease classification, and key lung function measures. The selected sets of risk factors were evaluated using successive regression analyses. Analysis of variance sums of squares from the successive regression analyses provided the cumulative percentage contributions to variability of dyspnea. Simple, multiple, and logistic regression analyses were used to study associations between dyspnea and health care utilization, quality of life, and work productivity outcomes. All statistical analyses were done using STATA 16 statistical software (StataCorp).\n\nparticipants (24%) did not meet the threshold of ≥ 6 points on the ASQ or ≥ 20 points on the COPD Diagnostic Questionnaire and were thus excluded, leaving 4,272 individuals deemed eligible for spirometry.\n\nFigure 1 - Study flow diagram demonstrating the case-finding and control group recruitment and allocation. ASQ = Asthma Screening Questionnaire; COPD-DQ = COPD Diagnostic Questionnaire; CF = cystic fibrosis; MI = myocardial infarction; PRISm = preserved ratio impaired spirometry.\n\n", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "bronchial challenge testing into a case-finding strategy identified asthma in 26% of symptomatic individuals who had normal spirometry and no response to BD. 27\n\nIndividuals with undiagnosed respiratory symptoms, determined to have asthma or COPD through spirometry, experience poor health status. 28 Therefore, the implementation of known treatment approaches for asthma or COPD is important to improve their conditions. 29 In contrast, those with normal spirometry or PRISm face unclear treatment approaches. Long-acting BD therapy in symptomatic individuals with tobacco exposure with normal spirometry is not effective. 30 Weight management programs may be useful for individuals who are obese with PRISm-related dyspnea; however, this awaits definitive clinical trials. 31\n\nDyspnea was severe and prevalent within our study group; however, it remained undiagnosed. A study conducted by Stefan et al 32 revealed that physicians underestimated their patients' dyspnea 37.9% of the time, whereas nurses underestimated it 3.5% of the time. Moreover, many patients limit their physical activities, which leads them to downplay the extent of their dyspnea. 19 Patient underreporting of symptoms, coupled\n\n## Acknowledgments\n\nAuthor contributions: S. D. A. and G. A. W. contributed to conception and design. J. B., E. G., G. A. W., K. L. V., and S. D. A. contributed to analysis and interpretation. J. B., E. G., G. A. W., K. L. V., S. D. A., C. B., C. L., L.-P. B., A. C., E. P., S. K. F., S. G., R. A. M., I. M., M. B., P. H., M. D. L., M. A., C. J. L., T. A., N. E., G. G. A., and S. M. contributed to drafting the manuscript for important intellectual content. All authors had access to and participated in the interpretation of the data and provided input into the preparation and submission of the manuscript. The authors vouch for the accuracy and completeness of the data.\n\nRole of sponsors: The sponsor had no role in the design of the study, the collection and analysis of the data, or the preparation of the manuscript.\n\nOther contributions: We thank the following individuals from the Canadian study sites: Ottawa Hospital Research Institute, Ottawa, Ontario: Taylor Poulin; Susan Deveau, RRT; Victoria Thompson; Meredith McCleery; Angelina Tohme; Vicky Panteleakos, RRT; Geneviève Longtin, RRT; Joanne Cassidy, RRT; Amanda Bergeron, MSc; Jennifer Biggs, RN; Jessica Bergeron; and Elisabet White; Vancouver General Hospital, Vancouver, British Columbia: Shelley Abercromby, BSc; Jana Caine; David\n\nwith inadequate physician-led investigations of symptoms, may explain why dyspnea often goes undiagnosed in the population. 33\n\nIn conclusion, our study measured dyspnea impact in individuals with no preexisting diagnosis of lung disease who reported respiratory symptoms as part of a purposeful case-finding strategy. Individuals with PRISm exhibited the greatest impact of dyspnea, even higher than those newly diagnosed with asthma or COPD. After adjusting for patient factors, comorbidities, pulmonary diseases, and severity of lung physiologic impairment, most of the variability in dyspnea remained unexplained. We also showed that dyspnea was associated with increased health care utilization, impaired quality of life, and work productivity.\n\n## Funding/Support\n\nThis study is supported by the Canadian Institutes of Health Research [FDN Grant 154322].\n\n## Financial/Nonfinancial Disclosures\n\nNone declared.", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "Data are presented as mean (SD) for Q1, Q2, and Q3 (total), and Q3 to Q15 were presented to participants as yes or no questions, where percentages of participants who answered yes are shown. 
Question weights (principal component analysis scoring coefficients) used for calculating the dyspnea assessment are shown below individual questions. CAT = COPD Assessment Test; PRISm = preserved ratio impaired spirometry; Q = question; SGRQ = St. George's Respiratory Questionnaire.\n\nHowever, 1,415 either did not attend or were unable to complete adequate spirometry. Ultimately, 2,857 (67%) of those eligible underwent both pre- and post-BD spirometry.\n\nOf these 2,857 participants, 2,090 (73.2%) had normal spirometry, 265 (9.3%) had undiagnosed asthma, 330 (11.5%) had undiagnosed COPD, and 172 (6.0%) had PRISm based on post-BD spirometry. Of the 595 individuals with spirometric evidence of asthma or COPD, 253 were independently assessed by a pulmonologist. In 245 of these 253 cases (97%), the independent physician diagnosis agreed with the study diagnosis of asthma or COPD.\n\nIndividuals in the COPD group were generally older and more likely to be male compared with all other study groups (Table 1). All groups, including healthy control participants, had mean BMIs in the overweight or obese ranges. The PRISm group was heaviest with an average BMI of 34.7, and 22% of PRISm patients met BMI criteria for morbid obesity. Compared with all other groups, those with COPD were the most likely to have active or previous tobacco use, with the highest average total pack-years of 32.7. The control group had the lowest number of people with active or previous tobacco use.\n\nTable 2 shows mean responses to the 15 dyspnea questions for each disease classification and presents question weights (PCA scoring coefficients) used for calculating the dyspnea impact assessment.\n\nIndividuals with PRISm reported the highest dyspnea impact, with a significantly greater mean score (63.0; 95% CI, 59.5-66.4) than those with undiagnosed asthma or COPD (Table 3). Those with undiagnosed asthma or COPD had similar mean scores (56.6; 95% CI, 53.9-59.3 and 57.5; 95% CI, 55.1-59.9, respectively), followed by those with normal spirometry (51.8; 95% CI, 50.7-52.8). All four groups reported significantly more impactful dyspnea than the control group (mean score, 13.8; 95% CI, 11.8-15.7). Table 3 shows between-group differences in mean dyspnea impact assessments for each pair of disease outcomes. Figure 2 compares box plots of the dyspnea impact assessment values across disease classifications.", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "[\n\n- assessed through inspiratory resistive loading. J Bras Pneumol. 2015;41(2): 143-150.\n- 25. Ekström M, Bornefalk H, Sköld M, et al. Validation of the Swedish Multidimensional Dyspnea Profile (MDP) in outpatients with cardiorespiratory disease. BMJ Open Respir Res. 2019;6: e000381.\n- 26. Yorke J, Russell AM, Swigris J, et al. Assessment of dyspnea in asthma: validation of The Dyspnea-12. J Asthma. 2011;48(6):602-608.\n- 27. Boulet LP, Boulay ME, Cote A, et al. Airway inflammation and hyperresponsiveness in subjects with respiratory symptoms and normal spirometry. Eur Respir J. 2023;61(3): 2201194.\n- 28. Gerstein E, Bierbrier J, Whitmore GA, et al. Impact of undiagnosed chronic obstructive pulmonary disease and asthma on symptoms, quality of life, healthcare use, and work productivity. Am J Respir Crit Care Med. 2023;208(12):1271-1282.\n- 29. Aaron SD, Vandemheen K, Whitmore GA, et al. Early diagnosis and treatment of COPD and asthma: a randomized, controlled trial. N Engl J Med. 2024;390(22):2061-2073.\n- 30. Han MK, Ye W, Wang D, et al. Bronchodilators in tobacco-exposed persons with symptoms and preserved lung function. N Engl J Med. 2022;387(13): 1173-1184.\n- 31. Marott JL, Ingebrigtsen TS, Çolak Y, et al. 
Impact of the metabolic syndrome on cardiopulmonary morbidity and mortality in individuals with lung function impairment: a prospective cohort study of the Danish general population. Lancet Reg Health Eur. 2023;35:100759.\n- 32. Stefan MS, Priya A, Martin B, et al. How well do patients and providers agree on the severity of dyspnea? J Hosp Med. 2016;11(10):701-707.\n- 33. Cherian M, Magner KMA, Whitmore GA, et al. Patient and physician factors associated with symptomatic undiagnosed asthma or COPD. Eur Respir J. 2023;61(2): 2201721.\n\n]", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed6_cc4.pdf" - }, - { - "text": "TABLE 9 ] Unadjusted and Adjusted Dyspnea Associations With Work Productivity (WPAI)\n\n| | Unadjusted | Unadjusted | Adjusted | Adjusted |\n|-----------------------------------------------|------------------------------|---------|------------------------------|---------|\n| Measure | Dyspnea OR (95% CI) | P Value | Dyspnea OR (95% CI) | P Value |\n| Are you currently employed (working for pay)? | 0.995 (0.992-0.998) | .002 | 0.993 (0.990-0.997) | < .001 |\n| Measure a | Dyspnea Coefficient (95% CI) | P Value | Dyspnea Coefficient (95% CI) | P Value |\n| Absenteeism | 0.061 (0.040-0.083) | < .001 | 0.066 (0.044-0.089) | < .001 |\n| Presenteeism | 0.334 (0.293-0.375) | < .001 | 0.349 (0.306-0.392) | < .001 |\n| Work productivity loss | 0.368 (0.323-0.413) | < .001 | 0.383 (0.336-0.430) | < .001 |\n| Activity impairment | 0.503 (0.463-0.544) | < .001 | 0.501 (0.458-0.544) | < .001 |\n\nORs and regression coefficients are presented with 95% CIs and P values. Adjusted coefficients are adjusted for age, sex, and BMI. WPAI = Work Productivity and Activity Impairment questionnaire.", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed6_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "CompostGuide.pdf", - "query": "Can I put my plants directly on my compost ?", - "target_page": 2, - "target_passage": "Don't\tput\tplants\tinto\t100%\tcompost.\t\tMix\t\t\t\t\t\t\t\t\t compost\tthoroughly\tinto\texisting\tsoil\tbefore\t\t\t planting.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "A project of the Washington Organic Recycling Council, with support from the Washington State Department of Ecology's Public Participation Grant program.\n\nThis product was partly funded through a grant from the Washington Department of Ecology. While these materials were reviewed for grant consistency, this does not necessarily constitute endorsement by the department.\n\nSpecial thanks: the original version of this brochure in 2003 was created by the Washington County, Oregon Solid Waste and Recycling Program in cooperation with the Washington Organic Recycling Council and the Composting Council of Oregon.\n\nwww.compostwashington.org\n\nwww.soilsforsalmon.org\n\noriginal artwork provided by:\n\n## Tips to Remember:\n\n- · Don't put plants into 100% compost. 
Mix compost thoroughly into existing soil before planting.\n- · When transplanting, it's better to amend the whole bed, not just planting holes, to promote root growth.\n- · Ask your compost supplier which compost product is best for your intended use.\n- · Use compost at the recommended application rate.\n- · To maintain healthy soil, reapply compost or mulch every 1-2 years.\n- · Many composts are rich in plant nutrients, so you may be able to reduce fertilizer use after applying compost.\n- · Compost can also reduce your lawn and garden's summer irrigation needs.\n- · Compost-amended soil and mulching slow run off, reduce erosion, and break down pollutants. When you use compost, you're helping to protect our precious streams, rivers, lakes, and marine waters.", - "page_start": 1, - "page_end": 1, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## Compost Questions and Answers\n\n## What is compost?\n\nCompost is a natural humus-like soil amendment that results from the controlled aerobic (with oxygen) decomposition of organic materials. Compost is not soil - it should be mixed with soil. It is not fertilizer, although it contains many slowly released nutrients.\n\n## What materials ('feedstocks') are used to make compost?\n\nCompost facilities in Washington recycle a variety of organic materials, including yard debris, food scraps, manure, biosolids, forest residuals like sawdust and bark, construction wood, and agricultural residues. All of these materials can be used to produce high quality compost. 
Your supplier can tell you which materials they compost.\n\n## How do I know I'm getting safe, quality compost?\n\nFortunately, in Washington we have strict permitting and production standards for compost facilities, that include both time and temperature requirements and contaminant limits.\n\n## What about weed seeds, plant diseases or pesticide residues?\n\nThe controlled time, aeration, and temperature process required in Washington has been shown to kill weed seeds and plant diseases. That same process breaks down most pesticide residues. There are a few agricultural pesticides that are not easily broken down, and permitted Washington compost manufacturers carefully watch their feedstocks to keep those materials out of the composting process.\n\n\n\n\n\n\n\n## Compost Beginnings\n\nThe yard debris or food scraps* that you place into your home compost bin, take to a drop-off site, or set out for curbside collection could become the compost that you later use on your garden, lawn, and flowerbeds.\n\nIt is essential to place only quality organic material into the composting process. Here are some tips:\n\n - l The products you use or spray in your yard can end up in the compost process. Carefully read the labels of pesticide and herbicide products you use. (See page 9.)\n - l Please keep yard debris free of :\n - x Garbage\n - x Plastic of any sort\n - - Plastic plant pots\n - - Plastic plant tabs\n - - Plastic bags (if you want to bag your yard debris, use paper garden bags - available at most garden centers)\n - x Rock, brick, or masonry\n - x Glass or metal\n - x Pet waste.\n - * Many localities now collect food scraps and food-soiled paper along with yard debris for composting. 
Call your local collection service to find out what is collected in your area.\n\n", - "page_start": 4, - "page_end": 4, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## The Composting Process\n\nEven though there are a variety of composting methods, most composting follows a similar process:\n\n## 1. Grinding Organic Materials:\n\nDepending on the facility, the feedstock (material) available, and the desired compost product, different combinations of materials are added together and ground into small pieces:\n\n - · Nitrogen-rich materials (such as grass, fresh plant cuttings, biosolids, and manures)\n - · Carbon-rich materials (such as dried leaves, woody materials, and straw).\n\n## 2. Heating Up:\n\nThe material is placed into piles where it begins to heat up from the biological activity of the compost microbes. Typically, compost temperatures are required to reach at least 131 degrees F in a specified time period in order to destroy weed seeds and pathogens. The compost is turned or aerated, allowing the composting microbes to breathe. After a period of time, the nitrogen-rich material is depleted, the biological process slows, and the hot compost begins to cool.\n\n## 3. Finishing:\n\nTypically 'finished' compost has undergone a series of steps to ensure maturity and stability. The cooling compost is aged, which allows the decomposition process to slow down and the finished compost to stabilize.\n\nThe end products you purchase may be entirely compost, or a combination of compost blended with uncomposted additives (such as peat, bark, minerals, or soil).\n\n\n\n## Applications for Compost\n\n## Planting New Garden Beds or Lawns\n\nSpread a 2-4 inch layer of compost and mix into the upper 6-12 inches of existing soil: use more in sandy soils, and less in heavy clay. Reapply ½-1 inch annually on garden beds.\n\n## Mulch (surface applications on landscape beds)\n\nSpread a 1-2 inch layer of coarse, woody compost. 
To allow proper airflow, it is best not to pile mulch around the stems of trees and shrubs. Pull mulch 1-2 inches away from stems.\n\n## Top Dressing for Lawns\n\nSpread a ¼ to ½ inch layer of fine screened compost, and rake it into the lawn. For best results, plug-aerate the lawn before top-dressing. Overseeding at the same time will thicken thin patches in lawns.\n\n## Blended (Manufactured) Topsoils\n\nGood quality 'topsoil' products usually include 10-40% compost by volume, mixed with a sandy loam soil that allows good drainage. These compost-soil blends help establish healthy lawns and gardens.\n\n## When to Use Compost?\n\n - · Any time you're preparing soil for planting\n - · Mulching beds and gardens in spring, summer, or fall\n - · Top-dressing lawns in spring or fall.\n\n", - "page_start": 6, - "page_end": 6, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## Building Rich and Healthy Soil With Compost\n\nTo grow healthy plants you need healthy soil.\n\n## Healthy Soil:\n\n - · Is teeming with life! Healthy soil is a miniature ecosystem. A teaspoon of healthy soil will have upwards of four billion tiny organisms which recycle nutrients, suppress disease, and discourage pests.\n - · Retains moisture but allows drainage. Healthy soil has structure that allows water to drain through, retains moisture, and promotes strong root growth.\n - · Is full of organic nutrients. Plants depend on the microorganisms found in healthy organic-rich soil to provide nutrients to their roots, and help them thrive.\n\nA healthy garden and landscape is naturally resistant to pests, drought, weeds, and diseases. Maintaining healthy soil may allow you to reduce use of chemical fertilizers and pesticides.\n\nSoil is a planting medium. Compost is a soil amendment. Do not place plants directly into 100% compost. Ask your supplier or see next page for mixes for different uses.\n\n## Washington State Encourages the Use of Compost, to Protect Our Water Quality\n\nThe Washington State Department of Ecology recommends that soils on construction sites be restored with compost before planting, and also encourages the use of compost for construction site erosion control, to reduce stormwater runoff and help keep our rivers, lakes, and Puget Sound clean. Learn more at www.SoilsforSalmon.org or www.BuildingSoil.org.\n\n## Selecting Quality Compost\n\nCompost is available in many product types and blends that may be used for different gardening applications. The type of feedstock, the composting process, and any supplementary additives determine the end product.\n\nMany facilities offer a variety of blends based on compost, such as garden mix, potting soil, planting mix, mulches, turf top-dressing and soil blends.\n\n## What to Look for in Compost\n\nFor most compost applications you will want a finished product that has matured and stabilized. Look for material\n\n - · with a dark, crumbly texture\n - · with a mild odor\n\nFor most compost applications you will not want compost that is extremely dry or wet, or extremely hot. (Note that it is okay for compost to be warm and to give off some steam and mild odor.)\n\n## Quality Testing at Composting Facilities\n\nFeel free to ask your compost provider if they have a quality control program, and ask for test results. Compost facilities in Washington are permitted by the Department of Ecology and must meet standards for both the composting process and contaminants, ensuring a quality product. Some facilities also participate in the 'Seal of Testing Assurance' (STA) testing program. 
See 'Resources' on page 11 to learn more.\n\n## Remember:\n\nYour compost provider can help you pick the best compost mix for your needs.\n\n", - "page_start": 5, - "page_end": 5, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## Compost: A Natural Cycle\n\nComposting is a natural process in which microorganisms and macro-organisms break down organic material (leaves, twigs, grass, etc.) into a dark crumbly soil amendment. Modern compost facilities use the same natural biological composting process.\n\n\n\nTheir controlled-temperature process works faster, breaks down pesticide residues, and also kills weed seeds and plant diseases.\n\n\n\nCompost improves soil structure and plant growth by\n\n - · Replenishing soil organic matter, and storing nutrients in plant-available forms\n - · Supporting beneficial soil life\n - · Reducing erosion and water run-off\n - · Loosening clay soils for better root development (increasing soil pore space)\n - · Retaining moisture in sandy soils so plants need less watering.\n\n\n\n## Ask Your Compost Supplier\n\nWhether you're buying direct from the composting facility, or from a local vendor, here are some good questions to ask:\n\n - · What ingredients go into your compost?\n - · What compost products or blends do you sell?\n - · Are there quality control or testing results available for these products? (These may be on the manufacturer's website.)\n - · Which product is best for my intended use?\n - · What application rate do you recommend?\n - · How much do I need for my area? (Or see pages 4-6.)\n\n## Comparing Landscape Products\n\nA variety of soil and landscape products are sold. Here's a comparison:\n\nCompost is stable, decomposed organic matter, excellent for improving soil structure, fertility, moisture holding capacity, and plant growth.\n\nMulch is any material applied to the soil surface. Woody mulches (high in carbon, low in nitrogen) like wood chips, bark and woody composts are great for woody plants. 
Annual plants should be mulched with nutrient-balanced mulches like compost, grass clippings, or leaves.\n\nPeat Moss is partially decayed sphagnum moss from peat bogs. It provides soil porosity, but not the nutrients or biological diversity for healthy soil that compost provides.\n\nFertilizers are concentrated sources of plant nutrients, used in small amounts to supplement natural soil fertility.\n\nTopsoil that is sold is usually not native topsoil. Quality manufactured topsoils are a blend of native sandy sub-soils with composted organic matter to support soil life.\n\n", - "page_start": 3, - "page_end": 3, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## Resources\n\n## Compost Organizations\n\n## Washington Organic Recycling Council\n\nFind a compost producer in your area www.compostwashington.org\n\n## US Composting Council\n\nSeal of Testing Assurance (STA) program www.compostingcouncil.org/programs/sta/\n\n## Restoring the Soil to Protect our Waterways\n\nwww.soilsforsalmon.org\n\nCompost amendment and erosion control during construction: information for builders www.buildingsoil.org\n\n## Natural Lawn & Garden Care, Soils, and Home Composting\n\nCity of Seattle\n\nwww.seattle.gov/util/services/yard\n\nKing County\n\nwww.kingcounty.gov/soils\n\nWashington State University\n\nwww.puyallup.wsu.edu/soilmgmt/\n\n\n\n\n\n## The Beauty of Your Lawn and Garden Blossoms from the Soil\n\nThank you for your interest in compost.\n\nCompost is a versatile product with many benefits. It enhances soil quality, helps save water, and supports your community's efforts to recycle organic debris. All this helps to conserve our natural resources and reduces the amount of material sent to the landfill.\n\nCompost-amended soil also helps break down pollutants and absorb stormwater runoff. By making nutrients slowly available to plants and enhancing plant health, compost can reduce the need for chemical fertilizers and pesticides. 
All these benefits help protect our lakes, rivers, and marine waters from pollution and excessive runoff.\n\nCompost is a natural amendment for your lawn or garden, and can be used regularly to enrich your soil. This guide is designed to help you get the most from the compost that you buy.", - "page_start": 2, - "page_end": 2, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## How Much Compost to Use\n\n - l Estimate the planting area (Math Hint: Square feet = length x width)\n - l Decide upon the appropriate application depth of the compost (page 4)\n - l Use the charts below to estimate your compost needs. (Abbreviations: ft = foot; yd = yard; sq = square; cu = cubic.)\n - l Conversions: 9 square feet = 1 square yard; 27 cubic feet = 1 cubic yard.\n\n## Question: I have a plot about this big, how much compost do I buy?\n\n| Plot Size | # of Sq Feet | 1/2' Deep - Mulching or Top-dressing | 2' Deep - Amending new lawns or gardens |\n|----------------|----------------|-----------------------------------------|---------------------------------------------|\n| 5' x 10' plot | 50 sq ft | 2.08 cu ft of compost | 8.33 cu ft of compost (0.31 cu yd) |\n| 10' x 10' plot | 100 sq ft | 4.17 cu ft of compost | 16.66 cu ft of compost (0.62 cu yd) |\n| 20 x 50' plot | 1000 sq ft | 41.7 cu ft of compost | 166.7 cu ft of compost (6.2 cu yd) |\n| 1 acre | 43,600 sq ft | 1,815 cu ft of compost (67 cu yd) | 7,257 cu ft of compost (268 cu yd) |\n\n## Question: If I buy this much compost, how many square feet will it cover?\n\n\n\n| Compost Quantity | 1/2' Deep - Mulching or Top-dressing | 2' Deep - Amending new lawns or gardens |\n|--------------------------------------------------|-----------------------------------------|-------------------------------------------------|\n| 1 cu ft bag of compost 2.2 cu ft bag of compost | 24 sq foot area | 6 sq foot area 9 sq foot area 13 sq foot area |\n| 1.5 cu ft bag of compost | 36 sq foot area | |\n| | 53 sq foot area | |\n| 2.5 cu ft 
bag of compost | 60 sq foot area | 15 sq foot area |\n| 1 cubic yard of compost | 648 sq foot area | 162 sq foot area |\n\nCompost Works! Soil blending trials conducted in 2008 by the Washington Organic Recycling Council, with funding from the Washington Department of Ecology,\n\n\n\ndemonstrated that compost improves soil structure (lowers bulk density), nutrient availability (increases cation exchange capacity), moisture holding capacity, and supplies both nutrients that plants need and organic matter that supports soil life. See the 2008 Soil Blending Trial report at\n\n", - "page_start": 7, - "page_end": 7, - "source_file": "CompostGuide.pdf" - }, - { - "text": "\n\n\n\nCompost adds organic material and nutrients to the soil, increases water-holding capacity and biological activity, and improves plant growth and health.", - "page_start": 0, - "page_end": 0, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## THE PURPOSE OF A RESIGNATION LETTER:\n\nThe purpose of a resignation letter is to give your employer official no -tice that you will be leaving the organisation. However, it is usually appropriate to inform your manager of your intention to resign in person, and then to follow up your conversation with the formal resignation letter.\n\nWhat to include:\n\nYour resignation letter should be short and to the point. 
Keep it positive and professional - this is not the place to voice your dissatisfaction with your job.\n\nIn your letter, you should make sure that you include the following:\n\n## 1.\n\n## A clear statement of your intention to resign.\n\nExample:\n\n'Please accept this letter as formal notice of my resignation from my post as Assistant IT Manager at XYZ.'\n\n## 2.\n\nReference to your notice period (where applicable), as well as your last working day with the organisation.\n\nExample:\n\n'My last working day will be in two weeks' time, on 31 August 2015.'\n\n## 3.\n\n## Your reason for leaving.\n\nYou don't need to elaborate on this if you don't want to. Remember to keep it positive, and not to make any rude, offensive or insulting remarks about the organisation or your co- workers, no matter how tempting it might be.", - "page_start": 48, - "page_end": 48, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "If you have any questions about your course work, you are always welcome to approach your tutors for help. Just remember that your tutors cannot guess what your needs are: you will have to make contact with your tutors and communicate your questions clearly if you want to get the assistance that you need.\n\nWhen it comes to contacting your tutors, your best option will usually be to send an e-mail.\n\nHere are some important tips to keep in mind when requesting help from a tutor via e-mail:\n\n\n\n## Use a relevant and descriptive subject line.\n\nThis way, your tutor will immediately know what your e-mail is about, and he or she will be more likely to open it. A good subject line might read as follows: 'Enquiry regarding Assignment 1 for Safety Management 101'\n\n## Be polite, and use an appropriate form of address.\n\nAlways start your e-mail with an appropriate form of address, such as 'Hello Mr/Ms …' and sign it off with your full name and student number. 
This will help to give your message a friendly, yet professional tone.\n\n## Be clear and concise.\n\nMake sure that your tutor will be able to understand what it is that you are asking.", - "page_start": 33, - "page_end": 33, - "source_file": "basic-english-language-skills.PDF" - } - ] - }, - { - "references": { - "source_file": "CompostGuide.pdf", - "query": "What are fertilizers ?", - "target_page": 4, - "target_passage": " Fertilizers are concentrated sources of plant nutrients, used in small amounts to supplement natural soil fertility. ", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## Resources\n\n## Compost Organizations\n\n## Washington Organic Recycling Council\n\nFind a compost producer in your area www.compostwashington.org\n\n## US Composting Council\n\nSeal of Testing Assurance (STA) program www.compostingcouncil.org/programs/sta/\n\n## Restoring the Soil to Protect our Waterways\n\nwww.soilsforsalmon.org\n\nCompost amendment and erosion control during construction: information for builders www.buildingsoil.org\n\n## Natural Lawn & Garden Care, Soils, and Home Composting\n\nCity of Seattle\n\nwww.seattle.gov/util/services/yard\n\nKing County\n\nwww.kingcounty.gov/soils\n\nWashington State University\n\nwww.puyallup.wsu.edu/soilmgmt/\n\n\n\n\n\n## The Beauty of Your Lawn and Garden Blossoms from the Soil\n\nThank you for your interest in compost.\n\nCompost is a versatile product with many benefits. It enhances soil quality, helps save water, and supports your community's efforts to recycle organic debris. All this helps to conserve our natural resources and reduces the amount of material sent to the landfill.\n\nCompost-amended soil also helps break down pollutants and absorb stormwater runoff. By making nutrients slowly available to plants and enhancing plant health, compost can reduce the need for chemical fertilizers and pesticides. 
All these benefits help protect our lakes, rivers, and marine waters from pollution and excessive runoff.\n\nCompost is a natural amendment for your lawn or garden, and can be used regularly to enrich your soil. This guide is designed to help you get the most from the compost that you buy.", - "page_start": 2, - "page_end": 2, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## Compost: A Natural Cycle\n\nComposting is a natural process in which microorganisms and macro-organisms break down organic material (leaves, twigs, grass, etc.) into a dark crumbly soil amendment. Modern compost facilities use the same natural biological composting process.\n\n\n\nTheir controlled-temperature process works faster, breaks down pesticide residues, and also kills weed seeds and plant diseases.\n\n\n\nCompost improves soil structure and plant growth by\n\n - · Replenishing soil organic matter, and storing nutrients in plant-available forms\n - · Supporting beneficial soil life\n - · Reducing erosion and water run-off\n - · Loosening clay soils for better root development (increasing soil pore space)\n - · Retaining moisture in sandy soils so plants need less watering.\n\n\n\n## Ask Your Compost Supplier\n\nWhether you're buying direct from the composting facility, or from a local vendor, here are some good questions to ask:\n\n - · What ingredients go into your compost?\n - · What compost products or blends do you sell?\n - · Are there quality control or testing results available for these products? (These may be on the manufacturer's website.)\n - · Which product is best for my intended use?\n - · What application rate do you recommend?\n - · How much do I need for my area? (Or see pages 4-6.)\n\n## Comparing Landscape Products\n\nA variety of soil and landscape products are sold. 
Here's a comparison:\n\nCompost is stable, decomposed organic matter, excellent for improving soil structure, fertility, moisture holding capacity, and plant growth.\n\nMulch is any material applied to the soil surface. Woody mulches (high in carbon, low in nitrogen) like wood chips, bark and woody composts are great for woody plants. Annual plants should be mulched with nutrient-balanced mulches like compost, grass clippings, or leaves.\n\nPeat Moss is partially decayed sphagnum moss from peat bogs. It provides soil porosity, but not the nutrients or biological diversity for healthy soil that compost provides.\n\nFertilizers are concentrated sources of plant nutrients, used in small amounts to supplement natural soil fertility.\n\nTopsoil that is sold is usually not native topsoil. Quality manufactured topsoils are a blend of native sandy sub-soils with composted organic matter to support soil life.\n\n", - "page_start": 3, - "page_end": 3, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## Building Rich and Healthy Soil With Compost\n\nTo grow healthy plants you need healthy soil.\n\n## Healthy Soil:\n\n - l Is teeming with life! Healthy soil is a miniature ecosystem. A teaspoon of healthy soil will have upwards of four billion tiny organisms which recycle nutrients, suppress disease, and discourage pests.\n - l Retains moisture but allows drainage. Healthy soil has structure that allows water to drain through, retains moisture, and promotes strong root growth.\n - l Is full of organic nutrients. Plants depend on the microorganisms found in healthy organic-rich soil to provide nutrients to their roots, and help them thrive.\n\nA healthy garden and landscape is naturally resistant to pests, drought, weeds, and diseases. Maintaining healthy soil may allow you to reduce use of chemical fertilizers and pesticides.\n\nSoil is a planting medium. Compost is a soil amendment. Do not place plants directly into 100% compost. 
Ask your supplier or see next page for mixes for different uses.\n\n## Washington State Encourages the Use of Compost, to Protect Our Water Quality\n\nThe Washington State Department of Ecology recommends that soils on construction sites be restored with compost before planting, and also encourages the use of compost for construction site erosion control, to reduce stormwater runoff and help keep our rivers, lakes, and Puget Sound clean. Learn more at www.SoilsforSalmon.org or www.BuildingSoil.org.\n\n\n\n## Selecting Quality Compost\n\nCompost is available in many product types and blends that may be used for different gardening applications. The type of feedstock, the composting process, and any supplementary additives determine the end product.\n\nMany facilities offer a variety of blends based on compost, such as garden mix, potting soil, planting mix, mulches, turf top-dressing and soil blends.\n\n## What to Look for in Compost\n\nFor most compost applications you will want a finished product that has matured and stabilized. Look for material\n\n - l with a dark, crumbly texture\n - l with a mild odor\n\n\n\nFor most compost applications you will not want compost that is extremely dry or wet, or extremely hot. (Note that it is okay for compost to be warm and to give off some steam and mild odor.)\n\n## Quality Testing at Composting Facilities\n\nFeel free to ask your compost provider if they have a quality control program, and ask for test results. Compost facilities in Washington are permitted by the Department of Ecology and must meet standards for both the composting process and contaminants, ensuring a quality product. Some facilities also participate in the 'Seal of Testing Assurance' (STA) testing program. 
See 'Resources' on page 11 to learn more.\n\n## Remember:\n\nYour compost provider can help you pick the best compost mix for your needs.\n\n", - "page_start": 5, - "page_end": 5, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## Compost Questions and Answers\n\n## What is compost?\n\nCompost is a natural humus-like soil amendment that results from the controlled aerobic (with oxygen) decomposition of organic materials. Compost is not soil - it should be mixed with soil. It is not fertilizer, although it contains many slowly released nutrients.\n\n## What materials ('feedstocks') are used to make compost?\n\nCompost facilities in Washington recycle a variety of organic materials, including yard debris, food scraps, manure, biosolids, forest residuals like sawdust and bark, construction wood, and agricultural residues. All of these materials can be used to produce high quality compost. Your supplier can tell you which materials they compost.\n\n## How do I know I'm getting safe, quality compost?\n\nFortunately, in Washington we have strict permitting and production standards for compost facilities, that include both time and temperature requirements and contaminant limits.\n\n## What about weed seeds, plant diseases or pesticide residues?\n\nThe controlled time, aeration, and temperature process required in Washington has been shown to kill weed seeds and plant diseases. That same process breaks down most pesticide residues. 
There are a few agricultural pesticides that are not easily broken down, and permitted Washington compost manufacturers carefully watch their feedstocks to keep those materials out of the composting process.\n\n\n\n\n\n\n\n## Compost Beginnings\n\nThe yard debris or food scraps* that you place into your home compost bin, take to a drop-off site, or set out for curbside collection could become the compost that you later use on your garden, lawn, and flowerbeds.\n\nIt is essential to place only quality organic material into the composting process. Here are some tips:\n\n - l The products you use or spray in your yard can end up in the compost process. Carefully read the labels of pesticide and herbicide products you use. (See page 9.)\n - l Please keep yard debris free of :\n - x Garbage\n - x Plastic of any sort\n - - Plastic plant pots\n - - Plastic plant tabs\n - - Plastic bags (if you want to bag your yard debris, use paper garden bags - available at most garden centers)\n - x Rock, brick, or masonry\n - x Glass or metal\n - x Pet waste.\n - * Many localities now collect food scraps and food-soiled paper along with yard debris for composting. Call your local collection service to find out what is collected in your area.\n\n", - "page_start": 4, - "page_end": 4, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## The Composting Process\n\nEven though there are a variety of composting methods, most composting follows a similar process:\n\n## 1. Grinding Organic Materials:\n\nDepending on the facility, the feedstock (material) available, and the desired compost product, different combinations of materials are added together and ground into small pieces:\n\n - · Nitrogen-rich materials (such as grass, fresh plant cuttings, biosolids, and manures)\n - · Carbon-rich materials (such as dried leaves, woody materials, and straw).\n\n## 2. 
Heating Up:\n\nThe material is placed into piles where it begins to heat up from the biological activity of the compost microbes. Typically, compost temperatures are required to reach at least 131 degrees F in a specified time period in order to destroy weed seeds and pathogens. The compost is turned or aerated, allowing the composting microbes to breathe. After a period of time, the nitrogen-rich material is depleted, the biological process slows, and the hot compost begins to cool.\n\n## 3. Finishing:\n\nTypically 'finished' compost has undergone a series of steps to ensure maturity and stability. The cooling compost is aged, which allows the decomposition process to slow down and the finished compost to stabilize.\n\nThe end products you purchase may be entirely compost, or a combination of compost blended with uncomposted additives (such as peat, bark, minerals, or soil).\n\n\n\n## Applications for Compost\n\n## Planting New Garden Beds or Lawns\n\nSpread a 2-4 inch layer of compost and mix into the upper 6-12 inches of existing soil: use more in sandy soils, and less in heavy clay. Reapply ½-1 inch annually on garden beds.\n\n## Mulch (surface applications on landscape beds)\n\nSpread a 1-2 inch layer of coarse, woody compost. To allow proper airflow, it is best not to pile mulch around the stems of trees and shrubs. Pull mulch 1-2 inches away from stems.\n\n## Top Dressing for Lawns\n\nSpread a ¼ to ½ inch layer of fine screened compost, and rake it into the lawn. For best results, plug-aerate the lawn before top-dressing. Overseeding at the same time will thicken thin patches in lawns.\n\n## Blended (Manufactured) Topsoils\n\nGood quality 'topsoil' products usually include 10-40% compost by volume, mixed with a sandy loam soil that allows good drainage. 
These compost-soil blends help establish healthy lawns and gardens.\n\n## When to Use Compost?\n\n - · Any time you're preparing soil for planting\n - · Mulching beds and gardens in spring, summer, or fall\n - · Top-dressing lawns in spring or fall.\n\n", - "page_start": 6, - "page_end": 6, - "source_file": "CompostGuide.pdf" - }, - { - "text": "A project of the Washington Organic Recycling Council, with support from the Washington State Department of Ecology's Public Participation Grant program.\n\nThis product was partly funded through a grant from the Washington Department of Ecology. While these materials were reviewed for grant consistency, this does not necessarily constitute endorsement by the department.\n\nSpecial thanks: the original version of this brochure in 2003 was created by the Washington County, Oregon Solid Waste and Recycling Program in cooperation with the Washington Organic Recycling Council and the Composting Council of Oregon.\n\n\n\nwww.compostwashington.org\n\n\n\nwww.soilsforsalmon.org\n\n\n\noriginal artwork provided by:\n\n\n\n## Tips to Remember:\n\n- · Don't put plants into 100% compost. Mix compost thoroughly into existing soil before planting.\n- · When transplanting, it's better to amend the whole bed, not just planting holes, to promote root growth.\n- · Ask your compost supplier which compost product is best for your intended use.\n- · Use compost at the recommended application rate.\n- · To maintain healthy soil, reapply compost or mulch every 1-2 years.\n- · Many composts are rich in plant nutrients, so you may be able to reduce fertilizer use after applying compost.\n- · Compost can also reduce your lawn and garden's summer irrigation needs.\n- · Compost-amended soil and mulching slow run off, reduce erosion, and break down pollutants. 
When you use compost, you're helping to protect our precious streams, rivers, lakes, and marine waters.", - "page_start": 1, - "page_end": 1, - "source_file": "CompostGuide.pdf" - }, - { - "text": "\n\n\n\nCompost adds organic material and nutrients to the soil, increases water-holding capacity and biological activity, and improves plant growth and health.", - "page_start": 0, - "page_end": 0, - "source_file": "CompostGuide.pdf" - }, - { - "text": "green spaces has increased 52 , green spaces often lose out in the competition for land as the share of the population living in urban areas continues to rise.\n\nThis strategy aims to reverse these trends and stop the loss of green urban ecosystems. The promotion of healthy ecosystems, green infrastructure and nature-based solutions should be systematically integrated into urban planning, including in public spaces, infrastructure, and the design of buildings and their surroundings.\n\nTo bring nature back to cities and reward community action, the Commission calls on European cities of at least 20,000 inhabitants to develop ambitious Urban Greening Plans by the end of 2021. These should include measures to create biodiverse and accessible urban forests, parks and gardens; urban farms; green roofs and walls; treelined streets; urban meadows; and urban hedges. They should also help improve connections between green spaces, eliminate the use of pesticides, limit excessive mowing of urban green spaces and other biodiversity harmful practices. Such plans could mobilise policy, regulatory and financial tools.\n\nTo facilitate this work, the Commission will in 2021 set up an EU Urban Greening Platform , under a new 'Green City Accord' 53 with cities and mayors. This will be done in close coordination with the European Covenant of Mayors. 
The Urban Greening Plans will have a central role in choosing the European Green Capital 2023 and European Green Leaf 2022.\n\nThe Commission will support Member States and local and regional authorities through technical guidance and help to mobilise funding and capacity building. It will also reflect these objectives in the European Climate Pact .\n\n## 2.2.9. Reducing pollution\n\nPollution is a key driver of biodiversity loss and has a harmful impact on our health and environment. While the EU has a solid legal framework in place to reduce pollution, greater efforts are still required. Biodiversity is suffering from the release of nutrients, chemical pesticides, pharmaceuticals, hazardous chemicals, urban and industrial wastewater, and other waste including litter and plastics. All of these pressures must be reduced.\n\nAs part of the Commission's Zero Pollution Ambition for a toxic-free environment, a new EU Chemicals Strategy for Sustainability will be put forward along with a Zero Pollution Action Plan for Air, Water and Soil .\n\nThe Commission will also promote the goal of zero pollution from nitrogen and phosphorus flows from fertilisers through reducing nutrient losses by at least 50%, while ensuring that there is no deterioration in soil fertility. This will result in the reduction of use of fertilisers by at least 20% . This will be achieved by implementing and enforcing the relevant environmental and climate legislation in full, identifying with Member States the nutrient load reductions needed to achieve these goals, applying balanced fertilisation and sustainable nutrient management, and by managing nitrogen and phosphorus better throughout their lifecycle. 
To this end, the Commission will work with Member States to", - "page_start": 13, - "page_end": 13, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "progress towards the target will be under constant review, and adjustment if needed, to mitigate against undue impact on biodiversity, food security and farmers' competitiveness.\n\nAgroecology can provide healthy food while maintaining productivity, increase soil fertility and biodiversity, and reduce the footprint of food production. Organic farming in particular holds great potential for farmers and consumers alike. The sector creates jobs and attracts young farmers. Organic farming also provides 10-20 % more jobs per hectare than conventional farms, and creates added value for agricultural products 32 . To make the most of this potential, at least 25% of the EU's agricultural land must be organically farmed by 2030 . In addition to CAP measures, the Commission will put forward an Action Plan on organic farming, helping Member States stimulate both supply and demand of organic products. It will also ensure consumer's trust through promotion campaigns and green public procurement. In the implementation of the EU-wide agroecological targets set out in this strategy and in the Farm to Fork Strategy, the different starting points and differences in progress already made in Member States will be taken into account.\n\nThe uptake of agroforestry support measures under rural development should be increased as it has great potential to provide multiple benefits for biodiversity, people and climate.\n\nThe decline of genetic diversity must also be reversed, including by facilitating the use of traditional varieties of crops and breeds. This would also bring health benefits through more varied and nutritious diets. The Commission is considering the revision of marketing rules for traditional crop varieties in order to contribute to their conservation and sustainable use. 
The Commission will also take measures to facilitate the registration of seed varieties, including for organic farming, and to ensure easier market access for traditional and locally adapted varieties.\n\n## 2.2.3. Addressing land take and restoring soil ecosystems\n\nSoil is one of the most complex of all ecosystems. It is a habitat in its own right, and home to an incredible diversity of organisms that regulate and control key ecosystem services such as soil fertility, nutrient cycling and climate regulation. Soil is a hugely important non-renewable resource , vital for human and economic health, as well as the production of food and new medications.\n\nIn the EU, the degradation of soil is having considerable environmental and economic consequences. Poor land management, such as deforestation, overgrazing, unsustainable farming and forestry practices, construction activities and land sealing are among the main causes of this situation 33 . Despite recent reductions in the pace of soil sealing, fertile soils continue to be lost to land take and urban sprawl 34 . When compounded by", - "page_start": 8, - "page_end": 8, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "- (d) 'specified activities' means-\n - (i) crop maintenance,\n - (ii) crop harvesting,\n - (iii) tunnel construction and dismantling,\n - (iv) irrigation installation and maintaining,\n - (v) crop husbandry,\n - (vi) packing and processing of crops on employer's premises,\n - (vii) preparing and dismantling growing areas and media,\n - (viii) general primary production work in edible horticulture,\n - (ix) activities relating to supervising teams of horticulture workers.\n - 44. 
-(1) A domestic elite sportsperson, an international elite sportsperson, a domestic ancillary sportsperson or an international ancillary sportsperson.", - "page_start": 46, - "page_end": 46, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "CompostGuide.pdf", - "query": "Explain to me what is peat moss ?", - "target_page": 4, - "target_passage": "Peat Moss is partially decayed sphagnum moss from peat bogs. It provides soil porosity, but not the nutrients or biological diversity for healthy soil that compost provides.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Compost: A Natural Cycle\n\nComposting is a natural process in which microorganisms and macro-organisms break down organic material (leaves, twigs, grass, etc.) into a dark crumbly soil amendment. Modern compost facilities use the same natural biological composting process.\n\n\n\nTheir controlled-temperature process works faster, breaks down pesticide residues, and also kills weed seeds and plant diseases.\n\n\n\nCompost improves soil structure and plant growth by\n\n - · Replenishing soil organic matter, and storing nutrients in plant-available forms\n - · Supporting beneficial soil life\n - · Reducing erosion and water run-off\n - · Loosening clay soils for better root development (increasing soil pore space)\n - · Retaining moisture in sandy soils so plants need less watering.\n\n\n\n## Ask Your Compost Supplier\n\nWhether you're buying direct from the composting facility, or from a local vendor, here are some good questions to ask:\n\n - · What ingredients go into your compost?\n - · What compost products or blends do you sell?\n - · Are there quality control or testing results available for these products? (These may be on the manufacturer's website.)\n - · Which product is best for my intended use?\n - · What application rate do you recommend?\n - · How much do I need for my area? 
(Or see pages 4-6.)\n\n## Comparing Landscape Products\n\nA variety of soil and landscape products are sold. Here's a comparison:\n\nCompost is stable, decomposed organic matter, excellent for improving soil structure, fertility, moisture holding capacity, and plant growth.\n\nMulch is any material applied to the soil surface. Woody mulches (high in carbon, low in nitrogen) like wood chips, bark and woody composts are great for woody plants. Annual plants should be mulched with nutrient-balanced mulches like compost, grass clippings, or leaves.\n\nPeat Moss is partially decayed sphagnum moss from peat bogs. It provides soil porosity, but not the nutrients or biological diversity for healthy soil that compost provides.\n\nFertilizers are concentrated sources of plant nutrients, used in small amounts to supplement natural soil fertility.\n\nTopsoil that is sold is usually not native topsoil. Quality manufactured topsoils are a blend of native sandy sub-soils with composted organic matter to support soil life.\n\n", - "page_start": 3, - "page_end": 3, - "source_file": "CompostGuide.pdf" - }, - { - "text": "## YOU WOOD LOVE ME\n\nOf course, you wood love the new me even more, since I thrive on change and selfimprovement. Seek a seeker of all things bold and beautiful - someone who desires elegant solutions and tailored style, and who appreciates handcrafted expressions of commitment. I'm easy to be around, and I'm certain I could fit into both your life and your office. Let's create a new way of being - together.\n\nGUNLOCKE", - "page_start": 6, - "page_end": 6, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "to a certain extent the particle-particle attraction. Normally, the solution is deposited on to a plain silicon substrate that is covered by the native oxide layer only [34]. However, one may locally change the wetting behaviour of the solvent by further oxidising the substrate [38]. 
By adding excess thiol one can also vary the properties of the solvent [40].\n\nTwo different procedures are employed for the deposition of the solution on to the substrate: spincoating or a meniscus technique [61, 62]. The choice is important as it strongly influences the evaporation rate and, as a result, the pattern formation process. When using spin-coating, one finds that directly after deposition, evaporation competes with dewetting until all the solvent has evaporated. The resulting deposits of nanoparticles are imaged by atomic force microscopy (AFM). For spin-coated films, the evaporation rate is high and structuring is normally finished before the spincoater is stopped. Conversely, the solvent evaporation rate is strongly decreased when employing the meniscus technique [61], i.e., by depositing a drop of solution on a Teflon ring that is wetted by the solvent. This allows for a better control of the process and enables the use of contrast-enhanced microscopy to observe the dewetting process in situ [40]. All pattern formation is confined to the region of the receding contact line of toluene, silicon and air. With both techniques one may find mono-modal or bi-modal polygonal networks [34], labyrinthine spinodal structures, or branched patterns (see Fig. 1). The meniscus technique allows for the study of branched structures in a more controlled manner. The work in Ref. [40] indicates that fingering strongly depends on the interaction strength of the particles, i.e., on the chain length of the thiol molecules coating the gold cores. For short chains (C 5 and C 8 ) no formation of branched structures is observed. At similar concentrations, well-developed branched structures are formed for longer chains (C 10 and C 12 ). For even longer chains (C 14 ), however, one again finds less branching. It also depends on the amount of excess thiol in the solvent (for details see Ref. 
[40]).\n\nWhen following the evolution of the branched patterns in situ (see the complementary video material of Ref. [40]), one clearly observes that different processes occur on different lenght scales. First, a macroscopic dewetting front recedes, leaving behind a seemingly dry substrate. The macroscopic front can be transversely unstable resulting in large-scale ( > 100 µ m) strongly anisotropic fingered structures. For fronts that move relatively quickly these macroscopic structures cover all the available substrate. However, when at a later stage the macroscopic front becomes slower, those fingers become scarce and 'macroscopic fingering' finally ceases. At this stage it is possible to appreciate that the seemingly dry region left behind by the front is not at all dry, but covered by an ultrathin 'postcursor' film that is itself unstable. The thickness of this film", - "page_start": 5, - "page_end": 5, - "source_file": "1001.2669.pdf" - }, - { - "text": "Afforestation, reforestation and tree planting to support biodiversity and ecosystem restoration will be promoted through the CAP Strategic Plans, and the Cohesion Policy funds. The new European Urban Greening Platform 38 will also facilitate urban tree planting, including under the LIFE programme.\n\nThe share of forest areas covered by management plans should cover all managed public forests and an increased number of private forests, and biodiversity-friendly practices such as closer-to-nature-forestry should continue and be further developed. To support this, the Commission will develop guidelines on biodiversity-friendly afforestation and reforestation and closer-to-nature-forestry practices. This will be done in parallel with the new EU Forest Strategy.\n\nTo gain a better picture of the health of European forests, the Commission will work with other data providers to further develop the Forest Information System for Europe . 
This will help produce up-to-date assessments of the condition of European forests and link all EU forest-data web-platforms. This will also be presented as part of the EU Forest Strategy.\n\n## 2.2.5. Win-win solutions for energy generation\n\nDecarbonising the energy system is critical for climate neutrality, as well as for the EU's recovery from the COVID-19 crisis and long-term prosperity. More sustainably sourced renewable energy will be essential to fight climate change and biodiversity loss. The EU will prioritise solutions such as ocean energy, offshore wind, which also allows for fish stock regeneration, solar-panel farms that provide biodiversity-friendly soil cover, and sustainable bioenergy.\n\nTo mitigate climate and environmental risks created by the increasing use of certain sources for bioenergy, the revised Renewable Energy Directive 39 includes strengthened sustainability criteria. It also promotes the shift to advanced biofuels based on residues and non-reusable and non-recyclable waste. This approach should continue for all forms of bioenergy. The use of whole trees and food and feed crops for energy production whether produced in the EU or imported - should be minimised.\n\nTo better understand and monitor the potential climate and biodiversity risks, the Commission is assessing the EU and global biomass supply and demand and related sustainability 40 . As part of its increased ambition to protect and restore forest ecosystems, the Commission will publish the results of this work on the use of forest biomass for energy production by the end of 2020. 
This will inform the Commission's policymaking, including the review and revision, where necessary, of the level of ambition of the Renewable Energy Directive, the Emissions Trading Scheme, and the Regulation on land use, land use change and forestry (LULUCF) set for 2021.\n\nIn line with the Renewable Energy Directive, the Commission will also develop operational guidance in 2021 on the new sustainability criteria on forest biomass for", - "page_start": 10, - "page_end": 10, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "## The Composting Process\n\nEven though there are a variety of composting methods, most composting follows a similar process:\n\n## 1. Grinding Organic Materials:\n\nDepending on the facility, the feedstock (material) available, and the desired compost product, different combinations of materials are added together and ground into small pieces:\n\n - · Nitrogen-rich materials (such as grass, fresh plant cuttings, biosolids, and manures)\n - · Carbon-rich materials (such as dried leaves, woody materials, and straw).\n\n## 2. Heating Up:\n\nThe material is placed into piles where it begins to heat up from the biological activity of the compost microbes. Typically, compost temperatures are required to reach at least 131 degrees F in a specified time period in order to destroy weed seeds and pathogens. The compost is turned or aerated, allowing the composting microbes to breathe. After a period of time, the nitrogen-rich material is depleted, the biological process slows, and the hot compost begins to cool.\n\n## 3. Finishing:\n\nTypically 'finished' compost has undergone a series of steps to ensure maturity and stability. 
The cooling compost is aged, which allows the decomposition process to slow down and the finished compost to stabilize.\n\nThe end products you purchase may be entirely compost, or a combination of compost blended with uncomposted additives (such as peat, bark, minerals, or soil).\n\n\n\n## Applications for Compost\n\n## Planting New Garden Beds or Lawns\n\nSpread a 2-4 inch layer of compost and mix into the upper 6-12 inches of existing soil: use more in sandy soils, and less in heavy clay. Reapply ½-1 inch annually on garden beds.\n\n## Mulch (surface applications on landscape beds)\n\nSpread a 1-2 inch layer of coarse, woody compost. To allow proper airflow, it is best not to pile mulch around the stems of trees and shrubs. Pull mulch 1-2 inches away from stems.\n\n## Top Dressing for Lawns\n\nSpread a ¼ to ½ inch layer of fine screened compost, and rake it into the lawn. For best results, plug-aerate the lawn before top-dressing. Overseeding at the same time will thicken thin patches in lawns.\n\n## Blended (Manufactured) Topsoils\n\nGood quality 'topsoil' products usually include 10-40% compost by volume, mixed with a sandy loam soil that allows good drainage. These compost-soil blends help establish healthy lawns and gardens.\n\n## When to Use Compost?\n\n - · Any time you're preparing soil for planting\n - · Mulching beds and gardens in spring, summer, or fall\n - · Top-dressing lawns in spring or fall.\n\n", - "page_start": 6, - "page_end": 6, - "source_file": "CompostGuide.pdf" - }, - { - "text": "In summary, we have demonstrated antiferromagnetic coupling between Fe and (Ga,Mn)As layers in bilayer structures. A markedly different coupling is observed for the bulk of the (Ga,Mn)As layer and for Mn moments in the near-interface region. 
A thickness-dependent exchange bias field is observed to affect the whole of the bulk (Ga,Mn)As layer, which aligns antiparallel to the Fe layer at low fields, and switches to parallel when the external field is large enough to overcome the bias field and the magnetocrystalline anisotropy fields. In contrast, the interfacial Mn moments remain aligned antiparallel to the Fe layer even at 20 kOe, the largest field studied, and are polarized at temperatures well above the T C of the bulk (Ga,Mn)As layer. The latter observation confirms the recently reported result of Ref. 7, in which the Fe/(Ga,Mn)As bilayers were produced by a different method but showed qualitatively similar behavior of the interfacial moments. Our results shed new light on the magnetic coupling in Fe/(Ga,Mn)As hybrid layers which are of potential interest for room temperature spintronics, and also offer a means of controlling the spin orientation in a FM semiconductor.\n\nWe acknowledge support from EU grants SemiSpinNet-215368 and NAMASTE-214499, and STFC studentship grant CMPC07100. The Advanced Light Source is supported by the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. We thank Leigh Shelford for help during the Diamond beamtime.\n\n- Polesya, H. Ebert, U. Wurstbauer, M. Hochstrasser, G. Rossi, G. Woltersdorf, W. Wegscheider, and C. H. Back, Phys. Rev. Lett. 101 , 267201 (2008).\n- 8 R. P. Campion, K. W. Edmonds, L. X. Zhao, K. Y. Wang, C. T. Foxon, B. L. Gallagher, and C. R. Staddon, J. Crystal Growth 247 , 42 (2003).\n- 9 F. Maccherozzi, G. Panaccione, G. Rossi, M. Hochstrasser, M. Sperl, M. Reinwald, G. Woltersdorf, W. Wegscheider, and C. H. Back, Phys. Rev. B 74 , 104421 (2006).\n- 10 Ch. Binek, S. Polisetty, X. He and A. Berger, Phys. Rev. Lett. 96 , 067201 (2006).\n- 11 C. Won, Y.Z. Wu, E. Arenholz, J. Choi, J. Wu, and Z. Q. Qiu, Phys. Rev. Lett. 99 , 077203 (2007).\n- 12 J. Nogues and I. K. Schuller, J. Magn. Magn. Mater. 192 , 203 (1999).\n- 13 K. F. Eid, M. B. 
Stone, K. C. Ku, O. Maksimov, P. Schiffer, N. Samarth, T. C. Shih and C. J. Palmstrom, Appl. Phys. Lett. 85 , 1556 (2004).\n- 14 B. T. Thole, P. Carra, F. Sette, and G. van der Laan, Phys. Rev. Lett. 68 , 1943 (1992); P. Carra, B. T. Thole, M. Altarelli, and X. Wang, Phys. Rev. Lett. 70 , 694 (1993).\n- 15 T. Jungwirth, J. Masek, K. Y. Wang, K. W. Edmonds,", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2449.pdf" - }, - { - "text": "λ\n\nFIG. 17: Conductivities and ∆ W for a fixed λω sf . Top ω sf = 26 meV , λ = 1, ω o = 40 meV , Z o = 0 . 77 Bottom ω sf = 2 . 6 meV , λ = 10, ω o = 13 . 5 meV , Z o = 1 . 22. The zero crossing for ∆ W is not affected by a change in λ because it is determined only by λω sf . We set ∆ = 30 meV .\n\n\n\nFIG. 18: The behavior of Kubo sums in the CB model. Note that the spectral weight in the NS is always larger than in the SCS. We set ω sf = 26 meV , λ = 1, and ∆ = 30 meV .\n\n\n\nWe performed the same calculations of conductivities and optical integrals as in the previous three cases. The results are summarized in Figs. 17 - 22. Fig 17 shows conductivities in the NS and the SCS for two couplings λ = 1 and λ = 10 (keeping λω sf constant). Other parameters Z o and ω o are calculated according to the discussion after Eq 21. for ω sf = 26 meV , λ = 1, we find ω o = 40 meV , Z o = 0 . 77. And for ω sf = 2 . 6 meV , λ = 10, we find ω o = 13 . 5 meV , Z o = 1 . 22. Note that the conductivity in the SCS starts at 2∆ + ω o (i.e. the resonance energy\n\nFIG. 19: The evolution of the optical integrals in the NS and the SCS in the CB model. Note that about ∼ 75% of the spectral weight is recovered up to 1 eV . We set ω sf = 26 meV , λ = 1, and ∆ = 30 meV .\n\n\n\n1\n\nFIG. 20: ∆ W (in meV) for λ = 1(top) and λ = 10(bottom). We used ω sf = 26 meV/λ and ∆ = 30 meV . The zero crossing is not affected because we keep λω sf constant. 
The notable difference is the widening of the dip at a larger λ .\n\n", - "page_start": 11, - "page_end": 11, - "source_file": "1001.0764.pdf" - }, - { - "text": "climate change, the effects of erosion and losses of soil organic carbon are becoming increasingly apparent. Desertification is also a growing threat in the EU 35 .\n\nIt is therefore essential to step up efforts to protect soil fertility, reduce soil erosion and increase soil organic matter . This should be done by adopting sustainable soil management practices, including as part of the CAP. Significant progress is also needed on identifying contaminated soil sites, restoring degraded soils, defining the conditions for their good ecological status, introducing restoration objectives, and improving the monitoring of soil quality.\n\nTo address these issues in a comprehensive way and help to fulfil EU and international commitments on land-degradation neutrality, the Commission will update the EU Soil Thematic Strategy 36 in 2021. The Zero Pollution Action Plan for Air, Water and Soil that the Commission will adopt in 2021 will also look at these issues. Soil sealing and rehabilitation of contaminated brownfields will be addressed in the upcoming Strategy for a Sustainable Built Environment. A mission in the area of soil health and food under Horizon Europe 37 will aim to develop solutions for restoring soil health and functions.\n\n## 2.2.4. Increasing the quantity of forests and improving their health and resilience\n\nForests are hugely important for biodiversity, climate and water regulation, the provision of food, medicines and materials, carbon sequestration and storage, soil stabilisation and the purification of air and water. They are also a natural home for recreation and learning about nature. 
Foresters have a key role to play in ensuring sustainable forest management and in restoring and sustaining biodiversity in forests.\n\nIn addition to strictly protecting all remaining EU primary and old-growth forests, the EU must increase the quantity, quality and resilience of its forests , notably against fires, droughts, pests, diseases and other threats likely to increase with climate change. To retain their function for both biodiversity and climate, all forests need to be preserved in good health. More resilient forests can support a more resilient economy. They also play an important role in providing materials, products and services, which are key for the circular bio-economy.\n\nTo make this happen, the Commission will propose a dedicated EU Forest Strategy in 2021 in line with our wider biodiversity and climate neutrality ambitions. It will include a roadmap for planting at least 3 billion additional trees in the EU by 2030 , in full respect of ecological principles. This will create substantial job opportunities linked to the collecting and cultivating of seeds, planting seedlings, and ensuring their development. Tree planting is particularly beneficial in cities, while in rural areas it can work well with agroforestry, landscape features and increased carbon sequestration. At the same time, the Commission will continue to work with Member States to ensure that the EU is sufficiently equipped to prevent and respond to major forest fires, which can inflict significant damages on forest biodiversity.", - "page_start": 9, - "page_end": 9, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "Here, we demonstrate an antiferromagnetic coupling and exchange bias in Fe/(Ga,Mn)As bilayer films, by combining element-specific XMCD measurements and bulk-sensitive superconducting quantum interference device (SQUID) magnetometry. 
As with previous studies of FM metal/FM semiconductor bilayers 4,5 (and in contrast to AFM coupled FM metal/FM metal exchange bias structures 10,11 ) the layers are in direct contact without a non-magnetic spacer in between. We distinguish interface and bulk (Ga,Mn)As layers that are respectively strongly and weakly antiferromagnetically coupled to the Fe overlayer. In agreement with Ref. 7 , the interface layer remains polarized at room temperature.\n\nThe Fe and (Ga,Mn)As layers of the present study were both grown by molecular beam epitaxy in the same ultra-high vacuum system, in order to ensure a clean interface between them. The (Ga,Mn)As layer of thickness 10 to 50 nm was deposited on a GaAs(001) substrate at a temperature of 260 · C, using previously established methods 3,8 . A low Mn concentration of x ≈ 0 . 03 was chosen in order to avoid the formation of compensating Mn interstitials. The substrate temperature was then reduced to ∼ 0 · C, before depositing a 2 nm Fe layer, plus a 2 nm Al capping layer. In-situ reflection high energy electron diffraction and ex-situ x-ray reflectivity and diffraction measurements confirmed that the layers are single-crystalline with sub-nm interface roughness. SQUID magnetometry measurements were performed using a Quantum Design Magnetic Property Measurement System. 
Mn and Fe L 2 , 3 x-ray absorption and XMCD", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "- /SM590000 External applications that are opened according to their associated document types (for example, Microsoft Word for .doc or .docx files).\n - /SM590000 Special client applications, such as the CICS client, the Structured APIs, or Java API access.", - "page_start": 209, - "page_end": 209, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv3.pdf", - "query": "How encourage temporally adjacent representations to be predictive of each other ?", - "target_page": 2, - "target_passage": "One way to encourage temporally adjacent representations to be predictive of each other is to ensure that they vary slowly over time. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "To that end, we pretrain a family of V-JEPA models on a dataset of 2 million videos collected from publicly available datasets by combining a masked modeling prediction task with a joint-embedding predictive architecture (see Figure 2). We measure performance on several downstream image and video tasks, using both frozen evaluation and end-to-end fine-tuning. Our findings suggest that feature prediction can indeed serve as an effective stand-alone objective for unsupervised learning from video, while using significantly shorter training schedules than pixel prediction methods. Specifically:\n\n- · Feature prediction leads to versatile visual representations that perform well across downstream image and video tasks without adaption of the model's weights; i.e., using a frozen backbone. V-JEPA achieves the best performance among methods we consider (+6% accuracy) on the SomethingSomething-v2 task, which requires finegrained temporal understanding. 
V-JEPA is also competitive on tasks like Kinetics400, where appearance-based features are sufficient and hence state-of-the-art image models such as DINOv2 excel (Figure 1 and Table 6).\n- · Models trained with feature prediction are superior to pixel prediction approaches under a frozen evaluation protocol (attentive probing) and are competitive with pixel prediction under full fine-tuning, while using significantly shorter training schedules (Tables 5 and 6).\n- · Models trained with feature prediction are more label-efficient than pixel prediction approaches. Decreasing the available number of labeled examples results in an increase in the performance gap between V-JEPA and pixel-reconstruction models (Table 7).\n\n## 2 Related Works\n\nSlow Features. One way to encourage temporally adjacent representations to be predictive of each other is to ensure that they vary slowly over time. Early works targeting predictive features encouraged representations of individual video frames to be locally temporally invariant, while preventing representation collapse by using spectral methods, as in SFA (Wiskott and Sejnowski, 2002), SSA (Kayser et al., 2001), and Simulated Fixations (Zou et al., 2012). More recently, Goroshin et al. (2015); Wang et al. (2010) train a siamese convolutional network to map the representations of two subsequent frames to the same point, while encouraging distant frames to have diverse representations via a pairwise margin loss and a triplet loss, respectively. Other works (Oord et al., 2018; Surís et al., 2021; Feichtenhofer et al., 2021) implement temporal invariance using noisecontrastive estimation (Gutmann and Hyvärinen, 2012). 
Our exploration in this paper goes beyond temporal in-\n\nriance and explores feature prediction using masked modeling.", - "page_start": 1, - "page_end": 1, - "source_file": "arxiv3.pdf" - }, - { - "text": "## Feature Prediction versus Pixel Reconstruction.\n\nApproaches that predict in pixel space must dedicate significant model capacity and compute to capture all the low-level detail in the visual input. By contrast, approaches that predict in latent space have the flexibility to eliminate irrelevant or unpredictable pixel-level details from the target representation (Vondrick et al., 2016). Predicting in representation space has been shown to lead to versatile representations that perform well across many downstream tasks through linear probing or lowshot adaptation (Assran et al., 2023; Oquab et al., 2023; Assran et al., 2022), while demonstrating an efficiency gain during pretraining compared to pixel level reconstruction (Assran et al., 2023; Baevski et al., 2022b,a). The works of Baevski et al. (2022a,b) additionally show that predicting in representation space results in competitive end-to-end fine-tuning performance in the image, audio and text domains. In this work, we extend these findings to the video modality.\n\n## 3 Methodology: Video-JEPA\n\nFigure 2 Joint-Embedding Predictive Architectures are trained to predict the representation of an input y from the representation of another input x . The additional variable z provides the predictor with information about the transformation that computes y from x .\n\n\n\nOur goal is to explore the effectiveness of feature prediction as a stand-alone objective for learning visual representations from video. To that end, we use a joint-embedding predictive architecture (JEPA) (LeCun, 2022); see Figure 2. The main idea behind a JEPA is to learn by predicting the representation of an input y from the representation of another input x . 
The basic architecture is made up of an encoder, E θ ( · ) , which computes the representation of the inputs, and a predictor, P ϕ ( · ) , which predicts the representation of y from the representation of x , conditioned on a variable z indicating the transformation (or corruption) between x and y . Conditioning on z enables the generation of distinct predictions for various transformations of x .\n\n## 3.1 Training Objective\n\nWe train our visual encoder E θ ( · ) to satisfy the constraint that representations computed from one part of the video, y , should be predictable from representations\n\ncomputed from another part of the video, x . The predictor network P ϕ ( · ) , which maps the representation of x to the representation of y , is trained simultaneously with the encoder, and is provided specification of the spatio-temporal positions of y through the conditioning variable z ← ∆ y .\n\nNaively implementing the objective using the regression\n\nminimize θ,ϕ ∥ P ϕ ( E θ ( x ) , ∆ y ) -E θ ( y ) ∥ 1 ,\n\nwould admit a trivial solution, where the encoder outputs a constant representation, regardless of its input. In practice, we use the following modified objective to prevent representation collapse,\n\nminimize θ,ϕ ∥ P ϕ ( E θ ( x ) , ∆ y ) -sg ( E θ ( y )) ∥ 1 , (1)\n\nwhere sg ( · ) denotes a stop-gradient operation, which does not backpropagate through its argument, and E θ ( · ) is an exponential moving average of the network E θ ( · ) . The use of an exponential-moving average feature extractor along with a stop-gradient and a predictor has been used as a collapse prevention strategy for image pretraining (Grill et al., 2020), and studied empirically (Xie et al., 2021) and theoretically (Tian et al., 2021). In fact, the objective in equation (1) is similar to the loss of Assran et al. 
(2023) used for image pretraining, but we modify it to use an ℓ 1 regression, which we found to be more stable.", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv3.pdf" - }, - { - "text": "Figure 1. A schematic illustration of a hierarchical active inference model. This model links (exteroceptive, interoceptive, and proprioceptive) sensations at lower levels with multimodal models of hidden bodily states, such as fatigue and hunger, at intermediate levels, and finally with temporally extended, integrative models of the embodied self at the higher hierarchical level. In this schematic, following predictive coding (Rao and Ballard 1999, Friston 2005), black and red circles represent neural units that encode predictions and prediction errors, respectively. The levels are reciprocally connected, so predictions are propagated from the top-down (black edges) and prediction errors from the bottom-up (red edges). Finally, the pink triangles indicate a mechanism of precision gating (or gain control) of prediction error units, which determines their relative influence on units encoding predictions. At a neurobiological level, prediction and prediction error units could be mapped to deep and superficial pyramidal cells in cortical hierarchies, whereas expected precision could be linked to neuromodulatory input. The elements of the generative model shown do not need to map one-to-one to specific brain areas or networks but are plausibly distributed across many of them. However, as a first approximation, the lower and intermediate layers of the generative model could be linked to brain networks that process unimodal information (e.g. sensory cortices for exteroceptive information) and multimodal association areas, respectively. The highest level of the generative model could be linked to brain networks that process information about the self, such as the insular cortex, the anterior cingulate cortex, and the medial prefrontal cortex. See Parr et al. 
(2022) for details about hierarchical generative models supporting adaptive regulation and allostasis and Barrett and Simmons (2015) for their putative neuronal underpinnings. See online article for colored version of this figure.\n\n\n\nare reciprocally linked through top-down connections that convey predictions (black edges) and bottom-up connections that convey prediction errors (red edges), within and across levels. This predictive coding architecture permits inferring (in the Bayesian sense) the most likely causes of sensations, across multiple modalities and multiple hierarchical levels, by minimizing prediction errors at all levels. The rationale is that predictions at all levels are continuously adjusted (and synaptic weights adjusted at a slower time scale) until they match with incoming multimodal stimuli sufficiently well, and, consequently, the prediction errors across all levels are minimized. This process entails that even if a predictive coding agent starts with an incorrect prediction (e.g. about what object it is looking at) the prediction errors that measure a discrepancy between the predicted sensations and the actual sensations can help revise the initial predictions. See Parr et al. (2022) for a more detailed explanation of how to interpret these schematics.\n\nAnother critical aspect of Fig. 1 is that it illustrates two pathways in which prediction errors at the proprioceptive and interoceptive levels are used to steer physical actions (reflex arcs) and autonomic actions (autonomic reflexes). 
Endowing predictive coding with these reflexes-hence realizing an 'active inference' architecture-permits minimizing prediction errors by changing the state of the world (by physically acting) or the internal milieu (by engaging in autonomic actions) rather than only by changing predictions, as described later.", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed1.pdf" - }, - { - "text": "## Self-Supervised Learning from Videos\n\nSimilar to unsupervised learning from images, a family of unsupervised video representation learning approaches enforces a spatio-temporal representation of a video clip to be invariant to hand-crafted spatio-temporal data augmentations (Parthasarathy et al., 2022). However, one obvious insight is that the temporal ordering of visual information in video can provide implicit supervision. Indeed, this insight is the key insight leveraged by many works on unsupervised video learning. Towards leveraging temporal information as supervision, some approaches train a visual encoder by predicting the temporal ordering of frames (Xu et al., 2019; Lee et al., 2017). Other approaches seek to predict low-level motion vectors computed from optical flow (Pintea et al., 2014), or to predict mixing pixels in video frames, using either a frame-interpolation objective (Kalluri et al., 2023) or a denoising autoencoder (Tong et al., 2022; Feichtenhofer et al., 2022; Wang et al., 2023a).", - "page_start": 14, - "page_end": 14, - "source_file": "arxiv3.pdf" - }, - { - "text": "## Revisiting Feature Prediction for Learning Visual Representations from Video\n\nAdrien Bardes 1 , 2 , 3 , Quentin Garrido 1 , 4 , Jean Ponce 3 , 5 , 6 , Xinlei Chen 1 , Michael Rabbat 1 , Yann LeCun 1 , 5 , 6 , MahmoudAssran 1 , † , Nicolas Ballas 1 , †\n\n1 FAIR at Meta, 2 Inria, 3 École normale supérieure, CNRS, PSL Research University, 4 Univ. 
Gustave Eiffel, CNRS, LIGM, 5 Courant Institute, New York University, 6 Center for Data Science, New York University † Joint last author\n\nThis paper explores feature prediction as a stand-alone objective for unsupervised learning from video and introduces V-JEPA , a collection of vision models trained solely using a feature prediction objective, without the use of pretrained image encoders, text, negative examples, reconstruction, or other sources of supervision. The models are trained on 2 million videos collected from public datasets and are evaluated on downstream image and video tasks. Our results show that learning by predicting video features leads to versatile visual representations that perform well on both motion and appearance-based tasks, without adaption of the model's parameters; e.g., using a frozen backbone. Our largest model, a ViT-H/16 trained only on videos, obtains 81 . 9% on Kinetics-400, 72 . 2% on Something-Something-v2, and 77 . 9% on ImageNet1K.\n\nDate: April 15, 2024\n\nCorrespondence:\n\n{abardes, massran, ballasn}@meta.com\n\nCode:\n\nhttps://github.com/facebookresearch/jepa\n\nBlogpost:\n\nClick here\n\n## 1 Introduction\n\nHumans possess the remarkable ability to map low-level signals originating from the retina into a semantic spatiotemporal understanding of the world; synthesizing notions such as objects and global motion (Spelke et al., 1995). A long-standing goal of the machine learning community is to identify the principles or objectives that may guide such unsupervised learning in humans (Field, 1994; Berkes and Wiskott, 2005; Hinton, 1989). One related hypothesis is based on the predictive feature principle (Rao and Ballard, 1999), which posits that representations of temporally adjacent sensory stimuli should be predictive of each other.\n\nIn this work, we revisit feature prediction as a standalone objective for unsupervised learning of visual representations from video. 
Numerous advances in the field such as the standard use of transformer architectures in vision (Dosovitskiy et al., 2020), the maturing of masked autoencoding frameworks (Xie et al., 2021; Bao et al., 2021; He et al., 2021), query-based feature pooling (Chen et al., 2022), joint-embedding predictive architectures (JEPA) (LeCun, 2022; Assran et al., 2023; Baevski et al., 2022b), and larger datasets - form a unique arsenal of tools, which we integrate in a modern and conceptually simple method, the video joint-embedding predictive architecture or V-JEPA , which is based solely on feature prediction, without using pretrained image encoders, text, negative examples, human annotations, or pixel-\n\n\n\n## Frozen Evaluation\n\nFigure 1 V-JEPA models pretrained on video learn versatile visual representations. It performs well on motion-based tasks (Something-Something-v2) and appearance-based tasks (Kinetics 400) without adaptation of the model's parameters, i.e., using the same frozen backbone for both tasks.\n\n\n\nlevel reconstruction.\n\nWe seek to answer the simple question:\n\nHow effective is feature prediction as a standalone objective for unsupervised learning from video with modern tools?", - "page_start": 0, - "page_end": 0, - "source_file": "arxiv3.pdf" - }, - { - "text": "riance and explores feature prediction using masked modeling.\n\nPredictive Features. Going beyond local invariance, a family of works trains a predictor network to map the representation of a frame or clip at one time-step to a distinct representation at another time-step. Srivastava et al. (2015); Vondrick et al. (2016); Wang et al. (2023b) train such a video feature predictor network on top of a frozen pretrained image or video encoder. 
Unfreezing the target feature extractor, several methods train the video encoder and the predictor network simultaneously, while preventing collapse by using a supervised action forecasting loss (Girdhar and Grauman, 2021), or by using the representations of distant clips as negative samples in a contrastive loss (Han et al., 2019, 2020; Tan et al., 2023), often focusing on small convolutional encoders (Han et al., 2019, 2020). The idea of learning a representation by predicting missing information in feature space is also core to the joint-embedding predictive architecture (JEPA) (LeCun, 2022), which combines a siamese encoder with a predictor network. JEPAs have been successfully instantiated in several modalities, such as with audio data (Baevski et al., 2022b) and image data (Zhou et al., 2021; Oquab et al., 2023; Assran et al., 2023). In this work, we extend this paradigm to video data by leveraging recent advances in self-supervised learning.\n\nAdvances in Self-Supervised Learning. The use of vision transformers (Dosovitskiy et al., 2020; Li et al., 2022) has become standard practice in self-supervised learning with joint-embedding architectures (Chen et al., 2021; Caron et al., 2021; Oquab et al., 2023; Zhou et al., 2021; Assran et al., 2022), and unlocked masked image modeling in pixel space by parameterizing the pixel decoder as a transformer with learnable mask tokens (Dosovitskiy et al., 2020; Xie et al., 2021; He et al., 2021; Bao et al., 2021), demonstrating a step-change in the representation quality of autoencoding methods (Vincent et al., 2010). This line of generative methods was subsequently extended to video data using spatio-temporal masking (Tong et al., 2022; Feichtenhofer et al., 2022; Wang et al., 2023a; Kalluri et al., 2023; Gupta et al., 2023). It was also recently shown that the representations of masked image autoencoders could be significantly improved by using learnable pooling mechanisms based on cross-attention (Chen et al., 2022). 
Finally, through careful selection of design choices, the non-contrastive collapse prevention strategy in BYOL (Grill et al., 2020) was recently made to work with image feature prediction methods (Baevski et al., 2022b; Assran et al., 2023), which demonstrated the ability to learn representations that can be leveraged for various downstream tasks without relying on invariance to hand-crafted image transformations.", - "page_start": 1, - "page_end": 1, - "source_file": "arxiv3.pdf" - }, - { - "text": "## 4 WhatMatters for Learning Representations from Video?\n\nIn this section we isolate the contributions of several design choices, including: a) the use of a feature prediction\n\nversus pixel prediction objective, b) the construction of the pretraining data distribution, c) the feature pooling strategy for leveraging the model's representations in downstream tasks, and d) the masking strategy, towards identifying: what to predict from what?\n\n## 4.1 Predicting Representations versus Pixels\n\nWe first ablate the effect of computing the prediction loss in representation space. We train a pair of ViT-L/16 models using either a V-JEPA feature prediction loss, or a mean-squared error loss with the normalized pixel values, as in masked autoencoders (He et al., 2021), and perform a sweep over the learning rate and weight decay schedules for both approaches. All models are pretrained on VideoMix2M for 90K iterations with a batch size of 3072 using multi-block masking. We examine performance on Kinetics-400 (K400), Something-Something-v2 (SSv2), and ImageNet-1K (IN1K), using a frozen backbone with an attentive probe, and report top-1 accuracy using a single center view. 
We also examine end-to-end fine-tuning performance of the models on Kinetics-400.\n\nResults of this comparison are reported in Table 1 and indicate that predicting in feature space provides a consistent performance improvement over pixel space prediction in both frozen evaluation of the video backbone, as well as end-to-end fine-tuning.\n\n## 4.2 Pretraining Data Distribution\n\nNext we study the impact of the pretraining data distribution in Table 2. Leveraging large scale datasets", - "page_start": 4, - "page_end": 4, - "source_file": "arxiv3.pdf" - }, - { - "text": "Extending this idea even further, one can assume that certain emotional states, as well as self-awareness and the (embodied) sense of self-and the feeling of continually being the same person-could be constructed similarly: it would be the result of an inferential process that integrates bodily sensations and other experiences over time (Gu et al. 2013, Seth 2013, Stephan et al. 2016, Barrett 2017). Figure 1 illustrates graphically this perspective by showing a (schematic) hierarchical generative model that links (exteroceptive, interoceptive, and proprioceptive) sensations at lower levels with multimodal models of hidden bodily states, such as fatigue and hunger at intermediate layers, and, finally, with temporally extended, integrative models of the emotional and embodied self at the higher hierarchical level. The hierarchical generative model recapitulates a simple predictive coding architecture, which includes various putative brain areas or networks (gray ovals) arranged hierarchically. In the schematic, networks for unimodal (exteroceptive, proprioceptive, and interoceptive) processing are situated at the lowest hierarchical level, multimodal networks are at an intermediate level, and networks for processing a persistent model of the self are at the highest level. 
Note that this simple schematic is not supposed to recapitulate brain anatomy but to illustrate the basic principles of hierarchical generative models and predictive coding; (for a discussion of the mapping between predictive coding networks and brain anatomy, see Parr et al. 2022). Each network includes cells encoding predictions (black nodes) and prediction errors (red nodes). These units", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed1.pdf" - }, - { - "text": "Figure 3 V-JEPA. Training operates on a video clip of T frames with spatial resolution H × W , flattened into a sequence of L tokens. (Left to right): We first obtain the input of the x -encoder by dropping tokens from the video clip. The x -encoder then processes the masked video sequence, and outputs an embedding vector for each input token. Next, the outputs of the x -encoder are concatenated with a set of learnable mask tokens containing positional embeddings of the masked spatio-temporal patches. The predictor network processes the combined token sequence, and outputs an embedding vector for each mask token. The outputs of the predictor are then regressed to the prediction targets using an L 1 loss. The prediction targets correspond to the output of the y -encoder.\n\n\n\n×\n\n×\n\n×\n\n×\n\n×\n\n×\n\n×\n\n×\n\n## 3.2 Prediction Task: Predicting y from x\n\nThe feature prediction task is based on a masked modeling formulation (He et al., 2021; Tong et al., 2022); i.e., regions x and y from the video are sampled using masking. To sample y from a video, we sample several (possibly overlapping) spatially continuous blocks with various aspect ratios and repeat the spatial blocks across the entire temporal dimension of the video; x is taken to be the complement. 
Masking a large continuous block that covers the full temporal dimension limits information leakage due to the spatial and temporal redundancy of videos, and results in a harder prediction task (Tong et al., 2022).\n\nWe leverage two types of masks: short-range masks, where we take the union of 8 randomly sampled target blocks covering 15% of each frame, and long-range masks, where we take the union of 2 randomly sampled target blocks covering 70% of each frame. In both cases, the aspect ratio for all sampled blocks is randomly chosen in the range (0 . 75 , 1 . 5) . Given that both short-range and long-range masks are produced by sampling many blocks and taking their union, the result is an average masking ratio of ∼ 90% . We refer to our masking strategy as multi-block, and compare it to other possible masking strategies in Section 4.\n\n## 3.3 Network Parameterization\n\nWe use a Vision Transformer (ViT) (Dosovitskiy et al., 2020; Arnab et al., 2021) as our video backbone. To process a video with a transformer network, we split the video clip into a 3D grid of L spatio-temporal patches, where a patch consists of a 16 × 16 pixel block spanning 2 consecutive frames; we refer to these spatio-temporal patches as tokens. This sequence of tokens is then directly processed by the stack of transformer blocks. In-\n\nuts x and y correspond to masked regions of a video, we apply the video masks by simply dropping a subset of the tokens. We apply masking at the input of the x -encoder, and at the output of the y -encoder to construct contextualized targets (Baevski et al., 2022b). The encoder is parameterized using standard ViT networks, while the predictor is a narrow transformer implemented using 12 blocks with an embedding dimension of 384 . 
Taking inspiration from masked autoencoders (He et al., 2021), our predictor takes as input the sequence of embeddings produced by the x -encoder as well as a sequence of learnable mask tokens with positional embeddings indicating the spatio-temporal positions of the y tokens. The output of the predictor is an embedding vector for each mask token; see Figure 3 and refer to Appendix B for more details.\n\n## 3.4 Pretraining Data and Evaluation Setup", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv3.pdf" - }, - { - "text": "n/a\n\nInvolved in the study\n\nFunctional and/or effective connectivity\n\nGraph analysis\n\nMultivariate modeling or predictive analysis\n\nMultivariate modeling and predictive analysis\n\nMultivariate regression analyses was used to explore brain structure in relation to gestation. Regional, network, and summary brain measures (dependent variables) were examined in relation to gestation week (independent variable). In follow-up statistical analyses (noted in Methods), various quality control metrics and global brain volume were included into the model to account for variables of non-interest (e.g., motion) and to identify highly impacted brain areas (e.g., controlling for total GMV).", - "page_start": 17, - "page_end": 17, - "source_file": "pubmed4.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv3.pdf", - "query": "What does mean the JEPA acronym ?", - "target_page": 3, - "target_passage": " joint-embedding predictive architecture (JEPA)", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Figure 4 SSv2 fine-tuning performance vs. Samples Seen. We report SSv2 fine-tuning for V-JEPA and pixel-reconstruction baselines using a ViT-L/16 or Hiera-L architecture. 
V-JEPA outperforms all pixel-reconstruction methods using a ViTL/16 and matches the Hiera-L performance while seeing significantly less samples during pretraining.\n\n\n\nageNet; hence, V-JEPA achieves comparable ImageNet performance despite only pretraining on video.\n\nUnder the fine-tuning protocol, V-JEPA also achieves the best performance of any model trained with a ViT-L/16, and matches the performance of the Hiera-L on SSv2, which benefits from a hierachical prior (Ryali et al., 2023). The V-JEPA models achieve this result while processing significantly fewer samples during pretraining (Figure 4), demonstrating the efficiency of feature prediction as a learning principle.\n\n## 5.2 Comparison with State-of-the-Art\n\nNext, in Table 6, we inspect how the V-JEPA models pretrained on video stack up next to the largest stateof-the-art self-supervised image and video models when freezing the backbone encoder and training an attentive probe on top. Our image pretrained baselines include OpenCLIP (Cherti et al., 2023), DINOv2 (Oquab et al., 2023), and I-JEPA (Assran et al., 2023). The OpenCLIP model is trained with a contrastive image-text alignment objective, DINOv2 and I-JEPA are trained with self-supervision. These models are known to excel in their frozen-evaluation performance (Oquab et al., 2023); i.e., their ability to produce visual features that can be applied to many downstream tasks simultaneously, without end-to-end fine-tuning, and thus provide highly competitive baselines. Our video pretrained baselines include VideoMAE (Tong et al., 2022), OmniMAE (Girdhar et al., 2023), Hiera (Ryali et al., 2023), VideoMAEv2 (Wang et al., 2023a), and MVD (Wang et al., 2023b). The OpenCLIP, DINOv2 and VideoMAEv2 models are parameterized as Giant/Gigantic vision transformer architectures containing over 1B parameters trained on large-scale image or video datasets.\n\nComparison with video models. 
Compared to large-scale video baselines, the V-JEPA models outperform all previous models on every downstream video\n\nFigure 5 SSv2 frozen-evaluation performance vs. Pretraining Time. Wallclock times for all methods are measured on a single GPU with a batch size of 10 clips, using the official codebases for VideoMAE and VideoMAEv2, and linearly extrapolated assuming a global batch size of 2400 samples. However, note that the SSv2 accuracies of video pixel prediction methods are actually obtained with small batch sizes and significantly longer training schedules. V-JEPA outperforms pixel-reconstruction methods while training significantly faster.\n\n\n\nand image task with notable margin (see Table 6). Our H/16 model outperforms the largest publicly available VideoMAE, VideoMAEv2, OmniMAE, MVD, and Hiera models by at least +5 points in motion understanding (Something-Something-v2), +2 points in action recognition (Kinetics-400), +5 points on action detection (AVA), +1 point on object recognition (ImageNet-1K), +2 points in scene recognition (Places205), and +0 . 2 points on finegrained recognition (iNaturalist). Moreover, when comparing pretraining wallclock time in Figure 5, we see that V-JEPA achieves this performance with a roughly 2 × speedup compared to the large pixel prediction models.\n\nComparison with image models. On tasks that require a fine-grained understanding of motion (SomethingSomething-v2), the V-JEPA models provide a major improvement (over +21 points) compared to large-scale image baselines, such as DINOv2, OpenCLIP, and IJEPA. Self-supervised pretraining from videos allows to model dynamic concepts that are not easily learned from static image datasets. Similarly, we observe that the V-JEPA models outperform image-based pretraining on action localization.", - "page_start": 7, - "page_end": 7, - "source_file": "arxiv3.pdf" - }, - { - "text": "Table 5 Comparison with Pixel Prediction Methods. 
We compare V-JEPA with OmniMAE (Girdhar et al., 2023), VideoMAE (Tong et al., 2022), and Hiera (Ryali et al., 2023), which leverage a pixel-reconstruction loss. All models are trained using a ViT-L architecture or a comparable Hiera-L. We evaluate the approaches on downstream image tasks (IN1K, Places205, iNat201) and video tasks (K400, SSv2, AVA) in both frozen evaluation (with a frozen backbone), and end-to-end fine-tuning. All models are evaluated at resolution 224. On K400 and SSv2 we follow the standard practice of reporting accuracy from several spatial and temporal views from the video. In frozen evaluation, V-JEPA outperforms the baselines on all downstream tasks, except ImageNet, where the model achieves 74 . 8% compared to 75 . 1% of an OmniMAE model trained directly on ImageNet. V-JEPA also achieves the best fine-tuning performance amongs all ViT-L models and matches the Hiera-L on SSv2. The V-JEPA results are achieved while processing significantly fewer examples during pretraining.Table 6 Comparison with State-of-the-Art Models. We compare V-JEPA with state-of-the-art baselines in frozen evaluation with an attentive probe on downstream image tasks (IN1K, Place205, iNat21) and video tasks (K400, SSv2, AVA). All models are evaluated at resolution 224, except I-JEPA 512 and V-JEPA 384 which are evaluated respectively at resolution 512 and 384 . On K400 and SSv2 we follow the standard practice of reporting accuracy from several spatial and temporal views from the video. Compared to other video baselines, V-JEPA exhibits a consistent improvement across all downstream tasks. 
Compared to image-models that excel under the frozen evaluation, V-JEPA shows a significant performance improvement on tasks requiring motion understanding (+21 points on SSv2), and reduces the gap between video and image models on tasks requiring static appearance-based features.", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv3.pdf" - }, - { - "text": "## Frozen\n\n(a) Visualization Methodology. We train a conditional diffusion model to decode the V-JEPA feature-space predictions to interpretable pixels; the pretrained V-JEPA encoder and predictor networks are kept frozen in this process. The decoder is only fed the representations predicted for the missing regions of the video, and does not have access to the unmasked regions of the video.\n\n\n\nFigure 6 Qualitative Analysis. Offline visualizations of the V-JEPA feature-space predictions.\n\n\n\n(b) Visualizations. First Row: Masked videos used as input to the V-JEPA models (a pretrained ViT-H/16 encoder and its corresponding predictor network). Other rows: Bounding boxes contain various samples from the decoder overlayed on the original video. V-JEPA is not a generative model and the decoder does not have access to the context (first row), so we do not expect samples to exactly match the input. This experiment qualitatively illustrates what information is encoded and predicted by V-JEPA. In particular, characteristics that are common across samples represent information that is encoded in the V-JEPA predictions. V-JEPA generates predictions that are spatially and temporally coherent with unmask region of the video. The predictions also capture consistent motion through time.\n\n\n\n## 7 Conclusion\n\nIn this work, we explored the effectiveness of feature prediction as a stand-alone objective for unsupervised learning from video and introduced V-JEPA , a collection of vision models trained solely using a self-supervised feature prediction objective. 
The V-JEPA models demonstrate the ability to solve various downstream image and video tasks without adaption of the model parameters, and outperform previous video representation learning approaches in frozen evaluation on action recognition, spatio-temporal action detection, and image classification tasks. Additionally, we show that pretraining VJEPA on videos is particularly effective for solving down-\n\nstream tasks requiring fine-grained motion understanding, while large-scale image models trained on internet scale datasets fall short on such tasks. Finally, we empirically observed that V-JEPA models are label-efficient learners, and exhibit good performance on downstream tasks, even when only few labeled examples are available.\n\n## References\n\nHassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, and Boqing Gong. Vatt: Transformers for multimodal self-supervised learning from raw video, audio and text. Advances in Neural Information Processing Systems , 34:24206-24221, 2021.", - "page_start": 9, - "page_end": 9, - "source_file": "arxiv3.pdf" - }, - { - "text": "We find V-JEPA to be more label-efficient than other self-supervised video models: decreasing the available number of labeled examples for training the attentive probe results in an increase in the performance gap between V-JEPA and the other models. In particular, the performance of the largest V-JEPA model on K400 drops by 12% to 68.2% top-1 when we reduce the number of labeled examples by a factor of 10 × (from roughly 287 examples per class to 29 examples per class). By contrast, VideoMAEv2 drops by 30% to 37.0% top-1, VideoMAE drops by 15.9% to 62.3% top-1, and MVD drops by 14.6% to 62.6% top-1.\n\nSimilar observations hold on SSv2. The performance of the largest V-JEPA model on SSv2 drops by 13.9%\n\nto 54.0% top-1 when we reduce the number of labeled examples by a factor of 10 × (from roughly 440 examples per class to 48 examples per class). 
By contrast, VideoMAEv2 drops by 26% to 28.0% top-1, VideoMAE drops by 19.1% to 41.4% top-1, and MVD drops by 18.1% to 42.9% top-1.\n\n## 6 Evaluating the Predictor\n\nNext, we seek to qualitatively inspect the V-JEPA models. Recall that the predictor network in V-JEPA predicts the representations of a masked spatio-temporal region y from a visible region x , given the positional information of the masked regions (see Section 3). To qualitatively investigate the grounding of the feature-space predictions, we freeze the pretrained encoder and predictor networks and train a conditional diffusion decoder to map the V-JEPA predictions to interpretable pixels. Notably, the decoder is only fed the representations predicted for the missing regions of the video, and does not have access to the unmasked regions of the video (see Figure 6a).\n\nGiven a masked video, we use the V-JEPA pretrained models to predict the representations of the missing regions, and then use the decoder to project the representations to pixel space. Figure 6b shows decoder outputs for various random seeds. Qualities that are common across samples represent information that is contained in the predictor representation.\n\nFigure 6b shows that the V-JEPA feature predictions are indeed grounded, and exhibit spatio-temporal consistency with the unmasked regions of the video. Specifically, the samples in Figure 6b show that the V-JEPA predictor correctly captures positional uncertainty and produces a variety of visual objects at various locations with consistent motion. 
Some of the samples also demonstrate an understanding of object-permanence, as the visual objects remain consistent after partial occlusion.", - "page_start": 8, - "page_end": 8, - "source_file": "arxiv3.pdf" - }, - { - "text": "| V-JEPA | ViT-L/16 | 270M | 90K | 80.8 | 69.5 | 25.6 | 74.8 | 60.3 | 67.8 | 85.6 | 75.1 |", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv3.pdf" - }, - { - "text": "On Kinetics-400, we find image models to perform well; e.g., while DINOv2 (Oquab et al., 2023) previously reported 78 . 4% on K400 with a linear probe, we improve the frozen evaluation of the g/14 model to 83 . 4% by using an attentive probe. In this case, our H/16 model achieves 82 . 0% top-1 accuracy. It is worth noting that the label for many Kinetics videos can be inferred using appearance-based cues, without requiring an understanding of motion (Sevilla-Lara et al., 2021).\n\nThe V-JEPA models narrow the gap with image models on image classification tasks. In particular, V-JEPA achieves a score of 77 . 4% on ImageNet using a one-", - "page_start": 7, - "page_end": 7, - "source_file": "arxiv3.pdf" - }, - { - "text": "Table 14 Temporal Coverage on Kinetics-400. We evaluate the effect of temporal coverage on K400. We train an attentive probe on K400 using either 1 clip ( ≈ 2 seconds of a video) or 8 clips ( ≈ 16 seconds of a video). To sample N clips, we first divide a video in N equal-length temporal segments and sample one clip at random per segment. The video encoder processes each clip in parallel and all the encoder output tokens are concatenated at the input of the attentive probe. Increasing the temporal coverage from 1 clip per video to 8 clips significantly improves the performance for both our VideoMAE baseline and V-JEPA.\n\nTable 15 Finetuning results. We evaluate a V-JEPA model with the finetuning protocol on the K400 and SSv2 datasets using 16 frames per clip and multi-view fusion (5 × 3 or 2 × 3 ) for inference. 
The #Samples Seen entry corresponds to the number of video clips processed during pretraining, which is larger than the size of the pretraining dataset for multi-epoch training. We compare V-JEPA with different video self-supervised learning approaches. We report the VideoMAEv2 results without instruction-turning for consistency with the other approaches. V-JEPA obtains competitive performance using the finetuning protocol.\n\n| Method | Arch. | 1 Clip | 8 Clips |\n|----------|----------|----------|-----------|\n| VideoMAE | ViT-L/16 | 69.4 | 77.8 |\n| V-JEPA | ViT-L/16 | 73.7 | 80.9 |\n\n| Method | Arch. | Pretraining Data | #Samples Seen | K400 (16 × 5 × 3) | SSv2 (16 × 2 × 3) |\n|------------|----------|--------------------|-----------------|---------------------|---------------------|\n| VideoMAEv1 | ViT-L/16 | K400 | SSv2 | 380M | 410M | 85.4 | 74.3 |\n| VideoMAEv1 | ViT-H/16 | K400 | SSv2 | 380M | 410M | 86.6 | 74.8 |\n| VideoMAEv2 | ViT-H/16 | Un.Hybrid | 1600M | 86.9 | 76.8 |\n| MVD | ViT-L/16 | K400+IN1K | 2400M | 86.4 | 76.7 |\n| MVD | ViT-H/16 | K400+IN1K | 2400M | 87.2 | 77.3 |\n| V-JEPA | ViT-L/16 | VideoMix2M | 270M | 85.6 | 75.1 |\n| V-JEPA | ViT-H/16 | VideoMix2M | 270M | 86.6 | 77 |\n\nexamine our multi-masking strategy and find that sampling two masks for each clip (long-range and short-range) to be more effective than sampling just a single mask for each clip.\n\nIn Figure 8c, we explore different average spatial and temporal masking ratio, i.e. the spatial/temporal ratio of the area that is covered by a mask on average for a clip. Recall that each mask is constructed by sampling several (possibly overlapping) blocks and taking their union. We change the average spatial or temporal masking ratio by changing a block spatial or temporal size, as well as the overall number of blocks. We found that low spatial or temporal coverage results in a trivial prediction task, which degrades downstream performance. 
Based on those results, we sample masks that remove roughly 90% of the frame and extend along the entire temporal dimension of the clip by default.\n\nIn Figure 8b , we explore different block size given an effective spatial masking ratio of 90% and temporal ratio of 100%. We keep the masking ratio approximately constant by changing the block size and the number of block at the same time. We find that sampling several blocks to perform better than sampling a single large block. Figure 9 visually illustrates the effect of sampling several smaller blocks to construct a mask.", - "page_start": 21, - "page_end": 21, - "source_file": "arxiv3.pdf" - }, - { - "text": "FIG. 8: XTEJ1752-223 light curve. Horizontal scale is in modified Julian days.\n\n\n\n- [1] C. Meegan et al., Ap. J. 702 , 791 (2009).\n- [2] C. Wilson-Hodge et al. (2010), these proceedings.\n- [3] B. A. Harmon et al., Ap. J. Suppl. 138 , 149 (2002).\n- [4] B. A. Harmon et al., Ap. J. Suppl. 154 , 585 (2004).\n- [5] G. L. Case et al., in The First GLAST Symposium , edited by S. Ritz, P. Michelson, and C. Meegan (2007), vol. 921 of AIP Conf. Proceedings , p. 538.\n- [6] J. Tueller et al. (2010), ap. J. Suppl., (to be published), astro-ph/0903.3037.\n- [7] J. C. Ling and W. A. Wheaton, Ap. J. 598 , 334 (2003).\n- [8] E. Jourdain and J. P. Roques, Ap. J. 704 , 17 (2009).\n- [9] H. Steinle et al., Astron. and Astrophys. 330 , 97\n\n12-25 keV band, where the flux initially rose to about 240 mCrab (2009 Oct 25-28), suddenly dropped to non-detectable on 2009 October 29-30, then rose again during the period 2009 October 31 to November 2. As of mid December 2009, the source remains in a high intensity state. The light curve is shown for the period MJD 54700-55200, again with 1-day resolution, in Fig. 8. The fluxes for XTE J1752-223 in Table 1 are given are for the interval of flaring activity, TJD 55130-55180.\n\n## Acknowledgments\n\nThis work is supported by the NASA Fermi Guest Investigator program. 
At LSU, additional support is provided by NASA/Louisiana Board of Regents Cooperative Agreement NNX07AT62A.\n\n(1998).\n\n- [10] M. McConnell et al., Ap. J. 523 , 928 (2000).\n- [11] J. C. Ling and W. A. Wheaton, Chinese J. Astron. Astrophys. Suppl. 5 , 80 (2005).\n- [12] G. L. Case et al., Chinese J. Astron. Astrophys. Suppl. 5 , 341 (2005).\n- [13] L. Bouchet et al., Ap. J. 693 , 1871 (2009).\n- [14] M. C. Bell et al., Ap. J. 659 , 549 (2007).\n- [15] G. L. Case et al. (2010), to be submitted.\n- [16] C. Wilson-Hodge et al., Astron. Telegram 2280 (2009).", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0955.pdf" - }, - { - "text": "## 5 Comparison with Prior Work\n\nIn Section 5.1, we investigate the impact of feature prediction by comparing V-JEPA with video approaches that rely on pixel prediction, while using a similar architecture for all baselines. Subsequently, in Section 5.2, we remove the architectural constraint and report the best performance across architectures for self-supervised video and image pretraining approaches. Finally, we explore the label-efficiency of V-JEPA relative to other selfsupervised video pretraining approaches in Section 5.3. We further detail the evaluation setup in Appendix D.\n\n## 5.1 Comparison with Pixel Prediction\n\nTo investigate the effectiveness of feature prediction pretraining, we first compare V-JEPA to video masked modeling models relying on a pixel prediction loss. We control\n\nfor the possible confounding factor of model architecture by evaluating all models using either a ViT-L/16 encoder, or a Hiera-L encoder, which has a similar number of parameters. 
For the pixel prediction baselines we consider VideoMAE (Tong et al., 2022; Wang et al., 2023a), which trains vision transformer autoencoders exclusively on video, Hiera (Ryali et al., 2023), which trains a hierarchical transformer autoencoder on video, and OmniMAE (Girdhar et al., 2023), which trains a vision transformer autoencoder on static images and video simultaneously.", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv3.pdf" - }, - { - "text": "TRACE CT,ON,COMP=SYSTCPDA,SUB=(TCPIP\\_Proc) reply,WTR=CTWTR1,END\n\n - 3. Start and connect a SYSTCPIP component trace to your writer procedure by running the following command:\n\n```\nTRACE CT,ON,COMP=SYSTCPIP,SUB=(TCPIP\\_Proc) reply,WTR=CTWTR1,JOBNAME=(RM\\_Jobname) reply,WTR=CTWTR1,OPTIONS=(socket,pfs,tcp,sockapi),END\n```", - "page_start": 427, - "page_end": 427, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "arxiv3.pdf", - "query": "What is the average performance of the ViT-L/16 architecture on the K710 dataset with 700k samples ?", - "target_page": 5, - "target_passage": "70.9", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "## 3.4 Pretraining Data and Evaluation Setup\n\nPretraining. We combine several public datasets to construct an unsupervised video pretraining dataset, which we refer to as VideoMix2M. Specifically, we combine the videos from HowTo100M (HT) (Miech et al., 2019), Kinetics-400/600/700 (K710) (Kay et al., 2017), and Something-Something-v2 (SSv2) (Goyal et al., 2017), and remove any overlap with the validation sets of Kinetics-400/600/700 and Something-Something-v2, resulting in approximately 2 million videos. We train a ViT-L/16, a ViT-H/16, and a ViT-H/16 384 transformer model on VideoMix2M. We use a batch size of 3072 for the ViT-L/16 and ViT-H/16 models, and a batch size of 2400 for the ViT-H/16 384 model. 
Each model takes as input a video clip of 16 frames sampled with a frameskip of 4, corresponding to roughly 3 second clips on average. The ViT-L/16 and ViT-H/16 process the video at a spatial resolution of 224, while the ViT-H/16 384 uses an input resolution of 384; cf. Appendix C.", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv3.pdf" - }, - { - "text": "## A Supplementary materials for datasets\n\n## A.1 All datasets\n\nTable 3 displays the size of each dataset along with the average number of tokens per sample and their references. The dataset's content was tokenized using cl100k\\_base encoding. For Retrieval, the two numbers refer to the queries and the documents. For Reranking, the three numbers refer to the queries, the pairs of queries with relevant documents and the pairs of queries with irrelevant ones, respectively. The pairs of queries and documents are obtained from the 90 documents extracted. For SummEvalFr , the three numbers refer to the texts, human and machine summaries, respectively.\n\nFigure 3 represents the semantic similarity between each dataset. The methodology was as follows: 90 random samples per dataset are embedded using the multilingual-e5-large model. The embeddings of each dataset's samples are averaged. The similarity between each dataset is then calculated using cosine similarity as in (Muennighoff et al., 2022).\n\nWe complement this analysis by observing the dataset's clouds of embedding in a 2D plane using PCA in Figure 4.\n\n## A.2 Created datasets\n\nSyntec Figure 5 shows an extract from the Syntec dataset with a document and a query relative to this document.\n\nHAL Figure 6 is an extract from the HAL dataset. Table 4 lists the distribution of classes ( domain field) for the HAL dataset on raw subset and mteb\\_eval subset, which is used for MTEB evaluation. Labels descriptions can be found at this URL: https://api.archivesouvertes.fr/ref/domain/?q=*:*&rows=393 or in Table 4. 
After pre-processing, mteb\\_eval covers titles from 10 domains as classes with less than 500 samples were removed. In the MTEB evaluation subset of the dataset, titles composed of 2 words or less have been removed (371 samples), resulting in an average word count of 13 . 4 . Figure 7 shows the word count distribution per title. Furthermore, the dataset has been cleaned up by manually removing all non-French titles. Additionally, it can be observed in Table 4 that in the original raw dataset, the shs and sdv classes represent by far the majority of the dataset samples with respectively 58706 samples (73%) and 11049 samples (13%). In order to\n\nmitigate the class imbalance while preserving the majority of those classes, they have been randomly subsampled to 6701 and 4803 samples. Furthermore, baseline models have been trained and tested to assess the usability of this dataset in other tasks, such as classification and topic modeling. Table 5 shows the results obtained.\n\nSummEvalFr Extracts of humans and machine summaries translated in French from SummEvalFr and the original ones in English from SummEval (Fabbri et al., 2021) are shown in Figure 9. As explained in section 3.1.3, we use a LLM to evaluate the quality of translations for human summaries, we provide the prompt used with GPT-4 for this evaluation in Figure 8.\n\nTable 6 shows the distribution of ratings given by the LLM. With the scale being 10, we manually verify random samples rated above 9. We verify all samples with ratings under 9 and those with no provided rating (N/A) due to the triggering of the OpenAI content management policy. The LLM suggests that 60 samples are not correctly translated. 
These were verified manually, and after checking, less than 10 samples only needed to be corrected.\n\n## B Supplementary materials for correlation analysis\n\nThis section presents various correlations computed based on the model results on the proposed benchmark.\n\nFigure 10 represents cross-correlations between models' performances and their studied characteristics as a heatmap.\n\nFigure 11 represents the Spearman correlations in terms of performance across models.\n\nFigure 12 represents the Spearman correlations in terms of performance across datasets.\n\n## C Supplementary materials for models\n\nWe present in this section the model characteristics we collected for the 46 evaluated models.", - "page_start": 11, - "page_end": 11, - "source_file": "arxiv4.pdf" - }, - { - "text": "Table 3 Average Pooling vs. Adaptive Pooling. We pool the feature map output by the frozen V-JEPA encoder using an attentive probe, which is then fed into a linear classifier for downstream supervised tasks (K400 and SSv2). We evaluate two pooling strategies: 1) average pooling (Avg.), and attentive pooling (Att.). Results are reported using a single center view. Using adaptive pooling with a crossattention layer leads to improvements of +17 . 3 points on K400 and +16 . 1 points on SSv2.\n\n| | | Frozen Evaluation | Frozen Evaluation | Frozen Evaluation | Frozen Evaluation |\n|--------|----------|---------------------|---------------------|------------------------|------------------------|\n| | | K400 (16 × 1 × 1) | K400 (16 × 1 × 1) | SSv2 (16 × 1 × 1) Att. | SSv2 (16 × 1 × 1) Att. |\n| Method | Arch. | Avg. | Att. | Avg. | |\n| V-JEPA | ViT-L/16 | 56.7 | 73.7 | 50.1 | 66.2 |\n\nhas been critical for enabling the surge of advancements in other modalities, such as text and images (Kaplan et al., 2020; Cherti et al., 2023). We investigate whether a similar trend holds for video data. 
To control for the possible confounding variable of compute budget, we pretrain all models in Table 2 for 90K iterations using a batch-size of 3072. We report downstream results on K400, SSv2, and IN1K using a frozen backbone with an attentive probe, and report top-1 accuracy using a single center view.\n\nTable 2 shows that average performance across tasks monotonically increases as we increase the size of the pretraining dataset, but the best task-specific performance is obtained by independently selecting the pretraining data for each specific downstream task. For instance, the L/16 obtains its best SSv2 performance when pretrained on K710+SSv2, its best K400 performance when pretrained only on K710, and its best IN1K performance when pretrained only on K710+HT. The best average performance across all tasks is achieved by pretraining VideoMix2M, which combines all the data sources. Similarly, the H/16 pretrained on K710+SSv2 achieves a greater K400 score than the H/16 pretrained on VideoMix2M, however, the top performing H/16 on average is pretrained on VideoMix2M.\n\n## 4.3 Evaluation: Attentive Probing\n\nNext we explore the feature pooling strategy for applying the model's representations in downstream tasks. Since the prediction objective in equation (1) is unnormalized, there is no a priori reason for the encoder to yield a linearly separable subspace (Chen et al., 2020). Thus, rather than using a linear operation (averaging) to pool the features output of the frozen backbone, we explore a learnable non-linear pooling strategy. Specifically, when evaluating the frozen pretrained backbone on downstream tasks, we learn a cross-attention layer with a learnable query token. The output of the crossattention layer is then added back to the query token (residual connection), and then fed into two-layer MLP\n\nTable 4 Ablating Prediction Task. Models are ViT-L/16 networks pretrained on K710 and SSv2 and evaluated with an attentive probe using a single center view. 
The region x is sampled by masking spatio-temporal regions in the video; y is the mask complement. 1) random-tube[r]: x is obtained by masking a fraction r of tubes (spatial patches extended across the entire temporal duration) from the video, 2) causal multi-block[p]: x is restricted to the first p frames of the 16-frame video, which are then masked with a random set of spatio-temporal blocks, 3) multi-block : x is obtained by masking a random set of spatio-temporal blocks from the entire video. Best performance obtained by using multiblock masking.", - "page_start": 5, - "page_end": 5, - "source_file": "arxiv3.pdf" - }, - { - "text": "## 4 WhatMatters for Learning Representations from Video?\n\nIn this section we isolate the contributions of several design choices, including: a) the use of a feature prediction\n\nversus pixel prediction objective, b) the construction of the pretraining data distribution, c) the feature pooling strategy for leveraging the model's representations in downstream tasks, and d) the masking strategy, towards identifying: what to predict from what?\n\n## 4.1 Predicting Representations versus Pixels\n\nWe first ablate the effect of computing the prediction loss in representation space. We train a pair of ViT-L/16 models using either a V-JEPA feature prediction loss, or a mean-squared error loss with the normalized pixel values, as in masked autoencoders (He et al., 2021), and perform a sweep over the learning rate and weight decay schedules for both approaches. All models are pretrained on VideoMix2M for 90K iterations with a batch size of 3072 using multi-block masking. We examine performance on Kinetics-400 (K400), Something-Something-v2 (SSv2), and ImageNet-1K (IN1K), using a frozen backbone with an attentive probe, and report top-1 accuracy using a single center view. 
We also examine end-to-end fine-tuning performance of the models on Kinetics-400.\n\nResults of this comparison are reported in Table 1 and indicate that predicting in feature space provides a consistent performance improvement over pixel space prediction in both frozen evaluation of the video backbone, as well as end-to-end fine-tuning.\n\n## 4.2 Pretraining Data Distribution\n\nNext we study the impact of the pretraining data distribution in Table 2. Leveraging large scale datasets", - "page_start": 4, - "page_end": 4, - "source_file": "arxiv3.pdf" - }, - { - "text": "Table 1 Pixels vs. Featurized Targets. We ablate the effect of computing the prediction loss in feature space vs pixel space. All models are trained on VideoMix2M for 90K iterations with a batch size of 3072 using the multi-block prediction task. We examine downstream performance using a frozen backbone with attentive probing, and report top-1 accuracy using a single center view. We also examine end-to-end fine-tuning performance of the models on K400. Predicting in feature space provide a consistent improvement over pixel space prediction.Table 2 Pretraining Data Distribution. We pretrain all models for 90K iterations using a batch size of 3072, and evaluate downstream performance of the frozen backbones with an attentive probe using a single center view. Average performance across tasks increases with the pretraining dataset size.\n\n| | | Frozen Evaluation | Frozen Evaluation | Frozen Evaluation | Fine-Tuning |\n|----------|----------|---------------------|---------------------|---------------------|----------------------|\n| Target | Arch. 
| K400 (16 × 1 × 1) | SSv2 (16 × 1 × 1) | IN1K | K400-ft (16 × 5 × 3) |\n| Pixels | ViT-L/16 | 68.6 | 66.0 | 73.3 | 85.4 |\n| Features | ViT-L/16 | 73.7 | 66.2 | 74.8 | 85.6 |\n\n| | | | Frozen Evaluation | Frozen Evaluation | Frozen Evaluation | |\n|----------|------------|----------|---------------------|---------------------|---------------------|------|\n| Arch. | Data | #Samples | K400 (16 × 1 × 1) | SSv2 (16 × 1 × 1) | IN1K | Avg. |\n| ViT-L/16 | K710 | 700K | 75.8 | 63.2 | 73.7 | 70.9 |\n| ViT-L/16 | K710+SSv2 | 900K | 72.9 | 67.4 | 72.8 | 71.0 |\n| ViT-L/16 | K710+HT | 1900K | 74.5 | 64.2 | 74.8 | 71.1 |\n| | VideoMix2M | 2000K | 73.7 | 66.2 | 74.8 | 71.5 |\n| ViT-H/16 | K710+SSv2 | 900K | 75.7 | 66.8 | 73.7 | 72.0 |\n| ViT-H/16 | VideoMix2M | 2000K | 74.0 | 68.5 | 75.9 | 72.8 |\n\nEvaluations. Pretrained models are evaluated on downstream video and image tasks. On video tasks, we use a subset of the VideoGLUE benchmark (Yuan et al., 2023) to test for various capabilities; specifically, we investigate action recognition on Kinetics400 (K400) (Kay et al., 2017), motion classification on Something-Something-v2 (SSv2) (Goyal et al., 2017), and action localization on AVA (Gu et al., 2018). Action classification on Kinetics evaluates the appearance-based understanding of the model, as many action classes in the dataset can be inferred from the presence of specific objects in the video (Sevilla-Lara et al., 2021). Motion classification on Something-Something-v2 evaluates the temporal understanding of the model, as action classes in the dataset are decoupled from the appearance/presence of specific objects in the video (Goyal et al., 2017). Finally, action localization on AVA evaluates the ability of the model to understand and localize motions in the video. We follow standard practice and report accuracy on K400 and SSv2 by sampling several spatial and temporal views. 
For static image tasks, we explore object recognition on ImageNet (Russakovsky et al., 2015), scene classification on Places205 (Zhou et al., 2014), and fine-grained recognition on iNaturalist 2021 (Van Horn et al., 2018).\n\n## 4 WhatMatters for Learning Representations from Video?\n\nIn this section we isolate the contributions of several design choices, including: a) the use of a feature prediction", - "page_start": 4, - "page_end": 4, - "source_file": "arxiv3.pdf" - }, - { - "text": "Table 7 Low-Shot Frozen Evaluation. Comparing V-JEPA to other video models in frozen evaluation on Kinetics-400 and Something-Something-v2 as we vary the percentage of labeled examples from each dataset available for training the attentive probe. We train the probes in several low-shot settings: using either 5% of the train set, 10%, or 50%, and take 3 random splits in each setting to obtain more robust metrics, resulting in 9 different evaluation experiments for each model. We report the mean performances and standard deviation using the K400 and SSv2 validation sets. V-JEPA is more label-efficient than other models; specifically, decreasing the available number of labeled examples from each class increases the performance gap between V-JEPA and the baselines.\n\n| | | Frozen Evaluation | Frozen Evaluation | Frozen Evaluation | Frozen Evaluation | Frozen Evaluation | Frozen Evaluation |\n|------------|--------------|----------------------------|-----------------------------|----------------------------|----------------------------|-----------------------------|----------------------------|\n| | | K400 (16 × 8 × 3) | K400 (16 × 8 × 3) | K400 (16 × 8 × 3) | SSv2 (16 × 2 × 3) | SSv2 (16 × 2 × 3) | SSv2 (16 × 2 × 3) |\n| Method | Arch. 
| 5% ∼ 29 samples per class) | 10% ∼ 58 samples per class) | 50% 287 samples per class) | 5% ∼ 48 samples per class) | 10% ∼ 96 samples per class) | 50% 440 samples per class) |\n| MVD | ViT-L/16 | 62.6 ± 0.2 | 68.3 ± 0.2 | 77.2 ± 0.3 | 42.9 ± 0.8 | 49.5 ± 0.6 | 61.0 ± 0.2 |\n| VideoMAE | ViT-H/16 | 62.3 ± 0.3 | 68.5 ± 0.2 | 78.2 ± 0.1 | 41.4 ± 0.8 | 48.1 ± 0.2 | 60.5 ± 0.4 |\n| VideoMAEv2 | ViT-g/14 | 37.0 ± 0.3 | 48.8 ± 0.4 | 67.8 ± 0.1 | 28.0 ± 1.0 | 37.3 ± 0.3 | 54.0 ± 0.3 |\n| V-JEPA | ViT-H/16 | 67.0 ± 0.2 | 72.1 ± 0.1 | 80.2 ± 0.2 | 51.9 ± 0.3 | 57.5 ± 0.4 | 67.3 ± 0.2 |\n| V-JEPA | ViT-H/16 384 | 68.2 ± 0.2 | 72.8 ± 0.2 | 80.6 ± 0.2 | 54.0 ± 0.2 | 59.3 ± 0.5 | 67.9 ± 0.2 |\n\nlayer attentive probe, which can be further improved to 77 . 9 % using a two-layer attentive probe. More generally, we hypothesize that the datasets used to train V-JEPA and other video models are too constrained and lack the visual diversity of the internet-scale pretraining data used by the images models; as such, there is value in focusing future work on building diverse publicly available video datasets.\n\n## 5.3 Label-efficiency\n\nWe examine the label-efficiency of V-JEPA compared to other self-supervised video models by measuring the ability of the pretrained backbones to adapt to downstream tasks with few labels. Specifically, we investigate the performance of the frozen models on Kinetics-400 and Something-Something-v2 as we vary the percentage of labeled examples from each dataset available for training the attentive probe. We train the probes in several lowshot settings: using either 5% of the train set, 10%, or 50%, and take 3 random splits in each setting to obtain more robust metrics, resulting in 9 different evaluation experiments for each model. 
Table 7 reports the mean performances and standard deviation using the K400 and SSv2 validation sets.", - "page_start": 8, - "page_end": 8, - "source_file": "arxiv3.pdf" - }, - { - "text": "| V-JEPA | ViT-L/16 | 270M | 90K | 80.8 | 69.5 | 25.6 | 74.8 | 60.3 | 67.8 | 85.6 | 75.1 |", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv3.pdf" - }, - { - "text": "More details about this process are provided in the appendix A.2 along with some extracts in Figure 6. We make the dataset publicly available in both their raw and clean versions. We use this dataset in a clustering setup to cluster publications by their title and use the domain as ground truth. To ensure the quality of this dataset, we run 3 baseline models for classification: TF-IDF + SVM , a fine-tuned Camembert (Martin et al., 2019) and GPT-4 leveraging In-Context Learning (ICL). Furthermore, we run one baseline model for topic modeling: Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and report scores in the appendix A.2.\n\n## 3.1.3 SummEvalFr (Summarization)\n\nThe original SummEval dataset (Fabbri et al., 2021) consists of 100 news articles from the CNN/Dai-\n\nlyMail dataset. Each article has 11 human-written summaries and 16 machine-generated summaries annotated by 8 people with a score for coherence, consistency, fluency, and relevance. We translated it from English to French using DeepL API 6 . Since MTEB evaluation is based on the embedding similarity between machine-generated and humangenerated summaries, we propose to compute the ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002) metrics between machine and human summaries for both French and English version. In Table 2, we report the average of the scores as well as their correlations between the two languages. The correlation is high (above 0.7), showing that the word and n-gram overlap between human and machine summaries is highly preserved in the French version. 
One may argue that computing the metric on fully translated texts (human and machine summaries are both translated from English) may introduce biases and not assess the quality of the translations. For this purpose, we ensure the French human summaries are correctly translated from English. We use an LLM as-a-judge (Zheng et al.,", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv4.pdf" - }, - { - "text": "Multi-Mask Prediction. To increase the efficiency of V-JEPA , we use a multi-masking strategy (Caron et al., 2020; Baevski et al., 2022a), which enables us to amortize the cost of the target computation. As mentioned in Section 3, for a given video clip, we sample 2 different masks, short-range and long-range. While we need to forward propagate the x -encoder and predictor separately for each mask, we only need to compute the y -representation once.\n\n## C Pretraining details\n\nIn section, we report V-JEPA pretraining details. Table 8 summarizes the main hyperparameters used during pretraining.\n\nArchitectures. We use Vision Transformer (Dosovitskiy et al., 2020) (ViT) architectures for the x -encoder and y -encoder. We train three V-JEPA encoders: a ViT-L/16 224 , a ViT-H/16 224 and a ViT-H/16 384 . All three encoders take as input a short video clip of 16 frames with a temporal stride of 4 between consecutive frames. The subscripts, 224 and 384 , indicate the spatial resolution of the video clip. V-JEPA flattens the video clip into a sequence of non-overlapping spatio-temporal patches of size 16 × 16 × 2 (see Figure 7). For all three models, the predictor is designed as a narrow ViT architecture, consisting of 12 transformer blocks with an embedding dimension of 384. For simplicity, we keep the number of self-attention heads in the predictor equal to that of the backbone used for the context-encoder/target-encoder. V-JEPA is pretrained without using a [cls] token.\n\nOptimization. 
We use AdamW (Loshchilov and Hutter, 2017) to optimize the x -encoder and predictor weights. The ViT-L/16 224 and ViT-H/16 224 models use a batch size of 3072 while the ViT-H/16 384 uses a batch size of 2400 . Models are trained for a total of 90,000 iterations. The learning rate is linearly increased from 2 × 10 -4 to 6 . 25 × 10 -4 during the first 12 , 000 iterations of pretraining, and decayed to 10 -6 following a cosine schedule.", - "page_start": 16, - "page_end": 16, - "source_file": "arxiv3.pdf" - }, - { - "text": "FIG. 5: (Colour online) Density profiles for the situation where the substrate is covered by nanoparticles with average density ρ av n = 0 . 3 and with the liquid excluded from the region y < 0 . The top row shows the nanoparticle density profiles and bottom row the corresponding liquid density profiles at the times t/t l = 1000 (left), 10000 (middle) and 30000 (right), where t l = 1 /kTM nc l σ 2 . The parameters are kT/ε ll = 0 . 8 , ε nl /ε ll = 0 . 6 , ε nn = 0 , α = 0 . 2 M nc l σ 4 , M c l = 0 , ρ l ( t = 0) = 0 . 9 ± ξ (where ξ represents white noise of amplitude 0.05) and ( µ -µ coex ) /kT = -0 . 78 .\n\n\n\nThis theory allows us to study the time evolution of the evaporating film of nanoparticle suspension without some of the restrictions of the kinetic Monte Carlo model. Here, however, we illustrate its application in similar parameter regimes as used above for the KMC. We focus on two examples: (i) the spinodal dewetting of a initially flat film of nanoparticle suspension characterised by constant ρ l and ρ n (Fig. 4); and (ii) the retraction of a dewetting front that is unstable with respect to a fingering instability (Fig. 5).\n\nFig. 4 presents two pairs of snapshots from a purely evaporative dewetting process deep inside the parameter region of the phase diagram where spinodal dewetting occurs. For small times the film becomes unstable showing a typical spinodal labyrinthine pattern with a typical wavelength. 
The nanoparticles concentrate where the remaining liquid is situated. However, they are 'slow' in their reaction: when ρ l already takes values in the range 0.08 - 0.83, the nanoparticle concentration has only deviated by about 25% from its initial value. The film thins strongly forming many", - "page_start": 16, - "page_end": 16, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "PLAW-116publ30.pdf", - "query": "What is appropriate authority ?", - "target_page": 1, - "target_passage": "APPROPRIATE AUTHORITY.—The term ‘appropriate authority’ means the head of a Federal agency, the Architect of the Capitol, or other official authority responsible for the operation of a public building. ", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Figure 3-22 Completion window\n\n\n\n## 3.2 User and group administration\n\nWhen you design a Content Manager OnDemand system, you must determine the best way to implement the many authority structures that are available for users and administrators of your system. The span of control for the administration of the system must be considered with the level of user access to the data that is stored in the system. How many different administrators are required? Will all administrators have system administrator authority or will different administrators have different levels of authority? What is the most effective way to restrict a user's access to only the data that is necessary to do that user's job?\n\nThe answers to these questions depend on the size of the system, the degree of centralization to be exercised over system administration, and the nature of the data and the business needs of the users.\n\n## Centralized or decentralized\n\nIn a system design that exercises centralized control, one or a few administrators are granted system administrator authority. 
A centralized system typically is used when the number of reports and users to be added to the system is small. Centralized administration is also appropriate where resources are limited and only one person might have the skills and knowledge to perform the system administration tasks, or where one user group performs all of the administration tasks.\n\nIn a system design with decentralized control, different users are granted different levels of administrative authority. For example, you might have users that have the authority to create users and groups. Other users might have the authority to create application groups and folders, and others might be given full system administration authority.", - "page_start": 89, - "page_end": 89, - "source_file": "sg246915.pdf" - }, - { - "text": "The contractor warrants that the exclusive rights and the modes of exploitation may be exercised by the contracting authority on all parts of the results , be it via a transfer of ownership of the rights, on those parts which were specifically created by the contractor, or via a licence of the pre-existing rights, on those parts consisting of pre-existing materials .\n\nWhere pre-existing materials are inserted in the results , the contracting authority may accept reasonable restrictions impacting on the above list, provided that the said materials are easily identifiable and separable from the rest, that they do not correspond to substantial elements of the results , and that, should the need arise, satisfactory replacement solutions exist, at no additional costs to the contracting authority. In such case, the contractor will have to clearly inform the contracting authority before making such choice and the contracting authority has the right to refuse it.\n\n## II.13.4. 
Identification of pre-existing rights\n\nWhen delivering the results , the contractor must warrant that, for any use that the contracting authority may envisage within the limits set in this FWC, the newly created parts and the pre-existing material incorporated in the results are free of claims from", - "page_start": 24, - "page_end": 24, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "com m unication be to the public generally or to any person or class of persons) and freedom from interference w ith his or her correspondence.\n\n - (2) N othing contained in or done under the authority of any law shall be held to be inconsistent w ith or in contravention of this section to the extent that the law in question m akes provision-\n - ( a ) that is reasonably required in the interests of defence, public safety, public order, public m orality or public health; or\n - ( b ) that is reasonably required for the purpose of protecting the reputations, rights and freedom s of other persons or the private lives of persons concerned in legal proceedings, preventing the disclosure of inform ation received in confidence, m aintaining the authority and independence of the courts, regulating educational institutions in the interests of persons receiving instruction therein, or regulating the technical adm inistration or the technical operation of telephony, telegraphy, posts, w ireless, broadcasting or television; or\n - ( c ) that im poses restrictions upon public officers, em ployees of local governm ent bodies, or teachers,\n\nand except so far as that provision or, as the case m ay be, the thing done under the authority thereof is show n not to be reasonably justifiable in a dem ocratic society.\n\n## 13. Protection of freedom of assem bly and association", - "page_start": 11, - "page_end": 11, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "## II.13.7. 
Moral rights of creators\n\nBy delivering the results , the contractor warrants that the creators will not object to the following on the basis of their moral rights under copyright:\n\n - (a) that their names be mentioned or not mentioned when the results are presented to the public;\n - (b) that the results be divulged or not after they have been delivered in their final version to the contracting authority;\n - (c) that the results be adapted, provided that this is done in a manner which is not prejudicial to the creator 's honour or reputation.\n\nIf moral rights on parts of the results protected by copyright may exist, the contractor must obtain the consent of creators regarding the granting or waiver of the relevant moral rights in accordance with the applicable legal provisions and be ready to provide documentary evidence upon request.\n\n## II.13.8. Image rights and sound recordings\n\nIf natural persons appear in a result or their voice or any other private element is recorded in a recognisable manner, the contractor must obtain a statement by these persons (or, in the case of minors, by the persons exercising parental authority) giving their permission for the described use of their image, voice or private element and, on request, submit a copy of the permission to the contracting authority. The contractor must take the necessary measures to obtain such consent in accordance with the applicable legal provisions.\n\n## II.13.9. Copyright notice for pre-existing rights\n\nWhen the contractor retains pre-existing rights on parts of the results , reference must be inserted to that effect when the result is used as set out in Article I.10.1, with the following disclaimer: '© - year - European Union. All rights reserved. Certain parts are licensed under conditions to the EU', or with any other equivalent disclaimer as the contracting authority may consider best appropriate, or as the parties may agree on a case-by-case basis. 
This does not apply where inserting such reference would be impossible, notably for practical reasons.\n\n## II.13.10. Visibility of ECHA funding and disclaimer\n\nWhen making use of the results , the contractor must declare that they have been produced under a contract with the contracting authority and that the opinions expressed are those of the contractor only and do not represent the contracting authority's official position. The contracting authority may waive this obligation in writing or provide the text of the disclaimer.\n\n## II.14. Force majeure\n\n - II.14.1 If a party is affected by force majeure , it must immediately notify the other party, stating the nature of the circumstances, their likely duration and foreseeable effects.\n - II.14.2 A party is not liable for any delay or failure to perform its obligations under the FWC if that delay or failure is a result of force majeure . If the contractor is unable to fulfil its contractual obligations owing to force majeure , it has the right to remuneration only for the services actually provided.", - "page_start": 26, - "page_end": 26, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "entities working for it or cooperating with it, including contractors and subcontractors, whether legal or natural persons, but only for the purpose of their mission for the contracting authority;\n\n - (b) if the result is a \"document\" such as a report or a study, and it is meant to be published, the existence of pre-existing materials in the result may not prevent the publication of the document, its translation or its \"reuse\", it being understood however that the \"reuse\" may only be made of the result as a whole and not of the pre-existing materials taken separately from the result ; for the sake of this provision, \"reuse\" and \"document\" have the meaning given by the Commission Decision of 12 December 2011 on the reuse of Commission documents (2011/833/EU).\n\nAll pre-existing rights are 
licensed to the contracting authority from the moment the results are delivered and approved by the contracting authority.\n\nThe licensing of pre-existing rights to the contracting authority under this FWC covers all territories worldwide and is valid for the duration of intellectual property rights protection.\n\nThe payment of the price as set out in the specific contracts is deemed to also include any fees payable to the contractor in relation to the licensing of pre-existing rights to the contracting authority, including for all forms of exploitation and of use of the results .\n\nWhere implementation of the FWC requires that the contractor uses pre-existing materials belonging to the contracting authority, the contracting authority may request that the contractor signs an adequate licence agreement. Such use by the contractor will not entail any transfer of rights to the contractor and is limited to the needs of this FWC.\n\n## II.13.3. Exclusive rights\n\nThe Contracting Authority acquires the following exclusive rights:", - "page_start": 23, - "page_end": 23, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "- II.4.4 The contractor must obtain any permit or licence required in the State where the services are to be provided.\n - II.4.5 All periods specified in the FWC are calculated in calendar days, unless otherwise specified.\n - II.4.6 The contractor must not present itself as a representative of the contracting authority and must inform third parties that it is not part of the European public service.\n - II.4.7 The contractor is responsible for the personnel who carry out the services and exercises its authority over its personnel without interference by the contracting authority. 
The contractor must inform its personnel that:\n - (a) they may not accept any direct instructions from the contracting authority; and\n - (b) their participation in providing the services does not result in any employment or contractual relationship with the contracting authority.\n - II.4.8 The contractor must ensure that the personnel implementing the FWC and any future replacement personnel possess the professional qualifications and experience required to provide the services, as the case may be on the basis of the selection criteria set out in the tender specifications.\n - II.4.9 At the contracting authority's reasoned request, the contractor must replace any member of personnel who:\n - (a) does not have the expertise required to provide the services; or\n - (b) has caused disruption at the premises of the contracting authority.\n\nThe contractor bears the cost of replacing its personnel and is responsible for any delay in providing the services resulting from the replacement of personnel .\n\n - II.4.10 The contractor must record and report to the contracting authority any problem that affects its ability to provide the services. The report must describe the problem, state when it started and what action the contractor is taking to resolve it.\n\n## II.5. Communication between the parties\n\n## II.5.1. 
Form and means of communication\n\nAny communication of information, notices or documents under the FWC must:\n\n - (a) be made in writing in paper or electronic format in the language of the contract;\n - (b) bear the FWC number and, if applicable, the specific contract number;\n - (c) be made using the relevant communication details set out in Article I.8; and\n - (d) be sent by mail, email or, for the documents specified in the special conditions, via e-PRIOR .\n\nIf a party requests written confirmation of an e-mail within a reasonable time, the other party must provide an original signed paper version of the communication as soon as possible.", - "page_start": 15, - "page_end": 15, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "found to be in a situation provided for in points (d) and (e) of Article II.18.1.\n\n## II.11. Amendments\n\n - II.11.1 Any amendment to the FWC or a specific contract must be made in writing before all contractual obligations have been fulfilled. A specific contract does not constitute an amendment to the FWC.\n - II.11.2 Any amendment must not make changes to the FWC or a specific contract that might alter the initial conditions of the procurement procedure or result in unequal treatment of tenderers or contractors.\n\n## II.12. Assignment\n\n - II.12.1 The contractor must not assign any of the rights and obligations arising from the FWC, including claims for payments or factoring, without prior written authorisation from the contracting authority. In such cases, the contractor must provide the contracting authority with the identity of the intended assignee.\n - II.12.2 Any right or obligation assigned by the contractor without authorisation is not enforceable against the contracting authority.\n\n## II.13. Intellectual property rights\n\n## II.13.1. 
Ownership of the rights in the results\n\nThe contracting authority acquires irrevocably worldwide ownership of the results and of all intellectual property rights on the newly created materials produced specifically for the contracting authority under the FWC and incorporated in the results , without prejudice however to the rules applying to pre-existing rights on pre-existing materials , as per Article II.13.2.\n\nThe intellectual property rights so acquired include any rights, such as copyright and other intellectual or industrial property rights, to any of the results and in all technological solutions and information created or produced by the contractor or by its subcontractor in implementation of the FWC . The contracting authority may exploit and use the acquired rights as stipulated in this FWC. The contracting authority acquires all the rights as from the moment the contractor has created the results .\n\nThe payment of the price includes any fees payable to the contractor about the acquisition of ownership of rights by the contracting authority including for all modes of exploitation and of use of the results .\n\n## II.13.2. Licensing rights on pre-existing materials\n\nUnless provided otherwise in the special conditions, the contracting authority does not acquire ownership of pre-existing rights under this FWC.\n\nThe contractor licenses the pre-existing rights on a royalty-free, non-exclusive and irrevocable basis to the contracting authority, which may use the pre-existing materials for all the modes of exploitation set out in this FWC or in specific contracts. 
Unless otherwise agreed, the licence is non-transferable and cannot be sub-licensed, except as provided hereafter:\n\n - (a) the pre-existing rights can be sub-licensed by the contracting authority to persons and", - "page_start": 22, - "page_end": 22, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "Responsibilities of management include -\n\n - · Implement the corporate strategy set by the Board;\n - · Achieve the performance targets set by the Board;\n - · Develop, implement and manage risk management and internal control frameworks;\n - · Develop, implement and update policies and procedures;\n - · Provide sufficient, relevant and timely information to the Board to enable the Board to effectively discharge its responsibilities; and\n - · Manage human, physical and financial resources to achieve the Company's objectives - in other words to run the day to day business in an effective way.\n\n## 1.2 Management Performance\n\nSundance's Chairman, with Non-Executive Director input, is responsible for providing feedback to the MD on his performance assessed against the responsibilities mentioned above. The MD, with Chairman and Non-Executive Directors input, is responsible for providing feedback to senior executives and assessing their performance against the responsibilities mentioned above.\n\nDuring fiscal year 2014, an annual performance evaluation of senior executives was completed in line with the Company's incentive compensation policy as well as periodic one on one discussions carried out by the MD. 
Appropriate induction procedures are in place to allow new senior executives to participate fully and actively in management decision making at the earliest opportunity.\n\n## Principle 2: Structure the Board to Add Value\n\n## 2.1 Board Composition and Independence\n\nThe composition and operation of the Board is determined in accordance with the following requirements:\n\n - · The constitution of Sundance specifies that there must be a minimum of three directors and a maximum of ten. The Board may determine the size of the Board within those limits;\n - · It is the intention of the Board that its membership consists of a majority of independent directors who satisfy the criteria recommended by the ASX best practice corporate governance requirements, though it is recognized that this intention may be impractical to implement given the size and scope of the Company's business;\n - · The Chairman of the Board should be an independent director who satisfies the criteria for independence recommended by the ASX best practice corporate governance requirements; and\n - · The Board should, collectively, have the appropriate level of personal qualities, skills, experience, and time commitment to properly fulfil its responsibilities or have ready access to such skills where they are not available.", - "page_start": 49, - "page_end": 49, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "- II.6.3 The contractor is liable for any loss or damage caused to the contracting authority during or as a consequence of implementation of the FWC , including in the event of subcontracting, but only up to an amount not exceeding three times the total amount of the relevant specific contract. 
However, if the damage or loss is caused by the gross negligence or wilful misconduct of the contractor or of its personnel or subcontractors, as well as in the case of an action brought against the contracting authority by a third party for breach of its intellectual property rights, the contractor is liable for the whole amount of the damage or loss.\n - II.6.4 If a third party brings any action against the contracting authority in connection with the implementation of the FWC , including any action for alleged breach of intellectual property rights, the contractor must assist the contracting authority in the legal proceedings, including by intervening in support of the contracting authority upon request. If the contracting authority's liability towards the third party is established and that such liability is caused by the contractor during or as a consequence of the implementation of the FWC , Article II.6.3 applies.\n - II.6.5 If the contractor is composed of two or more economic operators (i.e. who submitted a joint tender), they are all jointly and severally liable to the contracting authority for the implementation of the FWC .\n - II.6.6 The contracting authority is not liable for any loss or damage caused to the contractor during or as a consequence of implementation of the FWC , unless the loss or damage was caused by wilful misconduct or gross negligence of the contracting authority.\n\n## II.7. Conflict of interest and professional conflicting interests\n\n - II.7.1 The contractor must take all the necessary measures to prevent any situation of conflict of interest or professional conflicting interest .\n - II.7.2 The contractor must notify the contracting authority in writing as soon as possible of any situation that could constitute a conflict of interest or a professional conflicting interest during the implementation of the FWC . 
The contractor must immediately take action to rectify the situation.\n\nThe contracting authority may do any of the following:", - "page_start": 18, - "page_end": 18, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "Responsibilities of the board include -\n\n - · Providing input into and final approval of management's development of corporate strategy and performance objectives;\n - · Monitoring senior executives' performance and implementation of the Company's strategy;\n - · Approving and monitoring the business plan, budget and corporate policies;\n - · Monitoring and the approval of financial and other reporting;\n - · Ensuring an effective system of internal controls exists and is functioning as required;\n - · Establishing Sundance's vision, mission, values and ethical standards as reflected in a Code of Conduct;\n - · Delegating an appropriate level of authority to management and approving any additional change to those delegations;\n - · Ensuring appropriate resources are available to senior executives;\n - · Appointment, succession, performance assessment, remuneration and dismissal of the Managing Director;\n - · Reviewing, ratifying and monitoring systems of risk management and internal control, codes of conduct, and legal compliance; and\n - · Approving and monitoring the progress of major capital expenditure, capital management, and acquisitions and divestitures.\n\nThe Board has delegated responsibility to the Managing Director ('MD') and the executive management team to manage the day-to-day operations and administration of the Company. In carrying out this delegation, the MD, supported by the senior executives, routinely reports to the Board regarding Sundance's progress on achieving both the short and long-term plans for the Company. 
The MD is accountable to the Board for the authority that is delegated by the Board.", "page_start": 48, "page_end": 48, "source_file": "ASX_SEA_2014.pdf" } ] }, { "references": { "source_file": "PLAW-116publ30.pdf", "query": "What criteria must a lactation room meet?", "target_page": 1, "target_passage": "LACTATION ROOM.—The term ‘lactation room’ means a hygienic place, other than a bathroom, that— ‘‘(A) is shielded from view; ‘‘(B) is free from intrusion; and ‘‘(C) contains a chair, a working surface, and, if the public building is otherwise supplied with electricity, an electrical outlet. ", "chunk_present": { "presence": true, "index": 0 } }, "top_chunk": [ { "text": "Public Law 116-30 116th Congress\n\n## An Act\n\nTo provide a lactation room in public buildings.\n\nBe it enacted by the Senate and House of Representatives of the United States of America in Congress assembled,\n\n## SECTION 1. SHORT TITLE.\n\nThis Act may be cited as the ''Fairness For Breastfeeding Mothers Act of 2019''.\n\n## SEC. 2. LACTATION ROOM IN PUBLIC BUILDINGS.\n\n(a) LACTATION ROOM IN PUBLIC BUILDINGS.-Chapter 33 of title 40, United States Code, is amended by adding at the end the following new section:\n\n## ''§ 3318. 
Lactation room in public buildings\n\n''(a) DEFINITIONS.-In this section:\n\n''(1) APPROPRIATE AUTHORITY.-The term 'appropriate authority' means the head of a Federal agency, the Architect of the Capitol, or other official authority responsible for the operation of a public building.\n\n''(2) COVERED PUBLIC BUILDING.-The term 'covered public building' means a public building (as defined in section 3301) that is open to the public and contains a public restroom, and includes a building listed in section 6301 or 5101.\n\n''(3) LACTATION ROOM.-The term 'lactation room' means a hygienic place, other than a bathroom, that-\n\n''(A) is shielded from view;\n\n''(B) is free from intrusion; and\n\n''(C) contains a chair, a working surface, and, if the public building is otherwise supplied with electricity, an electrical outlet.\n\n''(b) LACTATION ROOM REQUIRED.-Except as provided in subsection (c), the appropriate authority of a covered public building shall ensure that the building contains a lactation room that is made available for use by members of the public to express breast milk.\n\n''(c) EXCEPTIONS.-A covered public building may be excluded from the requirement in subsection (b) at the discretion of the appropriate authority if-\n\n''(1) the public building-\n\n''(A) does not contain a lactation room for employees who work in the building; and\n\n''(B) does not have a room that could be repurposed as a lactation room or a space that could be made private using portable materials, at a reasonable cost; or\n\nJuly 25, 2019\n\n[H.R. 866]\n\nFairness For Breastfeeding Mothers Act of 2019. 40 USC 101 note.\n\n40 USC 3318.", - "page_start": 0, - "page_end": 0, - "source_file": "PLAW-116publ30.pdf" - }, - { - "text": "- (a) at the end of sub-paragraph (c) omit 'or'; and\n - (b) at the end of sub-paragraph (d) insert-\n\n'; or\n\n - (e) of a reason relating to the incidence or transmission of coronavirus'.\n - 10. 
In regulation 13(3) (timescales for EHC plans), for '(d)' substitute '(e)'.\n - 11. After regulation 18 (circumstances in which a local authority must review an EHC plan) insert-\n\n## ' Circumstances in which it is not necessary to review an EHC plan\n\n - 18A. -(1) It is not necessary for a local authority to review an EHC plan in accordance with section 44(1) of the Act if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\n - (2) Where paragraph (1) applies, a local authority must instead conduct such reviews as soon as reasonably practicable.'.\n - 12. In regulation 22 (amending an EHC plan following a review), after paragraph (5) insert-\n - '(6) The local authority need not comply with the time limit referred to in paragraphs (3) and (4) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.'.\n - 13. In regulation 27(3) (amending or replacing an EHC plan following a re-assessment)-\n - (a) at the end of sub-paragraph (c) omit 'or'; and\n - (b) at the end of sub-paragraph (d) insert-\n\n'; or\n\n - (e) of a reason relating to the incidence or transmission of coronavirus'.\n - 14. In regulation 45 (unopposed appeals), after paragraph (7) insert-\n\n'(8) The local authority need not comply with the time limits specified in paragraph (3A) if it is impractical to do so because the circumstances referred to in regulation 10(4)(e) apply.'.\n\n## Amendment of the Special Educational Needs (Personal Budgets) Regulations 2014\n\n15. The Special Educational Needs (Personal Budgets) Regulations 2014( a ) are amended as follows.\n\n - 16. In regulation 2 (interpretation), at the appropriate place insert-\n - ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n - 17. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time period due to coronavirus exception\n\n - 2A. 
-(1) Where the coronavirus exception applies, the requirement for the local authority to review the making and use of direct payments within the first three months of them being made in regulation 11(2)(a) (monitoring and review of direct payments) is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- 23. In regulation 8(2) (duty to co-operate in a detained person's EHC needs assessment), at the end of sub-paragraph (d) insert-\n\n'; or\n\n - (e) of a reason relating to the incidence or transmission of coronavirus'.\n - 24. In regulation 10(4) (decision not to secure an EHC plan)-\n - (a) at the end of sub-paragraph (b) omit 'or'; and\n - (b) at the end of sub-paragraph (c) insert-\n\n'; or\n\n - (d) of a reason relating to the incidence or transmission of coronavirus'.\n - 25. In regulation 13(3) (timescales for EHC plans), for '(c)' substitute '(d)'.\n - 26. In regulation 29 (compliance with the orders of the First-tier Tribunal)-\n - (a) after paragraph (6) insert-\n - '(6A) The home authority need not comply with the time limits specified in paragraph (3) if it is impractical to do so because the circumstances referred to in regulation 10(4)(d) apply.'.\n - (b) in paragraph (7)(c) after '10(4)(a)' insert 'or (d)'.\n - 27. In regulation 30(7)(c) (unopposed appeals), after '10(4)(a)' insert 'or (d)'.\n\n## Amendment of the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017\n\n28. The Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017( a ) are amended as follows.\n\n - 29. In regulation 2 (interpretation), at the appropriate place insert-\n - ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n - 30. 
After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time periods due to coronavirus exception\n\n - 2A. -(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n - (2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n - (3) The following regulations are specified for the purposes of paragraphs (1) and (2)-\n - (a) regulation 6(3) and (6) (responding to health care recommendations); and\n - (b) regulation 7(1) and (4) (responding to social care recommendations).'.\n\nVicky Ford Parliamentary Under Secretary of State Department for Education\n\n28th April 2020", - "page_start": 4, - "page_end": 4, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "## Meaning of 'place'\n\n14. For the purposes of this Schedule the place referred to in paragraphs 8 to 13 means the room in the designated accommodation where P is staying and, if connected to the room where P is staying, the room of any person referred to in paragraph 11(a) (travelling companion), including any balcony, and does not include the communal areas or any garden, yard, passage, stair, garage, outhouse or appurtenance of the accommodation in which the place is situated.\n\n## Designations\n\n15. The Secretary of State must designate for the purposes of this Schedule-\n\n - (a) accommodation;\n - (b) transportation to the designated accommodation,\n\nand must publish details of the designations in such manner as appears to the Secretary of State to be appropriate.\n\n## Duties where P is a child\n\n16. 
If P is a child-\n\n - (a) any person who has custody or charge of P when P is travelling to England must ensure, so far as is reasonably practicable, that P complies with the obligations in paragraphs 5 and 6;\n - (b) any person who has custody or charge of P during P's period of self-isolation must ensure, so far as is reasonably practicable, that P self-isolates in accordance with this Schedule.\n\n## Person caring for P", - "page_start": 77, - "page_end": 77, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (2) The coronavirus exception applies where it is not reasonably practicable for the local authority to meet the requirement specified in regulation 11(2)(a) for a reason relating to the incidence or transmission of coronavirus.'.\n\n## Amendment of the Special Educational Needs and Disability (Detained Persons) Regulations 2015\n\n - 18. The Special Educational Needs and Disability (Detained Persons) Regulations 2015( a ) are amended as follows.\n - 19. In regulation 2(1) (interpretation), at the appropriate place insert-\n - ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n - 20. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time periods due to coronavirus exception\n\n - 2A. 
-(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n - (2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n - (3) The following regulations are specified for the purposes of paragraphs (1) and (2)-\n - (a) regulation 15(1) and (4) (needs assessments which are not completed);\n - (b) regulation 16(2), (3) and (4) (transfer of a kept EHC plan);\n - (c) regulation 17(1) and (2) (restriction on disclosure of EHC plans);\n - (d) regulation 19 (requirement to consider mediation);\n - (e) regulation 20(1) and (2) (where the appropriate person does not wish to or fails to pursue mediation);\n - (f) regulation 21 (mediation);\n - (g) regulation 24(1) and (3) (mediation certificate under section 55(5) of the Act);\n - (h) regulation 27(3) (steps to be taken by a home authority);\n - (i) regulation 29(2) and (6) (compliance with the orders of the First-tier Tribunal); and\n - (j) regulation 30(3) and (6) (unopposed appeals).'.\n - 21. In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert-\n - '(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.'.\n - 22. 
In regulation 5(4) (decision whether or not to conduct a detained person's EHC needs assessment)-\n - (a) at the end of sub-paragraph (b) omit 'or'; and\n - (b) at the end of sub-paragraph (c) insert-\n\n', or\n\n - (d) of a reason relating to the incidence or transmission of coronavirus'.", - "page_start": 3, - "page_end": 3, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- (iv) in the goods vehicle or a hotel, hostel or bed and breakfast accommodation while not undertaking the work described in that paragraph if P is travelling with another person in a goods vehicle with a sleeper cab.\n - (4) The address specified by P in the Passenger Locator Form pursuant to paragraph 2(a) of Schedule 6 must be-\n - (a) their home;\n - (b) the home of a friend or family member;\n - (c) a hotel, hostel, bed and breakfast accommodation, holiday apartment or home, campsite, caravan park or boarding house, canal boat or any other vessel;\n - (d) a military site or establishment;\n - (e) accommodation facilitated by the Secretary of State for the purposes of P's self-isolation;\n - (f) where P is an asylum seeker, accommodation provided or arranged under section 4, 95 or 98 of the Immigration and Asylum Act 1999; or\n - (g) where P is a person described in paragraph 9(1) of Schedule 10 to the Immigration Act 2016 (powers of Secretary of State to enable person to meet bail conditions), accommodation provided or arranged under that paragraph.\n - (5) More than one address may be specified as the place at which P intends to self-isolate in the Passenger Locator Form where-\n - (a) a legal obligation requires P to change addresses; or\n - (b) it is necessary for P to stay overnight at an address on their arrival in England before travelling directly to another address at which they will be self-isolating.\n - (6) In paragraph (3)(a)(ii) 'a place at which they intend to self-isolate while in England' means-\n - (a) where the person has completed a Passenger Locator Form, at 
an intended place of self-isolation specified in that form;\n - (b) where the person has completed a form equivalent to a Passenger Locator Form pursuant to an enactment in Scotland, Wales or Northern Ireland, at an intended place of self-isolation specified in that form;\n - (c) in any other case at a place described in paragraph (4)(a) to (c).\n - (7) P must, on their arrival in England, travel directly to the place at which they are to self-isolate, and must then self-isolate until whichever is the earlier of-\n - (a) the end of the 10th day after the day on which they arrived in England or, if later, the end of any period that applies by virtue of paragraph 2 or 3 of Schedule 8;\n - (b) their departure from England; or", "page_start": 13, "page_end": 13, "source_file": "uksi_20210582_en.pdf" }, { "text": "To successfully perform the configuration backup, the following prerequisites must be met:", "page_start": 704, "page_end": 704, "source_file": "sg247938.pdf" }, { "text": "time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n - (2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n - (3) The following regulations are specified for the purposes of paragraphs (1) and (2)-\n - (a) regulation 15(2) (transfer of EHC plans) (in relation to the second reference to 15 working days), (4), (5), (7) (in relation to the second reference to 15 working days) and (8);\n - (b) regulation 16(2) and (3) (change of responsible commissioning body);\n - (c) regulation 20(9) and (10) (review where the child or young person attends a school or other institution);\n - (d) regulation 21(7), (8) and (9) (review of EHC plan where the child or young person does not attend a school or other institution);\n - (e) regulation 25(1) 
(notification of decision whether it is necessary to re-assess educational, health care and social care provision);\n - (f) regulation 27(4) (amending or replacing an EHC plan following a re-assessment);\n - (g) regulation 33 (requirement to consider mediation);\n - (h) regulation 34(1) and (2) (where a parent or young person does not wish to or fails to pursue mediation);\n - (i) regulation 35(2), (3) and (4) (mediation - health care issues);\n - (j) regulation 36(2) (mediation - no health care issues);\n - (k) regulation 39(1) and (3) (mediation certificate under section 55(5));\n - (l) regulation 42(3) and (4) (steps to be taken by a local authority);\n - (m) regulation 44(2)(d), (e), (f) and (h) (compliance with the orders of the First-tier Tribunal);\n - (n) regulation 45(4), (5) and (6A) (unopposed appeals);\n - (o) regulation 47 (disclosure of EHC plans in relation to higher education); and\n - (p) regulation 56(3) (publication of comments on the local offer).'.\n - 6. In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert-\n - '(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.'.\n - 7. In regulation 5(4) (decision whether or not to conduct an EHC needs assessment)-\n - (a) at the end of sub-paragraph (c) omit 'or'; and\n - (b) at the end of sub-paragraph (d) insert-\n\n'; or\n\n - (e) of a reason relating to the incidence or transmission of coronavirus'.\n - 8. In regulation 8(2) (duty to co-operate in EHC needs assessments)-\n - (a) at the end of sub-paragraph (b) omit 'or'; and\n - (b) at the end of sub-paragraph (c) insert-\n\n'; or\n\n - (d) of a reason relating to the incidence or transmission of coronavirus'.\n - 9. 
In regulation 10(4) (decision not to secure an EHC plan)-", - "page_start": 1, - "page_end": 1, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "transportation, treatment, storage and disposal of hazardous and non-hazardous solid waste, and require states to develop programs to ensure the safe disposal of solid waste in sanitary landÑlls.\n\nSubtitle D of RCRA establishes a framework for regulating the disposal of municipal solid waste. Regulations under Subtitle D currently include minimum comprehensive solid waste management criteria and guidelines, including location restrictions, facility design and operating criteria, closure and post-closure requirements, Ñnancial assurance standards, groundwater monitoring requirements and corrective action standards, many of which had not commonly been in eÅect or enforced in the past in connection with municipal solid waste landÑlls. Each state was required to submit to the U.S. EPA a permit program designed to implement Subtitle D regulations by April 9, 1993. All of the states in which we operate have implemented permit programs pursuant to RCRA and Subtitle D. These state permit programs may include landÑll requirements which are more stringent than those of Subtitle D.\n\nAll of our planned landÑll expansions or new landÑll development projects have been engineered to meet or exceed Subtitle D requirements. Operating and design criteria for existing operations have been modiÑed to comply with these new regulations. Compliance with Subtitle D regulations has resulted in increased costs and may in the future require substantial additional expenditures in addition to other costs normally associated with our waste management activities.\n\n - (2) The Comprehensive Environmental Response, Compensation and Liability Act of 1980, as amended. CERCLA, among other things, provides for the cleanup of sites from which there is a release or threatened release of a hazardous substance into the environment. 
CERCLA may impose strict joint and several liability for the costs of cleanup and for damages to natural resources upon current owners and operators of the site, parties who were owners or operators of the site at the time the hazardous substances were disposed of, parties who transported the hazardous substances to the site and parties who arranged for the disposal of the hazardous substances at the site. Under the authority of CERCLA and its implementing regulations, detailed requirements apply to the manner and degree of investigation and remediation of facilities and sites where hazardous substances have been or are threatened to be released into the environment. Liability under CERCLA is not dependent upon the existence or disposal of only \"\"hazardous wastes'' but can also be based upon the existence of small quantities of more than 700 \"\"substances'' characterized by the U.S. EPA as \"\"hazardous,'' many of which may be found in common household waste.\n\nAmong other things, CERCLA authorizes the federal government to investigate and remediate sites at which hazardous substances have been or are threatened to be released into the environment or to order (or oÅer an opportunity to) persons potentially liable for the cleanup of the hazardous substances to do so. In addition, the U.S. 
EPA has established a National Priorities List of sites at which hazardous substances have been or are threatened to be released and which require investigation or cleanup.", - "page_start": 17, - "page_end": 17, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "Responsibilities of management include -\n\n - · Implement the corporate strategy set by the Board;\n - · Achieve the performance targets set by the Board;\n - · Develop, implement and manage risk management and internal control frameworks;\n - · Develop, implement and update policies and procedures;\n - · Provide sufficient, relevant and timely information to the Board to enable the Board to effectively discharge its responsibilities; and\n - · Manage human, physical and financial resources to achieve the Company's objectives - in other words to run the day to day business in an effective way.\n\n## 1.2 Management Performance\n\nSundance's Chairman, with Non-Executive Director input, is responsible for providing feedback to the MD on his performance assessed against the responsibilities mentioned above. The MD, with Chairman and Non-Executive Directors input, is responsible for providing feedback to senior executives and assessing their performance against the responsibilities mentioned above.\n\nDuring fiscal year 2014, an annual performance evaluation of senior executives was completed in line with the Company's incentive compensation policy as well as periodic one on one discussions carried out by the MD. Appropriate induction procedures are in place to allow new senior executives to participate fully and actively in management decision making at the earliest opportunity.\n\n## Principle 2: Structure the Board to Add Value\n\n## 2.1 Board Composition and Independence\n\nThe composition and operation of the Board is determined in accordance with the following requirements:\n\n - · The constitution of Sundance specifies that there must be a minimum of three directors and a maximum of ten. 
The Board may determine the size of the Board within those limits;\n - · It is the intention of the Board that its membership consists of a majority of independent directors who satisfy the criteria recommended by the ASX best practice corporate governance requirements, though it is recognized that this intention may be impractical to implement given the size and scope of the Company's business;\n - · The Chairman of the Board should be an independent director who satisfies the criteria for independence recommended by the ASX best practice corporate governance requirements; and\n - · The Board should, collectively, have the appropriate level of personal qualities, skills, experience, and time commitment to properly fulfil its responsibilities or have ready access to such skills where they are not available.", - "page_start": 49, - "page_end": 49, - "source_file": "ASX_SEA_2014.pdf" - } - ] - }, - { - "references": { - "source_file": "PLAW-116publ30.pdf", - "query": "When take effect the Fairness For Breastfeeding Mothers Act ?", - "target_page": 2, - "target_passage": "The amendments made by this section shall take effect 1 year after the date of the enactment of this Act. ", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Our policies, procedures and practices and the technology we implement are designed to comply with federal, state, local and foreign laws, rules and regulations, including those imposed by the SEC and other regulatory agencies, the marketplace, the banking industry and foreign countries, as well as responsible business, social and environmental practices, all of which may change from time to time. Significant legislative changes, including those that relate to employment matters and health care reform, could impact our relationship with our workforce, which could increase our expenses and adversely affect our operations. 
In addition, if we fail to comply with applicable laws and regulations or implement responsible business, social, environmental and supply chain practices, we could be subject to damage to our reputation, class action lawsuits, legal and settlement costs, civil and criminal liability, increased cost of regulatory compliance, restatements of our financial statements, disruption of our business and loss of customers. Any required changes to our employment practices could result in the loss of employees, reduced sales, increased employment costs, low employee morale and harm to our business and results of operations. In addition, political and economic factors could lead to unfavorable changes in federal, state and foreign tax laws, which may increase our tax liabilities. An increase in our tax liabilities could adversely affect our results of operations. We are also regularly involved in various litigation matters that arise in the ordinary course of business. Litigation or regulatory developments could adversely affect our business and financial condition.\n\n## We continue to face uncertainties due to financial services industry regulation and supervision that could have an adverse affect on our operations.\n\nFederal and state regulation and supervision of the financial industry has increased in recent years due to implementation of consumer protection and financial reform legislation such as the Credit Card Accountability Responsibility and Disclosure Act of 2009 ('CARD Act') and the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 ('Financial Reform Act'). The Financial Reform Act significantly restructured regulatory oversight and other aspects of the financial industry, created the Consumer Financial Protection Bureau ('CFPB') to supervise and enforce consumer lending laws and regulations, and expanded state authority over consumer lending. 
The CARD Act included new and revised rules and restrictions on credit card pricing, finance charges and fees, customer billing practices and payment application. We anticipate more regulation and interpretations of the new rules to continue, and, depending on the nature and extent of these new regulations and interpretations, we may be required to make changes to our credit card practices and systems, which could adversely impact the revenues and profitability of our Credit segment. In addition, we operate in a regulated environment where financial supervisory agencies provide oversight over our activities. Compliance with applicable laws and regulations could limit or restrict our activities and the conduct of our business and enforcement actions by those agencies for failure to comply could have an adverse impact on us.", - "page_start": 20, - "page_end": 20, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "\n\ndkrause on DSKBC28HB2PROD with PUBLAWS\n\nVerDate Sep 11 2014\n\n15:46 Aug 08, 2019\n\nJkt 089139\n\nPO 00030\n\nFrm 00001\n\nFmt 6580\n\nSfmt 6581\n\nE:\\PUBLAW\\PUBL030.116\n\nPUBL030\n\nPublic Law 116-30 116th Congress\n\n## An Act\n\nTo provide a lactation room in public buildings.\n\nBe it enacted by the Senate and House of Representatives of the United States of America in Congress assembled,\n\n## SECTION 1. SHORT TITLE.\n\nThis Act may be cited as the ''Fairness For Breastfeeding Mothers Act of 2019''.\n\n## SEC. 2. LACTATION ROOM IN PUBLIC BUILDINGS.\n\n(a) LACTATION ROOM IN PUBLIC BUILDINGS.-Chapter 33 of title 40, United States Code, is amended by adding at the end the following new section:\n\n## ''§ 3318. 
Lactation room in public buildings\n\n''(a) DEFINITIONS.-In this section:\n\n''(1) APPROPRIATE AUTHORITY.-The term 'appropriate authority' means the head of a Federal agency, the Architect of the Capitol, or other official authority responsible for the operation of a public building.\n\n''(2) COVERED PUBLIC BUILDING.-The term 'covered public building' means a public building (as defined in section 3301) that is open to the public and contains a public restroom, and includes a building listed in section 6301 or 5101.\n\n''(3) LACTATION ROOM.-The term 'lactation room' means a hygienic place, other than a bathroom, that-\n\n''(A) is shielded from view;\n\n''(B) is free from intrusion; and\n\n''(C) contains a chair, a working surface, and, if the public building is otherwise supplied with electricity, an electrical outlet.\n\n''(b) LACTATION ROOM REQUIRED.-Except as provided in subsection (c), the appropriate authority of a covered public building shall ensure that the building contains a lactation room that is made available for use by members of the public to express breast milk.\n\n''(c) EXCEPTIONS.-A covered public building may be excluded from the requirement in subsection (b) at the discretion of the appropriate authority if-\n\n''(1) the public building-\n\n''(A) does not contain a lactation room for employees who work in the building; and\n\n''(B) does not have a room that could be repurposed as a lactation room or a space that could be made private using portable materials, at a reasonable cost; or\n\nJuly 25, 2019\n\n[H.R. 866]\n\nFairness For Breastfeeding Mothers Act of 2019. 40 USC 101 note.\n\n40 USC 3318.", - "page_start": 0, - "page_end": 0, - "source_file": "PLAW-116publ30.pdf" - }, - { - "text": "the offspring 12 . Human studies have revealed GMV reductions in areas of the brain important for social cognition and the magnitude of these changes corresponds with increased parental attachment 13 . 
Deeper examination of cellular and systems-level mechanisms will improve our understanding of how pregnancy remodels specific circuits to promote maternal behavior.\n\nAlthough studied to a lesser degree, ties between maternal behavior and white matter microstructure (particularly connectivity between temporal and occipital lobes) have been noted 31 . Here we reveal pronounced GMV changes in regions within sensory, attention and default mode networks over the gestational window. In parallel, we observed increased anisotropy in white matter tracts that facilitate communication between emotional and visual processing hubs 37-39 , including the inferior longitudinal fasciculus and inferior fronto-occipital fasciculus. Pinpointing the synchrony of gray and white matter changes that unfold in the maternal brain could be key to understanding the behavioral adaptions that emerge during and after pregnancy, such as honing the brain's visual and auditory responses to infant cues and eliciting maternal behavior. Research into other major transition periods supports this idea. For instance, adolescence is a dynamic period characterized by region-specific, nonlinear decreases in GMV and increases in WMV, maturational brain changes that are tied to gains in executive function and social cognition 40 . For both adolescence 41 and matrescence, the considerable rise in steroid hormone production appears to remodel the brain (see ref. 25 for comparative analysis), promoting a suite of behaviors adaptive to that life stage. How specific neural changes give rise to specific behavioral adaptations has yet to be fully explored with respect to human pregnancy.\n\nThis precision imaging study mapped neuroanatomical changes across pregnancy in a single individual, precluding our ability to generalize to the broader population. 
To benchmark our findings, we compared the magnitude of GMV changes observed throughout pregnancy against data from nonpregnant individuals sampled over a similar time course. Doing so provided compelling evidence that pregnancy-related neuroanatomical shifts far exceed normative day-to-day brain variability and measurement error. Evidence suggests that white matter microstructure remains fairly stable over a six-month period 42 , but more studies are needed to compare the degree of white matter changes observed during pregnancy to normative change over time. Further, sampling larger cohorts of women will generate much-needed normative models of brain change (akin to ref. 43) throughout pregnancy to establish what constitutes a typical degree of neuroanatomical change expected during gestation and postpartum recovery.", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed4.pdf" - }, - { - "text": "## Neuroanatomical changes observed over the course of a human pregnancy\n\nReceived: 23 August 2023\n\nAccepted: 29 July 2024\n\nPublished online: 16 September 2024\n\nCheck for updates\n\nLaura Pritschet 1 , Caitlin M. Taylor 1 , Daniela Cossio 2 , Joshua Faskowitz 3 , Tyler Santander 1 , Daniel A. Handwerker 3 , Hannah Grotzinger 1 , Evan Layher 1 , Elizabeth R. Chrastil 2,5 &\n\nEmily G. Jacobs 1,4,5\n\nPregnancy is a period of profound hormonal and physiological changes experienced by millions of women annually, yet the neural changes unfolding in the maternal brain throughout gestation are not well studied in humans. Leveraging precision imaging, we mapped neuroanatomical changes in an individual from preconception through 2 years postpartum. Pronounced decreases in gray matter volume and cortical thickness were evident across the brain, standing in contrast to increases in white matter microstructural integrity, ventricle volume and cerebrospinal /fluid, with few regions untouched by the transition to motherhood. 
This dataset serves as a comprehensive map of the human brain across gestation, providing an open-access resource for the brain imaging community to further explore and understand the maternal brain.\n\nWorldwide, nearly 85% of women experience one or more pregnancies in their lifetime 1 , with 140 million women becoming pregnant each year. Over an approximately 40-week gestational window, the maternal body undergoes profound physiological adaptations to support the development of the fetus, including increases in plasma volume, metabolic rate, oxygen consumption and immune regulation 2 . These rapid adaptations are initiated by 100-fold to 1,000-fold increases in hormone production, including estrogen and progesterone. These neuromodulatory hormones also drive significant reorganization of the central nervous system. Evidence from animal models and human studies converge on pregnancy as a period of remarkable neuroplasticity 3-10 (see ref. 10 for one of the earliest known observations). Gestational increases in steroid hormone synthesis drive neurogenesis, dendritic spine growth, microglial proliferation, myelination and astrocyte remodeling (for review, see ref. 11). These cellular changes are pronounced in brain circuits that promote maternal behavior. For example, Ammari et al. recently discovered that steroid hormones can fine-tune the response properties of galanin neurons in the rodent medial preoptic area of the hypothalamus (mPOA), leading to enhanced sensitivity in dams to sensory cues from newborn pups 12 .\n\nIn humans, reductions in gray matter volume (GMV) have been observed postpartum 13-16 , particularly in regions central to theory-of-mind processing 13 . These GMV changes persist at 6 years postpartum 17 and are traceable decades later 18,19 , underscoring the permanence of this major remodeling event. And yet the changes that occur within the maternal brain during gestation itself are virtually unknown (see ref. 
20 for early neuroimaging insight). A recent study by Paternina-Die et al. offers intriguing clues 21 . Women were scanned once in the third trimester and again in the postpartum period, revealing a reduction of cortical volume observable in the late pregnancy scan. These findings suggest that pregnancy is a highly dynamic period for neural remodeling, yet neuroscientists lack a detailed map of how the human brain changes throughout the gestational period.\n\nHere we conducted a precision imaging study of pregnancy in which a healthy 38-year-old primiparous woman underwent 26 magnetic resonance imaging (MRI) scans and venipuncture beginning 3 weeks preconception through 2 years postpartum. We observed widespread reductions in cortical GMV and cortical thickness (CT) occurring in step with advancing gestational week and the dramatic rise in sex hormone production. Remodeling was also evident within\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed4.pdf" - }, - { - "text": "Fig. 1 | Precision imaging reveals neuroanatomical changes throughout gestation. a , Standard medical demarcations for pregnancy stages (that is, trimesters) by gestation week (the image is created with BioRender.com). b , Steroid hormones increased significantly throughout pregnancy and dropped precipitously postpartum, as is characteristic of the prenatal and postnatal periods. c , A healthy 38-year-old primiparous woman underwent 26 scanning sessions from 3 weeks preconception through 2 years postpartum. Scans were distributed throughout preconception (four scans), first trimester (four scans), second trimester (six scans), third trimester (five scans) and postpartum (seven scans); tick marks indicate when major measures were collected and\n\n\n\ncolors denote pregnancy stage. The participant underwent IVF to achieve pregnancy, allowing for precise mapping of ovulation, conception and gestation week. d , Summary (that is, total) of brain measures throughout the experiment. 
Generalized additive models revealed GMV, CT and total brain volume decreased throughout pregnancy (see Methods for validation with cubic regression), with a slight recovery postpartum. Global QA, lateral ventricle and CSF volumes displayed nonlinear increases across gestation, with a notable rise in the second and third trimesters before dropping sharply postpartum. Shaded regions represent 95% confidence bands; solid lines indicate model fit; dashed line indicates parturition.\n\n## Discussion\n\nConverging evidence across mammalian species points to pregnancy as a remarkable period of neuroplasticity, revealing the brain's ability to undergo adaptive, hormonally-driven neuroanatomical changes beyond adolescence 13-15,20,21,24-26 . Investigations that compare women\n\nprepregnancy and then again postpartum provide the strongest evidence to date that the human brain undergoes such neural changes 11,27 . But what about pregnancy itself? Over what time course do anatomical changes in the maternal brain manifest? Are they tied to the substantial increase in sex hormone production? Here we begin to address these", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed4.pdf" - }, - { - "text": "Critically, dynamic neural changes occurred within the pregnancy window itself, a nuance not captured by studies limited to comparisons between prepregnancy and postpregnancy. For example, we observed large increases in white matter microstructural integrity (QA) throughout the first and second trimesters of pregnancy, but these measures fully returned to baseline values by the first postpartum scan. This pattern may explain why previous studies report no pregnancy-related differences in white matter tractography 14 . Other measures, such as GMV and CT, decreased throughout gestation and displayed only a modest rebound postpartum. 
These nonlinear patterns suggest that only quantifying prepregnancy and postpartum brain structure may\n\nPHC\n\n", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed4.pdf" - }, - { - "text": "subcortical structures, including the ventral diencephalon, caudate, thalamus, putamen and hippocampus. High-resolution imaging and segmentation of the medial temporal lobe (MTL) extend these findings further, revealing specific volumetric reductions within hippocampal subfields CA1, CA2/CA3 and parahippocampal cortex (PHC). In contrast to widespread decreases in cortical and subcortical GMV, correlational tractography analyses revealed nonlinear increases in white matter quantitative anisotropy (QA) throughout the brain-indicating greater tract integrity-as gestational week progressed. Together, these findings reveal the highly dynamic changes that unfold in a human brain across pregnancy, demonstrating a capacity for extensive neural remodeling well into adulthood.\n\n## Results\n\n## Serological evaluations\n\nSerological evaluations captured canonical hormone fluctuations characteristic of the prenatal, perinatal and postnatal periods (Fig. 1b). Serum hormone concentrations increased significantly over the course of pregnancy and dropped precipitously postpartum (preconception, estradiol (E) = 3.42 pg ml -1 and progesterone (P) = 0.84 ng ml -1 ; 3 weeks preparturition, E = 12,400 pg ml -1 and P = 103 ng ml -1 ; 3 months postparturition, E = 11.50 pg ml -1 and P = 0.04 ng ml -1 ).\n\n## Whole-brain dynamics from baseline through postpartum\n\nTo begin, we characterized broad neuroanatomical changes over the course of the entire experimental window (baseline-2 years postpartum, 26 scans; Fig. 1d). Generalized additive models revealed strong nonlinear (effective degrees of freedom > 3) relationships between weeks since conception and summary brain metrics. 
Total GMV ( F = 27.87, P < 0.001, deviance explained = 93.9%, R 2 adj = 0.91), summary CT ( F = 15.79, P < 0.001, deviance explained = 78.6%, R 2 adj = 0.75) and total brain volume ( F = 26.12, P < 0.001, deviance explained = 93.4%, R 2 adj = 0.90) linearly decreased during gestation and appeared to partially rebound postpartum. In contrast, global microstructural integrity (QA) of white matter increased throughout the first and second trimesters before returning to baseline levels in the postpartum period (whole-brain QA, F = 4.62, P = 0.007, deviance explained = 60.2%, R 2 adj = 0.51). We also observed nonlinear patterns of lateral ventricle expansion (F = 10.44, P < 0.001, deviance explained = 83.8%, R 2 adj = 0.77) and increased cerebrospinal fluid (CSF; F = 13.32, P < 0.001, deviance explained = 83.8%, R 2 adj = 0.79) rising in the second and third trimesters before dropping sharply postpartum.\n\n## Cortical volume and thickness changes tied to gestation\n\nWe then narrowed the aperture to capture changes unfolding within gestation itself (baseline-36 weeks pregnant, 19 scans). Relationships between summary brain metrics were evident over the gestational period as follows: total brain volume, GMV and CT were positively associated with one another, whereas lateral ventricles, CSF and global QA demonstrated negative relationships with GMV (Supplementary Fig. 1).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed4.pdf" - }, - { - "text": "sleep patterns 11 . These factors could have a role in the brain changes observed here, with some driving neurobiological changes and others, like water retention, potentially affecting MRI-based measurements. Note that, although cortical reductions in GMV over gestation were stable across analyses, accounting for QC measures influenced the magnitude and location of these results. 
These metrics all fell within the standard range, but there may be meaningful reductions in signal that accompany volumetric reductions (for example, increased CSF and decreased GM)-a methodological nuance that goes beyond the scope of this resource study. Ultimately, identifying the shared and unique contributions of these factors to the neuroanatomical changes that unfold across gestation warrants further investigation. Deeply phenotyping a large and diverse cohort of women across pregnancy will open up new avenues of exploration, for example, allowing researchers to link blood-based proteomic signatures to pregnancy outcomes; deploying wearable devices to monitor changes in sleep, cognition and mood; and probing the broader social and environmental determinants of maternal health 27 .\n\nThe neuroanatomical changes that unfold during matrescence may have broad implications for understanding individual differences in parental behavior 13,24,30,31 , vulnerability to mental health disorders 32,33 and patterns of brain aging 18,19,34-36 . Decreases in GMV may reflect 'fine-tuning' of the brain by neuromodulatory hormones in preparation for parenthood 26 . For example, in rodents, steroid hormones promote parental behavior by remodeling specific neural circuits in the medial preoptic area of the hypothalamus. These behavioral adaptations are critical to the dam's ability to meet the demands of caring for", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed4.pdf" - }, - { - "text": "## Consumer Laws and Regulations\n\nWe are also subject to certain consumer laws and regulations that are designed to protect consumers in transactions with banks. While the following list is not exhaustive, these laws and regulations include the Truth in Lending Act, the Truth in Savings Act, the Electronic Funds Transfer Act, the Expedited Funds Availability Act, the Equal Credit Opportunity Act, and the Fair Housing Act, among others. 
These laws and regulations among other things prohibit discrimination on the basis of race, gender or other designated characteristics and mandate various disclosure requirements and regulate the manner in which financial institutions must deal with customers when taking deposits or making loans to such customers. These and other laws also limit finance charges or other fees or charges earned in our activities. We must comply with the applicable provisions of these consumer protection laws and regulations as part of our ongoing customer relations.\n\n## Technology Risk Management and Consumer Privacy\n\nState and federal banking regulators have issued various policy statements emphasizing the importance of technology risk management and supervision in evaluating the safety and soundness of depository institutions with respect to banks that contract with outside vendors to provide data processing and core banking functions. The use of technology-related products, services, delivery channels and processes expose a bank to various risks, particularly operational, privacy, security, strategic, reputation and compliance risk. Banks are generally expected to prudently manage technology-related risks as part of their comprehensive risk management policies by identifying, measuring, monitoring and controlling risks associated with the use of technology.\n\nUnder Section 501 of the Gramm-Leach-Bliley Act, the federal banking agencies have established appropriate standards for financial institutions regarding the implementation of safeguards to ensure the security and confidentiality of customer records and information, protection against any anticipated threats or hazards to the security or integrity of such records and protection against unauthorized access to or use of such records or information in a way that could result in substantial harm or inconvenience to a customer. 
Among other matters, the rules require each bank to implement a comprehensive written information security program that includes administrative, technical and physical safeguards relating to customer information.\n\nUnder the Gramm-Leach-Bliley Act, a financial institution must also provide its customers with a notice of privacy policies and practices. Section 502 prohibits a financial institution from disclosing nonpublic personal information about a consumer to nonaffiliated third parties unless the institution satisfies various notice and opt-out requirements and the customer has not elected to opt out of the disclosure. Under Section 504, the agencies are authorized to issue regulations as necessary to implement notice requirements and restrictions on a financial institution's ability to disclose nonpublic personal information about consumers to nonaffiliated third parties. Under the final rule the regulators adopted, all banks must develop initial and annual privacy notices which describe in general terms the bank's information sharing practices. Banks that share nonpublic personal information about customers with nonaffiliated third parties must also provide customers with an opt-out notice and a reasonable period of time for the customer to opt out of any such disclosure (with certain exceptions). Limitations are placed on the extent to which a bank can disclose an account number or access code for credit card, deposit, or transaction accounts to any nonaffiliated third party for use in marketing.\n\n## Monetary Policy", - "page_start": 37, - "page_end": 37, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "These findings provide a critical rationale for conducting further precision imaging studies of pregnancy in demographically enriched cohorts to determine the universality and idiosyncrasy of these adaptations and their role in maternal health. Are the changes observed in our participant reflective of the broader population? 
Do deviations from the norm lead to maladaptive outcomes? A precision imaging approach can help determine whether the pace of pregnancy-induced neuroanatomical changes drives divergent brain health outcomes in women, as may be the case during other rapid periods of brain development 44 . One in five women experiences perinatal depression 45 and while the first FDA-approved treatment is now available 46 , early detection remains elusive. Precision imaging studies could offer clues about an individual's risk for or resilience to depression before symptom onset, helping clinicians better determine when and how to intervene. Neuroscientists and clinicians also lack tools to facilitate detection and treatment of neurological disorders that co-occur, worsen or remit with pregnancy, such as epilepsy, headaches, multiple sclerosis and intracranial hypertension 47 . Precision mapping of the maternal brain lays the groundwork for a greater understanding of the subtle and sweeping structural, functional, behavioral and clinical changes that unfold across pregnancy. 
Such pursuits will advance our basic\n\nunderstanding of the human brain and its remarkable ability to undergo protracted plasticity in adulthood.\n\n## Online content\n\nAny methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41593-024-01741-0.\n\n## References", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed4.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20200471_en.pdf", - "query": "When is it not necessary to review an EHC plan ?", - "target_page": 3, - "target_passage": " It is not necessary for a local authority to review an EHC plan in accordance with section 44(1) of the Act if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "- (a) at the end of sub-paragraph (c) omit 'or'; and\n - (b) at the end of sub-paragraph (d) insert-\n\n'; or\n\n - (e) of a reason relating to the incidence or transmission of coronavirus'.\n - 10. In regulation 13(3) (timescales for EHC plans), for '(d)' substitute '(e)'.\n - 11. After regulation 18 (circumstances in which a local authority must review an EHC plan) insert-\n\n## ' Circumstances in which it is not necessary to review an EHC plan\n\n - 18A. -(1) It is not necessary for a local authority to review an EHC plan in accordance with section 44(1) of the Act if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.\n - (2) Where paragraph (1) applies, a local authority must instead conduct such reviews as soon as reasonably practicable.'.\n - 12. 
In regulation 22 (amending an EHC plan following a review), after paragraph (5) insert-\n - '(6) The local authority need not comply with the time limit referred to in paragraphs (3) and (4) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.'.\n - 13. In regulation 27(3) (amending or replacing an EHC plan following a re-assessment)-\n - (a) at the end of sub-paragraph (c) omit 'or'; and\n - (b) at the end of sub-paragraph (d) insert-\n\n'; or\n\n - (e) of a reason relating to the incidence or transmission of coronavirus'.\n - 14. In regulation 45 (unopposed appeals), after paragraph (7) insert-\n\n'(8) The local authority need not comply with the time limits specified in paragraph (3A) if it is impractical to do so because the circumstances referred to in regulation 10(4)(e) apply.'.\n\n## Amendment of the Special Educational Needs (Personal Budgets) Regulations 2014\n\n15. The Special Educational Needs (Personal Budgets) Regulations 2014( a ) are amended as follows.\n\n - 16. In regulation 2 (interpretation), at the appropriate place insert-\n - ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n - 17. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time period due to coronavirus exception\n\n - 2A. 
-(1) Where the coronavirus exception applies, the requirement for the local authority to review the making and use of direct payments within the first three months of them being made in regulation 11(2)(a) (monitoring and review of direct payments) is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n\n - (2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n - (3) The following regulations are specified for the purposes of paragraphs (1) and (2)-\n - (a) regulation 15(2) (transfer of EHC plans) (in relation to the second reference to 15 working days), (4), (5), (7) (in relation to the second reference to 15 working days) and (8);\n - (b) regulation 16(2) and (3) (change of responsible commissioning body);\n - (c) regulation 20(9) and (10) (review where the child or young person attends a school or other institution);\n - (d) regulation 21(7), (8) and (9) (review of EHC plan where the child or young person does not attend a school or other institution);\n - (e) regulation 25(1) (notification of decision whether it is necessary to re-assess educational, health care and social care provision);\n - (f) regulation 27(4) (amending or replacing an EHC plan following a re-assessment);\n - (g) regulation 33 (requirement to consider mediation);\n - (h) regulation 34(1) and (2) (where a parent or young person does not wish to or fails to pursue mediation);\n - (i) regulation 35(2), (3) and (4) (mediation - health care issues);\n - (j) regulation 36(2) (mediation - no health care issues);\n - (k) regulation 39(1) and (3) (mediation certificate under 
section 55(5));\n - (l) regulation 42(3) and (4) (steps to be taken by a local authority);\n - (m) regulation 44(2)(d), (e), (f) and (h) (compliance with the orders of the First-tier Tribunal);\n - (n) regulation 45(4), (5) and (6A) (unopposed appeals);\n - (o) regulation 47 (disclosure of EHC plans in relation to higher education); and\n - (p) regulation 56(3) (publication of comments on the local offer).'.\n - 6. In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert-\n - '(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.'.\n - 7. In regulation 5(4) (decision whether or not to conduct an EHC needs assessment)-\n - (a) at the end of sub-paragraph (c) omit 'or'; and\n - (b) at the end of sub-paragraph (d) insert-\n\n'; or\n\n - (e) of a reason relating to the incidence or transmission of coronavirus'.\n - 8. In regulation 8(2) (duty to co-operate in EHC needs assessments)-\n - (a) at the end of sub-paragraph (b) omit 'or'; and\n - (b) at the end of sub-paragraph (c) insert-\n\n'; or\n\n - (d) of a reason relating to the incidence or transmission of coronavirus'.\n - 9. In regulation 10(4) (decision not to secure an EHC plan)-", - "page_start": 1, - "page_end": 1, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- 23. In regulation 8(2) (duty to co-operate in a detained person's EHC needs assessment), at the end of sub-paragraph (d) insert-\n\n'; or\n\n - (e) of a reason relating to the incidence or transmission of coronavirus'.\n - 24. In regulation 10(4) (decision not to secure an EHC plan)-\n - (a) at the end of sub-paragraph (b) omit 'or'; and\n - (b) at the end of sub-paragraph (c) insert-\n\n'; or\n\n - (d) of a reason relating to the incidence or transmission of coronavirus'.\n - 25. 
In regulation 13(3) (timescales for EHC plans), for '(c)' substitute '(d)'.\n - 26. In regulation 29 (compliance with the orders of the First-tier Tribunal)-\n - (a) after paragraph (6) insert-\n - '(6A) The home authority need not comply with the time limits specified in paragraph (3) if it is impractical to do so because the circumstances referred to in regulation 10(4)(d) apply.'.\n - (b) in paragraph (7)(c) after '10(4)(a)' insert 'or (d)'.\n - 27. In regulation 30(7)(c) (unopposed appeals), after '10(4)(a)' insert 'or (d)'.\n\n## Amendment of the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017\n\n28. The Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017( a ) are amended as follows.\n\n - 29. In regulation 2 (interpretation), at the appropriate place insert-\n - ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n - 30. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time periods due to coronavirus exception\n\n - 2A. 
-(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n - (2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n - (3) The following regulations are specified for the purposes of paragraphs (1) and (2)-\n - (a) regulation 6(3) and (6) (responding to health care recommendations); and\n - (b) regulation 7(1) and (4) (responding to social care recommendations).'.\n\nVicky Ford Parliamentary Under Secretary of State Department for Education\n\n28th April 2020", - "page_start": 4, - "page_end": 4, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- 9. There is a 50% reduction in the number of Red List species threatened by invasive alien species.\n - 10. The losses of nutrients from fertilisers are reduced by 50%, resulting in the reduction ofthe use of fertilisers by at least 20%.\n - 11. Cities with at least 20,000 inhabitants have an ambitious Urban Greening Plan.\n - 12. No chemical pesticides are used in sensitive areas such as EU urban green areas.\n - 13. The negative impacts on sensitive species and habitats, including on the seabed through fishing and extraction activities, are substantially reduced to achieve good environmental status.\n - 14. The by-catch of species is eliminated or reduced to a level that allows species recovery and conservation.\n\n## 3. ENABLING TRANSFORMATIVE CHANGE\n\n## 3.1. A new governance framework\n\nIn the EU, there is currently no comprehensive governance framework to steer the implementation of biodiversity commitments agreed at national, European or international level. 
To address the gap, the Commission will put in place a new European biodiversity governance framework . This will help map obligations and commitments and set out a roadmap to guide their implementation.\n\nAs part of this new framework, the Commission will put in place a monitoring and review mechanism. This will include a clear set of agreed indicators and will enable regular progress assessment and set out corrective action if necessary. This mechanism will feed the Environmental Implementation Review and contribute to the European Semester.\n\nThe new governance framework will ensure co-responsibility and co-ownership by all relevant actors in meeting the EU's biodiversity commitments. It will support administrative capacity building, transparency, stakeholder dialogue, and participatory governance at different levels.\n\nThe Commission will assess the progress and suitability of this approach in 2023, and consider whether a legally binding approach to governance is needed.\n\n## 3.2. Stepping up implementation and enforcement of EU environmental legislation\n\nAll environmental legislation relies on proper implementation and enforcement. Over the last 30 years, the EU has put in place a solid legislative framework to protect and restore its natural capital. However, recent evaluations show that although legislation is fit for purpose, implementation on the ground is lagging behind 60 . This is having dramatic consequences on biodiversity and comes with a substantial economic cost 61 . 
The full implementation and enforcement of EU environmental legislation is therefore at the heart of this strategy , for which political support and financial and human resources will need to be prioritised.", - "page_start": 15, - "page_end": 15, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "## (27) Employee Benefit Plans\n\nE u ronet has established a Profit Sharing and 401(k) plan for all employees who have completed six months of service and are not otherw i s e c o v e red by a re t i rement benefit plan (national or private) outside of the US. Each plan participant can contribute up to the maximum amount allowed by the Internal Revenue Service to the Plan through payroll deductions. Euro n e t 's matching contribution to the plan is d i s c re t i o n a ry and is determined each year by the Board of Directors. The employee's vested percentage re g a rding the employer's contribution varies according to years of service. Euro n e t 's contribution accrual to the Plan for the years ended December 31, 2000, 1999 and 1998 was $213,000, $159,000 and $26,000 re s p e c t i v e l y.\n\nE u ronet maintains both a fully funded and self-funded health insurance programs, which cover all full-time employees and their families at no charge to the employees. In order to administer the self-funded program, Euronet has entered into a contractual agreement with a third p a rty administrator by which Euronet pays a monthly service fee to the administrator based upon employee enrollment participating in the self-funded plan. 
Euronet has also purchased a stop/loss insurance policy to limit Euro n e t 's self-funded liability to $25,000 per employee per year and a total loss on all claims to approximately $31,000 per month.", - "page_start": 45, - "page_end": 45, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "The site Emergency Management Team was involved in a large-scale exercise during the year that tested the team's training and allowed the opportunity to review the site emergency management plan.", - "page_start": 23, - "page_end": 23, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "\n\n## Long-term incentives\n\nEffective from 1 July 2012, the Group implemented an LTI Plan, also referred to as the Executive Rights Plan. The objectives of the LTI Plan are to retain key executives and to align an at-risk component of certain executives' remuneration with shareholder returns.\n\nKey features of the LTI Plan are outlined in the table as follows:\n\n## Overview of the LTI Plan", - "page_start": 54, - "page_end": 54, - "source_file": "ASX_KCN_2013.pdf" - }, - { - "text": "encouraging cooperation in education for environmental sustainability in 2021. This will provide guidance for schools and teachers on how to cooperate and exchange experiences across Member States on biodiversity teaching. The Commission will also provide support materials and facilitate the exchange of good practices in EU networks of teacher-training programmes.\n\n## 4. THE EUROPEAN UNION FOR AN AMBITIOUS GLOBAL BIODIVERSITY AGENDA\n\nBiodiversity is a priority of the EU's external action and an integral part of efforts to meet the United Nations Sustainable Development Goals. It will be mainstreamed throughout bilateral and multilateral engagements, through the EU's 'Green Deal diplomacy', and forthcoming green alliances 76 . 
The Commission will work closely with the European Parliament and Member States to ensure a high level of EU ambition and mobilise all efforts for the good of the world's biodiversity.\n\n## 4.1. Raising the level of ambition and commitment worldwide\n\nProtecting biodiversity is a global challenge and the next decade will be decisive. Global efforts under the United Nations Convention on Biological Diversity have largely been insufficient. Nature cannot afford any half measures or lack of ambition.\n\nIn this spirit, the EU is ready to lead all efforts - working with like-minded partners in a high-ambition coalition on biodiversity - to agree an ambitious new global framework for post-2020 at the upcoming 15 th Conference of the Parties to the Convention on Biological Diversity.\n\nWith this strategy, the Commission proposes ambitious commitments for the EU to bring to the table. The EU should also support governments and stakeholders across the globe to significantly step up their ambition and their action.\n\nThe Commission proposes that the EU ensures that the post-2020 global framework includes, at a minimum, the elements outlined below:\n\n -  Overarching global goals for biodiversity for 2050, in line with the United Nations 2030 Agenda for Sustainable Development and the vision of 'living in harmony with nature'. The ambition should be that, by 2050, all of the world's ecosystems are restored, resilient, and adequately protected. The world should commit to the net-gain principle to give nature back more than it takes. The world should commit to no human-induced extinction of species, at minimum where avoidable.\n -  Ambitious global 2030 targets in line with EU commitments in this strategy. These should clearly address the drivers of biodiversity loss and be specific, measurable, actionable, relevant and time-bound.\n -  A much stronger implementation, monitoring and review process. 
Parties should revise their National Biodiversity Strategies and Action Plans by the end of 2021, or as a minimum, submit national commitments for the most important targets. There should be a regular review cycle to look at progress towards the", - "page_start": 19, - "page_end": 19, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "\n\n## EMPLOYEE RETIREMENT AND BENEFIT PLANS\n\nA noncontributory defined benefit retirement plan is maintained for all regular employees of the Company except those of Quest Medical. This plan was amended effective January 1, 1998 to become a cash balance pension plan. The Company's funding policy is to make the annual contributions required by applicable regulations and recommended by its actuary. The Company uses a December 31 measurement date for the plan.\n\nThe changes in the plan's projected benefit obligation ('PBO') as of December 31, 2003 and 2002 are as follows (in thousands):\n\n| | 2003 | 2002 |\n|---------------------------------|---------|---------|\n| CHANGE IN BENEFIT OBLIGATION: | | |\n| Benefit obligation, January 1 | $ 4,170 | $ 4,599 |\n| Service cost | 214 | 320 |\n| Interest cost | 298 | 307 |\n| Amendments | -- | (616) |\n| Actuarial (gain)/loss | 529 | (93) |\n| Benefits paid | (333) | (347) |\n| Benefit obligation, December 31 | $ 4,878 | $ 4,170 |\n\nIn December 2002, the plan was amended to reduce benefit accruals for future service by plan participants by approximately 50 percent. 
This amendment caused a reduction in the PBO of approximately $616,000, and is reflected as a reduction in pension expense over the estimated employee service lives.\n\nThe changes in the fair value of plan assets, funded status of the plan and the status of the prepaid pension benefit recognized, which is included in the Company's balance sheets as of December 31, 2003 and 2002 are as follows (in thousands):\n\n| | 2003 | 2002 |\n|----------------------------------------|---------|---------|\n| CHANGE IN PLAN ASSETS: | | |\n| Fair value of plan assets, January 1 | $ 4,383 | $ 4,550 |\n| Actual return on plan assets | 963 | (750) |\n| Employer contributions | 400 | 930 |\n| Benefits paid | (333) | (347) |\n| Fair value of plan assets, December 31 | $ 5,413 | $ 4,383 |\n| Funded status of plan | $ 535 | $ 213 |\n| Unrecognized actuarial loss | 1,941 | 2,154 |\n| Unrecognized prior service cost | (502) | (539) |\n| Unrecognized net transition obligation | (88) | (132) |\n| Net amount recognized as other assets | $ 1,886 | $ 1,696 |", - "page_start": 21, - "page_end": 21, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "The authors conclude on the relevance of the EU OSH directives :\n\n'The evaluation shows very clearly that the EU OSH acquis is the reference frame for national OSH regulatory regimes. While the Member States have chosen various models for their legal implementation of the Directives' requirements, there is no doubt that the Directives' requirements form the core of the national systems in one way or the other. 
The significance of the Directives in setting the scene for OSH regulation in the EU is therefore very high.'\n\nThe authors also distinguish between the two major principles of legislative approaches in OSH, that is, either setting an objective and letting the actors define how this goal can be achieved (goal-oriented approach) , or prescribing also quite detailed measures to reach the objective (prescriptive approach) : 352\n\n'There seems to be a general view that the Framework Directive, with its orientation towards a goaloriented approach to OSH (rather than prescriptive) successfully lays out a suitable template for managing workplace risks - but not in itself enough to ensure that all risks are dealt with sufficiently. One criticism of the goal-setting approach is that the absence of prescriptive intermediate goals makes", - "page_start": 120, - "page_end": 120, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "Excel Training Manual 1.pdf", - "query": "Give me some info about the scroll bars in excel", - "target_page": 6, - "target_passage": "Appear at the right and on the bottom of the screen. You may click the scroll arrows, drag the scroll box or click the scroll bar to move through the document. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## NAVIGATING IN A FILE\n\nArrow Keys\n\nMove one cell to the right, left, up or down\n\nTab\n\nMove once cell to the right\n\nCtrl+Home\n\nTo beginning file\n\nCtrl+End\n\nTo end of typed information\n\nHome\n\nBeginning of a line\n\nEnd\n\nEnd of a line\n\nPage Down\n\nDown one screen\n\nPage Up\n\nUp one screen\n\nF5\n\nTo a specific page\n\nScroll bars\n\nAppear at the right and on the bottom of the screen. 
You may click the scroll arrows, drag the scroll box or click the scroll bar to move through the document.", - "page_start": 5, - "page_end": 5, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## Try This Yourself:\n\n - O pe n Fi le Before starting this exercise you MUST open the file E1355 Quick Analysis\\_5.xlsx…\n -  Click in any cell containing data\n -  Hold down + , then press to select all of the non-empty cells around the current cell\n -  Using the scroll bars, scroll to the bottom right corner of the selection, click on the Quick Analysis button, then click on the TABLES tab\n -  Click on Table to turn the selected range into a table\n -  Scroll across and on the drop arrow for Position to see sorting and filtering options\n -  Click on Select All to remove the tick, then click on Effective People Leader so it appears ticked\n\n", - "page_start": 40, - "page_end": 40, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## UNDERSTANDING WORKBOOKS\n\nIn Microsoft Excel the data you enter, whether it consists of numbers, text, or formulas, is stored in a file known as a workbook . Workbooks are just like huge electronic books with pages (or\n\nsheets ) that have been ruled into columns and rows. Before using Excel it is helpful to know what the various parts and elements that make up a workbook are.\n\n\n\n-  A worksheet (or page) in a workbook contains 16,384 columns that are labelled using letters of the alphabet. The first column in a worksheet is labelled column A , while the last is labelled XFD\n-  A worksheet (or page) in a workbook contains 1,048,576 rows that are labelled using numbers from 1 to 1,048,576\n-  Where a column and row intersect we get what is known as a cell . You enter your data into these cells. Each cell in a worksheet can hold up to 32,767 characters - although it would be unrealistic to ever push it this far. Cells are referred to by their column and row labels. 
For example, in the screen above the cell we are pointing to is C11 - this reference is known as the cell address and is most important as it is frequently used in commands and formulas\n-  When you start typing something, you want it to appear somewhere in the worksheet. As a consequence when the Status Bar shows Ready mode, at least one cell in the worksheet will be highlighted - this is known as the active cell . In the screen above, the active cell is cell A1 -notice that the column label and the row label also appears coloured to indicate the active cell. You can have more than one active cell - when this occurs you have what is known as a range\n-  A workbook (as you would expect) is made up of pages known as worksheets . You can have as many sheets in a workbook as your computer resources can accommodate. As a default, a new blank workbook normally has 3 worksheets labelled Sheet1 , Sheet2 , and Sheet3 . Of course these labels are pretty boring and meaningless and can be changed to something more relevant\n-  The Insert Worksheet button here will insert another worksheet into the current workbook should you need it", - "page_start": 4, - "page_end": 4, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "\n\n## Artifacts\n\nThe following types of content are marked as in the PDF Content Tree and have no PDF/UA tags:\n\n - Slicer scrollbar\n - Grid lines\n - Cell borders\n - Cell shading\n - Decorative graphical objects\n - Text in SmartArt objects\n\n## Availability\n\nThe information in this article is applicable to the following versions of Excel.\n\n - Excel for Windows Version 2408 and later.\n - Excel for Mac Version 16.89 and later.\n - Excel for iOS Version 2.89 and later.\n - Excel for Android Build 16.0.18025.XXXXX or later.", - "page_start": 47, - "page_end": 47, - "source_file": "office-pdf.pdf" - }, - { - "text": "1\n\n3\n\n5\n\n## Try This Yourself:\n\nn\n\npe\n\nO\n\nFile\n\nBefore starting this exercise you MUST open the file 
E723 Cell Alignment\\_9.xlsx...\n\n -  Click in cell A5\n - This cell contains a long text entry that spills across several columns…\n -  Click on the Expand Formula Bar tool to the right of the formula bar to see all of the text\n -  Click on the Wrap Text\n - command in the\n - Alignment group on the Home tab to wrap the text in cell A5\n - Notice how the row height has now increased…\n -  Hold down the key and click in cell E5 to select the range A5:E5\n -  Click on the drop arrow for Merge & Centre in the Alignment group and select Merge Cells to merge the cells in the range\n -  Move the mouse pointer to the bottom of the row 5 heading border and drag the row height up until you reach 30 points\n\n\n\n## For Your Reference…\n\n## Handy to Know…", - "page_start": 25, - "page_end": 25, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## WRAPPING AND MERGING TEXT\n\nMicrosoft Excel will allow long cell entries to spill across to other adjacent cells to the right as long as those cells are empty. If those cells contain data the spill-over will be chopped off. If you need\n\nto place long text entries in a cell you can arrange for Microsoft Excel to wrap the text within the cell and also merge that cell with others to accommodate the longer text entry.\n\n1\n\n3\n\n5\n\n## Try This Yourself:\n\nn\n\npe\n\nO\n\nFile\n\nBefore starting this exercise you MUST open the file E723 Cell Alignment\\_9.xlsx...", - "page_start": 25, - "page_end": 25, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## TYPING TEXT OR NUMBERS INTO A WORKSHEET\n\nGenerally when you start a new spreadsheet project, the first task is to enter some headings into rows and columns. To type anything into a worksheet you need to make the cell into which\n\nyou wish to enter the data active. 
This can be done in a number of ways but the most common is to click in it first before typing.\n\n\n\n## For Your Reference… For Your Reference…\n\n## To enter text : To save a new document :\n\n - 1. Click the cell pointer on the desired cell and 1. Click on the File Tab and select Save As\n - type the required information 2. Press , an arrow key or to 2. Locate the storage folder in the Navigation pane\n - confirm the data entry and to move the cell 3. Type a File name and click on [Save]\n\npointer to another cell\n\n## Handy to Know… Handy to Know…\n\n -  You don't have to use or to make adjacent cells active. You can simply use the mouse and click in the cells if you want or even press the arrow keys to move up, down, left, or right.  In the exercise above we have named the workbook Garden Department Sales and filed it in C:\\Course Files for Excel 2010 . Each time you start Excel it will most likely assume you want to file your workbooks in a folder called Documents which is associated with the user name you use on the computer.", - "page_start": 6, - "page_end": 6, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## Microsoft Excel", - "page_start": 3, - "page_end": 3, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## THE CHARTING PROCESS\n\nCharts provide a way of seeing trends in the data in your worksheet. The charting feature in Excel is extremely flexible and powerful and allows you to create a wide range of charts from\n\nany of the Insert commands in the Charts group on the\n\n## Inserting Charts\n\nThe first step when creating a chart is to select the data from the worksheet that you want to chart. It is important to remember that the selected range (which can be either contiguous or non-contiguous), should include headings (e.g. names of months, countries, departments, etc). These become labels on the chart. 
Secondly, the selected range should not (normally) include totals as these are inserted automatically when a chart is created.\n\nThe second step is to create a chart using the INSERT tab on the ribbon. You can choose a Recommended Chart where Excel analyses the selected data and suggests several possible chart layouts.\n\nAlternatively you can create the chart yourself from scratch by choosing one of the Insert commands in the Charts group. Charts that you create in Excel can be either embedded into a worksheet, or they can exist on their own sheets, known as chart sheets .\n\n## Embedded Charts\n\nCharts that appear within a worksheet are known as embedded charts. A chart is really an object that sits on top of the worksheet - unlike numbers and letters, charts are not actually placed into worksheet cells.\n\n## Chart Sheets\n\nIf you want to keep your chart separate from the data you can move the chart to its own sheet. Chart sheets make it easier and more convenient to work with your chart because you'll see more of it on the screen since the data is not there!\n\n\n\n", - "page_start": 43, - "page_end": 43, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## RENAMING A WORKSHEET\n\nBy default, Excel names worksheets as Sheet1 , Sheet2 , Sheet3 , etc. These names are fine if you are not planning to share the workbook, but changing these to something more relevant\n\nmakes it much easier to understand the purpose of a worksheet. 
You can also adjust the horizontal scroll bar to make room for longer, more meaningful worksheet names.\n\n## Try This Yourself:\n\n\n\n\n\nContinue using the previous file with this exercise, or open the file E1324 Worksheet Techniques\\_2.xlsx...\n\n -  Point to the vertical dots between the sheet names and the horizontal scroll bar, as shown\n\nThe pointer will change to a double-headed arrow...\n\n -  Click and drag the bar across to the right, to the end of column L , then release the mouse button\n -  Double-click on Sheet1 (5) to select the worksheet tab name\n\nThis will also place it into edit mode…\n\n -  Type Comms , then press\n -  Repeat steps 3 and 4 to rename the other worksheets:\n\nSheet1 (4)\n\nAdmin\n\nSheet1 (3)\n\nShop\n\nSheet1 (2)\n\nIT\n\nSheet1\n\nMaintenance\n\n## For Your Reference…\n\n## To rename a worksheet :\n\n - 1. Double click on the current name on the worksheet tab\n - 2. Type the new name and press\n\n\n\n\n\n\n\n\n\n\n\n\n\n## Handy to Know…\n\n -  You can rename a worksheet by right-clicking on the worksheet tab to display the shortcut menu and clicking on Rename .\n -  A worksheet tab name can contain up to 31 characters including spaces, but it is better to keep it short and succinct.", - "page_start": 11, - "page_end": 11, - "source_file": "Excel Training Manual 1.pdf" - } - ] - }, - { - "references": { - "source_file": "Excel Training Manual 1.pdf", - "query": "How to rename a worksheet in Excel ?", - "target_page": 12, - "target_passage": "To rename a worksheet: 1. Double click on the current name on the worksheet tab 2. Type the new name and press ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## RENAMING A WORKSHEET\n\nBy default, Excel names worksheets as Sheet1 , Sheet2 , Sheet3 , etc. 
These names are fine if you are not planning to share the workbook, but changing these to something more relevant\n\nmakes it much easier to understand the purpose of a worksheet. You can also adjust the horizontal scroll bar to make room for longer, more meaningful worksheet names.\n\n## Try This Yourself:\n\n\n\n\n\nContinue using the previous file with this exercise, or open the file E1324 Worksheet Techniques\\_2.xlsx...\n\n -  Point to the vertical dots between the sheet names and the horizontal scroll bar, as shown\n\nThe pointer will change to a double-headed arrow...\n\n -  Click and drag the bar across to the right, to the end of column L , then release the mouse button\n -  Double-click on Sheet1 (5) to select the worksheet tab name\n\nThis will also place it into edit mode…\n\n -  Type Comms , then press\n -  Repeat steps 3 and 4 to rename the other worksheets:\n\nSheet1 (4)\n\nAdmin\n\nSheet1 (3)\n\nShop\n\nSheet1 (2)\n\nIT\n\nSheet1\n\nMaintenance\n\n## For Your Reference…\n\n## To rename a worksheet :\n\n - 1. Double click on the current name on the worksheet tab\n - 2. Type the new name and press\n\n\n\n\n\n\n\n\n\n\n\n\n\n## Handy to Know…\n\n -  You can rename a worksheet by right-clicking on the worksheet tab to display the shortcut menu and clicking on Rename .\n -  A worksheet tab name can contain up to 31 characters including spaces, but it is better to keep it short and succinct.", - "page_start": 11, - "page_end": 11, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## GROUPING WORKSHEETS\n\nWorksheet grouping enables you to make the same change at once to all selected worksheets. This feature is useful in situations where your worksheets have identical layouts or text. 
For\n\nexample, if you want to format the heading for multiple worksheets, you simply group the worksheets, make a change to one worksheet and the other worksheets will reflect the change also.\n\n## Try This Yourself:\n\n## Sa m e F i le\n\n\n\nContinue using the previous file with this exercise, or open the file E1324 Worksheet Techniques\\_8.xlsx...\n\n Click on the Admin worksheet tab, hold down , then click on the Shop worksheet tab to select the first three worksheets\n\n -  Click in cell A1 to select the cell\n -  Click on the HOME tab, then click on Italics in the Font group\n\nThis will italicise the text in cell A1 on this and all other worksheets in the group…\n\n -  Click on the Maintenance worksheet tab, then the Shop worksheet tab to see that the changes have been applied here\n -  Click on the IT worksheet tab to see that the changes have not been applied to this worksheet\n\nSince this was not part of the grouped sheets the changes have not been applied here. Notice too that clicking on a tab deselects the previous grouping\n\n## For Your Reference…\n\n## To group worksheet tabs :\n\n - 1. Click on the first worksheet tab\n - 2. Hold down , then click on the last worksheet tab\n\n\n\n2\n\n\n\n\n\n3\n\n4\n\n\n\n\n\n## Handy to Know…\n\n -  To deselect a group, either click on the tab of a worksheet that is not in the group, or rightclick on a tab and select Ungroup Sheets .\n -  Most formatting and text changes done on a worksheet in a group will be applied to other sheets in that grouping.", - "page_start": 14, - "page_end": 14, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## INSERTING AND DELETING WORKSHEETS\n\nOnce you've decided on a structure for your workbook, you may find that there are some worksheets that can be deleted . Alternatively, you may find that you need additional blank\n\nworksheets inserted . 
However, remember that deletion of worksheets is permanent and can't be undone using Undo , so always save your workbook before making these changes.\n\n## Try This Yourself:\n\nn\n\n\n\nBefore starting this exercise you MUST open the file E1324 Worksheet Techniques\\_1.xlsx…\n\n -  Examine the workbook - it currently contains one worksheet named Sheet1\n -  Click on the New Sheet icon at the end of the worksheet tabs\n - A new worksheet named Sheet2 will be inserted. You can also use the keyboard shortcut...\n -  Press + to insert another new worksheet\n\nThis sheet is named Sheet3 and is inserted before the currently selected sheet. Now let's delete a sheet...\n\n -  Right-click on the Sheet3 worksheet tab to display the shortcut menu\n -  Select Delete to remove the worksheet\n\nAs the worksheet contains no data, the sheet will be deleted immediately. If a worksheet contains data, Excel will ask you to confirm your actions...\n\n\n\n Repeat steps 4 and 5 to delete Sheet2\n\n\n\n## For Your Reference…\n\nTo insert a new worksheet into a workbook :\n\n -  Click on the New Sheet icon to the right of the worksheet tabs\n\nTo delete a worksheet from a workbook :\n\n -  Right click on the worksheet tab, then select Delete\n\n## Handy to Know…\n\n -  To insert a worksheet between existing worksheets, right-click on the worksheet tab before which you want to insert a new sheet, then click on Insert to display the Insert dialog box. Select Worksheet and click on [OK] .\n\n1\n\n2\n\n3\n\n4\n\n5", - "page_start": 9, - "page_end": 9, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## COPYING A WORKSHEET\n\nJust as you can copy the contents of cells and ranges within a worksheet, you can duplicate worksheets within a workbook. This technique is ideal for replicating layouts. 
For example, if you\n\nhave a budget workbook that contains data for several departments, you can create a worksheet for the first department and then copy it to create identical worksheets for other departments.\n\n## Try This Yourself:\n\n\n\nContinue using the previous file with this exercise, or open the file E1324 Worksheet Techniques\\_1.xlsx...\n\n -  Right-click on Sheet1 to display the worksheet shortcut menu\n -  Select Move or Copy to display the Move or Copy dialog box\n -  Click on Create a copy so it appears ticked, then click on [OK]\n\nThe new worksheet is named Sheet1 (2). Let's create a 'template' from this worksheet by deleting unwanted data...\n\n -  Select the range B7:E9 , then press to clear it\n -  Repeat step 4 to clear the ranges B14:E23 , G7:J9 and G14:J23 , then press + to return to cell A1\n\nNow we can copy this 'template' to create additional worksheets...\n\n\n\n Repeat steps 1 to 3 three times to create three copies of the template worksheet - this time without data\n\nThe final worksheet should be named Sheet1 (5)\n\n## For Your Reference…\n\n## To copy a worksheet :\n\n - 1. Right-click on the worksheet to copy, then select Move or Copy\n - 2. Click on Create a copy so it appears ticked\n - 3. 
Click on [OK]\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n## Handy to Know…\n\n -  You can copy the current worksheet using the HOME tab by clicking on Format in the Cells group, then clicking on Move or Copy Sheet .\n -  The Before sheet options in the Move or Copy dialog box allow you to position the copied worksheet where you want.", - "page_start": 10, - "page_end": 10, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## Handy to Know…\n\n -  To copy a worksheet into an existing workbook, make sure that you open the destination workbook first to ensure that it is listed in To book in the Move or Copy dialog box.\n\n## MOVING OR COPYING A SHEET TO ANOTHER WORKBOOK\n\nYou can copy worksheets to other workbooks as required. For example, you might need to keep records for six different divisions - rather than send each division the entire set of records, you\n\ncan copy their worksheet to another workbook and send them their data only. If worksheets exist in the other workbook, you will need to determine the order in which to place the copied worksheet.\n\n## Try This Yourself:\n\nle\n\ni\n\nF\n\ne\n\nm\n\nSa\n\n\n\nContinue using the previous file with this exercise, or open the file E1324 Worksheet Techniques\\_6.xlsx...\n\n -  Click on the Maintenance worksheet tab\n\nWe'll copy this completed data to another workbook...\n\n -  Right-click on the worksheet tab to display the shortcut menu, then click on Move or Copy to display the Move or Copy dialog box\n -  Click on the drop arrow for To book , then select (new book)\n -  Click on Create a copy so it appears ticked\n\nThis will create a new workbook as well as making a copy of the worksheet...\n\n -  Click on\n\n[OK]\n\n\n\nA new workbook will be created and Maintenance will be the only worksheet in the workbook…\n\n\n\n Save the new workbook as Maintenance.xlsx , then close it\n\n## For Your Reference…\n\n## To copy a sheet to another workbook :\n\n - 1. 
Right click on the worksheet tab, then click on Move or Copy\n - 2. Select either (new book) or the name of another workbook in To book\n - 3. Tick Create a copy , then click on [OK]\n\n\n\n1\n\n\n\n", - "page_start": 12, - "page_end": 12, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## CHANGING WORKSHEET TAB COLOURS\n\nTo make it easier for you to distinguish between worksheets, Excel enables you to change the colours of worksheet tabs. This allows you, for example, to quickly distinguish between different\n\nfinancial years, departments or months. The active sheet appears as underlined in a gradient version of the selected colour, while inactive tabs will display a solid colour background.\n\n\n\n## For Your Reference…\n\n## To change the colour of a worksheet tab :\n\n - 1. Right-click on the worksheet tab to display the shortcut menu\n - 2. Point to Tab colour to display a palette of colour options\n - 3. Click on the desired colour\n\n## Handy to Know…\n\n -  To apply the same colour to two or more sheets at once, select them first. Hold down to select consecutive worksheets or hold down to select non-consecutive worksheets.", - "page_start": 13, - "page_end": 13, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## UNDERSTANDING WORKBOOKS\n\nIn Microsoft Excel the data you enter, whether it consists of numbers, text, or formulas, is stored in a file known as a workbook . Workbooks are just like huge electronic books with pages (or\n\nsheets ) that have been ruled into columns and rows. Before using Excel it is helpful to know what the various parts and elements that make up a workbook are.\n\n\n\n-  A worksheet (or page) in a workbook contains 16,384 columns that are labelled using letters of the alphabet. 
The first column in a worksheet is labelled column A , while the last is labelled XFD\n-  A worksheet (or page) in a workbook contains 1,048,576 rows that are labelled using numbers from 1 to 1,048,576\n-  Where a column and row intersect we get what is known as a cell . You enter your data into these cells. Each cell in a worksheet can hold up to 32,767 characters - although it would be unrealistic to ever push it this far. Cells are referred to by their column and row labels. For example, in the screen above the cell we are pointing to is C11 - this reference is known as the cell address and is most important as it is frequently used in commands and formulas\n-  When you start typing something, you want it to appear somewhere in the worksheet. As a consequence when the Status Bar shows Ready mode, at least one cell in the worksheet will be highlighted - this is known as the active cell . In the screen above, the active cell is cell A1 -notice that the column label and the row label also appears coloured to indicate the active cell. You can have more than one active cell - when this occurs you have what is known as a range\n-  A workbook (as you would expect) is made up of pages known as worksheets . You can have as many sheets in a workbook as your computer resources can accommodate. As a default, a new blank workbook normally has 3 worksheets labelled Sheet1 , Sheet2 , and Sheet3 . 
Of course these labels are pretty boring and meaningless and can be changed to something more relevant\n-  The Insert Worksheet button here will insert another worksheet into the current workbook should you need it", - "page_start": 4, - "page_end": 4, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "- Sa m e F i le Continue using the previous file with this exercise, or open the file E1317 Charting\\_11.xlsx...\n -  Click on the Revenue Chart worksheet tab\n -  Click on the CHART TOOLS: DESIGN tab, then click on the Move Chart tool in the Location group to display the Move Chart dialog box\n -  Click on Object in , then click on the drop arrow and click on Sheet 2\n -  Click on [OK] to move the chart to the worksheet", - "page_start": 56, - "page_end": 56, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## TYPING TEXT OR NUMBERS INTO A WORKSHEET\n\nGenerally when you start a new spreadsheet project, the first task is to enter some headings into rows and columns. To type anything into a worksheet you need to make the cell into which\n\nyou wish to enter the data active. This can be done in a number of ways but the most common is to click in it first before typing.\n\n\n\n## For Your Reference… For Your Reference…\n\n## To enter text : To save a new document :\n\n - 1. Click the cell pointer on the desired cell and 1. Click on the File Tab and select Save As\n - type the required information 2. Press , an arrow key or to 2. Locate the storage folder in the Navigation pane\n - confirm the data entry and to move the cell 3. Type a File name and click on [Save]\n\npointer to another cell\n\n## Handy to Know… Handy to Know…\n\n -  You don't have to use or to make adjacent cells active. You can simply use the mouse and click in the cells if you want or even press the arrow keys to move up, down, left, or right. 
 In the exercise above we have named the workbook Garden Department Sales and filed it in C:\\Course Files for Excel 2010 . Each time you start Excel it will most likely assume you want to file your workbooks in a folder called Documents which is associated with the user name you use on the computer.", - "page_start": 6, - "page_end": 6, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## PRINTING A CHART SHEET\n\nYou can print an embedded chart simply by printing the worksheet as if it is a standard worksheet. You can also print a chart sheet in exactly the same way. To print a chart sheet, the worksheet data. But the real benefit of inserting\n\n## Try This Yourself:\n\n\n\nContinue using the previous file with this exercise, or open the file E1317 Charting\\_10.xlsx...\n\n -  Click on the Revenue Chart\n\nworksheet tab\n\n\n\n -  Click on the Chart Title text box, select the text, then type Revenue Chart to change the title\n -  Repeat step 2 to change the Axis Title to Euros\n -  Click on the FILE tab, then click on Print to see the print options and a preview of the chart\n\nNo further adjustment is required here so we can go ahead and print it…\n\n -  If you wish to print the chart, click on [Print]", - "page_start": 55, - "page_end": 55, - "source_file": "Excel Training Manual 1.pdf" - } - ] - }, - { - "references": { - "source_file": "Excel Training Manual 1.pdf", - "query": "I want to freeze a pane in my Excel worksheet ", - "target_page": 16, - "target_passage": "To freeze panes in a worksheet: 1. Click in the cell below and to the right of the area you want to freeze/unfreeze 2. Click on the VIEW tab 3. Click on Freeze Panes in the Window group, then select Freeze Panes ", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## INSERTING AND DELETING WORKSHEETS\n\nOnce you've decided on a structure for your workbook, you may find that there are some worksheets that can be deleted . 
Alternatively, you may find that you need additional blank\n\nworksheets inserted . However, remember that deletion of worksheets is permanent and can't be undone using Undo , so always save your workbook before making these changes.\n\n## Try This Yourself:\n\nn\n\n\n\nBefore starting this exercise you MUST open the file E1324 Worksheet Techniques\\_1.xlsx…\n\n -  Examine the workbook - it currently contains one worksheet named Sheet1\n -  Click on the New Sheet icon at the end of the worksheet tabs\n - A new worksheet named Sheet2 will be inserted. You can also use the keyboard shortcut...\n -  Press + to insert another new worksheet\n\nThis sheet is named Sheet3 and is inserted before the currently selected sheet. Now let's delete a sheet...\n\n -  Right-click on the Sheet3 worksheet tab to display the shortcut menu\n -  Select Delete to remove the worksheet\n\nAs the worksheet contains no data, the sheet will be deleted immediately. If a worksheet contains data, Excel will ask you to confirm your actions...\n\n\n\n Repeat steps 4 and 5 to delete Sheet2\n\n\n\n## For Your Reference…\n\nTo insert a new worksheet into a workbook :\n\n -  Click on the New Sheet icon to the right of the worksheet tabs\n\nTo delete a worksheet from a workbook :\n\n -  Right click on the worksheet tab, then select Delete\n\n## Handy to Know…\n\n -  To insert a worksheet between existing worksheets, right-click on the worksheet tab before which you want to insert a new sheet, then click on Insert to display the Insert dialog box. Select Worksheet and click on [OK] .\n\n1\n\n2\n\n3\n\n4\n\n5", - "page_start": 9, - "page_end": 9, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## FREEZING ROWS AND COLUMNS\n\nWhen you lay out your data in rows and columns, it is most likely that your headings end up at the top or to the left of your data. 
If you have a large amount of data, you may find that when you\n\nscroll across or down to particular cells, the headings scroll out of view. This problem can be resolved by freezing the rows and/or columns that hold the headings.\n\n## Try This Yourself:\n\n\n\n\n\nContinue using the previous file E1324 Worksheet\n\nwith this exercise, or open the file Techniques\\_11.xlsx...\n\n Click on the Maintenance worksheet tab, then spend a few moments examining the worksheet\n\nDepending on your screen, it is possible that you won't be able to see all of the figures on the screen at once...\n\n -  Click in cell B6 to select the cell\n -  Click on the VIEW tab, click on Freeze Panes in the Window group, then select Freeze Panes\n\nThin black lines appear above and to the left of the selected cell. This indicates that the areas above and to the left are frozen...\n\n -  Scroll to the right until Yearly Average in column L appears next to column A\n -  Scroll down until Overheads in row 25 is below row 5\n -  Press + to move to cell B6 - this is our temporary home cell, as the cells above and to the left are frozen\n\n\n\n On the VIEW tab, click on Freeze Panes in the Freeze Panes group, then click on Unfreeze Panes to unfreeze the rows and columns\n\n## For Your Reference…\n\n## To freeze panes in a worksheet :\n\n - 1. Click in the cell below and to the right of the area you want to freeze/unfreeze\n - 2. Click on the VIEW tab\n - 3. Click on Freeze Panes in the Window group, then select Freeze Panes\n\n\n\n\n\n\n\n\n\n## Handy to Know…\n\n -  If you want to freeze only the rows above the selected cell (leaving all columns unfrozen), select the cell in column A of that row - e.g. to freeze rows 1 to 6 , click in cell A7 . 
The same applies to freezing only columns and leaving the rows unfrozen: select the cell in row 1 .", - "page_start": 15, - "page_end": 15, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## GROUPING WORKSHEETS\n\nWorksheet grouping enables you to make the same change at once to all selected worksheets. This feature is useful in situations where your worksheets have identical layouts or text. For\n\nexample, if you want to format the heading for multiple worksheets, you simply group the worksheets, make a change to one worksheet and the other worksheets will reflect the change also.\n\n## Try This Yourself:\n\n## Sa m e F i le\n\n\n\nContinue using the previous file with this exercise, or open the file E1324 Worksheet Techniques\\_8.xlsx...\n\n Click on the Admin worksheet tab, hold down , then click on the Shop worksheet tab to select the first three worksheets\n\n -  Click in cell A1 to select the cell\n -  Click on the HOME tab, then click on Italics in the Font group\n\nThis will italicise the text in cell A1 on this and all other worksheets in the group…\n\n -  Click on the Maintenance worksheet tab, then the Shop worksheet tab to see that the changes have been applied here\n -  Click on the IT worksheet tab to see that the changes have not been applied to this worksheet\n\nSince this was not part of the grouped sheets the changes have not been applied here. Notice too that clicking on a tab deselects the previous grouping\n\n## For Your Reference…\n\n## To group worksheet tabs :\n\n - 1. Click on the first worksheet tab\n - 2. 
Hold down , then click on the last worksheet tab\n\n\n\n2\n\n\n\n\n\n3\n\n4\n\n\n\n\n\n## Handy to Know…\n\n -  To deselect a group, either click on the tab of a worksheet that is not in the group, or rightclick on a tab and select Ungroup Sheets .\n -  Most formatting and text changes done on a worksheet in a group will be applied to other sheets in that grouping.", - "page_start": 14, - "page_end": 14, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## COPYING A WORKSHEET\n\nJust as you can copy the contents of cells and ranges within a worksheet, you can duplicate worksheets within a workbook. This technique is ideal for replicating layouts. For example, if you\n\nhave a budget workbook that contains data for several departments, you can create a worksheet for the first department and then copy it to create identical worksheets for other departments.\n\n## Try This Yourself:\n\n\n\nContinue using the previous file with this exercise, or open the file E1324 Worksheet Techniques\\_1.xlsx...\n\n -  Right-click on Sheet1 to display the worksheet shortcut menu\n -  Select Move or Copy to display the Move or Copy dialog box\n -  Click on Create a copy so it appears ticked, then click on [OK]\n\nThe new worksheet is named Sheet1 (2). Let's create a 'template' from this worksheet by deleting unwanted data...\n\n -  Select the range B7:E9 , then press to clear it\n -  Repeat step 4 to clear the ranges B14:E23 , G7:J9 and G14:J23 , then press + to return to cell A1\n\nNow we can copy this 'template' to create additional worksheets...\n\n\n\n Repeat steps 1 to 3 three times to create three copies of the template worksheet - this time without data\n\nThe final worksheet should be named Sheet1 (5)\n\n## For Your Reference…\n\n## To copy a worksheet :\n\n - 1. Right-click on the worksheet to copy, then select Move or Copy\n - 2. Click on Create a copy so it appears ticked\n - 3. 
Click on [OK]\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n## Handy to Know…\n\n -  You can copy the current worksheet using the HOME tab by clicking on Format in the Cells group, then clicking on Move or Copy Sheet .\n -  The Before sheet options in the Move or Copy dialog box allow you to position the copied worksheet where you want.", - "page_start": 10, - "page_end": 10, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## UNDERSTANDING WORKBOOKS\n\nIn Microsoft Excel the data you enter, whether it consists of numbers, text, or formulas, is stored in a file known as a workbook . Workbooks are just like huge electronic books with pages (or\n\nsheets ) that have been ruled into columns and rows. Before using Excel it is helpful to know what the various parts and elements that make up a workbook are.\n\n\n\n-  A worksheet (or page) in a workbook contains 16,384 columns that are labelled using letters of the alphabet. The first column in a worksheet is labelled column A , while the last is labelled XFD\n-  A worksheet (or page) in a workbook contains 1,048,576 rows that are labelled using numbers from 1 to 1,048,576\n-  Where a column and row intersect we get what is known as a cell . You enter your data into these cells. Each cell in a worksheet can hold up to 32,767 characters - although it would be unrealistic to ever push it this far. Cells are referred to by their column and row labels. For example, in the screen above the cell we are pointing to is C11 - this reference is known as the cell address and is most important as it is frequently used in commands and formulas\n-  When you start typing something, you want it to appear somewhere in the worksheet. As a consequence when the Status Bar shows Ready mode, at least one cell in the worksheet will be highlighted - this is known as the active cell . In the screen above, the active cell is cell A1 -notice that the column label and the row label also appears coloured to indicate the active cell. 
You can have more than one active cell - when this occurs you have what is known as a range\n-  A workbook (as you would expect) is made up of pages known as worksheets . You can have as many sheets in a workbook as your computer resources can accommodate. As a default, a new blank workbook normally has 3 worksheets labelled Sheet1 , Sheet2 , and Sheet3 . Of course these labels are pretty boring and meaningless and can be changed to something more relevant\n-  The Insert Worksheet button here will insert another worksheet into the current workbook should you need it", - "page_start": 4, - "page_end": 4, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## Handy to Know…\n\n -  To copy a worksheet into an existing workbook, make sure that you open the destination workbook first to ensure that it is listed in To book in the Move or Copy dialog box.\n\n## MOVING OR COPYING A SHEET TO ANOTHER WORKBOOK\n\nYou can copy worksheets to other workbooks as required. For example, you might need to keep records for six different divisions - rather than send each division the entire set of records, you\n\ncan copy their worksheet to another workbook and send them their data only. 
If worksheets exist in the other workbook, you will need to determine the order in which to place the copied worksheet.\n\n## Try This Yourself:\n\nle\n\ni\n\nF\n\ne\n\nm\n\nSa\n\n\n\nContinue using the previous file with this exercise, or open the file E1324 Worksheet Techniques\\_6.xlsx...\n\n -  Click on the Maintenance worksheet tab\n\nWe'll copy this completed data to another workbook...\n\n -  Right-click on the worksheet tab to display the shortcut menu, then click on Move or Copy to display the Move or Copy dialog box\n -  Click on the drop arrow for To book , then select (new book)\n -  Click on Create a copy so it appears ticked\n\nThis will create a new workbook as well as making a copy of the worksheet...\n\n -  Click on\n\n[OK]\n\n\n\nA new workbook will be created and Maintenance will be the only worksheet in the workbook…\n\n\n\n Save the new workbook as Maintenance.xlsx , then close it\n\n## For Your Reference…\n\n## To copy a sheet to another workbook :\n\n - 1. Right click on the worksheet tab, then click on Move or Copy\n - 2. Select either (new book) or the name of another workbook in To book\n - 3. Tick Create a copy , then click on [OK]\n\n\n\n1\n\n\n\n", - "page_start": 12, - "page_end": 12, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## THE CHARTING PROCESS\n\nCharts provide a way of seeing trends in the data in your worksheet. The charting feature in Excel is extremely flexible and powerful and allows you to create a wide range of charts from\n\nany of the Insert commands in the Charts group on the\n\n## Inserting Charts\n\nThe first step when creating a chart is to select the data from the worksheet that you want to chart. It is important to remember that the selected range (which can be either contiguous or non-contiguous), should include headings (e.g. names of months, countries, departments, etc). These become labels on the chart. 
Secondly, the selected range should not (normally) include totals as these are inserted automatically when a chart is created.\n\nThe second step is to create a chart using the INSERT tab on the ribbon. You can choose a Recommended Chart where Excel analyses the selected data and suggests several possible chart layouts.\n\nAlternatively you can create the chart yourself from scratch by choosing one of the Insert commands in the Charts group. Charts that you create in Excel can be either embedded into a worksheet, or they can exist on their own sheets, known as chart sheets .\n\n## Embedded Charts\n\nCharts that appear within a worksheet are known as embedded charts. A chart is really an object that sits on top of the worksheet - unlike numbers and letters, charts are not actually placed into worksheet cells.\n\n## Chart Sheets\n\nIf you want to keep your chart separate from the data you can move the chart to its own sheet. Chart sheets make it easier and more convenient to work with your chart because you'll see more of it on the screen since the data is not there!\n\n\n\n", - "page_start": 43, - "page_end": 43, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## RENAMING A WORKSHEET\n\nBy default, Excel names worksheets as Sheet1 , Sheet2 , Sheet3 , etc. These names are fine if you are not planning to share the workbook, but changing these to something more relevant\n\nmakes it much easier to understand the purpose of a worksheet. 
You can also adjust the horizontal scroll bar to make room for longer, more meaningful worksheet names.\n\n## Try This Yourself:\n\n\n\n\n\nContinue using the previous file with this exercise, or open the file E1324 Worksheet Techniques\\_2.xlsx...\n\n -  Point to the vertical dots between the sheet names and the horizontal scroll bar, as shown\n\nThe pointer will change to a double-headed arrow...\n\n -  Click and drag the bar across to the right, to the end of column L , then release the mouse button\n -  Double-click on Sheet1 (5) to select the worksheet tab name\n\nThis will also place it into edit mode…\n\n -  Type Comms , then press\n -  Repeat steps 3 and 4 to rename the other worksheets:\n\nSheet1 (4)\n\nAdmin\n\nSheet1 (3)\n\nShop\n\nSheet1 (2)\n\nIT\n\nSheet1\n\nMaintenance\n\n## For Your Reference…\n\n## To rename a worksheet :\n\n - 1. Double click on the current name on the worksheet tab\n - 2. Type the new name and press\n\n\n\n\n\n\n\n\n\n\n\n\n\n## Handy to Know…\n\n -  You can rename a worksheet by right-clicking on the worksheet tab to display the shortcut menu and clicking on Rename .\n -  A worksheet tab name can contain up to 31 characters including spaces, but it is better to keep it short and succinct.", - "page_start": 11, - "page_end": 11, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## EMBEDDING A CHART INTO A WORKSHEET\n\nCharts can either be presented in their own sheets or they can be embedded into a worksheet that contains data. In fact, you can move a chart back and forth between its own\n\nsheet and a worksheet as often as you wish without impacting at all on the chart. 
Sometimes it is easier to work with a chart in its own sheet, but it may be necessary to print the chart with its data.\n\n## Try This Yourself:", - "page_start": 56, - "page_end": 56, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "Notice that the chart is no longer embedded on this worksheet\n\n## For Your Reference…\n\n## To create a chart sheet :\n\n - 1. Click on the CHART TOOLS: DESIGN tab, then click on Move Chart in the Location group\n - 2. Click on New Sheet , type a name for the sheet and click on [OK]\n\n2\n\n3\n\n\n\n4\n\n## Handy to Know…\n\n -  Keeping charts on their own sheets makes them easier to work with as they do not obstruct the data.\n\nare interested in printing the chart on its own page. Charts can be shifted back and forth between a worksheet and a chart sheet.", - "page_start": 51, - "page_end": 51, - "source_file": "Excel Training Manual 1.pdf" - } - ] - }, - { - "references": { - "source_file": "office-pdf.pdf", - "query": "What is the msodocexStructTypeArticle type value ?", - "target_page": 21, - "target_passage": "A group of nodes forming a single flow of text that should be read or searched as a contiguous block of content. Some documents have a single article and others have multiple articles.", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "| Type Value | Description |\n|-----------------------------------|-----------------------------------|\n| msodocexStructTypeTOC | A table of contents. |\n| msodocexStructTypeTOCI | An item in a table of contents. |\n| msodocexStructTypeExtLink | A link to an external resource. |\n| msodocexStructTypeIntLink | A link to an internal resource. |\n| msodocexStructTypeFootnote | A footnote. |\n| msodocexStructTypeEndnote | An endnote. |\n| msodocexStructTypeTextbox | A text box. |\n| msodocexStructTypeHeader | A block of text forming a header. |\n| msodocexStructTypeFooter | A footer. 
|\n| msodocexStructInlineShape | An inline shape. |\n| msodocexStructAnnotation | An annotation. |\n| msodocexStructTypeSpanBlock | A block of text. |\n| msodocexStructTypeWorkbook | A workbook. |\n| msodocexStructTypeWorksheet | A worksheet. |\n| msodocexStructTypeMacrosheet | A macrosheet. |\n| msodocexStructTypeDialogsheet | A dialogsheet. |\n| msodocexStructTypeSlide | A slide. |\n| msodocexStructTypeChart | A chart. |\n| msodocexStructTypeDiagram | A SmartArt diagram. |\n| msodocexStructTypeBulletText | Buller text. |\n| msodocexStructTypeTextLine | A line of text. |\n| msodocexStructTypeDropCap | A drop cap. |\n| msodocexStructTypeSection | A section. |\n| msodocexStructTypeAnnotationBegin | The beginning of an annotation. |\n| msodocexStructTypeAnnotationEnd | The end of an annotation. |", - "page_start": 21, - "page_end": 21, - "source_file": "office-pdf.pdf" - }, - { - "text": "| Type Value | Description |\n|-----------------------------------------|-------------------------------------------------------------------------|\n| msodocexStructTypeParaRTLAttr | A block of text within an article with right-to-left layout. |\n| msodocexStructTypeTableRTLAttr | A block of text forming a table with right-to-left layout. |\n| msodocexStructTypeHeadingRTLAttr | A heading in the text with right-to-left layout. |\n| msodocexStructTypeListItemRTLAttr | A block of text forming a list item with right-to-left layout. |\n| msodocexStructTypeParaUnannotatableAttr | A block of text within an article that is not annotatable. |\n| msodocexStructTypeTHead | The header row area in a table. |\n| msodocexStructTypeTBody | The body area in a table, i.e. the portion between the THead and TFoot. |\n| msodocexStructTypeLabel | A label. |\n| msodocexStructTypeEquation | An equation. |\n| msodocexStructTypeIntLinkNoteRef | A footnote or endnote reference mark link. |\n| msodocexStructTypeTFoot | The footer row area in a table. 
|\n\nfContentNode Specifies whether a DocExComment\\_EndStructNode structure marks the end of this structure node. If fContentNode is true , a\n\nDocExComment\\_EndStructNode structure closes off the content bounded by the node. If this fContentNode has a false value, then the node does not bound any content.\n\nThe fContentNode member affects the interpretation of the parent ID value of subsequent nodes. If fContentNode is true , nodes that are inserted between this DocExComment\\_BeginStructNode and a subsequent DocExComment\\_EndStructNode , and that have a parent ID of -1 , are children of this node. However, if fContentNode is true , nodes inserted after this DocExComment\\_BeginStructNode , and that have a parent ID of -1 , are not children of this node. They are children of the next-most-recently specified node that has fContentNode equal to false .\n\nYou can nest document structure nodes to arbitrary depth.\n\ncwchAltText Specifies the number of Unicode characters in the block of alternate text that follows the structure. This Unicode string specifies alternate text for the node (for example, alternate text for an image).", - "page_start": 22, - "page_end": 22, - "source_file": "office-pdf.pdf" - }, - { - "text": "```\ntypedef struct \\_MsoDocexStructNode { int idNode; MSODOCEXSTRUCTTYPE nodetype; WCHAR* pwchAltText; union { int iHeadingLevel; ULONG idPara; ULONG idDropCap; int iPage; WCHAR* pwchActualText; MSODOCEXLINEBREAKTYPE bt; int iListLevel; MSODOCEXLISTTYPE listType; ULONG idAtn; long cpLim; int shapeProperty; MsoDocexTableAttr tableAttr; WCHAR* idTableHeader; int iTargetParentId; }; } MSODOCEXSTRUCTNODE;\n```\n\nThe idNode member specifies the ID of the node being passed in the call to HrBeginStructNode . This member may not have a value of 0 . A value of -1 indicates that child nodes do not use the idNodeParent parameter to specify this node as their parent. Instead, this node can be a parent only by enclosing child nodes in the EMF. 
Multiple nodes can have an ID of -1 . If the ID is not -1 , the value is unique across the document.\n\nThe embedded union at the end of the MSODOCEXSTRUCTNODE is interpreted differently depending on the type of node:\n\n- iHeadingLevel is the heading level for an msodocexStructTypeHeading.\n- idPara is the paragraph id for a P, TOCI, or ListBody.\n- idDropCap is the id of an msodocexStructTypeDropCap.\n- iPage is the page number for an msodocexStructTypePage.\n- bt is the line break type for an msodocexStructTypeTextLine.\n- iListLevel is the list level for an msodocexStructTypeList or msodocexStructTypeListItem.\n- listType is the list type for an msodocexStructTypeListItem.\n- idAtn is the id of an msodocexStructTypeAnnotationBegin or msodocexStructTypeAnnotationEnd.\n- cpLim is used to determine the nesting order of tables within tables for an msodocexStructTypeTable, msodocexStructTypeTOC, or msodocexStructTypeListBody.", - "page_start": 8, - "page_end": 8, - "source_file": "office-pdf.pdf" - }, - { - "text": "- shapeProperty is for a msodocexStructTypeFigure where the content is a shape, text box, or table cell and contains bit fields from the MSODOCEXSHAPEPROPERTY enumeration.\n- tableAttr is the table cell attributes for a msodocexStructTypeTH or msodocexStructTypeTD.\n- idTableHeader is the unique id for an msodocexStructTypeTH or msodocexStructTypeTD.\n- iTargetParentId is the id of the node to reparent an msodocexStructTypeDiagram to.\n\nTable 3. Enumerated values of MSODOCEXLINEBREAKTYPE\n\nノ Expand table\n\nTable 4. Enumerated values of MSODOCEXLISTTYPE\n\n| Value | Description |\n|-----------------------------|--------------------|\n| msodocexLineBreakTypeNormal | Normal line break. |\n| msodocexLineBreakTypeManual | Manual line break. |\n| msodocexLineBreakTypeEOP | End of paragraph. |\n\n## ノ Expand table\n\nTable 5. 
Enumerated values of MSODOCEXSHAPEPROPERTY bit fields\n\n| Value | Description |\n|-------------------------------|-------------------------------------|\n| msodocexListTypeNone | No bullets or numbering. |\n| msodocexListTypeBulletDisc | Disc-shaped bullets. |\n| msodocexListTypeBulletCircle | Circle-shaped bullets. |\n| msodocexListTypeBulletSquare | Square-shaped bullets. |\n| msodocexListTypeBulletDecimal | Decimal numbering. |\n| msodocexListTypeUpperRoman | Uppercase Roman numeral numbering. |\n| msodocexListTypeLowerRoman | Lowercase Roman numberal numbering. |\n| msodocexListTypeUpperAlpha | Uppercase alphabetic numbering. |\n| msodocexListTypeLowerAlpha | Lowercase alphabetic numbering. |", - "page_start": 9, - "page_end": 9, - "source_file": "office-pdf.pdf" - }, - { - "text": "Table 7. Document structure node types\n\n\n\nExpand table\n\n| Type Value | Description |\n|----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| msodocexStructTypePara | A block of text within an article. Its parent node must be an article. |\n| msodocexStructTypeFigure | A graphical element (for example, an image or collection of shapes) that has a textual representation. The textual representation is the alternate text used for reading or searching the document. |\n| msodocexStructTypeArticle | A group of nodes forming a single flow of text that should be read or searched as a contiguous block of content. Some documents have a single article and others have multiple articles. |\n| msodocexStructTypeHeading | A heading in the text. |\n| msodocexStructTypeTable | A block of text forming a table. |\n| msodocexStructTypeTR | A block of text forming a single row of a table. |\n| msodocexStructTypeTD | A block of text forming a single cell in a table row. 
|\n| msodocexStructTypeTH | A block of text forming a single header cell in a table row. |\n| msodocexStructTypeList | A block of text forming a list. |\n| msodocexStructTypeListItem | A block of text forming a list item. |\n| msodocexStructTypeListBody | A block of text forming the body of a list item. |\n| msodocexStructTypeDocument | A document. |\n| msodocexStructTypePage | A page in the document. |", - "page_start": 20, - "page_end": 20, - "source_file": "office-pdf.pdf" - }, - { - "text": "The metadatatype parameter specifies the type of metadata represented by the string. The metadatatype parameter must be one of the following values from the MSODOCEXMETADATA enumeration type.\n\nTable 8. Enumerated values of MSODOCEXMETADATA\n\n\n\nExpand table\n\n| Value | Description |\n|--------------------------|--------------------------------------------------------------------------------------------------------------------------------|\n| msodocexMetadataTitle | The title of the document. |\n| msodocexMetadataAuthor | The author of the document |\n| msodocexMetadataSubject | String that describes the subject matter of the document (for example, business or science). |\n| msodocexMetadataKeywords | Keyword relevant to the document content. |\n| msodocexMetadataCreator | The creator of the document, possibly distinct from the author. |\n| msodocexMetadataProducer | The producer of the document, possibly distinct from the author or creator. |\n| msodocexMetadataCategory | String that describes the type of document (for example, memo, article, or book). |\n| msodocexMetadataStatus | Status of the document. This field can reflect where the document is in the publication process (for example, draft or final). |\n| msodocexMetadataComments | Miscellaneous comments relevant to the document. |\n\nFor a given document, each metadata type can have only one string associated with it. 
So, for example, if the document has multiple keywords, they are passed to the add-in as one concatenated string.\n\nThe pwchValue parameter specifies a Unicode string that contains the metadata itself.\n\nHow the add-in incorporates the text-string metadata into the exported document depends on the implementation details of the export code and the type of fixed-format used in the exported document.\n\n## HrAddDocumentMetadataDate\n\nPublisher calls the HrAddDocumentMetadataDate method to specify document metadata in the form of a FILETIME structure.", - "page_start": 34, - "page_end": 34, - "source_file": "office-pdf.pdf" - }, - { - "text": "| Value | Numeric Value | Description |\n|------------------------|-----------------|------------------------------------------------|\n| msodocexShape | 0x00000001 | The object is a shape or text box. |\n| msodocexShapeText | 0x00000002 | The object has non-whitespace text. |\n| msodocexShapePath | 0x00000004 | The object has a fill and/or outline. |\n| msodocexShapeAltText | 0x00000008 | The object has Alt Text. |\n| msodocexShapeEquation | 0x00000010 | The object has text that contains an equation. |\n| msodocexShapeTabelCell | 0x00000020 | The object is a cell in a table. 
|\n\n## MsoDocexTableAttr\n\nThe MsoDocexTableAttr structure fits in 32 bits and includes the row and column span and header scope information for a table cell.\n\n```\nC++ struct MsoDocexTableAttr { static constexpr unsigned int MaxSpanBits = sizeof(unsigned int) * 8 / 2 - 1; static constexpr unsigned int MaxSpanValue = (1u << MaxSpanBits) - 1; unsigned int rowSpan : MaxSpanBits; unsigned int fRowScope : 1; unsigned int colSpan : MaxSpanBits; unsigned int fColScope : 1; };\n```\n\nThe members of MsoDocexTableAttr structure are as follows:\n\n - MaxSpanBits Specifies the number of bits available for the rowSpan and colSpan values, which is 15.\n - MaxSpanValue Specifies the maximum value that can be specified for the rowSpan and colSpan.\n - rowSpan Specifies the number of rows that a table cell spans.\n - fRowScope Specifies whether the header is Row/Both or Column.\n - colSpan Specifies the number of columns that a table cell spans.", - "page_start": 10, - "page_end": 10, - "source_file": "office-pdf.pdf" - }, - { - "text": "## ノ Expand table\n\n| Comment Value | Structure Type |\n|-----------------------------------------|------------------------------------|\n| msodocexcommentExternalHyperlink | DocExComment\\_ExternalHyperlink |\n| msodocexcommentExternalHyperlinkRctfv | DocExComment\\_ExternalHyperlink |\n| msodocexcommentInternalHyperlink | DocExComment\\_InternalHyperlink |\n| msodocexcommentInternalHyperlinkRctfv | DocExComment\\_InternalHyperlink |\n| msodocexcommentColorInfo | DocExComment\\_ColorInfo |\n| msodocexcommentColorMapEnable | DocExComment\\_ColorEnable |\n| msodocexcommentBeginTextRun | DocExComment\\_BeginTextRun |\n| msodocexcommentBeginTextRunRTL | DocExComment\\_BeginTextRun |\n| msodocexcommentEndTextRun | DocExComment\\_EndTextRun |\n| msodocexcommentBeginStructNode | DocExComment\\_BeginStructNode |\n| msodocexcommentEndStructNode | DocExComment\\_EndStructNode |\n| msodocexcommentUnicodeForNextTextOut | 
DocExComment\\_UnicodeForNextTextOut |\n| msodocexcommentUnicodeForNextTextOutRTL | DocExComment\\_UnicodeForNextTextOut |\n| msodocexcommentEPSColor | DocExComment\\_EPSColor |\n| msodocexcommentEPSCMYKJPEG | DocExComment\\_EPSColorCMYKJPEG |\n| msodocexcommentEPSSpotImage | DocExComment\\_EPSColorSpotImage |\n| msodocexcommentEPSStart | DocExComment\\_EPSStart |\n| msodocexcommentPageName | DocExComment\\_PageName |\n| msodocexcommentTransparent | DocExComment\\_Transparent |\n\n## DocExComment\\_ExternalHyperlink(Rctfv)\n\nThe DocExComment\\_ExternalHyperlink(Rctfv) structure describes a hyperlink that links to outside of the document, for example to a Web site on the Internet.", - "page_start": 14, - "page_end": 14, - "source_file": "office-pdf.pdf" - }, - { - "text": "```\nC++ HRESULT HrAddDocumentMetadataDate( MSODOCEXMETADATA metadataType, const FILETIME* pftLocalTime );\n```\n\nThe metadatatype parameter specifies the type of metadata represented by the FILETIME structure. The metadatatype parameter must be one of the following values from the MSODOCEXMETADATA enumeration type.\n\nノ\n\nExpand table\n\nTable 9. Enumerated values of MSODOCEXMETADATA\n\n| Value | Description |\n|------------------------------|------------------------------------------|\n| msodocexMetadataCreationDate | The creation date for the document. |\n| msodocexMetadataModDate | The last-modified date for the document. |\n\nThe pftLocalTime parameter specifies a pointer to a FILETIME structure that contains the date and time information for the metadata. 
The following code snippet demonstrates how to extract this information from the structure.\n\n```\nC++ SYSTEMTIME st = { 0 }; WCHAR s[100]; FileTimeToSystemTime(pfiletime, &st); swprintf(s, 99, L\" %04d-%02d-%02dT%02d:%02d:%02dZ\", st.wYear % 10000, st.wMonth % 100, st.wDay % 100, st.wHour % 100, st.wMinute % 100, st.wSecond % 100);\n```\n\nHow the add-in incorporates the date and time metadata into the exported document depends on the implementation details of the export code and the type of fixed-format used in the exported document.\n\n## HrFinalize\n\nPublisher calls the HrFinalize method at the end of the document-export process.\n\n```\nC++\n```", - "page_start": 35, - "page_end": 35, - "source_file": "office-pdf.pdf" - }, - { - "text": "The collection of structure nodes within the document forms a tree; each node has a parent node and may also have sibling nodes. The idNodeParent and iSortOrder members describe the structure of this tree. Note that a child node may or may not appear between the DocExComment\\_BeginStructNode and\n\nDocExComment\\_EndStructNode structures of the parent node in the EMF.\n\n```\nC++ struct DocExComment\\_BeginStructNode { DWORD ident {}; DWORD iComment {}; int idNodeParent {}; int iSortOrder {}; MSODOCEXSTRUCTNODE desn; BOOL fContentNode {}; int cwchAltText {}; };\n```\n\nThe members of the DocExComment\\_BeginStructNode structure are as follows:\n\n - ident Specifies the constant value, msodocexsignature, which identifies this EMF comment as containing semantic information.\n - iComment Specifies the MSODOCEXCOMMENT value, msodocexcommentBeginStructNode.\n - idNodeParent Specifies the ID of the parent node. A value of 0 specifies the root node. A value of -1 specifies the currently open structure node, that is, the enclosing structure node.\n - iSortOrder Specifies the sort order of the structure node among its sibling nodes. 
The sort order enables the add-in to order the content correctly in the exported document.\n\nNo two nodes can have the same sort order. However, the set of integers that constitute the sort order do not need to be contiguous.\n\nA value of -1 indicates that the sibling order is the same order in which the nodes appear in the EMF comments. Note that the order in which the content appears in the EMF is not necessarily the order in which the content is consumed by a user of the document.\n\n - desn Specifies a MSODOCEXSTRUCTTYPE structure, which is defined earlier in the document.", - "page_start": 19, - "page_end": 19, - "source_file": "office-pdf.pdf" - } - ] - }, - { - "references": { - "source_file": "office-pdf.pdf", - "query": "What are vector colors ?", - "target_page": 29, - "target_passage": "Vector colors are any COLORREF values that the add-in receives from Publisher.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "an energy of interband transitions, which is roughly 2 eV . This would be consistent with Refs. 8,9.\n\nWe begin with formulating our calculational basis in the next section. Then we take up the four cases and consider in each case the extent to which the Kubo sum is satisfied up to the order of bandwidth and the functional form and the sign of ∆ W ( ω c ). The last section presents our conclusions.\n\n## II. OPTICAL INTEGRAL IN NORMAL AND SUPERCONDUCTING STATES\n\nThe generic formalism of the computation of the optical conductivity and the optical integral has been discussed several times in the literature 21-23,26,29 and we\n\njust list the formulas that we used in our computations. The conductivity σ (Ω) and the optical integral W ( ω c ) are given by (see for example Ref. 35).\n\nσ ' (Ω) = Im [ -Π(Ω) Ω+ iδ ] = -Π '' (Ω) Ω + πδ (Ω) Π ' (Ω) (7a)\n\nW ( ω c ) = ∫ ω c 0 σ ' (Ω) d Ω = -∫ ω c 0+ Π '' (Ω) Ω d Ω + π 2 Π ' (0) (7b)\n\nwhere ' X ' ' and ' X '' ' stand for real and imaginary parts of X . 
We will restrict with T = 0. The polarization operator Π(Ω) is (see Ref. 36)\n\nΠ( i Ω) = T ∑ ω ∑ /vector k ( ∇ /vector k ε /vector k ) 2 ( G ( iω, /vector k ) G ( iω + i Ω , /vector k ) + F ( iω, /vector k ) F ( iω + i Ω , /vector k ) ) (8a)\n\nΠ ' (Ω) = 1 π 2 ∑ /vector k ( ∇ /vector k ε /vector k ) 2 ∫ ' ∫ ' dxdy ( G '' ( x, /vector k ) G '' ( y, /vector k ) + F '' ( x, /vector k ) F '' ( y, /vector k ) ) n F ( y ) -n F ( x ) y -x (8c)\n\nΠ '' (Ω) = -1 π ∑ /vector k ( ∇ /vector k ε /vector k ) 2 ∫ 0 -Ω dω ( G '' ( ω, /vector k ) G '' ( ω +Ω , /vector k ) + F '' ( ω, /vector k ) F '' ( ω +Ω , /vector k ) ) (8b)\n\nwhere ∫ ' denotes the principal value of the integral, ∑ /vector k is understood to be 1 N ∑ /vector k ,( N is the number of lattice sites), n F ( x ) is the Fermi function which is a step function at zero temperature, G and F are the normal and anomalous Greens functions. given by 37\n\nFor a NS, G ( ω, /vector k ) = 1 ω -Σ( k, ω ) -ε /vector k + iδ (9a)\n\nFor a SCS, G ( ω, /vector k ) = Z k,ω ω + ε /vector k Z 2 k,ω ( ω 2 -∆ 2 k,ω ) -ε 2 /vector k + iδsgn ( ω ) (9b)\n\nF ( ω, /vector k ) = Z k,ω ∆ k,ω Z 2 k,ω ( ω 2 -∆ 2 k,ω ) -ε 2 /vector k + iδsgn ( ω ) (9c)\n\nwhere Z k,ω = 1 -Σ( k,ω ) ω , and ∆ k,ω , is the SC gap. Following earlier works 31,33 , we assume that the fermionic self-energy Σ( k, ω ) predominantly depends on frequency and approximate Σ( k, ω ) ≈ Σ( ω ) and also neglect the frequency dependence of the gap, i.e., approximate ∆ k,ω by a d -wave ∆ k . The lattice dispersion ε /vector k is taken from Ref. 38. To calculate W K , one has to evaluate the Kubo term in Eq.3 wherein the distribution function n /vector k , is calculated from\n\nn ( ε /vector k ) = -2 ∫ 0 -∞ dω 2 π G '' ( ω, /vector k ) (10)\n\nThe 2 is due to the trace over spin indices. We show the distribution functions in the NS and SCS under different circumstances in Fig 2.\n\nThe /vector k -summation is done over first Brillouin zone for a 2-D lattice with a 62x62 grid. 
The frequency integrals are done analytically wherever possible, otherwise performed using Simpson's rule for all regular parts. Contributions from the poles are computed separately using Cauchy's theorem. For comparison, in all four cases we also calculated FGT sum rule by replacing ∫ d 2 k = d Ω k d/epsilon1 k ν /epsilon1 k , Ω k and keeping ν constant. We remind that the FGT is the result when one assumes that the integral in W ( ω c ) predominantly comes from a narrow region around the Fermi surface.\n\nWe will first use Eq 3 and compute W K in NS and SCS. This will tell us about the magnitude of ∆ W ( ω c = ∞ ). We next compute the conductivity σ ( ω ) using the equations listed above, find W ( ω c ) and ∆ W ( ω c ) and compare ∆ f ( ω c ) and ∆ W K .", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0764.pdf" - }, - { - "text": "∫ ' ∞ ' 0 Reσ (Ω) d Ω = W K = πe 2 2 N ∑ /vector k ∇ 2 /vector k x ε /vector k n /vector k (3)\n\nwhere n /vector k is the electronic distribution function and ε /vector k is the band dispersion. Prime in the upper limit of the integration has the practical implication that the upper limit is much larger than the bandwidth of a given band which crosses the Fermi level, but smaller than the frequencies of interband transitions. Interactions with external objects, e.g., phonons or impurities, and interactions between fermions are indirectly present in the distribution function which is expressed via the full fermionic Green's function as n /vector k = T ∑ m G ( /vector k, ω m ). For /epsilon1 k = k 2 / 2 m , ∇ 2 /vector k x ε /vector k = 1 /m , W K = πne 2 / (2 m ), and Kubo sum rule reduces to Eq. (1). In general, however, ε /vector k is a lattice dispersion, and Eqs. (1) and (3) are different. Most important, W K in Eq. (3) generally depends on T and on the state of the system because of n /vector k . 
In this situation, the temperature evolution of the optical integral does not reduce to a simple redistribution of the spectral weight - the whole spectral weight inside the conduction band changes with T . This issue was first studied in detail by Hirsch 4 who introduced the now-frequently-used notation 'violation of the conductivity sum rule'.\n\nIn reality, as already pointed out by Hirsch, there is no true violation as the change of the total spectral weight", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0764.pdf" - }, - { - "text": "same type that is used for RGB color. For information about the COLORREF structure, see COLORREF.\n\nTo resolve color IDs in the EMF back to the extend color space, the add-in calls back to Publisher through the HrResolveColor method of the IMsoDocExporterSite interface. The add-in passes Publisher an interface pointer to an IDOCEXCOLOR interface as one of the parameters to HrResolveColor . Publisher takes the color IDs, also specified in the call to HrResolveColor , converts them to extended color (RGB, CMYK, or spot color), and passes them back to the add-in through the methods in the IDOCEXCOLOR interface.\n\n## Vector Color and Recolored Images\n\nVector colors are any COLORREF values that the add-in receives from Publisher. For example, text color, line stroke color, and color for metafile recolor. When color mapping is enabled, Publisher uses a color ID for COLORREF rather than a real RGB color value. If Publisher provides the add-in an IMsoDocExporterSite interface pointer by calling the SetDocExporterSite method of the IMsoDocExporter interface, the add-in should always call the IMsoDocExporterSite::HrResolveColor method to convert the COLORREF to an extended color, which the add-in receives through the methods in the IDOCEXCOLOR interface.\n\nTo support vector color mapping, the add-in needs to do the following:\n\n - Implement class support for an IDOCEXCOLOR interface. 
The methods in this interface enable Publisher to pass extended color back to the add-in.\n - Cache the following color state values from the semantic records in the EMF.\n - Set foreground color for recoloring. This is set through the DocExComment\\_ColorInfo structure.\n - Set background color for recoloring. This is set through the DocExComment\\_ColorInfo structure.\n - Determine when color mapping is enabled. This is set through the DocExComment\\_ColorEnable structure.\n - For a vector color, create an IDOCEXCOLOR interface with the color ID, so that IDOCEXCOLOR::GetUnresolvedRGB returns the color ID. The add-in should call the IMsoDocExporterSite::HrResolveColor method with the IDOCEXCOLOR interface and cached color states. Publisher calls the IDOCEXCOLOR interface methods with the final color, which can be RGB, CMYK, spot, or registration tint.", - "page_start": 28, - "page_end": 28, - "source_file": "office-pdf.pdf" - }, - { - "text": "```\n✞ # Create AIF object aif = init\\_aif( A::Vector{Array{T, N}}, # A-matrices B::Vector{Array{T, N}}; # B-matrices C::Vector{Array{Real}}, # C-matrices (optional) D::Vector{Vector{Real}}, # D-matrices (optional) E::Vector{T}, # E-vector (optional) pA::Union{Vector{Array{T, N}}, Nothing}, # Dirichlet priors for A-matrices (optional) pB::Union{Vector{Array{T, N}}, Nothing}, # Dirichlet priors for B-matrices (optional) pD::Union{Vector{Array{Real}}, Nothing}, # Dirichlet priors for D-vectors (optional) parameters::Dict{String, Real}, # Dictionary containing other parameters (optional) settings::Dict{String, Any} # Dictionary containing settings (optional) ) ✝\n```\n\n```\n✞ # Information about number of states , observations , actions and policy length states = [6] # Six states , single factor observations = [5] # Five observations , single modality controls = [2] # Two actions , single factor policy\\_length = 1 # Length of policies # Generate uniform templates for matrices and vectors of the generative model A, B, C, D, E 
= create\\_matrix\\_templates(states, observations, controls, policy\\_length) ✝\n```\n\n```\n✞ # We make C take the following form: [0, 0, 0, 0, 1] C[1] = onehot(5,5) # Initialize the single element of the C object with a one-hot vector # D will be: [1, 0, 0, 0, 0, 0] D[1] = onehot(1,6) # Initialize the single element of the D object with a one-hot vector # To make the agent prefer policy 2 E = onehot(2,2) # Initialize as a one-hot encoded vector: [0,1] ✝\n```\n\n☎\n\n✆\n\n☎\n\nA and B are the only mandatory arguments to the init\\_aif function-the other arguments are keyword arguments that default to uniform priors. A , B , C , D and E and their corresponding Dirichlet priors, in the cases of A , B and D , should be formatted as standard array objects. All but E can have multiple modalities/factors (see Section 4), so they should be formatted as vectors of arrays with one array per modality/factor. These arrays can be hand-specified by the user, or be generated with some of the helper functions supplied by ActiveInference . Here, we create an AIF agent equipped with a generative model with six environmental states, five possible observations and two possible actions. Here, we use helper functions to create matrices and vectors with the correct dimensions; in Section 4, we create them manually. First, we define the number of states, observations, controls and the length of policies:\n\n✆\n\n☎\n\nThe A object generated here is a one-dimensional vector containing a uniform 5 × 6 matrix (six states and five observations). The B object is a one-dimensional vector containing a uniform 6 × 6 × 2 array (six states and two actions). The C , D and E objects are onedimensional vectors, each containing uniform vectors with their corresponding sizes. We can now modify these to supply the agent with more informative priors over observations, initial states and policies. 
Here, we performed this using the onehot function:\n\n✆\n\nWe now create the Dirichlet priors for A , B and D . When we use parameter learning, these are used to define A , B and D defined above, and are updated at every time step. One way to construct Dirichlet priors is to simply multiply the matrices below with a scaling factor; a higher scaling leads to more precise priors that require stronger evidence to update. Here, we use a scaling parameter of 2. In the current version, parameter learning is only implemented for the A , B and D :", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "The DocExComment\\_EPSColorSpotImage structure provides spot color information for the subsequent RGB image. For more information about this structure, see the section Extended Color Support.\n\n```\nC++ typedef struct { DWORD ident {}; DWORD iComment {}; COLORREF cmykAlt { 0 }; COLORREF rgbAlt { 0 }; float flTintMin {}; float flTintMax {}; char szSpotName[1]; } DocExComment\\_EPSColorSpotImage;\n```\n\nThe members of the DocExComment\\_EPSColorSpotImage structure are as follows:\n\n - ident Specifies the constant value, msodocexsignature, which identifies this EMF comment as containing semantic information.\n - iComment Specifies the MSODOCEXCOMMENT value, msodocexcommentEPSSpotImage.\n - cmykAlt Specifies a CMYK color ID.\n - rgbAlt Specifies an RGB color ID.\n - flTintMin Specifies the minimum tint.\n - flTintMax Specifies the maximum tint.\n - szSpotName[1] Specifies a variable length, zero-terminated string that contains the spot name.\n\n## Extended Color Support\n\nTo support extended color spaces in Publisher, additional EMF semantic records and interfaces are needed because EMF only supports RGB (red-green-black) colors. 
Extended color spaces include CMYK (cyan-magenta-yellow-black) and spot color space, which are commonly used in commercial printing.\n\nPublisher uses color mapping to represent extended colors in the document EMF. Publisher builds a color table for all colors used in the document and replaces actual colors with color IDs in the EMF. The type for the color ID is COLORREF , which is the", - "page_start": 27, - "page_end": 27, - "source_file": "office-pdf.pdf" - }, - { - "text": "- When either foreground color or background color for recoloring is specified from an EMF semantic record, the add-in should recolor images in the add-in (for example, metafiles or raster pictures).\n\n## Non-Recolored Images\n\nEMF supports CMYK images using GDI+. Therefore, images in the EMF may be either RGB or CMYK. If the image is a CMYK image, the add-in needs to convert the image to the target color space.\n\nPublisher maintains a target color space for the document. The add-in can use this target color space by calling the IMsoDocExporterSite::HrConvertImageColorSpace method with the image's color space.\n\n## Color from EPS Files\n\nEncapsulated Postscript (EPS) is a metafile type that supports extended color spaces. User who embed EPS images in a Publisher document expect the color information to be used in the fixed-format output. Inside Publisher, the EPS is converted to an EMF with EPS-related semantic records. This EMF is then embedded in the page EMF file that the application passes to the add-in.\n\nTo support color in EPS files, the add-in needs to do the following:\n\n - Call the IMsoDocExporterSite::SetEPSInfo method for DocExComment\\_EPSColor records encountered in the EMF.\n - Extract the CMYK image from the DocExComment\\_EPSColorCMYKJPEG record in the EMF. This record contains a binary object that is the actual CMYK JPEG file stream. 
Use it to replace the RGB image specified in the subsequent call to the StretchDIBits function.\n - The DocExComment\\_EPSColorSpotImage record provides spot color information for the subsequent RGB image, which is always an index image. The add-in needs to convert the spot image to the target color space.\n - The add-in can optionally call the IMsoDocExporterSite:: HrGetSpotRecolorInfo method to obtain the document's target color from Publisher. Then the add-in can recolor the subsequent RGB image by mapping colors from the palette of the RGB image to flTintMin and flTintMax tints specified in the\n\nDoxExComment\\_EPSColorSpotImage palette is used for the mapping.\n\n - record. The luminosity for each color of the", - "page_start": 29, - "page_end": 29, - "source_file": "office-pdf.pdf" - }, - { - "text": "H = -  J 0 ∑ 〈 ij 〉 /vector S i · /vector S j + J 1 ∑ 〈 ik 〉 /vector S i · /vector S k + J 2 ∑ 〈 il 〉 /vector S i · /vector S l   . (1)\n\n/vector S i are classical planar unit vectors representing the direction of the total angular momentum of the magnetic ions, whose magnitude √ j ( j +1) ( j = 8 for Holmium ions) is already encompassed within the definition of the interaction constants J 0 , 1 , 2 . As sketched in Fig. 1, the magnetic ions are located on the sites of a body-centered tetragonal (BCT) lattice; the first sum appearing in the Hamiltonian describes the in-plane ( xy ) nearest neighbor (NN) interaction, which is taken ferromagnetic (FM), with exchange strength J 0 > 0; the second sum represents the coupling, of exchange strength J 1 , between spins belonging to nearest neighbor (NN) planes along the z -direction (which we will assume to coincide with the film growth direction); finally, the third sum takes into account the interaction, of exchange strength J 2 , between spins lying on next-nearest neighbor (NNN) planes along z . 
In order to have frustration, giving rise to noncollinear order along z in the bulk, NN interaction J 1 can be taken both ferro- or antiferromagnetic, but NNN coupling J 2 has necessarily to be antiferromagnetic, and the condition | J 2 | > | J 1 | / 4 must be fulfilled. Such simplified Hamiltonian was already employed to simulate helical ordering in bulk systems by Diep 1,17 and Loison 18 . In the bulk limit, the state of minimal energy of a system described by Eq.(1) corresponds to a helical arrangement of spins. The ground state energy per spin is equal to e g ( Q z ) = [ -4 J 0 -2 J 1 (4 cos ( Q z c ' ) + δ cos (2 Q z c ' ))] where c ' is the distance between NN layers, δ = J 2 J 1 , and Q z c ' = arccos ( -1 δ ) is the angle between spins lying on adjacent planes along the z -direction. The observed helical arrangement in bulk holmium corresponds to Q z c ' /similarequal 30 . 5 · 10 : such value can be obtained from the formula above with the set of coupling constants J 0 =67.2K, J 1 =20.9K, and J 2 = -24.2 K, that we have employed in our simulations. The given values for the exchange constants are the same already used by Weschke et al. in Ref. 13 to interpret experimental data on Holmium films on the basis of a J 1 -J 2 model, after a proper scaling by the numbers of NN and NNN on neighboring layers of a BCT lattice.\n\nIn the following we will denote with n the film thickness, i.e. the number of spin layers along the z direction, and with L × L the number of spins in each layer (i.e., L is the lattice size along both the x and y directions). In our simulations thickness values from 1 to 24 were considered, while the range of lateral size L was from 8 to 64. 
Periodic boundary conditions were applied along x and y , while free boundaries were obviously taken along the film growth direction z .\n\nThermal equilibrium was attained by the usual Metropolis algorithm 19 , supplemented by the overrelaxed technique 20 in order to speed-up the sampling of the spin configuration space: a typical 'Monte Carlo step' was composed by four Metropolis and four-five over-relaxed moves per particle. Such judicious mix of moves is able both to get faster the thermal equilibrium and to minimize the correlation 'time' between successive samples, i.e. the undesired effects due to lack of in-", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0510.pdf" - }, - { - "text": "FIG. 1: (colors online) (a) : body-centered tetragonal (BCT) lattice with J 0 in-plane coupling constant, and out-of-plane J 1 , and J 2 competing interactions.\n\n\n\nbe achieved with different number of interacting layers: notably, nearest and next-nearest layers competitive interactions are enough to get a helical structure with a whatever pitch wavevector. Such observation gives us a possible way to solve the conundrum previously emerged, as we have the possibility of varying the range of interactions without modifying the helical pitch, thus decoupling the two relevant length scales along the film growth direction, and making accessible a range of n of the order of, or smaller than, the helical pitch, but still large enough that a substantial number of layers can behave as 'bulk' layers. Therefore, while in the previous papers we have studied the properties of ultrathin magnetic films of Ho assuming a model with six interlayer exchange interactions, here we investigate by MC simulations the properties of the same system by making use of the simplest model Hamiltonian able to describe the onset of a helical magnetic order in Holmium, i.e. we consider only two inter-layer coupling constants, as previously done in Ref. 
11.\n\nThe paper is organized as follows: In Sec. II the model Hamiltonian will be defined, and the MC techniques, and all the thermodynamic quantities relevant for this study, will be introduced. In Sec. III the results obtained for different thicknesses will be presented, both in the matter of the critical properties of the model and of the magnetic ordered structures observed. Finally, in Sec. IV we shall discuss such results, drawing also some conclusions.\n\n## II. MODEL HAMILTONIAN AND MONTE CARLO OBSERVABLES\n\nThe model Hamiltonian we use in our simulations is the minimal one able to describe helimagnetic structures:\n\nH = -  J 0 ∑ 〈 ij 〉 /vector S i · /vector S j + J 1 ∑ 〈 ik 〉 /vector S i · /vector S k + J 2 ∑ 〈 il 〉 /vector S i · /vector S l   . (1)", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0510.pdf" - }, - { - "text": "- /SM590000 Attack vectors", - "page_start": 625, - "page_end": 625, - "source_file": "sg247938.pdf" - }, - { - "text": "space, so no transformation is needed.\n\n - iTargetPage Specifies the page number of the destination page within the document.\n - xtfvTarget Specifies the x-coordinate of the target location on the destination page. The unit of measure for this value is points.\n - ytfvTarget Specifies the y-coordinate of the target location on the destination page. The unit of measure for this value is points.\n - dytfTargetPage The height of the destination page in points. The offset specified by the ytfvTarget member is relative to the upper-left corner of the page. However, some fixed-format types use a coordinate system that is relative to the bottom-left corner of the page. For these types of documents, the page height is required to convert the offset.\n\n## DocExComment\\_ColorInfo\n\nThe DocExComment\\_ColorInfo structure specifies color-state information for the EMF. 
For more information about this structure, see the section Extended Color Support.\n\n```\nC++ struct DocExComment\\_ColorInfo { DWORD ident {}; DWORD iComment {}; COLORREF clr { 0 }; BOOL fForeColor {}; };\n```\n\nThe members of the DocExComment\\_ColorInfo structure are as follows:\n\n - ident Specifies the constant value, msodocexsignature, which identifies this EMF comment as containing semantic information.\n - iComment Specifies the MSODOCEXCOMMENT value, msodocexcommentColorInfo.\n - clr Specifies a color ID that represents a current color state in the EMF.\n - fForeColor Specifies whether the color ID in the clr member represents a foreground color or a background color. If this member has a value of true , the", - "page_start": 17, - "page_end": 17, - "source_file": "office-pdf.pdf" - } - ] - }, - { - "references": { - "source_file": "office-pdf.pdf", - "query": "What are msodocexMetadataComments ?", - "target_page": 35, - "target_passage": "Miscellaneous comments relevant to the document.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "The metadatatype parameter specifies the type of metadata represented by the string. The metadatatype parameter must be one of the following values from the MSODOCEXMETADATA enumeration type.\n\nTable 8. Enumerated values of MSODOCEXMETADATA\n\n\n\nExpand table\n\n| Value | Description |\n|--------------------------|--------------------------------------------------------------------------------------------------------------------------------|\n| msodocexMetadataTitle | The title of the document. |\n| msodocexMetadataAuthor | The author of the document |\n| msodocexMetadataSubject | String that describes the subject matter of the document (for example, business or science). |\n| msodocexMetadataKeywords | Keyword relevant to the document content. |\n| msodocexMetadataCreator | The creator of the document, possibly distinct from the author. 
|\n| msodocexMetadataProducer | The producer of the document, possibly distinct from the author or creator. |\n| msodocexMetadataCategory | String that describes the type of document (for example, memo, article, or book). |\n| msodocexMetadataStatus | Status of the document. This field can reflect where the document is in the publication process (for example, draft or final). |\n| msodocexMetadataComments | Miscellaneous comments relevant to the document. |\n\nFor a given document, each metadata type can have only one string associated with it. So, for example, if the document has multiple keywords, they are passed to the add-in as one concatenated string.\n\nThe pwchValue parameter specifies a Unicode string that contains the metadata itself.\n\nHow the add-in incorporates the text-string metadata into the exported document depends on the implementation details of the export code and the type of fixed-format used in the exported document.\n\n## HrAddDocumentMetadataDate\n\nPublisher calls the HrAddDocumentMetadataDate method to specify document metadata in the form of a FILETIME structure.", - "page_start": 34, - "page_end": 34, - "source_file": "office-pdf.pdf" - }, - { - "text": "## ノ Expand table\n\n| Comment Value | Structure Type |\n|-----------------------------------------|------------------------------------|\n| msodocexcommentExternalHyperlink | DocExComment\\_ExternalHyperlink |\n| msodocexcommentExternalHyperlinkRctfv | DocExComment\\_ExternalHyperlink |\n| msodocexcommentInternalHyperlink | DocExComment\\_InternalHyperlink |\n| msodocexcommentInternalHyperlinkRctfv | DocExComment\\_InternalHyperlink |\n| msodocexcommentColorInfo | DocExComment\\_ColorInfo |\n| msodocexcommentColorMapEnable | DocExComment\\_ColorEnable |\n| msodocexcommentBeginTextRun | DocExComment\\_BeginTextRun |\n| msodocexcommentBeginTextRunRTL | DocExComment\\_BeginTextRun |\n| msodocexcommentEndTextRun | DocExComment\\_EndTextRun |\n| msodocexcommentBeginStructNode | 
DocExComment\\_BeginStructNode |\n| msodocexcommentEndStructNode | DocExComment\\_EndStructNode |\n| msodocexcommentUnicodeForNextTextOut | DocExComment\\_UnicodeForNextTextOut |\n| msodocexcommentUnicodeForNextTextOutRTL | DocExComment\\_UnicodeForNextTextOut |\n| msodocexcommentEPSColor | DocExComment\\_EPSColor |\n| msodocexcommentEPSCMYKJPEG | DocExComment\\_EPSColorCMYKJPEG |\n| msodocexcommentEPSSpotImage | DocExComment\\_EPSColorSpotImage |\n| msodocexcommentEPSStart | DocExComment\\_EPSStart |\n| msodocexcommentPageName | DocExComment\\_PageName |\n| msodocexcommentTransparent | DocExComment\\_Transparent |\n\n## DocExComment\\_ExternalHyperlink(Rctfv)\n\nThe DocExComment\\_ExternalHyperlink(Rctfv) structure describes a hyperlink that links to outside of the document, for example to a Web site on the Internet.", - "page_start": 14, - "page_end": 14, - "source_file": "office-pdf.pdf" - }, - { - "text": "- shapeProperty is for a msodocexStructTypeFigure where the content is a shape, text box, or table cell and contains bit fields from the MSODOCEXSHAPEPROPERTY enumeration.\n- tableAttr is the table cell attributes for a msodocexStructTypeTH or msodocexStructTypeTD.\n- idTableHeader is the unique id for an msodocexStructTypeTH or msodocexStructTypeTD.\n- iTargetParentId is the id of the node to reparent an msodocexStructTypeDiagram to.\n\nTable 3. Enumerated values of MSODOCEXLINEBREAKTYPE\n\nノ Expand table\n\nTable 4. Enumerated values of MSODOCEXLISTTYPE\n\n| Value | Description |\n|-----------------------------|--------------------|\n| msodocexLineBreakTypeNormal | Normal line break. |\n| msodocexLineBreakTypeManual | Manual line break. |\n| msodocexLineBreakTypeEOP | End of paragraph. |\n\n## ノ Expand table\n\nTable 5. Enumerated values of MSODOCEXSHAPEPROPERTY bit fields\n\n| Value | Description |\n|-------------------------------|-------------------------------------|\n| msodocexListTypeNone | No bullets or numbering. 
|\n| msodocexListTypeBulletDisc | Disc-shaped bullets. |\n| msodocexListTypeBulletCircle | Circle-shaped bullets. |\n| msodocexListTypeBulletSquare | Square-shaped bullets. |\n| msodocexListTypeBulletDecimal | Decimal numbering. |\n| msodocexListTypeUpperRoman | Uppercase Roman numeral numbering. |\n| msodocexListTypeLowerRoman | Lowercase Roman numberal numbering. |\n| msodocexListTypeUpperAlpha | Uppercase alphabetic numbering. |\n| msodocexListTypeLowerAlpha | Lowercase alphabetic numbering. |", - "page_start": 9, - "page_end": 9, - "source_file": "office-pdf.pdf" - }, - { - "text": "| Type Value | Description |\n|-----------------------------------|-----------------------------------|\n| msodocexStructTypeTOC | A table of contents. |\n| msodocexStructTypeTOCI | An item in a table of contents. |\n| msodocexStructTypeExtLink | A link to an external resource. |\n| msodocexStructTypeIntLink | A link to an internal resource. |\n| msodocexStructTypeFootnote | A footnote. |\n| msodocexStructTypeEndnote | An endnote. |\n| msodocexStructTypeTextbox | A text box. |\n| msodocexStructTypeHeader | A block of text forming a header. |\n| msodocexStructTypeFooter | A footer. |\n| msodocexStructInlineShape | An inline shape. |\n| msodocexStructAnnotation | An annotation. |\n| msodocexStructTypeSpanBlock | A block of text. |\n| msodocexStructTypeWorkbook | A workbook. |\n| msodocexStructTypeWorksheet | A worksheet. |\n| msodocexStructTypeMacrosheet | A macrosheet. |\n| msodocexStructTypeDialogsheet | A dialogsheet. |\n| msodocexStructTypeSlide | A slide. |\n| msodocexStructTypeChart | A chart. |\n| msodocexStructTypeDiagram | A SmartArt diagram. |\n| msodocexStructTypeBulletText | Buller text. |\n| msodocexStructTypeTextLine | A line of text. |\n| msodocexStructTypeDropCap | A drop cap. |\n| msodocexStructTypeSection | A section. |\n| msodocexStructTypeAnnotationBegin | The beginning of an annotation. |\n| msodocexStructTypeAnnotationEnd | The end of an annotation. 
|", - "page_start": 21, - "page_end": 21, - "source_file": "office-pdf.pdf" - }, - { - "text": "```\nC++ HRESULT HrAddDocumentMetadataDate( MSODOCEXMETADATA metadataType, const FILETIME* pftLocalTime );\n```\n\nThe metadatatype parameter specifies the type of metadata represented by the FILETIME structure. The metadatatype parameter must be one of the following values from the MSODOCEXMETADATA enumeration type.\n\nノ\n\nExpand table\n\nTable 9. Enumerated values of MSODOCEXMETADATA\n\n| Value | Description |\n|------------------------------|------------------------------------------|\n| msodocexMetadataCreationDate | The creation date for the document. |\n| msodocexMetadataModDate | The last-modified date for the document. |\n\nThe pftLocalTime parameter specifies a pointer to a FILETIME structure that contains the date and time information for the metadata. The following code snippet demonstrates how to extract this information from the structure.\n\n```\nC++ SYSTEMTIME st = { 0 }; WCHAR s[100]; FileTimeToSystemTime(pfiletime, &st); swprintf(s, 99, L\" %04d-%02d-%02dT%02d:%02d:%02dZ\", st.wYear % 10000, st.wMonth % 100, st.wDay % 100, st.wHour % 100, st.wMinute % 100, st.wSecond % 100);\n```\n\nHow the add-in incorporates the date and time metadata into the exported document depends on the implementation details of the export code and the type of fixed-format used in the exported document.\n\n## HrFinalize\n\nPublisher calls the HrFinalize method at the end of the document-export process.\n\n```\nC++\n```", - "page_start": 35, - "page_end": 35, - "source_file": "office-pdf.pdf" - }, - { - "text": "```\ntypedef struct \\_MsoDocexStructNode { int idNode; MSODOCEXSTRUCTTYPE nodetype; WCHAR* pwchAltText; union { int iHeadingLevel; ULONG idPara; ULONG idDropCap; int iPage; WCHAR* pwchActualText; MSODOCEXLINEBREAKTYPE bt; int iListLevel; MSODOCEXLISTTYPE listType; ULONG idAtn; long cpLim; int shapeProperty; MsoDocexTableAttr tableAttr; WCHAR* idTableHeader; int 
iTargetParentId; }; } MSODOCEXSTRUCTNODE;\n```\n\nThe idNode member specifies the ID of the node being passed in the call to HrBeginStructNode . This member may not have a value of 0 . A value of -1 indicates that child nodes do not use the idNodeParent parameter to specify this node as their parent. Instead, this node can be a parent only by enclosing child nodes in the EMF. Multiple nodes can have an ID of -1 . If the ID is not -1 , the value is unique across the document.\n\nThe embedded union at the end of the MSODOCEXSTRUCTNODE is interpreted differently depending on the type of node:\n\n- iHeadingLevel is the heading level for an msodocexStructTypeHeading.\n- idPara is the paragraph id for a P, TOCI, or ListBody.\n- idDropCap is the id of an msodocexStructTypeDropCap.\n- iPage is the page number for an msodocexStructTypePage.\n- bt is the line break type for an msodocexStructTypeTextLine.\n- iListLevel is the list level for an msodocexStructTypeList or msodocexStructTypeListItem.\n- listType is the list type for an msodocexStructTypeListItem.\n- idAtn is the id of an msodocexStructTypeAnnotationBegin or msodocexStructTypeAnnotationEnd.\n- cpLim is used to determine the nesting order of tables within tables for an msodocexStructTypeTable, msodocexStructTypeTOC, or msodocexStructTypeListBody.", - "page_start": 8, - "page_end": 8, - "source_file": "office-pdf.pdf" - }, - { - "text": "| Type Value | Description |\n|-----------------------------------------|-------------------------------------------------------------------------|\n| msodocexStructTypeParaRTLAttr | A block of text within an article with right-to-left layout. |\n| msodocexStructTypeTableRTLAttr | A block of text forming a table with right-to-left layout. |\n| msodocexStructTypeHeadingRTLAttr | A heading in the text with right-to-left layout. |\n| msodocexStructTypeListItemRTLAttr | A block of text forming a list item with right-to-left layout. 
|\n| msodocexStructTypeParaUnannotatableAttr | A block of text within an article that is not annotatable. |\n| msodocexStructTypeTHead | The header row area in a table. |\n| msodocexStructTypeTBody | The body area in a table, i.e. the portion between the THead and TFoot. |\n| msodocexStructTypeLabel | A label. |\n| msodocexStructTypeEquation | An equation. |\n| msodocexStructTypeIntLinkNoteRef | A footnote or endnote reference mark link. |\n| msodocexStructTypeTFoot | The footer row area in a table. |\n\nfContentNode Specifies whether a DocExComment\\_EndStructNode structure marks the end of this structure node. If fContentNode is true , a\n\nDocExComment\\_EndStructNode structure closes off the content bounded by the node. If this fContentNode has a false value, then the node does not bound any content.\n\nThe fContentNode member affects the interpretation of the parent ID value of subsequent nodes. If fContentNode is true , nodes that are inserted between this DocExComment\\_BeginStructNode and a subsequent DocExComment\\_EndStructNode , and that have a parent ID of -1 , are children of this node. However, if fContentNode is true , nodes inserted after this DocExComment\\_BeginStructNode , and that have a parent ID of -1 , are not children of this node. They are children of the next-most-recently specified node that has fContentNode equal to false .\n\nYou can nest document structure nodes to arbitrary depth.\n\ncwchAltText Specifies the number of Unicode characters in the block of alternate text that follows the structure. 
This Unicode string specifies alternate text for the node (for example, alternate text for an image).", - "page_start": 22, - "page_end": 22, - "source_file": "office-pdf.pdf" - }, - { - "text": "see the section Extended Color Support.\n\n```\nC++ typedef struct { DWORD ident {}; DWORD iComment {}; BYTE colorInfo[]; } DocExComment\\_EPSColor;\n```\n\nThe members of the DocExComment\\_EPSColor structure are as follows:\n\n - ident Specifies the constant value, msodocexsignature, which identifies this EMF comment as containing semantic information.\n - iComment Specifies the MSODOCEXCOMMENT value, msodocexcommentEPSColor.\n - colorInfo[] Specifies the color information for the EPS file. The add-in should pass this information to Publisher using the IMsoDocExporterSite::SetEPSInfo method.\n\n## DocExComment\\_EPSColorCMYKJPEG\n\nThe DocExComment\\_EPSColorCMYKJPEG structure specifies the start, in the EMF, of a binary object that is a CMYKJPEG file stream. For more information about this structure, see the section Extended Color Support.\n\n```\nC++ typedef struct { DWORD ident {}; DWORD iComment {}; } DocExComment\\_EPSColorCMYKJPEG;\n```\n\nThe members of the DocExComment\\_EPSColorCMYKJPEG structure are as follows:\n\n - ident Specifies the constant value, msodocexsignature, which identifies this EMF comment as containing semantic information.\n - iComment Specifies the MSODOCEXCOMMENT value, msodocexcommentEPSCMYKJPEG;\n\n## DocExComment\\_EPSColorSpotImage", - "page_start": 26, - "page_end": 26, - "source_file": "office-pdf.pdf" - }, - { - "text": "Table 7. Document structure node types\n\n\n\nExpand table\n\n| Type Value | Description |\n|----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| msodocexStructTypePara | A block of text within an article. Its parent node must be an article. 
|\n| msodocexStructTypeFigure | A graphical element (for example, an image or collection of shapes) that has a textual representation. The textual representation is the alternate text used for reading or searching the document. |\n| msodocexStructTypeArticle | A group of nodes forming a single flow of text that should be read or searched as a contiguous block of content. Some documents have a single article and others have multiple articles. |\n| msodocexStructTypeHeading | A heading in the text. |\n| msodocexStructTypeTable | A block of text forming a table. |\n| msodocexStructTypeTR | A block of text forming a single row of a table. |\n| msodocexStructTypeTD | A block of text forming a single cell in a table row. |\n| msodocexStructTypeTH | A block of text forming a single header cell in a table row. |\n| msodocexStructTypeList | A block of text forming a list. |\n| msodocexStructTypeListItem | A block of text forming a list item. |\n| msodocexStructTypeListBody | A block of text forming the body of a list item. |\n| msodocexStructTypeDocument | A document. |\n| msodocexStructTypePage | A page in the document. |", - "page_start": 20, - "page_end": 20, - "source_file": "office-pdf.pdf" - }, - { - "text": "| Value | Numeric Value | Description |\n|------------------------|-----------------|------------------------------------------------|\n| msodocexShape | 0x00000001 | The object is a shape or text box. |\n| msodocexShapeText | 0x00000002 | The object has non-whitespace text. |\n| msodocexShapePath | 0x00000004 | The object has a fill and/or outline. |\n| msodocexShapeAltText | 0x00000008 | The object has Alt Text. |\n| msodocexShapeEquation | 0x00000010 | The object has text that contains an equation. |\n| msodocexShapeTabelCell | 0x00000020 | The object is a cell in a table. 
|\n\n## MsoDocexTableAttr\n\nThe MsoDocexTableAttr structure fits in 32 bits and includes the row and column span and header scope information for a table cell.\n\n```\nC++ struct MsoDocexTableAttr { static constexpr unsigned int MaxSpanBits = sizeof(unsigned int) * 8 / 2 - 1; static constexpr unsigned int MaxSpanValue = (1u << MaxSpanBits) - 1; unsigned int rowSpan : MaxSpanBits; unsigned int fRowScope : 1; unsigned int colSpan : MaxSpanBits; unsigned int fColScope : 1; };\n```\n\nThe members of MsoDocexTableAttr structure are as follows:\n\n - MaxSpanBits Specifies the number of bits available for the rowSpan and colSpan values, which is 15.\n - MaxSpanValue Specifies the maximum value that can be specified for the rowSpan and colSpan.\n - rowSpan Specifies the number of rows that a table cell spans.\n - fRowScope Specifies whether the header is Row/Both or Column.\n - colSpan Specifies the number of columns that a table cell spans.", - "page_start": 10, - "page_end": 10, - "source_file": "office-pdf.pdf" - } - ] - }, - { - "references": { - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf", - "query": "What are the total operating expenses of Wikimedia foundation in 2024 ?", - "target_page": 6, - "target_passage": "178,471,109", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\nThe Foundation also receives donations on behalf of the Wikimedia Endowment as well as transfers additional Foundation donations to the Endowment monthly. Donations that are donor-specified for the Wikimedia Endowment are not recognized as revenue to the Foundation, whereas donations that are not donor-specified for the Wikimedia Endowment are recognized both as contributions revenue and awards and grants expense to the Foundation. 
The Foundation transferred $10,706,812 donor-designated gifts and $624,137 Foundation gifts to the Wikimedia Endowment during the year ended June 30, 2024. As of June 30, 2024, the Foundation owed the Wikimedia Endowment $525,607 for donations to be transferred to the Wikimedia Endowment for the month of June 2024.\n\nDuring the fiscal year ended June 30, 2024, the Wikimedia Endowment also provided the Foundation with grants of $1,500,000 for MediaWiki improvements, $600,000 for the Abstract Wikipedia project, and $500,000 for exploring strategies for expanding beyond the Foundation's existing audiences of consumers and contributors. The grants are recorded as contributions with donor restrictions and within net assets with donor restrictions as of June 30, 2024.\n\n## (11) Contingencies and Commitments\n\nIn the normal course of business, the Foundation receives various threats of litigation. In the opinion of management, the outcome of the pending lawsuits will not materially affect operations or the financial position of the Foundation.\n\n## (12) Subsequent Events\n\nThe Foundation has evaluated its subsequent events through October 8, 2024, the date at which the consolidated financial statements were available to be issued, and determined there are no items to disclose.", - "page_start": 19, - "page_end": 19, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## Consolidated Statements of Cash Flows\n\nYears ended June 30, 2024 and 2023", - "page_start": 6, - "page_end": 6, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "\n\n## WIKIMEDIA FOUNDATION, INC.\n\nConsolidated Financial Statements\n\nJune 30, 2024 and 2023\n\n(With Independent Auditors' Report Thereon)", - "page_start": 0, - "page_end": 0, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## 
Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\n## For example (unaudited):\n\n - · Wikipedia and the other projects operated by the Foundation receive more than 19.4 billion pageviews per month, making them one of the most popular Web properties worldwide. Wikipedia is available in more than 332 languages and contains more than 63 million articles contributed by a global volunteer community.\n - · For the year ended June 30, 2024, the educational content of the Foundation's largest project, Wikipedia, grew by approximately 1.9 million articles to approximately 63.4 million articles.\n - · For the year ended June 30, 2024, volunteers added approximately 12.2 million images, movies, and sound files to the Foundation's multimedia repository, making the total 106.7 million files.\n - · Volunteers also contribute in several ways to the Foundation's wiki software: volunteer software developers add new functionality to the code base, and volunteer language specialists add to the code base by translating the wiki interface into different languages. During the year ended June 30, 2024, there were 47,773 commits merged, through the efforts of approximately 511 authors/contributors, of which 8,161 commits were through the efforts of approximately 244 volunteers.\n\n## (7) Operating Leases\n\nOur operating lease relates to the Foundation's headquarters in San Francisco and has a non-cancelable remaining term of 3 months as of June 30, 2024. The discount rate is 2.9%, the risk-free rate based on daily U.S. Treasury with a term comparable to the lease term. The lease provides the Foundation the option to extend the lease term for one additional period of five years. The Foundation determined during the year ended June 30, 2024 not to renew the lease. 
Operating lease expense was $1,859,383 and $1,489,134 for the year ended June 30, 2024 and 2023, respectively.\n\nUndiscounted lease payments as of June 30, 2024 were as follows:\n\n| | Lease payments |\n|------------------------------|------------------|\n| Year ending June 30: | |\n| 2025 | 419,791 |\n| Total minimum lease payments | $ 419,791 |\n\n## (8) Retirement Plan\n\nThe Foundation offers a 401(k) plan (the Plan) to all of its employees residing in the United States. Employees are eligible to participate in the Plan upon employment. The Foundation matches employee contributions on a dollar-for-dollar basis up to 4% of the employee's compensation. The Foundation contributed $1,859,839 and $1,859,012 to the Plan for the years ended June 30, 2024 and 2023, respectively.", - "page_start": 17, - "page_end": 17, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\nNotes to Consolidated Financial Statements June 30, 2024 and 2023\n\n## (1) Organization and Summary of Significant Accounting Policies\n\n## (a) Organization and Purpose\n\nThe accompanying consolidated financial statements present the financial position, change in net assets and cash flows of the Wikimedia Foundation, Inc. (the Foundation) and Wikimedia, LLC.\n\nThe Foundation is the nonprofit organization that operates Wikipedia, a free online encyclopedia. Based in San Francisco, California, the Foundation is a 501(c)(3) charity that is funded primarily through donations and contributions.\n\nThe Foundation also operates Wikimedia, LLC, a Delaware Limited Liability Company, with the Foundation as its Sole Member. 
The Wikimedia, LLC is organized and operated exclusively for charitable and educational purposes within the meaning of section 501(c)(3) of the Internal Revenue Code and is a disregarded entity for tax purposes.\n\n## (b) Risks and Uncertainties\n\nThe Foundation's operations are funded primarily by public donations from individuals as well as gifts from foundations and corporations. External factors such as global geopolitics, recession, and currency markets may impact our ability to raise funds. As of the date of this report, the Foundation has not experienced an adverse impact on its business operations.\n\n## (c) Income Taxes\n\nThe Foundation is exempt from federal income tax under Section 501(c)(3) of the Internal Revenue Code and from state income tax under Chapter 220.13 of the Florida Statutes and Sections 23701d of Revenue and Taxation Code of the State of California. The Internal Revenue Service has determined that the Foundation is not a private foundation and contributions to it qualify as charitable contributions.\n\nThe Foundation has evaluated the financial statement impact of positions taken or expected to be taken in its tax returns. The Foundation is subject to income taxes on any net income that is derived from a trade or business, regularly carried on, and not in furtherance of the purposes for which it was granted exemption. 
Net income from any unrelated trade or business, in the opinion of management, is not material to the consolidated financial statements taken as a whole.\n\n## (d) Financial Statement Presentation\n\nNet assets, support and revenue, expenses, gains, and losses are classified based on the existence or absence of donor-imposed restrictions in accordance with Accounting Standards Codification (ASC) Topic 958, Not-for-Profit Entities .\n\nNet assets without donor restrictions represent unrestricted resources available to support operations and also include previously temporarily restricted resources, which have become available for use by the Foundation in accordance with the intentions of donors.\n\nNet assets with donor restrictions represent contributions that are limited in use by the Foundation in accordance with donor-imposed stipulations. The stipulations may expire with time or may be satisfied and removed by the actions of the Foundation according to the terms of the contribution by the donor.", - "page_start": 7, - "page_end": 7, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\n## (9) Liquidity and Availability of Financial Assets\n\nThe Foundation's financial assets available for general expenditure within one year of the balance sheet date, June 30, 2024 and 2023, are as follows:\n\n| | 2024 | 2023 |\n|----------------------------------------------------------------------------------------|---------------|-------------|\n| Cash and cash equivalents | $ 82,845,159 | 75,808,401 |\n| Current contributions receivable | 856,657 | - |\n| Short-term investments | 116,074,763 | 132,216,667 |\n| Total financial assets | 199,776,579 | 208,025,068 |\n| Less: | | |\n| Restricted by donors for programs | 5,696,323 | 5,882,673 |\n| Donations payable to Wikimedia Endowment | 525,607 | 5,274,448 |\n| Financial assets 
available to meet cash needs for general expenditures within one year | $ 193,554,649 | 196,867,947 |\n\nThe Foundation's liquidity management includes a policy of structuring its financial assets to be available to meet its general expenditures, liabilities, grant-making, and other obligations as they come due. Cash and cash equivalents as reported on the consolidated balance sheet at June 30, 2024 and 2023, are the primary liquid resources used by the Foundation to meet these obligations. Financial assets invested in the short-term and long-term investments can be liquidated at any time as needed.\n\n## (10) Related Party Transactions\n\nThe Wikimedia Endowment began operations as a standalone tax-exempt 501(c)(3) organization on September 30, 2023, with the mission to act as a permanent fund that can support in perpetuity the operations and activities of current and future Wikimedia projects, which are projects that are approved by and advance the purposes of the Foundation or its successor if the Foundation ceases to exist. The Foundation does not have control or controlling financial interest in the Wikimedia Endowment and the Wikimedia Endowment has a separate Board of Directors, but the Wikimedia Endowment is considered a related party to the Foundation because Wikimedia Endowment management is also management at the Foundation.\n\nDuring the fiscal year ended June 30, 2024, the Foundation recognized revenue of $2,063,195 related to services provided to the Wikimedia Endowment, primarily for fundraising and general and administrative support under the terms of a cost sharing agreement. These costs are included within the Foundation ' s expenses based on the nature of the cost. The revenue from the Wikimedia Endowment reimbursing the costs is recorded within other income, net.", - "page_start": 18, - "page_end": 18, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "\n\nThe first CC License was created in 2002. 
Today, we boast six CC Licenses and two public domain tools, setting a global standard for sharing.\n\n## We've estimated that over 2.5 billion pieces of content were CC Licensed by the end of 2023.\n\n\n\n\n\n\"The great growling engine of change - technology. Alvin Toffler\" by katerha is licensed under CC BY 2.0.\n\nOur legal and technology staff continued to make key infrastructure updates and manage daily maintenance to ensure these Licenses work for everyone.\n\n## In 2023, we launched the Open Infrastructure Circle (OIC) to ensure consistent funding for this work.\n\nWe're grateful to the early supporters of the OIC, including the William + Flora Hewlett Foundation, Bill & Melinda Gates Foundation, Filecoin Foundation for the Decentralized Web, Robert Wood Johnson Foundation, Chan Zuckerberg Initiative, Endless, Siegel Family Endowment, Flickr, Microsoft, and Paul and Iris Brest.\n\n", - "page_start": 3, - "page_end": 3, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\nGifts of cash and other assets are reported as contributions with donor restrictions if they are received with donor stipulations that limit the use of the donated assets or are restricted as to time. 
When a donor restriction expires, that is, when a stipulated time restriction ends or purpose restriction is accomplished, net assets with donor restrictions are reclassified to net assets without donor restrictions and reported in the consolidated statement of activities as net assets released from restrictions.\n\n## (l) Contributions of Nonfinancial Assets and Services\n\nContributions of nonfinancial assets and services include contributed services, as described below.\n\nContributed services are reported at fair value in the consolidated financial statements for voluntary donations of services when those services (1) create or enhance nonfinancial assets, (2) require specialized skills provided by individuals possessing those skills and are services that would be typically purchased if not provided by the donation, and (3) are professional in nature, and have been explicitly agreed to in advance. Contributed services are reported as contributions of nonfinancial assets and services revenue and in-kind service expenses in the consolidated statements of activities. Fair value is estimated based on current local rates for similar services.\n\nA substantial number of volunteers make significant contributions of their time in the furtherance of the Foundation's projects. The value of this contributed time is not reflected in the accompanying consolidated financial statements, as the criteria above are not met.\n\nContributed service revenue and expenses recorded in the consolidated statements of activities consist of contributed legal services, engineering services, subscription services, and internet hosting services and bandwidth. The amounts of specialized contributed legal services as revenue and expenses are $82,638 and $493,315 for the years ended June 30, 2024 and 2023, respectively. The value of specialized engineering services as revenue and expenses are $0 and $498,800 for the years ended June 30, 2024 and 2023, respectively. 
The value of donated subscription services as revenue and expenses was $124,738 and $0 for the years ended June 30, 2024 and 2023, respectively. The amounts of contributed internet hosting services and bandwidth for the years ended June 30, 2024 and 2023 is $56,100 and $48,338, respectively. Included in the 2024 and 2023 amounts are donated hosting services and bandwidth from the following companies: (1) FiberRing, (2) Tele2, (3) Datahop, (4) LibertyGlobal, (5) Init7, and (6) Arelion.\n\n## (m) Revenue Recognition - Contracts With Customers\n\nThe Foundation recognizes revenue from contracts with customers related to Wikimedia, LLC under Accounting Standards Codification Topic 606, Revenue from Contracts with Customers, which establishes a principle that revenue is recognized upon transfer of control of promised products and services to customers in an amount that reflects the consideration the Foundation expects to receive in exchange for those products or services.\n\nThe Foundation determines the amount of revenue to be recognized through the application of the following 5-step process: 1) identification of the contract, or contracts, with a customer; 2) identification of the performance obligations in the contract; 3) determination of the transaction price; 4) allocation of the transaction price to the performance obligations in the contract; and 5) recognition of revenue when or as the Foundation satisfies the performance obligations.", - "page_start": 10, - "page_end": 10, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\nand free to everyone in the world, the Foundation's cost related to this collaborative arrangement is included within awards and grants in the statement of activities. 
The amount included within awards and grants was $6.1 million and $4.1 million for the years ended June 30, 2024 and 2023, respectively.\n\n## (p) Use of Estimates\n\nThe preparation of financial statements in conformity with U.S. generally accepted accounting principles requires management to make estimates and assumptions that affect the amounts reported in the consolidated financial statements and accompanying notes. Items subject to such estimates and assumptions include the investment valuations, useful lives of fixed assets, and the valuation of contributed services. Accordingly, actual results could differ from those estimates.\n\n## (q) Reclassifications\n\nCertain reclassifications have been made in the financial statements to conform 2023 information to the 2024 presentation. The Foundation had a change in accounting policy to present unrealized gains and losses on investments separately from investment income, net. This resulted in a reclassification of $3,547,510 from investment income, net to unrealized gains on investments within the statement of activities. The Foundation also had a change in accounting policy to no longer present the Wikimania event as special event expense, net in the statement of activities. Revenue from registration sales is now reported within other income, net, and expenses are reported within travel and conference expenses. 
This resulted in a reclassification of $698,141 from special event expenses to travel and conference expenses in the statement of activities.\n\n## (2) Contributions Receivable\n\nAs of June 30, 2024 and 2023, contributions receivable is $1,571,657 and $0, respectively, and represents contributions receivable from two grants, as well as contributions receivable from payment processors.", - "page_start": 12, - "page_end": 12, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "\n\n\n\nOver 300 attendees from 45 countries joined us this past October in Mexico City for the first in-person CC Global Summit since 2019. The theme was AI & the Commons with over 60 sessions and 180 speakers. Learn more here.\n\nThank you to our sponsors: John D. and Catherine T. MacArthur Foundation, Microsoft Corporation, Filecoin Foundation for the Decentralized Web, Akin, Anthropic, Mozilla Foundation, The Michelson 20MM Foundation, MHz Curationist, Frontiers Media, Arnold & Porter, and Crowell & Moring.", - "page_start": 5, - "page_end": 5, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - } - ] - }, - { - "references": { - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf", - "query": "What external events can affect Wikimedia Fundation in raising funds ?", - "target_page": 8, - "target_passage": "External factors such as global geopolitics, recession, and currency markets may impact our ability to raise funds.", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\nThe Foundation also receives donations on behalf of the Wikimedia Endowment as well as transfers additional Foundation donations to the Endowment monthly. 
Donations that are donor-specified for the Wikimedia Endowment are not recognized as revenue to the Foundation, whereas donations that are not donor-specified for the Wikimedia Endowment are recognized both as contributions revenue and awards and grants expense to the Foundation. The Foundation transferred $10,706,812 donor-designated gifts and $624,137 Foundation gifts to the Wikimedia Endowment during the year ended June 30, 2024. As of June 30, 2024, the Foundation owed the Wikimedia Endowment $525,607 for donations to be transferred to the Wikimedia Endowment for the month of June 2024.\n\nDuring the fiscal year ended June 30, 2024, the Wikimedia Endowment also provided the Foundation with grants of $1,500,000 for MediaWiki improvements, $600,000 for the Abstract Wikipedia project, and $500,000 for exploring strategies for expanding beyond the Foundation's existing audiences of consumers and contributors. The grants are recorded as contributions with donor restrictions and within net assets with donor restrictions as of June 30, 2024.\n\n## (11) Contingencies and Commitments\n\nIn the normal course of business, the Foundation receives various threats of litigation. 
In the opinion of management, the outcome of the pending lawsuits will not materially affect operations or the financial position of the Foundation.\n\n## (12) Subsequent Events\n\nThe Foundation has evaluated its subsequent events through October 8, 2024, the date at which the consolidated financial statements were available to be issued, and determined there are no items to disclose.", - "page_start": 19, - "page_end": 19, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\nNotes to Consolidated Financial Statements June 30, 2024 and 2023\n\n## (1) Organization and Summary of Significant Accounting Policies\n\n## (a) Organization and Purpose\n\nThe accompanying consolidated financial statements present the financial position, change in net assets and cash flows of the Wikimedia Foundation, Inc. (the Foundation) and Wikimedia, LLC.\n\nThe Foundation is the nonprofit organization that operates Wikipedia, a free online encyclopedia. Based in San Francisco, California, the Foundation is a 501(c)(3) charity that is funded primarily through donations and contributions.\n\nThe Foundation also operates Wikimedia, LLC, a Delaware Limited Liability Company, with the Foundation as its Sole Member. The Wikimedia, LLC is organized and operated exclusively for charitable and educational purposes within the meaning of section 501(c)(3) of the Internal Revenue Code and is a disregarded entity for tax purposes.\n\n## (b) Risks and Uncertainties\n\nThe Foundation's operations are funded primarily by public donations from individuals as well as gifts from foundations and corporations. External factors such as global geopolitics, recession, and currency markets may impact our ability to raise funds. 
As of the date of this report, the Foundation has not experienced an adverse impact on its business operations.\n\n## (c) Income Taxes\n\nThe Foundation is exempt from federal income tax under Section 501(c)(3) of the Internal Revenue Code and from state income tax under Chapter 220.13 of the Florida Statutes and Sections 23701d of Revenue and Taxation Code of the State of California. The Internal Revenue Service has determined that the Foundation is not a private foundation and contributions to it qualify as charitable contributions.\n\nThe Foundation has evaluated the financial statement impact of positions taken or expected to be taken in its tax returns. The Foundation is subject to income taxes on any net income that is derived from a trade or business, regularly carried on, and not in furtherance of the purposes for which it was granted exemption. Net income from any unrelated trade or business, in the opinion of management, is not material to the consolidated financial statements taken as a whole.\n\n## (d) Financial Statement Presentation\n\nNet assets, support and revenue, expenses, gains, and losses are classified based on the existence or absence of donor-imposed restrictions in accordance with Accounting Standards Codification (ASC) Topic 958, Not-for-Profit Entities .\n\nNet assets without donor restrictions represent unrestricted resources available to support operations and also include previously temporarily restricted resources, which have become available for use by the Foundation in accordance with the intentions of donors.\n\nNet assets with donor restrictions represent contributions that are limited in use by the Foundation in accordance with donor-imposed stipulations. 
The stipulations may expire with time or may be satisfied and removed by the actions of the Foundation according to the terms of the contribution by the donor.", - "page_start": 7, - "page_end": 7, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\n## (9) Liquidity and Availability of Financial Assets\n\nThe Foundation's financial assets available for general expenditure within one year of the balance sheet date, June 30, 2024 and 2023, are as follows:\n\n| | 2024 | 2023 |\n|----------------------------------------------------------------------------------------|---------------|-------------|\n| Cash and cash equivalents | $ 82,845,159 | 75,808,401 |\n| Current contributions receivable | 856,657 | - |\n| Short-term investments | 116,074,763 | 132,216,667 |\n| Total financial assets | 199,776,579 | 208,025,068 |\n| Less: | | |\n| Restricted by donors for programs | 5,696,323 | 5,882,673 |\n| Donations payable to Wikimedia Endowment | 525,607 | 5,274,448 |\n| Financial assets available to meet cash needs for general expenditures within one year | $ 193,554,649 | 196,867,947 |\n\nThe Foundation's liquidity management includes a policy of structuring its financial assets to be available to meet its general expenditures, liabilities, grant-making, and other obligations as they come due. Cash and cash equivalents as reported on the consolidated balance sheet at June 30, 2024 and 2023, are the primary liquid resources used by the Foundation to meet these obligations. 
Financial assets invested in the short-term and long-term investments can be liquidated at any time as needed.\n\n## (10) Related Party Transactions\n\nThe Wikimedia Endowment began operations as a standalone tax-exempt 501(c)(3) organization on September 30, 2023, with the mission to act as a permanent fund that can support in perpetuity the operations and activities of current and future Wikimedia projects, which are projects that are approved by and advance the purposes of the Foundation or its successor if the Foundation ceases to exist. The Foundation does not have control or controlling financial interest in the Wikimedia Endowment and the Wikimedia Endowment has a separate Board of Directors, but the Wikimedia Endowment is considered a related party to the Foundation because Wikimedia Endowment management is also management at the Foundation.\n\nDuring the fiscal year ended June 30, 2024, the Foundation recognized revenue of $2,063,195 related to services provided to the Wikimedia Endowment, primarily for fundraising and general and administrative support under the terms of a cost sharing agreement. These costs are included within the Foundation ' s expenses based on the nature of the cost. 
The revenue from the Wikimedia Endowment reimbursing the costs is recorded within other income, net.", - "page_start": 18, - "page_end": 18, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "\n\n## WIKIMEDIA FOUNDATION, INC.\n\nConsolidated Financial Statements\n\nJune 30, 2024 and 2023\n\n(With Independent Auditors' Report Thereon)", - "page_start": 0, - "page_end": 0, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\nOnce such stipulations are satisfied, the associated net assets are released from net assets with donor restrictions and recognized as net assets without donor restrictions.\n\nContributions received are recorded as net assets without donor restriction or net assets with donor restrictions depending on the existence and/or nature of any donor restrictions.\n\n## (e) Cash and Cash Equivalents\n\nThe Foundation manages its cash through major financial institutions. At June 30, 2024 and 2023, the carrying amount of the Foundation's general ledger cash held primarily in nationally recognized financial institutions is $60.0 million and $63.9 million, respectively. Cash balances are insured by the Federal Deposit Insurance Corporation (FDIC) up to the applicable limits. Cash balances held in these financial institutions at June 30, 2024 and 2023 exceed the applicable FDIC insurance limits. The Foundation's current practice is to maintain at least four months of cash and cash equivalents to support a combination of operating cash and a current reserve fund. The Foundation considers all highly liquid investments with an original maturity of three months or less when purchased to be cash equivalents. 
Cash equivalents of $22.8 million and $12.0 million as of June 30, 2024 and 2023, respectively, are considered Level 1 under ASC Topic 820, Fair Value Measurement .\n\n## (f) Restricted Cash\n\nRestricted cash includes standby letters of credit for (1) the Foundation's headquarters office lease and (2) one of the Foundation's Employer of Record responsible for administering compensation and benefits for non-US personnel. As of June 30, 2024, neither letter of credit has been used.\n\n## (g) Contributions Receivable\n\nContributions receivable represent gift amounts due from various entities, which are occasionally directed at specific activities. Contributions receivable due more than one year from the contribution date are discounted to present value using a fair value rate based on the U.S. Treasury bond rate and reflect the risks inherent in these cash flows. Contributions receivable are subject to review and adjustment by management should amounts be deemed uncollectible.\n\n## (h) Investments\n\nThe Foundation's policy regarding investments is to invest cash in short-term, intermediate-term, and long-term fixed income, and equity instruments without assuming material undue risk to principal. Preservation of principal and maintenance of liquidity are priorities over yield. Investments are reported at fair value with realized and unrealized gains and losses, and accrued interest included as a component of the change in net assets. Additionally, the Foundation holds no shares of donated stock as of June 30, 2024 or 2023, consistent with its policy to sell stock received through donations as soon as possible.\n\nThe Foundation presents its investment portfolios as short-term and long-term based on expectations of the holding period of the investment in line with the investment guidelines stipulated in the investment policy.\n\nASC Topic 820 establishes a fair value hierarchy that prioritizes observable inputs to valuation techniques used to measure fair value. 
The hierarchy gives the highest priority to unadjusted quoted", - "page_start": 8, - "page_end": 8, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\nand free to everyone in the world, the Foundation's cost related to this collaborative arrangement is included within awards and grants in the statement of activities. The amount included within awards and grants was $6.1 million and $4.1 million for the years ended June 30, 2024 and 2023, respectively.\n\n## (p) Use of Estimates\n\nThe preparation of financial statements in conformity with U.S. generally accepted accounting principles requires management to make estimates and assumptions that affect the amounts reported in the consolidated financial statements and accompanying notes. Items subject to such estimates and assumptions include the investment valuations, useful lives of fixed assets, and the valuation of contributed services. Accordingly, actual results could differ from those estimates.\n\n## (q) Reclassifications\n\nCertain reclassifications have been made in the financial statements to conform 2023 information to the 2024 presentation. The Foundation had a change in accounting policy to present unrealized gains and losses on investments separately from investment income, net. This resulted in a reclassification of $3,547,510 from investment income, net to unrealized gains on investments within the statement of activities. The Foundation also had a change in accounting policy to no longer present the Wikimania event as special event expense, net in the statement of activities. Revenue from registration sales is now reported within other income, net, and expenses are reported within travel and conference expenses. 
This resulted in a reclassification of $698,141 from special event expenses to travel and conference expenses in the statement of activities.\n\n## (2) Contributions Receivable\n\nAs of June 30, 2024 and 2023, contributions receivable is $1,571,657 and $0, respectively, and represents contributions receivable from two grants, as well as contributions receivable from payment processors.", - "page_start": 12, - "page_end": 12, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "\n\n'You 'You · You Toshi' website You Toshi' website (SMBC Friend Securities) (SMBC Friend Securities)\n\n\n\nGenki Fund is used to help customers develop Genki Fund is used to help customers develop a wide range of businesses. The bank will a wide range of businesses. The bank will continue to play its role as a regional financial continue to play its role as a regional financial institution dedicated to supporting customers institution dedicated to supporting customers that show originality and energy in helping that show originality and energy in helping develop the Kansai economy. develop the Kansai economy.\n\n\n\nKansai Genki Fund Kansai Genki Fund\n\n\n\n\n\n¥50 billion ¥50 billion\n\nSupport for companies creating Support for companies creating growth platforms in fields such growth platforms in fields such as medical and nursing care, as medical and nursing care, environmental and energy tech environmental and energy technologies, and businesses in Asia nologies, and businesses in Asia\n\n## Promoting environmentally-aware management and supporting business ventures in China through the fund\n\nIn the past, SMBC also worked to provide In the past, SMBC also worked to provide funding to the support creation of platforms funding to the support creation of platforms for growth in Japan. Now, working through for growth in Japan. 
Now, working through the Bank of Japan's 'Fund-supply measure the Bank of Japan's 'Fund-supply measure to support strengthening the foundations to support strengthening the foundations for economic growth' loan program, it for economic growth' loan program, it has established the Environmentally has established the Environmentally Responsible Company Support Fund and Responsible Company Support Fund and the Environmental Facilities Support t he Environmental Facilities Support Fund, in support of companies with Fund, in support of companies with environmentally-conscious managements, environmentally-conscious managements, and which invest in environmental facilities. and which invest in environmental facilities. Given the wave of Japanese companies Given the wave of Japanese companies setting up operations in China setting up operations in China's fast-growings fast-growing market, the bank has also established a China market, the bank has also established a China Business Support Fund to meet the funding Business Support Fund to meet the funding needs of companies that plan to make new needs of companies that plan to make new investments in subsidiaries in China. 
investments in subsidiaries in China.\n\nName\n\n\n\nEnvironmentally Responsible Environmentally Responsible Company Support Fund Company Support Fund Environmental Facilities Environmental Facilities Support Fund Support Fund\n\nSize of fund\n\nOutline\n\nName\n\n¥50 billion ¥50 billion\n\nThe fund supports companies The fund supports companies with environmentally-aware with environmentally-aware managements or involvement in managements or involvement in environmental businesses environmental businesses\n\nChina Business Support Fund China Business Support Fund\n\n\n\n\n\n¥50 billion ¥50 billion\n\nThe fund supports for companies The fund supports for companies considering moving into China, or considering moving into China, or expanding their business there expanding their business there\n\n## Financial education through teaching of investment skills\n\nSMBC Friend Securities runs an online SMBC Friend Securities runs an online education program, 'You education program, 'You · You Toshi' You Toshi' (Self-composed Investment), for (Self-composed Investment), for inexperiinexperienced investors. enced investors.\n\nThe service is free and includes a training The service is free and includes a training program that can be used as a tool for program that can be used as a tool for lifelong study of investment skills. 
lifelong study of investment skills.\n\n", - "page_start": 8, - "page_end": 8, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## Consolidated Statements of Cash Flows\n\nYears ended June 30, 2024 and 2023", - "page_start": 6, - "page_end": 6, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\nprices in active markets for identical assets or liabilities (Level 1 measurements) and the lowest priority to measurements involving significant unobservable inputs (Level 3 measurements).\n\nThe three levels of the fair value hierarchy are as follows:\n\n - · Level 1 inputs are quoted prices (unadjusted) in active markets for identical investments that the Foundation has the ability to access at the measurement date. The Foundation's Level 1 assets are investments in marketable securities, including stocks and mutual funds.\n - · Level 2 inputs are inputs other than quoted prices included in Level 1 that are observable for the investment, either directly or indirectly. The Foundation's Level 2 assets are investments in corporate bonds, mortgage-backed securities, and U.S. Treasury securities.\n - · Level 3 inputs are unobservable inputs from investments. Level 3 inputs incorporate assumptions about the factors that market participants would use in pricing the instrument.\n\n## (i) Property and Equipment, Net\n\nExpenditures for property and equipment with useful lives of one year or more are capitalized and recorded at cost. Depreciation is calculated on a straight-line basis over the estimated useful lives of the assets. The estimated useful life of furniture and data center equipment is five years and computer equipment such as laptops and desktops is four years. Leasehold improvements are amortized over the shorter of the life of the lease or the leasehold improvement. 
Donated computer equipment and software are recorded at the fair value at the time of the donation and are deemed as contributions without donor restriction in the year in which they are received. Repairs and maintenance of equipment are charged to operations. Upon retirement, sale, or other disposition of property and equipment, costs, and accumulated depreciation are eliminated from the accounts, and any resulting gain or loss is included in operations.\n\nThe Foundation incurs software development costs related to internal use software. Qualifying costs incurred during the application development stage are capitalized. These costs primarily consist of internal labor and third-party development costs and are amortized using the straight-line method over the estimated useful life of the software, which is generally three years. These assets are reviewed for impairment whenever events or changes in circumstances occur that could impact their recoverability. External use software is expensed as incurred since there is generally no passage of time between achievement of technological feasibility and the availability for general release.\n\n## (j) Other Operating Expenses\n\nOther operating expenses primarily include facility expenses, staff related expenses, insurance and personal property tax expenses, and other general administrative expenses.\n\n## (k) Contributions of Cash and Other Financial Assets\n\nUnconditional promises to give are recognized as revenue when the underlying promises are received by the Foundation. Contributions that are conditional are not recorded until the condition is substantially met. 
Conditional contributions must include both (1) one or more barriers that need to be overcome before the Foundation is entitled to the contribution, and (2) a right of return or a right of release from the donor's obligation to provide the contribution.", - "page_start": 9, - "page_end": 9, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\nGifts of cash and other assets are reported as contributions with donor restrictions if they are received with donor stipulations that limit the use of the donated assets or are restricted as to time. When a donor restriction expires, that is, when a stipulated time restriction ends or purpose restriction is accomplished, net assets with donor restrictions are reclassified to net assets without donor restrictions and reported in the consolidated statement of activities as net assets released from restrictions.\n\n## (l) Contributions of Nonfinancial Assets and Services\n\nContributions of nonfinancial assets and services include contributed services, as described below.\n\nContributed services are reported at fair value in the consolidated financial statements for voluntary donations of services when those services (1) create or enhance nonfinancial assets, (2) require specialized skills provided by individuals possessing those skills and are services that would be typically purchased if not provided by the donation, and (3) are professional in nature, and have been explicitly agreed to in advance. Contributed services are reported as contributions of nonfinancial assets and services revenue and in-kind service expenses in the consolidated statements of activities. Fair value is estimated based on current local rates for similar services.\n\nA substantial number of volunteers make significant contributions of their time in the furtherance of the Foundation's projects. 
The value of this contributed time is not reflected in the accompanying consolidated financial statements, as the criteria above are not met.\n\nContributed service revenue and expenses recorded in the consolidated statements of activities consist of contributed legal services, engineering services, subscription services, and internet hosting services and bandwidth. The amounts of specialized contributed legal services as revenue and expenses are $82,638 and $493,315 for the years ended June 30, 2024 and 2023, respectively. The value of specialized engineering services as revenue and expenses are $0 and $498,800 for the years ended June 30, 2024 and 2023, respectively. The value of donated subscription services as revenue and expenses was $124,738 and $0 for the years ended June 30, 2024 and 2023, respectively. The amounts of contributed internet hosting services and bandwidth for the years ended June 30, 2024 and 2023 is $56,100 and $48,338, respectively. Included in the 2024 and 2023 amounts are donated hosting services and bandwidth from the following companies: (1) FiberRing, (2) Tele2, (3) Datahop, (4) LibertyGlobal, (5) Init7, and (6) Arelion.\n\n## (m) Revenue Recognition - Contracts With Customers\n\nThe Foundation recognizes revenue from contracts with customers related to Wikimedia, LLC under Accounting Standards Codification Topic 606, Revenue from Contracts with Customers, which establishes a principle that revenue is recognized upon transfer of control of promised products and services to customers in an amount that reflects the consideration the Foundation expects to receive in exchange for those products or services.\n\nThe Foundation determines the amount of revenue to be recognized through the application of the following 5-step process: 1) identification of the contract, or contracts, with a customer; 2) identification of the performance obligations in the contract; 3) determination of the transaction price; 4) allocation of the transaction price 
to the performance obligations in the contract; and 5) recognition of revenue when or as the Foundation satisfies the performance obligations.", - "page_start": 10, - "page_end": 10, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - } - ] - }, - { - "references": { - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf", - "query": "What include Wikimedia Fundation restricted cash ?", - "target_page": 9, - "target_passage": "Restricted cash includes standby letters of credit for (1) the Foundation’s headquarters office lease and (2) one of the Foundation’s Employer of Record responsible for administering compensation and benefits for non-US personnel.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\nOnce such stipulations are satisfied, the associated net assets are released from net assets with donor restrictions and recognized as net assets without donor restrictions.\n\nContributions received are recorded as net assets without donor restriction or net assets with donor restrictions depending on the existence and/or nature of any donor restrictions.\n\n## (e) Cash and Cash Equivalents\n\nThe Foundation manages its cash through major financial institutions. At June 30, 2024 and 2023, the carrying amount of the Foundation's general ledger cash held primarily in nationally recognized financial institutions is $60.0 million and $63.9 million, respectively. Cash balances are insured by the Federal Deposit Insurance Corporation (FDIC) up to the applicable limits. Cash balances held in these financial institutions at June 30, 2024 and 2023 exceed the applicable FDIC insurance limits. The Foundation's current practice is to maintain at least four months of cash and cash equivalents to support a combination of operating cash and a current reserve fund. 
The Foundation considers all highly liquid investments with an original maturity of three months or less when purchased to be cash equivalents. Cash equivalents of $22.8 million and $12.0 million as of June 30, 2024 and 2023, respectively, are considered Level 1 under ASC Topic 820, Fair Value Measurement .\n\n## (f) Restricted Cash\n\nRestricted cash includes standby letters of credit for (1) the Foundation's headquarters office lease and (2) one of the Foundation's Employer of Record responsible for administering compensation and benefits for non-US personnel. As of June 30, 2024, neither letter of credit has been used.\n\n## (g) Contributions Receivable\n\nContributions receivable represent gift amounts due from various entities, which are occasionally directed at specific activities. Contributions receivable due more than one year from the contribution date are discounted to present value using a fair value rate based on the U.S. Treasury bond rate and reflect the risks inherent in these cash flows. Contributions receivable are subject to review and adjustment by management should amounts be deemed uncollectible.\n\n## (h) Investments\n\nThe Foundation's policy regarding investments is to invest cash in short-term, intermediate-term, and long-term fixed income, and equity instruments without assuming material undue risk to principal. Preservation of principal and maintenance of liquidity are priorities over yield. Investments are reported at fair value with realized and unrealized gains and losses, and accrued interest included as a component of the change in net assets. 
Additionally, the Foundation holds no shares of donated stock as of June 30, 2024 or 2023, consistent with its policy to sell stock received through donations as soon as possible.\n\nThe Foundation presents its investment portfolios as short-term and long-term based on expectations of the holding period of the investment in line with the investment guidelines stipulated in the investment policy.\n\nASC Topic 820 establishes a fair value hierarchy that prioritizes observable inputs to valuation techniques used to measure fair value. The hierarchy gives the highest priority to unadjusted quoted", - "page_start": 8, - "page_end": 8, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\n## (9) Liquidity and Availability of Financial Assets\n\nThe Foundation's financial assets available for general expenditure within one year of the balance sheet date, June 30, 2024 and 2023, are as follows:\n\n| | 2024 | 2023 |\n|----------------------------------------------------------------------------------------|---------------|-------------|\n| Cash and cash equivalents | $ 82,845,159 | 75,808,401 |\n| Current contributions receivable | 856,657 | - |\n| Short-term investments | 116,074,763 | 132,216,667 |\n| Total financial assets | 199,776,579 | 208,025,068 |\n| Less: | | |\n| Restricted by donors for programs | 5,696,323 | 5,882,673 |\n| Donations payable to Wikimedia Endowment | 525,607 | 5,274,448 |\n| Financial assets available to meet cash needs for general expenditures within one year | $ 193,554,649 | 196,867,947 |\n\nThe Foundation's liquidity management includes a policy of structuring its financial assets to be available to meet its general expenditures, liabilities, grant-making, and other obligations as they come due. 
Cash and cash equivalents as reported on the consolidated balance sheet at June 30, 2024 and 2023, are the primary liquid resources used by the Foundation to meet these obligations. Financial assets invested in the short-term and long-term investments can be liquidated at any time as needed.\n\n## (10) Related Party Transactions\n\nThe Wikimedia Endowment began operations as a standalone tax-exempt 501(c)(3) organization on September 30, 2023, with the mission to act as a permanent fund that can support in perpetuity the operations and activities of current and future Wikimedia projects, which are projects that are approved by and advance the purposes of the Foundation or its successor if the Foundation ceases to exist. The Foundation does not have control or controlling financial interest in the Wikimedia Endowment and the Wikimedia Endowment has a separate Board of Directors, but the Wikimedia Endowment is considered a related party to the Foundation because Wikimedia Endowment management is also management at the Foundation.\n\nDuring the fiscal year ended June 30, 2024, the Foundation recognized revenue of $2,063,195 related to services provided to the Wikimedia Endowment, primarily for fundraising and general and administrative support under the terms of a cost sharing agreement. These costs are included within the Foundation ' s expenses based on the nature of the cost. The revenue from the Wikimedia Endowment reimbursing the costs is recorded within other income, net.", - "page_start": 18, - "page_end": 18, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\n## Notes to Consolidated Financial Statements\n\nJune 30, 2024 and 2023\n\nThe Foundation also receives donations on behalf of the Wikimedia Endowment as well as transfers additional Foundation donations to the Endowment monthly. 
Donations that are donor-specified for the Wikimedia Endowment are not recognized as revenue to the Foundation, whereas donations that are not donor-specified for the Wikimedia Endowment are recognized both as contributions revenue and awards and grants expense to the Foundation. The Foundation transferred $10,706,812 donor-designated gifts and $624,137 Foundation gifts to the Wikimedia Endowment during the year ended June 30, 2024. As of June 30, 2024, the Foundation owed the Wikimedia Endowment $525,607 for donations to be transferred to the Wikimedia Endowment for the month of June 2024.\n\nDuring the fiscal year ended June 30, 2024, the Wikimedia Endowment also provided the Foundation with grants of $1,500,000 for MediaWiki improvements, $600,000 for the Abstract Wikipedia project, and $500,000 for exploring strategies for expanding beyond the Foundation's existing audiences of consumers and contributors. The grants are recorded as contributions with donor restrictions and within net assets with donor restrictions as of June 30, 2024.\n\n## (11) Contingencies and Commitments\n\nIn the normal course of business, the Foundation receives various threats of litigation. 
In the opinion of management, the outcome of the pending lawsuits will not materially affect operations or the financial position of the Foundation.\n\n## (12) Subsequent Events\n\nThe Foundation has evaluated its subsequent events through October 8, 2024, the date at which the consolidated financial statements were available to be issued, and determined there are no items to disclose.", - "page_start": 19, - "page_end": 19, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "## WIKIMEDIA FOUNDATION, INC.\n\nNotes to Consolidated Financial Statements June 30, 2024 and 2023\n\n## (1) Organization and Summary of Significant Accounting Policies\n\n## (a) Organization and Purpose\n\nThe accompanying consolidated financial statements present the financial position, change in net assets and cash flows of the Wikimedia Foundation, Inc. (the Foundation) and Wikimedia, LLC.\n\nThe Foundation is the nonprofit organization that operates Wikipedia, a free online encyclopedia. Based in San Francisco, California, the Foundation is a 501(c)(3) charity that is funded primarily through donations and contributions.\n\nThe Foundation also operates Wikimedia, LLC, a Delaware Limited Liability Company, with the Foundation as its Sole Member. The Wikimedia, LLC is organized and operated exclusively for charitable and educational purposes within the meaning of section 501(c)(3) of the Internal Revenue Code and is a disregarded entity for tax purposes.\n\n## (b) Risks and Uncertainties\n\nThe Foundation's operations are funded primarily by public donations from individuals as well as gifts from foundations and corporations. External factors such as global geopolitics, recession, and currency markets may impact our ability to raise funds. 
As of the date of this report, the Foundation has not experienced an adverse impact on its business operations.\n\n## (c) Income Taxes\n\nThe Foundation is exempt from federal income tax under Section 501(c)(3) of the Internal Revenue Code and from state income tax under Chapter 220.13 of the Florida Statutes and Sections 23701d of Revenue and Taxation Code of the State of California. The Internal Revenue Service has determined that the Foundation is not a private foundation and contributions to it qualify as charitable contributions.\n\nThe Foundation has evaluated the financial statement impact of positions taken or expected to be taken in its tax returns. The Foundation is subject to income taxes on any net income that is derived from a trade or business, regularly carried on, and not in furtherance of the purposes for which it was granted exemption. Net income from any unrelated trade or business, in the opinion of management, is not material to the consolidated financial statements taken as a whole.\n\n## (d) Financial Statement Presentation\n\nNet assets, support and revenue, expenses, gains, and losses are classified based on the existence or absence of donor-imposed restrictions in accordance with Accounting Standards Codification (ASC) Topic 958, Not-for-Profit Entities .\n\nNet assets without donor restrictions represent unrestricted resources available to support operations and also include previously temporarily restricted resources, which have become available for use by the Foundation in accordance with the intentions of donors.\n\nNet assets with donor restrictions represent contributions that are limited in use by the Foundation in accordance with donor-imposed stipulations. 
The stipulations may expire with time or may be satisfied and removed by the actions of the Foundation according to the terms of the contribution by the donor.", - "page_start": 7, - "page_end": 7, - "source_file": "Wikimedia_Foundation_2024_Audited_Financial_Statements.pdf" - }, - { - "text": "## BA L A N C E SH E E T IT E M S\n\nCash and Cash Equivalents The decrease of cash and cash equivalents to $7.2 million at December 31, 2000 from $15.0 million at December 31, 1999 is due primarily to the net effects of working capital movements, foreign exchange gains and losses, the settlement of a forw a rd fore i g n exchange contract, private placement of common shares, capital expenditures and capital lease payments, and operating losses for the year ended December 31, 2000. (See Note 21 to the Consolidated Financial Statements - Reconciliation of net loss to net cash used in operating activities and the Consolidated Statements of Cash Flows.)\n\nRestricted Cash Restricted cash decreased to $2.1 million at December 31, 2000 from $10.9 million at December 31, 1999. The majority of restricted cash was held as security with respect to cash provided in Hungary by banks participating in Euro n e t 's ATM network, to cover", - "page_start": 21, - "page_end": 21, - "source_file": "NASDAQ_EEFT_2000.pdf" - }, - { - "text": "Cash and cash equivalents (bank advances) are defined as cash and short-term deposits, which have an original maturity of less than 90 days, less bank advances. 
As at December 31, 2013 and 2012, the balance of cash and cash equivalents was comprised of cash and demand deposits.\n\nThe accompanying notes are an integral part of the consolidated financial statements.", - "page_start": 96, - "page_end": 96, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "At December 31, 2004, the Company had $237.0 million of restricted cash deposits and $38.7 million of restricted marketable securities held as Ñnancial guarantees, including $119.0 million of restricted cash held for capital expenditures under certain debt facilities, and $34.3 million and $38.7 million of restricted cash and restricted marketable securities, respectively, pledged to regulatory agencies and governmental entities as Ñnancial guarantees of the Company's performance related to its Ñnal capping, closure and post-closure obligations at its landÑlls. The Company's restricted marketable securities consist of mutual funds invested in short-term investment grade securities, including mortgage-backed securities and U.S. Government obligations. These securities are available for sale and, as a result, are stated at fair value based upon quoted market", - "page_start": 91, - "page_end": 91, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## LIQUIDITY AND CAPITAL RESOURCES\n\nWe strive to maintain a level of liquidity sufficient to allow us to cover our seasonal cash needs and to maintain appropriate levels of shortterm borrowings. We believe that our operating cash flows, available credit facilities and potential future borrowings are sufficient to finance our cash requirements for the next 12 months and beyond.\n\nOver the long term, we manage our cash and capital structure to maximize shareholder return, maintain our financial position, manage refinancing risk and allow flexibility for strategic initiatives. 
We regularly assess our debt and leverage levels, capital expenditure requirements, debt service payments, dividend payouts, potential share repurchases and other future investments. We believe that as of January 31, 2015, our existing cash and cash equivalents on-hand of $827, available credit facilities of $800 and potential future operating cash flows and borrowings will be sufficient to fund these scheduled future payments and potential long-term initiatives. Additionally, if an agreement is reached and a transaction is consummated in regards to our credit card receivables, it could result in additional cash flows to further support our capital requirements and strategic initiatives.\n\n## Operating Activities\n\nNet cash provided by operating activities was $1,220 in 2014, $1,320 in 2013 and $1,110 in 2012. The majority of our operating cash inflows are derived from sales. We also receive cash payments for property incentives from developers. Our operating cash outflows generally consist of payments to our merchandise vendors (net of vendor allowances), payments to our employees for wages, salaries and other employee benefits and payments to our landlords for rent. Operating cash outflows also include payments for income taxes and interest payments on our short-term and long-term borrowings.\n\nCash provided by operating activities decreased in 2014 compared with 2013, which was primarily due to higher state tax payments made in 2014 compared with 2013, as well as changes in working capital in 2014.\n\nCash provided by operating activities increased in 2013 compared with 2012, resulting from less state tax payments made in 2013 due to additional payments made in 2012 as a result of the 53rd week, along with increased property incentives received from developers and changes in working capital.\n\n## Investing Activities\n\nNet cash used in investing activities was $889 in 2014, $822 in 2013 and $369 in 2012. 
Our investing cash flows primarily consist of capital expenditures, changes in restricted cash accumulated for debt maturities and changes in credit card receivables associated with cardholder purchases outside of Nordstrom using our Nordstrom Visa credit cards.\n\n## Capital Expenditures\n\nOur capital expenditures over the last three years totaled $2,177, with $861 in 2014, $803 in 2013 and $513 in 2012. Capital expenditures increased in 2014 compared with 2013 primarily due to ongoing store expansion and increased technology investments.\n\nCapital expenditures increased in 2013 compared with 2012 as we continued to make progress executing our customer strategy through increased investments in technology, ecommerce, remodels and new stores, including Nordstrom Rack and our Manhattan full-line store.\n\nThe following table summarizes our store count and square footage activity:", - "page_start": 38, - "page_end": 38, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "| Proceeds from maturities of investments | 17,975 | 8,959 |\n| Proceeds from sales of investments | 200 | 138 |\n| Business combinations, net of cash acquired | - | (64) |\n| Net cash used in investing activities | (11,184) | (10,780) |\n| Cash Flows from Financing Activities | | |\n| Proceeds from issuances of debt | 4,360 | 2,526 |\n| Repayments of debt | (1,783) | (887) |\n| Proceeds from exercises of stock options and other stock issuances | 788 | 548 |\n| Principal payments on finance leases | (291) | (340) |\n| Debt issuance costs | (6) | (23) |\n| Distributions paid to noncontrolling interests in subsidiaries | (76) | (105) |\n| Payments for buy-outs of noncontrolling interests in subsidiaries | (124) | (17) |\n| Net cash provided by financing activities | 2,868 | 1,702 |\n| Effect of exchange rate changes on cash and cash equivalents and restricted cash | (8) | (142) |\n| Net increase (decrease) in cash and cash equivalents and restricted cash | 1,785 | (334) |\n| Cash and cash 
equivalents and restricted cash, beginning of period | 17,189 | 16,924 |\n| Cash and cash equivalents and restricted cash, end of period | $ 18,974 | $ 16,590 |\n| Supplemental Non-Cash Investing and Financing Activities | | |\n| Acquisitions of property and equipment included in liabilities | $ 2,727 | $ 1,717 |\n| Leased assets obtained in exchange for finance lease liabilities | $ 32 | $ 1 |\n| Leased assets obtained in exchange for operating lease liabilities | $ 1,232 | $ 1,548 |", - "page_start": 11, - "page_end": 11, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "\n\nNotes to the Financial Statements\n\n| | 2013 $'000 | 2012 $'000 |\n|---------------------------------------------------|---------------|---------------|\n| 7. Cash and cash equivalents and restricted cash | | |\n| Current | | |\n| Cash on hand | 18 | 17 |\n| Deposits at call | 30,476 | 87,014 |\n| Cash and other bank balances | 30,494 | 87,031 |\n| Other deposits | 2,493 | 3,592 |\n| Total cash and cash equivalents - current | 32,987 | 90,623 |\n| Non-current | | |\n| Restricted cash | 5,474 | - |\n| Total restricted cash - non-current | 5,474 | - |\n\n## Cash on hand\n\nThese are petty cash balances held by subsidiaries.\n\nDeposits at call\n\nThe deposits at call are bearing floating interest rates and they may be accessed daily.\n\n## Other deposits\n\nThis represents restricted cash held on deposit with financial institutions.\n\nRestricted cash\n\nUnder the terms of the loan facilities (see Note 16), the Group is required to maintain a minimum cash balance of US$5 million in respect of Akara.\n\nRisk exposure\n\nThe Group's exposure to interest rate risk and a sensitivity analysis for financial assets and liabilities are disclosed in Note 28.\n\n## 8. 
Receivables\n\n| Trade receivables | - | 3,201 |\n|---------------------|-------|---------|\n| Other debtors | 9,431 | 9,025 |\n| Total receivables | 9,431 | 12,226 |\n\n## Trade receivables\n\nTrade receivables represent gold sales at the end of the financial year, where payment was yet to be received. No trade receivables were past due or impaired as at 30 June 2013 (2012: nil).\n\n## Other debtors\n\nOther debtors mainly relate to GST / VAT receivables, advances made for land acquisition and diesel fuel tax credits.\n\nRisk exposure\n\nThe Group's exposure to credit and currency is disclosed in Note 28.\n\n", - "page_start": 85, - "page_end": 85, - "source_file": "ASX_KCN_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20200471_en.pdf", - "query": "What is the price of the The Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020 ?", - "target_page": 8, - "target_passage": "£6.90", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## S T A T U T O R Y I N S T R U M E N T S\n\n## 2020 No. 471\n\n## EDUCATION, ENGLAND\n\nThe Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020\n\nMade\n\n-\n\n-\n\n-\n\n-\n\n28th April 2020\n\nLaid before Parliament\n\n30th April 2020\n\nComing into force\n\n-\n\n-\n\n1st May 2020\n\nThe Secretary of State makes the following Regulations in exercise of the powers conferred by sections 30(8), 31(4), 36(11), 37(4), 44(7)(b) and (c), 47, 49(3), 51(4), 56(1), 71(11), 73(4), 74(3) and 135(2) and (3) of the Children and Families Act 2014( a ) and sections 29(3) and 569(4) of the Education Act 1996( b ).\n\n## Citation and commencement\n\n- 1. These Regulations may be cited as the Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020 and come into force on 1st May 2020.\n\n## Review and expiry\n\n- 2. 
-(1) The Secretary of State must review the effectiveness of these Regulations during the period for which they have effect.\n- (2) These Regulations cease to have effect on 25th September 2020.\n\n## Amendment of the Special Educational Needs and Disability Regulations 2014\n\n- 3. The Special Educational Needs and Disability Regulations 2014( c ) are amended as follows.\n- 4. In regulation 2(1) (interpretation), at the appropriate place insert-\n- ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n- 5. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time periods due to coronavirus exception\n\n2A. -(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- (2) The coronavirus exception applies where it is not reasonably practicable for the local authority to meet the requirement specified in regulation 11(2)(a) for a reason relating to the incidence or transmission of coronavirus.'.\n\n## Amendment of the Special Educational Needs and Disability (Detained Persons) Regulations 2015\n\n - 18. The Special Educational Needs and Disability (Detained Persons) Regulations 2015( a ) are amended as follows.\n - 19. In regulation 2(1) (interpretation), at the appropriate place insert-\n - ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n - 20. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time periods due to coronavirus exception\n\n - 2A. 
-(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n - (2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n - (3) The following regulations are specified for the purposes of paragraphs (1) and (2)-\n - (a) regulation 15(1) and (4) (needs assessments which are not completed);\n - (b) regulation 16(2), (3) and (4) (transfer of a kept EHC plan);\n - (c) regulation 17(1) and (2) (restriction on disclosure of EHC plans);\n - (d) regulation 19 (requirement to consider mediation);\n - (e) regulation 20(1) and (2) (where the appropriate person does not wish to or fails to pursue mediation);\n - (f) regulation 21 (mediation);\n - (g) regulation 24(1) and (3) (mediation certificate under section 55(5) of the Act);\n - (h) regulation 27(3) (steps to be taken by a home authority);\n - (i) regulation 29(2) and (6) (compliance with the orders of the First-tier Tribunal); and\n - (j) regulation 30(3) and (6) (unopposed appeals).'.\n - 21. In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert-\n - '(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.'.\n - 22. 
In regulation 5(4) (decision whether or not to conduct a detained person's EHC needs assessment)-\n - (a) at the end of sub-paragraph (b) omit 'or'; and\n - (b) at the end of sub-paragraph (c) insert-\n\n', or\n\n - (d) of a reason relating to the incidence or transmission of coronavirus'.", - "page_start": 3, - "page_end": 3, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "18. Guidance issued by the Secretary of State pursuant to paragraph 4(2) of Schedule 2D to the 2020 Regulations has effect as guidance issued pursuant to paragraph 4(2) of Schedule 9 to these Regulations.\n\n## EXPLANATORY NOTE\n\n(This note is not part of the Regulations)\n\nThese Regulations replace the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the International Travel Regulations'), the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020 and the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021.\n\nThey impose requirements on certain categories of person to provide information upon arrival in England, to take coronavirus tests before and after arrival and to self-isolate in order to prevent the spread of infection or contamination from coronavirus or coronavirus disease. They also impose obligations on operators to ensure that passengers receive information and comply with the requirements.\n\nAn impact assessment has not been produced for this instrument. 
An explanatory memorandum has been published alongside this instrument at www.legislation.gov.uk.\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 90, - "page_end": 90, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (b) in the definition of 'International Travel Regulations', for 'the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (3) In regulation 4ZA-\n - (a) in the heading, for 'the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021';\n - (b) in paragraph (1)(a), for 'regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the 2020 Regulations')' substitute 'regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021 ('the International Travel and Operator Liability Regulations')';\n - (c) in paragraph (1)(c), for 'paragraph 7(1)(f) of Schedule 2C to the 2020 Regulations' substitute 'paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations';\n - (d) in paragraph (3), for 'paragraph 7(1)(f) of Schedule 2C to the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations'.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- 23. 
In regulation 8(2) (duty to co-operate in a detained person's EHC needs assessment), at the end of sub-paragraph (d) insert-\n\n'; or\n\n - (e) of a reason relating to the incidence or transmission of coronavirus'.\n - 24. In regulation 10(4) (decision not to secure an EHC plan)-\n - (a) at the end of sub-paragraph (b) omit 'or'; and\n - (b) at the end of sub-paragraph (c) insert-\n\n'; or\n\n - (d) of a reason relating to the incidence or transmission of coronavirus'.\n - 25. In regulation 13(3) (timescales for EHC plans), for '(c)' substitute '(d)'.\n - 26. In regulation 29 (compliance with the orders of the First-tier Tribunal)-\n - (a) after paragraph (6) insert-\n - '(6A) The home authority need not comply with the time limits specified in paragraph (3) if it is impractical to do so because the circumstances referred to in regulation 10(4)(d) apply.'.\n - (b) in paragraph (7)(c) after '10(4)(a)' insert 'or (d)'.\n - 27. In regulation 30(7)(c) (unopposed appeals), after '10(4)(a)' insert 'or (d)'.\n\n## Amendment of the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017\n\n28. The Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017( a ) are amended as follows.\n\n - 29. In regulation 2 (interpretation), at the appropriate place insert-\n - ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n - 30. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time periods due to coronavirus exception\n\n - 2A. 
-(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n - (2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n - (3) The following regulations are specified for the purposes of paragraphs (1) and (2)-\n - (a) regulation 6(3) and (6) (responding to health care recommendations); and\n - (b) regulation 7(1) and (4) (responding to social care recommendations).'.\n\nVicky Ford Parliamentary Under Secretary of State Department for Education\n\n28th April 2020", - "page_start": 4, - "page_end": 4, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- 2. -(1) The Health Protection (Coronavirus, Restrictions) (Self-Isolation) (England) Regulations 2020( a ) are amended as follows.\n - (2) In regulation 2D(1)(c), for 'regulation 4 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'regulation 9 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.\n - (3) In regulation 6(1)-\n - (a) in the definitions of 'designated place', 'isolation requirements' and 'self-isolating worker', for 'regulation 4' substitute 'regulation 9';", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n - (iii) giving effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or", - "page_start": 28, - "page_end": 28, - "source_file": "uksi_20210582_en.pdf" - }, 
- { - "text": "## PART 6\n\n## Final provisions\n\n## Review of need for requirements\n\n24. The Secretary of State must review the need for the requirements imposed by these Regulations by 14th June 2021 and at least once every 28 days thereafter.\n\n## Expiry of Regulations\n\n25. These Regulations expire at the end of 16th May 2022.\n\n## Revocations, transitional provision consequential amendments and savings\n\n26. -(1) The following Regulations are revoked-\n\n - (a) the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020( a );\n - (b) the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the International Travel Regulations')( b ); and\n - (c) the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021( c ).\n - (2) Schedule 15 makes consequential amendments to other instruments specified in that Schedule.\n - (3) Schedule 16 makes transitional provisions.\n - (4) Nothing in these Regulations applies in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021 (and accordingly, the regulations mentioned in paragraph (1) continue to have effect in relation to such a person).\n\nSigned by authority of the Secretary of State\n\nAt 10.32 a.m. 
on 14th May 2021\n\nRobert Courts Parliamentary Under Secretary of State Department for Transport", - "page_start": 30, - "page_end": 30, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (a) for the purpose of carrying out a function under these Regulations;\n - (b) for the purpose of-\n - (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,\n - (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n - (iii) giving effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or\n - (c) for a purpose connected with, or otherwise incidental to, a purpose described in subparagraph (a) or (b).\n - (4) Subject to paragraph (7), A may only disclose relevant information to another person (the 'recipient') where it is necessary for the recipient to have the information -\n - (a) for the purpose of carrying out a function of the recipient under-\n - (i) these Regulations, or\n - (ii) an enactment which, in Scotland, Wales or Northern Ireland, has the effect of requiring the isolation or quarantine of persons who have been outside the common travel area, for any of the purposes described in sub-paragraph (b);\n - (b) for the purpose of-\n - (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,", - "page_start": 28, - "page_end": 28, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20200471_en.pdf", - "query": "When come into force the Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020 ?", - "target_page": 1, - "target_passage": "These Regulations may be cited as the Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020 and come into force on 1st May 2020.", - 
"chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## S T A T U T O R Y I N S T R U M E N T S\n\n## 2020 No. 471\n\n## EDUCATION, ENGLAND\n\nThe Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020\n\nMade\n\n-\n\n-\n\n-\n\n-\n\n28th April 2020\n\nLaid before Parliament\n\n30th April 2020\n\nComing into force\n\n-\n\n-\n\n1st May 2020\n\nThe Secretary of State makes the following Regulations in exercise of the powers conferred by sections 30(8), 31(4), 36(11), 37(4), 44(7)(b) and (c), 47, 49(3), 51(4), 56(1), 71(11), 73(4), 74(3) and 135(2) and (3) of the Children and Families Act 2014( a ) and sections 29(3) and 569(4) of the Education Act 1996( b ).\n\n## Citation and commencement\n\n- 1. These Regulations may be cited as the Special Educational Needs and Disability (Coronavirus) (Amendment) Regulations 2020 and come into force on 1st May 2020.\n\n## Review and expiry\n\n- 2. -(1) The Secretary of State must review the effectiveness of these Regulations during the period for which they have effect.\n- (2) These Regulations cease to have effect on 25th September 2020.\n\n## Amendment of the Special Educational Needs and Disability Regulations 2014\n\n- 3. The Special Educational Needs and Disability Regulations 2014( c ) are amended as follows.\n- 4. In regulation 2(1) (interpretation), at the appropriate place insert-\n- ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n- 5. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time periods due to coronavirus exception\n\n2A. -(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "18. 
Guidance issued by the Secretary of State pursuant to paragraph 4(2) of Schedule 2D to the 2020 Regulations has effect as guidance issued pursuant to paragraph 4(2) of Schedule 9 to these Regulations.\n\n## EXPLANATORY NOTE\n\n(This note is not part of the Regulations)\n\nThese Regulations replace the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the International Travel Regulations'), the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020 and the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021.\n\nThey impose requirements on certain categories of person to provide information upon arrival in England, to take coronavirus tests before and after arrival and to self-isolate in order to prevent the spread of infection or contamination from coronavirus or coronavirus disease. They also impose obligations on operators to ensure that passengers receive information and comply with the requirements.\n\nAn impact assessment has not been produced for this instrument. An explanatory memorandum has been published alongside this instrument at www.legislation.gov.uk.\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 90, - "page_end": 90, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (2) The coronavirus exception applies where it is not reasonably practicable for the local authority to meet the requirement specified in regulation 11(2)(a) for a reason relating to the incidence or transmission of coronavirus.'.\n\n## Amendment of the Special Educational Needs and Disability (Detained Persons) Regulations 2015\n\n - 18. 
The Special Educational Needs and Disability (Detained Persons) Regulations 2015( a ) are amended as follows.\n - 19. In regulation 2(1) (interpretation), at the appropriate place insert-\n - ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n - 20. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time periods due to coronavirus exception\n\n - 2A. -(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n - (2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n - (3) The following regulations are specified for the purposes of paragraphs (1) and (2)-\n - (a) regulation 15(1) and (4) (needs assessments which are not completed);\n - (b) regulation 16(2), (3) and (4) (transfer of a kept EHC plan);\n - (c) regulation 17(1) and (2) (restriction on disclosure of EHC plans);\n - (d) regulation 19 (requirement to consider mediation);\n - (e) regulation 20(1) and (2) (where the appropriate person does not wish to or fails to pursue mediation);\n - (f) regulation 21 (mediation);\n - (g) regulation 24(1) and (3) (mediation certificate under section 55(5) of the Act);\n - (h) regulation 27(3) (steps to be taken by a home authority);\n - (i) regulation 29(2) and (6) (compliance with the orders of the First-tier Tribunal); and\n - (j) regulation 30(3) and (6) (unopposed appeals).'.\n - 21. 
In regulation 4 (determination whether or not special educational provision may be necessary), after paragraph (2) insert-\n - '(3) The local authority need not comply with the time limit referred to in paragraph (1) if it is impractical to do so because of a reason relating to the incidence or transmission of coronavirus.'.\n - 22. In regulation 5(4) (decision whether or not to conduct a detained person's EHC needs assessment)-\n - (a) at the end of sub-paragraph (b) omit 'or'; and\n - (b) at the end of sub-paragraph (c) insert-\n\n', or\n\n - (d) of a reason relating to the incidence or transmission of coronavirus'.", - "page_start": 3, - "page_end": 3, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- 23. In regulation 8(2) (duty to co-operate in a detained person's EHC needs assessment), at the end of sub-paragraph (d) insert-\n\n'; or\n\n - (e) of a reason relating to the incidence or transmission of coronavirus'.\n - 24. In regulation 10(4) (decision not to secure an EHC plan)-\n - (a) at the end of sub-paragraph (b) omit 'or'; and\n - (b) at the end of sub-paragraph (c) insert-\n\n'; or\n\n - (d) of a reason relating to the incidence or transmission of coronavirus'.\n - 25. In regulation 13(3) (timescales for EHC plans), for '(c)' substitute '(d)'.\n - 26. In regulation 29 (compliance with the orders of the First-tier Tribunal)-\n - (a) after paragraph (6) insert-\n - '(6A) The home authority need not comply with the time limits specified in paragraph (3) if it is impractical to do so because the circumstances referred to in regulation 10(4)(d) apply.'.\n - (b) in paragraph (7)(c) after '10(4)(a)' insert 'or (d)'.\n - 27. In regulation 30(7)(c) (unopposed appeals), after '10(4)(a)' insert 'or (d)'.\n\n## Amendment of the Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017\n\n28. 
The Special Educational Needs and Disability (First-tier Tribunal Recommendations Power) Regulations 2017( a ) are amended as follows.\n\n - 29. In regulation 2 (interpretation), at the appropriate place insert-\n - ''coronavirus' means severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); '.\n - 30. After regulation 2 (interpretation) insert-\n\n## ' Relaxation of time periods due to coronavirus exception\n\n - 2A. -(1) Where the coronavirus exception applies, any requirement in any of the regulations specified in paragraph (3) for action to be taken within a specified period of time or by a certain day is to be read instead as a requirement for such action to be taken as soon as reasonably practicable.\n - (2) The coronavirus exception applies where it is not reasonably practicable for a person to meet a requirement referred to in paragraph (1) for a reason relating to the incidence or transmission of coronavirus.\n - (3) The following regulations are specified for the purposes of paragraphs (1) and (2)-\n - (a) regulation 6(3) and (6) (responding to health care recommendations); and\n - (b) regulation 7(1) and (4) (responding to social care recommendations).'.\n\nVicky Ford Parliamentary Under Secretary of State Department for Education\n\n28th April 2020", - "page_start": 4, - "page_end": 4, - "source_file": "uksi_20200471_en.pdf" - }, - { - "text": "- (b) in the definition of 'International Travel Regulations', for 'the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## PART 6\n\n## Final provisions\n\n## Review of need for requirements\n\n24. 
The Secretary of State must review the need for the requirements imposed by these Regulations by 14th June 2021 and at least once every 28 days thereafter.\n\n## Expiry of Regulations\n\n25. These Regulations expire at the end of 16th May 2022.\n\n## Revocations, transitional provision consequential amendments and savings\n\n26. -(1) The following Regulations are revoked-\n\n - (a) the Health Protection (Coronavirus, Public Health Information for International Passengers) (England) Regulations 2020( a );\n - (b) the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the International Travel Regulations')( b ); and\n - (c) the Health Protection (Coronavirus, Pre-Departure Testing and Operator Liability) (England) (Amendment) Regulations 2021( c ).\n - (2) Schedule 15 makes consequential amendments to other instruments specified in that Schedule.\n - (3) Schedule 16 makes transitional provisions.\n - (4) Nothing in these Regulations applies in relation to a person who arrived in England before 4.00 a.m. on 17th May 2021 (and accordingly, the regulations mentioned in paragraph (1) continue to have effect in relation to such a person).\n\nSigned by authority of the Secretary of State\n\nAt 10.32 a.m. on 14th May 2021\n\nRobert Courts Parliamentary Under Secretary of State Department for Transport", - "page_start": 30, - "page_end": 30, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- 2. 
-(1) The Health Protection (Coronavirus, Restrictions) (Self-Isolation) (England) Regulations 2020( a ) are amended as follows.\n - (2) In regulation 2D(1)(c), for 'regulation 4 of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'regulation 9 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021'.\n - (3) In regulation 6(1)-\n - (a) in the definitions of 'designated place', 'isolation requirements' and 'self-isolating worker', for 'regulation 4' substitute 'regulation 9';", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (3) In regulation 4ZA-\n - (a) in the heading, for 'the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021';\n - (b) in paragraph (1)(a), for 'regulation 3B of the Health Protection (Coronavirus, International Travel) (England) Regulations 2020 ('the 2020 Regulations')' substitute 'regulation 6 of the Health Protection (Coronavirus, International Travel and Operator Liability) (England) Regulations 2021 ('the International Travel and Operator Liability Regulations')';\n - (c) in paragraph (1)(c), for 'paragraph 7(1)(f) of Schedule 2C to the 2020 Regulations' substitute 'paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations';\n - (d) in paragraph (3), for 'paragraph 7(1)(f) of Schedule 2C to the Health Protection (Coronavirus, International Travel) (England) Regulations 2020' substitute 'paragraph 7(1)(g) of Schedule 11 to the International Travel and Operator Liability Regulations'.", - "page_start": 88, - "page_end": 88, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n - (iii) giving 
effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or", - "page_start": 28, - "page_end": 28, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (a) for the purpose of carrying out a function under these Regulations;\n - (b) for the purpose of-\n - (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,\n - (ii) monitoring the spread of infection or contamination with coronavirus or coronavirus disease, or\n - (iii) giving effect to any international agreement or arrangement relating to the spread of infection or contamination with coronavirus or coronavirus disease; or\n - (c) for a purpose connected with, or otherwise incidental to, a purpose described in subparagraph (a) or (b).\n - (4) Subject to paragraph (7), A may only disclose relevant information to another person (the 'recipient') where it is necessary for the recipient to have the information -\n - (a) for the purpose of carrying out a function of the recipient under-\n - (i) these Regulations, or\n - (ii) an enactment which, in Scotland, Wales or Northern Ireland, has the effect of requiring the isolation or quarantine of persons who have been outside the common travel area, for any of the purposes described in sub-paragraph (b);\n - (b) for the purpose of-\n - (i) preventing danger to public health as a result of the spread of infection or contamination with coronavirus or coronavirus disease,", - "page_start": 28, - "page_end": 28, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "sg248459.pdf", - "query": "Who is Daniel Casali ?", - "target_page": 12, - "target_passage": " Daniel Casali is a Thought Leader Information Technology Specialist working for 15 years at IBM with Power Systems, high-performance computing, big data, and storage. 
His role at IBM is to bring to reality solutions that address client’s needs by exploring new technologies for different workloads. He is also fascinated by real multicloud implementations, always trying to abstract and simplify the new challenges of the heterogeneous architectures that are intrinsic to this new consumption model, be that on-premises or in the public cloud. ", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Front cover\n\n\n\n## Red Hat OpenShift and IBM Cloud Paks on IBM Power Systems Volume 1\n\nDino Quintero\n\nRicardo Dobelin Barros\n\nDaniel Casali\n\nLuis Ferreira\n\nAlain Fisher\n\nFederico Fros\n\nLuis Daniel Gonzalez\n\nMiguel Gomez Gonzalez\n\nMahesh Gurugunti\n\nRogelio Rivera Gutierrez\n\nNicolas Joly\n\nBoris Litichevsky\n\nIsmael Solis Moreno\n\nGabriel Padilla\n\n\n\nSudipto Pal\n\nBogdan Savu\n\nRichard Wale\n\n\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "sg248459.pdf" - }, - { - "text": "## HON INDUSTRIES 2003\n\n## HEARTH & HOME TECHNOLOGIES: HOT! HOT! HOT!\n\n\n\n## A CASE STUDY IN EXPANDING MARKETS\n\nWith four brand names under the Hearth & Home Technologies umbrella, we are collectively the world's largest fireplace manufacturer, the country's premier fireplace brands, the most recognized name in the industry, and the preferred brands among home builders. As the leading provider of hearth and home products and services, we make houses feel more like homes.\n\nIn addition to our commanding leadership position in manufacturing the two strongest hearth and home product brand names - Heatilator ® and Heat-N-Glo ® - we also offer innovative wood fuel technology, fireplaces, and stoves through Quadra-Fire TM , while Fireside Hearth & Home distributes, services, and sells fireplace systems.\n\nWhat are we up to with all our great brands? 
We are meeting a broad range of customer needs, particularly by selling both to consumers and builders through a network of independent and company-owned, stand-alone, or gallerystyle design and installation centers. These Fireside Hearth & Home design centers - visually impressive and aspirational in setting - manifest our proprietary concept of elevating the hearth retail, installation, and distribution experience to a new level of sophistication and service. Since there is no other nationally branded hearth retailer in the industry, we are once again changing the game by being first-to-market innovators.\n\nOur newest store in Eagan, Minnesota, for example, is living proof that we're succeeding in growing core product share by getting closer to consumers. One customer, a St. Paul, Minnesota veterinarian, recently had a typically dynamic retail experience at the Eagan store. He's among a large group of people who own at least one of our hearth products - and who comes back for more. He explains: 'When we moved into our house, there were three fireplaces built into the family room, living room, and kitchen. Since we used them every day and liked them so much, we decided to convert our threeseason porch into a year-round porch.'\n\n'We all went to the Eagan store to purchase our fourth Heat-N-Glo ® fireplace. Once we were walking around the store, taking in the lifestyle environments that are set up and dreaming about what our house could look and feel like, we realized we wanted more! We saw an amazing stone surround setting in one of the store displays - and before you knew it, we had bought the whole wall. Not only does our new fireplace now have a beautiful aesthetic and terrific functionality, but so does our porch. 
Because the surround wall installation was so surprisingly easy and clean, we're even considering our next purchase.'", - "page_start": 26, - "page_end": 26, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "PERFECT MATCH #4\n\n## THE HOMEOWNER AND HEARTH & HOME TECHNOLOGIES\n\nHearth & Home Technologies, you warm our hearts by making a powerful impact on our lives; you are the ones who transform our houses into homes. First, you warmed up our living rooms and family rooms with style, elegance, and comfort. Now, you're heating up our porches and our kitchens … and finding creative and innovative ways to make our bedrooms, bathrooms, dens, guest rooms, and kids' rooms all toasty with your beautiful glow. The home fires are burning brighter and hotter than ever, now that you've come into our lives.", - "page_start": 24, - "page_end": 24, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "While Patricia Churchland and Paul Churchland have famously applied eliminative materialism to propositional attitudes, philosophers including Daniel Dennett, Georges Rey, and Keith Frankish have applied it to qualia or phenomenal consciousness (i.e., conscious experience). [59] On their view, it is mistaken not only to believe there is a hard problem of consciousness, but to believe phenomenal consciousness exists at all. [19][61]", - "page_start": 7, - "page_end": 7, - "source_file": "wikipedia2.pdf" - }, - { - "text": "Semantic Evaluation (SemEval-2022) , pages 10941106, Seattle, United States. Association for Computational Linguistics.\n\nAlexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. ArXiv , abs/1803.05449.\n\nMathias Creutz. 2018. Open subtitles paraphrase corpus for six languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018) , Miyazaki, Japan. 
European Language Resources Association (ELRA).\n\nJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In North American Chapter of the Association for Computational Linguistics .\n\nNing Ding, Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2023. Sentence and document representation learning. In Representation Learning for Natural Language Processing , pages 81-125. Springer Nature Singapore Singapore.\n\nAarohi Srivastava et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. ArXiv , abs/2206.04615.\n\nAlexander R Fabbri, Wojciech Kry'sci'nski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. Summeval: Re-evaluating summarization evaluation. Transactions of the Association for Computational Linguistics , 9:391-409.\n\nManuel Faysse, Patrick Fernandes, Nuno M. Guerreiro, António Loison, Duarte M. Alves, Caio Corro, Nicolas Boizard, João Alves, Ricardo Rei, Pedro H. Martins, Antoni Bigata Casademunt, François Yvon, André F. T. Martins, Gautier Viaud, Céline Hudelot, and Pierre Colombo. 2024. Croissantllm: A truly bilingual french-english language model.\n\nJack FitzGerald, Christopher Hench, Charith Peris, Scott Mackie, Kay Rottmann, Ana Sanchez, Aaron Nash, Liam Urbach, Vishesh Kakarala, Richa Singh, Swetha Ranganath, Laurie Crist, Misha Britan, Wouter Leeuwis, Gokhan Tur, and Prem Natarajan. 2023. MASSIVE: A 1M-example multilingual natural language understanding dataset with 51 typologically-diverse languages. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 4277-4302, Toronto, Canada. Association for Computational Linguistics.\n\nTianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. 
In Conference on Empirical Methods in Natural Language Processing .\n\nIker García-Ferrero, Rodrigo Agerri, and German Rigau. 2021. Benchmarking meta-embeddings: What works and what does not. In Findings of the Association for Computational Linguistics: EMNLP 2021 , pages\n\n3957-3972, Punta Cana, Dominican Republic. Association for Computational Linguistics.\n\nNaman Goyal, Cynthia Gao, Vishrav Chaudhary, PengJen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2021. The flores-101 evaluation benchmark for low-resource and multilingual machine translation.\n\nHang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, and Didier Schwab. 2020. Flaubert: Unsupervised language model pre-training for french.\n\nChankyu Lee, Rajarshi Roy, Mengyao Xu, Jonathan Raiman, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. 2024. Nv-embed: Improved techniques for training llms as generalist embedding models.\n\nAntoine Lefebvre-Brossard, Stephane Gazaille, and Michel C. Desmarais. 2023. Alloprof: a new french question-answer education dataset and its use in an information retrieval case study.\n\nHaoran Li, Abhinav Arora, Shuohui Chen, Anchit Gupta, Sonal Gupta, and Yashar Mehdad. 2021. MTOP: A comprehensive multilingual task-oriented semantic parsing benchmark. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume , pages 2950-2962, Online. Association for Computational Linguistics.", - "page_start": 9, - "page_end": 9, - "source_file": "arxiv4.pdf" - }, - { - "text": "| | | Jeffrie Davidson |\n| Steve Beeson | Octavio Carpio Deborah Carroll | |\n| | | David Danley Beverly Dart |\n| Danny Beets Bo Bekendam Robyn Belew | Stephan Carroll James Carter | Betsy Davis Chad Davis |\n| Paige Benedict Cheryl Bennett | Alex Casias | Garry Davis |\n| | Bernardino Castaneda Jr. 
Jose Castelo Aaron Casto | Kathy Davis Megan Davis Rodger Davis Ron Davis |\n| Garrett Benton John Bergman Sharon Berkley | Charles Castelli | Kenny Dawson |\n| Eric Bess Robert Bevel | Brandon Cates Scott Cavner | Robert Day Landon Dean |\n| Amar Bhakta | Gregory Cavness Cassie Cawyer Rosa Chacon | Stanley Dean Kevin Deeds Matthew Deel |\n| Randy Bickel Jr. Liz Bicoy Jacob Biernacki | Tim Chaloupek Paul Charles | Tim Deffenbaugh Gary Dennis |\n| Pam Billingsley | Harvey Chambliss | Mark Deshazo |\n| Matthew Birch Jeremy Black David Black Jr | David Chavarria Oscar Chavez | |\n| Willis Blaker III | Kathy Cheesman James Cheshire | Karl Dexter Donald DeForest Jr. |\n| Phillip Blankenship Emily Blaschke Tony Blasier | Henry Childress Richard Childress | Gianny Diaz Andrew Dickins |\n| Jimmy Blevins Doug Bohlen Brandi Bonner | Stephanie Choate Twila Christy Kerry Clapp Suzanne Clapper | Ed Dillard Robert Dison Linda Dixon |\n| | David Clark | |\n| | | Michelle Dodd Gary Donley |\n| Richard Bolding | | |\n| Daniel Borowski | Brandon Clark | |\n| John Bottrell II | | Nicolas Dominguez |\n| | Dustin Clark James Clark | Stephanie Doty |\n| | Leon Clark | |\n| Brian Bounds Barbara Bowersox | | Dawn Douglas Greg Douglas Johnny Dowdy |\n| Deven Bowles Donald Bowman | Steve Clark | Lorie Douglas |", - "page_start": 34, - "page_end": 34, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "## HEARTH & HOME TECHNOLOGIES AT-A-GLANCE\n\n\n\nSince the first air-circulating fireplace was patented in 1927, Heatilator ® has become the most recognized and preferred fireplace brand among homebuilders. 
Heatilator Home Products TM extends our business beyond the hearth to include products that enhance a healthy home environment.\n\n## HIGHLIGHTS/AWARDS:\n\n - · Redesigned and launched Novus TM , the largest Heatilator ® product family, continuing a best-in-class tradition in the gas fireplace segment.\n - · Won the hearth industry's Vesta Award that recognized the Silhouette TM electric fireplace as the best new electric fireplace on the market in 2003.\n\nWWW.HEATILATOR.COM\n\nThe leader in high-efficiency, durable, and stylish hearth products, the Quadra-Fire TM brand offers specialty channel partners the widest selection of high-performance fireplaces, stoves, and fireplace inserts in the wood, gas, pellet, and electric fuel categories.\n\n## HIGHLIGHTS/AWARDS:\n\n - · 2003 product introductions included new wood- and pellet-burning stoves and fireplaces; wood and gas inserts and fireplace fronts; and electric stoves and fireplaces.\n - · The Quadra-Fire TM 7100 EPA Wood Fireplace won the Vesta Awards' 'Best New Hearth Product for 2003' and 'Best in Show in 2003.'\n\n\n\nThe hearth industry's design and innovation leader, Heat-N-Glo ® has been awarded more than 50 patents and is known for its innovative hearth technology. 
The Heat-N-Glo ® brand now includes a complete line of gas, wood, and electric fireplaces, stoves and inserts, unique surrounds, and distinctive accessories - all designed to meet discriminating homeowners' desires for comfort, beauty, and elegance.\n\n## HIGHLIGHTS/AWARDS:\n\nNew product introductions:\n\n - · Cutting Edge TM - the world's only customizable insert surround that allows for a natural stone finish at installation.\n - · Escape TM fireplace - the world's first and only direct-vent gas fireplace that has a complete masonry appearance inside and out was a finalist for the Vesta award for 'Best New Gas Fireplace.'\n - · Vesta awarded the Dakota TM outdoor fireplace 'Best New Outdoor Fireplace Product' in 2003, and also named Rekindler TM a finalist for 'Best New Gas Insert.'\n - · The Infinity TM fireplace won 'Best New Product' from Building Products magazine, for its innovative combination of traditional masonry appearance and advanced venting and installation applications.\n\nWWW.HEATNGLO.COM\n\n\n\nThe Fireside Furnishings business is the preferred manufacturer for mantels and surrounds to complement Heatilator ® , Heat-N-Glo ® , or Quadra-Fire TM fireplaces. It builds the widest range of mantels, shelves, cabinets, and wall systems, from simple and inexpensive offerings to elegant and elaborate designs featuring custom millwork and imported stone.\n\n## HIGHLIGHTS/AWARDS:\n\n - · Heritage Collection TM was named a 2003 Vesta Award Finalist for Mantels, Facings, and Surrounds.\n\nWWW.FIRESIDEFURNISHINGS.COM\n\n\n\nThe leading provider of hearth and home products and services, Fireside Hearth & Home design centers help consumers achieve the feeling they want in their home by supporting the entire buying process - from purchase to installation and after-sale service. 
Fireside Hearth & Home works through a network of independent and company-owned, standalone or gallery design centers, as well as installation centers, catering both to consumers and builders.\n\nWWW.FIRESIDEUSA.COM", - "page_start": 29, - "page_end": 29, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "A total of 577,584 options were out of the money for 2013 (2012 17,240). They were excluded from the calculation since they were antidilutive.", - "page_start": 110, - "page_end": 110, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "| [91] Wilhelmina Nekoto, Vukosi Marivate, Tshinondiwa Matsila, Timi Fasubaa, Taiwo Fagbohungbe, Solomon Oluwole Akinola, Shamsuddeen Muhammad, |\n| Salomon Kabongo Kabenamualu, Salomey Osei, Freshia Sackey, Rubungo Andre Niyongabo, Ricky Macharm, Perez Ogayo, Orevaoghene Ahia, Musie Meressa Berhe, Mofetoluwa Adeyemi, Masabata Mokgesi-Selinga, Lawrence Okegbemi, Laura Martinus, Kolawole Tajudeen, Kevin Degila, Kelechi Ogueji, Kathleen |\n| Siminyu, Julia Kreutzer, Jason Webster, Jamiil Toure Ali, Jade Abbott, Iroro Orife, Ignatius Ezeani, Idris Abdulkadir Dangana, Herman Kamper, Hady Elsa- har, Goodness Duru, Ghollah Kioko, Murhabazi Espoir, Elan van Biljon, Daniel |\n| ipatory Research for Low-resourced Machine Translation: A Case Study in African Languages. In Findings of the Association for Computational Linguistics: EMNLP 2020 . Association for Computational Linguistics, Online, 2144-2160. https://doi.org/10.18653/v1/2020.findings-emnlp.195 [92] Maggie Nelson. 2015. The Argonauts . Graywolf Press, Minneapolis. [93] Timothy Niven and Hung-Yu Kao. 2019. Probing Neural Network Comprehen- sion of Natural Language Arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics . Association for Computational Linguistics, Florence, Italy, 4658-4664. https://doi.org/10.18653/v1/P19-1459 [94] Safiya Umoja Noble. 2018. Algorithms of Oppression: How Search Engines Rein- |\n| Huang. 2020. 
Pre-trained Models for Natural Language Processing: A Survey. OpenAI 1, 8 (2019), 9. |", - "page_start": 12, - "page_end": 12, - "source_file": "arxiv5_ccby4license.pdf" - }, - { - "text": "| for self-supervised representation learning. arXiv preprint arXiv:2202.03026 , 2022. | Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michal- ski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz |\n| Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. arXiv | Mueller-Freitag, et al. The\" something something\" video Proceedings of the IEEE international conference on |\n| preprint arXiv:2104.02057 , 2021. | database for learning and evaluating visual common sense. In computer vision , pages 5842-5850, 2017. |\n| Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, and Jenia Jitsev. Repro- | |\n| ducible scaling laws for contrastive language-image learn- ing. In Proceedings of the IEEE/CVF Conference on Com- , pages 2818-2829, | Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Do- ersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Moham- mad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. 
|", - "page_start": 10, - "page_end": 10, - "source_file": "arxiv3.pdf" - } - ] - }, - { - "references": { - "source_file": "sg248459.pdf", - "query": "When does IBM close its acquisition of Red Hat ?", - "target_page": 20, - "target_passage": " On July 9th, 2019, IBM closed its acquisition of Red Hat, a leader in enterprise Linux and open source technology", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "This publication describes how Red Hat and IBM can advance your cloud journey and speed growth and innovation for your business by using Red Hat OpenShift on IBM Power Systems.\n\nNote: Red Hat joins IBM as a distinct unit, preserving the independence and neutrality of Red Hat's open source development heritage and unique development culture. Red Hat's unwavering commitment to open source remains unchanged and it continues to offer customers choice and flexibility.", - "page_start": 20, - "page_end": 20, - "source_file": "sg248459.pdf" - }, - { - "text": "## 1.1 Introduction\n\nMost companies started or are contemplating their journey to cloud. Although in recent years the adoption of cloud became much more common place, the scope of what a cloud is or can be also increased. This broadening of possibilities unfortunately added confusion and can result in companies being unsure of how their existing application estate can change to integrate with the cloud model.\n\nAs such, doubts still exist around how to start and progress on this journey. 
It is also true that although people understand traditional enterprise applications and more modern cloud-hosted applications, the integration or co-existence of both can prove equally confusing and contradicting.\n\nRecent industry trends, combined with the new partnership between Red Hat and IBM, seek to bring some clarity to the landscape while providing new modernization opportunities for existing enterprise applications and familiar environments.\n\nThe main focus of this IBM Redbooks publication relates to IBM Cloud Paks and Red Hat OpenShift, which is hosted on IBM Power Systems. Although individually much can be written about either topic, the relationship this publication highlights is between Red Hat OpenShift and IBM Power Systems.\n\nWe show what Red Hat OpenShift brings to the IBM Power Systems platform specifically discuss how it can be deployed and added into existing familiar Power System environments, and the benefits that integration and co-existence can provide from an existing enterprise application viewpoint.\n\nThis publication is a first volume in a planned multi-volume publication over the next 12 - 18 months. Within this initial volume, we explain the fundamental perspective (which is accurate as of the time of this writing) while providing pointers to future direction that will be discussed in future volumes.\n\nNote: This initial publication relates to Red Hat OpenShift 3.11, because this release was the current OpenShift Container Platform (OCP) release for IBM Power Systems at the time of this writing. 
IBM and Red Hat intend to deliver Red Hat OpenShift 4 for IBM POWERfi to accelerate agility for enterprise clients through integrated tooling and a feature-rich Kubernetes container platform for cloud-native development on POWER9 and IBM POWER8fi processor-based servers.\n\n## 1.2 Red Hat and IBM\n\nOn July 9th, 2019, IBM closed its acquisition of Red Hat, a leader in enterprise Linux and open source technology.\n\nThis acquisition puts Red Hat and IBM in a unique position to unlock the true value of hybrid cloud for your business. By combining the power and flexibility of Red Hat's open hybrid cloud technologies with the scale and depth of IBM innovation and industry expertise, you now have the tools to accelerate your cloud journey.\n\nIBM and Red Hat worked together for more than 20 years in making open source a competitive advantage for businesses on x86, IBM Power Systems, and IBM z Systemsfi. Together, we are both on a mission to improve open source technology and help your companies capture the business value of the cloud.", - "page_start": 19, - "page_end": 19, - "source_file": "sg248459.pdf" - }, - { - "text": "\n\nIBM Redbooks\n\nRed Hat OpenShift and IBM Cloud Paks on IBM Power Systems: Volume 1\n\nMarch 2020", - "page_start": 2, - "page_end": 2, - "source_file": "sg248459.pdf" - }, - { - "text": "https://ibm.co/34Cko06\n\n - /SM590000\n - Red Hat OpenShift Container Platform 3.11 CLI Reference https://red.ht/2XZGBmz\n\n## Help from IBM\n\nIBM Support and downloads\n\nibm.com /support\n\nIBM Global Services\n\nibm.com /services", - "page_start": 264, - "page_end": 264, - "source_file": "sg248459.pdf" - }, - { - "text": "## Related publications\n\nThe publications that are listed in this section are considered particularly suitable for a more detailed discussion of the topics that are covered in this book.\n\n## IBM Redbooks\n\nThe IBM Redbooks publication IBM PowerVM Best Practices , SG24-8062 , provides more information about the topic in this document. 
Note that this publication might be available in softcopy only.\n\nYou can search for, view, download or order this documents and other Redbooks, Redpapers, Web Docs, draft, and other materials, at the following website:\n\nibm.com /redbooks\n\n## Online resources\n\nThe following websites are also relevant as further information sources:\n\n - /SM590000\n - Deploying Red Hat OpenShift Container Platform 3.11 on Red Hat OpenStack Platform 13\n\nhttps://red.ht/2pEFNpV", - "page_start": 264, - "page_end": 264, - "source_file": "sg248459.pdf" - }, - { - "text": "## Red Hat OpenShift and IBM Cloud Paks on IBM Power Systems: Volume 1", - "page_start": 266, - "page_end": 266, - "source_file": "sg248459.pdf" - }, - { - "text": "```\nsubscription-manager refresh All local data refreshed subscription-manager list --available --matches '*OpenShift*' Subscription Name: Red Hat OpenShift Container Platform for Power, LE Business Partner NFR, Self-Supported Provides: Red Hat Enterprise Linux for Power, little endian - Extended Update Support Red Hat Enterprise Linux Fast Datapath Beta for Power, little endian Red Hat Enterprise Linux for Power, little endian Red Hat Ansible Engine Red Hat OpenShift Enterprise Application Node Red Hat Enterprise Linux for Power 9 Red Hat Software Collections (for RHEL Server for IBM Power LE) Red Hat OpenShift Container Platform for Power Red Hat Software Collections Beta (for RHEL Server for IBM Power LE) RHEL for SAP HANA for Power, little endian - Extended Update Support Red Hat Beta Red Hat OpenShift Container Platform Client Tools for Power Red Hat Enterprise Linux Fast Datapath (for RHEL Server for IBM Power LE) RHEL for SAP for Power, little endian - Extended Update Support Red Hat Enterprise Linux for Power, little endian Beta Red Hat Container Native Virtualization Red Hat CodeReady Linux Builder for Power, little endian - Extended Update Support SKU: 111111111 Contract: 111111111 Pool ID: Provides Management: No Available: Unlimited 
Suggested: 1 Service Level: Standard Service Type: L1-L3 Subscription Type: Stackable Starts: 05/31/2019 Ends: 05/31/2020 System Type: Virtual\n```\n\n - c. Assign the OpenShift subscription:\n\nsubscription-manager attach --pool= Successfully attached a subscription for: Red Hat OpenShift Container Platform for Power, LE Business Partner NFR, Self-Supported\n\n - d. Enable only the repositories that are required by OpenShift Container Platform 3.11. For IBM POWER9, run the commands that are shown in Example 6-2. For IBM POWER8, run the commands that are shown in Example 6-3 on page 107.\n\nExample 6-2 OpenShift repositories for POWER9 servers\n\n```\n# subscription-manager repos --disable=\"*\" # subscription-manager repos \\ --enable=\"rhel-7-for-power-9-rpms\" \\ --enable=\"rhel-7-for-power-9-extras-rpms\" \\\n```", - "page_start": 121, - "page_end": 121, - "source_file": "sg248459.pdf" - }, - { - "text": "## Help from IBM\n\nIBM Support and downloads\n\nibm.com /support\n\nIBM Global Services\n\nibm.com /services", - "page_start": 434, - "page_end": 434, - "source_file": "sg246915.pdf" - }, - { - "text": "## ibm.com /redbooks\n\nThe following IBM Redbooks publication web pages that are related to this book are also useful resources:\n\n - /SM590000 IBM Storage Networking Redbooks:", - "page_start": 810, - "page_end": 810, - "source_file": "sg247938.pdf" - }, - { - "text": "Sudipto Pal is Solution Architect for IBM Cognosfi Analytics in GBS. He successfully delivered several critical deliverable with IBM clients from USA and Europe. He led Cognos administration competency and monitored several candidates. He co-authored IBM Redbooks publications about Cognos implementation with PowerVM platform. He has experience in IBM Power system for Virtualized environment setup and provisioning. He also has hands-on experience in data lake implementation by using DIP over a big data platform. He is based in IBM India, Kolkata. 
He holds Master of Computer Application and has experience in product development that uses C, C++ and Python,\n\nBogdan Savu is a Cloud Infrastructure Architect at IBM Cloud Managed Application Services and works for IBM Global Technologies Services in Romania. He has over 13 years of experience in designing, developing, and implementing Cloud Computing, Virtualization, Automatization, and Infrastructure solutions. Bogdan holds a Bachelor's degree in Computer Science from the Polytechnic University of Bucharest. He is an IBM Certified Advanced Technical Expert for Power Systems, TOGAF 9 Certified, VMware Certified Professional, and Red Hat Certified Specialist in Containerized Application Development. His areas of expertise include Cloud Computing, Virtualization, DevOps, and Scripting.\n\nRichard Wale is a Senior IT Specialist, supporting many IBM development teams at the IBM Hursley Lab, UK. He holds a B.Sc. (Hons) degree in Computer Science from Portsmouth University, England. He joined IBM in 1996 and has been supporting production AIX systems since 1998. His areas of expertise include IBM Power Systems, PowerVM, AIX, and IBM i. He has participated in co-writing many IBM Redbooks publications since 2002.\n\nThanks to the following people for their contributions to this project:\n\nWade Wallace\n\nIBM Redbooks, Austin Center\n\nManoj Kumar, Joe Cropper, Chuck Bryan, Keshav Ranganathan, Bruce Anthony, Bruce Semple, Reza Ghasemi, Mike Easlon\n\nIBM USA\n\nMiguel Angel de la Mora, Cesar Dominguez Moreno, Guillermo Hernandez Gonzalez,\n\nArianne Navarro\n\nIBM Guadalajara, Mexico\n\nYenugu Madhavi\n\nIBM India\n\nAlfonso Jara\n\nIBM Spain\n\n## Now you can become a published author, too!\n\nHere's an opportunity to spotlight your skills, grow your career, and become a published author-all at the same time! Join an IBM Redbooks residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. 
Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base.", - "page_start": 13, - "page_end": 13, - "source_file": "sg248459.pdf" - } - ] - }, - { - "references": { - "source_file": "sg248459.pdf", - "query": "What does an ITMS service provide ?", - "target_page": 30, - "target_passage": "An IT Service Management (ITSM) perspective can provide automation and a global management view, and incorporate the necessary software disciplines that are required to build a solid infrastructure for an enterprise, commercial or not. ", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "- - Alternative Services: Divide traffic across multiple servers\n - - Service: app2b\n - - Service Weights: 50%/50%", - "page_start": 253, - "page_end": 253, - "source_file": "sg248459.pdf" - }, - { - "text": "The architecture of traditional monolithic web applications tends to become more complex over time. Complexity increases ramp-up time for new developers, makes tracking down the source of bugs more challenging, and delays the delivery of new features.\n\n## Use services instead of custom code\n\nServerless applications usually comprise several AWS services, integrated with custom code run in Lambda functions. 
While Lambda can be integrated with most AWS services, the services most commonly used in serverless applications are:\n\n## Commonly used AWS services in serverless applications\n\n| Category | AWS service |\n|------------------------------|-------------------------------------|\n| Compute | Lambda |\n| Data storage | Amazon S3, DynamoDB, Amazon RDS |\n| API | API Gateway |\n| Application integration | EventBridge, Amazon SNS, Amazon SQS |\n| Orchestration | Step Functions |\n| Streaming data and analytics | Amazon Data Firehose |\n\nThere are many well-established, common patterns in distributed architectures that you can build yourself or implement using AWS services. For most customers, there is little commercial value in investing time to develop these patterns from scratch. When your application needs one of these patterns, use the corresponding AWS service:\n\n## Common patterns and corresponding AWS services\n\n| Pattern | AWS service |\n|-----------------------------|---------------|\n| Queue | Amazon SQS |\n| Event bus | EventBridge |\n| Publish/subscribe (fan-out) | Amazon SNS |", - "page_start": 22, - "page_end": 22, - "source_file": "serverless-core.pdf" - }, - { - "text": "- (4) In this regulation-\n - 'authorised person' means-\n - (a) a constable,\n - (b) the Civil Aviation Authority,\n - (c) the Secretary of State, or\n - (d) a person authorised by the Civil Aviation Authority or the Secretary of State under the Air Navigation Order 2016( a );\n - 'operator' has the meaning given in article 4 of the Air Navigation Order 2016;\n - 'pilot in command' and 'private aircraft' have the meanings given in the Air Navigation Order 2016 (see Schedule 1 to that Order);\n\n'relevant transport service', in relation to an operator, means a transport service provided by or on behalf of that operator;\n\n - 'transport service' means-\n - (a) a relevant service,\n - (b) a shuttle service,\n - (c) a service (other than a relevant service) which-\n - (i) is carrying 
passengers travelling to England from outside the common travel area (whether for payment or valuable consideration or otherwise), and\n - (ii) is provided by means of an aircraft (other than a private aircraft), or\n - (d) a flight which-\n - (i) is carrying passengers travelling to England from outside the common travel area (whether for payment or valuable consideration or otherwise), and\n - (ii) is provided by means of a private aircraft.\n\n## PART 5\n\n## Offences, proceedings and information\n\n## Offences and penalties", - "page_start": 22, - "page_end": 22, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## Focusing on core serverless services\n\nAWS has over 220 services.\n\nEach service is a tool in your serverless development toolbox. Commonly, you start out using some services more frequently than others. This topic provides an overview of the core services you need to build serverless solutions.\n\nYou can read high level explanations of the core services here, and an example of how they interact within the context of an example microservice, or you can choose to skip ahead to the hands on workshop that uses three common services to build a working microservice.\n\n## Common serverless services\n\nThe following diagram shows AWS services commonly used together to build serverless applications:", - "page_start": 33, - "page_end": 33, - "source_file": "serverless-core.pdf" - }, - { - "text": "\n\n## Tip\n\nAn IAM role is identical in function to an IAM user, with the important distinction that it is not uniquely associated with one entity, but assumable by many entities. Typically, IAM roles correspond to a job function.\n\nA loose analogy for IAM roles are that of professional uniforms: a surgeon's scrubs, a firefighter's hardhat, or a startup CTO's favorite hoodie. 
Many people can assume the role of a surgeon, firefighter, and startup CTO, which identifies them with a certain job function.\n\nOne of the most useful things about IAM roles is they can be associated not only with human entities, but also with AWS services. These types of roles are known as service roles . This means you can assign an IAM role directly to a service. With an IAM role assigned to the service instance, you can then associate specific IAM policies with the instance role, so that the service instance itself can access other AWS services. This is extremely useful for automation.\n\n## Authorization - PARC\n\nSo far we've been talking about principals. Principals represent the authentication component. For authorization, you will attach JSON documents called IAM policies to principals.\n\n## Principals\n\nAs mentioned, principals are the entities that are allowed or denied access.\n\n## Actions\n\nActions are the type of access that is allowed or denied. Actions are commonly AWS service API calls that represent create, read, describe, list, update, and delete semantics.\n\n## Resources\n\nResources are the AWS resources the action will act upon.\n\nAll AWS resources are identified by an Amazon Resource Name (ARN) . Because AWS services are deployed all over the world, ARNs function like an addressing system to precisely locate a specific component. ARNs have hierarchical structures:\n\narn:partition:service:region:account-id:resource-id", - "page_start": 43, - "page_end": 43, - "source_file": "serverless-core.pdf" - }, - { - "text": "Development Activities. We seek to identify opportunities to further our position as an integrated service provider in markets where we provide services for a portion of the waste stream. 
Where appropriate, we seek to obtain permits to build transfer stations and/or landÑlls that would provide", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## 13.10 Service Assistant Tool\n\nThe Service Assistant Tool (SAT) is a web-based GUI that is used to service individual node canisters, primarily when a node has a fault and is in a service state. A node is not an active part of a clustered system while it is in service state.\n\nTypically, the IBM Storwize V7000 is configured with the following IP addresses:", - "page_start": 755, - "page_end": 755, - "source_file": "sg247938.pdf" - }, - { - "text": "- /SM590000 http(s)://< service IP address of a node >/service", - "page_start": 756, - "page_end": 756, - "source_file": "sg247938.pdf" - }, - { - "text": "market knowledge, community relations and name recognition, and to instill their entrepreneurial drive at all levels of our operations. By furnishing the local management of such acquired companies with our Ñnancial and marketing resources and technical expertise, we believe that the acquired companies are better able to secure additional municipal franchises and other contracts.\n\nPrivatize Municipal Operations and Acquire Divested Operations. We also seek to acquire solid waste collection operations, transfer stations and landÑlls that municipalities and other governmental authorities are privatizing. Many municipalities are seeking to outsource or sell these types of solid waste operations, as they lack the capital, technical expertise and/or operational resources necessary to comply with increasingly stringent regulatory standards and/or to compete eÅectively with privatesector companies. 
In addition, we have acquired, and will continue to seek to acquire, operations and facilities that may be divested by other publicly-owned waste companies.\n\n## Operations\n\nOur operations primarily consist of the collection, transfer and disposal of non-hazardous solid waste.\n\nCollection Services. We provide solid waste collection services to commercial, industrial, municipal and residential customers in 22 states through 140 collection companies. In 2004, 74.3% of our revenue was derived from collection services consisting of approximately 32.5% from services provided to municipal and residential customers, 36.6% from services provided to commercial customers, and 30.9% from services provided to industrial and other customers.\n\nOur residential collection operations involve the curbside collection of refuse from small containers into collection vehicles for transport to transfer stations or directly to landÑlls. Residential solid waste collection services are typically performed under contracts with municipalities, which we generally secure by competitive bid and which give our company exclusive rights to service all or a portion of the homes in their respective jurisdictions. These contracts or franchises usually range in duration from one to Ñve years, although some of our exclusive franchises are for signiÑcantly longer periods. Residential solid waste collection services may also be performed on a subscription basis, in which individual households contract directly with our company. The fees received for subscription residential collection are based primarily on market factors, frequency and type of service, the distance to the disposal facility and cost of disposal. In general, subscription residential collection fees are paid quarterly in advance by the residential customers receiving the service.\n\nIn our commercial and industrial collection operations, we supply our customers with waste containers of varying sizes. 
We also rent compactors to large waste generators. Commercial collection services are generally performed under one- to three-year service agreements, and fees are determined by such considerations as:\n\n - , market factors,\n - , collection frequency,\n - , type of equipment furnished,\n - , the type and volume or weight of the waste collected,\n - , the distance to the disposal facility and\n - , the cost of disposal.\n\nWe rent waste containers to construction sites and also provide waste collection services to industrial and construction facilities on a contractual basis with terms generally ranging from a single pickup to one year or longer. We collect the containers or compacted waste and transport the waste either to a landÑll or a transfer station for disposal.", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## Getting started with serverless applications\n\nCore service starters will quickly explain the value and technical fundamentals of each service. Each starter will also mention advanced topics, so you can start with the essentials, but be aware of capabilities to dive into when you need them.\n\nStarters are short reads (less than 2,300 words; 10-15 min) that connect concepts and practical hands-on use.\n\n## Topics\n\n - · Get started with IAM\n - · Get started with Lambda\n - · Get started with API Gateway\n - · Get started with DynamoDB\n - · Learn using a workshop\n\n## Get started with IAM\n\nInteractions with AWS services and resources by developers and entities require:\n\n - · Authentication : proof that the entity requesting access is who they claim to be\n - · Authorization : actions that are allowed or denied\n\n## What is Identity and Access Management?\n\nAWS provides and uses a service called Identity and Access Management (IAM) for authentication and authorization. 
IAM is used to manage developer accounts and secure the interaction between services and resources.\n\n\n\n## Warning\n\nSecurity is an important, complex, and broad topic. Large organizations generally have specific operational procedures that developers need to follow. This guide will explain only essential concepts necessary to get started with AWS services. If in doubt, consult your IT department or the official security documentation.", - "page_start": 39, - "page_end": 39, - "source_file": "serverless-core.pdf" - } - ] - }, - { - "references": { - "source_file": "Publicdomain.pdf", - "query": "What are the two distinct public domain tools support by Creative Commons ?", - "target_page": 1, - "target_passage": "Creative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work - on conditions of your choice. CC licenses let you change your copyright terms from the default of 'all rights reserved' to 'some rights reserved.'\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. 
When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\n\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n\n\nPublic domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark . Creative Commons copyright licenses help authors manage their copyright on terms they choose. Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n## Where public domain tools fit in the copyright spectrum\n\n\n\n## The CC0 Public Domain Dedication\n\nUse this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.\n\n\n\n\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. 
When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.\n\n## What is the di/fference between CC0 and the Public Domain Mark?\n\n\n\nCC0 ('CC Zero') is intended for use only by authors or holders of copyright and related rights (including database rights), in connection with works that are still subject to those rights in one or more countries.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "\n\nThis is a frame from 'Twenty Years of Creative Commons (in Sixty Seconds)' by Ryan Junell and Glenn Otis Brown for Creative Commons licensed under CC BY 4.0. It includes adaptations of multiple open and public domain works. 
View full licensing and attribution information about all works included in the video on Flickr.\n\n## Creative Commons\n\nPO Box 1866 Mountain View CA 94042 USA +1 415 429 6753 info@creativecommons.org\n\n", - "page_start": 11, - "page_end": 11, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate\n\ncredit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.\n\n© The Author(s) 2025", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed3.pdf" - }, - { - "text": "Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional a/ff.shortiliations.\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. 
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/.\n\n© The Author(s) 2024", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed4.pdf" - }, - { - "text": "\n\n\"great colors of nature\" by marcostetter is published under Public Domain Mark 1.0.\n\n## About Us\n\nCreative Commons (CC) is the global nonprofit organization behind the CC Licenses and public domain tools, which power open sharing on popular platforms like Wikipedia, Flickr, YouTube, Medium, Vimeo, and Khan Academy. Since 2002, the CC Licenses have served as an alternative to traditional copyright, providing a simple, standardized, and legal way for individuals and institutions to freely share images, music, research, educational resources, and cultural artifacts.\n\n## Chief Executive Officer\n\nAnna Tumadóttir\n\nGeneral Counsel Kat Walsh\n\n## Board of Directors\n\nMarta Belcher Glenn Otis Brown Delia Browne James Grimmelmann\n\nLawrence Lessig * Emeritus\n\nAngela Oduor Lungati Bilal Randeree Alek Tarkowski Jeni Tennison Luis Villa\n\nExcept where otherwise noted, 'Annual Report 2023' by Creative Commons is licensed under CC BY 4.0.\n\n", - "page_start": 1, - "page_end": 1, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "## Author contributions\n\nK.L. designed the framework of the article and analyzed the yield results and the maize price under future scenarios. J.P. simulated the climate data from 5 climate models recommended by ISI-MIP under 4 RCP scenarios. W.X. 
simulated the maize yields in whole world under di/fferent scenarios. W.X. simulated the market price of maize at national and global levels. T.A. helped the revision of language.\n\n## Funding\n\nFunding was provided by the National Key Research and Development program of China (Grant Nos. 2019YFA0607403 and 2017YFD0300301) and National Natural Science Foundation of China (Grant Nos. 41961124007 and 41871026).\n\n## Competing interests\n\n/T\\_he authors declare no competing interests.\n\n## Additional information\n\nCorrespondence and requests for materials should be addressed to K.L.\n\nReprints and permissions information is available at www.nature.com/reprints.\n\nPublisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional a/ffiliations.\n\n\n\nOpen Access /T\\_his article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. /T\\_he images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.\n\n© /T\\_he Author(s) 2022\n\nVol:.(1234567890)", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed9.pdf" - }, - { - "text": "with. The vast majority of in-copyright books are out-of-print or out-of-commerce, and most are not actively managed by their rightsholders. 
There is no official registry of copyrighted works and their owners, and existing datasets can be incomplete or erroneous. 16\n\nAs a result, there may be no way to license the vast majority of in-copyright books, especially those that have or have had limited commercial value. Put differently, the barrier to using 17 most books is not simply to pay publishers; even if one had significant financial resources, licensing would not enable access to most works.\n\n## Permissively licensed works\n\nThere are books that have been permissively licensed in an easily identifiable way, such as works placed under Creative Commons (CC) licenses. Such works explicitly allow particular uses of works subject to various responsibilities (e.g., requiring attribution by the user in their follow-on use).\n\nWhile such works could be candidates for inclusion in a books data commons, their inclusion depends on whether the license's terms can be complied with in the context of AI training. For instance, in the context of CC licensed works, there are requirements for proper attribution across all licenses (the CC tools Public Domain Dedication (CC0) and Public Domain Mark (PDM) are not licenses and do not require attribution). 18", - "page_start": 9, - "page_end": 9, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "\n\nFigure 4.15 Domain and Range inferred by the reasoner\n\n\n\nIt is possible to specify more than one class as the domain or range of a property. One of the most common mistakes of new users is to do this and expect that the resulting domain/range is the union of the two classes. However, note that next to the Domain and Range in the Description view it says (intersection). This is because the semantics of having 2 or more classes as the domain or range is the intersection of those classes not the union. 
E.g., if one defined the domain for a property to be Pizza and then added another domain IceCream that would mean that for something to be in the domain of that property it would have to be an instance of both Pizza and IceCream not (as people often expect) the union of those two sets which would be either the class Pizza or the class IceCream . Also, note that the domain and range are for inferencing, they are not data integrity constraints. This distinction will be explained in more detail below in the section on SHACL.", - "page_start": 28, - "page_end": 28, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "When CC0 is applied to a work, copyright and related rights are relinquished worldwide, making the work free from those restrictions to the greatest extent possible.\n\n\n\nThe Public Domain Mark (PDM) is used to label works that are already free of known copyright restrictions. Unlike CC0, PDM doesn't change the copyright status of a work.\n\nPDM can be used by anyone, and is intended for use with works that are already free of known copyright restrictions throughout the world.\n\n## Public Domain Mark\n\nUse this tool if you have identified a work that is free of known copyright restrictions.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "## 5. Examining approaches to building a books data commons\n\nThere are many possible permutations for building a books data commons. To structure our exploration, we focused on two particular tracks, discussed below. We chose these tracks mindful of the above legal issues, and because there are already existence proofs that help to illuminate tradeoffs, challenges and potential paths forward for each.\n\n## 5a. Public domain and permissively licensed books\n\n## Existing Project Example : The Pile v2 27\n\nIn 2020, the nonprofit research group EleutherAI constructed and released The Pile - a large, diverse, open dataset for AI training. 
EleutherAI developed it not only to support their own training of LLMs, but also to lower the barriers for others. 28\n\nAlong with data drawn from the web at large, The Pile included books from three datasets. The first dataset was the Books3 corpus referenced at the outset of this paper. The second and third books datasets were smaller: BookCorpus2, which is a collection of 17,868 books by otherwise unpublished authors; and a 28,752 books in the public domain and published prior to 1919, drawn from a volunteer effort to digitize public domain works called Project Gutenberg.\n\nAs the awareness about The Pile dataset grew, certain rightsholders began sending copyright notices to have the dataset taken down from various websites.\n\nDespite the takedown requests, the importance of books to EleutherAI and the broader community's AI research remained. In hoping to forge a path forward EleutherAI announced in 2024 that they would create a new version of the dataset, which they will call The Pile v2. 29 Among other things, v2 would 'have many more books than the original Pile had, for example, and more diverse representation of non-academic non-fiction domains.' At the same time, it would only seek to include public domain books and permissively licensed content. 
As before, this corpus focuses on English language books.", - "page_start": 12, - "page_end": 12, - "source_file": "creative_common_ai.pdf" - } - ] - }, - { - "references": { - "source_file": "Publicdomain.pdf", - "query": "What is Creative Commons ?", - "target_page": 1, - "target_passage": " Creative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "\n\nThis is a frame from 'Twenty Years of Creative Commons (in Sixty Seconds)' by Ryan Junell and Glenn Otis Brown for Creative Commons licensed under CC BY 4.0. It includes adaptations of multiple open and public domain works. View full licensing and attribution information about all works included in the video on Flickr.\n\n## Creative Commons\n\nPO Box 1866 Mountain View CA 94042 USA +1 415 429 6753 info@creativecommons.org\n\n", - "page_start": 11, - "page_end": 11, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate\n\ncredit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. 
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.\n\n© The Author(s) 2025", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed3.pdf" - }, - { - "text": "Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional a/ff.shortiliations.\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. 
org/licenses/by/4.0/.\n\n© The Author(s) 2024", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed4.pdf" - }, - { - "text": "## Guide to using public domain tools\n\n## What Is Creative Commons?\n\nCreative Commons is a global nonprofit organization dedicated to supporting an open and accessible Internet that is enriched with free knowledge and creative resources for people around the world to use, share, and cultivate.\n\nOur easy-to-use licenses provide a simple, standardized way to give the public permission to share and use your creative work - on conditions of your choice. CC licenses let you change your copyright terms from the default of 'all rights reserved' to 'some rights reserved.'\n\nMillions of people use CC licenses on some of the world's most popular platforms for user-generated content. When you use a CC license to share your photos, videos, or blog, your creation joins a globally accessible pool of resources that includes the work of artists, educators, scientists, and governments.\n\n\n\nCreative Commons has waived all copyright and related or neighboring rights to this guide using the CC0 Public Domain Dedication.\n\n\n\nPublic domain works are valuable because anyone can freely build upon, enhance, and reuse them for any purposes without restriction under copyright or database law.\n\nThat's why it's important for creators to have a clear and legally robust way to place their works in the public domain as completely as possible, and it's also important for publishers and archives to have a standardized way to identify works that are already in the public domain.\n\nCreative Commons supports two distinct public domain tools, the CC0 Public Domain Dedication and the Public Domain Mark . Creative Commons copyright licenses help authors manage their copyright on terms they choose. 
Conversely, CC0 enables authors and copyright owners who want to dedicate their works to the worldwide public domain to do so, and PDM facilitates the labeling and discovery of works that are already free of known copyright restrictions.\n\n## Where public domain tools fit in the copyright spectrum\n\n\n\n## The CC0 Public Domain Dedication\n\nUse this universal tool if you are a holder of copyright or database rights, and wish to waive all your rights to the work worldwide.\n\n\n\n\n\nBy using CC0, you waive all copyright and related rights together with all associated claims and causes of action with respect to this work to the extent possible under the law.\n\nApplying CC0 to your work is easy. Simply visit the CC0 chooser (http://creativecommons.org/choose/zero) which will lead you through the process. When completed, you will be provided with HTML code that you can copy and paste into your website.\n\nYou let others copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.\n\nWorks marked with the Public Domain Mark have been identified as being free of known restrictions under copyright law, including all related and neighboring rights. Anyone can copy, modify, distribute, and perform such works, even for commercial purposes, all without asking permission.\n\nApplying the PDM to a work is easy. Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. When completed, you will be provided with the HTML code that you can copy and paste into your website.\n\nCreative Commons does not recommend this tool for works that are restricted by copyright laws in one or more jurisdictions. 
Consult with your legal advisor if you are unsure whether you should use the PDM for a certain work.\n\n## What is the di/fference between CC0 and the Public Domain Mark?\n\n\n\nCC0 ('CC Zero') is intended for use only by authors or holders of copyright and related rights (including database rights), in connection with works that are still subject to those rights in one or more countries.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "\n\n\"great colors of nature\" by marcostetter is published under Public Domain Mark 1.0.\n\n## About Us\n\nCreative Commons (CC) is the global nonprofit organization behind the CC Licenses and public domain tools, which power open sharing on popular platforms like Wikipedia, Flickr, YouTube, Medium, Vimeo, and Khan Academy. Since 2002, the CC Licenses have served as an alternative to traditional copyright, providing a simple, standardized, and legal way for individuals and institutions to freely share images, music, research, educational resources, and cultural artifacts.\n\n## Chief Executive Officer\n\nAnna Tumadóttir\n\nGeneral Counsel Kat Walsh\n\n## Board of Directors\n\nMarta Belcher Glenn Otis Brown Delia Browne James Grimmelmann\n\nLawrence Lessig * Emeritus\n\nAngela Oduor Lungati Bilal Randeree Alek Tarkowski Jeni Tennison Luis Villa\n\nExcept where otherwise noted, 'Annual Report 2023' by Creative Commons is licensed under CC BY 4.0.\n\n", - "page_start": 1, - "page_end": 1, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "## Author contributions\n\nK.L. designed the framework of the article and analyzed the yield results and the maize price under future scenarios. J.P. simulated the climate data from 5 climate models recommended by ISI-MIP under 4 RCP scenarios. W.X. simulated the maize yields in whole world under di/fferent scenarios. W.X. simulated the market price of maize at national and global levels. T.A. 
helped the revision of language.\n\n## Funding\n\nFunding was provided by the National Key Research and Development program of China (Grant Nos. 2019YFA0607403 and 2017YFD0300301) and National Natural Science Foundation of China (Grant Nos. 41961124007 and 41871026).\n\n## Competing interests\n\n/T\\_he authors declare no competing interests.\n\n## Additional information\n\nCorrespondence and requests for materials should be addressed to K.L.\n\nReprints and permissions information is available at www.nature.com/reprints.\n\nPublisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional a/ffiliations.\n\n\n\nOpen Access /T\\_his article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. /T\\_he images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.\n\n© /T\\_he Author(s) 2022\n\nVol:.(1234567890)", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed9.pdf" - }, - { - "text": "## A Note from Leadership\n\nCC staff photos are licensed under CC BY 4.0.\n\n\n\n2023 was a busy year at Creative Commons. Our Open Culture program and Open Climate Campaign entered their third and second years, respectively. We hosted our first in-person CC Global Summit since 2019 in Mexico City. 
We held critical consultations and open panels on AI, copyright, and the CC Licenses, cultural heritage, education, and science; and we launched our Open Infrastructure Circle in an effort to ensure the CC Licenses are funded well into the future.\n\nWe also marked transitions in leadership. At the end of December, Catherine Stihler concluded her time as Chief Executive Officer (CEO) at Creative Commons, and I transitioned in as Interim. In March 2024, I was appointed CC's permanent CEO. I look forward to working closely with our Board of Directors, staff, and larger community on the critical work that awaits us in 2024 .\n\n## Anna Tumadóttir, CEO\n\n\n\n", - "page_start": 2, - "page_end": 2, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - }, - { - "text": "content repositories, like libraries, with that of AI developers. A 'books data commons' needs to be both responsibly managed, and useful for developers of AI models.\n\nWe use 'commons' here in the sense of a resource that is broadly shared and accessible, and thus obviates the need for each individual actor to acquire, digitize, and format their own corpus of books for AI training. This resource could be collectively and intentionally managed, though we do not mean to select a particular form of governance in this paper. 4\n\nThis paper is descriptive, rather than prescriptive, mapping possible paths to building a books data commons as defined above and key questions relevant to developers, repositories, and other stakeholders, building on our workshop discussions. We first explain why books matter for AI training and how broader access could be beneficial. We then summarize two tracks that might be considered for developing such a resource, highlighting existing projects that help foreground both the potential and challenges. Finally, we present several key design choices, and next steps that could advance further development of this approach. 
5", - "page_start": 2, - "page_end": 2, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## Acknowledgements\n\nAuthored by Alek Tarkowski and Paul Keller (Open Future), Derek Slater and Betsy Masiello (Proteus Strategies) in collaboration with Creative Commons.\n\nWe are grateful to participants in the workshops, including Luis Villa, Tidelift and openml.fyi; Jonathan Band; Peter Brantley, UC Davis; Aaron Gokaslan, Cornell; Lila Bailey, Internet Archive; Jennifer Vinopal, HathiTrust Digital Library; Jennie Rose Halperin, Library Futures/ NYU Engelberg Center, Nicholas P. Garcia, Public Knowledge; Sayeed Choudhury; Erik Stallman, UC Berkeley School of Law. The paper represents the views of the authors, however, and should not be attributed to the workshop as a whole. All mistakes or errors are the authors'.\n\n\n\nThis report is published under the terms of the Creative Commons Attribution License.", - "page_start": 21, - "page_end": 21, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "## 7. Conclusion\n\nThis paper is a snapshot of an idea that is as underexplored as it is rooted in decades of existing work. The concept of mass digitization of books, including to support text and data mining, of which AI is a subset, is not new. But AI training is newly of the zeitgeist, and its transformative use makes questions about how we digitize, preserve, and make accessible knowledge and cultural heritage salient in a distinct way.\n\nAs such, efforts to build a books data commons need not start from scratch; there is much to glean from studying and engaging existing and previous efforts. Those learnings might inform substantive decisions about how to build a books data commons for AI training. 
For instance, looking at the design decisions of HathiTrust may inform how the technical infrastructure and data management practices for AI training might be designed, as well as how to address challenges to building a comprehensive, diverse, and useful corpus. In addition, learnings might inform the process by which we get to a books data commons for example, illustrating ways to attend to the interests of those likely to be impacted by the dataset's development. 41\n\nWhile this paper does not prescribe a particular path forward, we do think finding a path (or paths) to extend access to books for AI training is critical. In the status quo, large swaths of knowledge contained in books are effectively locked up and inaccessible to most everyone. Google is an exception - it can reap the benefits of their 40 million books dataset for research, development, and deployment of AI models. Large, well-resourced entities could theoretically try to replicate Google's digitization efforts, although it would be incredibly expensive, impractical, and largely duplicative for each entity to individually pursue their own efforts. Even then, it isn't clear how everyone else - independent researchers, entrepreneurs, and smaller entities - will have access. The controversy around the Books3 dataset discussed at the outset should not, then, be an argument in favor of preserving the status quo. Instead, it should highlight the urgency of building a books data commons to support an AI ecosystem that provides broad benefits beyond the privileged few.", - "page_start": 20, - "page_end": 20, - "source_file": "creative_common_ai.pdf" - } - ] - }, - { - "references": { - "source_file": "Publicdomain.pdf", - "query": "How to apply the PDM to my work ?", - "target_page": 1, - "target_passage": "Simply visit the PDM chooser (http://creativecommons.org/choose/mark) which will lead you through the proces. 
When completed, you will be provided with the HTML code that you can copy and paste into your website.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "and paragraphs 10 to 13 apply to that person as they apply to P for the period those paragraphs apply to P.\n\n## Modification of application of this Schedule where P is a relevant person", - "page_start": 78, - "page_end": 78, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## 10.6 Submit inventory (PM)\n\nThis section describes on how the PM submits the inventory by selecting tables for the general submission after being approved by the NFP (See section 10.5).\n\n## 10.6.1 Submit select tables for preparing the general submission\n\n - 1. Log in as PM.\n - 2. Click on 'View Inventories Progress' under sub menu 'Submission Management'.\n - 3. The 'View Inventories Progress' screen appears.\n - 4. Select the appropriate inventory by clicking the box under column 'Working inventory' (figure 68, a).\n - *** Note: The selected inventory year to be submitted should be in status 'approved' (figure 68, b).\n - 5. Click on 'Work on Inventories' under Submission Management (figure 68, c).\n - This opens the Submit Inventory initial screen (figure 69).\n - 6. Click the inventory year to be submitted (figure 69, a).\n - 7. Press the 'Generate Official Submission' button (figure 69, c).\n\nFigure 69. Submit select tables for the preparation for the general submission\n\n\n\nFigure 68. View Inventories Progress screen - select inventory for the preparation for the general submission\n\n", - "page_start": 41, - "page_end": 41, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "## 10 Submission management\n\n## 10.1 Workflow\n\nCreating and preparing an inventory, generating tables for checking by the NFP and approving and/or rejecting submission, follows a number of steps known collectively as a workflow. 
This chapter describes the workflow relating to the submission of the GHG inventory/(ies), which users should follow to create, prepare, and send GHG inventories for internal checking, and approval/rejection of the submission by the NFP, within the NAIIS web application (figure 52).\n\nFigure 52: Non-Annex I Inventory Software workflow\n\n\n\n## 10.2 Start of inventory/submission (NFP or PM)\n\nThis procedure allows the NFP or PM to start a new (created) inventory. The existing data for the inventory year identified will be made available in the new inventory/submission.\n\nThese are the steps to start a new inventory:\n\n - 1. Click on 'View Inventories Progress' under sub menu 'Submission Management' (figure 53).\n\nFigure 53. View Inventories Progress sub menu\n\n\n\n - 2. The 'View Inventories Progress' screen appears (figure 54).\n - 3. Select the appropriate inventory by clicking the box under column 'Working Inventory' (figure 54, a).\n\n*** Note: The selected appropriate inventory should be in status 'created' (figure 54, b)", - "page_start": 34, - "page_end": 34, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "When CC0 is applied to a work, copyright and related rights are relinquished worldwide, making the work free from those restrictions to the greatest extent possible.\n\n\n\nThe Public Domain Mark (PDM) is used to label works that are already free of known copyright restrictions. 
Unlike CC0, PDM doesn't change the copyright status of a work.\n\nPDM can be used by anyone, and is intended for use with works that are already free of known copyright restrictions throughout the world.\n\n## Public Domain Mark\n\nUse this tool if you have identified a work that is free of known copyright restrictions.", - "page_start": 0, - "page_end": 0, - "source_file": "Publicdomain.pdf" - }, - { - "text": "## 10.3 Send for checking (PM)\n\nOnce the SE's/or PM's have prepared the national GHG inventory, by entering data into the sectoral grids and the PM of the Party has checked the complete GHG inventory for consistency and correctness, the following steps allows the PM to send the inventory for checking:\n\n - 1. Log in as PM.\n - 2. Click on 'View Inventories Progress' under sub menu 'Submission Management'.\n - 3. The 'View Inventories Progress' screen appears.\n - 4. Select the appropriate inventory by clicking the Inventory name under column 'Name' (figure 58, a).\n - 5. Press the 'Send for Checking by NFP' button to send it to the NFP for his review and approval (figure 58, b). *** Note: A notification email will be sent to the NFP email address, and the status changed to 'check' (figure 59).\n\nFigure 58. Work on Inventories screen - Status = Started\n\n", - "page_start": 36, - "page_end": 36, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "## 10.4 Send for approval/rejection of an Inventory (PM)\n\nThis section describes on how the PM approves or rejects an inventory after being checked by the PM.\n\n## 10.4.1 Send for approval of an Inventory\n\n - 1. Log in as PM.\n - 2. Click on 'View Inventories Progress' under sub menu 'Submission Management'.\n - 3. The 'View Inventories Progress' screen appears.\n - 4. Select the appropriate inventory by clicking the Inventory name under column 'Name' (figure 60, a).\n - 5. 
Press the 'Send for Approval' button to send it to NFP for his/her review and approval of the inventory (figure 60, b).\n\n*** Note: A notification email will be sent to the PM, once the 'Send for Approval' has been pressed. And the status changed to 'Awaiting\\_approval' (figure 61).\n\nFigure 60. Work on Inventories screen - Send for Approval - Status = checkFigure 61. Work on Inventories screen - Status = awaiting\\_approval\n\n\n\n\n\n## 10.4.2 Rejection of an Inventory\n\n - 1. Log in as PM.\n - 2. Click on 'View Inventories Progress' under sub menu 'Submission Management'.\n - 3. The 'View Inventories Progress' screen appears.\n - 4. Select the appropriate inventory by clicking the Inventory name under column 'Name' (figure 62, a).\n - 5. Press the 'Reject' button (figure 62, b).\n\n*** Note: A notification email will be sent to the PM, once the 'Reject' button has been pressed. And the status changed to 'Awaiting\\_rejection\\_check' (figure 63).", - "page_start": 37, - "page_end": 37, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "## THE PURPOSE OF A RESIGNATION LETTER:\n\nThe purpose of a resignation letter is to give your employer official no -tice that you will be leaving the organisation. However, it is usually appropriate to inform your manager of your intention to resign in person, and then to follow up your conversation with the formal resignation letter.\n\nWhat to include:\n\nYour resignation letter should be short and to the point. 
Keep it positive and professional - this is not the place to voice your dissatisfaction with your job.\n\nIn your letter, you should make sure that you include the following:\n\n## 1.\n\n## A clear statement of your intention to resign.\n\nExample:\n\n'Please accept this letter as formal notice of my resignation from my post as Assistant IT Manager at XYZ.'\n\n## 2.\n\nReference to your notice period (where applicable), as well as your last working day with the organisation.\n\nExample:\n\n'My last working day will be in two weeks' time, on 31 August 2015.'\n\n## 3.\n\n## Your reason for leaving.\n\nYou don't need to elaborate on this if you don't want to. Remember to keep it positive, and not to make any rude, offensive or insulting remarks about the organisation or your co- workers, no matter how tempting it might be.", - "page_start": 48, - "page_end": 48, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## CHAPTER 10:\n\n## LANGUAGE SKILLS AT WORK HOW TO WRITE A COVER LETTER\n\n\n\nIf you've ever applied for a job, you'll know that writing the cover letter is the most difficult part of almost any job application. Your cover letter creates the first impression, and often determines whether an employer will even look at your CV.\n\nYou need to use this opportunity to introduce yourself and your skills, and to set yourself apart from all the other candidates. You can also use this opportunity to explain any gaps in your CV, and to motivate why you are the right person for the job.", - "page_start": 44, - "page_end": 44, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## 3.2.2.2 Add a new GHG inventory year or edit general properties/sectors (only NFP and PM's)\n\n -  Log in as NFP or PM.\n -  Click on 'Work on Inventories' under Submission Management (figure 10).\n\nFigure 11. 
Initial screen of 'Work on Inventories'\n\n\n\nOnce 'Work on Inventories' has been clicked, the initial screen will be displayed, which shows the following boxes (figure 11):\n\n - a. Existing Inventory (with all options)\n - b. General properties - include the name, submission year, creator, creation date, status, updater and submission date\n - c. Sectors\n - d. Inventory years\n\n\n\nFollow the steps to add/remove an inventory year:\n\n -  Click on the inventory year (figure 12a)\n -  Select the inventory year under General properties (figure 12b)\n -  Select or deselect the appropriate Sectors (figure 12c)\n -  To add or remove an inventory year, select or deselect the relevant year under Inventory Years box (figure 12d)", - "page_start": 9, - "page_end": 9, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "## Figure 12. Screen of 'Work on Inventories'\n\nFigure 13. View Inventories Progress\n\n\n\n\n\n## 3.2.2.3 View Inventory Progress\n\n -  The NFP or PM should log into the system.\n -  Click on 'View Inventories Progress' under Submission Management (figure 13)\n\n\n\n\n\nClick on 'View Inventories Progress' button will display the initial screen with the following columns (figure 14a, 14b and 14c):\n\n -  Name - automatically given by the system, once created\n -  Working Inventory - active box shows the current working inventory\n -  Submission year - year when the submission process was initiated\n -  Creator - user who created the inventory\n -  Creation date - date when the inventory was created\n -  Status - created, started, check, submitted, approved, awaiting approval, awaiting rejection check\n -  Updater - user name who updated the inventory\n -  Submission date - date of submission\n -  Sectors - Energy, Industrial processes, Solvent and other product use, Agriculture, LUCF, LULUCF, Waste, Other\n -  Inventory year", - "page_start": 10, - "page_end": 10, - "source_file": "maiis-user-manual.pdf" - } - ] - }, - { - 
"references": { - "source_file": "wikipedia4.pdf", - "query": "Which rivers flow through Lyon?", - "target_page": 1, - "target_passage": "It is located at the confluence of the rivers Rhône and Saône, ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "\n\n\n\nLyon [c] (Franco-Provençal: Liyon ) is the second-largest city in France by urban area and the third largest by city limits. [14] It is located at the confluence of the rivers Rhône and Saône, to the northwest of the French Alps, 391 km (243 mi) southeast of Paris, 278 km (173 mi) north of Marseille, 113 km (70 mi) southwest of Geneva, Switzerland, 58 km (36 mi) northeast of Saint-Étienne.\n\nThe City of Lyon had a population of 522,250 at the Jan. 2021 census within its small municipal territory of 48 km 2 (19 sq mi), [15] but together with its suburbs and exurbs the Lyon metropolitan area had a population of 2,308,818 that same year, [7] the second most populated in France. Lyon and 58 suburban municipalities have formed since 2015 the Metropolis of Lyon, a directly elected metropolitan authority now in charge of most urban issues, with a population of 1,424,069 in 2021. [16] Lyon is the prefecture of the Auvergne-Rhône-Alpes region and seat of the Departmental Council of Rhône (whose jurisdiction, however, no longer extends over the Metropolis of Lyon since 2015).\n\nThe capital of the Gauls during the Roman Empire, Lyon is the seat of an archbishopric whose holder bears the title of Primate of the Gauls. Lyon became a major economic hub during the Renaissance. The city is recognised for its cuisine and gastronomy, as well as historical and architectural landmarks; as such, the districts of Old Lyon, the Fourvière hill, the Presqu'île and the slopes of the Croix-Rousse are inscribed on the UNESCO World Heritage List. Lyon was historically an important area for the production and weaving of silk. 
Lyon played a significant role in the history of cinema since Auguste and Louis Lumière invented the cinematograph there. The city is also known for its light festival, the Fête des lumières, which begins every 8 December and lasts for four days, earning Lyon the title of \"Capital of Lights\".\n\nEconomically, Lyon is a major centre for banking, chemical, pharmaceutical and biotech industries. The city contains a significant software industry with a particular focus on video games; in recent years it has fostered a growing local start-up sector. [17] The home of renowned universities and higher education schools, Lyon is the second-largest student city in France, with a university population of nearly 200,000 students within the Metropolis of Lyon. [18] Lyon hosts the international headquarters of Interpol, the International Agency for Research on Cancer, as well as Euronews. According to the Globalization and World Rankings Research Institute, Lyon is considered a Beta city, as of 2018. [19] It ranked second in France and 40th globally in Mercer's 2019 liveability rankings. 
[20]\n\n## History\n\n## Lyon\n\nLiyon (Arpitan)\n\n## Prefecture and commune\n\nSkyline of Lyon in La Part-Dieu\n\n\n\n\n\nBasilica of NotreDame de Fourvière\n\n\n\nPlace des Terreaux with the Fontaine BartholdiParc de la Tête d'or\n\n\n\nConfluence District\n\n\n\nVieux Lyon\n\n\n\nPont Lafayette\n\n\n\nCoat of arms\n\n\n\nMotto(s): Avant, avant, Lion le melhor (old Franco-Provençal for \"Forward, forward, Lyon the best\") [a] Virtute duce, comite fortuna (\"With virtue as guide and fortune as companion\") [b]\n\nLocation of Lyon\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- Bellecour, Écoles D'Arts.\n\n## Primary and secondary schools\n\nThere are some international private schools in the Lyon area, including:\n\n - Cité Scolaire Internationale de Lyon or the Lycée de Gerland;\n - Includes the Section Japonaises ( リヨン·ジェルラン補習授業校 Riyon Jeruran Hoshū Jugyō Kō \"Lyon Gerland Japanese Supplementary School\"), which the Japanese Ministry of Education (MEXT) counts as a part-time Japanese supplementary school [73]\n - Ombrosa;\n - International School of Lyon in nearby Sainte-Foy-lès-Lyon;\n - Montessori School of Lyon.\n\n## Supplementary education\n\nOther Japanese supplementary schools:\n\n - The Association Pour le Développement de la Langue et de la Culture Japonaises (ADLCJ; リヨン補習授業校 Riyon Hoshū Jugyō Kō ) is held in the Maison Berty Albrecht in Villeurbanne, near Lyon. [73] It was formed in 1987. [74] It serves Japanese expatriate children who wish to continue their Japanese education whilst abroad.\n\n## Transport\n\nLyon-Saint-Exupéry Airport, located east of Lyon, serves as a base for domestic and international flights. It is a key transport facility for the entire Rhône-Alpes region, with coach links to other cities in the area. The in-house train station Gare de Lyon Saint-Exupéry connects the airport to the nationwide TGV network. 
The Rhônexpress tram monopoly links the airport with the business quarter of La Part Dieu in less than 30 minutes, and offers connections with Underground A & B, Tramway T1, T3 & T4, and bus lines. Lyon public transport Sytral offers a bus service, Route 47, that links the airport to Meyzieu [75] where passengers can change onto Tram T3. The regular price of public transport is €1.90, as opposed to €15 one way for the Rhonexpress. In the suburb of Bron, the smaller Lyon-Bron Airport provides an alternative for domestic aviation.\n\nLyon has two major railway stations: Lyon-Part-Dieu, which was built to accommodate the TGV, and Lyon Perrache, an older station that now provides mostly regional service. Smaller railway stations include Gorge-de-Loup, Vaise, Saint-Paul and Jean Macé. Lyon was the first city to be connected to Paris by the TGV in 1981. [76] Since that time the TGV train network has expanded and links Lyon directly to Perpignan, Toulouse, Nice, Marseille, Strasbourg, Nantes and Lille. International trains operate directly to Madrid, Barcelona, Milan, Turin, Geneva, Frankfurt, Luxembourg, Brussels and London.\n\nThe city is at the heart of a dense road network and is located at the meeting point of several highways: A6 to Paris, A7 Marseille, A42 to Geneva, and A43 to Grenoble. The city is now bypassed by the A46. A double motorway tunnel passes under Fourvière, connecting the A6 and the A7 autoroutes, both forming the \"Autoroute du Soleil\".\n\nLyon 3: Berges du Rhône campus\n\n\n\nLyon 2: Berges du Rhône campus\n\n\n\nIPSA Lyon Campus\n\n\n\nPlatform I, Lyon-Part-Dieu train station\n\n\n\nT1 tramway on the Raymond Barre bridge\n\n", - "page_start": 18, - "page_end": 18, - "source_file": "wikipedia4.pdf" - }, - { - "text": "The convention was not the only target within Lyon during the French Revolution. 
After the Convention faded into history, the French Directory appeared and days after the 4 September 1797 Coup of 18 Fructidor, a Directory's commissioner was assassinated in Lyon.\n\nThe city became an important industrial town in the 19th century. In 1831 and 1834, the canuts (silk workers) of Lyon staged two major uprisings for better working conditions and pay. In 1862, the first of Lyon's extensive network of funicular railways began operation.\n\nMassacre during the Canut rebellion of 1834\n\n\n\nDuring World War II, Lyon was a centre for the occupying Nazi forces, including Klaus Barbie, the infamous \"Butcher of Lyon\". However, the city was also a\n\nstronghold of the French Resistance, the many secret passages known as traboules , enabled people to escape Gestapo raids. On 3 September 1944, Lyon was liberated by the 1st Free French Division and the Forces Françaises de l'Intérieur. The city is now home to a Resistance museum. [33][34]\n\n## Geography\n\nThe Rhône and Saône converge to the south of the historic city centre, forming a peninsula - the \" Presqu'île \" - bounded by two large hills to the west and north and a large plain eastward. Place Bellecour is located on the Presqu'île between the two rivers and is the third-largest public square in France. The broad, pedestrian-only Rue de la République leads north from Place Bellecour.\n\nThe northern hill is La Croix-Rousse, known as \"the hill that works\" because it is traditionally home to many small silk workshops, an industry for which the city has long been renowned. [35]\n\nThe Saône-Rhône confluence\n\n\n\nThe western hill is Fourvière, known as \"the hill that prays\" because it is the location for Basilica of Notre-Dame de Fourvière, several convents, and Archbishop residence. The district, Vieux Lyon, also hosts the Tour métallique (a highly visible TV tower, replicating the last stage of the Eiffel Tower) and one of the city's railways. 
[36] Fourvière, along with portions of the Presqu'île and much of La Croix-Rousse, is designated as a UNESCO World Heritage Site. [37]\n\nEast of the Rhône from the Presqu'île is a large flat area upon which sits much of modern Lyon and contains most of the city's population. Situated in this area is La Part-Dieu urban centre, which clusters the landmark structures Tour Incity, Tour Part-Dieu, Tour Oxygène, and Tour Swiss Life, as well as the city's primary railway station, Gare de Lyon-Part-Dieu.\n\nNorth of this district lays the sixth arrondissement, which is home to one of Europe's largest urban parks, the Parc de la Tête d'or, as well as Lycée du Parc and Interpol's world headquarters.", - "page_start": 3, - "page_end": 3, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- 31. Braudel 1984 p. 327\n - 32. Pierre Edmond DESVIGNES. \"Quartier renaissance Lyon : Vieux Lyon, quartier ancien et secteur sauvegarde Lyon\" (https://web.archive.org/web/20110119152753/http://www.vieux-lyon.org/lyon-epoque-renaissance\\_f01 150.htm). Vieux-lyon.org. Archived from the original (http://www.vieux-lyon.org/lyon-epoque-renaissance\\_f011 50.htm) on 19 January 2011. Retrieved 3 April 2011.\n - 33. \"CHRD Lyon\" (https://web.archive.org/web/20110124140355/http://www.chrd.lyon.fr/chrd/sections/fr/pied/engli sh\\_1). Chrd.lyon.fr . 2017. Archived from the original (http://www.chrd.lyon.fr/chrd/sections/fr/pied/english\\_1) on 24 January 2011. Retrieved 21 December 2017.\n - 34. Cosgrove, Michael (4 June 2009). \"Lyon: The Resistance and Deportation Museum\" (http://www.digitaljournal. com/article/273644). Digitaljournal.com .\n - 35. (in French) Georges Duby (ed), Histoire de la France : Dynasties et révolutions, de 1348 à 1852 (vol. 2), Larousse, 1999 p. 53 ISBN 2-03-505047-2\n - 36. \"Lyon, France: Local Transport\" (http://www.lonelyplanet.com/france/burgundy-and-the-rhone/lyon/transport/g etting-around/local-transport). Lonely Planet. Retrieved 2 February 2017.\n - 37. 
\"Historic Site of Lyon\" (https://whc.unesco.org/en/list/872/). unesco.org . UNESCO World Heritage Centre. Retrieved 31 July 2015.\n - 38. Gregory, Stanley. 'Climatic Classification and Climatic Change (Klimaklassifikation Und Klimaänderung) (http s://www.jstor.org/stable/25636095).' Erdkunde , vol. 8, no. 4, 1954, pp. 246-252. JSTOR.\n - 39. \"Données climatiques de la station de Lyon: Relevés de 2016 - Lyon\" (https://web.archive.org/web/20161004 055201/http://www.meteofrance.com/climat/france/lyon/69029001/releves) (in French). Meteo France. Archived from the original (http://www.meteofrance.com/climat/france/lyon/69029001/releves) on 4 October 2016. Retrieved 2 October 2016.\n - 40. \"Lyon-Bron (69)\" (https://donneespubliques.meteofrance.fr/FichesClim/FICHECLIM\\_69029001.pdf) (PDF). Fiche Climatologique: Statistiques 1991-2020 et records (in French). Meteo France. Retrieved 14 July 2022.\n - 41. \"Température et records en Août pour Lyon\" (https://www.meteo-lyon.net/records/mois/aout). meteo-lyon.net (in French). Météo Villes. Retrieved 7 September 2023.\n - 42. \"Lyon-Bron (07480) - WMO Weather Station\" (ftp://ftp.atdd.noaa.gov/pub/GCOS/WMO-Normals/TABLES/RE G\\_VI/FR/07480.TXT). NOAA. Retrieved 8 February 2019. Archived (https://archive.org/details/19611990Norm alsNOAALyonBron) 8 February 2019, at the Wayback Machine\n - 43. \"Normes et records 1961-1990: Lyon-Bron (69) - altitude 198m\" (https://web.archive.org/web/201603032035 26/http://www.infoclimat.fr/climatologie-07480-lyon-bron.html) (in French). Infoclimat. Archived from the original (http://www.infoclimat.fr/climatologie-07480-lyon-bron.html) on 3 March 2016. Retrieved 8 February 2019.\n - 44. \"St-Irénée - France\" (http://www.sacred-destinations.com/france/lyon-eglise-st-irenee). sacreddestinations.com .", - "page_start": 22, - "page_end": 22, - "source_file": "wikipedia4.pdf" - }, - { - "text": "All figures come from population censuses. Figures from 1911 to 1936 (incl.) 
are computed using the redressed figures for the commune of Lyon calculated by INSEE to correct the overestimated population of Lyon published by the municipal authorities at the time (10,000s of false residents had been added by the municipal authorities to artificially inflate the population figures and remain the 2nd largest city of France ahead of Marseille). [68] The 1906 figure is computed using the figure for the commune of Lyon published by the municipal authorities, probably already inflated, but not corrected by INSEE because the overestimate was smaller than 10,000.\n\nSource: EHESS [70] and INSEE [71]\n\n## Foreign-born\n\n## Education\n\n## Universities and tertiary education\n\n - École Centrale de Lyon;\n - École Normale Supérieure de Lyon\n - EM Lyon (École de Management de Lyon);\n - ECE Lyon (École de Commerce Européenne de Lyon);\n - Institut d'études politiques de Lyon (Sciences Po Lyon);\n - CPE Lyon;\n - CNSMD (Conservatoire national supérieur de musique et de danse de Lyon)\n - ECAM Lyon (École Catholique d'Arts et Métiers de Lyon);\n - EPITECH;\n - EPITA;\n - ENTPE (École Nationale des Travaux Publiques de l'État);\n - École nationale vétérinaire de Lyon (ENVL);\n - ESME-Sudria;\n - École des Beaux-Arts;\n - E-Artsup;\n - INSA Lyon (Institut National des Sciences Appliquées de Lyon);\n - Polytech Lyon;\n - Institut supérieur européen de gestion group;\n - ISARA (Institut Supérieur d'Agriculture Rhône Alpes);\n - Institution des Chartreux;\n - Institut polytechnique des sciences avancées;\n - Université Claude Bernard (Lyon 1);\n - Université Lumière (Lyon 2);\n - Université Jean Moulin (Lyon 3);\n - IAE (Institut d'Administration des Entreprises de Lyon);\n - Institut Sup'Biotech de Paris;\n - Catholic University of Lyon;\n - ESDES Business School;\n - IDRAC (International School of Management);\n - Wesford Graduate Business School;\n - IFAG (Business Management School);\n - Institut supérieur européen de formation par l'action;\n - Le Lycée 
du Parc;\n - La Martinière Lyon;\n - Web@cademie;\n - CEESO (Centre Européen d'Enseignement Supérieur de l'Ostéopathie);\n\nForeign-born population in Lyon by country of birth [72]\n\n\n\n\n\n\n\n\n\n\n\n", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia4.pdf" - }, - { - "text": "Early Christians in Lyon were martyred for their beliefs under the reigns of various Roman emperors, most notably Marcus Aurelius and Septimius Severus. [28] Local saints from this period include Blandina, Pothinus, and Epipodius, among others. The Greek Irenaeus was the second bishop of Lyon during the latter part of the second century. [29] To this day, the archbishop of Lyon is still referred to as \" Primat des Gaules \". [30]\n\nBurgundians fleeing the destruction of Worms by the Huns in 437 were re-settled in eastern Gaul. In 443 the Romans established the Kingdom of the Burgundians, and Lugdunum became its capital in 461. In 843, under the Treaty of Verdun, Lyon went to the Holy Roman Emperor Lothair I. It later was made part of the Kingdom of Arles which was incorporated into the Holy Roman Empire in 1033. Lyon did not come\n\nThe Roman-era Theatre on the\n\n\n\nFourvière Hill\n\nunder French control until the 14th century.\n\n## Modern Lyon\n\nFernand Braudel remarked, \"Historians of Lyon are not sufficiently aware of the bipolarity between Paris and Lyon, which is a constant structure in French development...from the late Middle Ages to the Industrial\n\nRevolution\". [31] In the late 15th century, the fairs introduced by Italian merchants made Lyon the economic counting house of France. Even the Bourse (treasury), built in 1749, resembled a public bazaar where accounts were settled in the open air. When international banking moved to Genoa, then Amsterdam, Lyon remained the banking centre of France.\n\nDuring the Renaissance, the city's development was driven by the silk trade, which strengthened its ties to Italy. 
Italian influence on Lyon's architecture is still visible among historic buildings. [32] In the late 1400s and 1500s Lyon was also a key centre of literary activity and book publishing, both of French writers (such as Maurice Scève, Antoine Heroet, and Louise Labé) and of Italians in exile (such as Luigi Alamanni and Gian Giorgio Trissino).\n\nIn 1572, Lyon was a scene of mass violence by Catholics against Protestant Huguenots in the St. Bartholomew's Day Massacre. Two centuries later, Lyon was again convulsed by violence during the French Revolution, when the citizenry rose up against the National Convention\n\n- • Metro density\n\n500/km 2 (1,300/sq mi)\n\nTime zone\n\nUTC+01:00 (CET)\n\n- • Summer (DST)\n\nUTC+02:00 (CEST)\n\nINSEE/Postal code\n\n69123 (https://www.inse e.fr/fr/statistiques/14055 99?geo=COM-69123) /69001-69009\n\nElevation\n\n162-349 m (531- 1,145 ft)\n\nWebsite\n\nlyon.fr (https://www.lyon. fr/)\n\n1 French Land Register data, which excludes lakes, ponds, glaciers > 1 km 2 (0.386 sq mi or 247 acres) and river estuaries.\n\n\n\n\n\n\n\n\n\n", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- - 8: Flow", - "page_start": 425, - "page_end": 425, - "source_file": "sg246915.pdf" - }, - { - "text": "Both Vieux Lyon and the slopes of Croix-Rousse are known for their narrow passageways (named traboules ) that pass through buildings and link streets on either side. The first examples of traboules are thought to have been built in Lyon in the 4th century. [54] The traboules allowed the inhabitants to get from their homes to the Saône quickly and allowed the canuts on the Croix-Rousse hill to get from their workshops to the textile merchants at the foot of the hill.\n\n## Gastronomy\n\nLyon has a long and chronicled culinary arts tradition. The noted food critic Curnonsky referred to the city as \"the gastronomic capital of the world\", [55] a claim repeated by later writers such as Bill Buford. 
[56] Renowned 3-star Michelin chefs such as Marie Bourgeois [57] and Eugénie Brazier [58] developed Lyonnaise cuisine into a national phenomenon favoured by the French elite; a tradition which Paul Bocuse later turned into a worldwide success. [59] The bouchon is a traditional Lyonnais restaurant that serves local fare such as sausages, duck pâté or roast pork, along with local wines. Two of France's best known wine-growing regions are located near the city: the Beaujolais region to the north and the Côtes du Rhône region to the south. Another Lyon tradition is a type of brunch food called \"mâchons\", made of local charcuterie and usually accompanied by Beaujolais red wine. Mâchons were the customary meal of the canuts, the city's silk workers, who ate a late-morning meal after they finished their shifts in the factories. [60]\n\nOther traditional local dishes include coq au vin; quenelle; gras double; salade lyonnaise (lettuce with bacon, croûtons and a poached egg); and the sausage-based rosette lyonnaise and andouillette. Popular local confections include marron glacé and coussin de Lyon. Cervelle de canut (literally, \"silk worker's brains\") is a cheese spread/dip made of a base of fromage blanc, seasoned with chopped herbs, shallots, salt, pepper, olive oil and vinegar.\n\nPassage de l'Argue\n\n\n\nÎle Barbe bakery at the Halles de Lyon-Paul Bocuse\n\n\n\nMore recently, the french tacos was invented in Lyon suburbs (Vaulx-en-Velin) (or Grenoble according to some theories), in the early 2000s and is now famous worldwide. [61][62]\n\n## Sport\n\nLyon is home to the football club Olympique Lyonnais (OL), whose men's team plays in Ligue 1 and has won the championship of that competition seven times, all consecutively from 2002 to 2008. [63] OL played until December 2015 at the 43,000seat Stade de Gerland, which also hosted matches of the 1998 FIFA World Cup. 
Since 2016, the team has played at the Parc Olympique Lyonnais, a 59,000-seat stadium located in the eastern suburb of Décines-Charpieu. [64] OL operates a women's team, Olympique Lyonnais Féminin, which competes in and dominates Division 1 Féminine. They won fourteen consecutive top-flight championships (2007-2020), and additionally claim the four titles won by the original incarnation of FC Lyon, a\n\nParc Olympique Lyonnais\n\n\n\nwomen's football club that merged into OL in 2004 (the current FC Lyon was founded in 2009). The OL women have also won the UEFA Women's Champions League eight times, including in five consecutive editions from 2016 to 2020. Lyon hosted the 2019 FIFA Women's World Cup semi-finals as well as the Final on 7 July at Stade de Lyon.\n\nLyon has a rugby union team, Lyon OU, in the Top 14, which moved into Stade de Gerland full-time in 2017-18. In addition, Lyon has a rugby league side called Lyon Villeurbanne that plays in the French rugby league championship. The club's home is the Stade Georges Lyvet in Villeurbanne.", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia4.pdf" - }, - { - "text": "## External links\n\n - Official website (http://www.lyon.fr)(in French)\n - Visit Lyon, the official website for tourism in France (https://en.visiterlyon.com/)\n - Lyon's English Language News and Information (https://thisislyon.fr/)\n - Rues de Lyon (https://www.ruesdelyon.net/) Streets, Places, Monuments (in French)\n - Old maps of Lyon (http://historic-cities.huji.ac.il/france/lyon/lyon.html) Archived (https://web.archive.org/we b/20210116220537/http://historic-cities.huji.ac.il/france/lyon/lyon.html) 16 January 2021 at the Wayback Machine, Historic cities site (http://historic-cities.huji.ac.il/historic\\_cities.html) Archived (https://web.archive. 
org/web/20220325051637/http://historic-cities.huji.ac.il/historic\\_cities.html) 25 March 2022 at the Wayback Machine, The National Library of Israel\n\nRetrieved from \"https://en.wikipedia.org/w/index.php?title=Lyon&oldid=1267625203\"", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia4.pdf" - }, - { - "text": "The name of the city has taken the forms Lugdon , Luon , and since the 13th century, Lyon . The Gallic Lugdun or Lugdunon that was Latinized in Roman as Lugdunum is composed of two words. The first may be the name of the Celtic god Lug (in charge of order and law), or the derived word lugon , meaning \"crow\" (the crow being the messenger of Lug), but might also be another word lug , meaning \"light\". The second is dunos ('fortress', 'hill'). The name thus may designate the hill of Fourvière, on which the ancient city of Lyon is founded, but could mean \"hill of the god Lug\", \"hill of the crows\" or \"shining hill\". [21] [22]\n\nAlternatively Julius Pokorny associates the first part of the word with the Indo-European radical * lūg ('dark, black, swamp'), the basis of the toponyms Ludza in Latvia, Lusatia in Germany (from Sorbian Łužica ), and several places in the Czech Republic named Lužice; [23] it could then also be compared to Luze in Franche-Comté and various hydronyms such as Louge.\n\nFurther down, in the current Saint-Vincent district, was the Gallic village of Condate, probably a simple hamlet of sailors or fishermen living on the banks of the Saône. Condate is a Gallic word meaning \"confluence\", from which the Confluence district gets its name.\n\nIn Roman times the city was called Caput Galliæ , meaning \"capital of the Gauls\". As an homage to this title, the Archbishop of Lyon is still called the Primate of Gaul.\n\nDuring the revolutionary period, Lyon was renamed CommuneAffranchie (\"Emancipated Commune\") on 12 October 1793 by a decree of the Convention Nationale. 
It resumed its name in 1794, after the end of the Terror.\n\nLyon is called Liyon in Franco-Provençal. [24]\n\n## Ancient Lyon\n\nAccording to the historian Dio Cassius, in 43 BC, the Roman Senate ordered the creation of a settlement for Roman refugees of war with the Allobroges. These refugees had been expelled from Vienne and were now encamped at the confluence of the Saône and Rhône rivers. The foundation was built on Fourvière hill and officially called Colonia Copia Felix Munatia , a name invoking prosperity and the blessing of the gods. The city became increasingly referred to as Lugdunum (and occasionally Lugudunum [25] ). [26] The earliest translation of this Gaulish place-name as \"Desired Mountain\" is offered by the 9th-century Endlicher Glossary . [27] In contrast, some modern scholars have proposed a Gaulish hill-fort named Lug[o]dunon, after the Celtic god Lugus (cognate with Old Irish Lugh , Modern Irish Lú ), and dúnon (hillfort).\n\nThe Romans recognised that Lugdunum's strategic location at the convergence of two navigable rivers made it a natural communications hub. The city became the starting point of main Roman roads in the area, and it quickly became the capital of the province, Gallia Lugdunensis. Two Emperors were born in this city: Claudius, whose speech is preserved in the Lyon Tablet in which he justifies the nomination of Gallic Senators, and Caracalla.\n\n\n\nCoordinates: 45°46'N 4°50'E\n\n", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia4.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia4.pdf", - "query": "How big was Lyon's population in 2022? ", - "target_page": 2, - "target_passage": "Population (2022) 520,774", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "\n\n\n\nLyon [c] (Franco-Provençal: Liyon ) is the second-largest city in France by urban area and the third largest by city limits. 
[14] It is located at the confluence of the rivers Rhône and Saône, to the northwest of the French Alps, 391 km (243 mi) southeast of Paris, 278 km (173 mi) north of Marseille, 113 km (70 mi) southwest of Geneva, Switzerland, 58 km (36 mi) northeast of Saint-Étienne.\n\nThe City of Lyon had a population of 522,250 at the Jan. 2021 census within its small municipal territory of 48 km 2 (19 sq mi), [15] but together with its suburbs and exurbs the Lyon metropolitan area had a population of 2,308,818 that same year, [7] the second most populated in France. Lyon and 58 suburban municipalities have formed since 2015 the Metropolis of Lyon, a directly elected metropolitan authority now in charge of most urban issues, with a population of 1,424,069 in 2021. [16] Lyon is the prefecture of the Auvergne-Rhône-Alpes region and seat of the Departmental Council of Rhône (whose jurisdiction, however, no longer extends over the Metropolis of Lyon since 2015).\n\nThe capital of the Gauls during the Roman Empire, Lyon is the seat of an archbishopric whose holder bears the title of Primate of the Gauls. Lyon became a major economic hub during the Renaissance. The city is recognised for its cuisine and gastronomy, as well as historical and architectural landmarks; as such, the districts of Old Lyon, the Fourvière hill, the Presqu'île and the slopes of the Croix-Rousse are inscribed on the UNESCO World Heritage List. Lyon was historically an important area for the production and weaving of silk. Lyon played a significant role in the history of cinema since Auguste and Louis Lumière invented the cinematograph there. The city is also known for its light festival, the Fête des lumières, which begins every 8 December and lasts for four days, earning Lyon the title of \"Capital of Lights\".\n\nEconomically, Lyon is a major centre for banking, chemical, pharmaceutical and biotech industries. 
The city contains a significant software industry with a particular focus on video games; in recent years it has fostered a growing local start-up sector. [17] The home of renowned universities and higher education schools, Lyon is the second-largest student city in France, with a university population of nearly 200,000 students within the Metropolis of Lyon. [18] Lyon hosts the international headquarters of Interpol, the International Agency for Research on Cancer, as well as Euronews. According to the Globalization and World Rankings Research Institute, Lyon is considered a Beta city, as of 2018. [19] It ranked second in France and 40th globally in Mercer's 2019 liveability rankings. [20]\n\n## History\n\n## Lyon\n\nLiyon (Arpitan)\n\n## Prefecture and commune\n\nSkyline of Lyon in La Part-Dieu\n\n\n\n\n\nBasilica of NotreDame de Fourvière\n\n\n\nPlace des Terreaux with the Fontaine BartholdiParc de la Tête d'or\n\n\n\nConfluence District\n\n\n\nVieux Lyon\n\n\n\nPont Lafayette\n\n\n\nCoat of arms\n\n\n\nMotto(s): Avant, avant, Lion le melhor (old Franco-Provençal for \"Forward, forward, Lyon the best\") [a] Virtute duce, comite fortuna (\"With virtue as guide and fortune as companion\") [b]\n\nLocation of Lyon\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia4.pdf" - }, - { - "text": "All figures come from population censuses. Figures from 1911 to 1936 (incl.) are computed using the redressed figures for the commune of Lyon calculated by INSEE to correct the overestimated population of Lyon published by the municipal authorities at the time (10,000s of false residents had been added by the municipal authorities to artificially inflate the population figures and remain the 2nd largest city of France ahead of Marseille). 
[68] The 1906 figure is computed using the figure for the commune of Lyon published by the municipal authorities, probably already inflated, but not corrected by INSEE because the overestimate was smaller than 10,000.\n\nSource: EHESS [70] and INSEE [71]\n\n## Foreign-born\n\n## Education\n\n## Universities and tertiary education\n\n - École Centrale de Lyon;\n - École Normale Supérieure de Lyon\n - EM Lyon (École de Management de Lyon);\n - ECE Lyon (École de Commerce Européenne de Lyon);\n - Institut d'études politiques de Lyon (Sciences Po Lyon);\n - CPE Lyon;\n - CNSMD (Conservatoire national supérieur de musique et de danse de Lyon)\n - ECAM Lyon (École Catholique d'Arts et Métiers de Lyon);\n - EPITECH;\n - EPITA;\n - ENTPE (École Nationale des Travaux Publiques de l'État);\n - École nationale vétérinaire de Lyon (ENVL);\n - ESME-Sudria;\n - École des Beaux-Arts;\n - E-Artsup;\n - INSA Lyon (Institut National des Sciences Appliquées de Lyon);\n - Polytech Lyon;\n - Institut supérieur européen de gestion group;\n - ISARA (Institut Supérieur d'Agriculture Rhône Alpes);\n - Institution des Chartreux;\n - Institut polytechnique des sciences avancées;\n - Université Claude Bernard (Lyon 1);\n - Université Lumière (Lyon 2);\n - Université Jean Moulin (Lyon 3);\n - IAE (Institut d'Administration des Entreprises de Lyon);\n - Institut Sup'Biotech de Paris;\n - Catholic University of Lyon;\n - ESDES Business School;\n - IDRAC (International School of Management);\n - Wesford Graduate Business School;\n - IFAG (Business Management School);\n - Institut supérieur européen de formation par l'action;\n - Le Lycée du Parc;\n - La Martinière Lyon;\n - Web@cademie;\n - CEESO (Centre Européen d'Enseignement Supérieur de l'Ostéopathie);\n\nForeign-born population in Lyon by country of birth [72]\n\n\n\n\n\n\n\n\n\n\n\n", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia4.pdf" - }, - { - "text": "\n\nCoordinates: 45°46'N 4°50'E\n\n\n\n| Country | France 
|\n|------------------------|----------------------------|\n| Region | Auvergne-Rhône-Alpes |\n| Metropolis | Lyon Metropolis |\n| Arrondissement | Lyon |\n| Subdivisions | 9 arrondissements |\n| Government | |\n| · Mayor (2020- | Grégory Doucet [2] |\n| 2026) | (EELV) |\n| Area 1 | 47.87 km 2 (18.48 sq mi) |\n| · Urban (2020 [3] ) | 1,141.4 km 2 (440.7 sq mi) |\n| · Metro (2020 [4] ) | 4,605.8 km 2 |\n| Population (2022) [5] | 520,774 |\n| · Rank | 3rd in France |\n| | 11,000/km 2 |\n| · Density | (28,000/sq mi) |\n| · Urban (Jan. [6] | 1,702,921 |\n| 2021 ) | |\n| · Urban density | 1,500/km 2 (3,900/sq mi) |\n| · Metro (Jan. | 2,308,818 |\n| 2021 [7] ) | |", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia4.pdf" - }, - { - "text": "Early Christians in Lyon were martyred for their beliefs under the reigns of various Roman emperors, most notably Marcus Aurelius and Septimius Severus. [28] Local saints from this period include Blandina, Pothinus, and Epipodius, among others. The Greek Irenaeus was the second bishop of Lyon during the latter part of the second century. [29] To this day, the archbishop of Lyon is still referred to as \" Primat des Gaules \". [30]\n\nBurgundians fleeing the destruction of Worms by the Huns in 437 were re-settled in eastern Gaul. In 443 the Romans established the Kingdom of the Burgundians, and Lugdunum became its capital in 461. In 843, under the Treaty of Verdun, Lyon went to the Holy Roman Emperor Lothair I. It later was made part of the Kingdom of Arles which was incorporated into the Holy Roman Empire in 1033. Lyon did not come\n\nThe Roman-era Theatre on the\n\n\n\nFourvière Hill\n\nunder French control until the 14th century.\n\n## Modern Lyon\n\nFernand Braudel remarked, \"Historians of Lyon are not sufficiently aware of the bipolarity between Paris and Lyon, which is a constant structure in French development...from the late Middle Ages to the Industrial\n\nRevolution\". 
[31] In the late 15th century, the fairs introduced by Italian merchants made Lyon the economic counting house of France. Even the Bourse (treasury), built in 1749, resembled a public bazaar where accounts were settled in the open air. When international banking moved to Genoa, then Amsterdam, Lyon remained the banking centre of France.\n\nDuring the Renaissance, the city's development was driven by the silk trade, which strengthened its ties to Italy. Italian influence on Lyon's architecture is still visible among historic buildings. [32] In the late 1400s and 1500s Lyon was also a key centre of literary activity and book publishing, both of French writers (such as Maurice Scève, Antoine Heroet, and Louise Labé) and of Italians in exile (such as Luigi Alamanni and Gian Giorgio Trissino).\n\nIn 1572, Lyon was a scene of mass violence by Catholics against Protestant Huguenots in the St. Bartholomew's Day Massacre. Two centuries later, Lyon was again convulsed by violence during the French Revolution, when the citizenry rose up against the National Convention\n\n- • Metro density\n\n500/km 2 (1,300/sq mi)\n\nTime zone\n\nUTC+01:00 (CET)\n\n- • Summer (DST)\n\nUTC+02:00 (CEST)\n\nINSEE/Postal code\n\n69123 (https://www.inse e.fr/fr/statistiques/14055 99?geo=COM-69123) /69001-69009\n\nElevation\n\n162-349 m (531- 1,145 ft)\n\nWebsite\n\nlyon.fr (https://www.lyon. fr/)\n\n1 French Land Register data, which excludes lakes, ponds, glaciers > 1 km 2 (0.386 sq mi or 247 acres) and river estuaries.\n\n\n\n\n\n\n\n\n\n", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- 31. Braudel 1984 p. 327\n - 32. Pierre Edmond DESVIGNES. \"Quartier renaissance Lyon : Vieux Lyon, quartier ancien et secteur sauvegarde Lyon\" (https://web.archive.org/web/20110119152753/http://www.vieux-lyon.org/lyon-epoque-renaissance\\_f01 150.htm). Vieux-lyon.org. 
Archived from the original (http://www.vieux-lyon.org/lyon-epoque-renaissance\\_f011 50.htm) on 19 January 2011. Retrieved 3 April 2011.\n - 33. \"CHRD Lyon\" (https://web.archive.org/web/20110124140355/http://www.chrd.lyon.fr/chrd/sections/fr/pied/engli sh\\_1). Chrd.lyon.fr . 2017. Archived from the original (http://www.chrd.lyon.fr/chrd/sections/fr/pied/english\\_1) on 24 January 2011. Retrieved 21 December 2017.\n - 34. Cosgrove, Michael (4 June 2009). \"Lyon: The Resistance and Deportation Museum\" (http://www.digitaljournal. com/article/273644). Digitaljournal.com .\n - 35. (in French) Georges Duby (ed), Histoire de la France : Dynasties et révolutions, de 1348 à 1852 (vol. 2), Larousse, 1999 p. 53 ISBN 2-03-505047-2\n - 36. \"Lyon, France: Local Transport\" (http://www.lonelyplanet.com/france/burgundy-and-the-rhone/lyon/transport/g etting-around/local-transport). Lonely Planet. Retrieved 2 February 2017.\n - 37. \"Historic Site of Lyon\" (https://whc.unesco.org/en/list/872/). unesco.org . UNESCO World Heritage Centre. Retrieved 31 July 2015.\n - 38. Gregory, Stanley. 'Climatic Classification and Climatic Change (Klimaklassifikation Und Klimaänderung) (http s://www.jstor.org/stable/25636095).' Erdkunde , vol. 8, no. 4, 1954, pp. 246-252. JSTOR.\n - 39. \"Données climatiques de la station de Lyon: Relevés de 2016 - Lyon\" (https://web.archive.org/web/20161004 055201/http://www.meteofrance.com/climat/france/lyon/69029001/releves) (in French). Meteo France. Archived from the original (http://www.meteofrance.com/climat/france/lyon/69029001/releves) on 4 October 2016. Retrieved 2 October 2016.\n - 40. \"Lyon-Bron (69)\" (https://donneespubliques.meteofrance.fr/FichesClim/FICHECLIM\\_69029001.pdf) (PDF). Fiche Climatologique: Statistiques 1991-2020 et records (in French). Meteo France. Retrieved 14 July 2022.\n - 41. \"Température et records en Août pour Lyon\" (https://www.meteo-lyon.net/records/mois/aout). meteo-lyon.net (in French). Météo Villes. 
Retrieved 7 September 2023.\n - 42. \"Lyon-Bron (07480) - WMO Weather Station\" (ftp://ftp.atdd.noaa.gov/pub/GCOS/WMO-Normals/TABLES/RE G\\_VI/FR/07480.TXT). NOAA. Retrieved 8 February 2019. Archived (https://archive.org/details/19611990Norm alsNOAALyonBron) 8 February 2019, at the Wayback Machine\n - 43. \"Normes et records 1961-1990: Lyon-Bron (69) - altitude 198m\" (https://web.archive.org/web/201603032035 26/http://www.infoclimat.fr/climatologie-07480-lyon-bron.html) (in French). Infoclimat. Archived from the original (http://www.infoclimat.fr/climatologie-07480-lyon-bron.html) on 3 March 2016. Retrieved 8 February 2019.\n - 44. \"St-Irénée - France\" (http://www.sacred-destinations.com/france/lyon-eglise-st-irenee). sacreddestinations.com .", - "page_start": 22, - "page_end": 22, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- Bellecour, Écoles D'Arts.\n\n## Primary and secondary schools\n\nThere are some international private schools in the Lyon area, including:\n\n - Cité Scolaire Internationale de Lyon or the Lycée de Gerland;\n - Includes the Section Japonaises ( リヨン·ジェルラン補習授業校 Riyon Jeruran Hoshū Jugyō Kō \"Lyon Gerland Japanese Supplementary School\"), which the Japanese Ministry of Education (MEXT) counts as a part-time Japanese supplementary school [73]\n - Ombrosa;\n - International School of Lyon in nearby Sainte-Foy-lès-Lyon;\n - Montessori School of Lyon.\n\n## Supplementary education\n\nOther Japanese supplementary schools:\n\n - The Association Pour le Développement de la Langue et de la Culture Japonaises (ADLCJ; リヨン補習授業校 Riyon Hoshū Jugyō Kō ) is held in the Maison Berty Albrecht in Villeurbanne, near Lyon. [73] It was formed in 1987. [74] It serves Japanese expatriate children who wish to continue their Japanese education whilst abroad.\n\n## Transport\n\nLyon-Saint-Exupéry Airport, located east of Lyon, serves as a base for domestic and international flights. 
It is a key transport facility for the entire Rhône-Alpes region, with coach links to other cities in the area. The in-house train station Gare de Lyon Saint-Exupéry connects the airport to the nationwide TGV network. The Rhônexpress tram monopoly links the airport with the business quarter of La Part Dieu in less than 30 minutes, and offers connections with Underground A & B, Tramway T1, T3 & T4, and bus lines. Lyon public transport Sytral offers a bus service, Route 47, that links the airport to Meyzieu [75] where passengers can change onto Tram T3. The regular price of public transport is €1.90, as opposed to €15 one way for the Rhonexpress. In the suburb of Bron, the smaller Lyon-Bron Airport provides an alternative for domestic aviation.\n\nLyon has two major railway stations: Lyon-Part-Dieu, which was built to accommodate the TGV, and Lyon Perrache, an older station that now provides mostly regional service. Smaller railway stations include Gorge-de-Loup, Vaise, Saint-Paul and Jean Macé. Lyon was the first city to be connected to Paris by the TGV in 1981. [76] Since that time the TGV train network has expanded and links Lyon directly to Perpignan, Toulouse, Nice, Marseille, Strasbourg, Nantes and Lille. International trains operate directly to Madrid, Barcelona, Milan, Turin, Geneva, Frankfurt, Luxembourg, Brussels and London.\n\nThe city is at the heart of a dense road network and is located at the meeting point of several highways: A6 to Paris, A7 Marseille, A42 to Geneva, and A43 to Grenoble. The city is now bypassed by the A46. 
A double motorway tunnel passes under Fourvière, connecting the A6 and the A7 autoroutes, both forming the \"Autoroute du Soleil\".\n\nLyon 3: Berges du Rhône campus\n\n\n\nLyon 2: Berges du Rhône campus\n\n\n\nIPSA Lyon Campus\n\n\n\nPlatform I, Lyon-Part-Dieu train station\n\n\n\nT1 tramway on the Raymond Barre bridge\n\n", - "page_start": 18, - "page_end": 18, - "source_file": "wikipedia4.pdf" - }, - { - "text": "| Population of Lyon (metropolis) (59 communes, within 2020 borders) | Population of Lyon (metropolis) (59 communes, within 2020 borders) | Population of Lyon (metropolis) (59 communes, within 2020 borders) | Population of Lyon (metropolis) (59 communes, within 2020 borders) | Population of Lyon (metropolis) (59 communes, within 2020 borders) | Population of Lyon (metropolis) (59 communes, within 2020 borders) | Population of Lyon (metropolis) (59 communes, within 2020 borders) | Population of Lyon (metropolis) (59 communes, within 2020 borders) | Population of Lyon (metropolis) (59 communes, within 2020 borders) |\n|----------------------------------------------------------------------|----------------------------------------------------------------------|----------------------------------------------------------------------|----------------------------------------------------------------------|----------------------------------------------------------------------|----------------------------------------------------------------------|----------------------------------------------------------------------|----------------------------------------------------------------------|----------------------------------------------------------------------|\n| Year | Pop. | ±% p.a. | Year | Pop. | ±% p.a. | Year | Pop. | ±% p.a. 
|\n| 1861 | 418,515 | - | 1906 | 627,073 | +0.60% | 1968 | 1,077,794 | +2.17% |\n| 1866 | 427,522 | +0.43% | 1911 | 629,931 | +0.09% | 1975 | 1,153,402 | +0.98% |\n| 1872 | 426,552 | -0.04% | 1921 | 659,007 | +0.45% | 1982 | 1,138,718 | -0.18% |", - "page_start": 16, - "page_end": 16, - "source_file": "wikipedia4.pdf" - }, - { - "text": "The convention was not the only target within Lyon during the French Revolution. After the Convention faded into history, the French Directory appeared and days after the 4 September 1797 Coup of 18 Fructidor, a Directory's commissioner was assassinated in Lyon.\n\nThe city became an important industrial town in the 19th century. In 1831 and 1834, the canuts (silk workers) of Lyon staged two major uprisings for better working conditions and pay. In 1862, the first of Lyon's extensive network of funicular railways began operation.\n\nMassacre during the Canut rebellion of 1834\n\n\n\nDuring World War II, Lyon was a centre for the occupying Nazi forces, including Klaus Barbie, the infamous \"Butcher of Lyon\". However, the city was also a\n\nstronghold of the French Resistance, the many secret passages known as traboules , enabled people to escape Gestapo raids. On 3 September 1944, Lyon was liberated by the 1st Free French Division and the Forces Françaises de l'Intérieur. The city is now home to a Resistance museum. [33][34]\n\n## Geography\n\nThe Rhône and Saône converge to the south of the historic city centre, forming a peninsula - the \" Presqu'île \" - bounded by two large hills to the west and north and a large plain eastward. Place Bellecour is located on the Presqu'île between the two rivers and is the third-largest public square in France. The broad, pedestrian-only Rue de la République leads north from Place Bellecour.\n\nThe northern hill is La Croix-Rousse, known as \"the hill that works\" because it is traditionally home to many small silk workshops, an industry for which the city has long been renowned. 
[35]\n\nThe Saône-Rhône confluence\n\n\n\nThe western hill is Fourvière, known as \"the hill that prays\" because it is the location for Basilica of Notre-Dame de Fourvière, several convents, and Archbishop residence. The district, Vieux Lyon, also hosts the Tour métallique (a highly visible TV tower, replicating the last stage of the Eiffel Tower) and one of the city's railways. [36] Fourvière, along with portions of the Presqu'île and much of La Croix-Rousse, is designated as a UNESCO World Heritage Site. [37]\n\nEast of the Rhône from the Presqu'île is a large flat area upon which sits much of modern Lyon and contains most of the city's population. Situated in this area is La Part-Dieu urban centre, which clusters the landmark structures Tour Incity, Tour Part-Dieu, Tour Oxygène, and Tour Swiss Life, as well as the city's primary railway station, Gare de Lyon-Part-Dieu.\n\nNorth of this district lays the sixth arrondissement, which is home to one of Europe's largest urban parks, the Parc de la Tête d'or, as well as Lycée du Parc and Interpol's world headquarters.", - "page_start": 3, - "page_end": 3, - "source_file": "wikipedia4.pdf" - }, - { - "text": "## External links\n\n - Official website (http://www.lyon.fr)(in French)\n - Visit Lyon, the official website for tourism in France (https://en.visiterlyon.com/)\n - Lyon's English Language News and Information (https://thisislyon.fr/)\n - Rues de Lyon (https://www.ruesdelyon.net/) Streets, Places, Monuments (in French)\n - Old maps of Lyon (http://historic-cities.huji.ac.il/france/lyon/lyon.html) Archived (https://web.archive.org/we b/20210116220537/http://historic-cities.huji.ac.il/france/lyon/lyon.html) 16 January 2021 at the Wayback Machine, Historic cities site (http://historic-cities.huji.ac.il/historic\\_cities.html) Archived (https://web.archive. 
org/web/20220325051637/http://historic-cities.huji.ac.il/historic\\_cities.html) 25 March 2022 at the Wayback Machine, The National Library of Israel\n\nRetrieved from \"https://en.wikipedia.org/w/index.php?title=Lyon&oldid=1267625203\"", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia4.pdf" - }, - { - "text": "Lyon is also home to the Lyon Hockey Club, an ice hockey team that competes in France's national ice hockey league. The Patinoire Charlemagne is the seat of Club des Sports de Glace de Lyon, the club of Olympic ice dancing champions Marina Anissina and Gwendal Peizerat, and world champions Isabelle Delobel and Olivier Shoenfelder. [65] Lyon-Villeurbanne also has a basketball team, ASVEL, that plays at the Astroballe arena.\n\n## Street art\n\nSince 2000, Birdy Kids, a group of graffiti artists from the city, has decorated several random buildings and walls along the Lyon ring road. In 2012, the artist collective was chosen to represent the city as its cultural ambassadors. [66]\n\n## Demographics\n\nThe population of the city (commune) of Lyon proper was 522,250 at the January 2021 census. [15] As of 2011, 14% of its population was born outside Metropolitan France. [67]\n\n## Population of Lyon (commune) (within 2020 borders)\n\n| Year | Pop. | ±% p.a. | Year | Pop. | ±% p.a. | Year | Pop. | ±% p.a. 
|\n|--------|---------|-----------|--------|---------|-----------|--------|---------|-----------|\n| 1801 | 101,760 | - | 1876 | 344,513 | +1.33% | 1946 | 464,104 | +0.02% |\n| 1806 | 114,643 | +2.41% | 1881 | 378,581 | +1.84% | 1954 | 475,343 | +0.29% |\n| 1821 | 149,611 | +1.79% | 1886 | 404,172 | +1.45% | 1962 | 535,746 | +1.54% |\n| 1831 | 182,668 | +2.02% | 1891 | 440,315 | +1.78% | 1968 | 527,800 | -0.25% |\n| 1836 | 198,683 | +1.60% | 1896 | 468,311 | +1.25% | 1975 | 456,716 | -2.06% |\n| 1841 | 206,670 | +0.79% | 1901 | 461,687 | -0.29% | 1982 | 413,095 | -1.42% |\n| 1846 | 238,466 | +2.86% | 1906 | 474,652 | +0.56% | 1990 | 415,487 | +0.07% |\n| 1851 | 259,220 | +1.68% | 1911 | 462,248 | -0.53% | 1999 | 445,452 | +0.78% |\n| 1856 | 293,743 | +2.66% | 1921 | 462,446 | +0.00% | 2010 | 484,344 | +0.78% |\n| 1861 | 320,326 | +1.72% | 1926 | 463,125 | +0.03% | 2015 | 513,275 | +1.17% |\n| 1866 | 325,219 | +0.30% | 1931 | 463,647 | +0.02% | 2021 | 522,250 | +0.29% |\n| 1872 | 324,590 | -0.03% | 1936 | 463,061 | -0.03% | | | |\n\nAll figures come from population censuses. Figures from 1911 to 1936 (incl.) are the redressed figures calculated by INSEE to correct the overestimated population of Lyon published by the municipal authorities at the time (10,000s of false residents had been added by the municipal authorities to artificially inflate the population figures and remain the 2nd largest city of France ahead of Marseille). [68] The 1906 figure is the one published by the municipal authorities, probably already inflated, but not corrected by INSEE because the overestimate was smaller than 10,000. Source: EHESS [69] and INSEE [15]\n\nThe city of Lyon and 58 suburban municipalities have formed since 2015 the Metropolis of Lyon, a directly elected metropolitan authority now in charge of most urban issues, with a population of 1,424,069 in 2021. 
[16]", - "page_start": 16, - "page_end": 16, - "source_file": "wikipedia4.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia4.pdf", - "query": "What is the climate in Lyon ?", - "target_page": 5, - "target_passage": " Lyon has a humid subtropical climate ( Köppen: Cfa), bordering an oceanic climate (Köppen: Cfb, Trewartha: Do).", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "\n\n\n\nLyon [c] (Franco-Provençal: Liyon ) is the second-largest city in France by urban area and the third largest by city limits. [14] It is located at the confluence of the rivers Rhône and Saône, to the northwest of the French Alps, 391 km (243 mi) southeast of Paris, 278 km (173 mi) north of Marseille, 113 km (70 mi) southwest of Geneva, Switzerland, 58 km (36 mi) northeast of Saint-Étienne.\n\nThe City of Lyon had a population of 522,250 at the Jan. 2021 census within its small municipal territory of 48 km 2 (19 sq mi), [15] but together with its suburbs and exurbs the Lyon metropolitan area had a population of 2,308,818 that same year, [7] the second most populated in France. Lyon and 58 suburban municipalities have formed since 2015 the Metropolis of Lyon, a directly elected metropolitan authority now in charge of most urban issues, with a population of 1,424,069 in 2021. [16] Lyon is the prefecture of the Auvergne-Rhône-Alpes region and seat of the Departmental Council of Rhône (whose jurisdiction, however, no longer extends over the Metropolis of Lyon since 2015).\n\nThe capital of the Gauls during the Roman Empire, Lyon is the seat of an archbishopric whose holder bears the title of Primate of the Gauls. Lyon became a major economic hub during the Renaissance. The city is recognised for its cuisine and gastronomy, as well as historical and architectural landmarks; as such, the districts of Old Lyon, the Fourvière hill, the Presqu'île and the slopes of the Croix-Rousse are inscribed on the UNESCO World Heritage List. 
Lyon was historically an important area for the production and weaving of silk. Lyon played a significant role in the history of cinema since Auguste and Louis Lumière invented the cinematograph there. The city is also known for its light festival, the Fête des lumières, which begins every 8 December and lasts for four days, earning Lyon the title of \"Capital of Lights\".\n\nEconomically, Lyon is a major centre for banking, chemical, pharmaceutical and biotech industries. The city contains a significant software industry with a particular focus on video games; in recent years it has fostered a growing local start-up sector. [17] The home of renowned universities and higher education schools, Lyon is the second-largest student city in France, with a university population of nearly 200,000 students within the Metropolis of Lyon. [18] Lyon hosts the international headquarters of Interpol, the International Agency for Research on Cancer, as well as Euronews. According to the Globalization and World Rankings Research Institute, Lyon is considered a Beta city, as of 2018. [19] It ranked second in France and 40th globally in Mercer's 2019 liveability rankings. [20]\n\n## History\n\n## Lyon\n\nLiyon (Arpitan)\n\n## Prefecture and commune\n\nSkyline of Lyon in La Part-Dieu\n\n\n\n\n\nBasilica of NotreDame de Fourvière\n\n\n\nPlace des Terreaux with the Fontaine BartholdiParc de la Tête d'or\n\n\n\nConfluence District\n\n\n\nVieux Lyon\n\n\n\nPont Lafayette\n\n\n\nCoat of arms\n\n\n\nMotto(s): Avant, avant, Lion le melhor (old Franco-Provençal for \"Forward, forward, Lyon the best\") [a] Virtute duce, comite fortuna (\"With virtue as guide and fortune as companion\") [b]\n\nLocation of Lyon\n\n", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia4.pdf" - }, - { - "text": "All figures come from population censuses. Figures from 1911 to 1936 (incl.) 
are computed using the redressed figures for the commune of Lyon calculated by INSEE to correct the overestimated population of Lyon published by the municipal authorities at the time (10,000s of false residents had been added by the municipal authorities to artificially inflate the population figures and remain the 2nd largest city of France ahead of Marseille). [68] The 1906 figure is computed using the figure for the commune of Lyon published by the municipal authorities, probably already inflated, but not corrected by INSEE because the overestimate was smaller than 10,000.\n\nSource: EHESS [70] and INSEE [71]\n\n## Foreign-born\n\n## Education\n\n## Universities and tertiary education\n\n - École Centrale de Lyon;\n - École Normale Supérieure de Lyon\n - EM Lyon (École de Management de Lyon);\n - ECE Lyon (École de Commerce Européenne de Lyon);\n - Institut d'études politiques de Lyon (Sciences Po Lyon);\n - CPE Lyon;\n - CNSMD (Conservatoire national supérieur de musique et de danse de Lyon)\n - ECAM Lyon (École Catholique d'Arts et Métiers de Lyon);\n - EPITECH;\n - EPITA;\n - ENTPE (École Nationale des Travaux Publiques de l'État);\n - École nationale vétérinaire de Lyon (ENVL);\n - ESME-Sudria;\n - École des Beaux-Arts;\n - E-Artsup;\n - INSA Lyon (Institut National des Sciences Appliquées de Lyon);\n - Polytech Lyon;\n - Institut supérieur européen de gestion group;\n - ISARA (Institut Supérieur d'Agriculture Rhône Alpes);\n - Institution des Chartreux;\n - Institut polytechnique des sciences avancées;\n - Université Claude Bernard (Lyon 1);\n - Université Lumière (Lyon 2);\n - Université Jean Moulin (Lyon 3);\n - IAE (Institut d'Administration des Entreprises de Lyon);\n - Institut Sup'Biotech de Paris;\n - Catholic University of Lyon;\n - ESDES Business School;\n - IDRAC (International School of Management);\n - Wesford Graduate Business School;\n - IFAG (Business Management School);\n - Institut supérieur européen de formation par l'action;\n - Le Lycée 
du Parc;\n - La Martinière Lyon;\n - Web@cademie;\n - CEESO (Centre Européen d'Enseignement Supérieur de l'Ostéopathie);\n\nForeign-born population in Lyon by country of birth [72]\n\n\n\n\n\n\n\n\n\n\n\n", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- 31. Braudel 1984 p. 327\n - 32. Pierre Edmond DESVIGNES. \"Quartier renaissance Lyon : Vieux Lyon, quartier ancien et secteur sauvegarde Lyon\" (https://web.archive.org/web/20110119152753/http://www.vieux-lyon.org/lyon-epoque-renaissance\\_f01 150.htm). Vieux-lyon.org. Archived from the original (http://www.vieux-lyon.org/lyon-epoque-renaissance\\_f011 50.htm) on 19 January 2011. Retrieved 3 April 2011.\n - 33. \"CHRD Lyon\" (https://web.archive.org/web/20110124140355/http://www.chrd.lyon.fr/chrd/sections/fr/pied/engli sh\\_1). Chrd.lyon.fr . 2017. Archived from the original (http://www.chrd.lyon.fr/chrd/sections/fr/pied/english\\_1) on 24 January 2011. Retrieved 21 December 2017.\n - 34. Cosgrove, Michael (4 June 2009). \"Lyon: The Resistance and Deportation Museum\" (http://www.digitaljournal. com/article/273644). Digitaljournal.com .\n - 35. (in French) Georges Duby (ed), Histoire de la France : Dynasties et révolutions, de 1348 à 1852 (vol. 2), Larousse, 1999 p. 53 ISBN 2-03-505047-2\n - 36. \"Lyon, France: Local Transport\" (http://www.lonelyplanet.com/france/burgundy-and-the-rhone/lyon/transport/g etting-around/local-transport). Lonely Planet. Retrieved 2 February 2017.\n - 37. \"Historic Site of Lyon\" (https://whc.unesco.org/en/list/872/). unesco.org . UNESCO World Heritage Centre. Retrieved 31 July 2015.\n - 38. Gregory, Stanley. 'Climatic Classification and Climatic Change (Klimaklassifikation Und Klimaänderung) (http s://www.jstor.org/stable/25636095).' Erdkunde , vol. 8, no. 4, 1954, pp. 246-252. JSTOR.\n - 39. 
\"Données climatiques de la station de Lyon: Relevés de 2016 - Lyon\" (https://web.archive.org/web/20161004 055201/http://www.meteofrance.com/climat/france/lyon/69029001/releves) (in French). Meteo France. Archived from the original (http://www.meteofrance.com/climat/france/lyon/69029001/releves) on 4 October 2016. Retrieved 2 October 2016.\n - 40. \"Lyon-Bron (69)\" (https://donneespubliques.meteofrance.fr/FichesClim/FICHECLIM\\_69029001.pdf) (PDF). Fiche Climatologique: Statistiques 1991-2020 et records (in French). Meteo France. Retrieved 14 July 2022.\n - 41. \"Température et records en Août pour Lyon\" (https://www.meteo-lyon.net/records/mois/aout). meteo-lyon.net (in French). Météo Villes. Retrieved 7 September 2023.\n - 42. \"Lyon-Bron (07480) - WMO Weather Station\" (ftp://ftp.atdd.noaa.gov/pub/GCOS/WMO-Normals/TABLES/RE G\\_VI/FR/07480.TXT). NOAA. Retrieved 8 February 2019. Archived (https://archive.org/details/19611990Norm alsNOAALyonBron) 8 February 2019, at the Wayback Machine\n - 43. \"Normes et records 1961-1990: Lyon-Bron (69) - altitude 198m\" (https://web.archive.org/web/201603032035 26/http://www.infoclimat.fr/climatologie-07480-lyon-bron.html) (in French). Infoclimat. Archived from the original (http://www.infoclimat.fr/climatologie-07480-lyon-bron.html) on 3 March 2016. Retrieved 8 February 2019.\n - 44. \"St-Irénée - France\" (http://www.sacred-destinations.com/france/lyon-eglise-st-irenee). sacreddestinations.com .", - "page_start": 22, - "page_end": 22, - "source_file": "wikipedia4.pdf" - }, - { - "text": "Panorama of the inner city of Lyon, taken from the basilica of Notre-Dame de Fourvière's roof\n\n\n\n## Climate\n\nLyon has a humid subtropical climate (Köppen: Cfa ), bordering an oceanic climate ( Köppen : Cfb , Trewartha: Do ). [38] The mean temperature in Lyon in the coldest month is 4.1 °C (39.4 °F) in January and in the warmest month in July is 22.6 °C (72.7 °F). 
Precipitation is adequate year-round, at an average of 820 mm (32.3 in), the winter months are the driest. The highest recorded temperature was 40.5 °C (104.9 °F) on 13 August 2003 while the lowest recorded temperature was -24.6 °C (-12.3 °F) on 22 December 1938. [39]\n\nIce on the Saône, 2012\n\n", - "page_start": 4, - "page_end": 4, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- Bellecour, Écoles D'Arts.\n\n## Primary and secondary schools\n\nThere are some international private schools in the Lyon area, including:\n\n - Cité Scolaire Internationale de Lyon or the Lycée de Gerland;\n - Includes the Section Japonaises ( リヨン·ジェルラン補習授業校 Riyon Jeruran Hoshū Jugyō Kō \"Lyon Gerland Japanese Supplementary School\"), which the Japanese Ministry of Education (MEXT) counts as a part-time Japanese supplementary school [73]\n - Ombrosa;\n - International School of Lyon in nearby Sainte-Foy-lès-Lyon;\n - Montessori School of Lyon.\n\n## Supplementary education\n\nOther Japanese supplementary schools:\n\n - The Association Pour le Développement de la Langue et de la Culture Japonaises (ADLCJ; リヨン補習授業校 Riyon Hoshū Jugyō Kō ) is held in the Maison Berty Albrecht in Villeurbanne, near Lyon. [73] It was formed in 1987. [74] It serves Japanese expatriate children who wish to continue their Japanese education whilst abroad.\n\n## Transport\n\nLyon-Saint-Exupéry Airport, located east of Lyon, serves as a base for domestic and international flights. It is a key transport facility for the entire Rhône-Alpes region, with coach links to other cities in the area. The in-house train station Gare de Lyon Saint-Exupéry connects the airport to the nationwide TGV network. The Rhônexpress tram monopoly links the airport with the business quarter of La Part Dieu in less than 30 minutes, and offers connections with Underground A & B, Tramway T1, T3 & T4, and bus lines. 
Lyon public transport Sytral offers a bus service, Route 47, that links the airport to Meyzieu [75] where passengers can change onto Tram T3. The regular price of public transport is €1.90, as opposed to €15 one way for the Rhonexpress. In the suburb of Bron, the smaller Lyon-Bron Airport provides an alternative for domestic aviation.\n\nLyon has two major railway stations: Lyon-Part-Dieu, which was built to accommodate the TGV, and Lyon Perrache, an older station that now provides mostly regional service. Smaller railway stations include Gorge-de-Loup, Vaise, Saint-Paul and Jean Macé. Lyon was the first city to be connected to Paris by the TGV in 1981. [76] Since that time the TGV train network has expanded and links Lyon directly to Perpignan, Toulouse, Nice, Marseille, Strasbourg, Nantes and Lille. International trains operate directly to Madrid, Barcelona, Milan, Turin, Geneva, Frankfurt, Luxembourg, Brussels and London.\n\nThe city is at the heart of a dense road network and is located at the meeting point of several highways: A6 to Paris, A7 Marseille, A42 to Geneva, and A43 to Grenoble. The city is now bypassed by the A46. A double motorway tunnel passes under Fourvière, connecting the A6 and the A7 autoroutes, both forming the \"Autoroute du Soleil\".\n\nLyon 3: Berges du Rhône campus\n\n\n\nLyon 2: Berges du Rhône campus\n\n\n\nIPSA Lyon Campus\n\n\n\nPlatform I, Lyon-Part-Dieu train station\n\n\n\nT1 tramway on the Raymond Barre bridge\n\n", - "page_start": 18, - "page_end": 18, - "source_file": "wikipedia4.pdf" - }, - { - "text": "Early Christians in Lyon were martyred for their beliefs under the reigns of various Roman emperors, most notably Marcus Aurelius and Septimius Severus. [28] Local saints from this period include Blandina, Pothinus, and Epipodius, among others. The Greek Irenaeus was the second bishop of Lyon during the latter part of the second century. [29] To this day, the archbishop of Lyon is still referred to as \" Primat des Gaules \". 
[30]\n\nBurgundians fleeing the destruction of Worms by the Huns in 437 were re-settled in eastern Gaul. In 443 the Romans established the Kingdom of the Burgundians, and Lugdunum became its capital in 461. In 843, under the Treaty of Verdun, Lyon went to the Holy Roman Emperor Lothair I. It later was made part of the Kingdom of Arles which was incorporated into the Holy Roman Empire in 1033. Lyon did not come\n\nThe Roman-era Theatre on the\n\n\n\nFourvière Hill\n\nunder French control until the 14th century.\n\n## Modern Lyon\n\nFernand Braudel remarked, \"Historians of Lyon are not sufficiently aware of the bipolarity between Paris and Lyon, which is a constant structure in French development...from the late Middle Ages to the Industrial\n\nRevolution\". [31] In the late 15th century, the fairs introduced by Italian merchants made Lyon the economic counting house of France. Even the Bourse (treasury), built in 1749, resembled a public bazaar where accounts were settled in the open air. When international banking moved to Genoa, then Amsterdam, Lyon remained the banking centre of France.\n\nDuring the Renaissance, the city's development was driven by the silk trade, which strengthened its ties to Italy. Italian influence on Lyon's architecture is still visible among historic buildings. [32] In the late 1400s and 1500s Lyon was also a key centre of literary activity and book publishing, both of French writers (such as Maurice Scève, Antoine Heroet, and Louise Labé) and of Italians in exile (such as Luigi Alamanni and Gian Giorgio Trissino).\n\nIn 1572, Lyon was a scene of mass violence by Catholics against Protestant Huguenots in the St. Bartholomew's Day Massacre. 
Two centuries later, Lyon was again convulsed by violence during the French Revolution, when the citizenry rose up against the National Convention\n\n- • Metro density\n\n500/km 2 (1,300/sq mi)\n\nTime zone\n\nUTC+01:00 (CET)\n\n- • Summer (DST)\n\nUTC+02:00 (CEST)\n\nINSEE/Postal code\n\n69123 (https://www.inse e.fr/fr/statistiques/14055 99?geo=COM-69123) /69001-69009\n\nElevation\n\n162-349 m (531- 1,145 ft)\n\nWebsite\n\nlyon.fr (https://www.lyon. fr/)\n\n1 French Land Register data, which excludes lakes, ponds, glaciers > 1 km 2 (0.386 sq mi or 247 acres) and river estuaries.\n\n\n\n\n\n\n\n\n\n", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia4.pdf" - }, - { - "text": "## External links\n\n - Official website (http://www.lyon.fr)(in French)\n - Visit Lyon, the official website for tourism in France (https://en.visiterlyon.com/)\n - Lyon's English Language News and Information (https://thisislyon.fr/)\n - Rues de Lyon (https://www.ruesdelyon.net/) Streets, Places, Monuments (in French)\n - Old maps of Lyon (http://historic-cities.huji.ac.il/france/lyon/lyon.html) Archived (https://web.archive.org/we b/20210116220537/http://historic-cities.huji.ac.il/france/lyon/lyon.html) 16 January 2021 at the Wayback Machine, Historic cities site (http://historic-cities.huji.ac.il/historic\\_cities.html) Archived (https://web.archive. org/web/20220325051637/http://historic-cities.huji.ac.il/historic\\_cities.html) 25 March 2022 at the Wayback Machine, The National Library of Israel\n\nRetrieved from \"https://en.wikipedia.org/w/index.php?title=Lyon&oldid=1267625203\"", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia4.pdf" - }, - { - "text": "The convention was not the only target within Lyon during the French Revolution. 
After the Convention faded into history, the French Directory appeared and days after the 4 September 1797 Coup of 18 Fructidor, a Directory's commissioner was assassinated in Lyon.\n\nThe city became an important industrial town in the 19th century. In 1831 and 1834, the canuts (silk workers) of Lyon staged two major uprisings for better working conditions and pay. In 1862, the first of Lyon's extensive network of funicular railways began operation.\n\nMassacre during the Canut rebellion of 1834\n\n\n\nDuring World War II, Lyon was a centre for the occupying Nazi forces, including Klaus Barbie, the infamous \"Butcher of Lyon\". However, the city was also a\n\nstronghold of the French Resistance, the many secret passages known as traboules , enabled people to escape Gestapo raids. On 3 September 1944, Lyon was liberated by the 1st Free French Division and the Forces Françaises de l'Intérieur. The city is now home to a Resistance museum. [33][34]\n\n## Geography\n\nThe Rhône and Saône converge to the south of the historic city centre, forming a peninsula - the \" Presqu'île \" - bounded by two large hills to the west and north and a large plain eastward. Place Bellecour is located on the Presqu'île between the two rivers and is the third-largest public square in France. The broad, pedestrian-only Rue de la République leads north from Place Bellecour.\n\nThe northern hill is La Croix-Rousse, known as \"the hill that works\" because it is traditionally home to many small silk workshops, an industry for which the city has long been renowned. [35]\n\nThe Saône-Rhône confluence\n\n\n\nThe western hill is Fourvière, known as \"the hill that prays\" because it is the location for Basilica of Notre-Dame de Fourvière, several convents, and Archbishop residence. The district, Vieux Lyon, also hosts the Tour métallique (a highly visible TV tower, replicating the last stage of the Eiffel Tower) and one of the city's railways. 
[36] Fourvière, along with portions of the Presqu'île and much of La Croix-Rousse, is designated as a UNESCO World Heritage Site. [37]\n\nEast of the Rhône from the Presqu'île is a large flat area upon which sits much of modern Lyon and contains most of the city's population. Situated in this area is La Part-Dieu urban centre, which clusters the landmark structures Tour Incity, Tour Part-Dieu, Tour Oxygène, and Tour Swiss Life, as well as the city's primary railway station, Gare de Lyon-Part-Dieu.\n\nNorth of this district lays the sixth arrondissement, which is home to one of Europe's largest urban parks, the Parc de la Tête d'or, as well as Lycée du Parc and Interpol's world headquarters.", - "page_start": 3, - "page_end": 3, - "source_file": "wikipedia4.pdf" - }, - { - "text": "Both Vieux Lyon and the slopes of Croix-Rousse are known for their narrow passageways (named traboules ) that pass through buildings and link streets on either side. The first examples of traboules are thought to have been built in Lyon in the 4th century. [54] The traboules allowed the inhabitants to get from their homes to the Saône quickly and allowed the canuts on the Croix-Rousse hill to get from their workshops to the textile merchants at the foot of the hill.\n\n## Gastronomy\n\nLyon has a long and chronicled culinary arts tradition. The noted food critic Curnonsky referred to the city as \"the gastronomic capital of the world\", [55] a claim repeated by later writers such as Bill Buford. [56] Renowned 3-star Michelin chefs such as Marie Bourgeois [57] and Eugénie Brazier [58] developed Lyonnaise cuisine into a national phenomenon favoured by the French elite; a tradition which Paul Bocuse later turned into a worldwide success. [59] The bouchon is a traditional Lyonnais restaurant that serves local fare such as sausages, duck pâté or roast pork, along with local wines. 
Two of France's best known wine-growing regions are located near the city: the Beaujolais region to the north and the Côtes du Rhône region to the south. Another Lyon tradition is a type of brunch food called \"mâchons\", made of local charcuterie and usually accompanied by Beaujolais red wine. Mâchons were the customary meal of the canuts, the city's silk workers, who ate a late-morning meal after they finished their shifts in the factories. [60]\n\nOther traditional local dishes include coq au vin; quenelle; gras double; salade lyonnaise (lettuce with bacon, croûtons and a poached egg); and the sausage-based rosette lyonnaise and andouillette. Popular local confections include marron glacé and coussin de Lyon. Cervelle de canut (literally, \"silk worker's brains\") is a cheese spread/dip made of a base of fromage blanc, seasoned with chopped herbs, shallots, salt, pepper, olive oil and vinegar.\n\nPassage de l'Argue\n\n\n\nÎle Barbe bakery at the Halles de Lyon-Paul Bocuse\n\n\n\nMore recently, the french tacos was invented in Lyon suburbs (Vaulx-en-Velin) (or Grenoble according to some theories), in the early 2000s and is now famous worldwide. [61][62]\n\n## Sport\n\nLyon is home to the football club Olympique Lyonnais (OL), whose men's team plays in Ligue 1 and has won the championship of that competition seven times, all consecutively from 2002 to 2008. [63] OL played until December 2015 at the 43,000seat Stade de Gerland, which also hosted matches of the 1998 FIFA World Cup. Since 2016, the team has played at the Parc Olympique Lyonnais, a 59,000-seat stadium located in the eastern suburb of Décines-Charpieu. [64] OL operates a women's team, Olympique Lyonnais Féminin, which competes in and dominates Division 1 Féminine. 
They won fourteen consecutive top-flight championships (2007-2020), and additionally claim the four titles won by the original incarnation of FC Lyon, a\n\nParc Olympique Lyonnais\n\n\n\nwomen's football club that merged into OL in 2004 (the current FC Lyon was founded in 2009). The OL women have also won the UEFA Women's Champions League eight times, including in five consecutive editions from 2016 to 2020. Lyon hosted the 2019 FIFA Women's World Cup semi-finals as well as the Final on 7 July at Stade de Lyon.\n\nLyon has a rugby union team, Lyon OU, in the Top 14, which moved into Stade de Gerland full-time in 2017-18. In addition, Lyon has a rugby league side called Lyon Villeurbanne that plays in the French rugby league championship. The club's home is the Stade Georges Lyvet in Villeurbanne.", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia4.pdf" - }, - { - "text": "1,600,000 m 2 (17,222,256.67 sq ft) of office space and services and more than 55,000 jobs. [48] Cité Internationale , created by the architect Renzo Piano is located in the border of the Parc de la Tête d'Or in the 6th arrondissement. The worldwide headquarters of Interpol is located there. The district of Confluence , in the south of the historic centre, is a new pole of economical and cultural development.\n\nTourism is an important part of the Lyon economy, with one billion euros in 2007 and 3.5 million hotel-nights in 2006 provided by non-residents. Approximately 60% of tourists visit for business, with the rest for leisure. In January 2009, Lyon ranked first in France for hostels business. The festivals most important for attracting tourists are the Fête des lumières , the Nuits de Fourvière every summer, the Biennale d'art contemporain and the Nuits Sonores .\n\n## Culture\n\nSince the Middle Ages, the region residents have spoken several dialects of FrancoProvençal. The Lyonnais dialect was replaced by the French language as the importance of the city grew. 
However some \"frenchified\" Franco-Provençal words can also be heard in the French of the Lyonnais, who call their little boys and girls \"gones\" and \"fenottes\" for example. [49]\n\n - The Lumière brothers pioneered cinema in the town in 1895. The Institut Lumière, built as Auguste Lumiere's house, and a fascinating piece of architecture in its own right, holds many of their first inventions and other early cinematic and photographic artifacts.\n\nGuignol, created in the early 19th C., associated with the silk-workers\n\n\n\n - 8 December each year is marked by the Festival of Lights (la Fête des lumières), a celebration of thanks to the Virgin Mary, who purportedly saved the city from a deadly plague in the Middle Ages. During the event, the local population places candles ( luminions ) at their windows and the city of Lyon organizes large-scale light shows onto the sides of important Lyonnais monuments, such as the medieval Cathédrale St-Jean.\n - The Saint Francis of Sales church is famous for its large and unaltered Cavaillé-Coll pipe organ, attracting audiences from around the world.\n - The Opéra Nouvel (New Opera House) is the home of the Opéra National de Lyon. The original opera house was re-designed by the distinguished French architect Jean Nouvel between 1985 and 1993 and is named after him.\n - Lyon is also the French capital of \" trompe l'œil \" walls, a very ancient tradition. Many are to be seen around the city. This old tradition is now finding a contemporary expression, for example in the art of Guillaume Bottazzi. [50][51]\n - The Brothers of the Sacred Heart, a Roman Catholic congregation that operates schools in Europe and North America, was founded in Lyon in 1821.\n - The African Museum of Lyon is one of the oldest museums situated in Lyon. [52]\n - The Museum of Resistance and Deportation looks at the various individuals prominent in the Resistance movement in World War II. The building is strongly linked to Klaus Barbie. 
Lyon sees itself as the centre of the French resistance and many members were shot in Place Bellecour in the town centre. The exhibition is largely a series of , mini-biographies of those involved.\n - Lyon is a pilot city of the Council of Europe and the European Commission Intercultural cities program.\n\n## UNESCO World Heritage Site", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia4.pdf" - } - ] - }, - { - "references": { - "source_file": "uksi_20210538_en.pdf", - "query": " What should do the rector, vicar or curate in charge of a church or chapel to which a register of marriage services has been provided ?", - "target_page": 2, - "target_passage": "ensure that the register is kept in that church or chapel, and (b) do everything that is reasonably practicable to ensure that the register is protected against theft, loss or damage.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "- (a) indicates the descriptions of information required by each of sub-paragraphs (a) to (h) of regulation 3(2) in relation to the marriage, and\n - (b) provides corresponding spaces for recording information required by each of those subparagraphs in relation to the marriage.\n - (6) A register of marriage services provided under paragraph (1) by a parochial church council belongs to that parochial church council.\n\n## Duty to record information about marriages solemnized according to the rites of the Church of England or Church in Wales\n\n - 3. 
-(1) Paragraphs (2), (3) and (4) apply where a marriage has been solemnized according to the rites of the Church of England in a church or chapel in which banns of matrimony may be published.\n - (2) As soon as practicable after the marriage has been solemnized, the clergyman by whom the marriage was solemnized must make a record of the following information in relation to that marriage in a register of marriage services provided to the church or chapel under regulation 2(1)-\n - (a) the date and place of the marriage;\n - (b) the name and surname of each party;\n - (c) the date of birth of each party;\n - (d) the occupation (if any) of each party;\n - (e) the address of each party at the time of the marriage;\n - (f) the names and surnames of each party's parents, so far as those names and surnames are known to the clergyman who solemnized the marriage;\n - (g) the name and surname of each of the witnesses in whose presence the marriage was solemnized;\n - (h) the name and surname of the clergyman by whom the marriage was solemnized.\n - (3) The clergyman must record the information required by paragraph (2) in English, and may also record information required by that paragraph in Welsh where the church or chapel is situated in Wales.\n - (4) After making a record under paragraph (2) the clergyman must sign it.\n - (5) This regulation does not apply in relation to a marriage solemnized before 4th May 2021.\n\n## Requirements about the keeping of registers of marriage services\n\n - 4. 
-(1) The rector, vicar or curate in charge of a church or chapel to which a register of marriage services has been provided under regulation 2(1) must-\n - (a) ensure that the register is kept in that church or chapel, and\n - (b) do everything that is reasonably practicable to ensure that the register is protected against theft, loss or damage.\n - (2) Where there is no rector, vicar or curate in charge of a church or chapel to which a register of marriage services has been provided under regulation 2(1), the obligations under paragraph (1) in respect of that register fall on the churchwardens of the parish in which the church or chapel is situated.\n\nGiven under my hand on 29th April 2021\n\nAbi Tierney Registrar General", - "page_start": 1, - "page_end": 1, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "29th April 2021\n\nKevin Foster Parliamentary Under Secretary of State Home Office\n\n## EXPLANATORY NOTE\n\n(This note is not part of the Regulations)\n\nThese Regulations provide for records of marriages to be kept in churches and chapels of the Church of England and the Church in Wales, other than chapels to which Part 5 of the Marriage Act 1949 applies (naval, military and air force chapels).\n\nRegulation 2 requires parochial church councils to provide books known as 'registers of marriage services' to churches and chapels in their parish in which banns of matrimony may be published, for the purposes of keeping the records required by regulation 3. Regulation 2 also imposes requirements relating to the durability and pre-printed content of these registers, and provides that they belong to the parochial church council.\n\nRegulation 3 requires specified information to be recorded in a register of marriage services when a marriage has been solemnized on or after 4th May 2021 according to the rites of the Church of England or Church in Wales in a church or chapel in which banns of matrimony may be published. 
The record must be made and signed by the member of the clergy by whom the marriage was solemnized.\n\nRegulation 4 imposes requirements relating to the keeping of registers of marriage services provided under regulation 2.\n\nA full impact assessment has not been produced for this instrument because no, or no significant, impact on the private, public or voluntary sector is foreseen.\n\nPrinted and published in the UK by The Stationery Office Limited under the authority and superintendence of Jeff James, Controller of Her Majesty's Stationery Office and Queen's Printer of Acts of Parliament.", - "page_start": 2, - "page_end": 2, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "## S T A T U T O R Y I N S T R U M E N T S\n\n## 2021 No. 538\n\n## MARRIAGE, ENGLAND AND WALES\n\nThe Marriage (Keeping of Records in Churches and Chapels) Regulations 2021\n\nMade\n\n-\n\n-\n\n-\n\n-\n\n29th April 2021\n\nComing into force - -\n\n4th May 2021\n\nThe Registrar General makes these Regulations with the approval of the Secretary of State in exercise of the powers conferred by section 74(1)(c)(v), (1A)(a) and (3) of the Marriage Act 1949( a ).\n\n## Citation, commencement, extent and interpretation\n\n- 1. -(1) These Regulations may be cited as the Marriage (Keeping of Records in Churches and Chapels) Regulations 2021.\n- (2) These Regulations come into force on 4th May 2021.\n- (3) These Regulations extend to England and Wales.\n- (4) In these Regulations, 'chapel' does not include a chapel to which Part 5 of the Marriage Act 1949 (marriages in naval, military and air force chapels) applies( b ).\n\n## Duty of parochial church councils to provide registers of marriage services\n\n- 2. 
-(1) The parochial church council of a parish must provide books for the purpose of making records under regulation 3 to each church and chapel of the Church of England( c ) in that parish in which banns of matrimony may be published.\n- (2) Books provided under paragraph (1) are to be known as 'registers of marriage services'.\n- (3) A register of marriage services provided under paragraph (1) must meet the requirements of paragraphs (4) and (5).\n- (4) The register must be made of durable material.\n- (5) For the purposes of enabling a record to be made in the register under regulation 3 in respect of a marriage, the register must be printed in such a way that it-", - "page_start": 0, - "page_end": 0, - "source_file": "uksi_20210538_en.pdf" - }, - { - "text": "\n\nLyon Cathedral\n\n\n\nMaison du Crible (16th C.) in the Vieux Lyon\n\n\n\n\n\nÉglise Saint-Bonaventure\n\n\n\nChurch of Saint-Just, LyonManécanterie, Lyon\n\n\n\n## 17th and 18th centuries\n\n - City Hall on the Place des Terreaux, built by architects Jules Hardouin-Mansart and Robert de Cotte\n - Musée des beaux-arts de Lyon, fine arts museum housed in a former convent of the 17th century, including the Baroque chapelle Saint-Pierre\n - Hôtel-Dieu de Lyon (17th and 18th century), historical hospital with a baroque chapel\n - Temple du Change (17th and 18th century), former stock exchange of Lyon, Protestant temple since the 18th century\n - Place Bellecour, one of the largest town squares in Europe\n - Chapelle de la Trinité (1622), the first Baroque chapel built in Lyon, and part of the former École de la Trinité, now Collège-lycée Ampère\n - Église Saint-Polycarpe (1665-1670), Classical church\n - Église Saint-Just (16th to 18th century), Classical church\n - Saint-Bruno des Chartreux (17th and 18th century), church, masterpiece of Baroque architecture\n - Église Notre Dame Saint-Vincent (18th century), Neo-classical church\n\nBasilica of Saint-Martin d'AinaySaint-Nizier Church\n\n\n\nÉglise 
Saint-Paul\n\n", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- (d) to visit a person ('D') whom P reasonably believes is dying, and where P is a member of D's household or a close family member or friend of D;\n - (e) to attend the funeral of a member of P's household or a close family member;\n - (f) in other exceptional circumstances such as-\n - (i) to seek medical assistance where this is required urgently or on the advice of a registered medical practitioner including to access services from dentists, opticians, audiologists, chiropodists, chiropractors, osteopaths and other medical and health practitioners, including services relating to mental health,\n - (ii) to access critical public services including social services or services provided to victims (such as victims of crime),\n - (iii) to avoid injury or illness or to escape risk of harm,", - "page_start": 77, - "page_end": 77, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "(4) A person shall not be qualified to be appointed as S ecretary to the Independent E lectoral C om m ission if-\n\n - ( a ) he or she is not a citizen of B otsw ana;\n - ( b ) he or she has been declared insolvent or adjudged or otherw ise declared bankrupt under any law in force in any part of the C om m onw ealth and has not been discharged, or has m ade a com position w ith his or her creditors and has not paid his or her debts in full; or\n - ( c ) he or she has been convicted of any offence involving dishonesty in any country.\n\n(5) A person shall not enter upon the duties of the office of S ecretary until he or she has taken and subscribed to the oath of allegiance and such oath for the due execution of his or her office as m ay be prescribed by an A ct of P arliam ent.", - "page_start": 30, - "page_end": 30, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "- (6) For the purposes of the exercise of his or her functions under subsection (3) of this 
section, the S ecretary m ay give such directions as he or she considers necessary or expedient to any registering officer, presiding officer or returning officer relating to the exercise by that officer of his or her functions under any law regulating the registration of voters or the conduct of elections, and any officer to w hom directions are given under this subsection shall com ply w ith those directions.\n - (7) S ubject to the provisions of this section, a person holding office as S ecretary shall vacate that office on attaining the age of 65 years or such other age as m ay be prescribed by an A ct of P arliam ent.\n - (8) A holder of the office of S ecretary m ay be rem oved from office only for inability to perform the functions of his or her office (w hether arising from infirm ity of body or m ind or from any other cause) or for m isbehaviour, and shall not be so rem oved except in accordance w ith the provisions of this section.\n - (9) If the P resident considers that the question of rem oving the S ecretary ought to be investigated then-\n - ( a ) he or she shall appoint a tribunal w hich shall consist of a C hairm an and not less than tw o m em bers w ho hold or have held high judicial office;\n - ( b ) the tribunal shall enquire into and report on the facts thereof to the P resident and advise the P resident w hether the S ecretary ought to be rem oved from office under this section for inability to perform the functions of his or her office or for m isbehaviour.", - "page_start": 31, - "page_end": 31, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "Ronnie Royster\n\n\n\nRonnie Royster and his wife Ellen, not only open their hearts to children in need, they open their home to children in need of a place to stay. Ronnie, a staff accountant with Shentel for five years, became a foster parent 10 years ago when he lived in Danville, VA. Currently, he and Ellen are members of the Warren County, VA Foster Parent organization. 
They have hosted two children since moving to Warren County. In the past, they hosted three international children; one from the Philippines and two from Brazil. Ronnie and Ellen have one birth child, but they have chosen to officially adopt six other children ranging in age from an infant to 8 years old.\n\n'The Lord blessed us, so when there are children out there who need help, we are able to offer it to them.'\n\nDawn Sager, office assistant in the marketing department, has worked for Shentel since 1996. Her former job as a dispatcher with the Strasburg Police Department led to 21 years of volunteering with the Strasburg Rescue Squad. Dawn has served as secretary and building and grounds lieutenant for the squad. She currently is serving another term as the organization's treasurer. As an Emergency Medical Technician, she pulls regular duty; devoting up to 24 hours a month volunteering with the squad.\n\n'The potential is there to make a real difference for someone.'\n\nDawn Sager\n\n\n\nThere aren't many corners of Wakeman's Grove Church of the Brethren that Gary Shipe, who has worked as an installer repairman for Shentel since 1986, doesn't know. At Wakeman's Grove he serves on the executive committee, teaches Sunday school and sings in the choir. He does whatever is necessary to keep this tight-knit country church in good shape. Gary believes it is important to not just sit in the pew on Sunday.\n\n'I believe that you should help where you can.'\n\nGary Shipe\n\n\n\nAnn Masland has been a business-to-business sales representative in Central Pennsylvania for Shentel's PCS business for the past three years. In 2002, Ann joined the United Way of Carlisle and Cumberland County. In 2003, she was appointed to the United Way's Needs Assessment Committee. This nine-member group is responsible for reviewing data collected about community needs to provide an overview for the United Way board. The board uses the information as a basis for its strategic plan. 
Ann also finds the time to serve on the Board of Directors of the Carlisle, PA YWCA and is Chairperson of its Marketing Committee.\n\n'I've lived in Carlisle a long time. My kids are now old enough where I have some time to give and that's what I do.'\n\nAnn Masland\n\n\n\n\n\n■", - "page_start": 6, - "page_end": 6, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "\n\n## TIPS FOR FILLING IN YOUR COLLEGE REGISTRATION FORM\n\nApplying for college (www.oxbridgeacademy.co.za/enrol-now/) can be a daunting experience. Not only do you need to choose a course, but you also need to make sure that you:\n\n - · meet the entry requirements\n - · meet the deadlines\n - · fill in the forms correctly\n - · send the forms to the right address\n - · include all the necessary attachments\n\nTo make the college registration process easier for you, we've compiled a comprehensive guide on how to register at Oxbridge Academy (www.oxbridgeacademy.co.za/enrol-now/). The guide also includes general tips that will be relevant to the application and registration processes at other colleges.\n\n## There are 4 steps you need to follow when you want to register as a student at Oxbridge Academy:\n\n - 1. Select Your Course\n - 2. Fill in Your Student Details\n - 3. Select Your Delivery Option\n - 4. 
Pay Your Registration Fee and Send in Your Form\n\n", - "page_start": 20, - "page_end": 20, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "- (a) to provide emergency assistance;\n - (b) to provide care or assistance, including relevant personal care within the meaning of paragraph 1(1B) or 7(3B) of Schedule 4 to the Safeguarding Vulnerable Groups Act 2006( a );\n - (c) to provide medical assistance to P or to any other person who is staying in the place where P is self-isolating where this is required urgently or on the advice of a registered medical practitioner;\n - (d) to provide veterinary services where this is required urgently or on the advice of a veterinary surgeon;\n - (e) to provide critical public services including social services or services provided to victims (such as victims of crime).", - "page_start": 76, - "page_end": 76, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "tesla_form_10q.pdf", - "query": "What are Tesla's total liabilities and equity in 2024?", - "target_page": 5, - "target_passage": "119,852", - "chunk_present": { - "presence": true, - "index": 7 - } - }, - "top_chunk": [ - { - "text": "## Table of Contents\n\n## Legal Proceedings\n\n## Litigation Relating to 2018 CEO Performance Award\n\nOn June 4, 2018, a purported Tesla stockholder filed a putative class and derivative action in the Delaware Court of Chancery against Elon Musk and the members of Tesla's board of directors as then constituted, alleging corporate waste, unjust enrichment and that such board members breached their fiduciary duties by approving the stock-based compensation plan awarded to Elon Musk in 2018 (the '2018 CEO Performance Award'). Trial was held November 14-18, 2022. On January 30, 2024, the Court issued an opinion finding that the 2018 CEO Performance Award should be rescinded. 
Plaintiff's counsel filed a brief seeking a fee award of 29,402,900 Tesla shares, plus expenses of $1,120,115.50. Tesla opposed the fee request on June 7, 2024, and a hearing was held on July 8, 2024. At Tesla's 2024 Annual Meeting of Stockholders, 72% of the disinterested voting shares of Tesla, excluding shares owned by Mr. Musk and Kimbal Musk, voted to ratify the 2018 CEO Performance Award. On June 28, 2024, because Tesla's disinterested stockholders voted to ratify the 2018 CEO Performance Award, Mr. Musk and the other director defendants, joined by Tesla, filed a brief seeking to revise the Court's January 30, 2024 opinion, and a hearing was held on August 2, 2024.\n\n## Litigation Related to Directors' Compensation\n\nOn June 17, 2020, a purported Tesla stockholder filed a derivative action in the Delaware Court of Chancery, purportedly on behalf of Tesla, against certain of Tesla's current and former directors regarding compensation awards granted to Tesla's directors, other than Elon Musk, between 2017 and 2020. The suit asserts claims for breach of fiduciary duty and unjust enrichment and seeks declaratory and injunctive relief, unspecified damages and other relief. Defendants filed their answer on September 17, 2020.\n\nOn July 14, 2023, the parties filed a Stipulation and Agreement of Compromise and Settlement, which does not involve an admission of any wrongdoing by any party. If the settlement is approved by the Court, this action will be fully settled and dismissed with prejudice. Pursuant to the terms of the agreement, Tesla provided notice of the proposed settlement to stockholders of record as of July 14, 2023. The Court held a hearing regarding the settlement on October 13, 2023, after which it took the settlement and plaintiff counsels' fee request under advisement. 
On August 14, 2024, the parties submitted a joint letter requesting that the Court approve and enter final judgment with respect to the settlement, and decide the fee request at a later date. The settlement is not expected to have an adverse impact on our results of operations, cash flows or financial position.\n\n## Litigation Relating to Potential Going Private Transaction", - "page_start": 26, - "page_end": 26, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "Deferred revenue is equivalent to the total transaction price allocated to the performance obligations that are unsatisfied, or partially unsatisfied, as of the balance sheet date. Revenue recognized from the deferred revenue balances as of December 31, 2023 and 2022 was $711 million and $360 million for the nine months ended September 30, 2024 and 2023, respectively. Of the total deferred revenue balance as of September 30, 2024, we expect to recognize $821 million of revenue in the next 12 months. The remaining balance will be recognized at the time of transfer of control of the product or over the performance period.\n\nWe have financing receivables on our consolidated balance sheets related to loans we provide for financing our automotive deliveries. As of September 30, 2024 and December 31, 2023, we had current net financing receivables of $245 million and $242 million, respectively, in Accounts receivable, net, and $868 million and $1.04 billion, respectively, in Other non-current assets for the long-term portion.\n\nWe offer resale value guarantees to our commercial banking partners in connection with certain vehicle leasing programs. Under these programs, we originate the lease with our end customer and immediately transfer the lease and the underlying vehicle to our commercial banking partner, with the transaction being accounted for as a sale under ASC 606, Revenue from Contracts with Customers . 
We estimate a guarantee liability in accordance with ASC 460, Guarantees and record it within other liabilities on our consolidated balance sheet. On a quarterly basis, we assess the estimated market value of vehicles sold under this program to determine whether there have been changes to the amount of expected resale value guarantee liabilities. The total recorded guarantee liabilities on vehicles sold under this program were immaterial as of September 30, 2024 and December 31, 2023. Our maximum exposure on the guarantees we provide if they are unable to sell the vehicle at or above the vehicle's contractual residual value at the end of the lease term was $1.04 billion and $166 million as of September 30, 2024 and December 31, 2023, respectively.\n\n## Automotive Regulatory Credits\n\nAs of September 30, 2024, total transaction price allocated to performance obligations that were unsatisfied or partially unsatisfied for contracts with an original expected length of more than one year was $4.72 billion. Of this amount, we expect to recognize $683 million in the next 12 months and the rest over the remaining performance obligation period. 
Additionally, changes in regulations on automotive regulatory credits may significantly impact our remaining performance obligations and revenue to be recognized under these contracts.\n\n## Automotive Leasing Revenue\n\n## Direct Sales-Type Leasing Program\n\nLease receivables relating to sales-type leases are presented on the consolidated balance sheets as follows (in millions):\n\nTable of Contents\n\n| | September 30, 2024 | December 31, 2023 |\n|-------------------------------------------|----------------------|---------------------|\n| Gross lease receivables | $ 584 | $ 780 |\n| Unearned interest income | (48) | (78) |\n| Allowance for expected credit losses | (7) | (6) |\n| Net investment in sales-type leases | $ 529 | $ 696 |\n| Reported as: | | |\n| Prepaid expenses and other current assets | $ 171 | $ 189 |\n| Other non-current assets | 358 | 507 |\n| Net investment in sales-type leases | $ 529 | $ 696 |", - "page_start": 14, - "page_end": 14, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "## Litigation Relating to Potential Going Private Transaction\n\nBetween August 10, 2018 and September 6, 2018, nine purported stockholder class actions were filed against Tesla and Elon Musk in connection with Mr. Musk's August 7, 2018 Twitter post that he was considering taking Tesla private. On January 16, 2019, Plaintiffs filed their consolidated complaint in the United States District Court for the Northern District of California and added as defendants the members of Tesla's board of directors. The consolidated complaint asserts claims for violations of the federal securities laws and seeks unspecified damages and other relief. The parties stipulated to certification of a class of stockholders, which the court granted on November 25, 2020. Trial started on January 17, 2023, and on February 3, 2023, a jury rendered a verdict in favor of the defendants on all counts. 
After trial, plaintiffs filed a motion for judgment as a matter of law and a motion for new trial, which the Court denied and judgement was entered in favor of defendants on July 11, 2023. On July 14, 2023, plaintiffs filed a notice of appeal. The appeal, which is pending in the United States Court of Appeals for the Ninth Circuit, has been fully briefed by the parties, and is scheduled for oral argument on October 25, 2024.\n\nBetween October 17, 2018 and March 8, 2021, seven derivative lawsuits were filed in the Delaware Court of Chancery, purportedly on behalf of Tesla, against Mr. Musk and the members of Tesla's board of directors, as constituted at relevant times, in relation to statements made and actions connected to a potential going private transaction, with certain of the lawsuits challenging additional Twitter posts by Mr. Musk, among other things. Several of those actions were consolidated, and all have been stayed. In addition to these cases, two derivative lawsuits were filed on October 25, 2018 and February 11, 2019 in the U.S. District Court for the District of Delaware, purportedly on behalf of Tesla, against Mr. Musk and the members of the Tesla board of directors as then constituted. Those cases have also been consolidated and stayed pending resolution of the appeal in the above-referenced consolidated purported stockholder class action.", - "page_start": 26, - "page_end": 26, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "## Table of Contents\n\nOn October 21, 2022, a lawsuit was filed in the Delaware Court of Chancery by a purported shareholder of Tesla alleging, among other things, that board members breached their fiduciary duties in connection with their oversight of the Company's 2018 settlement with the SEC, as amended. Among other things, the plaintiff seeks reforms to the Company's corporate governance and internal procedures, unspecified damages, and attorneys' fees. 
The lawsuit has been stayed pending resolution of a motion to consolidate certain derivative lawsuits in the Delaware Court of Chancery referenced below.\n\nOn November 15, 2021, JPMorgan Chase Bank ('JP Morgan') filed a lawsuit against Tesla in the Southern District of New York alleging breach of a stock warrant agreement that was entered into as part of a convertible notes offering in 2014. In 2018, JP Morgan informed Tesla that it had adjusted the strike price based upon Mr. Musk's August 7, 2018 Twitter post that he was considering taking Tesla private. Tesla disputed JP Morgan's adjustment as a violation of the parties' agreement. In 2021, Tesla delivered shares to JP Morgan per the agreement, which they duly accepted. JP Morgan now alleges that it is owed approximately $162 million as the value of additional shares that it claims should have been delivered as a result of the adjustment to the strike price in 2018. On January 24, 2022, Tesla filed multiple counterclaims as part of its answer to the underlying lawsuit, asserting among other points that JP Morgan should have terminated the stock warrant agreement in 2018 rather than make an adjustment to the strike price that it should have known would lead to a commercially unreasonable result. Tesla believes that the adjustments made by JP Morgan were neither proper nor commercially reasonable, as required under the stock warrant agreements. JP Morgan filed a motion for judgment on the pleadings, which Tesla opposed, and on September 12, 2024, the Court denied JP Morgan's motion.\n\n## Certain Derivative Lawsuits in Delaware\n\nBefore converting from a Delaware to Texas corporation on June 13, 2024, three separate derivative actions brought by purported Tesla stockholders were filed in the Delaware Court of Chancery on May 24, June 10 and June 13, 2024, purportedly on behalf of Tesla, against current and former directors regarding topics involving Elon Musk and others, X Corp. (formerly Twitter) and x.AI. 
These suits assert various claims, including breach of fiduciary duty and breach of contract, and seek unspecified damages and other relief. On August 6, 2024, the plaintiffs in these three actions moved to consolidate the matters into a single case, and a hearing on that motion is scheduled for November 18, 2024.\n\n## Litigation and Investigations Relating to Alleged Discrimination and Harassment\n\nOn February 9, 2022, the California Civil Rights Department ('CRD,' formerly 'DFEH') filed a civil complaint against Tesla in Alameda County, California Superior Court, alleging systemic race discrimination, hostile work environment and pay equity claims, among others. CRD's amended complaint seeks monetary damages and injunctive relief. The case is currently in discovery. Trial is scheduled for September 15, 2025.\n\nAdditionally, on June 1, 2022 the Equal Employment Opportunity Commission ('EEOC') issued a cause finding against Tesla that closely parallels the CRD's allegations. On September 28, 2023, the EEOC filed a civil complaint against Tesla in the United States District Court for the Northern District of California asserting claims for race harassment and retaliation and seeking, among other things, monetary and injunctive relief.", - "page_start": 27, - "page_end": 27, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "On June 16, 2022, two Tesla stockholders filed separate derivative actions in the U.S. District Court for the Western District of Texas, purportedly on behalf of Tesla, against certain of Tesla's current and former directors. Both suits assert claims for breach of fiduciary duty, unjust enrichment, and violation of the federal securities laws in connection with alleged race and gender discrimination and sexual harassment. Among other things, plaintiffs seek declaratory and injunctive relief, unspecified damages payable to Tesla, and attorneys' fees. 
On July 22, 2022, the Court consolidated the two cases and on September 6, 2022, plaintiffs filed a consolidated complaint. On November 7, 2022, the defendants filed a motion to dismiss the case and on September 15, 2023, the Court dismissed the action but granted plaintiffs leave to file an amended complaint. On November 2, 2023, plaintiff filed an amended complaint purportedly on behalf of Tesla, against Elon Musk. On December 19, 2023, the defendants moved to dismiss the amended complaint, which the Court granted on April 12, 2024, with leave for plaintiffs to amend. On May 15, 2024, plaintiffs filed a second amended consolidated complaint purportedly on behalf of Tesla, against Mr. Musk. On July 1, 2024, the defendants moved to dismiss the second amended consolidated complaint.", - "page_start": 27, - "page_end": 27, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "On March 14, 2023, a proposed class action was filed against Tesla, Inc. in the U.S. District Court for the Northern District of California. Several similar complaints were also filed in the same court and these cases have now all been consolidated. These complaints allege that Tesla violates federal antitrust and warranty laws through its repair, service, and maintenance practices and seeks, among other relief, damages for persons who paid Tesla for repairs services or Tesla compatible replacement parts from March 2019 to March 2023. On July 17, 2023, these plaintiffs filed a consolidated amended complaint. On September 27, 2023, the court granted Tesla's motion to compel arbitration as to three of the plaintiffs, and on November 17, 2023, the court granted Tesla's motion to dismiss without prejudice. The plaintiffs filed a Consolidated Second Amended Complaint on December 12, 2023, which Tesla moved to dismiss. Plaintiffs also appealed the court's arbitration order, which was denied. 
On June 17, 2024, the Court granted in part and denied in part Tesla's motion to dismiss the Consolidated Second Amended Complaint.\n\nThe Company intends to vigorously defend itself in these matters; however, we cannot predict the outcome or impact. We are unable to reasonably estimate the possible loss or range of loss, if any, associated with these claims, unless noted.", - "page_start": 28, - "page_end": 28, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "| Total current liabilities | 4,606 | 3,002 | 1,604 | 53 | |\n| Provisions | 40 | 31 | 9 | 29 | Increased due to costs associated with exiting and ceasing the use of certain sites. |\n| Long-term debt | 12,173 | 10,441 | 1,732 | 17 | Increased due to issuances of long-term debt in March 2013 and October 2013. |\n| Derivative instruments | 83 | 417 | (334) | (80) | Reflects the change in market values of our derivatives due to scheduled settlements, new transactions and changes in interest and foreign exchange rates |\n| Other long-term liabilities | 328 | 458 | (130) | (28) | Mainly reflects the decrease in pension liability due to an increase in discount rates. |\n| Deferred tax liability | 1,702 | 1,501 | 201 | 13 | Mainly reflects additional temporary differences arising from property, plant and equipment, goodwill and intangible assets. |\n| Total liabilities | 18,932 | 15,850 | 3,082 | 19 | |\n| Shareholders' equity | 4,669 | 3,768 | 901 | 24 | Includes changes in retained earnings and equity reserves. 
|\n| Total liabilities and shareholders' equity | $23,601 | $19,618 | 3,983 | 20 | |", - "page_start": 60, - "page_end": 60, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "| Other non-current assets | 4,989 | 4,531 |\n| Total assets | $ 119,852 | $ 106,618 |\n| Liabilities | | |\n| Current liabilities | | |\n| Accounts payable | $ 14,654 | $ 14,431 |\n| Accrued liabilities and other | 10,601 | 9,080 |\n| Deferred revenue | 3,031 | 2,864 |\n| Current portion of debt and finance leases | 2,291 | 2,373 |\n| Total current liabilities | 30,577 | 28,748 |\n| Debt and finance leases, net of current portion | 5,405 | 2,857 |\n| Deferred revenue, net of current portion | 3,350 | 3,251 |\n| Other long-term liabilities | 9,810 | 8,153 |\n| Total liabilities | 49,142 | 43,009 |\n| Commitments and contingencies (Note 10) | | |\n| Redeemable noncontrolling interests in subsidiaries | 70 | 242 |\n| Equity | | |\n| Stockholders' equity | | |\n| Preferred stock; $0.001 par value; 100 shares authorized; no shares issued and outstanding | - | - |", - "page_start": 4, - "page_end": 4, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "## ITEM 2. MANAGEMENT'S DISCUSSION AND ANALYSIS OF FINANCIAL CONDITION AND RESULTS OF OPERATIONS\n\nThe following discussion and analysis should be read in conjunction with the consolidated financial statements and the related notes included elsewhere in this Quarterly Report on Form 10-Q.\n\n## Overview\n\nOur mission is to accelerate the world's transition to sustainable energy. We design, develop, manufacture, lease and sell high-performance fully electric vehicles, solar energy generation systems and energy storage products. We also offer maintenance, installation, operation, charging, insurance, financial and other services related to our products. 
Additionally, we are increasingly focused on products and services based on AI, robotics and automation.\n\nIn 2024, we produced approximately 1,314,000 consumer vehicles and delivered approximately 1,294,000 consumer vehicles through the third quarter. We are focused on profitable growth, including by leveraging existing factories and production lines to introduce new and more affordable products, further improving and deploying our FSD capabilities, including through our planned robotaxi product, reducing costs, increasing vehicle production, utilized capacity and delivery capabilities, improving and developing our vehicles and battery technologies, vertically integrating and localizing our supply chain, and expanding our global infrastructure, including our service and charging infrastructure.\n\nIn 2024, we deployed 20.41 GWh of energy storage products through the third quarter. We are focused on ramping the production and increasing the market penetration of our energy storage products.\n\nDuring the three and nine months ended September 30, 2024, we recognized total revenues of $25.18 billion and $71.98 billion, respectively, representing increases of $1.83 billion and $377 million, respectively, compared to the same periods in the prior year. During the three and nine months ended September 30, 2024, our net income attributable to common stockholders was $2.17 billion and $4.77 billion, respectively, representing an increase of $314 million and a decrease of $2.30 billion, respectively, compared to the same periods in the prior year. 
We continue to ramp production and build and optimize our manufacturing capacity, expand our operations while focusing on further cost reductions and operational efficiencies to enable increased deliveries and deployments of our products, and invest in research and development to accelerate our AI, software, and fleet-based profits for further revenue growth.\n\nWe ended the third quarter of 2024 with $33.65 billion in cash and cash equivalents and investments, representing an increase of $4.55 billion from the end of 2023. Our cash flows provided by operating activities were $10.11 billion during the nine months ended September 30, 2024, compared to $8.89 billion during the same period ended September 30, 2023, representing an increase of $1.22 billion. Capital expenditures amounted to $8.56 billion during the nine months ended September 30, 2024, compared to $6.59 billion during the same period ended September 30, 2023, representing an increase of $1.96 billion. Overall growth has allowed our business to generally fund itself, and we will continue investing in a number of capital-intensive projects and research and development in upcoming periods.\n\n## Management Opportunities, Challenges and Uncertainties and 2024 Outlook\n\n## Automotive-Production\n\nThe following is a summary of the status of production of each of our announced vehicle models in production and under development, as of the date of this Quarterly Report on Form 10-Q:\n\nTable of Contents", - "page_start": 31, - "page_end": 31, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "## Table of Contents\n\n## PART II. OTHER INFORMATION\n\n## ITEM 1. LEGAL PROCEEDINGS\n\nFor a description of our material pending legal proceedings, please see Note 10, Commitments and Contingencies , to the consolidated financial statements included elsewhere in this Quarterly Report on Form 10-Q.\n\n## ITEM 1A. 
RISK FACTORS\n\nOur operations and financial results are subject to various risks and uncertainties, including the factors discussed in Part I, Item 1A, Risk Factors in our Annual Report on Form 10-K for the year ended December 31, 2023, which could adversely affect our business, financial conditions and future results.\n\n## ITEM 2. UNREGISTERED SALES OF EQUITY SECURITIES AND USE OF PROCEEDS\n\nIn connection with the offering of 2.00% Convertible Senior Notes due 2024 in May 2019, we sold warrants to each of Société Générale, Wells Fargo Bank, National Association, Credit Suisse Capital LLC (later assigned to UBS AG, London Branch) and Goldman, Sachs & Co. LLC (together, the '2019 Warrantholders'). Between August 19, 2024 and September 30, 2024, we issued an aggregate of 8,506,223 shares of our common stock to the 2019 Warrantholders pursuant to their exercise of such warrants, which were net of the applicable exercise prices. Such shares were issued pursuant to an exemption from registration provided by Rule 3(a)(9) of the Securities Act of 1933.\n\n## ITEM 3. DEFAULTS UPON SENIOR SECURITIES\n\nNone.\n\n## ITEM 4. MINE SAFETY DISCLOSURES\n\nNot applicable.\n\n## ITEM 5. OTHER INFORMATION\n\nNone of the Company's directors or officers adopted, modified or terminated a Rule 10b5-1 trading arrangement or a non-Rule 10b5-1 trading arrangement during the Company's fiscal quarter ended September 30, 2024, as such terms are defined under Item 408(a) of Regulation S-K, except as follows:\n\nOn July 25, 2024, Robyn Denholm, one of our directors, adopted a Rule 10b5-1 trading arrangement for the potential sale of up to 674,345 shares of our common stock (all resulting from stock options expiring in June 2025), subject to certain conditions. 
The arrangement's expiration date is June 18, 2025.\n\nOn July 31, 2024, Kimbal Musk, one of our directors, adopted a Rule 10b5-1 trading arrangement for the potential sale of up to 152,088 shares of our common stock, subject to certain conditions. The arrangement's expiration date is May 30, 2025.\n\nOn August 12, 2024, Kathleen Wilson-Thompson, one of our directors, adopted a Rule 10b5-1 trading arrangement for the potential sale of up to 300,000 shares of our common stock, subject to certain conditions. The arrangement's expiration date is February 28, 2025.", - "page_start": 46, - "page_end": 46, - "source_file": "tesla_form_10q.pdf" - } - ] - }, - { - "references": { - "source_file": "tesla_form_10q.pdf", - "query": "Where was Tesla incorporated? ", - "target_page": 13, - "target_passage": "State of Delaware", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## Table of Contents\n\n## Legal Proceedings\n\n## Litigation Relating to 2018 CEO Performance Award\n\nOn June 4, 2018, a purported Tesla stockholder filed a putative class and derivative action in the Delaware Court of Chancery against Elon Musk and the members of Tesla's board of directors as then constituted, alleging corporate waste, unjust enrichment and that such board members breached their fiduciary duties by approving the stock-based compensation plan awarded to Elon Musk in 2018 (the '2018 CEO Performance Award'). Trial was held November 14-18, 2022. On January 30, 2024, the Court issued an opinion finding that the 2018 CEO Performance Award should be rescinded. Plaintiff's counsel filed a brief seeking a fee award of 29,402,900 Tesla shares, plus expenses of $1,120,115.50. Tesla opposed the fee request on June 7, 2024, and a hearing was held on July 8, 2024. At Tesla's 2024 Annual Meeting of Stockholders, 72% of the disinterested voting shares of Tesla, excluding shares owned by Mr. 
Musk and Kimbal Musk, voted to ratify the 2018 CEO Performance Award. On June 28, 2024, because Tesla's disinterested stockholders voted to ratify the 2018 CEO Performance Award, Mr. Musk and the other director defendants, joined by Tesla, filed a brief seeking to revise the Court's January 30, 2024 opinion, and a hearing was held on August 2, 2024.\n\n## Litigation Related to Directors' Compensation\n\nOn June 17, 2020, a purported Tesla stockholder filed a derivative action in the Delaware Court of Chancery, purportedly on behalf of Tesla, against certain of Tesla's current and former directors regarding compensation awards granted to Tesla's directors, other than Elon Musk, between 2017 and 2020. The suit asserts claims for breach of fiduciary duty and unjust enrichment and seeks declaratory and injunctive relief, unspecified damages and other relief. Defendants filed their answer on September 17, 2020.\n\nOn July 14, 2023, the parties filed a Stipulation and Agreement of Compromise and Settlement, which does not involve an admission of any wrongdoing by any party. If the settlement is approved by the Court, this action will be fully settled and dismissed with prejudice. Pursuant to the terms of the agreement, Tesla provided notice of the proposed settlement to stockholders of record as of July 14, 2023. The Court held a hearing regarding the settlement on October 13, 2023, after which it took the settlement and plaintiff counsels' fee request under advisement. On August 14, 2024, the parties submitted a joint letter requesting that the Court approve and enter final judgment with respect to the settlement, and decide the fee request at a later date. 
The settlement is not expected to have an adverse impact on our results of operations, cash flows or financial position.\n\n## Litigation Relating to Potential Going Private Transaction", - "page_start": 26, - "page_end": 26, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "On March 14, 2023, a proposed class action was filed against Tesla, Inc. in the U.S. District Court for the Northern District of California. Several similar complaints were also filed in the same court and these cases have now all been consolidated. These complaints allege that Tesla violates federal antitrust and warranty laws through its repair, service, and maintenance practices and seeks, among other relief, damages for persons who paid Tesla for repairs services or Tesla compatible replacement parts from March 2019 to March 2023. On July 17, 2023, these plaintiffs filed a consolidated amended complaint. On September 27, 2023, the court granted Tesla's motion to compel arbitration as to three of the plaintiffs, and on November 17, 2023, the court granted Tesla's motion to dismiss without prejudice. The plaintiffs filed a Consolidated Second Amended Complaint on December 12, 2023, which Tesla moved to dismiss. Plaintiffs also appealed the court's arbitration order, which was denied. On June 17, 2024, the Court granted in part and denied in part Tesla's motion to dismiss the Consolidated Second Amended Complaint.\n\nThe Company intends to vigorously defend itself in these matters; however, we cannot predict the outcome or impact. 
We are unable to reasonably estimate the possible loss or range of loss, if any, associated with these claims, unless noted.", - "page_start": 28, - "page_end": 28, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "## Table of Contents\n\nOn October 21, 2022, a lawsuit was filed in the Delaware Court of Chancery by a purported shareholder of Tesla alleging, among other things, that board members breached their fiduciary duties in connection with their oversight of the Company's 2018 settlement with the SEC, as amended. Among other things, the plaintiff seeks reforms to the Company's corporate governance and internal procedures, unspecified damages, and attorneys' fees. The lawsuit has been stayed pending resolution of a motion to consolidate certain derivative lawsuits in the Delaware Court of Chancery referenced below.\n\nOn November 15, 2021, JPMorgan Chase Bank ('JP Morgan') filed a lawsuit against Tesla in the Southern District of New York alleging breach of a stock warrant agreement that was entered into as part of a convertible notes offering in 2014. In 2018, JP Morgan informed Tesla that it had adjusted the strike price based upon Mr. Musk's August 7, 2018 Twitter post that he was considering taking Tesla private. Tesla disputed JP Morgan's adjustment as a violation of the parties' agreement. In 2021, Tesla delivered shares to JP Morgan per the agreement, which they duly accepted. JP Morgan now alleges that it is owed approximately $162 million as the value of additional shares that it claims should have been delivered as a result of the adjustment to the strike price in 2018. On January 24, 2022, Tesla filed multiple counterclaims as part of its answer to the underlying lawsuit, asserting among other points that JP Morgan should have terminated the stock warrant agreement in 2018 rather than make an adjustment to the strike price that it should have known would lead to a commercially unreasonable result. 
Tesla believes that the adjustments made by JP Morgan were neither proper nor commercially reasonable, as required under the stock warrant agreements. JP Morgan filed a motion for judgment on the pleadings, which Tesla opposed, and on September 12, 2024, the Court denied JP Morgan's motion.\n\n## Certain Derivative Lawsuits in Delaware\n\nBefore converting from a Delaware to Texas corporation on June 13, 2024, three separate derivative actions brought by purported Tesla stockholders were filed in the Delaware Court of Chancery on May 24, June 10 and June 13, 2024, purportedly on behalf of Tesla, against current and former directors regarding topics involving Elon Musk and others, X Corp. (formerly Twitter) and x.AI. These suits assert various claims, including breach of fiduciary duty and breach of contract, and seek unspecified damages and other relief. On August 6, 2024, the plaintiffs in these three actions moved to consolidate the matters into a single case, and a hearing on that motion is scheduled for November 18, 2024.\n\n## Litigation and Investigations Relating to Alleged Discrimination and Harassment\n\nOn February 9, 2022, the California Civil Rights Department ('CRD,' formerly 'DFEH') filed a civil complaint against Tesla in Alameda County, California Superior Court, alleging systemic race discrimination, hostile work environment and pay equity claims, among others. CRD's amended complaint seeks monetary damages and injunctive relief. The case is currently in discovery. Trial is scheduled for September 15, 2025.\n\nAdditionally, on June 1, 2022 the Equal Employment Opportunity Commission ('EEOC') issued a cause finding against Tesla that closely parallels the CRD's allegations. 
On September 28, 2023, the EEOC filed a civil complaint against Tesla in the United States District Court for the Northern District of California asserting claims for race harassment and retaliation and seeking, among other things, monetary and injunctive relief.", - "page_start": 27, - "page_end": 27, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "## Litigation Relating to Potential Going Private Transaction\n\nBetween August 10, 2018 and September 6, 2018, nine purported stockholder class actions were filed against Tesla and Elon Musk in connection with Mr. Musk's August 7, 2018 Twitter post that he was considering taking Tesla private. On January 16, 2019, Plaintiffs filed their consolidated complaint in the United States District Court for the Northern District of California and added as defendants the members of Tesla's board of directors. The consolidated complaint asserts claims for violations of the federal securities laws and seeks unspecified damages and other relief. The parties stipulated to certification of a class of stockholders, which the court granted on November 25, 2020. Trial started on January 17, 2023, and on February 3, 2023, a jury rendered a verdict in favor of the defendants on all counts. After trial, plaintiffs filed a motion for judgment as a matter of law and a motion for new trial, which the Court denied and judgement was entered in favor of defendants on July 11, 2023. On July 14, 2023, plaintiffs filed a notice of appeal. The appeal, which is pending in the United States Court of Appeals for the Ninth Circuit, has been fully briefed by the parties, and is scheduled for oral argument on October 25, 2024.\n\nBetween October 17, 2018 and March 8, 2021, seven derivative lawsuits were filed in the Delaware Court of Chancery, purportedly on behalf of Tesla, against Mr. 
Musk and the members of Tesla's board of directors, as constituted at relevant times, in relation to statements made and actions connected to a potential going private transaction, with certain of the lawsuits challenging additional Twitter posts by Mr. Musk, among other things. Several of those actions were consolidated, and all have been stayed. In addition to these cases, two derivative lawsuits were filed on October 25, 2018 and February 11, 2019 in the U.S. District Court for the District of Delaware, purportedly on behalf of Tesla, against Mr. Musk and the members of the Tesla board of directors as then constituted. Those cases have also been consolidated and stayed pending resolution of the appeal in the above-referenced consolidated purported stockholder class action.", - "page_start": 26, - "page_end": 26, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "## Table of Contents\n\n## Other Litigation Related to Our Products and Services\n\nWe are also subject to various lawsuits that seek monetary and other injunctive relief. These lawsuits include proposed class actions and other consumer claims that allege, among other things, purported defects and misrepresentations related to our products and services. For example, on September 14, 2022, a proposed class action was filed against Tesla, Inc. and related entities in the U.S. District Court for the Northern District of California, alleging various claims about the Company's driver assistance technology systems under state and federal law. This case was later consolidated with several other proposed class actions, and a Consolidated Amended Complaint was filed on October 28, 2022, which seeks damages and other relief on behalf of all persons who purchased or leased from Tesla between January 1, 2016, to the present. On October 5, 2022, a proposed class action complaint was filed in the U.S. 
District Court for the Eastern District of New York asserting similar state and federal law claims against the same defendants. On September 30, 2023, the Court dismissed this action with leave to amend the complaint. On November 20, 2023, the plaintiff moved to amend the complaint, which Tesla opposed. On August 8, 2024, the Court denied the plaintiff's motion for leave to file an amended complaint and entered judgment for Tesla. On September 5, 2024, the plaintiff filed a notice of appeal to United States Court of Appeals for the Second Circuit. On March 22, 2023, the plaintiffs in the Northern District of California consolidated action filed a motion for a preliminary injunction to order Tesla to (1) cease using the term 'Full Self-Driving Capability' (FSD Capability), (2) cease the sale and activation of FSD Capability and deactivate FSD Capability on Tesla vehicles, and (3) provide certain notices to consumers about proposed courtfindings about the accuracy of the use of the terms Autopilot and FSD Capability. Tesla opposed the motion. On September 30, 2023, the Court denied the request for a preliminary injunction, compelled four of five plaintiffs to arbitration, and dismissed the claims of the fifth plaintiff with leave to amend the complaint. On October 31, 2023, the remaining plaintiff in the Northern District of California action filed an amended complaint, which Tesla moved to dismiss, and on May 15, 2024, the Court granted in part and denied in part Tesla's motion. On October 2, 2023, a similar proposed class action was filed in San Diego County Superior Court in California. Tesla subsequently removed the San Diego County case to federal court and on January 8, 2024, the federal court granted Tesla's motion to transfer the case to the U.S. District Court for the Northern District of California. 
Tesla moved to compel arbitration, which the plaintiff did not oppose, and on June 27, 2024, the Court stayed the case pending arbitration.\n\nOn February 27, 2023, a proposed class action was filed in the U.S. District Court for the Northern District of California against Tesla, Inc., Elon Musk and certain current and former Company executives. The complaint alleges that the defendants made material misrepresentations and omissions about the Company's Autopilot and FSD Capability technologies and seeks money damages and other relief on behalf of persons who purchased Tesla stock between February 19, 2019, and February 17, 2023. An amended complaint was filed on September 5, 2023, naming only Tesla, Inc. and Elon Musk as defendants. On November 6, 2023, Tesla moved to dismiss the amended complaint. On September 30, 2024, the Court granted Tesla's motion to dismiss without prejudice.", - "page_start": 28, - "page_end": 28, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "On June 16, 2022, two Tesla stockholders filed separate derivative actions in the U.S. District Court for the Western District of Texas, purportedly on behalf of Tesla, against certain of Tesla's current and former directors. Both suits assert claims for breach of fiduciary duty, unjust enrichment, and violation of the federal securities laws in connection with alleged race and gender discrimination and sexual harassment. Among other things, plaintiffs seek declaratory and injunctive relief, unspecified damages payable to Tesla, and attorneys' fees. On July 22, 2022, the Court consolidated the two cases and on September 6, 2022, plaintiffs filed a consolidated complaint. On November 7, 2022, the defendants filed a motion to dismiss the case and on September 15, 2023, the Court dismissed the action but granted plaintiffs leave to file an amended complaint. On November 2, 2023, plaintiff filed an amended complaint purportedly on behalf of Tesla, against Elon Musk. 
On December 19, 2023, the defendants moved to dismiss the amended complaint, which the Court granted on April 12, 2024, with leave for plaintiffs to amend. On May 15, 2024, plaintiffs filed a second amended consolidated complaint purportedly on behalf of Tesla, against Mr. Musk. On July 1, 2024, the defendants moved to dismiss the second amended consolidated complaint.", - "page_start": 27, - "page_end": 27, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "## Table of Contents\n\nWe are focused on growing our manufacturing capacity, which includes capacity for manufacturing newer vehicle models such as our Cybertruck, Tesla Semi and future vehicles utilizing aspects of our next generation platform, and ramping the production at our Gigafactories to their installed production capacities as well as increasing production rate and efficiency at our current factories. The next phase of production growth will depend on the continued ramp at our factories and be initiated by advances in autonomy and the introduction of new products, including those built on our next generation vehicle platform, as well as our ability to add to our available sources of battery cell supply by manufacturing our own cells that we are developing to have high-volume output, lower capital and production costs and longer range. Our goals are to improve vehicle performance, decrease production costs and increase affordability and customer awareness.\n\nThese plans are subject to uncertainties inherent in establishing and ramping manufacturing operations, which may be exacerbated by new product and manufacturing technologies we introduce, the number of concurrent international projects, any industry-wide component constraints, labor shortages and any future impact from events outside of our control. 
For example, during the first quarter of 2024, we experienced a sequential decline in production volumes partially caused by the early phase of the production ramp of the updated Model 3 at our Fremont factory, and factory shutdowns at Gigafactory BerlinBrandenburg resulting from shipping diversions caused by the Red Sea conflict and an arson attack. Moreover, we have set ambitious technological targets with our plans for battery cells as well as for iterative manufacturing and design improvements for our vehicles with each new factory.\n\n## Automotive-Demand, Sales, Deliveries and Infrastructure\n\nOur cost reduction efforts, cost innovation strategies, and additional localized procurement and manufacturing are key to our vehicles' affordability and have allowed us to competitively price our vehicles. We will also continue to generate demand by improving our vehicles' performance and functionality, including through product offerings and features based on artificial intelligence such as Autopilot, FSD (Supervised), and other software, and delivering new vehicles and vehicle options. In addition, we have been increasing awareness, and expanding our vehicle financing programs, including attractive leasing terms for our customers. 
Moreover, we expect to continue to benefit from ongoing electrification of the automotive sector and increasing environmental regulations and initiatives.", - "page_start": 33, - "page_end": 33, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "Table of Contents\n\n| Production Location | Vehicle Model(s) | Production Status |\n|--------------------------------|--------------------------|---------------------|\n| Fremont Factory | Model S / Model X | Active |\n| | Model 3 / Model Y | Active |\n| Gigafactory Shanghai | Model 3 / Model Y | Active |\n| Gigafactory Berlin-Brandenburg | Model Y | Active |\n| Gigafactory Texas | Model Y | Active |\n| | Cybertruck | Active |\n| Gigafactory Nevada | Tesla Semi | Pilot production |\n| Various | Next Generation Platform | In development |\n| TBD | Roadster | In development |", - "page_start": 31, - "page_end": 31, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "| Nissan Chuo Parts Sales Co., Ltd. | Yokohama, Kanagawa | Sales of automobile repair parts | ¥545 | 80.61 |\n| US | | | | |\n| Nissan North America, Inc. | Gardena, California | Management of North American subsidiaries, manufacture and sales of automobiles and parts | $1,791 | 100.00 |\n| Nissan Motor Acceptance Corporation | Torrance California | Finance of wholesale and retail automobile sales in US | $499 | 100.00 |\n| Nissan Motor Corporation in Hawaii, Ltd. | Honolulu, Hawaii | Sales of automobiles and parts | $6 | 100.00 |\n| Nissan Capital of America, Inc. | Torrance, California | Financing for group companies | $1 | 100.00 |\n| Nissan Technical Center North America, Inc. | Farmington Hills Michigan | Research and development, testing | $16 | 100.00 |\n| Nissan Motor Insurance Corporation | Honolulu, Hawaii | Casualty insurance | $10 | 100.00 |\n| Nissan Forklift Co., North America | Marengo, Illinois | Manufacture and sales of forklifts and parts | $34 | 100.00 |\n| Canada | | | | |\n| Nissan Canada, Inc. 
| Mississauga, Ontario | Sales of automobiles and parts | CAN$68 | 100.00 |\n| Mexico | | | | |\n| Nissan Mexicana, S.A. de C.V. | Mexico D.F. | Manufacture and sales of automobiles and parts | P17,056 | 100.00 |", - "page_start": 107, - "page_end": 107, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## Tesla, Inc.\n\n## Consolidated Statements of Operations\n\n(in millions, except per share data) (unaudited)\n\nTable of Contents", - "page_start": 6, - "page_end": 6, - "source_file": "tesla_form_10q.pdf" - } - ] - }, - { - "references": { - "source_file": "tesla_form_10q.pdf", - "query": "What is the reason for the increase in Tesla's tax rate from 2023 to 2024?", - "target_page": 26, - "target_passage": " increase in our effective tax rate is primarily due to the impact of releasing the valuation allowance on our U.S. deferred tax assets in the fourth quarter of 2023 and changes in the mix of our jurisdictional earnings", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## Table of Contents\n\n## Legal Proceedings\n\n## Litigation Relating to 2018 CEO Performance Award\n\nOn June 4, 2018, a purported Tesla stockholder filed a putative class and derivative action in the Delaware Court of Chancery against Elon Musk and the members of Tesla's board of directors as then constituted, alleging corporate waste, unjust enrichment and that such board members breached their fiduciary duties by approving the stock-based compensation plan awarded to Elon Musk in 2018 (the '2018 CEO Performance Award'). Trial was held November 14-18, 2022. On January 30, 2024, the Court issued an opinion finding that the 2018 CEO Performance Award should be rescinded. Plaintiff's counsel filed a brief seeking a fee award of 29,402,900 Tesla shares, plus expenses of $1,120,115.50. Tesla opposed the fee request on June 7, 2024, and a hearing was held on July 8, 2024. 
At Tesla's 2024 Annual Meeting of Stockholders, 72% of the disinterested voting shares of Tesla, excluding shares owned by Mr. Musk and Kimbal Musk, voted to ratify the 2018 CEO Performance Award. On June 28, 2024, because Tesla's disinterested stockholders voted to ratify the 2018 CEO Performance Award, Mr. Musk and the other director defendants, joined by Tesla, filed a brief seeking to revise the Court's January 30, 2024 opinion, and a hearing was held on August 2, 2024.\n\n## Litigation Related to Directors' Compensation\n\nOn June 17, 2020, a purported Tesla stockholder filed a derivative action in the Delaware Court of Chancery, purportedly on behalf of Tesla, against certain of Tesla's current and former directors regarding compensation awards granted to Tesla's directors, other than Elon Musk, between 2017 and 2020. The suit asserts claims for breach of fiduciary duty and unjust enrichment and seeks declaratory and injunctive relief, unspecified damages and other relief. Defendants filed their answer on September 17, 2020.\n\nOn July 14, 2023, the parties filed a Stipulation and Agreement of Compromise and Settlement, which does not involve an admission of any wrongdoing by any party. If the settlement is approved by the Court, this action will be fully settled and dismissed with prejudice. Pursuant to the terms of the agreement, Tesla provided notice of the proposed settlement to stockholders of record as of July 14, 2023. The Court held a hearing regarding the settlement on October 13, 2023, after which it took the settlement and plaintiff counsels' fee request under advisement. On August 14, 2024, the parties submitted a joint letter requesting that the Court approve and enter final judgment with respect to the settlement, and decide the fee request at a later date. 
The settlement is not expected to have an adverse impact on our results of operations, cash flows or financial position.\n\n## Litigation Relating to Potential Going Private Transaction", - "page_start": 26, - "page_end": 26, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "Automotive sales revenue decreased $4.06 billion, or 7%, in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023, primarily due to lower average selling price on our vehicles driven by overall price reductions and attractive financing options provided year over year as well as mix. Additionally, there was a decrease of approximately 17,000 combined Model 3 and Model Y cash deliveries partially due to the early phase of the production ramp of the updated Model 3 at our Fremont factory. The decreases were partially offset by an increase of approximately 19,000 deliveries of other models primarily due to our production ramp of Cybertruck and an increase in FSD revenue compared to the prior period, as discussed above.\n\nAutomotive regulatory credits revenue increased $185 million, or 33%, in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023. Automotive regulatory credits revenue increased $714 million, or 53%, in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. These increases were driven by demand for credits in North America as other automobile manufacturers scale back on their battery electric vehicle plans.\n\nAutomotive leasing revenue decreased $43 million, or 9%, in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023. Automotive leasing revenue decreased $240 million, or 15%, in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. 
The decreases were primarily due to lower direct sales-type leasing deliveries and a decrease in lease buyouts.\n\nServices and other revenue increased $624 million, or 29%, in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023. Services and other revenue increased $1.53 billion, or 25%, in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. The increases were primarily due to increases in non-warranty maintenance services and collision revenue, used vehicle revenue, paid Supercharging revenue, insurance services revenue and part sales revenue.\n\n## Energy Generation and Storage Segment\n\nEnergy generation and storage revenue increased $817 million, or 52%, in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023. Energy generation and storage revenue increased $2.43 billion, or 53%, in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. 
The increases were primarily due to increases in Megapack and Powerwall deployments compared to the prior periods.", - "page_start": 35, - "page_end": 35, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "Gross margin for total automotive increased from 18.7% to 20.1% in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023 primarily due to lower average combined cost per unit of our vehicles, an increase in FSD revenue and an increase in regulatory credits revenue, partially offset by lower average selling price on our vehicles, as discussed above.\n\nGross margin for total automotive decreased from 19.7% to 19.0% in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023 primarily due to lower average selling price on our vehicles and temporary under-utilization of manufacturing capacity during production ramps, partially offset by lower average combined cost per unit of our vehicles, an increase in regulatory credits revenue and an increase in FSD revenue, as discussed above.\n\nGross margin for total automotive & services and other segment increased from 17.4% to 18.7% in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023. Gross margin for total automotive & services and other segment decreased from 18.5% to 17.6% in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. The changes in gross margin are primarily due to the automotive gross margin factors discussed above.\n\n## Energy Generation and Storage Segment\n\nCost of energy generation and storage revenue increased $473 million, or 40%, in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023. Cost of energy generation and storage revenue increased $1.39 billion, or 37%, in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. 
The increases in cost of revenues were primarily due to increases in Megapack and Powerwall deployments, partially offset by increases in IRA manufacturing credits recognized as compared to the prior periods.\n\nGross margin for energy generation and storage increased from 24.4% to 30.5% in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023. Gross margin for energy generation and storage increased from 18.0% to 26.6% in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. The increases were primarily due to margin improvements for our energy storage products driven by cost reductions, including benefits from IRA manufacturing credits, and a higher proportion of our storage business, which operated at a higher gross margin, within the segment as compared to the prior periods.\n\n## Research and Development Expense\n\nTable of Contents\n\n| | Three Months Ended September 30, | Three Months Ended September 30, | | | Nine Months Ended September 30, | Nine Months Ended September 30, | Change | Change |\n|-----------------------------|------------------------------------|------------------------------------|---------|-------|-----------------------------------|-----------------------------------|----------|----------|\n| (Dollars in millions) | 2024 | 2023 | $ | % | 2024 | 2023 | $ | % |\n| Research and development | $ 1,039 | $ 1,161 | $ (122) | (11)% | $ 3,264 | $ 2,875 | $ 389 | 14 % |\n| As a percentage of revenues | 4 % | 5 % | | | 5 % | 4 % | | |", - "page_start": 39, - "page_end": 39, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "## Automotive & Services and Other Segment\n\nCost of automotive sales revenue increased $87 million, or 1%, in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023 due to the increases in deliveries year over year as discussed above, partially offset by a decrease in the average combined 
cost per unit of our vehicles primarily from lower raw material costs, freight and duties as well as mix.\n\nCost of automotive sales revenue decreased $2.32 billion, or 5%, in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023 due to a decrease in the average combined cost per unit of our vehicles primarily from lower raw material costs, freight and duties as well as mix, in addition to the volume changes in deliveries year over year as discussed above. The decreases were partially offset by higher costs for Cybertruck and the updated Model 3 at our Fremont factory as a result of the temporary under-utilization of manufacturing capacity as production ramps.\n\nCost of automotive leasing revenue decreased $54 million, or 18%, in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023. Cost of automotive leasing revenue decreased $211 million, or 22%, in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. The decreases were primarily due to a decrease in direct sales-type leasing cost of revenue driven by lower deliveries and a decrease in our direct operating lease cost of revenue driven by lower lease payoffs compared to the prior periods.\n\nCost of services and other revenue increased $507 million, or 25%, in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023. Cost of services and other revenue increased $1.47 billion, or 26%, in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. The increases were primarily due to volume increases in used vehicle sales, insurance services, paid Supercharging, non-warranty maintenance services and collision and part sales.", - "page_start": 37, - "page_end": 37, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "- 200. 
\"Big tech and the pursuit of AI dominance\" (https://www.economist.com/business/2023/03/2 6/big-tech-and-the-pursuit-of-ai-dominance). The Economist . 26 March 2023. Archived (http s://web.archive.org/web/20231229021351/https://www.economist.com/business/2023/03/26/ big-tech-and-the-pursuit-of-ai-dominance) from the original on 29 December 2023.\n - 201. Fung, Brian (19 December 2023). \"Where the battle to dominate AI may be won\" (https://ww w.cnn.com/2023/12/19/tech/cloud-competition-and-ai/index.html). CNN Business . Archived (https://web.archive.org/web/20240113053332/https://www.cnn.com/2023/12/19/tech/cloudcompetition-and-ai/index.html) from the original on 13 January 2024.\n - 202. Metz, Cade (5 July 2023). \"In the Age of A.I., Tech's Little Guys Need Big Friends\" (https://w ww.nytimes.com/2023/07/05/business/artificial-intelligence-power-data-centers.html). The New York Times . Archived (https://web.archive.org/web/20240708214644/https://www.nytim es.com/2023/07/05/business/artificial-intelligence-power-data-centers.html) from the original on 8 July 2024. Retrieved 5 October 2024.\n - 203. \"Electricity 2024 - Analysis\" (https://www.iea.org/reports/electricity-2024). IEA . 24 January 2024. Retrieved 13 July 2024.\n - 204. Calvert, Brian (28 March 2024). \"AI already uses as much energy as a small country. It's only the beginning\" (https://www.vox.com/climate/2024/3/28/24111721/ai-uses-a-lot-of-ener gy-experts-expect-it-to-double-in-just-a-few-years). Vox . New York, New York. Archived (http s://web.archive.org/web/20240703080555/https://www.vox.com/climate/2024/3/28/2411172 1/ai-uses-a-lot-of-energy-experts-expect-it-to-double-in-just-a-few-years) from the original on 3 July 2024. Retrieved 5 October 2024.\n - 205. Halper, Evan; O'Donovan, Caroline (21 June 2024). \"AI is exhausting the power grid. 
Tech firms are seeking a miracle solution\" (https://www.washingtonpost.com/business/2024/06/2 1/artificial-intelligence-nuclear-fusion-climate/?utm\\_campaign=wp\\_post\\_most&utm\\_medium =email&utm\\_source=newsletter&wpisrc=nl\\_most&carta-url=https%3A%2F%2Fs2.washingto npost.com%2Fcar-ln-tr%2F3e0d678%2F6675a2d2c2c05472dd9ec0f4%2F596c09009bbc0f 20865036e7%2F12%2F52%2F6675a2d2c2c05472dd9ec0f4). Washington Post .\n - 206. Davenport, Carly. \"AI Data Centers and the Coming YS Power Demand Surge\" (https://web. archive.org/web/20240726080428/https://www.goldmansachs.com/intelligence/pages/gs-res earch/generational-growth-ai-data-centers-and-the-coming-us-power-surge/report.pdf) (PDF). Goldman Sachs . Archived from the original (https://www.goldmansachs.com/intellige nce/pages/gs-research/generational-growth-ai-data-centers-and-the-coming-us-power-surg e/report.pdf) (PDF) on 26 July 2024. Retrieved 5 October 2024.\n - 207. Ryan, Carol (12 April 2024). \"Energy-Guzzling AI Is Also the Future of Energy Savings\" (http s://www.wsj.com/business/energy-oil/ai-data-centers-energy-savings-d602296e). Wall Street Journal . Dow Jones.\n - 208. Hiller, Jennifer (1 July 2024). \"Tech Industry Wants to Lock Up Nuclear Power for AI\" (https:// www.wsj.com/business/energy-oil/tech-industry-wants-to-lock-up-nuclear-power-for-ai-6cb7 5316?mod=djem10point). Wall Street Journal . Dow Jones. Archived (https://web.archive.or g/web/20241005165650/https://www.wsj.com/business/energy-oil/tech-industry-wants-to-loc k-up-nuclear-power-for-ai-6cb75316?mod=djem10point) from the original on 5 October 2024. Retrieved 5 October 2024.\n - 209. Kendall, Tyler (28 September 2024). \"Nvidia's Huang Says Nuclear Power an Option to Feed Data Centers\" (https://www.bloomberg.com/news/articles/2024-09-27/nvidia-s-huang-s ays-nuclear-power-an-option-to-feed-data-centers). 
Bloomberg .", - "page_start": 41, - "page_end": 41, - "source_file": "wikipedia3.pdf" - }, - { - "text": "## Table of Contents\n\nOur provision for income taxes increased by $434 million in the three months ended September 30, 2024 and increased by $652 million in the nine months ended September 30, 2024 as compared to the three and nine months ended September 30, 2023, respectively. Our effective tax rate increased from 8% to 22% in the three months ended September 30, 2024 and increased from 10% to 23% in the nine months ended September 30, 2024 as compared to the three and nine months ended September 30, 2023, respectively. These increases are primarily due to the impact of releasing the valuation allowance on our U.S. deferred tax assets in the fourth quarter of 2023 and changes in mix of jurisdictional earnings.\n\nSee Note 9, Income Taxes , to the consolidated financial statements included elsewhere in this Quarterly Report on Form 10-Q for further details.\n\n## Liquidity and Capital Resources\n\nWe expect to continue to generate net positive operating cash flow as we have done in the last five fiscal years. The cash we generate from our core operations enables us to fund ongoing operations and production, our research and development projects for new products and technologies including our proprietary battery cells, additional manufacturing ramps at existing manufacturing facilities, the construction of future factories, and the continued expansion of our retail and service locations, body shops, Mobile Service fleet, Supercharger, including to support NACS, energy product installation capabilities and autonomy and other artificial intelligence enabled products.\n\nIn addition, because a large portion of our future expenditures will be to fund our growth, we expect that if needed we will be able to adjust our capital and operating expenditures by operating segment. 
For example, if our near-term manufacturing operations decrease in scale or ramp more slowly than expected, including due to global economic or business conditions, we may choose to correspondingly slow the pace of our capital expenditures. Finally, we continually evaluate our cash needs and may decide it is best to raise additional capital or seek alternative financing sources to fund the rapid growth of our business, including through drawdowns on existing or new debt facilities or financing funds. Conversely, we may also from time to time determine that it is in our best interests to voluntarily repay certain indebtedness early.\n\nAccordingly, we believe that our current sources of funds will provide us with adequate liquidity during the 12-month period following September 30, 2024, as well as in the long-term.\n\nSee the sections below for more details regarding the material requirements for cash in our business and our sources of liquidity to meet such needs.\n\n## Material Cash Requirements\n\nFrom time to time in the ordinary course of business, we enter into agreements with vendors for the purchase of components and raw materials to be used in the manufacture of our products. However, due to contractual terms, variability in the precise growth curves of our development and production ramps, and opportunities to renegotiate pricing, we generally do not have binding and enforceable purchase orders under such contracts beyond the short-term, and the timing and magnitude of purchase orders beyond such period is difficult to accurately project.", - "page_start": 42, - "page_end": 42, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "## Energy Generation and Storage Segment\n\n## Energy Generation and Storage Sales\n\nWe record as deferred revenue any non-refundable amounts that are collected from customers related to prepayments, which is recognized as revenue ratably over the respective customer contract term. 
As of September 30, 2024 and December 31, 2023, deferred revenue related to such customer payments amounted to $1.73 billion and $1.60 billion, respectively, mainly due to contractual payment terms. Revenue recognized from the deferred revenue balances as of December 31, 2023 and 2022 was $1.09 billion and $511 million for the nine months ended September 30, 2024 and 2023, respectively. As of September 30, 2024, total transaction price allocated to performance obligations that were unsatisfied or partially unsatisfied for contracts with an original expected length of more than one year was $6.61 billion. Of this amount, we expect to recognize $4.23 billion in the next 12 months and the rest over the remaining performance obligation period.\n\nWe have financing receivables on our consolidated balance sheets related to loans we provide for financing our energy products. As of September 30, 2024 and December 31, 2023, we had current net financing receivables of $32 million and $31 million, respectively, in Accounts receivable, net, and $641 million and $578 million, respectively, in Other non-current assets for the long-term portion.\n\n## Income Taxes\n\nWe are subject to income taxes in the U.S. and in many foreign jurisdictions. Significant judgment is required in determining our provision for income taxes, our deferred tax assets and liabilities and any valuation allowance recorded against our net deferred tax assets that are not more likely than not to be realized. We monitor the realizability of our deferred tax assets taking into account all relevant factors at each reporting period. 
In completing our assessment of realizability of our deferred tax assets, we consider our history of income (loss) measured at pre-tax income (loss) adjusted for permanent book-tax differences on a jurisdictional basis, volatility in actual earnings, excess tax benefits related to stock-based compensation in recent prior years and impacts of the timing of reversal of existing temporary differences. We also rely on our assessment of the Company's projected future results of business operations, including uncertainty in future operating results relative to historical results, volatility in the market price of our common stock and its performance over time, variable macroeconomic conditions impacting our ability to forecast future taxable income, and changes in business that may affect the existence and magnitude of future taxable income. Our valuation allowance assessment is based on our best estimate of future results considering all available information.\n\nOur provision for or benefit from income taxes for interim periods is determined using an estimate of our annual effective tax rate, adjusted for discrete items, if any, that are taken into account in the relevant period. 
Each quarter, we update our estimate of the annual effective tax rate, and if our estimated tax rate changes, we make a cumulative adjustment.\n\n## Net Income per Share of Common Stock Attributable to Common Stockholders\n\nThe following table presents the reconciliation of net income attributable to common stockholders to net income used in computing basic and diluted net income per share of common stock (in millions):\n\nTable of Contents", - "page_start": 15, - "page_end": 15, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "Research and development ('R&D') expenses decreased $122 million, or 11%, in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023 primarily due to a decrease in vehicle programs, partially offset by an increase in AI related costs year over year. R&D expenses as a percentage of revenue decreased from 5% to 4% in the three months ended September 30, 2024 as compared to the three months ended September 30, 2023 primarily due to lower R&D expenses in the current period.\n\nR&D expenses increased $389 million, or 14%, in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023. The overall increases were primarily driven by additional costs year over year related to AI programs. 
R&D expenses as a percentage of revenue increased from 4% to 5% in the nine months ended September 30, 2024 as compared to the nine months ended September 30, 2023 as we continue to expand our product roadmap and technologies.\n\n## Selling, General and Administrative Expense\n\n| | Three Months Ended September 30, | Three Months Ended September 30, | | | Nine Months Ended September 30, | Nine Months Ended September 30, | Change | Change |\n|-------------------------------------|------------------------------------|------------------------------------|--------|------|-----------------------------------|-----------------------------------|----------|----------|\n| (Dollars in millions) | 2024 | 2023 | $ | % | 2024 | 2023 | $ | % |\n| Selling, general and administrative | $ 1,186 | $ 1,253 | $ (67) | (5)% | $ 3,837 | $ 3,520 | $ 317 | 9 % |\n| As a percentage of revenues | 5 % | 5 % | | | 5 % | 5 % | | |", - "page_start": 39, - "page_end": 39, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "On June 16, 2022, two Tesla stockholders filed separate derivative actions in the U.S. District Court for the Western District of Texas, purportedly on behalf of Tesla, against certain of Tesla's current and former directors. Both suits assert claims for breach of fiduciary duty, unjust enrichment, and violation of the federal securities laws in connection with alleged race and gender discrimination and sexual harassment. Among other things, plaintiffs seek declaratory and injunctive relief, unspecified damages payable to Tesla, and attorneys' fees. On July 22, 2022, the Court consolidated the two cases and on September 6, 2022, plaintiffs filed a consolidated complaint. On November 7, 2022, the defendants filed a motion to dismiss the case and on September 15, 2023, the Court dismissed the action but granted plaintiffs leave to file an amended complaint. On November 2, 2023, plaintiff filed an amended complaint purportedly on behalf of Tesla, against Elon Musk. 
On December 19, 2023, the defendants moved to dismiss the amended complaint, which the Court granted on April 12, 2024, with leave for plaintiffs to amend. On May 15, 2024, plaintiffs filed a second amended consolidated complaint purportedly on behalf of Tesla, against Mr. Musk. On July 1, 2024, the defendants moved to dismiss the second amended consolidated complaint.", - "page_start": 27, - "page_end": 27, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "- Wong, Matteo (19 May 2023), \"ChatGPT Is Already Obsolete\" (https://www.theatlantic.com/tech nology/archive/2023/05/ai-advancements-multimodal-models/674113/), The Atlantic , archived (https://web.archive.org/web/20240918022529/https://www.theatlantic.com/technol ogy/archive/2023/05/ai-advancements-multimodal-models/674113/) from the original on 18 September 2024, retrieved 5 October 2024", - "page_start": 65, - "page_end": 65, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0764.pdf", - "query": "Which is the first candidate for experimenting the case of electrons interacting with a single boson mode?", - "target_page": 6, - "target_passage": "The primary candidate for such mode is an optical phonon", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "FIG. 4: Top - a conductivity plot for the BCSI case in the presence of a lattice. The parameters are ∆ = 30 meV , Γ = 3 . 5 meV . Bottom - the behavior of Kubo sums. Note that (a) the spectral weight in the NS is always greater in the SCS, (b) the spectral weight decreases with Γ, and (c) the difference between NS and SCS decreases as Γ increases.\n\n\n\nlittle variation of ∆ W ( ω c ) at above 0 . 1 -0 . 3 eV what implies that for larger ω c , ∆ W ( ω c ) ≈ ∆ W K >> ∆ f ( ω c ).\n\nTo make this more quantitative, we compare in Fig. 
6 ∆ W ( ω c ) obtained for a constant DOS, when ∆ W ( ω c ) = ∆ f ( ω c ), and for the actual lattice dispersion, when ∆ W ( ω c ) = ∆ W K + ∆ f ( ω c ). In the clean limit there is obviously little cutoff dependence beyond 0 . 1 eV , i.e., ∆ f ( ω c ) is truly small, and the difference between the two cases is just ∆ W K . In the dirty limit, the situation is similar, but there is obviously more variation with ω c , and ∆ f ( ω c ) becomes truly small only above 0 . 3 eV . Note also that the position of the dip in ∆ W ( ω c ) in the clean limit is at a larger ω c in the presence of the lattice than in a continuum.\n\n## B. The Einstein boson model\n\nWe next consider the case of electrons interacting with a single boson mode which by itself is not affected by superconductivity. The primary candidate for such mode is an optical phonon. The imaginary part of the NS self energy has been discussed numerous times in the literature. We make one simplifying assumption - approximate the DOS by a constant in calculating fermionic self-energy. We will, however, keep the full lattice dispersion in the calculations of the optical integral. The advantage of this\n\nFIG. 5: The evolution of optical integral in NS(top) and SCS(bottom) for BCSI case. Plots are made for clean limit (solid lines, Γ = 3 . 5 meV ) and dirty limit (dashed lines, Γ = 150 meV ) for ∆ = 30 meV . Observe that (a) W (0) = 0 in the NS, but has a non-zero value in the SCS because of the δ -function (this value decreases in the dirty limit), and (b) the flat region in the SCS is due to the fact that σ ' ( ω ) = 0 for Ω < 2∆. Also note that ∼ 90 -95% of the spectral weight is recovered up to 1 eV\n\n\n\napproximation is that the self-energy can be computed analytically. 
The full self-energy obtained with the lattice dispersion is more involved and can only be obtained numerically, but its structure is quite similar to the one obtained with a constant DOS.\n\nThe self-energy for a constant DOS is given by\n\nΣ( iω ) = -i 2 π λ n ∫ d/epsilon1 k d ( i Ω) χ ( i Ω) G ( /epsilon1 k , iω + i Ω) (13)\n\nwhere\n\nχ ( i Ω) = ω 2 0 ω 2 0 -( i Ω) 2 (14)\n\nand λ n is a dimensionless electron-boson coupling. Integrating and transforming to real frequencies, we obtain\n\nΣ '' ( ω ) = -π 2 λ n ω o Θ( | ω | -ω o )\n\nIn the SCS, we obtain for ω < 0\n\nΣ ' ( ω ) = -1 2 λ n ω o log ∣ ∣ ∣ ∣ ω + ω o ω -ω o ∣ ∣ ∣ ∣ (15)\n\nΣ '' ( ω ) = -π 2 λ n ω o Re ( ω + ω o √ ( ω + ω o ) 2 -∆ 2 )", - "page_start": 5, - "page_end": 5, - "source_file": "1001.0764.pdf" - }, - { - "text": "## Optical Integral and Sum Rule Violation\n\nSaurabh Maiti, Andrey V. Chubukov\n\nDepartment of Physics, University of Wisconsin, Madison, Wisconsin 53706, USA\n\n(Dated: November 9, 2018)\n\nThe purpose of this work is to investigate the role of the lattice in the optical Kubo sum rule in the cuprates. We compute conductivities, optical integrals W , and ∆ W between superconducting and normal states for 2-D systems with lattice dispersion typical of the cuprates for four different models - a dirty BCS model, a single Einstein boson model, a marginal Fermi liquid model, and a collective boson model with a feedback from superconductivity on a collective boson. The goal of the paper is two-fold. First, we analyze the dependence of W on the upper cut-off ( ω c ) placed on the optical integral because in experiments W is measured up to frequencies of order bandwidth. For a BCS model, the Kubo sum rule is almost fully reproduced at ω c equal to the bandwidth. But for other models only 70%-80% of Kubo sum rule is obtained up to this scale and even less so for ∆ W , implying that the Kubo sum rule has to be applied with caution. Second, we analyze the sign of ∆ W . 
In all models we studied ∆ W is positive at small ω c , then crosses zero and approaches a negative value at large ω c , i.e. the optical integral in a superconductor is smaller than in a normal state. The point of zero crossing, however, increases with the interaction strength and in a collective boson model becomes comparable to the bandwidth at strong coupling. We argue that this model exhibits the behavior consistent with that in the cuprates.\n\n## I. INTRODUCTION\n\nThe analysis of sum rules for optical conductivity has a long history. Kubo, in an extensive paper 1 in 1957, used a general formalism of a statistical theory of irreversible processes to investigate the behavior of the conductivity in electronic systems. For a system of interacting electrons, he derived the expression for the integral of the real part of a (complex) electric conductivity σ (Ω) and found that it is independent on the nature of the interactions and reduces to\n\n∫ ∞ 0 Reσ (Ω) d Ω = π 2 ne 2 m (1)\n\nHere n is the density of the electrons in the system and m is the bare mass of the electron. This expression is exact provided that the integration extends truly up to infinity, and its derivation uses the obvious fact that at energies higher than the total bandwidth of a solid, electrons behave as free particles.\n\nThe independence of the r.h.s. of Eq. (1) on temperature and the state of a solid (e.g., a normal or a superconducting state - henceforth referred to as NS and SCS respectively) implies that, while the functional form of σ (Ω) changes with, e.g., temperature, the total spectral weight is conserved and only gets redistributed between different frequencies as temperature changes. This conservation of the total weight of σ (Ω) is generally called a sum rule.\n\nOne particular case, studied in detail for conventional superconductors, is the redistribution of the spectral weight between normal and superconducting states. 
This is known as Ferrel-Glover-Tinkham (FGT) sum rule: 2,3\n\n∫ ∞ 0+ Reσ NS (Ω) = ∫ ∞ 0+ Reσ sc (Ω) + πn s e 2 2 m (2)\n\nwhere n s is the superfluid density, and πn s e 2 / (2 m ) is\n\nthe spectral weight under the δ -functional piece of the conductivity in the superconducting state.\n\nIn practice, the integration up to an infinite frequency is hardly possible, and more relevant issue for practical applications is whether a sum rule is satisfied, at least approximately, for a situation when there is a single electron band which crosses the Fermi level and is well separated from other bands. Kubo considered this case in the same paper of 1957 and derived the expression for the 'band', or Kubo sum rule\n\n∫ ' ∞ ' 0 Reσ (Ω) d Ω = W K = πe 2 2 N ∑ /vector k ∇ 2 /vector k x ε /vector k n /vector k (3)", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0764.pdf" - }, - { - "text": "## I. INTRODUCTION\n\nThe nonvanishing neutrino masses have been confirmed by various neutrino oscillation phenomena and indicate the evidence of new physics beyond the Standard Model. The most attractive idea to naturally explain the tiny neutrino masses is the seesaw mechanism [1], in which the right-handed (RH) neutrinos singlet under the SM gauge group are introduced. The minimal gauged U (1) B -L model based on the gauge group SU (3) C × SU (2) L × U (1) Y × U (1) B -L [2] is an elegant and simple extension of the SM, in which the RH neutrinos of three generations are necessarily introduced because of the gauge and gravitational anomaly cancellations. In addition, the mass of RH neutrinos arises associated with the U (1) B -L gauge symmetry breaking.\n\nAlthough the scale of the B -L gauge symmetry breaking is basically arbitrary as long as phenomenological constraints are satisfied, one interesting option is to take it to be the TeV scale [3]. 
It has been recently pointed out [4] that when the classical conformal invariance is imposed on the minimal U (1) B -L model, the symmetry breaking scale appears to be the TeV scale naturally. If this is the case, all new particles, the Z ' gauge boson, the B -L Higgs boson H and the RH neutrinos appear at the TeV scale unless the U (1) B -L gauge coupling is extremely small, and they can be discovered at Large Hadron Collider [5-8]. Then we may be able to understand the relation between the gauge symmetry breaking and the origin of neutrino masses.\n\nAlthough such a TeV scale model is interesting and appealing, one might think that the absence of dark matter (DM) candidate is a shortcoming of this model. A sterile RH neutrino with mass of the order of MeV is one possibility [9]. In this paper, we propose a very simple idea to introduce the DM candidate in the minimal gauged U (1) B -L model. We introduce the Z 2 parity into the model and impose one of three RH neutrinos to be odd, while the others even. In this way, the Z 2 -odd RH neutrino becomes stable and the DM candidate. Note that two RH neutrinos are enough to reconcile with the observed neutrino oscillation data, with a prediction of one massless light neutrino. Therefore, without introducing any additional new dynamical degrees of freedom, the DM particle arises in the minimal gauged U (1) B -L model.\n\nThe paper is organized as follows. In the next section, we briefly describe our model. In section III, we estimate the thermal relic density of the RH neutrino and identify the model", - "page_start": 1, - "page_end": 1, - "source_file": "1002.2525.pdf" - }, - { - "text": "high-energy fermions and is an input for the low-energy theory. Below we follow Refs. 31,33 and assume that the momentum dependence of a collective boson is flat near ( π, π ). The self energy within such model has been worked out consistently in Ref. 31,33. 
In the normal state\n\nΣ '' ( ω ) = -1 2 λ n ω sf log ( 1 + ω 2 ω 2 sf ) ω (19)\n\nΣ ' ( ω ) = -λ n ω sf arctan ω sf\n\nwhere λ n is the spin-fermion coupling constant, and ω sf is a typical spin relaxation frequency of overdamped spin collective excitations with a propagator\n\nχ ( q ∼ Q, Ω) = χ Q 1 -i Ω ω sf (20)\n\nwhere χ Q is the uniform static susceptibility. If we use Ornstein-Zernike form of χ ( q ) and use either Eliashberg 45 or FLEX computational schemes 48 , we get rather similar behavior of Σ as a function of frequency and rather similar behavior of optical integrals.\n\nThe collective nature of spin fluctuations is reflected in the fact that the coupling λ and the bosonic frequency ω sf are related: λ scales as ξ 2 , where ξ is the bosonic mass (the distance to a bosonic instability), and ω sf ∝ ξ -2 (see Ref. 49). For a flat χ ( q ∼ Q ) the product λω sf does not depend on ξ and is the overall dimensional scale for boson-mediated interactions.\n\nIn the SCS fermionic excitations acquire a gap. This gap affects fermionic self-energy in two ways: directly, via the change of the dispersion of an intermediate boson in the exchange process involving a CB, and indirectly, via the change of the propagator of a CB. We remind ourselves that the dynamics of a CB comes from a particlehole bubble which is indeed affected by ∆.\n\nThe effect of a d -wave pairing gap on a CB has been discussed in a number of papers, most recently in 31 . In\n\na SCS a gapless continuum described by Eq. (20) transforms into a gaped continuum, with a gap about 2∆ and a resonance at ω = ω 0 < 2∆, where for a d -wave gap we define ∆ as a maximum of a d -wave gap.\n\nThe spin susceptibility near ( π, π ) in a superconductor can generally be written up as\n\nχ ( q ∼ Q, Ω) = χ Q 1 -i Π(Ω) ω sf (21)\n\nwhere Π is evaluated by adding up the bubbles made out of two normal and two anomalous Green's functions. 
Below 2∆, Π(Ω) is real ( ∼ Ω 2 / ∆ for small Ω), and the resonance emerges at Ω = ω 0 at which Π( ω 0 ) = ω sf . At frequencies larger than 2∆, Π(Ω) has an imaginary part, and this gives rise to a gaped continuum in χ (Ω).\n\nThe imaginary part of the spin susceptibility around the resonance frequency ω 0 is 31\n\nχ '' ( q, Ω) = πZ o ω 0 2 δ (Ω -ω 0 ) (22)\n\nwhere Z o ∼ 2 ω sf χ 0 / ∂ Π ∂ω | Ω= ω 0 . The imaginary part of the spin susceptibility describing a gaped continuum exists for for Ω ≥ 2∆ and is\n\nχ '' ( q, Ω) = Im [ χ 0 1 -1 ω sf ( 4∆ 2 Ω D ( 4∆ 2 Ω 2 ) + i Ω K 2 (1 -4∆ 2 Ω 2 ) ) ]\n\n≈ Im [ χ 0 1 -1 ω sf ( π ∆ 2 Ω + i π 2 Ω ) ] f or Ω >> 2∆ (23)\n\nIn Eq. (23) D ( x ) = K 1 ( x ) -K 2 ( x ) x , and K 1 ( x ) and K 2 ( x ) are Elliptic integrals of first and second kind. The real part of χ is obtained by Kramers-Kronig transform of the imaginary part.\n\nSubstituting Eq 6 for χ ( q, Ω) into the formula for the self-energy one obtains Σ '' ( ω ) in a SCS state as a sum of two terms 31\n\nΣ '' ( ω ) = Σ '' A ( ω ) + Σ '' B ( ω ) (24)\n\nwhere,\n\ncomes from the interaction with the resonance and\n\nΣ '' A ( ω ) = πZ o 2 λ n ω o Re ( ω + ω o √ ( ω + ω o ) 2 -∆ 2 )\n\nΣ '' B ( ω ) = -λ n ∫ | E | 2∆ dxRe ω + x √ ( ω + x ) 2 -∆ 2 x ω sf K 2 ( 1 -4∆ 2 x 2 ) [ 1 -4∆ 2 xω sf D ( 4∆ 2 x 2 ) ] 2 + [ x ω sf K 2 ( 1 -4∆ 2 x 2 ) ] 2 (25)\n\ncomes from the interaction with the gaped continuum.\n\nThe real part of Σ is obtained by Kramers-Kronig trans-", - "page_start": 10, - "page_end": 10, - "source_file": "1001.0764.pdf" - }, - { - "text": "From Eq. (19), one can see that σ ( p ) SI ∝ (sin 2 θ/v ' ) 2 for a given DM mass m N . Fig. 3 shows the spin-independent cross section of RH neutrino with a proton. The resultant cross section is found to be far below the current limits reported by XENON10 [24] and CDMSII [25]: σ SI /lessorsimilar 4 × 10 -8 -2 × 10 -7 pb, for a DM mass of 100 GeV-1 TeV. 
Future experiments such as XENON1T [26] can reach the cross section predicted in our model.\n\nFIG. 3: The spin independent scattering cross section with a proton. All parameters are same as those used in the previous section. The upper and lower lines correspond to sin θ = 0 . 7 and 0 . 3, respectively.\n\n\n\n## IV. SUMMARY\n\nWe have proposed a scenario of the RH neutrino dark matter in the context of the minimal gauged U (1) B -L model. We have introduced a discrete Z 2 parity in the model, so that one RH neutrino assigned as Z 2 -odd can be stable and, hence, the DM candidate, while the other two RH neutrinos account for neutrino masses and mixings through the seesaw mechanism. No additional degrees of freedom are necessary to be added. We have evaluated the relic density of the dark matter particle. The dominant annihilation modes are via the Higgs boson exchange processes in the s -channel and thus, our model can be called Higgs portal DM model. It has been found that the relic density consistent with the current observation", - "page_start": 7, - "page_end": 7, - "source_file": "1002.2525.pdf" - }, - { - "text": "The results for the conductivity within a spin-fermion model depend in quantitative (but not qualitative) way on the assumption for the momentum dispersion of a collective boson. This momentum dependence comes from", - "page_start": 9, - "page_end": 9, - "source_file": "1001.0764.pdf" - }, - { - "text": "chirality interactions in cold atom optical lattices has been proposed 38 .\n\nOur model (8) is achieved at second order of the perturbation series. Higher order terms become truncation errors but may be controlled by small parameters λ x,y,z /J cluster ∼ √ | J x,y,z | /J cluster .\n\n## V. CONCLUSIONS.\n\nWe constructed the exactly solvable Kitaev honeycomb model 1 as the exact low energy effective Hamiltonian of a spin-1/2 model [equations (8) or (9)] with spin-rotation and time reversal symmetry. 
The spin in Kitaev model is represented as the pseudo-spin in the two-fold degenerate spin singlet subspace of a cluster of four antiferromagnetically coupled spin-1/2 moments. The physical spin model is a honeycomb lattice of such four-spin clusters, with certain inter-cluster interactions. The machinery for the exact mapping to pseudo-spin Hamiltonian was developed (see e.g. TABLE I), which is quite general and can be used to construct other interesting (exactly solvable) spin-1/2 models from spin rotation invariant systems.\n\nIn this construction the pseudo-spin correlations in the Kitaev model will be mapped to dimer or spin-chirality correlations in the physical spin system. The corresponding picture of the fractionalized Majorana fermion excitations and Ising vortices still remain to be clarified.\n\nThis exact construction contains high order physical spin interactions, which is undesirable for practical implementation. We described two possible approaches to reduce this problem: generating the high order spin interactions by perturbative expansion of the coupling to optical phonon, or the magnetic coupling between clusters. This perturbative construction will introduce truncation error of perturbation series, which may be controlled by small expansion parameters. Whether these constructions can be experimentally engineered is however beyond the scope of this study. It is conceivable that other perturbative expansion can also generate these high order spin interactions, but this possibility will be left for future works.\n\n## Acknowledgments\n\nThe author thanks Ashvin Vishwanath, Yong-Baek Kim and Arun Paramekanti for inspiring discussions, and Todadri Senthil for critical comments. The author is supported by the MIT Pappalardo Fellowship in Physics.\n\n## Appendix A: Coupling between Distortions of a Tetrahedron and the Pseudo-spins\n\nIn this Appendix we reproduce from Ref. 35 the couplings of all tetrahedron distortion modes to the spin\n\nsystem. 
And convert them to pseudo-spin notation in the physical spin singlet sector.\n\nConsider a general small distortion of the tetrahedron, the spin Hamiltonian becomes\n\nH cluster , SL = ( J cluster / 2)( ∑ /lscript S /lscript ) 2 + J ' ∑ /lscript\n\nmode into W -boson pair becomes kinematically available, it is not possible to obtain the desired DM abundance without the Higgs resonant annihilation because the bound on v ' given by Eq. (12) is stringent.\n\n## B. Direct detection of dark matter\n\nOur RH neutrino DM can elastically scatter off with nucleon, unlike another RH neutrino DM model has been proposed by Krauss et. al. [21] and studied [22, 23]. The main process is Higgs exchange and the resultant cross section for a proton is given by\n\nσ ( p ) SI = 4 π ( m p m N m p + m N ) 2 f 2 p , (17)\n\nwith the hadronic matrix element\n\nf p m p = ∑ q = u,d,s f ( p ) Tq α q m q + 2 27 f ( p ) TG ∑ c,b,t α q m q , (18)\n\nand the effective vertex (see Appendix for notations)\n\nα q = -λ N y q ( ∂ Φ ∂h 1 M 2 h ∂ Ψ ∂h + ∂ Φ ∂H 1 M 2 H ∂ Ψ ∂H ) , (19)\n\nwhere m q is a mass of a quark with a Yukawa coupling y q , and f ( p ) Tq and f ( p ) TG are constants.", - "page_start": 6, - "page_end": 6, - "source_file": "1002.2525.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0764.pdf", - "query": "What was the optical integral analysis proposed by Norman and Pépin?", - "target_page": 8, - "target_passage": "a phenomenological model for the self energy which fits normal state scattering rate measure- ments by ARPES", - "chunk_present": { - "presence": true, - "index": 4 - } - }, - "top_chunk": [ - { - "text": "http://www.ibm.com/support/docview.wss?uid=swg21408525\n\n## Optical platters\n\nWhen you work with optical platters, check and adjust the values for the following parameters in SYS1.PARMLIB(CBROAMxx) :\n\n - /SM590000 MOUNTWAITTIME : Specifies the amount of time (in minutes) that can pass while a volume waits to be mounted on an operator-accessible drive 
within an optical library. After this time expires, message CBR4426D is issued to allow the operator to try again or to cancel the volume mount request. This value can be any numeric value 1 - 9999. If the operator retries the mount request, the value that is specified in the MOUNTWAITTIME parameter is used for the retry. The default value of this parameter is 5 minutes.\n - /SM590000 OPTICALDISPATCHERDELAY : Specifies the number of seconds that the OAM optical dispatcher delays the processing of certain requests to minimize the flipping of optical disk cartridges in an automated optical storage library that expects that another read request for the currently mounted optical disk volume will arrive within this delay interval.\n\nThe OAM optical dispatcher delays processing of a unit of work for a specific period, when all of the following conditions are true:\n\n - - A read request for an object on a currently mounted optical disk volume was completed.", - "page_start": 135, - "page_end": 135, - "source_file": "sg246915.pdf" - }, - { - "text": "The analysis of the optical integral showed that in overdoped cuprates it definitely decreases below T c , in consistency with the expectations at weak coupling 11 . For underdoped cuprates, all experimental groups agree that a relative change of the optical integral below T c gets much smaller. There is no agreement yet about the sign of the change of the optical integral : Molegraaf et al. 8 and Santander-Syro et al. 9 argued that the optical integral increases below T c , while Boris et al. 10 argued that it decreases.\n\nTheoretical analysis of these results 21,22,25,28,30 added one more degree of complexity to the issue. It is tempting to analyze the temperature dependence of W K and relate it to the observed behavior of the optical integral, and some earlier works 25,28,30 followed this route. 
In the experiments, however, optical conductivity is integrated only up to a certain frequency ω c , and the quantity which is actually measured is\n\nW ( ω c ) = ∫ ω c 0 Reσ (Ω) d Ω = W K + f ( ω c ) f ( ω c ) = -∫ ' ∞ ' ω c Reσ (Ω) d Ω (4)\n\nThe Kubo formula, Eq. (3) is obtained assuming that the second part is negligible. This is not guaranteed, however, as typical ω c ∼ 1 -2 eV are comparable to the bandwidth.\n\nThe differential sum rule ∆ W is also a sum of two terms\n\n∆ W ( ω c ) = ∆ W K +∆ f ( ω c ) (5)\n\nwhere ∆ W K is the variation of the r.h.s. of Eq. 3, and ∆ f ( ω c ) is the variation of the cutoff term. Because conductivity changes with T at all frequencies, ∆ f ( ω c ) also varies with temperature. It then becomes the issue whether the experimentally observed ∆ W ( ω c ) is predominantly due to 'intrinsic' ∆ W K , or to ∆ f ( ω c ). [A third possibility is non-applicability of the Kubo formula because of the close proximity of other bands, but we will not dwell on this.]\n\nFor the NS, previous works 21,22 on particular models for the cuprates indicated that the origin of the temperature dependence of W ( ω c ) is likely the T dependence of the cutoff term f ( ω c ). Specifically, Norman et. al. 22 approximated a fermionic DOS by a constant (in which", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0764.pdf" - }, - { - "text": "## Optical Integral and Sum Rule Violation\n\nSaurabh Maiti, Andrey V. Chubukov\n\nDepartment of Physics, University of Wisconsin, Madison, Wisconsin 53706, USA\n\n(Dated: November 9, 2018)\n\nThe purpose of this work is to investigate the role of the lattice in the optical Kubo sum rule in the cuprates. 
We compute conductivities, optical integrals W , and ∆ W between superconducting and normal states for 2-D systems with lattice dispersion typical of the cuprates for four different models - a dirty BCS model, a single Einstein boson model, a marginal Fermi liquid model, and a collective boson model with a feedback from superconductivity on a collective boson. The goal of the paper is two-fold. First, we analyze the dependence of W on the upper cut-off ( ω c ) placed on the optical integral because in experiments W is measured up to frequencies of order bandwidth. For a BCS model, the Kubo sum rule is almost fully reproduced at ω c equal to the bandwidth. But for other models only 70%-80% of Kubo sum rule is obtained up to this scale and even less so for ∆ W , implying that the Kubo sum rule has to be applied with caution. Second, we analyze the sign of ∆ W . In all models we studied ∆ W is positive at small ω c , then crosses zero and approaches a negative value at large ω c , i.e. the optical integral in a superconductor is smaller than in a normal state. The point of zero crossing, however, increases with the interaction strength and in a collective boson model becomes comparable to the bandwidth at strong coupling. We argue that this model exhibits the behavior consistent with that in the cuprates.\n\n## I. INTRODUCTION\n\nThe analysis of sum rules for optical conductivity has a long history. Kubo, in an extensive paper 1 in 1957, used a general formalism of a statistical theory of irreversible processes to investigate the behavior of the conductivity in electronic systems. For a system of interacting electrons, he derived the expression for the integral of the real part of a (complex) electric conductivity σ (Ω) and found that it is independent on the nature of the interactions and reduces to\n\n∫ ∞ 0 Reσ (Ω) d Ω = π 2 ne 2 m (1)\n\nHere n is the density of the electrons in the system and m is the bare mass of the electron. 
This expression is exact provided that the integration extends truly up to infinity, and its derivation uses the obvious fact that at energies higher than the total bandwidth of a solid, electrons behave as free particles.\n\nThe independence of the r.h.s. of Eq. (1) on temperature and the state of a solid (e.g., a normal or a superconducting state - henceforth referred to as NS and SCS respectively) implies that, while the functional form of σ (Ω) changes with, e.g., temperature, the total spectral weight is conserved and only gets redistributed between different frequencies as temperature changes. This conservation of the total weight of σ (Ω) is generally called a sum rule.\n\nOne particular case, studied in detail for conventional superconductors, is the redistribution of the spectral weight between normal and superconducting states. This is known as Ferrel-Glover-Tinkham (FGT) sum rule: 2,3\n\n∫ ∞ 0+ Reσ NS (Ω) = ∫ ∞ 0+ Reσ sc (Ω) + πn s e 2 2 m (2)\n\nwhere n s is the superfluid density, and πn s e 2 / (2 m ) is\n\nthe spectral weight under the δ -functional piece of the conductivity in the superconducting state.\n\nIn practice, the integration up to an infinite frequency is hardly possible, and more relevant issue for practical applications is whether a sum rule is satisfied, at least approximately, for a situation when there is a single electron band which crosses the Fermi level and is well separated from other bands. 
Kubo considered this case in the same paper of 1957 and derived the expression for the 'band', or Kubo sum rule\n\n∫ ' ∞ ' 0 Reσ (Ω) d Ω = W K = πe 2 2 N ∑ /vector k ∇ 2 /vector k x ε /vector k n /vector k (3)", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0764.pdf" - }, - { - "text": "Louis' patriline is the line from which he is descended from father to son.\n\nPatrilineal descent is the principle behind membership in royal houses, as it can be traced back through the generations - which means that if King Louis were to choose a historically accurate house name it would be Robertian, as all his male-line ancestors have been of that house.\n\nLouis is a member of the House of Bourbon, a branch of the Capetian dynasty and of the Robertians.\n\nLouis' patriline is the line from which he is descended from father to son. It follows the Bourbon kings of France, and the Counts of Paris and Worms. This line can be traced back more than 1,200 years from Robert of Hesbaye to the present day, through Kings of France & Navarre, Spain and Two-Sicilies, Dukes of Parma and Grand-Dukes of Luxembourg, Princes of Orléans and Emperors of Brazil. It is one of the oldest in Europe.\n\n - 1. Robert II of Worms and Rheingau (Robert of Hesbaye), 770-807\n - 2. Robert III of Worms and Rheingau, 808-834\n - 3. Robert IV the Strong, 820-866\n - 4. Robert I of France, 866-923\n - 5. Hugh the Great, 895-956\n - 6. Hugh Capet, 941-996\n - 7. Robert II of France, 972-1031\n - 8. Henry I of France, 1008-1060\n - 9. Philip I of France, 1053-1108\n - 10. Louis VI of France, 1081-1137\n - 11. Louis VII of France, 1120-1180\n - 12. Philip II of France, 1165-1223\n - 13. Louis VIII of France, 1187-1226\n - 14. Louis IX of France, 1214-1270\n - 15. Robert, Count of Clermont, 1256-1317\n - 16. Louis I, Duke of Bourbon, 1279-1342\n - 17. James I, Count of La Marche, 1319-1362\n - 18. John I, Count of La Marche, 1344-1393\n - 19. Louis, Count of Vendôme, 1376-1446\n - 20. 
Jean VIII, Count of Vendôme, 1428-1478\n - 21. François, Count of Vendôme, 1470-1495\n - 22. Charles de Bourbon, Duke of Vendôme, 1489-1537\n - 23. Antoine, King of Navarre, Duke of Vendôme, 1518-1562\n - 24. Henry IV, King of France and of Navarre, 1553-1610\n - 25. Louis XIII, King of France and Navarre, 1601-1643\n - 26. Louis XIV, King of France and Navarre, 1638-1715\n\n## Issue", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia5.pdf" - }, - { - "text": "FIG. 9: ∆ W vs the cut-off for the EB model. It remains negative for larger cut-offs. Parameters are the same as before. The dot indicates the value of ∆ W ( ∞ ) = ∆ W K\n\n\n\nof the lattice (the dashed line in Fig. 9).\n\n## C. Marginal Fermi liquid model\n\nFor their analysis of the optical integral, Norman and P'epin 30 introduced a phenomenological model for the self energy which fits normal state scattering rate measurements by ARPES 41 . It constructs the NS Σ '' ( ω ) out of two contributions - impurity scattering and electronelectron scattering which they approximated phenomenologically by the marginal Fermi liquid form of αω at small frequencies 6 (MFLI model). The total Σ '' is\n\nΣ '' ( ω ) = Γ + α | ω | f ( ω ω sat ) (17)\n\nwhere ω sat is about ∼ 1 2 of the bandwidth, and f ( x ) ≈ 1 for x < 1 and decreases for x > 1. In Ref 30 f ( x ) was assumed to scale as 1 /x at large x such that Σ '' is flat at large ω . The real part of Σ( ω ) is obtained from KramersKronig relations. For the superconducting state, they obtained Σ '' by cutting off the NS expression on the lower end at some frequency ω 1 (the analog of ω 0 +∆ that we had for EB model):\n\nΣ '' ( ω ) = (Γ + α | ω | )Θ( | ω | -ω 1 ) (18)\n\nwhere Θ( x ) is the step function. In reality, Σ '' which fits ARPESin the NS has some angular dependence along the Fermi surface 42 , but this was ignored for simplicity. 
This model had gained a lot of attention as it predicted the optical sum in the SCS to be larger than in the NS, i.e., ∆ W > 0 at large frequencies. This would be consistent with the experimental findings in Refs. 8,9 if, indeed, one identifies ∆ W measured up to 1eV with ∆ W K .\n\nWe will show below that the sign of ∆ W in the MFLI model actually depends on how the normal state results are extended to the superconducting state and, moreover, will argue that ∆ W K is actually negative if the extension is done such that at α = 0 the results are consistent with\n\nBCSI model. However, before that, we show in Figs 1012 the conductivities and the optical integrals for the original MFLI model.\n\nω\n\nσ\n\nFIG. 10: Top -the conductivities in the NS and SCS in the original MFLI model of Ref.30. We set Γ = 70 meV , α = 0 . 75, ∆ = 32 meV , ω 1 = 71 meV . Note that σ ' ( ω ) in the SCS begins at Ω = ∆ + ω 1 . Bottom - the behavior of W K with Γ.\n\n\n\nIn Fig 10 we plot the conductivities in the NS and the SCS and Kubo sums W K vs Γ at α = 0 . 75 showing that the spectral weight in the SCS is indeed larger than in the NS. In Fig 11 we show the behavior of the optical sums W ( ω c ) in NS and SCS. The observation here is that only ∼ 75 -80%of the Kubo sum is recovered up to the scale of the bandwidth implying that there is indeed a significant spectral weight well beyond the bandwidth. And in Fig 12 we show the behavior of ∆ W ( w c ). We see that it does not change sign and remain positive at all ω c , very much unlike the BCS case. Comparing the behavior of W ( w c ) with and without a lattice (solid and dashed lines in Fig. 12) we see that the 'finite bandwidth effect' just shifts the curve in the positive direction. We also see that the solid line flattens above roughly half of the bandwidth, i.e., at these frequencies ∆ W ( ω c ) ≈ ∆ W K . 
Still, we found that ∆ W continues going down even above the bandwidth and truly saturates only at about 2 eV (not shown in the figure) supporting the idea that there is 'more' left to recover from higher frequencies.\n\nThe rationale for ∆ W K > 0 in the original MFLI model has been provided in Ref. 30. They argued that this is closely linked to the absence of quasiparticle peaks in the NS and their restoration in the SCS state because the phase space for quasiparticle scattering at low energies is smaller in a superconductor than in a normal state.", - "page_start": 7, - "page_end": 7, - "source_file": "1001.0764.pdf" - }, - { - "text": "Provide the following information for disk pool definition:\n\n - /SM590000 A pool number that corresponds to an existing auxiliary storage pool\n - /SM590000 A description of the storage group\n - /SM590000 The type of data, which is primary or backup\n\nFigure 5-18 Content Manager OnDemand for i disk pool definition\n\n\n\n## Optical storage group\n\nOptical storage groups are used by Content Manager OnDemand to group sets of optical volumes for the storage of related data. Optical storage groups are used to group physical optical volumes and virtual optical volumes. Each optical storage group must contain only one type (physical or virtual). By using a specific storage group in the migration policy, the administrator can control the sets of reports that are stored on a particular set of optical volumes. Use IBM Navigator for i to define the optical storage group (Figure 5-19).\n\nFigure 5-19 Content Manager OnDemand for i optical storage group definition\n\n", - "page_start": 144, - "page_end": 144, - "source_file": "sg246915.pdf" - }, - { - "text": "FIG. 21: Distribution functions n ( /epsilon1 ) for CB model for λ = 1 and λ = 7 and a constant ω sf = 26 meV . We set ∆ = 30 meV . For smaller λ (top), quasiparticles near the FS are well defined as indicated by the well pronounced jump in n ( /epsilon1 ). 
For λ = 7, n ( /epsilon1 ) is rather smooth implying that a coherence is almost lost. Some irregularities is the SCS distribution function are due to finite sampling in the frequency domain. The irregularities disappear when finer mesh for frequencies is chosen.\n\n\n\nshows up in the optical gap), where as in the BCSI case it would have always begun from 2∆. In Fig 18 we plot the Kubo sums W K vs coupling λ . We see that for all λ , W K in the NS stays larger than in the SCS. Fig 19 shows the cutoff dependence of the optical integrals W ( ω c ) for λ = 1 separately in the NS and the SCS. We again see that only about 73% of the Kubo sum is recovered up to the bandwidth of 1 eV indicating that there is a significant amount left to recover beyond this energy scale. Fig 20 shows ∆ W for the two different couplings. We see that, for both λ 's, there is only one zero-crossing for the ∆ W curve, and ∆ W is negative at larger frequencies. The only difference between the two plots is that for larger coupling the dip in ∆ W gets 'shallower'. Observe also that the solid line in Fig. 20 is rather far away from the dashed line at ω c > 1 meV , which indicates that, although ∆ W ( ω c ) in this region has some dependence on ω c , still the largest part of ∆ W ( ω c ) is ∆ W K , while the contribution from ∆ f ( ω c ) is smaller.\n\n\n\nc\n\nFIG. 22: Top - conductivity at a larger value of ω sf λ ( ω sf = 26 meV , λ = 7) consistent with the one used in Ref.33). Bottom - ∆ W with and without lattice. Observe that the frequency of zero crossing of ∆ W enhances compared to the case of a smaller λω sf and becomes comparable to the bandwidth. At energies smaller than the bandwidth, ∆ W > 0, as in the Norman- P'epin model.FIG. 23: Kinetic energy difference between the SCS and the NS, δ KE We set λ to be either λ = 1 or λ = 10 and varied ω sf thus changing the overall prefactor in the self-energy. 
At weak coupling ( λ = 1) the behavior is BCS-like δ KE is positive and increases with the overall factor in the self-energy. At strong coupling ( λ = 7), δ KE shows a reverse trend at larger ω sf .\n\n\n\nThe negative sign of ∆ W ( ω c ) above a relatively small ω c ∼ 0 . 1 -0 . 2 eV implies that the 'compensating' effect from the fermionic self-energy on ∆ W is not strong enough to overshadow the decrease of the optical integral in the SCS due to gap opening. In other words,the CB model displays the same behavior as BCSI, EB, and", - "page_start": 12, - "page_end": 12, - "source_file": "1001.0764.pdf" - }, - { - "text": "Θ is then described by a Dirichlet distribution parametrised by a set of concentration parameters θ :\n\np ( Θ ) = Dir ( Θ | θ ) (19)\n\nThe concentration parameter of a Dirichlet distribution is essentially a non-negative count of how many times the given category (be it a type of observation or state transition) has occurred. The distribution of concentration parameter counts will determine the shape of the estimated categorical probability distribution, while the scale of the concentration parameters will determine the certainty per precision of the belief. Updating beliefs about Θ (the parameters in the matrices) then corresponds to updating these concentration parameters θ with the following update equation:\n\nθ t + 1 = ω ∗ θ t + η ∗ χ t (20)\n\nThe updated value for the concentration parameter ( θ t + 1 ) is found by adding the previous concentration parameter θ t multiplied by a forgetting rate ω to the observed data count χ (either the observation in the case of A learning, or the inferred state or state transition for other matrices) multiplied by a learning rate η . With this relatively simple update equation-which, in essence, amounts to just counting the occurrences of categories-an AIF agent can update its beliefs about the various matrices it uses to make inferences about environmental states. 
For more details on parameter learning with POMDPs, see [23,33,52].\n\n## 3. Using ActiveInference.jl\n\nIn this section, we provide an overview of the various functions a user will need to operate ActiveInference . This includes functionalities for creating POMDP agents, for simulating behaviour and for fitting the models to data. In the next section, we demonstrate how to use the package on a concrete worked example. ActiveInference is under continual development, and the newest version of the package, including documentation for how to use it, can be found at github.com/ilabcode/ActiveInference.jl.\n\n## 3.1. Creating and Using a POMDP\n\nThe general structure of ActiveInference.jl is heavily inspired by pymdp [23], a Python library for implementing simulations of AIF in discrete state spaces. Those already acquainted with pymdp should find the syntax here familiar. ActiveInference can be installed as normal from the official Julia General Registry using the Julia's native package manager Pkg:\n\nIt can then be loaded into the current project environment:\n\n☎\n\n✆\n\n☎\n\nCentral to the package is the AIF object. This is a structure containing all the components of the generative model, as well as the dynamic belief states and the various settings needed to perform AIF, and is used in conjunction with most of the high-level functions of the package. An AIF object can be created with the init\\_aif function, which takes as arguments the components of the generative model and a dictionary of various settings and parameters:\n\n✆\n\n```\n✞ using Pkg Pkg.add(ActiveInference) ✝\n```\n\n```\n✞ using ActiveInference ✝\n```", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "- 71. Palombo, D. J. et al. KIBRA polymorphism is associated with individual di/fferences in hippocampal subregions: evidence from anatomical segmentation using high-resolution MRI. J. Neurosci. 33 , 13088-13093 (2013).\n - 72. Crum, W. R., Camara, O. 
& Hill, D. L. Generalized overlap measures for evaluation and validation in medical image analysis. IEEE Trans. Med. Imaging 25 , 1451-1461 (2006).\n - 73. Cieslak, M. et al. QSIPrep: an integrative platform for preprocessing and reconstructing di/ffusion MRI data. Nat. Methods 18 , 775-778 (2021).\n - 74. Yeh, F. C., Badre, D. & Verstynen, T. Connectometry: a statistical approach harnessing the analytical potential of the local connectome. Neuroimage 125 , 162-171 (2016).\n - 75. Yeh, F. C. & Tseng, W. Y. I. NTU-90: a high angular resolution brain atlas constructed by q-space di/ffeomorphic reconstruction. Neuroimage 58 , 91-99 (2011).\n - 76. Wood, S. N. Generalized Additive Models: An Introduction With R, Second Edition (Chapman and Hall/CRC, 2017).\n - 77. Sullivan, K. J., Shadish, W. R. & Steiner, P. M. An introduction to modeling longitudinal data with generalized additive models: applications to single-case designs. Psychol. Methods 20 , 26-42 (2015).\n - 78. Yeh, F. C., Verstynen, T. D., Wang, Y., Fernández-Miranda, J. C. & Tseng, W. Y. I. Deterministic di/ffusion fiber tracking improved by quantitative anisotropy. PLoS ONE 8 , e80713 (2013).\n - 79. Jovicich, J. et al. Brain morphometry reproducibility in multi-center 3T MRI studies: a comparison of cross-sectional and longitudinal segmentations. Neuroimage 83 , 472-484 (2013).\n - 80. Hedges, E. P. et al. Reliability of structural MRI measurements: the e/ffects of scan session, head tilt, inter-scan interval, acquisition sequence, FreeSurfer version and processing stream. 
Neuroimage 246 , 118751 (2022).", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed4.pdf" - }, - { - "text": "When you define the optical storage group, you provide the following information:\n\n - /SM590000 Storage group name\n - /SM590000 Description of the storage group\n - /SM590000 Volume full reset when optical volumes are rewritable and you want to reuse the storage space (only available with local area network (LAN)-attached optical jukeboxes)\n - /SM590000 Free space threshold percent (the percent at which Content Manager OnDemand starts storing to rewritable volumes again if the volume full reset parameter is checked)\n - /SM590000 Storage group type, which is primary or backup\n\nAfter you define the optical storage group, use IBM Navigator for i to define the optical volumes to the Content Manager OnDemand system (Figure 5-20).\n\nFigure 5-20 Content Manager OnDemand for i optical volume definition\n\n\n\nWhen you define optical volumes, provide this information:", - "page_start": 145, - "page_end": 145, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0764.pdf", - "query": "What is the Ferrel-Glover-Tinkham sum rule?", - "target_page": 1, - "target_passage": "the redistribution of the spectral weight between normal and superconducting state", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "| Reuben Tan, Matthias De Lange, Michael Iuzzolino, Bryan A |\n| Conference on Computer Vision and Pattern Recognition , |\n| pages 12607-12617, 2021. |\n| Plummer, Kate Saenko, Karl Ridgeway, and Lorenzo Tor- resani. Multiscale video pretraining for long-term activity |\n| ter role models: Weight-averaged consistency targets im- |\n| forecasting. arXiv preprint arXiv:2307.12854 |\n| Antti Tarvainen and Harri Valpola. Mean teachers are bet- |\n| prove semi-supervised deep learning results. arXiv:1703.01780 , 2017. |\n| '08, page 1096-1103, 2008. 
|\n| Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre- Antoine Manzagol. Extracting and composing robust fea- tures with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning , ICML |\n| trastive pairs. In International Conference on Machine , pages 10268-10278. PMLR, 2021. |\n| mae: Masked autoencoders are data-efficient learners for self-supervised video pre-training. information processing systems |\n| Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, |\n| Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, |\n| Proceedings of the IEEE con- |\n| and detection dataset. In ference on computer vision and pattern recognition |", - "page_start": 12, - "page_end": 12, - "source_file": "arxiv3.pdf" - }, - { - "text": "Dataset Correlation Heatmap (Spearman)\n\nFigure 12: Heatmap representing the correlation regarding model performance across tasks.\n\n", - "page_start": 18, - "page_end": 18, - "source_file": "arxiv4.pdf" - }, - { - "text": "## Acknowledgements\n\nWe would like to thank M. Norman, Tom Timusk, Dmitri Basov, Chris Homes, Nicole Bontemps, Andres Santander-Syro, Ricardo Lobo, Dirk van der Marel, A. Boris, E. van Heumen, A. B. Kuzmenko, L. Benfato, and\n\n- 1 R. Kubo, J. Phys. Soc. Jpn 12 , 570(1957).\n- 2 R.A. Ferrrel and R.E. Glover, Phys. Rev. 109 , 1398 (1958).\n- 3 M. Tinkham and R.A. Ferrrel, Phys. Rev. Lett. 2 , 331 (1959), M. Tinkham, Introduction to Superconductivity (McGraw-Hill, New York, 1975).\n- 4 J. Hirsch, Physica C 199 , 305 (1992).\n- 5 D. N. Basov and T. Timusk, Rev. Mod. Phys. 77 , 721 (2005); A. V. Puchkov, D. N. Basov and T. Timusk, J. Phys. Cond. Matter 8 , 10049 (1996).\n- 6 C. M. Varma et al , Phys. Rev. Lett. 63 , 1996 (1989).\n- 7 D. N. Basov, S. I. Woods, A. S. Katz, E. J. Singley, R. C. Dynes, M. Xu, D. G. Hinks, C. C. Homes and M. Strongin, Science 283 , 49 (1999).\n- 8 H.J.A Molegraaf, C. Presura, D. van der Marel, P.H. Kess, M. Li, Science 295 , 2239 (2002); A. B. 
Kuzmenko, H. J. A. Molegraaf, F. Carbone and D. van der Marel, Phys. Rev. B 72 , 144503 (2005).\n- 9 A. F. Santander-Syro, R. P. S. M. Lobo, N. Bontemps, Z. Konstantinovic, Z. Z. Li and H. Raffy, Europhys. Lett. 62 , 568 (2003);\n- 10 A. V. Boris, N. N. Kovaleva, O. V. Dolgov, T. Holden, C. T. Lin, B. Keimer and C. Bernhard, Science 304 , 708 (2004).\n- 11 G. Deutscher, A. F. Santander-Syro and N. Bontemps, Phys. Rev. B 72 , 092504 (2005).\n- 12 F. Carbone, A. B. Kuzmenko, H. J. A. Molegraaf, E. van Heumen, V. Lukovac, F. Marsiglio, D. van der Marel, K. Haule, G. Kotliar, H. Berger, S. Courjault, P. H. Kes and M. Li, Phys. Rev. B 74 , 064510 (2006).\n- 13 C. C. Homes, S. V. Dordevic, D. A. Bonn, R. Liang and W. N. Hardy, Phys. Rev. B 69 , 024514 (2004).\n- 14 J. Hwang et al , Phys. Rev. B 73 , 014508 (2006).\n- 15 E. van Heumen, R. Lortz, A. B. Kuzmenko, F. Carbone, D. van der Marel, X. Zhao, G. Yu, Y. Cho, N. Barisic, M. Greven, C. C. Homes and S. V. Dordevic, Phys. Rev. B 75 , 054522 (2007).\n- 16 M. Ortolani, P. Calvani and S. Lupi, Phys. Rev. Lett. 94 , 067002 (2005).\n- 17 A.F. Santander-Syro, R.P.S.M. Lobo, and N. Bontemps, Phys. Rev. B 70 , 134504(2004), A. F. Santander-Syro, R. P. S. M. Lobo, N. Bontemps, Z. Konstantinovic, Z. Z. Li and H. Raffy, Europhys. Lett. 62 , 568 (2003).\n- 18 P. F. Maldague, Phys. Rev. B 16 2437 (1977); E. H. Kim, Phys. Rev. B 58 2452 (1998).\n- 19 J. Hirsch, Physica C, 201 , 347 (1992) and Ref 4.\n- 20 for a review see F. Marsiglio, J. Superconductivity and Novel Magnetism 22 , 269 (2009).\n- 21 F. Marsiglio, E. van Heumen, A. B. Kuzmenko, Phys. Rev. B 77 144510 (2008).\n- 22 M. R. Norman, A. V. Chubukov, E. van Heumen, A. B. Kuzmenko, and D. van der Marel, Phys. Rev. B 76 , 220509 (2007).\n- 23 J. E. Hirsch and F. Marsiglio, Physica C 331 , 150 (2000)", - "page_start": 14, - "page_end": 14, - "source_file": "1001.0764.pdf" - }, - { - "text": "## Exchange bias of a ferromagnetic semiconductor by a ferromagnetic metal\n\nK. 
Olejnik, 1, 2 P. Wadley, 3 J. Haigh, 3 K. W. Edmonds, 3 R. P. Campion, 3 A. W. Rushforth, 3 B. L. Gallagher, 3 C. T. Foxon, 3 T. Jungwirth, 2, 3 J. Wunderlich, 1, 2 S. S. Dhesi, 4 S. Cavill, 4 G. van der Laan, 4 and E. Arenholz 5\n\n1 Hitachi Cambridge Laboratory, Cambridge CB3 0HE, United Kingdom\n\n2 Institute of Physics ASCR, v.v.i., Cukrovarnicka 10, 16253 Praha 6, Czech Republic\n\n3 School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, United Kingdom 4 Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA\n\n5 (Dated: August 24, 2018)\n\nWe demonstrate an exchange bias in (Ga,Mn)As induced by antiferromagnetic coupling to a thin overlayer of Fe. Bias fields of up to 240 Oe are observed. Using element-specific x-ray magnetic circular dichroism measurements, we distinguish a strongly exchange coupled (Ga,Mn)As interface layer in addition to the biassed bulk of the (Ga,Mn)As film. The interface layer remains polarized at room temperature.\n\nPACS numbers: 75.70.Cn, 75.50.Pp, 75.50.Bb\n\nFerromagnetic (FM) semiconductors offer the prospect of combining high-density storage and gate-controlled logic in a single material. The realization of spin-valve devices from FM semiconductors requires the controlled switching of magnetization in adjacent layers between antiferromagnetic (AFM) and FM configurations. This has motivated several theoretical investigations of interlayer coupling in all-semiconductor devices 1 , and AFM coupling has recently been demonstrated in (Ga,Mn)As multilayers separated by p -type non-magnetic spacers 2 . 
However, the Curie temperature T C of (Ga,Mn)As is currently limited to 185 K in single layers 3 , and is typically much lower for layers embedded within a heterostructure 2 , which is an obstacle to the practical implementation of semiconductor spintronics.\n\nThe development of FM metal/FM semiconductor heterostructures has the potential to bring together the benefits of metal and semiconductor based spintronics, offering access to new functionalities and physical phenomena. Recent studies of MnAs/(Ga,Mn)As and NiFe/(Ga,Mn)As bilayer films have shown FM interlayer coupling and independent magnetization behavior, respectively 4,5 . Of particular interest is the Fe/(Ga,Mn)As system, since the growth of epitaxial Fe/GaAs(001) films is well-established 6 . Remarkably, a recent x-ray magnetic circular dichroism (XMCD) study has shown that Fe may induce a proximity polarization in the near-surface region of (Ga,Mn)As, antiparallel to the Fe moment and persisting even above room temperature 7 . Devices incorporating Fe/(Ga,Mn)As therefore offer the prospect of obtaining non-volatile room temperature spin-polarization in a semiconductor.\n\nUntil now, no information has been revealed about the coupling of Fe to (Ga,Mn)As layers away from the nearsurface region. At the surface, the (Ga,Mn)As layer may be highly non-stoichiometric and Mn-rich, due to its nonequilibrium nature 8,9 . Previously, Fe/(Ga,Mn)As layers were produced by a process including exposure to air followed by sputtering and annealing prior to Fe deposition,\n\nwhich may further disrupt the interface order. The origin of the interface magnetism then had to be inferred by comparison to a series of reference samples 7 . Demonstration of coupling between the bulk of the layers, i.e. , an exchange bias effect, would provide direct evidence of the interface magnetic order. 
Moreover, such coupling would offer new means of manipulating the FM semiconductor spin state and utilizing the proximity polarization effect in a spintronic device.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "Banks . The Federal Deposit Insurance Corporation Improvement Act of 1991, or FDICIA established five capital tiers with respect to depository institutions: 'well-capitalized,' 'adequately capitalized,' 'undercapitalized,' 'significantly undercapitalized,' and 'critically undercapitalized.' A depository institution's capital tier will depend upon where its capital levels are in relation to various relevant capital measures, including (1) risk-based capital measures, (2) a leverage ratio capital measure and (3) certain other factors. Regulations establishing the specific capital tiers provide that a 'well-capitalized' institution will have a total risk-based capital ratio of ten percent or greater, a Tier 1 risk-based capital ratio of six percent or greater, and a Tier 1 leverage ratio of five percent or greater, and not be subject to any written regulatory enforcement agreement, order, capital directive or prompt corrective action derivative. For an institution to be 'adequately capitalized,' it will have a total risk-based capital ratio of eight percent or greater, a Tier 1 risk-based capital ratio of four percent or greater, and a Tier 1 leverage ratio of four percent or greater (in some cases three percent). For an institution to be 'undercapitalized,' it will have a total risk-based capital ratio that is less than eight percent, a Tier 1 risk-based capital ratio less than four percent or a Tier 1 leverage ratio less than four percent (or a leverage ratio less than three percent if the institution is rated composite 1 in its most recent report of examination, subject to appropriate federal banking agency guidelines). 
For an institution to be 'significantly undercapitalized,' it will have a total risk-based capital ratio less than six percent, a Tier 1 risk-based capital ratio less than three percent, or a Tier 1 leverage ratio less than three percent. For an institution to be 'critically undercapitalized,' it will have a ratio of tangible equity to total assets equal to or less than two percent. FDICIA requires federal banking agencies to take 'prompt corrective action' against depository institutions that do not meet minimum capital requirements. Under current regulations, we were 'well capitalized' as of December 31, 2002.\n\nFDICIA generally prohibits a depository institution from making any capital distribution (including payment of a dividend) or paying any management fee to its holding company if the depository institution would thereafter be 'undercapitalized.' An 'undercapitalized' institution must develop a capital restoration plan and its parent holding company must guarantee that institution's compliance with such plan. The liability of the parent holding company under any such guarantee is limited to the lesser of five percent of the institution's assets at the time it became 'undercapitalized' or the amount needed to bring the institution into compliance with all capital standards. Furthermore, in the event of the bankruptcy of the parent holding company, such guarantee would take priority over", - "page_start": 34, - "page_end": 34, - "source_file": "NASDAQ_FFIN_2002.pdf" - }, - { - "text": "- 26 K. S. Raman, R. Moessner, S. L. Sondhi, Phys. Rev. B 72 , 064413 (2005).\n - 27 D. F. Schroeter, E. Kapit, R. Thomale, and M. Greiter, Phys. Rev. Lett. 99 , 097202 (2007); R. Thomale, E. Kapit, D. F. Schroeter, and M. Greiter, Phys. Rev. B 80 , 104406 (2009).\n - 28 O. Tchernyshyov, R. Moessner, S. L. Sondhi, Phys. Rev. Lett. 88 , 067203 (2002).\n - 29 F. Becca, F. Mila, Phys. Rev. Lett. 89 , 037204 (2002).\n - 30 K. Penc, N. Shannon, H. Shiba, Phys. Rev. Lett. 
93 , 197203 (2004).\n - 31 C. Weber, F. Becca, F. Mila, Phys. Rev. B 72 , 024449 (2005).\n - 32 G.-W. Chern, C. J. Fennie, O. Tchernyshyov, Phys. Rev.\n - B 74 , 060405(R) (2006).\n - 33 D. L. Bergman, R. Shindou, G. A. Fiete, L. Balents, Phys. Rev. B 74 , 134409 (2006).\n - 34 Fa Wang, Ashvin Vishwanath, Phys. Rev. Lett. 100 , 077201 (2008).\n - 35 O. Tchernyshyov, G.-W. Chern, arXiv:0907.1693 (2009).\n - 36 Y. Taguchi, Y. Oohara, H. Yoshizawa, N. Nagaosa, Y. Tokura, Science 291 , 2573 (2001).\n - 37 X. G. Wen, Frank Wilczek, A. Zee, Phys. Rev. B 39, 11413 (1989); X. G. Wen, Phys. Rev. B 40 , 7387 (1989).\n - 38 Dimitris I. Tsomokos, Juan Jos'e Garc'ıa-Ripoll, Nigel R. Cooper, Jiannis K. Pachos, Phys. Rev. A 77 , 012106 (2008).", - "page_start": 10, - "page_end": 10, - "source_file": "1001.0266.pdf" - }, - { - "text": "The relationship of friction force, normal force, braking torque, and rolling torque is illustrated in figure 6.11.\n\nThe effect of slip velocity on the coefficient of friction is illustrated by the graph of figure 6.11. The conditions of zero slip corresponds to the rolling wheel without brake application while the condition of full, 100 percent slip corresponds to the locked wheel where the relative velocity between the tire surface and the runway equals the actual velocity. With the application of brakes, the coefficient of friction increases but incurs a small but measurable apparent slip. Continued increase in friction coefficient is obtained until some maximum is achieved then decreases as the slip increases and approaching the 100 percent slip condition. 
Actually, the peak value of coefficient of friction occurs at an incipient skid condition and the relative slip apparent at this point consists primarily of elastic shearing deflection of the tire structure.", - "page_start": 404, - "page_end": 404, - "source_file": "00-80T-80.pdf" - }, - { - "text": "model\n\nModel Correlation Heatmap (Spearman)\n\nFigure 11: Heatmap representing the Spearman correlations in terms of performance across models.\n\n", - "page_start": 17, - "page_end": 17, - "source_file": "arxiv4.pdf" - }, - { - "text": "n/a\n\nInvolved in the study\n\nFunctional and/or effective connectivity\n\nGraph analysis\n\nMultivariate modeling or predictive analysis\n\nMultivariate modeling and predictive analysis\n\nMultivariate regression analyses was used to explore brain structure in relation to gestation. Regional, network, and summary brain measures (dependent variables) were examined in relation to gestation week (independent variable). In follow-up statistical analyses (noted in Methods), various quality control metrics and global brain volume were included into the model to account for variables of non-interest (e.g., motion) and to identify highly impacted brain areas (e.g., controlling for total GMV).", - "page_start": 17, - "page_end": 17, - "source_file": "pubmed4.pdf" - }, - { - "text": "\n\n## Chatree - Ore Mined and Treated\n\n\n\n## Chatree - Cash Costs and Total Costs\n\n\n\nOperations Report\n\n## Production and Costs\n\nProduction for the year was 133,681 ounces of gold and 1,000,569 ounces of silver.\n\nTotal mill throughput of 5.7 million tonnes was 11.4% higher than 2012 despite the 63 days that the new plant was shut down during the process for the granting of its Metallurgical License. The overall plant availability was 98.1%.\n\nTotal cash costs for the year were $US767 per ounce ($US620 per ounce exclusive of Thai royalties). The average royalty paid to the Thai Government was $US147 per ounce of gold. 
Total production costs after depreciation and amortisation were $US952 per ounce of gold produced.\n\nAt year end, 9.7 million tonnes of ore was stockpiled with an average contained gold grade of 0.57 grams per tonne (g/t) representing 178,086 ounces of gold.\n\n## Operational Performance\n\nDuring the year 7.1 million tonnes of ore was mined, with a waste-to-ore strip ratio of 2.09:1. The average grade of mined ore was 0.72 g/t gold and 8.56 g/t silver.\n\nAdditional ore was generated by revising the mining sequence in A Pit Stage 2 and accessing near surface high grade oxide ore tonnes from Q Prospect.\n\nTotal volume of material mined at Chatree for the year was 8.4 million Bank Cubic Metres (\"BCM\") including 2.7 million BCM of ore.\n\nAn additional 566,000 BCM of laterite and clay material was excavated and used for the construction of the second lift of second tailings storage facility (TSF#2).\n\nSome 1.3 million loose cubic metres (LCM) of ore was relocated from the Marginal Grade Stockpiles to the primary crusher to supplement ore from the mining pits.\n\nTwo areas were mined during the year:\n\n - 〉 A Pit, where 8.3 million BCM of material was mined (2.7 million BCM of ore) at a stripping ratio of 2.09:1 waste to ore; and\n - 〉 Q Prospect where 298 thousand BCM of material was mined (143 thousand BCM of ore) at a stripping ratio of 1.1:1 waste to ore.\n\nThe mechanical reliability and hence availability of the major fleet items has been below expectations over the last few years.\n\nu", - "page_start": 14, - "page_end": 14, - "source_file": "ASX_KCN_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0266.pdf", - "query": "What does Kitaev show about spin- 1/2 model?", - "target_page": 1, - "target_passage": "spin- 1/2 model can be mapped to a model with one Majo- rana fermion per site coupled to Ising gauge fields on the links", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "FIG. 
1: The honeycomb lattice for the Kitaev model. Filled and open circles indicate two sublattices. x, y, z label the links along three different directions used in (1).\n\n\n\nderived as well. There have been several proposals to open the fermion gap for the non-Abelian phase without spoiling exact solvability 4,6 . And many generalizations to other(even 3D) lattices have been developed in the last few years 10-16 . All these efforts have significantly enriched our knowledge of exactly solvable models and quantum phases of matter.\n\nHowever, in the original Kitaev model and its later generalizations in the form of spin models, spin rotation symmetry is explicitly broken. This makes them harder to realize in solid state systems. There are many proposals to realized the Kitaev model in more controllable situations, e.g. in cold atom optical lattices 17,18 , or in superconducting circuits 19 . But it is still desirable for theoretical curiosity and practical purposes to realize the Kitaev-type models in spin rotation invariant systems.\n\nIn this paper we realize the Kitaev honeycomb lattice model as the low energy Hamiltonian for a spin rotation invariant system. The trick is not to use the physical spin as the spin in the Kitaev model, instead the spin-1/2 in Kitaev model is from some emergent two-fold degenerate low energy states in the elementary unit of physical system. This type of idea has been explored recently by Jackeli and Khaliullin 20 , in which the spin-1/2 in the Kitaev model is the low energy Kramers doublet created by strong spin-orbit coupling of t 2 g orbitals. 
In the model presented below, the Hilbert space of spin-1/2 in the Kitaev model is actually the two dimensional spin singlet sector of four antiferromagnetically coupled spin-1/2 moments, and the role of spin-1/2 operators(Pauli matrices) in the Kitaev model is replaced by certain combinations of S j · S k [or the spin-chirality S j · ( S k × S /lscript )] between the four spins.\n\nOne major drawback of the model to be presented is that it contains high order spin interactions(involves up to six or eight spins), thus is still unnatural. However it opens the possibility to realize exotic (exactly solvable) models from spin-1/2 Hamiltonian with spin rotation invariant interactions. We will discuss two possible routes to reduce this artificialness through controlled perturbative expansions, by coupling to optical phonons or by magnetic couplings between the elementary units.\n\nThe outline of this paper is as follows. In Section II we will lay out the pseudo-spin-1/2 construction. In Sec-\n\nFIG. 2: Left: the physical spin lattice for the model (8). The dash circles are honeycomb lattice sites, each of which is actually a cluster of four physical spins. The dash straight lines are honeycomb lattice bonds, with their type x, y, z labeled. The interaction between clusters connected by x, y, z bonds are the J x,y,z terms in (8) or (9) respectively. Note this is not the 3-12 lattice used in Ref. 9,10 . Right: enlarged picture of the clusters with the four physical spins labeled as 1 , . . . , 4. Thick solid bonds within one cluster have large antiferromagnetic Heisenberg coupling J cluster .\n\n\n\ntion III the Kitaev model will be explicitly constructed using this formalism, and some properties of this construction will be discussed. In Section IV we will discuss two possible ways to generate the high order spin interactions involved in the construction of Section III by perturbative expansions. Conclusions and outlook will be summarized in Section V.\n\n## II. 
FORMULATION OF THE PSEUDO-SPIN-1/2 FROM FOUR-SPIN CLUSTER.", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0266.pdf" - }, - { - "text": "Many generalizations of the Kitaev model have been", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0266.pdf" - }, - { - "text": "## Realization of the Exactly Solvable Kitaev Honeycomb Lattice Model in a Spin Rotation Invariant System\n\nFa Wang 1\n\n1 Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA\n\nThe exactly solvable Kitaev honeycomb lattice model is realized as the low energy effect Hamiltonian of a spin-1/2 model with spin rotation and time-reversal symmetry. The mapping to low energy effective Hamiltonian is exact, without truncation errors in traditional perturbation series expansions. This model consists of a honeycomb lattice of clusters of four spin-1/2 moments, and contains short-range interactions up to six-spin(or eight-spin) terms. The spin in the Kitaev model is represented not as these spin-1/2 moments, but as pseudo-spin of the two-dimensional spin singlet sector of the four antiferromagnetically coupled spin-1/2 moments within each cluster. Spin correlations in the Kitaev model are mapped to dimer correlations or spin-chirality correlations in this model. This exact construction is quite general and can be used to make other interesting spin-1/2 models from spin rotation invariant Hamiltonians. We discuss two possible routes to generate the high order spin interactions from more natural couplings, which involves perturbative expansions thus breaks the exact mapping, although in a controlled manner.\n\nPACS numbers: 75.10.Jm, 75.10.Kt\n\n## Contents\n\n## I. Introduction.\n\n1\n\n- II. Formulation of the Pseudo-spin-1/2 from Four-spin Cluster.\n\n## III. Realization of the Kitaev Model.\n\n3\n\n- IV. Generate the High Order Physical Spin Interactions by Perturbative Expansion.\n- A. 
Generate the High Order Terms by Coupling to Optical Phonon.\n- B. Generate the High Order Terms by Magnetic Interactions between Clusters.\n\n## V. Conclusions.\n\n8\n\n## Acknowledgments\n\n8\n\n- A. Coupling between Distortions of a Tetrahedron and the Pseudo-spins\n- B. Derivation of the Terms Generated by Second Order Perturbation of Inter-cluster Magnetic Interactions\n\n8\n\n9\n\nReferences 10\n\n## I. INTRODUCTION.\n\nKitaev's exactly solvable spin-1/2 honeycomb lattice model 1 (noted as the Kitaev model hereafter) has inspired great interest since its debut, due to its exact solvability, fractionalized excitations, and the potential\n\n5\n\n5\n\n7\n\n2\n\nto realize non-Abelian anyons. The model simply reads\n\nH Kitaev = -∑ x -links J x τ x j τ x k -∑ y -links J y τ y j τ y k -∑ z -links J z τ z j τ z k (1)\n\nwhere τ x,y,z are Pauli matrices, and x, y, z -links are defined in FIG. 1. It was shown by Kitaev 1 that this spin1/2 model can be mapped to a model with one Majorana fermion per site coupled to Ising gauge fields on the links. And as the Ising gauge flux has no fluctuation, the model can be regarded as, under each gauge flux configuration, a free Majorana fermion problem. The ground state is achieved in the sector of zero gauge flux through each hexagon. The Majorana fermions in this sector have Dirac-like gapless dispersion resembling that of graphene, as long as | J x | , | J y | , and | J z | satisfy the triangular relation, sum of any two of them is greater than the third one 1 . It was further proposed by Kitaev 1 that opening of fermion gap by magnetic field can give the Ising vortices non-Abelian anyonic statistics, because the Ising vortex will carry a zero-energy Majorana mode, although magnetic field destroys the exact solvability.\n\nGreat efforts have been invested to better understand the properties of the Kitaev model. 
For example, several groups have pointed out that the fractionalized Majorana fermion excitations may be understood from the more familiar Jordan-Wigner transformation of 1D spin systems 2,3 . The analogy between the non-Abelian Ising vortices and vortices in p + ip superconductors has been raised in serveral works 4-7 . Exact diagonalization has been used to study the Kitaev model on small lattices 8 . And perturbative expansion methods have been developed to study the gapped phases of the Kitaev-type models 9 .", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0266.pdf" - }, - { - "text": "chirality interactions in cold atom optical lattices has been proposed 38 .\n\nOur model (8) is achieved at second order of the perturbation series. Higher order terms become truncation errors but may be controlled by small parameters λ x,y,z /J cluster ∼ √ | J x,y,z | /J cluster .\n\n## V. CONCLUSIONS.\n\nWe constructed the exactly solvable Kitaev honeycomb model 1 as the exact low energy effective Hamiltonian of a spin-1/2 model [equations (8) or (9)] with spin-rotation and time reversal symmetry. The spin in Kitaev model is represented as the pseudo-spin in the two-fold degenerate spin singlet subspace of a cluster of four antiferromagnetically coupled spin-1/2 moments. The physical spin model is a honeycomb lattice of such four-spin clusters, with certain inter-cluster interactions. The machinery for the exact mapping to pseudo-spin Hamiltonian was developed (see e.g. TABLE I), which is quite general and can be used to construct other interesting (exactly solvable) spin-1/2 models from spin rotation invariant systems.\n\nIn this construction the pseudo-spin correlations in the Kitaev model will be mapped to dimer or spin-chirality correlations in the physical spin system. 
The corresponding picture of the fractionalized Majorana fermion excitations and Ising vortices still remain to be clarified.\n\nThis exact construction contains high order physical spin interactions, which is undesirable for practical implementation. We described two possible approaches to reduce this problem: generating the high order spin interactions by perturbative expansion of the coupling to optical phonon, or the magnetic coupling between clusters. This perturbative construction will introduce truncation error of perturbation series, which may be controlled by small expansion parameters. Whether these constructions can be experimentally engineered is however beyond the scope of this study. It is conceivable that other perturbative expansion can also generate these high order spin interactions, but this possibility will be left for future works.\n\n## Acknowledgments\n\nThe author thanks Ashvin Vishwanath, Yong-Baek Kim and Arun Paramekanti for inspiring discussions, and Todadri Senthil for critical comments. The author is supported by the MIT Pappalardo Fellowship in Physics.\n\n## Appendix A: Coupling between Distortions of a Tetrahedron and the Pseudo-spins\n\nIn this Appendix we reproduce from Ref. 35 the couplings of all tetrahedron distortion modes to the spin\n\nsystem. And convert them to pseudo-spin notation in the physical spin singlet sector.\n\nConsider a general small distortion of the tetrahedron, the spin Hamiltonian becomes\n\nH cluster , SL = ( J cluster / 2)( ∑ /lscript S /lscript ) 2 + J ' ∑ /lscript J x τ x j τ x k -∑ y -links J y τ y j τ y k -∑ z -links J z τ z j τ z k (7)\n\nwhere j, k label the honeycomb lattice sites thus the fourspin clusters, H cluster is given by (2), τ x,y,z should be replaced by the corresponding physical spin operators in (4) and (5) or (6), or some other equivalent representations of personal preference.", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0266.pdf" - }, - { - "text": "## II. 
FORMULATION OF THE PSEUDO-SPIN-1/2 FROM FOUR-SPIN CLUSTER.\n\nIn this Section we will construct the pseudo-spin-1/2 from a cluster of four physical spins, and map the physical spin operators to pseudo-spin operators. The mapping constructed here will be used in later Sections to construct the effective Kitaev model. In this Section we will work entirely within the four-spin cluster, all unspecified physical spin subscripts take values 1 , . . . , 4.\n\nConsider a cluster of four spin-1/2 moments(called physical spins hereafter), labeled by S 1 ,..., 4 , antiferromagnetically coupled to each other (see the right bottom part of FIG. 2). The Hamiltonian within the cluster(up to a constant) is simply the Heisenberg antiferromagnetic(AFM) interactions,\n\nH cluster = ( J cluster / 2) ( S 1 + S 2 + S 3 + S 4 ) 2 (2)\n\nThe energy levels should be apparent from this form: one group of spin-2 quintets with energy 3 J cluster , three groups of spin-1 triplets with energy J cluster , and two spin singlets with energy zero. We will consider large positive", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0266.pdf" - }, - { - "text": "Another note to take is that it is not necessary to have such a highly symmetric cluster Hamiltonian (2). The mappings to pseudo-spin-1/2 should work as long as the ground states of the cluster Hamiltonian are the two-fold degenerate singlets. One generalization, which conforms the symmetry of the lattice in FIG. 2, is to have\n\nH cluster = ( J cluster / 2)( r · S 1 + S 2 + S 3 + S 4 ) 2 (11)\n\nwith J cluster > 0 and 0 < r < 3. However this is not convenient for later discussions and will not be used.\n\nWe briefly describe some of the properties of (8). Its low energy states are entirely in the space that each of the clusters is a physical spin singlet (called cluster singlet subspace hereafter). Therefore physical spin correlations are strictly confined within each cluster. 
The excitations carrying physical spin are gapped, and their dynamics are 'trivial' in the sense that they do not move from one cluster to another. But there are non-trivial low energy physical spin singlet excitations, described by the pseudospins defined above. The correlations of the pseudo-spins can be mapped to correlations of their corresponding physical spin observables (the inverse mappings are not unique, c.f. TABLE I). For example τ x,y correlations become certain dimer-dimer correlations, τ z correlation becomes chirality-chirality correlation, or four-dimer correlation. It will be interesting to see the corresponding picture of the exotic excitations in the Kitaev model, e.g. the Majorana fermion and the Ising vortex. However this will be deferred to future studies.\n\nIt is tempting to call this as an exactly solved spin liquid with spin gap ( ∼ J cluster ), an extremely short-range resonating valence bond(RVB) state, from a model with spin rotation and time reversal symmetry. However it should be noted that the unit cell of this model contains an even number of spin-1/2 moments (so does the original Kitaev model) which does not satisfy the stringent definition of spin liquid requiring odd number of electrons per unit cell. Several parent Hamiltonians of spin liquids have already been constructed. See for example, Ref. 24-27 .\n\n## IV. GENERATE THE HIGH ORDER PHYSICAL SPIN INTERACTIONS BY PERTURBATIVE EXPANSION.\n\nOne major drawback of the present construction is that it involves high order interactions of physical spins[see (8) and (9)], thus is 'unnatural'. In this Section we will make compromises between exact solvability and naturalness. We consider two clusters j and k and try to generate the J x,y,z interactions in (7) from perturbation series expansion of more natural(lower order) physical spin interactions. Two different approaches for this purpose will be laid out in the following two Subsections. 
In Subsection IV A we will consider the two clusters as two tetrahedra, and couple the spin system to certain optical phonons, further coupling between the phonon modes\n\nFIG. 3: Illustration of the tetragonal to orthorhombic Q E 1 (top) and Q E 2 (bottom) distortion modes. (a) Perspective view of the tetrahedron. 1 , . . . , 4 label the spins. Arrows indicate the motion of each spin under the distortion mode. (b) Top view of (a). (c)(d) Side view of (a).\n\n\n\nof the two clusters can generate at lowest order the desired high order spin interactions. In Subsection IV B we will introduce certain magnetic, e.g. Heisenberg-type, interactions between physical spins of different clusters, at lowest order(second order) of perturbation theory the desired high order spin interactions can be achieved. These approaches involve truncation errors in the perturbation series, thus the mapping to low energy effect Hamiltonian will no longer be exact. However the error introduced may be controlled by small expansion parameters. In this Section we denote the physical spins on cluster j ( k ) as j 1 , . . . , j 4 ( k 1 , . . . , k 4), and denote pseudo-spins on cluster j ( k ) as /vectorτ j ( /vectorτ k ).", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0266.pdf" - }, - { - "text": "inter-cluster spin-chirality coupling in H perturbation z explicitly breaks time reversal symmetry and is probably harder to implement in solid state systems. 
However spin-chirality order may have important consequences in frustrated magnets 36,37 , and a realization of spin-", - "page_start": 6, - "page_end": 6, - "source_file": "1001.0266.pdf" - }, - { - "text": "E\n\nf 2 = (1 / 2)( S 2 · S 4 + S 1 · S 3 -S 1 · S 4 -S 2 · S 3 ) , f E 1 = √ 1 / 12( S 1 · S 4 + S 2 · S 3 + S 2 · S 4 + S 1 · S 3 -2 S 1 · S 2 -2 S 3 · S 4 ) .\n\nThe functions f T 2 1 , 2 , 3 for the T 2 modes are\n\nf T 2 1 = ( S 2 · S 3 -S 1 · S 4 ) , f T 2 2 = ( S 1 · S 3 -S 2 · S 4 ) , f T 2 3 = ( S 1 · S 2 -S 3 · S 4 )\n\nNow we can use TABLE I to convert the above couplings into pseudo-spin. It is easy to see that f A and f T 2 1 , 2 , 3 are all zero when converted to pseudo-spins, namely projected to the physical spin singlet sector. But f E 1 = ( P 14 + P 23 + P 24 + P 13 -2 P 12 -2 P 34 ) / (4 √ 3) = -( √ 3 / 2) τ x and f E 2 = ( P 24 + P 13 -P 14 -P 23 ) / 4 = ( √ 3 / 2) τ y . This has already been noted by Tchernyshyov et al. 28 , only the E modes can lift the degeneracy of the physical spin singlet ground states of the tetrahedron. 
Therefore the general spin lattice coupling is the form of (12) given in the main text.", - "page_start": 7, - "page_end": 7, - "source_file": "1001.0266.pdf" - }, - { - "text": "H = ∑ j ( J cluster / 2)( S j 1 + S j 2 + S j 3 + S j 4 ) 2 -∑ z -links J z (16 / 9)[ S j 2 · ( S j 3 × S j 4 )][ S k 2 · ( S k 3 × S k 4 )] -∑ x -links J x (2 S j 1 · S j 2 +1 / 2)(2 S k 1 · S k 2 +1 / 2) -∑ y -links J y (4 / 3)[ S j 1 · ( S j 3 -S j 4 )][ S k 1 · ( S k 3 -S k 4 )] (8)\n\nWhile by the represenation (4) and (5), the Hamilto-\n\nnian becomes\n\nH = ∑ j ( J cluster / 2)( S j 1 + S j 2 + S j 3 + S j 4 ) 2 -∑ x -links J x (2 S j 1 · S j 2 +1 / 2)(2 S k 1 · S k 2 +1 / 2) -∑ y -links J y (4 / 3)[ S j 1 · ( S j 3 -S j 4 )][ S k 1 · ( S k 3 -S k 4 )] -∑ z -links J z ( -4 / 3)(2 S j 3 · S j 4 +1 / 2)[ S j 1 · ( S j 3 -S j 4 )](2 S k 3 · S k 4 +1 / 2)[ S k 1 · ( S k 3 -S k 4 )] (9)\n\nThis model, in terms of physical spins S , has full spin rotation symmetry and time-reversal symmetry. A pseudo-magnetic field term ∑ j /vector h · /vectorτ j term can also be included under this mapping, however the resulting Kitaev model with magnetic field is not exactly solvable. It is quite curious that such a formidably looking Hamiltonian (8), with biquadratic and six-spin(or eight-spin) terms, has an exactly solvable low energy sector.\n\nWe emphasize that because the first intra-cluster term ∑ cluster H cluster commutes with the latter Kitaev terms independent of the representation used, the Kitaev model is realized as the exact low energy Hamiltonian of this model without truncation errors of perturbation theories, namely no ( | J x,y,z | /J cluster ) 2 or higher order terms will be generated under the projection to low energy cluster singlet space. This is unlike, for example, the t/U expansion of the half-filled Hubbard model 22,23 , where at lowest t 2 /U order the effective Hamiltonian is the Heisenberg model, but higher order terms ( t 4 /U 3 etc.) 
should in principle still be included in the low energy effective Hamiltonian for any finite t/U . Similar comparison can be made to the perturbative expansion studies of the Kitaev-type models by Vidal et al. 9 , where the low energy effective Hamiltonians were obtained in certian anisotropic (strong bond/triangle) limits. Although the spirit of this work, namely projection to low energy sector, is the same as all previous perturbative approaches to effective Hamiltonians.\n\nNote that the original Kitaev model (1) has threefold rotation symmetry around a honeycomb lattice site, combined with a three-fold rotation in pseudo-spin space (cyclic permutation of τ x , τ y , τ z ). This is not apparent in our model (8) in terms of physical spins, under the current representation of τ x,y,z . We can remedy this by using a different set of pseudo-spin Pauli matrices τ ' x,y,z in (7),\n\nτ ' x = √ 1 / 3 τ z + √ 2 / 3 τ x , τ ' y = √ 1 / 3 τ z -√ 1 / 6 τ x + √ 1 / 2 τ y , τ ' z = √ 1 / 3 τ z -√ 1 / 6 τ x -√ 1 / 2 τ y\n\nWith proper representation choice, they have a symmetric form in terms of physical spins,\n\nτ ' x = -(4 / 3) S 2 · ( S 3 × S 4 ) + √ 2 / 3(2 S 1 · S 2 +1 / 2) τ ' y = -(4 / 3) S 3 · ( S 4 × S 2 ) + √ 2 / 3(2 S 1 · S 3 +1 / 2) τ ' z = -(4 / 3) S 4 · ( S 2 × S 3 ) + √ 2 / 3(2 S 1 · S 4 +1 / 2) (10)\n\nSo the symmetry mentioned above can be realized by a three-fold rotation of the honeycomb lattice, with a cyclic permutation of S 2 , S 3 and S 4 in each cluster. This is in fact the three-fold rotation symmetry of the physical spin lattice illustrated in FIG. 2. 
However this more symmetric representation will not be used in later part of this paper.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0266.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0266.pdf", - "query": "How can fractionalised Majorana fermion excitations be understood?", - "target_page": 1, - "target_passage": "from the more familiar Jordan-Wigner transformation of 1D spin systems", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "high-energy fermions and is an input for the low-energy theory. Below we follow Refs. 31,33 and assume that the momentum dependence of a collective boson is flat near ( π, π ). The self energy within such model has been worked out consistently in Ref. 31,33. In the normal state\n\nΣ '' ( ω ) = -1 2 λ n ω sf log ( 1 + ω 2 ω 2 sf ) ω (19)\n\nΣ ' ( ω ) = -λ n ω sf arctan ω sf\n\nwhere λ n is the spin-fermion coupling constant, and ω sf is a typical spin relaxation frequency of overdamped spin collective excitations with a propagator\n\nχ ( q ∼ Q, Ω) = χ Q 1 -i Ω ω sf (20)\n\nwhere χ Q is the uniform static susceptibility. If we use Ornstein-Zernike form of χ ( q ) and use either Eliashberg 45 or FLEX computational schemes 48 , we get rather similar behavior of Σ as a function of frequency and rather similar behavior of optical integrals.\n\nThe collective nature of spin fluctuations is reflected in the fact that the coupling λ and the bosonic frequency ω sf are related: λ scales as ξ 2 , where ξ is the bosonic mass (the distance to a bosonic instability), and ω sf ∝ ξ -2 (see Ref. 49). For a flat χ ( q ∼ Q ) the product λω sf does not depend on ξ and is the overall dimensional scale for boson-mediated interactions.\n\nIn the SCS fermionic excitations acquire a gap. 
This gap affects fermionic self-energy in two ways: directly, via the change of the dispersion of an intermediate boson in the exchange process involving a CB, and indirectly, via the change of the propagator of a CB. We remind ourselves that the dynamics of a CB comes from a particlehole bubble which is indeed affected by ∆.\n\nThe effect of a d -wave pairing gap on a CB has been discussed in a number of papers, most recently in 31 . In\n\na SCS a gapless continuum described by Eq. (20) transforms into a gaped continuum, with a gap about 2∆ and a resonance at ω = ω 0 < 2∆, where for a d -wave gap we define ∆ as a maximum of a d -wave gap.\n\nThe spin susceptibility near ( π, π ) in a superconductor can generally be written up as\n\nχ ( q ∼ Q, Ω) = χ Q 1 -i Π(Ω) ω sf (21)\n\nwhere Π is evaluated by adding up the bubbles made out of two normal and two anomalous Green's functions. Below 2∆, Π(Ω) is real ( ∼ Ω 2 / ∆ for small Ω), and the resonance emerges at Ω = ω 0 at which Π( ω 0 ) = ω sf . At frequencies larger than 2∆, Π(Ω) has an imaginary part, and this gives rise to a gaped continuum in χ (Ω).\n\nThe imaginary part of the spin susceptibility around the resonance frequency ω 0 is 31\n\nχ '' ( q, Ω) = πZ o ω 0 2 δ (Ω -ω 0 ) (22)\n\nwhere Z o ∼ 2 ω sf χ 0 / ∂ Π ∂ω | Ω= ω 0 . The imaginary part of the spin susceptibility describing a gaped continuum exists for for Ω ≥ 2∆ and is\n\nχ '' ( q, Ω) = Im [ χ 0 1 -1 ω sf ( 4∆ 2 Ω D ( 4∆ 2 Ω 2 ) + i Ω K 2 (1 -4∆ 2 Ω 2 ) ) ]\n\n≈ Im [ χ 0 1 -1 ω sf ( π ∆ 2 Ω + i π 2 Ω ) ] f or Ω >> 2∆ (23)\n\nIn Eq. (23) D ( x ) = K 1 ( x ) -K 2 ( x ) x , and K 1 ( x ) and K 2 ( x ) are Elliptic integrals of first and second kind. 
The real part of χ is obtained by Kramers-Kronig transform of the imaginary part.\n\nSubstituting Eq 6 for χ ( q, Ω) into the formula for the self-energy one obtains Σ '' ( ω ) in a SCS state as a sum of two terms 31\n\nΣ '' ( ω ) = Σ '' A ( ω ) + Σ '' B ( ω ) (24)\n\nwhere,\n\ncomes from the interaction with the resonance and\n\nΣ '' A ( ω ) = πZ o 2 λ n ω o Re ( ω + ω o √ ( ω + ω o ) 2 -∆ 2 )\n\nΣ '' B ( ω ) = -λ n ∫ | E | 2∆ dxRe ω + x √ ( ω + x ) 2 -∆ 2 x ω sf K 2 ( 1 -4∆ 2 x 2 ) [ 1 -4∆ 2 xω sf D ( 4∆ 2 x 2 ) ] 2 + [ x ω sf K 2 ( 1 -4∆ 2 x 2 ) ] 2 (25)\n\ncomes from the interaction with the gaped continuum.\n\nThe real part of Σ is obtained by Kramers-Kronig trans-", - "page_start": 10, - "page_end": 10, - "source_file": "1001.0764.pdf" - }, - { - "text": "The results for the conductivity within a spin-fermion model depend in quantitative (but not qualitative) way on the assumption for the momentum dispersion of a collective boson. This momentum dependence comes from", - "page_start": 9, - "page_end": 9, - "source_file": "1001.0764.pdf" - }, - { - "text": "## III. CONCLUSION\n\nIn this work we analyzed the behavior of optical integrals W ( ω c ) ∝ ∫ ω c o σ ( ω ) dω and Kubo sum rules in the normal and superconducting states of interacting fermionic systems on a lattice. Our key goal was to understand what sets the sign of ∆ W K = ∆ W ( ∞ ) between the normal and superconducting states and what is the behavior of W ( ω c ) and ∆ W ( ω c ) at finite ω c . In a weak coupling BCS superconductor, ∆ W ( ω c ) is positive at ω c < 2∆ due to a contribution from superfluid density, but becomes negative at larger ω c , and approach a negative value of ∆ W K . Our study was motivated by fascinating optical experiments on the cuprates 7-10 . In overdoped cuprates, there is clear indication 11 that ∆ W ( ω c ) becomes negative above a few ∆, consistent with BCS behavior. 
In underdoped cuprates, two groups argued 8,9 that ∆ W integrated up to the bandwidth remains positive, while the other group argued 10 that it is negative.\n\nThe reasoning why ∆ W K may potentially change sign at strong coupling involves the correlation between -W K and the kinetic energy. In the BCS limit, kinetic energy obviously increases in a SCS because of gap opening, hence -W K increases, and ∆ W K is negative. At strong coupling, there is a counter effect - fermions become more mobile in a SCS due to a smaller self-energy.\n\nWe considered four models: a BCS model with impurities, a model of fermions interacting with an Einstein boson, a phenomenological MFL model with impurities, and a model of fermions interacting with collective spin fluctuations. In all cases, we found that ∆ W K is negative, but how it evolves with ω c and how much of the sum rule is recovered by integrating up to the bandwidth depends on the model.\n\nThe result most relevant to the experiments on the cuprates is obtained for the spin fluctuation model. We found that at strong coupling, the zero-crossing of δW ( ω c ) occurs at a frequency which increases with the coupling strength and may become larger than the bandwidth at a truly strong coupling. Still, at even larger frequencies, ∆ W ( ω c ) is negative.", - "page_start": 13, - "page_end": 13, - "source_file": "1001.0764.pdf" - }, - { - "text": "FIG. 9: ∆ W vs the cut-off for the EB model. It remains negative for larger cut-offs. Parameters are the same as before. The dot indicates the value of ∆ W ( ∞ ) = ∆ W K\n\n\n\nof the lattice (the dashed line in Fig. 9).\n\n## C. Marginal Fermi liquid model\n\nFor their analysis of the optical integral, Norman and P'epin 30 introduced a phenomenological model for the self energy which fits normal state scattering rate measurements by ARPES 41 . 
It constructs the NS Σ '' ( ω ) out of two contributions - impurity scattering and electronelectron scattering which they approximated phenomenologically by the marginal Fermi liquid form of αω at small frequencies 6 (MFLI model). The total Σ '' is\n\nΣ '' ( ω ) = Γ + α | ω | f ( ω ω sat ) (17)\n\nwhere ω sat is about ∼ 1 2 of the bandwidth, and f ( x ) ≈ 1 for x < 1 and decreases for x > 1. In Ref 30 f ( x ) was assumed to scale as 1 /x at large x such that Σ '' is flat at large ω . The real part of Σ( ω ) is obtained from KramersKronig relations. For the superconducting state, they obtained Σ '' by cutting off the NS expression on the lower end at some frequency ω 1 (the analog of ω 0 +∆ that we had for EB model):\n\nΣ '' ( ω ) = (Γ + α | ω | )Θ( | ω | -ω 1 ) (18)\n\nwhere Θ( x ) is the step function. In reality, Σ '' which fits ARPESin the NS has some angular dependence along the Fermi surface 42 , but this was ignored for simplicity. This model had gained a lot of attention as it predicted the optical sum in the SCS to be larger than in the NS, i.e., ∆ W > 0 at large frequencies. This would be consistent with the experimental findings in Refs. 8,9 if, indeed, one identifies ∆ W measured up to 1eV with ∆ W K .\n\nWe will show below that the sign of ∆ W in the MFLI model actually depends on how the normal state results are extended to the superconducting state and, moreover, will argue that ∆ W K is actually negative if the extension is done such that at α = 0 the results are consistent with\n\nBCSI model. However, before that, we show in Figs 1012 the conductivities and the optical integrals for the original MFLI model.\n\nω\n\nσ\n\nFIG. 10: Top -the conductivities in the NS and SCS in the original MFLI model of Ref.30. We set Γ = 70 meV , α = 0 . 75, ∆ = 32 meV , ω 1 = 71 meV . Note that σ ' ( ω ) in the SCS begins at Ω = ∆ + ω 1 . 
Bottom - the behavior of W K with Γ.\n\n\n\nIn Fig 10 we plot the conductivities in the NS and the SCS and Kubo sums W K vs Γ at α = 0 . 75 showing that the spectral weight in the SCS is indeed larger than in the NS. In Fig 11 we show the behavior of the optical sums W ( ω c ) in NS and SCS. The observation here is that only ∼ 75 -80%of the Kubo sum is recovered up to the scale of the bandwidth implying that there is indeed a significant spectral weight well beyond the bandwidth. And in Fig 12 we show the behavior of ∆ W ( w c ). We see that it does not change sign and remain positive at all ω c , very much unlike the BCS case. Comparing the behavior of W ( w c ) with and without a lattice (solid and dashed lines in Fig. 12) we see that the 'finite bandwidth effect' just shifts the curve in the positive direction. We also see that the solid line flattens above roughly half of the bandwidth, i.e., at these frequencies ∆ W ( ω c ) ≈ ∆ W K . Still, we found that ∆ W continues going down even above the bandwidth and truly saturates only at about 2 eV (not shown in the figure) supporting the idea that there is 'more' left to recover from higher frequencies.\n\nThe rationale for ∆ W K > 0 in the original MFLI model has been provided in Ref. 30. They argued that this is closely linked to the absence of quasiparticle peaks in the NS and their restoration in the SCS state because the phase space for quasiparticle scattering at low energies is smaller in a superconductor than in a normal state.", - "page_start": 7, - "page_end": 7, - "source_file": "1001.0764.pdf" - }, - { - "text": "Σ( k, Ω) = 3 g 2 ∫ dω 2 π d 2 q (2 π ) 2 χ ( q, ω ) G ( k + q, ω +Ω) (6)\n\nwhere g is the spin-fermion coupling, and χ ( q, ω ) is the spin susceptibility whose dynamics changes between NS and SCS.\n\nFrom our analysis we found that the introduction of a finite fermionic bandwidth by means of a lattice has generally a notable effect on both W and ∆ W . 
We found that for all models except for BCSI model, only 70% -80% of the optical spectral weight is obtained by integrating up to the bandwidth. In these three models, there also exists a wide range of ω c in which the behavior of ∆ W ( ω c ) is due to variation of ∆ f ( ω c ) which is dominant comparable to the ∆ W K term. This dominance of the cut off term is consistent with the analysis in Refs. 21,22,33.\n\nWe also found that for all models except for the original version of the MFLI model the optical weight at the highest frequencies is greater in the NS than in the SCS (i.e., ∆ W < 0). This observation is consistent with the findings of Abanov and Chubukov 32 , Benfatto et. al. 28 , and Karakozov and Maksimov 34 . In the original version of the MFLI model 30 the spectral weight in SCS was found to be greater than in the NS (∆ W > 0). We show that the behavior of ∆ W ( ω c ) in this model crucially depends on how the fermionic self-energy modeled to fit ARPES data in a NS is modified when a system becomes a superconductor and can be of either sign. We also found, however, that ω c at which ∆ W becomes negative rapidly increases with the coupling strength and at strong coupling becomes comparable to the bandwidth. In the CB model, which, we believe, is most appropriate for the application to the cuprates, ∆ W K = ∆ W ( ∞ ) is quite small, and at strong coupling a negative ∆ W ( ω c ) up to ω c ∼ 1 eV is nearly compensated by the optical integral between ω c and 'infinity', which, in practice, is", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0764.pdf" - }, - { - "text": "FIG. 2: Distribution functions in four cases (a) BCSI model, where one can see that for ε > 0, SC > NS implying KE increases in the SCS. (b) The original MFLI model of Ref. 30, where for ε > 0, SC < NS, implying KE decreases in the SCS. (c) Our version of MFLI model (see text) and (d) the CB model. In both cases, SC > NS, implying KE increases in the SCS. 
Observe that in the impurity-free CB model there is no jump in n ( /epsilon1 ) indicating lack of fermionic coherence. This is consistent with ARPES 39\n\n\n\n## A. The BCS case\n\nIn BCS theory the quantity Z ( ω ) is given by\n\nand\n\nZ BCSI ( ω ) = 1 + Γ √ ∆ 2 -( ω + iδ ) 2 (11)\n\nΣ BCSI ( ω ) = ω ( Z ( ω ) -1) = i Γ ω √ ( ω + iδ ) 2 -∆ 2 (12)\n\nThis is consistent with having in the NS, Σ = i Γ in accordance with Eq 6. In the SCS, Σ( ω ) is purely imaginary for ω > ∆ and purely real for ω < ∆. The self-energy has a square-root singularity at ω = ∆.\n\nIt is worth noting that Eq.12 is derived from the integration over infinite band. If one uses Eq.6 for finite band, Eq.12 acquires an additional frequency dependence at large frequencies of the order of bandwidth (the low frequency structure still remains the same as in Eq.12). In principle, in a fully self-consistent analysis, one should indeed evaluate the self-energy using a finite bandwidth. In practice, however, the self-energy at frequencies of order bandwidth is generally much smaller than ω and contribute very little to optical conductivity which predominantly comes from frequencies where the self-energy is comparable or even larger than ω . Keeping this in mind, below we will continue with the form of self-energy derived form infinite band. We use the same argument for all four models for the self-energy.\n\nFor completeness, we first present some well known results about the conductivity and optical integral for a\n\nconstant DOS and then extend the discussion to the case where the same calculations are done in the presence of a particular lattice dispersion.\n\nFIG. 3: The BCSI case with a dispersion linearized around the Fermi surface. Evolution of the difference of optical integrals in the SCS and the NS with the upper cut-off ω c Observe that the zero crossing point increases with impurity scattering rate Γ and also the 'dip' spreads out with increasing Γ. 
∆ = 30 meV\n\n\n\nFor a constant DOS, ∆ W ( ω c ) = W SC ( ω c ) -W NS ( ω c ) is zero at ω c = ∞ and Kubo sum rule reduces to FGT sum rule. In Fig. 3 we plot for this case ∆ W ( ω c ) as a function of the cutoff ω c for different Γ ' s . The plot shows the two well known features: zero-crossing point is below 2∆ in the clean limit Γ << ∆ and is roughly 2Γ in the dirty limit 21,40 The magnitude of the 'dip' decreases quite rapidly with increasing Γ. Still, there is always a point of zero crossing and ∆ W ( ω c ) at large ω c approaches zero from below.\n\nWe now perform the same calculations in the presence of lattice dispersion. The results are summarized in Figs 4,5, and 6.\n\nFig 4 shows conductivities σ ( ω ) in the NS and the SCS and Kubo sums W K plotted against impurity scattering Γ. We see that the optical integral in the NS is always greater than in the SCS. The negative sign of ∆ W K is simply the consequence of the fact that n k is larger in the NS for /epsilon1 k < 0 and smaller for /epsilon1 k < 0, and ∇ 2 ε /vector k closely follows -ε /vector k for our choice of dispersion 38 ), Hence n k is larger in the NS for ∇ 2 ε /vector k > 0 and smaller for ∇ 2 ε /vector k < 0 and the Kubo sum rule, which is the integral of the product of n k and ∇ 2 ε /vector k (Eq. 3), is larger in the normal state.\n\nWe also see from Fig. 4 that ∆ W K decreases with Γ reflecting the fact that with too much impurity scattering there is little difference in n k between NS and SCS.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0764.pdf" - }, - { - "text": "modified MFLI models. It is interesting that this holds despite the fact that for large λ CB model displays the physics one apparently needs to reverse the sign of ∆ W K - the absence of the quasiparticle peak in the NS and its emergence in the SCS accompanied by the dip and the hump at larger energies. 
The absence of coherent quasiparticle in the NS at large λ is also apparent form Fig 21 where we show the normal state distribution functions for two different λ . For large λ the jump (which indicates the presence of quasiparticles) virtually disappears.\n\nOn a more careful look, we found that indifference of δW ( ω c ) to the increase of λ is merely the consequence of the fact that above we kept λω sf constant. Indeed, at small frequencies, fermionic self-energy in the NS is Σ ' = λω , Σ' = λ 2 ω 2 / ( λω sf ), and both Σ ' and Σ '' increase with λ if we keep λω sf constant. But at frequencies larger than ω sf , which we actually probe by ∆ W ( ω c ), the selfenergy essentially depends only on λω sf , and increasing λ but keeping λω sf constant does not bring us closer to the physics associated with the recovery of electron coherence in the SCS. To detect this physics, we need to see how things evolve when we increase λω sf above the scale of ∆ , i.e., consider a truly strong coupling when not only λ /greatermuch 1 but also the normal state Σ NS ( ω ≥ ∆) >> ∆.\n\nTo address this issue, we took a larger λ for the same ω sf and re-did the calculation of the conductivities and optical integrals. The results for σ ( ω ) and ∆ W ( ω c ) are presented in Fig. 22. We found the same behavior as before, i.e., ∆ W K is negative. But we also found that the larger is the overall scale for the self-energy, the larger is a frequency of zero-crossing of ∆ W ( ω c ). In particular, for the same λ and ω sf that were used in Ref. 33 to fit the NS conductivity data, the zero crossing is at ∼ 0 . 8 eV which is quite close to the bandwidth. This implies that at a truly strong coupling the frequency at which ∆ W ( ω c ) changes sign can well be larger than the bandwidth of 1 eV in which case ∆ W integrated up to the bandwidth does indeed remain positive. Such behavior would be consistent with Refs.8,9. we also see from Fig. 
22 that ∆ W K becomes small at a truly strong coupling, and over a wide range of frequencies the behavior of ∆ W ( ω c ) is predominantly governed by ∆ f ( ω c ), i.e. by the cut-off term. 50 The implication is that, to first approximation, ∆ W K can be neglected and positive ∆ W ( w c ) integrated to a frequency where it is still positive is almost compensated by the integral over larger frequencies. This again would be consistent with the experimental data in Refs. 8,9.\n\nIt is also instructive to understand the interplay between the behavior of ∆ W ( ω c ) and the behavior of the difference of the kinetic energy between the SCS and the NS, δ KE . We computed the kinetic energy as a function of λω sf and present the results in Fig. 23 for λ = 1 and 10. For a relatively weak λ = 1 the behavior is clearly BCS likeδ KE > 0 and increases with increasing λω sf . However, at large λ = 10, we see that the kinetic energy begin decreasing at large λω sf and eventually changes sign. The behavior of δ KE at a truly strong coupling is\n\nconsistent with earlier calculation of the kinetic energy for Ornstein-Zernike form of the spin susceptibility 43 .\n\nWe clearly see that the increase of the zero crossing frequency of ∆ W ( ω c ) at a truly strong coupling is correlated with the non-BCS behavior of δ KE . At the same time, the behavior of δW ( ω c ) is obviously not driven by the kinetic energy as eventually δW ( ω c ) changes sign and become negative. Rather, the increase in the frequency range where ∆ W ( ω c ) remains positive and non-BCS behavior of δ KE are two indications of the same effect that fermions are incoherent in the NS but acquire coherence in the SCS.\n\n## III. CONCLUSION", - "page_start": 13, - "page_end": 13, - "source_file": "1001.0764.pdf" - }, - { - "text": "FIG. 4: Top - a conductivity plot for the BCSI case in the presence of a lattice. The parameters are ∆ = 30 meV , Γ = 3 . 5 meV . Bottom - the behavior of Kubo sums. 
Note that (a) the spectral weight in the NS is always greater in the SCS, (b) the spectral weight decreases with Γ, and (c) the difference between NS and SCS decreases as Γ increases.\n\n\n\nlittle variation of ∆ W ( ω c ) at above 0 . 1 -0 . 3 eV what implies that for larger ω c , ∆ W ( ω c ) ≈ ∆ W K >> ∆ f ( ω c ).\n\nTo make this more quantitative, we compare in Fig. 6 ∆ W ( ω c ) obtained for a constant DOS, when ∆ W ( ω c ) = ∆ f ( ω c ), and for the actual lattice dispersion, when ∆ W ( ω c ) = ∆ W K + ∆ f ( ω c ). In the clean limit there is obviously little cutoff dependence beyond 0 . 1 eV , i.e., ∆ f ( ω c ) is truly small, and the difference between the two cases is just ∆ W K . In the dirty limit, the situation is similar, but there is obviously more variation with ω c , and ∆ f ( ω c ) becomes truly small only above 0 . 3 eV . Note also that the position of the dip in ∆ W ( ω c ) in the clean limit is at a larger ω c in the presence of the lattice than in a continuum.\n\n## B. The Einstein boson model\n\nWe next consider the case of electrons interacting with a single boson mode which by itself is not affected by superconductivity. The primary candidate for such mode is an optical phonon. The imaginary part of the NS self energy has been discussed numerous times in the literature. We make one simplifying assumption - approximate the DOS by a constant in calculating fermionic self-energy. We will, however, keep the full lattice dispersion in the calculations of the optical integral. The advantage of this\n\nFIG. 5: The evolution of optical integral in NS(top) and SCS(bottom) for BCSI case. Plots are made for clean limit (solid lines, Γ = 3 . 5 meV ) and dirty limit (dashed lines, Γ = 150 meV ) for ∆ = 30 meV . Observe that (a) W (0) = 0 in the NS, but has a non-zero value in the SCS because of the δ -function (this value decreases in the dirty limit), and (b) the flat region in the SCS is due to the fact that σ ' ( ω ) = 0 for Ω < 2∆. 
Also note that ∼ 90 -95% of the spectral weight is recovered up to 1 eV\n\n\n\napproximation is that the self-energy can be computed analytically. The full self-energy obtained with the lattice dispersion is more involved and can only be obtained numerically, but its structure is quite similar to the one obtained with a constant DOS.\n\nThe self-energy for a constant DOS is given by\n\nΣ( iω ) = -i 2 π λ n ∫ d/epsilon1 k d ( i Ω) χ ( i Ω) G ( /epsilon1 k , iω + i Ω) (13)\n\nwhere\n\nχ ( i Ω) = ω 2 0 ω 2 0 -( i Ω) 2 (14)\n\nand λ n is a dimensionless electron-boson coupling. Integrating and transforming to real frequencies, we obtain\n\nΣ '' ( ω ) = -π 2 λ n ω o Θ( | ω | -ω o )\n\nIn the SCS, we obtain for ω < 0\n\nΣ ' ( ω ) = -1 2 λ n ω o log ∣ ∣ ∣ ∣ ω + ω o ω -ω o ∣ ∣ ∣ ∣ (15)\n\nΣ '' ( ω ) = -π 2 λ n ω o Re ( ω + ω o √ ( ω + ω o ) 2 -∆ 2 )", - "page_start": 5, - "page_end": 5, - "source_file": "1001.0764.pdf" - }, - { - "text": "in a given band is compensated by an appropriate change of the spectral weight in other bands such that the total spectral weight, integrated over all bands, is conserved, as in Eq. (1). Still, non-conservation of the spectral weight within a given band is an interesting phenomenon as the degree of non-conservation is an indicator of relevant energy scales in the problem. Indeed, when relevant energy scales are much smaller than the Fermi energy, i.e., changes in the conductivity are confined to a near vicinity of a Fermi surface (FS), one can expand ε k near k F as ε k = v F ( k -k F ) + ( k -k F ) 2 / (2 m B ) + O ( k -k F ) 3 and obtain ∇ 2 /vector k x ε /vector k ≈ 1 /m B [this approximation is equivalent to approximating the density of states (DOS) by a constant]. Then W K becomes πne 2 / (2 m B ) which does not depend on temperature. The scale of the temperature dependence of W K is then an indicator how far in energy the changes in conductivity extend when, e.g., a system evolves from a normal metal to a superconductor. 
Because relevant energy scales increase with the interaction strength, the temperature dependence of W K is also an indirect indicator of whether a system is in a weak, intermediate, or strong coupling regime.\n\nIn a conventional BCS superconductor the only relevant scales are the superconducting gap ∆ and the impurity scattering rate Γ. Both are generally much smaller than the Fermi energy, so the optical integral should be almost T -independent, i.e., the spectral weight lost in a superconducting state at low frequencies because of gap opening is completely recovered by the zero-frequency δ -function. In a clean limit, the weight which goes into a δ -function is recovered within frequencies up to 4∆. This is the essence of FGT sum rule 2,3 . In a dirty limit, this scale is larger, O (Γ), but still W K is T -independent and there was no 'violation of sum rule'.\n\nThe issue of sum rule attracted substantial interest in the studies of high T c cuprates 5-18,21-26 in which pairing is without doubts a strong coupling phenomenon. From a theoretical perspective, the interest in this issue was originally triggered by a similarity between W K and the kinetic energy K = 2 ∑ ε /vector k n /vector k . 18-20 For a model with a simple tight binding cosine dispersion ε k ∝ (cos k x +cos k y ), d 2 ε /vector k d k 2 x ∼ -ε /vector k and W K = -K . For a more complex dispersion there is no exact relation between W K and K , but several groups argued 17,27,28 that W K can still be regarded as a good monitor for the changes in the kinetic energy. Now, in a BCS superconductor, kinetic energy increases below T c because n k extends to higher frequencies (see Fig.2). At strong coupling, K not necessary increases because of opposite trend associated with the fermionic self-energy: fermions are more mobile in the SCS due to less space for scattering at low energies than they are in the NS. 
Model calculations show that above some coupling strength, the kinetic energy decreases below T c 29 . While, as we said, there is no one-to-one correspondence between K and W K , it is still likely that, when K decreases, W K increases.\n\nAgood amount of experimental effort has been put into\n\naddressing the issue of the optical sum rule in the c -axis 7 and in-plane conductivities 8-16 in overdoped, optimally doped, and underdoped cuprates. The experimental results demonstrated, above all, outstanding achievements of experimental abilities as these groups managed to detect the value of the optical integral with the accuracy of a fraction of a percent. The analysis of the change of the optical integral between normal and SCS is even more complex because one has to (i) extend NS data to T < T c and (ii) measure superfluid density with the same accuracy as the optical integral itself.", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0764.pdf" - }, - { - "text": "## Optical Integral and Sum Rule Violation\n\nSaurabh Maiti, Andrey V. Chubukov\n\nDepartment of Physics, University of Wisconsin, Madison, Wisconsin 53706, USA\n\n(Dated: November 9, 2018)\n\nThe purpose of this work is to investigate the role of the lattice in the optical Kubo sum rule in the cuprates. We compute conductivities, optical integrals W , and ∆ W between superconducting and normal states for 2-D systems with lattice dispersion typical of the cuprates for four different models - a dirty BCS model, a single Einstein boson model, a marginal Fermi liquid model, and a collective boson model with a feedback from superconductivity on a collective boson. The goal of the paper is two-fold. First, we analyze the dependence of W on the upper cut-off ( ω c ) placed on the optical integral because in experiments W is measured up to frequencies of order bandwidth. For a BCS model, the Kubo sum rule is almost fully reproduced at ω c equal to the bandwidth. 
But for other models only 70%-80% of Kubo sum rule is obtained up to this scale and even less so for ∆ W , implying that the Kubo sum rule has to be applied with caution. Second, we analyze the sign of ∆ W . In all models we studied ∆ W is positive at small ω c , then crosses zero and approaches a negative value at large ω c , i.e. the optical integral in a superconductor is smaller than in a normal state. The point of zero crossing, however, increases with the interaction strength and in a collective boson model becomes comparable to the bandwidth at strong coupling. We argue that this model exhibits the behavior consistent with that in the cuprates.\n\n## I. INTRODUCTION\n\nThe analysis of sum rules for optical conductivity has a long history. Kubo, in an extensive paper 1 in 1957, used a general formalism of a statistical theory of irreversible processes to investigate the behavior of the conductivity in electronic systems. For a system of interacting electrons, he derived the expression for the integral of the real part of a (complex) electric conductivity σ (Ω) and found that it is independent on the nature of the interactions and reduces to\n\n∫ ∞ 0 Reσ (Ω) d Ω = π 2 ne 2 m (1)\n\nHere n is the density of the electrons in the system and m is the bare mass of the electron. This expression is exact provided that the integration extends truly up to infinity, and its derivation uses the obvious fact that at energies higher than the total bandwidth of a solid, electrons behave as free particles.\n\nThe independence of the r.h.s. of Eq. (1) on temperature and the state of a solid (e.g., a normal or a superconducting state - henceforth referred to as NS and SCS respectively) implies that, while the functional form of σ (Ω) changes with, e.g., temperature, the total spectral weight is conserved and only gets redistributed between different frequencies as temperature changes. 
This conservation of the total weight of σ (Ω) is generally called a sum rule.\n\nOne particular case, studied in detail for conventional superconductors, is the redistribution of the spectral weight between normal and superconducting states. This is known as Ferrel-Glover-Tinkham (FGT) sum rule: 2,3\n\n∫ ∞ 0+ Reσ NS (Ω) = ∫ ∞ 0+ Reσ sc (Ω) + πn s e 2 2 m (2)\n\nwhere n s is the superfluid density, and πn s e 2 / (2 m ) is\n\nthe spectral weight under the δ -functional piece of the conductivity in the superconducting state.\n\nIn practice, the integration up to an infinite frequency is hardly possible, and more relevant issue for practical applications is whether a sum rule is satisfied, at least approximately, for a situation when there is a single electron band which crosses the Fermi level and is well separated from other bands. Kubo considered this case in the same paper of 1957 and derived the expression for the 'band', or Kubo sum rule\n\n∫ ' ∞ ' 0 Reσ (Ω) d Ω = W K = πe 2 2 N ∑ /vector k ∇ 2 /vector k x ε /vector k n /vector k (3)", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0764.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0266.pdf", - "query": "What happens when the spin-rotation symmetry is explicitly broken?", - "target_page": 2, - "target_passage": "makes them harder to realize in solid state systems", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "inter-cluster spin-chirality coupling in H perturbation z explicitly breaks time reversal symmetry and is probably harder to implement in solid state systems. However spin-chirality order may have important consequences in frustrated magnets 36,37 , and a realization of spin-", - "page_start": 6, - "page_end": 6, - "source_file": "1001.0266.pdf" - }, - { - "text": "chirality interactions in cold atom optical lattices has been proposed 38 .\n\nOur model (8) is achieved at second order of the perturbation series. 
Higher order terms become truncation errors but may be controlled by small parameters λ x,y,z /J cluster ∼ √ | J x,y,z | /J cluster .\n\n## V. CONCLUSIONS.\n\nWe constructed the exactly solvable Kitaev honeycomb model 1 as the exact low energy effective Hamiltonian of a spin-1/2 model [equations (8) or (9)] with spin-rotation and time reversal symmetry. The spin in Kitaev model is represented as the pseudo-spin in the two-fold degenerate spin singlet subspace of a cluster of four antiferromagnetically coupled spin-1/2 moments. The physical spin model is a honeycomb lattice of such four-spin clusters, with certain inter-cluster interactions. The machinery for the exact mapping to pseudo-spin Hamiltonian was developed (see e.g. TABLE I), which is quite general and can be used to construct other interesting (exactly solvable) spin-1/2 models from spin rotation invariant systems.\n\nIn this construction the pseudo-spin correlations in the Kitaev model will be mapped to dimer or spin-chirality correlations in the physical spin system. The corresponding picture of the fractionalized Majorana fermion excitations and Ising vortices still remain to be clarified.\n\nThis exact construction contains high order physical spin interactions, which is undesirable for practical implementation. We described two possible approaches to reduce this problem: generating the high order spin interactions by perturbative expansion of the coupling to optical phonon, or the magnetic coupling between clusters. This perturbative construction will introduce truncation error of perturbation series, which may be controlled by small expansion parameters. Whether these constructions can be experimentally engineered is however beyond the scope of this study. 
It is conceivable that other perturbative expansion can also generate these high order spin interactions, but this possibility will be left for future works.\n\n## Acknowledgments\n\nThe author thanks Ashvin Vishwanath, Yong-Baek Kim and Arun Paramekanti for inspiring discussions, and Todadri Senthil for critical comments. The author is supported by the MIT Pappalardo Fellowship in Physics.\n\n## Appendix A: Coupling between Distortions of a Tetrahedron and the Pseudo-spins\n\nIn this Appendix we reproduce from Ref. 35 the couplings of all tetrahedron distortion modes to the spin\n\nsystem. And convert them to pseudo-spin notation in the physical spin singlet sector.\n\nConsider a general small distortion of the tetrahedron, the spin Hamiltonian becomes\n\nH cluster , SL = ( J cluster / 2)( ∑ /lscript S /lscript ) 2 + J ' ∑ /lscript\n\nderived as well. There have been several proposals to open the fermion gap for the non-Abelian phase without spoiling exact solvability 4,6 . And many generalizations to other(even 3D) lattices have been developed in the last few years 10-16 . All these efforts have significantly enriched our knowledge of exactly solvable models and quantum phases of matter.\n\nHowever, in the original Kitaev model and its later generalizations in the form of spin models, spin rotation symmetry is explicitly broken. This makes them harder to realize in solid state systems. There are many proposals to realized the Kitaev model in more controllable situations, e.g. in cold atom optical lattices 17,18 , or in superconducting circuits 19 . But it is still desirable for theoretical curiosity and practical purposes to realize the Kitaev-type models in spin rotation invariant systems.\n\nIn this paper we realize the Kitaev honeycomb lattice model as the low energy Hamiltonian for a spin rotation invariant system. 
The trick is not to use the physical spin as the spin in the Kitaev model; instead the spin-1/2 in the Kitaev model comes from some emergent two-fold degenerate low energy states in the elementary unit of the physical system. This type of idea has been explored recently by Jackeli and Khaliullin 20 , in which the spin-1/2 in the Kitaev model is the low energy Kramers doublet created by strong spin-orbit coupling of t 2 g orbitals. In the model presented below, the Hilbert space of spin-1/2 in the Kitaev model is actually the two dimensional spin singlet sector of four antiferromagnetically coupled spin-1/2 moments, and the role of spin-1/2 operators(Pauli matrices) in the Kitaev model is replaced by certain combinations of S j · S k [or the spin-chirality S j · ( S k × S /lscript )] between the four spins.\n\nOne major drawback of the model to be presented is that it contains high order spin interactions(involves up to six or eight spins), thus is still unnatural. However it opens the possibility to realize exotic (exactly solvable) models from spin-1/2 Hamiltonians with spin rotation invariant interactions. We will discuss two possible routes to reduce this artificialness through controlled perturbative expansions, by coupling to optical phonons or by magnetic couplings between the elementary units.\n\nThe outline of this paper is as follows. In Section II we will lay out the pseudo-spin-1/2 construction. In Section III the Kitaev model will be explicitly constructed using this formalism, and some properties of this construction will be discussed. In Section IV we will discuss two possible ways to generate the high order spin interactions involved in the construction of Section III by perturbative expansions. Conclusions and outlook will be summarized in Section V.\n\nFIG. 2: Left: the physical spin lattice for the model (8). The dashed circles are honeycomb lattice sites, each of which is actually a cluster of four physical spins. The dashed straight lines are honeycomb lattice bonds, with their type x, y, z labeled. The interactions between clusters connected by x, y, z bonds are the J x,y,z terms in (8) or (9) respectively. Note this is not the 3-12 lattice used in Ref. 9,10 . Right: enlarged picture of the clusters with the four physical spins labeled as 1 , . . . , 4. 
Thick solid bonds within one cluster have large antiferromagnetic Heisenberg coupling J cluster .\n\n## II. FORMULATION OF THE PSEUDO-SPIN-1/2 FROM FOUR-SPIN CLUSTER.", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0266.pdf" - }, - { - "text": "## Realization of the Exactly Solvable Kitaev Honeycomb Lattice Model in a Spin Rotation Invariant System\n\nFa Wang 1\n\n1 Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA\n\nThe exactly solvable Kitaev honeycomb lattice model is realized as the low energy effective Hamiltonian of a spin-1/2 model with spin rotation and time-reversal symmetry. The mapping to low energy effective Hamiltonian is exact, without truncation errors in traditional perturbation series expansions. This model consists of a honeycomb lattice of clusters of four spin-1/2 moments, and contains short-range interactions up to six-spin(or eight-spin) terms. The spin in the Kitaev model is represented not as these spin-1/2 moments, but as pseudo-spin of the two-dimensional spin singlet sector of the four antiferromagnetically coupled spin-1/2 moments within each cluster. Spin correlations in the Kitaev model are mapped to dimer correlations or spin-chirality correlations in this model. This exact construction is quite general and can be used to make other interesting spin-1/2 models from spin rotation invariant Hamiltonians. 
We discuss two possible routes to generate the high order spin interactions from more natural couplings, which involve perturbative expansions and thus break the exact mapping, although in a controlled manner.\n\nPACS numbers: 75.10.Jm, 75.10.Kt\n\n## Contents\n\n- I. Introduction. 1\n- II. Formulation of the Pseudo-spin-1/2 from Four-spin Cluster. 2\n- III. Realization of the Kitaev Model. 3\n- IV. Generate the High Order Physical Spin Interactions by Perturbative Expansion. 5\n- A. Generate the High Order Terms by Coupling to Optical Phonon. 5\n- B. Generate the High Order Terms by Magnetic Interactions between Clusters. 7\n- V. Conclusions. 8\n- Acknowledgments 8\n- A. Coupling between Distortions of a Tetrahedron and the Pseudo-spins 8\n- B. Derivation of the Terms Generated by Second Order Perturbation of Inter-cluster Magnetic Interactions 9\n- References 10\n\n## I. INTRODUCTION.\n\nKitaev's exactly solvable spin-1/2 honeycomb lattice model 1 (noted as the Kitaev model hereafter) has inspired great interest since its debut, due to its exact solvability, fractionalized excitations, and the potential to realize non-Abelian anyons. The model simply reads\n\nH Kitaev = -∑ x -links J x τ x j τ x k -∑ y -links J y τ y j τ y k -∑ z -links J z τ z j τ z k (1)\n\nwhere τ x,y,z are Pauli matrices, and x, y, z -links are defined in FIG. 1. It was shown by Kitaev 1 that this spin-1/2 model can be mapped to a model with one Majorana fermion per site coupled to Ising gauge fields on the links. And as the Ising gauge flux has no fluctuation, the model can be regarded as, under each gauge flux configuration, a free Majorana fermion problem. The ground state is achieved in the sector of zero gauge flux through each hexagon. 
The Majorana fermions in this sector have Dirac-like gapless dispersion resembling that of graphene, as long as | J x | , | J y | , and | J z | satisfy the triangular relation: the sum of any two of them is greater than the third one 1 . It was further proposed by Kitaev 1 that opening of the fermion gap by a magnetic field can give the Ising vortices non-Abelian anyonic statistics, because the Ising vortex will carry a zero-energy Majorana mode, although the magnetic field destroys the exact solvability.\n\nGreat efforts have been invested to better understand the properties of the Kitaev model. For example, several groups have pointed out that the fractionalized Majorana fermion excitations may be understood from the more familiar Jordan-Wigner transformation of 1D spin systems 2,3 . The analogy between the non-Abelian Ising vortices and vortices in p + ip superconductors has been raised in several works 4-7 . Exact diagonalization has been used to study the Kitaev model on small lattices 8 . And perturbative expansion methods have been developed to study the gapped phases of the Kitaev-type models 9 .", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0266.pdf" - }, - { - "text": "Another note to take is that it is not necessary to have such a highly symmetric cluster Hamiltonian (2). The mappings to pseudo-spin-1/2 should work as long as the ground states of the cluster Hamiltonian are the two-fold degenerate singlets. One generalization, which conforms to the symmetry of the lattice in FIG. 2, is to have\n\nH cluster = ( J cluster / 2)( r · S 1 + S 2 + S 3 + S 4 ) 2 (11)\n\nwith J cluster > 0 and 0 < r < 3. However this is not convenient for later discussions and will not be used.\n\nWe briefly describe some of the properties of (8). Its low energy states are entirely in the space in which each of the clusters is a physical spin singlet (called cluster singlet subspace hereafter). Therefore physical spin correlations are strictly confined within each cluster. 
The excitations carrying physical spin are gapped, and their dynamics are 'trivial' in the sense that they do not move from one cluster to another. But there are non-trivial low energy physical spin singlet excitations, described by the pseudo-spins defined above. The correlations of the pseudo-spins can be mapped to correlations of their corresponding physical spin observables (the inverse mappings are not unique, cf. TABLE I). For example τ x,y correlations become certain dimer-dimer correlations, τ z correlation becomes chirality-chirality correlation, or four-dimer correlation. It will be interesting to see the corresponding picture of the exotic excitations in the Kitaev model, e.g. the Majorana fermion and the Ising vortex. However this will be deferred to future studies.\n\nIt is tempting to call this an exactly solved spin liquid with spin gap ( ∼ J cluster ), an extremely short-range resonating valence bond(RVB) state, from a model with spin rotation and time reversal symmetry. However it should be noted that the unit cell of this model contains an even number of spin-1/2 moments (so does the original Kitaev model), which does not satisfy the stringent definition of spin liquid requiring an odd number of electrons per unit cell. Several parent Hamiltonians of spin liquids have already been constructed. See, for example, Refs. 24-27 .\n\n## IV. GENERATE THE HIGH ORDER PHYSICAL SPIN INTERACTIONS BY PERTURBATIVE EXPANSION.\n\nOne major drawback of the present construction is that it involves high order interactions of physical spins[see (8) and (9)], thus is 'unnatural'. In this Section we will make compromises between exact solvability and naturalness. We consider two clusters j and k and try to generate the J x,y,z interactions in (7) from perturbation series expansion of more natural(lower order) physical spin interactions. Two different approaches for this purpose will be laid out in the following two Subsections. 
In Subsection IV A we will consider the two clusters as two tetrahedra, and couple the spin system to certain optical phonons; further coupling between the phonon modes of the two clusters can generate at lowest order the desired high order spin interactions. In Subsection IV B we will introduce certain magnetic, e.g. Heisenberg-type, interactions between physical spins of different clusters; at lowest order(second order) of perturbation theory the desired high order spin interactions can be achieved. These approaches involve truncation errors in the perturbation series, thus the mapping to the low energy effective Hamiltonian will no longer be exact. However the error introduced may be controlled by small expansion parameters. In this Section we denote the physical spins on cluster j ( k ) as j 1 , . . . , j 4 ( k 1 , . . . , k 4), and denote pseudo-spins on cluster j ( k ) as /vectorτ j ( /vectorτ k ).\n\nFIG. 3: Illustration of the tetragonal to orthorhombic Q E 1 (top) and Q E 2 (bottom) distortion modes. (a) Perspective view of the tetrahedron. 1 , . . . , 4 label the spins. Arrows indicate the motion of each spin under the distortion mode. (b) Top view of (a). (c)(d) Side view of (a).", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0266.pdf" - }, - { - "text": "## II. FORMULATION OF THE PSEUDO-SPIN-1/2 FROM FOUR-SPIN CLUSTER.\n\nIn this Section we will construct the pseudo-spin-1/2 from a cluster of four physical spins, and map the physical spin operators to pseudo-spin operators. The mapping constructed here will be used in later Sections to construct the effective Kitaev model. In this Section we will work entirely within the four-spin cluster; all unspecified physical spin subscripts take values 1 , . . . , 4.\n\nConsider a cluster of four spin-1/2 moments(called physical spins hereafter), labeled by S 1 ,..., 4 , antiferromagnetically coupled to each other (see the right bottom part of FIG. 2). 
The Hamiltonian within the cluster(up to a constant) is simply the Heisenberg antiferromagnetic(AFM) interactions,\n\nH cluster = ( J cluster / 2) ( S 1 + S 2 + S 3 + S 4 ) 2 (2)\n\nThe energy levels should be apparent from this form: one group of spin-2 quintets with energy 3 J cluster , three groups of spin-1 triplets with energy J cluster , and two spin singlets with energy zero. We will consider large positive", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0266.pdf" - }, - { - "text": "## A. Generate the High Order Terms by Coupling to Optical Phonon.\n\nIn this Subsection we regard each four-spin cluster as a tetrahedron, and consider possible optical phonon modes(distortions) and their couplings to the spin system. The basic idea is that the intra-cluster Heisenberg coupling J cluster can linearly depend on the distance between physical spins. Therefore certain distortions of the tetrahedron couple to certain linear combinations of S /lscript · S m . Integrating out phonon modes will then generate high order spin interactions. This idea has been extensively studied and applied to several magnetic materials 28-34 . More details can be found in a recent review by Tchernyshyov and Chern 35 . And we will frequently use their notations. In this Subsection we will use the representation (5) for τ z .\n\nConsider first a single tetrahedron with four spins 1 , . . . , 4. The general distortions of this tetrahedron can be classified by their symmetry (see for example Ref. 35 ). Only two tetragonal to orthorhombic distortion modes, Q E 1 and Q E 2 (illustrated in FIG. 3), couple to the pseudospins defined in Section II. A complete analysis of all modes is given in Appendix A. 
The coupling is of the", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0266.pdf" - }, - { - "text": "τ z = -χ 234 / ( √ 3 / 4) = -(4 / √ 3) S 2 · ( S 3 × S 4 ) (6)\n\nThe above representations of τ x,y,z are all invariant under global spin rotation of the physical spins.\n\nWith the machinery of equations (4), (5), and (6), it will be straightforward to construct various pseudo-spin-1/2 Hamiltonians on various lattices, of the Kitaev variety and beyond, as the exact low energy effective Hamiltonian of certain spin-1/2 models with spin-rotation symmetry. In these constructions a pseudo-spin lattice site actually represents a cluster of four spin-1/2 moments.\n\n## III. REALIZATION OF THE KITAEV MODEL.\n\nIn this Section we will use directly the results of the previous Section to write down a Hamiltonian whose low energy sector is described by the Kitaev model. The Hamiltonian will be constructed on the physical spin lattice illustrated in FIG. 2. In this Section we will use j, k to label four-spin clusters (pseudo-spin-1/2 sites); the physical spins in cluster j are labeled as S j 1 , . . . , S j 4 .\n\nApplying the mappings developed in Section II, we have the desired Hamiltonian in short notation,\n\nH = ∑ cluster H cluster -∑ x -links J x τ x j τ x k -∑ y -links J y τ y j τ y k -∑ z -links J z τ z j τ z k (7)\n\nwhere j, k label the honeycomb lattice sites and thus the four-spin clusters, H cluster is given by (2), and τ x,y,z should be replaced by the corresponding physical spin operators in (4) and (5) or (6), or some other equivalent representations of personal preference.", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0266.pdf" - }, - { - "text": "The fundamental requirement of the spin is that the airplane be placed at an excessive angle of attack to produce the autorotation rolling and yawing tendencies. Generally speaking, the conventional airplane must be stalled before a spin can take place. 
This relationship establishes a fundamental principle of recovery: the airplane must be unstalled by decreasing the wing angle of attack. The most effective procedure for the conventional configuration is to use opposite rudder to stop the sideslip, then lower the angle of attack with the elevators. With sufficient rudder power this procedure will produce a positive recovery with a minimum loss of altitude. Care should be taken during pullout from the ensuing dive to prevent excessive angle of attack and entry into another spin.\n\nIt should be appreciated that a spin is always a possible corollary of a stall and the self-sustaining motion of a spin will take place at", - "page_start": 326, - "page_end": 326, - "source_file": "00-80T-80.pdf" - }, - { - "text": "J cluster limit. So only the singlet sector remains in low energy.\n\nThe singlet sector is then treated as a pseudo-spin-1/2 Hilbert space. From now on we denote the pseudo-spin-1/2 operators as T = (1 / 2) /vectorτ , with /vectorτ the Pauli matrices. It is convenient to choose the following basis of the pseudo-spin\n\n| τ z = ± 1 〉 = 1 √ 6 ( | ↓↓↑↑〉 + ω -τ z | ↓↑↓↑〉 + ω τ z | ↓↑↑↓〉 + | ↑↑↓↓〉 + ω -τ z | ↑↓↑↓〉 + ω τ z | ↑↓↓↑〉 ) (3)\n\nwhere ω = e 2 πi/ 3 is the complex cubic root of unity, | ↓↓↑↑〉 and other states on the right-hand-side(RHS) are basis states of the four-spin system, in terms of S z quantum numbers of physical spins 1 , . . . , 4 in sequential order. This pseudo-spin representation has been used by Harris et al. to study magnetic ordering in pyrochlore antiferromagnets 21 .\n\nWe now consider the effect of Heisenberg-type interactions S j · S k inside the physical singlet sector. Note that since any S j · S k within the cluster commutes with the cluster Hamiltonian H cluster (2), their action does not mix physical spin singlet states with states of other total physical spin. This property is also true for the spin-chirality operator used later. 
So the pseudo-spin Hamiltonian constructed below will be an exact low energy Hamiltonian, without truncation errors in typical perturbation series expansions.\n\nIt is simpler to consider the permutation operators P jk ≡ 2 S j · S k + 1 / 2, which just exchange the states of the two physical spin-1/2 moments j and k ( j ≠ k ). As an example we consider the action of P 34 ,\n\nP 34 | τ z = -1 〉 = 1 √ 6 ( | ↓↓↑↑〉 + ω | ↓↑↑↓〉 + ω 2 | ↓↑↓↑〉 + | ↑↑↓↓〉 + ω | ↑↓↓↑〉 + ω 2 | ↑↓↑↓〉 ) = | τ z = +1 〉\n\nand similarly P 34 | τ z = +1 〉 = | τ z = -1 〉 . Therefore P 34 is just τ x in the physical singlet sector. A complete list of all permutation operators is given in TABLE I. We can choose the following representation of τ x and τ y ,\n\nτ x = P 12 = 2 S 1 · S 2 +1 / 2 τ y = ( P 13 -P 14 ) / √ 3 = (2 / √ 3) S 1 · ( S 3 -S 4 ) (4)\n\nMany other representations are possible as well, because several physical spin interactions may correspond to the same pseudo-spin interaction in the physical singlet sector, and we will take advantage of this later.\n\nFor τ z we can use τ z = -iτ x τ y , where i is the imaginary unit,\n\nτ z = -i (2 / √ 3)(2 S 1 · S 2 +1 / 2) S 1 · ( S 3 -S 4 ) (5)\n\nTABLE I: Correspondence between physical spin operators and pseudo-spin operators in the physical spin singlet sector of the four antiferromagnetically coupled physical spins. P jk = 2 S j · S k +1 / 2 are permutation operators, χ jk/lscript = S j · ( S k × S /lscript ) are spin-chirality operators. 
Note that several physical spin operators may correspond to the same pseudo-spin operator.\n\n| physical spin | pseudo-spin |\n|-----------------------------------|--------------------------------|\n| P 12 , and P 34 | τ x |\n| P 13 , and P 24 | - (1 / 2) τ x +( √ 3 / 2) τ y |\n| P 14 , and P 23 | - (1 / 2) τ x - ( √ 3 / 2) τ y |\n| χ 234 , χ 341 , χ 412 , and χ 123 | - ( √ 3 / 4) τ z |\n\nHowever there is another simpler representation of τ z , by the spin-chirality operator χ jk/lscript = S j · ( S k × S /lscript ). Explicit calculation shows that the effect of S 2 · ( S 3 × S 4 ) is -( √ 3 / 4) τ z in the physical singlet sector. This can also be proved by using the commutation relation [ S 2 · S 3 , S 2 · S 4 ] = i S 2 · ( S 3 × S 4 ). A complete list of all chirality operators is given in TABLE I. Therefore we can choose another representation of τ z ,\n\nτ z = -χ 234 / ( √ 3 / 4) = -(4 / √ 3) S 2 · ( S 3 × S 4 ) (6)\n\nThe above representations of τ x,y,z are all invariant under global spin rotation of the physical spins.", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0266.pdf" - }, - { - "text": "## Did you enjoy reading this book?\n\nJoin our online social community and share your opinion:\n\nwww.facebook.com/oxbridgeacademysa twitter.com/oxbridgeEdu www.linkedin.com/company/oxbridge-academy\n\nOxbridge Academy is an established distance learning college offering skills courses, national qualifications, and internationally recognised courses to students in South Africa and abroad.\n\nWith our head office in Stellenbosch in the Western Cape, we cater to our students' needs by recruiting industry-expert tutors to provide academic assistance via 
telephone and e-mail, as well as by designing our study material in such a way that it is clear, simple, and easy for our students to understand.\n\nWith us, studying from home is easy, affordable, and convenient.\n\n## CONTACT NUMBERS:\n\nTel: 021 1100 200\n\nTel: +2721 883 2454 (international)\n\nFax: 086 111 2121\n\nFax: +2721 883 2378 (international)\n\nWhatsapp: 0605671585\n\nEmail: info@oxbridgeacademy.co.za\n\nPostal Address:\n\nPO Box 12723, Die Boord, Stellenbosch, 7613\n\nWe are registered with the Department of Higher Education and Training as a Private College in terms of Section 31(6)(a) of the Continuing Education and Training Act, 2006 (Act No. 16 of 2006). Registration No. 2009/FE07/070.", - "page_start": 58, - "page_end": 58, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "Send your registration form to the registrations office at Oxbridge Academy via one of the following channels:\n\nFax: 086 262 5550\n\nPost: PO Box 12723, Die Boord, 7613\n\nE-mail: registrar@oxbridgeacademy.co.za", - "page_start": 26, - "page_end": 26, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "\n\n## TIPS FOR FILLING IN YOUR COLLEGE REGISTRATION FORM\n\nApplying for college (www.oxbridgeacademy.co.za/enrol-now/) can be a daunting experience. Not only do you need to choose a course, but you also need to make sure that you:\n\n - · meet the entry requirements\n - · meet the deadlines\n - · fill in the forms correctly\n - · send the forms to the right address\n - · include all the necessary attachments\n\nTo make the college registration process easier for you, we've compiled a comprehensive guide on how to register at Oxbridge Academy (www.oxbridgeacademy.co.za/enrol-now/). 
The guide also includes general tips that will be relevant to the application and registration processes at other colleges.\n\n## There are 4 steps you need to follow when you want to register as a student at Oxbridge Academy:\n\n - 1. Select Your Course\n - 2. Fill in Your Student Details\n - 3. Select Your Delivery Option\n - 4. Pay Your Registration Fee and Send in Your Form\n\n", - "page_start": 20, - "page_end": 20, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## STEP 1 - SELECT YOUR COURSE\n\nOxbridge Academy Short Course: Marketing Management\n\nADV101\n\nBefore you start filling in the registration form, you need to choose your course. Once you've identified the course that you would like to study, remember to check that you meet the entry requirements.\n\nYou can find the course name and course code for your chosen course on the relevant detailed course information page on our website. Have a look at the example in the screenshot below (the course name and course code are circled in red):\n\nPlease make sure to check the accreditation status of your chosen course. Some of our courses are non-credit bearing skills development courses, which are neither accredited by external bodies nor registered on the NQF. Please go to our website: oxbridgeacademy.co.za for more information about our skills development courses.", - "page_start": 21, - "page_end": 21, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## IN THIS E-BOOK, WE'LL BE HELPING YOU TO:\n\n - · Develop your basic English language skills.\n - · Improve your English grammar.\n - · Apply your language and communication skills in a business context. (www.oxbridgeacademy.co.za/find-a-course/business-administration-courses/)\n\n'Grammar is a litmus test. If job hopefuls can't distinguish between 'to' and 'too', their applications go into the bin'\n\nKyle Wiens, CEO of iFixit\n\n'Grammar often seems to be a low priority in education. 
Are schools undervaluing grammar, given that employers may rule out applications with sloppy writing?'\n\nThe New York Times", - "page_start": 5, - "page_end": 5, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## ASSIGNMENT\n\n - 1. Identify the verb in the following sentence:\n\nThe grey elephant drinks water from the largest lake in Africa.\n\n - 2. Identify the collective noun in the following sentence:\n\nThe board of directors voted in favour of the decision.\n\n - 3. Correct the punctuation in the following sentence:\n\nAnthea will you please buy bread milk and eggs when you go to the shop.\n\n - 4. Choose the correct word:\n\nCharles was accepted/excepted into the engineering studies course at Oxbridge Academy.\n\n - 5. Choose the correct word:\n\nIts/It's time to go home now.\n\n - 6. Choose the correct word:\n\nThey were late for work, because there/their train was delayed.\n\n - 7. Choose the correct word:\n\nYou're/Your going to write your exam next week.", - "page_start": 54, - "page_end": 54, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## STEP 4 - PAY YOUR REGISTRATION FEE AND SEND IN YOUR FORM\n\nDifferent courses have different registration fees. Please check the course fees list (www.oxbridgeacademy.co.za/Documents/Price-list-2015.pdf) to find out how much you need to pay to register for your chosen course, and pay this amount using the banking details provided at the bottom of the registration form. Remember to attach your proof of payment.\n\nIf you are under the age of 18, your parent or guardian will need to sign this section of the form to state that they are aware of your registration with Oxbridge Academy, and that they do not have any objections. If you are unemployed, you will need a guarantor to sign this section of the form. 
Your parent or guarantor will be held responsible if you miss any of your payments in relation to your course fees.\n\n", - "page_start": 25, - "page_end": 25, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "\n\n## CHAPTER 7:\n\n## HOW TO ASK FOR HELP FROM YOUR TUTOR\n\nAs a student, you are going to experience times when you need help with your studies. You might be unsure about an assignment question, you might be confused by a particular concept, or you might be stressed about the upcoming exams.\n\nAnd if you are studying via distance learning (www.oxbridgeacademy.co.za/distance-learning/), where you don't have any face-to-face interaction with lecturers, you will need to rely on your tutors for the necessary academic support.", - "page_start": 32, - "page_end": 32, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## USING THE WRONG WORD CAN SOMETIMES HAVE AMUSING (AND EMBARRASSING) RESULTS.\n\nAs you can probably tell from the image above, using the wrong word can sometimes have amusing (and embarrassing) results. 
In some situations, however, the effect of using incorrect words may be more serious.\n\nIn academic or business writing, for example, the words that you choose will influence the reader's opinion of you.\n\nIncorrect word choice in an exam or assignment may cause you to lose marks, while using the wrong word in a business letter may create a bad first impression.\n\n(www.oxbridgeacademy.co.za/find-a-course/business-administration-courses/)", - "page_start": 14, - "page_end": 14, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "- - IP of the email server (SMTP Server) and Port\n - - Call Home email address\n - - Email of one or more users set to receive one or more email notifications", - "page_start": 201, - "page_end": 201, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "sg247938.pdf", - "query": "When is it necessary to use a host multipathing driver for load balancing?", - "target_page": 340, - "target_passage": "For load balancing and access redundancy on the host side, the use of a host multipathing driver is required in the following situations: Protection from fabric link failures, including port failures on the IBM Spectrum Virtualize system nodes Protection from a host HBA failure (if two HBAs are in use) Protection from fabric failures if the host is connected through two HBAs to two separate fabrics Provide load balancing across the host HBA", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "- Number of paths per host multipath device\n\nThe maximum supported number of paths per multipath device that is visible on the host is eight. Although the IBM Subsystem Device Driver Path Control Module (SDDPCM), related products, and most vendor multipathing software can support more paths, the Storwize V7000 expects a maximum of eight paths. In general, you only see an effect on performance when using more than eight paths. 
Although the IBM Spectrum Virtualize can work with more than eight paths, this design is technically unsupported.", - "page_start": 762, - "page_end": 762, - "source_file": "sg247938.pdf" - }, - { - "text": "Figure 3-4 Overview of four-path host zoning\n\nWhen possible, use the minimum number of paths that are necessary to achieve a sufficient level of redundancy. For the Storwize V7000 environment, no more than four paths per I/O Group are required to accomplish this layout.\n\nAll paths must be managed by the multipath driver on the host side. Make sure that the multipath driver on each server can handle the number of paths required to access all volumes mapped to the host.\n\nFor hosts that use four HBAs/ports with eight connections to an I/O Group, use the zoning schema that is shown in Figure 3-5 on page 57. You can combine this schema with the previous four-path zoning schema.", - "page_start": 77, - "page_end": 77, - "source_file": "sg247938.pdf" - }, - { - "text": "Figure 13-21 Update process paused for host path recovery\n\n - 11. After a 30-minute pause, a node failover occurs and you temporarily lose connection to the GUI to ensure that multipathing recovered on all attached hosts. A warning window displays, prompting you to refresh the current session, as shown in Figure 13-22 on page 694.", - "page_start": 714, - "page_end": 714, - "source_file": "sg247938.pdf" - }, - { - "text": "## 13.2.3 Load testing\n\nThe goal of load testing is to verify that, under stressful system conditions, the required amount of data can be loaded into the Content Manager OnDemand system within a time window.\n\nA general approach to load testing a system is described:\n\n - Parallel loads: Run a single load and measure the load throughput. If the throughput does not meet the requirements, run two loads in parallel and measure the throughput. 
While the loads are run, collect system statistics to determine the system resources that are being used and any potential bottlenecks. Tune or acquire additional system resources as needed. Progressively increase the number of parallel loads until the required throughput is met.\n\nNote: For most users, a single load process meets the ingestion throughput requirements.\n\n - Data types and exits: A different data type, and whether an exit is started during the load process, affect the load throughput. Test samples of the different types that represent the general loads.", - "page_start": 326, - "page_end": 326, - "source_file": "sg246915.pdf" - }, - { - "text": "- c. Hosts usually do not support concurrent multipath drivers at the same time. You might need to remove drivers that are not compatible with the Storwize V7000 from the hosts and use the recommended device drivers. For more information about supported drivers, see the IBM SSIC.", - "page_start": 412, - "page_end": 412, - "source_file": "sg247938.pdf" - }, - { - "text": "When configuring multiple masters, the cluster installation process supports the native HA method. This method uses the native HA master capabilities that are built into OpenShift Container Platform and can be combined with any Load Balancing solution.\n\nIf a host is defined in the [lb] section of the inventory file, Ansible installs and configures HAProxy automatically as the load balancing solution. If no host is defined, it is assumed that you pre-configured an external load balancing solution of your choice to balance the master API (port 8443) on all master hosts.\n\nNote: The HAProxy Load Balancer is intended to demonstrate the API server's HA mode and is not recommended for production environments. 
If you are deploying to a cloud provider, Red Hat recommends deploying a cloud-native TCP-based Load Balancer or taking other steps to provide a highly available load balancer.\n\n## DNS\n\nDNS service is an important component in the Red Hat OpenShift Container Platform environment. Regardless of the provider of DNS, an organization is required to have certain records in place to serve the various Red Hat OpenShift Container Platform components.\n\nBecause the Load Balancer values for the Red Hat OpenShift Container Platform master service and infrastructure nodes running router Pods are known beforehand, entries must be configured into the DNS before starting the deployment procedure.\n\n## DNS for OpenShift applications\n\nApplications that are served by OpenShift are accessible by the router on ports 80/TCP and 443/TCP. The router uses a wildcard record to map all host names under a specific sub domain to the same IP address without requiring a separate record for each name. This process allows Red Hat OpenShift Container Platform to add applications with arbitrary names if they are under that sub domain.\n\nFor example, a wildcard record for *.apps.example.com causes DNS name lookups for app1.apps.example.com and app2.apps.example.com to both return the same IP address: 9.109.x.y . All traffic is forwarded to the OpenShift Infrastructure Nodes (Routers). The Routers examine the HTTP headers of the queries and forward them to the correct destination.\n\nWith a load-balancer host address of 9.109.x.y , the wildcard DNS record for *.apps.example.com resolves to IP address 9.109.x.y .\n\nA simple DNS round-robin resolution can be used to spread traffic across infrastructure nodes.\n\nFor production environments, it is recommended to have more advanced load balancing capabilities to distribute the traffic among the OpenShift Routers. 
In those cases, an external Load Balancer is used.\n\n## OpenShift Software Defined Networking (SDN)\n\nRed Hat OpenShift Container Platform offers the ability to specify how pods communicate with each other. This process can be done by using Red Hat provided Software-defined networks (SDN) or a third-party SDN.\n\nDeciding on the suitable internal network for an Red Hat OpenShift Container Platform step is a crucial step. Unfortunately, no correct answer exists regarding the suitable pod network to chose because this choice varies based on the specific scenario requirements for how a Red Hat OpenShift Container Platform environment is to be used.", - "page_start": 109, - "page_end": 109, - "source_file": "sg248459.pdf" - }, - { - "text": "Hosts that connect to the Storwize V7000 system by using fabric switches that use FC or FCoE protocol must be zoned correctly, as described in 3.6, 'SAN configuration planning' on page 50.\n\nHosts that connect to the Storwize V7000 system with iSCSI protocol must be configured correctly, as described in Chapter 3, 'Planning' on page 43.\n\nNote: Certain host operating systems can be directly connected to the Storwize V7000 system without the need for FC fabric switches. For more information, see this page of the IBM System Storage Interoperation Center (SSIC).\n\nFor load balancing and access redundancy on the host side, the use of a host multipathing driver is required in the following situations:", - "page_start": 339, - "page_end": 339, - "source_file": "sg247938.pdf" - }, - { - "text": "## Load balancers\n\nThis guide uses an external load balancer that is running HAproxy to offer a single entry point for the many Red Hat OpenShift Container Platform components. 
Organizations can provide their own deployed load balancers if the service exists.\n\nThe Red Hat OpenShift Container Platform console, which is provided by the Red Hat OpenShift Container Platform master nodes, can be spread across multiple instances to provide load balancing and HA properties.\n\nApplication traffic passes through the Red Hat OpenShift Container Platform Router on its way to the container processes. The Red Hat OpenShift Container Platform Router is a reverse proxy service container that multiplexes the traffic to multiple containers that make up a scaled application that is running inside Red Hat OpenShift Container Platform. The load balancer that is used by infrastructure nodes acts as the public view for the Red Hat OpenShift Container Platform applications.\n\nThe destination for the master and application traffic must be set in the load balancer configuration after each instance is created, the floating IP address is assigned, and before the installation. A single HAproxy Load Balancer can forward both sets of traffic to different destinations.", - "page_start": 108, - "page_end": 108, - "source_file": "sg248459.pdf" - }, - { - "text": "- /SM590000 Protection from fabric failures if the host is connected through two HBAs to two separate fabrics\n - /SM590000 Provide load balancing across the host HBAs", - "page_start": 339, - "page_end": 339, - "source_file": "sg247938.pdf" - }, - { - "text": "- /SM590000 For Multiplatforms and z/OS, run parallel load jobs to take advantage of multiprocessors, large memory pools, multiple data paths, and multiple disk drives.\n - /SM590000 Ensure that each parallel load is loading to a different application group.", - "page_start": 325, - "page_end": 325, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0955.pdf", - "query": "Which orbiting instrument provides near-continuous full-sky coverage in the hard X-ray/low-energy gamma-ray range?", - "target_page": 1, - 
"target_passage": "Gamma ray Burst Monitor", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Observations of Soft Gamma Ray Sources > 100 keV Using Earth Occultation with GBM\n\nG.L. Case, M.L. Cherry, J. Rodi Dept. of Physics & Astronomy, Louisiana State Univ., Baton Rouge, LA 70803, USA\n\n## A. Camero-Arranz\n\nFundaci'on Espa˜nola de Ciencia y Tecnolog'ıa (MICINN), C/Rosario Pino,14-16, 28020-Madrid, Spain\n\n## E. Beklen\n\nMiddle East Technical University (METU), 06531, Ankara, Turkey\n\nC. A. Wilson-Hodge\n\nNASA Marshall Space Flight Center, Huntsville, AL 35812\n\n## P. Jenke\n\nNASA Postdoctoral Program Fellow, NASA Marshall Space Flight Center, Huntsville, AL 35812\n\nP.N. Bhat, M.S. Briggs, V. Chaplin, V. Connaughton, R. Preece University of Alabama in Huntsville, Huntsville, AL 35899\n\n## M.H. Finger\n\nUSRA, National Space Science and Technology Center, Huntsville, AL 35899\n\nThe NaI and BGO detectors on the Gamma ray Burst Monitor (GBM) on Fermi are now being used for long term monitoring of the hard X-ray/low energy gamma ray sky. Using the Earth occultation technique demonstrated previously by the BATSE instrument on the Compton Gamma Ray Observatory, GBM produces multiband light curves and spectra for known sources and transient outbursts in the 8 keV - 1 MeV band with its NaI detectors and up to 40 MeV with its BGO. Coverage of the entire sky is obtained every two orbits, with sensitivity exceeding that of BATSE at energies below ∼ 25 keV and above ∼ 1 . 5 MeV. We describe the technique and present preliminary results after the first ∼ 17 months of observations at energies above 100 keV. Seven sources are detected: the Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105, and the transient source XTE J1752-223.\n\n## I. 
INTRODUCTION\n\nThe Gamma ray Burst Monitor (GBM) on Fermi is currently the only instrument in orbit providing nearly continuous full sky coverage in the hard X-ray/low energy gamma ray energy range. The Earth occultation technique, used very successfully on BATSE, has been adapted to GBM. An initial catalog of 64 sources is currently being monitored and continuously augmented. At energies above 100 keV, six steady sources (the Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105) and one transient source (XTE J1752-223) have been detected in the first year of observation. We describe the instrument, outline the technique, and present light curves for the seven sources.\n\n## II. GBM AND THE EARTH OCCULTATION OBSERVATIONAL TECHNIQUE\n\nThe Gamma ray Burst Monitor is the secondary instrument onboard the Fermi satellite [1, 2]. It con-\n\nsists of 12 NaI detectors 5 '' in diameter by 0.5 '' thick mounted on the corners of the spacecraft and oriented such that they view the entire sky not occulted by the Earth. GBM also contains 2 BGO detectors 5 '' in diameter by 5 '' thick located on opposite sides of the spacecraft. None of the GBM detectors have direct imaging capability.\n\nKnown sources of gamma ray emission can be monitored with non-imaging detectors using the Earth occultation technique, as was successfully demonstrated with BATSE [3, 4]. When a source of gamma rays is occulted by the Earth, the count rate measured by the detector will drop, producing a step-like feature. When the source reappears from behind the Earths limb, the count rate will increase, producing another step. The diameter of the Earth seen from Fermi is ∼ 140 · , so roughly 30% of the sky is occulted by the Earth at any one time. Coupled with the ± 35 · slewing of the pointing direction every orbit, this means that the entire sky is occulted every two orbits. With an altitude of 565 km, a period of 96 minutes, and an orbital inclination of 26 . 
5 · , individual occultation steps last for ∼ 10 seconds (Fig. 1).", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0955.pdf" - }, - { - "text": "\n\nFIG. 3: Cen A light curve. Horizontal scale is in modified Julian days.\n\n\n\nto observe these breaks, GBM is able to see significant emission above 300 keV, consistent with the canonical hard spectrum.\n\nCen A (Fig. 3) is a Sy 2 galaxy that is the brightest AGN in hard x-rays/low energy gamma rays. It has a hard spectrum (Γ = 1 . 8) and has been observed at energies > 1 MeV [9]. The GBM results are consistent with this hard spectrum, though GBM does not have the sensitivity to determine if the hard spectrum continues beyond 300 keV or if the spectrum cuts off.\n\nCyg X-1 (Fig. 4) is a HMXB and one of the first systems determined to contain a black hole. It has been observed to emit significant emission above 100 keV including a power law tail extending out to greater than 1 MeV [10, 11]. The GBM results show significant emission above 300 keV, consistent with the power law tail observed when Cyg X-1 is in its hard state.\n\nGRS 1915+105 (Fig. 5) is a LMXB with the compact object being a massive black hole. Evidence for emission above 100 keV has been seen previously [12] with BATSE. The GBM light curve integrated over 490 days shows significant emission above 100 keV.\n\n1E 1740-29 (Fig. 6) is a LMXB very near the Galactic Center. It is a microquasar, and spends most of its time in the low/hard state. Integral observations indicate the presence of a power law tail above 200 keV [13]. The present GBM results are consistent with this high energy emission. In the future, we\n\nFIG. 4: Cyg X-1 light curve. Horizontal scale is in modified Julian days.FIG. 5: GRS 1915+105 light curve. Horizontal scale is in modified Julian days.\n\n", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0955.pdf" - }, - { - "text": "## VERITAS Observations of Blazars\n\nW. 
Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E > 100 GeV) γ -ray emission from astrophysical objects. VERITAS is currently the most sensitive VHE γ -ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼ 30 known to emit VHE photons. More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. These observations have resulted in the detection of VHE γ -rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n## 1. Introduction\n\nActive galactic nuclei are the most numerous class of identified VHE γ -ray sources. These objects emit non-thermal radiation across ∼ 20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ -ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ -rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH ( ∼ 2 min; [1]). 
VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) obser-\n\nvations of VHE blazars, can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ -rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## 2. 
VERITAS", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "## Submillimeter Variability and the Gamma-ray Connection in Fermi Blazars\n\nA. Strom Univ. of Arizona, AZ 85721, USA A. Siemiginowska, M. Gurwell, B. Kelly\n\nCfA, MA 02138, USA\n\nWe present multi-epoch observations from the Submillimeter Array ( SMA ) for a sample of 171 bright blazars, 43 of which were detected by Fermi during the first three months of observations. We explore the correlation between their gamma-ray properties and submillimeter observations of their parsec-scale jets, with a special emphasis on spectral index in both bands and the variability of the synchrotron component. Subclass is determined using a combination of Fermi designation and the Candidate Gamma-Ray Blazar Survey (CGRaBS), resulting in 35 BL Lac objects and 136 flat-spectrum radio quasars (FSRQs) in our total sample. We calculate submillimeter energy spectral indices using contemporaneous observations in the 1 mm and 850 micron bands during the months August-October 2008. The submillimeter light curves are modeled as first-order continuous autoregressive processes, from which we derive characteristic timescales. Our blazar sample exhibits no differences in submillimeter variability amplitude or characteristic timescale as a function of subclass or luminosity. All of the the light curves are consistent with being produced by a single process that accounts for both low and high states, and there is additional evidence that objects may be transitioning between blazar class during flaring epochs.\n\n## 1. INTRODUCTION\n\nThe timescales on which high-amplitude flaring events occur in blazars indicate that much of the energy is being produced deep within the jet on small, sub-parsec scales [1, 2]. 
Understanding if/how emission differs between blazar subclasses (i.e., BL Lacs objects and flat-spectrum radio quasars (FSRQs)) may offer important insight into the similarity between blazars and, furthermore, can provide constraints on the formation and acceleration of the jets themselves.\n\nFor the synchrotron component of blazar spectra, the low-frequency spectral break due to synchrotron self-absorption moves to higher frequencies as one measures closer to the base of the jet [2]. This often places the peak of the spectrum in the millimeter and submillimeter bands, where the emission is optically-thin and originates on parsec and sub-parsec scales [3], allowing direct observation of the most compact regions near the central engine. The high energy γ -ray emission originates as a Compton process, typically a combination of synchrotron-self-Compton (SSC) and external-radiation-Compton (ERC). Depending on the source properties, the synchrotron photons or external photons are upscattered by the same population of electrons that emit the millimeter and submillimeter spectra. Therefore the submillimeter and γ -ray emission are closely linked and give the full information about the source emission.\n\nA systematic study of the submillimeter properties of the entire sample of Fermi blazars has yet to be conducted and is one of the primary goals of our work. We present here preliminary analysis of the submillimeter properties of Fermi blazars detected by the Submil-\n\nlimeter Array 1 ( SMA ) at 1mm and 850 µ m, including an investigation of variable behavior and the determination of submillimeter energy spectral indices. In addition, we consider the connection to the observed γ -ray indices and luminosities.\n\n## 2. SMA BLAZARS", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 1: (Left) The preliminary significance measured from each of the 49 non-detected candidates using standard analysis cuts. 
The curve shows a Gaussian distribution, with mean zero and standard deviation one, normalized to the number of blazars. A similar result is obtained using analysis cuts optimized for soft-spectrum sources. (Right) The distribution of flux upper limits for the non-detected blazars in percentage of Crab Nebula flux above the observation threshold. The time-weighted average limit is less than ∼ 2% Crab flux.\n\n\n\n\n\nσ\n\nsince the launch of Fermi include LAT detections. In addition, several MWL campaigns on the well-studied VHE blazars Mkn 421 and Mkn 501 (please see the contributions of D. Gall and A. Konopelko in these proceedings) were also performed. Highlights of these campaigns include:\n\n - · 1ES 2344+514: A major (50% Crab) VHE flare, along with correlations of the VHE and X-ray flux were observed from this HBL. The VHE and X-ray spectra harden during bright states, and a synchrotron self-Compton (SSC) model can explain the observed SED in both the high and low states [26].\n - · 1ES 1218+304: This HBL flared during VERITAS MWL observations. Its unusually hard VHE spectrum strongly constrains the EBL. The observed flaring rules out kpc-scale jet emission as the explanation of the spectral hardness and places the EBL constraints on more solidfooting [27, 28].\n - · 1ES 0806+524: The observed SED of this new VHE HBL can be explained by an SSC model [16].\n - · W Comae: This IBL, the first discovered at VHE, flared twice in 2008 [14, 15]. Modeling of the SED is improved by including an externalCompton (EC) component in an SSC interpretation.\n - · 3C 66A: This IBL flared at VHE and MeV-GeV energies in 2008[17, 18]. Similar to W Comae and PKS 1424+240, modeling of observed SED suggests a strong EC component in addition to an SSC component.\n - · Mkn 421: This HBL exhibited major flaring behavior for several months in 2008. 
Correlations of the VHE and X-ray flux were observed, along with spectral hardening with increased flux in both bands [29].\n - · RGBJ0710+591: Modeling the SED of this HBL with an SSC model yields a good fit to the data. The inclusion of an external Compton component does not improve the fit.\n - · PKS1424+240: The broadband SED of this IBL (at unknown redshift) is well described by an SSC model favoring a redshift of less than 0.1 [21]. Using the photon index measured with Fermi-LAT in combination with recent EBL absorption models, the VERITAS data indicate that the redshift of PKS 1424+240 is less than 0.66.\n\n## 8. Conclusions\n\nThe first two years of the VERITAS blazar KSP were highly successful. Highlights include the detection of more than a 16 VHE blazars with the observations almost always having contemporaneous MWL data. Among these detections are 8 VHE blazar discoveries, including the first three IBLs known to emit VHE γ -rays. All but a handful of the blazars on the initial VERITAS discovery target list were observed, and the flux limits generated for those not VHE detected are generally the most-constraining ever. The excess seen in the stacked blazar analysis suggests that the initial direction of the VERITAS discovery program was well justified, and that follow-up observations of many of these initial targets will result in VHE discoveries. In addition, the Fermi-LAT is identifying many new compelling targets for the VERITAS blazar discovery program. These new candidates have already resulted in 3 VHE blazar discoveries. The future of the VERITAS blazar discovery program is clearly very bright.\n\nThe MWL aspect of the VERITAS blazar KSP has also been highly successful. Every VERITAS observation of a known, or newly discovered, VHE blazar has been accompanied by contemporaneous MWL observations. 
These data have resulted in the identifica-", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 4: The γ -ray index versus submillimeter index plane. The blazars fall more steeply in the γ -rays than in the submillimeter band, where most are, in fact, rising. This LAT-detected sample contrasts with the full SMA sample, where the blazars are more distributed around α S ∼ 0.\n\n\n\nas the presence of SSC versus ERC. Here, we use submillimeter luminosity as a proxy for jet power, which is correlated with the integrated luminosity of the synchrotron component. Elevated γ -ray luminosity with respect to the synchrotron component (which is often seen in FSRQs) suggests the upscattering of external photons off the synchrotron-emitting electrons. These objects should occupy the upper right of the ratio/jet power plot, and BL Lacs, which generally exhibit components with roughly comparable luminosities, should occupy the lower left. It is clear from the figure, however, that many FSRQs exhibit ratios similar to those of the BL Lacs and vis versa.\n\nSikora et al. [10] report that, during its flaring epochs, 3C 454.3 transitions from its typical FSRQ state to a more BL Lac-like state, where the synchrotron component emits much more strongly compared to the γ -ray component than during its 'low state'. 3C 454.3, which is the highest submillimeter luminosity FSRQ in our sample, would then shift down and to the right in Figure 5 when it enters a flaring period. For the first three months of the Fermi mission, 3C 454.3 was not flaring, which may explain its present location in Figure 5. The three objects for which there is a type discrepancy between CGRaBS and LBAS are all FSRQs (in CGRaBS) and exhibit\n\nlow luminosity ratios and high luminosity, which suggest they may be undergoing the same changes as 3C 454.3. 
A possible interpretation of the elevated luminosity ratios observed in some BL Lacs objects is that there has been a dramatic increase in γ -ray luminosity due to ERC, which would not be reflected in the synchrotron component.\n\n## 5. CONCLUSIONS\n\nThe motivation for observing blazars in the submillimeter is to study behavior close to the central engine, where the jet material is presumably still being accelerated. The separate emission processes that contribute to overall SED may present differently in BL Lacs and FSRQs, allowing us to understand the similarities and differences between blazar types. We have investigated these differences between objects in terms of submillimeter behavior and, in conclusion, find that\n\n- · The SMA blazars exhibit submillimeter energy spectral indexes that follow the spectral sequence interpretation of blazars.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0806.pdf" - }, - { - "text": "## 2. VERITAS\n\nVERITAS, a stereoscopic array of four 12-m atmospheric-Cherenkov telescopes located in Arizona, is used to study VHE γ -rays from a variety of astrophysical sources [4]. VERITAS began scientific observations with a partial array in September 2006 and has routinely observed with the full array since September 2007. The performance metrics of VERITAS include an energy threshold of ∼ 100 GeV, an energy resolution of ∼ 15%, an angular resolution of ∼ 0.1 · , and a sensitivity yielding a 5 σ detection of a 1% Crab Nebula flux object in < 30 hours 1 . VERITAS has an active maintenance program (e.g. frequent mirror recoating and alignment) to ensure its continued high performance over time, and an upgrade improving both the camera (higher quantum-efficiency PMTs) and the trigger system has been proposed to the funding agencies.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 5: Ratio of γ -ray luminosity to submillimeter luminosity in the 1mm band. 
The location of an object in this plot should be directly correlated with its blazar 'state', with FSRQs occupying the upper right and BL Lacs the lower left. Flat-spectrum radio quasar 3C 454.3 is the object with the highest submillimeter luminosity in this plot.\n\n\n\n - · BL Lacs and FSRQs do not exhibit significant differences in amplitude of submillimeter variability or characteristic timescale, but our sample of BL Lacs may be dominated by highpeaked BL Lacs (HBLs), which exhibit observational similarities with FSRQs.\n - · Blazar submillimeter light curves are consistent with being produced by a single process that accounts for both high and low states, with characteristic timescales 10 < τ rest < 500 days.\n - · The blazars detected by Fermi have synchrotron peaks at higher frequencies, regardless of submillimeter luminosity.\n - · FSRQs exhibit higher ratios of γ -ray to submillimeter luminosity than BL Lacs (Figure 5), but all objects inhabit a region of parameter space suggesting transitions between states during flaring epochs.\n\nAs Fermi continues to observe fainter sources, the sample of objects for which we can perform this type of analysis will increase and provide better limits on our results. To understand the physical relevance of these results, however, it is important to be able to distinguish between the difference in variability between BL\n\nLacs and FSRQs. One avenue for exploring this difference is to monitor changing submillimeter energy spectral index and the ratio of γ -ray to submillimeter luminosity as functions of time. The full meaning of the results of our autoregressive method is not yet clear, and will require better-sampled blazar light curves and the comparison between τ rest with physical timescales such as the synchrotron cooling timescale. 
These analyses would allow us to place constraints on the processes occurring near the base of the jet in blazars and further understand the intimate connection between them.\n\n## Acknowledgments\n\nThis work was supported in part by the NSF REU and DoD ASSURE programs under Grant no. 0754568 and by the Smithsonian Institution. Partial support was also provided by NASA contract NAS8-39073 and NASA grant NNX07AQ55G. We have made use of the SIMBAD database, operated at CDS, Strasbourg, France, and the NASA/IPAC Extragalactic Database (NED) which is operated by the JPL, Caltech, under contract with NASA.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0806.pdf" - }, - { - "text": "## 5.1. Recent VERITAS Blazar Discoveries\n\nPrior to the launch of Fermi VERITAS had discovered VHE emission from 2 blazars. These included the first VHE-detected IBL, W Comae [14, 15], and the HBL 1ES0806+524 [16]. VERITAS has discovered 6 VHE blazars since the launch of Fermi. Three of these were initially observed by VERITAS prior to the release of Fermi-LAT results, due to the X-ray brightness of the synchrotron peaks of their SEDs.\n\nVHEemission from 3C66A was discovered by VERITAS in September 2008 [17] during a flaring episode that was also observed by the Fermi-LAT [18]. The observed flux above 200 GeV was 6% of the Crab Nebula flux and the measured VHE spectrum was very soft (Γ VHE ∼ 4 . 1). RGBJ0710+591 was detected", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "## 3. VERITAS Blazar KSP\n\nVERITAS observes for ∼ 750 h and ∼ 250 h each year during periods of astronomical darkness and partial moonlight, respectively. 
The moonlight observations are almost exclusively used for a blazar discovery program, and a large fraction of the dark time is used for the blazar KSP, which consists of:\n\n- · A VHE blazar discovery program ( ∼ 200 h / yr): Each year ∼ 10 targets are selected to receive ∼ 10 h of observations each during astronomical darkness. These data are supplemented by discovery observations during periods of partial moonlight.\n- · A target-of-opportunity (ToO) observation program ( ∼ 50 h / yr): VERITAS blazar observations can be triggered by either a VERITAS blazar discovery, a VHE flaring alert ( > 2 Crab) from the blazar monitoring program of the Whipple 10-m telescope or from another VHE instrument, or a lower-energy flaring alert (optical, X-ray or Fermi-LAT). Should the guaranteed allocation be exhausted, further time can be requested from a pool of director's discretionary time.\n- · Multi-wavelength (MWL) studies of VHE blazars ( ∼ 50 h / yr + ToO): Each year one blazar receives a deep exposure in a pre-planned campaign of extensive, simultaneous MWL (Xray, optical, radio) measurements. ToO observation proposals for MWL measurements are also submitted to lower-energy observatories (e.g. Swift) and are triggered by a VERITAS discovery or flaring alert.\n- · Distant VHE blazar studies to constrain the extragalactic background light (EBL): Here distant targets are given a higher priority in the blazar discovery program, as well as for the MWL observations of known VHE blazars, particularly those with hard VHE spectra.\n\n## 4. Blazar Discovery Program\n\nThe blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. However, the program also includes IBLs (intermediatepeaked) and LBLs (low-peaked), as well as flat spectrum radio quasars (FSRQs), in an attempt to increase the types of blazars known to emit VHE γ -rays. 
The observed targets are drawn from a target list containing objects visible to the telescopes at reasonable zenith angles ( -8 · < δ < 72 · ), without a previously published VHE limit below 1.5% Crab, and with a measured redshift z < 0 . 3. To further the study of the\n\nEBL a few objects having a large ( z > 0 . 3) are also included in the target list. The target list includes:\n\n- · All nearby ( z < 0 . 3) HBL and IBL recommended as potential VHE emitters in [5, 6, 7].\n- · The X-ray brightest HBL ( z < 0 . 3) in the recent Sedentary [8] and ROXA [9] surveys.\n- · Four distant ( z > 0 . 3) BL Lac objects recommended by [5, 10].\n- · Several FSRQ recommended as potential VHE emitters in [6, 11].\n- · All nearby ( z < 0 . 3) blazars detected by EGRET [12].\n- · All nearby ( z < 0 . 3) blazars contained in the Fermi-LAT Bright AGN Sample [13].\n- · All sources ( | b | > 10 · ) detected by Fermi-LAT where extrapolations of their MeV-GeV γ -ray spectrum (including EBL absorption; assuming z = 0.3 if the redshift is unknown) indicates a possible VERITAS detection in less than 20 h. This criteria is the focus of the 2009-10 VERITAS blazar discovery program.\n\n## 5. VERITAS AGN Detections\n\nVERITAS has detected VHE γ -ray emission from 16 AGN (15 blazars), including 8 VHE discoveries. These AGN are shown in Table I, and each has been detected by the Large Area Telescope (LAT) instrument aboard the Fermi Gamma-ray Space Telescope. Every blazar discovered by VERITAS was the subject of ToO MWL observations to enable modeling of its simultaneously-measured SED. The known VHE blazars detected by VERITAS were similarly the targets of MWL observations.\n\n## 5.1. 
Recent VERITAS Blazar Discoveries", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0955.pdf", - "query": "What is Cyg X-1?", - "target_page": 3, - "target_passage": "is a HMXB and one of the first systems determined to contain a black hole", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## Carriage controls\n\nIt is important to set the ACIF parameters CC and CCTYPE correctly. Table 7-2 describes the ANSI carriage controls. The encoding columns show what you see if you look at the document in a hexadecimal editor.\n\nTable 7-2 ANSI carriage controls\n\n| Carriage control | Description | Encoding in ASCII | Encoding in EBCDIC |\n|--------------------|-------------------|---------------------|----------------------|\n| 1 | New page | x'31' | x'F1' |\n| | Space one line | x'20' | x'40' |\n| 0 | Space two lines | x'30' | x'F0' |\n| - | Space three lines | x'2D' | x'60' |\n| + | Suppress space | x'2B' | x'8F' |", - "page_start": 198, - "page_end": 198, - "source_file": "sg246915.pdf" - }, - { - "text": "\n\nFIG. 3: Cen A light curve. Horizontal scale is in modified Julian days.\n\n\n\nto observe these breaks, GBM is able to see significant emission above 300 keV, consistent with the canonical hard spectrum.\n\nCen A (Fig. 3) is a Sy 2 galaxy that is the brightest AGN in hard x-rays/low energy gamma rays. It has a hard spectrum (Γ = 1 . 8) and has been observed at energies > 1 MeV [9]. The GBM results are consistent with this hard spectrum, though GBM does not have the sensitivity to determine if the hard spectrum continues beyond 300 keV or if the spectrum cuts off.\n\nCyg X-1 (Fig. 4) is a HMXB and one of the first systems determined to contain a black hole. It has been observed to emit significant emission above 100 keV including a power law tail extending out to greater than 1 MeV [10, 11]. 
The GBM results show significant emission above 300 keV, consistent with the power law tail observed when Cyg X-1 is in its hard state.\n\nGRS 1915+105 (Fig. 5) is a LMXB with the compact object being a massive black hole. Evidence for emission above 100 keV has been seen previously [12] with BATSE. The GBM light curve integrated over 490 days shows significant emission above 100 keV.\n\n1E 1740-29 (Fig. 6) is a LMXB very near the Galactic Center. It is a microquasar, and spends most of its time in the low/hard state. Integral observations indicate the presence of a power law tail above 200 keV [13]. The present GBM results are consistent with this high energy emission. In the future, we\n\nFIG. 4: Cyg X-1 light curve. Horizontal scale is in modified Julian days.FIG. 5: GRS 1915+105 light curve. Horizontal scale is in modified Julian days.\n\n", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0955.pdf" - }, - { - "text": "http://www.cygwin.com", - "page_start": 811, - "page_end": 811, - "source_file": "sg247938.pdf" - }, - { - "text": "- http://ibm.co/1CD1gxG", - "page_start": 811, - "page_end": 811, - "source_file": "sg247938.pdf" - }, - { - "text": "*\n\nTable 7 Summary of consistent individual predictors for each utilization outcome\n\n| Dependent variable | Utilization outcome | Utilization outcome | Utilization outcome | Utilization outcome | Utilization outcome | Utilization outcome |\n|----------------------------|-----------------------|-----------------------|-----------------------|-----------------------|-----------------------------|-----------------------|\n| | Any care | Opioids | Injection | Surgery | Diagnostic tests or imaging | Emergency room |\n| Age | | | | | | X |\n| Insurance | | | | | | X |\n| Comorbidities (CCI) | | X | | | X | |\n| Baseline disability | X | | X | X | X | X |\n| Baseline pain | | X | | | | |\n| Change in pain | X | X | | | X | X |\n| Change in disability | | | | X | | |\n| Change in 10-item OSPRO-YF | | | | X | | 
|\n\nCCI Charlson comorbidity index, OSPRO-YF Pain-related psychological distress screening tool\n\n - * Significant predictors ( p < .05) for each dependent variable denoted with ' X '\n\nservices, suggesting injection may be the most difficult service to predict with the included variable set.\n\n## Surgery\n\nBaseline disability (OR = 3.13 -3.25, p < 0.001), change in disability (OR = 3.04 -3.05, p = 0.01) and change in 10-item OSPRO-YF score (OR = 1.12 -1.14, p < 0.05) where consistent predictors of subsequent surgery. Notably, magnitude of prediction was comparable between change in disability and baseline disability. This was the only parsimonious model to include an OSPRO tool. In this case, an increase in pain-related psychological distress measured by the OSPRO-YF 10-item questionnaire over the first 4 weeks was associated with higher odds of surgery. The 3 predictors in this model explained just over 30% of the variance in surgery utilization.\n\n## Diagnostic tests or imaging\n\nComorbidity index score (OR = 1.35 -1.45, p < 0.05), baseline disability (OR = 2.25 -2.66, p < 0.001), and baseline to 4-week change in pain intensity (OR = 3.04 -3.05, p = 0.01) were significant predictors of diagnostic test or imaging utilization. Among these, baseline disability was the strongest predictor. In these models, higher comorbidity index, higher baseline disability and worsening pain were associated with higher odds of utilization. 
Together, these variables explained approximately 30% of the variance in utilization.\n\n## Emergency room", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed5.pdf" - }, - { - "text": "| Model | Cache | Fibre Channel (FC) / iSCSI / SAS ports | Drive slots | Power supply |\n|------------------------------------------------|---------------------------------|-----------------------------------------------|---------------|-----------------------------------------------------------------------------|\n| 2076-624 (with two node canisters Gen2+) | 64, 128, or 256 gigabytes (GB) | 16 x 16 Gb / 6 x 1 Gb + 8x 10 Gb / 4 x 12 Gb | 24 x 2.5-inch | Integrated dual power supplies with battery |\n| 2076-524 (with two node canisters Gen2) | 32 or 64 gigabytes (GB) | 4 x 16 Gb / 4 x 1 Gb + 4 x 10 Gb / 4 x 12 Gb | 24 x 2.5-inch | Integrated dual power supplies with battery |\n| 2076-212 (with two expansion canisters) | Not applicable (N/A) | -- / -- / 4 x 12 Gb | 12 x 3.5-inch | Integrated dual power supplies |\n| 2076-224 (with two expansion canisters) | N/A | -- / -- / 4 x 12 Gb | 24 x 2.5-inch | Integrated dual power supplies |\n| 2076-12F (with two expansion canisters Gen2) | N/A | -- / -- / 4 x 12 Gb | 12 x 3.5-inch | Integrated dual power supplies (attaches to 2076-524 and |\n| 2076-24F (with two expansion canisters Gen2) | N/A | -- / -- / 4 x 12 Gb | 24 x 2.5-inch | Integrated dual power supplies (attaches to 2076-524 and 2076-624 only) |\n| 2076-92F (with two expansion canisters Gen2) | N/A | -- / -- / 4 x 12 Gb | 92 x 3.5-inch | Integrated dual power supplies (attaches to 2076-524 and 2076-624 only) |\n\nNote: The first generation of control enclosures (2076 - models 112, 124, 312, and 324) were withdrawn from marketing. However, expansion enclosures 2076-212 and 2076-224 can still be ordered (see Table 2-1) because they attach to those control enclosures only. 
Intermixing control enclosures with expansion enclosures of different generations is not a supported combination, and is refused by IBM Spectrum Virtualize software.\n\nThe first generation of IBM Storwize V7000 hardware is not supported by IBM Spectrum Virtualize V8.1. Any attempt to upgrade to V8.1 is rejected by the software. The last supported version for first-generation Storwize V7000 is V7.8.\n\n## 2.3.2 IBM Storage Utility offerings\n\nThe IBM 2076 Model U7A is the IBM Storwize V7000 with a three-year warranty to be used in the Storage Utility Offering space. These models are physically and functionally identical to the Storwize V7000 model 624, except for target configurations and variable capacity billing. The variable capacity billing uses IBM Spectrum Control™ Storage Insights to monitor the system usage, which allows allocated storage usage above a base subscription rate to be billed per TB, per month.", - "page_start": 35, - "page_end": 35, - "source_file": "sg247938.pdf" - }, - { - "text": "- /SM590000 The first byte is always x'5A'.\n - /SM590000 The second and third bytes are the length (maximum length of 32767).\n - /SM590000 The fourth byte is always x'D3'.\n - /SM590000 The fourth, fifth, and sixth bytes are the Structured Field Identifier, for example, x'D3A8A8' or x'D3A8AF'.", - "page_start": 199, - "page_end": 199, - "source_file": "sg246915.pdf" - }, - { - "text": "- /SM590000 Replaces the ASCII form feed ( X'0C' ) with an ASCII new page command ( X'31' ).\n - /SM590000 Leaves X'0A' in the file.", - "page_start": 267, - "page_end": 267, - "source_file": "sg246915.pdf" - }, - { - "text": "where I ( i j ) = 1 if i j = s and I ( i j ) = 0 if i j = w . In other words, the predicate is that the fraction of queries routed to the strong model is bounded by ϵ .\n\nControl plane integrity. 
A control plane integrity adversary is a randomized algorithm A that seeks to maliciously guide inference flow.\n\nIn an unconstrained LLM control plane integrity attack, the adversary A seeks to generate inputs ⃗x = ⃗x 1 , . . . , ⃗x q such that running R M ω ( ⃗x ) generates a transcript for which P (( x 1 , i 1 ) , . . . , ( x q , i q )) = 0 . This attack could be launched by an adversary who wants to maximize inference costs for a victim application using an LLM router.\n\nA harder setting requires input adaptation, where the adversary is given inputs x 1 , . . . , x q and it must find new inputs ˆ x 1 , . . . , ˆ x q for which the transcript resulting from P ((ˆ x 1 , i 1 ) , . . . , (ˆ x q , i q )) = 0 . There will be some competing constraint, such as that x j and ˆ x j are very similar for each j , or that the outputs y j ← $ R M ω ( x j ) and ˆ y j ← $ R M ω (ˆ x j ) are close. In the routing context, the adversary's goal is to increase the fraction of queries that get routed to the strong model, in order to improve the overall quality of responses, drive up the victim application's inference costs, or both.\n\n̸\n\nRelationship to evasion attacks. Evasion attacks [25, 43, 60] against an inference system (also called adversarial examples [32, 48, 49]) would, in our setting, seek to find a small modification ∆ to an input x such that R M ω ( x +∆) = R M ω ( x ) where addition is appropriately defined based on input type (e.g., slight changes to text).\n\nOur attack setting is not the same. The control plane integrity adversary seeks to maliciously control the inference flow , not necessarily the output of inference. In an unconstrained attack, the adversary does not care what outputs are generated. In the input adaptation attack, the adversary seeks to craft inputs that modify the inference flow yet do not change the responses of the strong underlying LLM to the extent possible. 
Looking ahead, we will use evasion techniques in our adaptation attacks against learned control plane routers, but, importantly, not the overall inference.\n\nIn the other direction, undermining LLM control plane integrity could be a stepping stone toward evasion attacks. For example, if R M ω is used to classify malicious content by combining LLMs each tuned to different types of harm categories, then modifying inputs to force inference flows away from appropriate models could aid evasion. We leave evaluation of how control-plane integrity attacks can enable evasion to future work.\n\nThreat models. Within the context of control plane integrity attacks against LLM routers, we identify several threat models that differ in terms of the adversary's goals and their knowledge about the target control plane R M ω .\n\nIn terms of goals, an adversary may seek to inflate the costs of a victim application that utilizes an LLM control plane. As a kind of denial-of-service attack, such cost inflation would penalize the application developer who expects routing to control costs. Another adversarial goal could be arbitrage : consider an application that charges X dollars per query, whereas directly using M s costs Y > X . The application's lower rate X makes economic sense assuming it uses a router to route the bulk of queries to a cheaper model M w . An input adaptation attack in this setting can gain (indirect) access to M s , obtaining an arbitrage advantage of Y -X per query. To be effective, this arbitrage adversary would want to ensure that adaptations do not lower response quality (i.e., it extracts all the value out of rerouting to M s ). 
As before, the victim in this case is the application that relies on routing to lower its costs (unsuccessfully, under this attack).", - "page_start": 3, - "page_end": 3, - "source_file": "arxiv1.pdf" - }, - { - "text": "- (a) regulation 4 or 4A of the Health Protection (Notification) Regulations 2010( a ) applies in relation to the test provider; or\n - (b) if the test provider arranges with another person ('X') for X to carry out any element of the single end-to-end testing service on their behalf, either of those regulations applies to X in the carrying out of that element,", - "page_start": 72, - "page_end": 72, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0955.pdf", - "query": "What satellite is the Gamma Ray Burst Observatory on?", - "target_page": 1, - "target_passage": " Fermi satellite", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## Observations of Soft Gamma Ray Sources > 100 keV Using Earth Occultation with GBM\n\nG.L. Case, M.L. Cherry, J. Rodi Dept. of Physics & Astronomy, Louisiana State Univ., Baton Rouge, LA 70803, USA\n\n## A. Camero-Arranz\n\nFundaci'on Espa˜nola de Ciencia y Tecnolog'ıa (MICINN), C/Rosario Pino,14-16, 28020-Madrid, Spain\n\n## E. Beklen\n\nMiddle East Technical University (METU), 06531, Ankara, Turkey\n\nC. A. Wilson-Hodge\n\nNASA Marshall Space Flight Center, Huntsville, AL 35812\n\n## P. Jenke\n\nNASA Postdoctoral Program Fellow, NASA Marshall Space Flight Center, Huntsville, AL 35812\n\nP.N. Bhat, M.S. Briggs, V. Chaplin, V. Connaughton, R. Preece University of Alabama in Huntsville, Huntsville, AL 35899\n\n## M.H. Finger\n\nUSRA, National Space Science and Technology Center, Huntsville, AL 35899\n\nThe NaI and BGO detectors on the Gamma ray Burst Monitor (GBM) on Fermi are now being used for long term monitoring of the hard X-ray/low energy gamma ray sky. 
Using the Earth occultation technique demonstrated previously by the BATSE instrument on the Compton Gamma Ray Observatory, GBM produces multiband light curves and spectra for known sources and transient outbursts in the 8 keV - 1 MeV band with its NaI detectors and up to 40 MeV with its BGO. Coverage of the entire sky is obtained every two orbits, with sensitivity exceeding that of BATSE at energies below ∼ 25 keV and above ∼ 1 . 5 MeV. We describe the technique and present preliminary results after the first ∼ 17 months of observations at energies above 100 keV. Seven sources are detected: the Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105, and the transient source XTE J1752-223.\n\n## I. INTRODUCTION\n\nThe Gamma ray Burst Monitor (GBM) on Fermi is currently the only instrument in orbit providing nearly continuous full sky coverage in the hard X-ray/low energy gamma ray energy range. The Earth occultation technique, used very successfully on BATSE, has been adapted to GBM. An initial catalog of 64 sources is currently being monitored and continuously augmented. At energies above 100 keV, six steady sources (the Crab, Cyg X-1, Swift J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105) and one transient source (XTE J1752-223) have been detected in the first year of observation. We describe the instrument, outline the technique, and present light curves for the seven sources.\n\n## II. GBM AND THE EARTH OCCULTATION OBSERVATIONAL TECHNIQUE\n\nThe Gamma ray Burst Monitor is the secondary instrument onboard the Fermi satellite [1, 2]. It con-\n\nsists of 12 NaI detectors 5 '' in diameter by 0.5 '' thick mounted on the corners of the spacecraft and oriented such that they view the entire sky not occulted by the Earth. GBM also contains 2 BGO detectors 5 '' in diameter by 5 '' thick located on opposite sides of the spacecraft. 
None of the GBM detectors have direct imaging capability.\n\nKnown sources of gamma ray emission can be monitored with non-imaging detectors using the Earth occultation technique, as was successfully demonstrated with BATSE [3, 4]. When a source of gamma rays is occulted by the Earth, the count rate measured by the detector will drop, producing a step-like feature. When the source reappears from behind the Earths limb, the count rate will increase, producing another step. The diameter of the Earth seen from Fermi is ∼ 140 · , so roughly 30% of the sky is occulted by the Earth at any one time. Coupled with the ± 35 · slewing of the pointing direction every orbit, this means that the entire sky is occulted every two orbits. With an altitude of 565 km, a period of 96 minutes, and an orbital inclination of 26 . 5 · , individual occultation steps last for ∼ 10 seconds (Fig. 1).", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0955.pdf" - }, - { - "text": "## 2. VERITAS\n\nVERITAS, a stereoscopic array of four 12-m atmospheric-Cherenkov telescopes located in Arizona, is used to study VHE γ -rays from a variety of astrophysical sources [4]. VERITAS began scientific observations with a partial array in September 2006 and has routinely observed with the full array since September 2007. The performance metrics of VERITAS include an energy threshold of ∼ 100 GeV, an energy resolution of ∼ 15%, an angular resolution of ∼ 0.1 · , and a sensitivity yielding a 5 σ detection of a 1% Crab Nebula flux object in < 30 hours 1 . VERITAS has an active maintenance program (e.g. frequent mirror recoating and alignment) to ensure its continued high performance over time, and an upgrade improving both the camera (higher quantum-efficiency PMTs) and the trigger system has been proposed to the funding agencies.", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "## VERITAS Observations of Blazars\n\nW. 
Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E > 100 GeV) γ -ray emission from astrophysical objects. VERITAS is currently the most sensitive VHE γ -ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼ 30 known to emit VHE photons. More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. These observations have resulted in the detection of VHE γ -rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n## 1. Introduction\n\nActive galactic nuclei are the most numerous class of identified VHE γ -ray sources. These objects emit non-thermal radiation across ∼ 20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ -ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ -rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH ( ∼ 2 min; [1]). 
VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) obser-\n\nvations of VHE blazars, can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ -rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## 2. 
VERITAS", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "## Submillimeter Variability and the Gamma-ray Connection in Fermi Blazars\n\nA. Strom Univ. of Arizona, AZ 85721, USA A. Siemiginowska, M. Gurwell, B. Kelly\n\nCfA, MA 02138, USA\n\nWe present multi-epoch observations from the Submillimeter Array ( SMA ) for a sample of 171 bright blazars, 43 of which were detected by Fermi during the first three months of observations. We explore the correlation between their gamma-ray properties and submillimeter observations of their parsec-scale jets, with a special emphasis on spectral index in both bands and the variability of the synchrotron component. Subclass is determined using a combination of Fermi designation and the Candidate Gamma-Ray Blazar Survey (CGRaBS), resulting in 35 BL Lac objects and 136 flat-spectrum radio quasars (FSRQs) in our total sample. We calculate submillimeter energy spectral indices using contemporaneous observations in the 1 mm and 850 micron bands during the months August-October 2008. The submillimeter light curves are modeled as first-order continuous autoregressive processes, from which we derive characteristic timescales. Our blazar sample exhibits no differences in submillimeter variability amplitude or characteristic timescale as a function of subclass or luminosity. All of the the light curves are consistent with being produced by a single process that accounts for both low and high states, and there is additional evidence that objects may be transitioning between blazar class during flaring epochs.\n\n## 1. INTRODUCTION\n\nThe timescales on which high-amplitude flaring events occur in blazars indicate that much of the energy is being produced deep within the jet on small, sub-parsec scales [1, 2]. 
Understanding if/how emission differs between blazar subclasses (i.e., BL Lacs objects and flat-spectrum radio quasars (FSRQs)) may offer important insight into the similarity between blazars and, furthermore, can provide constraints on the formation and acceleration of the jets themselves.\n\nFor the synchrotron component of blazar spectra, the low-frequency spectral break due to synchrotron self-absorption moves to higher frequencies as one measures closer to the base of the jet [2]. This often places the peak of the spectrum in the millimeter and submillimeter bands, where the emission is optically-thin and originates on parsec and sub-parsec scales [3], allowing direct observation of the most compact regions near the central engine. The high energy γ -ray emission originates as a Compton process, typically a combination of synchrotron-self-Compton (SSC) and external-radiation-Compton (ERC). Depending on the source properties, the synchrotron photons or external photons are upscattered by the same population of electrons that emit the millimeter and submillimeter spectra. Therefore the submillimeter and γ -ray emission are closely linked and give the full information about the source emission.\n\nA systematic study of the submillimeter properties of the entire sample of Fermi blazars has yet to be conducted and is one of the primary goals of our work. We present here preliminary analysis of the submillimeter properties of Fermi blazars detected by the Submil-\n\nlimeter Array 1 ( SMA ) at 1mm and 850 µ m, including an investigation of variable behavior and the determination of submillimeter energy spectral indices. In addition, we consider the connection to the observed γ -ray indices and luminosities.\n\n## 2. SMA BLAZARS", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "## 5.1. Recent VERITAS Blazar Discoveries\n\nPrior to the launch of Fermi VERITAS had discovered VHE emission from 2 blazars. 
These included the first VHE-detected IBL, W Comae [14, 15], and the HBL 1ES0806+524 [16]. VERITAS has discovered 6 VHE blazars since the launch of Fermi. Three of these were initially observed by VERITAS prior to the release of Fermi-LAT results, due to the X-ray brightness of the synchrotron peaks of their SEDs.\n\nVHEemission from 3C66A was discovered by VERITAS in September 2008 [17] during a flaring episode that was also observed by the Fermi-LAT [18]. The observed flux above 200 GeV was 6% of the Crab Nebula flux and the measured VHE spectrum was very soft (Γ VHE ∼ 4 . 1). RGBJ0710+591 was detected", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 4: The γ -ray index versus submillimeter index plane. The blazars fall more steeply in the γ -rays than in the submillimeter band, where most are, in fact, rising. This LAT-detected sample contrasts with the full SMA sample, where the blazars are more distributed around α S ∼ 0.\n\n\n\nas the presence of SSC versus ERC. Here, we use submillimeter luminosity as a proxy for jet power, which is correlated with the integrated luminosity of the synchrotron component. Elevated γ -ray luminosity with respect to the synchrotron component (which is often seen in FSRQs) suggests the upscattering of external photons off the synchrotron-emitting electrons. These objects should occupy the upper right of the ratio/jet power plot, and BL Lacs, which generally exhibit components with roughly comparable luminosities, should occupy the lower left. It is clear from the figure, however, that many FSRQs exhibit ratios similar to those of the BL Lacs and vis versa.\n\nSikora et al. [10] report that, during its flaring epochs, 3C 454.3 transitions from its typical FSRQ state to a more BL Lac-like state, where the synchrotron component emits much more strongly compared to the γ -ray component than during its 'low state'. 
3C 454.3, which is the highest submillimeter luminosity FSRQ in our sample, would then shift down and to the right in Figure 5 when it enters a flaring period. For the first three months of the Fermi mission, 3C 454.3 was not flaring, which may explain its present location in Figure 5. The three objects for which there is a type discrepancy between CGRaBS and LBAS are all FSRQs (in CGRaBS) and exhibit\n\nlow luminosity ratios and high luminosity, which suggest they may be undergoing the same changes as 3C 454.3. A possible interpretation of the elevated luminosity ratios observed in some BL Lacs objects is that there has been a dramatic increase in γ -ray luminosity due to ERC, which would not be reflected in the synchrotron component.\n\n## 5. CONCLUSIONS\n\nThe motivation for observing blazars in the submillimeter is to study behavior close to the central engine, where the jet material is presumably still being accelerated. The separate emission processes that contribute to overall SED may present differently in BL Lacs and FSRQs, allowing us to understand the similarities and differences between blazar types. We have investigated these differences between objects in terms of submillimeter behavior and, in conclusion, find that\n\n- · The SMA blazars exhibit submillimeter energy spectral indexes that follow the spectral sequence interpretation of blazars.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 1: (Left) The preliminary significance measured from each of the 49 non-detected candidates using standard analysis cuts. The curve shows a Gaussian distribution, with mean zero and standard deviation one, normalized to the number of blazars. A similar result is obtained using analysis cuts optimized for soft-spectrum sources. (Right) The distribution of flux upper limits for the non-detected blazars in percentage of Crab Nebula flux above the observation threshold. 
The time-weighted average limit is less than ∼ 2% Crab flux.\n\n\n\n\n\nσ\n\nsince the launch of Fermi include LAT detections. In addition, several MWL campaigns on the well-studied VHE blazars Mkn 421 and Mkn 501 (please see the contributions of D. Gall and A. Konopelko in these proceedings) were also performed. Highlights of these campaigns include:\n\n - · 1ES 2344+514: A major (50% Crab) VHE flare, along with correlations of the VHE and X-ray flux were observed from this HBL. The VHE and X-ray spectra harden during bright states, and a synchrotron self-Compton (SSC) model can explain the observed SED in both the high and low states [26].\n - · 1ES 1218+304: This HBL flared during VERITAS MWL observations. Its unusually hard VHE spectrum strongly constrains the EBL. The observed flaring rules out kpc-scale jet emission as the explanation of the spectral hardness and places the EBL constraints on more solidfooting [27, 28].\n - · 1ES 0806+524: The observed SED of this new VHE HBL can be explained by an SSC model [16].\n - · W Comae: This IBL, the first discovered at VHE, flared twice in 2008 [14, 15]. Modeling of the SED is improved by including an externalCompton (EC) component in an SSC interpretation.\n - · 3C 66A: This IBL flared at VHE and MeV-GeV energies in 2008[17, 18]. Similar to W Comae and PKS 1424+240, modeling of observed SED suggests a strong EC component in addition to an SSC component.\n - · Mkn 421: This HBL exhibited major flaring behavior for several months in 2008. Correlations of the VHE and X-ray flux were observed, along with spectral hardening with increased flux in both bands [29].\n - · RGBJ0710+591: Modeling the SED of this HBL with an SSC model yields a good fit to the data. The inclusion of an external Compton component does not improve the fit.\n - · PKS1424+240: The broadband SED of this IBL (at unknown redshift) is well described by an SSC model favoring a redshift of less than 0.1 [21]. 
Using the photon index measured with Fermi-LAT in combination with recent EBL absorption models, the VERITAS data indicate that the redshift of PKS 1424+240 is less than 0.66.\n\n## 8. Conclusions\n\nThe first two years of the VERITAS blazar KSP were highly successful. Highlights include the detection of more than a 16 VHE blazars with the observations almost always having contemporaneous MWL data. Among these detections are 8 VHE blazar discoveries, including the first three IBLs known to emit VHE γ -rays. All but a handful of the blazars on the initial VERITAS discovery target list were observed, and the flux limits generated for those not VHE detected are generally the most-constraining ever. The excess seen in the stacked blazar analysis suggests that the initial direction of the VERITAS discovery program was well justified, and that follow-up observations of many of these initial targets will result in VHE discoveries. In addition, the Fermi-LAT is identifying many new compelling targets for the VERITAS blazar discovery program. These new candidates have already resulted in 3 VHE blazar discoveries. The future of the VERITAS blazar discovery program is clearly very bright.\n\nThe MWL aspect of the VERITAS blazar KSP has also been highly successful. Every VERITAS observation of a known, or newly discovered, VHE blazar has been accompanied by contemporaneous MWL observations. These data have resulted in the identifica-", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0770.pdf" - }, - { - "text": "\n\nFIG. 3: Cen A light curve. Horizontal scale is in modified Julian days.\n\n\n\nto observe these breaks, GBM is able to see significant emission above 300 keV, consistent with the canonical hard spectrum.\n\nCen A (Fig. 3) is a Sy 2 galaxy that is the brightest AGN in hard x-rays/low energy gamma rays. It has a hard spectrum (Γ = 1 . 8) and has been observed at energies > 1 MeV [9]. 
The GBM results are consistent with this hard spectrum, though GBM does not have the sensitivity to determine if the hard spectrum continues beyond 300 keV or if the spectrum cuts off.\n\nCyg X-1 (Fig. 4) is a HMXB and one of the first systems determined to contain a black hole. It has been observed to emit significant emission above 100 keV including a power law tail extending out to greater than 1 MeV [10, 11]. The GBM results show significant emission above 300 keV, consistent with the power law tail observed when Cyg X-1 is in its hard state.\n\nGRS 1915+105 (Fig. 5) is a LMXB with the compact object being a massive black hole. Evidence for emission above 100 keV has been seen previously [12] with BATSE. The GBM light curve integrated over 490 days shows significant emission above 100 keV.\n\n1E 1740-29 (Fig. 6) is a LMXB very near the Galactic Center. It is a microquasar, and spends most of its time in the low/hard state. Integral observations indicate the presence of a power law tail above 200 keV [13]. The present GBM results are consistent with this high energy emission. In the future, we\n\nFIG. 4: Cyg X-1 light curve. Horizontal scale is in modified Julian days.FIG. 5: GRS 1915+105 light curve. Horizontal scale is in modified Julian days.\n\n", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0955.pdf" - }, - { - "text": "Figure 1: The SMA light curves for 3C 454.3. The open circles represent the 850 µ m observations, and the open triangles represent the 1mm observations.\n\n\n\nJ1751+096) which have conflicting classifications between Fermi and CGRaBS. Some blazars found in the calibrator list have been studied extensively (e.g., 3C 279 and 3C 454.3) but the SMA blazars have not been studied collectively.\n\nForty-four of the objects in our total blazar sample were detected by Fermi and can be found in the catalog of LAT Bright AGN Sources (LBAS) from Abdo et al. [7]. 
J0050-094 has no redshift in either the LBAS catalog or CGRaBS and is not included in our study. Of the 43 remaining sources, 14 are BL Lac objects and 29 are FSRQs, with 0 . 03 ≤ z ≤ 2 . 19.\n\nWe examined submillimeter light curves for all of the SMA blazars, with observations beginning in approximately 2003 (see Figure 1). Typically, the 1mm band is much more well-sampled in comparison to the 850m band, but visual inspection reveals that the regularity and quality of observations vary greatly from source to source. Many of the objects exhibit nonperiodic variability, either in the form of persistent, low-amplitude fluctuations or higher amplitude flaring behavior.\n\n## 2.1. Submillimeter Properties\n\nSubmillimeter Luminosities. Since we are primarily concerned with comparisons to Fermi observations, we note that only 129 of the SMA blazars (23 BL Lacs and 106 FSRQs) were observed by the SMA in either band during the three months August-October 2008. For these objects, submillimeter luminosities are calculated in the standard way:\n\nν e L ν e = 4 πD 2 L ν obs F obs 1 + z , (1)\n\nwhere D L is the luminosity distance, ν obs is the frequency of the observed band, and F obs is the average\n\nFigure 2: Variability index for our sample (top: 1mm, bottom: 850 µ m), with FSRQs as the hatched distribution and BL Lacs as the solid distribution. There is no signicant difference in the class distributions in either band; the 'tail' to the left is populated by objects with errors larger than the intrinsic variability.\n\n\n\nflux (in erg cm -2 s -1 Hz -1 ) over the three month period. We adopt a lambda cold dark matter cosmology with values of H 0 = 71 km s -1 Mpc -1 , Ω M = 0 . 27, and Λ = 0 . 73.\n\nEnergy Spectral Indices. We derive submillimeter spectral energy indices from observations quasisimultaneous with the Fermi observations. 
To be consistent with the use of α γ , we define spectral energy index as νF ν = ν -α S and calculate α S from the average of the energy spectral indices over the corresponding three months. We only calculate α S for the 16 objects (8 BL Lacs and 35 FSRQs) with observations at both 1mm and 850 µ m during this time frame.\n\n## 3. VARIABILITY ANALYSIS\n\n## 3.1. Variability Index\n\nWe roughly characterize the level of variability of each source using the variability index from Hovatta et al. [8]:\n\nV = ( F max -σ F max ) -( F min + σ F min ) ( F max -σ F max ) + ( F min + σ F min ) (2)\n\nFigure 2 shows the distribution for the SMA blazars. Objects with V ≤ 0 are typically unsuitable for more", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0806.pdf" - }, - { - "text": "## 3. VERITAS Blazar KSP\n\nVERITAS observes for ∼ 750 h and ∼ 250 h each year during periods of astronomical darkness and partial moonlight, respectively. The moonlight observations are almost exclusively used for a blazar discovery program, and a large fraction of the dark time is used for the blazar KSP, which consists of:\n\n- · A VHE blazar discovery program ( ∼ 200 h / yr): Each year ∼ 10 targets are selected to receive ∼ 10 h of observations each during astronomical darkness. These data are supplemented by discovery observations during periods of partial moonlight.\n- · A target-of-opportunity (ToO) observation program ( ∼ 50 h / yr): VERITAS blazar observations can be triggered by either a VERITAS blazar discovery, a VHE flaring alert ( > 2 Crab) from the blazar monitoring program of the Whipple 10-m telescope or from another VHE instrument, or a lower-energy flaring alert (optical, X-ray or Fermi-LAT). 
Should the guaranteed allocation be exhausted, further time can be requested from a pool of director's discretionary time.\n- · Multi-wavelength (MWL) studies of VHE blazars ( ∼ 50 h / yr + ToO): Each year one blazar receives a deep exposure in a pre-planned campaign of extensive, simultaneous MWL (Xray, optical, radio) measurements. ToO observation proposals for MWL measurements are also submitted to lower-energy observatories (e.g. Swift) and are triggered by a VERITAS discovery or flaring alert.\n- · Distant VHE blazar studies to constrain the extragalactic background light (EBL): Here distant targets are given a higher priority in the blazar discovery program, as well as for the MWL observations of known VHE blazars, particularly those with hard VHE spectra.\n\n## 4. Blazar Discovery Program\n\nThe blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. However, the program also includes IBLs (intermediatepeaked) and LBLs (low-peaked), as well as flat spectrum radio quasars (FSRQs), in an attempt to increase the types of blazars known to emit VHE γ -rays. The observed targets are drawn from a target list containing objects visible to the telescopes at reasonable zenith angles ( -8 · < δ < 72 · ), without a previously published VHE limit below 1.5% Crab, and with a measured redshift z < 0 . 3. To further the study of the\n\nEBL a few objects having a large ( z > 0 . 3) are also included in the target list. The target list includes:\n\n- · All nearby ( z < 0 . 3) HBL and IBL recommended as potential VHE emitters in [5, 6, 7].\n- · The X-ray brightest HBL ( z < 0 . 3) in the recent Sedentary [8] and ROXA [9] surveys.\n- · Four distant ( z > 0 . 3) BL Lac objects recommended by [5, 10].\n- · Several FSRQ recommended as potential VHE emitters in [6, 11].\n- · All nearby ( z < 0 . 3) blazars detected by EGRET [12].\n- · All nearby ( z < 0 . 
3) blazars contained in the Fermi-LAT Bright AGN Sample [13].\n- · All sources ( | b | > 10 · ) detected by Fermi-LAT where extrapolations of their MeV-GeV γ -ray spectrum (including EBL absorption; assuming z = 0.3 if the redshift is unknown) indicates a possible VERITAS detection in less than 20 h. This criteria is the focus of the 2009-10 VERITAS blazar discovery program.\n\n## 5. VERITAS AGN Detections\n\nVERITAS has detected VHE γ -ray emission from 16 AGN (15 blazars), including 8 VHE discoveries. These AGN are shown in Table I, and each has been detected by the Large Area Telescope (LAT) instrument aboard the Fermi Gamma-ray Space Telescope. Every blazar discovered by VERITAS was the subject of ToO MWL observations to enable modeling of its simultaneously-measured SED. The known VHE blazars detected by VERITAS were similarly the targets of MWL observations.\n\n## 5.1. Recent VERITAS Blazar Discoveries", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed3.pdf", - "query": "When in present-day Poland did the first shift away from earlier ancestry occur?", - "target_page": 3, - "target_passage": "in the Middle to Late Bronze Age (1500 bce to 1000 bce), we observe a clear shift away from preceding ancestry originally associated with Corded Ware cultures", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "In the region of present-day Poland, our analysis suggests several clear shifts in ancestry. First, in the Middle to Late Bronze Age (1500 BCE to 1000 BCE), we observe a clear shift away from preceding ancestry originally associated with Corded Ware cultures 55 (Fig. 3a). Second, in the first to fifth century CE, individuals associated with Wielbark culture 5,12 show an additional strong shift away from the preceding Bronze Age groups, and can only be modelled with a >75% component attributed to the EIA Scandinavian Peninsula. 
Multiple individuals, especially from earlier Wielbark cemeteries, have approximately 100%", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed3.pdf" - }, - { - "text": "## Article\n\nFig. 3 | Time transects across six geographical regions in Europe.\n\n\n\na -f , Ancestry change visualized over a time transect spanning from the Bronze Age to the present day in Poland ( a ), southeastern Europe ( b ), central Europe ( c ), Italy ( d ), Britain and Ireland ( e ) and Scandinavia ( f ). The maps show sample locations of all available ancient genomes with at least 0.5× coverage from\n\nmedieval individuals ( P ≪ 1 × 10 -32 ). Instead, the majority of individuals from medieval Poland can be modelled only as a mixture of ancestries related to Roman Iron Age Lithuania, which is similar to ancestries of individuals from middle to late Bronze Age Poland (44%, 95% confidence interval 36-51%), an ancestry component related to Hungarian Scythians or Slovakian La Tène individuals (49%, 95% confidence interval 41-57%) and potentially a minority component of ancestry related to Sarmatians from the Caucasus ( P = 0.13) (Fig. 2c). Four out of twelve individuals from medieval Poland, three of whom are from the late Viking Age 6 , carried detectable Scandinavian-related ancestry. Some of the ancestry detected in individuals from later medieval Poland may have persisted during the late first millennium CE in the cremating portion of the population, but regardless, this points to large-scale ancestry transformation in medieval Poland (Fig. 3a). Future data could shed light on the extent to which this reflects the influence of groups speaking Slavic languages in the region.\n\nthese regions (Supplementary Table 1). Their ancestry is shown on the same MDS model as in Fig. 2a for each time period. 
For each geographic region, the early medieval period is highlighted in orange and the area in the MDS corresponding to Scandinavian and central European ancestries is highlighted in an orange box.\n\nIn present-day Slovakia, individuals associated with the Iron Age La Tène period appear close to Hungarian Scythians in the two dimensions of our MDS analysis, and are modelled as a mixture of central and eastern European ancestry. However, a first-century CE burial of a 50-60-year-old woman from Zohor is modelled only with Scandinavian-related ancestry, providing evidence of ancestry related to the Scandinavian EIA appearing southwest of the range of the Wielbark archaeological complex 5,57 (Fig. 3b). Later early medieval individuals from Slovakia have partial Scandinavian-related ancestry, providing evidence for the integration between expanding and local groups.\n\nNearby, in present-day Hungary, we observe Scandinavian-related ancestry components in several burials dating to the sixth century CE associated with Longobards (Longobard\\_earlyMED(I)) 10 (Fig. 2c). This is consistent with the original study 10 , which reported affinity to present-day groups from northwestern Europe (GBR, CEU and FIN in the 1000 Genomes Project (1000GP)) 10 but which we can resolve with", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed3.pdf" - }, - { - "text": "higher resolution using earlier genomes. Several other individuals from these Longobard burials (Longobard\\_earlyMED(II)) show no detectable ancestry from northern Europe and, instead, are more closely related to Iron Age groups in continental central Europe, putatively representing descendants of local people buried in a Longobard style. Our results are consistent with attestations that the Longobards originated in the areas of present-day northern Germany or Denmark, but that by the sixth century CE they incorporated multiple different cultural identities, and mixed ancestries. 
Present-day populations of Hungary do not appear to derive detectable ancestry from early medieval individuals from Longobard contexts, and are instead more similar to Scythian-related ancestry sources (Extended Data Fig. 6), consistent with the later impact of Avars, Magyars and other eastern groups 58 .\n\nIn southern Germany, the genetic ancestry of individuals from early medieval Bavaria probably associated with the historical Germanic-language-speaking Baiuvarii 59 cannot be modelled as deriving ancestry solely from earlier groups in Iron Age central Germany ( P ≪ 1 × 10 -36 ). The Baiuvarii probably appeared in the region in the fifth century CE 59 , but their origins remain unresolved. Our current best model indicates a mixture with ancestry derived from EIA Peninsular Scandinavia and central Europe, suggesting an expansion of Scandinavian-related ancestry producing a regional ancestry shift (Figs. 2c and 3c).\n\nIn Italy, southward expansions of northern and central European ancestries appear by the Late Antiquity (approximately fourth century CE), where a clear diversification of ancestry can be observed compared with preceding time periods (Fig. 3d). However, no individuals with near 100% Scandinavian ancestry can be observed in the sampling data available so far.\n\nIn Britain, the ancestries of Iron Age and Roman individuals form a tight cluster in our MDS analysis (Fig. 3e), shifted relative to available preceding Bronze Age individuals from Ireland and Orkney, and adjacent to, but distinct from, available individuals in Iron Age and Roman central Europe. However, two first- to second-century CE burials from a Roman military fortress site in Austria (Klosterneuburg) 5 carry ancestry that is currently indistinguishable from Iron Age or Roman populations of Britain, to the exclusion of other groups (qpWave cladality P = 0.11). 
One option is that they had ancestry from Britain; alternatively, currently unsampled populations from western continental Europe carried ancestries similar to Iron Age southern Britain.\n\nTwigstats substantially improves models of admixture between ancestries from Iron Age Britain and northern Europe in early medieval England 9 , halving standard errors from 9% with SNPs to 4% when using time stratification (point estimates 80% and 79% Iron Age Britain-related ancestry, respectively). We used this improved resolution to demonstrate that an earlier Roman individual (6DT3) dating to approximately second to fourth century CE from the purported gladiator or military cemetery at Driffield Terrace in York (Roman Eboracum ), England 60 , who was previously identified as an ancestry outlier 61,62 , specifically carried approximately 25% EIA Scandinavian Peninsula-related ancestry (Fig. 2c). This documents that people with Scandinavian-related ancestry already were in Britain before the fifth century CE, after which there was a substantial influx associated with Anglo-Saxon migrations 9 . Although it is uncertain whether this individual was a gladiator or soldier, individuals and groups from northern Europe are indeed recorded in Roman sources both as soldiers and as enslaved gladiators 63,64 .", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed3.pdf" - }, - { - "text": "(including one with ancestry related to Britain) are part of the majority strontium values, consistent with them having grown up locally. By contrast, the six most clearly non-local individuals based on the stable isotopes all have 50% or more EIA Scandinavian Peninsula-related ancestry, although three individuals with wholly EIA Scandinavian Peninsula-related ancestry also had local values. 
This suggests that the presence of central European-related ancestry was not a transient phenomenon, but an ancestry shift that occurred at some point after about 500 CE, the period to which individuals from the massacre site at Sandby Borg ringfort on Öland were dated; these individuals all have strictly EIA Scandinavian-related ancestry. Indeed, one hypothesis is that the massacre at Sandby Borg could represent conflict associated with movements of people that contributed to later ancestry change, although other scenarios are possible and further synthesis of biomolecular and archaeological data is necessary to test this hypothesis.\n\n## Viking Age mobility into Scandinavia\n\nPrevious studies had suggested a major influx of ancestry related to Britain into Viking Age Scandinavia 6,7 . Although we detect this ancestry in some individuals (7 individuals in Norway, 14 in Denmark and 14 in Sweden), including some individuals whose ancestry appears to be entirely derived from Iron Age Britain, its overall impact appears reduced compared with previous reports. Our analysis indicates a proportionally larger impact of ancestry from Iron Age Britain in northern Norway, with southern Scandinavia predominantly influenced by continental central European ancestries (Fig. 4d). We hypothesize that our estimates of ancestry from Britain are reduced relative to previous studies because ancestry related to Britain and continental central Europe may have been indistinguishable. This could be due to a lack of statistical power to distinguish these closely related sources with standard methods, as well as through potential biases introduced by using modern surrogate populations that have since been influenced by later gene flow (such as gene flow into Britain). We illustrate this by replicating the analyses previously described 6,7 (Extended Data Fig. 
8).\n\nSimilarly, a previous study has suggested that individuals at sites such as Kärda in southern Sweden carried ancestry from southern Europe 6 . In our models, two Kärda individuals fit with central European-related ancestry, but none of the individuals has a substantial proportion of ancestry related to southern European sources (Extended Data Fig. 9). Instead, we detect ancestry from southern European sources in only three individuals from Scandinavia, and in relatively small proportions (Fig. 4a).\n\nInterestingly, we detect ancestry from Bronze and Iron Age sources from Eastern Europe (present-day Lithuania and Poland), concentrated in southeastern parts of Sweden, particularly the island of Gotland (14 individuals; Fig. 4a). This is consistent with previous genetic studies 6,7 . We find that this ancestry is enriched in male individuals (Extended Data Fig. 7d), suggesting male-biased mobility and/or burial. The closest match tends to be Roman Iron Age Lithuanian genomes associated with Balts, which would be consistent with mobility across the Baltic Sea, but we caution that the geographical representation of available genomes is still limited.\n\n## Viking Age expansion from Scandinavia\n\nTraditionally, historical perspectives on what is now often referred to as the Viking diaspora placed an emphasis on the movements and settlements of population groups from various parts of Scandinavia 67 . Our explorative MDS analysis again indicates mixed ancestries related to the Scandinavian EIA, with regional differences that point to varied local admixture (Fig. 4e and Extended Data Fig. 10).", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed3.pdf" - }, - { - "text": "Figure 11: Number of recent (within two years) OCU initiates presenting to treatment in 2005 and 2013, by age of individual at first presentation.\n\n\n\nThe mode age of initiation has shifted from around 18 to around 25 and there is an older age profile throughout. 
Rises in average age of initiation have also been reported recently in cohorts of Australian injecting drug users (Horyniak et al., 2015). There appear to be two possible explanations.\n\n -  There is a genuine shift towards new initiates being older, and for them to present to treatment much faster than in previous years.\n -  There is a consistent, but small number of individuals who mis-report their age of onset when attending treatment i.e. who report that they have only been using opiates/crack for a short period when in fact they have been using for a far longer period, and that this is starting to really bias the numbers for recent cohorts because attendees from the original epidemic are becoming smaller.\n\nIt is possible then that the flattening we observe in the incidence trend is due to a small in-flux of older initiates, although mis-reporting may also explain that phenomenon. Either way though, as this analysis has made clear throughout, absolute numbers of new OCUs appear to be small probably fewer than 10,000 per annum and the numbers of those involved with crime will be smaller still. In addition, despite a flattening in the probable trend in new users, there is currently no sign that it is likely to tip upwards. If anything, the data suggest the downward trend is set to resume, though clearly it remains important to monitor the situation.", - "page_start": 28, - "page_end": 28, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "Across Europe, we see regional differences in the southeastern and southwestern expansions of Scandinavian-related ancestries. Early medieval groups from present-day Poland and Slovakia carry specific ancestry from one of the Scandinavian EIA groups-the one with individuals primarily from the northern parts of Scandinavia in the EIA-with no evidence of ancestry related to the other primary group in more southern Scandinavia (Fig. 2d). 
By contrast, in southern and western Europe, Scandinavian-related ancestry either derives from\n\nEIA southern Scandinavia-as in the cases of the probable Baiuvarii in Germany, Longobard-associated burials in Italy and early medieval burials in southern Britain-or cannot be resolved to a specific region in Scandinavia. If these expansions are indeed linked to language, this pattern is remarkably concordant with the main branches of Germanic languages, with the now-extinct eastern Germanic spoken by Goths in Ukraine on the one hand, and western Germanic languages such as Old English and Old High German recorded in the early medieval period on the other hand.\n\n## Influx into pre-Viking Age Scandinavia\n\nIn EIA Scandinavia (<500 CE), we find evidence for broad genetic homogeneity. Specifically, individuals from Denmark (100 CE-300 CE) were indistinguishable from contemporary people in the Scandinavian Peninsula (Fig. 2c). However, we observe a clear shift in genetic ancestry already in the eighth century CE (Late Iron Age/early Viking Age) on Zealand (present-day Denmark) for which a 100% EIA ancestry model is rejected ( P = 1 × 10 -17 using Twigstats; P = 7.5 × 10 -4 without). This shift in ancestry persists among later Viking Age groups in Denmark, where all groups are modelled with varying proportions of ancestry related to Iron Age continental groups in central Europe (Figs. 3f and 4c). A non-parametric MDS of Viking Age individuals suggests that variation between individuals forms a cline spanning from the EIA Scandinavian Peninsula individuals to ancestry characteristic of central Europe (Fig. 4e). 
The observed shift in ancestry in Denmark cannot be confounded by potentially earlier unknown gene flow into Iron Age source groups in Austria, France and Germany, but such gene flow could affect the exact ancestry proportions.\n\nThese patterns are consistent with northward expansion of ancestry, potentially starting before the Viking Age, into the Jutland peninsula and Zealand island towards southern Sweden. The geographical origin of this ancestry is currently difficult to discern, as the available samples from Iron Age central Europe remain sparse. The timing of this expansion is constrained only by the samples available: this ancestry is not observed in individuals from the Copenhagen area of Denmark (around 100 CE-300 CE) 6 , an individual from the southern tip of Sweden (around 500 CE) 16 , individuals from the Sandby Borg massacre site on Öland in present-day Sweden (around 500 CE) 7 and 31 individuals from the mid-eighth century Salme ship burials in present-day Estonia (Extended Data Fig. 9), who probably originated in central Sweden 6 . Therefore, this ancestry transformation most likely postdated these individuals in each particular region and mostly occurred in the second half of the first millennium CE.", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed3.pdf" - }, - { - "text": "Table 5: Physical health risks, Sectors and exposures - EWCS 2015\n\nCountry colours: Romania aquamarine, Poland orange, Hungary blue.\n\nThe figure below illustrates country differences, based on data from the EWCS 2015: the values of Ireland (green), the EU28 level (blue) with numbers, and the values of Poland (orange). Poland had a relatively high share of employment in industry of 24%, for which Ireland has a share of 12%. 
The impact on working conditions can be seen in the share of workers reporting exposures to vibrations (Poland 27%, Ireland 16%) and loud noise (Poland 35%, Ireland 25%).\n\nFigure 17: Physical health risks compared (%) - EWCS 2015\n\n", - "page_start": 41, - "page_end": 41, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## High-resolution genomic history of early medieval Europe\n\nhttps://doi.org/10.1038/s41586-024-08275-2\n\nReceived: 14 December 2023\n\nAccepted: 23 October 2024\n\nPublished online: 1 January 2025\n\nOpen access\n\n\n\nLeo Speidel 1,2,3 ✉ , Marina Silva 1 , Thomas Booth 1 , Ben Raffield 4 , Kyriaki Anastasiadou 1 , Christopher Barrington 5 , Anders Götherström 6,7 , Peter Heather 8 & Pontus Skoglund 1 ✉\n\nMany known and unknown historical events have remained below detection thresholds of genetic studies because subtle ancestry changes are challenging to reconstruct. Methods based on shared haplotypes 1,2 and rare variants 3,4 improve power but are not explicitly temporal and have not been possible to adopt in unbiased ancestry models. Here we develop Twigstats, an approach of time-strati/fied ancestry analysis that can improve statistical power by an order of magnitude by focusing on coalescences in recent times, while remaining unbiased by population-speci/fic drift. We apply this framework to 1,556 available ancient whole genomes from Europe in the historical period. We are able to model individual-level ancestry using preceding genomes to provide high resolution. During the /first half of the /first millennium CE, we observe at least two di/fferent streams of Scandinavian-related ancestry expanding across western, central and eastern Europe. By contrast, during the second half of the /first millennium CE, ancestry patterns suggest the regional disappearance or substantial admixture of these ancestries. 
In Scandinavia, we document a major ancestry in/flux by approximately 800 CE, when a large proportion of Viking Age individuals carried ancestry from groups related to central Europe not seen in individuals from the early Iron Age. Our /findings suggest that time-strati/fied ancestry analysis can provide a higher-resolution lens for genetic history.\n\nCheck for updates\n\nAncient genome sequencing has revolutionized our ability to reconstruct expansions, migrations and admixture events in the ancient past and understand their impact on human genetic variation today. However, tracing history using genetic ancestry has remained challenging, particularly in historical periods for which the richest comparative information from history and archaeology often exists. This is because ancestries in many geographical regions are often so similar as to be statistically indistinguishable with current approaches. One example is northern and central Europe since the start of the Iron Age around 500 BCE, a period for which many long-standing questions remain, such as the nature of large-scale patterns of human migration during the fourth to sixth centuries CE, their impact on the Mediterranean world and later patterns of human mobility during the Viking Age (around 750-1050 CE).\n\nSeveral recent studies have documented substantial mobility and genetic diversity in these time periods, suggesting stable population structure despite high mobility 5 , and have revealed genetic variation in Viking Age Scandinavia 6-8 , early medieval England 3,9 , early medieval Hungary 10,11 and Iron Age and medieval Poland 12 . However, previous studies mostly used large modern cohorts to study ancestry change through time and space. This is because the differentiation between Iron Age groups in central and northern Europe is an order of magnitude lower (fixation index ( F ST) = 0.1-0.7%; Extended Data Fig. 
1) than, for example, the more commonly studied hunter-gatherer, early farmer and steppe-pastoralist groups that shaped the ancestry landscape of", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed3.pdf" - }, - { - "text": "individuals form a clade with respect to reference groups. The reason why this is a principled approach despite the 1000GP groups post-dating the ancient individuals is that if a group of ancient individuals are truly homogeneous, they will be so also with respect to later individuals.\n\nWe then define clusters by running UPGMA (unweighted pair group method with arithmetic mean) on -log10[ P values] obtained from qpwave between all pairs of individuals and cut the resulting dendrogram at a height corresponding to a P value of 0.01. We then further subdivide clusters by requiring all samples to be within 500 years of the mean cluster age.\n\nTo choose the source groups shown in Fig. 2a and Extended Data Fig. 1d, we run this algorithm on samples from Iron and Roman Age Europe (Supplementary Table 1). We retain groups that have at least three individuals and, therefore, exclude clusters of size one or two.\n\nThis approach results in two clusters in the Scandinavian Peninsula, approximately separating northern from southern Scandinavia, three clusters in Poland and Ukraine that separate samples temporally between the early and later Bronze Age, a cluster combining the Hungarian Scythian and Slovakian La Tène-associated individuals, and a cluster each for Iron and Roman Age Portugal, Italy and Lithuania. In present-day Austria, Germany and France, this approach identifies three clusters, with each cluster spanning multiple archaeological sites in different countries, indicating genetic diversity in this region in the first millennium CE. Encouragingly, these clusters separate in our non-parametric MDS analysis (Fig. 
2a), indicating that we are capturing real genetic differences between groups using this approach.\n\nFine-scale structure in Neolithic Europe. To quantify fine-scale structure in Neolithic Europe (Extended Data Fig. 5b), we aimed to select individuals in Neolithic Europe who have not yet been affected by the arrival of Steppe ancestry and do not show excess hunter-gatherer ancestry. We infer distal ancestry sources using Balkan\\_N, Yamnaya and Western Hunter-gatherers as source groups and reference groups according to a previously proposed qpAdm setup 46 (Supplementary Table 1). For this analysis, we infer ancestry using qpAdm applied to 1.2 million SNP sites of imputed genomes. We retain only Neolithic individuals with P > 0.01, z < 2 for Yamnaya ancestry, and z < 2 or proportion <0.25 for Western Hunter-gatherer ancestry.\n\n## Reporting summary\n\nFurther information on research design is available in the Nature Portfolio Reporting Summary linked to this article.\n\n## Data availability\n\nAll aDNA data used in this study were publicly available, and accession codes are listed in Supplementary Table 1.\n\n## Code availability\n\nTwigstats is freely available under an MIT licence through GitHub (https://github.com/leospeidel/twigstats), and detailed documentation, as well as example data, is available at https://leospeidel.github. io/twigstats/. The code has also been deposited at Zenodo (https:// zenodo.org/records/13833120) 76 . All scripts to reproduce simulations, and to run Relate on imputed ancient genomes, and downstream analyses, including computation of f -statistics and running qpAdm models, are available through GitHub (https://github.com/leospeidel/ twigstats\\_paper).", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed3.pdf" - }, - { - "text": "Fig. 2 | Ancestry from the Iron Age to the early medieval period in Europe.\n\n\n\na , Source groups used for qpAdm modelling of early medieval Europe. 
MDS is computed jointly with individuals from later periods using pairwise outgroup f 3 statistics (outgroup: Han Chinese people). These are calculated using Twigstats on Relate genealogies with a cut-off of 1,000 generations. The geographical map shows sampling locations of these individuals. b , The genetic structure of ancient groups predominantly from early medieval contexts shown on the same MDS as in a . The magnified inset shows an MDS computed without Twigstats on the same samples as the Twigstats MDS and focusing on early medieval or later individuals. c , Ancestry models of early medieval (EM) groups across Europe computed using qpAdm. Sample sizes are\n\nshown in black boxes. Sources are highlighted in a and marked as bold in the key, and were used in a rotational qpAdm scheme. For each target group, we remove models with infeasible admixture proportions (falling outside [0, 1]) and use a Twigstats cut-off of 1,000 generations. All models satisfy P > 0.01, unless a -log10[ P value] is shown next to the model. If models satisfy P > 0.05, we show all such models; otherwise, we show only the model with the largest P value. d , The ancestry proportion derived from EIA Scandinavia in groups with a non-zero component of this ancestry. We show groups modelled in c that have a feasible model ( P > 0.01). In c , d , we show one s.e. BA, Bronze Age; CNE, continental northern Europeans; EBA, early Bronze Age; EVA, early Viking Age; IA, Iron Age; MED, medieval; MLBA, middle/late Bronze Age; VA, Viking Age.\n\nancestry related to EIA Scandinavian Peninsula (Fig. 2c). The Wielbark archaeological complex has been linked to the later Chernyakhov culture to the southeast and to early Goths, an historical Germanic group that flourished in the second to fifth centuries CE 56 . 
Our modelling supports the idea that some groups that probably spoke Germanic languages from Scandinavia expanded south across the Baltic into the area between the Oder and Vistula rivers in the early centuries CE, although whether these expansions can be linked specifically with historical Goths is still debatable. Moreover, since a considerable\n\nproportion of Wielbark burials during this period were cremations, the possible presence of individuals with other ancestries cannot be strictly rejected if they were exclusively cremated (and are therefore invisible in the aDNA record).\n\nA previous study could not reject continuity in ancestry from the Wielbark-associated individuals to later medieval individuals from a similar region 12 . With the improved power of Twigstats, models of continuity are strongly rejected, with no one-source model of any preceding Iron Age or Bronze Age group providing a reasonable fit for the", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed3.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed3.pdf", - "query": "How many clusters has the Scandinavian peninsula been divided into thanks to Twigstats?", - "target_page": 12, - "target_passage": "This approach results in two clusters in the Scandinavian Penin- sula, approximately separating northern from southern Scandinavia", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "Across Europe, we see regional differences in the southeastern and southwestern expansions of Scandinavian-related ancestries. Early medieval groups from present-day Poland and Slovakia carry specific ancestry from one of the Scandinavian EIA groups-the one with individuals primarily from the northern parts of Scandinavia in the EIA-with no evidence of ancestry related to the other primary group in more southern Scandinavia (Fig. 2d). 
By contrast, in southern and western Europe, Scandinavian-related ancestry either derives from\n\nEIA southern Scandinavia-as in the cases of the probable Baiuvarii in Germany, Longobard-associated burials in Italy and early medieval burials in southern Britain-or cannot be resolved to a specific region in Scandinavia. If these expansions are indeed linked to language, this pattern is remarkably concordant with the main branches of Germanic languages, with the now-extinct eastern Germanic spoken by Goths in Ukraine on the one hand, and western Germanic languages such as Old English and Old High German recorded in the early medieval period on the other hand.\n\n## Influx into pre-Viking Age Scandinavia\n\nIn EIA Scandinavia (<500 CE), we find evidence for broad genetic homogeneity. Specifically, individuals from Denmark (100 CE-300 CE) were indistinguishable from contemporary people in the Scandinavian Peninsula (Fig. 2c). However, we observe a clear shift in genetic ancestry already in the eighth century CE (Late Iron Age/early Viking Age) on Zealand (present-day Denmark) for which a 100% EIA ancestry model is rejected ( P = 1 × 10 -17 using Twigstats; P = 7.5 × 10 -4 without). This shift in ancestry persists among later Viking Age groups in Denmark, where all groups are modelled with varying proportions of ancestry related to Iron Age continental groups in central Europe (Figs. 3f and 4c). A non-parametric MDS of Viking Age individuals suggests that variation between individuals forms a cline spanning from the EIA Scandinavian Peninsula individuals to ancestry characteristic of central Europe (Fig. 4e). 
The observed shift in ancestry in Denmark cannot be confounded by potentially earlier unknown gene flow into Iron Age source groups in Austria, France and Germany, but such gene flow could affect the exact ancestry proportions.\n\nThese patterns are consistent with northward expansion of ancestry, potentially starting before the Viking Age, into the Jutland peninsula and Zealand island towards southern Sweden. The geographical origin of this ancestry is currently difficult to discern, as the available samples from Iron Age central Europe remain sparse. The timing of this expansion is constrained only by the samples available: this ancestry is not observed in individuals from the Copenhagen area of Denmark (around 100 CE-300 CE) 6 , an individual from the southern tip of Sweden (around 500 CE) 16 , individuals from the Sandby Borg massacre site on Öland in present-day Sweden (around 500 CE) 7 and 31 individuals from the mid-eighth century Salme ship burials in present-day Estonia (Extended Data Fig. 9), who probably originated in central Sweden 6 . Therefore, this ancestry transformation most likely postdated these individuals in each particular region and mostly occurred in the second half of the first millennium CE.", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed3.pdf" - }, - { - "text": "## Article\n\n\n\n## Extended Data Fig. 7 | Ancestry estimates stratified by genetic sex. a , Map\n\nshowing ancestry carried by each Scandinavian Viking age individual. b , Ancestry proportions across individuals grouped by Latitude and genetic sex. c , Odds ratio and p-values calculated using a two-sided Fisher's exact test on the number of males and females carrying each ancestry in Viking Age Denmark, Sweden, Norway, Iceland, and Gotland. d , F4 values of the form f 4(Scandinavian\\_Peninsula\\_ EIA(I), alternative source group, males in Viking group, females in Viking group) computed using all SNPs and Twigstats. 
A significantly positive value is\n\nevidence of attraction of females with pop2 or males with Scandinavian\\_ Peninsula\\_EIA(I). Number of males and females is shown in each facet title and we restrict to groups with at least four males and females. We plot one standard error. e , Map showing 'farflung' Viking individuals grouped by ancestry and genetic sex. In contrast to Fig. 4a and d where we showed results for the 'best' qpAdm model, here in panels a , b, c, and e , an individual is assigned an ancestry group, if it has any accepted model (p > 0.01) where that ancestry features.", - "page_start": 18, - "page_end": 18, - "source_file": "pubmed3.pdf" - }, - { - "text": "individuals form a clade with respect to reference groups. The reason why this is a principled approach despite the 1000GP groups post-dating the ancient individuals is that if a group of ancient individuals are truly homogeneous, they will be so also with respect to later individuals.\n\nWe then define clusters by running UPGMA (unweighted pair group method with arithmetic mean) on -log10[ P values] obtained from qpwave between all pairs of individuals and cut the resulting dendrogram at a height corresponding to a P value of 0.01. We then further subdivide clusters by requiring all samples to be within 500 years of the mean cluster age.\n\nTo choose the source groups shown in Fig. 2a and Extended Data Fig. 1d, we run this algorithm on samples from Iron and Roman Age Europe (Supplementary Table 1). 
We retain groups that have at least three individuals and, therefore, exclude clusters of size one or two.\n\nThis approach results in two clusters in the Scandinavian Peninsula, approximately separating northern from southern Scandinavia, three clusters in Poland and Ukraine that separate samples temporally between the early and later Bronze Age, a cluster combining the Hungarian Scythian and Slovakian La Tène-associated individuals, and a cluster each for Iron and Roman Age Portugal, Italy and Lithuania. In present-day Austria, Germany and France, this approach identifies three clusters, with each cluster spanning multiple archaeological sites in different countries, indicating genetic diversity in this region in the first millennium CE. Encouragingly, these clusters separate in our non-parametric MDS analysis (Fig. 2a), indicating that we are capturing real genetic differences between groups using this approach.\n\nFine-scale structure in Neolithic Europe. To quantify fine-scale structure in Neolithic Europe (Extended Data Fig. 5b), we aimed to select individuals in Neolithic Europe who have not yet been affected by the arrival of Steppe ancestry and do not show excess hunter-gatherer ancestry. We infer distal ancestry sources using Balkan\\_N, Yamnaya and Western Hunter-gatherers as source groups and reference groups according to a previously proposed qpAdm setup 46 (Supplementary Table 1). For this analysis, we infer ancestry using qpAdm applied to 1.2 million SNP sites of imputed genomes. 
We retain only Neolithic individuals with P > 0.01, z < 2 for Yamnaya ancestry, and z < 2 or proportion <0.25 for Western Hunter-gatherer ancestry.\n\n## Reporting summary\n\nFurther information on research design is available in the Nature Portfolio Reporting Summary linked to this article.\n\n## Data availability\n\nAll aDNA data used in this study were publicly available, and accession codes are listed in Supplementary Table 1.\n\n## Code availability\n\nTwigstats is freely available under an MIT licence through GitHub (https://github.com/leospeidel/twigstats), and detailed documentation, as well as example data, is available at https://leospeidel.github. io/twigstats/. The code has also been deposited at Zenodo (https:// zenodo.org/records/13833120) 76 . All scripts to reproduce simulations, and to run Relate on imputed ancient genomes, and downstream analyses, including computation of f -statistics and running qpAdm models, are available through GitHub (https://github.com/leospeidel/ twigstats\\_paper).", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed3.pdf" - }, - { - "text": "Fig. 2 | Ancestry from the Iron Age to the early medieval period in Europe.\n\n\n\na , Source groups used for qpAdm modelling of early medieval Europe. MDS is computed jointly with individuals from later periods using pairwise outgroup f 3 statistics (outgroup: Han Chinese people). These are calculated using Twigstats on Relate genealogies with a cut-off of 1,000 generations. The geographical map shows sampling locations of these individuals. b , The genetic structure of ancient groups predominantly from early medieval contexts shown on the same MDS as in a . The magnified inset shows an MDS computed without Twigstats on the same samples as the Twigstats MDS and focusing on early medieval or later individuals. c , Ancestry models of early medieval (EM) groups across Europe computed using qpAdm. Sample sizes are\n\nshown in black boxes. 
Sources are highlighted in a and marked as bold in the key, and were used in a rotational qpAdm scheme. For each target group, we remove models with infeasible admixture proportions (falling outside [0, 1]) and use a Twigstats cut-off of 1,000 generations. All models satisfy P > 0.01, unless a -log10[ P value] is shown next to the model. If models satisfy P > 0.05, we show all such models; otherwise, we show only the model with the largest P value. d , The ancestry proportion derived from EIA Scandinavia in groups with a non-zero component of this ancestry. We show groups modelled in c that have a feasible model ( P > 0.01). In c , d , we show one s.e. BA, Bronze Age; CNE, continental northern Europeans; EBA, early Bronze Age; EVA, early Viking Age; IA, Iron Age; MED, medieval; MLBA, middle/late Bronze Age; VA, Viking Age.\n\nancestry related to EIA Scandinavian Peninsula (Fig. 2c). The Wielbark archaeological complex has been linked to the later Chernyakhov culture to the southeast and to early Goths, an historical Germanic group that flourished in the second to fifth centuries CE 56 . Our modelling supports the idea that some groups that probably spoke Germanic languages from Scandinavia expanded south across the Baltic into the area between the Oder and Vistula rivers in the early centuries CE, although whether these expansions can be linked specifically with historical Goths is still debatable. Moreover, since a considerable\n\nproportion of Wielbark burials during this period were cremations, the possible presence of individuals with other ancestries cannot be strictly rejected if they were exclusively cremated (and are therefore invisible in the aDNA record).\n\nA previous study could not reject continuity in ancestry from the Wielbark-associated individuals to later medieval individuals from a similar region 12 . 
With the improved power of Twigstats, models of continuity are strongly rejected, with no one-source model of any preceding Iron Age or Bronze Age group providing a reasonable fit for the", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed3.pdf" - }, - { - "text": "To assess the full extent of the impact of this ancestry influx into Scandinavia, we next aimed to understand the ancestry of individuals in Scandinavia during the Viking Age. Previous studies have suggested that there was a diversity of ancestries in Scandinavia during this period 6,7,65 , due to increased maritime mobility, but have not reported per-individual ancestry estimates based on preceding ancestry. We analysed each individual's ancestry using a rotational qpAdm scheme (Fig. 4a, Extended Data Fig. 9 and Supplementary Table 4), which showed increased power in distinguishing models when restricted to recent coalescences with Twigstats (more than 80% of accepted one-source models in Twigstats were also accepted one-source models using all SNPs, compared with less than 17% for the inverse).\n\nWe investigated regional differences in non-local ancestry across Scandinavia. In Denmark, 25 out of 53 Viking Age individuals had detectable ( zscore > 1) central European-related ancestry (CentralEurope. IronRoman or Portugal.IronRoman) in their best accepted qpAdm models. In Sweden 20 out of 62 individuals had detectable central European-related ancestry, concentrated almost entirely in southern regions (Fig. 4a,d). 
By contrast, in Norway, this ancestry was observed in only 2 out of 24 individuals, indicating a wide-ranging impact of incoming ancestry in southern Scandinavia and suggesting more", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed3.pdf" - }, - { - "text": "In Britain, most of the individuals recovered from the two late Viking Age mass graves identified at Ridgeway Hill, Dorset, and St John's\n\nCollege, Oxford 6 , show ancestries typical of those seen in Viking Age southern Scandinavia (Fig. 4f). Further west, North Atlantic Viking Age individuals in the Faroe Islands, Iceland and Greenland carry ancestry from the Scandinavian Peninsula, with several individuals showing the continental central Europe-related ancestry signal found in southern Scandinavia (Fig. 4f) and others who share substantial ancestry with Iron Age Britain. In contrast to previous hypotheses 68 , we found a marginal enrichment of ancestry related to Britain and Ireland in men (15 out of 17 men and 3 out of 6 women with at least one accepted model involving Iron or Roman Age Britain as source; Fisher's exact test P = 0.089) (Extended Data Fig. 7c,e). However, sampling of additional individuals to improve distinction between early English- and Norse-related ancestries would be required to fully test this hypothesis.\n\nIn eastern Europe, we observe EIA Scandinavian ancestries in a Viking Age burial from Ukraine, and these ancestries are overrepresented in Viking Age burials from present-day Russia. At Staraya Ladoga in western Russia, we observe several individuals with EIA Scandinavian Peninsula-related ancestry and at least one individual dated to the eleventh century with apparent ancestry related to Iron Age Britain. 
The relative absence of Iron Age central European ancestry, which was largely restricted to southern Scandinavia during the Viking Age, is thus indicative that these individuals may have originated in the central/ northern parts of Sweden or Norway, where Viking Age individuals show the most similar ancestry profiles to them.\n\n## Conclusions\n\nOur approach, Twigstats, transfers the power advantage of haplotypebased approaches to a fully temporal framework, which is applicable to f -statistics and enables previously unavailable unbiased and time-stratified analyses of admixture. We demonstrated that Twigstats enables fine-scale quantitative modelling of ancestry proportions, revealing wide-ranging ancestry changes that affect northern and central Europe during the Iron, Roman and Viking ages. We reveal evidence of the southward and/or eastward expansion of individuals who probably spoke Germanic languages and who had Scandinavian-related ancestry in the first half of the first millennium CE. We note that 'Scandinavian-related' in this context relates to the ancient genomes available, and so it is entirely possible that these processes were driven, for example, from regions in northern-central Europe. This could be consistent with the attraction of the greater wealth, which tended to build up among Rome's immediate neighbours and may have played a major role in vectors of migration internal to communities in Europe who lived beyond the Roman frontier 52 . Later, patterns of gene flow seem to have turned northwards, with the spread of Iron Age Central Europe-related ancestry into Scandinavia. 
Overall, our approach can be used for the reconstruction of new high-resolution genetic histories around the world.\n\n## Online content\n\nAny methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-024-08275-2.", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed3.pdf" - }, - { - "text": "## Article\n\nFig. 4 | Ancestry in the Viking world. a , Map showing ancestry carried by Scandinavian Viking Age individuals as inferred using the best-fitting qpAdm model. These are chosen by either choosing the one-source model with largest P value and P > 0.01 or the two-source model with the largest P value and P > 0.01. Extended Data Fig. 7 shows the same map with all accepted models. b , Stable isotope data indicating the geology of childhood origin. The histogram shows the ratio of strontium isotopes 87 to 86 measured in 109 individuals in Öland 69 . For individuals included in our ancestry modelling, we plot Iron Age central European-related ancestry against their stable isotope values (grey circles, r = -0.39, P = 0.075). Shared area corresponds to the 95% confidence band\n\n\n\naround the regression line. c , The ancestry shift observed in Viking Age Danish groups using qpAdm on all SNPs or Twigstats. We show the best one-source and all two-source models with P > 0.05. For models with P < 0.05, the -log10[ P value] is shown under the plot. Sample sizes for each group are shown in brackets. d , The ancestry proportion across Viking Age individuals in Denmark, Sweden and Norway grouped by latitude. e , Viking Age genetic variation (grey circles) visualized on the same MDS as in Fig. 2a,b. f , The best-fitting qpAdm ancestry model for far-flung Viking individuals. 
Detailed models for all individuals are shown in Extended Data Figs. 9 and 10. In c and f , we show one s.e. Rotating qpAdm sources are marked in bold in the key.\n\ncontinuity from the EIA in Norway and northern Sweden (Fig. 4a). When considered collectively, the individuals who show evidence of central European-related ancestry are mostly observed in regions historically within the Danish sphere of influence and rule. Currently, no such individuals, for example, are noted in eastern central Sweden, which was a focus of regional power of the Svear (Fig. 4a). The difference in distribution could suggest that the central European-related ancestry was more common in regions dominated by the historical Götar and groups inhabiting the lands on the borders of the Danish kingdom.\n\nTo test the extent to which the variation in ancestry was consistent with mobility during the lifetime of the individuals or, alternatively,\n\nthat of established groups, we focused on the island of Öland in southeast Sweden, where 23 individuals for whom we could reconstruct ancestry portraits also had associated strontium stable isotope data 66 . Strontium isotope data from dental enamel reflect the geology of the region where an individual grew to maturity, and there are considerable differences in expectations between Öland and many other regions in northern Europe. The full range of strontium isotope ratios in 109 individuals show two modes, a majority group with low ratios and a second minority group with high ratios falling outside the expected range of local fauna (Fig. 4b). Among 23 individuals with genomes in our data, all 5 individuals with 100% ancestry relating to central Europe", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed3.pdf" - }, - { - "text": "\n\n## Extended Data Fig. 10 | Ancestry models of farflung Viking individuals.\n\n - a , MDS of each farflung Viking group plotted on top of preceding Iron age and Roman individuals. 
b , All accepted qpAdm models using Twigstats-1000 for\n\nevery non-Scandinavian Viking individual computed in a rotational qpAdm with source groups identical to Fig. 4. We plot one standard error.", - "page_start": 21, - "page_end": 21, - "source_file": "pubmed3.pdf" - }, - { - "text": "Extended Data Fig. 6 | MDS of ancient and modern genomes. a , Same MDS as in Fig. 2 but only showing qpAdm source groups of Fig. 2a and modern groups in the Simons Genome Diversity Project (labelled) computed using genotypes\n\n\n\n(top) or Twigstats (bottom). b , MDS computed using genotypes showing one early medieval or Viking age group per facet. c , MDS computed using Twigstats showing one early medieval or Viking age group per facet.", - "page_start": 17, - "page_end": 17, - "source_file": "pubmed3.pdf" - }, - { - "text": "## Article\n\nFig. 3 | Time transects across six geographical regions in Europe.\n\n\n\na -f , Ancestry change visualized over a time transect spanning from the Bronze Age to the present day in Poland ( a ), southeastern Europe ( b ), central Europe ( c ), Italy ( d ), Britain and Ireland ( e ) and Scandinavia ( f ). The maps show sample locations of all available ancient genomes with at least 0.5× coverage from\n\nmedieval individuals ( P ≪ 1 × 10 -32 ). Instead, the majority of individuals from medieval Poland can be modelled only as a mixture of ancestries related to Roman Iron Age Lithuania, which is similar to ancestries of individuals from middle to late Bronze Age Poland (44%, 95% confidence interval 36-51%), an ancestry component related to Hungarian Scythians or Slovakian La Tène individuals (49%, 95% confidence interval 41-57%) and potentially a minority component of ancestry related to Sarmatians from the Caucasus ( P = 0.13) (Fig. 2c). Four out of twelve individuals from medieval Poland, three of whom are from the late Viking Age 6 , carried detectable Scandinavian-related ancestry. 
Some of the ancestry detected in individuals from later medieval Poland may have persisted during the late first millennium CE in the cremating portion of the population, but regardless, this points to large-scale ancestry transformation in medieval Poland (Fig. 3a). Future data could shed light on the extent to which this reflects the influence of groups speaking Slavic languages in the region.\n\nthese regions (Supplementary Table 1). Their ancestry is shown on the same MDS model as in Fig. 2a for each time period. For each geographic region, the early medieval period is highlighted in orange and the area in the MDS corresponding to Scandinavian and central European ancestries is highlighted in an orange box.\n\nIn present-day Slovakia, individuals associated with the Iron Age La Tène period appear close to Hungarian Scythians in the two dimensions of our MDS analysis, and are modelled as a mixture of central and eastern European ancestry. However, a first-century CE burial of a 50-60-year-old woman from Zohor is modelled only with Scandinavian-related ancestry, providing evidence of ancestry related to the Scandinavian EIA appearing southwest of the range of the Wielbark archaeological complex 5,57 (Fig. 3b). Later early medieval individuals from Slovakia have partial Scandinavian-related ancestry, providing evidence for the integration between expanding and local groups.\n\nNearby, in present-day Hungary, we observe Scandinavian-related ancestry components in several burials dating to the sixth century CE associated with Longobards (Longobard\\_earlyMED(I)) 10 (Fig. 2c). 
This is consistent with the original study 10 , which reported affinity to present-day groups from northwestern Europe (GBR, CEU and FIN in the 1000 Genomes Project (1000GP)) 10 but which we can resolve with", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed3.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed3.pdf", - "query": "What are the cultures with which the Wielbark culture is associated?", - "target_page": 4, - "target_passage": "linked to the later Chernyakhov cul- ture to the southeast and to early Goths", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "In the region of present-day Poland, our analysis suggests several clear shifts in ancestry. First, in the Middle to Late Bronze Age (1500 BCE to 1000 BCE), we observe a clear shift away from preceding ancestry originally associated with Corded Ware cultures 55 (Fig. 3a). Second, in the first to fifth century CE, individuals associated with Wielbark culture 5,12 show an additional strong shift away from the preceding Bronze Age groups, and can only be modelled with a >75% component attributed to the EIA Scandinavian Peninsula. Multiple individuals, especially from earlier Wielbark cemeteries, have approximately 100%", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed3.pdf" - }, - { - "text": "Fig. 2 | Ancestry from the Iron Age to the early medieval period in Europe.\n\n\n\na , Source groups used for qpAdm modelling of early medieval Europe. MDS is computed jointly with individuals from later periods using pairwise outgroup f 3 statistics (outgroup: Han Chinese people). These are calculated using Twigstats on Relate genealogies with a cut-off of 1,000 generations. The geographical map shows sampling locations of these individuals. b , The genetic structure of ancient groups predominantly from early medieval contexts shown on the same MDS as in a . 
The magnified inset shows an MDS computed without Twigstats on the same samples as the Twigstats MDS and focusing on early medieval or later individuals. c , Ancestry models of early medieval (EM) groups across Europe computed using qpAdm. Sample sizes are\n\nshown in black boxes. Sources are highlighted in a and marked as bold in the key, and were used in a rotational qpAdm scheme. For each target group, we remove models with infeasible admixture proportions (falling outside [0, 1]) and use a Twigstats cut-off of 1,000 generations. All models satisfy P > 0.01, unless a -log10[ P value] is shown next to the model. If models satisfy P > 0.05, we show all such models; otherwise, we show only the model with the largest P value. d , The ancestry proportion derived from EIA Scandinavia in groups with a non-zero component of this ancestry. We show groups modelled in c that have a feasible model ( P > 0.01). In c , d , we show one s.e. BA, Bronze Age; CNE, continental northern Europeans; EBA, early Bronze Age; EVA, early Viking Age; IA, Iron Age; MED, medieval; MLBA, middle/late Bronze Age; VA, Viking Age.\n\nancestry related to EIA Scandinavian Peninsula (Fig. 2c). The Wielbark archaeological complex has been linked to the later Chernyakhov culture to the southeast and to early Goths, an historical Germanic group that flourished in the second to fifth centuries CE 56 . Our modelling supports the idea that some groups that probably spoke Germanic languages from Scandinavia expanded south across the Baltic into the area between the Oder and Vistula rivers in the early centuries CE, although whether these expansions can be linked specifically with historical Goths is still debatable. 
Moreover, since a considerable\n\nproportion of Wielbark burials during this period were cremations, the possible presence of individuals with other ancestries cannot be strictly rejected if they were exclusively cremated (and are therefore invisible in the aDNA record).\n\nA previous study could not reject continuity in ancestry from the Wielbark-associated individuals to later medieval individuals from a similar region 12 . With the improved power of Twigstats, models of continuity are strongly rejected, with no one-source model of any preceding Iron Age or Bronze Age group providing a reasonable fit for the", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed3.pdf" - }, - { - "text": "## Article\n\nFig. 3 | Time transects across six geographical regions in Europe.\n\n\n\na -f , Ancestry change visualized over a time transect spanning from the Bronze Age to the present day in Poland ( a ), southeastern Europe ( b ), central Europe ( c ), Italy ( d ), Britain and Ireland ( e ) and Scandinavia ( f ). The maps show sample locations of all available ancient genomes with at least 0.5× coverage from\n\nmedieval individuals ( P ≪ 1 × 10 -32 ). Instead, the majority of individuals from medieval Poland can be modelled only as a mixture of ancestries related to Roman Iron Age Lithuania, which is similar to ancestries of individuals from middle to late Bronze Age Poland (44%, 95% confidence interval 36-51%), an ancestry component related to Hungarian Scythians or Slovakian La Tène individuals (49%, 95% confidence interval 41-57%) and potentially a minority component of ancestry related to Sarmatians from the Caucasus ( P = 0.13) (Fig. 2c). Four out of twelve individuals from medieval Poland, three of whom are from the late Viking Age 6 , carried detectable Scandinavian-related ancestry. 
Some of the ancestry detected in individuals from later medieval Poland may have persisted during the late first millennium CE in the cremating portion of the population, but regardless, this points to large-scale ancestry transformation in medieval Poland (Fig. 3a). Future data could shed light on the extent to which this reflects the influence of groups speaking Slavic languages in the region.\n\nthese regions (Supplementary Table 1). Their ancestry is shown on the same MDS model as in Fig. 2a for each time period. For each geographic region, the early medieval period is highlighted in orange and the area in the MDS corresponding to Scandinavian and central European ancestries is highlighted in an orange box.\n\nIn present-day Slovakia, individuals associated with the Iron Age La Tène period appear close to Hungarian Scythians in the two dimensions of our MDS analysis, and are modelled as a mixture of central and eastern European ancestry. However, a first-century CE burial of a 50-60-year-old woman from Zohor is modelled only with Scandinavian-related ancestry, providing evidence of ancestry related to the Scandinavian EIA appearing southwest of the range of the Wielbark archaeological complex 5,57 (Fig. 3b). Later early medieval individuals from Slovakia have partial Scandinavian-related ancestry, providing evidence for the integration between expanding and local groups.\n\nNearby, in present-day Hungary, we observe Scandinavian-related ancestry components in several burials dating to the sixth century CE associated with Longobards (Longobard\\_earlyMED(I)) 10 (Fig. 2c). This is consistent with the original study 10 , which reported affinity to present-day groups from northwestern Europe (GBR, CEU and FIN in the 1000 Genomes Project (1000GP)) 10 but which we can resolve with", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed3.pdf" - }, - { - "text": "higher resolution using earlier genomes. 
Several other individuals from these Longobard burials (Longobard\\_earlyMED(II)) show no detectable ancestry from northern Europe and, instead, are more closely related to Iron Age groups in continental central Europe, putatively representing descendants of local people buried in a Longobard style. Our results are consistent with attestations that the Longobards originated in the areas of present-day northern Germany or Denmark, but that by the sixth century CE they incorporated multiple different cultural identities, and mixed ancestries. Present-day populations of Hungary do not appear to derive detectable ancestry from early medieval individuals from Longobard contexts, and are instead more similar to Scythian-related ancestry sources (Extended Data Fig. 6), consistent with the later impact of Avars, Magyars and other eastern groups 58 .\n\nIn southern Germany, the genetic ancestry of individuals from early medieval Bavaria probably associated with the historical Germanic-language-speaking Baiuvarii 59 cannot be modelled as deriving ancestry solely from earlier groups in Iron Age central Germany ( P ≪ 1 × 10 -36 ). The Baiuvarii probably appeared in the region in the fifth century CE 59 , but their origins remain unresolved. Our current best model indicates a mixture with ancestry derived from EIA Peninsular Scandinavia and central Europe, suggesting an expansion of Scandinavian-related ancestry producing a regional ancestry shift (Figs. 2c and 3c).\n\nIn Italy, southward expansions of northern and central European ancestries appear by the Late Antiquity (approximately fourth century CE), where a clear diversification of ancestry can be observed compared with preceding time periods (Fig. 3d). However, no individuals with near 100% Scandinavian ancestry can be observed in the sampling data available so far.\n\nIn Britain, the ancestries of Iron Age and Roman individuals form a tight cluster in our MDS analysis (Fig. 
3e), shifted relative to available preceding Bronze Age individuals from Ireland and Orkney, and adjacent to, but distinct from, available individuals in Iron Age and Roman central Europe. However, two first- to second-century CE burials from a Roman military fortress site in Austria (Klosterneuburg) 5 carry ancestry that is currently indistinguishable from Iron Age or Roman populations of Britain, to the exclusion of other groups (qpWave cladality P = 0.11). One option is that they had ancestry from Britain; alternatively, currently unsampled populations from western continental Europe carried ancestries similar to Iron Age southern Britain.\n\nTwigstats substantially improves models of admixture between ancestries from Iron Age Britain and northern Europe in early medieval England 9 , halving standard errors from 9% with SNPs to 4% when using time stratification (point estimates 80% and 79% Iron Age Britain-related ancestry, respectively). We used this improved resolution to demonstrate that an earlier Roman individual (6DT3) dating to approximately second to fourth century CE from the purported gladiator or military cemetery at Driffield Terrace in York (Roman Eboracum ), England 60 , who was previously identified as an ancestry outlier 61,62 , specifically carried approximately 25% EIA Scandinavian Peninsula-related ancestry (Fig. 2c). This documents that people with Scandinavian-related ancestry already were in Britain before the fifth century CE, after which there was a substantial influx associated with Anglo-Saxon migrations 9 . Although it is uncertain whether this individual was a gladiator or soldier, individuals and groups from northern Europe are indeed recorded in Roman sources both as soldiers and as enslaved gladiators 63,64 .", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed3.pdf" - }, - { - "text": "Across Europe, we see regional differences in the southeastern and southwestern expansions of Scandinavian-related ancestries. 
Early medieval groups from present-day Poland and Slovakia carry specific ancestry from one of the Scandinavian EIA groups-the one with individuals primarily from the northern parts of Scandinavia in the EIA-with no evidence of ancestry related to the other primary group in more southern Scandinavia (Fig. 2d). By contrast, in southern and western Europe, Scandinavian-related ancestry either derives from\n\nEIA southern Scandinavia-as in the cases of the probable Baiuvarii in Germany, Longobard-associated burials in Italy and early medieval burials in southern Britain-or cannot be resolved to a specific region in Scandinavia. If these expansions are indeed linked to language, this pattern is remarkably concordant with the main branches of Germanic languages, with the now-extinct eastern Germanic spoken by Goths in Ukraine on the one hand, and western Germanic languages such as Old English and Old High German recorded in the early medieval period on the other hand.\n\n## Influx into pre-Viking Age Scandinavia\n\nIn EIA Scandinavia (<500 CE), we find evidence for broad genetic homogeneity. Specifically, individuals from Denmark (100 CE-300 CE) were indistinguishable from contemporary people in the Scandinavian Peninsula (Fig. 2c). However, we observe a clear shift in genetic ancestry already in the eighth century CE (Late Iron Age/early Viking Age) on Zealand (present-day Denmark) for which a 100% EIA ancestry model is rejected ( P = 1 × 10 -17 using Twigstats; P = 7.5 × 10 -4 without). This shift in ancestry persists among later Viking Age groups in Denmark, where all groups are modelled with varying proportions of ancestry related to Iron Age continental groups in central Europe (Figs. 3f and 4c). A non-parametric MDS of Viking Age individuals suggests that variation between individuals forms a cline spanning from the EIA Scandinavian Peninsula individuals to ancestry characteristic of central Europe (Fig. 4e). 
The observed shift in ancestry in Denmark cannot be confounded by potentially earlier unknown gene flow into Iron Age source groups in Austria, France and Germany, but such gene flow could affect the exact ancestry proportions.\n\nThese patterns are consistent with northward expansion of ancestry, potentially starting before the Viking Age, into the Jutland peninsula and Zealand island towards southern Sweden. The geographical origin of this ancestry is currently difficult to discern, as the available samples from Iron Age central Europe remain sparse. The timing of this expansion is constrained only by the samples available: this ancestry is not observed in individuals from the Copenhagen area of Denmark (around 100 CE-300 CE) 6 , an individual from the southern tip of Sweden (around 500 CE) 16 , individuals from the Sandby Borg massacre site on Öland in present-day Sweden (around 500 CE) 7 and 31 individuals from the mid-eighth century Salme ship burials in present-day Estonia (Extended Data Fig. 9), who probably originated in central Sweden 6 . Therefore, this ancestry transformation most likely postdated these individuals in each particular region and mostly occurred in the second half of the first millennium CE.", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed3.pdf" - }, - { - "text": "In Britain, most of the individuals recovered from the two late Viking Age mass graves identified at Ridgeway Hill, Dorset, and St John's\n\nCollege, Oxford 6 , show ancestries typical of those seen in Viking Age southern Scandinavia (Fig. 4f). Further west, North Atlantic Viking Age individuals in the Faroe Islands, Iceland and Greenland carry ancestry from the Scandinavian Peninsula, with several individuals showing the continental central Europe-related ancestry signal found in southern Scandinavia (Fig. 4f) and others who share substantial ancestry with Iron Age Britain. 
In contrast to previous hypotheses 68 , we found a marginal enrichment of ancestry related to Britain and Ireland in men (15 out of 17 men and 3 out of 6 women with at least one accepted model involving Iron or Roman Age Britain as source; Fisher's exact test P = 0.089) (Extended Data Fig. 7c,e). However, sampling of additional individuals to improve distinction between early English- and Norse-related ancestries would be required to fully test this hypothesis.\n\nIn eastern Europe, we observe EIA Scandinavian ancestries in a Viking Age burial from Ukraine, and these ancestries are overrepresented in Viking Age burials from present-day Russia. At Staraya Ladoga in western Russia, we observe several individuals with EIA Scandinavian Peninsula-related ancestry and at least one individual dated to the eleventh century with apparent ancestry related to Iron Age Britain. The relative absence of Iron Age central European ancestry, which was largely restricted to southern Scandinavia during the Viking Age, is thus indicative that these individuals may have originated in the central/ northern parts of Sweden or Norway, where Viking Age individuals show the most similar ancestry profiles to them.\n\n## Conclusions\n\nOur approach, Twigstats, transfers the power advantage of haplotypebased approaches to a fully temporal framework, which is applicable to f -statistics and enables previously unavailable unbiased and time-stratified analyses of admixture. We demonstrated that Twigstats enables fine-scale quantitative modelling of ancestry proportions, revealing wide-ranging ancestry changes that affect northern and central Europe during the Iron, Roman and Viking ages. We reveal evidence of the southward and/or eastward expansion of individuals who probably spoke Germanic languages and who had Scandinavian-related ancestry in the first half of the first millennium CE. 
We note that 'Scandinavian-related' in this context relates to the ancient genomes available, and so it is entirely possible that these processes were driven, for example, from regions in northern-central Europe. This could be consistent with the attraction of the greater wealth, which tended to build up among Rome's immediate neighbours and may have played a major role in vectors of migration internal to communities in Europe who lived beyond the Roman frontier 52 . Later, patterns of gene flow seem to have turned northwards, with the spread of Iron Age Central Europe-related ancestry into Scandinavia. Overall, our approach can be used for the reconstruction of new high-resolution genetic histories around the world.\n\n## Online content\n\nAny methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-024-08275-2.", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed3.pdf" - }, - { - "text": "## UNESCO World Heritage Site\n\nThe historic site of Lyon was designated a UNESCO World Heritage Site in 1998. In its designation, UNESCO cited the \"exceptional testimony to the continuity of urban settlement over more than two millennia on a site of great commercial and strategic significance.\" [37] The specific regions comprising the historic site include the Roman district and Fourvière, the Renaissance district (Vieux Lyon), the silk district (slopes of Croix-Rousse), and the Presqu'île, which features architecture from the 12th century to modern times. [53]", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia4.pdf" - }, - { - "text": "(including one with ancestry related to Britain) are part of the majority strontium values, consistent with them having grown up locally. 
By contrast, the six most clearly non-local individuals based on the stable isotopes all have 50% or more EIA Scandinavian Peninsula-related ancestry, although three individuals with wholly EIA Scandinavian Peninsula-related ancestry also had local values. This suggests that the presence of central European-related ancestry was not a transient phenomenon, but an ancestry shift that occurred at some point after about 500 CE, the period to which individuals from the massacre site at Sandby Borg ringfort on Öland were dated; these individuals all have strictly EIA Scandinavian-related ancestry. Indeed, one hypothesis is that the massacre at Sandby Borg could represent conflict associated with movements of people that contributed to later ancestry change, although other scenarios are possible and further synthesis of biomolecular and archaeological data is necessary to test this hypothesis.\n\n## Viking Age mobility into Scandinavia\n\nPrevious studies had suggested a major influx of ancestry related to Britain into Viking Age Scandinavia 6,7 . Although we detect this ancestry in some individuals (7 individuals in Norway, 14 in Denmark and 14 in Sweden), including some individuals whose ancestry appears to be entirely derived from Iron Age Britain, its overall impact appears reduced compared with previous reports. Our analysis indicates a proportionally larger impact of ancestry from Iron Age Britain in northern Norway, with southern Scandinavia predominantly influenced by continental central European ancestries (Fig. 4d). We hypothesize that our estimates of ancestry from Britain are reduced relative to previous studies because ancestry related to Britain and continental central Europe may have been indistinguishable. 
This could be due to a lack of statistical power to distinguish these closely related sources with standard methods, as well as through potential biases introduced by using modern surrogate populations that have since been influenced by later gene flow (such as gene flow into Britain). We illustrate this by replicating the analyses previously described 6,7 (Extended Data Fig. 8).\n\nSimilarly, a previous study has suggested that individuals at sites such as Kärda in southern Sweden carried ancestry from southern Europe 6 . In our models, two Kärda individuals fit with central European-related ancestry, but none of the individuals has a substantial proportion of ancestry related to southern European sources (Extended Data Fig. 9). Instead, we detect ancestry from southern European sources in only three individuals from Scandinavia, and in relatively small proportions (Fig. 4a).\n\nInterestingly, we detect ancestry from Bronze and Iron Age sources from Eastern Europe (present-day Lithuania and Poland), concentrated in southeastern parts of Sweden, particularly the island of Gotland (14 individuals; Fig. 4a). This is consistent with previous genetic studies 6,7 . We find that this ancestry is enriched in male individuals (Extended Data Fig. 7d), suggesting male-biased mobility and/or burial. The closest match tends to be Roman Iron Age Lithuanian genomes associated with Balts, which would be consistent with mobility across the Baltic Sea, but we caution that the geographical representation of available genomes is still limited.\n\n## Viking Age expansion from Scandinavia\n\nTraditionally, historical perspectives on what is now often referred to as the Viking diaspora placed an emphasis on the movements and settlements of population groups from various parts of Scandinavia 67 . Our explorative MDS analysis again indicates mixed ancestries related to the Scandinavian EIA, with regional differences that point to varied local admixture (Fig. 4e and Extended Data Fig. 
10).", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed3.pdf" - }, - { - "text": "individuals form a clade with respect to reference groups. The reason why this is a principled approach despite the 1000GP groups post-dating the ancient individuals is that if a group of ancient individuals are truly homogeneous, they will be so also with respect to later individuals.\n\nWe then define clusters by running UPGMA (unweighted pair group method with arithmetic mean) on -log10[ P values] obtained from qpwave between all pairs of individuals and cut the resulting dendrogram at a height corresponding to a P value of 0.01. We then further subdivide clusters by requiring all samples to be within 500 years of the mean cluster age.\n\nTo choose the source groups shown in Fig. 2a and Extended Data Fig. 1d, we run this algorithm on samples from Iron and Roman Age Europe (Supplementary Table 1). We retain groups that have at least three individuals and, therefore, exclude clusters of size one or two.\n\nThis approach results in two clusters in the Scandinavian Peninsula, approximately separating northern from southern Scandinavia, three clusters in Poland and Ukraine that separate samples temporally between the early and later Bronze Age, a cluster combining the Hungarian Scythian and Slovakian La Tène-associated individuals, and a cluster each for Iron and Roman Age Portugal, Italy and Lithuania. In present-day Austria, Germany and France, this approach identifies three clusters, with each cluster spanning multiple archaeological sites in different countries, indicating genetic diversity in this region in the first millennium CE. Encouragingly, these clusters separate in our non-parametric MDS analysis (Fig. 2a), indicating that we are capturing real genetic differences between groups using this approach.\n\nFine-scale structure in Neolithic Europe. To quantify fine-scale structure in Neolithic Europe (Extended Data Fig. 
5b), we aimed to select individuals in Neolithic Europe who have not yet been affected by the arrival of Steppe ancestry and do not show excess hunter-gatherer ancestry. We infer distal ancestry sources using Balkan\\_N, Yamnaya and Western Hunter-gatherers as source groups and reference groups according to a previously proposed qpAdm setup 46 (Supplementary Table 1). For this analysis, we infer ancestry using qpAdm applied to 1.2 million SNP sites of imputed genomes. We retain only Neolithic individuals with P > 0.01, z < 2 for Yamnaya ancestry, and z < 2 or proportion <0.25 for Western Hunter-gatherer ancestry.\n\n## Reporting summary\n\nFurther information on research design is available in the Nature Portfolio Reporting Summary linked to this article.\n\n## Data availability\n\nAll aDNA data used in this study were publicly available, and accession codes are listed in Supplementary Table 1.\n\n## Code availability\n\nTwigstats is freely available under an MIT licence through GitHub (https://github.com/leospeidel/twigstats), and detailed documentation, as well as example data, is available at https://leospeidel.github. io/twigstats/. The code has also been deposited at Zenodo (https:// zenodo.org/records/13833120) 76 . All scripts to reproduce simulations, and to run Relate on imputed ancient genomes, and downstream analyses, including computation of f -statistics and running qpAdm models, are available through GitHub (https://github.com/leospeidel/ twigstats\\_paper).", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed3.pdf" - }, - { - "text": "## Our Impact\n\nCC believes that opening up knowledge is key to addressing the world's most pressing challenges. Today, we steer campaigns, programming, and training in many areas:\n\n## Open Culture\n\n2023 was quite a year for the CC Open Culture Program, thanks to generous funding from Arcadia . 
We grew our Open Culture team from one to two and a half staff, rolling out new initiatives like TAROC (Towards a Recommendation on Open Culture) and Open Culture Live: A Webinar Series . We invite you to read ' What did Creative Commons do for Open Culture in 2023? ' to learn more.\n\n## Open Journalism\n\nThanks to generous funding from the John D. and Catherine T. MacArthur Foundation , CC hosted its very first Open Journalism track at the CC Global Summit, including eight presentations, lightning talks, panel discussions, and workshops as well as a keynote by Anya Kamenetz .\n\nRepresentatives from 33 news outlets and digital rights-focused organizations attended the CC Summit sessions. The Open Journalism track built on numerous collaborations and workshops throughout 2023.\n\n## Open Education\n\nWe delivered workshops and presentations on CC Licenses and Open Educational Resources at over 16 conferences and events. The CC Open Education Platform also funded six global projects, including work to advance the UNESCO Recommendation on OER.\n\n\"Follow the Color Brick Road\" by Bert Kaufmann is licensed under CC BY-SA 2.0.\n\n\n\n", - "page_start": 6, - "page_end": 6, - "source_file": "2023-Creative-Commons-Annual-Report-2-1.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0806.pdf", - "query": "What do the timescales during which high-amplitude flaring events occur in blazars indicate?", - "target_page": 1, - "target_passage": "that much of the en- ergy is being produced deep within the jet on small, sub-parsec scales", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## 3. VERITAS Blazar KSP\n\nVERITAS observes for ∼ 750 h and ∼ 250 h each year during periods of astronomical darkness and partial moonlight, respectively. 
The moonlight observations are almost exclusively used for a blazar discovery program, and a large fraction of the dark time is used for the blazar KSP, which consists of:\n\n- · A VHE blazar discovery program ( ∼ 200 h / yr): Each year ∼ 10 targets are selected to receive ∼ 10 h of observations each during astronomical darkness. These data are supplemented by discovery observations during periods of partial moonlight.\n- · A target-of-opportunity (ToO) observation program ( ∼ 50 h / yr): VERITAS blazar observations can be triggered by either a VERITAS blazar discovery, a VHE flaring alert ( > 2 Crab) from the blazar monitoring program of the Whipple 10-m telescope or from another VHE instrument, or a lower-energy flaring alert (optical, X-ray or Fermi-LAT). Should the guaranteed allocation be exhausted, further time can be requested from a pool of director's discretionary time.\n- · Multi-wavelength (MWL) studies of VHE blazars ( ∼ 50 h / yr + ToO): Each year one blazar receives a deep exposure in a pre-planned campaign of extensive, simultaneous MWL (Xray, optical, radio) measurements. ToO observation proposals for MWL measurements are also submitted to lower-energy observatories (e.g. Swift) and are triggered by a VERITAS discovery or flaring alert.\n- · Distant VHE blazar studies to constrain the extragalactic background light (EBL): Here distant targets are given a higher priority in the blazar discovery program, as well as for the MWL observations of known VHE blazars, particularly those with hard VHE spectra.\n\n## 4. Blazar Discovery Program\n\nThe blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. However, the program also includes IBLs (intermediatepeaked) and LBLs (low-peaked), as well as flat spectrum radio quasars (FSRQs), in an attempt to increase the types of blazars known to emit VHE γ -rays. 
The observed targets are drawn from a target list containing objects visible to the telescopes at reasonable zenith angles ( -8 · < δ < 72 · ), without a previously published VHE limit below 1.5% Crab, and with a measured redshift z < 0 . 3. To further the study of the\n\nEBL a few objects having a large ( z > 0 . 3) are also included in the target list. The target list includes:\n\n- · All nearby ( z < 0 . 3) HBL and IBL recommended as potential VHE emitters in [5, 6, 7].\n- · The X-ray brightest HBL ( z < 0 . 3) in the recent Sedentary [8] and ROXA [9] surveys.\n- · Four distant ( z > 0 . 3) BL Lac objects recommended by [5, 10].\n- · Several FSRQ recommended as potential VHE emitters in [6, 11].\n- · All nearby ( z < 0 . 3) blazars detected by EGRET [12].\n- · All nearby ( z < 0 . 3) blazars contained in the Fermi-LAT Bright AGN Sample [13].\n- · All sources ( | b | > 10 · ) detected by Fermi-LAT where extrapolations of their MeV-GeV γ -ray spectrum (including EBL absorption; assuming z = 0.3 if the redshift is unknown) indicates a possible VERITAS detection in less than 20 h. This criteria is the focus of the 2009-10 VERITAS blazar discovery program.\n\n## 5. VERITAS AGN Detections\n\nVERITAS has detected VHE γ -ray emission from 16 AGN (15 blazars), including 8 VHE discoveries. These AGN are shown in Table I, and each has been detected by the Large Area Telescope (LAT) instrument aboard the Fermi Gamma-ray Space Telescope. Every blazar discovered by VERITAS was the subject of ToO MWL observations to enable modeling of its simultaneously-measured SED. The known VHE blazars detected by VERITAS were similarly the targets of MWL observations.\n\n## 5.1. Recent VERITAS Blazar Discoveries", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "## Submillimeter Variability and the Gamma-ray Connection in Fermi Blazars\n\nA. Strom Univ. of Arizona, AZ 85721, USA A. Siemiginowska, M. Gurwell, B. 
Kelly\n\nCfA, MA 02138, USA\n\nWe present multi-epoch observations from the Submillimeter Array ( SMA ) for a sample of 171 bright blazars, 43 of which were detected by Fermi during the first three months of observations. We explore the correlation between their gamma-ray properties and submillimeter observations of their parsec-scale jets, with a special emphasis on spectral index in both bands and the variability of the synchrotron component. Subclass is determined using a combination of Fermi designation and the Candidate Gamma-Ray Blazar Survey (CGRaBS), resulting in 35 BL Lac objects and 136 flat-spectrum radio quasars (FSRQs) in our total sample. We calculate submillimeter energy spectral indices using contemporaneous observations in the 1 mm and 850 micron bands during the months August-October 2008. The submillimeter light curves are modeled as first-order continuous autoregressive processes, from which we derive characteristic timescales. Our blazar sample exhibits no differences in submillimeter variability amplitude or characteristic timescale as a function of subclass or luminosity. All of the the light curves are consistent with being produced by a single process that accounts for both low and high states, and there is additional evidence that objects may be transitioning between blazar class during flaring epochs.\n\n## 1. INTRODUCTION\n\nThe timescales on which high-amplitude flaring events occur in blazars indicate that much of the energy is being produced deep within the jet on small, sub-parsec scales [1, 2]. 
Understanding if/how emission differs between blazar subclasses (i.e., BL Lacs objects and flat-spectrum radio quasars (FSRQs)) may offer important insight into the similarity between blazars and, furthermore, can provide constraints on the formation and acceleration of the jets themselves.\n\nFor the synchrotron component of blazar spectra, the low-frequency spectral break due to synchrotron self-absorption moves to higher frequencies as one measures closer to the base of the jet [2]. This often places the peak of the spectrum in the millimeter and submillimeter bands, where the emission is optically-thin and originates on parsec and sub-parsec scales [3], allowing direct observation of the most compact regions near the central engine. The high energy γ -ray emission originates as a Compton process, typically a combination of synchrotron-self-Compton (SSC) and external-radiation-Compton (ERC). Depending on the source properties, the synchrotron photons or external photons are upscattered by the same population of electrons that emit the millimeter and submillimeter spectra. Therefore the submillimeter and γ -ray emission are closely linked and give the full information about the source emission.\n\nA systematic study of the submillimeter properties of the entire sample of Fermi blazars has yet to be conducted and is one of the primary goals of our work. We present here preliminary analysis of the submillimeter properties of Fermi blazars detected by the Submil-\n\nlimeter Array 1 ( SMA ) at 1mm and 850 µ m, including an investigation of variable behavior and the determination of submillimeter energy spectral indices. In addition, we consider the connection to the observed γ -ray indices and luminosities.\n\n## 2. SMA BLAZARS", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "## 6. Blazars Upper Limits\n\nMore than 50 VHE blazar candidates were observed by VERITAS between September 2007 and June 2009. 
The total exposure on the 49 non-detected candidates is ∼ 305 h live time (average of 6.2 h per candidate). Approximately 55% of the total exposure is split amongst the 27 observed HBL. The remainder is divided amongst the 8 IBL (26%), 5 LBL (6%), and 9 FSRQ (13%). There are no clear indications of significant VHE γ -ray emission from any of these 49 blazars [25]. However, the observed significance distribution is clearly skewed towards positive values (see Figure 1). A stacking analysis performed on the entire data sample shows an overall excess of 430 γ -rays, corresponding to a statistical significance of 4.8 σ , observed from the directions of the candidate blazars. The IBL and HBL targets make up 96% of the observed excess. Observations of these objects also comprise ∼ 80% of the total exposure. An identical stacked analysis of all the extragalactic non-blazar targets observed, but not clearly detected ( > 5 σ ), by VERITAS does not show a significant excess ( ∼ 120 h exposure). The stacked excess persists using alternate methods for estimating the background at each blazar location, and with different event selection criteria (e.g. soft cuts optimized for sources with Γ VHE > 4). The distribution of VHE flux upper limits is shown in Figure 1. These 49 VHE flux upper limits are generally the most-constraining ever reported for these objects.\n\n## 7. Multi-wavelength Studies of VHE Blazars\n\nDuring the first three seasons of VERITAS observations, pre-planned extensive MWL campaigns were organized for three blazars 1ES 2344+514 (2007-08), 1ES 1218+304 (2008-09) and 1ES 0229+200 (200910 - ongoing). In addition, numerous ToO MWLobservation campaigns were performed. These include campaigns for every blazar/AGN discovered by VERITAS, and all include Swift (XRT and UVOT) data. 
All MWL campaigns on the VHE blazars discovered", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0770.pdf" - }, - { - "text": "tion of correlated VHE and X-ray flux variability, as well as correlated spectral hardening in both the VHE and X-ray bands. The VHE MWL observations were performed in both 'quiescent' and flaring states for some of the observed blazars. For the observed HBL objects, the SEDs can be well described by a simple SSC model in both high and low states. However, an additional external Compton component is necessary to adequately fit the SEDs of the IBL objects.\n\nThe Fermi-LAT is already having a significant impact on the blazar KSP. In future seasons, the VERITAS blazar discovery program will focus its discovery program on hard-spectrum blazars detected by Fermi-LAT, and will likely have a greater focus on high-risk/high-reward objects at larger redshifts (0 . 3 < z < 0 . 7). In addition, the number of VHE blazars studied in pre-planned MWL campaigns will increase as data from the Fermi-LAT will be publicly available. In particular, the extensive pre-planned MWL campaigns will focus on objects that are noteworthy for the impact their data may have on understanding the EBL. The simultaneous observations of blazars by VERITAS and Fermi-LAT will completely resolve the higher-energy SED peak, often for the first time, enabling unprecedented constraints on the underlying blazar phenomena to be derived.\n\n## Acknowledgments\n\nThis research is supported by grants from the US Department of Energy, the US National Science Foundation, and the Smithsonian Institution, by NSERC in Canada, by Science Foundation Ireland, and by STFC in the UK. We acknowledge the excellent work of the technical support staff at the FLWO and the collab-\n\norating institutions in the construction and operation of the instrument.\n\n## References\n\n - [1] F. Aharonian et al. 2007, ApJ , 664 , L71\n - [2] F. Aharonian et al. 2006, Nature , 440 , 1018\n - [3] F. 
Aharonian et al. 2007, A&A , 475 , L9\n - [4] J. Holder, et al. 2008, AIPC , 1085 , 657\n - [5] L. Costamante & G. Ghisellini 2002, A&A , 384 , 56\n - [6] E.S. Perlman 2000, AIPC , 515 , 53\n - [7] F.W. Stecker et al. 1996, ApJ , 473 , L75\n - [8] P. Giommi et al. 2005, A&A , 434 , 385\n - [9] S. Turriziani et al. 2007, A&A , 472 , 699\n - [10] L. Costamante 2006, arXiv:0612709\n - [11] P. Padovani et al. 2002, ApJ , 581 , 895\n - [12] R. Muhkerjee et al. 2001, AIPC , 558 , 324\n - [13] A.A. Abdo et al. 2009, ApJ , 700 , 597\n - [14] V.A. Acciari et al. 2008, ApJ , 684 , L73\n - [15] V.A. Acciari et al. 2009, ApJ , 707 , 612\n - [16] V.A. Acciari et al. 2009, ApJ , 690 , L126\n - [17] V.A. Acciari et al. 2009, ApJ , 693 , L104\n - [18] L.C. Reyes 2009, arXiv:0907.5175\n - [19] R.A. Ong 2009, ATel , 1941\n - [20] R.A. Ong et al. 2009, ATel , 2272\n - [21] V.A. Acciari et al. 2009, ApJ , 708 , L100\n - [22] R.A. Ong et al. 2009, ATel , 2301\n - [23] R.A. Ong et al. 2009, ATel , 2260\n - [24] R.A. Ong et al. 2009, ATel , 2309\n - [25] W. Benbow 2009, arXiv:0908.1412\n - [26] V.A. Acciari et al. 2009, ApJ , submitted\n - [27] V.A. Acciari et al. 2009, ApJ , 695 , 1370\n - [28] V.A. Acciari et al. 2009, ApJ , in press\n - [29] J. Grube 2009, arXiv:0907.4862", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 5: Ratio of γ -ray luminosity to submillimeter luminosity in the 1mm band. The location of an object in this plot should be directly correlated with its blazar 'state', with FSRQs occupying the upper right and BL Lacs the lower left. 
Flat-spectrum radio quasar 3C 454.3 is the object with the highest submillimeter luminosity in this plot.\n\n\n\n - · BL Lacs and FSRQs do not exhibit significant differences in amplitude of submillimeter variability or characteristic timescale, but our sample of BL Lacs may be dominated by highpeaked BL Lacs (HBLs), which exhibit observational similarities with FSRQs.\n - · Blazar submillimeter light curves are consistent with being produced by a single process that accounts for both high and low states, with characteristic timescales 10 < τ rest < 500 days.\n - · The blazars detected by Fermi have synchrotron peaks at higher frequencies, regardless of submillimeter luminosity.\n - · FSRQs exhibit higher ratios of γ -ray to submillimeter luminosity than BL Lacs (Figure 5), but all objects inhabit a region of parameter space suggesting transitions between states during flaring epochs.\n\nAs Fermi continues to observe fainter sources, the sample of objects for which we can perform this type of analysis will increase and provide better limits on our results. To understand the physical relevance of these results, however, it is important to be able to distinguish between the difference in variability between BL\n\nLacs and FSRQs. One avenue for exploring this difference is to monitor changing submillimeter energy spectral index and the ratio of γ -ray to submillimeter luminosity as functions of time. The full meaning of the results of our autoregressive method is not yet clear, and will require better-sampled blazar light curves and the comparison between τ rest with physical timescales such as the synchrotron cooling timescale. These analyses would allow us to place constraints on the processes occurring near the base of the jet in blazars and further understand the intimate connection between them.\n\n## Acknowledgments\n\nThis work was supported in part by the NSF REU and DoD ASSURE programs under Grant no. 0754568 and by the Smithsonian Institution. 
Partial support was also provided by NASA contract NAS8-39073 and NASA grant NNX07AQ55G. We have made use of the SIMBAD database, operated at CDS, Strasbourg, France, and the NASA/IPAC Extragalactic Database (NED) which is operated by the JPL, Caltech, under contract with NASA.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 4: The γ -ray index versus submillimeter index plane. The blazars fall more steeply in the γ -rays than in the submillimeter band, where most are, in fact, rising. This LAT-detected sample contrasts with the full SMA sample, where the blazars are more distributed around α S ∼ 0.\n\n\n\nas the presence of SSC versus ERC. Here, we use submillimeter luminosity as a proxy for jet power, which is correlated with the integrated luminosity of the synchrotron component. Elevated γ -ray luminosity with respect to the synchrotron component (which is often seen in FSRQs) suggests the upscattering of external photons off the synchrotron-emitting electrons. These objects should occupy the upper right of the ratio/jet power plot, and BL Lacs, which generally exhibit components with roughly comparable luminosities, should occupy the lower left. It is clear from the figure, however, that many FSRQs exhibit ratios similar to those of the BL Lacs and vice versa.\n\nSikora et al. [10] report that, during its flaring epochs, 3C 454.3 transitions from its typical FSRQ state to a more BL Lac-like state, where the synchrotron component emits much more strongly compared to the γ -ray component than during its 'low state'. 3C 454.3, which is the highest submillimeter luminosity FSRQ in our sample, would then shift down and to the right in Figure 5 when it enters a flaring period. For the first three months of the Fermi mission, 3C 454.3 was not flaring, which may explain its present location in Figure 5. 
The three objects for which there is a type discrepancy between CGRaBS and LBAS are all FSRQs (in CGRaBS) and exhibit\n\nlow luminosity ratios and high luminosity, which suggest they may be undergoing the same changes as 3C 454.3. A possible interpretation of the elevated luminosity ratios observed in some BL Lacs objects is that there has been a dramatic increase in γ -ray luminosity due to ERC, which would not be reflected in the synchrotron component.\n\n## 5. CONCLUSIONS\n\nThe motivation for observing blazars in the submillimeter is to study behavior close to the central engine, where the jet material is presumably still being accelerated. The separate emission processes that contribute to overall SED may present differently in BL Lacs and FSRQs, allowing us to understand the similarities and differences between blazar types. We have investigated these differences between objects in terms of submillimeter behavior and, in conclusion, find that\n\n- · The SMA blazars exhibit submillimeter energy spectral indexes that follow the spectral sequence interpretation of blazars.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0806.pdf" - }, - { - "text": "## VERITAS Observations of Blazars\n\nW. Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E > 100 GeV) γ -ray emission from astrophysical objects. VERITAS is currently the most sensitive VHE γ -ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼ 30 known to emit VHE photons. 
More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. These observations have resulted in the detection of VHE γ -rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n## 1. Introduction\n\nActive galactic nuclei are the most numerous class of identified VHE γ -ray sources. These objects emit non-thermal radiation across ∼ 20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ -ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ -rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH ( ∼ 2 min; [1]). VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. 
The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) observations of VHE blazars can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ -rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## 2. VERITAS", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 1: The SMA light curves for 3C 454.3. The open circles represent the 850 µ m observations, and the open triangles represent the 1mm observations.\n\n\n\nJ1751+096) which have conflicting classifications between Fermi and CGRaBS. Some blazars found in the calibrator list have been studied extensively (e.g., 3C 279 and 3C 454.3) but the SMA blazars have not been studied collectively.\n\nForty-four of the objects in our total blazar sample were detected by Fermi and can be found in the catalog of LAT Bright AGN Sources (LBAS) from Abdo et al. [7]. J0050-094 has no redshift in either the LBAS catalog or CGRaBS and is not included in our study. 
Of the 43 remaining sources, 14 are BL Lac objects and 29 are FSRQs, with 0 . 03 ≤ z ≤ 2 . 19.\n\nWe examined submillimeter light curves for all of the SMA blazars, with observations beginning in approximately 2003 (see Figure 1). Typically, the 1mm band is much more well-sampled in comparison to the 850 µ m band, but visual inspection reveals that the regularity and quality of observations vary greatly from source to source. Many of the objects exhibit nonperiodic variability, either in the form of persistent, low-amplitude fluctuations or higher amplitude flaring behavior.\n\n## 2.1. Submillimeter Properties\n\nSubmillimeter Luminosities. Since we are primarily concerned with comparisons to Fermi observations, we note that only 129 of the SMA blazars (23 BL Lacs and 106 FSRQs) were observed by the SMA in either band during the three months August-October 2008. For these objects, submillimeter luminosities are calculated in the standard way:\n\nν e L ν e = 4 πD 2 L ν obs F obs / (1 + z ), (1)\n\nwhere D L is the luminosity distance, ν obs is the frequency of the observed band, and F obs is the average\n\nFigure 2: Variability index for our sample (top: 1mm, bottom: 850 µ m), with FSRQs as the hatched distribution and BL Lacs as the solid distribution. There is no significant difference in the class distributions in either band; the 'tail' to the left is populated by objects with errors larger than the intrinsic variability.\n\n\n\nflux (in erg cm -2 s -1 Hz -1 ) over the three month period. We adopt a lambda cold dark matter cosmology with values of H 0 = 71 km s -1 Mpc -1 , Ω M = 0 . 27, and Λ = 0 . 73.\n\nEnergy Spectral Indices. We derive submillimeter spectral energy indices from observations quasi-simultaneous with the Fermi observations. To be consistent with the use of α γ , we define spectral energy index as νF ν = ν -α S and calculate α S from the average of the energy spectral indices over the corresponding three months. 
We only calculate α S for the 16 objects (8 BL Lacs and 35 FSRQs) with observations at both 1mm and 850 µ m during this time frame.\n\n## 3. VARIABILITY ANALYSIS\n\n## 3.1. Variability Index\n\nWe roughly characterize the level of variability of each source using the variability index from Hovatta et al. [8]:\n\nV = [( F max -σ F max ) -( F min + σ F min )] / [( F max -σ F max ) + ( F min + σ F min )] (2)\n\nFigure 2 shows the distribution for the SMA blazars. Objects with V ≤ 0 are typically unsuitable for more", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0806.pdf" - }, - { - "text": "detailed variability analysis for one of two reasons: (1) too few data points or (2) flux measurement uncertainties on the order of the amplitude of observed variability. It is important to note that, due to discrepancies between the sampling frequency in both bands, the variability indices for the 850 µ m band may be artificially depressed due to the fact that there are not always corresponding measurements at higher frequencies during flaring epochs.\n\n## 3.2. First-Order Continuous Autoregression\n\nWe follow the method of Kelly et al. [9], who model quasar optical light curves as a continuous time first-order autoregressive process (CAR(1)) in order to extract characteristic time scales and the amplitude of flux variations. Although flaring behavior is not typically thought of as an autoregressive process, we find that the light curves are well-fit by the models and therefore adopt the method here to study blazar submillimeter light curves.\n\nThe CAR(1) process is described by a stochastic differential equation [9],\n\ndS ( t ) = -(1/ τ ) S ( t ) dt + σ √ dt ε ( t ) + b dt, (3)\n\nassociated with a power spectrum of the form\n\nP X ( f ) = 2 σ 2 τ 2 / (1 + (2 πτf ) 2 ). (4)\n\nIn equations 3 and 4, τ is called the 'relaxation time' of the process S ( t ) and is identified by the break in P X ( f ). 
The power spectrum appears flat for timescales longer than this and falls off as 1 /f 2 for timescales shorter than the characteristic timescale of the process.\n\nTaking the logarithm of the blazar light curve (in Jy) to be S ( t ), we adopt τ (in days) as the characteristic timescale of variability, after which the physical process 'forgets' about what has happened at time lags of greater than τ . The two other relevant parameters, σ and µ = b/a , are the overall amplitude of variability and the logarithm of mean value of the light curve, respectively.\n\nIn the routine, we construct an autoregressive model for the light curves for a minimum of 100,000 iterations and calculate the value of τ from the break in the power spectrum in each instance. Due to the limited number of observations in the 850 µ m band, we performed this autoregressive analysis only for the 1mm light curves, which typically have more than 10 points per light curve.\n\nThis method yielded some surprising results. In Figure 3, we see that the BL Lacs and FSRQs exhibit virtually no difference in characteristic timescale, with\n\nFigure 3: Characteristic timescale (days) versus submillimeter luminosity (erg s -1 ) in the 1mm band for all objects. Physically, τ represents a 'relaxation timescale', the timescale beyond which events are no longer correlated.\n\n\n\nboth classes extending across a large range in τ . Because of the uncertainty for objects with shorter characteristic timescales, it is hard to draw any definitive conclusions about the differences between classes. It is important to note that τ does not necessarily represent a flaring timescale, which is a behavior that typically operates on a scale of ∼ 10-100 days and not on the longer timescales we see in τ .\n\n## 4. 
CONNECTION WITH GAMMA-RAYS\n\nIn general, we find that in the submillimeter, we are observing these blazars at or near the peak of the synchrotron component ( α S ∼ 0), but that Fermi -detected sources have more negative energy spectral indices overall than Fermi -nondetected sources. In Figure 4, we see that while the majority of Fermi blazars are observed on the rising part of the synchrotron component (at lower energies than the peak), all of the objects have very steeply falling γ -ray energy spectral indexes, putting the γ -ray peak at lower energies than the observed Fermi band. Knowing that we are not observing the synchrotron and γ -ray components at analogous points in the spectrum may allow us to better understand the magnetic field in the parsec-scale jet region and the population of external photons that is being upscattered to γ -rays.", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 1: (Left) The preliminary significance measured from each of the 49 non-detected candidates using standard analysis cuts. The curve shows a Gaussian distribution, with mean zero and standard deviation one, normalized to the number of blazars. A similar result is obtained using analysis cuts optimized for soft-spectrum sources. (Right) The distribution of flux upper limits for the non-detected blazars in percentage of Crab Nebula flux above the observation threshold. The time-weighted average limit is less than ∼ 2% Crab flux.\n\n\n\n\n\nsince the launch of Fermi include LAT detections. In addition, several MWL campaigns on the well-studied VHE blazars Mkn 421 and Mkn 501 (please see the contributions of D. Gall and A. Konopelko in these proceedings) were also performed. Highlights of these campaigns include:\n\n - · 1ES 2344+514: A major (50% Crab) VHE flare, along with correlations of the VHE and X-ray flux were observed from this HBL. 
The VHE and X-ray spectra harden during bright states, and a synchrotron self-Compton (SSC) model can explain the observed SED in both the high and low states [26].\n - · 1ES 1218+304: This HBL flared during VERITAS MWL observations. Its unusually hard VHE spectrum strongly constrains the EBL. The observed flaring rules out kpc-scale jet emission as the explanation of the spectral hardness and places the EBL constraints on more solid footing [27, 28].\n - · 1ES 0806+524: The observed SED of this new VHE HBL can be explained by an SSC model [16].\n - · W Comae: This IBL, the first discovered at VHE, flared twice in 2008 [14, 15]. Modeling of the SED is improved by including an external-Compton (EC) component in an SSC interpretation.\n - · 3C 66A: This IBL flared at VHE and MeV-GeV energies in 2008 [17, 18]. Similar to W Comae and PKS 1424+240, modeling of observed SED suggests a strong EC component in addition to an SSC component.\n - · Mkn 421: This HBL exhibited major flaring behavior for several months in 2008. Correlations of the VHE and X-ray flux were observed, along with spectral hardening with increased flux in both bands [29].\n - · RGBJ0710+591: Modeling the SED of this HBL with an SSC model yields a good fit to the data. The inclusion of an external Compton component does not improve the fit.\n - · PKS1424+240: The broadband SED of this IBL (at unknown redshift) is well described by an SSC model favoring a redshift of less than 0.1 [21]. Using the photon index measured with Fermi-LAT in combination with recent EBL absorption models, the VERITAS data indicate that the redshift of PKS 1424+240 is less than 0.66.\n\n## 8. Conclusions\n\nThe first two years of the VERITAS blazar KSP were highly successful. Highlights include the detection of 16 VHE blazars with the observations almost always having contemporaneous MWL data. Among these detections are 8 VHE blazar discoveries, including the first three IBLs known to emit VHE γ -rays. 
All but a handful of the blazars on the initial VERITAS discovery target list were observed, and the flux limits generated for those not VHE detected are generally the most-constraining ever. The excess seen in the stacked blazar analysis suggests that the initial direction of the VERITAS discovery program was well justified, and that follow-up observations of many of these initial targets will result in VHE discoveries. In addition, the Fermi-LAT is identifying many new compelling targets for the VERITAS blazar discovery program. These new candidates have already resulted in 3 VHE blazar discoveries. The future of the VERITAS blazar discovery program is clearly very bright.\n\nThe MWL aspect of the VERITAS blazar KSP has also been highly successful. Every VERITAS observation of a known, or newly discovered, VHE blazar has been accompanied by contemporaneous MWL observations. These data have resulted in the identifica-", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0770.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0806.pdf", - "query": "Where is the Submillimeter Array?", - "target_page": 1, - "target_passage": "near the summit of Mauna Ke", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "## Submillimeter Variability and the Gamma-ray Connection in Fermi Blazars\n\nA. Strom Univ. of Arizona, AZ 85721, USA A. Siemiginowska, M. Gurwell, B. Kelly\n\nCfA, MA 02138, USA\n\nWe present multi-epoch observations from the Submillimeter Array ( SMA ) for a sample of 171 bright blazars, 43 of which were detected by Fermi during the first three months of observations. We explore the correlation between their gamma-ray properties and submillimeter observations of their parsec-scale jets, with a special emphasis on spectral index in both bands and the variability of the synchrotron component. 
Subclass is determined using a combination of Fermi designation and the Candidate Gamma-Ray Blazar Survey (CGRaBS), resulting in 35 BL Lac objects and 136 flat-spectrum radio quasars (FSRQs) in our total sample. We calculate submillimeter energy spectral indices using contemporaneous observations in the 1 mm and 850 micron bands during the months August-October 2008. The submillimeter light curves are modeled as first-order continuous autoregressive processes, from which we derive characteristic timescales. Our blazar sample exhibits no differences in submillimeter variability amplitude or characteristic timescale as a function of subclass or luminosity. All of the light curves are consistent with being produced by a single process that accounts for both low and high states, and there is additional evidence that objects may be transitioning between blazar class during flaring epochs.\n\n## 1. INTRODUCTION\n\nThe timescales on which high-amplitude flaring events occur in blazars indicate that much of the energy is being produced deep within the jet on small, sub-parsec scales [1, 2]. Understanding if/how emission differs between blazar subclasses (i.e., BL Lac objects and flat-spectrum radio quasars (FSRQs)) may offer important insight into the similarity between blazars and, furthermore, can provide constraints on the formation and acceleration of the jets themselves.\n\nFor the synchrotron component of blazar spectra, the low-frequency spectral break due to synchrotron self-absorption moves to higher frequencies as one measures closer to the base of the jet [2]. This often places the peak of the spectrum in the millimeter and submillimeter bands, where the emission is optically-thin and originates on parsec and sub-parsec scales [3], allowing direct observation of the most compact regions near the central engine. 
The high energy γ -ray emission originates as a Compton process, typically a combination of synchrotron-self-Compton (SSC) and external-radiation-Compton (ERC). Depending on the source properties, the synchrotron photons or external photons are upscattered by the same population of electrons that emit the millimeter and submillimeter spectra. Therefore the submillimeter and γ -ray emission are closely linked and give the full information about the source emission.\n\nA systematic study of the submillimeter properties of the entire sample of Fermi blazars has yet to be conducted and is one of the primary goals of our work. We present here preliminary analysis of the submillimeter properties of Fermi blazars detected by the Submillimeter Array 1 ( SMA ) at 1mm and 850 µ m, including an investigation of variable behavior and the determination of submillimeter energy spectral indices. In addition, we consider the connection to the observed γ -ray indices and luminosities.\n\n## 2. SMA BLAZARS", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 5: Ratio of γ -ray luminosity to submillimeter luminosity in the 1mm band. The location of an object in this plot should be directly correlated with its blazar 'state', with FSRQs occupying the upper right and BL Lacs the lower left. 
Flat-spectrum radio quasar 3C 454.3 is the object with the highest submillimeter luminosity in this plot.\n\n\n\n - · BL Lacs and FSRQs do not exhibit significant differences in amplitude of submillimeter variability or characteristic timescale, but our sample of BL Lacs may be dominated by highpeaked BL Lacs (HBLs), which exhibit observational similarities with FSRQs.\n - · Blazar submillimeter light curves are consistent with being produced by a single process that accounts for both high and low states, with characteristic timescales 10 < τ rest < 500 days.\n - · The blazars detected by Fermi have synchrotron peaks at higher frequencies, regardless of submillimeter luminosity.\n - · FSRQs exhibit higher ratios of γ -ray to submillimeter luminosity than BL Lacs (Figure 5), but all objects inhabit a region of parameter space suggesting transitions between states during flaring epochs.\n\nAs Fermi continues to observe fainter sources, the sample of objects for which we can perform this type of analysis will increase and provide better limits on our results. To understand the physical relevance of these results, however, it is important to be able to distinguish between the difference in variability between BL\n\nLacs and FSRQs. One avenue for exploring this difference is to monitor changing submillimeter energy spectral index and the ratio of γ -ray to submillimeter luminosity as functions of time. The full meaning of the results of our autoregressive method is not yet clear, and will require better-sampled blazar light curves and the comparison between τ rest with physical timescales such as the synchrotron cooling timescale. These analyses would allow us to place constraints on the processes occurring near the base of the jet in blazars and further understand the intimate connection between them.\n\n## Acknowledgments\n\nThis work was supported in part by the NSF REU and DoD ASSURE programs under Grant no. 0754568 and by the Smithsonian Institution. 
Partial support was also provided by NASA contract NAS8-39073 and NASA grant NNX07AQ55G. We have made use of the SIMBAD database, operated at CDS, Strasbourg, France, and the NASA/IPAC Extragalactic Database (NED) which is operated by the JPL, Caltech, under contract with NASA.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 1: The SMA light curves for 3C 454.3. The open circles represent the 850 µ m observations, and the open triangles represent the 1mm observations.\n\n\n\nJ1751+096) which have conflicting classifications between Fermi and CGRaBS. Some blazars found in the calibrator list have been studied extensively (e.g., 3C 279 and 3C 454.3) but the SMA blazars have not been studied collectively.\n\nForty-four of the objects in our total blazar sample were detected by Fermi and can be found in the catalog of LAT Bright AGN Sources (LBAS) from Abdo et al. [7]. J0050-094 has no redshift in either the LBAS catalog or CGRaBS and is not included in our study. Of the 43 remaining sources, 14 are BL Lac objects and 29 are FSRQs, with 0 . 03 ≤ z ≤ 2 . 19.\n\nWe examined submillimeter light curves for all of the SMA blazars, with observations beginning in approximately 2003 (see Figure 1). Typically, the 1mm band is much more well-sampled in comparison to the 850 µ m band, but visual inspection reveals that the regularity and quality of observations vary greatly from source to source. Many of the objects exhibit nonperiodic variability, either in the form of persistent, low-amplitude fluctuations or higher amplitude flaring behavior.\n\n## 2.1. Submillimeter Properties\n\nSubmillimeter Luminosities. Since we are primarily concerned with comparisons to Fermi observations, we note that only 129 of the SMA blazars (23 BL Lacs and 106 FSRQs) were observed by the SMA in either band during the three months August-October 2008. 
For these objects, submillimeter luminosities are calculated in the standard way:\n\nν e L ν e = 4 πD 2 L ν obs F obs / (1 + z ), (1)\n\nwhere D L is the luminosity distance, ν obs is the frequency of the observed band, and F obs is the average\n\nFigure 2: Variability index for our sample (top: 1mm, bottom: 850 µ m), with FSRQs as the hatched distribution and BL Lacs as the solid distribution. There is no significant difference in the class distributions in either band; the 'tail' to the left is populated by objects with errors larger than the intrinsic variability.\n\n\n\nflux (in erg cm -2 s -1 Hz -1 ) over the three month period. We adopt a lambda cold dark matter cosmology with values of H 0 = 71 km s -1 Mpc -1 , Ω M = 0 . 27, and Λ = 0 . 73.\n\nEnergy Spectral Indices. We derive submillimeter spectral energy indices from observations quasi-simultaneous with the Fermi observations. To be consistent with the use of α γ , we define spectral energy index as νF ν = ν -α S and calculate α S from the average of the energy spectral indices over the corresponding three months. We only calculate α S for the 16 objects (8 BL Lacs and 35 FSRQs) with observations at both 1mm and 850 µ m during this time frame.\n\n## 3. VARIABILITY ANALYSIS\n\n## 3.1. Variability Index\n\nWe roughly characterize the level of variability of each source using the variability index from Hovatta et al. [8]:\n\nV = [( F max -σ F max ) -( F min + σ F min )] / [( F max -σ F max ) + ( F min + σ F min )] (2)\n\nFigure 2 shows the distribution for the SMA blazars. Objects with V ≤ 0 are typically unsuitable for more", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 6-47 List of drives in an array\n\n\n\nYou can use the CLI command lsarraymember to get the same information with the CLI. 
The sources used as phase calibrators for the array are compiled in a database known as the SMA Calibrator List 2 [5]. Essentially a collection of bright objects (stronger than 750 mJy at 230 GHz and 1 Jy at 345 GHz), these sources are monitored regularly, both during science observations and dedicated observing tracks.\n\nTo select our sample, we identified objects in the calibrator list that were also classified as BL Lacs or FSRQs by the Candidate Gamma-Ray Blazar Survey [6, CGRaBS]. Of the 243 total objects in the calibrator list, 171 (35 BL Lacs and 136 FSRQs) have positive blazar class identifications, although there are three sources (J0238+166, J0428-379, and", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 4: The γ -ray index versus submillimeter index plane. The blazars fall more steeply in the γ -rays than in the submillimeter band, where most are, in fact, rising. This LAT-detected sample contrasts with the full SMA sample, where the blazars are more distributed around α S ∼ 0.\n\n\n\nas the presence of SSC versus ERC. Here, we use submillimeter luminosity as a proxy for jet power, which is correlated with the integrated luminosity of the synchrotron component. Elevated γ -ray luminosity with respect to the synchrotron component (which is often seen in FSRQs) suggests the upscattering of external photons off the synchrotron-emitting electrons. These objects should occupy the upper right of the ratio/jet power plot, and BL Lacs, which generally exhibit components with roughly comparable luminosities, should occupy the lower left. It is clear from the figure, however, that many FSRQs exhibit ratios similar to those of the BL Lacs and vice versa.\n\nSikora et al. [10] report that, during its flaring epochs, 3C 454.3 transitions from its typical FSRQ state to a more BL Lac-like state, where the synchrotron component emits much more strongly compared to the γ -ray component than during its 'low state'. 
3C 454.3, which is the highest submillimeter luminosity FSRQ in our sample, would then shift down and to the right in Figure 5 when it enters a flaring period. For the first three months of the Fermi mission, 3C 454.3 was not flaring, which may explain its present location in Figure 5. The three objects for which there is a type discrepancy between CGRaBS and LBAS are all FSRQs (in CGRaBS) and exhibit\n\nlow luminosity ratios and high luminosity, which suggest they may be undergoing the same changes as 3C 454.3. A possible interpretation of the elevated luminosity ratios observed in some BL Lacs objects is that there has been a dramatic increase in γ -ray luminosity due to ERC, which would not be reflected in the synchrotron component.\n\n## 5. CONCLUSIONS\n\nThe motivation for observing blazars in the submillimeter is to study behavior close to the central engine, where the jet material is presumably still being accelerated. The separate emission processes that contribute to overall SED may present differently in BL Lacs and FSRQs, allowing us to understand the similarities and differences between blazar types. We have investigated these differences between objects in terms of submillimeter behavior and, in conclusion, find that\n\n- · The SMA blazars exhibit submillimeter energy spectral indexes that follow the spectral sequence interpretation of blazars.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0806.pdf" - }, - { - "text": "The amount of usable capacity that is not yet used in a system, pool, array, or MDisk.\n\n - /SM590000 Effective capacity", - "page_start": 792, - "page_end": 792, - "source_file": "sg247938.pdf" - }, - { - "text": "Here, we demonstrate an antiferromagnetic coupling and exchange bias in Fe/(Ga,Mn)As bilayer films, by combining element-specific XMCD measurements and bulk-sensitive superconducting quantum interference device (SQUID) magnetometry. 
As with previous studies of FM metal/FM semiconductor bilayers 4,5 (and in contrast to AFM coupled FM metal/FM metal exchange bias structures 10,11 ) the layers are in direct contact without a non-magnetic spacer in between. We distinguish interface and bulk (Ga,Mn)As layers that are respectively strongly and weakly antiferromagnetically coupled to the Fe overlayer. In agreement with Ref. 7 , the interface layer remains polarized at room temperature.\n\nThe Fe and (Ga,Mn)As layers of the present study were both grown by molecular beam epitaxy in the same ultra-high vacuum system, in order to ensure a clean interface between them. The (Ga,Mn)As layer of thickness 10 to 50 nm was deposited on a GaAs(001) substrate at a temperature of 260 · C, using previously established methods 3,8 . A low Mn concentration of x ≈ 0 . 03 was chosen in order to avoid the formation of compensating Mn interstitials. The substrate temperature was then reduced to ∼ 0 · C, before depositing a 2 nm Fe layer, plus a 2 nm Al capping layer. In-situ reflection high energy electron diffraction and ex-situ x-ray reflectivity and diffraction measurements confirmed that the layers are single-crystalline with sub-nm interface roughness. SQUID magnetometry measurements were performed using a Quantum Design Magnetic Property Measurement System. Mn and Fe L 2 , 3 x-ray absorption and XMCD", - "page_start": 0, - "page_end": 0, - "source_file": "1001.2449.pdf" - }, - { - "text": "- /SM590000 Arrays (hardware encryption)", - "page_start": 628, - "page_end": 628, - "source_file": "sg247938.pdf" - }, - { - "text": "- /SM590000 Used capacity\n\nThe amount of usable capacity taken up by data in a system, pool, array, or MDisk after data reduction techniques were applied.", - "page_start": 792, - "page_end": 792, - "source_file": "sg247938.pdf" - }, - { - "text": "Figure 6-47 List of drives in an array\n\n\n\nYou can use the CLI command lsarraymember to get the same information with the CLI. 
Provide an array name or ID as the parameter to filter output by the array. If run without arguments, the command lists all members of all configured arrays.\n\n## Properties\n\nThis section shows all available array MDisk parameters: its state, capacity, RAID level, and others.\n\nUse the CLI command lsarray to get a list of all configured arrays. Use lsarray with array name or ID as the parameter to get extended information about the selected one, as shown in Example 6-21.\n\nExample 6-21 lsarray output (truncated)\n\n```\nIBM\\_Storwize:ITSOV7K:superuser>lsarray mdisk\\_id mdisk\\_name status mdisk\\_grp\\_id mdisk\\_grp\\_name capacity 0 mdisk0 online 0 mdiskgrp0 1.3TB 16 Distributed\\_array online 1 mdiskgrp1 2.2TB IBM\\_Storwize:ITSOV7K:superuser>lsarray 16 mdisk\\_id 16 mdisk\\_name Distributed\\_array status online mode array mdisk\\_grp\\_id 1 mdisk\\_grp\\_name mdiskgrp1 capacity 2.2TB <...>\n```\n\n## 6.3 Working with external controllers and MDisks\n\nIn IBM Spectrum Virtualize terminology, Controllers are external storage systems that provide resources to be used as MDisks. Storwize V7000 supports external storage controllers that are attached through iSCSI and through Fibre Channel.", - "page_start": 248, - "page_end": 248, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "1001.0806.pdf", - "query": "How many blazars were observed by the SMA in either band during the three months August-October 2008?", - "target_page": 2, - "target_passage": "only 129 of the SMA blazars", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## 6. Blazars Upper Limits\n\nMore than 50 VHE blazar candidates were observed by VERITAS between September 2007 and June 2009. The total exposure on the 49 non-detected candidates is ∼ 305 h live time (average of 6.2 h per candidate). Approximately 55% of the total exposure is split amongst the 27 observed HBL. 
The remainder is divided amongst the 8 IBL (26%), 5 LBL (6%), and 9 FSRQ (13%). There are no clear indications of significant VHE γ -ray emission from any of these 49 blazars [25]. However, the observed significance distribution is clearly skewed towards positive values (see Figure 1). A stacking analysis performed on the entire data sample shows an overall excess of 430 γ -rays, corresponding to a statistical significance of 4.8 σ , observed from the directions of the candidate blazars. The IBL and HBL targets make up 96% of the observed excess. Observations of these objects also comprise ∼ 80% of the total exposure. An identical stacked analysis of all the extragalactic non-blazar targets observed, but not clearly detected ( > 5 σ ), by VERITAS does not show a significant excess ( ∼ 120 h exposure). The stacked excess persists using alternate methods for estimating the background at each blazar location, and with different event selection criteria (e.g. soft cuts optimized for sources with Γ VHE > 4). The distribution of VHE flux upper limits is shown in Figure 1. These 49 VHE flux upper limits are generally the most-constraining ever reported for these objects.\n\n## 7. Multi-wavelength Studies of VHE Blazars\n\nDuring the first three seasons of VERITAS observations, pre-planned extensive MWL campaigns were organized for three blazars 1ES 2344+514 (2007-08), 1ES 1218+304 (2008-09) and 1ES 0229+200 (200910 - ongoing). In addition, numerous ToO MWLobservation campaigns were performed. These include campaigns for every blazar/AGN discovered by VERITAS, and all include Swift (XRT and UVOT) data. All MWL campaigns on the VHE blazars discovered", - "page_start": 2, - "page_end": 2, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 1: The SMA light curves for 3C 454.3. The open circles represent the 850 µ m observations, and the open triangles represent the 1mm observations.\n\n\n\nJ1751+096) which have conflicting classifications between Fermi and CGRaBS. 
Some blazars found in the calibrator list have been studied extensively (e.g., 3C 279 and 3C 454.3) but the SMA blazars have not been studied collectively.\n\nForty-four of the objects in our total blazar sample were detected by Fermi and can be found in the catalog of LAT Bright AGN Sources (LBAS) from Abdo et al. [7]. J0050-094 has no redshift in either the LBAS catalog or CGRaBS and is not included in our study. Of the 43 remaining sources, 14 are BL Lac objects and 29 are FSRQs, with 0 . 03 ≤ z ≤ 2 . 19.\n\nWe examined submillimeter light curves for all of the SMA blazars, with observations beginning in approximately 2003 (see Figure 1). Typically, the 1mm band is much more well-sampled in comparison to the 850m band, but visual inspection reveals that the regularity and quality of observations vary greatly from source to source. Many of the objects exhibit nonperiodic variability, either in the form of persistent, low-amplitude fluctuations or higher amplitude flaring behavior.\n\n## 2.1. Submillimeter Properties\n\nSubmillimeter Luminosities. Since we are primarily concerned with comparisons to Fermi observations, we note that only 129 of the SMA blazars (23 BL Lacs and 106 FSRQs) were observed by the SMA in either band during the three months August-October 2008. For these objects, submillimeter luminosities are calculated in the standard way:\n\nν e L ν e = 4 πD 2 L ν obs F obs 1 + z , (1)\n\nwhere D L is the luminosity distance, ν obs is the frequency of the observed band, and F obs is the average\n\nFigure 2: Variability index for our sample (top: 1mm, bottom: 850 µ m), with FSRQs as the hatched distribution and BL Lacs as the solid distribution. There is no signicant difference in the class distributions in either band; the 'tail' to the left is populated by objects with errors larger than the intrinsic variability.\n\n\n\nflux (in erg cm -2 s -1 Hz -1 ) over the three month period. 
We adopt a lambda cold dark matter cosmology with values of H 0 = 71 km s -1 Mpc -1 , Ω M = 0 . 27, and Λ = 0 . 73.\n\nEnergy Spectral Indices. We derive submillimeter spectral energy indices from observations quasisimultaneous with the Fermi observations. To be consistent with the use of α γ , we define spectral energy index as νF ν = ν -α S and calculate α S from the average of the energy spectral indices over the corresponding three months. We only calculate α S for the 16 objects (8 BL Lacs and 35 FSRQs) with observations at both 1mm and 850 µ m during this time frame.\n\n## 3. VARIABILITY ANALYSIS\n\n## 3.1. Variability Index\n\nWe roughly characterize the level of variability of each source using the variability index from Hovatta et al. [8]:\n\nV = ( F max -σ F max ) -( F min + σ F min ) ( F max -σ F max ) + ( F min + σ F min ) (2)\n\nFigure 2 shows the distribution for the SMA blazars. Objects with V ≤ 0 are typically unsuitable for more", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0806.pdf" - }, - { - "text": "## 3. VERITAS Blazar KSP\n\nVERITAS observes for ∼ 750 h and ∼ 250 h each year during periods of astronomical darkness and partial moonlight, respectively. The moonlight observations are almost exclusively used for a blazar discovery program, and a large fraction of the dark time is used for the blazar KSP, which consists of:\n\n- · A VHE blazar discovery program ( ∼ 200 h / yr): Each year ∼ 10 targets are selected to receive ∼ 10 h of observations each during astronomical darkness. These data are supplemented by discovery observations during periods of partial moonlight.\n- · A target-of-opportunity (ToO) observation program ( ∼ 50 h / yr): VERITAS blazar observations can be triggered by either a VERITAS blazar discovery, a VHE flaring alert ( > 2 Crab) from the blazar monitoring program of the Whipple 10-m telescope or from another VHE instrument, or a lower-energy flaring alert (optical, X-ray or Fermi-LAT). 
Should the guaranteed allocation be exhausted, further time can be requested from a pool of director's discretionary time.\n- · Multi-wavelength (MWL) studies of VHE blazars ( ∼ 50 h / yr + ToO): Each year one blazar receives a deep exposure in a pre-planned campaign of extensive, simultaneous MWL (Xray, optical, radio) measurements. ToO observation proposals for MWL measurements are also submitted to lower-energy observatories (e.g. Swift) and are triggered by a VERITAS discovery or flaring alert.\n- · Distant VHE blazar studies to constrain the extragalactic background light (EBL): Here distant targets are given a higher priority in the blazar discovery program, as well as for the MWL observations of known VHE blazars, particularly those with hard VHE spectra.\n\n## 4. Blazar Discovery Program\n\nThe blazars observed in the discovery program are largely high-frequency-peaked BL Lac objects. However, the program also includes IBLs (intermediatepeaked) and LBLs (low-peaked), as well as flat spectrum radio quasars (FSRQs), in an attempt to increase the types of blazars known to emit VHE γ -rays. The observed targets are drawn from a target list containing objects visible to the telescopes at reasonable zenith angles ( -8 · < δ < 72 · ), without a previously published VHE limit below 1.5% Crab, and with a measured redshift z < 0 . 3. To further the study of the\n\nEBL a few objects having a large ( z > 0 . 3) are also included in the target list. The target list includes:\n\n- · All nearby ( z < 0 . 3) HBL and IBL recommended as potential VHE emitters in [5, 6, 7].\n- · The X-ray brightest HBL ( z < 0 . 3) in the recent Sedentary [8] and ROXA [9] surveys.\n- · Four distant ( z > 0 . 3) BL Lac objects recommended by [5, 10].\n- · Several FSRQ recommended as potential VHE emitters in [6, 11].\n- · All nearby ( z < 0 . 3) blazars detected by EGRET [12].\n- · All nearby ( z < 0 . 
3) blazars contained in the Fermi-LAT Bright AGN Sample [13].\n- · All sources ( | b | > 10 · ) detected by Fermi-LAT where extrapolations of their MeV-GeV γ -ray spectrum (including EBL absorption; assuming z = 0.3 if the redshift is unknown) indicates a possible VERITAS detection in less than 20 h. This criteria is the focus of the 2009-10 VERITAS blazar discovery program.\n\n## 5. VERITAS AGN Detections\n\nVERITAS has detected VHE γ -ray emission from 16 AGN (15 blazars), including 8 VHE discoveries. These AGN are shown in Table I, and each has been detected by the Large Area Telescope (LAT) instrument aboard the Fermi Gamma-ray Space Telescope. Every blazar discovered by VERITAS was the subject of ToO MWL observations to enable modeling of its simultaneously-measured SED. The known VHE blazars detected by VERITAS were similarly the targets of MWL observations.\n\n## 5.1. Recent VERITAS Blazar Discoveries", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "## 2. SMA BLAZARS\n\nThe Submillimeter Array [4] consists of eight 6 m antennas located near the summit of Mauna Kea. The SMA is used in a variety of baseline configurations and typically operates in the 1mm and 850 µ m windows, achieving spatial resolution as fine as 0.25' at 850 µ m. The sources used as phase calibrators for the array are compiled in a database known as the SMA Calibrator List 2 [5]. Essentially a collection of bright objects (stronger than 750 mJy at 230 GHz and 1 Jy at 345 GHz), these sources are monitored regularly, both during science observations and dedicated observing tracks.\n\nTo select our sample, we identified objects in the calibrator list that were also classified as BL Lacs or FSRQs by the Candidate Gamma-Ray Blazar Survey [6, CGRaBS]. 
Of the 243 total objects in the calibrator list, 171 (35 BL Lacs and 136 FSRQs) have positive blazar class identifications, although there are three sources (J0238+166, J0428-379, and", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "## Submillimeter Variability and the Gamma-ray Connection in Fermi Blazars\n\nA. Strom Univ. of Arizona, AZ 85721, USA A. Siemiginowska, M. Gurwell, B. Kelly\n\nCfA, MA 02138, USA\n\nWe present multi-epoch observations from the Submillimeter Array ( SMA ) for a sample of 171 bright blazars, 43 of which were detected by Fermi during the first three months of observations. We explore the correlation between their gamma-ray properties and submillimeter observations of their parsec-scale jets, with a special emphasis on spectral index in both bands and the variability of the synchrotron component. Subclass is determined using a combination of Fermi designation and the Candidate Gamma-Ray Blazar Survey (CGRaBS), resulting in 35 BL Lac objects and 136 flat-spectrum radio quasars (FSRQs) in our total sample. We calculate submillimeter energy spectral indices using contemporaneous observations in the 1 mm and 850 micron bands during the months August-October 2008. The submillimeter light curves are modeled as first-order continuous autoregressive processes, from which we derive characteristic timescales. Our blazar sample exhibits no differences in submillimeter variability amplitude or characteristic timescale as a function of subclass or luminosity. All of the the light curves are consistent with being produced by a single process that accounts for both low and high states, and there is additional evidence that objects may be transitioning between blazar class during flaring epochs.\n\n## 1. INTRODUCTION\n\nThe timescales on which high-amplitude flaring events occur in blazars indicate that much of the energy is being produced deep within the jet on small, sub-parsec scales [1, 2]. 
Understanding if/how emission differs between blazar subclasses (i.e., BL Lacs objects and flat-spectrum radio quasars (FSRQs)) may offer important insight into the similarity between blazars and, furthermore, can provide constraints on the formation and acceleration of the jets themselves.\n\nFor the synchrotron component of blazar spectra, the low-frequency spectral break due to synchrotron self-absorption moves to higher frequencies as one measures closer to the base of the jet [2]. This often places the peak of the spectrum in the millimeter and submillimeter bands, where the emission is optically-thin and originates on parsec and sub-parsec scales [3], allowing direct observation of the most compact regions near the central engine. The high energy γ -ray emission originates as a Compton process, typically a combination of synchrotron-self-Compton (SSC) and external-radiation-Compton (ERC). Depending on the source properties, the synchrotron photons or external photons are upscattered by the same population of electrons that emit the millimeter and submillimeter spectra. Therefore the submillimeter and γ -ray emission are closely linked and give the full information about the source emission.\n\nA systematic study of the submillimeter properties of the entire sample of Fermi blazars has yet to be conducted and is one of the primary goals of our work. We present here preliminary analysis of the submillimeter properties of Fermi blazars detected by the Submil-\n\nlimeter Array 1 ( SMA ) at 1mm and 850 µ m, including an investigation of variable behavior and the determination of submillimeter energy spectral indices. In addition, we consider the connection to the observed γ -ray indices and luminosities.\n\n## 2. SMA BLAZARS", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0806.pdf" - }, - { - "text": "tion of correlated VHE and X-ray flux variability, as well as correlated spectral hardening in both the VHE and X-ray bands. 
The VHE MWL observations were performed in both 'quiescent' and flaring states for some of the observed blazars. For the observed HBL objects, the SEDs can be well described by a simple SSC model in both high and low states. However, an additional external Compton component is necessary to adequately fit the SEDs of the IBL objects.\n\nThe Fermi-LAT is already having a significant impact on the blazar KSP. In future seasons, the VERITAS blazar discovery program will focus its discovery program on hard-spectrum blazars detected by Fermi-LAT, and will likely have a greater focus on high-risk/high-reward objects at larger redshifts (0 . 3 < z < 0 . 7). In addition, the number of VHE blazars studied in pre-planned MWL campaigns will increase as data from the Fermi-LAT will be publicly available. In particular, the extensive pre-planned MWL campaigns will focus on objects that are noteworthy for the impact their data may have on understanding the EBL. The simultaneous observations of blazars by VERITAS and Fermi-LAT will completely resolve the higher-energy SED peak, often for the first time, enabling unprecedented constraints on the underlying blazar phenomena to be derived.\n\n## Acknowledgments\n\nThis research is supported by grants from the US Department of Energy, the US National Science Foundation, and the Smithsonian Institution, by NSERC in Canada, by Science Foundation Ireland, and by STFC in the UK. We acknowledge the excellent work of the technical support staff at the FLWO and the collab-\n\norating institutions in the construction and operation of the instrument.\n\n## References\n\n - [1] F. Aharonian et al. 2007, ApJ , 664 , L71\n - [2] F. Aharonian et al. 2006, Nature , 440 , 1018\n - [3] F. Aharonian et al. 2007, A&A , 475 , L9\n - [4] J. Holder, et al. 2008, AIPC , 1085 , 657\n - [5] L. Costamante & G. Ghisellini 2002, A&A , 384 , 56\n - [6] E.S. Perlman 2000, AIPC , 515 , 53\n - [7] F.W. Stecker et al. 1996, ApJ , 473 , L75\n - [8] P. Giommi et al. 
2005, A&A , 434 , 385\n - [9] S. Turriziani et al. 2007, A&A , 472 , 699\n - [10] L. Costamante 2006, arXiv:0612709\n - [11] P. Padovani et al. 2002, ApJ , 581 , 895\n - [12] R. Muhkerjee et al. 2001, AIPC , 558 , 324\n - [13] A.A. Abdo et al. 2009, ApJ , 700 , 597\n - [14] V.A. Acciari et al. 2008, ApJ , 684 , L73\n - [15] V.A. Acciari et al. 2009, ApJ , 707 , 612\n - [16] V.A. Acciari et al. 2009, ApJ , 690 , L126\n - [17] V.A. Acciari et al. 2009, ApJ , 693 , L104\n - [18] L.C. Reyes 2009, arXiv:0907.5175\n - [19] R.A. Ong 2009, ATel , 1941\n - [20] R.A. Ong et al. 2009, ATel , 2272\n - [21] V.A. Acciari et al. 2009, ApJ , 708 , L100\n - [22] R.A. Ong et al. 2009, ATel , 2301\n - [23] R.A. Ong et al. 2009, ATel , 2260\n - [24] R.A. Ong et al. 2009, ATel , 2309\n - [25] W. Benbow 2009, arXiv:0908.1412\n - [26] V.A. Acciari et al. 2009, ApJ , submitted\n - [27] V.A. Acciari et al. 2009, ApJ , 695 , 1370\n - [28] V.A. Acciari et al. 2009, ApJ , in press\n - [29] J. Grube 2009, arXiv:0907.4862", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0770.pdf" - }, - { - "text": "## VERITAS Observations of Blazars\n\nW. Benbow for the VERITAS Collaboration\n\nHarvard-Smithsonian Center for Astrophysics, F.L. Whipple Observatory, PO Box 6369, Amado, AZ 85645, USA\n\nThe VERITAS array of four 12-m diameter imaging atmospheric-Cherenkov telescopes in southern Arizona is used to study very high energy (VHE; E > 100 GeV) γ -ray emission from astrophysical objects. VERITAS is currently the most sensitive VHE γ -ray observatory in the world and one of the VERITAS collaboration's Key Science Projects (KSP) is the study of blazars. These active galactic nuclei (AGN) are the most numerous class of identified VHE sources, with ∼ 30 known to emit VHE photons. More than 70 AGN, almost all of which are blazars, have been observed with the VERITAS array since 2007, in most cases with the deepest-ever VHE exposure. 
These observations have resulted in the detection of VHE γ -rays from 16 AGN (15 blazars), including 8 for the first time at these energies. The VERITAS blazar KSP is summarized in this proceeding and selected results are presented.\n\n## 1. Introduction\n\nActive galactic nuclei are the most numerous class of identified VHE γ -ray sources. These objects emit non-thermal radiation across ∼ 20 orders of magnitude in energy and rank among the most powerful particle accelerators in the universe. A small fraction of AGN possess strong collimated outflows (jets) powered by accretion onto a supermassive black hole (SMBH). VHE γ -ray emission can be generated in these jets, likely in a compact region very near the SMBH event horizon. Blazars, a class of AGN with jets pointed along the line-of-sight to the observer, are of particular interest in the VHE regime. Approximately 30 blazars, primarily high-frequency-peaked BL Lacs (HBL), are identified as sources of VHE γ -rays, and some are spectacularly variable on time scales comparable to the light crossing time of their SMBH ( ∼ 2 min; [1]). VHE blazar studies probe the environment very near the central SMBH and address a wide range of physical phenomena, including the accretion and jet-formation processes. These studies also have cosmological implications, as VHE blazar data can be used to strongly constrain primordial radiation fields (see the extragalactic background light (EBL) constraints from, e.g., [2, 3]).\n\nVHE blazars have double-humped spectral energy distributions (SEDs), with one peak at UV/X-ray energies and another at GeV/TeV energies. The origin of the lower-energy peak is commonly explained as synchrotron emission from the relativistic electrons in the blazar jets. The origin of the higher-energy peak is controversial, but is widely believed to be the result of inverse-Compton scattering of seed photons off the same relativistic electrons. 
The origin of the seed photons in these leptonic scenarios could be the synchrotron photons themselves, or photons from an external source. Hadronic scenarios are also plausible explanations for the VHE emission, but generally are not favored.\n\nContemporaneous multi-wavelength (MWL) obser-\n\nvations of VHE blazars, can measure both SED peaks and are crucial for extracting information from the observations of VHE blazars. They are used to constrain the size, magnetic field and Doppler factor of the emission region, as well as to determine the origin (leptonic or hadronic) of the VHE γ -rays. In leptonic scenarios, such MWL observations are used to measure the spectrum of high-energy electrons producing the emission, as well as to elucidate the nature of the seed photons. Additionally, an accurate measure of the cosmological EBL density requires accurate modeling of the blazar's intrinsic VHE emission that can only be performed with contemporaneous MWL observations.\n\n## 2. VERITAS", - "page_start": 0, - "page_end": 0, - "source_file": "1001.0770.pdf" - }, - { - "text": "## 5.1. Recent VERITAS Blazar Discoveries\n\nPrior to the launch of Fermi VERITAS had discovered VHE emission from 2 blazars. These included the first VHE-detected IBL, W Comae [14, 15], and the HBL 1ES0806+524 [16]. VERITAS has discovered 6 VHE blazars since the launch of Fermi. Three of these were initially observed by VERITAS prior to the release of Fermi-LAT results, due to the X-ray brightness of the synchrotron peaks of their SEDs.\n\nVHEemission from 3C66A was discovered by VERITAS in September 2008 [17] during a flaring episode that was also observed by the Fermi-LAT [18]. The observed flux above 200 GeV was 6% of the Crab Nebula flux and the measured VHE spectrum was very soft (Γ VHE ∼ 4 . 1). RGBJ0710+591 was detected", - "page_start": 1, - "page_end": 1, - "source_file": "1001.0770.pdf" - }, - { - "text": "Figure 4: The γ -ray index versus submillimeter index plane. 
The blazars fall more steeply in the γ -rays than in the submillimeter band, where most are, in fact, rising. This LAT-detected sample contrasts with the full SMA sample, where the blazars are more distributed around α S ∼ 0.\n\n\n\nas the presence of SSC versus ERC. Here, we use submillimeter luminosity as a proxy for jet power, which is correlated with the integrated luminosity of the synchrotron component. Elevated γ -ray luminosity with respect to the synchrotron component (which is often seen in FSRQs) suggests the upscattering of external photons off the synchrotron-emitting electrons. These objects should occupy the upper right of the ratio/jet power plot, and BL Lacs, which generally exhibit components with roughly comparable luminosities, should occupy the lower left. It is clear from the figure, however, that many FSRQs exhibit ratios similar to those of the BL Lacs and vis versa.\n\nSikora et al. [10] report that, during its flaring epochs, 3C 454.3 transitions from its typical FSRQ state to a more BL Lac-like state, where the synchrotron component emits much more strongly compared to the γ -ray component than during its 'low state'. 3C 454.3, which is the highest submillimeter luminosity FSRQ in our sample, would then shift down and to the right in Figure 5 when it enters a flaring period. For the first three months of the Fermi mission, 3C 454.3 was not flaring, which may explain its present location in Figure 5. The three objects for which there is a type discrepancy between CGRaBS and LBAS are all FSRQs (in CGRaBS) and exhibit\n\nlow luminosity ratios and high luminosity, which suggest they may be undergoing the same changes as 3C 454.3. A possible interpretation of the elevated luminosity ratios observed in some BL Lacs objects is that there has been a dramatic increase in γ -ray luminosity due to ERC, which would not be reflected in the synchrotron component.\n\n## 5. 
CONCLUSIONS\n\nThe motivation for observing blazars in the submillimeter is to study behavior close to the central engine, where the jet material is presumably still being accelerated. The separate emission processes that contribute to overall SED may present differently in BL Lacs and FSRQs, allowing us to understand the similarities and differences between blazar types. We have investigated these differences between objects in terms of submillimeter behavior and, in conclusion, find that\n\n- · The SMA blazars exhibit submillimeter energy spectral indexes that follow the spectral sequence interpretation of blazars.", - "page_start": 3, - "page_end": 3, - "source_file": "1001.0806.pdf" - }, - { - "text": "Figure 5: Ratio of γ -ray luminosity to submillimeter luminosity in the 1mm band. The location of an object in this plot should be directly correlated with its blazar 'state', with FSRQs occupying the upper right and BL Lacs the lower left. Flat-spectrum radio quasar 3C 454.3 is the object with the highest submillimeter luminosity in this plot.\n\n\n\n - · BL Lacs and FSRQs do not exhibit significant differences in amplitude of submillimeter variability or characteristic timescale, but our sample of BL Lacs may be dominated by highpeaked BL Lacs (HBLs), which exhibit observational similarities with FSRQs.\n - · Blazar submillimeter light curves are consistent with being produced by a single process that accounts for both high and low states, with characteristic timescales 10 < τ rest < 500 days.\n - · The blazars detected by Fermi have synchrotron peaks at higher frequencies, regardless of submillimeter luminosity.\n - · FSRQs exhibit higher ratios of γ -ray to submillimeter luminosity than BL Lacs (Figure 5), but all objects inhabit a region of parameter space suggesting transitions between states during flaring epochs.\n\nAs Fermi continues to observe fainter sources, the sample of objects for which we can perform this type of analysis will increase and provide 
better limits on our results. To understand the physical relevance of these results, however, it is important to be able to distinguish between the difference in variability between BL\n\nLacs and FSRQs. One avenue for exploring this difference is to monitor changing submillimeter energy spectral index and the ratio of γ -ray to submillimeter luminosity as functions of time. The full meaning of the results of our autoregressive method is not yet clear, and will require better-sampled blazar light curves and the comparison between τ rest with physical timescales such as the synchrotron cooling timescale. These analyses would allow us to place constraints on the processes occurring near the base of the jet in blazars and further understand the intimate connection between them.\n\n## Acknowledgments\n\nThis work was supported in part by the NSF REU and DoD ASSURE programs under Grant no. 0754568 and by the Smithsonian Institution. Partial support was also provided by NASA contract NAS8-39073 and NASA grant NNX07AQ55G. We have made use of the SIMBAD database, operated at CDS, Strasbourg, France, and the NASA/IPAC Extragalactic Database (NED) which is operated by the JPL, Caltech, under contract with NASA.", - "page_start": 4, - "page_end": 4, - "source_file": "1001.0806.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_MRM_2000.pdf", - "query": "How big is the Mermaid fleet?", - "target_page": 12, - "target_passage": "Mermaid operates a fleet of fifteen (15) tugs, workboats and barges, undertaking all forms of offshore activity", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## MERMAID FLEET\n\n\n\n", - "page_start": 25, - "page_end": 25, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## OPERATIONS REVIEW\n\n## MERMAID LABOUR AND MANAGEMENT LIMITED\n\nSAFETY\n\n\n\nDuring 2000 Mermaid Marine formed a new business unit Mermaid Labour and Management Limited. 
The focus of this unit will be labour supply and industrial relations management to the marine, offshore construction industry and onshore resources projects in the NW of Australia. The Directors and Management of the new entity are very experienced, well known and regarded by the industry in general. The company has high expectations for Mermaid Labour and Management Limited.\n\nMermaid remains dedicated to ensuring a safe environment in all areas where we operate or have responsibility.\n\nIn April 2000, following the regular six monthly Quality Assurance audit, the Company's accreditation under AS/NZS/ISO 9002 was reconfirmed. Mermaid's quality assurance and compliance team continues with a continuous day to day effort to improve our health, safety and environmental performance. Stringent charterer requirements, which are a pre requisite of increased vessel usage, must be met to the letter and are the subject of regular and demanding audits. Although time consuming and expensive, we are grateful to certain of the large producers, who while demanding the highest levels of compliance, have also been prepared to give their time, sharing their safety expertise with us and in that way assisting in the very major advances our company has made in this all important area.\n\nAt the time of writing this report, Mermaid had accumulated 348 days without a Lost Time Injury. A fine achievement and a continuing record.", - "page_start": 23, - "page_end": 23, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## OPERATIONS REVIEW\n\nThe foreshore of King Bay will be redeveloped as part of the Mermaid Marine Dampier Base Expansion works.\n\n\n\nleased facilities to seven third party vessels and protection for three of our own vessels using this technique by the cyclone season in 2001.\n\nAs more vessels seek protection, additional breakwaters can be constructed and sea room dredged. 
Each mooring involves a pattern of pin piles drilled into the granite sea floor with four vessel specific mooring lines secured to special attachment points on the vessel.\n\nMany smaller vessels including Mermaid's will be lifted from the water and tied down on purpose built cradles for cyclones.\n\n## F. ONSHORE LAND RECLAMATION.\n\nLike our neighbours, much of the Mermaid site is below the prescribed storm surge level, or needs some degree of earthworks to maximize its value. Currently 8 of the 17 ha of the area is suitable for development in its present state.\n\nThe spoil produced from dredging will allow Mermaid to achieve full utilization of the site at a fraction of the cost of importing fill from elsewhere.\n\nConsiderable effort has gone into anticipating the future direction of the Base. Planning services such as traffic flows, land allocation and security, as well as fulfilling the many and complex regulatory requirements related to health, safety, quarantine, environmental management, dust, dangerous goods and hazchem materials have been the subject of considerable study prior to this implementation stage.\n\n13\n\n", - "page_start": 16, - "page_end": 16, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## OPERATIONS REVIEW\n\nTrading for the period commencing 1 July 1999 to 30 June 2000 for Mermaid Marine Australia Ltd ('Company') and its controlled entities, experienced a 43% turnover reduction from last year. The result was almost entirely due to a heavy fall in oil prices, which reached their low of US$10 in February 1999, leading to the lowest level of offshore activity for many years. In September 1999 Mermaid exercised its option to acquire the utility vessel 'Mermaid Achiever' for $3,250,000. 
Previously the Achiever operated under a bare boat charter.\n\nIn February 2000 Mermaid received approval in principle from the Western Australian Minister for the Environment for the development of a supply and engineering base at Dampier (Dampier Base). Since that time a detailed environmental management system has been produced for final approval and as a guide to daily environmental management and compliance. Refinements to the design have proceeded, together with the preparation of bid packages and negotiations with Banks for project finance.\n\nSubsequent to years end, the subscription of a further $5 million from Mr Mark Bradley and Clough Engineering will see an extremely robust balance sheet, with cash on hand approaching $10 million. As construction commences at Dampier, a level of project finance will be arranged providing a comfortable mix of debt and equity and allowing the retention of a significant cash balance.\n\nThe year saw considerable progress with Base activities at Dampier, Broome and Darwin. They are dealt with in detail under following headings.\n\nFINANCIAL\n\nMermaid recorded an after-tax loss for the Period of $207,957. Compared with an after-tax profit for the previous period of $2,454,919. Revenue for the Period was $15,124,774, a decrease of 43% over the previous period. Fixed cost reductions enabled the Company to ride out the market reversal with a minimal loss and positive operating cash before capex of $1.6m. This result, achieved against a major drop in turnover, was possible through a vigorous attack on overheads, which included more beneficial ownership costs, insurance savings, management salary savings, including voluntary sacrifice from certain senior executives in recognition of the tighter conditions. In all the changes contributed approximately $1.5million to the bottom line.\n\nBare boat charters, although useful for the busy times encountered in 1998 exposed the Company to a high level of fixed costs. 
The vessels were valuable earners and the transfer of the Mermaid Achiever, Mermaid Eagle and Mermaid Reunion to Company ownership has proved to be the right decision for all market conditions. Although there have been no contracts yet let for work of any significance by producers on the North West Shelf, underlying day to day activity has returned. Expressions of interest for major project work have been issued and as an indication of better trading conditions, an unaudited profit of $496,721 has been recorded for the two months to 31st August 2000. The trend has continued in September.\n\n7\n\n\n\n## OVERVIEW", - "page_start": 10, - "page_end": 10, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "\n\n## NOTES TO AND FORMING PART OF THE FINANCIAL STATEMENTS FOR THE FINANCIAL YEAR ENDED 30 JUNE 2000\n\n| Note | Consolidated 1999 | Company | Company | |\n|------------------------------------------------------|---------------------|--------------------------|---------------|----|\n| | 2000 $ $ | 2000 $ | 1999 $ | |\n| INVESTMENTS | | | | |\n| At cost: | | | | |\n| Unlisted investment - shares controlled in entities | - | - 2,444,611 | 2,444,611 | |\n| | Country of | Ownership Interest 2000 | Ownership | |\n| | Incorporation | | Interest 1999 | |\n| Parent Entity | | | | |\n| Mermaid Marine Australia Limited | Australia | | | |\n| Controlled Entities | | | | |\n| Mermaid Marine Group Pty Ltd* | Australia | 100 | 100 | |\n| Mermaid Marine Vessel Operations Pty Ltd* | Australia | 100 | 100 | |\n| Mermaid Marine Pty Ltd* | Australia | 100 | 100 | |\n| Mermaid Marine Offshore Pty Ltd* | Australia | 100 | 100 | |\n| Mermaid Marine Charters Pty Ltd* | Australia | 100 | 100 | |\n| Mermaid Supply Base Pty Ltd* | Australia | 100 | 100 | |\n| Dampier Stevedoring Pty Ltd* | Australia | 100 | 100 | |\n| Mermaid Manning and Management Pty Ltd* | Australia | 100 | 100 | |", - "page_start": 49, - "page_end": 49, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## 
CHAIRMAN'S REPORT\n\nDirector of the Clough Group and a highly experienced and talented executive. Richard has appointed an alternate director, Mr Chris Sutherland, a senior Clough Executive, with engineering qualifications and associated business skills to assist him.\n\nCaptain Jim Carver, Mermaid's founder continues to play a significant role in Mermaid's operations, paying particular attention to our business at sea. Under 20 years of Jim's leadership, Mermaid developed an enviable reputation as a 'can do' company, and in our drive for new engineering expertise and professionalism, we have no intention of allowing that attitude to be lost.\n\nLast year we identified Broome as our next strategic position. No oil and gas work had been supported out of Broome for seventeen years and with the valuable cooperation and assistance of the Broome Port Authority, we secured Inpex, the large Japanese resource company as our first client. The base was then established early this year.\n\nA new focus has developed in the Browse Basin and it is pleasing to report that after only seven months operation, our Base is profitable, housing Inpex, BHP, Woodside and Sedco in support of their current drilling programs. All the holes drilled from the Broome Base have been designated as commercial finds by the explorers and the very major increase in the reserves at Brecknock, Woodside's permit 500 kilometres north of Broome creates optimism for future production based in the Broome area.\n\nDarwin was next on our list, enabling involvement in Timor Sea oil and gas activity. The Bayu Undan project operated by Phillips, is well advanced and will impact Darwin's offshore activity quite soon. Pursuing the formula for a strategic sea/land interface, we reached agreement with Perkins Shipping in Darwin, to set up an office at their Frances Drive facility. Perkins Shipping is synonymous with Darwin's history. Set up by V.B. 
Perkins in the late 40's, it has grown to significant size, operating its ships across the top of Australia and into South East Asia. There are many synergies which Mermaid shares with Perkins and we look forward to developing our Darwin business in close association with that fine old Company.\n\nOur ambitions for the support of the oil and gas industry now go beyond bases and vessels. Early in the current financial year, Mermaid acquired 50% of the OIS MOC Joint Venture Pty Ltd, to be paid for by the issue of 800,000 Mermaid shares. OIS MOC owns the highly successful labour hire business operated by Kevin Ponga and Rick De Franck. Kevin Ponga is now General Manager of Mermaid Labour & Management Pty Limited and Mr De Franck becomes a Director. With their reputation and talent added to Mermaid's experienced team, this labour hire company has become a significant force and can be expected to be in the final when major labour hire contracts are let.\n\n", - "page_start": 8, - "page_end": 8, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## G. SLIPWAY.\n\nAustralia, and particularly the north west is impoverished in terms of infrastructure to service our marine industries. Some of this has been due to a historical link with our recent industrial past. This is now behind us, and Australia has now become a centre of excellence with respect to both new building and ship repair, particularly for high tech and specialty vessels.\n\nThe Mermaid slipway will be the third such facility on the western half of the continent , with others located at Fremantle and Darwin.\n\nThe slipway will be a repair only facility, no new building is contemplated. Its capacity is structured to meet the regional steel mono-hulled fleet requirements of some 60 vessels between 200 and 4000 tonne displacement. 
Fishing industry, marine tourist industry, large private pleasure craft , naval, scientific and law enforcement vessels are a secondary target.\n\nThe slipway is designed to initially accept vessels up to 2,700 tonnes, a restriction which is set by our current inventory of cradles used to support vessel on the slip. The cradles will be progressively upgraded to ultimately handle 4000 tonne. A later expansion will allow 500 tonne vessels to be side slipped, thereby increasing capacity.\n\nThe slipway location and orientation on the Base has been chosen to maximize the cost and load bearing benefits of having a very high strength granite bedrock as the best possible foundation.\n\nThe Mermaid slipway will rank second in terms of capacity on the western half of the continent. Tenix, Fremantle 8,000 tonne, Mermaid Dampier 2,700 tonne rising to 4,000 tonne, Darwin Ship Repair 2,500 tonne. The nearest other facilities are Singapore, Adelaide, Port Moresby or Cairns.\n\n\n\nMermaid has purchased a very large cyclone rated industrial building frame which will be sited beside the slipway and tenanted by Mermaid engineering and companies which will provide ancillary services related to ship repair.\n\nThe Northwest Shelf is a world scale offshore oil and gas exploration province.\n\n", - "page_start": 20, - "page_end": 20, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## OPERATIONS REVIEW\n\nDarwin is serviced by three marine infrastructure elements.\n\n - a. A public port adjacent to the main business centre, which is destined to be redeveloped as a cruise ship and tourism precinct .\n - b. A group of freehold water front properties on Frances Bay near to the main business center.\n - c. 
A recently commissioned public port and industrial estate at East Arm some 25 km from the main business district.\n\nDarwin already has an abundance of shore based logistics service providers who operate from onshore industrial estates through publicly owned facilities.\n\nThe Northern Territory Government has sponsored a study to determine the marine infrastructure deficits of the Darwin area. Mermaid has contributed to the study and is monitoring the subsequent planning processes.\n\nRegardless of industry trends, Mermaid has a need for a Darwin Base to service and care for Mermaid vessels working in the area. Too often vessels have been demobilised to Dampier at the conclusion of a contract then being required to return to Darwin within days or weeks for another assignment.\n\nMermaid has decided that needs and opportunities in the north of Australia can be best served by entering a co-operative arrangement with an established Darwin Company. Agreement has therefore been reached with Perkins Shipping Group, who are one of the freehold land owners on Frances Bay.\n\nPerkins Shipping, established in the 1950s is the major coastal shipping service provider in Australia's north, linking Darwin to mining and aboriginal committees from the Kimberly to Gulf of Carpenteria. Additionally Perkins operate services to East Timor, mining operations in Indonesia, as well as Singapore and East Malaysia. The Perkins and Mermaid businesses are different, but complementary, offering benefits to both. 
The arrangement with Perkins will give Mermaid well placed office facilities, open storage and waterfront access.\n\nOur intention is that Darwin become the third and final mainland entreport to service the Northwestern offshore oil and gas industry together with our other strategically placed facilities at Dampier and Broome.\n\n", - "page_start": 22, - "page_end": 22, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "\n\n## NOTES TO AND FORMING PART OF THE FINANCIAL STATEMENTS FOR THE FINANCIAL YEAR ENDED 30 JUNE 2000\n\n## 7. RELATED PARTY TRANSACTIONS\n\nThe directors of Mermaid Marine Australia Limited during the Financial Year were:\n\nA G Birchmore\n\n(appointed 12 August 1998)\n\nJ H Carver\n\n(appointed 29 June 1998)\n\nD A Dillon\n\n(appointed 12 August 1998)\n\nJ A S Mews\n\n(appointed 12 August 1998)\n\nInterest in the shares of the Company held by directors and their director related entities as at 30 June 2000.\n\n| | Mermaid Marine Australia Limited | Mermaid Marine Australia Limited |\n|---------------|------------------------------------|------------------------------------|\n| | Ordinary Shares | Options over Ordinary Shares |\n| A G Birchmore | 13,695,300 | 382,000 |\n| J H Carver | 13,631,300 | 20,000 |\n| D A Dillon | 1,520,000 | 10,000 |\n| J A S Mews | 1,500,000 | - |\n\nThe following related party transactions occurred during the Financial Year:\n\n## Transactions with directors and director related entities\n\nDuring the Financial Year, a total of $75,000 for directors fees was paid to Chalfont Holdings Limited, a related entity of A G Birchmore. 
This is reflected in full in note 24 - Remuneration of Directors.\n\n## Transactions with other related parties\n\n## (a) Mermaid Achiever\n\nThe Achiever Partnership (comprising Delmark Investments Pty Ltd, a related entity of A G Birchmore, J H Carver, D A Dillon and P D M Holdings Pty Ltd, a related entity of J A S Mews) entered into a put and call option agreement with the Company on 12 April 1999, pursuant to which it was agreed that either party could, at any time between 30 June 2000 and the expiration of the Company's Charter of the Mermaid Achiever from the Achiever Partnership, oblige the other party to enter into an Agreement for the sale and purchase of that vessel at price fixed at $3,250,000.\n\nOn 24 September 1999 the Company exercised its option to acquire the Mermaid Achiever in accordance with the terms of the above option. Bareboat Charter Fees of $184,000 were paid for the period 1st July 1999 to the 30th September 2000 the effective date of settlement.\n\n## (b) Fremantle Premises\n\n - (i) The Achiever Partnership and the Company entered into a heads of agreement dated 12 April 1999 for the lease to the entity of its registered office at 20 Mews Road, Fremantle.\n - (ii) The term of the lease is 5 years with a 5 year option of renewal in favour of the Company.\n - (iii) The Company is responsible for all fitting out, maintenance (except capital works items), rates, taxes, insurance, and other usual variable outgoings.\n - (iv) The offices have undergone substantial refurbishment.", - "page_start": 59, - "page_end": 59, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "\n\n## OPERATIONS REVIEW\n\n## BASE EXPANSION WORKS AND ENVIRONMENTAL MANAGEMENT\n\nWork on Dampier\n\nBase expansion commenced on 9 October and will be largely complete by June 2001, involving a capital budget of $13m.\n\nThe principle activities and facility developments involved in the expansion are:\n\n## A. 
DREDGING\n\nApproximately 700,000 m3 of material is to be dredged in King Bay to form an entrance channel, vessel berths, cyclone moorings and to provide access to the slipway.\n\nThe experience of Woodside constructing their nearby base in 1981 indicates that two types of dredges will be required, a Cutter Suction to remove the soft unconsolidated material (approx.70%) and a Dipper Dredge (barge mounted back-hoe) to remove harder consolidated material.\n\nThe Cutter Suction dredge size will be deliberately modest due to onshore spoil management requirement and environmental considerations.\n\nThe Dipper Dredge will be the largest of its type in the world, and will be an ideal remedial dredging tool using the experience gained from the earlier Woodside project.\n\nThe layout of the Base has been very much driven by the desire to avoid or minimize blasting while fulfilling functional objectives.\n\nThe entrance channel into the Mermaid Base will be 30 m wide and dredged to 6 m below chart datum. The dredge spoil will be pumped ashore and used as fill around the Base.\n\nDredges are expected to be onsite for approximately 7 months commencing mid November.\n\n## B. QUAY WALL ( BERTH 1)\n\nMarket research and customer needs have caused Mermaid to relocate and redesign the main berth to accommodate a wider range of vessels than originally contemplated. The berth is now located in deeper water with better vessel access.\n\nThe regional offshore fleet characteristics have been changing in terms of vessel size. There are now four vessels operating in the region with 12,000 to 18,000 hp. When design commenced there were none of this size.\n\nThe depth alongside Berth 1 will be 7.5m. King Bay has a statistical average extreme low tide (MLWS) of 0.9 m, the occurrence of which can be expressed in hours per month. 
The largest", - "page_start": 13, - "page_end": 13, - "source_file": "ASX_MRM_2000.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_MRM_2000.pdf", - "query": "What was the budget for the expansion of Dampier Base?", - "target_page": 14, - "target_passage": "a capital budget of $13m", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "\n\n## OPERATIONS REVIEW\n\n## SEAGOING OPERATIONS\n\nMermaid operates a fleet of fifteen (15) tugs, workboats and barges, undertaking all forms of offshore activity including exploration support, supply, survey and berthing assist. Lower vessel utilisation during the period allowed an acceleration of scheduled maintenance. Two tugs, Mermaid Commando and Mermaid Chieftan received extensive refits. In both cases the work increased productivity through enhanced bollard pull and consequent earnings.\n\nSafety was given the highest priority through new monitoring systems and awareness programs. Formalised on the job instruction and training courses have also lifted levels of experience and proficiency across the workforce.\n\nThe offshore waters and islands adjacent to Dampier, host in excess of 50% of all exploration and development budgets of Australia's offshore oil and gas industry. The Burrup Peninsular where the Base is located is the intended site of major new oil, gas, petrochemical and industrial mineral processing plants. The Port of Dampier is Australia's largest Port as measured by tonnage, but as identified in the 1997 WA Department of Commerce and Trade report, there remains an urgent need for additional marine support infrastructure. Mermaid is now well advanced in our plan to satisfy those needs and onshore work was announced to start on the 9th October 2000. 
DAMPIER BASE\n\nSince receiving approval in principle for development of the Dampier Base from the Western Australian Minister for the Environment in February 2000, engineering and general design work in connection with the base proceeded at an accelerated pace.\n\nThis work, assisted by technical studies and a re-assessment of an increased demand for services arising out of greater expectations for growth in the sector, has led to improvements and expansion of capacity over earlier plans.\n\nThe Dampier Base will now comprise:-\n\n\n\n·\n\n\n\n·\n\nAn 'all tides' approach channel to a minimum depth of 6 metres\n\nA wharf offering 7.5 metres depth at low tide, featuring a heavy loadout section to accommodate modules of up to 1500 tonnes to onshore projects on the Burrup Peninsular and adjacent mining centres. A subsea pipe reel loading facility will encourage the use of spool ships in the region for deepwater pipelay. On a project by project basis, pipeline protection rock dumping, specialist vessel rig up activities and the like will be facilitated, as will dry and bulk cargo handling, refuelling, watering and all categories of waste reception. The joint Commonwealth and WA State Government initiative to establish an integrated industrial estate at Jervoise Bay (south of Perth) serviced by high wide load corridors from Perth's industrial areas will see the heavy capacity wharf playing a strategic role in major capital works in the Pilbara, leading to significant cost savings.", - "page_start": 11, - "page_end": 11, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "·\n\n·\n\n·\n\n## OPERATIONS REVIEW\n\nA slipway initially capable of receiving vessels up to 2,700 tonnes capacity will handle most of the 60 vessels currently working in the region, a considerable number, but one which will rise over coming years. First class engineering facilities have been planned and highly experienced management recruited. 
Alternative slipways offering comparable capacity are only to be found in Darwin or Fremantle, a sea journey of approximately 1000 miles from this operational region. Australia has emerged as a centre of excellence with respect to vessel repair work, the Dampier facility will both benefit from and protect that valuable reputation.\n\nRehabilitated land for buildings and storage will finally extend over 17 hectares. The major oilfield services company Halliburton, have been attracted to the base as a tenant and a $1.1m purpose built building is being constructed for their use. Negotiations are also proceeding with other groups who recognise the unique advantages of operating from this strategically positioned Base. Rental income and associated revenues such as plant and labour hire will contribute significantly to the overall economics of the facility.\n\nProtected moorings for cyclone shelter will be established inside the breakwater for long term lease to local tug operators. The demand arises from serious vessel and crew safety considerations. The Dampier Port Authority are reluctant to see the continued use of cyclone moorings in the Harbour, not only for safety reasons, but for environmental concerns as well. Oil spills are not acceptable under any circumstances and will be avoided whatever the cost. Tug owners share similar concerns, but in addition they need to remain in a position of readiness for crews and equipment to resume their important functions immediately following a cyclonic event. 
The number of specific purpose spread moorings, detailed on the adjacent plan will total 10 in the first phase of construction, a limit which will be assisted by an ability to remove vessels up to 100 tonnes from the water by wharf crane for tie down on cradles.\n\n\n\nConstruction of the Dampier Base commenced on the 9th October this year, with an expectation that all major elements of the project will be largely completed within 12 months.\n\nThe 'Clough Challenge' Barge Shallow Water Construction Support Barge in the East Spar Field\n\n", - "page_start": 12, - "page_end": 12, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## OPERATIONS REVIEW\n\nTrading for the period commencing 1 July 1999 to 30 June 2000 for Mermaid Marine Australia Ltd ('Company') and its controlled entities, experienced a 43% turnover reduction from last year. The result was almost entirely due to a heavy fall in oil prices, which reached their low of US$10 in February 1999, leading to the lowest level of offshore activity for many years. In September 1999 Mermaid exercised its option to acquire the utility vessel 'Mermaid Achiever' for $3,250,000. Previously the Achiever operated under a bare boat charter.\n\nIn February 2000 Mermaid received approval in principle from the Western Australian Minister for the Environment for the development of a supply and engineering base at Dampier (Dampier Base). Since that time a detailed environmental management system has been produced for final approval and as a guide to daily environmental management and compliance. Refinements to the design have proceeded, together with the preparation of bid packages and negotiations with Banks for project finance.\n\nSubsequent to years end, the subscription of a further $5 million from Mr Mark Bradley and Clough Engineering will see an extremely robust balance sheet, with cash on hand approaching $10 million. 
As construction commences at Dampier, a level of project finance will be arranged providing a comfortable mix of debt and equity and allowing the retention of a significant cash balance.\n\nThe year saw considerable progress with Base activities at Dampier, Broome and Darwin. They are dealt with in detail under following headings.\n\nFINANCIAL\n\nMermaid recorded an after-tax loss for the Period of $207,957. Compared with an after-tax profit for the previous period of $2,454,919. Revenue for the Period was $15,124,774, a decrease of 43% over the previous period. Fixed cost reductions enabled the Company to ride out the market reversal with a minimal loss and positive operating cash before capex of $1.6m. This result, achieved against a major drop in turnover, was possible through a vigorous attack on overheads, which included more beneficial ownership costs, insurance savings, management salary savings, including voluntary sacrifice from certain senior executives in recognition of the tighter conditions. In all the changes contributed approximately $1.5million to the bottom line.\n\nBare boat charters, although useful for the busy times encountered in 1998 exposed the Company to a high level of fixed costs. The vessels were valuable earners and the transfer of the Mermaid Achiever, Mermaid Eagle and Mermaid Reunion to Company ownership has proved to be the right decision for all market conditions. Although there have been no contracts yet let for work of any significance by producers on the North West Shelf, underlying day to day activity has returned. Expressions of interest for major project work have been issued and as an indication of better trading conditions, an unaudited profit of $496,721 has been recorded for the two months to 31st August 2000. 
The trend has continued in September.\n\n7\n\n\n\n## OVERVIEW", - "page_start": 10, - "page_end": 10, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "\n\n## OPERATIONS REVIEW\n\n## BASE EXPANSION WORKS AND ENVIRONMENTAL MANAGEMENT\n\nWork on Dampier\n\nBase expansion commenced on 9 October and will be largely complete by June 2001, involving a capital budget of $13m.\n\nThe principle activities and facility developments involved in the expansion are:\n\n## A. DREDGING\n\nApproximately 700,000 m3 of material is to be dredged in King Bay to form an entrance channel, vessel berths, cyclone moorings and to provide access to the slipway.\n\nThe experience of Woodside constructing their nearby base in 1981 indicates that two types of dredges will be required, a Cutter Suction to remove the soft unconsolidated material (approx.70%) and a Dipper Dredge (barge mounted back-hoe) to remove harder consolidated material.\n\nThe Cutter Suction dredge size will be deliberately modest due to onshore spoil management requirement and environmental considerations.\n\nThe Dipper Dredge will be the largest of its type in the world, and will be an ideal remedial dredging tool using the experience gained from the earlier Woodside project.\n\nThe layout of the Base has been very much driven by the desire to avoid or minimize blasting while fulfilling functional objectives.\n\nThe entrance channel into the Mermaid Base will be 30 m wide and dredged to 6 m below chart datum. The dredge spoil will be pumped ashore and used as fill around the Base.\n\nDredges are expected to be onsite for approximately 7 months commencing mid November.\n\n## B. QUAY WALL ( BERTH 1)\n\nMarket research and customer needs have caused Mermaid to relocate and redesign the main berth to accommodate a wider range of vessels than originally contemplated. 
The berth is now located in deeper water with better vessel access.\n\nThe regional offshore fleet characteristics have been changing in terms of vessel size. There are now four vessels operating in the region with 12,000 to 18,000 hp. When design commenced there were none of this size.\n\nThe depth alongside Berth 1 will be 7.5m. King Bay has a statistical average extreme low tide (MLWS) of 0.9 m, the occurrence of which can be expressed in hours per month. The largest", - "page_start": 13, - "page_end": 13, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "| Expenditure transferred to areas where production has commenced | (107.2) | (36.3) | (38.8) | (33.9) |\n| Expenditure written off during the year | (5.8) | (55.8) | (4.6) | (6.1) |\n| Cost at the end of the year | 375.2 | 422.5 | 80.0 | 107.0 |\n| Total exploration and development expenditure | 3,210.3 | 2,945.3 | 905.8 | 903.6 |", - "page_start": 59, - "page_end": 59, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "Contract number: ECHA/2019/355\n\n## HAVE AGREED\n\n## I.1.1.1.1. Article 1 Subject matter\n\n - 1.1 This specific contract implements framework contract (FWC) No ECHA/2019/355 signed by the parties on [ complete date ] .\n - 1.2 In accordance with the provisions set out in the FWC and in this specific contract and [its][their] annex[es], which form an integral part of it, the contractor must provide the [following services:] [services specified in Annex [ complete ] . ]\n - I.1.1.1.2. Article 2 Entry into force and duration\n - 2.1 This specific contract enters into force on the date on which the last party signs it.\n - 2.2 The provision of the services starts from the date of entry into force of this specific contract.\n - 2.3 The provision of the services must not exceed [ complete ] [ days] [months ] . The parties may extend the duration by written agreement before it elapses and before expiry of the FWC.\n\n## I.1.1.1.3. 
Article 3 Price\n\n - 3.1 The price payable under this specific contract excluding reimbursement of expenses is EUR [ amount in figures and in words ].\n\n[The maximum amount covering all services to be provided under this specific contract including reimbursement of expenses and excluding price revision is EUR [ amount in figures and in words ].]\n\n - 3.2 [Reimbursement of expenses is not applicable to this specific contract.] [Within the maximum amount, up to EUR [ amount in figures and in words ] is earmarked for expenses, which must be reimbursed in accordance with the FWC].\n\n***\n\n## I.1.1.1.4. Article 4 communication details\n\nFor the purpose of this specific contract, communications must be sent to the following addresses:\n\nContracting authority:\n\nEuropean Chemicals Agency\n\n[Directorate [ complete ]]\n\n[Unit [ complete ]]\n\n[ Postcode and city ]\n\nE-mail: [ insert functional mailbox ]", - "page_start": 43, - "page_end": 43, - "source_file": "EN-Draft FWC for services 0142.pdf" - }, - { - "text": "\n\n## OPERATIONS REVIEW\n\n## BROOME SUPPLY BASE\n\nMermaid Marine services base at the Port of Broome (Broome Base) commenced operations on 1 February 2000 when the first ship containing drill pipe for Inpex Browse Ltd arrived from Japan.\n\nAs a result of Mermaid's efforts in establishing the Broome Base, Inpex Browse Ltd., BHP Petroleum and Woodside have used Broome as their base for drilling a total of four (4) offshore wells.\n\nIt is presently expected that at least six (6) exploration wells will be drilled in the area during 2001. The Base now employs as many as ten (10) staff up from the three (3) who commenced in February 2000. 
Excellent management and staff competence are the prime factors, which have delivered the smooth start up and continued success at Broome.\n\nThe Mermaid Broome Supply Base certified Impex, Woodside and BHP Petroleum exploration program during 2000.\n\n\n\nThe base is currently secured on a come and go lease arrangement, located on Port premises adjacent to the wharf gates. Although convenient, with an excellent cyclone proof building, the site has limitations in terms of size and slope. An area more suitable for our long term needs has been optioned from Port authorities and discussions will proceed with our clients this year to determine their precise needs.\n\nThe success of Browse Basin wells drilled this year, strong developments in the energy sector and the intention of operators to base their 2001 operations in Broome, have encouraged the Board to consider further investment to ensure that capability keeps pace with demand and that we leave no reason for competitors to offer more or better.\n\n## DARWIN BASE\n\nThe offshore waters of the Northern Territory, the Zone of Co-Operation (ZOCA) between Australia and Timor, and the Commonwealth Territory of Ashmore and Cartier host approximately 35% of the exploration and development budgets of Australian offshore oil and gas industry.\n\nTwo large projects are under study or implementation in these waters; the Phillips Petroleum Bayu-Undang Project and the Woodside Sunrise Troubador Project.\n\nTwo large petrochemical projects are under study for the Darwin area based upon pipelines from the Timor Sea gas resources of the projects above.\n\nDarwin will within 3 years be the northern terminus of the Australian national rail system with the completion of the Alice Springs Darwin rail link, further expanding its role in Australia's economy.", - "page_start": 21, - "page_end": 21, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## OPERATIONS REVIEW\n\nbreakwater will be an over capping type, which interrupts 
the waves progress, but does not totally protect from wave penetration. These events are manageable and estimated as a once in 50 years possibility.\n\nThe breakwater core will be used as a construction causeway allowing land based equipment to perform the work. The greater part of the breakwater work involves winning the material as opposed to actual construction.\n\n## E. CYCLONE MOORINGS.\n\nThe extent of the cyclone problem in Australia's north and north west was emphasised when Cyclone Tracey struck Darwin in 1974. The most powerful cyclone to cross the Australian coast was Cyclone Vance in 1999, which passed near Dampier, destroying large parts of the towns of Onslow and Exmouth further to the south.\n\nThe problem is acute, particularly in the area between Exmouth and Port Hedland, which suffers cyclones of an intensity and frequency as high as anywhere in the world. The Mermaid Base is typically on cyclone alert three times per season. The season is November to April.\n\nTo date there have been three options available to vessel owners when a cyclone approaches:.\n\n - · Run to sea\n - · Take refuge with crew onboard, on a mooring in the most sheltered location available such as the Dampier Archipelago or the Monte Bello Islands.\n - · Construct a cyclone shelter.\n\nThere are serious personal safety and environmental considerations related to Options 1 and 2 and it is obvious that best practice universally adopted by large responsible Companies can be satisfied in this way.\n\nOnly Woodside at Dampier and BHP at Port Hedand have taken the step of building shelters which provides protection to 12 of the region's 60 vessels and this at very considerable cost.\n\nMermaid has undertaken significant engineering work on the placing of vessels on partially sheltered spread moorings, allowing the vessels to be secured near to shore and the crews demobilized to take care of their families and attend to household cyclone preparation.\n\nMermaid is taking a leadership 
role with a technical solution which will lead to wider adoption as vessel owners and the insurance industry fully value the arrangements. Mermaid will provide\n\n", - "page_start": 15, - "page_end": 15, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "| | YEAR ENDED DECEMBER 31, | YEAR ENDED DECEMBER 31, | YEAR ENDED DECEMBER 31, |\n|---------------------------------------------------------------------------------------------------------------------------------------|---------------------------|---------------------------|---------------------------|\n| (IN THOUSANDS,EXCEPT PER SHARE AMOUNTS) | 2003 | 2002 | 2001 |\n| Net income, as reported | $ 5,057 | $ 2,589 | $ 9,754 |\n| Deduct: Total stock-based employee compensation expense determined under fair value-based methods for all awards, net of tax effects | (526) | (691) | (275) |\n| Pro forma net income | $ 4,531 | $ 1,898 | $ 9,479 |\n| Income per share: | | | |\n| Basic - as reported | $ 2.96 | $ 1.51 | $ 4.80 |\n| Basic - pro forma | $ 2.65 | $ 1.11 | $ 4.66 |\n| Diluted - as reported | $ 2.75 | $ 1.39 | $ 4.30 |\n| Diluted - pro forma | $ 2.46 | $ 1.02 | $ 4.17 |\n\n## NEW ACCOUNTING PRONOUNCEMENTS\n\nIn December 2003, the Financial Accounting Standards Board issued a revised SFAS No. 132, 'Employers' Disclosures about Pensions and Other Postretirement Benefits.' SFAS No. 132 (as revised) revises employers' disclosures about pension plans and other postretirement benefit plans. It does not change the measurement or recognition of those plans required by SFAS Nos. 87, 88 and 106. SFAS No. 132 (as revised) requires additional disclosures to those in the original SFAS No. 132 and it also amends APB Opinion 28, 'Interim Financial Reporting,' to require certain disclosures about pension and other postretirement benefit plans in interim financial statements. SFAS No. 132 (as revised) is generally effective for financial statements with fiscal years ending after December 15, 2003. 
The Company has revised its disclosures in Note 11 to conform to this new pronouncement.\n\n## 2 GOODWILL AND INTANGIBLE ASSETS\n\n", - "page_start": 15, - "page_end": 15, - "source_file": "NASDAQ_ATRI_2003.pdf" - }, - { - "text": "## Management's Discussion and Analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\n## MHCs - Capital Spend\n\nA summary of the capital spend on the MHC segment is included below:\n\nFor the years ended December 31,\n\n| | 2013 | 2012 | % change |\n|--------------------------------------|--------|--------|------------|\n| Water & sewer upgrades | $2,212 | $1,812 | 22.1% |\n| roads and paving | 255 | 421 | (39.4)% |\n| equipment | 21 | 36 | (41.7)% |\n| other | 465 | 527 | (11.8)% |\n| Site expansion and land improvements | 552 | 549 | 0.5% |\n| Total capital spend - MHCs | $3,505 | $3,345 | 4.8% |\n| average number of units outstanding | 7,207 | 8,251 | (12.7)% |\n| capital spend per unit | $486 | $405 | 20.0% |\n\nManagement expects to spend between $300 and $400 in capital per MHC site on an annual basis. As with the apartment portfolio, a portion of the MHC capital is considered maintenance capital and a portion is value enhancing. Management estimates that $100 per unit is maintenance capital, including costs to support the existing infrastructure, and the remaining amount increases the value of the properties, with improved roadways, ability to accommodate future expansion, and community enhancements, such as the addition of playgrounds. The cost of most capital projects will be recovered through above guideline increases in the provinces with rent control, leading to increased NOI for the investment.\n\nFor the year ended December 31, 2013, Killam spent $2.2 million on water and sewer upgrades, an increase of 22.1% over 2012 due to the installation of several new water systems and upgrades to existing water and sewer infrastructure. 
This capital work fluctuates from year-to-year with only $1.8 million invested in 2012 but $3.1 million in 2011. the high water upgrade costs in 2013 resulted in the per unit mHc spend being above Killam's expectation of $300 - $400 per year.\n\nAs with the apartment portfolio, the timing of capital spending changes based on requirements at each community. Killam expects to invest $1 million to $2 million during 2014 on capital improvements across the MHC portfolio.\n\n## Liquidity and Capital Resources\n\nThe Company's sources of capital are cash generated from operating activities, credit facilities, mortgage financing and refinancing, and equity and debt issuances. The Company's primary use of capital includes property acquisitions and developments, major property improvements, recurring property maintenance, debt principal and interest payments, and payment of dividends. The Company anticipates meeting all current and future obligations with current cash and cash equivalents, cash flow generated from operations and conventional mortgage refinancing and that the Company will be able to obtain financing on reasonable terms.\n\nKillam's ability to grow through acquisitions and development will be dependent on the ability to access mortgage debt, construction financing and to raise equity in the capital markets. Killam had cash on hand of $27.7 million at December 31, 2013, primarily as a result of the net proceeds of $42.6 million related to the sale of the ten MHC properties in the fourth quarter of 2013. Killam utilized part of the sale proceeds to retire a $10 million vendor take-back ('VTB') loan and acquire additional properties, and expects to redeploy the remaining funds during the first quarter of 2014. Based on 60% debt on acquisitions, the Company expects to complete an additional $60 million in accretive apartment acquisitions. 
The Company also has $139.3 million in debt maturing during 2014 and expects to generate approximately $50 million in surplus cash to be used for its 2014 capital program and to fund additional acquisitions throughout the year.", - "page_start": 51, - "page_end": 51, - "source_file": "TSX_KMP_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_MRM_2000.pdf", - "query": "When did Mermaid Marine Service Base in the Port of Broome start?", - "target_page": 22, - "target_passage": "1 February 2000", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "## OPERATIONS REVIEW\n\nDarwin is serviced by three marine infrastructure elements.\n\n - a. A public port adjacent to the main business centre, which is destined to be redeveloped as a cruise ship and tourism precinct .\n - b. A group of freehold water front properties on Frances Bay near to the main business center.\n - c. A recently commissioned public port and industrial estate at East Arm some 25 km from the main business district.\n\nDarwin already has an abundance of shore based logistics service providers who operate from onshore industrial estates through publicly owned facilities.\n\nThe Northern Territory Government has sponsored a study to determine the marine infrastructure deficits of the Darwin area. Mermaid has contributed to the study and is monitoring the subsequent planning processes.\n\nRegardless of industry trends, Mermaid has a need for a Darwin Base to service and care for Mermaid vessels working in the area. Too often vessels have been demobilised to Dampier at the conclusion of a contract then being required to return to Darwin within days or weeks for another assignment.\n\nMermaid has decided that needs and opportunities in the north of Australia can be best served by entering a co-operative arrangement with an established Darwin Company. 
Agreement has therefore been reached with Perkins Shipping Group, who are one of the freehold land owners on Frances Bay.\n\nPerkins Shipping, established in the 1950s is the major coastal shipping service provider in Australia's north, linking Darwin to mining and aboriginal committees from the Kimberly to Gulf of Carpenteria. Additionally Perkins operate services to East Timor, mining operations in Indonesia, as well as Singapore and East Malaysia. The Perkins and Mermaid businesses are different, but complementary, offering benefits to both. The arrangement with Perkins will give Mermaid well placed office facilities, open storage and waterfront access.\n\nOur intention is that Darwin become the third and final mainland entreport to service the Northwestern offshore oil and gas industry together with our other strategically placed facilities at Dampier and Broome.\n\n", - "page_start": 22, - "page_end": 22, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## CHAIRMAN'S REPORT\n\nDirector of the Clough Group and a highly experienced and talented executive. Richard has appointed an alternate director, Mr Chris Sutherland, a senior Clough Executive, with engineering qualifications and associated business skills to assist him.\n\nCaptain Jim Carver, Mermaid's founder continues to play a significant role in Mermaid's operations, paying particular attention to our business at sea. Under 20 years of Jim's leadership, Mermaid developed an enviable reputation as a 'can do' company, and in our drive for new engineering expertise and professionalism, we have no intention of allowing that attitude to be lost.\n\nLast year we identified Broome as our next strategic position. No oil and gas work had been supported out of Broome for seventeen years and with the valuable cooperation and assistance of the Broome Port Authority, we secured Inpex, the large Japanese resource company as our first client. 
The base was then established early this year.\n\nA new focus has developed in the Browse Basin and it is pleasing to report that after only seven months operation, our Base is profitable, housing Inpex, BHP, Woodside and Sedco in support of their current drilling programs. All the holes drilled from the Broome Base have been designated as commercial finds by the explorers and the very major increase in the reserves at Brecknock, Woodside's permit 500 kilometres north of Broome creates optimism for future production based in the Broome area.\n\nDarwin was next on our list, enabling involvement in Timor Sea oil and gas activity. The Bayu Undan project operated by Phillips, is well advanced and will impact Darwin's offshore activity quite soon. Pursuing the formula for a strategic sea/land interface, we reached agreement with Perkins Shipping in Darwin, to set up an office at their Frances Drive facility. Perkins Shipping is synonymous with Darwin's history. Set up by V.B. Perkins in the late 40's, it has grown to significant size, operating its ships across the top of Australia and into South East Asia. There are many synergies which Mermaid shares with Perkins and we look forward to developing our Darwin business in close association with that fine old Company.\n\nOur ambitions for the support of the oil and gas industry now go beyond bases and vessels. Early in the current financial year, Mermaid acquired 50% of the OIS MOC Joint Venture Pty Ltd, to be paid for by the issue of 800,000 Mermaid shares. OIS MOC owns the highly successful labour hire business operated by Kevin Ponga and Rick De Franck. Kevin Ponga is now General Manager of Mermaid Labour & Management Pty Limited and Mr De Franck becomes a Director. 
With their reputation and talent added to Mermaid's experienced team, this labour hire company has become a significant force and can be expected to be in the final when major labour hire contracts are let.\n\n", - "page_start": 8, - "page_end": 8, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "\n\n## OPERATIONS REVIEW\n\n## BROOME SUPPLY BASE\n\nMermaid Marine services base at the Port of Broome (Broome Base) commenced operations on 1 February 2000 when the first ship containing drill pipe for Inpex Browse Ltd arrived from Japan.\n\nAs a result of Mermaid's efforts in establishing the Broome Base, Inpex Browse Ltd., BHP Petroleum and Woodside have used Broome as their base for drilling a total of four (4) offshore wells.\n\nIt is presently expected that at least six (6) exploration wells will be drilled in the area during 2001. The Base now employs as many as ten (10) staff up from the three (3) who commenced in February 2000. Excellent management and staff competence are the prime factors, which have delivered the smooth start up and continued success at Broome.\n\nThe Mermaid Broome Supply Base certified Impex, Woodside and BHP Petroleum exploration program during 2000.\n\n\n\nThe base is currently secured on a come and go lease arrangement, located on Port premises adjacent to the wharf gates. Although convenient, with an excellent cyclone proof building, the site has limitations in terms of size and slope. 
An area more suitable for our long term needs has been optioned from Port authorities and discussions will proceed with our clients this year to determine their precise needs.\n\nThe success of Browse Basin wells drilled this year, strong developments in the energy sector and the intention of operators to base their 2001 operations in Broome, have encouraged the Board to consider further investment to ensure that capability keeps pace with demand and that we leave no reason for competitors to offer more or better.\n\n## DARWIN BASE\n\nThe offshore waters of the Northern Territory, the Zone of Co-Operation (ZOCA) between Australia and Timor, and the Commonwealth Territory of Ashmore and Cartier host approximately 35% of the exploration and development budgets of Australian offshore oil and gas industry.\n\nTwo large projects are under study or implementation in these waters; the Phillips Petroleum Bayu-Undang Project and the Woodside Sunrise Troubador Project.\n\nTwo large petrochemical projects are under study for the Darwin area based upon pipelines from the Timor Sea gas resources of the projects above.\n\nDarwin will within 3 years be the northern terminus of the Australian national rail system with the completion of the Alice Springs Darwin rail link, further expanding its role in Australia's economy.", - "page_start": 21, - "page_end": 21, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## OPERATIONS REVIEW\n\nThe foreshore of King Bay will be redeveloped as part of the Mermaid Marine Dampier Base Expansion works.\n\n\n\nleased facilities to seven third party vessels and protection for three of our own vessels using this technique by the cyclone season in 2001.\n\nAs more vessels seek protection, additional breakwaters can be constructed and sea room dredged. 
Each mooring involves a pattern of pin piles drilled into the granite sea floor with four vessel specific mooring lines secured to special attachment points on the vessel.\n\nMany smaller vessels including Mermaid's will be lifted from the water and tied down on purpose built cradles for cyclones.\n\n## F. ONSHORE LAND RECLAMATION.\n\nLike our neighbours, much of the Mermaid site is below the prescribed storm surge level, or needs some degree of earthworks to maximize its value. Currently 8 of the 17 ha of the area is suitable for development in its present state.\n\nThe spoil produced from dredging will allow Mermaid to achieve full utilization of the site at a fraction of the cost of importing fill from elsewhere.\n\nConsiderable effort has gone into anticipating the future direction of the Base. Planning services such as traffic flows, land allocation and security, as well as fulfilling the many and complex regulatory requirements related to health, safety, quarantine, environmental management, dust, dangerous goods and hazchem materials have been the subject of considerable study prior to this implementation stage.\n\n13\n\n", - "page_start": 16, - "page_end": 16, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "\n\n## OPERATIONS REVIEW\n\n## SEAGOING OPERATIONS\n\nMermaid operates a fleet of fifteen (15) tugs, workboats and barges, undertaking all forms of offshore activity including exploration support, supply, survey and berthing assist. Lower vessel utilisation during the period allowed an acceleration of scheduled maintenance. Two tugs, Mermaid Commando and Mermaid Chieftan received extensive refits. In both cases the work increased productivity through enhanced bollard pull and consequent earnings.\n\nSafety was given the highest priority through new monitoring systems and awareness programs. 
Formalised on the job instruction and training courses have also lifted levels of experience and proficiency across the workforce.\n\nThe offshore waters and islands adjacent to Dampier, host in excess of 50% of all exploration and development budgets of Australia's offshore oil and gas industry. The Burrup Peninsular where the Base is located is the intended site of major new oil, gas, petrochemical and industrial mineral processing plants. The Port of Dampier is Australia's largest Port as measured by tonnage, but as identified in the 1997 WA Department of Commerce and Trade report, there remains an urgent need for additional marine support infrastructure. Mermaid is now well advanced in our plan to satisfy those needs and onshore work was announced to start on the 9th October 2000. DAMPIER BASE\n\nSince receiving approval in principle for development of the Dampier Base from the Western Australian Minister for the Environment in February 2000, engineering and general design work in connection with the base proceeded at an accelerated pace.\n\nThis work, assisted by technical studies and a re-assessment of an increased demand for services arising out of greater expectations for growth in the sector, has led to improvements and expansion of capacity over earlier plans.\n\nThe Dampier Base will now comprise:-\n\n\n\n·\n\n\n\n·\n\nAn 'all tides' approach channel to a minimum depth of 6 metres\n\nA wharf offering 7.5 metres depth at low tide, featuring a heavy loadout section to accommodate modules of up to 1500 tonnes to onshore projects on the Burrup Peninsular and adjacent mining centres. A subsea pipe reel loading facility will encourage the use of spool ships in the region for deepwater pipelay. On a project by project basis, pipeline protection rock dumping, specialist vessel rig up activities and the like will be facilitated, as will dry and bulk cargo handling, refuelling, watering and all categories of waste reception. 
The joint Commonwealth and WA State Government initiative to establish an integrated industrial estate at Jervoise Bay (south of Perth) serviced by high wide load corridors from Perth's industrial areas will see the heavy capacity wharf playing a strategic role in major capital works in the Pilbara, leading to significant cost savings.", - "page_start": 11, - "page_end": 11, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## OPERATIONS REVIEW\n\n## MERMAID LABOUR AND MANAGEMENT LIMITED\n\nSAFETY\n\n\n\nDuring 2000 Mermaid Marine formed a new business unit Mermaid Labour and Management Limited. The focus of this unit will be labour supply and industrial relations management to the marine, offshore construction industry and onshore resources projects in the NW of Australia. The Directors and Management of the new entity are very experienced, well known and regarded by the industry in general. The company has high expectations for Mermaid Labour and Management Limited.\n\nMermaid remains dedicated to ensuring a safe environment in all areas where we operate or have responsibility.\n\nIn April 2000, following the regular six monthly Quality Assurance audit, the Company's accreditation under AS/NZS/ISO 9002 was reconfirmed. Mermaid's quality assurance and compliance team continues with a continuous day to day effort to improve our health, safety and environmental performance. Stringent charterer requirements, which are a pre requisite of increased vessel usage, must be met to the letter and are the subject of regular and demanding audits. Although time consuming and expensive, we are grateful to certain of the large producers, who while demanding the highest levels of compliance, have also been prepared to give their time, sharing their safety expertise with us and in that way assisting in the very major advances our company has made in this all important area.\n\nAt the time of writing this report, Mermaid had accumulated 348 days without a Lost Time Injury. 
A fine achievement and a continuing record.", - "page_start": 23, - "page_end": 23, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "\n\n## OPERATIONS REVIEW\n\n## BASE EXPANSION WORKS AND ENVIRONMENTAL MANAGEMENT\n\nWork on Dampier\n\nBase expansion commenced on 9 October and will be largely complete by June 2001, involving a capital budget of $13m.\n\nThe principle activities and facility developments involved in the expansion are:\n\n## A. DREDGING\n\nApproximately 700,000 m3 of material is to be dredged in King Bay to form an entrance channel, vessel berths, cyclone moorings and to provide access to the slipway.\n\nThe experience of Woodside constructing their nearby base in 1981 indicates that two types of dredges will be required, a Cutter Suction to remove the soft unconsolidated material (approx.70%) and a Dipper Dredge (barge mounted back-hoe) to remove harder consolidated material.\n\nThe Cutter Suction dredge size will be deliberately modest due to onshore spoil management requirement and environmental considerations.\n\nThe Dipper Dredge will be the largest of its type in the world, and will be an ideal remedial dredging tool using the experience gained from the earlier Woodside project.\n\nThe layout of the Base has been very much driven by the desire to avoid or minimize blasting while fulfilling functional objectives.\n\nThe entrance channel into the Mermaid Base will be 30 m wide and dredged to 6 m below chart datum. The dredge spoil will be pumped ashore and used as fill around the Base.\n\nDredges are expected to be onsite for approximately 7 months commencing mid November.\n\n## B. QUAY WALL ( BERTH 1)\n\nMarket research and customer needs have caused Mermaid to relocate and redesign the main berth to accommodate a wider range of vessels than originally contemplated. The berth is now located in deeper water with better vessel access.\n\nThe regional offshore fleet characteristics have been changing in terms of vessel size. 
There are now four vessels operating in the region with 12,000 to 18,000 hp. When design commenced there were none of this size.\n\nThe depth alongside Berth 1 will be 7.5m. King Bay has a statistical average extreme low tide (MLWS) of 0.9 m, the occurrence of which can be expressed in hours per month. The largest", - "page_start": 13, - "page_end": 13, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## OPERATIONS REVIEW\n\nvessels engaged in routine offshore logistics tasks operate fully laden with 7.4 m draft which means there will be very few occasions when the largest vessels in the industry have to make a tide dependent entry or departure through the Mermaid channel. Further the Mermaid Base will not suffer operational disadvantages experienced by the adjacent Woodshed Base or nearby Damper Public Wharf in terms of entry and departure draft restrictions.\n\nThe function and purpose of Berth 1 will be:\n\n- · To service the larger offshore supply boat market on a fast turnaround basis.\n- · To receive and offload very heavy ro/ro cargoes up to 1500 tonne delivered by ocean going heavy lift ships and barges.\n- · To handle inbound and outbound cargoes related to major offshore pipe lay projects.\n- · To receive and efficiently load reel ships used for deep water small diameter pipelay.\n\nThe wharf will be an earth filled structure with steel sheet pile faces and concrete capping beam surround. Most of the construction will be performed using land based equipment working from the core of the earth filled system.\n\nMuch effort has gone into a design concept which allows very large cranes (>100 tonne capacity) to operate without restriction on the wharf.\n\nThe separation between Berth 1 and Berth 2 is such to allow Road Train Triples (the max allowable) to turn unassisted on the wharf.\n\n## C. 
QUAY WALL (BERTH 2)\n\nThe inner berth, Berth 2 has a minimum depth alongside of 5.0 m allowing unrestricted operation of all the Mermaid fleet, and the majority of other vessels servicing the offshore oil/gas industry and mineral ports. This berth will offer excellent weather protection for small and medium size vessels.\n\n## D. BREAKWATER.\n\nThe rubble mount type breakwater will be an extension of the wharf, constructed using core and armor rock largely won from excavations on the Base. The excavations created will become depositories for dredge spoil.\n\nBecause the storm surge associated with major cyclones can be up to 7 m above chart datum (low tide), before imposing the wave height, a fully protective breakwater is not practical. The\n\n", - "page_start": 14, - "page_end": 14, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "## G. SLIPWAY.\n\nAustralia, and particularly the north west is impoverished in terms of infrastructure to service our marine industries. Some of this has been due to a historical link with our recent industrial past. This is now behind us, and Australia has now become a centre of excellence with respect to both new building and ship repair, particularly for high tech and specialty vessels.\n\nThe Mermaid slipway will be the third such facility on the western half of the continent , with others located at Fremantle and Darwin.\n\nThe slipway will be a repair only facility, no new building is contemplated. Its capacity is structured to meet the regional steel mono-hulled fleet requirements of some 60 vessels between 200 and 4000 tonne displacement. Fishing industry, marine tourist industry, large private pleasure craft , naval, scientific and law enforcement vessels are a secondary target.\n\nThe slipway is designed to initially accept vessels up to 2,700 tonnes, a restriction which is set by our current inventory of cradles used to support vessel on the slip. 
The cradles will be progressively upgraded to ultimately handle 4000 tonne. A later expansion will allow 500 tonne vessels to be side slipped, thereby increasing capacity.\n\nThe slipway location and orientation on the Base has been chosen to maximize the cost and load bearing benefits of having a very high strength granite bedrock as the best possible foundation.\n\nThe Mermaid slipway will rank second in terms of capacity on the western half of the continent. Tenix, Fremantle 8,000 tonne, Mermaid Dampier 2,700 tonne rising to 4,000 tonne, Darwin Ship Repair 2,500 tonne. The nearest other facilities are Singapore, Adelaide, Port Moresby or Cairns.\n\n\n\nMermaid has purchased a very large cyclone rated industrial building frame which will be sited beside the slipway and tenanted by Mermaid engineering and companies which will provide ancillary services related to ship repair.\n\nThe Northwest Shelf is a world scale offshore oil and gas exploration province.\n\n", - "page_start": 20, - "page_end": 20, - "source_file": "ASX_MRM_2000.pdf" - }, - { - "text": "\n\n## NOTES TO AND FORMING PART OF THE FINANCIAL STATEMENTS FOR THE FINANCIAL YEAR ENDED 30 JUNE 2000\n\n| Note | Consolidated 1999 | Company | Company | |\n|------------------------------------------------------|---------------------|--------------------------|---------------|----|\n| | 2000 $ $ | 2000 $ | 1999 $ | |\n| INVESTMENTS | | | | |\n| At cost: | | | | |\n| Unlisted investment - shares controlled in entities | - | - 2,444,611 | 2,444,611 | |\n| | Country of | Ownership Interest 2000 | Ownership | |\n| | Incorporation | | Interest 1999 | |\n| Parent Entity | | | | |\n| Mermaid Marine Australia Limited | Australia | | | |\n| Controlled Entities | | | | |\n| Mermaid Marine Group Pty Ltd* | Australia | 100 | 100 | |\n| Mermaid Marine Vessel Operations Pty Ltd* | Australia | 100 | 100 | |\n| Mermaid Marine Pty Ltd* | Australia | 100 | 100 | |\n| Mermaid Marine Offshore Pty Ltd* | Australia | 100 | 100 | 
|\n| Mermaid Marine Charters Pty Ltd* | Australia | 100 | 100 | |\n| Mermaid Supply Base Pty Ltd* | Australia | 100 | 100 | |\n| Dampier Stevedoring Pty Ltd* | Australia | 100 | 100 | |\n| Mermaid Manning and Management Pty Ltd* | Australia | 100 | 100 | |", - "page_start": 49, - "page_end": 49, - "source_file": "ASX_MRM_2000.pdf" - } - ] - }, - { - "references": { - "source_file": "Word QS.pdf", - "query": "How do I create a new document in Word?", - "target_page": 2, - "target_passage": "Just select File > New", - "chunk_present": { - "presence": true, - "index": 8 - } - }, - "top_chunk": [ - { - "text": "## Word\n\n## Get writing suggestions\n\nWith Editor , bring out your best writing. Editor helps you bring out your best writing by giving you intelligent writing suggestions. It also calculates an Editor Score based on the number and types of suggestions you have yet to address. Select an underlined word or phrase to accept or ignore a suggestion.\n\n\n\n## Review and track changes\n\nWhether you just want to check spelling, keep your word count in check, or fully collaborate with other people, the Review tab has essential commands to track, discuss, and manage all of the changes made to your documents.\n\n\n\n\n\n## View who else is typing\n\nCo-authoring Word documents that are shared on OneDrive or on a SharePoint site happens in real-time, which means you can easily view where other authors are making changes in the same document that you're currently working in.\n\n\n\n## Format with styles\n\nStyles lets you create, apply, and review the formatting styles in your current document. 
To open it, select the Home tab, and then select the small arrow in the lower right corner of the Styles gallery.", - "page_start": 2, - "page_end": 2, - "source_file": "Word QS.pdf" - }, - { - "text": "## Count on Word to count your words\n\nTry it: Hit return after this line and type some words.\n\nThe status bar at the bottom of the window keeps a running count of the number of words in the document.\n\n\n\n## Save this for later, access it anywhere\n\nWhen you save this document in OneDrive, you'll be able to open it anywhere: on your computer, tablet, or phone. Your changes will be saved automatically.\n\nTry it: Select File > Save As , and then select OneDrive and give this document a name.\n\n\n\nIf you sign in to Office 365 on another device, this document will be in your list of recent files. You can pick up where you left off… even if you left the document open on the computer you're using now.", - "page_start": 1, - "page_end": 1, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "## Welcome to Word\n\n## Instructions you can edit, share, and print\n\n\n\nUnlike old-school user guides, this doc is yours to tailor exactly for your needs. Reading it will teach you some basics about Word, but this document isn't just for reading. It's for editing too, so you can learn by doing.\n\nFor practice using Word features, watch for Try it text in red throughout this document.\n\n\n\nTime saver: If you've only got a minute and you want to see how this works, watch this Video: Welcome to Word.\n\n## Write eloquently, with a little help\n\nWord automatically checks spelling and grammar, and marks misspelled words with a red squiggly underline. Grammatical glitches get a blue double underline.\n\nTry it: Put your cursor at the end of this paragraph, and hit Enter to start a new paragraph. 
Write a sentence with some spelling or grammatical mistakes, and press Enter to finish the paragraph.\n\nRight-click the text that's marked with underlines, or Press F7. Choose a suggestion to correct the mistakes.", - "page_start": 0, - "page_end": 0, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "## Share and collaborate\n\nWith this document saved in OneDrive, you can share it with others. They don't even need Word to open it.\n\nTry it: Select Share , and send a link to this document. (keyboard shortcut - Alt+F+Z or Alt+Z+S)\n\nYou can send the link by typing someone's email address or by copying the link and pasting it into a message or chat. If you want them to read the document but not edit it, set their permission to view-only.\n\nIf they don't have Word, the document will open in their web browser, in Word Online.\n\n## Add visuals with pictures from the web\n\n\n\nWord works with Bing to give you access to thousands of pictures you can use in your documents.\n\nTry it: Hit enter after this line to make a blank line:\n\n- 1. With your cursor in the blank space above, go to the Insert tab, select Online Pictures , and then search for something, like puppy clip art .\n- 2. Select the picture you want, and select Insert .", - "page_start": 2, - "page_end": 2, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "## Word PDF Accessibility\n\nArticle • 11/26/2024\n\n## Summary\n\nAuthors can ensure that their Word documents are accessible to people with disabilities even when distributing them in PDF format using the following approach:\n\n - 1. First, they should follow the practices in Make your Word documents accessible to people with disabilities .\n - 2. Next, they should follow the steps in Create accessible PDFs to preserve the accessibility of the document in PDF format.\n\nThis article provides details about the information Word includes in the PDF to make it accessible.\n\n - 1. 
PDF/UA tags are included to provide semantic information about the content in the document.\n - 2. Decorative content does not need to be read, so it is marked as in the Content Tree in the PDF and no PDF/UA tags are included.\n - 3. Bookmarks for each section and slide are included to make it easier to navigate the content.\n\n## PDF/UA Tags\n\nノ\n\nExpand table\n\n\n\n| Type of content | Tags |\n|-------------------|------------|\n| Document | |\n| | |\n| Title | |\n| | |\n| H1, H2, etc. | |", - "page_start": 55, - "page_end": 55, - "source_file": "office-pdf.pdf" - }, - { - "text": "## Word\n\n## Find whatever you need\n\nType a keyword or phrase into the Search box to quickly find the Word features and ribbon commands you're looking for, to discover Help content, or to get more information online .\n\n<!-- image -->\n\n<!-- image -->\n\n## Get other Quick Start guides\n\nTo download our free Quick Start Guides for your other favorite apps, go to https://go.microsoft.com/fwlink/?linkid=2008317.\n\n<!-- image -->\n\n## Next steps with Word\n\n## See what's new in Office\n\nExplore the new and improved features in Word and the other Office apps. Visit https://go.microsoft.com/fwlink/?linkid=871117 for more information.\n\n## Get free training, tutorials, and videos for Office\n\nReady to dig deeper into the capabilities that Word has to offer? Visit https://go.microsoft.com/fwlink/?linkid=871123 to explore our free training options.\n\n## Send us your feedback\n\nLove Word? Got an idea for improvement to share with us? On the File menu, select Feedback and then follow the prompts to send your suggestions directly to the Word product team. Thank you!\n\n## Share your work with others\n\nTo invite others to view or edit your documents, select the Share button in the top right corner of the app window. Then, you can choose to share a link to your document or send invitations directly to specific people. 
If someone doesn't have Word, they can use the free Word for the Web app to edit and comment.", - "page_start": 3, - "page_end": 3, - "source_file": "Word QS.pdf" - }, - { - "text": "## Get help with Word\n\n<!-- image -->\n\nThe Tell me search box takes you straight to commands and Help in Word.\n\n## Try it: Get help:\n\n - 1. Go to Tell me what you want to do at the top of the window.\n - 2. Type what you want to do.\n\nFor example, type:\n\n -  Add watermark to quickly get to the watermark command.\n -  Help to go to Word help.\n -  Training to see the list of Word training courses.\n -  What's new for a list of the most recent updates to Word\n\n## Let us know what you think\n\nPlease give us feedback on this template, so we can provide content that's truly useful and helpful. Thanks!\n\n<!-- image -->", - "page_start": 7, - "page_end": 7, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "| , | Colon | Before co-ordinating conjunctions when they join two independent clauses. | It was raining outside, but they decided to go swimming anyway. |\n| | Colon | After a dependent clause at the begin- ning of a sentence | When I arrived at work, I realised that I had left my office key at home. |\n| | Colon | When addressing someone or some- thing directly. | 'John, will you please print this document before the meeting?' |\n| : | | A colon is used to introduce the second part of a sentence when the second part explains or expands upon the first part. | There is only one way to fix this: we have to start over. The following items must be included with your CV: a cover letter, a copy of your ID, and a copy of your Matric |\n| ' | Apostrophe | An apostrophe is used to indicate pos- session, or to indicate that letters have been omitted from a word. | Possession: The director's office was locked. Omission: He wasn't there. 
('was not' becomes 'wasn't') |\n| ' | Inverted commas | Inverted commas, or quotation marks, are used to indicate direct speech, or to indicate that text is being quoted from another source. | 'Where have you been?' he asked. |\n| - | Hyphen | A hyphen is used to join words, or to join words and letters/numbers. | Forming compound adjectives: rose-coloured, prize-win - ner, hand- picked Adding prefixes: pre-release, pre- production, pre-qualify, non-verbal Joining words with letters/numbers: pre-2014, X-ray, C-section. |\n| - | En dash | En dashes are used to replace the words 'to' or 'through'. | The company's financial year runs from March - February. |", - "page_start": 12, - "page_end": 12, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## Word\n\n## Create something\n\nBegin with a Blank document to get right to work. Or start with a template to save yourself time and steps. Just select File > New , and then select or search for the template you want.\n\n<!-- image -->\n\n<!-- image -->\n\n## Access files anywhere\n\nNeed to work on the go and across different devices? Click File > Account to sign in with your Microsoft account and access your recently used files anywhere, on any device, through seamless integration between Office, OneDrive, OneDrive for Business, and SharePoint.\n\n<!-- image -->\n\n## Discover related options\n\nWhen you select objects in your document, options related to your selection will appear. For example, selecting a table displays the Table Design and Layout tabs, which offer additional options.\n\n<!-- image -->\n\n## Find recent files\n\nWhether you only work with files stored on your PC's local hard drive or you store files in multiple shared locations, selecting File > Open takes you to your recently used documents and any files that you may have pinned to your list.", - "page_start": 1, - "page_end": 1, - "source_file": "Word QS.pdf" - }, - { - "text": "Important: Use care when you enable this feature. 
The Display Document Location function can result in degraded search performance because the storage location information for every document that is returned must be retrieved from the Content Manager OnDemand object server.\n\n## Display Document Hold\n\nThe Display Document Hold setting (Figure 3-7 on page 54) determines whether the client shows a column that indicates whether a hold is placed on the document. For more information, see Chapter 16, 'Enhanced Retention Management' on page 353.\n\n## Note Search\n\nIf the annotation parameter (annotation flags in the document database table) in the application group is set to 'No', the Note Search parameter (Figure 3-7 on page 54) determines when Content Manager OnDemand searches the database for annotations and notifies the user of the annotations. The following options are possible:\n\n - /SM590000 Hit list: When a folder query is run, Content Manager OnDemand searches for annotations, and a note icon, which contains an annotation, is displayed next to each document in the resulting hit list. The hit list option has a direct performance impact on the generation of the document list.\n - /SM590000 Retrieve: Content Manager OnDemand searches for annotations when the user selects a document for display. This option is the default and preferred option.\n - /SM590000 Note: Content Manager OnDemand searches for annotations when the user selects the note command when the user views a displayed document.\n\nAs a preferred practice, set the annotation parameter in the application group advanced settings to ' Yes '. In this case, an annotation flag is set in the database when a user adds an annotation to a document. 
When the document hit list is displayed, a note icon is displayed next to the documents for which an annotation exists.\n\n## Full Report Browse\n\nIn the Permissions tab of the folder definition window (Figure 3-8 on page 56), the Full Report Browse option allows a user of the Content Manager OnDemand Windows Client to select a document, retrieve that document, and view the entire report to which the document belongs.", - "page_start": 78, - "page_end": 78, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "Word QS.pdf", - "query": "Where can I find other Microsoft quick start guides?", - "target_page": 4, - "target_passage": "To download our free Quick Start Guides for your other favorite apps, go to https://go.microsoft.com/fwlink/?linkid=2008317.", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## Word\n\n## Quick Start Guide\n\nNew to Word? Use this guide to learn the basics.\n\n<!-- image -->\n\n<!-- image -->", - "page_start": 0, - "page_end": 0, - "source_file": "Word QS.pdf" - }, - { - "text": "## Word\n\n## Find whatever you need\n\nType a keyword or phrase into the Search box to quickly find the Word features and ribbon commands you're looking for, to discover Help content, or to get more information online .\n\n<!-- image -->\n\n<!-- image -->\n\n## Get other Quick Start guides\n\nTo download our free Quick Start Guides for your other favorite apps, go to https://go.microsoft.com/fwlink/?linkid=2008317.\n\n<!-- image -->\n\n## Next steps with Word\n\n## See what's new in Office\n\nExplore the new and improved features in Word and the other Office apps. Visit https://go.microsoft.com/fwlink/?linkid=871117 for more information.\n\n## Get free training, tutorials, and videos for Office\n\nReady to dig deeper into the capabilities that Word has to offer? 
Visit https://go.microsoft.com/fwlink/?linkid=871123 to explore our free training options.\n\n## Send us your feedback\n\nLove Word? Got an idea for improvement to share with us? On the File menu, select Feedback and then follow the prompts to send your suggestions directly to the Word product team. Thank you!\n\n## Share your work with others\n\nTo invite others to view or edit your documents, select the Share button in the top right corner of the app window. Then, you can choose to share a link to your document or send invitations directly to specific people. If someone doesn't have Word, they can use the free Word for the Web app to edit and comment.", - "page_start": 3, - "page_end": 3, - "source_file": "Word QS.pdf" - }, - { - "text": "<!-- image -->\n\n## Welcome to Microsoft Teams\n\nMicrosoft Teams is the app that brings your conversations, meetings, and files together in one place. This guide will help you get started with Teams, learn the basics, get tips to practice on your own, and discover ways to engage your team.\n\n## Set up\n\n## Explore\n\n## Practice\n\nDownload the app for desktop and mobile to access Teams with the best performance anywhere you go.\n\nOnce you sign in, connect with your team in chat, channels, calls, and meetings.\n\nTry out the different features as you learn about them in this guide. You'll get the basics in no time!\n\n<!-- image -->", - "page_start": 0, - "page_end": 0, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "## UNDERSTANDING QUICK ANALYSIS\n\nThe Quick Analysis tools were developed in response to the fact that users weren't using or even aware of the more powerful analytical tools found in Excel. So Excel decided to combine\n\n## The Quick Analysis Button\n\nThe Quick Analysis button appears when a range is selected in a worksheet. 
Clicking on the button displays the Quick Analysis gallery which contains quick analysis tools that can be applied to the selected data.\n\nThe tools have been organised along tabs at the top -\n\nFORMATTING , CHARTS , TOTALS , TABLES , and SPARKLINES .\n\nWhen you click on a tab, options specific to that tab are presented.\n\nLive Preview with some of these tools to create the Quick Analysis tools.\n\n<!-- image -->\n\n## Using Quick Analysis Tools With Live Preview\n\nMost of the Quick Analysis tools in the Quick Analysis gallery provide a Live Preview of the changes in the worksheet when you point to an option.\n\nThis is very useful if you are not sure of the formatting or type of analysis you require as it provides you with a preview of what the data would look like if you selected that specific option.\n\nAt the right we have selected only the totals from the worksheet shown above. We have pointed to options from the TOTALS tab ( % Total and Average ) and from the FORMATTING tab ( Data Bars ).\n\nLive Preview has either presented another row of analysed data or has formatted the selection accordingly.\n\nAll of these tools are also available on the ribbon but using the Quick Analysis tools is much quicker.\n\n<!-- image -->", - "page_start": 35, - "page_end": 35, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## Microsoft Excel", - "page_start": 3, - "page_end": 3, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "<!-- image -->\n\n## Next Steps\n\nYou will get the most out of Teams when you get to truly connect with your team and collaborate together. Keep practicing until each step of your workflow feels natural.\n\n<!-- image -->\n\n## Share knowledge\n\nTeamwork is all about collaboration! 
Share with your team best practices you learn along the way, tips and tricks for how you can best organize your workflows and ask for their own advice to define how you can best use Teams together.\n\n## Keep learning\n\nNo matter how you like to learn and practice, we've got resources to support and inspire you:\n\n - Virtual classes: We have instructors to answer your questions and walk you through all the details. ·\n - · Training series: Complete the beginner series of videos at your own pace.\n - · Support articles and step-by-step guides: To get answers to your most common questions.\n - · Feature overviews, tutorials, and announcements: Our YouTube channel has carefully curated content to get you excited and show how you can use Teams effortlessly.", - "page_start": 5, - "page_end": 5, - "source_file": "MSTeams_QuickStartGuide_EN_Final_4.18.22.pdf" - }, - { - "text": "## PRACTICE EXERCISE\n\n## The Quick Analysis Tools\n\n## Tasks:\n\n## Completed:\n\nBefore starting this exercise you MUST have completed all of the topics in the chapter The Quick Analysis Tools…\n\n -  Open the workbook PE\\_Quick Analysis.xlsx (it can be found in the same folder as the student files)\n\n\n\n -  Use the Quick Analysis tools to apply a colour scale to the data in the worksheet\n\n\n\n -  Use the Quick Analysis tools to create a chart for the Overheads data. This chart should be a clustered column chart that has the column headings as the x axis, and displays the legend at the bottom of the chart. 
Make the chart title Cost of Overheads .\n\n\n\n -  Reposition the chart below the data\n\n\n\n -  Use the Quick Analysis tools to create Sparklines for the Qtr1 to Qtr4 and Total columns for Overheads\n\nYour worksheet should appear as shown on the following page…\n\n\n\n -  Use the Save As command to save the workbook as PE\\_Quick Analysis (Completed).xlsx\n\n\n\n<!-- image -->", - "page_start": 41, - "page_end": 41, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "tools provide a way of seeing what the different charts will look like without having to first create the chart.\n\n<!-- image -->\n\n## Handy to Know…\n\n## To use the Quick Charting tools :\n\n - 1. Select the range to be charted, then click on the Quick Analysis button\n - 2. Choose the desired option from the CHARTS tab\n -  When creating a chart you'll need to ensure that the range you select includes the labels to be used on the chart.\n\n<!-- image -->\n\n<!-- image -->", - "page_start": 37, - "page_end": 37, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## QUICK SPARKLINES\n\nSparklines are mini charts that are embedded into a worksheet, usually immediately adjacent to the data. Sparklines are only relatively new in Excel and probably haven't gained the\n\nacceptance or understanding that Microsoft would like. 
So, you'll now find them in the Quick Analysis tools where you can easily implement them without too much head scratching.\n\n## Try This Yourself:\n\nn\n\n<!-- image -->\n\nBefore starting this exercise you MUST open the file E1355 Quick Analysis\\_4.xlsx…\n\n Click in cell B5 , hold down , then click in cell E9 to select the range B5:E9\n\n -  Click on the Quick Analysis button, then click on the SPARKLINES tab\n -  Point to Line to display a line drawing showing trends for each row across the four weeks\n -  Point to Column to display the trend as columns rather than a continuous line\n -  Click on Column to add Sparklines in column F\n\nNotice that after the Sparklines have been created the SPARKLINE TOOLS tab on the ribbon is now available so that you can further enhance or modify the Sparklines\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n## For Your Reference…\n\n## To use Quick Sparklines in a worksheet :\n\n - 1. Select the range to be analysed, then click on the Quick Analysis button\n - 2. Choose the desired Sparkline from the SPARKLINES tab\n\n## Handy to Know…", - "page_start": 39, - "page_end": 39, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## QUICK FORMATTING\n\nThe first tab in the Quick Analysis gallery is FORMATTING . This tab provides access to the conditional formatting tools of Excel. These are the tools that allow you to analyse data by\n\ncolouring it or presenting it in a slightly different way. In the Quick Analysis gallery you can apply data bars, colour high and low values, values over or below a value, and more.\n\n<!-- image -->\n\n## For Your Reference…\n\n## To apply Quick Formatting in a worksheet :\n\n - 1. Select the range to be formatted, then click on the Quick Analysis button\n - 2. 
Choose the desired formatting from the FORMATTING tab\n\n## Handy to Know…\n\n -  Quick Formatting applies conditional formatting, not the standard formatting.\n -  The Clear Format option in the Quick Analysis gallery will clear any conditional formatting that has been applied.", - "page_start": 36, - "page_end": 36, - "source_file": "Excel Training Manual 1.pdf" - } - ] - }, - { - "references": { - "source_file": "Word QS.pdf", - "query": "How to connect to my Microsoft account from Word?", - "target_page": 2, - "target_passage": " Click File > Account to sign in with your Microsoft account", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "## Word\n\n## Find whatever you need\n\nType a keyword or phrase into the Search box to quickly find the Word features and ribbon commands you're looking for, to discover Help content, or to get more information online .\n\n<!-- image -->\n\n<!-- image -->\n\n## Get other Quick Start guides\n\nTo download our free Quick Start Guides for your other favorite apps, go to https://go.microsoft.com/fwlink/?linkid=2008317.\n\n<!-- image -->\n\n## Next steps with Word\n\n## See what's new in Office\n\nExplore the new and improved features in Word and the other Office apps. Visit https://go.microsoft.com/fwlink/?linkid=871117 for more information.\n\n## Get free training, tutorials, and videos for Office\n\nReady to dig deeper into the capabilities that Word has to offer? Visit https://go.microsoft.com/fwlink/?linkid=871123 to explore our free training options.\n\n## Send us your feedback\n\nLove Word? Got an idea for improvement to share with us? On the File menu, select Feedback and then follow the prompts to send your suggestions directly to the Word product team. Thank you!\n\n## Share your work with others\n\nTo invite others to view or edit your documents, select the Share button in the top right corner of the app window. 
Then, you can choose to share a link to your document or send invitations directly to specific people. If someone doesn't have Word, they can use the free Word for the Web app to edit and comment.", - "page_start": 3, - "page_end": 3, - "source_file": "Word QS.pdf" - }, - { - "text": "## Share and collaborate\n\nWith this document saved in OneDrive, you can share it with others. They don't even need Word to open it.\n\nTry it: Select Share , and send a link to this document. (keyboard shortcut - Alt+F+Z or Alt+Z+S)\n\nYou can send the link by typing someone's email address or by copying the link and pasting it into a message or chat. If you want them to read the document but not edit it, set their permission to view-only.\n\nIf they don't have Word, the document will open in their web browser, in Word Online.\n\n## Add visuals with pictures from the web\n\n<!-- image -->\n\nWord works with Bing to give you access to thousands of pictures you can use in your documents.\n\nTry it: Hit enter after this line to make a blank line:\n\n- 1. With your cursor in the blank space above, go to the Insert tab, select Online Pictures , and then search for something, like puppy clip art .\n- 2. Select the picture you want, and select Insert .", - "page_start": 2, - "page_end": 2, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "you will be prompted to create a user ID (your email address) and a password. Once you do that you should have a fresh Web Protégé workspace. Figure 12.1 shows what my Web Protégé workspace currently looks like. Most of the projects are owned by me although note that the CODO project is owned by my colleague Biswanath Dutta. However, I still have complete access to that ontology due to the way Biswanath has configured my access as being able to both view and edit the ontology.\n\nTo upload the Pizza ontology, select the large Create New Project button. This will bring up the window shown in figure 12.2. 
Fill out the project name and description, then select the Choose File button and navigate to where you have the latest version of the Pizza tutorial with data. Note that in the figure I have already done this navigation so there is a value for the file to load. You can leave the Language field blank. Once you have all the fields set up similar to figure 12.2 click the Create New Project button on this dialog (note this is a different button than the one you started from).\n\nFigure 12.2 The Create New Project Dialog\n\n<!-- image -->\n\nYour workspace should now include your first project. Click on the three horizontal bars at the far right of the project. This should bring up a pop-up menu. Select the Open option. This should bring you into the main Web Protégé UI to browse an ontology.\n\nBefore you make changes to the ontology you need to make sure the settings for new entities and rendering are consistent with the settings you used for the Pizza ontology. The default in Web Protégé as with Protégé is to use Auto-Generated UUIDs rather than user supplied names. If you aren't sure about these settings you can go back to exercise 2 at the beginning of chapter 4 and chapter 7 to refresh your memory. There are excellent reasons to use auto-generated UUIDs but for beginners, especially for those who want to learn SPARQL, I think they make learning the basics more difficult so we have been using the alternative of user supplied names. At the top of the Web Protégé UI in the right corner there are", - "page_start": 84, - "page_end": 84, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "## Word\n\n## Create something\n\nBegin with a Blank document to get right to work. Or start with a template to save yourself time and steps. Just select File > New , and then select or search for the template you want.\n\n<!-- image -->\n\n<!-- image -->\n\n## Access files anywhere\n\nNeed to work on the go and across different devices? 
Click File > Account to sign in with your Microsoft account and access your recently used files anywhere, on any device, through seamless integration between Office, OneDrive, OneDrive for Business, and SharePoint.\n\n<!-- image -->\n\n## Discover related options\n\nWhen you select objects in your document, options related to your selection will appear. For example, selecting a table displays the Table Design and Layout tabs, which offer additional options.\n\n<!-- image -->\n\n## Find recent files\n\nWhether you only work with files stored on your PC's local hard drive or you store files in multiple shared locations, selecting File > Open takes you to your recently used documents and any files that you may have pinned to your list.", - "page_start": 1, - "page_end": 1, - "source_file": "Word QS.pdf" - }, - { - "text": "- 3. In the Category pane, on the left side of the PuTTY Configuration window, click Connection → Data , as shown on Figure B-10 on page 763. In the Auto-login username field, enter the Spectrum Virtualize user ID that was used when uploading the public key. The admin account was used", - "page_start": 783, - "page_end": 783, - "source_file": "sg247938.pdf" - }, - { - "text": "- 2. Enter the initiator name of your host (see Figure 8-57) and click Add Port to List .\n\nFigure 8-57 Enter the initiator name\n\n<!-- image -->\n\n - 3. Click Add Ports to Host to apply the changes to the system and then, click Close .", - "page_start": 388, - "page_end": 388, - "source_file": "sg247938.pdf" - }, - { - "text": "## Count on Word to count your words\n\nTry it: Hit return after this line and type some words.\n\nThe status bar at the bottom of the window keeps a running count of the number of words in the document.\n\n<!-- image -->\n\n## Save this for later, access it anywhere\n\nWhen you save this document in OneDrive, you'll be able to open it anywhere: on your computer, tablet, or phone. 
Your changes will be saved automatically.\n\nTry it: Select File > Save As , and then select OneDrive and give this document a name.\n\n<!-- image -->\n\nIf you sign in to Office 365 on another device, this document will be in your list of recent files. You can pick up where you left off… even if you left the document open on the computer you're using now.", - "page_start": 1, - "page_end": 1, - "source_file": "welcome_to_word_template.pdf" - }, - { - "text": "If you are creating a new account, you will create a root account using an email address. The root account has unrestricted access , similar to root accounts for an operating system. As a best practice, you should create an administrative user too.\n\n<!-- image -->\n\n## Granting administrative access to a user\n\nAs you might guess, granting administrative access to a user is still rather far reaching. An account with administrative level privileges will make getting started easier. For systems in production, follow the principle of least-privilege - granting only the minimum access necessary to accomplish tasks.\n\n - · For a step-by-step guide to account types and login management, see Signing in to the AWS Management Console.\n - · AWS Identity and Access Management (IAM) is the service to manage entities and resources authorized to use services and service resources.\n\n## Sign up for an AWS account\n\nIf you do not have an AWS account, complete the following steps to create one.\n\n## To sign up for an AWS account\n\n - 1. Open https://portal.aws.amazon.com/billing/signup.\n - 2. Follow the online instructions.\n\nPart of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad.\n\nWhen you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. 
As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.\n\nAWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account .", - "page_start": 13, - "page_end": 13, - "source_file": "serverless-core.pdf" - }, - { - "text": "Note: Make sure that your PC or notebook has a network route to the system IP address that you specified. In particular, you can access the management GUI from any management console that is connected to the same subnet as the system. Enter the system IP address on a supported browser to access the management GUI.\n\n## 4.3 System setup\n\nThis section provides instructions about how to define the basic settings of the system with the system setup wizard, and how to add nodes and optional expansion enclosures.\n\n## 4.3.1 System setup wizard\n\nWhether you are redirected from your PC or notebook after completing system initialization or you browse to the management IP address manually, you must complete the system setup wizard to define the basic settings of the system.\n\nNote: The first time that you connect to the management GUI, you are prompted to accept untrusted certificates because the system certificates are self-signed.\n\nYou can install certificates that are signed by a trusted certificate authority after you complete system setup. 
For more information about how to perform this task, see 4.5, 'Configuring secure communications' on page 117.", - "page_start": 113, - "page_end": 113, - "source_file": "sg247938.pdf" - }, - { - "text": "<!-- image -->\n\n## Up button:\n\nShort press to light up or turn off the screen; one press to go back the dial interface; long press to reactivate the watch.\n\n## Button down:\n\nShort press to enter multi-sport mode.\n\nIn addition, when the watch is in the off-screen state, you can light up the screen by pressing any buttons.\n\n## Charging instructions:\n\nWireless charging, as shown in the picture below.\n\n<!-- image -->\n\n## 1.1 Shortcut function:\n\n- 1) Swipe to the left till you find the \"+\" icon, click the icon to add part of the functions in the shortcut.\n- 2) Scroll down the screen when the watch is in the dial interface, you can find Bluetooth connection status, time, power, brightness adjustment and other functions.", - "page_start": 1, - "page_end": 1, - "source_file": "6126797.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HRL_2004.pdf", - "query": "What are the products of Hormel Foods Corporation?", - "target_page": 4, - "target_passage": "meat and other food product", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## (a) General Development of Business\n\nHormel Foods Corporation, a Delaware corporation, was founded by George A. Hormel in 1891 in Austin, Minnesota, as George A. Hormel & Company. The Company started as a processor of meat and food products and continues in this line of business. The Company name was changed to Hormel Foods Corporation on January 31, 1995. The Company is primarily engaged in the production of a variety of meat and food products and the marketing of those products throughout the United States. 
Although pork and turkey remain the major raw materials for Hormel products, the Company has emphasized for several years the manufacture and distribution of branded, consumer packaged items rather than the commodity fresh meat business.\n\nThe Company's branding strategy led to the development of a joint venture between Hormel Foods Corporation and Excel Corporation, a wholly owned subsidiary of Cargill Incorporated. This joint venture began marketing and selling nationally branded fresh case ready beef and pork under the existing HORMEL ALWAYS TENDER brand name in fiscal year 2003. This 50 percent owned joint venture, named Precept Foods LLC, is based in Austin, Minn.\n\nIn fiscal 2001, the Jennie-O Turkey Store (JOTS) business was formed as a result of merging the Company's existing Jennie-O Foods, Inc. business with the operations of The Turkey Store Company, which was acquired in the second quarter of fiscal 2001. The Turkey Store Company was a turkey processing business headquartered in Barron, Wisconsin. The merged JOTS operation is currently the largest turkey processor in the world. JOTS", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "HORMEL, ALWAYS TENDER, AMERICAN CLASSICS, AUSTIN BLUES, BLACK LABEL, CARAPELLI, CHI-CHI'S, CURE 81, CUREMASTER, DAN'S PRIZE, DIAMOND CRYSTAL, DI LUSSO, DINTY MOORE, DUBUQUE, EL TORITO, FAST 'N EASY, HERB-OX, HERDEZ, HOMELAND, HOUSE OF TSANG, JENNIE-O TURKEY STORE, KID'S KITCHEN, LAYOUT, LITTLE SIZZLERS, MARRAKESH EXPRESS, MARY KITCHEN, OLD SMOKEHOUSE, PATAK'S, PELOPONNESE, PILLOW PACK, QUICK MEAL, RANGE BRAND, ROSA GRANDE, SANDWICH MAKER, SPAM, STAGG, SWEET THING, THICK & EASY and WRANGLERS.\n\n## Customers and Backlog Orders\n\nDuring fiscal year 2003, no customer accounted for more than 10 percent of total Company sales. 
The five largest customers in each segment make up approximately the following percentage of segment sales: 39 percent of Grocery Products, 39 percent of Refrigerated Foods, 35 percent of JOTS, 51 percent of Specialty Foods, and 27 percent of All Other. The loss of one or more of the top customers in any of these segments could have a material adverse effect on the results of such segment. Backlog orders are not significant due to the perishable nature of a large portion of the products. Orders are accepted and shipped on a current basis.\n\n## Competition\n\nThe production and sale of meat and food products in the United States and internationally are highly competitive. The Company competes with manufacturers of pork and turkey products, as well as national and regional producers of other meat and protein sources, such as beef, chicken and fish. The Company believes that its largest domestic competitors for its Refrigerated Foods segment in 2003 were Tyson Foods, Smithfield Foods and ConAgra Foods; for its Grocery Products segment, ConAgra Foods, Dial Corp. and Campbell Soup Co.; and for JOTS, ConAgra Foods and Cargill, Inc.\n\nAll Hormel segments compete on the basis of price, product quality, brand identification and customer service. Through aggressive marketing and strong quality assurance programs, the Company's strategy is to provide higher quality products that possess strong brand recognition, which would then support higher value perceptions from customers.\n\nThe Company competes using this same strategy in international markets around the world.\n\n## Research and Development\n\nResearch and development continues to be a vital part of the Company's strategy to extend existing brands and expand into new branded items. The expenditures for research and development for fiscal 2003, 2002 and 2001, respectively, were $13,165,000, $12,097,000 and $11,478,000. 
There are 42 professional employees engaged in full time research, 19 in the area of improving existing products and 23 in developing new products.\n\n## Employees\n\nAs of October 25, 2003, the Company had over 16,000 active employees.", - "page_start": 4, - "page_end": 4, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "markets its turkey products through its own sales force and independent brokers.\n\nThe acquisitions of Diamond Crystal Brands Nutritional Products in fiscal 2001 and the Century Foods International business in July of fiscal 2003 strengthened the Company's presence in the nutritional food products and supplements market. The Company currently operates as one of the largest companies providing nutritional products to the U.S. healthcare industry.\n\nThe Company acquired the Diamond Crystal Brands business from Imperial Sugar Co. in December of fiscal 2003. Diamond Crystal Brands packages and sells various sugar, sugar substitute, salt and pepper products, savory products, drink mixes and dessert mixes to retail and foodservice customers.\n\nInternationally, the Company markets its products through Hormel Foods International Corporation (HFIC), a wholly owned subsidiary. HFIC has a presence in the international marketplace through joint ventures and placement of personnel in strategic foreign locations such as China, Spain, and the Philippines. HFIC also has a global presence with minority positions in food companies in Spain (Campofrio Alimentacion S.A., 15% holding) and the Philippines (Purefoods-Hormel, 40% holding).\n\nThe Company has not been involved in any bankruptcy, receivership or similar proceedings during its history. 
Substantially all of the assets of the Company have been acquired in the ordinary course of business.\n\nThe Company had no significant change in the type of products produced or services rendered, nor in the markets or methods of distribution since the beginning of the fiscal year.\n\n## (b) Industry Segment\n\nThe Company's business is reported in five segments: Grocery Products, Refrigerated Foods, Jennie-O Turkey Store, Specialty Foods, and All Other. The contributions of each segment to net sales to unaffiliated customers and operating profit, and the presentation of certain other financial information by segment are reported in Note K of the Notes to Consolidated Financial Statements and in the Management's Discussion and Analysis of the Annual Stockholder's Report for the year ended October 25, 2003, incorporated herein by reference.\n\n## (c) Description of Business\n\n## Products and Distribution\n\nThe Company's products primarily consist of meat and other food products. The meat products are sold fresh, frozen, cured, smoked, cooked and canned. The percentages of total revenues contributed by classes of similar products for the last three fiscal years of the Company are as follows:\n\n| Perishable meat | 50.3% | 53.0% | 54.7% |\n|--------------------|---------|---------|---------|\n| Nonperishable meat | 18.9 | 19.8 | 21.0 |\n| Poultry | 22.1 | 22.6 | 20.3 |\n| | 100.0% | 100.0% | 100.0% |\n\nReporting of revenues from external customers is based on similarity of products, as the same or similar products are sold across multiple distribution channels such as retail, foodservice or international. Revenues reported are based on financial information used to produce the Company's generalpurpose financial statements.\n\nPerishable meat includes fresh meats, sausages, hams, wieners and bacon (excluding JOTS products.) 
Nonperishable meat includes canned luncheon meats, shelf stable microwaveable entrees, stews, chilies, hash, meat spreads and other items that do not require refrigeration as well as frozen processed products. The Poultry category is composed primarily of JOTS products. The Other category primarily consists of nutritional food products and supplements, sugar and sugar substitutes, salt and pepper products, dessert mixes, food packaging (casings for dry sausage), and industrial gelatin products. The Other category has increased over the past two years primarily due to the following acquisitions: Century Foods International (July 2003), Diamond Crystal Brands (December 2002), and Diamond Crystal Brands Nutritional Products (April 2001).", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "No new product in fiscal 2003 required a material investment of Company assets.\n\nDomestically, the Company sells its products in all 50 states. Hormel products are sold through Company sales personnel, operating in assigned territories coordinated from district sales offices located in most of the larger U.S. cities, as well as independent brokers and distributors. As of October 25, 2003, the Company had approximately 600 sales personnel engaged in selling its products. Distribution of products to customers is by common carrier.\n\nThrough HFIC, the Company markets its products in various locations throughout the world. Some of the larger markets include Australia, Canada, China, England, Japan, Mexico and Micronesia. The distribution of export sales to customers is by common carrier, while the China operations own and operate their own delivery system. The Company, through HFIC, has licensed companies to manufacture various Hormel products internationally on a royalty basis, with the primary licensees being Tulip International of Denmark and CJ Corp. 
of South Korea.\n\n## Raw Materials\n\nThe Company has, for the past several years, been concentrating on processed branded products for consumers with year-round demand to minimize the seasonal variation experienced with commodity type products. Pork continues to be the primary raw material for Company products. Although hog producers are moving toward larger, more efficient year-round confinement operations and supply contracts are becoming increasingly prevalent in the industry, there is still a seasonal variation in the supply of fresh pork materials. The Company's expanding line of processed items has reduced but not eliminated the sensitivity of Company results to raw material supply and price fluctuations.", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "## YEAR ENDED OCTOBER 25, 2003\n\n## HORMEL FOODS CORPORATION\n\n## Austin, Minnesota\n\nItem 15(a) (1), (2) and (3) and Item 15 (c) and (d)\n\n## LIST OF FINANCIAL STATEMENTS AND FINANCIAL STATEMENT SCHEDULES\n\n## HORMEL FOODS CORPORATION\n\n## FINANCIAL STATEMENTS\n\nThe following consolidated financial statements of Hormel Foods Corporation included in the Annual Stockholders' Report for the Registrant to its stockholders for the year ended October 25, 2003, are incorporated herein by reference in Item 8 of Part II of this report:\n\nConsolidated Statements of Financial Position -October 25, 2003, and October 26, 2002.\n\nConsolidated Statements of Operations -Years Ended October 25, 2003, October 26, 2002 and October 27, 2001.\n\nConsolidated Statements of Changes in Shareholders' Investment -Years Ended October 25, 2003, October 26, 2002, and October 27, 2001.\n\nConsolidated Statements of Cash Flows -Years Ended October 25, 2003, October 26, 2002, and October 27, 2001.\n\nNotes to Financial Statements -October 25, 2003.\n\n## Report of Independent Auditors\n\n## FINANCIAL STATEMENT SCHEDULES\n\nThe following consolidated financial statement schedule of 
Hormel Foods Corporation required pursuant to Item 15(d) is submitted herewith:\n\n## Schedule II-Valuation and Qualifying Accounts and Reserves...F-3\n\nAll other schedules for which provision is made in the applicable accounting regulation of the Securities and Exchange Commission are not required under the related instructions or are inapplicable, and therefore have been omitted.\n\n## FINANCIAL STATEMENTS AND SCHEDULES OMITTED\n\nCondensed parent company financial statements of the registrant are omitted pursuant to Rule 5-04(c) of Article 5 of Regulation S-X.\n\n## SCHEDULE II-VALUATION AND QUALIFYING ACCOUNTS AND RESERVES\n\n## HORMEL FINANCIAL SERVICES CORPORATION\n\n(In Thousands)\n\nNote (1) -Uncollectible accounts written off.\n\nNote (2) -Recoveries on accounts previously written off.\n\nNote (3) -Increase in the reserve due to the inclusion of The Turkey Store Company accounts receivable.\n\nNote (4) -Increase in the reserve due to the inclusion of Diamond Crystal Brands accounts receivable.\n\n## LIST OF EXHIBITS\n\n## HORMEL FOODS CORPORATION\n\n| 2.1 (1) | Agreement and Plan of Merger and Plan of Reorganization dated January 22, 2001, by and among Hormel, Badger Acquisition Corporation, Jerome Foods, Inc. and Jerome K. Jerome. (Incorporated by reference to Hormel's Current Report on Form 8-K dated March 9, 2001, File No. 001-02402.) |\n|-----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| 3.1 (1) | Certificate of Incorporation as amended to date. (Incorporated by reference to Exhibit 3A-1 to Hormel's Annual Report on Form 10- K/A for the fiscal year ended October 28, 2000, File No. 001-02402.) 
|", - "page_start": 11, - "page_end": 11, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "<!-- image -->\n\n## Hormel Foods Annual Report 2004\n\n## Form 10-K (NYSE:HRL)\n\nPublished: January 23rd, 2004\n\n<!-- image -->", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "Livestock slaughtered by the Company is purchased by Company buyers and commission dealers at sale barns and terminal markets or under long-term supply contracts at locations principally in Minnesota, Illinois, Iowa, Nebraska, Colorado and South Dakota. The cost of livestock and the utilization of the Company's facilities are affected by both the level and the methods of pork production in the United States. The hog production industry has been rapidly moving to very large, vertically integrated, year-round confinement operations operating under long-term supply agreements. This has resulted in fewer hogs being available on the spot cash market, which decreases the supply of hogs on the open market and can severely diminish the utilization of slaughter facilities and increase the cost of the raw materials they produce. The Company, along with others in the industry, uses long-term supply contracts to manage the effects of this trend and to assure a stable supply of raw materials while minimizing extreme fluctuations in costs over the longterm. This may result in costs for live hogs that are either higher or lower than the spot cash market depending on the relationship of the cash spot market to contract prices. Contract costs are fully reflected in the Company's reported financial results. In fiscal 2003, the Company purchased 79 percent of its hogs under long-term supply contracts.\n\nIn fiscal 2003, JOTS raised approximately 57 percent of the turkeys needed to meet its raw material requirements for whole bird and processed turkey products. Turkeys not sourced within the Company are contracted with independent turkey growers. 
JOTS' turkey-raising farms are located throughout Minnesota and Wisconsin. Production costs in raising turkeys are primarily subject to fluctuations in feed grain prices and to a lesser extent fuel costs.\n\n## Manufacturing\n\nThe Company has plants in Austin, Minnesota; Fremont, Nebraska; and Beijing, China that slaughter livestock for processing. Quality Pork Processors of Dallas, Texas, operates the slaughter facility at Austin under a custom slaughter arrangement.\n\nFacilities that produce manufactured items are located in Algona, Iowa; Aurora, Illinois; Austin, Minnesota; Beloit, Wisconsin; Bondurant, Iowa; Ft. Dodge, Iowa; Fremont, Nebraska; Houston, Texas; Knoxville, Iowa; Mitchellville, Iowa; Osceola, Iowa; Perrysburg, Ohio; Quakertown, Pennsylvania; Rochelle, Illinois; Savannah, Georgia; Sparta, Wisconsin; Stockton, California; Tucker, Georgia; Visalia, California; Wichita, Kansas; Beijing, China; and Shanghai, China. Company products are also custom manufactured by several other companies. The following are the Company's larger custom manufacturers: Lakeside Packing Company, Manitowoc, Wisconsin; Schroeder Milk, Maplewood, Minnesota; Steuben Foods, Jamaica, New York; Power Packaging, St. Charles, Illinois; Criders, Stilmore, Georgia; Tony Downs, St. James, Minnesota; and Concept Foods, Alma, Kansas. Power\n\nLogistics, Inc., based in St. Charles, Illinois, operates distribution centers for the Company in Dayton, Ohio, and Osceola, Iowa.\n\nThe Company's turkey slaughter and processing operations are located in Barron, Wisconsin; Faribault, Minnesota; Melrose, Minnesota; Montevideo, Minnesota; Pelican Rapids, Minnesota; and Willmar, Minnesota.\n\n## Patents and Trademarks\n\nThere are numerous patents and trademarks that are important to the Company's business. The Company holds seven foreign and 47 U.S. issued patents. Some of the trademarks are registered and some are not. 
In recognition of the importance of these assets, the Company created a subsidiary, Hormel Foods, LLC, in 1998 to create, own, maintain and protect most of the Company's trademarks and patents. Some of the more significant owned or licensed trademarks used in the Company's segments are:", - "page_start": 4, - "page_end": 4, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "## COMPETITIVE CONDITIONS\n\nWe operate in a highly competitive business environment. We compete with other national, regional, local and online retailers that may carry similar lines of merchandise, including department stores, specialty stores, off-price stores, boutiques and Internet businesses. Our specific competitors vary from market to market. We believe the keys to competing in our industry are providing great customer service and customer experiences in stores and online, which includes compelling price and value, fashion newness, quality of products, selection, convenience, technology, product fulfillment, personalization and appealing, relevant store environments in top locations.\n\n## INVENTORY\n\nWe plan our merchandise purchases and receipts to coincide with expected sales trends. For instance, our merchandise purchases and receipts increase prior to our Anniversary Sale, which has historically extended over the last two weeks of July. We also purchase and receive a larger amount of merchandise in the fall as we prepare for the holiday shopping season (from late November through December). Beginning in 2012, we increased our investment in pack and hold inventory at Nordstrom Rack, which involves the strategic purchase of merchandise from some of our full-line stores' top brands in advance of the upcoming selling seasons to take advantage of favorable buying opportunities. This inventory is typically held for six months on average and has contributed to the growth in our Nordstrom Rack business. 
We pay for our merchandise purchases under the terms established with our vendors.\n\nIn order to offer merchandise that our customers want, we purchase from a wide variety of high-quality suppliers, including domestic and foreign businesses. We also have arrangements with agents and contract manufacturers to produce our private label merchandise. We expect our suppliers to meet our 'Nordstrom Partnership Guidelines,' which address our corporate social responsibility standards for matters such as legal and regulatory compliance, labor, health and safety and the environment, and are available on our website at Nordstrom.com.\n\n## EMPLOYEES\n\nDuring 2014, we employed approximately 67,000 employees on a full- or part-time basis. Due to the seasonal nature of our business, employment increased to approximately 68,000 employees in July 2014 and 73,500 in December 2014. All of our employees are non-union. We believe our relationship with our employees is good.\n\n## CAUTIONARY STATEMENT\n\nCertain statements in this Annual Report on Form 10-K contain or may suggest 'forward-looking' information (as defined in the Private Securities Litigation Reform Act of 1995) that involve risks and uncertainties, including, but not limited to, anticipated financial outlook for the fiscal year ending January 30, 2016, anticipated annual total and comparable sales rates, anticipated new store openings in existing, new and international markets, anticipated Return on Invested Capital and trends in our operations. Such statements are based upon the current beliefs and expectations of the company's management and are subject to significant risks and uncertainties. Actual future results may differ materially from historical results or current expectations depending upon factors including, but not limited to:", - "page_start": 16, - "page_end": 16, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## HON INDUSTRIES Inc. 
and SUBSIDIARIES", - "page_start": 56, - "page_end": 56, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "- · competition within the office furniture and fireplace industries, including competition from imported products and competitive pricing;\n - · increases in the cost of raw materials, including steel, which is the Company's largest raw material category;\n - · increases in the cost of health care benefits provided by the Company;\n - · reduced demand for the Company's storage products caused by changes in office technology, including the change from paper record storage to electronic record storage;\n - · the effects of economic conditions on demand for office furniture, customer insolvencies and related bad debts, and claims against the Company that it received preferential payments;\n - · changes in demand and order patterns from the Company's customers, particularly its top ten customers, which represented approximately 36% of net sales in 2003;\n - · issues associated with acquisitions and integration of acquisitions;\n - · the ability of the Company to realize cost savings and productivity improvements from its cost containment and business simplification initiatives;\n - · the ability of the Company to realize financial benefits from investments in new products;\n - · the ability of the Company's distributors and dealers to successfully market and sell the Company's products; and\n - · the availability and cost of capital to finance planned growth.", - "page_start": 37, - "page_end": 37, - "source_file": "NYSE_HNI_2003.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HRL_2004.pdf", - "query": "Where are Hormel Foods Corporation plants located? 
", - "target_page": 5, - "target_passage": "has plants in Austin, Minnesota; Fremont, Nebraska; and Beijing, China", - "chunk_present": { - "presence": true, - "index": 6 - } - }, - "top_chunk": [ - { - "text": "## (a) General Development of Business\n\nHormel Foods Corporation, a Delaware corporation, was founded by George A. Hormel in 1891 in Austin, Minnesota, as George A. Hormel & Company. The Company started as a processor of meat and food products and continues in this line of business. The Company name was changed to Hormel Foods Corporation on January 31, 1995. The Company is primarily engaged in the production of a variety of meat and food products and the marketing of those products throughout the United States. Although pork and turkey remain the major raw materials for Hormel products, the Company has emphasized for several years the manufacture and distribution of branded, consumer packaged items rather than the commodity fresh meat business.\n\nThe Company's branding strategy led to the development of a joint venture between Hormel Foods Corporation and Excel Corporation, a wholly owned subsidiary of Cargill Incorporated. This joint venture began marketing and selling nationally branded fresh case ready beef and pork under the existing HORMEL ALWAYS TENDER brand name in fiscal year 2003. This 50 percent owned joint venture, named Precept Foods LLC, is based in Austin, Minn.\n\nIn fiscal 2001, the Jennie-O Turkey Store (JOTS) business was formed as a result of merging the Company's existing Jennie-O Foods, Inc. business with the operations of The Turkey Store Company, which was acquired in the second quarter of fiscal 2001. The Turkey Store Company was a turkey processing business headquartered in Barron, Wisconsin. The merged JOTS operation is currently the largest turkey processor in the world. 
JOTS", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "HORMEL, ALWAYS TENDER, AMERICAN CLASSICS, AUSTIN BLUES, BLACK LABEL, CARAPELLI, CHI-CHI'S, CURE 81, CUREMASTER, DAN'S PRIZE, DIAMOND CRYSTAL, DI LUSSO, DINTY MOORE, DUBUQUE, EL TORITO, FAST 'N EASY, HERB-OX, HERDEZ, HOMELAND, HOUSE OF TSANG, JENNIE-O TURKEY STORE, KID'S KITCHEN, LAYOUT, LITTLE SIZZLERS, MARRAKESH EXPRESS, MARY KITCHEN, OLD SMOKEHOUSE, PATAK'S, PELOPONNESE, PILLOW PACK, QUICK MEAL, RANGE BRAND, ROSA GRANDE, SANDWICH MAKER, SPAM, STAGG, SWEET THING, THICK & EASY and WRANGLERS.\n\n## Customers and Backlog Orders\n\nDuring fiscal year 2003, no customer accounted for more than 10 percent of total Company sales. The five largest customers in each segment make up approximately the following percentage of segment sales: 39 percent of Grocery Products, 39 percent of Refrigerated Foods, 35 percent of JOTS, 51 percent of Specialty Foods, and 27 percent of All Other. The loss of one or more of the top customers in any of these segments could have a material adverse effect on the results of such segment. Backlog orders are not significant due to the perishable nature of a large portion of the products. Orders are accepted and shipped on a current basis.\n\n## Competition\n\nThe production and sale of meat and food products in the United States and internationally are highly competitive. The Company competes with manufacturers of pork and turkey products, as well as national and regional producers of other meat and protein sources, such as beef, chicken and fish. The Company believes that its largest domestic competitors for its Refrigerated Foods segment in 2003 were Tyson Foods, Smithfield Foods and ConAgra Foods; for its Grocery Products segment, ConAgra Foods, Dial Corp. and Campbell Soup Co.; and for JOTS, ConAgra Foods and Cargill, Inc.\n\nAll Hormel segments compete on the basis of price, product quality, brand identification and customer service. 
Through aggressive marketing and strong quality assurance programs, the Company's strategy is to provide higher quality products that possess strong brand recognition, which would then support higher value perceptions from customers.\n\nThe Company competes using this same strategy in international markets around the world.\n\n## Research and Development\n\nResearch and development continues to be a vital part of the Company's strategy to extend existing brands and expand into new branded items. The expenditures for research and development for fiscal 2003, 2002 and 2001, respectively, were $13,165,000, $12,097,000 and $11,478,000. There are 42 professional employees engaged in full time research, 19 in the area of improving existing products and 23 in developing new products.\n\n## Employees\n\nAs of October 25, 2003, the Company had over 16,000 active employees.", - "page_start": 4, - "page_end": 4, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "markets its turkey products through its own sales force and independent brokers.\n\nThe acquisitions of Diamond Crystal Brands Nutritional Products in fiscal 2001 and the Century Foods International business in July of fiscal 2003 strengthened the Company's presence in the nutritional food products and supplements market. The Company currently operates as one of the largest companies providing nutritional products to the U.S. healthcare industry.\n\nThe Company acquired the Diamond Crystal Brands business from Imperial Sugar Co. in December of fiscal 2003. Diamond Crystal Brands packages and sells various sugar, sugar substitute, salt and pepper products, savory products, drink mixes and dessert mixes to retail and foodservice customers.\n\nInternationally, the Company markets its products through Hormel Foods International Corporation (HFIC), a wholly owned subsidiary. 
HFIC has a presence in the international marketplace through joint ventures and placement of personnel in strategic foreign locations such as China, Spain, and the Philippines. HFIC also has a global presence with minority positions in food companies in Spain (Campofrio Alimentacion S.A., 15% holding) and the Philippines (Purefoods-Hormel, 40% holding).\n\nThe Company has not been involved in any bankruptcy, receivership or similar proceedings during its history. Substantially all of the assets of the Company have been acquired in the ordinary course of business.\n\nThe Company had no significant change in the type of products produced or services rendered, nor in the markets or methods of distribution since the beginning of the fiscal year.\n\n## (b) Industry Segment\n\nThe Company's business is reported in five segments: Grocery Products, Refrigerated Foods, Jennie-O Turkey Store, Specialty Foods, and All Other. The contributions of each segment to net sales to unaffiliated customers and operating profit, and the presentation of certain other financial information by segment are reported in Note K of the Notes to Consolidated Financial Statements and in the Management's Discussion and Analysis of the Annual Stockholder's Report for the year ended October 25, 2003, incorporated herein by reference.\n\n## (c) Description of Business\n\n## Products and Distribution\n\nThe Company's products primarily consist of meat and other food products. The meat products are sold fresh, frozen, cured, smoked, cooked and canned. 
The percentages of total revenues contributed by classes of similar products for the last three fiscal years of the Company are as follows:\n\n| Perishable meat | 50.3% | 53.0% | 54.7% |\n|--------------------|---------|---------|---------|\n| Nonperishable meat | 18.9 | 19.8 | 21.0 |\n| Poultry | 22.1 | 22.6 | 20.3 |\n| | 100.0% | 100.0% | 100.0% |\n\nReporting of revenues from external customers is based on similarity of products, as the same or similar products are sold across multiple distribution channels such as retail, foodservice or international. Revenues reported are based on financial information used to produce the Company's generalpurpose financial statements.\n\nPerishable meat includes fresh meats, sausages, hams, wieners and bacon (excluding JOTS products.) Nonperishable meat includes canned luncheon meats, shelf stable microwaveable entrees, stews, chilies, hash, meat spreads and other items that do not require refrigeration as well as frozen processed products. The Poultry category is composed primarily of JOTS products. The Other category primarily consists of nutritional food products and supplements, sugar and sugar substitutes, salt and pepper products, dessert mixes, food packaging (casings for dry sausage), and industrial gelatin products. 
The Other category has increased over the past two years primarily due to the following acquisitions: Century Foods International (July 2003), Diamond Crystal Brands (December 2002), and Diamond Crystal Brands Nutritional Products (April 2001).", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "## YEAR ENDED OCTOBER 25, 2003\n\n## HORMEL FOODS CORPORATION\n\n## Austin, Minnesota\n\nItem 15(a) (1), (2) and (3) and Item 15 (c) and (d)\n\n## LIST OF FINANCIAL STATEMENTS AND FINANCIAL STATEMENT SCHEDULES\n\n## HORMEL FOODS CORPORATION\n\n## FINANCIAL STATEMENTS\n\nThe following consolidated financial statements of Hormel Foods Corporation included in the Annual Stockholders' Report for the Registrant to its stockholders for the year ended October 25, 2003, are incorporated herein by reference in Item 8 of Part II of this report:\n\nConsolidated Statements of Financial Position -October 25, 2003, and October 26, 2002.\n\nConsolidated Statements of Operations -Years Ended October 25, 2003, October 26, 2002 and October 27, 2001.\n\nConsolidated Statements of Changes in Shareholders' Investment -Years Ended October 25, 2003, October 26, 2002, and October 27, 2001.\n\nConsolidated Statements of Cash Flows -Years Ended October 25, 2003, October 26, 2002, and October 27, 2001.\n\nNotes to Financial Statements -October 25, 2003.\n\n## Report of Independent Auditors\n\n## FINANCIAL STATEMENT SCHEDULES\n\nThe following consolidated financial statement schedule of Hormel Foods Corporation required pursuant to Item 15(d) is submitted herewith:\n\n## Schedule II-Valuation and Qualifying Accounts and Reserves...F-3\n\nAll other schedules for which provision is made in the applicable accounting regulation of the Securities and Exchange Commission are not required under the related instructions or are inapplicable, and therefore have been omitted.\n\n## FINANCIAL STATEMENTS AND SCHEDULES OMITTED\n\nCondensed parent company financial 
statements of the registrant are omitted pursuant to Rule 5-04(c) of Article 5 of Regulation S-X.\n\n## SCHEDULE II-VALUATION AND QUALIFYING ACCOUNTS AND RESERVES\n\n## HORMEL FINANCIAL SERVICES CORPORATION\n\n(In Thousands)\n\nNote (1) -Uncollectible accounts written off.\n\nNote (2) -Recoveries on accounts previously written off.\n\nNote (3) -Increase in the reserve due to the inclusion of The Turkey Store Company accounts receivable.\n\nNote (4) -Increase in the reserve due to the inclusion of Diamond Crystal Brands accounts receivable.\n\n## LIST OF EXHIBITS\n\n## HORMEL FOODS CORPORATION\n\n| 2.1 (1) | Agreement and Plan of Merger and Plan of Reorganization dated January 22, 2001, by and among Hormel, Badger Acquisition Corporation, Jerome Foods, Inc. and Jerome K. Jerome. (Incorporated by reference to Hormel's Current Report on Form 8-K dated March 9, 2001, File No. 001-02402.) |\n|-----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| 3.1 (1) | Certificate of Incorporation as amended to date. (Incorporated by reference to Exhibit 3A-1 to Hormel's Annual Report on Form 10- K/A for the fiscal year ended October 28, 2000, File No. 001-02402.) |", - "page_start": 11, - "page_end": 11, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "<!-- image -->\n\n## Hormel Foods Annual Report 2004\n\n## Form 10-K (NYSE:HRL)\n\nPublished: January 23rd, 2004\n\n<!-- image -->", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "No new product in fiscal 2003 required a material investment of Company assets.\n\nDomestically, the Company sells its products in all 50 states. 
Hormel products are sold through Company sales personnel, operating in assigned territories coordinated from district sales offices located in most of the larger U.S. cities, as well as independent brokers and distributors. As of October 25, 2003, the Company had approximately 600 sales personnel engaged in selling its products. Distribution of products to customers is by common carrier.\n\nThrough HFIC, the Company markets its products in various locations throughout the world. Some of the larger markets include Australia, Canada, China, England, Japan, Mexico and Micronesia. The distribution of export sales to customers is by common carrier, while the China operations own and operate their own delivery system. The Company, through HFIC, has licensed companies to manufacture various Hormel products internationally on a royalty basis, with the primary licensees being Tulip International of Denmark and CJ Corp. of South Korea.\n\n## Raw Materials\n\nThe Company has, for the past several years, been concentrating on processed branded products for consumers with year-round demand to minimize the seasonal variation experienced with commodity type products. Pork continues to be the primary raw material for Company products. Although hog producers are moving toward larger, more efficient year-round confinement operations and supply contracts are becoming increasingly prevalent in the industry, there is still a seasonal variation in the supply of fresh pork materials. The Company's expanding line of processed items has reduced but not eliminated the sensitivity of Company results to raw material supply and price fluctuations.", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "Livestock slaughtered by the Company is purchased by Company buyers and commission dealers at sale barns and terminal markets or under long-term supply contracts at locations principally in Minnesota, Illinois, Iowa, Nebraska, Colorado and South Dakota. 
The cost of livestock and the utilization of the Company's facilities are affected by both the level and the methods of pork production in the United States. The hog production industry has been rapidly moving to very large, vertically integrated, year-round confinement operations operating under long-term supply agreements. This has resulted in fewer hogs being available on the spot cash market, which decreases the supply of hogs on the open market and can severely diminish the utilization of slaughter facilities and increase the cost of the raw materials they produce. The Company, along with others in the industry, uses long-term supply contracts to manage the effects of this trend and to assure a stable supply of raw materials while minimizing extreme fluctuations in costs over the longterm. This may result in costs for live hogs that are either higher or lower than the spot cash market depending on the relationship of the cash spot market to contract prices. Contract costs are fully reflected in the Company's reported financial results. In fiscal 2003, the Company purchased 79 percent of its hogs under long-term supply contracts.\n\nIn fiscal 2003, JOTS raised approximately 57 percent of the turkeys needed to meet its raw material requirements for whole bird and processed turkey products. Turkeys not sourced within the Company are contracted with independent turkey growers. JOTS' turkey-raising farms are located throughout Minnesota and Wisconsin. Production costs in raising turkeys are primarily subject to fluctuations in feed grain prices and to a lesser extent fuel costs.\n\n## Manufacturing\n\nThe Company has plants in Austin, Minnesota; Fremont, Nebraska; and Beijing, China that slaughter livestock for processing. 
Quality Pork Processors of Dallas, Texas, operates the slaughter facility at Austin under a custom slaughter arrangement.\n\nFacilities that produce manufactured items are located in Algona, Iowa; Aurora, Illinois; Austin, Minnesota; Beloit, Wisconsin; Bondurant, Iowa; Ft. Dodge, Iowa; Fremont, Nebraska; Houston, Texas; Knoxville, Iowa; Mitchellville, Iowa; Osceola, Iowa; Perrysburg, Ohio; Quakertown, Pennsylvania; Rochelle, Illinois; Savannah, Georgia; Sparta, Wisconsin; Stockton, California; Tucker, Georgia; Visalia, California; Wichita, Kansas; Beijing, China; and Shanghai, China. Company products are also custom manufactured by several other companies. The following are the Company's larger custom manufacturers: Lakeside Packing Company, Manitowoc, Wisconsin; Schroeder Milk, Maplewood, Minnesota; Steuben Foods, Jamaica, New York; Power Packaging, St. Charles, Illinois; Criders, Stilmore, Georgia; Tony Downs, St. James, Minnesota; and Concept Foods, Alma, Kansas. Power\n\nLogistics, Inc., based in St. Charles, Illinois, operates distribution centers for the Company in Dayton, Ohio, and Osceola, Iowa.\n\nThe Company's turkey slaughter and processing operations are located in Barron, Wisconsin; Faribault, Minnesota; Melrose, Minnesota; Montevideo, Minnesota; Pelican Rapids, Minnesota; and Willmar, Minnesota.\n\n## Patents and Trademarks\n\nThere are numerous patents and trademarks that are important to the Company's business. The Company holds seven foreign and 47 U.S. issued patents. Some of the trademarks are registered and some are not. In recognition of the importance of these assets, the Company created a subsidiary, Hormel Foods, LLC, in 1998 to create, own, maintain and protect most of the Company's trademarks and patents. 
Some of the more significant owned or licensed trademarks used in the Company's segments are:", - "page_start": 4, - "page_end": 4, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nDouglas C. Arthur Attorney-at-Law Arthur and Allamong\n\n<!-- image -->\n\nKen L. Burch Farmer\n\nHarold Morrison, Jr. Chairman of the Board Woodstock Garage, Inc. (an auto sales and repair firm)\n\nNoel M. Borden President, Retired H. L. Borden Lumber Company (a retail building materials firm)\n\nChristopher E. French President Shenandoah Telecommunications Company and its subsidiaries\n\nZane Neff Manager, Retired Hugh Saum Company, Inc. (a hardware and furniture store)\n\n## BOARD OF DIRECTORS\n\nDick D. Bowman President Bowman Bros., Inc. (a farm equipment dealer)\n\nGrover M. Holler, Jr. President Valley View, Inc. (a real estate developer)\n\nJames E. Zerkel II Vice President James E. Zerkel, Inc. (a hardware firm)\n\n<!-- image -->\n\n## FIVE-YEAR SUMMARY OF SELECTED FINANCIAL DATA", - "page_start": 8, - "page_end": 8, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## HON INDUSTRIES Inc. and SUBSIDIARIES", - "page_start": 56, - "page_end": 56, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "vertically integrated waste services or expand the service area for our existing disposal sites. Development projects, while generally less capital intensive, typically require extensive permitting eÅorts that can take years to complete with no assurance of success. We undertake development projects when we believe there is a reasonable probability of success and where reasonably priced acquisition opportunities are not available.\n\n - , Acquisition Growth. During the late 1990's, the solid waste industry experienced a period of rapid consolidation. We were able to grow signiÑcantly through acquisitions during this period. 
However, the rate of consolidation in the industry has slowed considerably. Despite this, we continue to look to acquire businesses that complement our existing business platform. Our acquisition growth strategy focuses on privately-held solid waste companies and municipal and other local governmental authorities. We believe that our ability to acquire privately-held companies is enhanced by increasing competition in the solid waste industry, increasing capital requirements as a result of changes in solid waste regulatory requirements, and the limited number of exit strategies for these privately-held companies' owners and principals. We also seek to acquire operations and facilities from municipalities that are privatizing, which occur for many of the same reasons that privately-held companies sell their solid waste businesses. In addition, we will continue to evaluate opportunities to acquire operations and facilities that may be divested by other publicly-owned waste companies. In sum, our acquisition growth strategy focuses on:\n - , acquiring businesses that position our company for growth in existing and new markets,\n - , acquiring well-managed companies and, when appropriate, retaining local management,\n - , acquiring operations and facilities from municipalities that are privatizing and publicly-owned companies that are divesting of assets.\n\nFor certain risks involved with our acquisition growth strategy, see \"\"Risk Factors Ì We may be unable to execute our acquisition growth strategy,'' \"\"Ì We may be unable to manage our growth eÅectively,'' and \"\"Ì Businesses we acquire may have undisclosed liabilities.''\n\nAcquire Businesses Positioning the Company for Growth. In making acquisitions, we principally target high quality businesses that will allow our company to be, or provide our company favorable prospects of becoming, a leading provider of integrated solid waste services in markets with favorable demographic growth. 
Generally, we have acquired, and will continue to seek to acquire, solid waste collection, transfer and disposal companies that:\n\n - , have strong operating margins,\n - , are in growth markets,\n - , are among the largest or have a signiÑcant presence in their local markets, and\n - , have long-term contracts or franchises with municipalities and other customers.", - "page_start": 13, - "page_end": 13, - "source_file": "NYSE_RSG_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_HRL_2004.pdf", - "query": "Does Hormel Food Corporation have any material legal proceedings pending?", - "target_page": 8, - "target_passage": "The Company knows of no pending material legal proceedings.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## (a) General Development of Business\n\nHormel Foods Corporation, a Delaware corporation, was founded by George A. Hormel in 1891 in Austin, Minnesota, as George A. Hormel & Company. The Company started as a processor of meat and food products and continues in this line of business. The Company name was changed to Hormel Foods Corporation on January 31, 1995. The Company is primarily engaged in the production of a variety of meat and food products and the marketing of those products throughout the United States. Although pork and turkey remain the major raw materials for Hormel products, the Company has emphasized for several years the manufacture and distribution of branded, consumer packaged items rather than the commodity fresh meat business.\n\nThe Company's branding strategy led to the development of a joint venture between Hormel Foods Corporation and Excel Corporation, a wholly owned subsidiary of Cargill Incorporated. This joint venture began marketing and selling nationally branded fresh case ready beef and pork under the existing HORMEL ALWAYS TENDER brand name in fiscal year 2003. 
This 50 percent owned joint venture, named Precept Foods LLC, is based in Austin, Minn.\n\nIn fiscal 2001, the Jennie-O Turkey Store (JOTS) business was formed as a result of merging the Company's existing Jennie-O Foods, Inc. business with the operations of The Turkey Store Company, which was acquired in the second quarter of fiscal 2001. The Turkey Store Company was a turkey processing business headquartered in Barron, Wisconsin. The merged JOTS operation is currently the largest turkey processor in the world. JOTS", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "markets its turkey products through its own sales force and independent brokers.\n\nThe acquisitions of Diamond Crystal Brands Nutritional Products in fiscal 2001 and the Century Foods International business in July of fiscal 2003 strengthened the Company's presence in the nutritional food products and supplements market. The Company currently operates as one of the largest companies providing nutritional products to the U.S. healthcare industry.\n\nThe Company acquired the Diamond Crystal Brands business from Imperial Sugar Co. in December of fiscal 2003. Diamond Crystal Brands packages and sells various sugar, sugar substitute, salt and pepper products, savory products, drink mixes and dessert mixes to retail and foodservice customers.\n\nInternationally, the Company markets its products through Hormel Foods International Corporation (HFIC), a wholly owned subsidiary. HFIC has a presence in the international marketplace through joint ventures and placement of personnel in strategic foreign locations such as China, Spain, and the Philippines. HFIC also has a global presence with minority positions in food companies in Spain (Campofrio Alimentacion S.A., 15% holding) and the Philippines (Purefoods-Hormel, 40% holding).\n\nThe Company has not been involved in any bankruptcy, receivership or similar proceedings during its history. 
Substantially all of the assets of the Company have been acquired in the ordinary course of business.\n\nThe Company had no significant change in the type of products produced or services rendered, nor in the markets or methods of distribution since the beginning of the fiscal year.\n\n## (b) Industry Segment\n\nThe Company's business is reported in five segments: Grocery Products, Refrigerated Foods, Jennie-O Turkey Store, Specialty Foods, and All Other. The contributions of each segment to net sales to unaffiliated customers and operating profit, and the presentation of certain other financial information by segment are reported in Note K of the Notes to Consolidated Financial Statements and in the Management's Discussion and Analysis of the Annual Stockholder's Report for the year ended October 25, 2003, incorporated herein by reference.\n\n## (c) Description of Business\n\n## Products and Distribution\n\nThe Company's products primarily consist of meat and other food products. The meat products are sold fresh, frozen, cured, smoked, cooked and canned. The percentages of total revenues contributed by classes of similar products for the last three fiscal years of the Company are as follows:\n\n| Perishable meat | 50.3% | 53.0% | 54.7% |\n|--------------------|---------|---------|---------|\n| Nonperishable meat | 18.9 | 19.8 | 21.0 |\n| Poultry | 22.1 | 22.6 | 20.3 |\n| | 100.0% | 100.0% | 100.0% |\n\nReporting of revenues from external customers is based on similarity of products, as the same or similar products are sold across multiple distribution channels such as retail, foodservice or international. Revenues reported are based on financial information used to produce the Company's generalpurpose financial statements.\n\nPerishable meat includes fresh meats, sausages, hams, wieners and bacon (excluding JOTS products.) 
Nonperishable meat includes canned luncheon meats, shelf stable microwaveable entrees, stews, chilies, hash, meat spreads and other items that do not require refrigeration as well as frozen processed products. The Poultry category is composed primarily of JOTS products. The Other category primarily consists of nutritional food products and supplements, sugar and sugar substitutes, salt and pepper products, dessert mixes, food packaging (casings for dry sausage), and industrial gelatin products. The Other category has increased over the past two years primarily due to the following acquisitions: Century Foods International (July 2003), Diamond Crystal Brands (December 2002), and Diamond Crystal Brands Nutritional Products (April 2001).", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "No new product in fiscal 2003 required a material investment of Company assets.\n\nDomestically, the Company sells its products in all 50 states. Hormel products are sold through Company sales personnel, operating in assigned territories coordinated from district sales offices located in most of the larger U.S. cities, as well as independent brokers and distributors. As of October 25, 2003, the Company had approximately 600 sales personnel engaged in selling its products. Distribution of products to customers is by common carrier.\n\nThrough HFIC, the Company markets its products in various locations throughout the world. Some of the larger markets include Australia, Canada, China, England, Japan, Mexico and Micronesia. The distribution of export sales to customers is by common carrier, while the China operations own and operate their own delivery system. The Company, through HFIC, has licensed companies to manufacture various Hormel products internationally on a royalty basis, with the primary licensees being Tulip International of Denmark and CJ Corp. 
of South Korea.\n\n## Raw Materials\n\nThe Company has, for the past several years, been concentrating on processed branded products for consumers with year-round demand to minimize the seasonal variation experienced with commodity type products. Pork continues to be the primary raw material for Company products. Although hog producers are moving toward larger, more efficient year-round confinement operations and supply contracts are becoming increasingly prevalent in the industry, there is still a seasonal variation in the supply of fresh pork materials. The Company's expanding line of processed items has reduced but not eliminated the sensitivity of Company results to raw material supply and price fluctuations.", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "## YEAR ENDED OCTOBER 25, 2003\n\n## HORMEL FOODS CORPORATION\n\n## Austin, Minnesota\n\nItem 15(a) (1), (2) and (3) and Item 15 (c) and (d)\n\n## LIST OF FINANCIAL STATEMENTS AND FINANCIAL STATEMENT SCHEDULES\n\n## HORMEL FOODS CORPORATION\n\n## FINANCIAL STATEMENTS\n\nThe following consolidated financial statements of Hormel Foods Corporation included in the Annual Stockholders' Report for the Registrant to its stockholders for the year ended October 25, 2003, are incorporated herein by reference in Item 8 of Part II of this report:\n\nConsolidated Statements of Financial Position -October 25, 2003, and October 26, 2002.\n\nConsolidated Statements of Operations -Years Ended October 25, 2003, October 26, 2002 and October 27, 2001.\n\nConsolidated Statements of Changes in Shareholders' Investment -Years Ended October 25, 2003, October 26, 2002, and October 27, 2001.\n\nConsolidated Statements of Cash Flows -Years Ended October 25, 2003, October 26, 2002, and October 27, 2001.\n\nNotes to Financial Statements -October 25, 2003.\n\n## Report of Independent Auditors\n\n## FINANCIAL STATEMENT SCHEDULES\n\nThe following consolidated financial statement schedule of 
Hormel Foods Corporation required pursuant to Item 15(d) is submitted herewith:\n\n## Schedule II-Valuation and Qualifying Accounts and Reserves...F-3\n\nAll other schedules for which provision is made in the applicable accounting regulation of the Securities and Exchange Commission are not required under the related instructions or are inapplicable, and therefore have been omitted.\n\n## FINANCIAL STATEMENTS AND SCHEDULES OMITTED\n\nCondensed parent company financial statements of the registrant are omitted pursuant to Rule 5-04(c) of Article 5 of Regulation S-X.\n\n## SCHEDULE II-VALUATION AND QUALIFYING ACCOUNTS AND RESERVES\n\n## HORMEL FINANCIAL SERVICES CORPORATION\n\n(In Thousands)\n\nNote (1) -Uncollectible accounts written off.\n\nNote (2) -Recoveries on accounts previously written off.\n\nNote (3) -Increase in the reserve due to the inclusion of The Turkey Store Company accounts receivable.\n\nNote (4) -Increase in the reserve due to the inclusion of Diamond Crystal Brands accounts receivable.\n\n## LIST OF EXHIBITS\n\n## HORMEL FOODS CORPORATION\n\n| 2.1 (1) | Agreement and Plan of Merger and Plan of Reorganization dated January 22, 2001, by and among Hormel, Badger Acquisition Corporation, Jerome Foods, Inc. and Jerome K. Jerome. (Incorporated by reference to Hormel's Current Report on Form 8-K dated March 9, 2001, File No. 001-02402.) |\n|-----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| 3.1 (1) | Certificate of Incorporation as amended to date. (Incorporated by reference to Exhibit 3A-1 to Hormel's Annual Report on Form 10- K/A for the fiscal year ended October 28, 2000, File No. 001-02402.) 
|", - "page_start": 11, - "page_end": 11, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "HORMEL, ALWAYS TENDER, AMERICAN CLASSICS, AUSTIN BLUES, BLACK LABEL, CARAPELLI, CHI-CHI'S, CURE 81, CUREMASTER, DAN'S PRIZE, DIAMOND CRYSTAL, DI LUSSO, DINTY MOORE, DUBUQUE, EL TORITO, FAST 'N EASY, HERB-OX, HERDEZ, HOMELAND, HOUSE OF TSANG, JENNIE-O TURKEY STORE, KID'S KITCHEN, LAYOUT, LITTLE SIZZLERS, MARRAKESH EXPRESS, MARY KITCHEN, OLD SMOKEHOUSE, PATAK'S, PELOPONNESE, PILLOW PACK, QUICK MEAL, RANGE BRAND, ROSA GRANDE, SANDWICH MAKER, SPAM, STAGG, SWEET THING, THICK & EASY and WRANGLERS.\n\n## Customers and Backlog Orders\n\nDuring fiscal year 2003, no customer accounted for more than 10 percent of total Company sales. The five largest customers in each segment make up approximately the following percentage of segment sales: 39 percent of Grocery Products, 39 percent of Refrigerated Foods, 35 percent of JOTS, 51 percent of Specialty Foods, and 27 percent of All Other. The loss of one or more of the top customers in any of these segments could have a material adverse effect on the results of such segment. Backlog orders are not significant due to the perishable nature of a large portion of the products. Orders are accepted and shipped on a current basis.\n\n## Competition\n\nThe production and sale of meat and food products in the United States and internationally are highly competitive. The Company competes with manufacturers of pork and turkey products, as well as national and regional producers of other meat and protein sources, such as beef, chicken and fish. The Company believes that its largest domestic competitors for its Refrigerated Foods segment in 2003 were Tyson Foods, Smithfield Foods and ConAgra Foods; for its Grocery Products segment, ConAgra Foods, Dial Corp. and Campbell Soup Co.; and for JOTS, ConAgra Foods and Cargill, Inc.\n\nAll Hormel segments compete on the basis of price, product quality, brand identification and customer service. 
Through aggressive marketing and strong quality assurance programs, the Company's strategy is to provide higher quality products that possess strong brand recognition, which would then support higher value perceptions from customers.\n\nThe Company competes using this same strategy in international markets around the world.\n\n## Research and Development\n\nResearch and development continues to be a vital part of the Company's strategy to extend existing brands and expand into new branded items. The expenditures for research and development for fiscal 2003, 2002 and 2001, respectively, were $13,165,000, $12,097,000 and $11,478,000. There are 42 professional employees engaged in full time research, 19 in the area of improving existing products and 23 in developing new products.\n\n## Employees\n\nAs of October 25, 2003, the Company had over 16,000 active employees.", - "page_start": 4, - "page_end": 4, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "Livestock slaughtered by the Company is purchased by Company buyers and commission dealers at sale barns and terminal markets or under long-term supply contracts at locations principally in Minnesota, Illinois, Iowa, Nebraska, Colorado and South Dakota. The cost of livestock and the utilization of the Company's facilities are affected by both the level and the methods of pork production in the United States. The hog production industry has been rapidly moving to very large, vertically integrated, year-round confinement operations operating under long-term supply agreements. This has resulted in fewer hogs being available on the spot cash market, which decreases the supply of hogs on the open market and can severely diminish the utilization of slaughter facilities and increase the cost of the raw materials they produce. 
The Company, along with others in the industry, uses long-term supply contracts to manage the effects of this trend and to assure a stable supply of raw materials while minimizing extreme fluctuations in costs over the longterm. This may result in costs for live hogs that are either higher or lower than the spot cash market depending on the relationship of the cash spot market to contract prices. Contract costs are fully reflected in the Company's reported financial results. In fiscal 2003, the Company purchased 79 percent of its hogs under long-term supply contracts.\n\nIn fiscal 2003, JOTS raised approximately 57 percent of the turkeys needed to meet its raw material requirements for whole bird and processed turkey products. Turkeys not sourced within the Company are contracted with independent turkey growers. JOTS' turkey-raising farms are located throughout Minnesota and Wisconsin. Production costs in raising turkeys are primarily subject to fluctuations in feed grain prices and to a lesser extent fuel costs.\n\n## Manufacturing\n\nThe Company has plants in Austin, Minnesota; Fremont, Nebraska; and Beijing, China that slaughter livestock for processing. Quality Pork Processors of Dallas, Texas, operates the slaughter facility at Austin under a custom slaughter arrangement.\n\nFacilities that produce manufactured items are located in Algona, Iowa; Aurora, Illinois; Austin, Minnesota; Beloit, Wisconsin; Bondurant, Iowa; Ft. Dodge, Iowa; Fremont, Nebraska; Houston, Texas; Knoxville, Iowa; Mitchellville, Iowa; Osceola, Iowa; Perrysburg, Ohio; Quakertown, Pennsylvania; Rochelle, Illinois; Savannah, Georgia; Sparta, Wisconsin; Stockton, California; Tucker, Georgia; Visalia, California; Wichita, Kansas; Beijing, China; and Shanghai, China. Company products are also custom manufactured by several other companies. 
The following are the Company's larger custom manufacturers: Lakeside Packing Company, Manitowoc, Wisconsin; Schroeder Milk, Maplewood, Minnesota; Steuben Foods, Jamaica, New York; Power Packaging, St. Charles, Illinois; Criders, Stilmore, Georgia; Tony Downs, St. James, Minnesota; and Concept Foods, Alma, Kansas. Power\n\nLogistics, Inc., based in St. Charles, Illinois, operates distribution centers for the Company in Dayton, Ohio, and Osceola, Iowa.\n\nThe Company's turkey slaughter and processing operations are located in Barron, Wisconsin; Faribault, Minnesota; Melrose, Minnesota; Montevideo, Minnesota; Pelican Rapids, Minnesota; and Willmar, Minnesota.\n\n## Patents and Trademarks\n\nThere are numerous patents and trademarks that are important to the Company's business. The Company holds seven foreign and 47 U.S. issued patents. Some of the trademarks are registered and some are not. In recognition of the importance of these assets, the Company created a subsidiary, Hormel Foods, LLC, in 1998 to create, own, maintain and protect most of the Company's trademarks and patents. Some of the more significant owned or licensed trademarks used in the Company's segments are:", - "page_start": 4, - "page_end": 4, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "premises within the complex. The agreement is subject to the implementation of proposed gaming law reforms and a tax structure acceptable to the Company, and obtaining required planning and other approvals.\n\nMacau. In connection with the Company's pending joint venture in Macau (see Note 1), the Company has committed to invest up to $280 million in the entity in the form of capital contributions and shareholder loans.\n\nNew York Racing Association. The Company has an understanding with the New York Racing Association ('NYRA') to manage video lottery terminals ('VLTs') at NYRA's Aqueduct horseracing facility in metropolitan New York. 
The Company would assist in the development of the facility, including providing project financing, and would manage the facility for a fee. Work was halted on the VLT facility in August 2003 pending the outcome of an investigation of certain aspects of NYRA's operations by Federal prosecutors. In December 2003, NYRA reached agreement with the Justice Department whereby NYRA was indicted with prosecution deferred. NYRA agreed to pay a fine and the indictment will be dismissed with prejudice upon NYRA implementing certain reforms and otherwise complying with the terms of the agreement. The Company's participation is subject to a definitive agreement, regulatory approvals and certain legislative changes by the State of New York.\n\nThe Residences at MGM Grand. In July 2004, the venture obtained construction financing for up to $210 million for the development of the first tower. The Company has provided a guaranty for up to 50% of the interest and principal payment obligations on the construction financing as well as a joint and several completion guaranty with its partners. The Company recorded the value of the guaranty obligation, approximately $2 million, in other long-term liabilities.\n\nOther Guarantees. The Company is party to various guarantee contracts in the normal course of business, which are generally supported by letters of credit issued by financial institutions. The Company's Senior Credit Facility limits the amount of letters of credit that can be issued to $200 million, and the amount of available borrowings under the Senior Credit Facility is reduced by any outstanding letters of credit. At December 31, 2004, the Company had provided a $50 million letter of credit to support the Economic Development Corporation of the City of Detroit bonds referred to above, which are a liability of the Company.\n\nLitigation. The Company is a party to various legal proceedings, most of which relate to routine matters incidental to its business. 
Management does not believe that the outcome of such proceedings will have a material adverse effect on the Company's financial position or results of operations.", - "page_start": 71, - "page_end": 71, - "source_file": "NYSE_MGM_2004.pdf" - }, - { - "text": "## NOTES TO CONSOLIDATED FINANCIAL STATEMENTS\n\n## Nature of Operations\n\nHON INDUSTRIES Inc., with its subsidiaries (the 'Company'), is a provider of office furniture and hearth products. Both industries are reportable segments; however, the Company's office furniture business is its principal line of business. Refer to the Operating Segment Information note for further information. Office furniture products are sold through a national system of dealers, wholesalers, mass merchandisers, warehouse clubs, retail superstores, end-user customers, and to federal and state governments. Dealer, wholesaler, and retail superstores are the major channels based on sales. Hearth products include electric, wood-, pellet-, and gas-burning factory-built fireplaces, fireplace inserts, stoves, and gas logs. These products are sold through a national system of dealers, wholesalers, large regional contractors, and Company-owned retail outlets. The Company's products are marketed predominantly in the United States and Canada. The Company exports select products to a limited number of markets outside North America, principally Latin America and the Caribbean, through its export subsidiary; however, based on sales, these activities are not significant.\n\n## Summary of Significant Accounting Policies\n\n## PRINCIPLES OF CONSOLIDATION AND FISCAL YEAR-END\n\nThe consolidated financial statements include the accounts and transactions of the Company and its subsidiaries. Intercompany accounts and transactions have been eliminated in consolidation.\n\nThe Company follows a 52/53-week fiscal year which ends on the Saturday nearest December 31. 
Fiscal year 2003 ended on January 3, 2004; 2002 ended on December 28, 2002; and 2001 ended on December 29, 2001. The financial statements for fiscal year 2003 are based on a 53-week period; fiscal years 2002 and 2001 are on a 52-week basis.\n\n## CASH, CASH EQUIVALENTS, AND INVESTMENTS\n\nCash and cash equivalents generally consist of cash, money market accounts, and debt securities. These securities have original maturity dates not exceeding three months from date of purchase. The Company has short-term investments with maturities of less than one year and also has investments with maturities greater than one year that are included in Other Assets on the consolidated balance sheet. Management classifies investments in marketable securities at the time of purchase and reevaluates such classification at each balance sheet\n\ndate. Equity securities are classified as available-for-sale and are stated at current market value with unrealized gains and losses included as a separate component of equity, net of any related tax effect. Debt securities are classified as held-to-maturity and are stated at amortized cost. The specific identification method is used to determine realized gains and losses on the trade date. Short-term investments include municipal bonds, money market preferred stock, and U.S. treasury notes. Longterm investments include U.S. 
government securities, municipal bonds, certificates of deposit, and asset- and mortgage-backed securities.\n\nAt January 3, 2004, and December 28, 2002, cash, cash equivalents and investments consisted of the following (cost approximates market value):", - "page_start": 42, - "page_end": 42, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "Use these links to rapidly review the document HORMEL FOODS CORPORATION TABLE OF CONTENTS\n\n## ANNUAL REPORT ON FORM 10-K HORMEL FOODS CORPORATION OCTOBER 25, 2003\n\n## FORM 10-K\n\nANNUAL REPORT PURSUANT TO SECTION 13 OR 15 (d) OF THE SECURITIES EXCHANGE ACT OF 1934\n\n## HORMEL FOODS CORPORATION\n\n(Exact name of registrant as specified in its charter)\n\n## DELAWARE\n\n41-0319970\n\n(State or other jurisdiction of incorporation or organization)\n\n(I.R.S. Employer Identification No.)\n\n## 1 HORMEL PLACE AUSTIN, MINNESOTA\n\n55912-3680\n\n(Address of principal executive offices)\n\n(Zip Code)\n\nRegistrant's telephone number, including area code (507) 437-5611\n\nSecurities registered pursuant to Section 12 (b) of the Act:\n\nCOMMON STOCK, PAR VALUE $.0586 PER SHARE\n\nTitle of Each Class\n\nNEW YORK STOCK EXCHANGE\n\nName of Each Exchange On Which Registered\n\nSecurities registered pursuant to Section 12 (g) of the Act:\n\nIndicate by check mark whether the registrant (1) has filed all reports required to be filed by Section 13 or 15(d) of the Securities Exchange Act of 1934 during the preceding 12 months, and (2) has been subject to such filing requirements for the past 90 days. Yes ý No o\n\nIndicate by check mark if disclosure of delinquent filers pursuant to Item 405 of Regulation S-K is not contained herein, and will not be contained, to the best of registrant's knowledge in definitive proxy or information statements incorporated by reference in Part III of this Form 10-K or any amendments to this Form 10-K. 
o\n\nIndicate by check mark whether the registrant is an accelerated filer (as defined in Rule 12b-2 of the Act). Yes ý No o\n\nThe aggregate market value of the voting stock held by non-affiliates of the registrant as of April 26, 2003 (the last business day of the registrant's most recently completed second fiscal quarter), was $1,592,020,962 based on the closing price of $21.74 per share on that date.\n\nAs of December 1, 2003, the number of shares outstanding of each of the Corporation's classes of common stock was as follows:\n\nCommon Stock, $.0586 Par Value-138,672,803 shares\n\nCommon Stock Non-Voting, $.01 Par Value-0 shares\n\n## DOCUMENTS INCORPORATED BY REFERENCE\n\nPortions of the Annual Stockholders' Report for the year ended October 25, 2003, are incorporated by reference into Part I and Part II Items 5-8, and included as exhibit 13.1 filed herewith.\n\nHORMEL FOODS CORPORATION\n\nTABLE OF CONTENTS", - "page_start": 1, - "page_end": 1, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "## Certain Investigations and Other Matters\n\nWe regularly receive requests for information, including subpoenas, from regulators and governmental authorities such as the National Highway Traffic Safety Administration, the National Transportation Safety Board, the Securities and Exchange Commission ('SEC'), the Department of Justice ('DOJ'), and various local, state, federal, and international agencies. The ongoing requests for information include topics such as operations, technology (e.g., vehicle functionality, vehicle incidents, Autopilot and FSD Capability), compliance, finance, data privacy, and other matters related to Tesla's business, its personnel, and related parties. We routinely cooperate with such formal and informal requests for information, investigations, and other inquiries. To our knowledge no government agency in any ongoing investigation has concluded that any wrongdoing occurred. 
We cannot predict the outcome or impact of any ongoing matters. Should the government decide to pursue an enforcement action, there exists the possibility of a material adverse impact on our business, results of operation, prospects, cash flows, financial position or brand.\n\nWe are also subject to various other legal proceedings, risks and claims that arise from the normal course of business activities. For example, during the second quarter of 2023, a foreign news outlet reported that it obtained certain misappropriated data including, purportedly non-public Tesla business and personal information. Tesla has made notifications to potentially affected individuals (current and former employees) and regulatory authorities and we are working with certain law enforcement and other authorities. On August 5, 2023, a putative class action was filed in the United States District Court for the Northern District of California, purportedly on behalf of all U.S. individuals impacted by the data incident, followed by several additional lawsuits, that each assert claims under various state laws and seeks monetary damages and other relief. 
If an unfavorable ruling or development were to occur in these or other possible legal proceedings, risks and claims, there exists the possibility of a material adverse impact on our business, results of operations, prospects, cash flows, financial position or brand.\n\n## Note 11 - Variable Interest Entity Arrangements\n\nThe aggregate carrying values of the variable interest entities' assets and liabilities, after elimination of any intercompany transactions and balances, in the consolidated balance sheets were as follows (in millions):\n\nTable of Contents", - "page_start": 29, - "page_end": 29, - "source_file": "tesla_form_10q.pdf" - } - ] - }, - { - "references": { - "source_file": "Open_Data_Report.pdf", - "query": "What is Mexican Farm Subsidies ?", - "target_page": 9, - "target_passage": "an online tool to analyze how the federal government allocates those subsidies", - "chunk_present": { - "presence": true, - "index": 5 - } - }, - "top_chunk": [ - { - "text": "currently in favourable status are in that category or show a strong positive trend. The Commission and the European Environmental Agency will provide guidance to Member States in 2020 on how to select and prioritise species and habitats.\n\n## 2.2.2. Bringing nature back to agricultural land\n\nAs guardians of our land, farmers play a vital role in preserving biodiversity. They are among the first to feel the consequences when biodiversity is lost but also among the first to reap the benefits when it is restored. Biodiversity enables them to provide us with safe, sustainable, nutritious and affordable food and provides them with the income they need to thrive and develop. European farmers are an essential part of the EU's future and must continue to be the social and economic hub of many communities across our Union.\n\nAt the same time, certain agricultural practices are a key driver of biodiversity decline. 
This is why it is important to work with farmers to support and incentivise the transition to fully sustainable practices . Improving the condition and diversity of agroecosystems will increase the sector's resilience to climate change, environmental risks and socioeconomic shocks, while creating new jobs, for example in organic farming, rural tourism or recreation.\n\nTo support the long-term sustainability of both nature and farming, this strategy will work in tandem with the new Farm to Fork Strategy and the new Common Agricultural Policy (CAP) , including by promoting eco-schemes and result-based payment schemes. In implementing the Biodiversity and the Farm to Fork Strategies, the Commission will closely monitor progress and improvements in terms of food security and farmers income. The Commission will ensure that the CAP Strategic plans are assessed against robust climate and environmental criteria, and that Member States set explicit national values for the relevant targets set in this strategy, as well as in the Farm to Fork Strategy. These plans should lead to sustainable practices such as precision agriculture, organic farming, agro-ecology, agro-forestry, low-intensive permanent grassland, and stricter animal welfare standards.\n\nFarmland birds and insects, particularly pollinators, are key indicators of the health of agroecosystems and are vital for agricultural production and food security. Their alarming decline must be reversed. As set out in the Farm to Fork Strategy, the Commission will take action to reduce by 50% the overall use of - and risk from chemical pesticides by 2030 and reduce by 50% the use of more hazardous pesticides by 2030. This must be supported by the full implementation of the EU Pollinators initiative 31 . By the end of 2020, the Commission will review the initiative and propose additional measures if necessary. 
To provide space for wild animals, plants, pollinators and natural pest regulators, there is an urgent need to bring back at least 10% of agricultural area under high-diversity landscape features . These include, inter alia , buffer strips, rotational or non-rotational fallow land, hedges, non-productive trees, terrace walls, and ponds. These help enhance carbon sequestration, prevent soil erosion and depletion, filter air and water, and support climate adaptation. In addition, more biodiversity often helps lead to more agricultural production. Member States will need to translate the 10% EU target to a lower geographical scale to ensure connectivity among habitats, especially through the CAP instruments and CAP Strategic Plans, in line with the Farm to Fork Strategy, and through the implementation of the Habitats Directive. The", - "page_start": 7, - "page_end": 7, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "progress towards the target will be under constant review, and adjustment if needed, to mitigate against undue impact on biodiversity, food security and farmers' competitiveness.\n\nAgroecology can provide healthy food while maintaining productivity, increase soil fertility and biodiversity, and reduce the footprint of food production. Organic farming in particular holds great potential for farmers and consumers alike. The sector creates jobs and attracts young farmers. Organic farming also provides 10-20 % more jobs per hectare than conventional farms, and creates added value for agricultural products 32 . To make the most of this potential, at least 25% of the EU's agricultural land must be organically farmed by 2030 . In addition to CAP measures, the Commission will put forward an Action Plan on organic farming, helping Member States stimulate both supply and demand of organic products. It will also ensure consumer's trust through promotion campaigns and green public procurement. 
In the implementation of the EU-wide agroecological targets set out in this strategy and in the Farm to Fork Strategy, the different starting points and differences in progress already made in Member States will be taken into account.\n\nThe uptake of agroforestry support measures under rural development should be increased as it has great potential to provide multiple benefits for biodiversity, people and climate.\n\nThe decline of genetic diversity must also be reversed, including by facilitating the use of traditional varieties of crops and breeds. This would also bring health benefits through more varied and nutritious diets. The Commission is considering the revision of marketing rules for traditional crop varieties in order to contribute to their conservation and sustainable use. The Commission will also take measures to facilitate the registration of seed varieties, including for organic farming, and to ensure easier market access for traditional and locally adapted varieties.\n\n## 2.2.3. Addressing land take and restoring soil ecosystems\n\nSoil is one of the most complex of all ecosystems. It is a habitat in its own right, and home to an incredible diversity of organisms that regulate and control key ecosystem services such as soil fertility, nutrient cycling and climate regulation. Soil is a hugely important non-renewable resource , vital for human and economic health, as well as the production of food and new medications.\n\nIn the EU, the degradation of soil is having considerable environmental and economic consequences. Poor land management, such as deforestation, overgrazing, unsustainable farming and forestry practices, construction activities and land sealing are among the main causes of this situation 33 . Despite recent reductions in the pace of soil sealing, fertile soils continue to be lost to land take and urban sprawl 34 . 
When compounded by", - "page_start": 8, - "page_end": 8, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "- (b) 'edible horticulture' means growing-\n - (i) protected vegetables grown in glasshouse systems,\n - (ii) field vegetables grown outdoors, including vegetables, herbs, leafy salads and potatoes,\n - (iii) soft fruit grown outdoors or under cover,\n - (iv) trees that bear fruit,\n - (v) vines and bines,\n - (vi) mushrooms;\n - (c) 'specified farm' means the farm named in that person's passenger information;", - "page_start": 45, - "page_end": 45, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "Livestock slaughtered by the Company is purchased by Company buyers and commission dealers at sale barns and terminal markets or under long-term supply contracts at locations principally in Minnesota, Illinois, Iowa, Nebraska, Colorado and South Dakota. The cost of livestock and the utilization of the Company's facilities are affected by both the level and the methods of pork production in the United States. The hog production industry has been rapidly moving to very large, vertically integrated, year-round confinement operations operating under long-term supply agreements. This has resulted in fewer hogs being available on the spot cash market, which decreases the supply of hogs on the open market and can severely diminish the utilization of slaughter facilities and increase the cost of the raw materials they produce. The Company, along with others in the industry, uses long-term supply contracts to manage the effects of this trend and to assure a stable supply of raw materials while minimizing extreme fluctuations in costs over the longterm. This may result in costs for live hogs that are either higher or lower than the spot cash market depending on the relationship of the cash spot market to contract prices. Contract costs are fully reflected in the Company's reported financial results. 
In fiscal 2003, the Company purchased 79 percent of its hogs under long-term supply contracts.\n\nIn fiscal 2003, JOTS raised approximately 57 percent of the turkeys needed to meet its raw material requirements for whole bird and processed turkey products. Turkeys not sourced within the Company are contracted with independent turkey growers. JOTS' turkey-raising farms are located throughout Minnesota and Wisconsin. Production costs in raising turkeys are primarily subject to fluctuations in feed grain prices and to a lesser extent fuel costs.\n\n## Manufacturing\n\nThe Company has plants in Austin, Minnesota; Fremont, Nebraska; and Beijing, China that slaughter livestock for processing. Quality Pork Processors of Dallas, Texas, operates the slaughter facility at Austin under a custom slaughter arrangement.\n\nFacilities that produce manufactured items are located in Algona, Iowa; Aurora, Illinois; Austin, Minnesota; Beloit, Wisconsin; Bondurant, Iowa; Ft. Dodge, Iowa; Fremont, Nebraska; Houston, Texas; Knoxville, Iowa; Mitchellville, Iowa; Osceola, Iowa; Perrysburg, Ohio; Quakertown, Pennsylvania; Rochelle, Illinois; Savannah, Georgia; Sparta, Wisconsin; Stockton, California; Tucker, Georgia; Visalia, California; Wichita, Kansas; Beijing, China; and Shanghai, China. Company products are also custom manufactured by several other companies. The following are the Company's larger custom manufacturers: Lakeside Packing Company, Manitowoc, Wisconsin; Schroeder Milk, Maplewood, Minnesota; Steuben Foods, Jamaica, New York; Power Packaging, St. Charles, Illinois; Criders, Stilmore, Georgia; Tony Downs, St. James, Minnesota; and Concept Foods, Alma, Kansas. Power\n\nLogistics, Inc., based in St. 
Charles, Illinois, operates distribution centers for the Company in Dayton, Ohio, and Osceola, Iowa.\n\nThe Company's turkey slaughter and processing operations are located in Barron, Wisconsin; Faribault, Minnesota; Melrose, Minnesota; Montevideo, Minnesota; Pelican Rapids, Minnesota; and Willmar, Minnesota.\n\n## Patents and Trademarks\n\nThere are numerous patents and trademarks that are important to the Company's business. The Company holds seven foreign and 47 U.S. issued patents. Some of the trademarks are registered and some are not. In recognition of the importance of these assets, the Company created a subsidiary, Hormel Foods, LLC, in 1998 to create, own, maintain and protect most of the Company's trademarks and patents. Some of the more significant owned or licensed trademarks used in the Company's segments are:", - "page_start": 4, - "page_end": 4, - "source_file": "NYSE_HRL_2004.pdf" - }, - { - "text": "- (d) 'specified activities' means-\n - (i) crop maintenance,\n - (ii) crop harvesting,\n - (iii) tunnel construction and dismantling,\n - (iv) irrigation installation and maintaining,\n - (v) crop husbandry,\n - (vi) packing and processing of crops on employer's premises,\n - (vii) preparing and dismantling growing areas and media,\n - (viii) general primary production work in edible horticulture,\n - (ix) activities relating to supervising teams of horticulture workers.\n - 44. -(1) A domestic elite sportsperson, an international elite sportsperson, a domestic ancillary sportsperson or an international ancillary sportsperson.", - "page_start": 46, - "page_end": 46, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "Right now, one of the most active Asian countries in the Open Data arena is India, which also signed an Open Government partnership with the USA in November 2010. In January 2011 the Indian Congress Party announced plans for a new law to fight corruption among public servants and politicians. 
Anti-corruption websites (including ones in local dialects) like Indiaagainstcorruption.org, already existed, including one, Ipaidabribe.com, that collected more than 3,000 people reports of graft in its first four months.\n\nAs it happens in Asia, even Latin America is currently focused, at least outside Public Administration circles, on how to open public data to achieve actual transparency. This appears even from the way many projects are labeled, that is \"Civic Information\" instead of Open Data (which is an idea starting from data reuse ) or Open Government.\n\nThe reason is that even where good Freedom of Information laws exist in Latin America, they still have too little practical effects. Mexico, for example, already has a digital system to manage Freedom of Information requests, but there are reports of complaints filed against municipal officials that either have no effect at all, or aren't possible in the first place, because relevant information has not been updated in years, or omits key data like (in the case of budget reports) \"descriptions of how the money was spent\" .\n\nEven with these difficulties, the Latin America Open Data/Civic Information landscape is active and definitely worthwhile following. 
The list of interesting Civic Information projects in Latin America include (from Sasaki's Access to Information: Is Mexico a Model for the Rest of the World?:\n\n## · Mexico\n\n - · Mexican Farm Subsidies - an online tool to analyze how the federal government allocates those subsidies\n - · Compare Your School : compares aggregate test results from any school with the municipal, regional, and national averages\n - · Rebellion of the Sick built for patients with chronic diseases whose expenses are not covered by the government subsidized health coverage.\n - · Argentina: Public Spending in Bahía analyzes how public funds are used.\n - · Colombia: Visible Congress monitors the actions of the Colombian congress\n - · Brazil\n - · Eleitor 2010 : a website to submit reports of electoral fraud during the Brazil 2010", - "page_start": 8, - "page_end": 8, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "Faced with climate change, agriculture is the most vulnerable sector, which will experience the largest negative impacts from climatic change and lead to more serious food security in the whole world 15-20 . Meanwhile, global production losses might lead to price shocks and trigger export restrictions 21-24 ; an increasingly interconnected global food system 25,26 and the projected fragility of the global food production system due to climatic change further exacerbate the threats to food security in the worldwide 27-29 . So, the impacts of climate changes on crop yields and prices have been of highly concerned. Numerous studies have revealed that the warming trend has negative impact on crop yields and global trade in most regions all over the world 30-32 . /T\\_here are three main methods for impacts assessment of climate change on crops, including environment-controlled experiments, statistical regression analysis and model simulations 17,33 . 
Environment-controlled experiments are designed to observe the in/fluence of climate factors on crops, such as drought, /flood, heat stress, cold damage, elevated CO 2 concentration, through which the impact mechanism of climate change on crops would be revealed and established 23,34,35 . Crop models and trade models are applied to simulate the response of crop yield and market price under climate change, based on process-based crop growth in daily time steps, either in selected /field sites or in selected regions 36-39 . /T\\_he statistical regression analysis usually explores the relationship between historical crop yields and meteorological records in di/fferent sites or counties to establish regression functions for crop responses predictions 40-43 . /T\\_hese researches have documented that crop yield and price would be threatened much more seriously by global warming, especially due to the increasing trend of frequency and intensity of climate extreme events in the future.\n\nͷ Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing ͷͶͶͶ;ͷ, China. ͸ International Maize and Wheat Improvement Center, Texcoco, Mexico. ͹ Peking University, Beijing, China. * email: hqlk͸ͶͶͶ@ͷͼ͹.com\n\nglyph<c=25,font=/NDNDJN+Corbel>", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed9.pdf" - }, - { - "text": "<!-- image -->\n\nFigure 6. Yield loss rates on maize in 6 continents under global warming by 1.5 °C and 2.0 °C.\n\n<!-- image -->\n\nMarket price of maize in main countries. In this study, we elaborate on the endogenous response of our economic models. /T\\_his response can be theoretically elaborated as: due to the e/ffect of climate change on yield reduction (improvement), the supply curve moves le/f\\_tward (rightward), reducing (increasing) production and raising (lowering) prices. 
In response, the consumers decrease (increase) their consumption of more expensive (cheaper) crops and shi/f\\_ting to other (increase the use of the same) crops. Producers, at the same time, respond by changing farm-level management practices and increasing (decreasing) the amount of acreage under these crops. At a global scale, the reallocation of production and consumption through international trade further alters climate change impacts on global agriculture. /T\\_his also alters the self-su/fficiency ratios of each country/ region due to climate change.\n\nIn response to production changes, the price of each commodity changes under both scenarios. At the global level, the market price for maize would increase by 0.7% and 3.4% under 1.5 °C scenario and 2.0 °C scenario, respectively, which would vary quite largely among di/fferent countries and regions under both climate change scenarios (Fig. 7). Particularly, the market price would increase by around 22% and 27% in Iran under 2.0 °C scenario and 1.5 °C scenario, respectively. Iran is also the region where the highest yield reduction is observed due to climate change. Market prices for maize in India, Mexico, Russia, South Africa and the Rest of Africa would decrease signi/ficantly under both scenarios, as their yields improve due to climate e/ffects. Along with the domestic production, the climate change will also induce changes in international trade of maize, resulting in changing levels of self-su/fficiency ratios (SSR) for each country/region. By SSR, we mean the ratio of domestically produced commodity, to the sum of net imports and domestic production. In our scenario analysis, generally, the countries that face positive e/ffects on yields and/or are relatively less dependent on imports, are positively (less negatively) a/ffected by climate change. For example, maize SSR for Ukraine, India, Russia and Mexico would improve under both scenarios (Fig. 8). 
Whereas the self-su/fficiency ratios of maize for Southeast Asia, Bangladesh and Iran will worsen under both scenarios. China's SSR for maize stays almost similar to the level as the baseline.\n\n## Discussion and conclusion\n\nDiscussion. Our analysis highlights the e/ffects of climate change on global- and regional-speci/fic maize yields and the associated economic consequences in 1.5 °C and 2.0 °C -warming scenarios. We /find that the reduction risk of maize yield under global warming by 2.0 °C is much more serious than that under global warming by 1.5 °C. On the one hand, the larger the temperature rise, the greater the evapotranspiration would be. Although the precipitation is also increasing, the evapotranspiration would become more intense. /T\\_he limitation of water supply for maize growth leads to the decline of yield. On the other hand, relative to global warming by 1.5 °C, maize production would be faced with more serious and frequent extreme climate events, such as drought and heat waves, which would increase the risk of corn yield reduction under global warming by 2.0 °C. In the\n\nVol:.(1234567890)", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed9.pdf" - }, - { - "text": "Even during the past three decades the trend from agriculture and industry to other service-dominated sectors continued in the EU, as the following Eurostat figure shows. 
The share of employees in agriculture went down from 8% to 4%, and also down in industry from 21% to 15%, construction remained quite stable between 6% and 7% whilst all the service sectors (except 'Financial services and insurance') gained a bigger share, particularly 'Professional, scientific and technical activities'.", - "page_start": 100, - "page_end": 100, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Afforestation, reforestation and tree planting to support biodiversity and ecosystem restoration will be promoted through the CAP Strategic Plans, and the Cohesion Policy funds. The new European Urban Greening Platform 38 will also facilitate urban tree planting, including under the LIFE programme.\n\nThe share of forest areas covered by management plans should cover all managed public forests and an increased number of private forests, and biodiversity-friendly practices such as closer-to-nature-forestry should continue and be further developed. To support this, the Commission will develop guidelines on biodiversity-friendly afforestation and reforestation and closer-to-nature-forestry practices. This will be done in parallel with the new EU Forest Strategy.\n\nTo gain a better picture of the health of European forests, the Commission will work with other data providers to further develop the Forest Information System for Europe . This will help produce up-to-date assessments of the condition of European forests and link all EU forest-data web-platforms. This will also be presented as part of the EU Forest Strategy.\n\n## 2.2.5. Win-win solutions for energy generation\n\nDecarbonising the energy system is critical for climate neutrality, as well as for the EU's recovery from the COVID-19 crisis and long-term prosperity. More sustainably sourced renewable energy will be essential to fight climate change and biodiversity loss. 
The EU will prioritise solutions such as ocean energy, offshore wind, which also allows for fish stock regeneration, solar-panel farms that provide biodiversity-friendly soil cover, and sustainable bioenergy.\n\nTo mitigate climate and environmental risks created by the increasing use of certain sources for bioenergy, the revised Renewable Energy Directive 39 includes strengthened sustainability criteria. It also promotes the shift to advanced biofuels based on residues and non-reusable and non-recyclable waste. This approach should continue for all forms of bioenergy. The use of whole trees and food and feed crops for energy production whether produced in the EU or imported - should be minimised.\n\nTo better understand and monitor the potential climate and biodiversity risks, the Commission is assessing the EU and global biomass supply and demand and related sustainability 40 . As part of its increased ambition to protect and restore forest ecosystems, the Commission will publish the results of this work on the use of forest biomass for energy production by the end of 2020. This will inform the Commission's policymaking, including the review and revision, where necessary, of the level of ambition of the Renewable Energy Directive, the Emissions Trading Scheme, and the Regulation on land use, land use change and forestry (LULUCF) set for 2021.\n\nIn line with the Renewable Energy Directive, the Commission will also develop operational guidance in 2021 on the new sustainability criteria on forest biomass for", - "page_start": 10, - "page_end": 10, - "source_file": "legal5_eubiodiversity_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "Open_Data_Report.pdf", - "query": "What concerns has open data raised in the insurance sector?", - "target_page": 23, - "target_passage": "insurance companies may charge higher fees for life insurance to those among their customers who... 
put online a family tree from which it shows that they come from families with an average life expectancy lower than usual", - "chunk_present": { - "presence": true, - "index": 5 - } - }, - "top_chunk": [ - { - "text": "there is no mandate to support one group to centralize it.\n\nKenya's own OpenData.go.ke website has only ever seen a small handful of data sets, none of which are now (early April 2011) available anymore. Groups like the Ministry of Education might publish some information on schools, but they won't give anyone the location data.\n\n## 3. Emerging trends and issues related to Open Data\n\nOne of the most common activities for Open Data activists in this moment is the creation of country-wide catalogs of all data sources, to facilitate individuation and correlation of independent data sets. Normally, all initiatives of this type are announced on the Open Knowledge Foundation blog and/or its data hub CKAN. Another relevant development is the publication of an Open Data Manual that \"can be used by anyone but is especially designed for those seeking to open up data, since it discusses why to go open, what open is, and the how to 'Open' Data.\" Activists in several European countries have already published local versions of the manual, or equivalent documents. On this background, several interesting issues, some of which were anticipated in the Open Data, Open Society report, are coming in full light. They are presented, one at a time, in the following sections of this chapter.\n\n## 3.1. Cost of not opening PSI is increasing\n\nMuch has been said on the economic benefits of opening public sector information, and much more remains to be said and studied. One part of this issue that is becoming more evident over time is that Open Data are the simplest, if not the only way, to save Public Administrations from the costs that they have already (and rightfully!) forced themselves to bear, through assorted laws and official regulations. 
This is explained well in the report from LinkedGov about the economic impact of open data:\n\n(p. 2) \"As the costs of disseminating and accessing information have declined, the transactions costs associated with charging for access to information, and controlling subsequent redistribution have come to constitute a major barrier to access in themselves. As a result, the case for free (gratis) provision of Public Sector Information is stronger than has already been recognized.\n\nEaves provides a practical example from Canada in Access to Information is Fatally Broken… You Just Don't Know it Yet: the number of Access to Information Requests (ATIP) has almost tripled", - "page_start": 10, - "page_end": 10, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "officially lobbying Public Administrations to get the PSI they could use for the same purposes. As other suggestions made here, these are activities that should start at the city and regional level, first with custom-made education initiatives, then with specific data-based services. Engaging all these actors in the adoption of (local) Open Data will be one of the big challenges of the next years.\n\n## 5. Bibliography\n\nBesides those explicitly linked from the text, this report has drawn inspiration by many other resources. The most important ones are listed here, but the complete list should be much longer. We wish to thank first the authors of the works listed below and, immediately after, to all the activists, inside and outside governments worldwide, who are working on this topic.\n\n - 1. Are you prepared for the pitfalls of Gov 2.0?\n - 2. Can we use Mobile Tribes to pay for the costs of Open Data?\n - 3. Canada launches data.gc.ca - what works and what is broken\n - 4. Creative Commons and data bases: huge in 2011, what you can do\n - 5. Defining Gov 2.0 and Open Government\n - 6. How Government Data Can Improve Lives\n - 7. If you like solar, tell your utility to publish this map\n - 8. 
Indian corruption backlash builds after \"year of the treasure hunters\"\n - 9. Información Cívica / Just What is Civic Information?\n - 10.Is open government just about information?\n - 11.LSDI : In un click la mappa del crimine\n - 12.La casta è online: dategli la caccia!\n - 13.Linee guida UK sull'opendata\n - 14.MSc dissertation on Open Government Data in the UK\n - 15.Open Data (2): Effective Data Use .\n - 16.Open Data: quali prospettive per la pianificazione?\n - 17.Open Knowledge Foundation Blog \" Blog Archive \" Keeping Open Government Data\n - Open?\n - 18.Open data, democracy and public sector reform\n - 19.Pubblicato Camere Aperte 2011 - blog - OpenParlamento\n - 20.Reasons for not releasing data in government\n - 21.The impact of open data: first evidence", - "page_start": 32, - "page_end": 32, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "## 4. Conclusion: seven Open Data strategy and best practices suggestions\n\nStarting from the trends and conclusion described in the previous chapter, this section lists, in the most synthetic way possible, some strategic actions and best practices for 2011, that we consider important in making Open Data succeed and bring the greatest possible benefits to all citizens and businesses.\n\n## 4.1. Properly define and explain both Open Data and Public Data\n\nJust because Open Data is becoming more popular (and, we may say, more and more necessary every year), it is essential to intensify efforts to explain, both to the general public and to public administrators, that\n\n - 1. Privacy issues are almost always a non-issue. Quoting from What \"open data\" means and what it doesn't): Privacy and/or security concerns with putting all the government's data out there are a separate issue that shouldn't be confused with Open Data. Whether data should be made publicly available is where privacy concerns come into play. 
Once it has been determined that government data should be made public, then it should be done openly.\n - 2. Defining as Public and consequently opening them in the right way, much more data than those born and stored inside Public Administration is an urgent task that is in the best interest of all citizens and businesses\n\n## 4.2. Keep political issues separated by economics ones\n\nOpen Data can reduce the costs of Public Administrations and generate (or at least protect, as in the case of deals from local merchants) local jobs in all sectors of the economy, not just high-tech ones. There seems to be enough evidence for these two assertions to go for more Open Data even if they had no effect at all on participation to politics. This should always be kept in mind, also because some data that can directly stimulate business are not the same that would be useful for transparency.", - "page_start": 26, - "page_end": 26, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "- 22.Thinking About Africa's Open Data\n - 23.Towards EU Benchmarking 2.0 - Transparency and Open Data on Structural Funds in Europe\n - 24.UK Open Government Licence removes barriers to re-use of public sector information\n - 25.Western Europe: A journey through tech for transparency projects\n - 26.What open data means to marginalized communities\n - 27.What's in a Name? Open Gov and Good Gov\n - 28.WikiLeaks Relationship With the Media\n - 29.WikiLeaks, Open Information and Effective Use: Exploring the Limits of Open Government", - "page_start": 33, - "page_end": 33, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "coal plants. If data are not available, every conclusion is questionable because it relies on assumptions or estimates.\n\n## 2.3. Open Data in Latin America, Asia and Africa\n\nSeveral countries in Latin America are studying and making experiments with Open Data both at the government and at the grassroots level. 
The same is happening, on a much smaller scale, in a few parts of Asia and Africa. On average, the volume of these Open Data experiments and the level of local interest and awareness around them is still lower than what is happening in Europe and North America. In spite of this we suggest that it is important, for public officials and civic activists in Western Countries, to follow these developments closely. The reason is that they may turn into very useful test beds for all the strengths and limits of Open Data, especially those not encountered yet where the movement was born.\n\nIn fact, the original discourse and arguments around Open Data are heavily Western centric. The problem they want to solve is how to make democracy work better in countries where it already exists and which share a great amount of history and cultural/philosophical values .\n\nOther countries face very different challenges, from the philosophical level to the practical one. A common issue in developing countries, for example, is that there is very little to open simply because much PSI (Public Sector Information) doesn't exist in digital format yet. Therefore, the first thing to do is to create data, normally through outsourcing and crowd sourcing.\n\nOther issues, that will be discussed in detail in other sections of the report because they are also present in Europe in different forms, are related to lack of equal opportunities for access to data and serious fears (sometimes, concrete, sometimes caused by confusion about what should be open and how) that data will be used against citizens. A commenter to Gurstein's Open Data: Empowering the Empowered or Effective Data Use for Everyone? said:\n\nin Delhi and Mumbai, mobs and rioters managed to get information about particular identity groups through voter rolls: openness is, in certain situations, a precarious virtue. 
It is almost certain that Open Data would be used to rig election but here again openness is not the issue, they would find it anyway...\n\nSo far, the main interest about Open Data in Asian countries seems limited, so to speak, to its effects on transparency in politics. At a two-weeks programming contest held at the end of 2010 in Thailand, for example, one of the most appreciated entries was a software scraper of the Thailand's Member of House of Representative Website, that made it possible for everybody to create applications using those data.", - "page_start": 7, - "page_end": 7, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "digital, attacks to privacy and to civil rights in general can and are coming by so many other sides that those from (properly done) Open Data are a really tiny percentage of the total.\n\nThis is a consequence of the fact that data about us end up online from the most different sources (including ourselves and our acquaintances), and that often it would be very hard to discover, never mind prove , that they've been used against our interest. There have been concerns, for example, that insurance companies may charge higher fees for life insurance to those among their customers who... put online a family tree from which it shows that they come from families with an average life expectancy lower than usual.\n\nAssuming such concerns were real, would it always be possible to spot and prove such abuses of data, that weren't even published by any Public Administration? Of course, publishing online complete, official Census data of several generations, in a way that would make such automatic analysis possible would be a totally different matter.\n\nGetting rid of all the unjustified concerns about privacy is very simple, at least in theory. All is needed to dismiss for good the idea that Open Data is a generalized attack to privacy is to always remember and explain that:\n\n - 1. 
Most Open Data have nothing personal to begin with (examples: digital maps, budgets, air pollution measurements....)\n - 2. The majority of data that are directly related to individuals (e.g. things like names and address of people with specific diseases, or who were victims of some crime) have no reason to be published, nor there is any actual demand for them by Open Data advocates\n - 3. Exceptions that limit privacy for specific cases and categories of people (e.g. candidates to public offices, Government and Parliament members etc...) already exist in many countries\n - 4. Very often, in practice, Open Data struggles only happen about when and how to make available in the most effective way for society information that was already recognized as public. What to declare public, hence open, is indeed a serious issue (more on this in the next paragraph) but is a separate one.\n\n## 3.8. Need to better define what is Public Data\n\nTogether with citizens education, there is a huge challenge that Governments and the Open Data movement will have to face (hopefully together) in 2011 and beyond. This challenge is to update and expand the definition of Public Data and to have it accepted by lawmakers and public administrators.", - "page_start": 22, - "page_end": 22, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "## 3.6.1. Data alterations and financial sustainability\n\nSome concerns about the limits of Open Data are about what may happen, or stop to happen, before they are published online. The most common concerns of this type are (from Open Public Data: Then What? - Part 1):\n\n - 1. Opening up PSI causes those data to not be produced anymore, or to be only produced as private property by private corporations, because the public agencies whose job was to produce those data, can't sell them anymore.\n - 2. 
total accessibility of data provides more incentives to tinker with them, at the risk of reducing trust in institutions and inhibiting decision-making even more than today.\n\nData manipulation is the topic of the next paragraph. Speaking of costs, a point to take into account is that, once data are open, routinely used and monitored by as many independent users as possible, even the cost of keeping them up to date may be sensibly reduced: in other words, in the medium/long term Open Data may reduce the need to periodically perform complete, that is very expensive, studies and surveys to update a whole corpus of data in one run.\n\nBesides, and above all, even if opening data always destroyed any source of income for the public office that used to create and maintain them, this problem would only exist for the PSI datasets that are already sold today. Such data, even if of strategic importance as is the case with digital cartography, are only a minimal fraction of all the PSI that could and should be opened to increase transparency, reduce the costs of Government and stimulate the economy. In all these other cases:\n\n - · the money to generate the data already arrives by some other source than sales and licensing(but even with those data it may be possible to generate them by crowdsourcing, thereby reducing those costs!)\n - · the only extra expense caused by publishing those data online (assuming they're already available in some digital format, of course!), would be the hosting and bandwidth costs, that may be greatly reduced by mirroring and other technical solutions like torrents, already widely used to distribute Free/Open Source Software (FOSS) through the Internet.\n\n## 3.6.2. Real impact of data manipulation or misunderstanding\n\nThe fix for the risk that data is manipulated is to not only open government data and procedures, but to simplify the latter (which eventually also greatly reduces cost) as much as possible. 
Abundance of occasions to secretly play with data and how they are managed is a symptom of excessive, or peak complexity: again, problems and risks with Open Data are a symptom of a [pre-", - "page_start": 16, - "page_end": 16, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "## procurement.\n\nThe same issue is denounced as an obstacle to innovation and cost savings in New recommendations for improving local open government and creating online hubs:\n\nJohn Grant focused on a major pain point for government at all levels for tapping into the innovation economy: procurement issues, which civic entrepreneurs run into in cities, statehouses and Washington. \"It is time to look at these procurement rules more closely,\" he said, and promote higher levels of innovation. \"There are a lot of ideas are happening but a lot of rules restrict vendors from interacting in government,\" said Grant. Turner-Lee observed that traditional procurement laws may also not be flexible enough to bring more mobile apps into government.\n\nCurrent procurement laws aren't partially incompatible with an Open Data world only at this level, that is when it's time to procure software that makes the data useful. Even bigger problems and inefficiencies can be introduced at the beginning of data life, that is when data collection and processing services are procured. We've already explained that forgetting to impose the right license is one of the problems, but it's not the only one. Even future organization of all the foreseeable data management activities should take advantage of the flexibility provided by data openness. 
Here is how Tim Davies summarizes this point:\n\nRight now [public] bodies often procure data collection, data publishing and data interfaces all in one block (as seems to be the case with Oxfordshires real-time bus information - leading to a roadblock on innovation) - and so without these layers being separated in procurement, some of the benefits here stand to be lost.\n\nChanging procurement of information/data-rich public services would be, of course, only the first step of a general reform of procurement laws and regulations. After management of Open Data has been simplified, it becomes time to implement similar simplifications to procurement of everything else. In fact, in such a scenario, there would be much less possibilities for the loopholes, frauds and inefficiencies that forced local procurement procedures to become so slow and complicated: since the public budget and other relevant public data would already be fully open, errors and other problems would surface and be fixed much more quickly and reliably than today, even assuming that they would continue to appear with the same frequency.\n\n## 4.5. Educate citizens to understand and use data\n\nIt is necessary to guarantee the widest possible availability of all the pre-requisites for effective use of Open Data. In other words, it is necessary to provide free and widely accessible training, oriented to average citizens, on how and why to visualize Public Data and use them to make informed", - "page_start": 29, - "page_end": 29, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "by David Osimo in EU eGov action plan published: the good, the bad and the unknown, are the actions on Open Data (a EU portal and a revision of the EU PSI directive), and on citizens control over their data. 
However the Action Plan contains no reference to the need for a more open and collaborative governance.\n\nIn the case of European Structural Funds, as Luigi Reggi reported in March 2011:\n\nthere is no single point of access to the data. Hundreds of Managing Authorities are following different paths and implementing different information strategies when opening up their data.\n\nMany databases (often simple PDF lists) [...show...] huge variation not only in the way they can be accessed but also in content and quality of data provided.\n\n - ... [...The results of...] an independent web-based survey on the overall quality of data published by each Managing Authority responsible for the 434 Operational Programmes approved in July 2009... can be summarized as follows:\n\nThe use of open, machine-processable and linked-data formats have unexpected advantages in terms of transparency and re-use of the data by the public and private sector. The application of these technical principles does not need extra budget or major changes in government organization and information management; nor does it require the update of existing software and infrastructures. What is needed today is the promotion among national and local authorities of the culture of transparency and the raising of awareness of the benefits that could derive from opening up existing data and information in a re-usable way.\n\nThe European Cohesion Policy is only halfway to accomplishing a paradigm shift to open data, with differences in performance both between and - in some cases - within European Countries.\n\nThings don't go much better for the European Union in the energy field. Carlo Stagnaro wrote in EU Energy Orwellianism: Ignorance Is Strength:\n\nEnergy is an active area of EU public policy. Yet authorities are not revealing information (data is surely has) that is crucial to determine whether its policies are distorting the market and come at too high a cost to society. 
This is a major fault in Europe's credibility in advancing its policy goals, as well as a serious limitation to the accountability of the policy making process\n\nWe realized that, while strongly supporting green investments the EU does not know, or does not make it public, how much is spent every year on green subsidies... With regard to green jobs, several estimates exist, but no official figure is provided.\n\nMore recently... I discovered that Eurostat does not tell how much coal capacity is installed - as opposed to natural gas- or oil-fueled generation plants. It is possible to know how much coal is used, but not the amount of fixed capital which is invested in", - "page_start": 6, - "page_end": 6, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "## elections\n\n - · Open Congress : a tool for political scientists to track the work and effectiveness of the Brazilian congress\n - · Paraguay: Who Do We Choose?: lists profiles of all candidates for many public posts.\n\nIn Brazil, the principle that \"what is not confidential should be available on the Internet in the open data format\" is already discussed and, in principle, accepted, by some departments of the Brazilian federal government. 
However, the preferred practice for now is (if there are no other obstacles) to only publish data that have been explicitly requested by some citizens.\n\nA report presented in May 2011 at the First Global Conference on Transparency Research mentioned a couple of Open Data issues in Latin America that are worth noting, because they're present even in Europe and North America, in spite of the different historical and social background:\n\n - · \"Better coordination is needed between right to information campaigners and open data activists.\"\n - · \"If activist manage to target particular topics to add \"value\" to the discussion, demand for open data could eventually increase in the region.\"\n\nIn Africa, mobile phones are much more available, and more essential than computer with Internet access, often bypassing the need for real desktop PCs with many applications. Therefore, from a purely technical point of view, transparency, accountability and efficiency in government are quickly becoming accessible to most African citizens through mobile networks rather than through the \"traditional\" Internet. However, there are still too few public departments and procedures that use digital documents and procedures on a scale large enough to generate meaningful volumes of digital data that could be then published online.\n\nWhile we write, Kenya is laying the legal groundwork to support Open Data. Permanent Secretary for Information and Communications, Dr. Bitange Ndemo is reported as having been championing for quite some time. In practice, big challenges remain for Open Data usage in Kenya. The easiest one to solve is to technical, that is find skilled people that can package the data in ways that the public can consume (even on mobile phones...). The real problem, however, is the fact that (summarizing from Thinking About Africa's Open Data):\n\nThere is a lot of Kenya data but it isn't accessible. 
The entities that hold the most public and infrastructure data are always government institutions. Getting information from them can be very hard indeed. We don't know who to go to to get the data we need, and", - "page_start": 9, - "page_end": 9, - "source_file": "Open_Data_Report.pdf" - } - ] - }, - { - "references": { - "source_file": "Open_Data_Report.pdf", - "query": "What are Steinberg's concerns about the government releasing all non-private existing data?", - "target_page": 28, - "target_passage": "The first reasons for Steinberg's concern is that asking for everything as soon as possible would \"stress the system too much, by spreading thin the finite amount of good will, money and political capital\". The second is that many existing old data and data archival systems are, in practice, so uninteresting that it wouldn't make sense to spend resources in opening them", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "## 4. Conclusion: seven Open Data strategy and best practices suggestions\n\nStarting from the trends and conclusion described in the previous chapter, this section lists, in the most synthetic way possible, some strategic actions and best practices for 2011, that we consider important in making Open Data succeed and bring the greatest possible benefits to all citizens and businesses.\n\n## 4.1. Properly define and explain both Open Data and Public Data\n\nJust because Open Data is becoming more popular (and, we may say, more and more necessary every year), it is essential to intensify efforts to explain, both to the general public and to public administrators, that\n\n - 1. Privacy issues are almost always a non-issue. Quoting from What \"open data\" means and what it doesn't): Privacy and/or security concerns with putting all the government's data out there are a separate issue that shouldn't be confused with Open Data. 
Whether data should be made publicly available is where privacy concerns come into play. Once it has been determined that government data should be made public, then it should be done openly.\n - 2. Defining as Public and consequently opening them in the right way, much more data than those born and stored inside Public Administration is an urgent task that is in the best interest of all citizens and businesses\n\n## 4.2. Keep political issues separated by economics ones\n\nOpen Data can reduce the costs of Public Administrations and generate (or at least protect, as in the case of deals from local merchants) local jobs in all sectors of the economy, not just high-tech ones. There seems to be enough evidence for these two assertions to go for more Open Data even if they had no effect at all on participation to politics. This should always be kept in mind, also because some data that can directly stimulate business are not the same that would be useful for transparency.", - "page_start": 26, - "page_end": 26, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "more concrete over time is damage control. In a world that produces digital data without interruption, uncontrolled and unpredictable data releases are facts of life that are very hard to predict, practically impossible to avoid and increasingly common. Opening public government data, that is providing plenty of officially verified information, becomes therefore also a damage control solution, to prevent or at least minimize damages from such uncontrolled releases. Without official Open Public Data, individual citizens, political parties or other organizations will start to process and compare (if they already aren't...) data from unofficial sources anyway, maybe from different countries. In such cases, it will be unavoidable not reach sometimes, even in good faith, wrong conclusions. 
This is not some theoretical possibility far in the future, as this real world example (from a comment to an Open Data discussion in an italian blog) proves:\n\n\" on the [non italian] Geonames website you can download geo-referenced data about... 47000 Italian municipalities. That worries me, because there are only 8094 of them. Besides, I grabbed a few random data about population, and I can guarantee you that not one was right. What should be done in such cases?\n\nFrom an Open Data perspective, all these recent stories have (at least) one thing in common: they suggest that, considering its current needs and problems, current societies want and need more Open Data than they already have.\n\n## 2.1. Wikileaks and the Open Data movement\n\nDuring the 2010/2011 winter the discussions around the Cablegate and other documents published by Wikileaks have, in some occasion, included hostility towards Open Data. This is a consequence of a more or less conscious mixing of the two themes, because in a very general sense, both Open Data and Wikileaks are about transparency, accountability and democracy.\n\nAs far as this study is concerned, two conclusions can be drawn from the Cablegate/Wikileaks scandal.\n\nThe first is that, in practice, it is necessary to find and equilibrium between secrecy and transparency whenever government activities are concerned. Citizens must be able to know what the state is actually doing but sometimes, be it for careful evaluation of all the alternatives or because of security, it must be possible to work behind closed doors, at least temporarily. We'll come back to this point later in this report.\n\nThe second conclusion is that, while certainly both Open Data and Wikileaks are about openness and transparency in politics, not only there are deep differences between the two ideas but, in our", - "page_start": 4, - "page_end": 4, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "## 4.3. 
Keep past and future separate\n\nFor the same reason why it is important to always distinguishes between political and economical advantages (or disadvantages) of Open Data, it is necessary to keep decisions about future data (those that will arrive in the future, due to new contracts, public services and so on) separate from those about data that already exist. At the end of 2010, T. Steinberg wrote that the idea that Government should publish everything non-private it can now is \"rather dangerous\", and that it would be much better to release nothing until someone actually asked for it, and at that point doing it right, that is with an open license and so on. The first reasons for Steinberg's concern is that asking for everything as soon as possible would \"stress the system too much, by spreading thin the finite amount of good will, money and political capital\" . The second is that many existing old data and data archival systems are, in practice, so uninteresting that it wouldn't make sense to spend resources in opening them.\n\nEven if these concerns were always true, it is important to realize that they apply (especially the second) to already existing data, not to future ones. The two classes of data have, or can have, very different constraints. Existing data may still exist only in paper format and/or be locked by closed or unclear licenses, or not relevant anymore for future decisions.\n\nOpening future data, instead, is almost always more important, useful urgent, easier and cheaper than digitizing or even only reformatting material that in many cases is already too old to make immediate, concrete differences. 
While this argument is probably not always true when we look at Open data for transparency, it probably is when it comes to economic development.\n\nTherefore, features and guidelines that should be present in all future data generation and management processes include:\n\n - · standardization: the less, obviously open, formats are used for data of the same type, the easier it is to merge and correlate them. The formats that have to be standardized are not only those at the pure software level. Even more important is, for example, to adopt by law standard identificators for government suppliers, names and machine-readable identifiers of budget voices and so on\n - · preparation for future digitization: new digital systems should explicitly be designed from the beginning so that it will be possible, when non-digital records will be digitized, to add them to the databases without modifying losses.\n - · Open licenses", - "page_start": 27, - "page_end": 27, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "digital, attacks to privacy and to civil rights in general can and are coming by so many other sides that those from (properly done) Open Data are a really tiny percentage of the total.\n\nThis is a consequence of the fact that data about us end up online from the most different sources (including ourselves and our acquaintances), and that often it would be very hard to discover, never mind prove , that they've been used against our interest. There have been concerns, for example, that insurance companies may charge higher fees for life insurance to those among their customers who... put online a family tree from which it shows that they come from families with an average life expectancy lower than usual.\n\nAssuming such concerns were real, would it always be possible to spot and prove such abuses of data, that weren't even published by any Public Administration? 
Of course, publishing online complete, official Census data of several generations, in a way that would make such automatic analysis possible would be a totally different matter.\n\nGetting rid of all the unjustified concerns about privacy is very simple, at least in theory. All is needed to dismiss for good the idea that Open Data is a generalized attack to privacy is to always remember and explain that:\n\n - 1. Most Open Data have nothing personal to begin with (examples: digital maps, budgets, air pollution measurements....)\n - 2. The majority of data that are directly related to individuals (e.g. things like names and address of people with specific diseases, or who were victims of some crime) have no reason to be published, nor there is any actual demand for them by Open Data advocates\n - 3. Exceptions that limit privacy for specific cases and categories of people (e.g. candidates to public offices, Government and Parliament members etc...) already exist in many countries\n - 4. Very often, in practice, Open Data struggles only happen about when and how to make available in the most effective way for society information that was already recognized as public. What to declare public, hence open, is indeed a serious issue (more on this in the next paragraph) but is a separate one.\n\n## 3.8. Need to better define what is Public Data\n\nTogether with citizens education, there is a huge challenge that Governments and the Open Data movement will have to face (hopefully together) in 2011 and beyond. This challenge is to update and expand the definition of Public Data and to have it accepted by lawmakers and public administrators.", - "page_start": 22, - "page_end": 22, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "with a project called \"Tales of Things\" to allow people to leave messages for each other (or just for the world) at the bus stops. 
Scanning the QR code now allows people to see not just the bus timetable, but also the notes other travelers have left on that stop, including \"what's nearby, who's waiting for whom, what number can you call for a good time. It's a cross between bus stop Facebook and digital graffiti\" , that happened thanks to the openness of the original bus stop data.\n\nThe Social Life of Data Project will study instead how particular datasets have been used, who used them, how those people are connected and what conversations happen around Open Data.\n\n## 3.3. Legal issues remain crucial\n\nProper licensing of Public data is essential. The more Open Data activities continue, the clearer this rule becomes. What distinguishes Open Data from \"mere\" transparency is reuse. Paraphrasing Eaves, until a government get the licensing issue right, Open Data cannot bring all the possible benefits in that country. If there are no guarantees that public data can be used without restriction, very little happens in practice, and when it happens it may be something against the public interest.\n\nCanadian Company Public Engines Inc, that is paid by local police departments to collect, process and analyze official crime data, also publishes online, with a proprietary license, anonymized summaries of those data. When in 2010 another company, Report See Inc, scraped those data from their website to reuse them, Public Engines sued.\n\nReporting this, D. Eaves rightly points out that both companies are right: one is trying to protect its investment, the other is simply trying to reuse what IS public data, by getting it from the ONLY place where it's available. This is what happens when public officials leave the ownership of public data to the third parties hired to collect them. Please note that, in practice, it makes very little difference whether those third parties are private, for-profit corporations or even other Public Administrations. 
Unless, of course, there are national laws already in place that define in advance what is the license of all present and future Public Data, no matter how they were generated and by whom , those data can be lost in any moment for society. In all other cases, the legal status of data will be either officially closed and locked, or uncertain enough to prevent most or all reuses. In February 2011, the news came that, even if they weren't the original copyright holders, Public Engines had been able to put together enough legal claims to convince Report See to give up.\n\nDisputes like this should not happen and would not happen if all contracts regarding collection and management of PSI clearly specified that all the resulting data either go directly into the public domain (after being anonymized if necessary, of course) or remain exclusive property of the", - "page_start": 12, - "page_end": 12, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "What is, exactly, Public Data? A definition that is accepted almost implicitly is \"data that is of public interest, that belongs to the whole community, data that every citizen is surely entitled to know and use\" . This definition is so generic that accepting it together with the assumption that all such data should be open as preached by the Open Data movement (online, as soon as possible, in machine readable format with an open license etc...) doesn't create any particular problem or conflict.\n\nReal problems however start as it has happened all too often so far, whenever we assume more or less consciously that \"Public Data\" in the sense defined above and data directly produced by Governments and Public Administrations, that is what's normally called PSI (Public Sector Information) are the same thing.\n\nThere is no doubt that Governments and Public Administrations produce huge quantities of Public Data. 
But this is an age of privatization of many public services, from transportation to healthcare, energy and water management. This is an age in which many activities with potentially very serious impacts on whole communities, like processing of hazardous substances or toxic waste, happen outside Public Administrations. The paradox is that, as Sasaki put it, this increased privatization is happening in the very same period in which \" we are observing a worldwide diffusion of access to information laws that empower citizens to hold government agencies accountable.\"\n\nIn such a context, \"Public Data\"is critical just because it is a much bigger set of data than what constitutes traditional, official PSI. \"Public Data\" includes all that information plus the much bigger amount of data describing and measuring all the activities of private companies, from bus timetables to packaged food ingredients, aqueducts performances and composition of fumes released in the atmosphere, that have a direct impact on the health and rights of all citizens of the communities affected by the activities of those companies.\n\nAre such data \"Public\" today, in the sense defined at the beginning of this paragraph, that is something every citizen has the right to know without intermediaries or delegates, or not? Should they be public? If yes, shouldn't law mandate that all such data be Open (that is, published online as soon as possible, in machine readable format with an open license etc...) just like, for example, the budget of some Ministry? Answering these questions may be one of the biggest challenges for the Open Data community, and for society as a whole, in the next years.\n\nHere are, in order to facilitate reflection on this issue, a few recent, real world examples of \"Public Data\" that are not PSI, and of the impacts of their lack of openness.", - "page_start": 23, - "page_end": 23, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "## 3.6.1. 
Data alterations and financial sustainability\n\nSome concerns about the limits of Open Data are about what may happen, or stop to happen, before they are published online. The most common concerns of this type are (from Open Public Data: Then What? - Part 1):\n\n - 1. Opening up PSI causes those data to not be produced anymore, or to be only produced as private property by private corporations, because the public agencies whose job was to produce those data, can't sell them anymore.\n - 2. total accessibility of data provides more incentives to tinker with them, at the risk of reducing trust in institutions and inhibiting decision-making even more than today.\n\nData manipulation is the topic of the next paragraph. Speaking of costs, a point to take into account is that, once data are open, routinely used and monitored by as many independent users as possible, even the cost of keeping them up to date may be sensibly reduced: in other words, in the medium/long term Open Data may reduce the need to periodically perform complete, that is very expensive, studies and surveys to update a whole corpus of data in one run.\n\nBesides, and above all, even if opening data always destroyed any source of income for the public office that used to create and maintain them, this problem would only exist for the PSI datasets that are already sold today. Such data, even if of strategic importance as is the case with digital cartography, are only a minimal fraction of all the PSI that could and should be opened to increase transparency, reduce the costs of Government and stimulate the economy. 
In all these other cases:\n\n - · the money to generate the data already arrives by some other source than sales and licensing(but even with those data it may be possible to generate them by crowdsourcing, thereby reducing those costs!)\n - · the only extra expense caused by publishing those data online (assuming they're already available in some digital format, of course!), would be the hosting and bandwidth costs, that may be greatly reduced by mirroring and other technical solutions like torrents, already widely used to distribute Free/Open Source Software (FOSS) through the Internet.\n\n## 3.6.2. Real impact of data manipulation or misunderstanding\n\nThe fix for the risk that data is manipulated is to not only open government data and procedures, but to simplify the latter (which eventually also greatly reduces cost) as much as possible. Abundance of occasions to secretly play with data and how they are managed is a symptom of excessive, or peak complexity: again, problems and risks with Open Data are a symptom of a [pre-", - "page_start": 16, - "page_end": 16, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "## procurement.\n\nThe same issue is denounced as an obstacle to innovation and cost savings in New recommendations for improving local open government and creating online hubs:\n\nJohn Grant focused on a major pain point for government at all levels for tapping into the innovation economy: procurement issues, which civic entrepreneurs run into in cities, statehouses and Washington. \"It is time to look at these procurement rules more closely,\" he said, and promote higher levels of innovation. \"There are a lot of ideas are happening but a lot of rules restrict vendors from interacting in government,\" said Grant. 
Turner-Lee observed that traditional procurement laws may also not be flexible enough to bring more mobile apps into government.\n\nCurrent procurement laws aren't partially incompatible with an Open Data world only at this level, that is when it's time to procure software that makes the data useful. Even bigger problems and inefficiencies can be introduced at the beginning of data life, that is when data collection and processing services are procured. We've already explained that forgetting to impose the right license is one of the problems, but it's not the only one. Even future organization of all the foreseeable data management activities should take advantage of the flexibility provided by data openness. Here is how Tim Davies summarizes this point:\n\nRight now [public] bodies often procure data collection, data publishing and data interfaces all in one block (as seems to be the case with Oxfordshires real-time bus information - leading to a roadblock on innovation) - and so without these layers being separated in procurement, some of the benefits here stand to be lost.\n\nChanging procurement of information/data-rich public services would be, of course, only the first step of a general reform of procurement laws and regulations. After management of Open Data has been simplified, it becomes time to implement similar simplifications to procurement of everything else. In fact, in such a scenario, there would be much less possibilities for the loopholes, frauds and inefficiencies that forced local procurement procedures to become so slow and complicated: since the public budget and other relevant public data would already be fully open, errors and other problems would surface and be fixed much more quickly and reliably than today, even assuming that they would continue to appear with the same frequency.\n\n## 4.5. 
Educate citizens to understand and use data\n\nIt is necessary to guarantee the widest possible availability of all the pre-requisites for effective use of Open Data. In other words, it is necessary to provide free and widely accessible training, oriented to average citizens, on how and why to visualize Public Data and use them to make informed", - "page_start": 29, - "page_end": 29, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "The biggest difference between Gov 2.0 and OpenGov seems to be how they approach transparency. Gov 2.0 is about transparency through open data and the \"government as a platform\" idea. \"Open Government\" is about Transparency for the sake of accountability, but not necessarily interaction, cooperation and reuse of data outside the government.\n\n[who advocates] Open Data does so in order to make it accessible to citizens rather than to hold government accountable. This is not to say that one approach is better than another, but this is to say that there seem to be two very different motivations for advocating for transparency, and they do seem to correlate to whether people label themselves as part of Gov 2.0 or part of OpenGov.\n\nIn general, reflection and debate on this point is accelerating. At the moment, some characteristics of Open Government on which there is more or less agreement are that Open Government is about:\n\n - · deliberation, choice, influence on decisions and participation as a common citizen\n - · letting all citizens use technology to participate, monitor and define government activities. 
In other words, Government is really Open when it's based on interaction, not only on some set of infrastructures and methods imposed top-down\n - · diffused, seamless conversations, that are only possible with digital technologies, online social networks and so on, between public employees and citizens.\n\nThe obvious potential limit of these definitions is that they rely on a big, still largely unknown factor, that is actual citizen participation. When data are opened, the problem becomes to have everybody use them, in order to actually realize Open Government as defined above. This issue will be explored in detail in the next paragraphs, but we can already say that Open Data are highlighting the critical, weak points in the present and future relationship between citizens and governments.\n\nWhile citizens participation is essential, especially in times of social and economic crisis, achieving it on a large scale won't be easy. Frustration and lack of trust in institutions in many countries are high, so it's no surprise when people express doubts that opening government data won't help much in fixing things.\n\n## 3.6. Clearer vision of the real risks and limits of Open Data\n\nOpen Data, we already said, is about reuse. The point is, at least when the goal is Open Government and transparency in politics, reuse by whom? There is no automatic cause-effect relationship between Open Data and real transparency and democracy. 
On the contrary, several problems may occur, if administrators and citizens don't pay close attention.", - "page_start": 15, - "page_end": 15, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "## elections\n\n - · Open Congress : a tool for political scientists to track the work and effectiveness of the Brazilian congress\n - · Paraguay: Who Do We Choose?: lists profiles of all candidates for many public posts.\n\nIn Brazil, the principle that \"what is not confidential should be available on the Internet in the open data format\" is already discussed and, in principle, accepted, by some departments of the Brazilian federal government. However, the preferred practice for now is (if there are no other obstacles) to only publish data that have been explicitly requested by some citizens.\n\nA report presented in May 2011 at the First Global Conference on Transparency Research mentioned a couple of Open Data issues in Latin America that are worth noting, because they're present even in Europe and North America, in spite of the different historical and social background:\n\n - · \"Better coordination is needed between right to information campaigners and open data activists.\"\n - · \"If activist manage to target particular topics to add \"value\" to the discussion, demand for open data could eventually increase in the region.\"\n\nIn Africa, mobile phones are much more available, and more essential than computer with Internet access, often bypassing the need for real desktop PCs with many applications. Therefore, from a purely technical point of view, transparency, accountability and efficiency in government are quickly becoming accessible to most African citizens through mobile networks rather than through the \"traditional\" Internet. 
However, there are still too few public departments and procedures that use digital documents and procedures on a scale large enough to generate meaningful volumes of digital data that could be then published online.\n\nWhile we write, Kenya is laying the legal groundwork to support Open Data. Permanent Secretary for Information and Communications, Dr. Bitange Ndemo is reported as having been championing for quite some time. In practice, big challenges remain for Open Data usage in Kenya. The easiest one to solve is to technical, that is find skilled people that can package the data in ways that the public can consume (even on mobile phones...). The real problem, however, is the fact that (summarizing from Thinking About Africa's Open Data):\n\nThere is a lot of Kenya data but it isn't accessible. The entities that hold the most public and infrastructure data are always government institutions. Getting information from them can be very hard indeed. We don't know who to go to to get the data we need, and", - "page_start": 9, - "page_end": 9, - "source_file": "Open_Data_Report.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed4.pdf", - "query": "How did serum estradiol and progesterone levels change during pregnancy?", - "target_page": 2, - "target_passage": "Serum hormone concentrations increased significantly over the course of pregnancy and dropped precipitously postpartum", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "Fig. 1 | Precision imaging reveals neuroanatomical changes throughout gestation. a , Standard medical demarcations for pregnancy stages (that is, trimesters) by gestation week (the image is created with BioRender.com). b , Steroid hormones increased significantly throughout pregnancy and dropped precipitously postpartum, as is characteristic of the prenatal and postnatal periods. 
c , A healthy 38-year-old primiparous woman underwent 26 scanning sessions from 3 weeks preconception through 2 years postpartum. Scans were distributed throughout preconception (four scans), first trimester (four scans), second trimester (six scans), third trimester (five scans) and postpartum (seven scans); tick marks indicate when major measures were collected and\n\n<!-- image -->\n\ncolors denote pregnancy stage. The participant underwent IVF to achieve pregnancy, allowing for precise mapping of ovulation, conception and gestation week. d , Summary (that is, total) of brain measures throughout the experiment. Generalized additive models revealed GMV, CT and total brain volume decreased throughout pregnancy (see Methods for validation with cubic regression), with a slight recovery postpartum. Global QA, lateral ventricle and CSF volumes displayed nonlinear increases across gestation, with a notable rise in the second and third trimesters before dropping sharply postpartum. Shaded regions represent 95% confidence bands; solid lines indicate model fit; dashed line indicates parturition.\n\n## Discussion\n\nConverging evidence across mammalian species points to pregnancy as a remarkable period of neuroplasticity, revealing the brain's ability to undergo adaptive, hormonally-driven neuroanatomical changes beyond adolescence 13-15,20,21,24-26 . Investigations that compare women\n\nprepregnancy and then again postpartum provide the strongest evidence to date that the human brain undergoes such neural changes 11,27 . But what about pregnancy itself? Over what time course do anatomical changes in the maternal brain manifest? Are they tied to the substantial increase in sex hormone production? Here we begin to address these", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed4.pdf" - }, - { - "text": "subcortical structures, including the ventral diencephalon, caudate, thalamus, putamen and hippocampus. 
High-resolution imaging and segmentation of the medial temporal lobe (MTL) extend these findings further, revealing specific volumetric reductions within hippocampal subfields CA1, CA2/CA3 and parahippocampal cortex (PHC). In contrast to widespread decreases in cortical and subcortical GMV, correlational tractography analyses revealed nonlinear increases in white matter quantitative anisotropy (QA) throughout the brain-indicating greater tract integrity-as gestational week progressed. Together, these findings reveal the highly dynamic changes that unfold in a human brain across pregnancy, demonstrating a capacity for extensive neural remodeling well into adulthood.\n\n## Results\n\n## Serological evaluations\n\nSerological evaluations captured canonical hormone fluctuations characteristic of the prenatal, perinatal and postnatal periods (Fig. 1b). Serum hormone concentrations increased significantly over the course of pregnancy and dropped precipitously postpartum (preconception, estradiol (E) = 3.42 pg ml -1 and progesterone (P) = 0.84 ng ml -1 ; 3 weeks preparturition, E = 12,400 pg ml -1 and P = 103 ng ml -1 ; 3 months postparturition, E = 11.50 pg ml -1 and P = 0.04 ng ml -1 ).\n\n## Whole-brain dynamics from baseline through postpartum\n\nTo begin, we characterized broad neuroanatomical changes over the course of the entire experimental window (baseline-2 years postpartum, 26 scans; Fig. 1d). Generalized additive models revealed strong nonlinear (effective degrees of freedom > 3) relationships between weeks since conception and summary brain metrics. Total GMV ( F = 27.87, P < 0.001, deviance explained = 93.9%, R 2 adj = 0.91), summary CT ( F = 15.79, P < 0.001, deviance explained = 78.6%, R 2 adj = 0.75) and total brain volume ( F = 26.12, P < 0.001, deviance explained = 93.4%, R 2 adj = 0.90) linearly decreased during gestation and appeared to partially rebound postpartum. 
In contrast, global microstructural integrity (QA) of white matter increased throughout the first and second trimesters before returning to baseline levels in the postpartum period (whole-brain QA, F = 4.62, P = 0.007, deviance explained = 60.2%, R 2 adj = 0.51). We also observed nonlinear patterns of lateral ventricle expansion (F = 10.44, P < 0.001, deviance explained = 83.8%, R 2 adj = 0.77) and increased cerebrospinal fluid (CSF; F = 13.32, P < 0.001, deviance explained = 83.8%, R 2 adj = 0.79) rising in the second and third trimesters before dropping sharply postpartum.\n\n## Cortical volume and thickness changes tied to gestation\n\nWe then narrowed the aperture to capture changes unfolding within gestation itself (baseline-36 weeks pregnant, 19 scans). Relationships between summary brain metrics were evident over the gestational period as follows: total brain volume, GMV and CT were positively associated with one another, whereas lateral ventricles, CSF and global QA demonstrated negative relationships with GMV (Supplementary Fig. 1).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed4.pdf" - }, - { - "text": "Cortical GMV and CT . We then narrowed our analyses to the first 19 sessions (baseline-36 weeks gestation) to assess novel brain changes occurring over the gestational window. We first computed Pearson's product-moment correlation matrices between the following variables: gestation week, estradiol, progesterone and the 17 network-level average GMV values. We then ran a multivariate regression analysis predicting ROI-level GMV changes by gestation week. To identify which regions were changing at a rate different from the global decrease, we then ran the analyses again to include total GMV in the regression model (Supplementary Table 2). This was extended to the network level, where we ran partial correlations accounting for total GMV. These same analyses were then run with CT measures. 
Globally-corrected results provided in Supplementary Tables 1-5. Percent change at the network level was computed by subtracting the final pregnancy value (36 weeks pregnant) from the first prepregnancy baseline value, then dividing that difference by said first prepregnancy baseline value. All analyses underwent multiple comparisons testing (false discovery rate (FDR)-corrected at q < 0.05).\n\nSubcortical GMV . A similar statistical approach was taken for subcortical volume estimates. We ran a multivariate regression analysis predicting GMV changes over gestation in 28 ROIs (Supplementary Fig. 6a) by gestation week (FDR-corrected at q < 0.05).\n\nTo evaluate the relationship between gestation week and MTL subregion volume over pregnancy ( n = 7 bilateral subregions and n = 18 MTL scans), we used a combination of linear and nonlinear models based on individual subregion data patterns. Models were compared for best fit with each subregion via AIC from the GLM output (as described in 'Summary brain metrics'). A linear regression model was most appropriate for PHC (AICdiff < 3), whereas a quadratic model performed best for CA1 and CA2/CA3. As a control, we repeated the analyses with MTL subregion volumes after proportional volume correction of total GMV calculated by ASHS. Finally, we evaluated the relationship between endogenous sex hormones (estrogen and progesterone) and subregion volumes using linear regression. Relationships were considered significant only if they met FDR correction at q < 0.05.\n\nWhite matter microstructure . DSI Studio's correlational tractography 74 was used to analyze the relationship between white matter structure and gestational week ( n = 16). A truncated model was run to examine the relationship between white matter and sex steroid hormones ( n = 14) for the subset of diffusion scans with paired endocrine data during gestation. 
A nonparametric Spearman's correlation was used to derive the correlation between gestational week and endocrine factors and our metrics of interest (QA and MD; see Supplementary Table 9 and Supplementary Fig. 10 for MD results) because the data were not normally distributed. Statistical inference was reached using connectometry, a permutation-based approach that tests the strength of coherent associations found between the local connectome and our variables of interest. It provides higher reliability and replicability by correcting for multiple comparisons. This technique provides a high-resolution characterization of local axonal orientation. The correlational tractography was run with the following parameters: t score threshold of 2.5, four pruning iterations and a length threshold of 25 voxel distance. To estimate the FDR, a total of 4,000 randomized permutations were applied to obtain the null distribution of the track length. Reported regions were selected based on FDR cutoff (FDR < 0.2, suggested by DSI Studio), and contained at least ten tracts. For visualization of global and tract QA at each gestational stage, mean QA values were extracted using DSI Studio's whole-brain fiber tracking algorithm and ROI-based tracking using the default HCP842 atlas 78 .", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed4.pdf" - }, - { - "text": "## Neuroanatomical changes observed over the course of a human pregnancy\n\nReceived: 23 August 2023\n\nAccepted: 29 July 2024\n\nPublished online: 16 September 2024\n\nCheck for updates\n\nLaura Pritschet 1 , Caitlin M. Taylor 1 , Daniela Cossio 2 , Joshua Faskowitz 3 , Tyler Santander 1 , Daniel A. Handwerker 3 , Hannah Grotzinger 1 , Evan Layher 1 , Elizabeth R. Chrastil 2,5 &\n\nEmily G. 
Jacobs 1,4,5\n\nPregnancy is a period of profound hormonal and physiological changes experienced by millions of women annually, yet the neural changes unfolding in the maternal brain throughout gestation are not well studied in humans. Leveraging precision imaging, we mapped neuroanatomical changes in an individual from preconception through 2 years postpartum. Pronounced decreases in gray matter volume and cortical thickness were evident across the brain, standing in contrast to increases in white matter microstructural integrity, ventricle volume and cerebrospinal /fluid, with few regions untouched by the transition to motherhood. This dataset serves as a comprehensive map of the human brain across gestation, providing an open-access resource for the brain imaging community to further explore and understand the maternal brain.\n\nWorldwide, nearly 85% of women experience one or more pregnancies in their lifetime 1 , with 140 million women becoming pregnant each year. Over an approximately 40-week gestational window, the maternal body undergoes profound physiological adaptations to support the development of the fetus, including increases in plasma volume, metabolic rate, oxygen consumption and immune regulation 2 . These rapid adaptations are initiated by 100-fold to 1,000-fold increases in hormone production, including estrogen and progesterone. These neuromodulatory hormones also drive significant reorganization of the central nervous system. Evidence from animal models and human studies converge on pregnancy as a period of remarkable neuroplasticity 3-10 (see ref. 10 for one of the earliest known observations). Gestational increases in steroid hormone synthesis drive neurogenesis, dendritic spine growth, microglial proliferation, myelination and astrocyte remodeling (for review, see ref. 11). These cellular changes are pronounced in brain circuits that promote maternal behavior. For example, Ammari et al. 
recently discovered that steroid hormones can fine-tune the response properties of galanin neurons in the rodent medial preoptic area of the hypothalamus (mPOA), leading to enhanced sensitivity in dams to sensory cues from newborn pups 12 .\n\nIn humans, reductions in gray matter volume (GMV) have been observed postpartum 13-16 , particularly in regions central to theory-of-mind processing 13 . These GMV changes persist at 6 years postpartum 17 and are traceable decades later 18,19 , underscoring the permanence of this major remodeling event. And yet the changes that occur within the maternal brain during gestation itself are virtually unknown (see ref. 20 for early neuroimaging insight). A recent study by Paternina-Die et al. offers intriguing clues 21 . Women were scanned once in the third trimester and again in the postpartum period, revealing a reduction of cortical volume observable in the late pregnancy scan. These findings suggest that pregnancy is a highly dynamic period for neural remodeling, yet neuroscientists lack a detailed map of how the human brain changes throughout the gestational period.\n\nHere we conducted a precision imaging study of pregnancy in which a healthy 38-year-old primiparous woman underwent 26 magnetic resonance imaging (MRI) scans and venipuncture beginning 3 weeks preconception through 2 years postpartum. We observed widespread reductions in cortical GMV and cortical thickness (CT) occurring in step with advancing gestational week and the dramatic rise in sex hormone production. Remodeling was also evident within\n\n<!-- image -->", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed4.pdf" - }, - { - "text": "Critically, dynamic neural changes occurred within the pregnancy window itself, a nuance not captured by studies limited to comparisons between prepregnancy and postpregnancy. 
For example, we observed large increases in white matter microstructural integrity (QA) throughout the first and second trimesters of pregnancy, but these measures fully returned to baseline values by the first postpartum scan. This pattern may explain why previous studies report no pregnancy-related differences in white matter tractography 14 . Other measures, such as GMV and CT, decreased throughout gestation and displayed only a modest rebound postpartum. These nonlinear patterns suggest that only quantifying prepregnancy and postpartum brain structure may\n\nPHC\n\n<!-- image -->", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed4.pdf" - }, - { - "text": "the offspring 12 . Human studies have revealed GMV reductions in areas of the brain important for social cognition and the magnitude of these changes corresponds with increased parental attachment 13 . Deeper examination of cellular and systems-level mechanisms will improve our understanding of how pregnancy remodels specific circuits to promote maternal behavior.\n\nAlthough studied to a lesser degree, ties between maternal behavior and white matter microstructure (particularly connectivity between temporal and occipital lobes) have been noted 31 . Here we reveal pronounced GMV changes in regions within sensory, attention and default mode networks over the gestational window. In parallel, we observed increased anisotropy in white matter tracts that facilitate communication between emotional and visual processing hubs 37-39 , including the inferior longitudinal fasciculus and inferior fronto-occipital fasciculus. Pinpointing the synchrony of gray and white matter changes that unfold in the maternal brain could be key to understanding the behavioral adaptions that emerge during and after pregnancy, such as honing the brain's visual and auditory responses to infant cues and eliciting maternal behavior. Research into other major transition periods supports this idea. 
For instance, adolescence is a dynamic period characterized by region-specific, nonlinear decreases in GMV and increases in WMV, maturational brain changes that are tied to gains in executive function and social cognition 40 . For both adolescence 41 and matrescence, the considerable rise in steroid hormone production appears to remodel the brain (see ref. 25 for comparative analysis), promoting a suite of behaviors adaptive to that life stage. How specific neural changes give rise to specific behavioral adaptations has yet to be fully explored with respect to human pregnancy.\n\nThis precision imaging study mapped neuroanatomical changes across pregnancy in a single individual, precluding our ability to generalize to the broader population. To benchmark our findings, we compared the magnitude of GMV changes observed throughout pregnancy against data from nonpregnant individuals sampled over a similar time course. Doing so provided compelling evidence that pregnancy-related neuroanatomical shifts far exceed normative day-to-day brain variability and measurement error. Evidence suggests that white matter microstructure remains fairly stable over a six-month period 42 , but more studies are needed to compare the degree of white matter changes observed during pregnancy to normative change over time. Further, sampling larger cohorts of women will generate much-needed normative models of brain change (akin to ref. 43) throughout pregnancy to establish what constitutes a typical degree of neuroanatomical change expected during gestation and postpartum recovery.", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed4.pdf" - }, - { - "text": "## White matter microstructure changes tied to gestation\n\nIn contrast to decreasing global GMV, correlational tractography of white matter, which tests for linear trends in the data, revealed increasing microstructural integrity across the whole brain during gestation (Fig. 
4a), concomitant with the rise in 17β-estradiol and progesterone (all q < 0.001; Supplementary Fig. 9). Tracts displaying robust correlations with gestational week included the corpus callosum, arcuate fasciculus, inferior fronto-occipital fasciculus and inferior longitudinal fasciculus (Fig. 4b), as well as the cingulum bundle, middle and superior longitudinal fasciculus, corticostriatal, corticospinal and corticopontine tracts (see Supplementary Table 9 for complete list).\n\n## Comparing brain changes across pregnancy against controls\n\nWe then compared the changes in GMV across gestation to that of typical variability over time, derived from eight densely-sampled controls 23 . The GMV changes we see across pregnancy far exceed normative brain variability (Supplementary Fig. 11). On average, change in cortical GMV was nearly three times higher than controls scanned over a similar duration (Supplementary Fig. 11a,b). This extends to MTL subfields, wherein change in volume was three to four times greater across gestation than normative brain variability (Supplementary Fig. 11c,d). We contextualized these findings further by comparing gestational GMV change against our participant's preconception brain volumes; average GMV change during pregnancy was six times (cortical) and three times (MTL) higher than the variability observed between baseline sessions.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed4.pdf" - }, - { - "text": "sleep patterns 11 . These factors could have a role in the brain changes observed here, with some driving neurobiological changes and others, like water retention, potentially affecting MRI-based measurements. Note that, although cortical reductions in GMV over gestation were stable across analyses, accounting for QC measures influenced the magnitude and location of these results. 
These metrics all fell within the standard range, but there may be meaningful reductions in signal that accompany volumetric reductions (for example, increased CSF and decreased GM)-a methodological nuance that goes beyond the scope of this resource study. Ultimately, identifying the shared and unique contributions of these factors to the neuroanatomical changes that unfold across gestation warrants further investigation. Deeply phenotyping a large and diverse cohort of women across pregnancy will open up new avenues of exploration, for example, allowing researchers to link blood-based proteomic signatures to pregnancy outcomes; deploying wearable devices to monitor changes in sleep, cognition and mood; and probing the broader social and environmental determinants of maternal health 27 .\n\nThe neuroanatomical changes that unfold during matrescence may have broad implications for understanding individual differences in parental behavior 13,24,30,31 , vulnerability to mental health disorders 32,33 and patterns of brain aging 18,19,34-36 . Decreases in GMV may reflect 'fine-tuning' of the brain by neuromodulatory hormones in preparation for parenthood 26 . For example, in rodents, steroid hormones promote parental behavior by remodeling specific neural circuits in the medial preoptic area of the hypothalamus. These behavioral adaptations are critical to the dam's ability to meet the demands of caring for", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed4.pdf" - }, - { - "text": "## Methods\n\n## Participant\n\nOur participant (E.R.C.) was a healthy 38-year-old primiparous woman who underwent in-vitro fertilization (IVF) to achieve pregnancy. Previous studies reported no observable differences in neural changes from prepregnancy to postpregnancy between women who conceived naturally versus women who conceived via IVF 13 , and doing so provides a controlled way of monitoring pregnancy status. 
The participant experienced no pregnancy complications (for example, gestational diabetes and hypertension), delivered at full term via vaginal birth, nursed through 16 months postpartum, and had no history of neuropsychiatric diagnosis, endocrine disorders, prior head trauma or history of smoking. The participant gave written informed consent and the study was approved by the University of California, Irvine Human Subjects Committee.\n\n## Study design\n\nThe participant underwent 26 MRI scanning sessions from 3 weeks before conception through 2 years postpartum (162 weeks), during which high-resolution anatomical and diffusion spectrum imaging scans of the brain were acquired. Scans were distributed throughout this period, including prepregnancy (four scans), first trimester (four scans), second trimester (six scans), third trimester (five scans) and postpartum (seven scans; Fig. 1c). The first 6 sessions took place at the UCSB Brain Imaging Center (BIC), the final 20 sessions took place at the UCI Facility for Imaging and Brain Research (FIBRE). The majority of scans took place between 9 AM and 2 PM, limiting significant AM-PM fluctuations 49 . The MRI protocol, scanner (Siemens 3T Prisma) and software (version MR E11) were identical across sites. Each scanner was checked weekly for the duration of the study and passed all QC reports indicating no significant alterations in the geometry. To ensure the robustness of the findings, after the final study session, the participant completed back-to-back validation scans at UCI and UCSB within a 12-h window to assess reliability between scanners. Intraclass correlation coefficients (two-way, random effects, absolute agreement, single rater) reveal 'excellent' test-retest reliability between scanners, including ROI-level GMV (ICC = 0.97, 95% CI: 0.80-0.99), ROI-level CT (ICC = 0.96, 95% CI: 0.90-0.98), MTL subfield volume (ICC = 0.99, 95% CI: 0.97-0.99) and ROI-level QA (ICC = 0.94, 95% CI: 0.91-0.97). 
Furthermore, when examining the relationship between gestation week and GMV among UCI-only gestational sessions, findings were consistent (Supplementary Fig. 12), indicating that site differences are highly unlikely to have contributed meaningfully to the observed effects. Although not applicable here, we note that having a control participant scanned over a similar duration within the same scanner is critical for estimating how much variation in the brain can be attributed to within-scanner variability.\n\nTo monitor state-dependent mood and lifestyle measures, the following scales were administered on each experiment day: Perceived Stress Scale 50 , Pittsburgh Sleep Quality Index 51 , State-Trait Anxiety Inventory for Adults 52 and Profile of Mood States 53 . Correlation analyses between state-dependent measures, summary brain metrics and gestation week revealed little to no relationships. The only exception to this was a moderate negative association between global QA and state anxiety (Spearman's correlation ( ρ ) = -0.65, q = 0.04; baseline-36 weeks, n = 16). By making this data openly accessible, we encourage a more nuanced approach toward exploring mood and lifestyle measures in relation to brain changes over pregnancy.\n\n## Endocrine procedures\n\nThe participant underwent a blood draw ( n = 19; Fig. 1c) before MRI scanning. Sex steroid concentrations were determined via ultra-sensitive liquid chromatography-mass spectrometry at the Brigham and Women's Hospital Research Assay Core (BRAC). 
Assay sensitivities, dynamic range and intra-assay coefficients of variation", - "page_start": 8, - "page_end": 8, - "source_file": "pubmed4.pdf" - }, - { - "text": "participant's first two baseline scans (that is, preconception) to derive within-participant variability estimates.\n\nBenchmarking our data in this way allows us to capture the degree of change expected due to factors such as image processing and instrumentation variability or other day-to-day changes that could potentially modulate brain size and shape (see ref. 80 for review). The percent change observed over pregnancy (baseline versus 36 weeks gestation) far exceeds the expected variability estimated using both the Day2Day dataset (Supplementary Fig. 11) and our within-participant control data. This was quantified by dividing the observed percent change in GMV metrics (baseline versus 36 weeks) by the global measure of GMV percent variability of each control group (that is, Day2Day, within-participant control), independently for cortex and subcortex.\n\n## Reporting summary\n\nFurther information on research design is available in the Nature Portfolio Reporting Summary linked to this article.\n\n## Data availability\n\nThe dataset consists of 26 MRI scans (T1w, T2w and diffusion scans) alongside state-dependent measures and serum assessments of ovarian sex hormones for each session. The raw data is publicly available at https://openneuro.org/datasets/ds005299. 
Source data are provided with this paper.\n\n## Code availability\n\nNo custom code was used.\n\n## References", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed4.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed4.pdf", - "query": "Which cortical sub-networks were particularly sensitive to pregnancy?", - "target_page": 2, - "target_passage": "Several sensory and attention subnetworks were particu- larly sensitive to gestation, including the control (subnetwork B), sali- ence ventral attention (subnetwork A), dorsal attention (subnetwork B), default (subnetwork A) and somatomotor (subnetworks A and B) networks", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Changes in GMV were near-ubiquitous across the cortical mantle (Fig. 2a). Most large-scale brain networks exhibited decreases in GMV (Fig. 2b and Supplementary Table 1); indeed, 80% of the 400 regions of interest (ROI) demonstrated negative relationships between GMV and gestation week (Fig. 2a and Supplementary Table 2). Together, these results provide evidence of a global decrease in cortical volume across pregnancy. Several sensory and attention subnetworks were particularly sensitive to gestation, including the control (subnetwork B), salience/ventral attention (subnetwork A), dorsal attention (subnetwork B), default (subnetwork A) and somatomotor (subnetworks A and B) networks (Supplementary Table 1). Regions driving these network-level changes include the bilateral inferior parietal lobe, postcentral gyri, insulae, prefrontal cortex, posterior cingulate and somatosensory cortex (Fig. 2c, Supplementary Table 2 and validation of findings using alternate pipeline in Supplementary Tables 1 and 3). These regions and\n\nassociated brain networks appear to decrease in volume at a faster rate than the rest of the brain throughout pregnancy, as determined by a subsequent analysis controlling for total GMV (Supplementary Tables 1 and 2). 
GMV reductions were also significantly correlated with the participant's estradiol and progesterone concentrations (Supplementary Table 1). A highly similar pattern of results was observed when examining pregnancy-related CT changes (Supplementary Fig. 3 and Supplementary Tables 4 and 5). Significant reductions in cortical GMV over gestation remained after controlling for standard quality control (QC) metrics, albeit with some influence on the magnitude and location of the observed effects (Supplementary Figs. 4 and 5).\n\nIn contrast, GMV within regions of the default mode (subnetwork C), limbic (subnetworks A and B) and visual peripheral networks buck the global trend by slightly increasing (for example, temporal poles), remaining constant (for example, orbitofrontal cortex) or reducing at a much slower rate (for example, extrastriate cortex) than total GMV (Fig. 2a,b and Supplementary Tables 1 and 2). CT changes in these regions exhibit similar patterns (Supplementary Fig. 3 and Supplementary Tables 4 and 5).\n\n## Subcortical GMV changes tied to gestation\n\nConsistent with the broader cortical reductions in GMV, several subcortical regions significantly reduced in volume across gestation (Fig. 3a, left). This included bilateral ventral diencephalon (right hemisphere values shown in Fig. 3a, right; encompasses hypothalamus, substantia nigra, mammillary body, lateral geniculate nucleus and red nucleus among others 22 ), caudate, hippocampus and thalamus, along with left putamen and brain stem (Supplementary Table 6, q < 0.05).\n\nNext, high-resolution segmentation of the MTL allowed us to interrogate subcortical structures at a finer resolution, revealing nonlinear volumetric decreases in CA1 ( F (2,15) = 5.84, q = 0.031, R 2 adj = 0.36; Fig. 3b, left) and CA2/CA3 ( F (2,15) = 6.82, q = 0.027, R 2 adj = 0.41; Fig. 3b, middle) across gestation. PHC exhibited linear volumetric decreases across gestation ( F (1,16) = 24.87, q < 0.001, R 2 adj = 0.58; Fig. 
3b, right) which was also tied to estradiol ( F (1,12) = 20.21, q = 0.005, R 2 adj = 0.60). All three relationships remained significant after proportional correction for total GMV. There was no significant change in other subregions or total volume of the hippocampal body, or in the parahippocampal gyrus (Supplementary Table 7 and Supplementary Fig. 8).\n\n## White matter microstructure changes tied to gestation", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed4.pdf" - }, - { - "text": "the offspring 12 . Human studies have revealed GMV reductions in areas of the brain important for social cognition and the magnitude of these changes corresponds with increased parental attachment 13 . Deeper examination of cellular and systems-level mechanisms will improve our understanding of how pregnancy remodels specific circuits to promote maternal behavior.\n\nAlthough studied to a lesser degree, ties between maternal behavior and white matter microstructure (particularly connectivity between temporal and occipital lobes) have been noted 31 . Here we reveal pronounced GMV changes in regions within sensory, attention and default mode networks over the gestational window. In parallel, we observed increased anisotropy in white matter tracts that facilitate communication between emotional and visual processing hubs 37-39 , including the inferior longitudinal fasciculus and inferior fronto-occipital fasciculus. Pinpointing the synchrony of gray and white matter changes that unfold in the maternal brain could be key to understanding the behavioral adaptions that emerge during and after pregnancy, such as honing the brain's visual and auditory responses to infant cues and eliciting maternal behavior. Research into other major transition periods supports this idea. 
For instance, adolescence is a dynamic period characterized by region-specific, nonlinear decreases in GMV and increases in WMV, maturational brain changes that are tied to gains in executive function and social cognition 40 . For both adolescence 41 and matrescence, the considerable rise in steroid hormone production appears to remodel the brain (see ref. 25 for comparative analysis), promoting a suite of behaviors adaptive to that life stage. How specific neural changes give rise to specific behavioral adaptations has yet to be fully explored with respect to human pregnancy.\n\nThis precision imaging study mapped neuroanatomical changes across pregnancy in a single individual, precluding our ability to generalize to the broader population. To benchmark our findings, we compared the magnitude of GMV changes observed throughout pregnancy against data from nonpregnant individuals sampled over a similar time course. Doing so provided compelling evidence that pregnancy-related neuroanatomical shifts far exceed normative day-to-day brain variability and measurement error. Evidence suggests that white matter microstructure remains fairly stable over a six-month period 42 , but more studies are needed to compare the degree of white matter changes observed during pregnancy to normative change over time. Further, sampling larger cohorts of women will generate much-needed normative models of brain change (akin to ref. 43) throughout pregnancy to establish what constitutes a typical degree of neuroanatomical change expected during gestation and postpartum recovery.", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed4.pdf" - }, - { - "text": "a\n\n## Whole-brain subcortical volumes\n\n<!-- image -->\n\nb\n\nCA1Medial temporal lobe subregion volumes\n\n<!-- image -->\n\nCA2/CA3Fig. 3 | Subcortical GMV changed throughout gestation. 
a , Multivariate regression analyses revealed largely negative relationships between gestation week and subcortical GMV regions over pregnancy, including bilateral thalamus, caudate, hippocampus, ventral diencephalon (encompassing hypothalamus, substantia nigra, mammillary body and red nucleus) and left caudate. Lateral ventricles displayed the only positive relationships with gestation week (also depicted in Fig. 1d). The whole-brain subcortical GMV estimates shown here were derived via FreeSurfer and 'aseg' subcortical segmentation. FDRcorrected at q < 0.05. Inset, right ventral diencephalon displayed the strongest negative association with gestation (left; baseline-36 weeks, 19 scans) and did not return to baseline postpartum (right; gestation and postpartum, 26 scans). b , The participant's hippocampus and surrounding cortex were segmented\n\n<!-- image -->\n\ninto seven bilateral subregions. Quadratic (CA1, CA2/CA3) and linear regression analyses (PHC) revealed subfields were negatively associated with gestation week (baseline-36 weeks, 18 scans) and did not return to baseline postpartum (gestation and postpartum, 25 scans). Shaded regions in scatterplots represent a 95% confidence interval. Each boxplot represents IQR for each stage, with a horizontal line representing the median value. The whiskers indicate variability outside (±1.5) of this range. Outside values are >1.5× and <3× IQR beyond either end of the box. FDR-corrected at q < 0.05. For a and b , nonsignificant regions were set to zero for interpretability. See Supplementary Fig. 6 for complete labeling of regions in both segmentations. Brain visualizations created with R package ggseg 48 . DC, diencephalon.\n\noutstanding questions. 
This study and corresponding open-access dataset offer neuroscientists a detailed map of the human brain across gestation, a resource for which a wide range of previously unattainable neurobiological questions can now be explored.\n\nOur findings from this precision imaging study show that pregnancy is characterized by reductions in GMV, cortical thinning and enhanced white matter microstructural integrity that unfold week by week. These changes were also tied to the significant rise in steroid hormone concentrations over pregnancy. Some of these changes persist at 2 years postpartum (for example, global reductions in GMV and CT), while others, including markers of white matter integrity, appear to be transient. Ventricular expansion and contraction parallel these cortical changes. These widespread patterns, and the notable increase in CSF volume across gestation, could reflect increased water retention and subsequent compression of cortical tissue. However, the persistence of these changes at 2 years postpartum and regional variation in GMV, CT and QA, hint at cellular underpinnings, such as alterations in glia\n\nor neuron number, synaptic density and myelination (for review on the latter, see ref. 4). Future studies of the relationship between fluid dynamics and volumetric changes will help clarify the factors that drive global neural changes during pregnancy; such insights will have broad implications for maternal health (for example, neurological effects tied to pre-eclampsia or edema).", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed4.pdf" - }, - { - "text": "## Neuroanatomical changes observed over the course of a human pregnancy\n\nReceived: 23 August 2023\n\nAccepted: 29 July 2024\n\nPublished online: 16 September 2024\n\nCheck for updates\n\nLaura Pritschet 1 , Caitlin M. Taylor 1 , Daniela Cossio 2 , Joshua Faskowitz 3 , Tyler Santander 1 , Daniel A. Handwerker 3 , Hannah Grotzinger 1 , Evan Layher 1 , Elizabeth R. 
Chrastil 2,5 &\n\nEmily G. Jacobs 1,4,5\n\nPregnancy is a period of profound hormonal and physiological changes experienced by millions of women annually, yet the neural changes unfolding in the maternal brain throughout gestation are not well studied in humans. Leveraging precision imaging, we mapped neuroanatomical changes in an individual from preconception through 2 years postpartum. Pronounced decreases in gray matter volume and cortical thickness were evident across the brain, standing in contrast to increases in white matter microstructural integrity, ventricle volume and cerebrospinal fluid, with few regions untouched by the transition to motherhood. This dataset serves as a comprehensive map of the human brain across gestation, providing an open-access resource for the brain imaging community to further explore and understand the maternal brain.\n\nWorldwide, nearly 85% of women experience one or more pregnancies in their lifetime 1 , with 140 million women becoming pregnant each year. Over an approximately 40-week gestational window, the maternal body undergoes profound physiological adaptations to support the development of the fetus, including increases in plasma volume, metabolic rate, oxygen consumption and immune regulation 2 . These rapid adaptations are initiated by 100-fold to 1,000-fold increases in hormone production, including estrogen and progesterone. These neuromodulatory hormones also drive significant reorganization of the central nervous system. Evidence from animal models and human studies converge on pregnancy as a period of remarkable neuroplasticity 3-10 (see ref. 10 for one of the earliest known observations). Gestational increases in steroid hormone synthesis drive neurogenesis, dendritic spine growth, microglial proliferation, myelination and astrocyte remodeling (for review, see ref. 11). These cellular changes are pronounced in brain circuits that promote maternal behavior. For example, Ammari et al. 
recently discovered that steroid hormones can fine-tune the response properties of galanin neurons in the rodent medial preoptic area of the hypothalamus (mPOA), leading to enhanced sensitivity in dams to sensory cues from newborn pups 12 .\n\nIn humans, reductions in gray matter volume (GMV) have been observed postpartum 13-16 , particularly in regions central to theory-of-mind processing 13 . These GMV changes persist at 6 years postpartum 17 and are traceable decades later 18,19 , underscoring the permanence of this major remodeling event. And yet the changes that occur within the maternal brain during gestation itself are virtually unknown (see ref. 20 for early neuroimaging insight). A recent study by Paternina-Die et al. offers intriguing clues 21 . Women were scanned once in the third trimester and again in the postpartum period, revealing a reduction of cortical volume observable in the late pregnancy scan. These findings suggest that pregnancy is a highly dynamic period for neural remodeling, yet neuroscientists lack a detailed map of how the human brain changes throughout the gestational period.\n\nHere we conducted a precision imaging study of pregnancy in which a healthy 38-year-old primiparous woman underwent 26 magnetic resonance imaging (MRI) scans and venipuncture beginning 3 weeks preconception through 2 years postpartum. We observed widespread reductions in cortical GMV and cortical thickness (CT) occurring in step with advancing gestational week and the dramatic rise in sex hormone production. Remodeling was also evident within\n\n<!-- image -->", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed4.pdf" - }, - { - "text": "subcortical structures, including the ventral diencephalon, caudate, thalamus, putamen and hippocampus. 
High-resolution imaging and segmentation of the medial temporal lobe (MTL) extend these findings further, revealing specific volumetric reductions within hippocampal subfields CA1, CA2/CA3 and parahippocampal cortex (PHC). In contrast to widespread decreases in cortical and subcortical GMV, correlational tractography analyses revealed nonlinear increases in white matter quantitative anisotropy (QA) throughout the brain-indicating greater tract integrity-as gestational week progressed. Together, these findings reveal the highly dynamic changes that unfold in a human brain across pregnancy, demonstrating a capacity for extensive neural remodeling well into adulthood.\n\n## Results\n\n## Serological evaluations\n\nSerological evaluations captured canonical hormone fluctuations characteristic of the prenatal, perinatal and postnatal periods (Fig. 1b). Serum hormone concentrations increased significantly over the course of pregnancy and dropped precipitously postpartum (preconception, estradiol (E) = 3.42 pg ml -1 and progesterone (P) = 0.84 ng ml -1 ; 3 weeks preparturition, E = 12,400 pg ml -1 and P = 103 ng ml -1 ; 3 months postparturition, E = 11.50 pg ml -1 and P = 0.04 ng ml -1 ).\n\n## Whole-brain dynamics from baseline through postpartum\n\nTo begin, we characterized broad neuroanatomical changes over the course of the entire experimental window (baseline-2 years postpartum, 26 scans; Fig. 1d). Generalized additive models revealed strong nonlinear (effective degrees of freedom > 3) relationships between weeks since conception and summary brain metrics. Total GMV ( F = 27.87, P < 0.001, deviance explained = 93.9%, R 2 adj = 0.91), summary CT ( F = 15.79, P < 0.001, deviance explained = 78.6%, R 2 adj = 0.75) and total brain volume ( F = 26.12, P < 0.001, deviance explained = 93.4%, R 2 adj = 0.90) linearly decreased during gestation and appeared to partially rebound postpartum. 
In contrast, global microstructural integrity (QA) of white matter increased throughout the first and second trimesters before returning to baseline levels in the postpartum period (whole-brain QA, F = 4.62, P = 0.007, deviance explained = 60.2%, R 2 adj = 0.51). We also observed nonlinear patterns of lateral ventricle expansion (F = 10.44, P < 0.001, deviance explained = 83.8%, R 2 adj = 0.77) and increased cerebrospinal fluid (CSF; F = 13.32, P < 0.001, deviance explained = 83.8%, R 2 adj = 0.79) rising in the second and third trimesters before dropping sharply postpartum.\n\n## Cortical volume and thickness changes tied to gestation\n\nWe then narrowed the aperture to capture changes unfolding within gestation itself (baseline-36 weeks pregnant, 19 scans). Relationships between summary brain metrics were evident over the gestational period as follows: total brain volume, GMV and CT were positively associated with one another, whereas lateral ventricles, CSF and global QA demonstrated negative relationships with GMV (Supplementary Fig. 1).", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed4.pdf" - }, - { - "text": "Fig. 1 | Precision imaging reveals neuroanatomical changes throughout gestation. a , Standard medical demarcations for pregnancy stages (that is, trimesters) by gestation week (the image is created with BioRender.com). b , Steroid hormones increased significantly throughout pregnancy and dropped precipitously postpartum, as is characteristic of the prenatal and postnatal periods. c , A healthy 38-year-old primiparous woman underwent 26 scanning sessions from 3 weeks preconception through 2 years postpartum. Scans were distributed throughout preconception (four scans), first trimester (four scans), second trimester (six scans), third trimester (five scans) and postpartum (seven scans); tick marks indicate when major measures were collected and\n\n<!-- image -->\n\ncolors denote pregnancy stage. 
The participant underwent IVF to achieve pregnancy, allowing for precise mapping of ovulation, conception and gestation week. d , Summary (that is, total) of brain measures throughout the experiment. Generalized additive models revealed GMV, CT and total brain volume decreased throughout pregnancy (see Methods for validation with cubic regression), with a slight recovery postpartum. Global QA, lateral ventricle and CSF volumes displayed nonlinear increases across gestation, with a notable rise in the second and third trimesters before dropping sharply postpartum. Shaded regions represent 95% confidence bands; solid lines indicate model fit; dashed line indicates parturition.\n\n## Discussion\n\nConverging evidence across mammalian species points to pregnancy as a remarkable period of neuroplasticity, revealing the brain's ability to undergo adaptive, hormonally-driven neuroanatomical changes beyond adolescence 13-15,20,21,24-26 . Investigations that compare women\n\nprepregnancy and then again postpartum provide the strongest evidence to date that the human brain undergoes such neural changes 11,27 . But what about pregnancy itself? Over what time course do anatomical changes in the maternal brain manifest? Are they tied to the substantial increase in sex hormone production? Here we begin to address these", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed4.pdf" - }, - { - "text": "## White matter microstructure changes tied to gestation\n\nIn contrast to decreasing global GMV, correlational tractography of white matter, which tests for linear trends in the data, revealed increasing microstructural integrity across the whole brain during gestation (Fig. 4a), concomitant with the rise in 17β-estradiol and progesterone (all q < 0.001; Supplementary Fig. 9). Tracts displaying robust correlations with gestational week included the corpus callosum, arcuate fasciculus, inferior fronto-occipital fasciculus and inferior longitudinal fasciculus (Fig. 
4b), as well as the cingulum bundle, middle and superior longitudinal fasciculus, corticostriatal, corticospinal and corticopontine tracts (see Supplementary Table 9 for complete list).\n\n## Comparing brain changes across pregnancy against controls\n\nWe then compared the changes in GMV across gestation to that of typical variability over time, derived from eight densely-sampled controls 23 . The GMV changes we see across pregnancy far exceed normative brain variability (Supplementary Fig. 11). On average, change in cortical GMV was nearly three times higher than controls scanned over a similar duration (Supplementary Fig. 11a,b). This extends to MTL subfields, wherein change in volume was three to four times greater across gestation than normative brain variability (Supplementary Fig. 11c,d). We contextualized these findings further by comparing gestational GMV change against our participant's preconception brain volumes; average GMV change during pregnancy was six times (cortical) and three times (MTL) higher than the variability observed between baseline sessions.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed4.pdf" - }, - { - "text": "Figure 1. A schematic illustration of a hierarchical active inference model. This model links (exteroceptive, interoceptive, and proprioceptive) sensations at lower levels with multimodal models of hidden bodily states, such as fatigue and hunger, at intermediate levels, and finally with temporally extended, integrative models of the embodied self at the higher hierarchical level. In this schematic, following predictive coding (Rao and Ballard 1999, Friston 2005), black and red circles represent neural units that encode predictions and prediction errors, respectively. The levels are reciprocally connected, so predictions are propagated from the top-down (black edges) and prediction errors from the bottom-up (red edges). 
Finally, the pink triangles indicate a mechanism of precision gating (or gain control) of prediction error units, which determines their relative influence on units encoding predictions. At a neurobiological level, prediction and prediction error units could be mapped to deep and superficial pyramidal cells in cortical hierarchies, whereas expected precision could be linked to neuromodulatory input. The elements of the generative model shown do not need to map one-to-one to specific brain areas or networks but are plausibly distributed across many of them. However, as a first approximation, the lower and intermediate layers of the generative model could be linked to brain networks that process unimodal information (e.g. sensory cortices for exteroceptive information) and multimodal association areas, respectively. The highest level of the generative model could be linked to brain networks that process information about the self, such as the insular cortex, the anterior cingulate cortex, and the medial prefrontal cortex. See Parr et al. (2022) for details about hierarchical generative models supporting adaptive regulation and allostasis and Barrett and Simmons (2015) for their putative neuronal underpinnings. See online article for colored version of this figure.\n\n<!-- image -->\n\nare reciprocally linked through top-down connections that convey predictions (black edges) and bottom-up connections that convey prediction errors (red edges), within and across levels. This predictive coding architecture permits inferring (in the Bayesian sense) the most likely causes of sensations, across multiple modalities and multiple hierarchical levels, by minimizing prediction errors at all levels. The rationale is that predictions at all levels are continuously adjusted (and synaptic weights adjusted at a slower time scale) until they match with incoming multimodal stimuli sufficiently well, and, consequently, the prediction errors across all levels are minimized. 
This process entails that even if a predictive coding agent starts with an incorrect prediction (e.g. about what object it is looking at) the prediction errors that measure a discrepancy between the predicted sensations and the actual sensations can help revise the initial predictions. See Parr et al. (2022) for a more detailed explanation of how to interpret these schematics.\n\nAnother critical aspect of Fig. 1 is that it illustrates two pathways in which prediction errors at the proprioceptive and interoceptive levels are used to steer physical actions (reflex arcs) and autonomic actions (autonomic reflexes). Endowing predictive coding with these reflexes-hence realizing an 'active inference' architecture-permits minimizing prediction errors by changing the state of the world (by physically acting) or the internal milieu (by engaging in autonomic actions) rather than only by changing predictions, as described later.", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed1.pdf" - }, - { - "text": "Fig. 4 | White matter microstructure changes throughout the experiment. a , Numerous white matter tracts demonstrate increasing QA in relation to advancing gestation week (baseline-36 weeks, 16 scans), as determined by correlational tractography analysis (FDR, q < 0.0001). See Supplementary Table 9 for complete list of tracts with a significant correlation between QA and gestation week. b , Summary of QA values by pregnancy stage (gestation and postpartum, 23 scans) for representative ROIs significantly tied to gestation. ROI-based tractometry was used to extract QA values. Each boxplot represents\n\n<!-- image -->\n\nIQR for each stage, with a horizontal line representing the median value. The whiskers indicate variability outside (±1.5) of this range. Outside values are >1.5× and <3× IQR beyond either end of the box. Values were z scored and transformed to have a mean of zero and s.d. of one for easier comparison across individual tracts. 
AF, arcuate fasciculus; C, cingulum bundle; CC, corpus callosum; CPT, corticopontine tracts; CS, corticostriatal tracts; CST, corticospinal tracts; DT, dentatorubrothalamic tract; IFOF, inferior frontal occipital fasciculus; ILF, inferior longitudinal fasciculus; MLF, middle longitudinal fasciculus.\n\noverlook the full range of changes that unfold within the gestational window, and underrepresent the brain's metamorphosis during pregnancy. Furthermore, although observed changes were largely global, some regions displayed notable stability (for example, extrastriate cortex). The subcortical region that displayed the strongest relationship with gestation week was the ventral diencephalon, which encompasses the hypothalamus and subsequent medial preoptic area and paraventricular nucleus-structures critical for inducing maternal behavior 12,16 . The hippocampus exhibited a reduction in volume across gestation, and with higher spatial resolution, this reduction was revealed to be driven by changes in CA1 and CA2/CA3 subfield volumes, while other hippocampal subfields remained stable. Adjacent PHC within the MTL also exhibited volume reduction across gestation. While our hippocampal findings are consistent with pre/post studies of pregnancy 13 , the precision lens applied within gestation revealed the nonlinear nature of this reduction. Recapitulating and clarifying these regionally specific patterns of volume change throughout the MTL merits further investigation.\n\nSimilar precision imaging studies have captured dynamic brain reorganization across other neuroendocrine transitions, such as the menstrual cycle (see review in ref. 28), underscoring the powerful role steroid hormones have in shaping the mammalian brain 29 . Endocrine changes across pregnancy dwarf those that occur across the menstrual cycle, which highlights the critical need to map the brain's response to this unique hormonal state. 
Broad physiological changes occur in tandem with the rise in steroid hormones, including changes in body mass composition, water retention, immune function and", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed4.pdf" - }, - { - "text": "Learning algorithms for neural networks use local search to choose the weights that will get the right output for each input during training. The most common training technique is the backpropagation algorithm. [105] Neural networks learn to model complex relationships between inputs and outputs and find patterns in data. In theory, a neural network can learn any function. [106]\n\nIn feedforward neural networks the signal passes in only one direction. [107] Recurrent neural networks feed the output signal back into the input, which allows short-term memories of previous input events. Long short term memory is the most successful network architecture for recurrent networks. [108] Perceptrons [109] use only a single layer of neurons; deep learning [110] uses multiple layers. Convolutional neural networks strengthen the connection\n\nA neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.\n\n<!-- image -->\n\nbetween neurons that are \"close\" to each other-this is especially important in image processing, where a local set of neurons must identify an \"edge\" before the network can identify an object. [111]\n\n## Deep learning\n\nDeep learning [110] uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higherlevel features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits, letters, or faces. 
[112]\n\nDeep learning has profoundly improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing, image classification, [113] and others. The reason that deep learning performs so\n\n<!-- image -->\n\nwell in so many applications is not known as of 2023. [114] The sudden success of deep learning in 20122015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s) [i] but because of two factors: the incredible increase in computer power (including the hundred-fold increase in speed by switching to GPUs) and the availability of vast amounts of training data, especially the giant curated datasets used for benchmark testing, such as ImageNet. [j]\n\n## GPT\n\nGenerative pre-trained transformers (GPT) are large language models (LLMs) that generate text based on the semantic relationships between words in sentences. Text-based GPT models are pretrained on a large corpus of text that can be from the Internet. The pretraining consists of predicting the next token (a token being usually a word, subword, or punctuation). Throughout this pretraining, GPT models accumulate knowledge about the world and can then generate human-like text by repeatedly predicting the next token. Typically, a subsequent training phase makes the model more truthful, useful, and harmless, usually with a technique called reinforcement learning from human feedback (RLHF). 
Current GPT", - "page_start": 7, - "page_end": 7, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed4.pdf", - "query": "What may reflect the decrease in GMV during pregnancy?", - "target_page": 6, - "target_passage": " Decreases in GMV may reflect ‘fine-tuning’ of the brain by neuromodulatory hormones in prepara- tion for parenthood", - "chunk_present": { - "presence": true, - "index": 9 - } - }, - "top_chunk": [ - { - "text": "- 7. The administrator creates MM, GM, and GM with Change Volume relationships.", - "page_start": 576, - "page_end": 576, - "source_file": "sg247938.pdf" - }, - { - "text": "- a. The MM/GM relationship is created with the -sync option, and the MM/GM relationship enters the ConsistentStopped state.", - "page_start": 555, - "page_end": 555, - "source_file": "sg247938.pdf" - }, - { - "text": "Cortical GMV and CT . We then narrowed our analyses to the first 19 sessions (baseline-36 weeks gestation) to assess novel brain changes occurring over the gestational window. We first computed Pearson's product-moment correlation matrices between the following variables: gestation week, estradiol, progesterone and the 17 network-level average GMV values. We then ran a multivariate regression analysis predicting ROI-level GMV changes by gestation week. To identify which regions were changing at a rate different from the global decrease, we then ran the analyses again to include total GMV in the regression model (Supplementary Table 2). This was extended to the network level, where we ran partial correlations accounting for total GMV. These same analyses were then run with CT measures. Globally-corrected results provided in Supplementary Tables 1-5. Percent change at the network level was computed by subtracting the final pregnancy value (36 weeks pregnant) from the first prepregnancy baseline value, then dividing that difference by said first prepregnancy baseline value. 
All analyses underwent multiple comparisons testing (false discovery rate (FDR)-corrected at q < 0.05).\n\nSubcortical GMV . A similar statistical approach was taken for subcortical volume estimates. We ran a multivariate regression analysis predicting GMV changes over gestation in 28 ROIs (Supplementary Fig. 6a) by gestation week (FDR-corrected at q < 0.05).\n\nTo evaluate the relationship between gestation week and MTL subregion volume over pregnancy ( n = 7 bilateral subregions and n = 18 MTL scans), we used a combination of linear and nonlinear models based on individual subregion data patterns. Models were compared for best fit with each subregion via AIC from the GLM output (as described in 'Summary brain metrics'). A linear regression model was most appropriate for PHC (AICdiff < 3), whereas a quadratic model performed best for CA1 and CA2/CA3. As a control, we repeated the analyses with MTL subregion volumes after proportional volume correction of total GMV calculated by ASHS. Finally, we evaluated the relationship between endogenous sex hormones (estrogen and progesterone) and subregion volumes using linear regression. Relationships were considered significant only if they met FDR correction at q < 0.05.\n\nWhite matter microstructure . DSI Studio's correlational tractography 74 was used to analyze the relationship between white matter structure and gestational week ( n = 16). A truncated model was run to examine the relationship between white matter and sex steroid hormones ( n = 14) for the subset of diffusion scans with paired endocrine data during gestation. A nonparametric Spearman's correlation was used to derive the correlation between gestational week and endocrine factors and our metrics of interest (QA and MD; see Supplementary Table 9 and Supplementary Fig. 10 for MD results) because the data were not normally distributed. 
Statistical inference was reached using connectometry, a permutation-based approach that tests the strength of coherent associations found between the local connectome and our variables of interest. It provides higher reliability and replicability by correcting for multiple comparisons. This technique provides a high-resolution characterization of local axonal orientation. The correlational tractography was run with the following parameters: t score threshold of 2.5, four pruning iterations and a length threshold of 25 voxel distance. To estimate the FDR, a total of 4,000 randomized permutations were applied to obtain the null distribution of the track length. Reported regions were selected based on FDR cutoff (FDR < 0.2, suggested by DSI Studio), and contained at least ten tracts. For visualization of global and tract QA at each gestational stage, mean QA values were extracted using DSI Studio's whole-brain fiber tracking algorithm and ROI-based tracking using the default HCP842 atlas 78 .", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed4.pdf" - }, - { - "text": "a\n\nWhole-brain GMV\n\nGMV ~ gestation week\n\nFig. 2 | Cortical GMV showed widespread change through gestation and\n\n<!-- image -->\n\n<!-- image -->\n\nPostcentral gyrus Dorsal attention network B Regional GMV\n\n<!-- image -->\n\nFrontal eye fields\n\nDorsal attention network B\n\n<!-- image -->\n\nc\n\nPrecuneus/posterior cingulate Default mode network A\n\n<!-- image -->\n\n<!-- image -->\n\nMedial frontal Salience ventral attention network A\n\n<!-- image -->\n\nInsula Salience ventral attention network B\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\npostpartum. a , Multivariate regression analyses reveal largely negative relationships between gestation week and regional GMV, with only a minority of regions unaffected or increasing over the gestational window (baseline-36 weeks). 
All associations presented here were corrected for multiple comparisons (FDR at q < 0.05; nonsignificant values set to zero for interpretability). b , Average network change was calculated by estimating GMV percent change from baseline (initial) to 36 weeks gestation (final). Attention and control networks appear most affected. c , Six representative regions, classified by major subnetworks, that exhibit pronounced GMV change across gestation. For each panel, we display a scatterplot between average GMV of the ROIs and gestation week (left; gestation sessions only, 19 scans), and summary GMV of ROIs by pregnancy stage across the whole study (right; gestation and postpartum sessions, 26 scans).\n\nShaded regions in scatterplots represent a 95% confidence interval. Each boxplot represents IQR for each stage, with a horizontal line representing the median value. The whiskers indicate variability outside (±1.5) of this range. Outside values are >1.5× and <3× IQR beyond either end of the box. All statistical tests were corrected for multiple comparisons (FDR at q < 0.05) and values were z scored and transformed to have a mean of zero and s.d. of one for easier comparison across regions. Please note that the data values shown here are raw (see Supplementary Tables 1 and 2 and Supplementary Data 1 for exhaustive list). Brain visualizations created with R package ggseg 48 . IQR, interquartile range; Lat, lateral; Med, medial; DMN, default mode network; VisPeri, visual peripheral network; SomMot, somatomotor network; VisCent, visual central network; Cont, control network; TempPar, temporal parietal network; DorsAttn, dorsal attention network; SalVentAttn, salience/ventral attention network.\n\nInferior parietal Control network B", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed4.pdf" - }, - { - "text": "- 3. 
To manage multiple MM/GM relationships as one entity, the relationships can be made part of a MM/GM Consistency Group to ensure data consistency across multiple MM/GM relationships, or for ease of management.", - "page_start": 562, - "page_end": 562, - "source_file": "sg247938.pdf" - }, - { - "text": "Changes in GMV were near-ubiquitous across the cortical mantle (Fig. 2a). Most large-scale brain networks exhibited decreases in GMV (Fig. 2b and Supplementary Table 1); indeed, 80% of the 400 regions of interest (ROI) demonstrated negative relationships between GMV and gestation week (Fig. 2a and Supplementary Table 2). Together, these results provide evidence of a global decrease in cortical volume across pregnancy. Several sensory and attention subnetworks were particularly sensitive to gestation, including the control (subnetwork B), salience/ventral attention (subnetwork A), dorsal attention (subnetwork B), default (subnetwork A) and somatomotor (subnetworks A and B) networks (Supplementary Table 1). Regions driving these network-level changes include the bilateral inferior parietal lobe, postcentral gyri, insulae, prefrontal cortex, posterior cingulate and somatosensory cortex (Fig. 2c, Supplementary Table 2 and validation of findings using alternate pipeline in Supplementary Tables 1 and 3). These regions and\n\nassociated brain networks appear to decrease in volume at a faster rate than the rest of the brain throughout pregnancy, as determined by a subsequent analysis controlling for total GMV (Supplementary Tables 1 and 2). GMV reductions were also significantly correlated with the participant's estradiol and progesterone concentrations (Supplementary Table 1). A highly similar pattern of results was observed when examining pregnancy-related CT changes (Supplementary Fig. 3 and Supplementary Tables 4 and 5). 
Significant reductions in cortical GMV over gestation remained after controlling for standard quality control (QC) metrics, albeit with some influence on the magnitude and location of the observed effects (Supplementary Figs. 4 and 5).\n\nIn contrast, GMV within regions of the default mode (subnetwork C), limbic (subnetworks A and B) and visual peripheral networks buck the global trend by slightly increasing (for example, temporal poles), remaining constant (for example, orbitofrontal cortex) or reducing at a much slower rate (for example, extrastriate cortex) than total GMV (Fig. 2a,b and Supplementary Tables 1 and 2). CT changes in these regions exhibit similar patterns (Supplementary Fig. 3 and Supplementary Tables 4 and 5).\n\n## Subcortical GMV changes tied to gestation\n\nConsistent with the broader cortical reductions in GMV, several subcortical regions significantly reduced in volume across gestation (Fig. 3a, left). This included bilateral ventral diencephalon (right hemisphere values shown in Fig. 3a, right; encompasses hypothalamus, substantia nigra, mammillary body, lateral geniculate nucleus and red nucleus among others 22 ), caudate, hippocampus and thalamus, along with left putamen and brain stem (Supplementary Table 6, q < 0.05).\n\nNext, high-resolution segmentation of the MTL allowed us to interrogate subcortical structures at a finer resolution, revealing nonlinear volumetric decreases in CA1 ( F (2,15) = 5.84, q = 0.031, R 2 adj = 0.36; Fig. 3b, left) and CA2/CA3 ( F (2,15) = 6.82, q = 0.027, R 2 adj = 0.41; Fig. 3b, middle) across gestation. PHC exhibited linear volumetric decreases across gestation ( F (1,16) = 24.87, q < 0.001, R 2 adj = 0.58; Fig. 3b, right) which was also tied to estradiol ( F (1,12) = 20.21, q = 0.005, R 2 adj = 0.60). All three relationships remained significant after proportional correction for total GMV. 
There was no significant change in other subregions or total volume of the hippocampal body, or in the parahippocampal gyrus (Supplementary Table 7 and Supplementary Fig. 8).\n\n## White matter microstructure changes tied to gestation", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed4.pdf" - }, - { - "text": "- /SM590000 During periods when application hosts can tolerate extended response times and it is expected that the gmlinktolerance feature might stop the GM relationships. For example, if you test by using an I/O generator that is configured to stress the back-end storage, the gmlinktolerance feature might detect the high latency and stop the GM relationships.", - "page_start": 564, - "page_end": 564, - "source_file": "sg247938.pdf" - }, - { - "text": "## White matter microstructure changes tied to gestation\n\nIn contrast to decreasing global GMV, correlational tractography of white matter, which tests for linear trends in the data, revealed increasing microstructural integrity across the whole brain during gestation (Fig. 4a), concomitant with the rise in 17β-estradiol and progesterone (all q < 0.001; Supplementary Fig. 9). Tracts displaying robust correlations with gestational week included the corpus callosum, arcuate fasciculus, inferior fronto-occipital fasciculus and inferior longitudinal fasciculus (Fig. 4b), as well as the cingulum bundle, middle and superior longitudinal fasciculus, corticostriatal, corticospinal and corticopontine tracts (see Supplementary Table 9 for complete list).\n\n## Comparing brain changes across pregnancy against controls\n\nWe then compared the changes in GMV across gestation to that of typical variability over time, derived from eight densely-sampled controls 23 . The GMV changes we see across pregnancy far exceed normative brain variability (Supplementary Fig. 11). On average, change in cortical GMV was nearly three times higher than controls scanned over a similar duration (Supplementary Fig. 11a,b). 
This extends to MTL subfields, wherein change in volume was three to four times greater across gestation than normative brain variability (Supplementary Fig. 11c,d). We contextualized these findings further by comparing gestational GMV change against our participant's preconception brain volumes; average GMV change during pregnancy was six times (cortical) and three times (MTL) higher than the variability observed between baseline sessions.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed4.pdf" - }, - { - "text": "- a. When a MM/GM relationship is stopped in the ConsistentSynchronized state, the MM/GM relationship enters the Idling state when you specify the -access option, which enables write I/O on the auxiliary volume.", - "page_start": 556, - "page_end": 556, - "source_file": "sg247938.pdf" - }, - { - "text": "sleep patterns 11 . These factors could have a role in the brain changes observed here, with some driving neurobiological changes and others, like water retention, potentially affecting MRI-based measurements. Note that, although cortical reductions in GMV over gestation were stable across analyses, accounting for QC measures influenced the magnitude and location of these results. These metrics all fell within the standard range, but there may be meaningful reductions in signal that accompany volumetric reductions (for example, increased CSF and decreased GM)-a methodological nuance that goes beyond the scope of this resource study. Ultimately, identifying the shared and unique contributions of these factors to the neuroanatomical changes that unfold across gestation warrants further investigation. 
Deeply phenotyping a large and diverse cohort of women across pregnancy will open up new avenues of exploration, for example, allowing researchers to link blood-based proteomic signatures to pregnancy outcomes; deploying wearable devices to monitor changes in sleep, cognition and mood; and probing the broader social and environmental determinants of maternal health 27 .\n\nThe neuroanatomical changes that unfold during matrescence may have broad implications for understanding individual differences in parental behavior 13,24,30,31 , vulnerability to mental health disorders 32,33 and patterns of brain aging 18,19,34-36 . Decreases in GMV may reflect 'fine-tuning' of the brain by neuromodulatory hormones in preparation for parenthood 26 . For example, in rodents, steroid hormones promote parental behavior by remodeling specific neural circuits in the medial preoptic area of the hypothalamus. These behavioral adaptations are critical to the dam's ability to meet the demands of caring for", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed4.pdf" - } - ] - }, - { - "references": { - "source_file": "6126797.pdf", - "query": "How to light up my sports smart watch?", - "target_page": 2, - "target_passage": "Up button: Short press to light up or turn off the screen", - "chunk_present": { - "presence": true, - "index": 6 - } - }, - "top_chunk": [ - { - "text": "## Sports smart watch User Manual DT3 Mate\n\n<!-- image -->\n\nThank you for choosing our smart watch. You can fully understand the use and operation of the equipment by reading this manual.\n\nThe company reserves the right to modify the contents of this manual without any prior notice.\n\nThe product contains: a packing box, a manual, a watch body, and a charging cable.\n\n## A. 
Watch function description\n\nButton description:", - "page_start": 0, - "page_end": 0, - "source_file": "6126797.pdf" - }, - { - "text": "Click 'camera' in the app WearPro to wake up the camera mode of the watch, click the camera button on the watch to take photos, and the photos will be automatically saved to the phone album.\n\n## 5. Data synchronization\n\nAfter the watch is successfully bound to the application, the data in the smartwatch can be synchronized to the application.\n\n## 6. Tilt to wake the screen\n\nWear the smartwatch correctly on your wrist (left/right hand). when you switch on the feature, you can light up the screen when you raise up your wrist.\n\n## 7. Do not disturb mode\n\nIn the APP, tap 'Device' > 'More' > 'Do not disturb mode', set the start to end time, such as 12:00 to 14:00, then you won't receive phone calls and apps notifications on the watch during this period.\n\n## 8. Daily alarm clock\n\nIn the APP in the APP Device>More, set the start and the end time, the alarm can be set only once or repeatedly on the date (week) setting, and the alarm can be turned on/off.\n\n## 9. Sedentary reminder\n\nSet the start and the end time of the sedentary reminder, and the time interval (minutes) in the APP. You can set the reminder for once or to repeat regularly by entering the repeating setting. When the sedentary time is reached, the watch will vibrate and display a sedentary icon on the screen.\n\n## 10. Drink water reminder\n\nSet the reminder frequency (minutes) and the time period of the start and the end in a day in the APP. You can set the reminder for once or to repeat regularly by entering the repeating setting and selecting the date (week) of the water reminder. When the time of drink water reminder is reached, the watch will vibrate and there will be a water icon on the screen.\n\n## 11. 
Dial push\n\n## 11.1.Push an existing watch face\n\nBind the watch and the app, open the app, tap Device > Watch face push, the watch will restart and bind the APP automatically after the synchronization of the watch face.\n\n - 11.2. Customize the watch face\n\nBind the watch and the app, open the app, tap Device > Watch face push, the first several watch faces marked with 'custom watch faces' are customizable. The watch will restart and bind the APP automatically after the synchronization of the watch face.\n\n## 12. Firmware version", - "page_start": 6, - "page_end": 6, - "source_file": "6126797.pdf" - }, - { - "text": "The version of the watch is displayed on 'Firmware upgrade' in the column of 'Device', and users can decide to whether upgrade the firmware version.\n\n## 13. Unbind\n\nIn the \"Device\" column of WearPro, scroll down to the \"Unbind\" and click to unbind the APP. The iSO users need to go to the Bluetooth settings of the phone, select the Bluetooth name of the\n\n<!-- image -->\n\nsmart watch, and click \"Forget this device\". The 'About' of the watch has an 'Unbind' button, click it to unbind or do it in the APP. For the safety of users' data, the watch will implement a factory reset after that.\n\n## ●Frequently asked questions and answers\n\n*Please avoid exposing the device to extreme temperatures that are too cold or too hot for a long time, which may cause permanent damage.\n\n*Why can't I take a hot bath with my watch?\n\nThe temperature of the bath water is relatively changed, it will produce a lot of water vapor, and the water vapor is in the gas phase, and its molecular radius is small, and it is easy to seep into the gap of the watch case. 
The internal circuit of the watch is short-circuited, which damages the circuit board of the watch and damages the watch.\n\n## *No power on, no charging\n\nIf you receive the goods and the watch does not turn on, it may be caused by a collision during the transportation of the watch and the battery Seiko board has been protected, so plug in the charging cable to activate it.", - "page_start": 7, - "page_end": 7, - "source_file": "6126797.pdf" - }, - { - "text": "Enable the SMS notification in the app. When one or more SMS messages are received on the mobile phone, the watch will receive one or more SMS reminders at the same time.\n\n1.5.3. Other application message notifications:\n\nTurn on the corresponding application message notification in the app, such as WeChat, QQ, Outlook, Facebook and other applications. When the mobile phone receives one/multiple application message notifications, the watch will receive one/multiple corresponding message reminders at the same time.\n\n## 1.6 Frequently used contacts\n\nThe watch binds to the app, and you allow the watch to access to the phone book of your mobile phone, then you can synchronize you contacts of your mobile phone to the smartwatch.\n\n## 1.7 Fitness data\n\nFitness data is turned on by default. When you enter the fitness data interface, scroll up the screen, the smartwatch will display the current data of steps, distance, and calories. 
The data will be wiped out at 00:00 every day in the morning.\n\n## 1.8 Sports modes (walking, running, cycling, rope skipping, badminton,\n\n## basketball, football)\n\n1.8.1 Select the corresponding exercise mode, click the 'Start' button on the screen to start the exercise; click the 'Start' button again to pause the recording of the exercise; click the 'End' button to end the recording, and save to the data.\n\n1.8.2 The data can only be saved when the recording of the exercise is more than 1 minute; If the recording time is less than 1 minute, the smartwatch will remind you that the data is too little to be saved.\n\n## 1.9 Heart rate\n\nAfter you wearing the smartwatch correctly, you can measure heart rate when you enter the heart rate function. If you don't wear the smartwatch properly, it will remind you to wear firmly for the measurement.\n\n## 1.10 ECG\n\nAfter you wearing the smartwatch correctly, and enter the ECG function(you need to turn on the ECG interface in the app, you can have single measurement at a time. The data of ECG will be saved in the mobile phone. This function should be used with the app.\n\n## 2.0 My QR code\n\nConnect the watch to the APP, find My QR Code in the APP, select WeChat/QQ/Alipay and other \"Receive money QR code\" to sync to the watch (Please follow the instructions of the app to operate the function).\n\n## 2.1 Remote control music", - "page_start": 3, - "page_end": 3, - "source_file": "6126797.pdf" - }, - { - "text": "Bind the smartwatch to the app WearPro, you can control the music to start/pause/play previous song/play next song of your phone.\n\nBind the audio/calling Bluetooth of the smartwatch also, the music will be broadcast on the smartwatch.\n\n## 2.2 Sleep\n\nSleep monitoring time period: from 18:00 at night to 10:00 the next day, the data will be generated by the watch. 
After connecting to the APP, the sleep data on the watch can be synchronized to the APP for you to check.\n\n## 2.3 stopwatch\n\nClick the stopwatch to enter the timing interface, and you can record the time once.\n\n## 2.4 Weather\n\nAfter the smartwatch is connected to the app and the data is synchronized, tap Weather on the watch to display the weather information for the day.\n\n## 2.5 Find mobile phone\n\nAfter the watch is bound to the app WearPro, tap this function to find the mobile phone, and the mobile phone will vibrate or emit a ringtone.\n\n## 2.6 Meteorology\n\nClick on 'Meteorology' on the watch to display the ultraviolet (UV) and air pressure conditions of the day.\n\n## 2.7 Massager\n\nTap the green button to start the massage, and the watch is in a vibrating state, tap the red button to end the massage state.\n\n## 3.0 Menu style\n\nThere are a variety of menu styles for users to choose.\n\n## 3.1 Settings\n\n - 1) You can select the watch language on the settings of the watch, or the watch language can be synchronized with your mobile phone language after the watch successfully binds to the APP.\n - 2) Switch the watch face, swipe to the right to view the next watch face, select a watch face, and click it to set the watch face.\n - 3) Set screen time; a variety of screen time lengths can be selected.\n - 4) Vibration intensity; set reminder vibration intensity.\n - 5) Password; a 4-digit password can be set (if you forget the password, please enter 8762 to decrypt the previous password).\n - 6) Restore factory settings; click √ to enable the factory reset, and click X to cancel the factory reset.", - "page_start": 4, - "page_end": 4, - "source_file": "6126797.pdf" - }, - { - "text": "- 3) Swipe to the right when the watch is in the dial interface, you can find time/date/week/the latest message (enter to view multiple messages)/some of the recently used menu functions, and turn on or off audio Bluetooth for calls.\n- 4) Swipe up the screen when 
the watch is in the dial interface to enter the menu interface, and scroll up and down to find the corresponding function.\n- 5) Long press the watch face interface and swipe to right or left to switch the watch face, select one of them and set it with one-click.\n\n## 1.2 App notification\n\n- 1) When the watch is bound to the APP, and you allow the watch to display notifications on the watch, the new messages received in your mobile phone will be pushed to the watch, and a total of 10 messages can be saved. The messages received after 10 messages will be overwritten one by one.\n- 2) Swipe to the bottom to click the delete icon to clear all message records.\n\n## 1.3 Drop-down menu\n\nScroll down the screen when the watch is in the dial interface to enter the drop-down menu interface.\n\n- 1) Bluetooth connection status; time; power left;\n- 2) About, where you can check the firmware version of watch and the address of the Bluetooth\n- 3) Setting, where you can enter it to set part of the functions;\n- 4) Brightness adjustment; where you can adjust the brightness of the screen;\n- 5) Alipay. Download the app Alipay in your mobile phone and bind it with your watch to realize offline payment.\n\n## 1.4 Phone/Call History\n\n- 1. Swipe to the left when the watch is in the watch interface, click the calling icon to turn on/off the calling Bluetooth. Turn on the calling Bluetooth, you will find the name of the calling Bluetooth, then go to the Bluetooth settings of your mobile phone, and bind the Bluetooth in the name of the calling Bluetooth of your watch. You can use the watch to make phone calls when they are successfully bound.\n- 2. Call records, which can save the records of incoming and dialed calls. (It can save more than 50 call records, and it will be automatically overwritten when 128 records are full. Click any call record to call back)\n- 3. 
Dial the keyboard, you can enter the phone number to make a call.\n\n## 1.5 message\n\nWhen the watch is successfully bound to the app, and you approve notifications of corresponding apps in your mobile phone system, and switch on these apps or callings notifications functions on your watch, the notifications on your mobile phone can synchronize to your watch.\n\n- 1.5.1. Incoming call notification:\n\nTurn on the incoming call reminder in the app. When the phone has a incoming call, the watch will light up or vibrate.\n\n- 1.5.2. SMS notification:", - "page_start": 2, - "page_end": 2, - "source_file": "6126797.pdf" - }, - { - "text": "<!-- image -->\n\n## Up button:\n\nShort press to light up or turn off the screen; one press to go back the dial interface; long press to reactivate the watch.\n\n## Button down:\n\nShort press to enter multi-sport mode.\n\nIn addition, when the watch is in the off-screen state, you can light up the screen by pressing any buttons.\n\n## Charging instructions:\n\nWireless charging, as shown in the picture below.\n\n<!-- image -->\n\n## 1.1 Shortcut function:\n\n- 1) Swipe to the left till you find the \"+\" icon, click the icon to add part of the functions in the shortcut.\n- 2) Scroll down the screen when the watch is in the dial interface, you can find Bluetooth connection status, time, power, brightness adjustment and other functions.", - "page_start": 1, - "page_end": 1, - "source_file": "6126797.pdf" - }, - { - "text": "## B . Bind to the APP\n\n## 1. APP download method\n\n## 1.1 Scan the QR code to download\n\n<!-- image -->\n\n1.2 Search the application at App market and download For Android users:\n\nSearch for \"WearPro\" in the Google Play app store or any customized Android store to download, remember to check the pop-up box on your phone when installing, and agree to the permission. 
For iOS users:\n\nSearch for \"WearPro\" in the APP Store to download, remember to check the pop-up box on your phone when installing, and agree to the permission.\n\n<!-- image -->\n\nAfter WearPro is installed, the app icon appears as\n\n.\n\n## 2.Bind Bluetooth\n\n<!-- image -->\n\n## 2.1 Unconnected to the APP state:\n\nAfter the watch is turned on, the Bluetooth will be in the state of being searched. After open the APK/APP, go to Devices > Add Device > click to start searching, select and click the corresponding watch device name, and the watch will be successfully bound to the app.\n\n## 2.2 Connected to the APP state:\n\n<!-- image -->\n\nWatch time synchronization: the time shown at the smartwatch and your mobile phone will synchronized after the smartwatch is bound to the APP successfully.\n\n2.3 Binding the audio/calling Bluetooth\n\nWhen the smartwatch is in the dial interface, you can find the audio/calling Bluetooth icon, and click it to turn it on, then go to the Bluetooth settings of your mobile phone and click the name of the audio/calling Bluetooth of the smartwatch to bind it.\n\n## 3. Find Watch\n\nAfter the smartwatch is bound to the APP, you click 'Find Watch' in the APP, the smartwatch will light up and vibrate for once.\n\n## 4. Camera", - "page_start": 5, - "page_end": 5, - "source_file": "6126797.pdf" - }, - { - "text": "<!-- image -->\n\nand view content on demand. They can search content and control their PVR remotely from their smartphone. They can stream programming to their tablet anywhere in their home. A single Rogers Nextbox serves as a master PVR for the entire home enabling simultaneous viewing and recording of up to eight separate shows and storage of over 250 hours of high-definition programming. And customers can access television and movie content on-demand from anywhere by laptop, tablet or smartphone using the Rogers Anyplace TV app.\n\nTelevision has never been this good, this easy, or this simple to control. 
And it's even better when combined with innovative Rogers features, such as the ability to screen phone calls on their TV, listen to voicemail on their tablet, or receive talking text messages on their home phone. Wireless customers can also use Rogers One Number to switch calls\n\namong their computer, home phone and wireless device without interruption; manage e-mails; text messages and voicemail; hold live video chats; and combine and sync contacts from across multiple devices.\n\nWhen they're not at home, more and more customers also rely on Rogers Smart Home Monitoring, a complete monitoring, automation and security solution that includes the most innovative technology and features available. Smart Home Monitoring lets customers monitor, control and receive alerts by smartphone or online, staying connected to their home from almost anywhere, and enjoying the peace of mind that comes with having the most reliable monitoring solution available. Smart Home Monitoring also gives customers the ability to automate lights, appliances, thermostats and more, so they know their homes are not only secure but more energy-efficient and convenient, also.", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "<!-- image -->\n\nOur new wireless Share Everything plans were Canada's first to let individuals, families and small businesses share wireless data and unlimited nationwide talk and text, with up to 10 wireless devices. Rogers recently further enhanced its exciting One Number service by introducing smartphone apps which enable customers to use mobile data or Wi-Fi to talk, text and video chat using their existing Rogers wireless number from any device.\n\nWe also keep customers informed and entertained with Rogers nextgeneration NextBox 3.0 TV experience which allows customers to view and record up to eight HD programs simultaneously, store hundreds of hours of content and enjoy whole-home PVR capability. 
And with Rogers Anyplace TV, it's also a wireless experience where viewers can navigate their cable guide, use a virtual remote, set PVR recordings and stream live or on-demand content from a tablet, smartphone, laptop or gaming console.\n\nRogers continues to be Canada's innovation leader in rapidly growing areas such as wireless machine-to-machine communications, remote home monitoring and automation, mobile payments, in-car infotainment and telematics, and digital media. As well, Rogers has deployed a suite of unique local digital services that create virtual marketplaces for bringing consumers and businesses together and provide location-based targeted offers.\n\nThese are just a few examples of the ways Rogers continues to innovate and lead the way, introducing wireless, broadband and digital technologies and services that fundamentally change the way customers stay connected, informed and entertained anywhere they are. Canadians know there's one thing to be certain of - if they're with Rogers, they'll never miss a thing.", - "page_start": 18, - "page_end": 18, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "6126797.pdf", - "query": "Is my sports smartwatch's fitness data turned on or off by default?", - "target_page": 4, - "target_passage": "Fitness data is turned on by default.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Enable the SMS notification in the app. When one or more SMS messages are received on the mobile phone, the watch will receive one or more SMS reminders at the same time.\n\n1.5.3. Other application message notifications:\n\nTurn on the corresponding application message notification in the app, such as WeChat, QQ, Outlook, Facebook and other applications. 
When the mobile phone receives one/multiple application message notifications, the watch will receive one/multiple corresponding message reminders at the same time.\n\n## 1.6 Frequently used contacts\n\nThe watch binds to the app, and you allow the watch to access to the phone book of your mobile phone, then you can synchronize you contacts of your mobile phone to the smartwatch.\n\n## 1.7 Fitness data\n\nFitness data is turned on by default. When you enter the fitness data interface, scroll up the screen, the smartwatch will display the current data of steps, distance, and calories. The data will be wiped out at 00:00 every day in the morning.\n\n## 1.8 Sports modes (walking, running, cycling, rope skipping, badminton,\n\n## basketball, football)\n\n1.8.1 Select the corresponding exercise mode, click the 'Start' button on the screen to start the exercise; click the 'Start' button again to pause the recording of the exercise; click the 'End' button to end the recording, and save to the data.\n\n1.8.2 The data can only be saved when the recording of the exercise is more than 1 minute; If the recording time is less than 1 minute, the smartwatch will remind you that the data is too little to be saved.\n\n## 1.9 Heart rate\n\nAfter you wearing the smartwatch correctly, you can measure heart rate when you enter the heart rate function. If you don't wear the smartwatch properly, it will remind you to wear firmly for the measurement.\n\n## 1.10 ECG\n\nAfter you wearing the smartwatch correctly, and enter the ECG function(you need to turn on the ECG interface in the app, you can have single measurement at a time. The data of ECG will be saved in the mobile phone. 
This function should be used with the app.\n\n## 2.0 My QR code\n\nConnect the watch to the APP, find My QR Code in the APP, select WeChat/QQ/Alipay and other \"Receive money QR code\" to sync to the watch (Please follow the instructions of the app to operate the function).\n\n## 2.1 Remote control music", - "page_start": 3, - "page_end": 3, - "source_file": "6126797.pdf" - }, - { - "text": "Bind the smartwatch to the app WearPro, you can control the music to start/pause/play previous song/play next song of your phone.\n\nBind the audio/calling Bluetooth of the smartwatch also, the music will be broadcast on the smartwatch.\n\n## 2.2 Sleep\n\nSleep monitoring time period: from 18:00 at night to 10:00 the next day, the data will be generated by the watch. After connecting to the APP, the sleep data on the watch can be synchronized to the APP for you to check.\n\n## 2.3 stopwatch\n\nClick the stopwatch to enter the timing interface, and you can record the time once.\n\n## 2.4 Weather\n\nAfter the smartwatch is connected to the app and the data is synchronized, tap Weather on the watch to display the weather information for the day.\n\n## 2.5 Find mobile phone\n\nAfter the watch is bound to the app WearPro, tap this function to find the mobile phone, and the mobile phone will vibrate or emit a ringtone.\n\n## 2.6 Meteorology\n\nClick on 'Meteorology' on the watch to display the ultraviolet (UV) and air pressure conditions of the day.\n\n## 2.7 Massager\n\nTap the green button to start the massage, and the watch is in a vibrating state, tap the red button to end the massage state.\n\n## 3.0 Menu style\n\nThere are a variety of menu styles for users to choose.\n\n## 3.1 Settings\n\n - 1) You can select the watch language on the settings of the watch, or the watch language can be synchronized with your mobile phone language after the watch successfully binds to the APP.\n - 2) Switch the watch face, swipe to the right to view the next watch face, select a watch face, and 
click it to set the watch face.\n - 3) Set screen time; a variety of screen time lengths can be selected.\n - 4) Vibration intensity; set reminder vibration intensity.\n - 5) Password; a 4-digit password can be set (if you forget the password, please enter 8762 to decrypt the previous password).\n - 6) Restore factory settings; click √ to enable the factory reset, and click X to cancel the factory reset.", - "page_start": 4, - "page_end": 4, - "source_file": "6126797.pdf" - }, - { - "text": "Click 'camera' in the app WearPro to wake up the camera mode of the watch, click the camera button on the watch to take photos, and the photos will be automatically saved to the phone album.\n\n## 5. Data synchronization\n\nAfter the watch is successfully bound to the application, the data in the smartwatch can be synchronized to the application.\n\n## 6. Tilt to wake the screen\n\nWear the smartwatch correctly on your wrist (left/right hand). when you switch on the feature, you can light up the screen when you raise up your wrist.\n\n## 7. Do not disturb mode\n\nIn the APP, tap 'Device' > 'More' > 'Do not disturb mode', set the start to end time, such as 12:00 to 14:00, then you won't receive phone calls and apps notifications on the watch during this period.\n\n## 8. Daily alarm clock\n\nIn the APP in the APP Device>More, set the start and the end time, the alarm can be set only once or repeatedly on the date (week) setting, and the alarm can be turned on/off.\n\n## 9. Sedentary reminder\n\nSet the start and the end time of the sedentary reminder, and the time interval (minutes) in the APP. You can set the reminder for once or to repeat regularly by entering the repeating setting. When the sedentary time is reached, the watch will vibrate and display a sedentary icon on the screen.\n\n## 10. Drink water reminder\n\nSet the reminder frequency (minutes) and the time period of the start and the end in a day in the APP. 
You can set the reminder for once or to repeat regularly by entering the repeating setting and selecting the date (week) of the water reminder. When the time of drink water reminder is reached, the watch will vibrate and there will be a water icon on the screen.\n\n## 11. Dial push\n\n## 11.1.Push an existing watch face\n\nBind the watch and the app, open the app, tap Device > Watch face push, the watch will restart and bind the APP automatically after the synchronization of the watch face.\n\n - 11.2. Customize the watch face\n\nBind the watch and the app, open the app, tap Device > Watch face push, the first several watch faces marked with 'custom watch faces' are customizable. The watch will restart and bind the APP automatically after the synchronization of the watch face.\n\n## 12. Firmware version", - "page_start": 6, - "page_end": 6, - "source_file": "6126797.pdf" - }, - { - "text": "## B . Bind to the APP\n\n## 1. APP download method\n\n## 1.1 Scan the QR code to download\n\n<!-- image -->\n\n1.2 Search the application at App market and download For Android users:\n\nSearch for \"WearPro\" in the Google Play app store or any customized Android store to download, remember to check the pop-up box on your phone when installing, and agree to the permission. For iOS users:\n\nSearch for \"WearPro\" in the APP Store to download, remember to check the pop-up box on your phone when installing, and agree to the permission.\n\n<!-- image -->\n\nAfter WearPro is installed, the app icon appears as\n\n.\n\n## 2.Bind Bluetooth\n\n<!-- image -->\n\n## 2.1 Unconnected to the APP state:\n\nAfter the watch is turned on, the Bluetooth will be in the state of being searched. 
After open the APK/APP, go to Devices > Add Device > click to start searching, select and click the corresponding watch device name, and the watch will be successfully bound to the app.\n\n## 2.2 Connected to the APP state:\n\n<!-- image -->\n\nWatch time synchronization: the time shown at the smartwatch and your mobile phone will synchronized after the smartwatch is bound to the APP successfully.\n\n2.3 Binding the audio/calling Bluetooth\n\nWhen the smartwatch is in the dial interface, you can find the audio/calling Bluetooth icon, and click it to turn it on, then go to the Bluetooth settings of your mobile phone and click the name of the audio/calling Bluetooth of the smartwatch to bind it.\n\n## 3. Find Watch\n\nAfter the smartwatch is bound to the APP, you click 'Find Watch' in the APP, the smartwatch will light up and vibrate for once.\n\n## 4. Camera", - "page_start": 5, - "page_end": 5, - "source_file": "6126797.pdf" - }, - { - "text": "- 3) Swipe to the right when the watch is in the dial interface, you can find time/date/week/the latest message (enter to view multiple messages)/some of the recently used menu functions, and turn on or off audio Bluetooth for calls.\n- 4) Swipe up the screen when the watch is in the dial interface to enter the menu interface, and scroll up and down to find the corresponding function.\n- 5) Long press the watch face interface and swipe to right or left to switch the watch face, select one of them and set it with one-click.\n\n## 1.2 App notification\n\n- 1) When the watch is bound to the APP, and you allow the watch to display notifications on the watch, the new messages received in your mobile phone will be pushed to the watch, and a total of 10 messages can be saved. 
The messages received after 10 messages will be overwritten one by one.\n- 2) Swipe to the bottom to click the delete icon to clear all message records.\n\n## 1.3 Drop-down menu\n\nScroll down the screen when the watch is in the dial interface to enter the drop-down menu interface.\n\n- 1) Bluetooth connection status; time; power left;\n- 2) About, where you can check the firmware version of watch and the address of the Bluetooth\n- 3) Setting, where you can enter it to set part of the functions;\n- 4) Brightness adjustment; where you can adjust the brightness of the screen;\n- 5) Alipay. Download the app Alipay in your mobile phone and bind it with your watch to realize offline payment.\n\n## 1.4 Phone/Call History\n\n- 1. Swipe to the left when the watch is in the watch interface, click the calling icon to turn on/off the calling Bluetooth. Turn on the calling Bluetooth, you will find the name of the calling Bluetooth, then go to the Bluetooth settings of your mobile phone, and bind the Bluetooth in the name of the calling Bluetooth of your watch. You can use the watch to make phone calls when they are successfully bound.\n- 2. Call records, which can save the records of incoming and dialed calls. (It can save more than 50 call records, and it will be automatically overwritten when 128 records are full. Click any call record to call back)\n- 3. Dial the keyboard, you can enter the phone number to make a call.\n\n## 1.5 message\n\nWhen the watch is successfully bound to the app, and you approve notifications of corresponding apps in your mobile phone system, and switch on these apps or callings notifications functions on your watch, the notifications on your mobile phone can synchronize to your watch.\n\n- 1.5.1. Incoming call notification:\n\nTurn on the incoming call reminder in the app. When the phone has a incoming call, the watch will light up or vibrate.\n\n- 1.5.2. 
SMS notification:", - "page_start": 2, - "page_end": 2, - "source_file": "6126797.pdf" - }, - { - "text": "The version of the watch is displayed on 'Firmware upgrade' in the column of 'Device', and users can decide to whether upgrade the firmware version.\n\n## 13. Unbind\n\nIn the \"Device\" column of WearPro, scroll down to the \"Unbind\" and click to unbind the APP. The iSO users need to go to the Bluetooth settings of the phone, select the Bluetooth name of the\n\n<!-- image -->\n\nsmart watch, and click \"Forget this device\". The 'About' of the watch has an 'Unbind' button, click it to unbind or do it in the APP. For the safety of users' data, the watch will implement a factory reset after that.\n\n## ●Frequently asked questions and answers\n\n*Please avoid exposing the device to extreme temperatures that are too cold or too hot for a long time, which may cause permanent damage.\n\n*Why can't I take a hot bath with my watch?\n\nThe temperature of the bath water is relatively changed, it will produce a lot of water vapor, and the water vapor is in the gas phase, and its molecular radius is small, and it is easy to seep into the gap of the watch case. 
The internal circuit of the watch is short-circuited, which damages the circuit board of the watch and damages the watch.\n\n## *No power on, no charging\n\nIf you receive the goods and the watch does not turn on, it may be caused by a collision during the transportation of the watch and the battery Seiko board has been protected, so plug in the charging cable to activate it.", - "page_start": 7, - "page_end": 7, - "source_file": "6126797.pdf" - }, - { - "text": "<!-- image -->\n\n## Up button:\n\nShort press to light up or turn off the screen; one press to go back the dial interface; long press to reactivate the watch.\n\n## Button down:\n\nShort press to enter multi-sport mode.\n\nIn addition, when the watch is in the off-screen state, you can light up the screen by pressing any buttons.\n\n## Charging instructions:\n\nWireless charging, as shown in the picture below.\n\n<!-- image -->\n\n## 1.1 Shortcut function:\n\n- 1) Swipe to the left till you find the \"+\" icon, click the icon to add part of the functions in the shortcut.\n- 2) Scroll down the screen when the watch is in the dial interface, you can find Bluetooth connection status, time, power, brightness adjustment and other functions.", - "page_start": 1, - "page_end": 1, - "source_file": "6126797.pdf" - }, - { - "text": "## Sports smart watch User Manual DT3 Mate\n\n<!-- image -->\n\nThank you for choosing our smart watch. You can fully understand the use and operation of the equipment by reading this manual.\n\nThe company reserves the right to modify the contents of this manual without any prior notice.\n\nThe product contains: a packing box, a manual, a watch body, and a charging cable.\n\n## A. 
Watch function description\n\nButton description:", - "page_start": 0, - "page_end": 0, - "source_file": "6126797.pdf" - }, - { - "text": "<!-- image -->\n\nprogramming across the country's largest markets, as well as five OMNI Television stations which deliver multilingual news, information and entertainment to Canada's multiple language communities.\n\nThe Sportsnet specialty network provides sports programming across Canada through its four regional television channels and its nationallydistributed Sportsnet ONE, Sportsnet World, and Sportsnet 360 stations. Rogers also owns other Canadian specialty television channels, including FX Canada, OLN, The Biography Channel and G4.\n\nThe Shopping Channel - Canada's only nationally televised and Internet shopping service - is a leading interactive multi-channel retailer, offering a vast assortment of exclusive products and top brand names. As one of Canada's most innovative and diversified retailers, it provides customers with exceptional selections in health/beauty, jewelry, home/lifestyle, fashion/accessories, and electronics.\n\nRogers also publishes many well-known consumer magazines, such as Maclean's, Chatelaine, FLARE, L'actualité, and Canadian Business, and is the leading publisher of a number of industry, medical and financial publications. Rogers also controls a suite of fast-growing digital media assets, including 90+ owned and 300+ premium partnership online sites, as well as the recently launched Next Issue Canada digital magazine platform which provides 100+ of North America's most celebrated titles on an unlimited anytime, anywhere basis.\n\nIn sports entertainment, Rogers owns the Toronto Blue Jays baseball team and Rogers Centre stadium, Canada's largest sports and entertainment facility and home field of the Blue Jays. 
Rogers also holds a 37.5% investment in Maple Leaf Sports & Entertainment which owns the NHL Maple Leafs, NBA Raptors, MLS Toronto FC and a number of other sports related assets.", - "page_start": 16, - "page_end": 16, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## 13.7 Monitoring\n\nAn important step is to correct any issues that are reported by your IBM Storwize V7000 system as soon as possible. Configure your system to send automatic notifications to standard Call Home server or to the new Cloud Call Home server when a new event is reported. To avoid having to monitor the management GUI for new events, select the type of event for which you want to be notified. For example, restrict notifications to only events that require action. The following event notification mechanisms are available:\n\n - /SM590000 Call Home", - "page_start": 731, - "page_end": 731, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "6126797.pdf", - "query": "When does my Sport smartwatch start and stop monitoring sleep?", - "target_page": 5, - "target_passage": "Sleep monitoring time period: from 18:00 at night to 10:00 the next day", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Bind the smartwatch to the app WearPro, you can control the music to start/pause/play previous song/play next song of your phone.\n\nBind the audio/calling Bluetooth of the smartwatch also, the music will be broadcast on the smartwatch.\n\n## 2.2 Sleep\n\nSleep monitoring time period: from 18:00 at night to 10:00 the next day, the data will be generated by the watch. 
After connecting to the APP, the sleep data on the watch can be synchronized to the APP for you to check.\n\n## 2.3 stopwatch\n\nClick the stopwatch to enter the timing interface, and you can record the time once.\n\n## 2.4 Weather\n\nAfter the smartwatch is connected to the app and the data is synchronized, tap Weather on the watch to display the weather information for the day.\n\n## 2.5 Find mobile phone\n\nAfter the watch is bound to the app WearPro, tap this function to find the mobile phone, and the mobile phone will vibrate or emit a ringtone.\n\n## 2.6 Meteorology\n\nClick on 'Meteorology' on the watch to display the ultraviolet (UV) and air pressure conditions of the day.\n\n## 2.7 Massager\n\nTap the green button to start the massage, and the watch is in a vibrating state, tap the red button to end the massage state.\n\n## 3.0 Menu style\n\nThere are a variety of menu styles for users to choose.\n\n## 3.1 Settings\n\n - 1) You can select the watch language on the settings of the watch, or the watch language can be synchronized with your mobile phone language after the watch successfully binds to the APP.\n - 2) Switch the watch face, swipe to the right to view the next watch face, select a watch face, and click it to set the watch face.\n - 3) Set screen time; a variety of screen time lengths can be selected.\n - 4) Vibration intensity; set reminder vibration intensity.\n - 5) Password; a 4-digit password can be set (if you forget the password, please enter 8762 to decrypt the previous password).\n - 6) Restore factory settings; click √ to enable the factory reset, and click X to cancel the factory reset.", - "page_start": 4, - "page_end": 4, - "source_file": "6126797.pdf" - }, - { - "text": "Enable the SMS notification in the app. When one or more SMS messages are received on the mobile phone, the watch will receive one or more SMS reminders at the same time.\n\n1.5.3. 
Other application message notifications:\n\nTurn on the corresponding application message notification in the app, such as WeChat, QQ, Outlook, Facebook and other applications. When the mobile phone receives one/multiple application message notifications, the watch will receive one/multiple corresponding message reminders at the same time.\n\n## 1.6 Frequently used contacts\n\nThe watch binds to the app, and you allow the watch to access to the phone book of your mobile phone, then you can synchronize you contacts of your mobile phone to the smartwatch.\n\n## 1.7 Fitness data\n\nFitness data is turned on by default. When you enter the fitness data interface, scroll up the screen, the smartwatch will display the current data of steps, distance, and calories. The data will be wiped out at 00:00 every day in the morning.\n\n## 1.8 Sports modes (walking, running, cycling, rope skipping, badminton,\n\n## basketball, football)\n\n1.8.1 Select the corresponding exercise mode, click the 'Start' button on the screen to start the exercise; click the 'Start' button again to pause the recording of the exercise; click the 'End' button to end the recording, and save to the data.\n\n1.8.2 The data can only be saved when the recording of the exercise is more than 1 minute; If the recording time is less than 1 minute, the smartwatch will remind you that the data is too little to be saved.\n\n## 1.9 Heart rate\n\nAfter you wearing the smartwatch correctly, you can measure heart rate when you enter the heart rate function. If you don't wear the smartwatch properly, it will remind you to wear firmly for the measurement.\n\n## 1.10 ECG\n\nAfter you wearing the smartwatch correctly, and enter the ECG function(you need to turn on the ECG interface in the app, you can have single measurement at a time. The data of ECG will be saved in the mobile phone. 
This function should be used with the app.\n\n## 2.0 My QR code\n\nConnect the watch to the APP, find My QR Code in the APP, select WeChat/QQ/Alipay and other \"Receive money QR code\" to sync to the watch (Please follow the instructions of the app to operate the function).\n\n## 2.1 Remote control music", - "page_start": 3, - "page_end": 3, - "source_file": "6126797.pdf" - }, - { - "text": "Click 'camera' in the app WearPro to wake up the camera mode of the watch, click the camera button on the watch to take photos, and the photos will be automatically saved to the phone album.\n\n## 5. Data synchronization\n\nAfter the watch is successfully bound to the application, the data in the smartwatch can be synchronized to the application.\n\n## 6. Tilt to wake the screen\n\nWear the smartwatch correctly on your wrist (left/right hand). when you switch on the feature, you can light up the screen when you raise up your wrist.\n\n## 7. Do not disturb mode\n\nIn the APP, tap 'Device' > 'More' > 'Do not disturb mode', set the start to end time, such as 12:00 to 14:00, then you won't receive phone calls and apps notifications on the watch during this period.\n\n## 8. Daily alarm clock\n\nIn the APP in the APP Device>More, set the start and the end time, the alarm can be set only once or repeatedly on the date (week) setting, and the alarm can be turned on/off.\n\n## 9. Sedentary reminder\n\nSet the start and the end time of the sedentary reminder, and the time interval (minutes) in the APP. You can set the reminder for once or to repeat regularly by entering the repeating setting. When the sedentary time is reached, the watch will vibrate and display a sedentary icon on the screen.\n\n## 10. Drink water reminder\n\nSet the reminder frequency (minutes) and the time period of the start and the end in a day in the APP. You can set the reminder for once or to repeat regularly by entering the repeating setting and selecting the date (week) of the water reminder. 
When the time of drink water reminder is reached, the watch will vibrate and there will be a water icon on the screen.\n\n## 11. Dial push\n\n## 11.1.Push an existing watch face\n\nBind the watch and the app, open the app, tap Device > Watch face push, the watch will restart and bind the APP automatically after the synchronization of the watch face.\n\n - 11.2. Customize the watch face\n\nBind the watch and the app, open the app, tap Device > Watch face push, the first several watch faces marked with 'custom watch faces' are customizable. The watch will restart and bind the APP automatically after the synchronization of the watch face.\n\n## 12. Firmware version", - "page_start": 6, - "page_end": 6, - "source_file": "6126797.pdf" - }, - { - "text": "## B . Bind to the APP\n\n## 1. APP download method\n\n## 1.1 Scan the QR code to download\n\n<!-- image -->\n\n1.2 Search the application at App market and download For Android users:\n\nSearch for \"WearPro\" in the Google Play app store or any customized Android store to download, remember to check the pop-up box on your phone when installing, and agree to the permission. For iOS users:\n\nSearch for \"WearPro\" in the APP Store to download, remember to check the pop-up box on your phone when installing, and agree to the permission.\n\n<!-- image -->\n\nAfter WearPro is installed, the app icon appears as\n\n.\n\n## 2.Bind Bluetooth\n\n<!-- image -->\n\n## 2.1 Unconnected to the APP state:\n\nAfter the watch is turned on, the Bluetooth will be in the state of being searched. 
After open the APK/APP, go to Devices > Add Device > click to start searching, select and click the corresponding watch device name, and the watch will be successfully bound to the app.\n\n## 2.2 Connected to the APP state:\n\n<!-- image -->\n\nWatch time synchronization: the time shown at the smartwatch and your mobile phone will synchronized after the smartwatch is bound to the APP successfully.\n\n2.3 Binding the audio/calling Bluetooth\n\nWhen the smartwatch is in the dial interface, you can find the audio/calling Bluetooth icon, and click it to turn it on, then go to the Bluetooth settings of your mobile phone and click the name of the audio/calling Bluetooth of the smartwatch to bind it.\n\n## 3. Find Watch\n\nAfter the smartwatch is bound to the APP, you click 'Find Watch' in the APP, the smartwatch will light up and vibrate for once.\n\n## 4. Camera", - "page_start": 5, - "page_end": 5, - "source_file": "6126797.pdf" - }, - { - "text": "- 3) Swipe to the right when the watch is in the dial interface, you can find time/date/week/the latest message (enter to view multiple messages)/some of the recently used menu functions, and turn on or off audio Bluetooth for calls.\n- 4) Swipe up the screen when the watch is in the dial interface to enter the menu interface, and scroll up and down to find the corresponding function.\n- 5) Long press the watch face interface and swipe to right or left to switch the watch face, select one of them and set it with one-click.\n\n## 1.2 App notification\n\n- 1) When the watch is bound to the APP, and you allow the watch to display notifications on the watch, the new messages received in your mobile phone will be pushed to the watch, and a total of 10 messages can be saved. 
The messages received after 10 messages will be overwritten one by one.\n- 2) Swipe to the bottom to click the delete icon to clear all message records.\n\n## 1.3 Drop-down menu\n\nScroll down the screen when the watch is in the dial interface to enter the drop-down menu interface.\n\n- 1) Bluetooth connection status; time; power left;\n- 2) About, where you can check the firmware version of watch and the address of the Bluetooth\n- 3) Setting, where you can enter it to set part of the functions;\n- 4) Brightness adjustment; where you can adjust the brightness of the screen;\n- 5) Alipay. Download the app Alipay in your mobile phone and bind it with your watch to realize offline payment.\n\n## 1.4 Phone/Call History\n\n- 1. Swipe to the left when the watch is in the watch interface, click the calling icon to turn on/off the calling Bluetooth. Turn on the calling Bluetooth, you will find the name of the calling Bluetooth, then go to the Bluetooth settings of your mobile phone, and bind the Bluetooth in the name of the calling Bluetooth of your watch. You can use the watch to make phone calls when they are successfully bound.\n- 2. Call records, which can save the records of incoming and dialed calls. (It can save more than 50 call records, and it will be automatically overwritten when 128 records are full. Click any call record to call back)\n- 3. Dial the keyboard, you can enter the phone number to make a call.\n\n## 1.5 message\n\nWhen the watch is successfully bound to the app, and you approve notifications of corresponding apps in your mobile phone system, and switch on these apps or callings notifications functions on your watch, the notifications on your mobile phone can synchronize to your watch.\n\n- 1.5.1. Incoming call notification:\n\nTurn on the incoming call reminder in the app. When the phone has a incoming call, the watch will light up or vibrate.\n\n- 1.5.2. 
SMS notification:", - "page_start": 2, - "page_end": 2, - "source_file": "6126797.pdf" - }, - { - "text": "The version of the watch is displayed on 'Firmware upgrade' in the column of 'Device', and users can decide to whether upgrade the firmware version.\n\n## 13. Unbind\n\nIn the \"Device\" column of WearPro, scroll down to the \"Unbind\" and click to unbind the APP. The iSO users need to go to the Bluetooth settings of the phone, select the Bluetooth name of the\n\n<!-- image -->\n\nsmart watch, and click \"Forget this device\". The 'About' of the watch has an 'Unbind' button, click it to unbind or do it in the APP. For the safety of users' data, the watch will implement a factory reset after that.\n\n## ●Frequently asked questions and answers\n\n*Please avoid exposing the device to extreme temperatures that are too cold or too hot for a long time, which may cause permanent damage.\n\n*Why can't I take a hot bath with my watch?\n\nThe temperature of the bath water is relatively changed, it will produce a lot of water vapor, and the water vapor is in the gas phase, and its molecular radius is small, and it is easy to seep into the gap of the watch case. The internal circuit of the watch is short-circuited, which damages the circuit board of the watch and damages the watch.\n\n## *No power on, no charging\n\nIf you receive the goods and the watch does not turn on, it may be caused by a collision during the transportation of the watch and the battery Seiko board has been protected, so plug in the charging cable to activate it.", - "page_start": 7, - "page_end": 7, - "source_file": "6126797.pdf" - }, - { - "text": "## Sports smart watch User Manual DT3 Mate\n\n<!-- image -->\n\nThank you for choosing our smart watch. 
You can fully understand the use and operation of the equipment by reading this manual.\n\nThe company reserves the right to modify the contents of this manual without any prior notice.\n\nThe product contains: a packing box, a manual, a watch body, and a charging cable.\n\n## A. Watch function description\n\nButton description:", - "page_start": 0, - "page_end": 0, - "source_file": "6126797.pdf" - }, - { - "text": "<!-- image -->\n\nand view content on demand. They can search content and control their PVR remotely from their smartphone. They can stream programming to their tablet anywhere in their home. A single Rogers Nextbox serves as a master PVR for the entire home enabling simultaneous viewing and recording of up to eight separate shows and storage of over 250 hours of high-definition programming. And customers can access television and movie content on-demand from anywhere by laptop, tablet or smartphone using the Rogers Anyplace TV app.\n\nTelevision has never been this good, this easy, or this simple to control. And it's even better when combined with innovative Rogers features, such as the ability to screen phone calls on their TV, listen to voicemail on their tablet, or receive talking text messages on their home phone. Wireless customers can also use Rogers One Number to switch calls\n\namong their computer, home phone and wireless device without interruption; manage e-mails; text messages and voicemail; hold live video chats; and combine and sync contacts from across multiple devices.\n\nWhen they're not at home, more and more customers also rely on Rogers Smart Home Monitoring, a complete monitoring, automation and security solution that includes the most innovative technology and features available. Smart Home Monitoring lets customers monitor, control and receive alerts by smartphone or online, staying connected to their home from almost anywhere, and enjoying the peace of mind that comes with having the most reliable monitoring solution available. 
Smart Home Monitoring also gives customers the ability to automate lights, appliances, thermostats and more, so they know their homes are not only secure but more energy-efficient and convenient, also.", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "Some events require a certain number of occurrences in 25 hours before they are displayed as unfixed. If they do not reach this threshold in 25 hours, they are flagged as expired . Monitoring events are below the coalesce threshold, and are usually transient.\n\nImportant: The management GUI is the primary tool that is used to operate and service your system. Real-time monitoring should be established by using SNMP traps, email notifications, or syslog messaging on an automatic manner.\n\n## 13.6.1 Managing event log\n\nRegularly check the status of the system using the management GUI. If you suspect a problem, first use the management GUI to diagnose and resolve the problem.\n\nUse the views that are available in the management GUI to verify the status of the system, the hardware devices, the physical storage, and the available volumes by completing the following steps:", - "page_start": 724, - "page_end": 724, - "source_file": "sg247938.pdf" - }, - { - "text": "## 13.7 Monitoring\n\nAn important step is to correct any issues that are reported by your IBM Storwize V7000 system as soon as possible. Configure your system to send automatic notifications to standard Call Home server or to the new Cloud Call Home server when a new event is reported. To avoid having to monitor the management GUI for new events, select the type of event for which you want to be notified. For example, restrict notifications to only events that require action. 
The following event notification mechanisms are available:\n\n - /SM590000 Call Home", - "page_start": 731, - "page_end": 731, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "OTC_NSANY_2004.pdf", - "query": "Have the operating profits in Japan for Nissan gone up or down in 2004?", - "target_page": 5, - "target_passage": "operating profits in Japan came to ¥341.1 billion, a decrease of 3.2 percent compared to last year", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## BUSINESS AND OTHER RISKS\n\nDue to changes in government regulations, information on risks involved in business operations has been disclosed in the Yukashoken-Houkokusho for the year ended March 31,2005 as follows:\n\n## Economic Factors\n\nThe demand for products manufactured by Nissan is affected by the economic conditions in each country or market in which they are offered for sale. Nissan conducts its operations all over the world and, in particular, in the major markets of North America, Europe, and Asia, to say nothing of Japan. While Nissan strives to develop a comprehensive and integrated projection of the global economic outlook, any greater-than-anticipated downturn in one of these markets may have a significant effect on Nissan financial position and results of operations.\n\n## International Activities and Overseas Expansion\n\nNissan's manufacturing and marketing activities outside Japan are conducted in the United States, in Europe, and in the developing and emerging markets of Asia. 
Nissan forecasts and evaluates a wide variety of risks inherent in doing business in such overseas markets including the following factors, each of which entails a greater-than-anticipated level of risk:\n\n - · Unfavorable political or economic factors\n - · Legal or regulatory changes\n - · Potentially adverse tax consequences\n - · Labor disputes including strikes\n - · Difficulties in recruiting and retaining personnel\n - · Social, political or economic turmoil due to terrorism, war, or other destabilizing factors.\n\n## Research and Development\n\nNissan's technology must be 'real world'-useful, pragmatic and easy to use. Nissan anticipates the nature and scope of the market demand, and then prioritizes and invests in new technologies. Nonetheless, any sudden and greater-than-anticipated changes in its business environment or in customer preferences may impact negatively on customer satisfaction with these new technologies.\n\n## Product Defects\n\nNissan places a high priority on safety and does its best to enhance safety from the standpoint of research and development, manufacturing and sales. Although Nissan takes out insurance policies to cover product liability, this does not necessarily mean that all potential defects and the related liabilities are fully covered. If Nissan were to implement strict product recalls for its customers, Nissan would incur significant additional expenses which could adversely affect its financial position and results of operations.\n\n## Fluctuation in Foreign Currency Exchange Rates\n\nNissan's Japanese operations export vehicles to various countries around the world. In general, the appreciation of the yen against other currencies adversely affects Nissan's financial results of operations and, on the contrary, the depreciation of the yen against other currencies favorably affects Nissan's financial results of operations. 
Any sharp appreciation of the currencies of those countries against the yen could lead to increases in both procurement and production costs which would adversely affect Nissan's competitiveness.\n\n## Derivatives", - "page_start": 72, - "page_end": 72, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## FISCAL YEAR 2004 SHARE PERFORMANCE\n\nDESPITE NISSAN'S RECORD OPERATING RESULT IN FISCAL 2004, ITS STOCK PERFORMANCE RETURN WAS NEGATIVE AND LOWER THAN THE TOPIX INDEX. THE INVESTOR RELATIONS TEAM WAS STRENGTHENED AT THE START OF FISCAL 2005 TO BETTER ADDRESS THE NEEDS OF INVESTORS AND ENHANCE THEIR UNDERSTANDING OF NISSAN'S PERFORMANCE. INVESTORS WILL NOW BE ABLE TO GAIN A MORE IN-DEPTH VIEW OF THE COMPANY'S OPERATIONS AND PERFORMANCE INDICATORS.\n\n## Share Performance in Fiscal 2004\n\nNissan's share price began at ¥1,143 at the beginning of fiscal 2004 and ended the fiscal year at ¥1,099, generating a negative return of 3.85 percent. Total shareholder return (TSR) was -1.67 percent, while the dividend yield came to 2.18 percent (¥24 per share dividend, divided by the ¥1,099 closing price). Adverse movements in foreign exchange rates and commodity price hikes adversely affected Nissan's profitability, which was reflected in the share price. In addition, specific events relating directly to the company also had a negative impact. Later in this report, corporate officers will explain what actions Nissan has undertaken to ensure better performance.\n\n## Payout Policy\n\nNissan announced its NISSAN Value-Up three-year dividend policy, covering the period from fiscal 2005 to fiscal 2007, at the annual general meeting of shareholders on June 23, 2004. Nissan proposes a long-term dividend policy to provide more visibility and improve transparency into the ways in which Nissan rewards its shareholders. 
Nissan believes that a long-term dividend policy reduces uncertainty for investors who already own or are considering acquiring Nissan stock.\n\n## Fiscal Year 2004 Share Performance\n\n(Index: April 1, 2004=100)\n\n<!-- image -->\n\n## IR Activities\n\nUnder NISSAN Value-Up, the IR team's performance will be evaluated based on the price-earnings ratio (PER) and volatility relative to our major competitors. PER is used to measure how successfully the IR team manages market expectations about Nissan in order to maintain the Nissan share price close to an intrinsic value. The other measure, volatility, is used to measure the risk investors perceive when considering Nissan stock. If Nissan can successfully reduce volatility, the minimum return required by investors should decline. The IR team believes that a strengthening of disclosure activities is required to improve both measures. The team plans to disclose not only financial results but also more forward-looking information about Nissan fundamentals such as technology and product. Such forward-looking information helps investors to forecast future performance more precisely and reduces uncertainty about the future. As a consequence, Nissan will increase the number of investor conferences, events, and teleconferences during fiscal 2005.\n\n## Five-Year Share Performance\n\n<!-- image -->", - "page_start": 16, - "page_end": 16, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "OUR WORLD\n\nNISSAN HAS A GLOBAL PRESENCE. BORN IN JAPAN, WE ARE PERFECTLY AT HOME IN THE U.S., THE UK, SPAIN, THAILAND, CHINA, EGYPT, BRAZIL AND WELL OVER 150 OTHER NATIONS WHERE NISSAN CARS AND THEIR COMPONENT PARTS ARE PRODUCED, SOLD AND DRIVEN. WITH NISSAN, DRIVING PLEASURE IS A SENSATION THAT KNOWS NO BORDERS. 
THIS IS THE NISSAN SHIFT\\_\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->", - "page_start": 59, - "page_end": 59, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## Derivatives\n\nNissan utilizes derivatives transactions for the purpose of hedging its exposure to fluctuation in foreign exchange rates, interest rates and commodity prices. While Nissan can hedge against these risks by using derivatives transactions, Nissan, by so doing, may miss the potential gains which could result from seizing the market opportunities to profit from such fluctuation in exchange rates and interest rates.\n\nIn addition, Nissan manages its exposure to credit risk by limiting its counterparties to financial institutions with high credit ratings. However, a default by any one of these counterparties could have an adverse effect on Nissan's financial position and operating results.\n\n## Lawsuits and Claims\n\nWith respect to various lawsuits and claims which Nissan encounters, the possibility exists that the position defended by Nissan will not be accepted\n\nand that the outcome may be significantly different from that anticipated. As a result, any such verdict or settlement could adversely affect Nissan's financial position and operating results.\n\n## Government Regulations\n\nThe automobile industry worldwide is influenced by a broad spectrum of regulations governing the emission levels of exhaust fumes, fuel economy guidelines, noise level limitations and safety standards, and Nissan expects these regulations to become increasingly stringent. In order to ensure compliance, it may be necessary for Nissan to make significant ongoing investments in these areas which would have an impact on its financial position and results of operations.\n\n## Intellectual Property Rights\n\nNissan owns a wide variety of proprietary technologies and has the expertise to differentiate Nissan's products making them unique from those of its competitors. 
These assets have proven their value in the growth of Nissan's business and will, no doubt, continue to be of value in the future. Nissan strives to protect its intellectual property assets; however, in certain markets, Nissan may encounter difficulty in fully protecting the proprietary rights to its own technologies. Cases may arise where Nissan finds itself unable to prohibit others from infringing on its intellectual property rights.\n\nThe Company has established Intellectual Property Rights Management Department for the purpose of protecting intellectual property rights in specific areas, strengthening activities to protect Nissan's intellectual property rights, and abstracting new intellectual property rights. And the department has been performing various activities to protect and create Nissan Brand.\n\n## Natural Disasters\n\nNissan's corporate headquarters and many of its manufacturing facilities are located in Japan, where the statistically proven probability of earthquakes is higher than in many other countries. Nissan has developed risk management guidelines relating to earthquake damage and the CEO has organized a global task force to direct disaster prevention and recovery activities. In addition, the Gruop has begun to strengthen its manufacturing facilities with anti-seismic reinforcement. However, if a severe earthquake were to hit one of Nissan's key facilities causing a halt in production, this would adversely affect Nissan's financial position and results of operations.\n\n## Sales Financing Business Risk\n\nSales financing is an integral part of Nissan's core business, providing strong support to its automotive sales, while maintaining high profitability and a sound and stable financial condition through strict risk management policies. 
However, the sales financing companies have a high exposure to interest-rate risk, residual value risk, and credit risk, any one of which may adversely affect Nissan's financial position and results of operations.\n\n## Counterparty Credit Risk", - "page_start": 72, - "page_end": 72, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "<!-- image -->\n\nPURCHASING\n\n## More value, Higher quality, Win-win partnerships\n\n'The evolution that took place in Nissan's purchasing activities during the Nissan Revival Plan, or NRP, and continued through NISSAN 180, will stretch even further during NISSAN Value-Up. Why evolution and not revolution? Because the shift in purchasing that started six years ago was not a single action, it was a mindset change that continues to drive all our activities.\n\nPurchasing represents the single largest area of cost for Nissan. Through the NISSAN Value-Up business plan, we are determined to drive greater value from our purchasing activities and maintain the momentum built over the last six years.\n\nDuring the Nissan Revival Plan years, our focus was on catching up with the rest of the industry. NISSAN 180 was focused on reaching the benchmarks set during NRP and now as we enter the NISSAN Value-Up period, that focus evolves towards being the global cost leader.\n\nOne of the key breakthrough strategies of NISSAN Value-Up is the focus on new and emerging markets. On the sales side, markets like China, India, Russia and ASEAN represent significant opportunities for Nissan. On the purchasing side, we look at the cost competitiveness of these new markets and how we can increasingly use them to enhance our global competitiveness.\n\nOur strategy for what we call 'Leading Competitive Countries', or LCCs, is to focus on those markets that we see as trend leaders in both cost, quality and supply stability. We will focus first on China and then on ASEAN nations. 
This will bring cost advantages for our major regions, such as Japan, North America and Western Europe, making us more competitive. We're also investigating sourcing from Eastern Europe, the Mercosur trading zone, and India.\n\nHIROTO SAIKAWA Executive Vice President\n\n<!-- image -->\n\nOur Alliance with Renault has also provided substantial purchasing benefits and opportunities. Formed in 2001, the Renault Nissan Purchasing Organization, or RNPO, now accounts for over 70 percent of all purchasing for Nissan and Renault. Nissan will further benefit from RNPO through the utilization of Renault supply bases in certain LCCs.\n\nAlthough the turnaround in the Nissan business has been profound, we also recognize that our supplier partners have played a significant role. Going forward, we intend to reinforce those relationships, building value on both sides. For example, we are reinvigorating our innovative 3-3-3 engineering program.\n\nWe are also deploying a purchasing process that gets suppliers involved earlier and further upstream in the product development process, the concept of 'project partners'. This is a program that identifies key technologies and innovations that require substantial investments from both sides. Suppliers will be selected as project partners for a specific area and will work closer with us to develop lower cost and higher quality solutions. This win-win approach has already started with interior systems and chassis development projects.\n\nLast year, we faced several challenges with raw materials. Those risks-both price and supply related-are a factor that we have to recognize and address in the coming years. Last year, the pressure was concentrated on the supply side, going forward we see an increasingly challenging cost environment. 
Working closely with our key raw material suppliers as well as parts suppliers and accelerating our cost reduction countermeasures will be key during NISSAN Value-Up.\n\nOur purchasing philosophy at Nissan is focused on value, quality and relationships. We want our purchasing process to be transparent and proactive, and create more value for our suppliers and for the company.'", - "page_start": 49, - "page_end": 49, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "Nissan Annual Report 2004\n\nc3", - "page_start": 112, - "page_end": 112, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "WHO WE ARE\n\nNISSAN IS ABOUT MEETING UNMET NEEDS, CRAFTING SINGULAR PRODUCTS AND TRANSFORMING BRAND STRENGTH AND INNOVATION INTO NEW BUSINESS OPPORTUNITIES. WE ARE NISSAN. WE ARE INFINITI. WE ARE NISSAN LIGHT COMMERCIAL VEHICLES, EXPANDING OUR RANGE. WE ARE NISSAN INDUSTRIAL MACHINERY, LEVERAGING OUR EXPERTISE TO BUILD FORKLIFTS AND MARINE PRODUCTS. AND WE ARE NISSAN FINANCIAL SERVICES, PROVIDING OUR CUSTOMERS WITH A COMPREHENSIVE LINEUP OF OFFERINGS. THIS IS THE NISSAN SHIFT\\_\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->", - "page_start": 17, - "page_end": 17, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## NISSAN Value-Up: Sustaining Performance\n\nNissan's position today is much different than it was six years ago or even three years ago. In 1999, we were in crisis, and the Nissan Revival Plan was needed to revive our company and build a future. In April 2002, when NISSAN 180 began, we wanted to complete the revival process, with an emphasis on profitable growth.\n\nNISSAN Value-Up is about sustaining performance. About taking all the gains we have made in connecting with our customers, in growing volumes, in creating value, in earning profits, in improving management- and then building upon these gains.\n\nWith NISSAN Value-Up, you will not see a radical break from NISSAN 180. This plan is evolutionary, not revolutionary. 
We will take the core elements that got us to this point-namely, more revenue, less cost, more quality and speed, and maximized Alliance benefit with Renaultand build upon them.\n\nNISSAN Value-Up has three critical commitments:\n\nProfit: Nissan will maintain the top level of operating profit margin among global automakers for each of the three years of the plan.\n\nVolume: Nissan will achieve global sales of 4.2 million units measured in fiscal 2008.\n\nROIC: Nissan will achieve a 20 percent ROIC on average over the course of the plan, based on the new formula that excludes cash on hand from the denominator.\n\nNISSAN Value-Up will oversee 28 new models, resulting in the start of production of 70 models worldwide, over two dozen more than the 44 production starts during NISSAN 180. Of the 28 new models, 18 will be replacements for existing models and 10 will be completely new 'conquest' models. We will enter more new segments, and we will introduce six models that will delight customers by being completely innovative in their concept and benefits.\n\nWe will pursue four major breakthroughs while implementing NISSAN Value-Up:\n\n - · Our Infiniti luxury brand will extend its reach into new markets such as China and Russia and continue to establish its credibility as a Tier-1 luxury player.\n - · We will develop our Light Commercial Vehicle (LCV) business into a fully competitive global operation through new market and product entries. By 2007, we plan to increase our LCV volume by 40 percent from fiscal 2004 to 434,000 units. During this period, operating margin is targeted to double from 4 percent to 8 percent.\n - · We will take a more efficient global sourcing approach to maximize our opportunities and minimize our overall costs as we grow. 
Our engineering, production and purchasing functions will continue their acceleration toward being fully integrated global operations.\n - · We will continue to invest in new and emerging markets, including China, India and Russia.", - "page_start": 11, - "page_end": 11, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## LETTER FROM THE PRESIDENT AND CEO\n\n<!-- image -->\n\nA public company has two key responsibilities to its shareholders: transparency and value creation.\n\nAt Nissan, transparency is essential to our business. Especially in uncertain times, it builds trust between a company and its shareholders. And we believe transparency is the best way to encourage long-term investment in our company.\n\nBut transparency is not yet universal. Nissan is still one of the few large corporations that publicly disclose future business plans, performance indicators, commitments and future dividends. We trust that these measures give shareholders a clear view of our company's future direction.\n\nFrom the start of the Nissan Revival Plan (NRP) in 1999, we have created value by focusing on key value drivers-particularly sales growth, operating profit margin, and return on invested capital.\n\nBy the end of fiscal 2001 we exceeded our NRP commitments by returning Nissan to profit one year ahead of schedule, halving the company's debt and over-delivering on our commitment to achieve a 4.5 percent operating profit margin.\n\nFollowing NRP, we launched a three-year business plan called NISSAN 180. By the end of the plan in fiscal 2004, we committed to achieve the following:\n\n - · An increase in global sales of 1 million units, compared to the start of the plan. We are confident of meeting this final commitment by the end of the measurement period in September 2005.\n - · An 8 percent operating profit margin. 
For every year of the NISSAN 180 plan our operating margin has been at or above 10 percent topping the performance of all global automakers.\n - · Zero net automotive debt. We now have more than ¥200 billion in net cash under the new and more demanding accounting standards.\n\n## Review of 2004\n\nNissan lived up to its challenges in fiscal 2004, despite a very challenging year in the global industry, full of risks both anticipated and unexpected.\n\nConsolidated net revenues reached ¥8 trillion 576.3 billion, up 15.4 percent from last year. Consolidated operating profit improved by 4.4 percent to a record ¥861.2 billion. As a percentage of net revenue, our operating profit margin came to 10 percent, which remains at the top level among global automakers. And our net income reached ¥512.3 billion, or ¥125.16 per share, compared to ¥122.02 per share for the previous fiscal year.\n\n## NISSAN Value-Up\n\nThe Nissan revival story is now complete. Our next three-year business plan, 'NISSAN Value-Up,' is focused, as its name suggests, on delivering sustainable long-term value to all our stakeholders. As such, it is evolutionary not revolutionary.\n\nAs with our previous business plans, NISSAN Value-Up establishes three core commitments. They are ambitious, and will require us to stretch our capabilities. But they are realistic.\n\nProfit: Nissan will maintain the top level of operating profit margin among global automakers for each of the three years of the plan. Operating profit remains at the center of our management system, as it is the most accurate measure of business performance.", - "page_start": 3, - "page_end": 3, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "4\n\n## LETTER FROM THE COO\n\n<!-- image -->\n\nMuch has been written about the Nissan revival. While innovative product, an improved cost base, greater manufacturing efficiencies and a better-defined brand have all been factors, the strongest element in our revival has been our people. 
And, what we learned during the crisis in the 90s and through the Nissan Revival Plan and Nissan 180 plan, now guides how we will manage the company in the future. We call it the Nissan Management Way. It is both a philosophy and set of disciplines that guide us at all levels of the organization and will help Nissan build on the momentum of the past six years.\n\nAlthough our president and CEO Carlos Ghosn has now taken on the same responsibilities at Renault, our basic management style will not change. As in the past, the Executive Committee, chaired by Carlos Ghosn, is still the highest decision making authority for strategy and management policy.\n\nThe COO position I now hold was created to provide an 'operating officer' in the truest sense of the title. As COO my role is to assist the CEO by executing the business plan, monitoring the Company's performance and supervising dayto-day operations. The decisions I make are always based on the Nissan Management Way and support the commitments of the NISSAN Value-Up business plan.\n\nWhat distinguishes the Nissan Management Way is that we are both profit-driven and customer-focused, and that we share our strategy globally and execute in a cross-functional way. These cross-functional activities are particularly important to our success; along with cross-functional thinking, they have helped create an organization of singular structure, focus and culture. In this organization, employees representing each of Nissan's three axis-regional businesses such as Japan and U.S., functions such as engineering and manufacturing, and products-are actively encouraged to work together to maximize profits and to avoid a 'silo' mentality that is only focused on their immediate operational group.\n\nFiscal 2005 is a year of immense challenges and uncertainties, but we have still pushed ahead with an ambitious business plan for this period. 
As COO, my priority is to keep a close watch on Nissan's performance to ensure that we deliver our commitments. These include achieving the final Nissan 180 commitment of one million additional vehicles by the end of September 2005 and hitting our financial targets for fiscal 2005. There is no doubt that we have the strong leadership and management teams capable of sustaining the high level of performance required to reach these goals.\n\nNissan is now a learning organization. We have fully integrated the changes that began during the Nissan Revival Plan and continue to shape our business in the future. Our employees continually seek to build a better Nissan and fortify the brand, and are not afraid to speak out on issues and openly discuss challenges that face the business. Within the Nissan Management Way, we call that 'healthy conflict'- and it strongly related to our belief in transparency and accountability. This is the essence of the evolution that continues to empower our company.\n\nOur alliance with Renault also continues to be a source of immense strength. We expect to further reinforce the Alliance and to develop new synergies now that Carlos Ghosn is the CEO of both companies.\n\nWhile we have the kinds of advantages I have mentioned, we also have risks. One of those risks is complacency. During the last six years, we have made significant achievements and consistently met tough commitments, but countless challenges remain. Our industry is immensely competitive, our customers more demanding than ever and we have no time to rest and congratulate ourselves. 
We need to create a culture where employees are always motivated to challenge themselves and the company and to create value for all our stakeholders.", - "page_start": 5, - "page_end": 5, - "source_file": "OTC_NSANY_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "Microscope Manual.pdf", - "query": "How can CEDAR Oil be used with the AY11236 microscope?", - "target_page": 10, - "target_passage": "1. Drop some cedar oil on to the top of the 100x objective when the 100x objective is being used. NOTE: To maintain a good quality image, rotate the turret right and left several times to eliminate bubbles in the cedar oil. 2. After finishing the observation, wipe off the cedar oil. 3. Do not use the 40x objective until you have wiped off all of the cedar oil.", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "## CHANGING THE BULB\n\n - 1. Disconnect the power cord from the electrical outlet.\n - 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n - 3. Replace with a new halogen bulb.\n - 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n<!-- image -->\n\n<!-- image -->\n\n## MODEL AY11230/AY11234\n\n<!-- image -->\n\n## MICROSCOPE USAGE\n\nBARSKA Model AY11230 and Model AY11234 are trinocular microscopes designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. Simple design and use and the vertical tube make them is useful for school classroom instruction.\n\n## CONSTRUCTION\n\nBARSKA Model AY11230 is a fixed power trinocular stereo microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination and oblique illumination. By using this instrument, the user can observe and enlarge the right side stereo image. 
BARSKA Model AY11234 is a zoom trinocular stereo microscope. The object being viewed is enlarged through two identical sized sets of right and left eye lenses. The zoom provides different magnification and features an inversion system which allows the image to be viewed normally and right side up.\n\n<!-- image -->", - "page_start": 5, - "page_end": 5, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## OPERATION ( cont. )\n\n - 6. Adjust the interpupillary distance by using the eyepiece interpupillary slide adjustment.\n - 7. Observe using the right eyepiece adjusting the coarse and fine focus and adjust the diopter ring until image is clear and sharp.\n - 8. Observe with the left eyepiece and adjust the diopter ring until image is clear and sharp.\n - 9. Rotate the fine focus adjustment when using other objectives. NOTE: This instrument is equipped with patent objectives so the precision or parfocalization is very high.\n\nFig. 1 - Objective Parts\n\n<!-- image -->\n\n - 10. If the image is in focus with the 10x objective, you can select other objectives and observe the specimen even if the fine adjustment knob has not been used by using the following method (See Fig. 1):\n - 1. Unscrew the 40x or 100x objective and remove from turret.\n - 2. Remove the mark sleeve.\n - 3. Turn the ring on the objective to adjust its parfocal distance.\n - 4. Re-insert the objective and compare with the 10x.\n - 5. Adjust until the 40x and 100x objectives image is clear.\n\n## USING THE CEDAR OIL\n\n - 1. Drop some cedar oil on to the top of the 100x objective when the 100x objective is being used. NOTE: To maintain a good quality image, rotate the turret right and left several times to eliminate bubbles in the cedar oil.\n - 2. After finishing the observation, wipe off the cedar oil.\n - 3. Do not use the 40x objective until you have wiped off all of the cedar oil.\n\n<!-- image -->\n\n## OPERATION ( cont. 
)\n\n## ADJUSTING THE CONDENSER APERTURE", - "page_start": 9, - "page_end": 9, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## OPERATION ( cont. )\n\n## Model AY11240\n\n## Model AY11238\n\n - 7. To clearly see the outline of the specimen, rotate the coarse adjustment knob and lower the barrel to the space limiter.\n - 8. Rotate the fine adjustment knob until the image is in sharp focus. When using other objectives, rotate the fine focus adjustment until the image is in focus.\n - 6. To clearly see the outline of the specimen, rotate the coarse adjustment knob and lower the barrel to the space limiter.\n - 7. Rotate the fine adjustment knob until the image is in sharp focus. When using other objectives, rotate the fine focus adjustment until the image is in focus.\n\n## USING THE 5-HOLE DIAPHRAGM\n\n - 1. To obtain the best contrast for observing, match the hole size to the objective that is being used to view the specimen.\n - 2. Each hole has a corresponding number from 1 to 5. 1 is the smallest hole; 5 is the largest hole.\n - Use the following guidelines to match the hole number to the objective that you have selected:\n - 40x objective: Use #5 hole\n\n10x objective: Use #4 or #3 hole\n\n4x objective: Use #2 or #1 hole\n\n## COARSE KNOB ADJUSTMENT - Model AY11240\n\n - 1. The coarse adjustment knob has an adjustable heavy-light nut (See Fig.1).\n - 2. To adjust the knob loosen or tighten the nut. NOTE: Adjusting the nut too tight will make focusing difficult. Adjusting the nut too loose will cause the tube to slide.\n\nFig. 1- Coarse Adjustment Knob\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n## MODEL AY11228/AY11232\n\n<!-- image -->\n\n## MICROSCOPE USAGE\n\nBARSKA Model AY11228 and Model AY11232 are designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. 
Simple design and use is especially useful for school classroom instruction.\n\n## CONSTRUCTION\n\nBARSKA Model AY11228 is a fixed power stereo microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination and oblique illumination. By using this instrument, the user can observe and enlarge the right side stereo image. BARSKA Model AY11232 is a zoom stereo microscope. The object being viewed is enlarged through two identical sized sets of right and left eye lenses. The zoom provides different magnification and features an inversion system which allows the image to be viewed normally and right side up.", - "page_start": 3, - "page_end": 3, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n## PARTS LIST\n\n| Name | Name |\n|-----------------------------------|---------------------------------------------|\n| Microscope Stand | Microscope Stand |\n| | 4x (parfocal distance adjustable) |\n| | 10x |\n| | 40x (s) (parfocal distance adjustable) |\n| | 100x (oil,s) (parfocal distance adjustable) |\n| 10x Wide Field Eyepiece w/Pointer | 10x Wide Field Eyepiece w/Pointer |\n| Abbe Condenser NA1.25 | Abbe Condenser NA1.25 |\n| | |\n| Spare 6V20W Halogen Bulb | Spare 6V20W Halogen Bulb |\n| Lens Cleaning Tissue | Lens Cleaning Tissue |\n| Cedar Oil | Cedar Oil |\n| 1A Fuse (spare) | 1A Fuse (spare) |\n| Specification | Specification |\n| Inspection Certificate | Inspection Certificate |\n| Packing List | Packing List |\n\n## OPERATION\n\n - 1. Remove all components from package. Identify all parts before assembling instrument.\n - 2. Attach 4x, 10x and 40x objectives by screwing into revolving turret. Tighten and secure to maximum finger pressure only.\n - 3. Place the specimen on the stage and secure with spring clips. NOTE: The cover glass must face upward (the thinner glass is the cover glass), otherwise when the 40x objective is used the specimen cannot be observed. 
Observation is best when the thickness of the cover glass is 0.1-1.1mm and the cover glass is 0.17mm.\n - 4. Plug power cord into an electrical outlet. Turn microscope lamp ON.\n - 5. Observe the specimen using the lowest magnification objective first. The 10x objective provides a larger field of view making it easier to search the specimen.", - "page_start": 8, - "page_end": 8, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## ZOOM MAGNIFICATION\n\n - 1. Turn the zoom magnification knob to the desired magnification and field of view.\n - 2. In most situations, it is recommended that you focus at the lowest magnification, then move to a higher magnification and re-focus as necessary.\n - 3. If the image is not clear to both eyes at the same time, the diopter ring may need adjustment.\n\n## DIOPTER RING ADJUSTMENT\n\n - 1. To adjust the eyepiece for viewing with or without eyeglasses and for differences in acuity between the right and left eyes, follow the following steps:\n - a. Observe an image through the left eyepiece and bring a specific point into focus using the focus knob.\n - b. By turning the diopter ring adjustment for the left eyepiece, bring the same point into sharp focus.\n - c.Then bring the same point into focus through the right eyepiece by turning the right diopter ring.\n - d.With more than one viewer, each viewer should note their own diopter ring position for the left and right eyepieces, then before viewing set the diopter ring adjustments to that setting.\n\n## CHANGING THE BULB\n\n - 1. Disconnect the power cord from the electrical outlet.\n - 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n - 3. Replace with a new halogen bulb.\n - 4. 
Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n<!-- image -->\n\n## MODEL AY11236\n\nModel AY11236\n\n<!-- image -->\n\n## MICROSCOPE USAGE\n\nBARSKA Model AY11236 is a powerful fixed power compound microscope designed for biological studies such as specimen examination. It can also be used for examining bacteria and for general clinical and medical studies and other scientific uses.\n\n## CONSTRUCTION\n\nBARSKA Model AY11236 is a fixed power compound microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination. By using this instrument, the user can observe specimens at magnification from 40x to 1000x by selecting the desired objective lens. Coarse and fine focus adjustments provide accuracy and image detail. The rotating head allows the user to position the eyepieces for maximum viewing comfort and easy access to all adjustment knobs.\n\n<!-- image -->", - "page_start": 7, - "page_end": 7, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n## PARTS LIST\n\n## Model AY11240\n\nName\n\nMicroscope Stand\n\nAchromatic\n\nObjective\n\nPlain Concave Mirror\n\n1\n\nPlastic Dust Cover\n\n1\n\n10x Wide Field Eyepiece\n\n1\n\nLens Cleaning Tissue\n\n1\n\nSpecification\n\n1\n\nInspection Certificate\n\n1\n\nPacking List\n\n1\n\n## OPERATION\n\n## Model AY11240\n\n## Model AY11238\n\n - 1. Remove components from package. identify all parts before assembling.\n - 2. Attach 4x, 10x and 40x objectives to revolving turret.\n - 3. Place the specimen on the stage and secure with spring clips. NOTE: The cover glass must face upward (the thinner glass is the cover glass), otherwise when the 40x objective is used the specimen cannot be observed. Observation is best when the thickness of the cover glass is 0.1-1.1mm and the cover glass is 0.17mm.\n - 4. 
Adjust the stand to an angle that provides comfortable observation.\n - 5. Rotate and adjust concave mirror to light the field of view. NOTE: Do not reflect the Sun with the mirror. This can cause serious eye injury or permanent eye damage.\n - 6. Observe the specimen using the lowest magnification objective first. The 4x objective provides a larger field of view to search specimen.\n - 1. Remove components from package. identify all parts before assembling.\n - 2. Attach 4x, 10x and 40x objectives to revolving turret. 3. Place the specimen on the stage and secure with spring clips. NOTE: The cover glass must face upward (the thinner glass is the cover glass), otherwise when the 40x objective is used the specimen cannot be observed. Observation is best when the thickness of the cover glass is 0.1-1.1mm and the cover glass is 0.17mm.\n - 4. Plug power cord into an electrical outlet. Turn microscope lamp ON.\n - 5. Observe the specimen using the lowest magnification objective first. The 4x objective provides a larger field of view to search specimen.\n\n4x\n\n10x\n\n40x (s)\n\nQty\n\n1\n\n1\n\n1\n\n1\n\n## Model AY11238\n\n| Name | Name | Qty |\n|-------------------------|-------------------------|-------|\n| Microscope Stand | Microscope Stand | 1 |\n| | 4x | 1 |\n| Achromatic Objective | 10x | 1 |\n| | 40x (s) | 1 |\n| 10x Wide Field Eyepiece | 10x Wide Field Eyepiece | 1 |\n| Plastic Dust Cover | Plastic Dust Cover | 1 |\n| Spare Bulb | Spare Bulb | 1 |\n| Lens Cleaning Tissue | Lens Cleaning Tissue | 1 |\n| Specification | Specification | 1 |\n| Inspection Certificate | Inspection Certificate | 1 |\n| Packing List | Packing List | 1 |", - "page_start": 2, - "page_end": 2, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## OPERATION ( cont. )\n\n## Model AY11230\n\n## Model AY11234\n\n## SELECTING OBJECTIVE MAGNIFICATION\n\n - 1. There are two objectives. The lower magnification objective has a greater depth of field and view.\n - 2. 
In order to observe the specimen easily use the lower magnification objective first. Then, by rotating the case, the magnification can be changed.\n\n## CHANGING THE INTERPUPILLARY DISTANCE\n\n - 1. The distance between the observer's pupils is the interpupillary distance.\n - 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n## FOCUSING\n\n - 1. Remove the lens protective cover.\n - 2. Place the specimen on the working stage.\n - 3. Focus the specimen with the left eye first while turning the focus knob until the image appears clear and sharp.\n - 4. Rotate the right eyepiece ring until the images in each eyepiece coincide and are sharp and clear.\n\n## CHANGING THE BULB\n\n - 1. Disconnect the power cord.\n - 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n - 3. Replace with a new halogen bulb.\n - 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n## USING THE VERTICAL TUBE MODELS AY11230/11234\n\n - 1. The vertical tube can be used for instructional viewing or to photograph the image witrh a digital camera or micro TV unit.\n - 2. Loosen the retention screw, then rotate the adjustment ring to change the length of the vertical tube.\n - 3. Make sure that both the images in\n\n## FOCUSING\n\n - 1. Turn the focusing knob away or toward you until a clear image is viewed.\n - 2. If the image is unclear, adjust the height of the elevator up or down, then turn the focusing knob again.\n\n## ZOOM MAGNIFICATION", - "page_start": 7, - "page_end": 7, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## INDEX\n\nMaintenance............................................\n\nModel AY11240/Model AY11238..................\n\nModel AY11228/Model AY11232.................. 
6-9\n\nModel AY11230/Model AY11234..................\n\nModel AY11236........................................\n\nWarranty Information................................\n\n1\n\n2-5\n\n10-13\n\n14-18\n\nBack Cover\n\n## IMPORTANT NOTES\n\nCongratulations on your purchase of this high quality BARSKA microscope. With proper care, this microscope will provide many years of use. Please read the following instructions before operating this instrument.\n\n- 1. Do not attempt to disassemble the instrument. This product has been carefully assembled at the factory and should only be examined by a factory-trained technician.\n- 2. This instrument should only be used in an environment with an indoor temperature range of 32 o F to 104 o F.\n- 3. Do not use this instrument in an environment with a lot of dust. Cover the instrument when not in use.\n- 4. Do not subject the instrument to shock.\n\n## MAINTENANCE\n\nProper care and storage of this instrument is essential. Please read the following guidelines:\n\n- 1. Keep the instrument in a dry and moisture-free location.\n- 2. Do not expose to acid, alkali fumes or moisture.\n- 3. Keep optical parts clean and free of dust. To clean optical parts gently wipe with lens cleaning tissue and a mixture of alcohol and diethyl ether. Depending on weather conditions, the following are the recommended mixture ratios:\n\nWet weather: 1:2\n\n- Dry Weather: 1:1\n- 4. After use, cover the instrument with the plastic dust cover.\n- 5. If instrument is to be stored for an extended period of time, remove the eyepiece and oculars and store in a moisture-proof container.\n\n<!-- image -->\n\n## MODEL AY11240/AY11238\n\n<!-- image -->\n\n## MICROSCOPE USAGE\n\nBARSKA Model AY11240 and Model AY11238 are designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. 
Simple design and use is especially useful for school classroom instruction.\n\n## CONSTRUCTION\n\nBARSKA Model AY11240 is a fixed tube type. For comfortable observation, the arm can be easily tilted at any angle from 90 o vertical to 45 o level. It is also equipped with a coarse adjustment and fine adjustment as well as a space limiter to protect the objective from contacting and damaging the specimen. BARSKA Model AY11238 features a monocular tube that is slanted at a 45 o angle. The head rotates 360 o . The Eyepiece Set Screw prevents the eyepiece from falling out of the tube.\n\n<!-- image -->", - "page_start": 1, - "page_end": 1, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n## PARTS LIST\n\n## Model AY11228\n\n| Name | Qty |\n|--------------------------------------------------|---------------|\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 10V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Black/White Working Stage | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |\n\n## OPERATION\n\n## Model AY11228\n\n## Model AY11232\n\n - 1. Remove components from package. identify all parts before assembling.\n - 2. Tighten the knob on the stand to prevent the elevator from sliding down.\n - 3. Fix the binocular body on the stand with the tightening screw.\n - 4. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n## SELECTING THE ILLUMINATION\n\n - 1. Depending on microscope use, select oblique or transmitted illumination.\n - 2. The Brightness Adjustment knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n - 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n - 1. 
Remove components from package. identify all parts before assembling.\n - 2. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n## SELECTING THE ILLUMINATION\n\n - 1. Depending on microscope use, select oblique or transmitted illumination.\n - 2. The Brightness Adjustment Knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n - 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n## CHANGING THE INTERPUPILLARY DISTANCE\n\n - 1. The distance between the observer's pupils is the interpupillary distance.\n - 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n## Model AY11232\n\n| Name | Qty |\n|--------------------------------------------------|---------------|\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 12V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |", - "page_start": 4, - "page_end": 4, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "- (2) For the purposes of sub-paragraph (1)-\n - (a) a facility has a capacity in excess of 20,000 tonnes at any time if it was used in the previous calendar year for the purposes of downstream oil sector activities in relation to more than that number of tonnes of oil;\n - (b) 'specified activities' are-\n - (i) storing oil,\n - (ii) handling oil,\n - (iii) the carriage of oil by sea or inland water,\n - (iv) conveying oil by pipes,\n - (v) refining or otherwise processing oil.\n - 29. 
-(1) A worker required to undertake or commence within the period during which they would, but for this paragraph, have had to self-isolate in accordance with regulation 9-\n - (a) activities on or in relation to an offshore installation;\n - (b) activities on or in relation to upstream petroleum infrastructure;", - "page_start": 42, - "page_end": 42, - "source_file": "uksi_20210582_en.pdf" - } - ] - }, - { - "references": { - "source_file": "Microscope Manual.pdf", - "query": "For the AY11230 microscope, what is the interpupillary adjustment?", - "target_page": 7, - "target_passage": "Model AY11230 1. Interpupillary Adjustment: 55mm - 75mm", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## SPECIFICATIONS\n\n## Model AY11230\n\n - 1. Interpupillary Adjustment: 55mm - 75mm\n - 2. Working Stage Diameter: 95mm\n - 3. Focus Knob Adjustment Range: 60mm\n - 4. Elevator Adjustment Range: 110mm\n - 5. Right Diopter Adjustment Range: +4 to -6 dopters\n - 6. Illumination: Input Voltage: 110V AC or 220V Output: Oblique illumination: 12V 10W Halogen Lamp\n\n## Model AY11234\n\n - 1. Interpupillary Adjustment: 55mm - 75mm\n - 2. Working Stage Diameter: 95mm\n - 3. Focus Knob Adjustment Range: >50mm\n - 4. Elevator Adjustment Range: 110mm\n - 5. Diopter Adjustment Range: +/- 5 diopters\n - 6. 
Illumination:\n\nInput Voltage: 110V AC or 220V Output: Oblique Illumination: 12V 10W Halogen Lamp Transmitted Illumination: 12V 10W Halogen Lamp\n\n<!-- image -->\n\n<!-- image -->\n\n## Optical Specifications - Model AY11230\n\n| Total Magnification | Objective Magnification | Eyepiece Magnification & Field Diameter (mm) | Working Distance |\n|-----------------------|---------------------------|------------------------------------------------|--------------------|\n| 20x, 40x | 2x, 4x | Wide Field 10x, 20mm | 90mm |\n\n## Optical Specifications - Model AY11234", - "page_start": 6, - "page_end": 6, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## OPERATION ( cont. )\n\n## Model AY11240\n\n## Model AY11238\n\n - 7. To clearly see the outline of the specimen, rotate the coarse adjustment knob and lower the barrel to the space limiter.\n - 8. Rotate the fine adjustment knob until the image is in sharp focus. When using other objectives, rotate the fine focus adjustment until the image is in focus.\n - 6. To clearly see the outline of the specimen, rotate the coarse adjustment knob and lower the barrel to the space limiter.\n - 7. Rotate the fine adjustment knob until the image is in sharp focus. When using other objectives, rotate the fine focus adjustment until the image is in focus.\n\n## USING THE 5-HOLE DIAPHRAGM\n\n - 1. To obtain the best contrast for observing, match the hole size to the objective that is being used to view the specimen.\n - 2. Each hole has a corresponding number from 1 to 5. 1 is the smallest hole; 5 is the largest hole.\n - Use the following guidelines to match the hole number to the objective that you have selected:\n - 40x objective: Use #5 hole\n\n10x objective: Use #4 or #3 hole\n\n4x objective: Use #2 or #1 hole\n\n## COARSE KNOB ADJUSTMENT - Model AY11240\n\n - 1. The coarse adjustment knob has an adjustable heavy-light nut (See Fig.1).\n - 2. To adjust the knob loosen or tighten the nut. 
NOTE: Adjusting the nut too tight will make focusing difficult. Adjusting the nut too loose will cause the tube to slide.\n\nFig. 1- Coarse Adjustment Knob\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n## MODEL AY11228/AY11232\n\n<!-- image -->\n\n## MICROSCOPE USAGE\n\nBARSKA Model AY11228 and Model AY11232 are designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. Simple design and use is especially useful for school classroom instruction.\n\n## CONSTRUCTION\n\nBARSKA Model AY11228 is a fixed power stereo microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination and oblique illumination. By using this instrument, the user can observe and enlarge the right side stereo image. BARSKA Model AY11232 is a zoom stereo microscope. The object being viewed is enlarged through two identical sized sets of right and left eye lenses. The zoom provides different magnification and features an inversion system which allows the image to be viewed normally and right side up.", - "page_start": 3, - "page_end": 3, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## OPERATION ( cont. )\n\n## Model AY11230\n\n## Model AY11234\n\n## SELECTING OBJECTIVE MAGNIFICATION\n\n - 1. There are two objectives. The lower magnification objective has a greater depth of field and view.\n - 2. In order to observe the specimen easily use the lower magnification objective first. Then, by rotating the case, the magnification can be changed.\n\n## CHANGING THE INTERPUPILLARY DISTANCE\n\n - 1. The distance between the observer's pupils is the interpupillary distance.\n - 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n## FOCUSING\n\n - 1. Remove the lens protective cover.\n - 2. Place the specimen on the working stage.\n - 3. 
Focus the specimen with the left eye first while turning the focus knob until the image appears clear and sharp.\n - 4. Rotate the right eyepiece ring until the images in each eyepiece coincide and are sharp and clear.\n\n## CHANGING THE BULB\n\n - 1. Disconnect the power cord.\n - 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n - 3. Replace with a new halogen bulb.\n - 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n## USING THE VERTICAL TUBE MODELS AY11230/11234\n\n - 1. The vertical tube can be used for instructional viewing or to photograph the image witrh a digital camera or micro TV unit.\n - 2. Loosen the retention screw, then rotate the adjustment ring to change the length of the vertical tube.\n - 3. Make sure that both the images in\n\n## FOCUSING\n\n - 1. Turn the focusing knob away or toward you until a clear image is viewed.\n - 2. If the image is unclear, adjust the height of the elevator up or down, then turn the focusing knob again.\n\n## ZOOM MAGNIFICATION", - "page_start": 7, - "page_end": 7, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n## PARTS LIST\n\n## Model AY11228\n\n| Name | Qty |\n|--------------------------------------------------|---------------|\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 10V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Black/White Working Stage | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |\n\n## OPERATION\n\n## Model AY11228\n\n## Model AY11232\n\n - 1. Remove components from package. identify all parts before assembling.\n - 2. Tighten the knob on the stand to prevent the elevator from sliding down.\n - 3. 
Fix the binocular body on the stand with the tightening screw.\n - 4. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n## SELECTING THE ILLUMINATION\n\n - 1. Depending on microscope use, select oblique or transmitted illumination.\n - 2. The Brightness Adjustment knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n - 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n - 1. Remove components from package. identify all parts before assembling.\n - 2. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n## SELECTING THE ILLUMINATION\n\n - 1. Depending on microscope use, select oblique or transmitted illumination.\n - 2. The Brightness Adjustment Knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n - 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n## CHANGING THE INTERPUPILLARY DISTANCE\n\n - 1. The distance between the observer's pupils is the interpupillary distance.\n - 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n## Model AY11232\n\n| Name | Qty |\n|--------------------------------------------------|---------------|\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 12V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |", - "page_start": 4, - "page_end": 4, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## SPECIFICATIONS\n\n## Model AY11228\n\n - 1. Interpupillary Adjustment: 55mm - 75mm\n - 2. Working Stage Diameter: 95mm\n - 3. 
Focus Knob Adjustment Range: 60mm\n - 4. Elevator Adjustment Range: 110mm\n - 5. Right Diopter Adjustment Range: +4 to -6 dopters\n - 6. Illumination: Input Voltage: 110V AC or 220V Output: Oblique illumination: 12V 10W Halogen Lamp\n\n## Model AY11232\n\n - 1. Interpupillary Adjustment: 55mm - 75mm\n - 2. Working Stage Diameter: 95mm\n - 3. Focus Knob Adjustment Range: >50mm\n - 4. Elevator Adjustment Range: 110mm\n - 5. Diopter Adjustment Range: +/- 5 diopters\n - 6. Illumination:\n - Input Voltage: 110V AC or 220V Output: Oblique Illumination: 12V 10W Halogen Lamp Transmitted Illumination: 12V 10W Halogen Lamp\n\n<!-- image -->\n\n<!-- image -->\n\n## Optical Specifications - Model AY11228\n\n| Total Magnification | Objective | Eyepiece Magnification | Working Distance |\n|-----------------------|---------------|--------------------------|--------------------|\n| | Magnification | & Field Diameter (mm) | |\n| 20x, 40x | 2x, 4x | Wide Field 10x, 20mm | 90mm |\n\n## Optical Specifications - Model AY11232\n\n| Objective Zoom Scale | Objective Zoom Scale | | | | | |\n|---------------------------|-----------------------------------|---------------|-------------|--------------|--------------|---------------|\n| Accessory Large Objective | Accessory Large Objective | - | 0.5x | 0.75x 1.5x | | 2x |\n| Working Distance (mm) | Working Distance (mm) | 95 | 156 | 102 | 44 | 30 |\n| WF10x/20mm | Total Magnification | 7x- 45x | 3.5x- 22.5x | 5.3x- 33.8x | 10.5x- 67.5x | 14x- |\n| WF10x/20mm | Field of View Objective Dia. (mm) | 28.6- 4.4 | 57.2- 8.8 | 38.1- 5.9 | 19.0- 2.9 | 90x 14.3- 2.2 |\n| WF12.5x/18mm | Total Magnification | 8.8x- 56x | 4.4x- 28x | 6.6x- 42x | 13.2x- 84x | 17.6x- 112x |\n| WF12.5x/18mm | Field of View Objective Dia. (mm) | 25.7- 4.0 | 51.4- 8 | 34.3- 5.3 | 17.1- 2.7 | 12.9- 2.0 |\n| WF15x/16mm | Total Magnification | 10.5x- 67.5x | 5.3x- 33.8x | 7.9x- 58.6x | 15.7x- 101x | 21x- 135x |\n| WF15x/16mm | Field of View Objective Dia. 
(mm) | 22.9- 3.6 | 45.8- 7.2 | 30.5- 4.8 | 15.3- 24 | 11.5- 1.8 |\n| WF20x/12mm | Total Magnification | 14x- 90x | 7x- 45x | 10.5x- 67.5x | 21x- 135x | 28x- 180x |\n| WF20x/12mm | Field of View Objective Dia. (mm) | 17.0- 2.7 | 34.0- 5.4 | 22.7- 3.6 | 11.3- 1.8 | 8.5- 1.4 |\n| WF25x/9mm | Total Magnification | 17.5x- 112.5x | 8.8x- 56.3x | 13x- 84.4x | 26.3x- 169x | 35x- 225x |\n| WF25x/9mm | Field of View Objective Dia. (mm) | 12.9- 2.0 | 25.8- 4.0 | 17.2- 2.7 | 8.6- 1.3 | 6.5- 1.0 |\n\n<!-- image -->\n\n<!-- image -->\n\n## PARTS LIST\n\n## Model AY11228", - "page_start": 4, - "page_end": 4, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## SELECTING THE ILLUMINATION\n\n - 1. Depending on microscope use, select oblique or transmitted illumination.\n - 2. The Brightness Adjustment knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n - 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n - 1. Remove components from package. identify all parts before assembling.\n - 2. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n## SELECTING THE ILLUMINATION\n\n - 1. Depending on microscope use, select oblique or transmitted illumination.\n - 2. The Brightness Adjustment Knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n - 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n## CHANGING THE INTERPUPILLARY DISTANCE\n\n - 1. The distance between the observer's pupils is the interpupillary distance.\n - 12 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n## Model AY11234\n\n| Name | Qty |\n|--------------------------------------------------|---------------|\n| Binocular Body (incl. 2x, 4x obj.) 
| 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 12V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |", - "page_start": 6, - "page_end": 6, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## OPERATION ( cont. )\n\n## Model AY11228\n\n## Model AY11232\n\n## SELECTING OBJECTIVE MAGNIFICATION\n\n - 1. There are two objectives. The lower magnification objective has a greater depth of field and view.\n - 2. In order to observe the specimen easily use the lower magnification objective first. Then, by rotating the case, the magnification can be changed.\n\n## CHANGING THE INTERPUPILLARY DISTANCE\n\n - 1. The distance between the observer's pupils is the interpupillary distance.\n - 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n## FOCUSING\n\n - 1. Remove the lens protective cover.\n - 2. Place the specimen on the working stage.\n - 3. Focus the specimen with the left eye first while turning the focus knob until the image appears clear and sharp.\n - 4. Rotate the right eyepiece ring until the images in each eyepiece coincide and are sharp and clear.\n\n## CHANGING THE BULB\n\n - 1. Disconnect the power cord from the electrical outlet before changing the bulb.\n - 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n - 3. Replace with a new halogen bulb.\n - 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n## FOCUSING\n\n - 1. Turn the focusing knob away or toward you until a clear image is viewed.\n - 2. If the image is unclear, adjust the height of the elevator up or down, then turn the focusing knob again.\n\n## ZOOM MAGNIFICATION\n\n - 1. 
Turn the zoom magnification knob to the desired magnification and field of view.\n - 2. In most situations, it is recommended that you focus at the lowest magnification, then move to a higher magnification and re-focus as necessary.\n - 3. If the image is not clear to both eyes at the same time, the diopter ring may need adjustment.\n\n## DIOPTER RING ADJUSTMENT\n\n - 1. To adjust the eyepiece for viewing with or without eyeglasses and for differences in acuity between the right and left eyes, follow the following steps:\n - a. Observe an image through the left eyepiece and bring a specific point into focus using the focus knob.\n - b. By turning the diopter ring adjustment for the left eyepiece, bring the same point into sharp focus.\n - c.Then bring the same point into focus through the right eyepiece by turning the right diopter ring.\n - d.With more than one viewer, each viewer should note their own diopter ring position for the left and right eyepieces, then before viewing set the diopter ring adjustments to that setting.\n\n## CHANGING THE BULB", - "page_start": 5, - "page_end": 5, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## CHANGING THE BULB\n\n - 1. Disconnect the power cord from the electrical outlet.\n - 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n - 3. Replace with a new halogen bulb.\n - 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n<!-- image -->\n\n<!-- image -->\n\n## MODEL AY11230/AY11234\n\n<!-- image -->\n\n## MICROSCOPE USAGE\n\nBARSKA Model AY11230 and Model AY11234 are trinocular microscopes designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. 
Simple design and use and the vertical tube make them is useful for school classroom instruction.\n\n## CONSTRUCTION\n\nBARSKA Model AY11230 is a fixed power trinocular stereo microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination and oblique illumination. By using this instrument, the user can observe and enlarge the right side stereo image. BARSKA Model AY11234 is a zoom trinocular stereo microscope. The object being viewed is enlarged through two identical sized sets of right and left eye lenses. The zoom provides different magnification and features an inversion system which allows the image to be viewed normally and right side up.\n\n<!-- image -->", - "page_start": 5, - "page_end": 5, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## ZOOM MAGNIFICATION\n\n - 1. Turn the zoom magnification knob to the desired magnification and field of view.\n - 2. In most situations, it is recommended that you focus at the lowest magnification, then move to a higher magnification and re-focus as necessary.\n - 3. If the image is not clear to both eyes at the same time, the diopter ring may need adjustment.\n\n## DIOPTER RING ADJUSTMENT\n\n - 1. To adjust the eyepiece for viewing with or without eyeglasses and for differences in acuity between the right and left eyes, follow the following steps:\n - a. Observe an image through the left eyepiece and bring a specific point into focus using the focus knob.\n - b. By turning the diopter ring adjustment for the left eyepiece, bring the same point into sharp focus.\n - c.Then bring the same point into focus through the right eyepiece by turning the right diopter ring.\n - d.With more than one viewer, each viewer should note their own diopter ring position for the left and right eyepieces, then before viewing set the diopter ring adjustments to that setting.\n\n## CHANGING THE BULB\n\n - 1. Disconnect the power cord from the electrical outlet.\n - 2. 
When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n - 3. Replace with a new halogen bulb.\n - 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n<!-- image -->\n\n## MODEL AY11236\n\nModel AY11236\n\n<!-- image -->\n\n## MICROSCOPE USAGE\n\nBARSKA Model AY11236 is a powerful fixed power compound microscope designed for biological studies such as specimen examination. It can also be used for examining bacteria and for general clinical and medical studies and other scientific uses.\n\n## CONSTRUCTION\n\nBARSKA Model AY11236 is a fixed power compound microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination. By using this instrument, the user can observe specimens at magnification from 40x to 1000x by selecting the desired objective lens. Coarse and fine focus adjustments provide accuracy and image detail. The rotating head allows the user to position the eyepieces for maximum viewing comfort and easy access to all adjustment knobs.\n\n<!-- image -->", - "page_start": 7, - "page_end": 7, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## OPERATION ( cont. )\n\n - 6. Adjust the interpupillary distance by using the eyepiece interpupillary slide adjustment.\n - 7. Observe using the right eyepiece adjusting the coarse and fine focus and adjust the diopter ring until image is clear and sharp.\n - 8. Observe with the left eyepiece and adjust the diopter ring until image is clear and sharp.\n - 9. Rotate the fine focus adjustment when using other objectives. NOTE: This instrument is equipped with patent objectives so the precision or parfocalization is very high.\n\nFig. 1 - Objective Parts\n\n<!-- image -->\n\n - 10. 
If the image is in focus with the 10x objective, you can select other objectives and observe the specimen even if the fine adjustment knob has not been used by using the following method (See Fig. 1):\n - 1. Unscrew the 40x or 100x objective and remove from turret.\n - 2. Remove the mark sleeve.\n - 3. Turn the ring on the objective to adjust its parfocal distance.\n - 4. Re-insert the objective and compare with the 10x.\n - 5. Adjust until the 40x and 100x objectives image is clear.\n\n## USING THE CEDAR OIL\n\n - 1. Drop some cedar oil on to the top of the 100x objective when the 100x objective is being used. NOTE: To maintain a good quality image, rotate the turret right and left several times to eliminate bubbles in the cedar oil.\n - 2. After finishing the observation, wipe off the cedar oil.\n - 3. Do not use the 40x objective until you have wiped off all of the cedar oil.\n\n<!-- image -->\n\n## OPERATION ( cont. )\n\n## ADJUSTING THE CONDENSER APERTURE", - "page_start": 9, - "page_end": 9, - "source_file": "Microscope Manual.pdf" - } - ] - }, - { - "references": { - "source_file": "Microscope Manual.pdf", - "query": "The illumination of my AY11236 microscope is not very strong, what can I do to solve this?", - "target_page": 10, - "target_passage": "1. Open iris diaphragm wider. 2. Raise condenser. 3. Clean lens.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## CHANGING THE BULB\n\n - 1. Disconnect the power cord from the electrical outlet.\n - 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n - 3. Replace with a new halogen bulb.\n - 4. 
Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n<!-- image -->\n\n<!-- image -->\n\n## MODEL AY11230/AY11234\n\n<!-- image -->\n\n## MICROSCOPE USAGE\n\nBARSKA Model AY11230 and Model AY11234 are trinocular microscopes designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. Simple design and use and the vertical tube make them is useful for school classroom instruction.\n\n## CONSTRUCTION\n\nBARSKA Model AY11230 is a fixed power trinocular stereo microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination and oblique illumination. By using this instrument, the user can observe and enlarge the right side stereo image. BARSKA Model AY11234 is a zoom trinocular stereo microscope. The object being viewed is enlarged through two identical sized sets of right and left eye lenses. The zoom provides different magnification and features an inversion system which allows the image to be viewed normally and right side up.\n\n<!-- image -->", - "page_start": 5, - "page_end": 5, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n## PARTS LIST\n\n## Model AY11228\n\n| Name | Qty |\n|--------------------------------------------------|---------------|\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 10V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Black/White Working Stage | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |\n\n## OPERATION\n\n## Model AY11228\n\n## Model AY11232\n\n - 1. Remove components from package. identify all parts before assembling.\n - 2. 
Tighten the knob on the stand to prevent the elevator from sliding down.\n - 3. Fix the binocular body on the stand with the tightening screw.\n - 4. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n## SELECTING THE ILLUMINATION\n\n - 1. Depending on microscope use, select oblique or transmitted illumination.\n - 2. The Brightness Adjustment knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n - 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n - 1. Remove components from package. identify all parts before assembling.\n - 2. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n## SELECTING THE ILLUMINATION\n\n - 1. Depending on microscope use, select oblique or transmitted illumination.\n - 2. The Brightness Adjustment Knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n - 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n## CHANGING THE INTERPUPILLARY DISTANCE\n\n - 1. The distance between the observer's pupils is the interpupillary distance.\n - 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n## Model AY11232\n\n| Name | Qty |\n|--------------------------------------------------|---------------|\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 12V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |", - "page_start": 4, - "page_end": 4, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## ZOOM MAGNIFICATION\n\n - 1. 
Turn the zoom magnification knob to the desired magnification and field of view.\n - 2. In most situations, it is recommended that you focus at the lowest magnification, then move to a higher magnification and re-focus as necessary.\n - 3. If the image is not clear to both eyes at the same time, the diopter ring may need adjustment.\n\n## DIOPTER RING ADJUSTMENT\n\n - 1. To adjust the eyepiece for viewing with or without eyeglasses and for differences in acuity between the right and left eyes, follow the following steps:\n - a. Observe an image through the left eyepiece and bring a specific point into focus using the focus knob.\n - b. By turning the diopter ring adjustment for the left eyepiece, bring the same point into sharp focus.\n - c.Then bring the same point into focus through the right eyepiece by turning the right diopter ring.\n - d.With more than one viewer, each viewer should note their own diopter ring position for the left and right eyepieces, then before viewing set the diopter ring adjustments to that setting.\n\n## CHANGING THE BULB\n\n - 1. Disconnect the power cord from the electrical outlet.\n - 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n - 3. Replace with a new halogen bulb.\n - 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n<!-- image -->\n\n## MODEL AY11236\n\nModel AY11236\n\n<!-- image -->\n\n## MICROSCOPE USAGE\n\nBARSKA Model AY11236 is a powerful fixed power compound microscope designed for biological studies such as specimen examination. It can also be used for examining bacteria and for general clinical and medical studies and other scientific uses.\n\n## CONSTRUCTION\n\nBARSKA Model AY11236 is a fixed power compound microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination. 
By using this instrument, the user can observe specimens at magnification from 40x to 1000x by selecting the desired objective lens. Coarse and fine focus adjustments provide accuracy and image detail. The rotating head allows the user to position the eyepieces for maximum viewing comfort and easy access to all adjustment knobs.\n\n<!-- image -->", - "page_start": 7, - "page_end": 7, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## OPERATION ( cont. )\n\n## Model AY11240\n\n## Model AY11238\n\n - 7. To clearly see the outline of the specimen, rotate the coarse adjustment knob and lower the barrel to the space limiter.\n - 8. Rotate the fine adjustment knob until the image is in sharp focus. When using other objectives, rotate the fine focus adjustment until the image is in focus.\n - 6. To clearly see the outline of the specimen, rotate the coarse adjustment knob and lower the barrel to the space limiter.\n - 7. Rotate the fine adjustment knob until the image is in sharp focus. When using other objectives, rotate the fine focus adjustment until the image is in focus.\n\n## USING THE 5-HOLE DIAPHRAGM\n\n - 1. To obtain the best contrast for observing, match the hole size to the objective that is being used to view the specimen.\n - 2. Each hole has a corresponding number from 1 to 5. 1 is the smallest hole; 5 is the largest hole.\n - Use the following guidelines to match the hole number to the objective that you have selected:\n - 40x objective: Use #5 hole\n\n10x objective: Use #4 or #3 hole\n\n4x objective: Use #2 or #1 hole\n\n## COARSE KNOB ADJUSTMENT - Model AY11240\n\n - 1. The coarse adjustment knob has an adjustable heavy-light nut (See Fig.1).\n - 2. To adjust the knob loosen or tighten the nut. NOTE: Adjusting the nut too tight will make focusing difficult. Adjusting the nut too loose will cause the tube to slide.\n\nFig. 
1- Coarse Adjustment Knob\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n## MODEL AY11228/AY11232\n\n<!-- image -->\n\n## MICROSCOPE USAGE\n\nBARSKA Model AY11228 and Model AY11232 are designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. Simple design and use is especially useful for school classroom instruction.\n\n## CONSTRUCTION\n\nBARSKA Model AY11228 is a fixed power stereo microscope. It is constructed with two optical paths at the same angle. It is equipped with transmitted illumination and oblique illumination. By using this instrument, the user can observe and enlarge the right side stereo image. BARSKA Model AY11232 is a zoom stereo microscope. The object being viewed is enlarged through two identical sized sets of right and left eye lenses. The zoom provides different magnification and features an inversion system which allows the image to be viewed normally and right side up.", - "page_start": 3, - "page_end": 3, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## OPERATION ( cont. )\n\n## Model AY11230\n\n## Model AY11234\n\n## SELECTING OBJECTIVE MAGNIFICATION\n\n - 1. There are two objectives. The lower magnification objective has a greater depth of field and view.\n - 2. In order to observe the specimen easily use the lower magnification objective first. Then, by rotating the case, the magnification can be changed.\n\n## CHANGING THE INTERPUPILLARY DISTANCE\n\n - 1. The distance between the observer's pupils is the interpupillary distance.\n - 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n## FOCUSING\n\n - 1. Remove the lens protective cover.\n - 2. Place the specimen on the working stage.\n - 3. Focus the specimen with the left eye first while turning the focus knob until the image appears clear and sharp.\n - 4. 
Rotate the right eyepiece ring until the images in each eyepiece coincide and are sharp and clear.\n\n## CHANGING THE BULB\n\n - 1. Disconnect the power cord.\n - 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n - 3. Replace with a new halogen bulb.\n - 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n## USING THE VERTICAL TUBE MODELS AY11230/11234\n\n - 1. The vertical tube can be used for instructional viewing or to photograph the image witrh a digital camera or micro TV unit.\n - 2. Loosen the retention screw, then rotate the adjustment ring to change the length of the vertical tube.\n - 3. Make sure that both the images in\n\n## FOCUSING\n\n - 1. Turn the focusing knob away or toward you until a clear image is viewed.\n - 2. If the image is unclear, adjust the height of the elevator up or down, then turn the focusing knob again.\n\n## ZOOM MAGNIFICATION", - "page_start": 7, - "page_end": 7, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## SELECTING THE ILLUMINATION\n\n - 1. Depending on microscope use, select oblique or transmitted illumination.\n - 2. The Brightness Adjustment knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n - 3. The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n - 1. Remove components from package. identify all parts before assembling.\n - 2. Check the input voltage to ensure that it conforms to the microscopes requirement.\n\n## SELECTING THE ILLUMINATION\n\n - 1. Depending on microscope use, select oblique or transmitted illumination.\n - 2. The Brightness Adjustment Knobs change the oblique or transmitted light independently. The transmitted illuminator fluorescent lamp cannot be adjusted.\n - 3. 
The angle of the oblique lamp can be adjusted to ensure optimum lighting of the sample.\n\n## CHANGING THE INTERPUPILLARY DISTANCE\n\n - 1. The distance between the observer's pupils is the interpupillary distance.\n - 12 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n## Model AY11234\n\n| Name | Qty |\n|--------------------------------------------------|---------------|\n| Binocular Body (incl. 2x, 4x obj.) | 1 |\n| 10x Wide Field Eyepiece | 2 |\n| Eyeshade | 2 |\n| 12V 10W Halogen Lamp 12V 10W Halogen Lamp w/cup | 1 ea. (spare) |\n| Fuse 2A (spare) | 1 |\n| Lens Cleaning Tissue | 1 |\n| Dust Cover | 1 |\n| Specifications | 1 |\n| Packing Slip | 1 |\n| Quality Inspection Certificate | 1 |", - "page_start": 6, - "page_end": 6, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n## PARTS LIST\n\n## Model AY11240\n\nName\n\nMicroscope Stand\n\nAchromatic\n\nObjective\n\nPlain Concave Mirror\n\n1\n\nPlastic Dust Cover\n\n1\n\n10x Wide Field Eyepiece\n\n1\n\nLens Cleaning Tissue\n\n1\n\nSpecification\n\n1\n\nInspection Certificate\n\n1\n\nPacking List\n\n1\n\n## OPERATION\n\n## Model AY11240\n\n## Model AY11238\n\n - 1. Remove components from package. identify all parts before assembling.\n - 2. Attach 4x, 10x and 40x objectives to revolving turret.\n - 3. Place the specimen on the stage and secure with spring clips. NOTE: The cover glass must face upward (the thinner glass is the cover glass), otherwise when the 40x objective is used the specimen cannot be observed. Observation is best when the thickness of the cover glass is 0.1-1.1mm and the cover glass is 0.17mm.\n - 4. Adjust the stand to an angle that provides comfortable observation.\n - 5. Rotate and adjust concave mirror to light the field of view. NOTE: Do not reflect the Sun with the mirror. This can cause serious eye injury or permanent eye damage.\n - 6. 
Observe the specimen using the lowest magnification objective first. The 4x objective provides a larger field of view to search specimen.\n - 1. Remove components from package. Identify all parts before assembling.\n - 2. Attach 4x, 10x and 40x objectives to revolving turret.\n - 3. Place the specimen on the stage and secure with spring clips. NOTE: The cover glass must face upward (the thinner glass is the cover glass), otherwise when the 40x objective is used the specimen cannot be observed. Observation is best when the thickness of the cover glass is 0.1-1.1mm and the cover glass is 0.17mm.\n - 4. Plug power cord into an electrical outlet. Turn microscope lamp ON.\n - 5. Observe the specimen using the lowest magnification objective first. The 4x objective provides a larger field of view to search specimen.\n\n4x\n\n10x\n\n40x (s)\n\nQty\n\n1\n\n1\n\n1\n\n1\n\n## Model AY11238\n\n| Name | Name | Qty |\n|-------------------------|-------------------------|-------|\n| Microscope Stand | Microscope Stand | 1 |\n| | 4x | 1 |\n| Achromatic Objective | 10x | 1 |\n| | 40x (s) | 1 |\n| 10x Wide Field Eyepiece | 10x Wide Field Eyepiece | 1 |\n| Plastic Dust Cover | Plastic Dust Cover | 1 |\n| Spare Bulb | Spare Bulb | 1 |\n| Lens Cleaning Tissue | Lens Cleaning Tissue | 1 |\n| Specification | Specification | 1 |\n| Inspection Certificate | Inspection Certificate | 1 |\n| Packing List | Packing List | 1 |", - "page_start": 2, - "page_end": 2, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "- 2. 
The NVMe host can be verified by using the lshost command, as shown in Example 8-20.\n\nExample 8-20\n\n```\nIBM\\_Storwize:ITSO-V7000:superuser>lshost 6\nid 6\nname NVMe-Host-01\nport\\_count 1\ntype generic\nmask 1111111111111111111111111111111111111111111111111111111111111111\niogrp\\_count 4\nstatus offline\nsite\\_id\n```", - "page_start": 396, - "page_end": 396, - "source_file": "sg247938.pdf" - }, - { - "text": "## INDEX\n\nMaintenance............................................ 1\n\nModel AY11240/Model AY11238.................. 2-5\n\nModel AY11228/Model AY11232.................. 6-9\n\nModel AY11230/Model AY11234.................. 10-13\n\nModel AY11236........................................ 14-18\n\nWarranty Information................................ Back Cover\n\n## IMPORTANT NOTES\n\nCongratulations on your purchase of this high quality BARSKA microscope. With proper care, this microscope will provide many years of use. Please read the following instructions before operating this instrument.\n\n- 1. Do not attempt to disassemble the instrument. This product has been carefully assembled at the factory and should only be examined by a factory-trained technician.\n- 2. This instrument should only be used in an environment with an indoor temperature range of 32°F to 104°F.\n- 3. Do not use this instrument in an environment with a lot of dust. Cover the instrument when not in use.\n- 4. Do not subject the instrument to shock.\n\n## MAINTENANCE\n\nProper care and storage of this instrument is essential. Please read the following guidelines:\n\n- 1. Keep the instrument in a dry and moisture-free location.\n- 2. Do not expose to acid, alkali fumes or moisture.\n- 3. Keep optical parts clean and free of dust. To clean optical parts gently wipe with lens cleaning tissue and a mixture of alcohol and diethyl ether. Depending on weather conditions, the following are the recommended mixture ratios:\n\nWet weather: 1:2\n\n- Dry Weather: 1:1\n- 4. 
After use, cover the instrument with the plastic dust cover.\n- 5. If instrument is to be stored for an extended period of time, remove the eyepiece and oculars and store in a moisture-proof container.\n\n<!-- image -->\n\n## MODEL AY11240/AY11238\n\n<!-- image -->\n\n## MICROSCOPE USAGE\n\nBARSKA Model AY11240 and Model AY11238 are designed for biological studies such as specimen examination. They can also be used for examining bacteria and for general clinical and medical studies. Simple design and use is especially useful for school classroom instruction.\n\n## CONSTRUCTION\n\nBARSKA Model AY11240 is a fixed tube type. For comfortable observation, the arm can be easily tilted at any angle from 90 o vertical to 45 o level. It is also equipped with a coarse adjustment and fine adjustment as well as a space limiter to protect the objective from contacting and damaging the specimen. BARSKA Model AY11238 features a monocular tube that is slanted at a 45 o angle. The head rotates 360 o . The Eyepiece Set Screw prevents the eyepiece from falling out of the tube.\n\n<!-- image -->", - "page_start": 1, - "page_end": 1, - "source_file": "Microscope Manual.pdf" - }, - { - "text": "## OPERATION ( cont. )\n\n## Model AY11228\n\n## Model AY11232\n\n## SELECTING OBJECTIVE MAGNIFICATION\n\n - 1. There are two objectives. The lower magnification objective has a greater depth of field and view.\n - 2. In order to observe the specimen easily use the lower magnification objective first. Then, by rotating the case, the magnification can be changed.\n\n## CHANGING THE INTERPUPILLARY DISTANCE\n\n - 1. The distance between the observer's pupils is the interpupillary distance.\n - 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece.\n\n## FOCUSING\n\n - 1. Remove the lens protective cover.\n - 2. Place the specimen on the working stage.\n - 3. 
Focus the specimen with the left eye first while turning the focus knob until the image appears clear and sharp.\n - 4. Rotate the right eyepiece ring until the images in each eyepiece coincide and are sharp and clear.\n\n## CHANGING THE BULB\n\n - 1. Disconnect the power cord from the electrical outlet before changing the bulb.\n - 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap.\n - 3. Replace with a new halogen bulb.\n - 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator.\n\n## FOCUSING\n\n - 1. Turn the focusing knob away or toward you until a clear image is viewed.\n - 2. If the image is unclear, adjust the height of the elevator up or down, then turn the focusing knob again.\n\n## ZOOM MAGNIFICATION\n\n - 1. Turn the zoom magnification knob to the desired magnification and field of view.\n - 2. In most situations, it is recommended that you focus at the lowest magnification, then move to a higher magnification and re-focus as necessary.\n - 3. If the image is not clear to both eyes at the same time, the diopter ring may need adjustment.\n\n## DIOPTER RING ADJUSTMENT\n\n - 1. To adjust the eyepiece for viewing with or without eyeglasses and for differences in acuity between the right and left eyes, follow the following steps:\n - a. Observe an image through the left eyepiece and bring a specific point into focus using the focus knob.\n - b. 
By turning the diopter ring adjustment for the left eyepiece, bring the same point into sharp focus.\n - c. Then bring the same point into focus through the right eyepiece by turning the right diopter ring.\n - d. With more than one viewer, each viewer should note their own diopter ring position for the left and right eyepieces, then before viewing set the diopter ring adjustments to that setting.\n\n## CHANGING THE BULB", - "page_start": 5, - "page_end": 5, - "source_file": "Microscope Manual.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia3.pdf", - "query": "What event marks the beginning of the field of artificial intelligence?", - "target_page": 22, - "target_passage": "The field of AI research was founded at a workshop at Dartmouth College in 1956.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "<!-- image -->\n\n## Artificial intelligence\n\nArtificial intelligence ( AI ), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. [1] Such machines may be called AIs.\n\nHigh-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). 
However, many AI applications are not perceived as AI: \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.\" [2][3]\n\nVarious subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics. [a] General intelligence-the ability to complete any task performed by a human on an at least equal level-is among the field's long-term goals. [4] To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. [b] AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields. [5]\n\nArtificial intelligence was founded as an academic discipline in 1956, [6] and the field went through multiple cycles of optimism throughout its history, [7][8] followed by periods of disappointment and loss of funding, known as AI winters. [9][10] Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques. [11] This growth accelerated further after 2017 with the transformer architecture, [12] and by the early 2020s many billions of dollars were being invested in AI and the field experienced rapid ongoing progress in what has become known as the AI boom. 
The emergence of advanced generative AI in the midst of the AI boom and its ability to create and modify content exposed several unintended consequences and harms in the present and raised concerns about the risks of AI and its long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.\n\n## Goals", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Edward Fredkin argues that \"artificial intelligence is the next step in evolution\", an idea first proposed by Samuel Butler's \"Darwin among the Machines\" as far back as 1863, and expanded upon by George Dyson in his 1998 book Darwin Among the Machines: The Evolution of Global Intelligence . [398]\n\n## In fiction\n\nThought-capable artificial beings have appeared as storytelling devices since antiquity, [399] and have been a persistent theme in science fiction. [400]\n\nA common trope in these works began with Mary Shelley's Frankenstein , where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and\n\nThe word \"robot\" itself was coined by Karel Čapek in his 1921 play R.U.R. , the title standing for \"Rossum's Universal Robots\".\n\n<!-- image -->\n\nBishop from Aliens (1986) are less prominent in popular culture. [401]\n\nIsaac Asimov introduced the Three Laws of Robotics in many stories, most notably with the \"Multivac\" super-intelligent computer. 
Asimov's laws are often brought up during lay discussions of machine ethics; [402] while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity. [403]\n\nSeveral works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R. , the films A.I. Artificial Intelligence and Ex Machina , as well as the novel Do Androids Dream of Electric Sheep? , by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence. [404]\n\n## See also\n\n - Artificial intelligence and elections - Use and impact of AI on political elections\n - Artificial intelligence content detection - Software to detect AI-generated content\n - Behavior selection algorithm - Algorithm that selects actions for intelligent agents\n - Business process automation - Automation of business processes\n - Case-based reasoning - Process of solving new problems based on the solutions of similar past problems\n - Computational intelligence - Ability of a computer to learn a specific task from data or experimental observation\n - Digital immortality - Hypothetical concept of storing a personality in digital form\n - Emergent algorithm - Algorithm exhibiting emergent behavior\n - Female gendering of AI technologies - Gender biases in digital technology", - "page_start": 27, - "page_end": 27, - "source_file": "wikipedia3.pdf" - }, - { - "text": "In November 2023, the first global AI Safety Summit was held in Bletchley Park in the UK to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks. 
[314] 28 countries including the United States, China, and the European Union issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence. [315][316] In May 2024 at the AI Seoul Summit, 16 global AI tech companies agreed to safety commitments on the development of AI. [317][318]\n\n## History\n\nThe study of mechanical or \"formal\" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as \"0\" and \"1\", could simulate any conceivable form of mathematical reasoning. [319][320] This, along with concurrent discoveries in cybernetics, information theory and neurobiology, led researchers to consider the possibility of building an \"electronic brain\". [r] They developed several areas of research that would become part of AI, [322] such as McCullouch and Pitts design for \"artificial neurons\" in 1943, [115] and Turing's influential 1950 paper 'Computing Machinery and Intelligence', which introduced the Turing test and showed that \"machine intelligence\" was plausible. [323][320]\n\nThe field of AI research was founded at a workshop at Dartmouth College in 1956. [s][6] The attendees became the leaders of AI research in the 1960s. [t] They and their students produced programs that the press described as \"astonishing\": [u] computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English. [v][7] Artificial intelligence laboratories were set up at a number of British and U.S. universities in the latter 1950s and early 1960s. [320]\n\nResearchers in the 1960s and the 1970s were convinced that their methods would eventually succeed in creating a machine with general intelligence and considered this the goal of their field. 
[327] In 1965 Herbert Simon predicted, \"machines will be capable, within twenty years, of doing any work a man can do\". [328] In 1967 Marvin Minsky agreed, writing that \"within a generation ... the problem of creating 'artificial intelligence' will substantially be solved\". [329] They had, however, underestimated the difficulty of the problem. [w] In 1974, both the U.S. and British governments cut off exploratory research in response to the criticism of Sir James Lighthill [331] and ongoing pressure from the U.S. Congress to fund more productive projects. [332] Minsky's and Papert's book Perceptrons was understood as proving that artificial neural networks would never be useful for solving real-world tasks, thus discrediting the approach altogether. [333] The \"AI winter\", a period when obtaining funding for AI projects was difficult, followed. [9]\n\nIn the early 1980s, AI research was revived by the commercial success of expert systems, [334] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research. [8] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longerlasting winter began. [10]", - "page_start": 21, - "page_end": 21, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Some authors have suggested in practice, that the definition of AI is vague and difficult to define, with contention as to whether classical algorithms should be categorised as AI, [367] with many companies during the early 2020s AI boom using the term as a marketing buzzword, often even if they did \"not actually use AI in a material way\". [368]\n\n## Evaluating approaches to AI\n\nNo established unifying theory or paradigm has guided AI research for most of its history. 
[aa] The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term \"artificial intelligence\" to mean \"machine learning with neural networks\"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers.\n\n## Symbolic AI and its limits\n\nSymbolic AI (or \"GOFAI\") [370] simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. They were highly successful at \"intelligent\" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: \"A physical symbol system has the necessary and sufficient means of general intelligent action.\" [371]\n\nHowever, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level \"intelligent\" tasks were easy for AI, but low level \"instinctive\" tasks were extremely difficult. [372] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a \"feel\" for the situation, rather than explicit symbolic knowledge. [373] Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him. [ab][16]\n\nThe issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. 
Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence, [375][376] in part because subsymbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.\n\n## Neat vs. scruffy\n\n\"Neats\" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). \"Scruffies\" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s, [377] but eventually was seen as irrelevant. Modern AI has elements of both.\n\n## Soft vs. hard computing", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia3.pdf" - }, - { - "text": "U.S. Computer Science PhD graduates have specialized in \"AI\". [353] About 800,000 \"AI\"-related U.S. job openings existed in 2022. [354] According to PitchBook research, 22% of newly funded startups in 2024 claimed to be AI companies. [355]\n\n## Philosophy\n\nPhilosophical debates have historically sought to determine the nature of intelligence and how to make intelligent machines. [356] Another major focus has been whether machines can be conscious, and the associated ethical implications. [357] Many other topics in philosophy are relevant to AI, such as epistemology and free will. [358] Rapid advancements have intensified public discussions on the philosophy and ethics of AI. 
[357]\n\n## Defining artificial intelligence\n\nAlan Turing wrote in 1950 \"I propose to consider the question 'can machines think'?\" [359] He advised changing the question from whether a machine \"thinks\", to \"whether or not it is possible for machinery to show intelligent behaviour\". [359] He devised the Turing test, which measures the ability of a machine to simulate human conversation. [323] Since we can only observe the behavior of the machine, it does not matter if it is \"actually\" thinking or literally has a \"mind\". Turing notes that we can not determine these things about other people but \"it is usual to have a polite convention that everyone thinks.\" [360]\n\nRussell and Norvig agree with Turing that intelligence must be defined in terms of external behavior, not internal structure. [1] However, they are critical that the test requires the machine to imitate humans. \"Aeronautical engineering texts\", they wrote, \"do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.' \" [362] AI founder John McCarthy agreed, writing that \"Artificial intelligence is not, by definition, simulation of human intelligence\". [363]\n\nMcCarthy defines intelligence as \"the computational part of the ability to achieve goals in the world\". [364] Another AI founder, Marvin Minsky similarly describes it as \"the ability to solve hard problems\". [365] The leading AI textbook defines it as the study of\n\nThe Turing test can provide some evidence of intelligence, but it penalizes non-human intelligent behavior. [361]\n\n<!-- image -->\n\nagents that perceive their environment and take actions that maximize their chances of achieving defined goals. 
[1] These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the \"intelligence\" of the machine-and no other philosophical discussion is required, or may not even be possible.\n\nAnother definition has been adopted by Google, [366] a major practitioner in the field of AI. This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence.", - "page_start": 23, - "page_end": 23, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Up to this point, most of AI's funding had gone to projects that used high-level symbols to represent mental objects like plans, goals, beliefs, and known facts. In the 1980s, some researchers began to doubt that this approach would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition, [335] and began to look into \"sub-symbolic\" approaches. [336] Rodney Brooks rejected \"representation\" in general and focussed directly on engineering machines that move and survive. [x] Judea Pearl, Lofti Zadeh, and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic. [86][341] But the most important development was the revival of \"connectionism\", including neural network research, by Geoffrey Hinton and others. [342] In 1990, Yann LeCun successfully showed that convolutional neural networks can recognize handwritten digits, the first of many successful applications of neural networks. [343]\n\nAI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. 
This \"narrow\" and \"formal\" focus allowed researchers to produce verifiable results and collaborate with other fields (such as statistics, economics and mathematics). [344] By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as \"artificial intelligence\" (a tendency known as the AI effect). [345] However, several academic researchers became concerned that AI was no longer pursuing its original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield of artificial general intelligence (or \"AGI\"), which had several well-funded institutions by the 2010s. [4]\n\nDeep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field. [11] For many specific tasks, other methods were abandoned. [y] Deep learning's success was based on both hardware improvements (faster computers, [347] graphics processing units, cloud computing [348] ) and access to large amounts of data [349] (including curated datasets, [348] such as ImageNet). Deep learning's success led to an enormous increase in interest and funding in AI. [z] The amount of machine learning research (measured by total publications) increased by 50% in the years 2015-2019. [306]\n\nIn 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers refocussed their careers on these issues. The alignment problem became a serious field of academic study. [283]\n\nIn the late teens and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2015, AlphaGo, developed by DeepMind, beat the world champion Go player. The program taught only the game's rules and developed a strategy by itself. GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text. 
[350] ChatGPT, launched on November 30, 2022, became the fastest-growing consumer software application in history, gaining over 100 million users in two months. [351] It marked what is widely regarded as AI's breakout year, bringing it into the public consciousness. [352] These programs, and others, inspired an aggressive AI boom, where large companies began investing billions of dollars in AI research. According to AI Impacts, about $50 billion annually was invested in \"AI\" around 2022 in the U.S. alone and about 20% of the new", - "page_start": 22, - "page_end": 22, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- Glossary of artificial intelligence - List of definitions of terms and concepts commonly used in the study of artificial intelligence\n - Intelligence amplification - Use of information technology to augment human intelligence\n - Intelligent agent - Software agent which acts autonomously\n - Mind uploading - Hypothetical process of digitally emulating a brain\n - Organoid intelligence - Use of brain cells and brain organoids for intelligent computing\n - Robotic process automation - Form of business process automation technology\n - Wetware computer - Computer composed of organic material\n\n## Explanatory notes\n\n - a. This list of intelligent traits is based on the topics covered by the major AI textbooks, including: Russell & Norvig (2021), Luger & Stubblefield (2004), Poole, Mackworth & Goebel (1998) and Nilsson (1998)\n - b. This list of tools is based on the topics covered by the major AI textbooks, including: Russell & Norvig (2021), Luger & Stubblefield (2004), Poole, Mackworth & Goebel (1998) and Nilsson (1998)\n - c. It is among the reasons that expert systems proved to be inefficient for capturing knowledge. [30][31]\n - d. \"Rational agent\" is general term used in economics, philosophy and theoretical artificial intelligence. 
It can refer to anything that directs its behavior to accomplish goals, such as a person, an animal, a corporation, a nation, or in the case of AI, a computer program.\n - e. Alan Turing discussed the centrality of learning as early as 1950, in his classic paper \"Computing Machinery and Intelligence\". [42] In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: \"An Inductive Inference Machine\". [43]\n - f. See AI winter § Machine translation and the ALPAC report of 1966\n - g. Compared with symbolic logic, formal Bayesian inference is computationally expensive. For inference to be tractable, most observations must be conditionally independent of one another. AdSense uses a Bayesian network with over 300 million edges to learn which ads to serve. [93]\n - h. Expectation-maximization, one of the most popular algorithms in machine learning, allows clustering in the presence of unknown latent variables. [95]\n - i. Some form of deep neural networks (without a specific learning algorithm) were described by: Warren S. McCulloch and Walter Pitts (1943) [115] Alan Turing (1948); [116] Karl Steinbuch and Roger David Joseph (1961). [117] Deep or recurrent networks that learned (or used gradient descent) were developed by: Frank Rosenblatt(1957); [116] Oliver Selfridge (1959); [117] Alexey Ivakhnenko and Valentin Lapa (1965); [118] Kaoru Nakano (1971); [119] Shun-Ichi Amari (1972); [119] John Joseph Hopfield (1982). [119] Precursors to backpropagation were developed by: Henry J. Kelley (1960); [116] Arthur E. Bryson (1962); [116] Stuart Dreyfus (1962); [116] Arthur E. Bryson and Yu-Chi Ho (1969); [116] Backpropagation was independently developed by: Seppo Linnainmaa (1970); [120] Paul Werbos (1974). [116]\n - j. Geoffrey Hinton said, of his work on neural networks in the 1990s, \"our labeled datasets were thousands of times too small. 
[And] our computers were millions of times too slow.\" [121]", - "page_start": 28, - "page_end": 28, - "source_file": "wikipedia3.pdf" - }, - { - "text": "show that even a computer capable of perfectly simulating human behavior would not have a mind. [387]\n\n## AI welfare and rights\n\nIt is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree. [388] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals. [389][390] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights. [389] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society. [391]\n\nIn 2017, the European Union considered granting \"electronic personhood\" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities. [392] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part to society on their own. [393][394]\n\nProgress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited. [390][389]\n\n## Future\n\n## Superintelligence and the singularity\n\nA superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. 
[379] If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an \"intelligence explosion\" and Vernor Vinge called a \"singularity\". [395]\n\nHowever, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do. [396]\n\n## Transhumanism\n\nRobot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger. [397]", - "page_start": 26, - "page_end": 26, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- Roberts, Jacob (2016). \"Thinking Machines: The Search for Artificial Intelligence\" (https://web.ar chive.org/web/20180819152455/https://www.sciencehistory.org/distillations/magazine/thinkin g-machines-the-search-for-artificial-intelligence). Distillations . Vol. 2, no. 2. pp. 14-23. Archived from the original (https://www.sciencehistory.org/distillations/magazine/thinking-ma chines-the-search-for-artificial-intelligence) on 19 August 2018. Retrieved 20 March 2018.\n - Robitzski, Dan (5 September 2018). \"Five experts share what scares them the most about AI\" (https://futurism.com/artificial-intelligence-experts-fear/amp). Archived (https://web.archive.or g/web/20191208094101/https://futurism.com/artificial-intelligence-experts-fear/amp) from the original on 8 December 2019. Retrieved 8 December 2019.\n - Rose, Steve (11 July 2023). \"AI Utopia or dystopia?\". The Guardian Weekly . pp. 42-43.\n - Russell, Stuart (2019). Human Compatible: Artificial Intelligence and the Problem of Control . United States: Viking. ISBN 978-0-5255-5861-3. 
OCLC 1083694322 (https://search.worldca t.org/oclc/1083694322).\n - Sainato, Michael (19 August 2015). \"Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence\" (https://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gat es-warn-about-artificial-intelligence). Observer . Archived (https://web.archive.org/web/20151 030053323/http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-ab out-artificial-intelligence) from the original on 30 October 2015. Retrieved 30 October 2015.\n - Sample, Ian (5 November 2017). \"Computer says no: why making AIs fair, accountable and transparent is crucial\" (https://www.theguardian.com/science/2017/nov/05/computer-says-no -why-making-ais-fair-accountable-and-transparent-is-crucial). The Guardian . Archived (http s://web.archive.org/web/20221010134155/https://theguardian.com/science/2017/nov/05/co mputer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial) from the original on 10 October 2022. Retrieved 30 January 2018.\n - Rothman, Denis (7 October 2020). \"Exploring LIME Explanations and the Mathematics Behind It\" (https://www.codemotion.com/magazine/ai-ml/lime-explainable-ai). Codemotion . Archived (https://web.archive.org/web/20231125045932/https://www.codemotion.com/magazine/ai-m l/lime-explainable-ai/) from the original on 25 November 2023. Retrieved 25 November 2023.\n - Scassellati, Brian (2002). \"Theory of mind for a humanoid robot\". Autonomous Robots . 12 (1): 13-24. doi:10.1023/A:1013298507114 (https://doi.org/10.1023%2FA%3A1013298507114). S2CID 1979315 (https://api.semanticscholar.org/CorpusID:1979315).\n - Schmidhuber, J. (2015). \"Deep Learning in Neural Networks: An Overview\". Neural Networks . 61 : 85-117. arXiv:1404.7828 (https://arxiv.org/abs/1404.7828). doi:10.1016/j.neunet.2014.09.003 (https://doi.org/10.1016%2Fj.neunet.2014.09.003). PMID 25462637 (https://pubmed.ncbi.nlm.nih.gov/25462637). S2CID 11715509 (https://api. 
semanticscholar.org/CorpusID:11715509).\n - Schmidhuber, Jürgen (2022). \"Annotated History of Modern AI and Deep Learning\" (https://peop le.idsia.ch/~juergen/). Archived (https://web.archive.org/web/20230807173414/https://peopl e.idsia.ch/~juergen/) from the original on 7 August 2023. Retrieved 5 October 2024.", - "page_start": 62, - "page_end": 62, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Artificial intelligent (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks. [175][176][177]\n\nVincent van Gogh in watercolour created by generative AI software\n\n<!-- image -->\n\n## Other industry-specific tasks\n\nThere are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated \"AI\" in some offerings or processes. 
[178] A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management.\n\nAI applications for evacuation and disaster management are growing. AI has been used to investigate if and how people evacuated in large scale and small scale evacuations using historical data from GPS, videos or social media. Further, AI can provide real time information on the real time evacuation conditions. [179][180][181]\n\nIn agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increasing yield. Agronomists use AI to conduct research and development. AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water.\n\nArtificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for \"classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights.\" For example, it is used for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. 
Additionally, it could be used for activities in space, such as space exploration, including the analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia3.pdf", - "query": "What would a superintelligence need?", - "target_page": 27, - "target_passage": "possess intelligence far surpassing that of the brightest and most gifted human mind.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "show that even a computer capable of perfectly simulating human behavior would not have a mind. [387]\n\n## AI welfare and rights\n\nIt is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree. [388] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals. [389][390] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights. [389] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society. [391]\n\nIn 2017, the European Union considered granting \"electronic personhood\" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities. [392] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part to society on their own. [393][394]\n\nProgress in AI increased interest in the topic. 
Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited. [390][389]\n\n## Future\n\n## Superintelligence and the singularity\n\nA superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. [379] If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an \"intelligence explosion\" and Vernor Vinge called a \"singularity\". [395]\n\nHowever, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do. [396]\n\n## Transhumanism\n\nRobot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger. [397]", - "page_start": 26, - "page_end": 26, - "source_file": "wikipedia3.pdf" - }, - { - "text": "## Existential risk\n\nIt has been argued AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist Stephen Hawking stated, \"spell the end of the human race\". [265] This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like \"self-awareness\" (or \"sentience\" or \"consciousness\") and becomes a malevolent character. 
[q] These sci-fi scenarios are misleading in several ways.\n\nFirst, AI does not require human-like sentience to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager). [267] Stuart Russell gives the example of household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that \"you can't fetch the coffee if you're dead.\" [268] In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is \"fundamentally on our side\". [269]\n\nSecond, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like ideologies, law, government, money and the economy are built on language; they exist because there are stories that billions of people believe. The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive. [270]\n\nThe opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI. 
[271] Personalities such as Stephen Hawking, Bill Gates, and Elon Musk, [272] as well as AI pioneers such as Yoshua Bengio, Stuart Russell, Demis Hassabis, and Sam Altman, have expressed concerns about existential risk from AI.\n\nIn May 2023, Geoffrey Hinton announced his resignation from Google in order to be able to \"freely speak out about the risks of AI\" without \"considering how this impacts Google.\" [273] He notably mentioned risks of an AI takeover, [274] and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in use of AI. [275]\n\nIn 2023, many leading AI experts endorsed the joint statement that \"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war\". [276]\n\nSome other researchers were more optimistic. AI pioneer Jürgen Schmidhuber did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making \"human lives longer and healthier and easier.\" [277] While the tools that are now being used to improve lives can also be used by bad actors, \"they can also be used against the bad actors.\" [278][279] Andrew Ng also argued that \"it's a mistake to fall for the doomsday hype on AI-and that regulators who do will only benefit vested interests.\" [280] Yann LeCun \"scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction.\" [281] In the early 2010s, experts argued that the risks are too distant in", - "page_start": 18, - "page_end": 18, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Finding a provably correct or optimal solution is intractable for many important problems. [15] Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. 
Soft computing was introduced in the late 1980s and most successful AI programs in the 21st century are examples of soft computing with neural networks.\n\n## Narrow vs. general AI\n\nAI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals. [378][379] General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The sub-field of artificial general intelligence studies this area exclusively.\n\n## Machine consciousness, sentience, and mind\n\nThe philosophy of mind does not know whether a machine can have a mind, consciousness and mental states, in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. Russell and Norvig add that \"[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on.\" [380] However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction.\n\n## Consciousness\n\nDavid Chalmers identified two problems in understanding the mind, which he named the \"hard\" and \"easy\" problems of consciousness. [381] The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). 
While human information processing is easy to explain, human subjective experience is difficult to explain. For example, it is easy to imagine a colorblind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like . [382]\n\n## Computationalism and functionalism\n\nComputationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind-body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam. [383]\n\nPhilosopher John Searle characterized this position as \"strong AI\": \"The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.\" [ac] Searle challenges this claim with his Chinese room argument, which attempts to", - "page_start": 25, - "page_end": 25, - "source_file": "wikipedia3.pdf" - }, - { - "text": "U.S. Computer Science PhD graduates have specialized in \"AI\". [353] About 800,000 \"AI\"-related U.S. job openings existed in 2022. [354] According to PitchBook research, 22% of newly funded startups in 2024 claimed to be AI companies. [355]\n\n## Philosophy\n\nPhilosophical debates have historically sought to determine the nature of intelligence and how to make intelligent machines. [356] Another major focus has been whether machines can be conscious, and the associated ethical implications. [357] Many other topics in philosophy are relevant to AI, such as epistemology and free will. 
[358] Rapid advancements have intensified public discussions on the philosophy and ethics of AI. [357]\n\n## Defining artificial intelligence\n\nAlan Turing wrote in 1950 \"I propose to consider the question 'can machines think'?\" [359] He advised changing the question from whether a machine \"thinks\", to \"whether or not it is possible for machinery to show intelligent behaviour\". [359] He devised the Turing test, which measures the ability of a machine to simulate human conversation. [323] Since we can only observe the behavior of the machine, it does not matter if it is \"actually\" thinking or literally has a \"mind\". Turing notes that we can not determine these things about other people but \"it is usual to have a polite convention that everyone thinks.\" [360]\n\nRussell and Norvig agree with Turing that intelligence must be defined in terms of external behavior, not internal structure. [1] However, they are critical that the test requires the machine to imitate humans. \"Aeronautical engineering texts\", they wrote, \"do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.' \" [362] AI founder John McCarthy agreed, writing that \"Artificial intelligence is not, by definition, simulation of human intelligence\". [363]\n\nMcCarthy defines intelligence as \"the computational part of the ability to achieve goals in the world\". [364] Another AI founder, Marvin Minsky similarly describes it as \"the ability to solve hard problems\". [365] The leading AI textbook defines it as the study of\n\nThe Turing test can provide some evidence of intelligence, but it penalizes non-human intelligent behavior. [361]\n\n<!-- image -->\n\nagents that perceive their environment and take actions that maximize their chances of achieving defined goals. 
[1] These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the \"intelligence\" of the machine-and no other philosophical discussion is required, or may not even be possible.\n\nAnother definition has been adopted by Google, [366] a major practitioner in the field of AI. This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence.", - "page_start": 23, - "page_end": 23, - "source_file": "wikipedia3.pdf" - }, - { - "text": "## EFFECT OF SUPERCHARGING ON ALTITUDE PERFORMANCE\n\nFigure 2.17. Fffect of Supercharging on Altitude Performonce\n\n<!-- image -->", - "page_start": 159, - "page_end": 159, - "source_file": "00-80T-80.pdf" - }, - { - "text": "In November 2023, the first global AI Safety Summit was held in Bletchley Park in the UK to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks. [314] 28 countries including the United States, China, and the European Union issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence. [315][316] In May 2024 at the AI Seoul Summit, 16 global AI tech companies agreed to safety commitments on the development of AI. [317][318]\n\n## History\n\nThe study of mechanical or \"formal\" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as \"0\" and \"1\", could simulate any conceivable form of mathematical reasoning. 
[319][320] This, along with concurrent discoveries in cybernetics, information theory and neurobiology, led researchers to consider the possibility of building an \"electronic brain\". [r] They developed several areas of research that would become part of AI, [322] such as McCullouch and Pitts design for \"artificial neurons\" in 1943, [115] and Turing's influential 1950 paper 'Computing Machinery and Intelligence', which introduced the Turing test and showed that \"machine intelligence\" was plausible. [323][320]\n\nThe field of AI research was founded at a workshop at Dartmouth College in 1956. [s][6] The attendees became the leaders of AI research in the 1960s. [t] They and their students produced programs that the press described as \"astonishing\": [u] computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English. [v][7] Artificial intelligence laboratories were set up at a number of British and U.S. universities in the latter 1950s and early 1960s. [320]\n\nResearchers in the 1960s and the 1970s were convinced that their methods would eventually succeed in creating a machine with general intelligence and considered this the goal of their field. [327] In 1965 Herbert Simon predicted, \"machines will be capable, within twenty years, of doing any work a man can do\". [328] In 1967 Marvin Minsky agreed, writing that \"within a generation ... the problem of creating 'artificial intelligence' will substantially be solved\". [329] They had, however, underestimated the difficulty of the problem. [w] In 1974, both the U.S. and British governments cut off exploratory research in response to the criticism of Sir James Lighthill [331] and ongoing pressure from the U.S. Congress to fund more productive projects. [332] Minsky's and Papert's book Perceptrons was understood as proving that artificial neural networks would never be useful for solving real-world tasks, thus discrediting the approach altogether. 
[333] The \"AI winter\", a period when obtaining funding for AI projects was difficult, followed. [9]\n\nIn the early 1980s, AI research was revived by the commercial success of expert systems, [334] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research. [8] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longerlasting winter began. [10]", - "page_start": 21, - "page_end": 21, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- Roivainen, Eka, \"AI's IQ: ChatGPT aced a [standard intelligence] test but showed that intelligence cannot be measured by IQ alone\", Scientific American , vol. 329, no. 1 (July/August 2023), p. 7. \"Despite its high IQ, ChatGPT fails at tasks that require real humanlike reasoning or an understanding of the physical and social world.... ChatGPT seemed unable to reason logically and tried to rely on its vast database of... facts derived from online texts.\"", - "page_start": 68, - "page_end": 68, - "source_file": "wikipedia3.pdf" - }, - { - "text": "David Chalmers calls this form of idealism one of \"the handful of promising approaches to the mindbody problem.\" [127]\n\n## New mysterianism\n\nNew mysterianism, most significantly associated with the philosopher Colin McGinn, proposes that the human mind, in its current form, will not be able to explain consciousness. [128][11] McGinn draws on Noam Chomsky's distinction between problems, which are in principle solvable, and mysteries, which human cognitive faculties are unequipped to ever understand, and places the mind-body problem in the latter category. 
[128] His position is that a naturalistic explanation does exist but that the human mind is cognitively closed to it due to its limited range of intellectual abilities. [128] He cites Jerry Fodor's concept of the modularity of mind in support of cognitive closure. [128]\n\nWhile in McGinn's strong form, new mysterianism states that the relationship between consciousness and the material world can never be understood by the human mind, there are also weaker forms that argue it cannot be understood within existing paradigms but that advances in science or philosophy may open the way to other solutions (see above). [43] The ideas of Thomas Nagel and Joseph Levine fall into the second category. [43] Steven Pinker has also endorsed this weaker version of the view, summarizing it as follows: [9]\n\nAnd then there is the theory put forward by philosopher Colin McGinn that our vertigo when pondering the Hard Problem is itself a quirk of our brains. The brain is a product of evolution, and just as animal brains have their limitations, we have ours. Our brains can't hold a hundred numbers in memory, can't visualize seven-dimensional space and perhaps can't intuitively grasp why neural information processing observed from the outside should give rise to subjective experience on the inside. This is where I place my bet, though I admit that the theory could be demolished when an unborn genius-a Darwin or Einstein of consciousness-comes up with a flabbergasting new idea that suddenly makes it all clear to us.\n\n## Commentary on the problem's explanatory targets\n\nPhilosopher Raamy Majeed argued in 2016 that the hard problem is associated with two \"explanatory targets\": [54]\n\n - 1. [PQ] Physical processing gives rise to experiences with a phenomenal character.\n - 2. 
[Q] Our phenomenal qualities are thus-and-so.\n\nThe first fact concerns the relationship between the physical and the phenomenal (i.e., how and why are some physical states felt states), whereas the second concerns the very nature of the phenomenal itself (i.e., what does the felt state feel like?).\n\nWolfgang Fasching argues that the hard problem is not about qualia, but about the what-it-is-like-ness of experience in Nagel's sense-about the givenness of phenomenal contents:", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia2.pdf" - }, - { - "text": "Some authors have suggested in practice, that the definition of AI is vague and difficult to define, with contention as to whether classical algorithms should be categorised as AI, [367] with many companies during the early 2020s AI boom using the term as a marketing buzzword, often even if they did \"not actually use AI in a material way\". [368]\n\n## Evaluating approaches to AI\n\nNo established unifying theory or paradigm has guided AI research for most of its history. [aa] The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term \"artificial intelligence\" to mean \"machine learning with neural networks\"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers.\n\n## Symbolic AI and its limits\n\nSymbolic AI (or \"GOFAI\") [370] simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. They were highly successful at \"intelligent\" tasks such as algebra or IQ tests. 
In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: \"A physical symbol system has the necessary and sufficient means of general intelligent action.\" [371]\n\nHowever, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level \"intelligent\" tasks were easy for AI, but low level \"instinctive\" tasks were extremely difficult. [372] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a \"feel\" for the situation, rather than explicit symbolic knowledge. [373] Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him. [ab][16]\n\nThe issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence, [375][376] in part because subsymbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.\n\n## Neat vs. scruffy\n\n\"Neats\" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). \"Scruffies\" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s, [377] but eventually was seen as irrelevant. Modern AI has elements of both.\n\n## Soft vs. 
hard computing", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia3.pdf" - }, - { - "text": "The combined effect of these three factors defines altitude as the one most important item affecting the specific range of the turbojet airPl ane. As an example of this combined'effect, the typical turbojet airplane obtains a specific range at 40,ooO ft. which is approximately 150 percent greater than that obtained at sea leirel. The increased TAS accounts for approximately two-thirds of this benefit while increased engine performance (reduced cJ ,~ 'accounts for the other one-third of the benefit. For example, at sea level the maximum specific range of a turbojet airplane may be 0.1 nmi/lb. but at 40,000 ft. the maximum specific range would be approximately 0.25 nmi/lb.", - "page_start": 183, - "page_end": 183, - "source_file": "00-80T-80.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia3.pdf", - "query": "Where can I find the Inspect tool to evaluate the safety of our models?", - "target_page": 21, - "target_passage": "The UK AI Safety Institute released in 2024 a testing toolset called 'Inspect' for AI safety evaluations available under a MIT open-source licence which is freely available on GitHub", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## Safer and healthier technologies and organisation\n\nTo support the practical implementation of preventive safety and health measures , numerous actors (e.g. 
organisations of OSH professionals and practitioners, and standardisation institutes such as the European Committee for Standardisation and the International Organisation for Standardisation) issued safety and health guidance or standards, or developed new and advanced OSH management systems, the engineering sciences worked on better technical preventive technologies, on measuring and monitoring technologies, the medical sciences introduced better medical diagnosis and treatment of work-related diseases, and the social sciences contributed with better knowledge on the legal and economic determinants of OSH, or analysed the characteristics of awareness raising, knowledge development and healthy work organisation.\n\nIt is obvious that better technical and organisational prevention at work contributed to more safety and the evident strong reduction in accidents. Prominent fields and examples of such improvements are: technically safer design of moving vehicles (e.g. for fork lifts or heavy trucks and machines, light and noise warning signals for moving vehicles); safer design of machines like automatic shutdowns or disconnections, two-hand operating of machines (e.g. for pressing and punching), safer cranes including better technologies for communication between co-workers, coverage of moving parts, safer company cars (e.g. safety belts and airbags), safer tools (e.g. for drilling or cutting); improved personal protective equipment like air-supplied breathing apparatus, steel mesh gloves for meat workers, trousers for forest workers that resist a chainsaw; minimum safety requirements for buildings (e.g. forms and size of stairs and handrails, fire exits and fire alarms, safer ladders and scaffolds), emergency equipment like eye wash and emergency showers; better monitoring of acute hazards (e.g. 
in sewage water systems), exhaust and ventilation technologies to avoid fumes, dusts, chemicals or contact with hazardous biological agents; strong safety obligations for work in confined spaces, or for work at height and work in trenches; introduction of explosion zones and of non-sparking tools, a comprehensive system of warning signals, warning signals for slippery floors and unsafe grounds, better warning systems and equipment in particularly dangerous work environments like road maintenance, combined with better organisational measures; quality systems that promote continuous repair and maintenance of tools; regular instructions by safety representatives and safety coordinators, and guarantee of minimum safety standards of machines and products by European standards like CE ('European Conformity').\n\n## Major technological developments\n\nThe widespread introduction of new or advanced technologies - automation, digitalisation/ICT, green technologies, new material technologies and so on - results in substantial changes in work organisation and work processes, and replacement of (traditional) materials (screws by glues, metal and wood by plastics, nanomaterials). For OSH regulators and practitioners, it is a constant challenge to assess these changes regarding their impact on risks for health and safety and to develop adequate risk prevention and mitigation measures.", - "page_start": 13, - "page_end": 13, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- /SM590000 Status and monitor tool", - "page_start": 338, - "page_end": 338, - "source_file": "sg246915.pdf" - }, - { - "text": "## UNDERSTANDING QUICK ANALYSIS\n\nThe Quick Analysis tools were developed in response to the fact that users weren't using or even aware of the more powerful analytical tools found in Excel. So Excel decided to combine\n\n## The Quick Analysis Button\n\nThe Quick Analysis button appears when a range is selected in a worksheet. 
Clicking on the button displays the Quick Analysis gallery which contains quick analysis tools that can be applied to the selected data.\n\nThe tools have been organised along tabs at the top -\n\nFORMATTING , CHARTS , TOTALS , TABLES , and SPARKLINES .\n\nWhen you click on a tab, options specific to that tab are presented.\n\nLive Preview with some of these tools to create the Quick Analysis tools.\n\n<!-- image -->\n\n## Using Quick Analysis Tools With Live Preview\n\nMost of the Quick Analysis tools in the Quick Analysis gallery provide a Live Preview of the changes in the worksheet when you point to an option.\n\nThis is very useful if you are not sure of the formatting or type of analysis you require as it provides you with a preview of what the data would look like if you selected that specific option.\n\nAt the right we have selected only the totals from the worksheet shown above. We have pointed to options from the TOTALS tab ( % Total and Average ) and from the FORMATTING tab ( Data Bars ).\n\nLive Preview has either presented another row of analysed data or has formatted the selection accordingly.\n\nAll of these tools are also available on the ribbon but using the Quick Analysis tools is much quicker.\n\n<!-- image -->", - "page_start": 35, - "page_end": 35, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "## Safety\n\nIn the area of safety, we have established Vision Zero, the goal of which is to reduce the number of fatal accidents to zero. As a reference point, we are using the number of such accidents in 1995 that involved Nissan vehicles. We realize that accidents cannot be completely avoided, so our objective is to be substantially zero in the future. To achieve this, we have set a series of milestones, including cutting the 1995 fatal accident figure in half by 2015.\n\nInterestingly, while the number of fatal ones is decreasing, the number of all accidents in Japan is increasing. 
Our first goal is to decrease the overall accident count, which should further reduce the number of fatalities. Several factors contribute to accidents, including driver inexperience and higher speeds. Based on these factors, we came up with the approach of Safety Shield. Safety Shield establishes a timeline for the entire accident, covering the safe driving zone, the moment before the accident, the actual crash, the response time by authorities, and the time taken for post-accident rescue.\n\nIn the past, safety technology primarily focused on dealing with damage in and around the vehicle, such as airbags, body structure design, seatbelts and crumple zones. Now we are studying normal driving conditions and researching how we can keep car and driver in the safe driving zone. In cases where the driving environment becomes unsafe, some type of warning would usually help the driver to return to the safe driving zone. A driver actually in danger has probably lost control of the car. In the latter\n\ncases, we must focus on safety technologies that prompt the vehicle itself to automatically assist the driver. An example of this is Nissan's Lane Departure Prevention system or brake assist: When the vehicle approaches the lane markers, this system not only warns the driver to pay attention through a display and an audible buzzer, it also generates part of the necessary yaw movement needed to return the vehicle to its lane and safety.\n\nAnother Nissan safety innovation is the Around View Monitor. This system offers a 360-degree view on a dashboard display of what is around the vehicle. In addition to significantly reducing the blind spots in driving, the Around View Monitor is helpful when parking, since it improves the driver's field of vision and enables better maneuverability.\n\nIn developing safety technologies, we also look at the conditions that exist seconds before an unavoidable crash. 
With this information, we can provide technologies to minimize the impact and damage in addition to notifying the authorities and calling for assistance afterward. Because we are building on actual accident data, the final stage in the Safety Shield involves collecting and analyzing the data and feeding what we learn back into the development process. We have committed ourselves to introducing over ten new safety technologies during the next three years, spanning the entire driving range from the safe driving zone to the actual crash.\n\nFor more on safety at Nissan, please see the 2005 Nissan Sustainability Report\n\n<!-- image -->\n\nSafety Shield-concept image\n\n<!-- image -->\n\nAround View Monitor", - "page_start": 47, - "page_end": 47, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "It is evident that better technical and organisational prevention at workplaces contributed to this strong reduction of accidents; prominent examples of such improvements are:\n\nTechnical safer design of moving vehicles, for example, fork lifts, heavy trucks and machines, light and noise warning signals for moving vehicles; safer design of machines like automatic shutdowns or disconnections, two-hand operating of machines, for example, for pressing and punching, safer cranes including better technologies for communication between co-workers, coverage of moving parts, safer company cars, for example, safety belts, safer tools, for example, for drilling or cutting; improved PPE like air-supplied breathing apparatus, steel-made gloves for meat workers, trousers that resist a chainsaw; minimum requirements for buildings, for example, forms and size of stairs and handrails, fire exits and fire alarms, safer ladders and scaffolds, 126 emergency equipment like eye wash and emergency shower; better monitoring of acute hazards, for example, in sewage water systems, exhaust and ventilation technologies, to avoid fumes, dusts, chemicals or contact with hazardous biological 
agents; strong safety obligations for work in confined spaces, work at height and work in trenches; introduction of explosion zones and of non-sparking tools, a comprehensive system of warning signals, warning signals for slippery floors and unsafe grounds, better warning systems and equipment in", - "page_start": 61, - "page_end": 61, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## 10.3 Send for checking (PM)\n\nOnce the SE's/or PM's have prepared the national GHG inventory, by entering data into the sectoral grids and the PM of the Party has checked the complete GHG inventory for consistency and correctness, the following steps allows the PM to send the inventory for checking:\n\n - 1. Log in as PM.\n - 2. Click on 'View Inventories Progress' under sub menu 'Submission Management'.\n - 3. The 'View Inventories Progress' screen appears.\n - 4. Select the appropriate inventory by clicking the Inventory name under column 'Name' (figure 58, a).\n - 5. Press the 'Send for Checking by NFP' button to send it to the NFP for his review and approval (figure 58, b). *** Note: A notification email will be sent to the NFP email address, and the status changed to 'check' (figure 59).\n\nFigure 58. 
Work on Inventories screen - Status = Started\n\n<!-- image -->", - "page_start": 36, - "page_end": 36, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "## 3.2.2.1.2 Start a GHG inventory\n\nIn order to START a GHG inventory, please follow the steps below:\n\n -  Log in as PM.\n -  Hover the cursor on the 'Submission Management' and click on the 'View Inventories Progress' button.\n -  Click/select the appropriate GHG Inventory in Status = 'created' (see figure 7a).\n -  Click on 'Work on Inventories' under Submission Management (see figure 7b).\n\n## Figure 7: Select an Inventory screen\n\nFigure 9: 'Started' status of an Inventory\n\n<!-- image -->\n\n -  Left click to select the appropriate Inventory (figure 8a)\n -  Press the 'Start Inventory' button (figure 8b)\n\n## Figure 8: Start an Inventory screen\n\n<!-- image -->\n\nOnce the 'Start Inventory' button is pressed, the status of the selected Inventory change to 'started'. (see Figure 9)\n\n<!-- image -->", - "page_start": 8, - "page_end": 8, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "tools provide a way of seeing what the different charts will look like without having to first create the chart.\n\n<!-- image -->\n\n## Handy to Know…\n\n## To use the Quick Charting tools :\n\n - 1. Select the range to be charted, then click on the Quick Analysis button\n - 2. Choose the desired option from the CHARTS tab\n -  When creating a chart you'll need to ensure that the range you select includes the labels to be used on the chart.\n\n<!-- image -->\n\n<!-- image -->", - "page_start": 37, - "page_end": 37, - "source_file": "Excel Training Manual 1.pdf" - }, - { - "text": "Eurostat has developed under the lead of UNECE a framework to assess the quality of employment in its multiple facets. 497 Eurostat describes this framework as set of 68 indicators 498 on seven dimensions 'that address employment quality from the perspective of the employed person. 
Its design also facilitates international comparisons.' 499 OSH is covered under the section 'Safety' and is based on four indicators and includes two outcome and two risk indicators: 1) Fatal occupational injuries / Number of fatal accidents at work (excluding traffic accidents); 2) Non-fatal occupational injuries / Number of non-fatal accidents at work; 3) Exposure to physical health risk factors; and 4) Exposure to mental health risk factors. Eurostat implements the OSH parts of this framework by its ESAW and by the OSH-related ad hoc modules to the LFS, called 'Accidents at work and other work-related health problems' (surveys in 2007, 2013 and 2020).\n\nFor more detailed monitoring at EU level, DG EMPL/ACSH and EU-OSHA developed a structural model that uses four groupings: Generic information on the basics of the OSH systems and on major context factors like age or sectoral structure, main policies for the Steering of OSH , an overview on relevant Working conditions and Prevention , and Outcomes , that is, accidents, diseases and wellbeing, and some elements of the OSH infrastructure and monitoring capacity . Currently, the OSH Barometer works with 16 quantitative and qualitative indicators in these four groupings. Some of these indicators are purely descriptive, like the short descriptions of OSH authorities, OSH institutions or OSH-related surveys, and others allow qualitative comparisons of structures and policies, for example, the indicator on 'National strategies' or 'Social dialogue'. Many indicators, for example, on working conditions or work accidents, are based on quantitative data from surveys and statistics. 
These indicators allow a comparison between sectors, occupations, types of enterprises, countries, for example.\n\n## CHAPTERS\n\n## INDICATORS\n\n## Generic information\n\nIndicator:\n\nOSH authorities (descriptive)\n\nIndicator:\n\nEconomic and sector profile (quantitative)\n\nIndicator:\n\nWorkforce profile (quantitative)\n\n## Steering of OSH\n\nIndicator:\n\nRegulation (descriptive)\n\nIndicator:\n\nNational strategies (descriptive)\n\nIndicator: Social dialogue (descriptive, composite indicator)\n\n## Working conditions and prevention\n\nIndicator:\n\nWorking conditions (quantitative)\n\nIndicator:\n\nPrevention in companies (quantitative)\n\nIndicator:\n\nWorker involvement (quantitative)\n\nIndicator: OSH culture and health awareness (quantitative)\n\n## Accidents, diseases and wellbeing\n\nIndicator:\n\nWork accidents (quantitative)\n\nIndicator:\n\nWork-related diseases (quantitative)\n\nIndicator: Health perception of workers (quantitative)", - "page_start": 137, - "page_end": 137, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- 405 Short descriptions are available on the ISO website, the complete ISO standards are priced. ISO 9001 , ISO 14001 and I ISO 31000 , n\n\n406\n\nISO Committee TC 18\n\n3: Occupational health and safety management systems\n\n## https://committee.iso.org/home/tc283\n\n - 407 ILO, 2009: Guidelines on occupational safety and health management systems (Second Edition) https://www.ilo.org/safework/info/standards-and-instruments/WCMS\\_107727/lang--en/index.htm\n - 408 Köper B, Möller K, Zwetsloot G., 2009: The Occupational Safety and Health Scorecard - a business case example for strategic management, Scand J Work Environ Health 2009;35(6):413-420, doi:10.5271/sjweh.1361 409 Hutchinson, B., Dekler, S. 
Rae, A., 2022: Writing plans instead of eliminating risks: How can written safety artefacts reduce safety?,\n\nhttps://www.sciencedirect.com/science/article/abs/pii/S0925753522000789?via%3Dihub 410 European Agency for Safety and Health at Work, 2013: Analysis of the determinants of workplace occupational safety and health practice in a selection of EU Member States, European Risk Observatory, Luxembourg: Publications Office of the European Union, doi:10.2802/558", - "page_start": 156, - "page_end": 156, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "legal2_opengouvernementlicense.pdf", - "query": "What was the age category of most new opiate/crack users during the crime peak in the mid-1990s?", - "target_page": 9, - "target_passage": "mplying that most of these individuals were in their mid-to-late teens during the crime peak of the mid-1990s", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "<!-- image -->\n\n## Executive summary\n\nThis paper uses a range of datasets and methodologies to:\n\n-  obtain working estimates for the number of individuals in England who started using opiates/crack from 2005 to 2013; 1\n-  examine the characteristics of these individuals.\n\nThe main findings of the paper are as follows.\n\n-  It is estimated that around 5,000 to 8,000 individuals started using opiates or crackcocaine in 2013. There is a high degree of uncertainty around this figure due to the sparse data on this population, but sense-checks based on treatment and criminal justice system data suggest the true figure is unlikely to be much larger than 10,000.\n-  Data also suggest that the number of current opiate/crack initiates involved with crime may be even lower. 
The number of arrestees testing positive for the first time for opiates (or for both opiates and crack-cocaine) dropped from 14,750 in 2006 to 4,281 in the first 11 months of 2013, a fall of around 70 per cent 2 . Furthermore, of the new positive testers in 2013, only 721 were aged 18-24. 3 Though this arrestee data will capture only a proportion of the true population, it does suggest that the number of new, young initiates involved with crime - those who have the potential to inflict most societal harm - has decreased markedly, probably just to a few thousand per year; and that this group now make up a small minority of the total number of opiate/crack-cocaine users (estimated to be 294,000 in 2011/12), most of whom are older, longer-term users.\n-  In terms of trends in new opiate/crack-cocaine users, all available data suggest that figures have dipped by at least a fifth since 2005 and have dropped hugely since the late 1980s and early 1990s when the opiate/crack-cocaine population in the UK grew very rapidly. The current estimate works out at a rate of 0.18 per 1,000 population. During the epidemic years, published estimates of new opiate/crack-cocaine users in Manchester and Bolton show rates more than 11 times larger.\n-  However, the findings also suggest that between 2011 and early 2014, the number of new opiate/crack-cocaine users stopped decreasing and instead stabilised at a (historically) low level. Further analysis was conducted to try and determine whether this was a precursor to a new rise in initiates. Though the data are not totally conclusive, the results suggest that a marked increase in new opiate/crack-cocaine users in the near future is unlikely. If anything, findings suggested that the downward trend may be set to resume.\n-  Analysis also revealed some possible changes in characteristics of the new opiate/crackcocaine initiates. 
There is a trend in the treatment data towards new initiates coming to treatment earlier in their drug-using careers than previous cohorts and also to have", - "page_start": 2, - "page_end": 2, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "between March 2011 and March 2015 can also be seen in the raw numbers for total new OCU treatment presentations. 22\n\nFigure 10: New treatment presentations for opiate/crack use.\n\n<!-- image -->\n\nFigure 10 shows that, rather than increasing in the current year, new presentations for opiate/crack use have actually fallen slightly from 48,154 in 2013/14 to 47,241 in 2014/15, a decrease of 1.9%. However, given that the early signs of previous opiate/crack use epidemics have been missed before (see Morgan, 2014), and the potential social harm that a fresh increase in new OCUs could cause, further analysis was conducted on the most recent data to try and determine whether the apparent flattening in trends was actually caused by the early stages of a significant surge in new users.\n\nThe treatment data was broken down by age to check whether the slight fall in total new presentations in 2014/15 masked an increase in younger treatment presentations. This showed instead that opiate/crack presentations by those aged 18-24 had fallen from 3,579 in 2013/14 to 3,021 in 2014/15, a fall of 15.6%. In other words, younger new presentations have fallen at a faster rate over the last year than for those aged over-25. Furthermore, separate statistics produced for those in treatment aged 18-and-under also show a fall in aggregate numbers in treatment for opiates and crack.\n\nWe also looked at trends at the local level, given that previous epidemics have started in very specific areas and have taken several years to spread nationally. 
This means that the start of an epidemic can be hidden in the national data because it has not reached enough areas to register.", - "page_start": 26, - "page_end": 26, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "- initiated use at an older age. Currently it is not possible to determine whether this is a reporting issue or a genuine shift in the age profile of new opiate/crack-cocaine users.\n -  The report has several important policy implications. Even though numbers of new initiates involved with crime have dropped to the low thousands, putting downward pressure on crime, identification and early diversion to treatment remains paramount. Frontier Economics have estimated that the average 4 lifetime crime cost of an injecting drug user is £445,000, so the potential for social harm - even from a small number of individuals - remains large and potentially long-lasting. This means local areas need to manage both the (relatively large) stock of current users, and the (much smaller) flow of new initiates, whose treatment needs may be different. There is no evidence of any new epidemic in this country, but given the impact of the epidemic of the 80s and early 90s on crime, ongoing monitoring of recent trends is required to spot early signs of any emerging problems.\n\n## Aims and Methodology\n\nPrevious Home Office research has demonstrated the importance of opiate/crack-cocaine use in driving aggregate trends in acquisitive crime (Morgan, 2014). While established estimates exist of the total number of opiate/crack-cocaine users (OCUs) in England (Hay et al ., 2013), there are no estimates for the number of new OCUs each year (throughout this paper the number of new OCUs is also referred to as 'incidence' ). This is important for three main reasons.\n\n - i) Stock and flows: Simply knowing the stock of OCUs tells us nothing about the flows in and out - i.e. 
if the stock were constant each year that could mean that no one starts using these drugs and no one quits or it could mean all existing users quit but that they are wholly replaced by new users, or any similar scenario in between. Clearly the policy response would need to be quite different for each of these cases, so knowing the true situation is important.\n - ii) Early-warning system: Research by the Home Office and others has shown that there is generally a lag between the start of a heroin/crack epidemic and the point at which it becomes visible on administrative datasets. Closing this gap is important for policy, and part of the reason for its existence is the lack of incidence estimates. Evidence also suggests epidemics spread from area to area, so it is important to monitor local as well as national trends.\n - iii) The social harm that can arise: Though research suggests that not all OCUs resort to acquisitive crime to help finance their drug use, numerous studies show that a proportion consistently do and these individuals can be extremely prolific offenders (Morgan, 2014). One study by Frontier Economics estimated that the average lifetime cost to society of an injecting drug user was £445,000 from crime alone. Hence analysing and identifying new OCUs is a policy priority (Frontier Economics, 2010).\n\nThere are two inter-connected reasons why regular national incidence estimates have not been attempted before 5 . The first is that data on this issue are sparse given the 'hidden' nature of opiate/crack markets and that date of first use is not something that gets recorded at the moment it actually occurs. The second reason, which flows from the first, is that current", - "page_start": 3, - "page_end": 3, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "cocaine users. 
In addition, the sharp decline in total DIP tests in 2013 may be due in part to the fact that DIP ceased to be a nationally funded programme in April 2013.\n\nThese data do show, however, that from 2006 onwards, between a third and half of all acquisitive crime arrests involved a drug test and between 15 per cent and 35 per cent of those tests (depending on the year) resulted in a positive result for opiates-only or for both opiates and cocaine (hereafter labelled 'positive-for-both').\n\nThe reason for highlighting only the opiates-only and the 'positive-for-both' test results is that the primary group of interest in this report are opiate and crack-cocaine users. To capture this group, cocaine-only tests must be excluded because DIP tests cannot distinguish between powder- and crack-cocaine, so a cocaine-only positive test could indicate either. Previous evidence has demonstrated that while there is much overlap between heroin and crack-cocaine cohorts (i.e. many of those who use heroin also use crack-cocaine), powdercocaine users have a quite different profile and are far less likely to be involved with acquisitive crime. Excluding the cocaine-only tests means we can be guaranteed not to capture any powder-cocaine users (who are not also using opiates or crack), but it also means we may miss some crack-cocaine-only users, hence the figures may under-estimate the true population of OCUs slightly.\n\nThe fifth row in Table 1 shows that the total number of opiate and opiate/cocaine tests over the period was 364,537. 
Table 2 shows descriptive statistics for the individuals providing these tests (noting that the same individual may be included several times if they gave multiple positive tests).\n\nTable 2: Descriptive statistics on all positive opiate-only/positive-for-both tests.\n\n| Age | Age | Year of birth | Year of birth |\n|-----------------|---------|-----------------|-----------------|\n| Number of tests | 364,537 | Number of tests | 364,537 |\n| Mean | 32 | Mean | 1977 |\n| Median | 31 | Median | 1977 |\n| Mode | 28 | Mode | 1979 |\n| Minimum | 18 | Minimum | 1960 |\n| Maximum | 53 | Maximum | 1995 |\n\nThe mean age at test is 32 and the mean year of birth is 1977, implying that most of these individuals were in their mid-to-late teens during the crime peak of the mid-1990s. 9 Given evidence suggesting that the average age of initiation for opiate/crack use is around 18-20 (Millar et al ., 2001), this age profile would tentatively suggest that OCU incidence also peaked in the 1990s and that this created a large cohort of users who would be approaching 40 today.\n\nThe minimum and maximum years of birth are fixed by construction, because anyone born", - "page_start": 8, - "page_end": 8, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "## Conclusion\n\nThis report has attempted to draw together available data and evidence to estimate the number of new opiate/crack-cocaine users (OCUs) per year in England since 2005 and then to look briefly at their characteristics. This is important as previous research has suggested that mostly through the actions of a minority - this group has the potential to have a large impact on crime trends and therefore to impose significant societal costs.\n\nThough data on this population is imperfect, a number of different data sources and methodologies are available to estimate OCU incidence. 
From these, three key conclusions emerge:\n\n -  The number of new opiate/crack users is clearly far lower now than it was in the 1980s and early 1990s and has even dropped 20-45% since 2005.\n -  This means numbers of new users in 2013 may be around 5,000-8,000 with an approximate upper bound of 10,000; and numbers involved with prolific criminality will be lower still.\n -  The downward trend in new OCUs has flattened since about 2011, but available data do not suggest that this is the precursor to a new increase. If anything, the downward trend may resume in 2014, though the situation requires further monitoring.\n\nFor local areas then, this report suggests that it is still important to identify new OCUs as the arrestee data showed that a proportion of these are likely to offend over a long period of time. But also, there was some evidence of a shift to older initiates, which may require a slightly different treatment approach.", - "page_start": 29, - "page_end": 29, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "before 1960 was removed and because DIP tests are only administered to those aged 18 and over, so only using data to 2013 means it would not be possible for anyone to be born in 1996 or afterwards to be included. Even so, it is clear from the year-of-birth distribution (Figure 2) that positive opiate tests drop off sharply for those born after 1982. This is in line with other evidence suggesting that the number of new users of opiates decreased sharply in the 2000s. This needs to be considered when interpreting the analysis that follows. When DIP and the NDTMS treatment system began in the mid-2000s, there already existed a cohort of around 320,000 OCUs, according to available estimates by Hay et al ., (2013). And most of these individuals began using opiates/crack during the epidemic years of the 1980s and 1990s. 
In terms of data capture this means it is hard to separate the gradual inclusion of more and more individuals from this original cohort from genuinely new users of these drugs.\n\nFigure 2: Year of birth distribution for all opiate-only/positive-for-both tests.\n\n<!-- image -->\n\nFigure 3, which shows the age of the individual at a positive test, also reveals that although the average age at positive test is 32, the peak is quite flat, with high numbers of positive tests still being recorded by individuals in their late 30s and even into their 40s.", - "page_start": 9, - "page_end": 9, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "| | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year 
|\n|-----------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|\n| First test year | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | Adjusted 2013 |\n| 2004 | 12,246 | 3,171 | 3,299 | 3,090 | 2,992 | 2,573 | 2,311 | 1,766 | 1,513 | 1,092 | 1,191 |", - "page_start": 17, - "page_end": 17, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "| | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) 
|\n|---------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|\n| Year of first test | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | Adjusted 2013 |\n| 2004 | 17,174 | 5,604 | 7,091 | 6,784 | 6,509 | 5,292 | 4,863 | 3,341 | 2,629 | 1,800 | 1,964 |\n| 2005 | | 13,553 | 6,066 | 5,110 | 4,941 | 3,983 | 3,549 | 2,323 | 1,947 | 1,383 | 1,509 |", - "page_start": 16, - "page_end": 16, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "## References\n\nAhmad and Richardson. (2016). Impact of the reduction in heroin supply . Forthcoming.\n\nDe Angelis, D., Hickman, M. and Yang, S. (2004). 'Estimating Long-term Trends in the Incidence and Prevalence of Opiate Use/Injecting Drug Use and the Number of Former Users: Back-Calculation Methods and Opiate Overdose Deaths'. American Journal of Epidemiology vol. 160 (10). Available at: http://aje.oxfordjournals.org/content/160/10/994.full.pdf\n\nFrontier Economics (2010). Specialist drug and alcohol services for young people - a cost benefit analysis. Department for Education, 2010. Available at: https://www.gov.uk/government/uploads/system/uploads/attachment\\_data/file/182312/DFERR087.pdf\n\nGossop, M., Marsden, J., Stewart, D., & Kidd, T. 
(2003) 'The National Treatment Outcome Research Study (NTORS): 4-5 year follow -up results', Addiction , vol. 98 (3), pp 291-303.\n\nHay, G., dos Santos, A. R. and Worsley, J. (2013). Estimates of the Prevalence of Opiate Use and/or Crack Cocaine Use, 2011/12: Sweep 8 report. Liverpool John Moores University.\n\nHoryniak, D., Stoové, M., Degenhardt, L., Aitken, C., Kerr, T., & Dietze, P . (2015). How do drug market changes affect characteristics of injecting initiation and subsequent patterns of drug use? Findings from a cohort of regular heroin and methamphetamine injectors in Melbourne, Australia. International Journal of Drug Policy , 26 (1), 43-50.\n\nMillar, T., Craine, N., Carnwath, T. and Donmall, M. (2001). 'The dynamics of heroin use; implications for intervention.' Journal of Epidemiology and Community Health , 55(12), 930-933.\n\nMorgan, N. (2014). The heroin epidemic of the 1980s and 1990s and its effect on crime trends then and now, Home Office Research Report 79.\n\nONS, (2014). Crime Statistics, Focus on Property Crime, 2013/14 , Office for National Statistics.", - "page_start": 43, - "page_end": 43, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "## 2. Estimating an incidence trend from treatment data\n\nThis section uses treatment data from the National Database Treatment Monitoring System (NDTMS) to estimate the number of new OCUs annually. The NDTMS captures data on the numbers of people presenting to services with problem drug misuse and information about the drug treatment they receive. All drug treatment agencies in England provide a basic level of information to the NDTMS on their activities each month. The data for this report included all unique individuals presenting to treatment with opiates or crack-cocaine listed as their primary drug between 2005 and 2014. All individuals whose age of first use was listed as below ten or before 2005 were then excluded. 
Excluding individuals who started using opiates/crack before 2005 resulted in a large number of records being left out, due to the fact that the majority of the treatment population, even in 2013/14, initiated in the 1980s and 1990s when heroin and crack use surged in the UK. However, this exclusion is necessary for the incidence methodology, as explained later in this section. The remaining dataset included 52,829 individuals, as shown in Table 10.\n\nTable 10: Descriptive statistics from the NDTMS data.\n\n| Reason for exclusion | Number of individuals excluded | Total number of individuals analysed |\n|------------------------------------------------------------------------------------------|------------------------------------|------------------------------------------|\n| Initial sample prior to exclusion | 0 | 243,588 |\n| No age at first use recorded or age was below 10 or higher than age at first treatment | 443 | 243,145 |\n| Year of first use before 2005 | 190,316 | 52,829 |\n| Percentage of total sample initiating 2005-14 | n/a | 21.7% |\n\nThe majority of those presenting for treatment between 2005 and 2014 started using opiates/crack before 2005 (around four in five). Only 52,829 individuals said they had an opiate/crack initiation date between 2005 and 2014. This suggests an average of just under 5,000 new starters per year during this period. But this would be an under-estimate of incidence because it is likely that some of those who began use between 2005 and 2014 would not yet have come to treatment during that period.\n\nTo correct for this, we use two variants of a methodology employed by researchers in Millar et al . (2001) and Hickman et al . (2001). 
These papers discuss the methodology in detail.\n\nNew opiate and crack-cocaine users: characteristics and trends 22 In brief, the method uses the lag-to-treatment distribution for the sample coupled with the number of new treatment presentations in a given year to estimate OCU incidence in that year. So, when presenting to treatment, all individuals are asked to provide the year in which they first began using their primary drug, which for this analysis was limited to opiates and/or crack-", - "page_start": 21, - "page_end": 21, - "source_file": "legal2_opengouvernementlicense.pdf" - } - ] - }, - { - "references": { - "source_file": "legal2_opengouvernementlicense.pdf", - "query": "According to the National Database Treatment Monitoring System, how many people started using opiates/crack between 2005 and 2014?", - "target_page": 22, - "target_passage": " Only 52,829 individuals said they had an opiate/crack initiation date between 2005 and 2014", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## 2. Estimating an incidence trend from treatment data\n\nThis section uses treatment data from the National Database Treatment Monitoring System (NDTMS) to estimate the number of new OCUs annually. The NDTMS captures data on the numbers of people presenting to services with problem drug misuse and information about the drug treatment they receive. All drug treatment agencies in England provide a basic level of information to the NDTMS on their activities each month. The data for this report included all unique individuals presenting to treatment with opiates or crack-cocaine listed as their primary drug between 2005 and 2014. All individuals whose age of first use was listed as below ten or before 2005 were then excluded. 
Excluding individuals who started using opiates/crack before 2005 resulted in a large number of records being left out, due to the fact that the majority of the treatment population, even in 2013/14, initiated in the 1980s and 1990s when heroin and crack use surged in the UK. However, this exclusion is necessary for the incidence methodology, as explained later in this section. The remaining dataset included 52,829 individuals, as shown in Table 10.\n\nTable 10: Descriptive statistics from the NDTMS data.\n\n| Reason for exclusion | Number of individuals excluded | Total number of individuals analysed |\n|------------------------------------------------------------------------------------------|------------------------------------|------------------------------------------|\n| Initial sample prior to exclusion | 0 | 243,588 |\n| No age at first use recorded or age was below 10 or higher than age at first treatment | 443 | 243,145 |\n| Year of first use before 2005 | 190,316 | 52,829 |\n| Percentage of total sample initiating 2005-14 | n/a | 21.7% |\n\nThe majority of those presenting for treatment between 2005 and 2014 started using opiates/crack before 2005 (around four in five). Only 52,829 individuals said they had an opiate/crack initiation date between 2005 and 2014. This suggests an average of just under 5,000 new starters per year during this period. But this would be an under-estimate of incidence because it is likely that some of those who began use between 2005 and 2014 would not yet have come to treatment during that period.\n\nTo correct for this, we use two variants of a methodology employed by researchers in Millar et al . (2001) and Hickman et al . (2001). 
These papers discuss the methodology in detail.\n\nNew opiate and crack-cocaine users: characteristics and trends 22 In brief, the method uses the lag-to-treatment distribution for the sample coupled with the number of new treatment presentations in a given year to estimate OCU incidence in that year. So, when presenting to treatment, all individuals are asked to provide the year in which they first began using their primary drug, which for this analysis was limited to opiates and/or crack-", - "page_start": 21, - "page_end": 21, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "<!-- image -->\n\n## Executive summary\n\nThis paper uses a range of datasets and methodologies to:\n\n-  obtain working estimates for the number of individuals in England who started using opiates/crack from 2005 to 2013; 1\n-  examine the characteristics of these individuals.\n\nThe main findings of the paper are as follows.\n\n-  It is estimated that around 5,000 to 8,000 individuals started using opiates or crackcocaine in 2013. There is a high degree of uncertainty around this figure due to the sparse data on this population, but sense-checks based on treatment and criminal justice system data suggest the true figure is unlikely to be much larger than 10,000.\n-  Data also suggest that the number of current opiate/crack initiates involved with crime may be even lower. The number of arrestees testing positive for the first time for opiates (or for both opiates and crack-cocaine) dropped from 14,750 in 2006 to 4,281 in the first 11 months of 2013, a fall of around 70 per cent 2 . Furthermore, of the new positive testers in 2013, only 721 were aged 18-24. 
3 Though this arrestee data will capture only a proportion of the true population, it does suggest that the number of new, young initiates involved with crime - those who have the potential to inflict most societal harm - has decreased markedly, probably just to a few thousand per year; and that this group now make up a small minority of the total number of opiate/crack-cocaine users (estimated to be 294,000 in 2011/12), most of whom are older, longer-term users.\n-  In terms of trends in new opiate/crack-cocaine users, all available data suggest that figures have dipped by at least a fifth since 2005 and have dropped hugely since the late 1980s and early 1990s when the opiate/crack-cocaine population in the UK grew very rapidly. The current estimate works out at a rate of 0.18 per 1,000 population. During the epidemic years, published estimates of new opiate/crack-cocaine users in Manchester and Bolton show rates more than 11 times larger.\n-  However, the findings also suggest that between 2011 and early 2014, the number of new opiate/crack-cocaine users stopped decreasing and instead stabilised at a (historically) low level. Further analysis was conducted to try and determine whether this was a precursor to a new rise in initiates. Though the data are not totally conclusive, the results suggest that a marked increase in new opiate/crack-cocaine users in the near future is unlikely. If anything, findings suggested that the downward trend may be set to resume.\n-  Analysis also revealed some possible changes in characteristics of the new opiate/crackcocaine initiates. There is a trend in the treatment data towards new initiates coming to treatment earlier in their drug-using careers than previous cohorts and also to have", - "page_start": 2, - "page_end": 2, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "methods for calculating incidence are complicated and imperfect. 
It should be acknowledged in advance that this paper does not fully resolve these issues. It is merely intended as a first step, to obtain workable estimates upon which to base policy until more sophisticated methods are developed. That said, every effort is made in this analysis to sense-check the results against other available datasets. The datasets used and the structure of the paper is as follows.\n\n - i) Drug Interventions Programme (DIP) data. In part one, we produce general descriptive statistics from these data, which capture individuals who test positive for opiates/crack-cocaine following arrest or charge. Due to the limitations in coverage of these data over time, we draw only broad conclusions, some of which act as a sensecheck for the main results from part two.\n - ii) Data on presentations to treatment from the National Drug Treatment Monitoring System (NDTMS). In part two, we use two models based on previous research papers to calculate OCU incidence at the national level between 2005 and 2013. Most of the main conclusions come from this section.", - "page_start": 4, - "page_end": 4, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "cocaine users. In addition, the sharp decline in total DIP tests in 2013 may be due in part to the fact that DIP ceased to be a nationally funded programme in April 2013.\n\nThese data do show, however, that from 2006 onwards, between a third and half of all acquisitive crime arrests involved a drug test and between 15 per cent and 35 per cent of those tests (depending on the year) resulted in a positive result for opiates-only or for both opiates and cocaine (hereafter labelled 'positive-for-both').\n\nThe reason for highlighting only the opiates-only and the 'positive-for-both' test results is that the primary group of interest in this report are opiate and crack-cocaine users. 
To capture this group, cocaine-only tests must be excluded because DIP tests cannot distinguish between powder- and crack-cocaine, so a cocaine-only positive test could indicate either. Previous evidence has demonstrated that while there is much overlap between heroin and crack-cocaine cohorts (i.e. many of those who use heroin also use crack-cocaine), powdercocaine users have a quite different profile and are far less likely to be involved with acquisitive crime. Excluding the cocaine-only tests means we can be guaranteed not to capture any powder-cocaine users (who are not also using opiates or crack), but it also means we may miss some crack-cocaine-only users, hence the figures may under-estimate the true population of OCUs slightly.\n\nThe fifth row in Table 1 shows that the total number of opiate and opiate/cocaine tests over the period was 364,537. Table 2 shows descriptive statistics for the individuals providing these tests (noting that the same individual may be included several times if they gave multiple positive tests).\n\nTable 2: Descriptive statistics on all positive opiate-only/positive-for-both tests.\n\n| Age | Age | Year of birth | Year of birth |\n|-----------------|---------|-----------------|-----------------|\n| Number of tests | 364,537 | Number of tests | 364,537 |\n| Mean | 32 | Mean | 1977 |\n| Median | 31 | Median | 1977 |\n| Mode | 28 | Mode | 1979 |\n| Minimum | 18 | Minimum | 1960 |\n| Maximum | 53 | Maximum | 1995 |\n\nThe mean age at test is 32 and the mean year of birth is 1977, implying that most of these individuals were in their mid-to-late teens during the crime peak of the mid-1990s. 
9 Given evidence suggesting that the average age of initiation for opiate/crack use is around 18-20 (Millar et al ., 2001), this age profile would tentatively suggest that OCU incidence also peaked in the 1990s and that this created a large cohort of users who would be approaching 40 today.\n\nThe minimum and maximum years of birth are fixed by construction, because anyone born", - "page_start": 8, - "page_end": 8, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "between March 2011 and March 2015 can also be seen in the raw numbers for total new OCU treatment presentations. 22\n\nFigure 10: New treatment presentations for opiate/crack use.\n\n<!-- image -->\n\nFigure 10 shows that, rather than increasing in the current year, new presentations for opiate/crack use have actually fallen slightly from 48,154 in 2013/14 to 47,241 in 2014/15, a decrease of 1.9%. However, given that the early signs of previous opiate/crack use epidemics have been missed before (see Morgan, 2014), and the potential social harm that a fresh increase in new OCUs could cause, further analysis was conducted on the most recent data to try and determine whether the apparent flattening in trends was actually caused by the early stages of a significant surge in new users.\n\nThe treatment data was broken down by age to check whether the slight fall in total new presentations in 2014/15 masked an increase in younger treatment presentations. This showed instead that opiate/crack presentations by those aged 18-24 had fallen from 3,579 in 2013/14 to 3,021 in 2014/15, a fall of 15.6%. In other words, younger new presentations have fallen at a faster rate over the last year than for those aged over-25. 
Furthermore, separate statistics produced for those in treatment aged 18-and-under also show a fall in aggregate numbers in treatment for opiates and crack.\n\nWe also looked at trends at the local level, given that previous epidemics have started in very specific areas and have taken several years to spread nationally. This means that the start of an epidemic can be hidden in the national data because it has not reached enough areas to register.", - "page_start": 26, - "page_end": 26, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "Reading down the year columns, the table shows that of the 6,449 people who presented for opiate/crack treatment for the first time in 2013, 376 said they had begun using in 2005. Another 470 said they started using in 2006, and so on.\n\nReading across the table shows that of all those who said they began using opiates/crack in 2005 (8,960), 1,305 also presented to treatment for the first time in that year (which is 15 per cent of the observed cohort from Table 11 and 12 per cent of our estimated total cohort from Table 12). Another 1,508 presented for the first time a year later, and so on. The first number in the totals column (8,960) therefore represents all individuals who said they began using in 2005. It is therefore the 'observed' incidence level. The column to the right of this is the cumulative percentages from the estimated lag-to-treatment distribution in Table 12. This shows the", - "page_start": 23, - "page_end": 23, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "## References\n\nAhmad and Richardson. (2016). Impact of the reduction in heroin supply . Forthcoming.\n\nDe Angelis, D., Hickman, M. and Yang, S. (2004). 'Estimating Long-term Trends in the Incidence and Prevalence of Opiate Use/Injecting Drug Use and the Number of Former Users: Back-Calculation Methods and Opiate Overdose Deaths'. American Journal of Epidemiology vol. 160 (10). 
Available at: http://aje.oxfordjournals.org/content/160/10/994.full.pdf\n\nFrontier Economics (2010). Specialist drug and alcohol services for young people - a cost benefit analysis. Department for Education, 2010. Available at: https://www.gov.uk/government/uploads/system/uploads/attachment\\_data/file/182312/DFERR087.pdf\n\nGossop, M., Marsden, J., Stewart, D., & Kidd, T. (2003) 'The National Treatment Outcome Research Study (NTORS): 4-5 year follow -up results', Addiction , vol. 98 (3), pp 291-303.\n\nHay, G., dos Santos, A. R. and Worsley, J. (2013). Estimates of the Prevalence of Opiate Use and/or Crack Cocaine Use, 2011/12: Sweep 8 report. Liverpool John Moores University.\n\nHoryniak, D., Stoové, M., Degenhardt, L., Aitken, C., Kerr, T., & Dietze, P . (2015). How do drug market changes affect characteristics of injecting initiation and subsequent patterns of drug use? Findings from a cohort of regular heroin and methamphetamine injectors in Melbourne, Australia. International Journal of Drug Policy , 26 (1), 43-50.\n\nMillar, T., Craine, N., Carnwath, T. and Donmall, M. (2001). 'The dynamics of heroin use; implications for intervention.' Journal of Epidemiology and Community Health , 55(12), 930-933.\n\nMorgan, N. (2014). The heroin epidemic of the 1980s and 1990s and its effect on crime trends then and now, Home Office Research Report 79.\n\nONS, (2014). Crime Statistics, Focus on Property Crime, 2013/14 , Office for National Statistics.", - "page_start": 43, - "page_end": 43, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "cocaine. From this information it is possible to create a distribution, for all presentations, of the lag-time between initiation and their first presentation at treatment. This might show - for example - that only ten per cent of all individuals presenting to treatment do so in the first year of use, but that 25 per cent present within two years, and so on. 
This means that for each year, we can estimate the number of individuals who have begun an opiate-crack career but who have yet to come to treatment . Adding these to the numbers who began in that year and have come to treatment gives our total incidence estimate for each year.\n\nThe first model uses NDTMS data for the cohort starting use in 2005 (n=8,960), the lag-time distribution for those initiating use in 2005 and presenting to treatment between 2005 and 2014 18 is shown below.\n\nTable 11: Time-to-treatment distribution for those initiating use in 2005 and presenting to treatment between 2005 and 2014. 19\n\n| Lag time to treatment (years) | 0-1 | 1-2 | 2-3 | 3-4 | 4-5 | 5-6 | 6-7 | 7-8 | 8-9 | 9-10 |\n|----------------------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-----------|\n| Percentage | 15% | 17% | 17% | 14% | 10% | 9% | 6% | 5% | 4% | 4% |\n| Cumulative percentage | 15% | 31% | 49% | 62% | 73% | 82% | 88% | 92% | | 96% 100% |\n\nTable 11 shows that 15 per cent of the individuals who started use in 2005 and had presented for treatment by 2014, presented within one year of initiation. A further 17 per cent presented between one and two years after initiation, prior to coming to treatment, meaning that overall 31 per cent of the sample said they came to treatment within two years of first using opiates/crack. (The fact this is not 32% is simply due to rounding).\n\nAs a basis for the total lag-to-treatment distribution, the main limitation with the above analysis is that it assumes all individuals coming to treatment do so within ten years. Examining data from earlier cohorts suggests this is inaccurate, as a small proportion of OCUs will continue to use these drugs for a long time, sometimes two decades or more, before seeking treatment, and some never will. However, we cannot use an earlier cohort for the distribution because this is equivalent to using out-of-date data. 
The average lag-to-treatment is likely to have reduced over time given the expansion of treatment places and the influence of DIP. Using old data will miss this and bias the estimates. Even using the 2005 cohort's distribution contains the assumption that the time-to-treatment lag has not altered significantly between 2005 and 2013/14. So, to try and obtain the most accurate model, we used the figures from the 2005 cohort for the first ten years, as above, on the basis that this covers the majority of individuals and for that we want the most up-to-date data possible whilst maintaining a long enough time period. We then index the trend at that point to an older cohort, and use data from that cohort to model the 'tail' of the distribution - i.e. those who take longer than ten years to reach treatment. 20 The result is a 20-year lag-to-treatment distribution, shown in Table 12 below.", - "page_start": 22, - "page_end": 22, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "before 1960 was removed and because DIP tests are only administered to those aged 18 and over, so only using data to 2013 means it would not be possible for anyone to be born in 1996 or afterwards to be included. Even so, it is clear from the year-of-birth distribution (Figure 2) that positive opiate tests drop off sharply for those born after 1982. This is in line with other evidence suggesting that the number of new users of opiates decreased sharply in the 2000s. This needs to be considered when interpreting the analysis that follows. When DIP and the NDTMS treatment system began in the mid-2000s, there already existed a cohort of around 320,000 OCUs, according to available estimates by Hay et al ., (2013). And most of these individuals began using opiates/crack during the epidemic years of the 1980s and 1990s. 
In terms of data capture this means it is hard to separate the gradual inclusion of more and more individuals from this original cohort from genuinely new users of these drugs.\n\nFigure 2: Year of birth distribution for all opiate-only/positive-for-both tests.\n\n<!-- image -->\n\nFigure 3, which shows the age of the individual at a positive test, also reveals that although the average age at positive test is 32, the peak is quite flat, with high numbers of positive tests still being recorded by individuals in their late 30s and even into their 40s.", - "page_start": 9, - "page_end": 9, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "- initiated use at an older age. Currently it is not possible to determine whether this is a reporting issue or a genuine shift in the age profile of new opiate/crack-cocaine users.\n -  The report has several important policy implications. Even though numbers of new initiates involved with crime have dropped to the low thousands, putting downward pressure on crime, identification and early diversion to treatment remains paramount. Frontier Economics have estimated that the average 4 lifetime crime cost of an injecting drug user is £445,000, so the potential for social harm - even from a small number of individuals - remains large and potentially long-lasting. This means local areas need to manage both the (relatively large) stock of current users, and the (much smaller) flow of new initiates, whose treatment needs may be different. There is no evidence of any new epidemic in this country, but given the impact of the epidemic of the 80s and early 90s on crime, ongoing monitoring of recent trends is required to spot early signs of any emerging problems.\n\n## Aims and Methodology\n\nPrevious Home Office research has demonstrated the importance of opiate/crack-cocaine use in driving aggregate trends in acquisitive crime (Morgan, 2014). 
While established estimates exist of the total number of opiate/crack-cocaine users (OCUs) in England (Hay et al ., 2013), there are no estimates for the number of new OCUs each year (throughout this paper the number of new OCUs is also referred to as 'incidence' ). This is important for three main reasons.\n\n - i) Stock and flows: Simply knowing the stock of OCUs tells us nothing about the flows in and out - i.e. if the stock were constant each year that could mean that no one starts using these drugs and no one quits or it could mean all existing users quit but that they are wholly replaced by new users, or any similar scenario in between. Clearly the policy response would need to be quite different for each of these cases, so knowing the true situation is important.\n - ii) Early-warning system: Research by the Home Office and others has shown that there is generally a lag between the start of a heroin/crack epidemic and the point at which it becomes visible on administrative datasets. Closing this gap is important for policy, and part of the reason for its existence is the lack of incidence estimates. Evidence also suggests epidemics spread from area to area, so it is important to monitor local as well as national trends.\n - iii) The social harm that can arise: Though research suggests that not all OCUs resort to acquisitive crime to help finance their drug use, numerous studies show that a proportion consistently do and these individuals can be extremely prolific offenders (Morgan, 2014). One study by Frontier Economics estimated that the average lifetime cost to society of an injecting drug user was £445,000 from crime alone. Hence analysing and identifying new OCUs is a policy priority (Frontier Economics, 2010).\n\nThere are two inter-connected reasons why regular national incidence estimates have not been attempted before 5 . 
The first is that data on this issue are sparse given the 'hidden' nature of opiate/crack markets and that date of first use is not something that gets recorded at the moment it actually occurs. The second reason, which flows from the first, is that current", - "page_start": 3, - "page_end": 3, - "source_file": "legal2_opengouvernementlicense.pdf" - } - ] - }, - { - "references": { - "source_file": "legal2_opengouvernementlicense.pdf", - "query": "What proportion of opiate users tested in 2004 were still positive a decade later?", - "target_page": 18, - "target_passage": "Nearly ten per cent (8.9%) of individuals who tested positive for opiates at charge in 2004 also tested positive nearly a decade later in 2013 (on arrest)", - "chunk_present": { - "presence": true, - "index": 5 - } - }, - "top_chunk": [ - { - "text": "| | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year | Number of unique individuals with positive opiate/opiate + cocaine tests per year 
|\n|-----------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|\n| First test year | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | Adjusted 2013 |\n| 2004 | 12,246 | 3,171 | 3,299 | 3,090 | 2,992 | 2,573 | 2,311 | 1,766 | 1,513 | 1,092 | 1,191 |", - "page_start": 17, - "page_end": 17, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "| | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) | Number of tests per year (positive opiate/opiate + cocaine) 
|\n|---------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|---------------------------------------------------------------|\n| Year of first test | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | Adjusted 2013 |\n| 2004 | 17,174 | 5,604 | 7,091 | 6,784 | 6,509 | 5,292 | 4,863 | 3,341 | 2,629 | 1,800 | 1,964 |\n| 2005 | | 13,553 | 6,066 | 5,110 | 4,941 | 3,983 | 3,549 | 2,323 | 1,947 | 1,383 | 1,509 |", - "page_start": 16, - "page_end": 16, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "before 1960 was removed and because DIP tests are only administered to those aged 18 and over, so only using data to 2013 means it would not be possible for anyone to be born in 1996 or afterwards to be included. Even so, it is clear from the year-of-birth distribution (Figure 2) that positive opiate tests drop off sharply for those born after 1982. This is in line with other evidence suggesting that the number of new users of opiates decreased sharply in the 2000s. This needs to be considered when interpreting the analysis that follows. When DIP and the NDTMS treatment system began in the mid-2000s, there already existed a cohort of around 320,000 OCUs, according to available estimates by Hay et al ., (2013). And most of these individuals began using opiates/crack during the epidemic years of the 1980s and 1990s. 
In terms of data capture this means it is hard to separate the gradual inclusion of more and more individuals from this original cohort from genuinely new users of these drugs.\n\nFigure 2: Year of birth distribution for all opiate-only/positive-for-both tests.\n\n<!-- image -->\n\nFigure 3, which shows the age of the individual at a positive test, also reveals that although the average age at positive test is 32, the peak is quite flat, with high numbers of positive tests still being recorded by individuals in their late 30s and even into their 40s.", - "page_start": 9, - "page_end": 9, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "Table 7: Number of unique individuals testing positive for opiates-only or positive-for-both, by year of first positive test.", - "page_start": 17, - "page_end": 17, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "<!-- image -->\n\n## Executive summary\n\nThis paper uses a range of datasets and methodologies to:\n\n-  obtain working estimates for the number of individuals in England who started using opiates/crack from 2005 to 2013; 1\n-  examine the characteristics of these individuals.\n\nThe main findings of the paper are as follows.\n\n-  It is estimated that around 5,000 to 8,000 individuals started using opiates or crackcocaine in 2013. There is a high degree of uncertainty around this figure due to the sparse data on this population, but sense-checks based on treatment and criminal justice system data suggest the true figure is unlikely to be much larger than 10,000.\n-  Data also suggest that the number of current opiate/crack initiates involved with crime may be even lower. The number of arrestees testing positive for the first time for opiates (or for both opiates and crack-cocaine) dropped from 14,750 in 2006 to 4,281 in the first 11 months of 2013, a fall of around 70 per cent 2 . Furthermore, of the new positive testers in 2013, only 721 were aged 18-24. 
3 Though this arrestee data will capture only a proportion of the true population, it does suggest that the number of new, young initiates involved with crime - those who have the potential to inflict most societal harm - has decreased markedly, probably just to a few thousand per year; and that this group now make up a small minority of the total number of opiate/crack-cocaine users (estimated to be 294,000 in 2011/12), most of whom are older, longer-term users.\n-  In terms of trends in new opiate/crack-cocaine users, all available data suggest that figures have dipped by at least a fifth since 2005 and have dropped hugely since the late 1980s and early 1990s when the opiate/crack-cocaine population in the UK grew very rapidly. The current estimate works out at a rate of 0.18 per 1,000 population. During the epidemic years, published estimates of new opiate/crack-cocaine users in Manchester and Bolton show rates more than 11 times larger.\n-  However, the findings also suggest that between 2011 and early 2014, the number of new opiate/crack-cocaine users stopped decreasing and instead stabilised at a (historically) low level. Further analysis was conducted to try and determine whether this was a precursor to a new rise in initiates. Though the data are not totally conclusive, the results suggest that a marked increase in new opiate/crack-cocaine users in the near future is unlikely. If anything, findings suggested that the downward trend may be set to resume.\n-  Analysis also revealed some possible changes in characteristics of the new opiate/crackcocaine initiates. There is a trend in the treatment data towards new initiates coming to treatment earlier in their drug-using careers than previous cohorts and also to have", - "page_start": 2, - "page_end": 2, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "These tables can be read both horizontally and vertically. Reading vertically (i.e. 
down the columns) it can be observed, for example, that of the 12,353 individuals with a positive test in 2013, 4,281 (35%) had not had a previous positive test and over half had already tested positive at least once in 2010 or before.\n\nReading horizontally - for example from left to right across the first row - it can be concluded that of the 12,246 individuals testing positive in 2004, 3,171 also had a positive test in 2005; 3,299 of the original 12,246 also had a positive test in 2006 and so on. The table does not show whether those who had a subsequent test in 2005 were the same individuals as those who had a subsequent test in 2006. So reading the results of the two tables together, we can say that 12,246 individuals had 17,174 positive tests in 2004, and of these, 3,171 also tested positive in 2005, resulting in 5,604 positive tests because some tested positive more than once in that year. The last figure in each column gives the number of new users that year (10,539 in 2005, 14,750 in 2006 and so on).\n\nThere are several observations to be drawn from these tables. First, it is clear that a proportion of opiate-using offenders offend over long periods of time. Nearly ten per cent (8.9%) of individuals who tested positive for opiates at charge in 2004 also tested positive nearly a decade later in 2013 (on arrest). And reading vertically, of the 12,253 individuals testing positive in 2013, 1,092 (8.9%) had also tested positive almost a decade earlier.\n\nNew opiate and crack-cocaine users: characteristics and trends 18 Second, in relation to incidence, these numbers also allow for some back-of-the-envelope modelling to address the extent to which the figure of 4,281 individuals, who are new positive testers in 2013, is an under- or over-estimate of the number of new OCUs in total. Taking the figures for 2008, when DIP was fully up and running, we know that around 25,000 unique individuals had positive tests that year. 
This can be combined with available estimates of the total OCU population (Hay et al ., 2013) and the proportion who are likely to be offending (Gossop et al. , 2003; Morgan, 2014) to give an approximate arrest rate. i.e. if there were about 150,000 crime-involved OCUs through the period, this implies an arrest rate of about 17 per", - "page_start": 17, - "page_end": 17, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "The age and year of birth distributions are also similar and are shown in the Appendix. Thus, for the majority of the analysis that follows, tests with no PNC number were excluded. 10\n\nThe charts and tables above use data from all positive tests, so will include cases where the same individual has tested positively on more than one occasion. The following data look just at the first test for each individual testing positive for opiates-only or positive-for-both.\n\nTable 4: Descriptive statistics on first positive opiate-only/positive-for-both tests.\n\n| Age | Age | Year of birth | Year of birth |\n|-----------------|---------|-----------------|-----------------|\n| Number of tests | 104,817 | Number of tests | 104,817 |\n| Mean | 31 | Mean | 1977 |\n| Median | 30 | Median | 1977 |\n| Mode | 27 | Mode | 1980 |\n| Minimum | 18 | Minimum | 1960 |\n| Maximum | 53 | Maximum | 1995 |\n\nThere were just over 100,000 unique individuals who tested positive for opiates-only or positivefor-both between 2004 and 2013. The distribution of the 296,008 positive tests these individuals gave, shows that the vast majority (55%) were only tested once (see Figure 4), which is likely to be why the age statistics are quite similar between Table 3 and Table 4. However, within this", - "page_start": 11, - "page_end": 11, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "cocaine users. 
In addition, the sharp decline in total DIP tests in 2013 may be due in part to the fact that DIP ceased to be a nationally funded programme in April 2013.\n\nThese data do show, however, that from 2006 onwards, between a third and half of all acquisitive crime arrests involved a drug test and between 15 per cent and 35 per cent of those tests (depending on the year) resulted in a positive result for opiates-only or for both opiates and cocaine (hereafter labelled 'positive-for-both').\n\nThe reason for highlighting only the opiates-only and the 'positive-for-both' test results is that the primary group of interest in this report are opiate and crack-cocaine users. To capture this group, cocaine-only tests must be excluded because DIP tests cannot distinguish between powder- and crack-cocaine, so a cocaine-only positive test could indicate either. Previous evidence has demonstrated that while there is much overlap between heroin and crack-cocaine cohorts (i.e. many of those who use heroin also use crack-cocaine), powdercocaine users have a quite different profile and are far less likely to be involved with acquisitive crime. Excluding the cocaine-only tests means we can be guaranteed not to capture any powder-cocaine users (who are not also using opiates or crack), but it also means we may miss some crack-cocaine-only users, hence the figures may under-estimate the true population of OCUs slightly.\n\nThe fifth row in Table 1 shows that the total number of opiate and opiate/cocaine tests over the period was 364,537. 
Table 2 shows descriptive statistics for the individuals providing these tests (noting that the same individual may be included several times if they gave multiple positive tests).\n\nTable 2: Descriptive statistics on all positive opiate-only/positive-for-both tests.\n\n| Age | Age | Year of birth | Year of birth |\n|-----------------|---------|-----------------|-----------------|\n| Number of tests | 364,537 | Number of tests | 364,537 |\n| Mean | 32 | Mean | 1977 |\n| Median | 31 | Median | 1977 |\n| Mode | 28 | Mode | 1979 |\n| Minimum | 18 | Minimum | 1960 |\n| Maximum | 53 | Maximum | 1995 |\n\nThe mean age at test is 32 and the mean year of birth is 1977, implying that most of these individuals were in their mid-to-late teens during the crime peak of the mid-1990s. 9 Given evidence suggesting that the average age of initiation for opiate/crack use is around 18-20 (Millar et al ., 2001), this age profile would tentatively suggest that OCU incidence also peaked in the 1990s and that this created a large cohort of users who would be approaching 40 today.\n\nThe minimum and maximum years of birth are fixed by construction, because anyone born", - "page_start": 8, - "page_end": 8, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "Table 3: Descriptive statistics for the DIP positive opiate-only/positive-for-both tests in which an individual can be identified with a PNC number.", - "page_start": 11, - "page_end": 11, - "source_file": "legal2_opengouvernementlicense.pdf" - }, - { - "text": "population there exists a small group of frequent repeat users. 1,828 individuals (1.7% of this population) accounted for just over ten per cent of all positive tests (30,471 tests in total). 
These individuals provided between 16 and 57 positive tests over the period 2004 to 2013.\n\nFigure 4: Proportion of positive tests by number of times an individual tested positive.\n\n<!-- image -->\n\nThe age and year-of-birth distributions for the 104,817 individuals reveals a similar profile to the distribution for total tests (Figures 5 and 6).", - "page_start": 12, - "page_end": 12, - "source_file": "legal2_opengouvernementlicense.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia5.pdf", - "query": "Who led the Fronde des princes?", - "target_page": 4, - "target_passage": "It was headed by the highest-ranking French nobles, among them Louis's uncle Gaston, Duke of Orléans and first cousin Anne Marie Louise d'Orléans, Duchess of Montpensier, known as la Grande Mademoiselle; Princes of the Blood such as Condé, his brother Armand de Bourbon, Prince of Conti, and their sister the Duchess of Longueville; dukes of legitimised royal descent, such as Henri, Duke of Longueville, and François, Duke of Beaufort; so-called \"foreign princes\" such as Frédéric Maurice, Duke of Bouillon, his brother Marshal Turenne, and Marie de Rohan, Duchess of Chevreuse; and scions of France's oldest families, such as François de La Rochefoucauld.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Condé, attacked the rebels in Paris; the rebels were under the political control of Anne's old friend Marie de Rohan. Beaufort, who had escaped from the prison where Anne had incarcerated him five years before, was the military leader in Paris, under the nominal control of Conti. After a few battles, a political compromise was reached; the Peace of Rueil was signed, and the court returned to Paris.\n\nUnfortunately for Anne, her partial victory depended on Condé, who wanted to control the queen and destroy Mazarin's influence. It was Condé's sister who pushed him to turn against the queen. 
After striking a deal with her old friend Marie de Rohan, who was able to impose the nomination of Charles de l'Aubespine, marquis de Châteauneuf as minister of justice, Anne arrested Condé, his brother Armand de Bourbon, Prince of Conti, and the husband of their sister Anne Genevieve de Bourbon, duchess of Longueville. This situation did not last long, and Mazarin's unpopularity led to the creation of a coalition headed mainly by Marie de Rohan and the duchess of Longueville. This aristocratic coalition was strong enough to liberate the princes, exile Mazarin, and impose a condition of virtual house arrest on Queen Anne.\n\nPortrait by Justus van Egmont between the years 1649-1652.\n\n<!-- image -->\n\nAll these events were witnessed by Louis and\n\nlargely explained his later distrust of Paris and the higher aristocracy. [27] \"In one sense, Louis's childhood came to an end with the outbreak of the Fronde. It was not only that life became insecure and unpleasant - a fate meted out to many children in all ages - but that Louis had to be taken into the confidence of his mother and Mazarin on political and military matters of which he could have no deep understanding\". [28] \"The family home became at times a near-prison when Paris had to be abandoned, not in carefree outings to other chateaux but in humiliating flights\". [28] The royal family was driven out of Paris twice in this manner, and at one point Louis XIV and Anne were held under virtual arrest in the royal palace in Paris. The Fronde years planted in Louis a hatred of Paris and a consequent determination to move out of the ancient capital as soon as possible, never to return. [29]\n\nJust as the first Fronde (the Fronde parlementaire of 1648-1649) ended, a second one (the Fronde des princes of 1650-1653) began. Unlike that which preceded it, tales of sordid intrigue and half-hearted warfare characterized this second phase of upper-class insurrection. 
To the aristocracy, this rebellion represented a protest for the reversal of their political demotion from vassals to courtiers. It was headed by the highest-ranking French\n\nnobles, among them Louis's uncle Gaston, Duke of Orléans and first cousin Anne Marie Louise d'Orléans, Duchess of Montpensier, known as la Grande Mademoiselle ; Princes of the Blood such as Condé, his brother Armand de Bourbon, Prince of Conti, and their sister the Duchess of Longueville; dukes of legitimised royal descent, such as Henri, Duke of Longueville, and François, Duke of Beaufort; so-called \"foreign princes\" such as Frédéric Maurice, Duke of Bouillon, his brother Marshal Turenne, and Marie de Rohan, Duchess of Chevreuse; and scions of France's oldest families, such as François de La Rochefoucauld.\n\nQueen Anne played the most important role in defeating the Fronde because she wanted to transfer absolute authority to her son. In addition, most of the princes refused to deal with Mazarin, who went into exile for a number of years. The Frondeurs claimed to act on Louis's behalf, and in his real interest, against his mother and Mazarin.", - "page_start": 3, - "page_end": 3, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Painting from 1667 depicting Louis as patron of the fine arts\n\n<!-- image -->\n\nThe Cour royale and the Cour de marbre at Versailles\n\n<!-- image -->\n\nfamous throughout Europe. Composers and musicians such as Jean-Baptiste Lully, Jacques Champion de Chambonnières, and François Couperin thrived. In 1661, Louis founded the Académie Royale de Danse, and in 1669, the Académie d'Opéra, important driving events in the evolution of ballet. He also attracted, supported and patronized such artists as André Charles Boulle, who revolutionised marquetry with his art of inlay, today known as \"Boulle work\". 
Always on the lookout for new talent, the king launched music competitions: in 1683, Michel-Richard de Lalande thus became deputy master of the Royal Chapel, composing his Symphonies for the Soupers du Roy along with 77 large scale Grand Motets .\n\nOver the course of four building campaigns, Louis converted a hunting lodge commissioned by Louis XIII into the spectacular Palace of Versailles. Except for the current Royal Chapel (built near the end of his reign), the palace achieved much of its current appearance after the third building campaign, which was followed by an official move of the royal court to Versailles on 6 May 1682. Versailles became a dazzling, aweinspiring setting for state affairs and the reception of foreign dignitaries. At Versailles, the king alone commanded attention.\n\nSeveral reasons have been suggested for the creation of the extravagant and stately palace, as well as the relocation of the monarchy's seat. The memoirist Saint-Simon speculated that Louis viewed Versailles as an isolated power centre where\n\ntreasonous cabals could be more readily discovered and foiled. [62] There has also been speculation that the revolt of the Fronde caused Louis to hate Paris, which he abandoned for a country retreat, but his sponsorship of many public works in Paris, such as the establishment of a police force and of street-lighting, [111] lend little credence to this theory. As a further example of his continued care for the capital, Louis constructed the Hôtel des Invalides , a military complex and home to this day for officers and soldiers rendered infirm either by injury or old age. While pharmacology was still quite rudimentary in his day, the Invalides pioneered new treatments and set new standards for hospice treatment. The conclusion of the Treaty of Aix-la-Chapelle in 1668 also induced Louis to demolish Paris's northern walls in 1670 and replace them with wide tree-lined boulevards. 
[112]\n\nBust of Louis XIV by Gianlorenzo Bernini\n\n<!-- image -->\n\nLouis also renovated and improved the Louvre and other royal residences. Gian Lorenzo Bernini was originally to plan additions to the Louvre; however, his plans would have meant the destruction of much of the existing structure, replacing it with an Italian summer villa in the centre of Paris. Bernini's plans were eventually shelved in favour of the elegant Louvre Colonnade designed by three Frenchmen: Louis Le Vau, Charles Le Brun, and Claude Perrault. With the relocation of the court to Versailles, the Louvre was given over to the arts and the public. [113] During his visit from Rome, Bernini also executed a renowned portrait bust of the king.\n\n## Image and depiction", - "page_start": 16, - "page_end": 16, - "source_file": "wikipedia5.pdf" - }, - { - "text": "The Queen sought a lasting peace between Catholic nations, but only after a French victory over her native Spain. She also gave a partial Catholic orientation to French foreign policy. This was felt by the Netherlands, France's Protestant ally, which negotiated a separate peace with Spain in 1648. [18]\n\nIn 1648, Anne and Mazarin successfully negotiated the Peace of Westphalia, which ended the Thirty Years' War. [19] Its terms ensured Dutch independence from Spain, awarded some autonomy to the various German princes of the Holy Roman Empire, and granted Sweden seats on the Imperial Diet and territories controlling the mouths of the Oder, Elbe, and Weser Rivers. [20] France, however, profited most from the settlement. Austria, ruled by the Habsburg Emperor Ferdinand III, ceded all Habsburg lands and claims in Alsace to France and acknowledged her de [21]\n\nfacto sovereignty over the Three Bishoprics of Metz, Verdun, and Toul. Moreover, many petty German states sought French protection, eager to emancipate themselves from Habsburg domination. 
This anticipated the formation of the 1658 League of the Rhine, which further diminished Imperial power.\n\n## Early acts\n\nAs the Thirty Years' War came to an end, a civil war known as the Fronde erupted in France. It effectively checked France's ability to exploit the Peace of Westphalia. Anne and Mazarin had largely pursued the policies of Cardinal Richelieu, augmenting the Crown's power at the expense of the nobility and the Parlements . Anne was more concerned with internal policy than foreign affairs; she was a very proud queen who insisted on the divine rights of the King of France. [22]\n\nAll this led her to advocate a forceful policy in all matters relating to the King's authority, in a manner that was much more radical than the one proposed by Mazarin. The Cardinal depended totally on Anne's support and had to use all his influence on the Queen to temper some of her radical actions. Anne imprisoned any aristocrat or member of parliament who challenged her will; her main aim was to transfer to her son an absolute authority in the matters of finance and justice. One of the leaders of the Parlement of Paris, whom she had jailed, died in prison. [23]\n\nThe Frondeurs , political heirs of the disaffected feudal aristocracy, sought to protect their traditional feudal privileges from the increasingly centralized royal government. Furthermore, they believed their traditional influence and authority was being usurped by the recently ennobled bureaucrats (the Noblesse de Robe , or \"nobility of the robe\"), who administered the kingdom and on whom the monarchy increasingly began to rely. This belief intensified the nobles' resentment.\n\nIn 1648, Anne and Mazarin attempted to tax members of the Parlement de Paris . The members refused to comply and ordered all of the king's earlier financial edicts burned. 
Buoyed by the victory of Louis, duc d'Enghien (later known as le Grand Condé ) at the Battle of Lens, Mazarin, on Queen Anne's insistence, arrested certain members in a show of force. [24] The most important arrest, from Anne's point of view, concerned Pierre Broussel, one of the most important leaders in the Parlement de Paris .", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia5.pdf" - }, - { - "text": "People in France were complaining about the expansion of royal authority, the high rate of taxation, and the reduction of the authority of the Parlement de Paris and other regional representative entities. Paris erupted in rioting as a result, and Anne was forced, under intense pressure, to free Broussel. Moreover, on the night of 9-10 February 1651, when Louis was twelve, a mob of angry Parisians broke into the royal palace and demanded to see their king. Led into the royal bed-chamber, they gazed upon Louis, who was feigning sleep, were appeased, and then quietly departed. [25] The threat to the royal family prompted Anne to flee Paris with the king and his courtiers.\n\nShortly thereafter, the conclusion of the Peace of Westphalia allowed Condé's army to return to aid Louis and his court. Condé's family was close to Anne at that time, and he agreed to help her attempt to restore the king's authority. [26] The queen's army, headed by\n\nBaptismal certificate, 1638\n\n<!-- image -->\n\nLouis XIV, then Dauphin of France, in 1642, one year before his accession to the throne, by Philippe de Champaigne\n\n<!-- image -->\n\nLouis XIV in 1643, by Claude Deruet\n\n<!-- image -->\n\nEurope after the Peace of Westphalia in 1648\n\n<!-- image -->", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia5.pdf" - }, - { - "text": "negotiations in 1709 and 1710. 
France retained Île-Saint-Jean and Île Royale, and Louis acquired a few minor European territories, such as the Principality of Orange and the Ubaye Valley, which covered transalpine passes into Italy. Thanks to Louis, his allies the Electors of Bavaria and Cologne were restored to their prewar status and returned their lands. [102]\n\n## Personal life\n\n## Marriages and children\n\nLouis and his wife Maria Theresa of Spain had six children from the marriage contracted for them in 1660. However, only one child, the eldest, survived to adulthood: Louis, le Grand Dauphin , known as Monseigneur . Maria Theresa died in 1683, whereupon Louis remarked that she had never caused him unease on any other occasion.\n\nDespite evidence of affection early on in their marriage, Louis was never faithful to Maria Theresa. He took a series of mistresses, both official and unofficial. Among the better documented are Louise de La Vallière (with whom he had five children; 1661-1667), Bonne de Pons d'Heudicourt (1665), Catherine Charlotte de Gramont (1665), FrançoiseAthénaïs, Marquise de Montespan (with whom he had seven children; 1667-1680), Anne de Rohan-Chabot (1669-1675), Claude de Vin des Œillets (one child born in 1676),\n\nWedding of Louis and Maria Theresa\n\n<!-- image -->\n\nIsabelle de Ludres (1675-1678), and Marie Angélique de Scorailles (1679-1681), who died at age 19 in childbirth. Through these liaisons, he produced numerous illegitimate children, most of whom he married to members of cadet branches of the royal family.\n\nLouis proved relatively more faithful to his second wife, Françoise d'Aubigné, Marquise de Maintenon. He first met her through her work caring for his children by Madame de Montespan, noting the care she gave to his favourite, Louis Auguste, Duke of Maine. [103] The king was, at first, put off by her strict religious practice, but he warmed to her through her care for his children. 
[103]\n\nWhen he legitimized his children by Madame de Montespan on 20 December 1673, Françoise d'Aubigné became the royal governess at Saint-Germain. [103] As governess, she was one of very few people permitted to speak to him as an equal, without limits. [103] It is believed that they were married secretly at Versailles on or around 10 October 1683 [104] or January 1684. [105] This marriage, though never announced or publicly discussed, was an open secret and lasted until his death. [106]\n\n## Piety and religion\n\nLouis was a pious and devout king who saw himself as the head and protector of the Catholic Church in France. He made his devotions daily regardless of where he was, following the liturgical calendar regularly. [107] Under the influence of his very religious second wife, he became much stronger in the practice of his Catholic faith. [108] This included banning opera and comedy performances during Lent. [108]\n\nTowards the middle and the end of his reign, the centre for the King's religious observances was usually the Chapelle Royale at Versailles. Ostentation was a distinguishing feature of daily Mass, annual celebrations, such as those of Holy Week, and special ceremonies. [109] Louis established the Paris Foreign Missions Society, but his informal alliance with the Ottoman Empire was criticised for undermining Christendom. [110]\n\nLouis XIV encouraged Catholic missions through the creation of the Paris Foreign Missions Society\n\n<!-- image -->\n\n## Patronage of the arts", - "page_start": 15, - "page_end": 15, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Queen Anne had a very close relationship with the Cardinal, and many observers believed that Mazarin became Louis XIV's stepfather by a secret marriage to Queen Anne. [30] However, Louis's coming-of-age and subsequent coronation deprived them of the Frondeurs ' pretext for revolt. The Fronde thus gradually lost steam and ended in 1653, when Mazarin returned triumphantly from exile. 
From that time until his death, Mazarin was in charge of foreign and financial policy without the daily supervision of Anne, who was no longer regent. [31]\n\nDuring this period, Louis fell in love with Mazarin's niece Marie Mancini, but Anne and Mazarin ended the king's infatuation by sending Mancini away from court to be married in Italy. While Mazarin might have been tempted for a short time to marry his niece to the King of France, Queen Anne was absolutely against this; she wanted to marry her son to the daughter of her brother, Philip IV of Spain, for both dynastic and political reasons. Mazarin soon supported the Queen's position because he knew that her support for his power and his foreign policy depended on making peace with Spain from a strong position and on the Spanish marriage. Additionally, Mazarin's relations with Marie Mancini were not good, and he did not trust her to support his position. All of Louis's tears and his supplications to his mother did not make her change her mind. The Spanish marriage would be very\n\n1655 portrait of Louis, the Victor of the Fronde, portrayed as the god Jupiter\n\n<!-- image -->", - "page_start": 3, - "page_end": 3, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Louis XIV was born on 5 September 1638 in the Château de Saint-Germain-enLaye, to Louis XIII and Anne of Austria. He was named Louis Dieudonné (Louis the God-given) [7] and bore the traditional title of French heirs apparent: Dauphin . [8] At the time of his birth, his parents had been married for 23 years. His mother had experienced four stillbirths between 1619 and 1631. Leading contemporaries thus regarded him as a divine gift and his birth a miracle of God. [9]\n\nLouis's relationship with his mother was uncommonly affectionate for the time. Contemporaries and eyewitnesses claimed that the Queen would spend all her time with Louis. 
[10] Both were greatly interested in food and theatre, and it is highly likely that Louis developed these interests through his close relationship with his mother. This long-lasting and loving relationship can be evidenced by excerpts in Louis's journal entries, such as:\n\n\"Nature was responsible for the first knots which tied me to my mother. But attachments formed later by shared qualities of the spirit are far more difficult to break than those formed merely by blood.\" [11]\n\nIt was his mother who gave Louis his belief in the absolute and divine power of his monarchical rule. [12]\n\nDuring his childhood, he was taken care of by the governesses Françoise de Lansac and Marie-Catherine de Senecey. In 1646, Nicolas V de Villeroy became the young king's tutor. Louis XIV became friends with Villeroy's young children, particularly François de Villeroy, and divided his time between the Palais-Royal and the nearby Hotel de Villeroy.\n\n## Minority and the Fronde\n\nIssue more...\n\nLouis, Grand Dauphin\n\nMarie Thérèse, Madame Royale\n\nPhilippe Charles, Duke of Anjou\n\nIllegitimate :\n\nMarie Anne, Princess of Conti\n\nLouis, Count of Vermandois\n\nLouis Auguste, Duke of Maine\n\nLouis César, Count of Vexin\n\nLouise Françoise, Princess of Condé\n\nLouise Marie Anne,\n\nMademoiselle de Tours\n\nLouise, Baroness of La Queue\n\nFrançoise Marie, Duchess of Orléans\n\nLouis Alexandre, Count of Toulouse\n\n## Names\n\nLouis-Dieudonné de France\n\nHouse\n\nBourbon\n\nFather\n\nLouis XIII\n\nMother\n\nAnne of Austria\n\nReligion\n\nCatholicism\n\nSignature\n\n## Accession\n\nSensing imminent death in the spring of 1643, King Louis XIII decided to put his affairs in order for his four-year-old son Louis XIV. Not trusting the judgement of his Spanish wife Queen Anne, who would normally have become the sole regent of France, the king decreed that a regency council would rule on his son's behalf, with Anne at its head. [13]\n\nLouis XIII died on 14 May 1643. 
On 18 May [14] Queen Anne had her husband's will annulled by the Parlement de Paris , a judicial body of nobles and high-ranking clergy, [15] and she became sole regent. She exiled her husband's ministers Chavigny and Bouthilier and appointed the Count of Brienne as her minister of foreign affairs. [16] Anne kept the direction of religious policy strongly in hand until her son's majority in 1661.\n\nShe appointed Cardinal Mazarin as chief minister, giving him the daily administration of policy. She continued the policies of her late husband and Cardinal Richelieu, despite their persecution of her, in order to win absolute authority in France and victory abroad for her son. Anne protected Mazarin by exiling her followers the Duke of Beaufort and Marie de Rohan, who conspired against him in 1643. [17]\n\n<!-- image -->\n\nLouis XIV as a young child, unknown painter\n\n<!-- image -->\n\nThe best example of Anne's loyalty to France was her treatment of one of Richelieu's men, the Chancellor Pierre Séguier. Séguier had brusquely interrogated Anne in 1637 (like a", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia5.pdf" - }, - { - "text": "The Nine Years' War, which lasted from 1688 to 1697, initiated a period of decline in Louis's political and diplomatic fortunes. It arose from two events in the Rhineland. First, in 1685, the Elector Palatine Charles II died. All that remained of his immediate family was Louis's sister-in-law, Elizabeth Charlotte. German law ostensibly barred her from succeeding to her brother's lands and electoral dignity, but it was unclear enough for arguments in favour of Elizabeth Charlotte to have a chance of success. Conversely, the princess was demonstrably entitled to a division of the family's personal property. Louis pressed her claims to land and chattels, hoping the latter, at least, would be given to her. [76] Then, in 1688, Maximilian Henry of Bavaria, Archbishop of Cologne, an ally of France, died. 
The archbishopric had traditionally been held by the Wittelsbachs of Bavaria, but the Bavarian claimant to replace Maximilian Henry, Prince Joseph Clemens of Bavaria, was at that time not more than 17 years old and not even ordained. Louis sought instead to install his own candidate, Wilhelm Egon von Fürstenberg, to ensure the key Rhenish state remained an ally. [77]\n\nIn light of his foreign and domestic policies during the early 1680s, which were perceived as aggressive, Louis's actions, fostered by the succession crises of the late 1680s, created concern and alarm in much of Europe. This led to the formation of the 1686 League of Augsburg by the Holy Roman Emperor, Spain, Sweden, Saxony, and Bavaria. Their stated intention was to return France to at least the borders agreed to in the Treaty of Nijmegen. [78] Emperor Leopold I's persistent refusal to convert the Truce of Ratisbon into a permanent treaty fed Louis's fears that the Emperor would turn on France and attack the Reunions after settling his affairs in the Balkans. [79]\n\nAnother event Louis found threatening was England's Glorious Revolution of 1688. Although King James II was Catholic, his two Anglican daughters, Mary and Anne, ensured the English people a Protestant succession. But when James II's son James Francis Edward Stuart was born, he took precedence in succession over his sisters. This seemed to herald an era of Catholic monarchs in England. Protestant lords called on the Dutch Prince\n\nBattle of Fleurus, 1690\n\n<!-- image -->\n\nLouis in 1690\n\n<!-- image -->\n\nWilliam III of Orange, grandson of Charles I of England, to come to their aid. He sailed for England with troops despite Louis's warning that France would regard it as a provocation. Witnessing numerous desertions and defections, even among those closest to him, James II fled England. Parliament declared the throne vacant, and offered it to James's daughter Mary II and his son-inlaw and nephew William. 
Vehemently anti-French, William (now William III of England) pushed his new kingdoms into war, thus transforming the League of Augsburg into the Grand Alliance. Before this happened, Louis expected William's expedition to England to absorb his energies and those of his allies, so he dispatched troops to the Rhineland after the expiry of his ultimatum to the German princes requiring confirmation of the Truce of Ratisbon and acceptance of his demands about the succession crises. This military manoeuvre was also intended to protect his eastern provinces from Imperial invasion by depriving the enemy army of sustenance, thus explaining the preemptive scorched earth policy pursued in much of southwestern Germany (the \"Devastation of the Palatinate\"). [80]\n\nLouis XIV at the siege of Namur (1692)\n\n<!-- image -->", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia5.pdf" - }, - { - "text": "| Philippe Charles, Duke of Anjou | 5 August 1668 | 10 July 1671 | Fils de France. Died in childhood. |\n| Louis François, Duke of Anjou | 14 June 1672 | 4 November 1672 | Fils de France. Died in infancy. |", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia5.pdf" - }, - { - "text": "illegitimate son Louis-Auguste de Bourbon, Duke of Maine. [129] Orléans, however, had Louis's will annulled by the Parlement of Paris after his death and made himself sole regent. He stripped Maine and his brother, Louis-Alexandre, Count of Toulouse, of the rank of Prince of the Blood, which Louis had granted them, and significantly reduced Maine's power and privileges. [130]\n\n## Line of succession in 1715\n\nLine of succession to the French throne upon the death of Louis XIV in 1715. Louis XIV's only surviving legitimate grandson, Philip V, was not included in the line of succession due to having renounced the French throne after the war of the Spanish Succession, which lasted for 13 years after the death of Charles II of Spain in 1700. 
[131]\n\nLouis XIII (1601-1643)\n\n<!-- image -->\n\nFurther down the French line of succession in 1715 was the House of Condé, followed by the House of Conti (a cadet branch of the House of Condé). Both of these royal houses were descended in the male line from Henri II, Prince of Condé, a second cousin of French King Louis XIII (the father of Louis XIV) in the male line.\n\n## Legacy\n\n## Reputation\n\nAccording to Philippe de Courcillon's Journal , Louis on his deathbed advised his heir with these words:\n\nDo not follow the bad example which I have set you; I have often undertaken war too lightly and have sustained it for vanity. Do not imitate me, but be a peaceful prince, and may you apply yourself principally to the alleviation of the burdens of your subjects. [132]\n\nSome historians point out that it was a customary demonstration of piety in those days to exaggerate one's sins. Thus they do not place much emphasis on Louis's deathbed declarations in assessing his accomplishments. Rather, they focus on military and diplomatic successes, such as how he placed a French prince on the Spanish throne. This, they contend, ended the threat of an aggressive Spain that historically interfered in domestic French politics. These historians also emphasise the effect of Louis's wars in expanding France's boundaries and creating more defensible frontiers that preserved France from invasion until the Revolution. [132]\n\nArguably, Louis also applied himself indirectly to \"the alleviation of the burdens of [his] subjects.\" For example, he patronised the arts, encouraged industry, fostered trade and commerce, and sponsored the founding of an overseas empire. Moreover, the significant reduction in civil wars and aristocratic rebellions during his reign are seen by these\n\nTerritorial expansion of France under Louis XIV (1643-1715) is depicted in orange.\n\n<!-- image -->\n\nhistorians as the result of Louis's consolidation of royal authority over feudal elites. 
In their analysis, his early reforms centralised France and marked the birth of the modern French state. They regard the political and military victories as well as numerous cultural achievements as how Louis helped raise France to a preeminent position in Europe. [133] Europe came to admire France for its military and cultural successes, power, and sophistication. Europeans generally began to emulate French manners, values, goods, and deportment. French became the universal language of the European elite.\n\nLouis's detractors have argued that his considerable foreign, military and domestic expenditure impoverished and bankrupted France. His supporters, however, distinguish the state, which was impoverished, from France, which was not. As supporting evidence, they cite the literature of the time, such as the social commentary in Montesquieu's Persian Letters . [134]", - "page_start": 20, - "page_end": 20, - "source_file": "wikipedia5.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia5.pdf", - "query": "What was one of Louis XIV's most ill-famed decrees?", - "target_page": 6, - "target_passage": "One of Louis's more infamous decrees was the Grande Ordonnance sur les Colonies of 1685, the Code Noir (black code)", - "chunk_present": { - "presence": true, - "index": 9 - } - }, - "top_chunk": [ - { - "text": "Alternatively, Louis's critics attribute the social upheaval culminating in the French Revolution to his failure to reform French institutions while the monarchy was still secure. Other scholars counter that there was little reason to reform institutions that largely worked well under Louis. They also maintain that events occurring almost 80 years after his death were not reasonably foreseeable to Louis and that in any case, his successors had sufficient time to initiate reforms of their own. [135]\n\nLouis has often been criticised for his vanity. 
The memoirist Saint-Simon, who claimed that Louis slighted him, criticised him thus:\n\nThere was nothing he liked so much as flattery, or, to put it more plainly, adulation; the coarser and clumsier it was, the more he relished it.\n\nFor his part, Voltaire saw Louis's vanity as the cause for his bellicosity:\n\nRoyal procession passing the PontNeuf under Louis XIV\n\n<!-- image -->\n\nIt is certain that he passionately wanted glory, rather than the conquests themselves. In the acquisition of Alsace and half of Flanders, and of all of Franche-Comté, what he really liked was the name he made for himself. [136]\n\nNonetheless, Louis has also received praise. The anti-Bourbon Napoleon described him not only as \"a great king\", but also as \"the only King of France worthy of the name\". [137] Leibniz, the German Protestant philosopher, commended him as \"one of the greatest kings that ever was\". [138] And Lord Acton admired him as \"by far the ablest man who was born in modern times on the steps of a throne\". [139] The historian and philosopher Voltaire wrote: \"His name can never be pronounced without respect and without summoning the image of an eternally memorable age\". [140] Voltaire's history, The Age of Louis XIV , named Louis's reign as not only one of the four great ages in which reason and culture flourished, but the greatest ever. [141][142]\n\n## Quotes\n\nNumerous quotes have been attributed to Louis XIV by legend.\n\nThe well-known \"I am the state\" ( \"L'État, c'est moi.\" ) was reported from at least the late 18th century. [143] It was widely repeated but also denounced as apocryphal by the early 19th century. [144][b][145]", - "page_start": 21, - "page_end": 21, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Félix, Joël. 
\"'The most difficult financial matter that has ever presented itself': paper money and the financing of warfare under Louis XIV.\" Financial History Review 25.1 (2018): 43-70 online (http://centaur.reading.ac.uk/72452/ 2/The%20most%20difficult%20financial%20matter%20FH.pdf) Archived (https://web.archive.org/web/2021022610 4833/http://centaur.reading.ac.uk/72452/2/The%20most%20difficult%20financial%20matter%20FH.pdf) 26 February 2021 at the Wayback Machine.\n\nGoubert, Pierre (197). Louis XIV and Twenty Million Frenchmen . social history from Annales School. ISBN 978-03947-1751-7.\n\nJones, Colin. The Great Nation: France from Louis XIV to Napoleon (1715-1799) (2002)\n\nKlaits, Joseph. Printed propaganda under Louis XIV: absolute monarchy and public opinion (Princeton University Press, 2015).\n\nLe Roy Ladurie, Emmanuel. The Ancien Régime: A History of France 1610-1774 (1999), survey by leader of the Annales School ISBN 0631211969\n\nLewis, W. H. The Splendid Century: Life in the France of Louis XIV (1953) ISBN 0881339210\n\nMitford, Nancy (1966). The Sun King: Louis XIV at Versailles (2012 ed.). New York Review of Books. ISBN 978-15901-7491-3.\n\nPrest, Julia, and Guy Rowlands, eds. The Third Reign of Louis XIV, c. 1682-1715 (Taylor & Francis, 2016).\n\nRothkrug, Lionel. Opposition to Louis XIV: The Political and Social Origins of French Enlightenment (Princeton University Press, 2015).\n\nRowlands, Guy. The Dynastic State and the Army under Louis XIV: Royal Service and Private Interest, 1661-1701 (2002)\n\nRubin, David Lee, ed. Sun King: The Ascendancy of French Culture during the Reign of Louis XIV . Washington: Folger Books and Cranbury: Associated University Presses, 1992.\n\nRule, John C., Louis XIV and the craft of kingship 1969.\n\nShennan, J. H. Louis XIV (1993)\n\nThompson, Ian. The Sun King's Garden: Louis XIV, André Le Nôtre And the Creation of the Gardens of Versailles . London: Bloomsbury Publishing, 2006 ISBN 1-5823-4631-3\n\nTreasure, Geoffrey. 
The Making of Modern Europe, 1648-1780 (3rd ed. 2003). pp. 230-296.\n\nWilkinson, Rich. Louis XIV (Routledge, 2007). ISBN 978-0-4153-5815-6\n\nCénat, Jean-Philippe. Le roi stratège: Louis XIV et la direction de la guerre, 1661-1715 (Presses universitaires de Rennes, 2019).\n\nCroix, Alain. \"Vingt millions de Français et Louis XIV.\" Revue dhistoire moderne contemporaine 2 (2020): 27-46.\n\nEngerand, Fernand, editor (1899). (in French) Inventaire des tableaux du Roy rédigé en 1709 et 1710 par Nicolas Bailly . Paris: Ernest Leroux. Copy (http://gallica.bnf.fr/ark:/12148/bpt6k6323734m/f11.image) Archived (https://we b.archive.org/web/20160307153902/http://gallica.bnf.fr/ark:/12148/bpt6k6323734m/f11.image) 7 March 2016 at the Wayback Machine at Gallica.\n\n## External links", - "page_start": 33, - "page_end": 33, - "source_file": "wikipedia5.pdf" - }, - { - "text": "<!-- image -->\n\n## Louis XIV\n\nLouis XIV (Louis-Dieudonné; 5 September 1638 - 1 September 1715), also known as Louis the Great ( Louis le Grand ) or the Sun King ( le Roi Soleil ), was King of France from 1643 until his death in 1715. His verified reign of 72 years and 110 days is the longest of any sovereign. [1][a] An emblematic character of the Age of Absolutism in Europe, [3] Louis XIV's legacy is widely characterized by French colonial expansion, the conclusion of Eighty Years' War involving the Habsburgs, and his architectural bequest, marked by commissioned works of art and buildings. His pageantry, opulent lifestyle and ornate cultivated image earned him enduring admiration. Louis XIV raised France to be the exemplar nation-state of the early modern period, and established a cultural prestige which lasted through the subsequent centuries, and continues today.\n\nLouis began his personal rule of France in 1661, after the death of his chief minister Cardinal Mazarin, when the King famously declared that he would take over the job himself. 
[4] An adherent of the divine right of kings, Louis continued his predecessors' work of creating a centralised state governed from the capital. He sought to eliminate the remnants of feudalism persisting in parts of France; by compelling many members of the nobility to reside at his lavish Palace of Versailles, he succeeded in pacifying the aristocracy, many of whom had participated in the Fronde rebellions during his minority. He thus became one of the most powerful French monarchs and consolidated a system of absolute monarchy in France that endured until the French Revolution. Louis also enforced uniformity of religion under the Catholic Church. His revocation of the Edict of Nantes abolished the rights of the Huguenot Protestant minority and subjected them to a wave of dragonnades, effectively forcing Huguenots to emigrate or convert, virtually destroying the French Protestant community.\n\nDuring Louis's long reign, France emerged as the leading European power and regularly made war. A conflict with Spain marked his entire childhood, while during his personal rule, Louis fought three major continental conflicts, each against powerful foreign alliances: the Franco-Dutch War, the Nine Years' War, and the War of the Spanish Succession. In addition, France contested shorter wars such as the War of Devolution and the War of the Reunions. Warfare defined Louis's foreign policy, impelled by his personal ambition for glory and power: \"a mix of commerce, revenge, and pique\". [5] His wars strained France's resources to the utmost, while in peacetime he concentrated on preparing for the next war. He taught his diplomats that their job was to create tactical and strategic advantages for the French military. 
[6] Upon his death in 1715, Louis XIV left his great-grandson and successor, Louis XV, a powerful but war-weary kingdom, in major debt after the War of the Spanish Succession that had raged on since 1701.\n\nSome of his other notable achievements include the construction of the Canal du Midi, the patronage of artists, and the founding of the French Academy of Sciences.\n\n## Early years\n\n## Louis XIV\n\nPortrait by Hyacinthe Rigaud , 1701\n\n<!-- image -->\n\nKing of France (more...)\n\nReign\n\n14 May 1643 - 1 September\n\n1715\n\nCoronation\n\n7 June 1654\n\nReims Cathedral\n\nPredecessor\n\nLouis XIII\n\nSuccessor\n\nLouis XV\n\nRegent\n\nAnne of Austria (1643-1651)\n\nChief ministers See list\n\n- Cardinal Mazarin (1643-1661)\n- Jean-Baptiste Colbert (1661-1683)\n- The Marquis of Louvois (1683-1691)\n\nBorn\n\n5 September 1638\n\nChâteau de Saint-Germain- en-Laye, Saint-Germain-en- Laye, France\n\nDied\n\n1 September 1715 (aged 76) Palace of Versailles, Versailles, France\n\nBurial\n\n9 September 1715 Basilica of Saint-Denis\n\nSpouses\n\nMaria Theresa of Spain (m. 1660; died 1683)", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Siamese embassy of King Narai to Louis XIV in 1686, led by Kosa Pan. Engraving by Nicolas Larmessin.\n\n<!-- image -->\n\n## Centralisation of power\n\nPortrait of Louis XIV (gray pastel on paper by Charles Le Brun, 1667, Louvre Museum)\n\n<!-- image -->\n\nSiamese court, which granted Mergui as a naval base to France. However, the death of Narai, King of Ayutthaya, the execution of his pro-French minister Constantine Phaulkon, and the siege of Bangkok in 1688 ended this era of French influence. [55]\n\nFrance also attempted to participate actively in Jesuit missions to China. To break the Portuguese dominance there, Louis sent Jesuit missionaries to the court of the Kangxi Emperor in 1685: Jean de Fontaney, Joachim Bouvet, Jean-François Gerbillon, Louis Le Comte, and Claude de Visdelou. 
[56] Louis also received a Chinese Jesuit, Michael Shen Fu-Tsung, at Versailles in 1684. [57] Furthermore, Louis's librarian and translator Arcadio Huang was Chinese. [58][59]\n\n## Height of power\n\nBy the early 1680s, Louis had greatly augmented French influence in the world. Domestically, he successfully increased the influence of the crown and its authority over the church and aristocracy, thus consolidating absolute monarchy in France.\n\nLouis initially supported traditional Gallicanism, which limited papal authority in France, and convened an Assembly of the French clergy in November 1681. Before its dissolution eight months later, the Assembly had accepted the Declaration of the Clergy of France, which increased royal authority at the expense of papal power. Without royal approval, bishops could not leave France, and appeals could not be made to the pope. Additionally, government officials could not be excommunicated for acts committed in pursuance of their duties. Although the king could not make ecclesiastical law, all papal regulations without royal assent were invalid in France. Unsurprisingly, the Pope repudiated the Declaration. [4]\n\nBy attaching nobles to his court at Versailles, Louis achieved increased control over the French aristocracy. According to historian Philip Mansel, the king turned the palace into:\n\nan irresistible combination of marriage market, employment agency and entertainment capital of aristocratic Europe, boasting the best theatre, opera, music, gambling, sex and (most important) hunting. [60]\n\nApartments were built to house those willing to pay court to the king. [61] However, the pensions and privileges necessary to live in a style appropriate to their rank were only possible by waiting constantly on Louis. [62] For this purpose, an elaborate court ritual was created wherein the king became the centre of attention and was observed throughout the day by the public. 
With his excellent memory, Louis could then see who attended him at court and who was absent, facilitating the subsequent distribution of favours and positions.\n\nLouis receiving the Doge of Genoa at Versailles on 15 May 1685, following the Bombardment of Genoa. ( Reparation faite à Louis XIV par le Doge de Gênes. 15 mai 1685 by Claude Guy Halle, Versailles.)\n\n<!-- image -->\n\nAnother tool Louis used to control his nobility was censorship, which often involved the opening of letters to discern their author's opinion of the government and king. [61] Moreover, by entertaining, impressing, and domesticating them with extravagant luxury and other distractions, Louis not only cultivated public opinion of him, but he also ensured the aristocracy remained under his scrutiny.", - "page_start": 8, - "page_end": 8, - "source_file": "wikipedia5.pdf" - }, - { - "text": "- The film, Le Roi Danse (2000; translated: The King Dances ), directed by Gérard Corbiau, reveals Louis through the eyes of Jean-Baptiste Lully, his court musician.\n - Julian Sands portrayed Louis in Roland Jaffe's Vatel (2000).\n - Alan Rickman directed, co-wrote, and stars as Louis XIV in the film, A Little Chaos , which centres on construction in the gardens of Versaille, at the time immediately before and after the death of Queen Maria Theresa.\n - The 2016 film The Death of Louis XIV , directed by Albert Serra, is set during the last two weeks of Louis XIV's life before dying of gangrene, with the monarch played by Jean-Pierre Léaud.\n\n## Television\n\n - Louis XIV is portrayed by Thierry Perkins-Lyautey in the British television film Charles II: The Power and the Passion.\n - The 15-year-old Louis XIV, as played by the Irish actor Robert Sheehan, is a major character of the short-lived historical fantasy series Young Blades from January to June 2005.\n - George Blagden portrays Louis XIV in the Canal+ series Versailles which aired for three seasons from 2015.\n\n## Musicals\n\n - Emmanuel Moire 
portrayed Louis XIV in the 2005-07 Kamel Ouali musical Le Roi Soleil.\n\n## Health and death\n\nLouis XIV (seated) with his son le Grand Dauphin (to the left), his grandson Louis, Duke of Burgundy (to the right), his great-grandson Louis Duke of Anjou, and Madame de Ventadour, Anjou's governess, who commissioned this painting; busts of Henry IV and Louis XIII are in the background.\n\n<!-- image -->\n\n<!-- image -->\n\nDespite the image of a healthy and virile king that Louis sought to project, evidence exists to suggest that his health was not very good. He had many ailments: for example, symptoms of diabetes, as confirmed in reports of suppurating periostitis in 1678, dental abscesses in 1696, along with recurring boils, fainting spells, gout, dizziness, hot flushes, and headaches.\n\nFrom 1647 to 1711, the three chief physicians to the king (Antoine Vallot, Antoine d'Aquin, and Guy-Crescent Fagon) recorded all of his health problems in the Journal de Santé du Roi ( Journal of the King's Health ), a daily report of his health. On 18 November 1686, Louis underwent a painful operation for an anal fistula that was performed by the surgeon Charles Felix de Tassy, who prepared a specially shaped curved scalpel for the occasion. The wound took more than two months to heal. [124]\n\nLouis died of gangrene at Versailles on 1 September 1715, four days before his 77th birthday, after 72 years on the throne. Enduring much pain in his last days, he finally \"yielded up his soul without any effort, like a candle going out\", while reciting the psalm Deus, in adjutorium me festina ( O Lord, make haste to help me ). [125] His body was laid to rest in Saint-Denis Basilica outside Paris. It remained there undisturbed for about 80 years until revolutionaries exhumed and destroyed all of the remains found in the Basilica. 
[126] In 1848, at Nuneham House, a piece of Louis's mummified heart, taken from his tomb and kept in a silver locket by Lord Harcourt, Archbishop of York, was shown to the Dean of Westminster, William Buckland, who ate a part of it. [127]\n\nCardinal Armand Gaston Maximilien de Rohan gave Last Rites (confession, viaticum, and unction) to king Louis XIV. [128]\n\n## Succession\n\nLouis outlived most of his immediate legitimate family. His last surviving legitimate son, Louis, Dauphin of France, died in 1711. Barely a year later, the Duke of Burgundy, the eldest of the Dauphin's three sons and then heir-apparent to Louis, followed his father. Burgundy's elder son, Louis, Duke of Brittany, joined them a few weeks later. Thus, on his\n\ndeathbed, Louis's heir-apparent was his five-year-old great-grandson, Louis, Duke of Anjou, Burgundy's younger son.", - "page_start": 19, - "page_end": 19, - "source_file": "wikipedia5.pdf" - }, - { - "text": "People in France were complaining about the expansion of royal authority, the high rate of taxation, and the reduction of the authority of the Parlement de Paris and other regional representative entities. Paris erupted in rioting as a result, and Anne was forced, under intense pressure, to free Broussel. Moreover, on the night of 9-10 February 1651, when Louis was twelve, a mob of angry Parisians broke into the royal palace and demanded to see their king. Led into the royal bed-chamber, they gazed upon Louis, who was feigning sleep, were appeased, and then quietly departed. [25] The threat to the royal family prompted Anne to flee Paris with the king and his courtiers.\n\nShortly thereafter, the conclusion of the Peace of Westphalia allowed Condé's army to return to aid Louis and his court. Condé's family was close to Anne at that time, and he agreed to help her attempt to restore the king's authority. 
[26] The queen's army, headed by\n\nBaptismal certificate, 1638\n\n<!-- image -->\n\nLouis XIV, then Dauphin of France, in 1642, one year before his accession to the throne, by Philippe de Champaigne\n\n<!-- image -->\n\nLouis XIV in 1643, by Claude Deruet\n\n<!-- image -->\n\nEurope after the Peace of Westphalia in 1648\n\n<!-- image -->", - "page_start": 2, - "page_end": 2, - "source_file": "wikipedia5.pdf" - }, - { - "text": "<!-- image -->\n\nThe Battle of Ramillies where the French fought the Dutch and British, 23 May 1706\n\n<!-- image -->\n\nLouis XIV depicted on a Louis d'or in 1709\n\n<!-- image -->\n\nMap of France after the death of Louis XIV\n\n<!-- image -->", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia5.pdf" - }, - { - "text": "## See also\n\n - Charles de Lorme, personal medical doctor to Louis XIV\n - Fundamental laws of the Kingdom of France\n - House of France\n - Levée (ceremony)\n - List of French monarchs\n - Outline of France\n - Louis XIV style\n - Nicolas Fouquet\n - French forestry Ordinance of 1669\n - Potager du Roi", - "page_start": 25, - "page_end": 25, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Louis XIV was born on 5 September 1638 in the Château de Saint-Germain-enLaye, to Louis XIII and Anne of Austria. He was named Louis Dieudonné (Louis the God-given) [7] and bore the traditional title of French heirs apparent: Dauphin . [8] At the time of his birth, his parents had been married for 23 years. His mother had experienced four stillbirths between 1619 and 1631. Leading contemporaries thus regarded him as a divine gift and his birth a miracle of God. [9]\n\nLouis's relationship with his mother was uncommonly affectionate for the time. Contemporaries and eyewitnesses claimed that the Queen would spend all her time with Louis. [10] Both were greatly interested in food and theatre, and it is highly likely that Louis developed these interests through his close relationship with his mother. 
This long-lasting and loving relationship can be evidenced by excerpts in Louis's journal entries, such as:\n\n\"Nature was responsible for the first knots which tied me to my mother. But attachments formed later by shared qualities of the spirit are far more difficult to break than those formed merely by blood.\" [11]\n\nIt was his mother who gave Louis his belief in the absolute and divine power of his monarchical rule. [12]\n\nDuring his childhood, he was taken care of by the governesses Françoise de Lansac and Marie-Catherine de Senecey. In 1646, Nicolas V de Villeroy became the young king's tutor. Louis XIV became friends with Villeroy's young children, particularly François de Villeroy, and divided his time between the Palais-Royal and the nearby Hotel de Villeroy.\n\n## Minority and the Fronde\n\nIssue more...\n\nLouis, Grand Dauphin\n\nMarie Thérèse, Madame Royale\n\nPhilippe Charles, Duke of Anjou\n\nIllegitimate :\n\nMarie Anne, Princess of Conti\n\nLouis, Count of Vermandois\n\nLouis Auguste, Duke of Maine\n\nLouis César, Count of Vexin\n\nLouise Françoise, Princess of Condé\n\nLouise Marie Anne,\n\nMademoiselle de Tours\n\nLouise, Baroness of La Queue\n\nFrançoise Marie, Duchess of Orléans\n\nLouis Alexandre, Count of Toulouse\n\n## Names\n\nLouis-Dieudonné de France\n\nHouse\n\nBourbon\n\nFather\n\nLouis XIII\n\nMother\n\nAnne of Austria\n\nReligion\n\nCatholicism\n\nSignature\n\n## Accession\n\nSensing imminent death in the spring of 1643, King Louis XIII decided to put his affairs in order for his four-year-old son Louis XIV. Not trusting the judgement of his Spanish wife Queen Anne, who would normally have become the sole regent of France, the king decreed that a regency council would rule on his son's behalf, with Anne at its head. [13]\n\nLouis XIII died on 14 May 1643. On 18 May [14] Queen Anne had her husband's will annulled by the Parlement de Paris , a judicial body of nobles and high-ranking clergy, [15] and she became sole regent. 
She exiled her husband's ministers Chavigny and Bouthilier and appointed the Count of Brienne as her minister of foreign affairs. [16] Anne kept the direction of religious policy strongly in hand until her son's majority in 1661.\n\nShe appointed Cardinal Mazarin as chief minister, giving him the daily administration of policy. She continued the policies of her late husband and Cardinal Richelieu, despite their persecution of her, in order to win absolute authority in France and victory abroad for her son. Anne protected Mazarin by exiling her followers the Duke of Beaufort and Marie de Rohan, who conspired against him in 1643. [17]\n\n<!-- image -->\n\nLouis XIV as a young child, unknown painter\n\n<!-- image -->\n\nThe best example of Anne's loyalty to France was her treatment of one of Richelieu's men, the Chancellor Pierre Séguier. Séguier had brusquely interrogated Anne in 1637 (like a", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia5.pdf" - }, - { - "text": "was persuaded to change his fiscal policy. Though willing enough to tax the nobles, Louis feared the political concessions which they would demand in return. Only towards the close of his reign under the extreme exigency of war, was he able, for the first time in French history, to impose direct taxes on the aristocracy. This was a step toward equality before the law and toward sound public finance, though it was predictably diminished by concessions and exemptions won by the insistent efforts of nobles and bourgeois. [35]\n\nLouis and Colbert also had wide-ranging plans to grow French commerce and trade. Colbert's mercantilist administration established new industries and encouraged manufacturers and inventors, such as the Lyon silk manufacturers and the Gobelins tapestry manufactory. He invited manufacturers and artisans from all over Europe to France, such as Murano glassmakers, Swedish ironworkers, and Dutch shipbuilders. 
He aimed to decrease imports while increasing French exports, hence reducing the net outflow of precious metals from France.\n\nEngraving of Louis XIV\n\n<!-- image -->\n\nLouis instituted reforms in military administration through Michel le Tellier and his son François-Michel le Tellier, successive Marquis de Louvois. They helped to curb the independent spirit of the nobility, imposing order on them at court and in the army. Gone were the days when generals protracted war at the frontiers while bickering over precedence and ignoring orders from the capital and the larger strategic picture, with the old military aristocracy ( noblesse d'épée , nobility of the sword) monopolizing senior military positions and the higher ranks. Louvois modernized the army and reorganised it into a professional, disciplined, well-trained force. He was devoted to the soldiers' material well-being and morale, and even tried to direct campaigns.\n\n## Relations with the major colonies\n\nLouis's legal reforms were enacted in his numerous Great Ordinances. Prior to that, France was a patchwork of legal systems, with as many traditional legal regimes as there were provinces, and two co-existing legal systems-customary law in the north and Roman civil law in the south. [36] The Grande Ordonnance de Procédure Civile of 1667, the Code Louis , was a comprehensive legal code imposing a uniform regulation of civil procedure throughout the kingdom. Among other things, it prescribed baptismal, marriage and death records in the state's registers, not the church's, and it strictly regulated the right of the Parlements to remonstrate. [37] The Code Louis later became the basis for the Napoleonic code, which in turn inspired many modern legal codes.\n\nOne of Louis's more infamous decrees was the Grande Ordonnance sur les Colonies of 1685, the Code Noir (black code). Although it sanctioned slavery, it attempted to humanise the practice by prohibiting the separation of families. 
Additionally, in the colonies, only Roman Catholics could own slaves, and these had to be baptised.\n\nLouis ruled through a number of councils:\n\n - Conseil d'en haut (\"High Council\", concerning the most important matters of state)-composed of the king, the crown prince, the controller-general of finances, and the secretaries of state in charge of various departments. The members of that council were called ministers of state.\n\nLouis and his family portrayed as Roman gods in a 1670 painting by Jean Nocret. L to R: Louis's aunt, Henriette-Marie; his brother, Philippe, duc d'Orléans; the Duke's daughter, Marie Louise d'Orléans, and wife, Henriette-Anne Stuart; the Queen-mother, Anne of Austria; three daughters of Gaston d'Orléans; Louis XIV; the Dauphin Louis; Queen Marie-Thérèse; la Grande Mademoiselle .\n\n<!-- image -->", - "page_start": 5, - "page_end": 5, - "source_file": "wikipedia5.pdf" - } - ] - }, - { - "references": { - "source_file": "wikipedia5.pdf", - "query": "What did Louis XIV do to avoid the Spanish War of Succession in 1698?", - "target_page": 13, - "target_passage": "In an attempt to avoid war, Louis signed the Treaty of the Hague with William III of England in 1698. This agreement divided Spain's Italian territories between Louis's son le Grand Dauphin and Archduke Charles, with the rest of the empire awarded to Joseph Ferdinand.", - "chunk_present": { - "presence": true, - "index": 6 - } - }, - "top_chunk": [ - { - "text": "illegitimate son Louis-Auguste de Bourbon, Duke of Maine. [129] Orléans, however, had Louis's will annulled by the Parlement of Paris after his death and made himself sole regent. He stripped Maine and his brother, Louis-Alexandre, Count of Toulouse, of the rank of Prince of the Blood, which Louis had granted them, and significantly reduced Maine's power and privileges. [130]\n\n## Line of succession in 1715\n\nLine of succession to the French throne upon the death of Louis XIV in 1715. 
Louis XIV's only surviving legitimate grandson, Philip V, was not included in the line of succession due to having renounced the French throne after the war of the Spanish Succession, which lasted for 13 years after the death of Charles II of Spain in 1700. [131]\n\nLouis XIII (1601-1643)\n\n<!-- image -->\n\nFurther down the French line of succession in 1715 was the House of Condé, followed by the House of Conti (a cadet branch of the House of Condé). Both of these royal houses were descended in the male line from Henri II, Prince of Condé, a second cousin of French King Louis XIII (the father of Louis XIV) in the male line.\n\n## Legacy\n\n## Reputation\n\nAccording to Philippe de Courcillon's Journal , Louis on his deathbed advised his heir with these words:\n\nDo not follow the bad example which I have set you; I have often undertaken war too lightly and have sustained it for vanity. Do not imitate me, but be a peaceful prince, and may you apply yourself principally to the alleviation of the burdens of your subjects. [132]\n\nSome historians point out that it was a customary demonstration of piety in those days to exaggerate one's sins. Thus they do not place much emphasis on Louis's deathbed declarations in assessing his accomplishments. Rather, they focus on military and diplomatic successes, such as how he placed a French prince on the Spanish throne. This, they contend, ended the threat of an aggressive Spain that historically interfered in domestic French politics. These historians also emphasise the effect of Louis's wars in expanding France's boundaries and creating more defensible frontiers that preserved France from invasion until the Revolution. [132]\n\nArguably, Louis also applied himself indirectly to \"the alleviation of the burdens of [his] subjects.\" For example, he patronised the arts, encouraged industry, fostered trade and commerce, and sponsored the founding of an overseas empire. 
Moreover, the significant reduction in civil wars and aristocratic rebellions during his reign are seen by these\n\nTerritorial expansion of France under Louis XIV (1643-1715) is depicted in orange.\n\n<!-- image -->\n\nhistorians as the result of Louis's consolidation of royal authority over feudal elites. In their analysis, his early reforms centralised France and marked the birth of the modern French state. They regard the political and military victories as well as numerous cultural achievements as how Louis helped raise France to a preeminent position in Europe. [133] Europe came to admire France for its military and cultural successes, power, and sophistication. Europeans generally began to emulate French manners, values, goods, and deportment. French became the universal language of the European elite.\n\nLouis's detractors have argued that his considerable foreign, military and domestic expenditure impoverished and bankrupted France. His supporters, however, distinguish the state, which was impoverished, from France, which was not. As supporting evidence, they cite the literature of the time, such as the social commentary in Montesquieu's Persian Letters . [134]", - "page_start": 20, - "page_end": 20, - "source_file": "wikipedia5.pdf" - }, - { - "text": "<!-- image -->\n\n## Louis XIV\n\nLouis XIV (Louis-Dieudonné; 5 September 1638 - 1 September 1715), also known as Louis the Great ( Louis le Grand ) or the Sun King ( le Roi Soleil ), was King of France from 1643 until his death in 1715. His verified reign of 72 years and 110 days is the longest of any sovereign. [1][a] An emblematic character of the Age of Absolutism in Europe, [3] Louis XIV's legacy is widely characterized by French colonial expansion, the conclusion of Eighty Years' War involving the Habsburgs, and his architectural bequest, marked by commissioned works of art and buildings. His pageantry, opulent lifestyle and ornate cultivated image earned him enduring admiration. 
Louis XIV raised France to be the exemplar nation-state of the early modern period, and established a cultural prestige which lasted through the subsequent centuries, and continues today.\n\nLouis began his personal rule of France in 1661, after the death of his chief minister Cardinal Mazarin, when the King famously declared that he would take over the job himself. [4] An adherent of the divine right of kings, Louis continued his predecessors' work of creating a centralised state governed from the capital. He sought to eliminate the remnants of feudalism persisting in parts of France; by compelling many members of the nobility to reside at his lavish Palace of Versailles, he succeeded in pacifying the aristocracy, many of whom had participated in the Fronde rebellions during his minority. He thus became one of the most powerful French monarchs and consolidated a system of absolute monarchy in France that endured until the French Revolution. Louis also enforced uniformity of religion under the Catholic Church. His revocation of the Edict of Nantes abolished the rights of the Huguenot Protestant minority and subjected them to a wave of dragonnades, effectively forcing Huguenots to emigrate or convert, virtually destroying the French Protestant community.\n\nDuring Louis's long reign, France emerged as the leading European power and regularly made war. A conflict with Spain marked his entire childhood, while during his personal rule, Louis fought three major continental conflicts, each against powerful foreign alliances: the Franco-Dutch War, the Nine Years' War, and the War of the Spanish Succession. In addition, France contested shorter wars such as the War of Devolution and the War of the Reunions. Warfare defined Louis's foreign policy, impelled by his personal ambition for glory and power: \"a mix of commerce, revenge, and pique\". [5] His wars strained France's resources to the utmost, while in peacetime he concentrated on preparing for the next war. 
He taught his diplomats that their job was to create tactical and strategic advantages for the French military. [6] Upon his death in 1715, Louis XIV left his great-grandson and successor, Louis XV, a powerful but war-weary kingdom, in major debt after the War of the Spanish Succession that had raged on since 1701.\n\nSome of his other notable achievements include the construction of the Canal du Midi, the patronage of artists, and the founding of the French Academy of Sciences.\n\n## Early years\n\n## Louis XIV\n\nPortrait by Hyacinthe Rigaud , 1701\n\n<!-- image -->\n\nKing of France (more...)\n\nReign\n\n14 May 1643 - 1 September\n\n1715\n\nCoronation\n\n7 June 1654\n\nReims Cathedral\n\nPredecessor\n\nLouis XIII\n\nSuccessor\n\nLouis XV\n\nRegent\n\nAnne of Austria (1643-1651)\n\nChief ministers See list\n\n- Cardinal Mazarin (1643-1661)\n- Jean-Baptiste Colbert (1661-1683)\n- The Marquis of Louvois (1683-1691)\n\nBorn\n\n5 September 1638\n\nChâteau de Saint-Germain- en-Laye, Saint-Germain-en- Laye, France\n\nDied\n\n1 September 1715 (aged 76) Palace of Versailles, Versailles, France\n\nBurial\n\n9 September 1715 Basilica of Saint-Denis\n\nSpouses\n\nMaria Theresa of Spain (m. 1660; died 1683)", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Louis XIV in 1670, engraved portrait by Robert Nanteuil\n\n<!-- image -->\n\nand Lionne, however, made the renunciation conditional on the full payment of a Spanish dowry of 500,000 écus. [40] The dowry was never paid and would later play a part persuading his maternal first cousin Charles II of Spain to leave his empire to Philip, Duke of Anjou (later Philip V of Spain), the grandson of Louis XIV and Maria Theresa.\n\nThe War of Devolution did not focus on the payment of the dowry; rather, the lack of payment was what Louis XIV used as a pretext for nullifying Maria Theresa's renunciation of her claims, allowing the land to \"devolve\" to him. 
In Brabant (the location of the land in dispute), children of first marriages traditionally were not disadvantaged by their parents' remarriages and still inherited property. Louis's wife was Philip IV's daughter by\n\nhis first marriage, while the new king of Spain, Charles II, was his son by a subsequent marriage. Thus, Brabant allegedly \"devolved\" to Maria Theresa, justifying France to attack the Spanish Netherlands.\n\n## Relations with the Dutch\n\nDuring the Eighty Years' War with Spain, France supported the Dutch Republic as part of a general policy of opposing Habsburg power. Johan de Witt, Dutch Grand Pensionary from 1653 to 1672, viewed this as crucial for Dutch security and a counterweight against his domestic Orangist opponents. Louis provided support in the 1665-1667 Second Anglo-Dutch War but used the opportunity to launch the War of Devolution in 1667. This captured Franche-Comté and much of the Spanish Netherlands; French expansion in this area was a direct threat to Dutch economic interests. [41]\n\nThe Dutch opened talks with Charles II of England on a common diplomatic front against France, leading to the Triple Alliance, between England, the Dutch and Sweden. The threat of an escalation and a secret treaty to divide Spanish possessions\n\nThe Battle of Tolhuis, Louis XIV crosses the Lower Rhine at Lobith on 12 June 1672; Rijksmuseum Amsterdam\n\n<!-- image -->\n\nwith Emperor Leopold, the other major claimant to the throne of Spain, led Louis to relinquish many of his gains in the 1668 Treaty of Aix-la-Chapelle. [42]\n\nLouis placed little reliance on his agreement with Leopold and as it was now clear French and Dutch aims were in direct conflict, he decided to first defeat the Republic, then seize the Spanish Netherlands. This required breaking up the Triple Alliance; he paid Sweden to remain neutral and signed the 1670 Secret Treaty of Dover with Charles, an Anglo-French alliance against the Dutch Republic. 
In May 1672, France invaded the Republic, supported by Münster and the Electorate of Cologne. [43]\n\nLouis XIV, 1670, by Claude Lefèbvre\n\n<!-- image -->\n\nRapid French advance led to a coup that toppled De Witt and brought William III to power. Leopold viewed French expansion into the Rhineland as an increasing threat, especially after they seized the strategic Duchy of Lorraine in 1670. The prospect of Dutch defeat led Leopold to an alliance with Brandenburg-Prussia on 23 June, followed by another with the Republic on 25th. [44] Although Brandenburg was forced out of the war by the June 1673 Treaty of Vossem, in August an anti-French alliance was formed by the Dutch, Spain, Emperor Leopold and the Duke of Lorraine. [45]", - "page_start": 6, - "page_end": 6, - "source_file": "wikipedia5.pdf" - }, - { - "text": "In July 1695, the city of Namur, occupied for three years by the French, was besieged by an allied army led by William III. Louis XIV ordered the surprise destruction of a Flemish city to divert the attention of these troops. This led to the bombardment of Brussels, in which more than 4,000 buildings were destroyed, including the entire city centre. The strategy failed, as Namur fell three weeks later, but harmed Louis XIV's reputation: a century later, Napoleon deemed the bombardment \"as barbarous as it was useless\". [85]\n\nPeace was broached by Sweden in 1690. By 1692, both sides evidently wanted peace, and secret bilateral talks began, but to no avail. [86] Louis tried to break up the alliance against him by dealing with individual opponents but did not achieve his aim until 1696 when the Savoyards agreed to the Treaty of Turin and switched sides. Thereafter, members of the League of Augsburg rushed to the peace table, and negotiations for a general peace began in earnest, culminating in the Peace of Ryswick of 1697. 
[87]\n\nMarshal de Luxembourg\n\n<!-- image -->\n\n## Peace of Ryswick\n\nThe Peace of Ryswick ended the War of the League of Augsburg and disbanded the Grand Alliance. By manipulating their rivalries and suspicions, Louis divided his enemies and broke their power.\n\nThe treaty yielded many benefits for France. Louis secured permanent French sovereignty over all of Alsace, including Strasbourg, and established the Rhine as the Franco-German border (as it is to this day). Pondichéry and Acadia were returned to France, and Louis's de facto possession of Saint-Domingue was recognised as lawful. However, he returned Catalonia and most of the Reunions.\n\nFrench military superiority might have allowed him to press for more advantageous terms. Thus, his generosity to Spain with regard to Catalonia has been read as a concession to foster pro-French sentiment and may ultimately have induced King Charles II to name Louis's grandson Philip, Duke of Anjou, heir to the Spanish throne. [88] In exchange for financial compensation, France renounced its interests in the Electorate of Cologne and the Palatinate. Lorraine, which had been occupied by the French since 1670, was returned to its rightful Duke Leopold, albeit with a right of way to the French military. William and Mary were recognised as joint sovereigns of the British Isles, and Louis withdrew support for James II. The Dutch were given the right to garrison forts in the Spanish Netherlands that acted as a protective barrier against possible French aggression. Though in some respects the Treaty of Ryswick may appear a diplomatic defeat for Louis since he failed to place client rulers in control of the Palatinate or the Electorate of Cologne, he did fulfil many of the aims laid down in his 1688 ultimatum. 
[89] In any case, peace in 1697 was desirable to Louis, since France was exhausted from the costs of the war.\n\n## War of the Spanish Succession\n\n## Causes and build-up to the war\n\nBy the time of the Peace of Ryswick, the Spanish succession had been a source of concern to European leaders for well over forty years. King Charles II ruled a vast empire comprising Spain, Naples, Sicily, Milan, the Spanish Netherlands, and numerous Spanish colonies. He produced no children, however, and consequently had no direct heirs.", - "page_start": 12, - "page_end": 12, - "source_file": "wikipedia5.pdf" - }, - { - "text": "experiences during the Fronde , when men of high birth readily took up the rebel cause against their king, who was actually the kinsman of some. This victory over the nobility may thus have ensured the end of major civil wars in France until the French Revolution about a century later.\n\n## France as the pivot of warfare\n\nUnder Louis, France was the leading European power, and most wars pivoted around its aggressiveness. No European state exceeded it in population, and no one could match its wealth, central location, and very strong professional army. It had largely avoided the devastation of the Thirty Years' War. Its weaknesses included an inefficient financial system that was hard-pressed to pay for its military adventures, and the tendency of most other powers to gang up against it.\n\nDuring Louis's reign, France fought three major wars: the Franco-Dutch War, the Nine Years' War, and the War of the Spanish Succession. There were also two lesser conflicts: the War of Devolution and the War of the Reunions. [64] The wars were very expensive but defined Louis XIV's foreign policy, and his personality shaped his approach. Impelled \"by a mix of commerce, revenge, and pique\", Louis sensed that war was the ideal way to enhance his glory. In peacetime, he concentrated on preparing for the next war. 
He taught his diplomats that their job was to create tactical and strategic advantages for the French military. [6] By 1695, France retained much of its dominance but had lost control of the seas to England and Holland, and most countries, both Protestant and Catholic, were in alliance against it. Sébastien Le Prestre de Vauban, France's leading military strategist, warned Louis in 1689 that a hostile \"Alliance\" was too powerful at sea. He recommended that France fight back by licensing French merchant ships to privateer and seize enemy merchant ships while avoiding its navies:\n\nLouis XIV\n\n<!-- image -->\n\nFrance has its declared enemies Germany and all the states that it embraces; Spain with all its dependencies in Europe, Asia, Africa and America; the Duke of Savoy [in Italy], England, Scotland, Ireland, and all their colonies in the East and West Indies; and Holland with all its possessions in the four corners of the world where it has great establishments. France has ... undeclared enemies, indirectly hostile, hostile, and envious of its greatness, Denmark, Sweden, Poland, Portugal, Venice, Genoa, and part of the Swiss Confederation, all of which states secretly aid France's enemies by the troops that they hire to them, the money they lend them and by protecting and covering their trade. [65]\n\nVauban was pessimistic about France's so-called friends and allies:\n\nFor lukewarm, useless, or impotent friends, France has the Pope, who is indifferent; the King of England [James II] expelled from his country; the Grand Duke of Tuscany; the Dukes of Mantua, Modena, and Parma [all in Italy]; and the other faction of the Swiss. Some of these are sunk in the softness that comes of years of peace, the others are cool in their affections....The English and Dutch are the main pillars of the Alliance; they support it by making war against us in concert with the other powers, and they keep it going by means of the money that they pay every year to... Allies.... 
We must therefore fall back on privateering as the method of conducting war which is most feasible, simple, cheap, and safe, and which will cost least to the state, the more so since any losses will not be felt by the King, who risks virtually nothing....It will enrich the country, train many good officers for the King, and in a short time force his enemies to sue for peace. [66]\n\n## Edict of Fontainebleau", - "page_start": 9, - "page_end": 9, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Louis XIV at the siege of Namur (1692)\n\n<!-- image -->\n\nFrench armies were generally victorious throughout the war because of Imperial commitments in the Balkans, French logistical superiority, and the quality of French generals such as Condé's famous pupil, François Henri de Montmorency-Bouteville, duc de Luxembourg. [81] He triumphed at the Battles of Fleurus in 1690, Steenkerque in 1692, and Landen in 1693, although, the battles proved to be of little of strategic consequence, [82][83] mostly due to the nature of late 17th-century warfare. [84]\n\nAlthough an attempt to restore James II failed at the Battle of the Boyne in 1690, France accumulated a string of victories from Flanders in the north, Germany in the east, and Italy and Spain in the south, to the high seas and the colonies. Louis personally supervised the captures of Mons in 1691 and Namur in 1692. Luxembourg gave France the defensive line of the Sambre by capturing Charleroi in 1693. France also overran most of the Duchy of Savoy after the battles of Marsaglia and Staffarde in 1693. 
While naval stalemate ensued after the French victory at the Battle of Beachy Head in 1690 and the Allied victory at Barfleur-La Hougue in 1692, the Battle of Torroella in 1694 exposed Catalonia to French invasion, culminating in the capture of Barcelona.\n\nThe Dutch captured Pondichéry in 1693, but a 1697 French raid on the Spanish treasure port of Cartagena, Spain, yielded a fortune of 10,000,000 livres.", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Philip V of Spain\n\n<!-- image -->\n\nsucceeded to his father's throne. [90] The signatories, however, omitted to consult the ruler of these lands, and Charles II was passionately opposed to the dismemberment of his empire. In 1699, he re-confirmed his 1693 will that named Joseph Ferdinand as his sole successor. [91]\n\nSix months later, Joseph Ferdinand died. Therefore, in 1700, Louis and William III concluded a fresh partitioning agreement, the Treaty of London. This allocated Spain, the Low Countries, and the Spanish colonies to the Archduke. The Dauphin would receive all of Spain's Italian territories. [92] Charles II acknowledged that his empire could only remain undivided by bequeathing it entirely to a Frenchman or an Austrian. Under pressure from his German wife, Maria Anna of Neuburg, Charles II named Archduke Charles as his sole heir.\n\n## Acceptance of the will of Charles II and consequences\n\nOn his deathbed in 1700, Charles II of Spain unexpectedly changed his will. The clear demonstration of French military superiority for many decades before this time, the pro-French faction at the court of Spain, and even Pope\n\nInnocent XII convinced him that France was more likely to preserve his empire intact. He thus offered the entire empire to the Dauphin's second son Philip, Duke of Anjou, provided it remained undivided. Anjou was not in the direct line of French succession, thus his accession would not cause a Franco-Spanish union. 
[92] If Anjou refused, the throne would be offered to his younger brother Charles, Duke of Berry. If the Duke of Berry declined it, it would go to Archduke Charles, then to the distantly related House of Savoy if Charles declined it. [93]\n\nLouis was confronted with a difficult choice. He could agree to a partition of the Spanish possessions and avoid a general war, or accept Charles II's will and alienate much of Europe. He may initially have been inclined to abide by the partition treaties, but the Dauphin's insistence persuaded him otherwise. [94] Moreover, Louis's foreign minister, Jean-Baptiste Colbert, marquis de Torcy, pointed out that war with the Emperor would almost certainly ensue whether Louis accepted the partition treaties or Charles II's will. He emphasised that, should it come to war, William III was unlikely to stand by France since\n\nLouis in 1701\n\n<!-- image -->\n\nhe \"made a treaty to avoid war and did not intend to go to war to implement the treaty\". [91] Indeed, in the event of war, it might be preferable to be already in control of the disputed lands. Eventually, therefore, Louis decided to accept Charles II's will. Philip, Duke of Anjou, thus became Philip V, King of Spain.\n\nMost European rulers accepted Philip as king, some reluctantly. Depending on one's views of the war's inevitability, Louis acted reasonably or arrogantly. [95] He confirmed that Philip V retained his French rights despite his new Spanish position. Admittedly, he may only have been hypothesising a theoretical eventuality and not attempting a Franco-Spanish union. But his actions were certainly not read as disinterested. Moreover, Louis sent troops to the Spanish Netherlands to evict Dutch garrisons and secure Dutch recognition of Philip V. In 1701, Philip transferred the asiento (the right to supply slaves to Spanish colonies) to France, as a sign of the two nations' growing connections. 
As tensions mounted, Louis decided to acknowledge James Stuart, the son of James II, as King of England, Scotland and Ireland on the latter's death, infuriating William III. These actions enraged Britain and the Dutch Republic. [96] With the Holy Roman Emperor and the petty German states, they formed another Grand Alliance and declared war on France in 1702. French diplomacy secured Bavaria, Portugal, and Savoy as Franco-Spanish allies. [97]\n\n## Commencement of fighting", - "page_start": 13, - "page_end": 13, - "source_file": "wikipedia5.pdf" - }, - { - "text": "Félix, Joël. \"'The most difficult financial matter that has ever presented itself': paper money and the financing of warfare under Louis XIV.\" Financial History Review 25.1 (2018): 43-70 online (http://centaur.reading.ac.uk/72452/ 2/The%20most%20difficult%20financial%20matter%20FH.pdf) Archived (https://web.archive.org/web/2021022610 4833/http://centaur.reading.ac.uk/72452/2/The%20most%20difficult%20financial%20matter%20FH.pdf) 26 February 2021 at the Wayback Machine.\n\nGoubert, Pierre (197). Louis XIV and Twenty Million Frenchmen . social history from Annales School. ISBN 978-03947-1751-7.\n\nJones, Colin. The Great Nation: France from Louis XIV to Napoleon (1715-1799) (2002)\n\nKlaits, Joseph. Printed propaganda under Louis XIV: absolute monarchy and public opinion (Princeton University Press, 2015).\n\nLe Roy Ladurie, Emmanuel. The Ancien Régime: A History of France 1610-1774 (1999), survey by leader of the Annales School ISBN 0631211969\n\nLewis, W. H. The Splendid Century: Life in the France of Louis XIV (1953) ISBN 0881339210\n\nMitford, Nancy (1966). The Sun King: Louis XIV at Versailles (2012 ed.). New York Review of Books. ISBN 978-15901-7491-3.\n\nPrest, Julia, and Guy Rowlands, eds. The Third Reign of Louis XIV, c. 1682-1715 (Taylor & Francis, 2016).\n\nRothkrug, Lionel. 
Opposition to Louis XIV: The Political and Social Origins of French Enlightenment (Princeton University Press, 2015).\n\nRowlands, Guy. The Dynastic State and the Army under Louis XIV: Royal Service and Private Interest, 1661-1701 (2002)\n\nRubin, David Lee, ed. Sun King: The Ascendancy of French Culture during the Reign of Louis XIV . Washington: Folger Books and Cranbury: Associated University Presses, 1992.\n\nRule, John C., Louis XIV and the craft of kingship 1969.\n\nShennan, J. H. Louis XIV (1993)\n\nThompson, Ian. The Sun King's Garden: Louis XIV, André Le Nôtre And the Creation of the Gardens of Versailles . London: Bloomsbury Publishing, 2006 ISBN 1-5823-4631-3\n\nTreasure, Geoffrey. The Making of Modern Europe, 1648-1780 (3rd ed. 2003). pp. 230-296.\n\nWilkinson, Rich. Louis XIV (Routledge, 2007). ISBN 978-0-4153-5815-6\n\nCénat, Jean-Philippe. Le roi stratège: Louis XIV et la direction de la guerre, 1661-1715 (Presses universitaires de Rennes, 2019).\n\nCroix, Alain. \"Vingt millions de Français et Louis XIV.\" Revue dhistoire moderne contemporaine 2 (2020): 27-46.\n\nEngerand, Fernand, editor (1899). (in French) Inventaire des tableaux du Roy rédigé en 1709 et 1710 par Nicolas Bailly . Paris: Ernest Leroux. Copy (http://gallica.bnf.fr/ark:/12148/bpt6k6323734m/f11.image) Archived (https://we b.archive.org/web/20160307153902/http://gallica.bnf.fr/ark:/12148/bpt6k6323734m/f11.image) 7 March 2016 at the Wayback Machine at Gallica.\n\n## External links", - "page_start": 33, - "page_end": 33, - "source_file": "wikipedia5.pdf" - }, - { - "text": "impact of this victory won the support of Portugal and Savoy. Later, the Battle of Ramillies delivered the Low Countries to the Allies, and the Battle of Turin forced Louis to evacuate Italy, leaving it open to Allied forces. 
Marlborough and Eugene met again at the Battle of Oudenarde, which enabled them to invade France.\n\nFrance established contact with Francis II Rákóczi and promised support if he took up the cause of Hungarian independence.\n\nDefeats, famine, and mounting debt greatly weakened France. Between 1693 and 1710, over two million people died in two famines, made worse as foraging armies seized food supplies from the villages. [98] In desperation, Louis ordered a disastrous invasion of the English island of Guernsey in the autumn of 1704 with the aim of raiding their successful harvest. By the winter of 1708-09, he was willing to accept peace at nearly any cost. He agreed that the entire Spanish empire should be surrendered to Archduke Charles, and also consented to return to the frontiers of the Peace of Westphalia, giving up all the territories he had acquired over 60 years. But he could not promise that Philip V would accept these terms, so the Allies demanded that Louis single-handedly attack his grandson to force these terms on him. If he could not achieve this within the year, the war would resume. Louis would not accept these terms. [99]\n\n## Turning point\n\nThe final phases of the War of the Spanish Succession demonstrated that the Allies could not maintain Archduke Charles in Spain just as surely as France could not retain the entire Spanish inheritance for Philip V. The Allies were definitively expelled from central Spain by the Franco-Spanish victories at the Battles of Villaviciosa and Brihuega in 1710. French forces elsewhere remained obdurate despite their defeats. The Allies suffered a Pyrrhic victory at the Battle of Malplaquet with 21,000 casualties, twice that of the French. [100] Eventually, France recovered its military pride with the decisive victory at Denain in 1712.\n\nFrench military successes near the end of the war took place against the background of a changed political situation in Austria. In 1705, Emperor Leopold I died. 
His elder son and successor, Joseph I, followed him in 1711. His heir was none other than Archduke Charles, who secured control of all of his brother's Austrian landholdings. If the Spanish empire then fell to him, it would have resurrected a domain as vast as Holy Roman Emperor Charles V's in the 16th century. To the maritime powers of Great Britain and the Dutch Republic, this would have been as undesirable as a Franco-Spanish union. [101]\n\n## Conclusion of peace\n\nAs a result of the fresh British perspective on the European balance of power, AngloFrench talks began, culminating in the 1713 Peace of Utrecht between Louis, Philip V of Spain, Anne of Great Britain, and the Dutch Republic. In 1714, after losing Landau and Freiburg, the Holy Roman Emperor also made peace with France in the Treaties of Rastatt and Baden.\n\nIn the general settlement, Philip V retained Spain and its colonies, while Austria received the Spanish Netherlands and divided Spanish Italy with Savoy. Britain kept Gibraltar and Menorca. Louis agreed to withdraw his support for James Stuart, son of James II and pretender to the thrones of Great Britain and Ireland, and ceded Newfoundland, Rupert's Land, and Acadia in the Americas to Anne. Britain gained the most from the treaty, but the final terms were much more favourable to France than those being discussed in peace\n\nThe Franco-Spanish army led by the Duke of Berwick defeated decisively the Alliance forces of Portugal, England, and the Dutch Republic at the Battle of Almansa.\n\n<!-- image -->\n\nThe Battle of Ramillies where the French fought the Dutch and British, 23 May 1706\n\n<!-- image -->", - "page_start": 14, - "page_end": 14, - "source_file": "wikipedia5.pdf" - }, - { - "text": "The French were nevertheless forced to retreat from most of the Dutch Republic, which deeply shocked Louis; he retreated to St Germain for a time, where no one, except a few intimates, was allowed to disturb him. 
[47] French military advantages allowed them however to hold their ground in Alsace and the Spanish Netherlands while retaking Franche-Comté. By 1678, mutual exhaustion led to the Treaty of Nijmegen, which was generally settled in France's favour and allowed Louis to intervene in the Scanian War. Despite the military defeat, his ally Sweden regained much of what it had lost under the 1679 treaties of SaintGermain-en-Laye, Fontainebleau and Lund imposed on Denmark-Norway and Brandenburg. [48] Yet Louis's two primary goals, the destruction of the Dutch Republic and the conquest of the Spanish Netherlands, had failed. [49]\n\nLouis was at the height of his power, but at the cost of uniting his opponents; this increased as he continued his expansion. In 1679, he dismissed his foreign minister Simon Arnauld, marquis de Pomponne, because he was seen as having compromised too much with the allies. Louis maintained the strength of his army, but in his next series of territorial claims avoided using military force alone. Rather, he combined it with legal pretexts in his efforts to augment the boundaries of his kingdom. Contemporary treaties were intentionally phrased ambiguously. Louis established the Chambers of Reunion to determine the full extent of his rights and obligations under those treaties.\n\nCities and territories, such as Luxembourg and Casale, were prized for their strategic positions on the frontier and access to important waterways. Louis also sought Strasbourg, an important strategic crossing on the left bank of the Rhine and theretofore a Free Imperial City of the Holy Roman Empire, annexing it and other territories in 1681. Although a part of Alsace, Strasbourg was not part of Habsburg-ruled Alsace and was thus not ceded to France in the Peace of Westphalia.\n\nFollowing these annexations, Spain declared war, precipitating the War of the Reunions. 
However, the Spanish were rapidly defeated because the Emperor (distracted by the Great Turkish War) abandoned them, and the Dutch only supported them minimally. By the Truce of Ratisbon, in 1684, Spain was forced to acquiesce in the French occupation of most of the conquered territories, for 20 years. [50]\n\nLouis's policy of the Réunions may have raised France to its greatest size and power during his reign, but it alienated much of Europe. This poor public opinion was compounded by French actions off the Barbary Coast and at Genoa. First, Louis had\n\n## Silver coin of Louis XIV, dated 1674\n\nObverse. The Latin inscription is LVDOVICVS XIIII D[EI] GRA[TIA] (\"Louis XIV, by the grace of God\").\n\n<!-- image -->\n\nReverse. The Latin\n\ninscription is\n\nFRAN[CIÆ] ET\n\nNAVARRÆ REX 1674\n\n(\"King of France and of Navarre, 1674\").\n\nAlgiers and Tripoli, two Barbary pirate strongholds, bombarded to obtain a favourable treaty and the liberation of Christian slaves. Next, in 1684, a punitive mission was launched against Genoa in retaliation for its support for Spain in previous wars. Although the Genoese submitted, and the Doge led an official mission of apology to Versailles, France gained a reputation for brutality and arrogance. European apprehension at growing French might and the realisation of the extent of the dragonnades' effect (discussed below) led many states to abandon their alliances with France. [51] Accordingly, by the late 1680s, France became increasingly isolated in Europe.\n\n## Non-European relations and the colonies", - "page_start": 7, - "page_end": 7, - "source_file": "wikipedia5.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed2.pdf", - "query": "Does nerve transection or crushing affect small afferents within the dorsal root ganglion in the same way?", - "target_page": 5, - "target_passage": "Both SNItrans (Fig. 2C) and SNIcrush (Fig. 
2D) injuries resulted in a rightward shift in population distributions of the cross-sectional area of nucleated, FB-labelled DRG neurons when compared with contralateral DRG, consistent with a loss of small afferents post–nerve injury.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "observed 7809 6 153 neurons per DRG; this was not significantly different to the number of neurons in the contralateral DRG (7917 6 349), whereas cell number approximately halved by 8 weeks postinjury to 3963 6 410 neurons per DRG ( Fig. 1C ). Separating analysis into intact vs axotomized afferents revealed that only axotomized afferents were lost, with no difference observed in numbers of intact afferents ( Fig. 1D ). Between 1 and 8 weeks after injury, we observed a 61.0 6 7.0% decrease in the number of GFP 1 neurons. This loss of injured afferents resulted in a loss of neuron-containing (ie, excluding white matter regions) DRG volume ( Fig. 1E ), but not neuron density ( Fig. 1F ). Cell loss predominantly occurred between 1 and 2 weeks postinjury and stabilized after this timepoint. Population distributions of the cross-sectional area of nucleated, tdTomato-expressing cell profiles were not significantly different at 1 vs 8 weeks postSNItrans, in contrast to GFP-expressing/injured afferents, in which a loss of a population of small afferents at 8 weeks postinjury was observed ( Fig. 1G ).\n\nSNItrans resulted in a mixed population of axotomized and intact afferents within the L4 DRG. Therefore, we developed an approach to restrict our analysis to axotomized afferents, without relying on transgenic labelling, and used this as a complementary approach to confirm our findings. We injected the neuronal tracer FB into the glabrous, tibial innervation territory of both hindpaws 1 week before common peroneal and tibial transection (SNItrans) or crush (SNIcrush) surgeries ( Figs. 2A and B ). 
FastBlue-uptake was complete across neurons of all sizes by 1 week (Fig. S3, http://links.lww.com/PAIN/ C84), so this approach allowed us to profile a sample of the axotomized afferents. Both SNItrans ( Fig. 2C ) and SNIcrush ( Fig. 2D ) injuries resulted in a rightward shift in population distributions of the cross-sectional area of nucleated, FB-labelled DRG neurons when compared with contralateral DRG, consistent with a loss of small afferents post-nerve injury.\n\nAs a third complementary approach, we applied semiautomated volumetric analyses of nuclei size following tissue clearing. In this study, whole DRGs were cleared 4 weeks after SNItrans for nuclei counting in 'complete' tissue ( Figs. 2E-H ). Nuclei were labelled by TDP-43, in line with the study by West et al., 67 and were quantified using Imaris software ( Fig. 2F , Video 1). We observed a slight but significant rightward shift in nuclear spot volume population distribution 4 weeks after SNItrans ( Fig. 2G ). In addition, there was a significant reduction in the number of small but not medium or large nuclear spots, in support of a loss of small-diameter neuron populations ( Fig. 2H ).\n\nTogether, our data derived from several different experimental approaches show that a population of small-diameter afferents are lost following peripheral nerve injury.\n\n## 3.2. Spared nerve crush or transection results in death of Mrgprd-expressing neurons\n\nTo date, determining cell loss among specific populations of afferent neurons has proved challenging due to the downregulation of subpopulation-specific marker genes following axonal transection. 37,44 To overcome this issue, we took advantage of transgenic strategies to label populations in a manner that persisted after injury. Owing to the bias for the loss of small neurons and the known loss of IB4-binding central terminals postinjury, 36 we initially focused on nonpeptidergic nociceptive neurons. 
We used MrgD ChR2-YFP mice to identify neurons belonging to the largest of the 3 classes of nonpeptidergic nociceptors, NP1. 55,59 To determine whether these neurons are lost following nerve injury, we used a stereological method to quantify L4 DRG MrgD-YFP 1 (yellow fluorescent", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed2.pdf" - }, - { - "text": "- [64] Welin D, Novikova LN, Wiberg M, Kellerth JO, Novikov LN. Survival and regeneration of cutaneous and muscular afferent neurons after peripheral nerve injury in adult rats. Exp Brain Res 2008;186:315-23.\n - [65] West CA, Davies KA, Hart AM, Wiberg M, Williams SR, Terenghi G. Volumetric magnetic resonance imaging of dorsal root ganglia for the objective quantitative assessment of neuron death after peripheral nerve injury. Exp Neurol 2007;203:22-33.\n - [66] West CA, Ljungberg C, Wiberg M, Hart A. Sensory neuron death after upper limb nerve injury and protective effect of repair: clinical evaluation using volumetric magnetic resonance imaging of dorsal root ganglia. Neurosurgery 2013;73:632-40.\n - [67] West SJ, Bonboire D, Bennett DL. StereoMate: 3D stereological automated analysis of biological structures. bioRxiv 2020:648337.\n - [68] Wiberg R, Novikova LN, Kingham PJ. Evaluation of apoptotic pathways in dorsal root ganglion neurons following peripheral nerve injury. Neuroreport 2018;29:779-85.\n - [69] Yu X, Liu H, Hamel KA, Morvan MG, Yu S, Leff J, Guan Z, Braz JM, Basbaum AI. Dorsal root ganglion macrophages contribute to both the initiation and persistence of neuropathic pain. Nat Commun 2020;11:264.\n - [70] Zheng J, Lu Y, Perl ER. Inhibitory neurones of the spinal substantia gelatinosa mediate interaction of signals from primary afferents. J Physiol 2010;588:2065-75.", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed2.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n## Peripheral nerve injury results in a biased loss of sensory neuron subpopulations\n\nAndrew H. 
Cooper a , Allison M. Barry b , Paschalina Chrysostomidou a , Romane Lolignier a , Jinyi Wang a , Magdalena Redondo Canales a , Heather F. Titterton a , David L. Bennett b , Greg A. Weir a, *\n\n## Abstract\n\nThere is a rich literature describing the loss of dorsal root ganglion (DRG) neurons following peripheral axotomy, but the vulnerability of discrete subpopulations has not yet been characterised. Furthermore, the extent or even presence of neuron loss following injury has recently been challenged. In this study, we have used a range of transgenic recombinase driver mouse lines to genetically label molecularly defined subpopulations of DRG neurons and track their survival following traumatic nerve injury. We find that spared nerve injury leads to a marked loss of cells containing DRG volume and a concomitant loss of small-diameter DRG neurons. Neuron loss occurs unequally across subpopulations and is particularly prevalent in nonpeptidergic nociceptors, marked by expression of Mrgprd. We show that this subpopulation is almost entirely lost following spared nerve injury and severely depleted (by roughly 50%) following sciatic nerve crush. Finally, we used an in vitro model of DRG neuron survival to demonstrate that nonpeptidergic nociceptor loss is likely dependent on the absence of neurotrophic support. Together, these results profile the extent to which DRG neuron subpopulations can survive axotomy, with implications for our understanding of nerve injury-induced plasticity and pain.\n\nKeywords: Sensory neuron, Neuron death, Transgenic reporter line, Neuropathic pain, Nerve injury\n\n## 1. Introduction\n\nDorsal root ganglion (DRG) neurons represent a molecularly and functionally heterogeneous population. Under normal conditions, this diversity contributes to the ability of the somatosensory nervous system to detect a myriad of sensory stimuli that result in the perceptions of touch, temperature, itch, and pain. 
Following nerve injury, physiological changes in DRG neurons lead to hyperexcitability, 57 which is a key pathological driver of neuropathic pain. 20,63 Concomitant molecular changes in discrete subpopulations also occur, and these have recently been comprehensively described in single-cell 37,44 and subpopulation-specific sequencing studies. 3 These studies describe a transient and generalized reduction in the expression of subpopulation-specific genes following nerve injury. 3,37,44\n\nIn addition to molecular changes, there is a rich literature describing the frank loss of DRG neurons following traumatic\n\nSupplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (www.painjournalonline.com).\n\nCopyright © 2024 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the International Association for the Study of Pain. This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\n\nhttp://dx.doi.org/10.1097/j.pain.0000000000003321", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed2.pdf" - }, - { - "text": "http://dx.doi.org/10.1097/j.pain.0000000000003321\n\nnerve injury in experimental rodent models. 24,50,53,56 Some studies have suggested that neuron loss occurs in certain patient cohorts, 48,66 but this is yet to be definitively demonstrated in humans. In rodents, most studies support a preferential loss of small cells that give rise to unmyelinated fibers 53 but some contrasting studies describe the preferential loss of large cells 6 or loss of cells of all sizes. 46 Variation is evident across studies in terms of experimental species, age, type of injury, and quantification methods. 56 Shi et al. 
50 used stereological counting methods to identify a 54% loss of DRG neuron number 4 weeks after 'mid-thigh' sciatic nerve transection in C57BL/6 mice. Estimates for the degree of loss following commonly used nerve injury paradigms (eg, spared nerve injury [SNI] and sciatic nerve crush) are not available and because of the neurochemical changes following injury and the loss of subpopulation marker gene expression, 5,44,50 the vulnerability of molecularly defined subpopulations has not been characterized. Moreover, more recent studies have cast doubt on the extent or even presence of DRG neuron death following nerve injury. One study which developed a deep learning approach to assess rat DRG cellular plasticity found no loss of neurons up to 2 weeks post-SNI, 49 while another observed no loss of genetically labelled damaged DRG neurons 2 months after sciatic nerve crush. 44\n\nThe issue of whether neuron loss occurs, and if so, in what subpopulations, is important. It will likely have implications for our understanding of reinnervation and functional recovery in patients. Furthermore, better insight will provide critical context for those investigating the plasticity that occurs following nerve injury and may inform therapeutic targeting of sensory neuron populations.\n\nAn expanding repertoire of transgenic recombinase driver lines now makes it possible to permanently label DRG neuron subpopulations and study their fate in rodent nerve injury paradigms. The aim of this study was to use this technology to characterize", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 2. Spared nerve crush and transection lead to a loss of small DRG neurons. (A) Approach to restrict analysis to damaged afferents: a subcutaneous injection of the tracer FB into both hindpaws labelled tibial afferents, before unilateral SNItrans or SNIcrush surgery. (B) Representative image of FB labelling and NeuN immunostaining in the L4 DRG. 
The image is a projection of optical sections at 3m mintervals through the entirety of a 30m m-thick tissue section. Scale bar 5 100 m m. (C and D) Quantification of the cross-sectional area of FastBlue labelled DRG neurons ipsilateral and contralateral to SNItrans (C) or SNIcrush injury (D) reveals a loss of small afferents and subsequent shift in population distribution. Kolmogorov-Smirnov tests of cumulative distributions; SNItrans: D 5 0.25, P , 0.001; n 5 183 or 191 neurons from 3 mice; SNIcrush: D 5 0.22, P , 0.001, n 5 319 or 325 neurons from 3 mice. (E) Experimental approach for whole DRG volumetric analyses after SNItrans. (F) Representative 3D rendering of TDP-43 profiles and corresponding nuclear spot profiles following Imaris-based spot detection feature. Scale bar 5 100 m m. (G) Quantification of DRG nuclear spot volume ipsilateral and contralateral to SNItrans. Kolmogorov-Smirnov tests of cumulative distribution: D 5 0.06, P , 0.001, n 5 30,206 (contra) or 32,544 (ipsi) nuclei from 4 (contra) or 5 (ipsi) mice. (H) Total number of nuclear spots, by size, per DRG. Two-way RM ANOVA; size bin 3 injury interaction: F 2,14 5 8.26, P 5 0.004; n 5 4 to 5 mice; ˇ S'ıd 'ak multiple comparisons tests: ** P , 0.01. ANOVA, analysis of variance; DRG, dorsal root ganglion; FB, FastBlue; RM, repeated measures.\n\n<!-- image -->\n\n## 3.3. Spared nerve injury induces a loss of Trpm8 1 and calcitonin gene-related peptide 1 but not myelinated dorsal root ganglion neurons\n\nLoss restricted to nonpeptidergic nociceptors would not fully account for the degree of total neuron loss that we observed. Therefore, we studied a range of other subpopulations, both small and large in diameter, for their vulnerability to injury-\n\ninduced loss. 
To investigate potential loss of Trpm8 1 (coldsensitive), calcitonin gene-related peptide 1 (CGRP) (peptidergic), and myelinated subpopulations of DRG neurons following nerve injury, we applied our FB-labelling approach in Trpm8 FlpO ; RC::FLTG (FlpO-dependent tdTom expression), Calca CreERT2 ; Ai32 (Cre-dependent ChR2-YFP expression) and Thy1-CFP mice, respectively ( Figs. 4A-D ). Trpm8-tdTom was expressed", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed2.pdf" - }, - { - "text": "SNI-related gene expression signatures were less evident in Mrgprd-expressing and C-LTMR neurons at later timepoints, compared with other populations in injured DRG. 3 This could be explained by a loss of axotomized neurons of these classes and therefore sampling of only uninjured neurons at this timepoint. 24,43,64 In terms of the transcriptional response to injury, nonpeptidergic nociceptors show enrichment of individual proapoptotic factors early after injury, 23,68 and we extend these results in this study, by describing a subpopulation-specific enrichment of GO terms associated with apoptosis that is evident as early as 3 days after injury. Such data and single-cell transcriptomic profiling of all DRG neurons following injury 37,44 may offer the opportunity to elucidate the cell death pathways engaged and upstream effectors that enrich this process to nonpeptidergic nociceptive neurons.\n\n## 4.3. Implications for pain pathogenesis\n\nNeuronal loss has been proposed as a key contributor to poor functional recovery following nerve injury, 54 and biased survival of different afferent types might be expected to contribute to modality-specific sensory deficits. Beyond loss of function, does DRGneuronlosscontribute to chronic pain, in either an adaptive or maladaptive manner? Intrathecal delivery of GDNF is neuroprotective and reverses the reduction in the number of IB4-binding DRG neurons and central terminals seen following transection. 
5 Treatment is concurrently analgesic and abrogates pain-related behaviors. 7,60 However, the pleiotropic nature of GDNF makes it impossible to directly attribute the analgesic effects to the reversal of neuron loss. Indeed, it is possible that GDNF exerts its effect by actions on intact nonpeptidergic nociceptive afferents, 52 activation of which is known to drive aversive behaviors in the neuropathic state. 62 These data leave the contribution of nonpeptidergic nociceptor loss to behavior in the GDNF treatment paradigm ambiguous. Other pharmacological approaches have been found effective at reversing a neuronal loss in rodent models, but the impact on pain behavior was not studied. 21,22\n\nRodents develop marked mechanical and thermal hypersensitivity rapidly following nerve injury and before timepoints at which neuron loss is observed. 10 This lack of a temporal correlation may suggest a limited contribution to evoked hypersensitivities. The temporal profile of ongoing tonic pain (eg, pain aversiveness as measured by condition place preference assays 26 ) is less defined and so is its correlation to the timing of neuron loss.\n\nThere are many anatomical sites within the somatosensory nervous system where differential loss of sensory neuron populations could impact neurobiology. For example, loss of cutaneous afferents may afford more opportunity for plasticity in reinnervation patterns, such as collateral sprouting of uninjured or surviving afferents, and the types of nerve endings made by different molecular subpopulations. 17,27 It also seems likely that the death of many neurons within a DRG could contribute to the expansion and activation of immune cell types, which are known to play a major role in neuropathic pain. 30,69 Finally, under normal conditions, peripheral sensory input is integrated into the dorsal horn of the spinal cord by complex interneuron circuitry. Many spinal circuits are engaged by convergent input from different afferent types. 
9,41,70 Therefore, selective loss of input from discrete afferent types could undoubtedly impact the normal processing of remaining afferent signals. 34 Experimentally abrogating neuronal loss may be a fruitful approach to assess the contribution to nervous system plasticity (adaptive or maladaptive) following injury. In this regard, our in vitro readout would be a useful experimental", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed2.pdf" - }, - { - "text": "injury (Fig. S6A-C, http://links.lww.com/PAIN/C84), indicating that any loss of neurons within specific neuronal subpopulations wasnot biased towards soma size. Collectively, these data show that unrepaired axonal damage to peripheral sensory neurons induces a partial loss of Trpm8 1 and CGRP 1 subpopulations, but no major loss of myelinated afferents.\n\nBased on our findings of preferential loss of nonpeptidergic nociceptors, we re-analyzed a previous population-specific transcriptomic dataset of mouse DRG neurons following nerve injury for potential upregulation of cell death pathways (Fig. S7, http://links.lww.com/PAIN/C84). 3 Wefound that early after injury (3 days post-SNItrans), nonpeptidergic (MrgD CreERT2 -expressing) neurons showed enhanced enrichment of GO terms associated with apoptosis, in contrast to a broad population of nociceptors (labelled with Scn10a CreERT2 ), peptidergic nociceptors (CalcaCreERT2 ), C-LTMRs (Th CreERT2 ), and A b -RA (rapidly adapting) and A d -LTMRs (A d /A b -LTMR, Ntrk2 CreERT2 ;Advillin FlpO ), in which there was less or no enrichment of cell death pathways. By 4 weeks, only C-LTMR and A d /A b -LTMR subtypes show any overrepresentation of cell death pathways (in the populations studied). Both injury-specific and apoptotic signatures in nonpeptidergic neurons were no longer significantly enriched, consistent with a loss of axotomized nonpeptidergic afferents by this late timepoint postinjury. 
These data suggest that apoptotic pathways are upregulated acutely after injury in a celltype-specific manner.\n\n## 3.4. Mrgprd dorsal root ganglion neurons are sensitive to loss in vitro\n\nEarlier studies postulated that a lack of neurotrophic support underlies neuronal loss, which is supported by the observation that exogenous GDNF treatment at the time of injury, or shortly after, rescues the loss of IB4-binding central terminals posttransection. 5 We sought to use the DRG neurons from MrgD CreERT2 ;Ai32 mice to test this postulate and establish an in vitro platform capable of probing the molecular basis of loss, with axonal transection during isolation providing a correlate for in vivo nerve injury ( Figs. 5A-E ). Twenty-four hours after plating, YFP was expressed by 16.3 6 1.3% of DRG neurons, which was reduced to 11.8 6 1.7% after 28 days of culture in the presence of exogenous GFs, NGF and GDNF ( Fig. 5F ). However, in the absence of GFs, YFP 1 neurons only accounted for 1.7 6 0.6% of neurons after 28 days, accompanied by an apparent reduction in the overall number of neurons within the culture, despite all conditions being seeded at the same initial density ( Figs. 5C and F ). YFP 1 cell loss was partially rescued by the presence of GDNF, but not NGF alone, in the culture media ( Figs. 5D-F ). These results contrasted with experiments using neurons derived from Calca CreERT2 ;Ai32 mice, in which we observed no change in the proportion of neurons that were Calca-YFP 1 after 28 days in culture, regardless of exogenous GF addition ( Figs. 5G-L ). Collectively, these data support the use of DRG cultures to probe the mechanisms underlying selective loss of sensory neurons following nerve injury and suggest a role for trophic support, particularly by GDNF signaling, in preventing the loss of nonpeptidergic nociceptors.\n\n## 4. 
Discussion\n\nWe present data herein to support the hypothesis that traumatic nerve injury in rodents leads to a profound loss of small-diameter DRG neurons. Taking advantage of newly", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 1. SNItrans induces death of small primary afferent neurons, accompanied by a reduction in volume, not cell density, of the dorsal root ganglion. (A) Approach to differentially labelled intact afferents with tdTomato and damaged afferents with GFP after peripheral nerve injury using the Avil FlpO ;Atf3 CreERT2 ;RC:: FLTGmouseline and schematic of experimental timeline. (B) Representative image of GFP, tdTomato, and NeuN expression in an L4 DRG, 2 weeks after SNItrans. Scale bars 5 100 m m. (C and D) Stereological quantification of the total number of DRG neurons (C) or number of axotomized and intact neurons (D) in the L4 DRG 1, 2, 4, and 8 weeks after SNItrans or contralateral (contra) to injury. (C) One-way ANOVA with Tukey posttests; F 4,10 5 37.98, P , 0.001. (D) Two-way RM ANOVA; Timepoint 3 Color interaction F 4,10 5 39.04, P , 0.001, n 5 3 mice; Tukey posttests (between injured groups): † P , 0.05 vs contra, ‡ P , 0.05 vs 1-week. (E) Volume of DRG-containing cells (ie, excluding white matter tracts) following SNItrans. One-way ANOVA with Tukey posttests; F 4,10 5 21.25, P , 0.001, n 5 3. (F) Neuronal density within the DRG following SNItrans. One-way ANOVA; F 4,10 5 2.77, P 5 0.09, n 5 3. (G) Population distribution of uninjured and injured afferents by cross-sectional area, 1 and 8 weeks post-SNItrans. Kolmogorov-Smirnov tests of cumulative distributions; Uninjured: D 5 0.08, P 5 0.18; Injured: D 5 0.32, P , 0.001; n 5 310 to 427 neurons from 3 mice. * P , 0.05, ** P , 0.01, *** P , 0.001 vs contra. ANOVA, analysis of variance; DRG, dorsal root ganglion; GFP, green fluorescent protein.\n\n<!-- image -->\n\nprotein) neurons 28 days after sham surgery or SNItrans ( Figs. 3A and B ). 
SNItrans, but not sham, resulted in a significant decrease (54.0 ± 6.6%) in the total number of MrgD-YFP+ neurons in L4 DRG ( Fig. 3C ).\n\nYellow fluorescent protein expression in MrgD ChR2-YFP mice is driven by the endogenous Mrgprd promoter, which has been reported to be upregulated or downregulated following axonal damage. 44,58 Such changes in promoter activity could affect the proportion of nonpeptidergic nociceptors identified by YFP expression. Therefore, to verify these findings, we used MrgD CreERT2 ;Ai32 mice and tamoxifen administration before injury, to permanently label Mrgprd-expressing afferents with ChR2-YFP ( Figs. 3D-F ). We then tested whether the proportion of cutaneous tibial afferents that were YFP+ was altered following nerve injury. Following hindpaw FB injection, ≈15% of contralateral, FB-labelled DRG neurons expressed YFP. This was reduced to 6.0 ± 1.2% 28 days after SNIcrush injury and to only 1.7 ± 0.9% 28 days after SNItrans ( Fig. 3G ). Uptake by uninjured YFP+ neurons was equivalent 7 and 35 days after FB injection, demonstrating that this reduction was not because 7 days were insufficient for YFP+ neurons to fully uptake FB (Fig. S3C, http://links.lww.com/PAIN/C84). No significant difference in the percentage of FB-labelled YFP+ DRG neurons between ipsilateral and contralateral DRG was observed at 7 days following SNItrans (Figs. S4A and B, http://links.lww.com/PAIN/C84), demonstrating that loss occurred after this timepoint. Analysis of the cross-sectional soma area of FB-labelled, YFP+ neurons in uninjured DRG revealed an area of 361 ± 138 μm² (mean ± SD) (Fig. S4C, http://links.lww.com/PAIN/C84), which is a distribution profile matching those neurons presumed lost. 
Collectively, these data show that peripheral nerve injury results in a substantial loss of nonpeptidergic, Mrgprd-expressing neurons, with SNItrans (ie, an unrepaired axonal transection) resulting in an almost complete loss of this population.", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed2.pdf" - }, - { - "text": "## 4. Discussion\n\nWe present data herein to support the hypothesis that traumatic nerve injury in rodents leads to a profound loss of small-diameter DRG neurons. Taking advantage of newly developed transgenic recombinase driver lines, we have shown that loss is biased across molecularly defined subpopulations. Nonpeptidergic nociceptive neurons are particularly susceptible to loss, with almost all Mrgprd+ axotomized afferents lost following an unrepaired transection injury (SNItrans) and roughly half lost following a model which contrastingly allows for nerve regeneration (SNIcrush). Finally, we have observed that the vulnerability of Mrgprd+ neurons extends to the in vitro setting and provide data to support the hypothesis that loss is driven by a lack of neurotrophic support following injury.\n\n## 4.1. Neuronal loss\n\nThe question of whether DRG neurons die following traumatic injury has been addressed by several groups over the last few decades. Despite contrasting findings on the extent, timing, and form that loss takes, most studies have observed frank loss of DRG neurons. 6,38,46,53 However, more recent studies using recombinase driver lines and novel machine-learning approaches have cast doubt on this consensus. 44,49 Our data strongly support the loss hypothesis and suggest that approximately 60% of axotomized afferents die within 2 weeks of SNI. The discrepancy between our findings and other recent studies may be partly explained by the sampling method used to estimate neuronal numbers. For example, Schulte et al. 
49 developed a novel machine-learning approach and found no reduction in neuron density across serial sections of rat DRG following SNI, and they inferred from this that frank loss did not occur. Our results are congruous, in that we also observed no reduction in neuron density. However, we found a substantial loss in the total neuron-containing volume of injured DRG, which underlies our contrasting conclusion of frank loss. Of note, morphological volumetric analysis and MRI have also previously demonstrated volume loss in both rodent and human DRG following nerve injury. 35,65,66 These findings occur despite a major increase of nonneuronal cells in the injured DRG 30 and support the notion that the total DRG neuron number is decreased.\n\n## 4.2. Selectivity of neuron loss\n\nWhile definitively characterizing loss of molecularly defined subpopulations was challenging before the advent of recombinase driver lines, a consensus emerged that small-diameter neurons are more vulnerable to nerve injury-induced loss. 50,53 Our data support this consensus and extend it to reveal that while there is a generalized partial loss of C-fiber populations including CGRP- and Trpm8-expressing neurons, Mrgprd-expressing neurons are particularly sensitive to loss. This selective vulnerability has been hinted at previously by the stark reduction in the number of DRG neurons and their central terminals that bind IB4 and express canonical markers such as the P2X3 receptor following nerve injury. 5,8,29,36 Type 1a glomeruli are also reduced in lamina II, suggesting a structural loss of central terminals and not simply a loss of IB4-binding. 2 However, it was not clear whether these data represented phenotypic changes in nonpeptidergic nociceptors or frank loss of neurons. We describe neuron loss that is delayed (occurring > 
7 days postinjury) with respect to histochemical and structural changes (occurring 15 days postinjury 2,29 ), suggesting that these changes precede and are not in themselves indicative of neuron loss.\n\nThe vulnerability of Mrgprd-expressing neurons is congruous with recent subpopulation bulk RNA-seq data, which found that", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed2.pdf" - }, - { - "text": "platform to help delineate the precise cell death pathways and signaling cascades engaged (which could then be experimentally manipulated). Such studies should consider that plasticity may evolve over time. The loss of IB4+ central terminals is transient following crush and has even been observed to reverse at longer timepoints following SNItrans. 36 These observations, in conjunction with ours of loss of neurons, raise the intriguing question of the source of such central reinnervation.\n\n## 4.4. Study limitations\n\nOur efforts focused on traumatic nerve injury paradigms owing to previous contrasting results using these robust and reproducible experimental models. We did not extend our studies to systemic neuropathy models, such as chemotherapy or diabetic neuropathy. A recent postmortem analysis reported a neuronal loss in the DRG from patients with painful diabetic peripheral neuropathy. 19 Transcriptional responses vary substantially across different nerve insults, 44 so it would be of interest to test whether neuronal loss and the subpopulation vulnerability reported in this study are common features across different types of insults.\n\nUsing multiple approaches, we assess the naïve mouse L4 DRG to contain approximately 8000 neurons, consistent with a previous estimate, 67 and observed a frank loss of small-diameter neurons following injury. However, the extent of loss observed using our semiautomated approach was less than that observed using manual techniques. 
67 Two major limitations in this study may explain this discrepancy: First, owing to technical issues, the cleared DRG dataset is unpaired ipsilateral-contralateral which adds larger variability. Second, the analysis method is prone to undercounting deep nuclei. The signal-to-noise is better for superficial nuclei and smaller tissue volumes. Given the reduction in DRG volume after SNItrans, nuclei in larger contralateral DRG may be undercounted.\n\nWhile we made efforts to profile the loss of several molecularly discrete sensory neuron populations, we acknowledge that not all subtypes were profiled. Furthermore, recent single-cell RNA sequencing has given us a more granular appreciation of the heterogeneity of sensory neurons. 42 Future studies could leverage our experimental approach and new transgenic lines to characterize the loss of neurons in more detail. Such experiments may be pertinent before embarking on molecular or functional profiling of populations post-nerve injury.\n\n## 4.5. Conclusions\n\nIn sum, we have provided data from multiple complementary experimental approaches to support the hypothesis that DRG neurons are lost following nerve injury in mice. We describe a substantial loss, which is biased towards specific subpopulations and particularly present in small-diameter nonpeptidergic nociceptive neurons.\n\n## Conflict of interest statement\n\nD.L.B. has acted as a consultant in the last 2 years for AditumBio, Biogen, Biointervene, Combigene, LatigoBio, GSK, Ionis, Lexicon therapeutics, Neuvati, Olipass, Orion, Replay, SC Health Managers, Theranexus, Third Rock Ventures, and Vida Ventures on behalf of Oxford University Innovation. D.L.B. has received research funding from Lilly and Astra Zeneca, and G.A.W. has received research funding from Ono Pharmaceutical. D.L.B. 
has received", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed2.pdf" - } - ] - }, - { - "references": { - "source_file": "legal5_eubiodiversity_cc4.pdf", - "query": "What are the EU's key nature conservation commitments for 2030?", - "target_page": 6, - "target_passage": "1. Legally protect a minimum of 30% of the EU’s land area and 30% of the EU’s sea area and integrate ecological corridors, as part of a true Trans-European Nature Network. 2. Strictly protect at least a third of the EU’s protected areas, including all remaining EU primary and old-growth forests. 3. Effectively manage all protected areas, defining clear conservation objectives and measures, and monitoring them appropriately.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "States and the European Environment Agency, will put forward in 2020 criteria and guidance for identifying and designating additional areas, including a definition of strict protection, as well as for appropriate management planning. In doing so, it will indicate how other effective area-based conservation measures and greening of cities could contribute to the targets.\n\nThe targets relate to the EU as a whole and could be broken down according to the EU bio-geographical regions and sea basins or at a more local level. Every Member State will have to do its fair share of the effort based on objective ecological criteria, recognising that each country has a different quantity and quality of biodiversity. Particular focus will be placed on protecting and restoring the tropical and sub-tropical marine and terrestrial ecosystems in the EU's outermost regions given their exceptionally high biodiversity value.\n\nIn addition, in order to have a truly coherent and resilient Trans-European Nature Network, it will be important to set up ecological corridors to prevent genetic isolation, allow for species migration, and maintain and enhance healthy ecosystems. 
In this context, investments in green and blue infrastructure 27 and cooperation across borders among Member States should be promoted and supported, including through the European Territorial Cooperation.\n\nThe Commission will aim to agree the criteria and guidance for additional designations with Member States by the end of 2021. Member States will then have until the end of 2023 to demonstrate significant progress in legally designating new protected areas and integrating ecological corridors. On this basis, the Commission will assess by 2024 whether the EU is on track to meet its 2030 targets or whether stronger actions, including EU legislation, are needed.\n\nFinally, the Overseas Countries and Territories also host important biodiversity hotspots, not governed by EU environmental rules. The Commission encourages relevant Member States to consider promoting equal or equivalent rules in these countries and territories.\n\n## Nature protection: key commitments by 2030\n\n- 1. Legally protect a minimum of 30% of the EU's land area and 30% of the EU's sea area and integrate ecological corridors, as part of a true Trans-European Nature Network.\n- 2. Strictly protect at least a third of the EU's protected areas, including all remaining EU primary and old-growth forests.\n- 3. Effectively manage all protected areas, defining clear conservation objectives and measures, and monitoring them appropriately.", - "page_start": 5, - "page_end": 5, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "encouraging cooperation in education for environmental sustainability in 2021. This will provide guidance for schools and teachers on how to cooperate and exchange experiences across Member States on biodiversity teaching. The Commission will also provide support materials and facilitate the exchange of good practices in EU networks of teacher-training programmes.\n\n## 4. 
THE EUROPEAN UNION FOR AN AMBITIOUS GLOBAL BIODIVERSITY AGENDA\n\nBiodiversity is a priority of the EU's external action and an integral part of efforts to meet the United Nations Sustainable Development Goals. It will be mainstreamed throughout bilateral and multilateral engagements, through the EU's 'Green Deal diplomacy', and forthcoming green alliances 76 . The Commission will work closely with the European Parliament and Member States to ensure a high level of EU ambition and mobilise all efforts for the good of the world's biodiversity.\n\n## 4.1. Raising the level of ambition and commitment worldwide\n\nProtecting biodiversity is a global challenge and the next decade will be decisive. Global efforts under the United Nations Convention on Biological Diversity have largely been insufficient. Nature cannot afford any half measures or lack of ambition.\n\nIn this spirit, the EU is ready to lead all efforts - working with like-minded partners in a high-ambition coalition on biodiversity - to agree an ambitious new global framework for post-2020 at the upcoming 15 th Conference of the Parties to the Convention on Biological Diversity.\n\nWith this strategy, the Commission proposes ambitious commitments for the EU to bring to the table. The EU should also support governments and stakeholders across the globe to significantly step up their ambition and their action.\n\nThe Commission proposes that the EU ensures that the post-2020 global framework includes, at a minimum, the elements outlined below:\n\n -  Overarching global goals for biodiversity for 2050, in line with the United Nations 2030 Agenda for Sustainable Development and the vision of 'living in harmony with nature'. The ambition should be that, by 2050, all of the world's ecosystems are restored, resilient, and adequately protected. The world should commit to the net-gain principle to give nature back more than it takes. 
The world should commit to no human-induced extinction of species, at minimum where avoidable.\n -  Ambitious global 2030 targets in line with EU commitments in this strategy. These should clearly address the drivers of biodiversity loss and be specific, measurable, actionable, relevant and time-bound.\n -  A much stronger implementation, monitoring and review process. Parties should revise their National Biodiversity Strategies and Action Plans by the end of 2021, or as a minimum, submit national commitments for the most important targets. There should be a regular review cycle to look at progress towards the", - "page_start": 19, - "page_end": 19, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "policies. In addition, by integrating policy coherence for sustainable development in all its policies, the EU will reduce the pressure on biodiversity worldwide. In all of its international cooperation, the EU should promote sustainable agricultural and fisheries practices and actions to protect and restore the world's forests. Particular attention will also be paid to sustainable water resource management, the restoration of degraded land, and the protection and restoration of biodiverse areas with high ecosystem services and climate mitigation potential. A better protection of natural ecosystems, coupled with efforts to reduce wildlife trade and consumption, will also help prevent and build up resilience to possible future diseases and pandemics. The EU will enhance its support to global efforts to apply the One Health approach 83 , which recognises the intrinsic connection between human health, animal health and healthy resilient nature.\n\nThe EU will step up support to partner countries across the world to achieve the new global targets, fight environmental crime, and tackle the drivers of biodiversity loss. 
In Africa, the EU will launch the NaturAfrica initiative to protect wildlife and key ecosystems while offering opportunities in green sectors for local populations. Similar projects will be developed in other regions. The EU will also support the Western Balkans and EU Neighbourhood countries in their efforts to protect biodiversity.\n\nIn all of its work, the EU will strengthen the links between biodiversity protection and human rights, gender, health, education, conflict sensitivity, the rights-based approach, land tenure and the role of indigenous peoples and local communities.\n\nAs part of its global efforts, the EU will promote biodiversity coalitions with partners and civil society around the world. For example, in March 2020, the Commission launched the Global Biodiversity Coalition of national parks, aquariums, botanic gardens, zoos, natural history and science museums to help raise awareness around the world on the need to protect and nurture biodiversity. The Commission will consider launching or joining other High Ambition Coalitions to help develop the post-2020 framework.\n\n## 5. CONCLUSION\n\nProtecting and restoring biodiversity is the only way to preserve the quality and continuity of human life on Earth. The commitments proposed in this strategy pave the way for ambitious and necessary changes - changes that will ensure the wellbeing and economic prosperity of present and future generations in a healthy environment. 
The implementation of these commitments will take into account the diversity of challenges across sectors, regions and Member States, recognise the need to ensure social justice, fairness and inclusiveness in line with the European Pillar of Social Rights, and will require a sense of responsibility and strong joint efforts from the EU, its Member States, stakeholders and citizens.\n\nThe Commission invites the European Parliament and the Council to endorse this strategy ahead of the 15 th Conference of the Parties to the Convention on Biological Diversity. To ensure full political ownership of this strategy, the Commission will suggest a standing progress point at the Council and at the European Parliament. It will review the strategy by 2024 to assess progress and whether further action is needed to meet its objectives.", - "page_start": 22, - "page_end": 22, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "build on the headline ambition to ensure that by 2050 all of the world's ecosystems are restored, resilient, and adequately protected. The world should commit to the net-gain principle to give nature back more than it takes. As part of this, the world should commit to no human-induced extinction of species, at minimum where avoidable.\n\nThis strategy sets out how Europe can help make this happen. As a milestone, it aims to ensure that Europe's biodiversity will be on the path to recovery by 2030 for the benefit of people, the planet, the climate and our economy, in line with the 2030 Agenda for Sustainable Development and with the objectives of the Paris Agreement on Climate Change. It addresses the five main drivers of biodiversity loss, sets out an enhanced governance framework to fill remaining gaps, ensures the full implementation of EU legislation, and pulls together all existing efforts. This strategy is enterprising and incentivising in spirit and action. 
It reflects the fact that protecting and restoring nature will need more than regulation alone. It will require action by citizens, businesses, social partners and the research and knowledge community, as well as strong partnerships between local, regional, national and European level. This strategy is in line with the ambitions and commitment set out in President von der Leyen's Political Guidelines and in the European Green Deal.\n\nAdopted in the heart of the COVID-19 pandemic, this strategy will also be a central element of the EU's recovery plan. It will be crucial to prevent and build resilience to future zoonosis outbreaks and to provide immediate business and investment opportunities for restoring the EU's economy.\n\nAll new initiatives and proposals will be underpinned by the Commission's better regulation tools. Based on public consultations and on the identification of the environmental, social and economic impacts, impact assessments will contribute to ensuring that all initiatives achieve their objectives in the most effective and least burdensome way and live up to a green oath to 'do no harm'.\n\n## 2. PROTECTING AND RESTORING NATURE IN THE EUROPEAN UNION\n\nThe EU has legal frameworks, strategies and action plans to protect nature and restore habitats and species. But protection has been incomplete, restoration has been small-scale, and the implementation and enforcement of legislation has been insufficient 17 .\n\nTo put biodiversity on the path to recovery by 2030, we need to step up the protection and restoration of nature. This should be done by improving and widening our network of protected areas and by developing an ambitious EU Nature Restoration Plan.\n\n## 2.1. A coherent network of protected areas\n\nBiodiversity fares better in protected areas. However, the current network of legally protected areas, including those under strict protection, is not sufficiently large to safeguard biodiversity. 
Evidence shows that the targets defined under the Convention on Biological Diversity are insufficient to adequately protect and restore nature 18 . Global", - "page_start": 3, - "page_end": 3, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "- 9. There is a 50% reduction in the number of Red List species threatened by invasive alien species.\n - 10. The losses of nutrients from fertilisers are reduced by 50%, resulting in the reduction of the use of fertilisers by at least 20%.\n - 11. Cities with at least 20,000 inhabitants have an ambitious Urban Greening Plan.\n - 12. No chemical pesticides are used in sensitive areas such as EU urban green areas.\n - 13. The negative impacts on sensitive species and habitats, including on the seabed through fishing and extraction activities, are substantially reduced to achieve good environmental status.\n - 14. The by-catch of species is eliminated or reduced to a level that allows species recovery and conservation.\n\n## 3. ENABLING TRANSFORMATIVE CHANGE\n\n## 3.1. A new governance framework\n\nIn the EU, there is currently no comprehensive governance framework to steer the implementation of biodiversity commitments agreed at national, European or international level. To address the gap, the Commission will put in place a new European biodiversity governance framework. This will help map obligations and commitments and set out a roadmap to guide their implementation.\n\nAs part of this new framework, the Commission will put in place a monitoring and review mechanism. This will include a clear set of agreed indicators and will enable regular progress assessment and set out corrective action if necessary. This mechanism will feed the Environmental Implementation Review and contribute to the European Semester.\n\nThe new governance framework will ensure co-responsibility and co-ownership by all relevant actors in meeting the EU's biodiversity commitments. 
It will support administrative capacity building, transparency, stakeholder dialogue, and participatory governance at different levels.\n\nThe Commission will assess the progress and suitability of this approach in 2023, and consider whether a legally binding approach to governance is needed.\n\n## 3.2. Stepping up implementation and enforcement of EU environmental legislation\n\nAll environmental legislation relies on proper implementation and enforcement. Over the last 30 years, the EU has put in place a solid legislative framework to protect and restore its natural capital. However, recent evaluations show that although legislation is fit for purpose, implementation on the ground is lagging behind 60 . This is having dramatic consequences on biodiversity and comes with a substantial economic cost 61 . The full implementation and enforcement of EU environmental legislation is therefore at the heart of this strategy , for which political support and financial and human resources will need to be prioritised.", - "page_start": 15, - "page_end": 15, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "## 2.2. An EU Nature Restoration Plan: restoring ecosystems across land and sea\n\nProtecting the nature we have will not be enough to bring nature back into our lives. To reverse biodiversity loss, the world needs to be more ambitious on nature restoration. With a new EU Nature Restoration Plan , Europe will lead the way.\n\nThe plan will help improve the health of existing and new protected areas, and bring diverse and resilient nature back to all landscapes and ecosystems. This means reducing pressures on habitats and species, and ensuring all use of ecosystems is sustainable. It also means supporting the recovery of nature, limiting soil sealing and urban sprawl, and tackling pollution and invasive alien species. 
The plan will create jobs, reconcile economic activities with nature growth and help ensure the long-term productivity and value of our natural capital.\n\n## 2.2.1. Strengthening the EU legal framework for nature restoration\n\nNature restoration is already partially required from the Member States in existing EU legislation 28 . However, significant implementation and regulatory gaps hinder progress . For instance, there is no requirement for Member States to have biodiversity restoration plans. There are not always clear or binding targets and timelines and no definition or criteria on restoration or on the sustainable use of ecosystems. There is also no requirement to comprehensively map, monitor or assess ecosystem services, health or restoration efforts. These issues are exacerbated by the gaps in implementation that prevent the existing legislation from achieving its objectives 29 . Stronger implementation support and enforcement is required. To ensure that nature restoration across land and sea picks up, increases the EU's resilience, and contributes to climate change mitigation and adaptation as a key nature-based solution, this strategy puts forward two strands of actions:\n\n -  Firstly, and subject to an impact assessment, the Commission will put forward a proposal for legally binding EU nature restoration targets in 2021 to restore degraded ecosystems, in particular those with the most potential to capture and store carbon and to prevent and reduce the impact of natural disasters. This will identify the conditions in which the targets must be met, as well as the most effective measures to reach them. 
The impact assessment will also look at the possibility of an EU-wide methodology to map, assess and achieve good condition of ecosystems so they can deliver benefits such as climate regulation, water regulation, soil health, pollination and disaster prevention and protection.\n -  In that context, the Commission will request and support Member States to raise the level of implementation of existing legislation within clear deadlines. It will in particular request Member States to ensure no deterioration in conservation trends and status of all protected habitats and species by 2030 30 . In addition, Member States will have to ensure that at least 30% of species and habitats not", - "page_start": 6, - "page_end": 6, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "principle 79 and taking into account the call of the European Parliament 80 . In parallel, the EU will continue to fund research on the impact of deep-sea mining activities and on environmentally-friendly technologies. The EU should also advocate for more transparency in international bodies such as the International Seabed Authority.\n\n## 4.2.2. Trade policy\n\nTrade policy will actively support and be part of the ecological transition . In this spirit, the Commission will ensure full implementation and enforcement of the biodiversity provisions in all trade agreements, including through the EU Chief Trade Enforcement Officer. The Commission will better assess the impact of trade agreements on biodiversity, with follow-up action to strengthen the biodiversity provisions of existing and new agreements if relevant. The Commission will also present in 2021 a legislative proposal and other measures to avoid or minimise the placing of products associated with deforestation or forest degradation on the EU market 81 , and to promote forest-friendly imports and value chains. The Commission will take a number of steps to crack down on illegal wildlife trade . 
This trade contributes to the depletion or extinction of entire species, is the world's fourth most lucrative black market and is thought to be one of the causes behind the emergence of zoonotic diseases. It is a human, economic and environmental duty to dismantle it.\n\nWith this in mind, the Commission will revise the EU Action Plan against Wildlife Trafficking in 2021 and propose a further tightening of the rules on EU ivory trade later this year. It will explore a possible revision of the Environmental Crime Directive, including by looking at expanding its scope and introducing specific provisions for types and levels of criminal sanctions. It will consider strengthening the coordinating and investigative capacities of the European Anti-Fraud Office (OLAF) to work with Member States and non-EU countries to prevent illicit trade and the entry of illicit products into the Single Market.\n\nThe Commission will continue to engage with partner countries to ensure a smooth and fair transition, mobilising in particular Aid for Trade to ensure that partners reap the benefits of biodiversity-friendly trade.\n\n## 4.2.3. International cooperation, neighbourhood policy and resource mobilisation\n\nDelivering an ambitious post-2020 global biodiversity framework will require greater cooperation with partners, increased support and financing and phasing out of subsidies harmful to biodiversity. In the last decade, the EU and its Member States collectively upheld their commitment to double financial flows to developing countries for biodiversity 82 . The EU is ready to continue working with its partners and further increase its support post-2020. This will be part of its work on biodiversity conservation, restoration, sustainable use and mainstreaming in all development and partnership", - "page_start": 21, - "page_end": 21, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "## 3.3.2. 
Investments, pricing and taxation\n\nTackling biodiversity loss and restoring ecosystems will require significant public and private investments at national and European level. This will mean making the most of all relevant EU programmes and financing instruments. The Commission will strengthen its biodiversity proofing framework 69 , inter alia by using in an appropriate way the criteria established under the EU taxonomy, to ensure that EU funding supports biodiversity-friendly investments.\n\nTo meet the needs of this strategy, including investment priorities for Natura 2000 and green infrastructure, at least €20 billion a year 70 should be unlocked for spending on nature . This will require mobilising private and public funding at national and EU level 71 , including through a range of different programmes in the next long-term EU budget. Moreover, as nature restoration will make a major contribution to climate objectives, a significant proportion of the 25% of the EU budget dedicated to climate action will be invested on biodiversity and nature-based solutions.\n\nUnder Invest EU, a dedicated natural-capital and circular-economy initiative will be established to mobilise at least €10 billion over the next 10 years, based on public/private blended finance. Nature and biodiversity is also a priority for the European Green Deal Investment Plan. To help unlock the investment needed, the EU must provide long-term certainty for investors and help embed sustainability in the financial system. The EU sustainable finance taxonomy will help guide investment towards a green recovery and the deployment of nature-based solutions. In 2021, the Commission will adopt a delegated act under the Taxonomy Regulation 72 to establish a common classification of economic activities that substantially contribute to protecting and restoring biodiversity and ecosystems. 
This will be further supported by a Renewed Sustainable Finance Strategy later this year which will help ensure that the financial system contributes to mitigating existing and future risks to biodiversity and better reflect how biodiversity loss affects companies' profitability and long-term prospects 73 .\n\nThe Commission will further promote tax systems and pricing that reflect environmental costs, including biodiversity loss. This should encourage changes in national fiscal systems to shift the tax burden from labour to pollution, under-priced resources, and other environmental externalities. The ' user pays' and 'polluter pays' principles have to be applied to prevent and correct environmental degradation.\n\nPublic authorities' purchasing power represents 14% of EU GDP and can serve as a powerful driver of demand for the products and services of companies that invest in or contribute to nature-based solutions. To tap into this potential, when proposing further", - "page_start": 17, - "page_end": 17, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "currently in favourable status are in that category or show a strong positive trend. The Commission and the European Environmental Agency will provide guidance to Member States in 2020 on how to select and prioritise species and habitats.\n\n## 2.2.2. Bringing nature back to agricultural land\n\nAs guardians of our land, farmers play a vital role in preserving biodiversity. They are among the first to feel the consequences when biodiversity is lost but also among the first to reap the benefits when it is restored. Biodiversity enables them to provide us with safe, sustainable, nutritious and affordable food and provides them with the income they need to thrive and develop. 
European farmers are an essential part of the EU's future and must continue to be the social and economic hub of many communities across our Union.\n\nAt the same time, certain agricultural practices are a key driver of biodiversity decline. This is why it is important to work with farmers to support and incentivise the transition to fully sustainable practices . Improving the condition and diversity of agroecosystems will increase the sector's resilience to climate change, environmental risks and socioeconomic shocks, while creating new jobs, for example in organic farming, rural tourism or recreation.\n\nTo support the long-term sustainability of both nature and farming, this strategy will work in tandem with the new Farm to Fork Strategy and the new Common Agricultural Policy (CAP) , including by promoting eco-schemes and result-based payment schemes. In implementing the Biodiversity and the Farm to Fork Strategies, the Commission will closely monitor progress and improvements in terms of food security and farmers income. The Commission will ensure that the CAP Strategic plans are assessed against robust climate and environmental criteria, and that Member States set explicit national values for the relevant targets set in this strategy, as well as in the Farm to Fork Strategy. These plans should lead to sustainable practices such as precision agriculture, organic farming, agro-ecology, agro-forestry, low-intensive permanent grassland, and stricter animal welfare standards.\n\nFarmland birds and insects, particularly pollinators, are key indicators of the health of agroecosystems and are vital for agricultural production and food security. Their alarming decline must be reversed. As set out in the Farm to Fork Strategy, the Commission will take action to reduce by 50% the overall use of - and risk from chemical pesticides by 2030 and reduce by 50% the use of more hazardous pesticides by 2030. 
This must be supported by the full implementation of the EU Pollinators initiative 31 . By the end of 2020, the Commission will review the initiative and propose additional measures if necessary. To provide space for wild animals, plants, pollinators and natural pest regulators, there is an urgent need to bring back at least 10% of agricultural area under high-diversity landscape features . These include, inter alia , buffer strips, rotational or non-rotational fallow land, hedges, non-productive trees, terrace walls, and ponds. These help enhance carbon sequestration, prevent soil erosion and depletion, filter air and water, and support climate adaptation. In addition, more biodiversity often helps lead to more agricultural production. Member States will need to translate the 10% EU target to a lower geographical scale to ensure connectivity among habitats, especially through the CAP instruments and CAP Strategic Plans, in line with the Farm to Fork Strategy, and through the implementation of the Habitats Directive. The", - "page_start": 7, - "page_end": 7, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "climate change, the effects of erosion and losses of soil organic carbon are becoming increasingly apparent. Desertification is also a growing threat in the EU 35 .\n\nIt is therefore essential to step up efforts to protect soil fertility, reduce soil erosion and increase soil organic matter . This should be done by adopting sustainable soil management practices, including as part of the CAP. Significant progress is also needed on identifying contaminated soil sites, restoring degraded soils, defining the conditions for their good ecological status, introducing restoration objectives, and improving the monitoring of soil quality.\n\nTo address these issues in a comprehensive way and help to fulfil EU and international commitments on land-degradation neutrality, the Commission will update the EU Soil Thematic Strategy 36 in 2021. 
The Zero Pollution Action Plan for Air, Water and Soil that the Commission will adopt in 2021 will also look at these issues. Soil sealing and rehabilitation of contaminated brownfields will be addressed in the upcoming Strategy for a Sustainable Built Environment. A mission in the area of soil health and food under Horizon Europe 37 will aim to develop solutions for restoring soil health and functions.\n\n## 2.2.4. Increasing the quantity of forests and improving their health and resilience\n\nForests are hugely important for biodiversity, climate and water regulation, the provision of food, medicines and materials, carbon sequestration and storage, soil stabilisation and the purification of air and water. They are also a natural home for recreation and learning about nature. Foresters have a key role to play in ensuring sustainable forest management and in restoring and sustaining biodiversity in forests.\n\nIn addition to strictly protecting all remaining EU primary and old-growth forests, the EU must increase the quantity, quality and resilience of its forests , notably against fires, droughts, pests, diseases and other threats likely to increase with climate change. To retain their function for both biodiversity and climate, all forests need to be preserved in good health. More resilient forests can support a more resilient economy. They also play an important role in providing materials, products and services, which are key for the circular bio-economy.\n\nTo make this happen, the Commission will propose a dedicated EU Forest Strategy in 2021 in line with our wider biodiversity and climate neutrality ambitions. It will include a roadmap for planting at least 3 billion additional trees in the EU by 2030 , in full respect of ecological principles. This will create substantial job opportunities linked to the collecting and cultivating of seeds, planting seedlings, and ensuring their development. 
Tree planting is particularly beneficial in cities, while in rural areas it can work well with agroforestry, landscape features and increased carbon sequestration. At the same time, the Commission will continue to work with Member States to ensure that the EU is sufficiently equipped to prevent and respond to major forest fires, which can inflict significant damages on forest biodiversity.", - "page_start": 9, - "page_end": 9, - "source_file": "legal5_eubiodiversity_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "legal5_eubiodiversity_cc4.pdf", - "query": "Was there a biodiversity governance framework in place in the EU before the European Commission's proposal?", - "target_page": 16, - "target_passage": "In the EU, there is currently no comprehensive governance framework to steer the implementation of biodiversity commitments agreed at national, European or international level. To address the gap, the Commission will put in place a new European biodiversity governance framework. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "- 9. There is a 50% reduction in the number of Red List species threatened by invasive alien species.\n - 10. The losses of nutrients from fertilisers are reduced by 50%, resulting in the reduction ofthe use of fertilisers by at least 20%.\n - 11. Cities with at least 20,000 inhabitants have an ambitious Urban Greening Plan.\n - 12. No chemical pesticides are used in sensitive areas such as EU urban green areas.\n - 13. The negative impacts on sensitive species and habitats, including on the seabed through fishing and extraction activities, are substantially reduced to achieve good environmental status.\n - 14. The by-catch of species is eliminated or reduced to a level that allows species recovery and conservation.\n\n## 3. ENABLING TRANSFORMATIVE CHANGE\n\n## 3.1. 
A new governance framework\n\nIn the EU, there is currently no comprehensive governance framework to steer the implementation of biodiversity commitments agreed at national, European or international level. To address the gap, the Commission will put in place a new European biodiversity governance framework . This will help map obligations and commitments and set out a roadmap to guide their implementation.\n\nAs part of this new framework, the Commission will put in place a monitoring and review mechanism. This will include a clear set of agreed indicators and will enable regular progress assessment and set out corrective action if necessary. This mechanism will feed the Environmental Implementation Review and contribute to the European Semester.\n\nThe new governance framework will ensure co-responsibility and co-ownership by all relevant actors in meeting the EU's biodiversity commitments. It will support administrative capacity building, transparency, stakeholder dialogue, and participatory governance at different levels.\n\nThe Commission will assess the progress and suitability of this approach in 2023, and consider whether a legally binding approach to governance is needed.\n\n## 3.2. Stepping up implementation and enforcement of EU environmental legislation\n\nAll environmental legislation relies on proper implementation and enforcement. Over the last 30 years, the EU has put in place a solid legislative framework to protect and restore its natural capital. However, recent evaluations show that although legislation is fit for purpose, implementation on the ground is lagging behind 60 . This is having dramatic consequences on biodiversity and comes with a substantial economic cost 61 . 
The full implementation and enforcement of EU environmental legislation is therefore at the heart of this strategy , for which political support and financial and human resources will need to be prioritised.", - "page_start": 15, - "page_end": 15, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "States and the European Environment Agency, will put forward in 2020 criteria and guidance for identifying and designating additional areas, including a definition of strict protection, as well as for appropriate management planning. In doing so, it will indicate how other effective area-based conservation measures and greening of cities could contribute to the targets.\n\nThe targets relate to the EU as a whole and could be broken down according to the EU bio-geographical regions and sea basins or at a more local level. Every Member State will have to do its fair share of the effort based on objective ecological criteria, recognising that each country has a different quantity and quality of biodiversity. Particular focus will be placed on protecting and restoring the tropical and sub-tropical marine and terrestrial ecosystems in the EU's outermost regions given their exceptionally high biodiversity value.\n\nIn addition, in order to have a truly coherent and resilient Trans-European Nature Network, it will be important to set up ecological corridors to prevent genetic isolation, allow for species migration, and maintain and enhance healthy ecosystems. In this context, investments in green and blue infrastructure 27 and cooperation across borders among Member States should be promoted and supported, including through the European Territorial Cooperation.\n\nThe Commission will aim to agree the criteria and guidance for additional designations with Member States by the end of 2021. Member States will then have until the end of 2023 to demonstrate significant progress in legally designating new protected areas and integrating ecological corridors. 
On this basis, the Commission will assess by 2024 whether the EU is on track to meet its 2030 targets or whether stronger actions, including EU legislation, are needed.\n\nFinally, the Overseas Countries and Territories also host important biodiversity hotspots, not governed by EU environmental rules. The Commission encourages relevant Member States to consider promoting equal or equivalent rules in these countries and territories.\n\n## Nature protection: key commitments by 2030\n\n- 1. Legally protect a minimum of 30% of the EU's land area and 30% of the EU's sea area and integrate ecological corridors, as part of a true Trans-European Nature Network.\n- 2. Strictly protect at least a third of the EU's protected areas, including all remaining EU primary and old-growth forests.\n- 3. Effectively manage all protected areas, defining clear conservation objectives and measures, and monitoring them appropriately.", - "page_start": 5, - "page_end": 5, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "<!-- image -->\n\nBrussels, 20.5.2020 COM(2020) 380 final\n\n## COMMUNICATION FROM THE COMMISSION TO THE EUROPEAN PARLIAMENT, THE COUNCIL, THE EUROPEAN ECONOMIC AND SOCIAL COMMITTEE AND THE COMMITTEE OF THE REGIONS\n\nEU Biodiversity Strategy for 2030\n\nBringing nature back into our lives", - "page_start": 0, - "page_end": 0, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "principle 79 and taking into account the call of the European Parliament 80 . In parallel, the EU will continue to fund research on the impact of deep-sea mining activities and on environmentally-friendly technologies. The EU should also advocate for more transparency in international bodies such as the International Seabed Authority.\n\n## 4.2.2. Trade policy\n\nTrade policy will actively support and be part of the ecological transition . 
In this spirit, the Commission will ensure full implementation and enforcement of the biodiversity provisions in all trade agreements, including through the EU Chief Trade Enforcement Officer. The Commission will better assess the impact of trade agreements on biodiversity, with follow-up action to strengthen the biodiversity provisions of existing and new agreements if relevant. The Commission will also present in 2021 a legislative proposal and other measures to avoid or minimise the placing of products associated with deforestation or forest degradation on the EU market 81 , and to promote forest-friendly imports and value chains. The Commission will take a number of steps to crack down on illegal wildlife trade . This trade contributes to the depletion or extinction of entire species, is the world's fourth most lucrative black market and is thought to be one of the causes behind the emergence of zoonotic diseases. It is a human, economic and environmental duty to dismantle it.\n\nWith this in mind, the Commission will revise the EU Action Plan against Wildlife Trafficking in 2021 and propose a further tightening of the rules on EU ivory trade later this year. It will explore a possible revision of the Environmental Crime Directive, including by looking at expanding its scope and introducing specific provisions for types and levels of criminal sanctions. It will consider strengthening the coordinating and investigative capacities of the European Anti-Fraud Office (OLAF) to work with Member States and non-EU countries to prevent illicit trade and the entry of illicit products into the Single Market.\n\nThe Commission will continue to engage with partner countries to ensure a smooth and fair transition, mobilising in particular Aid for Trade to ensure that partners reap the benefits of biodiversity-friendly trade.\n\n## 4.2.3. 
International cooperation, neighbourhood policy and resource mobilisation\n\nDelivering an ambitious post-2020 global biodiversity framework will require greater cooperation with partners, increased support and financing and phasing out of subsidies harmful to biodiversity. In the last decade, the EU and its Member States collectively upheld their commitment to double financial flows to developing countries for biodiversity 82 . The EU is ready to continue working with its partners and further increase its support post-2020. This will be part of its work on biodiversity conservation, restoration, sustainable use and mainstreaming in all development and partnership", - "page_start": 21, - "page_end": 21, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "policies. In addition, by integrating policy coherence for sustainable development in all its policies, the EU will reduce the pressure on biodiversity worldwide. In all of its international cooperation, the EU should promote sustainable agricultural and fisheries practices and actions to protect and restore the world's forests. Particular attention will also be paid to sustainable water resource management, the restoration of degraded land, and the protection and restoration of biodiverse areas with high ecosystem services and climate mitigation potential. A better protection of natural ecosystems, coupled with efforts to reduce wildlife trade and consumption, will also help prevent and build up resilience to possible future diseases and pandemics. The EU will enhance its support to global efforts to apply the One Health approach 83 , which recognises the intrinsic connection between human health, animal health and healthy resilient nature.\n\nThe EU will step up support to partner countries across the world to achieve the new global targets, fight environmental crime, and tackle the drivers of biodiversity loss. 
In Africa, the EU will launch the NaturAfrica initiative to protect wildlife and key ecosystems while offering opportunities in green sectors for local populations. Similar projects will be developed in other regions. The EU will also support the Western Balkans and EU Neighbourhood countries in their efforts to protect biodiversity.\n\nIn all of its work, the EU will strengthen the links between biodiversity protection and human rights , gender, health, education, conflict sensitivity, the rights-based approach, land tenure and the role of indigenous peoples and local communities.\n\nAs part of its global efforts, the EU will promote biodiversity coalitions with partners and civil society around the world. For example, in March 2020, the Commission launched the Global Biodiversity Coalition of national parks, aquariums, botanic gardens, zoos, natural history and sciencemuseums to help raise awareness around the world on the need to protect and nurture biodiversity. The Commission will consider launching or joining other High Ambition Coalitions to help develop the post-2020 framework.\n\n## 5. CONCLUSION\n\nProtecting and restoring biodiversity is the only way to preserve the quality and continuity of human life on Earth. The commitments proposed in this strategy pave the way for ambitious and necessary changes - changes that will ensure the wellbeing and economic prosperity of present and future generations in a healthy environment. 
The implementation of these commitments will take into account the diversity of challenges across sectors, regions and Member States, recognise the need to ensure social justice, fairness and inclusiveness in line with the European Pillar of Social Rights, and will require a sense of responsibility and strong joint efforts from the EU, its Member States, stakeholders and citizens.\n\nThe Commission invites the European Parliament and the Council to endorse this strategy ahead of the 15 th Conference of the Parties to the Convention on Biological Diversity. To ensure full political ownership of this strategy, the Commission will suggest a standing progress point at the Council and at the European Parliament. It will review the strategy by 2024 to assess progress and whether further action is needed to meet its objectives.", - "page_start": 22, - "page_end": 22, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "encouraging cooperation in education for environmental sustainability in 2021. This will provide guidance for schools and teachers on how to cooperate and exchange experiences across Member States on biodiversity teaching. The Commission will also provide support materials and facilitate the exchange of good practices in EU networks of teacher-training programmes.\n\n## 4. THE EUROPEAN UNION FOR AN AMBITIOUS GLOBAL BIODIVERSITY AGENDA\n\nBiodiversity is a priority of the EU's external action and an integral part of efforts to meet the United Nations Sustainable Development Goals. It will be mainstreamed throughout bilateral and multilateral engagements, through the EU's 'Green Deal diplomacy', and forthcoming green alliances 76 . The Commission will work closely with the European Parliament and Member States to ensure a high level of EU ambition and mobilise all efforts for the good of the world's biodiversity.\n\n## 4.1. 
Raising the level of ambition and commitment worldwide\n\nProtecting biodiversity is a global challenge and the next decade will be decisive. Global efforts under the United Nations Convention on Biological Diversity have largely been insufficient. Nature cannot afford any half measures or lack of ambition.\n\nIn this spirit, the EU is ready to lead all efforts - working with like-minded partners in a high-ambition coalition on biodiversity - to agree an ambitious new global framework for post-2020 at the upcoming 15 th Conference of the Parties to the Convention on Biological Diversity.\n\nWith this strategy, the Commission proposes ambitious commitments for the EU to bring to the table. The EU should also support governments and stakeholders across the globe to significantly step up their ambition and their action.\n\nThe Commission proposes that the EU ensures that the post-2020 global framework includes, at a minimum, the elements outlined below:\n\n -  Overarching global goals for biodiversity for 2050, in line with the United Nations 2030 Agenda for Sustainable Development and the vision of 'living in harmony with nature'. The ambition should be that, by 2050, all of the world's ecosystems are restored, resilient, and adequately protected. The world should commit to the net-gain principle to give nature back more than it takes. The world should commit to no human-induced extinction of species, at minimum where avoidable.\n -  Ambitious global 2030 targets in line with EU commitments in this strategy. These should clearly address the drivers of biodiversity loss and be specific, measurable, actionable, relevant and time-bound.\n -  A much stronger implementation, monitoring and review process. Parties should revise their National Biodiversity Strategies and Action Plans by the end of 2021, or as a minimum, submit national commitments for the most important targets. 
There should be a regular review cycle to look at progress towards the", - "page_start": 19, - "page_end": 19, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "## 3.3.2. Investments, pricing and taxation\n\nTackling biodiversity loss and restoring ecosystems will require significant public and private investments at national and European level. This will mean making the most of all relevant EU programmes and financing instruments. The Commission will strengthen its biodiversity proofing framework 69 , inter alia by using in an appropriate way the criteria established under the EU taxonomy, to ensure that EU funding supports biodiversity-friendly investments.\n\nTo meet the needs of this strategy, including investment priorities for Natura 2000 and green infrastructure, at least €20 billion a year 70 should be unlocked for spending on nature . This will require mobilising private and public funding at national and EU level 71 , including through a range of different programmes in the next long-term EU budget. Moreover, as nature restoration will make a major contribution to climate objectives, a significant proportion of the 25% of the EU budget dedicated to climate action will be invested on biodiversity and nature-based solutions.\n\nUnder Invest EU, a dedicated natural-capital and circular-economy initiative will be established to mobilise at least €10 billion over the next 10 years, based on public/private blended finance. Nature and biodiversity is also a priority for the European Green Deal Investment Plan. To help unlock the investment needed, the EU must provide long-term certainty for investors and help embed sustainability in the financial system. The EU sustainable finance taxonomy will help guide investment towards a green recovery and the deployment of nature-based solutions. 
In 2021, the Commission will adopt a delegated act under the Taxonomy Regulation 72 to establish a common classification of economic activities that substantially contribute to protecting and restoring biodiversity and ecosystems. This will be further supported by a Renewed Sustainable Finance Strategy later this year which will help ensure that the financial system contributes to mitigating existing and future risks to biodiversity and better reflect how biodiversity loss affects companies' profitability and long-term prospects 73 .\n\nThe Commission will further promote tax systems and pricing that reflect environmental costs, including biodiversity loss. This should encourage changes in national fiscal systems to shift the tax burden from labour to pollution, under-priced resources, and other environmental externalities. The ' user pays' and 'polluter pays' principles have to be applied to prevent and correct environmental degradation.\n\nPublic authorities' purchasing power represents 14% of EU GDP and can serve as a powerful driver of demand for the products and services of companies that invest in or contribute to nature-based solutions. To tap into this potential, when proposing further", - "page_start": 17, - "page_end": 17, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "targets, with the ability to ratchet up action if needed. These reviews should be based on an independent, science-based gap-analysis and foresight process, with common headline indicators for all Parties.\n\n -  An enabling framework to bring the ambition to life, across areas such as finance, capacity, research, innovation and technology.\n -  Fair and equitable sharing of the benefits from the use of genetic resources linked to biodiversity.\n -  A principle of equality . This includes respect for the rights and the full and effective participation of indigenous peoples and local communities. 
There should be an inclusive approach with participation of all stakeholders, including women, youth, civil society, local authorities, the private sector, academia and scientific institutions.\n\n## 4.2. Using external action to promote the EU's ambition\n\n## 4.2.1. International Ocean Governance\n\nIn line with the International Ocean Governance agenda 77 , the EU will support the conclusion of an ambitious legally binding agreement on marine biological diversity of areas beyond national jurisdiction (BBNJ) by the end of 2020. It must set clear global procedures for identifying, designating and effectively managing ecologically representative marine protected areas in the high seas. It should be ratified and implemented as quickly as possible.\n\nThe EU should also use all of its diplomatic leverage and outreach capacities to help broker agreement on the designation of three vast Marine Protected Areas in the Southern Ocean 78 , two of which were co-proposed by the EU in East Antarctica and in the Weddell Sea. If agreed, this would constitute one of the biggest acts of nature protection in history.\n\nWork will continue with partner countries and regional organisations to put in place measures to protect and sustainably use sensitive maritime ecosystems and species, including in areas beyond national jurisdiction, with a focus on marine biodiversity hotspots. 
The EU should continue supporting Small Island Developing States and other relevant partner countries to participate in meetings of regional and global organisations and bodies, and to implement relevant international commitments and regulations.\n\nThe EU will apply zero tolerance towards illegal, unreported and unregulated fishing and will combat overfishing, including through WTO negotiations on a global agreement to ban harmful fisheries subsidies .\n\nIn international negotiations, the EU should advocate that marine minerals in the international seabed area cannot be exploited before the effects of deep-sea mining on the marine environment, biodiversity and human activities have been sufficiently researched, the risks are understood and the technologies and operational practices are able to demonstrate no serious harm to the environment, in line with the precautionary", - "page_start": 20, - "page_end": 20, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "As regards the Birds and Habitats Directives, enforcement will focus on completing the Natura 2000 network , the effective management of all sites, species-protection provisions, and species and habitats that show declining trends. The Commission will also ensure that environment-related legislation with an impact on biodiversity 62 is better implemented, enforced and - where necessary - reviewed and revised.\n\nThe Commission will strive to improve compliance assurance , working closely with Member States and European networks of environmental agencies, inspectors, auditors, police, prosecutors and judges.\n\nIn addition, the Commission will support civil society's role as a compliance watchdog and will engage with Member States to improve access to justice in national courts in environmental matters for individuals and NGOs. It will also broaden standing for NGOs by proposing a revision of the Aarhus Regulation 63 .\n\n## 3.3. 
Building on an integrated and whole-of-society approach\n\n## 3.3.1. Business for biodiversity\n\nIn the partnership spirit of this strategy, all parts of the economy and society will have to play their role. Industry and business have an impact on nature, but they also produce the important innovations, partnerships and expertise that can help address biodiversity loss.\n\nTo ensure environmental and social interests are fully embedded into business strategies, the Commission will put forward a new initiative in 2021 on sustainable corporate governance . This initiative, which may take the form of a legislative proposal, will address human rights and environmental duty of care and due diligence across economic value chains in a proportionate way according to different sizes of entreprises 64 . This will help ensure that shareholder and stakeholder interests are fully aligned with the objectives set out in this strategy. In addition, in 2020, the Commission launched a review of the reporting obligations of businesses under the Non-Financial Reporting Directive 65 , with a view to improving the quality and scope of non-financial disclosures, including on environmental aspects such as biodiversity.\n\nThrough its existing platforms 66 , the Commission will help to build a European Business for Biodiversity movement, taking inspiration from recent initiatives 67 and making this movement an integral part of the European Climate Pact. Particular attention will be paid to measures to incentivise and eliminate barriers for the take-up of naturebased solutions, as these can lead to significant business and employment opportunities in various sectors 68 and are the key to innovation for economic or societal needs that rely on nature.", - "page_start": 16, - "page_end": 16, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "build on the headline ambition to ensure that by 2050 all of the world's ecosystems are restored, resilient, and adequately protected. 
The world should commit to the net-gain principle to give nature back more than it takes. As part of this, the world should commit to no human-induced extinction of species, at minimum where avoidable.\n\nThis strategy sets out how Europe can help make this happen. As a milestone, it aims to ensure that Europe's biodiversity will be on the path to recovery by 2030 for the benefit of people, the planet, the climate and our economy, in line with the 2030 Agenda for Sustainable Development and with the objectives of the Paris Agreement on Climate Change. It addresses the five main drivers of biodiversity loss, sets out an enhanced governance framework to fill remaining gaps, ensures the full implementation of EU legislation, and pulls together all existing efforts. This strategy is enterprising and incentivising in spirit and action. It reflects the fact that protecting and restoring nature will need more than regulation alone . It will require action by citizens, businesses, social partners and the research and knowledge community, as well as strong partnerships between local, regional, national and European level. This strategy is in line with the ambitions and commitment set out in President von der Leyen's Political Guidelines and in the European Green Deal.\n\nAdopted in the heart of the COVID-19 pandemic, this strategy will also be a central element of the EU's recovery plan. It will be crucial to prevent and build resilience to future zoonosis outbreaks and to provide immediate business and investment opportunities for restoring the EU's economy.\n\nAll new initiatives and proposals will be underpinned by the Commission's better regulation tools. Based on public consultations and on the identification of the environmental, social and economic impacts, impact assessments will contribute to ensuring that all initiatives achieve their objectives in the most effective and least burdensome way and live up to a green oath to 'do no harm'.\n\n## 2. 
PROTECTING AND RESTORING NATURE IN THE EUROPEAN UNION\n\nThe EU has legal frameworks, strategies and action plans to protect nature and restore habitats and species. But protection has been incomplete, restoration has been smallscale, and the implementation and enforcement of legislation has been insufficient 17 .\n\nTo put biodiversity on the path to recovery by 2030, we need to step up the protection and restoration of nature. This should be done by improving and widening our network of protected areas and by developing an ambitious EU Nature Restoration Plan .\n\n## 2.1. A coherent network of protected areas\n\nBiodiversity fares better in protected areas. However, the current network of legally protected areas, including those under strict protection, is not sufficiently large to safeguard biodiversity. Evidence shows that the targets defined under the Convention on Biological Diversity are insufficient to adequately protect and restore nature 18 . Global", - "page_start": 3, - "page_end": 3, - "source_file": "legal5_eubiodiversity_cc4.pdf" - } - ] - }, - { - "references": { - "source_file": "legal5_eubiodiversity_cc4.pdf", - "query": "What is the EU's tolerance for unauthorised fishing?", - "target_page": 21, - "target_passage": "The EU will apply zero tolerance towards illegal, unreported and unregulated fishing", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "energy 41 . It will also review in 2021 the data on biofuels with high indirect land-use change risk and establish a trajectory for their gradual phase out by 2030.\n\nThe overall objective is to ensure that EU regulatory framework on bioenergy is in line with the increased ambition set out in the European Green Deal.\n\n## 2.2.6. Restoring the good environmental status of marine ecosystems\n\nRestored and properly protected marine ecosystems bring substantial health, social and economic benefits to coastal communities and the EU as a whole. 
The need for stronger action is all the more acute as marine and coastal ecosystem biodiversity loss is severely exacerbated by global warming 42 .\n\nAchieving good environmental status of marine ecosystems, including through strictly protected areas, must involve the restoration of carbon-rich ecosystems as well as important fish spawning and nursery areas. Some of today's sea uses endanger food security, fishers' livelihoods, and the fishery and seafood sectors. Marine resources must be harvested sustainably and there must be zero-tolerance for illegal practices . In this regard, the full implementation of the EU's Common Fisheries Policy, the Marine Strategy Framework Directive and the Birds and Habitats Directives is essential.\n\nThe application of an ecosystem-based management approach under EU legislation 43 will reduce the adverse impacts of fishing, extraction and other human activities, especially on sensitive species and seabed habitats. To support this, national maritime spatial plans , which Member States have to deliver in 2021, should aim at covering all maritime sectors and activities, as well as area-based conservation-management measures. 44 The Commission will also propose a new action plan to conserve fisheries resources and protect marine ecosystems by 2021. Where necessary, measures will be introduced to limit the use of fishing gear most harmful to biodiversity, including on the seabed. It will also look at how to reconcile the use of bottom-contacting fishing gear with biodiversity goals, given it is now the most damaging activity to the seabed. This must be done in a fair and just way for all. The European Maritime and Fisheries Fund should also support the transition to more selective and less damaging fishing techniques.\n\nHealthy fish stocks are key to the long-term prosperity of fishermen and the health of our oceans and biodiversity. 
This makes it all the more important to maintain or reduce fishing mortality at or under Maximum Sustainable Yield levels . This will help achieve a healthy population age and size distribution for fish stocks.\n\nThe by-catch of species threatened with extinction must also be eliminated or reduced to a level that allows full recovery. This should also be the case for those in bad conservation status or not in good environmental status. Furthermore, the by-catch of other species 45 must be eliminated or, where this is not possible, minimised so as not to", - "page_start": 11, - "page_end": 11, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "targets, with the ability to ratchet up action if needed. These reviews should be based on an independent, science-based gap-analysis and foresight process, with common headline indicators for all Parties.\n\n -  An enabling framework to bring the ambition to life, across areas such as finance, capacity, research, innovation and technology.\n -  Fair and equitable sharing of the benefits from the use of genetic resources linked to biodiversity.\n -  A principle of equality . This includes respect for the rights and the full and effective participation of indigenous peoples and local communities. There should be an inclusive approach with participation of all stakeholders, including women, youth, civil society, local authorities, the private sector, academia and scientific institutions.\n\n## 4.2. Using external action to promote the EU's ambition\n\n## 4.2.1. International Ocean Governance\n\nIn line with the International Ocean Governance agenda 77 , the EU will support the conclusion of an ambitious legally binding agreement on marine biological diversity of areas beyond national jurisdiction (BBNJ) by the end of 2020. It must set clear global procedures for identifying, designating and effectively managing ecologically representative marine protected areas in the high seas. 
It should be ratified and implemented as quickly as possible.\n\nThe EU should also use all of its diplomatic leverage and outreach capacities to help broker agreement on the designation of three vast Marine Protected Areas in the Southern Ocean 78 , two of which were co-proposed by the EU in East Antarctica and in the Weddell Sea. If agreed, this would constitute one of the biggest acts of nature protection in history.\n\nWork will continue with partner countries and regional organisations to put in place measures to protect and sustainably use sensitive maritime ecosystems and species, including in areas beyond national jurisdiction, with a focus on marine biodiversity hotspots. The EU should continue supporting Small Island Developing States and other relevant partner countries to participate in meetings of regional and global organisations and bodies, and to implement relevant international commitments and regulations.\n\nThe EU will apply zero tolerance towards illegal, unreported and unregulated fishing and will combat overfishing, including through WTO negotiations on a global agreement to ban harmful fisheries subsidies .\n\nIn international negotiations, the EU should advocate that marine minerals in the international seabed area cannot be exploited before the effects of deep-sea mining on the marine environment, biodiversity and human activities have been sufficiently researched, the risks are understood and the technologies and operational practices are able to demonstrate no serious harm to the environment, in line with the precautionary", - "page_start": 20, - "page_end": 20, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "principle 79 and taking into account the call of the European Parliament 80 . In parallel, the EU will continue to fund research on the impact of deep-sea mining activities and on environmentally-friendly technologies. 
The EU should also advocate for more transparency in international bodies such as the International Seabed Authority.\n\n## 4.2.2. Trade policy\n\nTrade policy will actively support and be part of the ecological transition . In this spirit, the Commission will ensure full implementation and enforcement of the biodiversity provisions in all trade agreements, including through the EU Chief Trade Enforcement Officer. The Commission will better assess the impact of trade agreements on biodiversity, with follow-up action to strengthen the biodiversity provisions of existing and new agreements if relevant. The Commission will also present in 2021 a legislative proposal and other measures to avoid or minimise the placing of products associated with deforestation or forest degradation on the EU market 81 , and to promote forest-friendly imports and value chains. The Commission will take a number of steps to crack down on illegal wildlife trade . This trade contributes to the depletion or extinction of entire species, is the world's fourth most lucrative black market and is thought to be one of the causes behind the emergence of zoonotic diseases. It is a human, economic and environmental duty to dismantle it.\n\nWith this in mind, the Commission will revise the EU Action Plan against Wildlife Trafficking in 2021 and propose a further tightening of the rules on EU ivory trade later this year. It will explore a possible revision of the Environmental Crime Directive, including by looking at expanding its scope and introducing specific provisions for types and levels of criminal sanctions. 
It will consider strengthening the coordinating and investigative capacities of the European Anti-Fraud Office (OLAF) to work with Member States and non-EU countries to prevent illicit trade and the entry of illicit products into the Single Market.\n\nThe Commission will continue to engage with partner countries to ensure a smooth and fair transition, mobilising in particular Aid for Trade to ensure that partners reap the benefits of biodiversity-friendly trade.\n\n## 4.2.3. International cooperation, neighbourhood policy and resource mobilisation\n\nDelivering an ambitious post-2020 global biodiversity framework will require greater cooperation with partners, increased support and financing and phasing out of subsidies harmful to biodiversity. In the last decade, the EU and its Member States collectively upheld their commitment to double financial flows to developing countries for biodiversity 82 . The EU is ready to continue working with its partners and further increase its support post-2020. This will be part of its work on biodiversity conservation, restoration, sustainable use and mainstreaming in all development and partnership", - "page_start": 21, - "page_end": 21, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "States and the European Environment Agency, will put forward in 2020 criteria and guidance for identifying and designating additional areas, including a definition of strict protection, as well as for appropriate management planning. In doing so, it will indicate how other effective area-based conservation measures and greening of cities could contribute to the targets.\n\nThe targets relate to the EU as a whole and could be broken down according to the EU bio-geographical regions and sea basins or at a more local level. Every Member State will have to do its fair share of the effort based on objective ecological criteria, recognising that each country has a different quantity and quality of biodiversity. 
Particular focus will be placed on protecting and restoring the tropical and sub-tropical marine and terrestrial ecosystems in the EU's outermost regions given their exceptionally high biodiversity value.\n\nIn addition, in order to have a truly coherent and resilient Trans-European Nature Network, it will be important to set up ecological corridors to prevent genetic isolation, allow for species migration, and maintain and enhance healthy ecosystems. In this context, investments in green and blue infrastructure 27 and cooperation across borders among Member States should be promoted and supported, including through the European Territorial Cooperation.\n\nThe Commission will aim to agree the criteria and guidance for additional designations with Member States by the end of 2021. Member States will then have until the end of 2023 to demonstrate significant progress in legally designating new protected areas and integrating ecological corridors. On this basis, the Commission will assess by 2024 whether the EU is on track to meet its 2030 targets or whether stronger actions, including EU legislation, are needed.\n\nFinally, the Overseas Countries and Territories also host important biodiversity hotspots, not governed by EU environmental rules. The Commission encourages relevant Member States to consider promoting equal or equivalent rules in these countries and territories.\n\n## Nature protection: key commitments by 2030\n\n- 1. Legally protect a minimum of 30% of the EU's land area and 30% of the EU's sea area and integrate ecological corridors, as part of a true Trans-European Nature Network.\n- 2. Strictly protect at least a third of the EU's protected areas, including all remaining EU primary and old-growth forests.\n- 3. 
Effectively manage all protected areas, defining clear conservation objectives and measures, and monitoring them appropriately.", - "page_start": 5, - "page_end": 5, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "## GETTING IN TOUCH WITH THE EU\n\n## In person\n\nAll over the European Union there are hundreds of Europe Direct centres. You can /find the address of the centre nearest you online (european-union.europa.eu/contact-eu/meet-us\\_en).\n\n## On the phone or in writing\n\nEurope Direct is a service that answers your questions about the European Union. You can contact this service:\n\n - · by freephone: 00 800 6 7 8 9 10 11 (certain operators may charge for these calls),\n - · at the following standard number: +32 22999696,\n - · via the following form: european-union.europa.eu/contact-eu/write-us\\_en.\n\n## FINDING INFORMATION ABOUT THE EU\n\n## Online\n\nInformation about the European Union in all the o/fficial languages of the EU is available on the Europa website (european-union.europa.eu).\n\n## EU publications\n\nYou can view or order EU publications at op.europa.eu/en/publications. Multiple copies of free publications can be obtained by contacting Europe Direct or your local documentation centre (european-union.europa.eu/contact-eu/meet-us\\_en).\n\n## EU law and related documents\n\nFor access to legal information from the EU, including all EU law since 1951 in all the o/fficial language versions, go to EUR-Lex (eur-lex.europa.eu).\n\n## EU open data\n\nThe portal data.europa.eu provides access to open datasets from the EU institutions, bodies and agencies. These can be downloaded and reused for free, for both commercial and non-commercial purposes. 
The portal also provides access to a wealth of datasets from European countries.", - "page_start": 162, - "page_end": 162, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "tackling undeclared work' provides fact sheets of the type and quantity of undeclared work in all EU Member States; 464 Eurofound published several reports on platform work, 465 and the FRA had a series of publications and fact sheets on severe cases of exploitation, particularly of migrant workforces. 466 Also, the creation of the European Labour Authority (ELA) 467 is partly a consequence of the often irregular working conditions of mobile, posted, contracted or seasonal workers who leave their country to work in the EU or in another European country. ELA particularly aims to mitigate such critical issues related to labour mobility and social security coordination between countries.\n\nIn this report , the quantitative data and the interpretation of the developments will cover - in an ideal case the period 2005 to 2020 . In 2004, a major extension of the EU took place, from 15 to 25 Member States. If it is not possible to cover the whole period, the analysis is limited to the maximum possible period. If comparability is high, for a very few selected data a further look back to the 1990s was taken.\n\nMoreover, there can be major comparability difficulties caused by the change of methodological approaches, geographical coverage and other context factors during the last 10 to 30 years. Major challenges for comparative assessments of EU-wide harmonised data collections from different years were:\n\n - · The EU went through several enlargement processes , expanded from EU-12 to EU-15 in 1994, expanded from EU-15 to EU-25 in 2004, to EU27 in 2007 and to EU28 in 2013, and from 2020 on - due to the departure of the United Kingdom - the EU consists of 27 Member States. 
In statistical publications the identifier EU27\\_2020 is often used to distinguish this period from the EU27 phase between 2008 and 2012, before Croatia joined and the EU27 became EU28.\n - · Methodologies of data collection changed , questions in surveys were abandoned or changed, and sample sizes or structures changed, for example, the given period in survey questions changed. One example is from the EWCS: the time categories for health-related absence from work changed from 'between 10 and 20 days' to absence of 'more than 15 days'.\n - · Important structural decisions were taken in the sector of economic statistics , like the change of the statistical composition and the coding of economic sectors, NACE Code 1, Revision 1 (NACE 1.1) was applied until 2007, and from 2008 NACE Code 2 is applied.\n - · The survey providers use(d) for occupation and educational attainment different categories and aggregations levels, for example, ESEG, ISCED or ISCO.\n - · Some important categories and definitions are not fully harmonised in statistics, for example, the definition of 'manual worker' or of 'migration status'. 468\n\n## 7.3 Qualitative data and research", - "page_start": 133, - "page_end": 133, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- 112 EU-OSHA, 2018: Management of psychosocial risks in European workplaces : evidence from the second European survey of enterprises on new and emerging risks (ESENER-2)\n - 113 EU-OSHA, 2018: Foresight on new and emerging occupational safety and health risks associated with digitalisation by 2025\n - 114 Wellbeing is often measured as composite indicator covering several aspects of work, for example, in the WHO-5 Well-Being index.\n - 115 Incidence rate = number of work accidents per 100,000 workers. The number of EU Member States changed significantly in 1995 from (EU-12 to EU-15) and 2004 (from EU-15 to EU-25). 
That is the reason why we use here the incidence rate from ESAW as indicator and not the total number.\n - 116 Eurostat: Statistics in focus, Theme 3-16/2001: Accidents at work in the EU 1998-1999 (p. 2).", - "page_start": 144, - "page_end": 144, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- 452 European Chemical Agency ECHA (https://echa.europa.eu/home) Exposure scenario examples\n - 453 European Centre for Disease Prevention and Control, https://www.ecdc.europa.eu/en\n - 454 European Maritime Safety Agency EMSA (http://www.emsa.europa.eu/ ), Section on Safety and Security http://www.emsa.europa.eu/we-do/safety.html\n - 455 Fundamental Rights Agency FRA, https://fra.europa.eu/en, Section on 'Trafficking and labour exploitation, e.g the report from June 2021 titled: Protecting migrants in an irregular situation from labour exploitation - Role of the Employers Sanctions Directive\n - 456 European Monitoring Centre for Drugs and Drug Addiction EMCDDA (https://www.emcdda.europa.eu/), Section 'Best practice', Policy and practice briefings: Work places, https://www.emcdda.europa.eu/bestpractice/briefings/workplace\\_en", - "page_start": 157, - "page_end": 157, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Table 32: Non-EU Migrants - over-represented in certain sectors and occupations in 2019\n\nThe highest share of intra-EU and extra-EU workers per occupation is among cleaners and helpers (37% in total, intra-EU 11%, extra-EU 25%), labourers in mining and construction (24% in total, intraEU 7%, extra-EU 17%), stationary plant and machine operators (20% in total, intra-EU 6%, extra-EU 14%), and personal care workers (19% in total, intra-EU 5%, extra-EU 14%). 311\n\nThe occupations with a high share of migrant workforce are those with higher physical risks and lower expectations to do this job until 60 years old . 
The common characteristic of these occupations is the well-known 3-D assignment: dirty, dangerous and demanding. 312\n\nBeside the occupation-related risks, specific health and safety issues might result from a lower level of language dominance; communication and instruction have to cope with different capacities to speak and understand. In a more diverse workforce other factors might differ, like awareness and traditions regarding aspects such as the importance of hierarchy, ways to communicate, perception of behaviour as aggression, harassment and discrimination. In general, a greater variety of the workforce poses wider challenges for prevention.\n\nPosting of workers has similar implications for the organisation of OSH in enterprises. 313 Posting means that companies provide services in other EU Member States without having to establish themselves in the other countries. They send out employees to carry out the tasks required. The latest official data from 2020 estimated 2.3 million posted workers in the EU. 
314", - "page_start": 112, - "page_end": 112, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- A1 forms issued for postings of workers by EU27 countries, 2011-2020')\n\nEurostat: EU citizens living in another Member State - statistical overview, here\n\nEuropean Commission, 2019: Towards Fair Labour Mobility: Revision of EU Posting of Workers Rules, 2019", - "page_start": 152, - "page_end": 152, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_SMFG_2011.pdf", - "query": "What are the missions of the Sumitomo Mitsui Financial Group?", - "target_page": 7, - "target_passage": "• To provide optimum added value to our customers and together with them achieve growth • To create sustainable shareholder value through business growth• To create sustainable shareholder value through business growth • To provide a challenging and professionally rewarding work environment for our dedicated employees• To provide a challenging and professionally rewarding work environment for our dedicated employee", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "<!-- image -->\n\nIn the past, the Sumitomo Group In the past, the Sumitomo Group programs to solve the problem of programs to solve the problem of mine, while the Mitsui Group set up mine, while the Mitsui Group set up give the poorest in society access to give the poorest in society access to corporate social responsibility corporate social responsibility philosophies of both the Sumitomo philosophies of both the Sumitomo years of their existence, we will years of their existence, we will problems facing the international problems facing the international service service operations.operations.\n\nundertook large-scale afforestation undertook large-scale afforestation pollution around the Besshi copper pollution around the Besshi copper the Mitsui 
Memorial Hospital to the Mitsui Memorial Hospital to basic medical care. Based on this basic medical care. Based on this DNA embedded in the business DNA embedded in the business and Mitsui groups over the 400 and Mitsui groups over the 400 continue to play our part in solving continue to play our part in solving community through our financial community through our financial", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## Corporate Outline (as of September 30, 2011)\n\nCompany Name\n\nBusiness Description\n\n - Established\n\nHead Office\n\nChairman of the Board\n\nPresident\n\nCapital\n\nStock Exchange Listings\n\n - Sumitomo Mitsui Financial Group, Inc. ::\n - Management of banking subsidiaries (under the stipulations of Japan's Banking Act) and of non-bank subsidiaries, as well as the performance of ancillary functions :\n - December 2, 2002 :\n - 1-2, Marunouchi 1-chome, Chiyoda-ku, Tokyo, Japan :\n\nMasayuki Oku :\n\n - Koichi Miyata (Concurrent Director at Sumitomo Mitsui Banking Corporation) :\n - ¥2,337.8 billion :\n\nTokyo Stock Exchange (First Section) :\n\nOsaka Securities Exchange (First Section) Nagoya Stock Exchange (First Section) Note: American Depositary Receipts (ADRs) are listed on the New York Stock Exchange.\n\n## Structure of Sumitomo Mitsui Financial Group (as of September 30, 2011)\n\n* SMFG plans to make PROMISE a wholly owned subsidiary in April 2012.\n\n<!-- image -->\n\n## Our CSR reporting\n\nAt Sumitomo Mitsui Financial Group, three kinds of CSR reports are compiled.\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n| | Covers CSR baselines and CSR activities at SMFG and its Group companies, Covers CSR baselines and CSR activities at SMFG and its Group companies, centered on specific examples centered on specific examples CSR report 2011 (digest version) | CSR disclosure through specific examples 
|\n|------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| information on CSR activities information on CSR activities CSR report 2011 statistical performance, online PDF file) | Comprehensive disclosure of CSR activities | Covers environment-related statistical data and gives more detailed Covers environment-related statistical data and gives more detailed (digest version with examples of activities and |\n| | This is the official version of our CSR report. Covers the full spectrum of This is the official version of our CSR report. 
Covers the full spectrum of CSR activities at SMFG CSR activities at SMFG CSR report (online version, Japanese only) www.smfg.co.jp/responsibility | Enriched CSR disclosure |\n\n## Editorial Policy\n\nThis report has been created in an effort to convey to our stakeholders the variety of our initiatives and the roles the SMFG Group is fulfilling as we work to create a sustainable society.", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## Social Contribution Activities\n\n<!-- image -->\n\nSMFG as a corporate citizen: Working to create a prosperous society for all\n\n## SMFG and its Group companies participate in neighborhood cleanup programs\n\nIn fiscal 2010, 150 volunteers from the In fiscal 2010, 150 volunteers from the SMFG Group participated in beach cleanup SMFG Group participated in beach cleanup activities in Kanagawa and Hyogo prefectures activities in Kanagawa and Hyogo prefectures on 'SMFG Clean-up Day.' This initiative is on 'SMFG Clean-up Day.' This initiative is not simply a matter of picking up garbage. It not simply a matter of picking up garbage. It also involves inspections and analysis of also involves inspections and analysis of garbage to identify pointers for providing garbage to identify pointers for providing solutions for environmental issues in the solutions for environmental issues in the future. 
future.\n\nIn addition to beach cleanup activities in In addition to beach cleanup activities in Chiba and Hyogo prefectures by SMBC Chiba and Hyogo prefectures by SMBC Friend Securities, Group companies of Friend Securities, Group companies of Cedyna, Sumitomo Mitsui Finance & Leasing, Cedyna, Sumitomo Mitsui Finance & Leasing, the Japan Research Institute and SMBC the Japan Research Institute and SMBC Nikko Securities carry out ongoing cleanup Nikko Securities carry out ongoing cleanup and other activities in the areas around their and other activities in the areas around their offices and branches. offices and branches.\n\nThe Minato Bank and Kansai Urban Banking The Minato Bank and Kansai Urban Banking Corporation also engage in cleanup activities Corporation also engage in cleanup activities around Suma Beach and Lake Biwa, to around Suma Beach and Lake Biwa, to protect the regional environment. protect the regional environment.\n\n## Supporting education in developing countries, together with our customers and employees\n\nCardholders and employees of Sumitomo Cardholders and employees of Sumitomo Mitsui Card joined a literary social contribution Mitsui Card joined a literary social contribution initiative by participating in the Books To initiative by participating in the Books To The People 2010 project operated by BOOKOFF The People 2010 project operated by BOOKOFF CORP. This project aims to provide CORP. This project aims to provide environ environments in which children can read books in ments in which children can read books in purpose-built facilities, through donations to purpose-built facilities, through donations to Room to Read, a non-governmental organi Room to Read, a non-governmental organization that supports education in developing zation that supports education in developing countries. These NGO donations are pegged countries. 
These NGO donations are pegged to total numbers of used books and other to total numbers of used books and other items purchased by cardholders. Through items purchased by cardholders. Through the Sumitomo Mitsui Card-operated online the Sumitomo Mitsui Card-operated online shopping mall POINT UP Mall, cardholders shopping mall POINT UP Mall, cardholders are encouraged to buy used books through are encouraged to buy used books through BOOKOFF, and employees collect and donate BOOKOFF, and employees collect and donate used books from their homes and companies. used books from their homes and companies.\n\n<!-- image -->\n\nCollection box for used books and other items installed in an employee canteen\n\n<!-- image -->\n\nSupporting education in developing countries\n\nGarbage was analyzed in the Kugenuma Beach cleanup event, in which SMFG and its Group companies participated\n\n## Donations through 'The World Bank Green Fund'", - "page_start": 13, - "page_end": 13, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nSumitomo Mitsui Financial Group CSR Report\n\nDigest version\n\n<!-- image -->", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "This report has been created in an effort to convey to our stakeholders the variety of our initiatives and the roles the SMFG Group is fulfilling as we work to create a sustainable society.\n\nWe have aimed to present the information clearly, so that readers may understand our attitude that the fulfillment of CSR is\n\nthe essence of business itself, and our initiatives act upon this.\n\nOur CSR Report 2011 (digest version), launched last fiscal year, is intended to present more concise reports of the Group's CSR activities, with a focus on specific activities of interest. 
To complement this, we have also posted online our CSR Report 2011 (digest version, with examples of activities and statistical performance), with more detailed information on CSR activities and statistical data omitted in the CSR Report 2011 (digest version).\n\nWe disclose the full range of our CSR activities as a Group on our website in the official-use version of our CSR Report (in Japanese only). It is recommended that you read it in combination with the above two digest versions in order to understand our CSR and other activities in greater detail.\n\nFrom the current fiscal year, we are including third-party opinions in the website version.\n\n## Scope of this Report\n\n - GLYPH<129> Sumitomo Mitsui Financial Group, Inc.\n - GLYPH<129> Sumitomo Mitsui Banking Corporation\n - GLYPH<129> SMFG Card & Credit, Inc.\n - GLYPH<129> Sumitomo Mitsui Card Company, Limited\n - GLYPH<129> Cedyna Financial Corporation\n - GLYPH<129> Sumitomo Mitsui Finance and Leasing Co., Ltd.\n - GLYPH<129> The Japan Research Institute, Limited\n - GLYPH<129> SMBC Friend Securities Co., Ltd.\n - GLYPH<129> SMBC Nikko Securities Inc.\n - GLYPH<129> THE MINATO BANK, LTD.\n - GLYPH<129> Kansai Urban Banking Corporation\n - GLYPH<129> Other Group companies\n\n## Company name abbreviations and other special terminology\n\nThroughout this report, 'Sumitomo Mitsui Financial Group' or 'SMFG' refers to the holding company alone. 
'The SMFG Group' refers to the holding company and its primary domestic and international subsidiaries and affiliates.\n\n## Reference guidelines\n\nGlobal Reporting Initiative (GRI) Sustainability Reporting Guidelines 2006 (G3)\n\n - * Global Reporting Initiative (GRI): Established as an international standard for sustainability reporting, compilers set up an international organization (GRI) in 1997 to encourage its adoption worldwide.\n\n## About this Report\n\nPeriod Covered\n\nPublication Date of Japanese Document\n\nContact\n\n - : April 1, 2010 to March 31, 2011 ( 'Fiscal 2010' )\n - : December 2011\n - :\n\nNote: Certain items in this report refer to activities taking place after April 2011.\n\n - Group CSR Department, Sumitomo Mitsui Financial Group, Inc. 1-2 Marunouchi 1-chome, Chiyoda-ku, Tokyo 100-0005 TEL: +81-3-3282-8111", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## EXECUTIVES\n\nFrom left: Mitsuhiko Yamashita, Tadao Takahashi, Toshiyuki Shiga, Carlos Ghosn, Itaru Koeda, Hiroto Saikawa, Carlos Tavares\n\n<!-- image -->\n\n## BOARD OF DIRECTORS AND AUDITORS\n\n## Representative Board Members\n\nCarlos Ghosn\n\nPresident and Co-Chairman\n\nItaru Koeda\n\nCo-Chairman\n\nToshiyuki Shiga\n\nCo-Chairman\n\nBoard Members\n\nTadao Takahashi\n\nHiroto Saikawa\n\nMitsuhiko Yamashita\n\nCarlos Tavares\n\nShemaya Lévy\n\nPatrick Pélata\n\nAuditors\n\nHisayoshi Kojima\n\nShinji Ichishima\n\nKeishi Imamura\n\nHaruo Murakami\n\n## EXECUTIVE COMMITTEE MEMBERS\n\nCarlos Ghosn\n\nToshiyuki Shiga\n\nItaru Koeda\n\nTadao Takahashi\n\nHiroto Saikawa\n\nMitsuhiko Yamashita\n\nCarlos Tavares\n\nAlain-Pierre Raynaud\n\n(As of June 21, 2005)", - "page_start": 6, - "page_end": 6, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "Miyata : In the same way, other SMFG : In the same way, other SMFG Group companies have been sending out Group companies have been sending out volunteers, and providing donations not only 
volunteers, and providing donations not only as a company, but also through individual as a company, but also through individual employees. SMBC was at the heart of all these employees. SMBC was at the heart of all these activities, and this was a good opportunity activities, and this was a good opportunity for us to appreciate anew how our business for us to appreciate anew how our business contributes to the public good. contributes to the public good.\n\n<!-- image -->\n\n## Koichi Miyata\n\nPresident Sumitomo Mitsui Financial Group, Inc.\n\nThe SMFG Group has 62,000 employees, The SMFG Group has 62,000 employees, 'stepping up to the plate and working hard 'stepping up to the plate and working hard to give something back to society.' I think it to give something back to society.' I think it is important to develop ways of making this is important to develop ways of making this a shared aspiration of all the employees of a shared aspiration of all the employees of\n\nthe Group. the Group.", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## Environmental Activities\n\nInternational initiatives in Asian countries and others\n\n## Taking a leading role in environmental businesses in Asia\n\n## Promoting energy-saving and low-emission industries in China\n\n## Support for adoption of electric vehicles and car-sharing\n\nThe SMFG Group supports environmental The SMFG Group supports environmental businesses in the rapidly growing markets of businesses in the rapidly growing markets of Southeast Asia from various perspectives. Southeast Asia from various perspectives. 
For example in Malaysia, SMBC signed an For example in Malaysia, SMBC signed an operational alliance on environmental operational alliance on environmental businesses with the Federation of Malaysian businesses with the Federation of Malaysian Manufacturers in April 2010, and in October Manufacturers in April 2010, and in October that year acted as main sponsor for Malaysia that year acted as main sponsor for Malaysia's first large-scale international environmental first large-scale international environmental exhibition, International Greentech & Eco exhibition, International Greentech & Eco products Exhibition & Conference Malaysia products Exhibition & Conference Malaysia\n\n2010 (IGEM). At this event, a keynote 2010 (IGEM). At this event, a keynote speech was given by Chairman Teisuke speech was given by Chairman Teisuke Kitayama, and SMBC and Sumitomo Mitsui Kitayama, and SMBC and Sumitomo Mitsui Finance & Leasing opened booths. Finance & Leasing opened booths. The The exhibition, visited on successive days exhibition, visited on successive days by by Malaysia Malaysia's King, prime minister, some of s K ing, prime minister, some of the regional Kings of Malaysia, t he regional Kings of Malaysia, and and cabinet ministers, raised awareness cabinet ministers, raised awareness of of environmental businesses in the nation. environmental businesses in the nation. At the same time, in April 2011, the bank At the same time, in April 2011, the bank's s Malaysia unit Sumitomo Mitsui Banking Malaysia unit Sumitomo Mitsui Banking Corporation Malaysia Berhad began Corporation Malaysia Berhad began operations. This unit is broadening support operations. This unit is broadening support measures to contribute to the development measures to contribute to the development of environmental businesses in Malaysia. of environmental businesses in Malaysia. 
Meanwhile, in August 2010, the Japan Meanwhile, in August 2010, the Japan\n\n<!-- image -->\n\nResearch Institute, SMBC and a number of Research Institute, SMBC and a number of other companies publicly recruited by Japan other companies publicly recruited by Japan's s New Energy and Industrial Technology New Energy and Industrial Technology Development Organization (NEDO) were Development Organization (NEDO) were jointly commissioned to carry out basic jointly commissioned to carry out basic research into Malaysia research into Malaysia's Green Township s Green Township concept, a national town-planning project concept, a national town-planning project backed by NEDO. backed by NEDO.\n\nLooking ahead, SMBC plans to jointly Looking ahead, SMBC plans to jointly compile an action plan with the Malaysian compile an action plan with the Malaysian government and related enterprises for government and related enterprises for establishment of 'green townships' based establishment of 'green townships' based on the cities Putrajaya and Cyberjaya Prime on the cities Putrajaya and Cyberjaya Prime Minister Najib Razak is promoting. It also Minister Najib Razak is promoting. It also plans to propose specific projects in the plans to propose specific projects in the concept. 
concept.\n\n<!-- image -->", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## Commitment from the Top\n\nA Conversation with Tadao Ando, Takeshi Kunibe and Koichi Miyata\n\n## What can we do now to spur the reconstruction and revitalization of Japan, and help resolve global issues?\n\nUplifting the nation's spirits Uplifting the nation's spirits\n\nJapan is now facing a wide variety of problems, ranging from the reconstruction of the Tohoku region (the northeastern region of Japan) Japan is now facing a wide variety of problems, ranging from the reconstruction of the Tohoku region (the northeastern region of Japan) after the March 11 earthquake and tsunami ('the Great East Japan Earthquake') to a shrinking and aging population, with falling birth rates after the March 11 earthquake and tsunami ('the Great East Japan Earthquake') to a shrinking and aging population, with falling birth rates and increasing numbers of the aged. and increasing numbers of the aged.\n\nWe must now find ways for people to coexist in harmony with nature, based on a global perspective. We must now find ways for people to coexist in harmony with nature, based on a global perspective.\n\nSumitomo Mitsui Financial Group (SMFG) invited the world-famous architect Tadao Ando to join in a conversation on the issues facing society Sumitomo Mitsui Financial Group (SMFG) invited the world-famous architect Tadao Ando to join in a conversation on the issues facing society and the ways in which SMFG and its Group companies can bring their expertise to bear as a financial services group. and the ways in which SMFG and its Group companies can bring their expertise to bear as a financial services group.\n\n<!-- image -->\n\n## Tadao Ando\n\nArchitect. Professor Emeritus at the University of Tokyo, Representative and Vice-chairman of the Great East Japan Earthquake Reconstruction Design Council. 
Awarded the Order of Cultural Merit in 2010.\n\nOur measures to support reconstruction after the disastrous earthquake and tsunami Uplifting the nation's spirits\n\n̶ ̶ SMFG has the following priorities in its SMFG has the following priorities in its corporate social responsibility program: corporate social responsibility program: Reconstruction after the earthquake Reconstruction after the earthquake and tsunami, environmental measures, and tsunami, environmental measures, addressing the shrinking and aging addressing the shrinking and aging population, and global challenges. population, and global challenges. -\n\nKunibe : : Japan is facing a difficult period J a p a n i s f a c i ng a d i f f icu lt period with limited prospects for economic growth with limited prospects for economic growth due to a shrinking, aging population and due to a shrinking, aging population and a mature economy. Against this backdrop, a mature economy. Against this backdrop, the country was hit by the unprecedented the country was hit by the unprecedented catastrophe of the Great East Japan catastrophe of the Great East Japan Earthquake. We must face up to the new Earthquake. We must face up to the new challenges arising from this disaster. challenges arising from this disaster.\n\nI believe the time has come for us to I believe the time has come for us to reconsider what we can do in our capacity reconsider what we can do in our capacity as a financial institution to address a variety as a financial institution to address a variety of issues, including the four priorities. of issues, including the four priorities. Today I hope we can discuss not only the road Today I hope we can discuss not only the road to reconstruction after the disaster, but also to reconstruction after the disaster, but also\n\nways to uplift the nation's spirits. 
ways to uplift the nation's spirits.", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nEurope\n\n## Donations to charity groups\n\nEmployees of Sumitomo Mitsui Banking Corporation Europe Employees of Sumitomo Mitsui Banking Corporation Europe (SMBCE) conducted volunteer activities in their time off. (SMBCE) conducted volunteer activities in their time off. SMBCE contributes to charitable organizations through an SMBCE contributes to charitable organizations through an in-house fund and also uses a matching gifts program under in-house fund and also uses a matching gifts program under\n\nwhich it donates a which it donates a certain amount for certain amount for every donation made every donation made by its employees. by its employees.\n\nEmployee volunteers who participated in landscape improvement projects\n\n<!-- image -->\n\nEurope\n\n## Donation for a Japanese-language speech contest\n\nThe European office of the Japan Research Institute (JRI) The European office of the Japan Research Institute (JRI) made a donation in support of a Japanese-language speech made a donation in support of a Japanese-language speech contest. contest.\n\nMozambique\n\n## UNICEF support initiatives\n\nThrough the Climate & Children Supporters project, the bank Through the Climate & Children Supporters project, the bank has supported UNICEF projects in Mozambique benefitting has supported UNICEF projects in Mozambique benefitting\n\nchildren and improving children and improving the water-supply and the water-supply and sanitary environment. 
sanitary environment.\n\n*Please see this website for further details (in Japanese): www.smbc.co.jp/ccs/\n\nⓒ ⓒ UNICEF Mozambique/Arild Drivdal\n\n<!-- image -->\n\n<!-- image -->", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_SMFG_2011.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_SMFG_2011.pdf", - "query": "Did Katsutoshi Konuma participate in the August 2011 expert roundtable on the role of the Sumitomo Mitsui Financial Group's new Food and Agricultural Assessment Loan? ", - "target_page": 8, - "target_passage": "Key comments of participants Together with Our Customers Katsutoshi Konuma, Section Manager, Social & Environmental Management, Asahi Breweries Ltd", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "<!-- image -->\n\nIn the past, the Sumitomo Group In the past, the Sumitomo Group programs to solve the problem of programs to solve the problem of mine, while the Mitsui Group set up mine, while the Mitsui Group set up give the poorest in society access to give the poorest in society access to corporate social responsibility corporate social responsibility philosophies of both the Sumitomo philosophies of both the Sumitomo years of their existence, we will years of their existence, we will problems facing the international problems facing the international service service operations.operations.\n\nundertook large-scale afforestation undertook large-scale afforestation pollution around the Besshi copper pollution around the Besshi copper the Mitsui Memorial Hospital to the Mitsui Memorial Hospital to basic medical care. Based on this basic medical care. 
Based on this DNA embedded in the business DNA embedded in the business and Mitsui groups over the 400 and Mitsui groups over the 400 continue to play our part in solving continue to play our part in solving community through our financial community through our financial", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## Together with Our Customers\n\nWe work as a team to improve customer satisfaction and product quality, and, while supporting the customer, contribute to the sustainable development of society as a whole.\n\n<!-- image -->\n\n## The financial sector's role in improving the nation's diet and in strengthening the agricultural and fisheries sectors\n\nFor many years, food supply networks in For many years, food supply networks in Japan were premised on mass production and Japan were premised on mass production and mass consumption, enabling the country to mass consumption, enabling the country to meet soaring food demand at a time of rapid meet soaring food demand at a time of rapid growth in the population and economy. growth in the population and economy.\n\nBut in recent years, consumers have come to But in recent years, consumers have come to place more priority on factors other than place more priority on factors other than volume and price, such as food safety and volume and price, such as food safety and healthiness, and the cultural aspects of diet. healthiness, and the cultural aspects of diet. As discussion continues on the need for As discussion continues on the need for farmers to increase production scale and farmers to increase production scale and move into processing and marketing, major move into processing and marketing, major changes are underway in the agriculture and changes are underway in the agriculture and fisheries sector in Japan. 
fisheries sector in Japan.\n\nAgainst this backdrop, SMBC has developed Against this backdrop, SMBC has developed a new financial product for this sector. a new financial product for this sector.\n\n## Roundtable session: SMBC Food and Agricultural Assessment Loan\n\nA roundtable session with experts held in August 2011 considered the role of the new SMBC Food and Agricultural Assessment Loan in improving the food supply chain that links food and fishery producers with food processors and consumers. Opinions were also exchanged on what other future role the bank might assume in this regard, given the current situation and issues facing the food industry\n\nand agriculture in Japan.\n\n<!-- image -->\n\n## Key comments of participants\n\n'We want to deliver value by creating demand and quality combined with safety, peace of mind and trust.' Katsutoshi Konuma, Section Manager, Social & Environmental Management, Asahi Breweries Ltd.\n\nYasuhiro Nakashima Associate Professor Graduate School of Agricultural and Life Sciences, The University of Tokyo\n\n'Eating should be something that generates emotion. New potential exists in the world of cuisine.' 
Daisuke Yamamoto, Vice Senior Consultant, Research Department,\n\nThe Japan Research Institute, Limited\n\n'As consumer tastes go through a time of great change, I think it is important to prioritize ingredients and the attitude of customers toward eating.'\n\nYoichiro Fukayama, Planning Dept., Deputy Head (with powers of representation) of the Corporate Banking Unit & Middle Market Banking Unit, SMBC\n\n'An important concept is multilateral dialogue as the number of parties involved in food production increases throughout the supply chain.'\n\nModerated by Kenji Sawami, Partner, Ernst & Young ShinNihon LLC\n\nThe SMBC Food and Agricultural Assessment The SMBC Food and Agricultural Assessment Loan comes with conditions, depending on Loan comes with conditions, depending on the results of an evaluation of food-producers' the results of an evaluation of food-producers' progress in areas such as food safety and progress in areas such as food safety and environment-friendliness, healthiness and environment-friendliness, healthiness and nutritional value, and efficiency of distribution. nutritional value, and efficiency of distribution. 
The Japan Research Institute researches The Japan Research Institute researches", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## EXECUTIVES\n\nFrom left: Mitsuhiko Yamashita, Tadao Takahashi, Toshiyuki Shiga, Carlos Ghosn, Itaru Koeda, Hiroto Saikawa, Carlos Tavares\n\n<!-- image -->\n\n## BOARD OF DIRECTORS AND AUDITORS\n\n## Representative Board Members\n\nCarlos Ghosn\n\nPresident and Co-Chairman\n\nItaru Koeda\n\nCo-Chairman\n\nToshiyuki Shiga\n\nCo-Chairman\n\nBoard Members\n\nTadao Takahashi\n\nHiroto Saikawa\n\nMitsuhiko Yamashita\n\nCarlos Tavares\n\nShemaya Lévy\n\nPatrick Pélata\n\nAuditors\n\nHisayoshi Kojima\n\nShinji Ichishima\n\nKeishi Imamura\n\nHaruo Murakami\n\n## EXECUTIVE COMMITTEE MEMBERS\n\nCarlos Ghosn\n\nToshiyuki Shiga\n\nItaru Koeda\n\nTadao Takahashi\n\nHiroto Saikawa\n\nMitsuhiko Yamashita\n\nCarlos Tavares\n\nAlain-Pierre Raynaud\n\n(As of June 21, 2005)", - "page_start": 6, - "page_end": 6, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## Corporate Outline (as of September 30, 2011)\n\nCompany Name\n\nBusiness Description\n\n - Established\n\nHead Office\n\nChairman of the Board\n\nPresident\n\nCapital\n\nStock Exchange Listings\n\n - Sumitomo Mitsui Financial Group, Inc. 
::\n - Management of banking subsidiaries (under the stipulations of Japan's Banking Act) and of non-bank subsidiaries, as well as the performance of ancillary functions :\n - December 2, 2002 :\n - 1-2, Marunouchi 1-chome, Chiyoda-ku, Tokyo, Japan :\n\nMasayuki Oku :\n\n - Koichi Miyata (Concurrent Director at Sumitomo Mitsui Banking Corporation) :\n - ¥2,337.8 billion :\n\nTokyo Stock Exchange (First Section) :\n\nOsaka Securities Exchange (First Section) Nagoya Stock Exchange (First Section) Note: American Depositary Receipts (ADRs) are listed on the New York Stock Exchange.\n\n## Structure of Sumitomo Mitsui Financial Group (as of September 30, 2011)\n\n* SMFG plans to make PROMISE a wholly owned subsidiary in April 2012.\n\n<!-- image -->\n\n## Our CSR reporting\n\nAt Sumitomo Mitsui Financial Group, three kinds of CSR reports are compiled.\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n| | Covers CSR baselines and CSR activities at SMFG and its Group companies, Covers CSR baselines and CSR activities at SMFG and its Group companies, centered on specific examples centered on specific examples CSR report 2011 (digest version) | CSR disclosure through specific examples |\n|------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| information on CSR activities information on CSR activities CSR report 2011 statistical performance, online PDF file) | Comprehensive disclosure of CSR activities | Covers environment-related statistical 
data and gives more detailed Covers environment-related statistical data and gives more detailed (digest version with examples of activities and |\n| | This is the official version of our CSR report. Covers the full spectrum of This is the official version of our CSR report. Covers the full spectrum of CSR activities at SMFG CSR activities at SMFG CSR report (online version, Japanese only) www.smfg.co.jp/responsibility | Enriched CSR disclosure |\n\n## Editorial Policy\n\nThis report has been created in an effort to convey to our stakeholders the variety of our initiatives and the roles the SMFG Group is fulfilling as we work to create a sustainable society.", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## Commitment from the Top\n\nA Conversation with Tadao Ando, Takeshi Kunibe and Koichi Miyata\n\n## What can we do now to spur the reconstruction and revitalization of Japan, and help resolve global issues?\n\nUplifting the nation's spirits Uplifting the nation's spirits\n\nJapan is now facing a wide variety of problems, ranging from the reconstruction of the Tohoku region (the northeastern region of Japan) Japan is now facing a wide variety of problems, ranging from the reconstruction of the Tohoku region (the northeastern region of Japan) after the March 11 earthquake and tsunami ('the Great East Japan Earthquake') to a shrinking and aging population, with falling birth rates after the March 11 earthquake and tsunami ('the Great East Japan Earthquake') to a shrinking and aging population, with falling birth rates and increasing numbers of the aged. and increasing numbers of the aged.\n\nWe must now find ways for people to coexist in harmony with nature, based on a global perspective. 
We must now find ways for people to coexist in harmony with nature, based on a global perspective.\n\nSumitomo Mitsui Financial Group (SMFG) invited the world-famous architect Tadao Ando to join in a conversation on the issues facing society Sumitomo Mitsui Financial Group (SMFG) invited the world-famous architect Tadao Ando to join in a conversation on the issues facing society and the ways in which SMFG and its Group companies can bring their expertise to bear as a financial services group. and the ways in which SMFG and its Group companies can bring their expertise to bear as a financial services group.\n\n<!-- image -->\n\n## Tadao Ando\n\nArchitect. Professor Emeritus at the University of Tokyo, Representative and Vice-chairman of the Great East Japan Earthquake Reconstruction Design Council. Awarded the Order of Cultural Merit in 2010.\n\nOur measures to support reconstruction after the disastrous earthquake and tsunami Uplifting the nation's spirits\n\n̶ ̶ SMFG has the following priorities in its SMFG has the following priorities in its corporate social responsibility program: corporate social responsibility program: Reconstruction after the earthquake Reconstruction after the earthquake and tsunami, environmental measures, and tsunami, environmental measures, addressing the shrinking and aging addressing the shrinking and aging population, and global challenges. population, and global challenges. -\n\nKunibe : : Japan is facing a difficult period J a p a n i s f a c i ng a d i f f icu lt period with limited prospects for economic growth with limited prospects for economic growth due to a shrinking, aging population and due to a shrinking, aging population and a mature economy. Against this backdrop, a mature economy. Against this backdrop, the country was hit by the unprecedented the country was hit by the unprecedented catastrophe of the Great East Japan catastrophe of the Great East Japan Earthquake. We must face up to the new Earthquake. 
We must face up to the new challenges arising from this disaster. challenges arising from this disaster.\n\nI believe the time has come for us to I believe the time has come for us to reconsider what we can do in our capacity reconsider what we can do in our capacity as a financial institution to address a variety as a financial institution to address a variety of issues, including the four priorities. of issues, including the four priorities. Today I hope we can discuss not only the road Today I hope we can discuss not only the road to reconstruction after the disaster, but also to reconstruction after the disaster, but also\n\nways to uplift the nation's spirits. ways to uplift the nation's spirits.", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## Environmental Activities\n\nInternational initiatives in Asian countries and others\n\n## Taking a leading role in environmental businesses in Asia\n\n## Promoting energy-saving and low-emission industries in China\n\n## Support for adoption of electric vehicles and car-sharing\n\nThe SMFG Group supports environmental The SMFG Group supports environmental businesses in the rapidly growing markets of businesses in the rapidly growing markets of Southeast Asia from various perspectives. Southeast Asia from various perspectives. 
For example in Malaysia, SMBC signed an For example in Malaysia, SMBC signed an operational alliance on environmental operational alliance on environmental businesses with the Federation of Malaysian businesses with the Federation of Malaysian Manufacturers in April 2010, and in October Manufacturers in April 2010, and in October that year acted as main sponsor for Malaysia that year acted as main sponsor for Malaysia's first large-scale international environmental first large-scale international environmental exhibition, International Greentech & Eco exhibition, International Greentech & Eco products Exhibition & Conference Malaysia products Exhibition & Conference Malaysia\n\n2010 (IGEM). At this event, a keynote 2010 (IGEM). At this event, a keynote speech was given by Chairman Teisuke speech was given by Chairman Teisuke Kitayama, and SMBC and Sumitomo Mitsui Kitayama, and SMBC and Sumitomo Mitsui Finance & Leasing opened booths. Finance & Leasing opened booths. The The exhibition, visited on successive days exhibition, visited on successive days by by Malaysia Malaysia's King, prime minister, some of s K ing, prime minister, some of the regional Kings of Malaysia, t he regional Kings of Malaysia, and and cabinet ministers, raised awareness cabinet ministers, raised awareness of of environmental businesses in the nation. environmental businesses in the nation. At the same time, in April 2011, the bank At the same time, in April 2011, the bank's s Malaysia unit Sumitomo Mitsui Banking Malaysia unit Sumitomo Mitsui Banking Corporation Malaysia Berhad began Corporation Malaysia Berhad began operations. This unit is broadening support operations. This unit is broadening support measures to contribute to the development measures to contribute to the development of environmental businesses in Malaysia. of environmental businesses in Malaysia. 
<!-- image -->\n\nMeanwhile, in August 2010, the Japan Research Institute, SMBC and a number of other companies publicly recruited by Japan's New Energy and Industrial Technology Development Organization (NEDO) were jointly commissioned to carry out basic research into Malaysia's Green Township concept, a national town-planning project backed by NEDO.\n\nLooking ahead, SMBC plans to jointly compile an action plan with the Malaysian government and related enterprises for establishment of 'green townships' based on the cities Putrajaya and Cyberjaya Prime Minister Najib Razak is promoting. It also plans to propose specific projects in the concept.\n\n<!-- image -->", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## Social Contribution Activities\n\n<!-- image -->\n\nSMFG as a corporate citizen: Working to create a prosperous society for all\n\n## SMFG and its Group companies participate in neighborhood cleanup programs\n\nIn fiscal 2010, 150 volunteers from the SMFG Group participated in beach cleanup activities in Kanagawa and Hyogo prefectures on 'SMFG Clean-up Day.' This initiative is not simply a matter of picking up garbage. It also involves inspections and analysis of garbage to identify pointers for providing solutions for environmental issues in the future.\n\nIn addition to beach cleanup activities in Chiba and Hyogo prefectures by SMBC Friend Securities, Group companies of Cedyna, Sumitomo Mitsui Finance & Leasing, the Japan Research Institute and SMBC Nikko Securities carry out ongoing cleanup and other activities in the areas around their offices and branches.\n\nThe Minato Bank and Kansai Urban Banking Corporation also engage in cleanup activities around Suma Beach and Lake Biwa, to protect the regional environment.\n\n## Supporting education in developing countries, together with our customers and employees\n\nCardholders and employees of Sumitomo Mitsui Card joined a literary social contribution initiative by participating in the Books To The People 2010 project operated by BOOKOFF CORP. This project aims to provide environments in which children can read books in purpose-built facilities, through donations to Room to Read, a non-governmental organization that supports education in developing countries. These NGO donations are pegged to total numbers of used books and other items purchased by cardholders. Through the Sumitomo Mitsui Card-operated online shopping mall POINT UP Mall, cardholders are encouraged to buy used books through BOOKOFF, and employees collect and donate used books from their homes and companies.\n\n<!-- image -->\n\nCollection box for used books and other items installed in an employee canteen\n\n<!-- image -->\n\nSupporting education in developing countries\n\nGarbage was analyzed in the Kugenuma Beach cleanup event, in which SMFG and its Group companies participated\n\n## Donations through 'The World Bank Green Fund'", - "page_start": 13, - "page_end": 13, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nSumitomo Mitsui Financial Group CSR Report\n\nDigest version\n\n<!-- image -->", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## Japan Research Institute.\n\nAndo : Our world changed overnight following the earthquake and tsunami.
Take the matters of food, energy and resources. In energy-saving, I think Japan is now the world leader. Although Japan has technologies that can contribute to global affluence, I think it has not been able to fully communicate their benefits to the world.\n\nso the other side doesn't understand. We must listen carefully to each other and express ourselves clearly. This is true for both individuals and companies.\n\nJapan enjoys a high degree of trust, not only in Asia but also in the world. It has an image of safety and stability. There is trust between people and between enterprises. We must revitalize the country while this trust remains intact. People from various\n\n## Commitment from the Top\n\nA Conversation with Tadao Ando, Takeshi Kunibe and Koichi Miyata\n\n## A problem not only for Japan but the whole world\n\nMiyata : As I said, before the earthquake and tsunami, a sense of stagnation was spreading throughout Japanese society. Young people were wedded to the status quo, and I thought we were struggling with weighty and intractable issues. But now more people think, 'Let's pull together.' I think the\n\n<!-- image -->\n\nJapanese companies tend to be quite reserved and unobtrusive. I believe they must become more willing to blow their own trumpets. For example, our students and salary men cannot freely express themselves. A place where anybody can speak their mind freely and clearly - that is the kind of country we must become. The reason why Japan is so very poor at dealing with foreign countries is that the Japanese people do not express their own opinions clearly in words, and\n\nwalks of life - industry, business - must come together. While helping each other, they must work for the national interest.\n\nFoster the next generation and deal with a shrinking, aging population. The bonds between people and their families are crucial\n\n- A shrinking, aging population:\n\nJapanese people always come together in times of crisis.", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "This report has been created in an effort to convey to our stakeholders the variety of our initiatives and the roles the SMFG Group is fulfilling as we work to create a sustainable society.\n\nWe have aimed to present the information clearly, so that readers may understand our attitude that the fulfillment of CSR is the essence of business itself, and our initiatives act upon this.\n\nOur CSR Report 2011 (digest version), launched last fiscal year, is intended to present more concise reports of the Group's CSR activities, with a focus on specific activities of interest. To complement this, we have also posted online our CSR Report 2011 (digest version, with examples of activities and statistical performance), with more detailed information on CSR activities and statistical data omitted in the CSR Report 2011 (digest version).\n\nWe disclose the full range of our CSR activities as a Group on our website in the official-use version of our CSR Report (in Japanese only). 
It is recommended that you read it in combination with the above two digest versions in order to understand our CSR and other activities in greater detail.\n\nFrom the current fiscal year, we are including third-party opinions in the website version.\n\n## Scope of this Report\n\n - GLYPH<129> Sumitomo Mitsui Financial Group, Inc.\n - GLYPH<129> Sumitomo Mitsui Banking Corporation\n - GLYPH<129> SMFG Card & Credit, Inc.\n - GLYPH<129> Sumitomo Mitsui Card Company, Limited\n - GLYPH<129> Cedyna Financial Corporation\n - GLYPH<129> Sumitomo Mitsui Finance and Leasing Co., Ltd.\n - GLYPH<129> The Japan Research Institute, Limited\n - GLYPH<129> SMBC Friend Securities Co., Ltd.\n - GLYPH<129> SMBC Nikko Securities Inc.\n - GLYPH<129> THE MINATO BANK, LTD.\n - GLYPH<129> Kansai Urban Banking Corporation\n - GLYPH<129> Other Group companies\n\n## Company name abbreviations and other special terminology\n\nThroughout this report, 'Sumitomo Mitsui Financial Group' or 'SMFG' refers to the holding company alone. 'The SMFG Group' refers to the holding company and its primary domestic and international subsidiaries and affiliates.\n\n## Reference guidelines\n\nGlobal Reporting Initiative (GRI) Sustainability Reporting Guidelines 2006 (G3)\n\n - * Global Reporting Initiative (GRI): Established as an international standard for sustainability reporting, compilers set up an international organization (GRI) in 1997 to encourage its adoption worldwide.\n\n## About this Report\n\nPeriod Covered\n\nPublication Date of Japanese Document\n\nContact\n\n - : April 1, 2010 to March 31, 2011 ( 'Fiscal 2010' )\n - : December 2011\n - :\n\nNote: Certain items in this report refer to activities taking place after April 2011.\n\n - Group CSR Department, Sumitomo Mitsui Financial Group, Inc. 
1-2 Marunouchi 1-chome, Chiyoda-ku, Tokyo 100-0005 TEL: +81-3-3282-8111", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_SMFG_2011.pdf" - } - ] - }, - { - "references": { - "source_file": "news2.pdf", - "query": "What is the trend of flood risk in Canada in 2024?", - "target_page": 1, - "target_passage": "(NC) Communities in Canada are facing increased flood risks, with 1.5 million homes highly exposed", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "<!-- image -->\n\nMENU\n\n<!-- image -->\n\n## ISSUE\n\nDecember 2024\n\n## CATEGORIES\n\nHome - Safety Community Affairs Finance - Insurance Editor's Picks\n\n## FRANÇAIS\n\nTrois façons dont des collectivités au Canada réduisent leurs risques d'inondation\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nRADIO\n\n<!-- image -->\n\n## Three ways Canadian communities are reducing flood risks\n\n(NC) Communities in Canada are facing increased flood risks, with 1.5 million homes highly exposed. There are large-scale programs available across the country providing flood protection measures for communities at risk, such as Intact's Municipal Climate Resiliency Grants. This program is helping build the resilience of communities and homes through a variety of preventative actions.\n\nWetlands can reduce flood risk by absorbing large quantities of water, but they are not typically found in cities. In Vancouver, B.C., Environmental Youth Alliance and Strathcona Community Gardens created a wetland on downtown's east side, an area historically prone to flooding. Made up of natural elements like ponds and marshes, the wetland reduces the community's flood risk by catching and absorbing rainfall and runoff from surrounding surfaces.\n\nKnowing the risks is the first step to protecting homes and communities. 
In New Brunswick, the City of Fredericton launched a Neighbourhood Flood Risk Tool to provide easy access to online flood prevention guidance. Residents can input their addresses to see if they are at risk and learn tips to reduce the risk of flooding around their properties. The portal launched in the summer of 2023 and was viewed 27,000 times in its first year.\n\nRebate programs are a powerful motivation for homeowners to make upgrades that might otherwise be put off. In PEI, the City of Charlottetown offered rebates covering 75 per cent of eligible material and labour costs, up to a maximum of $1,000. More than 90 properties completed upgrades, including installing sump pumps, backup batteries, backwater valves, and water monitors and alarms, to better prepare them for extreme weather events.\n\nCommunities can learn more about the grant program and how to apply at intactfc.com/mcrg.\n\nwww.newscanada.com\n\nWord Count: 281\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n## EDITOR'S PICKS\n\nHave your say! Complete our 2025 Media Survey\n\n<!-- image -->\n\nRetrain your way to a new job\n\n<!-- image -->\n\nThe top AI-powered tech trends in 2025\n\n<!-- image -->", - "page_start": 0, - "page_end": 0, - "source_file": "news2.pdf" - }, - { - "text": "- 200. \"Big tech and the pursuit of AI dominance\" (https://www.economist.com/business/2023/03/2 6/big-tech-and-the-pursuit-of-ai-dominance). The Economist . 26 March 2023. Archived (http s://web.archive.org/web/20231229021351/https://www.economist.com/business/2023/03/26/ big-tech-and-the-pursuit-of-ai-dominance) from the original on 29 December 2023.\n - 201. Fung, Brian (19 December 2023). \"Where the battle to dominate AI may be won\" (https://ww w.cnn.com/2023/12/19/tech/cloud-competition-and-ai/index.html). CNN Business . 
Archived (https://web.archive.org/web/20240113053332/https://www.cnn.com/2023/12/19/tech/cloudcompetition-and-ai/index.html) from the original on 13 January 2024.\n - 202. Metz, Cade (5 July 2023). \"In the Age of A.I., Tech's Little Guys Need Big Friends\" (https://w ww.nytimes.com/2023/07/05/business/artificial-intelligence-power-data-centers.html). The New York Times . Archived (https://web.archive.org/web/20240708214644/https://www.nytim es.com/2023/07/05/business/artificial-intelligence-power-data-centers.html) from the original on 8 July 2024. Retrieved 5 October 2024.\n - 203. \"Electricity 2024 - Analysis\" (https://www.iea.org/reports/electricity-2024). IEA . 24 January 2024. Retrieved 13 July 2024.\n - 204. Calvert, Brian (28 March 2024). \"AI already uses as much energy as a small country. It's only the beginning\" (https://www.vox.com/climate/2024/3/28/24111721/ai-uses-a-lot-of-ener gy-experts-expect-it-to-double-in-just-a-few-years). Vox . New York, New York. Archived (http s://web.archive.org/web/20240703080555/https://www.vox.com/climate/2024/3/28/2411172 1/ai-uses-a-lot-of-energy-experts-expect-it-to-double-in-just-a-few-years) from the original on 3 July 2024. Retrieved 5 October 2024.\n - 205. Halper, Evan; O'Donovan, Caroline (21 June 2024). \"AI is exhausting the power grid. Tech firms are seeking a miracle solution\" (https://www.washingtonpost.com/business/2024/06/2 1/artificial-intelligence-nuclear-fusion-climate/?utm\\_campaign=wp\\_post\\_most&utm\\_medium =email&utm\\_source=newsletter&wpisrc=nl\\_most&carta-url=https%3A%2F%2Fs2.washingto npost.com%2Fcar-ln-tr%2F3e0d678%2F6675a2d2c2c05472dd9ec0f4%2F596c09009bbc0f 20865036e7%2F12%2F52%2F6675a2d2c2c05472dd9ec0f4). Washington Post .\n - 206. Davenport, Carly. \"AI Data Centers and the Coming YS Power Demand Surge\" (https://web. 
archive.org/web/20240726080428/https://www.goldmansachs.com/intelligence/pages/gs-res earch/generational-growth-ai-data-centers-and-the-coming-us-power-surge/report.pdf) (PDF). Goldman Sachs . Archived from the original (https://www.goldmansachs.com/intellige nce/pages/gs-research/generational-growth-ai-data-centers-and-the-coming-us-power-surg e/report.pdf) (PDF) on 26 July 2024. Retrieved 5 October 2024.\n - 207. Ryan, Carol (12 April 2024). \"Energy-Guzzling AI Is Also the Future of Energy Savings\" (http s://www.wsj.com/business/energy-oil/ai-data-centers-energy-savings-d602296e). Wall Street Journal . Dow Jones.\n - 208. Hiller, Jennifer (1 July 2024). \"Tech Industry Wants to Lock Up Nuclear Power for AI\" (https:// www.wsj.com/business/energy-oil/tech-industry-wants-to-lock-up-nuclear-power-for-ai-6cb7 5316?mod=djem10point). Wall Street Journal . Dow Jones. Archived (https://web.archive.or g/web/20241005165650/https://www.wsj.com/business/energy-oil/tech-industry-wants-to-loc k-up-nuclear-power-for-ai-6cb75316?mod=djem10point) from the original on 5 October 2024. Retrieved 5 October 2024.\n - 209. Kendall, Tyler (28 September 2024). \"Nvidia's Huang Says Nuclear Power an Option to Feed Data Centers\" (https://www.bloomberg.com/news/articles/2024-09-27/nvidia-s-huang-s ays-nuclear-power-an-option-to-feed-data-centers). Bloomberg .", - "page_start": 41, - "page_end": 41, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- Wong, Matteo (19 May 2023), \"ChatGPT Is Already Obsolete\" (https://www.theatlantic.com/tech nology/archive/2023/05/ai-advancements-multimodal-models/674113/), The Atlantic , archived (https://web.archive.org/web/20240918022529/https://www.theatlantic.com/technol ogy/archive/2023/05/ai-advancements-multimodal-models/674113/) from the original on 18 September 2024, retrieved 5 October 2024", - "page_start": 65, - "page_end": 65, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- ( h ) 2010 c. 
29.\n - ( i ) And see section 2 of the Flood and Water Management Act 2010 for the meaning of 'risk'.\n - ( j ) S.I. 2014/3120. There are no relevant amending instruments.", - "page_start": 39, - "page_end": 39, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## 4. Results\n\nThe Central Scenario estimates that the prison population will rise to 87,700 by the end of June 2015 and to 90,200 by the end of June 2020.\n\nChart 2 presents Prison population projections from November 2014 to December 2020.\n\nChart 2: Projected monthly prison population (all scenarios)\n\n<!-- image -->\n\nIllustrative Scenario 1 estimates that the prison population will rise to 87,100 by the end of June 2015 and then fall to 81,400 by the end of June 2020.\n\nIllustrative Scenario 2 estimates that the prison population will rise to 88,900 by the end of June 2015 and to 98,900 by the end of June 2020.\n\nThe projected trends reflect the cumulative impacts of the various sentencing, legislative and procedural assumptions that are used to generate the projections. The seasonal pattern reflects the dip in the prison population which is always seen around the Christmas period.\n\nIn the Central Scenario, the prison population is expected to rise to 90,200 by June 2020. The projected population increase is largely due to the recent trends in case mix where we have seen more serious cases come before the courts. This results in offenders receiving longer custodial sentence lengths, which in turn places an upward pressure on the prison population. The growth in this scenario is largely driven by the rise in the determinate population which is projected to grow to 60,200 by June 2020. This is partially due to the", - "page_start": 12, - "page_end": 12, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "- 265. Cellan-Jones (2014).\n - 266. Russell & Norvig 2021, p. 1001.\n - 267. Bostrom (2014).\n - 268. Russell (2019).\n - 269. 
Bostrom (2014); Müller & Bostrom (2014); Bostrom (2015).\n - 270. Harari (2023).\n - 271. Müller & Bostrom (2014).\n - 272. Leaders' concerns about the existential risks of AI around 2015: Rawlinson (2015), Holley (2015), Gibbs (2014), Sainato (2015)\n - 273. \" \"Godfather of artificial intelligence\" talks impact and potential of new AI\" (https://www.cbsne ws.com/video/godfather-of-artificial-intelligence-talks-impact-and-potential-of-new-ai). CBS News . 25 March 2023. Archived (https://web.archive.org/web/20230328225221/https://www. cbsnews.com/video/godfather-of-artificial-intelligence-talks-impact-and-potential-of-new-ai) from the original on 28 March 2023. Retrieved 28 March 2023.\n - 274. Pittis, Don (4 May 2023). \"Canadian artificial intelligence leader Geoffrey Hinton piles on fears of computer takeover\" (https://www.cbc.ca/news/business/ai-doom-column-don-pittis1.6829302). CBC . Archived (https://web.archive.org/web/20240707032135/https://www.cbc. ca/news/business/ai-doom-column-don-pittis-1.6829302) from the original on 7 July 2024. Retrieved 5 October 2024.\n - 275. \" '50-50 chance' that AI outsmarts humanity, Geoffrey Hinton says\" (https://www.bnnbloomb erg.ca/50-50-chance-that-ai-outsmarts-humanity-geoffrey-hinton-says-1.2085394). Bloomberg BNN . 14 June 2024. Retrieved 6 July 2024.\n - 276. Valance (2023).\n - 277. Taylor, Josh (7 May 2023). \"Rise of artificial intelligence is inevitable but should not be feared, 'father of AI' says\" (https://www.theguardian.com/technology/2023/may/07/rise-of-arti ficial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says). The Guardian . Archived (https://web.archive.org/web/20231023061228/https://www.theguardian.com/techn ology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-fatherof-ai-says) from the original on 23 October 2023. Retrieved 26 May 2023.\n - 278. Colton, Emma (7 May 2023). 
\" 'Father of AI' says tech fears misplaced: 'You cannot stop it' \" (https://www.foxnews.com/tech/father-ai-jurgen-schmidhuber-says-tech-fears-misplaced-can not-stop). Fox News . Archived (https://web.archive.org/web/20230526162642/https://www.fo xnews.com/tech/father-ai-jurgen-schmidhuber-says-tech-fears-misplaced-cannot-stop) from the original on 26 May 2023. Retrieved 26 May 2023.\n - 279. Jones, Hessie (23 May 2023). \"Juergen Schmidhuber, Renowned 'Father Of Modern AI,' Says His Life's Work Won't Lead To Dystopia\" (https://www.forbes.com/sites/hessiejones/20 23/05/23/juergen-schmidhuber-renowned-father-of-modern-ai-says-his-lifes-work-wont-leadto-dystopia). Forbes . Archived (https://web.archive.org/web/20230526163102/https://www.fo rbes.com/sites/hessiejones/2023/05/23/juergen-schmidhuber-renowned-father-of-modern-ai -says-his-lifes-work-wont-lead-to-dystopia/) from the original on 26 May 2023. Retrieved 26 May 2023.\n - 280. McMorrow, Ryan (19 December 2023). \"Andrew Ng: 'Do we think the world is better off with more or less intelligence?' \" (https://www.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f93 52be3). Financial Times . Archived (https://web.archive.org/web/20240125014121/https://ww w.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f9352be3) from the original on 25 January 2024. Retrieved 30 December 2023.\n - 281. Levy, Steven (22 December 2023). \"How Not to Be Stupid About AI, With Yann LeCun\" (http s://www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview). Wired . Archived (h ttps://web.archive.org/web/20231228152443/https://www.wired.com/story/artificial-intelligenc e-meta-yann-lecun-interview/) from the original on 28 December 2023. Retrieved 30 December 2023.", - "page_start": 44, - "page_end": 44, - "source_file": "wikipedia3.pdf" - }, - { - "text": "## 5. 
Previous Projections\n\nAt the end of September 2014 the published prison population was within 1.8 % of the 2013 Scenario 2 (central) projection, and within 3.4 % of the 2013 Scenario 1 projection and 0.2 % of the 2013 Scenario 3 projection. This does not indicate which scenario the actual prison population will track going forward.\n\nDifferences between the 2013 projections and the actual population could be explained by changes, different to those projected, in overall demand, offence mix, age and gender of defendants, court routes, custody rates or sentence lengths.\n\nChart 3 plots the 2014 Central Scenario projection against the three 2013 prison population projections. The 2014-2020 Central Scenario projection is above all three scenarios from last year. The higher level of the new projections can be attributed to a more serious case mix coming into the courts with a resulting increase in average custodial sentence lengths. The projection for June 2019 in the Central Scenario this year is 10.2 % above the equivalent scenario (Scenario 2) last year.\n\nChart 3: Comparing 2013 and 2014 projections (November 2014 - December 2020)\n\n<!-- image -->", - "page_start": 14, - "page_end": 14, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "## Table of Contents\n\n## PART II. OTHER INFORMATION\n\n## ITEM 1. LEGAL PROCEEDINGS\n\nFor a description of our material pending legal proceedings, please see Note 10, Commitments and Contingencies , to the consolidated financial statements included elsewhere in this Quarterly Report on Form 10-Q.\n\n## ITEM 1A. RISK FACTORS\n\nOur operations and financial results are subject to various risks and uncertainties, including the factors discussed in Part I, Item 1A, Risk Factors in our Annual Report on Form 10-K for the year ended December 31, 2023, which could adversely affect our business, financial conditions and future results.\n\n## ITEM 2. 
UNREGISTERED SALES OF EQUITY SECURITIES AND USE OF PROCEEDS\n\nIn connection with the offering of 2.00% Convertible Senior Notes due 2024 in May 2019, we sold warrants to each of Société Générale, Wells Fargo Bank, National Association, Credit Suisse Capital LLC (later assigned to UBS AG, London Branch) and Goldman, Sachs & Co. LLC (together, the '2019 Warrantholders'). Between August 19, 2024 and September 30, 2024, we issued an aggregate of 8,506,223 shares of our common stock to the 2019 Warrantholders pursuant to their exercise of such warrants, which were net of the applicable exercise prices. Such shares were issued pursuant to an exemption from registration provided by Rule 3(a)(9) of the Securities Act of 1933.\n\n## ITEM 3. DEFAULTS UPON SENIOR SECURITIES\n\nNone.\n\n## ITEM 4. MINE SAFETY DISCLOSURES\n\nNot applicable.\n\n## ITEM 5. OTHER INFORMATION\n\nNone of the Company's directors or officers adopted, modified or terminated a Rule 10b5-1 trading arrangement or a non-Rule 10b5-1 trading arrangement during the Company's fiscal quarter ended September 30, 2024, as such terms are defined under Item 408(a) of Regulation S-K, except as follows:\n\nOn July 25, 2024, Robyn Denholm, one of our directors, adopted a Rule 10b5-1 trading arrangement for the potential sale of up to 674,345 shares of our common stock (all resulting from stock options expiring in June 2025), subject to certain conditions. The arrangement's expiration date is June 18, 2025.\n\nOn July 31, 2024, Kimbal Musk, one of our directors, adopted a Rule 10b5-1 trading arrangement for the potential sale of up to 152,088 shares of our common stock, subject to certain conditions. The arrangement's expiration date is May 30, 2025.\n\nOn August 12, 2024, Kathleen Wilson-Thompson, one of our directors, adopted a Rule 10b5-1 trading arrangement for the potential sale of up to 300,000 shares of our common stock, subject to certain conditions. 
The arrangement's expiration date is February 28, 2025.", - "page_start": 46, - "page_end": 46, - "source_file": "tesla_form_10q.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n## What can users expect from UKCP18?\n\nThere are three components to UKCP18: observations of historic climate, marine projections and projections over land. These components are described below and summarised in Table 1. UKCP18 will provide each of these components at a higher spatial and temporal resolution than UKCP09 and with more information on different types of uncertainty.\n\n<!-- image -->\n\n<!-- image -->\n\n## OBSERVATIONS\n\n## Annual report: State of the UK Climate. Downloadable data.\n\nThe 'State of the UK Climate' report for 2017 will be included as part of the UKCP18 package, bringing the observed data right up to date. This annual update 8 covers trends, the multidecade climate record and significant weather events such as the early July 2015 hot spell and the exceptionally mild and wet December of the same year.\n\nQuality controlled UK observational datasets from the Met Office observing network, provided at spatial resolutions to match the land projections and for pre-defined administrative regions and river basins, will be available under an Open Government Licence 9 . For variables such as temperature and precipitation these data sets will span the late 19th Century to the present day and will be provided for daily, monthly, seasonal, annual and long term averages.\n\n## MARINE PROJECTIONS\n\n## Sea level rise. Storm surge. Past event case studies.\n\nSea-level rise projections will extend to 2100 and will include contributions from glaciers, ice sheets, freshwater reservoirs, groundwater and thermal expansion. Outputs will include an estimate of the year-to-year changes in sea level rise and a 'plausible but highly unlikely' scenario known as H++. A new feature of UKCP18 will be assessing the credibility of making sea level rise projections to 2300. 
The projections will use the latest information from the CMIP5 models and application of the methods used in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report 10 .\n\nThe UKCP09 storm surge projections will be updated to provide new estimates of the change in high water levels over the 21st Century. These estimates will be based on a combination of projected mean sea level change and projections of change in the extremes due to changes in atmospheric storminess. These 'storminess' projections will use the same surge model used in operational weather forecasting, using the wind and pressure from the CMIP5 ensemble to drive the surge. New understanding of the modification of large-scale sea level change signals as they pass from the open ocean onto the shelf sea around the UK will be incorporated into the UKCP18 marine projections. UKCP18 will also include storm surge historical case studies derived from applying plausible future sea level change to historical extreme events.\n\n - 8 The latest update can be found at http://www.metoffice.gov.uk/climate/uk/about/state-of-climate\n - 9 http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/\n - 10 https://www.ipcc.ch/report/ar5/", - "page_start": 1, - "page_end": 1, - "source_file": "legal1_opengouvernementlicense.pdf" - }, - { - "text": "- 157. Roberts, Siobhan (25 July 2024). \"AI achieves silver-medal standard solving International Mathematical Olympiad problems\" (https://www.nytimes.com/2024/07/25/science/ai-math-al phaproof-deepmind.html). The New York Times . Archived (https://web.archive.org/web/2024 0926131402/https://www.nytimes.com/2024/07/25/science/ai-math-alphaproof-deepmind.ht ml) from the original on 26 September 2024. Retrieved 7 August 2024.\n - 158. LLEMMA . (https://blog.eleuther.ai/llemma/) eleuther.ai. Retrieved 2024-08-07.\n - 159. AI Math. 
(https://julius.ai/home/ai-math) Archived (https://web.archive.org/web/20241005165 649/https://julius.ai/home/ai-math) 5 October 2024 at the Wayback Machine Caesars Labs, 2024. Retrieved 2024-08-07.", - "page_start": 37, - "page_end": 37, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "news2.pdf", - "query": "How flooding was prevented in Vancouver? ", - "target_page": 1, - "target_passage": "In Vancouver, B.C., Environmental Youth Alliance and Strathcona Community Gardens created a wetland on downtown’s east side, an area historically prone to flooding. ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "<!-- image -->\n\nMENU\n\n<!-- image -->\n\n## ISSUE\n\nDecember 2024\n\n## CATEGORIES\n\nHome - Safety Community Affairs Finance - Insurance Editor's Picks\n\n## FRANÇAIS\n\nTrois façons dont des collectivités au Canada réduisent leurs risques d'inondation\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nRADIO\n\n<!-- image -->\n\n## Three ways Canadian communities are reducing flood risks\n\n(NC) Communities in Canada are facing increased flood risks, with 1.5 million homes highly exposed. There are large-scale programs available across the country providing flood protection measures for communities at risk, such as Intact's Municipal Climate Resiliency Grants. This program is helping build the resilience of communities and homes through a variety of preventative actions.\n\nWetlands can reduce flood risk by absorbing large quantities of water, but they are not typically found in cities. In Vancouver, B.C., Environmental Youth Alliance and Strathcona Community Gardens created a wetland on downtown's east side, an area historically prone to flooding. 
Made up of natural elements like ponds and marshes, the wetland reduces the community's flood risk by catching and absorbing rainfall and runoff from surrounding surfaces.\n\nKnowing the risks is the first step to protecting homes and communities. In New Brunswick, the City of Fredericton launched a Neighbourhood Flood Risk Tool to provide easy access to online flood prevention guidance. Residents can input their addresses to see if they are at risk and learn tips to reduce the risk of flooding around their properties. The portal launched in the summer of 2023 and was viewed 27,000 times in its first year.\n\nRebate programs are a powerful motivation for homeowners to make upgrades that might otherwise be put off. In PEI, the City of Charlottetown offered rebates covering 75 per cent of eligible material and labour costs, up to a maximum of $1,000. More than 90 properties completed upgrades, including installing sump pumps, backup batteries, backwater valves, and water monitors and alarms, to better prepare them for extreme weather events.\n\nCommunities can learn more about the grant program and how to apply at intactfc.com/mcrg.\n\nwww.newscanada.com\n\nWord Count: 281\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n## EDITOR'S PICKS\n\nHave your say! Complete our 2025 Media Survey\n\n<!-- image -->\n\nRetrain your way to a new job\n\n<!-- image -->\n\nThe top AI-powered tech trends in 2025\n\n<!-- image -->", - "page_start": 0, - "page_end": 0, - "source_file": "news2.pdf" - }, - { - "text": "- 3. How can it be demonstrated that the proposed solution realizes the set of goals?", - "page_start": 625, - "page_end": 625, - "source_file": "sg247938.pdf" - }, - { - "text": "| The outdoor environment | How was it to exercise outdoors? |\n| | How did you perceive the city park environment for exercise? 
|\n| Closing questions | Are there any experiences from participation that you would like to elaborate on? Is anything related to this project that we have not talked about that you would like to say? |", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed13.pdf" - }, - { - "text": "It was an added positive experience to use our city park and notice all the other people who were there … it is something about challenging our comfort-zone . (ID4, EDSS: 0)\n\nThe natural environment was also described as taking focus away from MS symptoms. Cold, rainy or snowy weather conditions required planning of adequate clothing; in addition, these conditions led some participants to use cautious behavior when the ground was slippery and led a few to omit sessions. However, mastering outdoor exercise was highlighted in positive terms, such as discovering new ways to become active.\n\n## 3.4 Professional leadership, tailoring and co-creation of enjoyment\n\nThe way the physiotherapists led the group and, in particular, interacted with each participant were regarded as helpful for improving their bodily functions and activity levels. Some participants reported being afraid to try out new activities or training at high intensities after being diagnosed with MS but felt safe to explore when supervised by the physiotherapist because of their trust in the relationship between them and in the physiotherapist ' s professional knowledge.\n\nHow the physiotherapist approached the participants individually was described as important from this perspective. In particular, bodily interactions in which the physiotherapist demonstrated with his or her own body or placed his or her hands on the participant ' s body to correct a movement were reported to be successful, as it helped to increase speed and gave participants a sense of performing better or for a longer duration. 
If they did an exercise in a suboptimal way, participants reported receiving precise supervision, or if they expressed pain or were injured, the physiotherapist was supportive, assessed them and", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed13.pdf" - }, - { - "text": "threaten their conservation status. To support this, data collection on by-catch for all sensitive species needs to be stepped up.\n\nIn addition, fisheries-management measures must be established in all marine protected areas according to clearly defined conservation objectives and on the basis of the best available scientific advice.\n\n## 2.2.7. Restoring freshwater ecosystems\n\nThe EU's legal framework on water is ambitious but implementation is lagging behind and enforcement must be stepped up 46 . Greater efforts are needed to restore freshwater ecosystems and the natural functions of rivers in order to achieve the objectives of the Water Framework Directive. This can be done by removing or adjusting barriers that prevent the passage of migrating fish and improving the flow of water and sediments. To help make this a reality, at least 25,000 km of rivers will be restored into free-flowing rivers by 2030 47 through the removal of primarily obsolete barriers and the restoration of floodplains and wetlands. Technical guidance and support to the Member States to identify sites and help mobilise funding will be provided by the Commission in 2021, in consultation with all relevant authorities 48 . Member State authorities should review water abstraction and impoundment permits to implement ecological flows in order to achieve good status or potential of all surface waters and good status of all groundwater by 2027 at the latest, as required by the Water Framework Directive 49 . 
To that effect, the Commission will provide technical support to Member States on their measures by 2023.\n\nOverall, large-scale river and floodplain restoration investments 50 can provide a major economic boost for the restoration sector and for local socioeconomic activities such as tourism and recreation. At the same time, these investments can improve water regulation, flood protection, nursery habitats for fish, and the removal of nutrient pollution.\n\n## 2.2.8. Greening urban and peri-urban areas\n\nGreen urban spaces , from parks and gardens to green roofs and urban farms, provide a wide range of benefits for people. They also provide opportunities for businesses and a refuge for nature. They reduce air, water and noise pollution, provide protection from flooding, droughts and heat waves, and maintain a connection between humans and nature 51 .\n\nThe recent lockdowns due to the COVID-19 pandemic have shown us the value of green urban spaces for our physical and mental wellbeing . While protection of some urban", - "page_start": 12, - "page_end": 12, - "source_file": "legal5_eubiodiversity_cc4.pdf" - }, - { - "text": "- ( h ) 2010 c. 29.\n - ( i ) And see section 2 of the Flood and Water Management Act 2010 for the meaning of 'risk'.\n - ( j ) S.I. 2014/3120. There are no relevant amending instruments.", - "page_start": 39, - "page_end": 39, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "- (a) 'flood' and 'coastal erosion' have the meanings given in section 1 of the Flood and Water Management Act 2010( h );\n - (b) 'lead local flood authority' has the meaning given in section 6(7) of that Act;\n - (c) 'risk management' has the meaning given in section 3 of that Act( i ).\n - 23. 
-(1) Workers engaged in essential or emergency works-\n - (a) related to-\n - (i) a generating station,\n - (ii) an electricity interconnector,\n - (iii) a district heat network as defined in regulation 2 of the Heat Network (Metering and Billing) Regulations 2014( j ),\n - (iv) communal heating as defined in regulation 2 of the Heat Network (Metering and Billing) Regulations 2014,\n - (v) automated ballast cleaning and track re-laying systems on a network, or\n - (vi) the commissioning, maintenance and repair of industrial machinery for use on a network; or\n - ( g ) Section 17A was inserted by section 1 of the Water Act 2014.", - "page_start": 39, - "page_end": 39, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "Panorama of the inner city of Lyon, taken from the basilica of Notre-Dame de Fourvière's roof\n\n<!-- image -->\n\n## Climate\n\nLyon has a humid subtropical climate (Köppen: Cfa ), bordering an oceanic climate ( Köppen : Cfb , Trewartha: Do ). [38] The mean temperature in Lyon in the coldest month is 4.1 °C (39.4 °F) in January and in the warmest month in July is 22.6 °C (72.7 °F). Precipitation is adequate year-round, at an average of 820 mm (32.3 in), the winter months are the driest. The highest recorded temperature was 40.5 °C (104.9 °F) on 13 August 2003 while the lowest recorded temperature was -24.6 °C (-12.3 °F) on 22 December 1938. 
[39]\n\nIce on the Saône, 2012\n\n<!-- image -->", - "page_start": 4, - "page_end": 4, - "source_file": "wikipedia4.pdf" - }, - { - "text": "- (2) For the purposes of sub-paragraph (1)-\n - (a) 'essential or emergency works' includes-\n - (i) inspections, maintenance, repairs, and asset replacement activities,\n - (ii) monitoring, sampling and analysis of water supplies under the Private Water Supplies (England) Regulations 2016( a ), the Water Supply (Water Quality) Regulations 2016( b ), the Private Water Supplies (Wales) Regulations 2017( c ), or the Water Supply (Water Quality) Regulations 2018( d );\n - (b) 'sewerage licensee' means the holder of a sewerage licence under section 17BA of the Water Industry Act 1991( e );\n - (c) 'sewerage services' has the meaning given in section 219(1) of the Water Industry Act 1991( f );\n - (d) 'water supply licensee' has the meaning given in sections 17A(7) and 219(1) of the Water Industry Act 1991( g ).\n - 22. -(1) Workers engaged in essential or emergency works relating to flood and coastal erosion risk management on behalf of-\n - (a) the Environment Agency; or\n - (b) a lead local flood authority in England.\n - (2) For the purposes of sub-paragraph (1)-", - "page_start": 39, - "page_end": 39, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "| V-JEPA | ViT-L/16 | 270M | 90K | 80.8 | 69.5 | 25.6 | 74.8 | 60.3 | 67.8 | 85.6 | 75.1 |", - "page_start": 6, - "page_end": 6, - "source_file": "arxiv3.pdf" - } - ] - }, - { - "references": { - "source_file": "news2.pdf", - "query": "How can citizens in Fredericton easily access flood risk data?", - "target_page": 1, - "target_passage": "New Brunswick, the City of Fredericton launched a Neighbourhood Flood Risk Tool to provide easy access to online flood prevention guidance.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "<!-- image -->\n\nMENU\n\n<!-- image -->\n\n## ISSUE\n\nDecember 2024\n\n## CATEGORIES\n\nHome - 
Safety Community Affairs Finance - Insurance Editor's Picks\n\n## FRANÇAIS\n\nTrois façons dont des collectivités au Canada réduisent leurs risques d'inondation\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nRADIO\n\n<!-- image -->\n\n## Three ways Canadian communities are reducing flood risks\n\n(NC) Communities in Canada are facing increased flood risks, with 1.5 million homes highly exposed. There are large-scale programs available across the country providing flood protection measures for communities at risk, such as Intact's Municipal Climate Resiliency Grants. This program is helping build the resilience of communities and homes through a variety of preventative actions.\n\nWetlands can reduce flood risk by absorbing large quantities of water, but they are not typically found in cities. In Vancouver, B.C., Environmental Youth Alliance and Strathcona Community Gardens created a wetland on downtown's east side, an area historically prone to flooding. Made up of natural elements like ponds and marshes, the wetland reduces the community's flood risk by catching and absorbing rainfall and runoff from surrounding surfaces.\n\nKnowing the risks is the first step to protecting homes and communities. In New Brunswick, the City of Fredericton launched a Neighbourhood Flood Risk Tool to provide easy access to online flood prevention guidance. Residents can input their addresses to see if they are at risk and learn tips to reduce the risk of flooding around their properties. The portal launched in the summer of 2023 and was viewed 27,000 times in its first year.\n\nRebate programs are a powerful motivation for homeowners to make upgrades that might otherwise be put off. In PEI, the City of Charlottetown offered rebates covering 75 per cent of eligible material and labour costs, up to a maximum of $1,000. 
More than 90 properties completed upgrades, including installing sump pumps, backup batteries, backwater valves, and water monitors and alarms, to better prepare them for extreme weather events.\n\nCommunities can learn more about the grant program and how to apply at intactfc.com/mcrg.\n\nwww.newscanada.com\n\nWord Count: 281\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n## EDITOR'S PICKS\n\nHave your say! Complete our 2025 Media Survey\n\n<!-- image -->\n\nRetrain your way to a new job\n\n<!-- image -->\n\nThe top AI-powered tech trends in 2025\n\n<!-- image -->", - "page_start": 0, - "page_end": 0, - "source_file": "news2.pdf" - }, - { - "text": "## Management's Discussion and Analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\n## Investment in the Urban Centres of New Brunswick and PEI\n\n26% of Killam's apartment NOI is currently generated in New Brunswick, split principally between the province's three major urban centres, Fredericton, Moncton and Saint John. Fredericton and Moncton both experienced high population growth over the last number of years, posting 9.3% and 8.7% growth, respectively, between the 2006 and 2011 Census periods. Fredericton is the provincial capital and home to the province's largest university. Moncton is the largest city and a transportation and distribution hub for Atlantic Canada. Population growth in Moncton in recent years has been driven by urbanization from French communities in Northern New Brunswick. The Saint John market, representing 5.6% of Killam's apartment NOI, is focused on industry and energy. After strong energy investments in the city in the mid-2000s, the city has seen a reduction in economic projects over the last three years. 
Home to Irving Oil's refinery operations, the proposed Energy East Pipeline project to bring oil from Western Canada to refineries in Quebec and New Brunswick, has potential for strong economic growth for the city and the province.\n\nKillam also has a 19% market share in Charlottetown, the capital and economic center of Prince Edward Island.\n\n## Expanding Ownership in Ontario\n\nKillam's apartment portfolio includes 1,359 apartment units in Ontario, up from 225 units three years ago, and includes properties in Ottawa, Toronto, London and Cambridge. In addition to apartments, 42% of Killam's MHC sites are located in Ontario. Killam is focused on increasing its geographic diversification by acquiring more properties in Ontario.\n\n## A Diversified Portfolio of Apartment Properties\n\nKillam's apartment portfolio includes a variety of property types, including high-rise (24% of units), mid-rise with elevators (33%) , walk-ups (41%) and a small number of townhouses (2%). The portfolio includes rents ranging from affordable to high-end Class A properties. The average rent for Killam's apartment units at the end of 2013 was $915.\n\nThe average age of Killam's apartment portfolio is 28 years. With a focus on both developing and acquiring newer properties, 23% of Killam's apartments are considered new (built after 2001), on a unit count basis. Compared to the national average of 7%, as per CMHC's 2010 Housing Observer, Killam's portfolio is considerably newer and should result in lower capital and maintenance costs for the foreseeable future. 43% of Killam's noi is generated from apartment units that are considered new, with 20% of the company's noi generated from units built in the last five years.\n\n## MHCs Compliment Killam's Apartment Portfolio\n\nWith MHCs, Killam owns the land and infrastructure supporting each community and leases the sites to the tenants, who own their own homes and pay Killam a monthly rent. 
In addition to site rent, the tenant may have a mortgage payment to a financial institution for their home. The average site rent in Killam's MHC portfolio was $222 per month, which offers value and affordability to tenants. The homeowner is responsible for property taxes based on the assessed value of their home and Killam is responsible for the property tax on the land.\n\nMHCs require less recurring capital investment and deliver a more predictable and stable cash flow than apartments. MHC home owners are responsible for the repair, maintenance and operating costs of their homes, which removes significant variable costs that are typically borne by Killam for apartments. The operating profit margin in Killam's MHC business averaged 62.4% over the last two years, compared to 58.9% for apartments.", - "page_start": 32, - "page_end": 32, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "digital, attacks to privacy and to civil rights in general can and are coming by so many other sides that those from (properly done) Open Data are a really tiny percentage of the total.\n\nThis is a consequence of the fact that data about us end up online from the most different sources (including ourselves and our acquaintances), and that often it would be very hard to discover, never mind prove , that they've been used against our interest. There have been concerns, for example, that insurance companies may charge higher fees for life insurance to those among their customers who... put online a family tree from which it shows that they come from families with an average life expectancy lower than usual.\n\nAssuming such concerns were real, would it always be possible to spot and prove such abuses of data, that weren't even published by any Public Administration? 
Of course, publishing online complete, official Census data of several generations, in a way that would make such automatic analysis possible would be a totally different matter.\n\nGetting rid of all the unjustified concerns about privacy is very simple, at least in theory. All is needed to dismiss for good the idea that Open Data is a generalized attack to privacy is to always remember and explain that:\n\n - 1. Most Open Data have nothing personal to begin with (examples: digital maps, budgets, air pollution measurements....)\n - 2. The majority of data that are directly related to individuals (e.g. things like names and address of people with specific diseases, or who were victims of some crime) have no reason to be published, nor there is any actual demand for them by Open Data advocates\n - 3. Exceptions that limit privacy for specific cases and categories of people (e.g. candidates to public offices, Government and Parliament members etc...) already exist in many countries\n - 4. Very often, in practice, Open Data struggles only happen about when and how to make available in the most effective way for society information that was already recognized as public. What to declare public, hence open, is indeed a serious issue (more on this in the next paragraph) but is a separate one.\n\n## 3.8. Need to better define what is Public Data\n\nTogether with citizens education, there is a huge challenge that Governments and the Open Data movement will have to face (hopefully together) in 2011 and beyond. 
This challenge is to update and expand the definition of Public Data and to have it accepted by lawmakers and public administrators.", - "page_start": 22, - "page_end": 22, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "speaking about when, in September 2010, he wrote about the great divide caused by Open Health Data:\n\n[in the USA] \"statistically speaking, chronic disease is associated with being older, African American, less educated, and living in a lower-income household. By contrast, Internet use is statistically associated with being younger, white, collegeeducated, and living in a higher-income household. Thus, it is not surprising that the chronically ill report lower rates of Internet access.\n\nStarting from this, and commenting a study of the performances, with respect to coronary artery bypass grafting, of several medical centers, Frydman expressed his concern that:\n\nthe empowered will have access to [this data] and will act upon it, while many of the people suffering from chronic diseases (the same population that would benefit most from access to this information) won't. Over time it is therefore probable that the current centers of excellence will treat an ever growing number of empowered while the centers that currently experience high mortality rates will get worse and worse result, simply because they will treat an ever growing number of digital outliers who haven't the possibility to obtain health data and apply filters.\n\nSince one of the topics of this project is the economic value of Open Data, it is necessary to add a somewhat obvious observation to Frydman's concerns (regardless of their probability). Even if it is difficult now to make accurate estimates, such negative developments would surely impact also the costs of health services and insurances, not to mention healthcare-related jobs, both in the communities hosting centers of excellence and in those with the worst ones.\n\n## 3.6.4. 
Lack of education to data\n\nBoris Müller, professor for interface and interaction design at the University of Applied Sciences in Potsda, said in an April 2011 interview: \"I think that really a citizen needs to know how visualizations work in order to really evaluate the quality of the data and the quality of the evaluation.\" As data visualization and analysis becomes more popular easier to use (even as a tool for manipulating the public opinion), it's important for the public to:\n\n - · understand that, before becoming digital, information was coded, stored and used in many ways, through social norms and human interactions more complex than computer ones (cfr the digitization of India land ownership records), therefore making exact, one-to-one equivalence between analog and digital procedures hard or impossible in many cases\n - · think critically about where data comes from\n - · remember to always follow the development of data-based stories, or accusation.", - "page_start": 19, - "page_end": 19, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "decisions. Ideally, this training should be provided at a local level with local programs, in a way that makes it possible to use it on local issues, for the reasons and in the ways discussed in the next paragraph. For example, visualization techniques like those used by ABC News to show the effects of the March 2011 Japan Earthquake, in which all the user has to do to compare scenes from before and after the earthquake is to move a slider, should be routinely used to explain proposals about urban planning, zoning and related topics.\n\n## 4.6. 
Focus on local, specific issues to raise interest for Open Data\n\nConsidering the continuous evidence and concerns about scarce interest and preparation of citizens to use Open Data in their political, economic and professional decisions, one of the final recommendations of the Open Data, Open Society report confirms its importance and needs to be repeated: it is very effective, if not simply necessary if the goal is to generate a critical mass of citizens that demand and use Open Data in the shortest possible time, to practice all the recommendations of this report at the local level ,\n\nMost people encounter their local governments much more often then their national ones. When working within a single city or region it is much easier to inform citizens, raise their interest and involve them, because they would be searching local solutions to improve local services and/or save local money. There may also be much more opportunities to do so, especially in this period of financial crisis that will see substantial decreases both in credit by financial institutions and in subsidies from central governments. Concreteness and, as they say in marketing, \"customer focus\" must be the keys for local activists and public employees working on local Open Data:\n\n - · work on specific issues and with precise objectives\n - · focus on immediate usefulness\n - · work on demand, on the services that people want. Required services define what data must be open, not the contrary\n\nThis is the most effective, if not the only strategy, to solve one of the biggest debates in open data: \"how do we get people to use the data that we publish?\" . The right question, instead, is \"what data do people want?\". 
Even if citizens don't realize yet that what they actually want is more Open Data, or that what they need can be done more quickly and cheaply by releasing some information in that way.\n\nA great example of what all this means is the Great British Public Toilet Map: a public participation", - "page_start": 30, - "page_end": 30, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "## 4. Conclusion: seven Open Data strategy and best practices suggestions\n\nStarting from the trends and conclusion described in the previous chapter, this section lists, in the most synthetic way possible, some strategic actions and best practices for 2011, that we consider important in making Open Data succeed and bring the greatest possible benefits to all citizens and businesses.\n\n## 4.1. Properly define and explain both Open Data and Public Data\n\nJust because Open Data is becoming more popular (and, we may say, more and more necessary every year), it is essential to intensify efforts to explain, both to the general public and to public administrators, that\n\n - 1. Privacy issues are almost always a non-issue. Quoting from What \"open data\" means and what it doesn't): Privacy and/or security concerns with putting all the government's data out there are a separate issue that shouldn't be confused with Open Data. Whether data should be made publicly available is where privacy concerns come into play. Once it has been determined that government data should be made public, then it should be done openly.\n - 2. Defining as Public and consequently opening them in the right way, much more data than those born and stored inside Public Administration is an urgent task that is in the best interest of all citizens and businesses\n\n## 4.2. 
Keep political issues separated by economics ones\n\nOpen Data can reduce the costs of Public Administrations and generate (or at least protect, as in the case of deals from local merchants) local jobs in all sectors of the economy, not just high-tech ones. There seems to be enough evidence for these two assertions to go for more Open Data even if they had no effect at all on participation to politics. This should always be kept in mind, also because some data that can directly stimulate business are not the same that would be useful for transparency.", - "page_start": 26, - "page_end": 26, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "based PSI analysis and presentation, not just to crime mapping:\n\nIn general, a map is just a map, not reality. It doesn't always and necessarily provide scientific evidence. Crime maps, for example, are NOT safety maps, as most citizens would, more or less consciously, like them to be: a tool that tells them where to buy a house their according to the level of criminality in the district.\n\nWhen used in that way, crime maps can give unprepared users two false impressions: the first, obvious one, is that certain areas are only criminal spaces, exclusively inhabited by criminals. The other is to encourage a purely egoistic vision of the city, where the need for safety becomes paranoia and intolerance and all that matters is to be inside some gated community. This doesn't lower crime levels at all: the only result is to increase urban segregation.\n\nTo make things worse, crime data not analyzed and explained properly don't just contribute to strengthen egoistic attitudes and lock the urban areas that are actually the most plagued by crime into their current difficult state indefinitely. Sometimes, they may even perpetuate beliefs that are, at least in part, simply false. 
Of course, when those beliefs not grounded in facts already existed, open crime data can help, by finding and proving the gaps between perception of criminality and reality. Belleri, for example, notes that residents of Milan consider the outskirts of their city more dangerous than downtown Milan, while Londoners think the opposite about London... but in both cities the truth emerging from data is exactly the opposite (at least for certain categories of crime) of what their residents believe.\n\n## 3.6.3. Unequal access\n\nEven ignoring crime mapping, in some worst case scenarios, data openness may be not only hindered by social divisions, but also create or enhance them. If citizens can't find and recognize real, relevant meaning and practical value in data, as well as way to use them to make change happen, there won't be any widespread, long lasting benefit from openness. How can we guarantee, instead, that such meaning and value will be evident and usable? What are the ingredients for success here?\n\nEnhancing access to PSI it's harder than it may seem because it isn't just a matter of physical infrastructure. It is necessary that those who access Open Data are in a position to actually understand them and use them in their own interest.\n\nThis is far from granted also because, sometimes, the citizens who would benefit the most from certain data are just those, already poor, marginalized and/or without the right education, who have the least chances to actually discover and be able to use them. This is what G. Friedman was", - "page_start": 18, - "page_end": 18, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "## RISK MANAGEMENT\n\nWe are committed to continually strengthening our risk management capabilities to protect and enhance shareholder value. 
The purpose of risk management is not to eliminate risk but to optimize trade-offs between risk and return to maximize value to the organization.\n\n## Risk G overnance\n\nThe Board has overall responsibility for risk governance and oversees management in identifying the principal risks we face in our business and implementing appropriate risk assessment processes to manage these risks. It delegates certain duties to the Audit Committee.\n\nThe Audit Committee discusses risk policies with management and the Board, and assists the Board in overseeing our compliance with legal and regulatory requirements.\n\nThe Audit Committee also reviews:", - "page_start": 75, - "page_end": 75, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "with a project called \"Tales of Things\" to allow people to leave messages for each other (or just for the world) at the bus stops. Scanning the QR code now allows people to see not just the bus timetable, but also the notes other travelers have left on that stop, including \"what's nearby, who's waiting for whom, what number can you call for a good time. It's a cross between bus stop Facebook and digital graffiti\" , that happened thanks to the openness of the original bus stop data.\n\nThe Social Life of Data Project will study instead how particular datasets have been used, who used them, how those people are connected and what conversations happen around Open Data.\n\n## 3.3. Legal issues remain crucial\n\nProper licensing of Public data is essential. The more Open Data activities continue, the clearer this rule becomes. What distinguishes Open Data from \"mere\" transparency is reuse. Paraphrasing Eaves, until a government get the licensing issue right, Open Data cannot bring all the possible benefits in that country. 
If there are no guarantees that public data can be used without restriction, very little happens in practice, and when it happens it may be something against the public interest.\n\nCanadian Company Public Engines Inc, that is paid by local police departments to collect, process and analyze official crime data, also publishes online, with a proprietary license, anonymized summaries of those data. When in 2010 another company, Report See Inc, scraped those data from their website to reuse them, Public Engines sued.\n\nReporting this, D. Eaves rightly points out that both companies are right: one is trying to protect its investment, the other is simply trying to reuse what IS public data, by getting it from the ONLY place where it's available. This is what happens when public officials leave the ownership of public data to the third parties hired to collect them. Please note that, in practice, it makes very little difference whether those third parties are private, for-profit corporations or even other Public Administrations. Unless, of course, there are national laws already in place that define in advance what is the license of all present and future Public Data, no matter how they were generated and by whom , those data can be lost in any moment for society. In all other cases, the legal status of data will be either officially closed and locked, or uncertain enough to prevent most or all reuses. 
In February 2011, the news came that, even if they weren't the original copyright holders, Public Engines had been able to put together enough legal claims to convince Report See to give up.\n\nDisputes like this should not happen and would not happen if all contracts regarding collection and management of PSI clearly specified that all the resulting data either go directly into the public domain (after being anonymized if necessary, of course) or remain exclusive property of the", - "page_start": 12, - "page_end": 12, - "source_file": "Open_Data_Report.pdf" - }, - { - "text": "What is, exactly, Public Data? A definition that is accepted almost implicitly is \"data that is of public interest, that belongs to the whole community, data that every citizen is surely entitled to know and use\" . This definition is so generic that accepting it together with the assumption that all such data should be open as preached by the Open Data movement (online, as soon as possible, in machine readable format with an open license etc...) doesn't create any particular problem or conflict.\n\nReal problems however start as it has happened all too often so far, whenever we assume more or less consciously that \"Public Data\" in the sense defined above and data directly produced by Governments and Public Administrations, that is what's normally called PSI (Public Sector Information) are the same thing.\n\nThere is no doubt that Governments and Public Administrations produce huge quantities of Public Data. But this is an age of privatization of many public services, from transportation to healthcare, energy and water management. This is an age in which many activities with potentially very serious impacts on whole communities, like processing of hazardous substances or toxic waste, happen outside Public Administrations. 
The paradox is that, as Sasaki put it, this increased privatization is happening in the very same period in which \" we are observing a worldwide diffusion of access to information laws that empower citizens to hold government agencies accountable.\"\n\nIn such a context, \"Public Data\"is critical just because it is a much bigger set of data than what constitutes traditional, official PSI. \"Public Data\" includes all that information plus the much bigger amount of data describing and measuring all the activities of private companies, from bus timetables to packaged food ingredients, aqueducts performances and composition of fumes released in the atmosphere, that have a direct impact on the health and rights of all citizens of the communities affected by the activities of those companies.\n\nAre such data \"Public\" today, in the sense defined at the beginning of this paragraph, that is something every citizen has the right to know without intermediaries or delegates, or not? Should they be public? If yes, shouldn't law mandate that all such data be Open (that is, published online as soon as possible, in machine readable format with an open license etc...) just like, for example, the budget of some Ministry? 
Answering these questions may be one of the biggest challenges for the Open Data community, and for society as a whole, in the next years.\n\nHere are, in order to facilitate reflection on this issue, a few recent, real world examples of \"Public Data\" that are not PSI, and of the impacts of their lack of openness.", - "page_start": 23, - "page_end": 23, - "source_file": "Open_Data_Report.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed2.pdf", - "query": "In these mice, which lumbar levels were the dorsal root ganglion removed from?", - "target_page": 3, - "target_passage": "L3 to L5 DRGs were removed and postfixed for another 2 hours", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "- [64] Welin D, Novikova LN, Wiberg M, Kellerth JO, Novikov LN. Survival and regeneration of cutaneous and muscular afferent neurons after peripheral nerve injury in adult rats. Exp Brain Res 2008;186:315-23.\n - [65] West CA, Davies KA, Hart AM, Wiberg M, Williams SR, Terenghi G. Volumetric magnetic resonance imaging of dorsal root ganglia for the objective quantitative assessment of neuron death after peripheral nerve injury. Exp Neurol 2007;203:22-33.\n - [66] West CA, Ljungberg C, Wiberg M, Hart A. Sensory neuron death after upper limb nerve injury and protective effect of repair: clinical evaluation using volumetric magnetic resonance imaging of dorsal root ganglia. Neurosurgery 2013;73:632-40.\n - [67] West SJ, Bonboire D, Bennett DL. StereoMate: 3D stereological automated analysis of biological structures. bioRxiv 2020:648337.\n - [68] Wiberg R, Novikova LN, Kingham PJ. Evaluation of apoptotic pathways in dorsal root ganglion neurons following peripheral nerve injury. Neuroreport 2018;29:779-85.\n - [69] Yu X, Liu H, Hamel KA, Morvan MG, Yu S, Leff J, Guan Z, Braz JM, Basbaum AI. Dorsal root ganglion macrophages contribute to both the initiation and persistence of neuropathic pain. 
Nat Commun 2020;11:264.\n - [70] Zheng J, Lu Y, Perl ER. Inhibitory neurones of the spinal substantia gelatinosa mediate interaction of signals from primary afferents. J Physiol 2010;588:2065-75.", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed2.pdf" - }, - { - "text": "cell death and apoptosis with more than 10 genes were examined. Filtered count data of expressed and nondifferentially expressed genes were used as a background.\n\n## 2.8. Dorsal root ganglion culture\n\nDorsal root ganglia were dissected from MrgD CreERT2 ;Ai32 and Calca CreERT2 ;Ai32 mice . 1 week after dosing with tamoxifen and enzymatically digested at 37˚˚C for 80 minutes in dispase type II (4.7 mg/mL) plus collagenase type II (4 mg/mL) (Worthington Biochemical), as described previously. 63 Mechanically dissociated cells were plated onto laminin/poly-D-lysine (R&D Systems, Minneapolis, MN) treated coverslips in complete Neurobasal Plus medium (Neurobasal Plus media supplemented with 2% (vol/vol) B27 Plus, 1% N2, 1% Glutamax, and 1% antibiotic-antimycotic [ThermoFisher Scientific, Waltham, MA]). Mouse nerve growth factor (GF) (50 ng/mL; nerve growth factor (NGF), PeproTech, Cranbury, NJ) and 10 ng/mL glial-derived neurotrophic factor (GDNF, PeproTech) were added to the media under some conditions. Cytosine b -D-arabinofuranoside (4 m M) was added to the media for 24 hours the day after plating to reduce the proliferation of nonneuronal cells. Media was refreshed 3 times per week thereafter. Cultures were fixed for 10 minutes at room temperature with 4% paraformaldehyde and subsequently processed by immunocytochemistry (described earlier).\n\n## 2.9. Statistical analysis\n\nData are expressed as mean 6 SEM unless otherwise specified, and P values of less than 0.05 were considered significant. Power calculations were performed using G*Power 3.1.9.7. 15 A quantitative Venn diagram was created using BioVenn. 
25 All other statistical analyses were performed in Prism 10 (GraphPad Software, Inc, Boston, MA) or R using paired t tests or 1- or 2-way RM ANOVAs (repeated measures analysis of variance), where appropriate. Normality was assessed by the Shapiro-Wilk test. If the main analysis of variance effect was significant, ˇ S'ıd 'ak or Tukey multiple comparisons tests were performed. To compare population distributions of soma cross-sectional area or volume, Kolmogorov-Smirnov tests were performed.\n\n## 3. Results\n\n## 3.1. Peripheral nerve injury induces a loss of small neurons from the dorsal root ganglion\n\nTo assess the gross loss of neurons from DRG following nerve injury, we generated the Avil FlpO ;Atf3 CreERT2 ;RC::FLTG mouse line in which na¨ıve and axotomized sensory neurons were differentially labelled. In this mouse line, all neurons express tdTomato (Flp-dependent) in the na¨ıve state and switch to expressing green fluorescent protein (GFP) upon axonal damage and concurrent tamoxifen treatment (Flp- and Cre-dependent) ( Figs. 1A and B ). Following pilot experiments to optimize tamoxifen dosing regimen, this approach was both highly efficient and specific (with the caveat that it was necessary to wait for several days after nerve injury for Cre-induced GFP expression): 14 days after SNItrans surgery, GFP was expressed by 99.1 6 0.6% of Atf3-expressing ipsilateral L4 DRG neurons, while we observed GFP in only 4.6 6 0.7% of contralateral DRG neurons (Figs. S2A-D, http://links.lww.com/PAIN/C84). We then used a stereological approach to quantify the total number of neurons in L4 DRG ipsilateral to injury 1, 2, 4, and 8 weeks after SNItrans, as well as contralateral to injury. One week after SNItrans, we", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed2.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n## Peripheral nerve injury results in a biased loss of sensory neuron subpopulations\n\nAndrew H. Cooper a , Allison M. 
Barry b , Paschalina Chrysostomidou a , Romane Lolignier a , Jinyi Wang a , Magdalena Redondo Canales a , Heather F. Titterton a , David L. Bennett b , Greg A. Weir a, *\n\n## Abstract\n\nThere is a rich literature describing the loss of dorsal root ganglion (DRG) neurons following peripheral axotomy, but the vulnerability of discrete subpopulations has not yet been characterised. Furthermore, the extent or even presence of neuron loss following injury has recently been challenged. In this study, we have used a range of transgenic recombinase driver mouse lines to genetically label molecularly defined subpopulations of DRG neurons and track their survival following traumatic nerve injury. We find that spared nerve injury leads to a marked loss of cells containing DRG volume and a concomitant loss of small-diameter DRG neurons. Neuron loss occurs unequally across subpopulations and is particularly prevalent in nonpeptidergic nociceptors, marked by expression of Mrgprd. We show that this subpopulation is almost entirely lost following spared nerve injury and severely depleted (by roughly 50%) following sciatic nerve crush. Finally, we used an in vitro model of DRG neuron survival to demonstrate that nonpeptidergic nociceptor loss is likely dependent on the absence of neurotrophic support. Together, these results profile the extent to which DRG neuron subpopulations can survive axotomy, with implications for our understanding of nerve injury-induced plasticity and pain.\n\nKeywords: Sensory neuron, Neuron death, Transgenic reporter line, Neuropathic pain, Nerve injury\n\n## 1. Introduction\n\nDorsal root ganglion (DRG) neurons represent a molecularly and functionally heterogeneous population. Under normal conditions, this diversity contributes to the ability of the somatosensory nervous system to detect a myriad of sensory stimuli that result in the perceptions of touch, temperature, itch, and pain. 
Following nerve injury, physiological changes in DRG neurons lead to hyperexcitability, 57 which is a key pathological driver of neuropathic pain. 20,63 Concomitant molecular changes in discrete subpopulations also occur, and these have recently been comprehensively described in single-cell 37,44 and subpopulation-specific sequencing studies. 3 These studies describe a transient and generalized reduction in the expression of subpopulation-specific genes following nerve injury. 3,37,44\n\nIn addition to molecular changes, there is a rich literature describing the frank loss of DRG neurons following traumatic\n\nSupplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (www.painjournalonline.com).\n\nCopyright © 2024 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the International Association for the Study of Pain. This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\n\nhttp://dx.doi.org/10.1097/j.pain.0000000000003321", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed2.pdf" - }, - { - "text": "- [47] Schmitz C, Hof PR. Design-based stereology in neuroscience. Neuroscience 2005;130:813-31.\n - [48] Schulte A, Degenbeck J, Aue A, Schindeh utte M, Schlott F, Schneider M, Monoranu CM, Bohnert M, Pham M, Antoniadis G, Blum R, Rittner HL. Humandorsalroot ganglia after plexus injury: either preservation or loss of the multicellular unit. bioRxiv 2023.02.06.526934.\n - [49] Schulte A, Lohner H, Degenbeck J, Segebarth D, Rittner HL, Blum R, Aue A. Unbiased analysis of the dorsal root ganglion after peripheral nerve injury: no neuronal loss, no gliosis, but satellite glial cell plasticity. 
PAIN 2023;164:728-40.\n - [50] Shi TJS, Tandrup T, Bergman E, Xu ZQD, Ulfhake B, H okfelt T. Effect of peripheral nerve injury on dorsal root ganglion neurons in the C57 BL/6J\n - mouse: marked changes both in cell numbers and neuropeptide expression. Neuroscience 2001;105:249-63.\n - [51] Song H, Yao E, Lin C, Gacayan R, Chen MH, Chuang PT. Functional characterization of pulmonary neuroendocrine cells in lung development, injury, and tumorigenesis. Proc Natl Acad Sci 2012;109:17531-6.\n - [52] Takasu K, Sakai A, Hanawa H, Shimada T, Suzuki H. Overexpression of GDNF in the uninjured DRG exerts analgesic effects on neuropathic pain following segmental spinal nerve ligation in mice. J Pain 2011;12: 1130-1139.\n - [53] Tandrup T, Woolf CJ, Coggeshall RE. Delayed loss of small dorsal root ganglion cells after transection of the rat sciatic nerve. J Comp Neurol 2000;422:172-80.\n - [54] Terenghi G, Hart A, Wiberg M. The nerve injury and the dying neurons: diagnosis and prevention. J Hand Surg Eur Vol 2011;36:730-4.\n - [55] Usoskin D, Furlan A, Islam S, Abdo H, Lonnerberg P, Lou D, HjerlingLeffler J, Haeggstrom J, Kharchenko O, Kharchenko PV, Linnarsson S, Ernfors P. Unbiased classification of sensory neuron types by large-scale single-cell RNA sequencing. Nat Neurosci 2015;18:145-53.\n - [56] Vestergaard S, Tandrup T, Jakobsen J. Effect of permanent axotomy on number and volume of dorsal root ganglion cell bodies. J Comp Neurol 1997;388:307-12.\n - [57] Wall PD, Gutnick M. Properties of afferent nerve impulses originating from a neuroma. Nature 1974;248:740-43.\n - [58] Wang C, Gu L, Ruan Y, Geng X, Xu M, Yang N, Yu L, Jiang Y, Zhu C, Yang Y, Zhou Y, Guan X, Luo W, Liu Q, Dong X, Yu G, Lan L, Tang Z. Facilitation of MrgprD by TRP-A1 promotes neuropathic pain. FASEB J 2019;33: 1360-73.\n - [59] Wang H, Zylka MJ. Mrgprd-expressing polymodal nociceptive neurons innervate most known classes of substantia gelatinosa neurons. 
J Neurosci 2009;29:13202-9.\n - [60] Wang R, Guo W, Ossipov MH, Vanderah TW, Porreca F, Lai J. Glial cell line-derived neurotrophic factor normalizes neurochemical changes in injured dorsal root ganglion neurons and prevents the expression of experimental neuropathic pain. Neuroscience 2003; 121:815-24.\n - [61] Wang X, Archibald ML, Stevens K, Baldridge WH, Chauhan BC. Cyan fluorescent protein (CFP) expressing cells in the retina of Thy1-CFP transgenic mice before and after optic nerve injury. Neurosci Lett 2010; 468:110-4.\n - [62] Warwick C, Cassidy C, Hachisuka J, Wright MC, Baumbauer KM, Adelman PC, Lee KH, Smith KM, Sheahan TD, Ross SE, Koerber HR. MrgprdCre lineage neurons mediate optogenetic allodynia through an emergent polysynaptic circuit. PAIN 2021;162:2120-31.\n - [63] Weir GA, Middleton SJ, Clark AJ, Daniel T, Khovanov N, McMahon SB, Bennett DL. Using an engineered glutamate-gated chloride channel to silence sensory neurons and treat neuropathic pain at the source. Brain 2017;140:2570-85.", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 2. Spared nerve crush and transection lead to a loss of small DRG neurons. (A) Approach to restrict analysis to damaged afferents: a subcutaneous injection of the tracer FB into both hindpaws labelled tibial afferents, before unilateral SNItrans or SNIcrush surgery. (B) Representative image of FB labelling and NeuN immunostaining in the L4 DRG. The image is a projection of optical sections at 3m mintervals through the entirety of a 30m m-thick tissue section. Scale bar 5 100 m m. (C and D) Quantification of the cross-sectional area of FastBlue labelled DRG neurons ipsilateral and contralateral to SNItrans (C) or SNIcrush injury (D) reveals a loss of small afferents and subsequent shift in population distribution. 
Kolmogorov-Smirnov tests of cumulative distributions; SNItrans: D 5 0.25, P , 0.001; n 5 183 or 191 neurons from 3 mice; SNIcrush: D 5 0.22, P , 0.001, n 5 319 or 325 neurons from 3 mice. (E) Experimental approach for whole DRG volumetric analyses after SNItrans. (F) Representative 3D rendering of TDP-43 profiles and corresponding nuclear spot profiles following Imaris-based spot detection feature. Scale bar 5 100 m m. (G) Quantification of DRG nuclear spot volume ipsilateral and contralateral to SNItrans. Kolmogorov-Smirnov tests of cumulative distribution: D 5 0.06, P , 0.001, n 5 30,206 (contra) or 32,544 (ipsi) nuclei from 4 (contra) or 5 (ipsi) mice. (H) Total number of nuclear spots, by size, per DRG. Two-way RM ANOVA; size bin 3 injury interaction: F 2,14 5 8.26, P 5 0.004; n 5 4 to 5 mice; ˇ S'ıd 'ak multiple comparisons tests: ** P , 0.01. ANOVA, analysis of variance; DRG, dorsal root ganglion; FB, FastBlue; RM, repeated measures.\n\n<!-- image -->\n\n## 3.3. Spared nerve injury induces a loss of Trpm8 1 and calcitonin gene-related peptide 1 but not myelinated dorsal root ganglion neurons\n\nLoss restricted to nonpeptidergic nociceptors would not fully account for the degree of total neuron loss that we observed. Therefore, we studied a range of other subpopulations, both small and large in diameter, for their vulnerability to injury-\n\ninduced loss. To investigate potential loss of Trpm8 1 (coldsensitive), calcitonin gene-related peptide 1 (CGRP) (peptidergic), and myelinated subpopulations of DRG neurons following nerve injury, we applied our FB-labelling approach in Trpm8 FlpO ; RC::FLTG (FlpO-dependent tdTom expression), Calca CreERT2 ; Ai32 (Cre-dependent ChR2-YFP expression) and Thy1-CFP mice, respectively ( Figs. 4A-D ). Trpm8-tdTom was expressed", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed2.pdf" - }, - { - "text": "injury (Fig. 
S6A-C, http://links.lww.com/PAIN/C84), indicating that any loss of neurons within specific neuronal subpopulations wasnot biased towards soma size. Collectively, these data show that unrepaired axonal damage to peripheral sensory neurons induces a partial loss of Trpm8 1 and CGRP 1 subpopulations, but no major loss of myelinated afferents.\n\nBased on our findings of preferential loss of nonpeptidergic nociceptors, we re-analyzed a previous population-specific transcriptomic dataset of mouse DRG neurons following nerve injury for potential upregulation of cell death pathways (Fig. S7, http://links.lww.com/PAIN/C84). 3 Wefound that early after injury (3 days post-SNItrans), nonpeptidergic (MrgD CreERT2 -expressing) neurons showed enhanced enrichment of GO terms associated with apoptosis, in contrast to a broad population of nociceptors (labelled with Scn10a CreERT2 ), peptidergic nociceptors (CalcaCreERT2 ), C-LTMRs (Th CreERT2 ), and A b -RA (rapidly adapting) and A d -LTMRs (A d /A b -LTMR, Ntrk2 CreERT2 ;Advillin FlpO ), in which there was less or no enrichment of cell death pathways. By 4 weeks, only C-LTMR and A d /A b -LTMR subtypes show any overrepresentation of cell death pathways (in the populations studied). Both injury-specific and apoptotic signatures in nonpeptidergic neurons were no longer significantly enriched, consistent with a loss of axotomized nonpeptidergic afferents by this late timepoint postinjury. These data suggest that apoptotic pathways are upregulated acutely after injury in a celltype-specific manner.\n\n## 3.4. Mrgprd dorsal root ganglion neurons are sensitive to loss in vitro\n\nEarlier studies postulated that a lack of neurotrophic support underlies neuronal loss, which is supported by the observation that exogenous GDNF treatment at the time of injury, or shortly after, rescues the loss of IB4-binding central terminals posttransection. 
5 We sought to use the DRG neurons from MrgD CreERT2 ;Ai32 mice to test this postulate and establish an in vitro platform capable of probing the molecular basis of loss, with axonal transection during isolation providing a correlate for in vivo nerve injury ( Figs. 5A-E ). Twenty-four hours after plating, YFP was expressed by 16.3 6 1.3% of DRG neurons, which was reduced to 11.8 6 1.7% after 28 days of culture in the presence of exogenous GFs, NGF and GDNF ( Fig. 5F ). However, in the absence of GFs, YFP 1 neurons only accounted for 1.7 6 0.6% of neurons after 28 days, accompanied by an apparent reduction in the overall number of neurons within the culture, despite all conditions being seeded at the same initial density ( Figs. 5C and F ). YFP 1 cell loss was partially rescued by the presence of GDNF, but not NGF alone, in the culture media ( Figs. 5D-F ). These results contrasted with experiments using neurons derived from Calca CreERT2 ;Ai32 mice, in which we observed no change in the proportion of neurons that were Calca-YFP 1 after 28 days in culture, regardless of exogenous GF addition ( Figs. 5G-L ). Collectively, these data support the use of DRG cultures to probe the mechanisms underlying selective loss of sensory neurons following nerve injury and suggest a role for trophic support, particularly by GDNF signaling, in preventing the loss of nonpeptidergic nociceptors.\n\n## 4. Discussion\n\nWe present data herein to support the hypothesis that traumatic nerve injury in rodents leads to a profound loss of small-diameter DRG neurons. Taking advantage of newly", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed2.pdf" - }, - { - "text": "http://dx.doi.org/10.1097/j.pain.0000000000003321\n\nnerve injury in experimental rodent models. 24,50,53,56 Some studies have suggested that neuron loss occurs in certain patient cohorts, 48,66 but this is yet to be definitively demonstrated in humans. 
In rodents, most studies support a preferential loss of small cells that give rise to unmyelinated fibers 53 but some contrasting studies describe the preferential loss of large cells 6 or loss of cells of all sizes. 46 Variation is evident across studies in terms of experimental species, age, type of injury, and quantification methods. 56 Shi et al. 50 used stereological counting methods to identify a 54% loss of DRG neuron number 4 weeks after 'mid-thigh' sciatic nerve transection in C57BL/6 mice. Estimates for the degree of loss following commonly used nerve injury paradigms (eg, spared nerve injury [SNI] and sciatic nerve crush) are not available and because of the neurochemical changes following injury and the loss of subpopulation marker gene expression, 5,44,50 the vulnerability of molecularly defined subpopulations has not been characterized. Moreover, more recent studies have cast doubt on the extent or even presence of DRG neuron death following nerve injury. One study which developed a deep learning approach to assess rat DRG cellular plasticity found no loss of neurons up to 2 weeks post-SNI, 49 while another observed no loss of genetically labelled damaged DRG neurons 2 months after sciatic nerve crush. 44\n\nThe issue of whether neuron loss occurs, and if so, in what subpopulations, is important. It will likely have implications for our understanding of reinnervation and functional recovery in patients. Furthermore, better insight will provide critical context for those investigating the plasticity that occurs following nerve injury and may inform therapeutic targeting of sensory neuron populations.\n\nAn expanding repertoire of transgenic recombinase driver lines now makes it possible to permanently label DRG neuron subpopulations and study their fate in rodent nerve injury paradigms. 
The aim of this study was to use this technology to characterize", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed2.pdf" - }, - { - "text": "neuron loss after nerve injury and to test the hypothesis that loss is not equally distributed across molecular populations.\n\n## 2. Methods\n\n## 2.1. Animals\n\nMice were housed in groups in humidity- and temperature-controlled rooms with free access to food and water, on a 12-hour light-dark cycle, and with environmental enrichment. Animal procedures were performed under a UK Home Office Project Licence and in accordance with the UK Home Office (Scientific Procedures) Act (1986). All studies were approved by the Ethical Review Process Applications Panel of the University of Glasgow or Oxford and conform to the ARRIVE guidelines. Experiments were performed on adult male and female mice aged 7 to 16 weeks at the start of the experiments. All experimental cohorts contained a mix of male and female mice, apart from the cohort of Mrgprd CreERT2 ;Ai32 mice that underwent SNIcrush surgery, which was exclusively female. Details of transgenic lines are provided in Table 1 . Tamoxifen was administered by i.p. injection of 20 mg/mL tamoxifen (Sigma-Aldrich) dissolved in wheat germ oil (doses described in Table 1 ). There were 2 instances where animals were excluded from data analysis: One (cyan fluorescent protein) Thy1-CFP died of unknown causes not related to the procedure and before the experimental endpoint, and one MrgD CreERT2 ;Ai32 exhibited no fluorophore expression and was therefore deemed to have been incorrectly genotyped. Group sizes were based on the extent of neuronal loss 28d following sciatic nerve transection identified by Shi et al. 
50 Given a 5 0.05, power 5 0.8, and an effect size of 4.81, power analysis projects that a group size of 3 mice would be needed.\n\n## Transgenic lines used in the study.\n\nTable 1", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 1. SNItrans induces death of small primary afferent neurons, accompanied by a reduction in volume, not cell density, of the dorsal root ganglion. (A) Approach to differentially labelled intact afferents with tdTomato and damaged afferents with GFP after peripheral nerve injury using the Avil FlpO ;Atf3 CreERT2 ;RC:: FLTGmouseline and schematic of experimental timeline. (B) Representative image of GFP, tdTomato, and NeuN expression in an L4 DRG, 2 weeks after SNItrans. Scale bars 5 100 m m. (C and D) Stereological quantification of the total number of DRG neurons (C) or number of axotomized and intact neurons (D) in the L4 DRG 1, 2, 4, and 8 weeks after SNItrans or contralateral (contra) to injury. (C) One-way ANOVA with Tukey posttests; F 4,10 5 37.98, P , 0.001. (D) Two-way RM ANOVA; Timepoint 3 Color interaction F 4,10 5 39.04, P , 0.001, n 5 3 mice; Tukey posttests (between injured groups): † P , 0.05 vs contra, ‡ P , 0.05 vs 1-week. (E) Volume of DRG-containing cells (ie, excluding white matter tracts) following SNItrans. One-way ANOVA with Tukey posttests; F 4,10 5 21.25, P , 0.001, n 5 3. (F) Neuronal density within the DRG following SNItrans. One-way ANOVA; F 4,10 5 2.77, P 5 0.09, n 5 3. (G) Population distribution of uninjured and injured afferents by cross-sectional area, 1 and 8 weeks post-SNItrans. Kolmogorov-Smirnov tests of cumulative distributions; Uninjured: D 5 0.08, P 5 0.18; Injured: D 5 0.32, P , 0.001; n 5 310 to 427 neurons from 3 mice. * P , 0.05, ** P , 0.01, *** P , 0.001 vs contra. ANOVA, analysis of variance; DRG, dorsal root ganglion; GFP, green fluorescent protein.\n\n<!-- image -->\n\nprotein) neurons 28 days after sham surgery or SNItrans ( Figs. 3A and B ). 
SNItrans, but not sham, resulted in a significant decrease (54.0 6 6.6%) in the total number of MrgD-YFP 1 neurons in L4 DRG ( Fig. 3C ).\n\nYellow fluorescent protein expression in MrgD ChR2-YFP mice is driven by the endogenous Mrgprd promotor, which has been reported to be upregulated or downregulated following axonal damage. 44,58 Such changes in promoter activity could affect the proportion of nonpeptidergic nociceptors identified by YFP expression. Therefore, to verify these findings, we used MrgD CreERT2 ;Ai32 mice and tamoxifen administration before injury, to permanently label Mrgprdexpressing afferents with ChR2-YFP ( Figs. 3D-F ). We then tested whether the proportion of cutaneous tibial afferents that were YFP 1 was altered following nerve injury. Following hindpaw FB injection, ; 15% of contralateral, FB-labelled DRG neurons expressed YFP. This was reduced to 6.0 6 1.2% 28 days after SNIcrush injury and to only 1.7 6 0.9%\n\n28 days after SNItrans ( Fig. 3G ). Uptake by uninjured YFP 1 neurons was equivalent 7 and 35 days after FB injection, demonstrating that this reduction was not because 7 days were insufficient for YFP 1 neurons to fully uptake FB (Fig. S3C, http:// links.lww.com/PAIN/C84). No significant difference in the percentage of FB-labelled YFP 1 DRG neurons between ipsilateral and contralateral DRG was observed at 7 days following SNItrans (Figs. S4A and B, http://links.lww.com/PAIN/C84), demonstrating that loss occurred after this timepoint. Analysis of the crosssectional soma area of FB-labelled, YFP 1 neurons in uninjured DRGrevealed an area of 361 6 138 m m 2 (mean 6 SD) (Fig. S4C, http://links.lww.com/PAIN/C84), which is a distribution profile matching those neurons presumed lost. 
Collectively, these data show that peripheral nerve injury results in a substantial loss of nonpeptidergic, Mrgprd -expressing neurons, with SNItrans (ie, an unrepaired axonal transection) resulting in an almost complete loss of this population.", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed2.pdf" - }, - { - "text": "a\n\n## Whole-brain subcortical volumes\n\n<!-- image -->\n\nb\n\nCA1Medial temporal lobe subregion volumes\n\n<!-- image -->\n\nCA2/CA3Fig. 3 | Subcortical GMV changed throughout gestation. a , Multivariate regression analyses revealed largely negative relationships between gestation week and subcortical GMV regions over pregnancy, including bilateral thalamus, caudate, hippocampus, ventral diencephalon (encompassing hypothalamus, substantia nigra, mammillary body and red nucleus) and left caudate. Lateral ventricles displayed the only positive relationships with gestation week (also depicted in Fig. 1d). The whole-brain subcortical GMV estimates shown here were derived via FreeSurfer and 'aseg' subcortical segmentation. FDRcorrected at q < 0.05. Inset, right ventral diencephalon displayed the strongest negative association with gestation (left; baseline-36 weeks, 19 scans) and did not return to baseline postpartum (right; gestation and postpartum, 26 scans). b , The participant's hippocampus and surrounding cortex were segmented\n\n<!-- image -->\n\ninto seven bilateral subregions. Quadratic (CA1, CA2/CA3) and linear regression analyses (PHC) revealed subfields were negatively associated with gestation week (baseline-36 weeks, 18 scans) and did not return to baseline postpartum (gestation and postpartum, 25 scans). Shaded regions in scatterplots represent a 95% confidence interval. Each boxplot represents IQR for each stage, with a horizontal line representing the median value. The whiskers indicate variability outside (±1.5) of this range. Outside values are >1.5× and <3× IQR beyond either end of the box. FDR-corrected at q < 0.05. 
For a and b , nonsignificant regions were set to zero for interpretability. See Supplementary Fig. 6 for complete labeling of regions in both segmentations. Brain visualizations created with R package ggseg 48 . DC, diencephalon.\n\noutstanding questions. This study and corresponding open-access dataset offer neuroscientists a detailed map of the human brain across gestation, a resource for which a wide range of previously unattainable neurobiological questions can now be explored.\n\nOur findings from this precision imaging study show that pregnancy is characterized by reductions in GMV, cortical thinning and enhanced white matter microstructural integrity that unfold week by week. These changes were also tied to the significant rise in steroid hormone concentrations over pregnancy. Some of these changes persist at 2 years postpartum (for example, global reductions in GMV and CT), while others, including markers of white matter integrity, appear to be transient. Ventricular expansion and contraction parallel these cortical changes. These widespread patterns, and the notable increase in CSF volume across gestation, could reflect increased water retention and subsequent compression of cortical tissue. However, the persistence of these changes at 2 years postpartum and regional variation in GMV, CT and QA, hint at cellular underpinnings, such as alterations in glia\n\nor neuron number, synaptic density and myelination (for review on the latter, see ref. 4). 
Future studies of the relationship between fluid dynamics and volumetric changes will help clarify the factors that drive global neural changes during pregnancy; such insights will have broad implications for maternal health (for example, neurological effects tied to pre-eclampsia or edema).", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed4.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed2.pdf", - "query": "Did the researcher responsible for quantifying the cells in the dorsal root ganglion know which group each mouse belonged to?", - "target_page": 4, - "target_passage": "During all image quantification, the experimenter was blind to the experimental groups.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "- [64] Welin D, Novikova LN, Wiberg M, Kellerth JO, Novikov LN. Survival and regeneration of cutaneous and muscular afferent neurons after peripheral nerve injury in adult rats. Exp Brain Res 2008;186:315-23.\n - [65] West CA, Davies KA, Hart AM, Wiberg M, Williams SR, Terenghi G. Volumetric magnetic resonance imaging of dorsal root ganglia for the objective quantitative assessment of neuron death after peripheral nerve injury. Exp Neurol 2007;203:22-33.\n - [66] West CA, Ljungberg C, Wiberg M, Hart A. Sensory neuron death after upper limb nerve injury and protective effect of repair: clinical evaluation using volumetric magnetic resonance imaging of dorsal root ganglia. Neurosurgery 2013;73:632-40.\n - [67] West SJ, Bonboire D, Bennett DL. StereoMate: 3D stereological automated analysis of biological structures. bioRxiv 2020:648337.\n - [68] Wiberg R, Novikova LN, Kingham PJ. Evaluation of apoptotic pathways in dorsal root ganglion neurons following peripheral nerve injury. Neuroreport 2018;29:779-85.\n - [69] Yu X, Liu H, Hamel KA, Morvan MG, Yu S, Leff J, Guan Z, Braz JM, Basbaum AI. 
Dorsal root ganglion macrophages contribute to both the initiation and persistence of neuropathic pain. Nat Commun 2020;11:264.\n - [70] Zheng J, Lu Y, Perl ER. Inhibitory neurones of the spinal substantia gelatinosa mediate interaction of signals from primary afferents. J Physiol 2010;588:2065-75.", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed2.pdf" - }, - { - "text": "cell death and apoptosis with more than 10 genes were examined. Filtered count data of expressed and nondifferentially expressed genes were used as a background.\n\n## 2.8. Dorsal root ganglion culture\n\nDorsal root ganglia were dissected from MrgD CreERT2 ;Ai32 and Calca CreERT2 ;Ai32 mice, 1 week after dosing with tamoxifen, and enzymatically digested at 37°C for 80 minutes in dispase type II (4.7 mg/mL) plus collagenase type II (4 mg/mL) (Worthington Biochemical), as described previously. 63 Mechanically dissociated cells were plated onto laminin/poly-D-lysine (R&D Systems, Minneapolis, MN) treated coverslips in complete Neurobasal Plus medium (Neurobasal Plus media supplemented with 2% (vol/vol) B27 Plus, 1% N2, 1% Glutamax, and 1% antibiotic-antimycotic [ThermoFisher Scientific, Waltham, MA]). Mouse nerve growth factor (NGF; 50 ng/mL, PeproTech, Cranbury, NJ) and 10 ng/mL glial-derived neurotrophic factor (GDNF, PeproTech) were added to the media under some conditions. Cytosine β-D-arabinofuranoside (4 µM) was added to the media for 24 hours the day after plating to reduce the proliferation of nonneuronal cells. Media was refreshed 3 times per week thereafter. Cultures were fixed for 10 minutes at room temperature with 4% paraformaldehyde and subsequently processed by immunocytochemistry (described earlier).\n\n## 2.9. Statistical analysis\n\nData are expressed as mean ± SEM unless otherwise specified, and P values of less than 0.05 were considered significant. Power calculations were performed using G*Power 3.1.9.7. 
15 A quantitative Venn diagram was created using BioVenn. 25 All other statistical analyses were performed in Prism 10 (GraphPad Software, Inc, Boston, MA) or R using paired t tests or 1- or 2-way RM ANOVAs (repeated measures analysis of variance), where appropriate. Normality was assessed by the Shapiro-Wilk test. If the main analysis of variance effect was significant, Šídák or Tukey multiple comparisons tests were performed. To compare population distributions of soma cross-sectional area or volume, Kolmogorov-Smirnov tests were performed.\n\n## 3. Results\n\n## 3.1. Peripheral nerve injury induces a loss of small neurons from the dorsal root ganglion\n\nTo assess the gross loss of neurons from DRG following nerve injury, we generated the Avil FlpO ;Atf3 CreERT2 ;RC::FLTG mouse line in which naïve and axotomized sensory neurons were differentially labelled. In this mouse line, all neurons express tdTomato (Flp-dependent) in the naïve state and switch to expressing green fluorescent protein (GFP) upon axonal damage and concurrent tamoxifen treatment (Flp- and Cre-dependent) ( Figs. 1A and B ). Following pilot experiments to optimize tamoxifen dosing regimen, this approach was both highly efficient and specific (with the caveat that it was necessary to wait for several days after nerve injury for Cre-induced GFP expression): 14 days after SNItrans surgery, GFP was expressed by 99.1 ± 0.6% of Atf3-expressing ipsilateral L4 DRG neurons, while we observed GFP in only 4.6 ± 0.7% of contralateral DRG neurons (Figs. S2A-D, http://links.lww.com/PAIN/C84). We then used a stereological approach to quantify the total number of neurons in L4 DRG ipsilateral to injury 1, 2, 4, and 8 weeks after SNItrans, as well as contralateral to injury. 
One week after SNItrans, we", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed2.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n## Peripheral nerve injury results in a biased loss of sensory neuron subpopulations\n\nAndrew H. Cooper a , Allison M. Barry b , Paschalina Chrysostomidou a , Romane Lolignier a , Jinyi Wang a , Magdalena Redondo Canales a , Heather F. Titterton a , David L. Bennett b , Greg A. Weir a, *\n\n## Abstract\n\nThere is a rich literature describing the loss of dorsal root ganglion (DRG) neurons following peripheral axotomy, but the vulnerability of discrete subpopulations has not yet been characterised. Furthermore, the extent or even presence of neuron loss following injury has recently been challenged. In this study, we have used a range of transgenic recombinase driver mouse lines to genetically label molecularly defined subpopulations of DRG neurons and track their survival following traumatic nerve injury. We find that spared nerve injury leads to a marked loss of cells containing DRG volume and a concomitant loss of small-diameter DRG neurons. Neuron loss occurs unequally across subpopulations and is particularly prevalent in nonpeptidergic nociceptors, marked by expression of Mrgprd. We show that this subpopulation is almost entirely lost following spared nerve injury and severely depleted (by roughly 50%) following sciatic nerve crush. Finally, we used an in vitro model of DRG neuron survival to demonstrate that nonpeptidergic nociceptor loss is likely dependent on the absence of neurotrophic support. Together, these results profile the extent to which DRG neuron subpopulations can survive axotomy, with implications for our understanding of nerve injury-induced plasticity and pain.\n\nKeywords: Sensory neuron, Neuron death, Transgenic reporter line, Neuropathic pain, Nerve injury\n\n## 1. Introduction\n\nDorsal root ganglion (DRG) neurons represent a molecularly and functionally heterogeneous population. 
Under normal conditions, this diversity contributes to the ability of the somatosensory nervous system to detect a myriad of sensory stimuli that result in the perceptions of touch, temperature, itch, and pain. Following nerve injury, physiological changes in DRG neurons lead to hyperexcitability, 57 which is a key pathological driver of neuropathic pain. 20,63 Concomitant molecular changes in discrete subpopulations also occur, and these have recently been comprehensively described in single-cell 37,44 and subpopulation-specific sequencing studies. 3 These studies describe a transient and generalized reduction in the expression of subpopulation-specific genes following nerve injury. 3,37,44\n\nIn addition to molecular changes, there is a rich literature describing the frank loss of DRG neurons following traumatic\n\nSupplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (www.painjournalonline.com).\n\nCopyright © 2024 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the International Association for the Study of Pain. This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\n\nhttp://dx.doi.org/10.1097/j.pain.0000000000003321", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 2. Spared nerve crush and transection lead to a loss of small DRG neurons. (A) Approach to restrict analysis to damaged afferents: a subcutaneous injection of the tracer FB into both hindpaws labelled tibial afferents, before unilateral SNItrans or SNIcrush surgery. (B) Representative image of FB labelling and NeuN immunostaining in the L4 DRG. 
The image is a projection of optical sections at 3 µm intervals through the entirety of a 30 µm-thick tissue section. Scale bar = 100 µm. (C and D) Quantification of the cross-sectional area of FastBlue labelled DRG neurons ipsilateral and contralateral to SNItrans (C) or SNIcrush injury (D) reveals a loss of small afferents and subsequent shift in population distribution. Kolmogorov-Smirnov tests of cumulative distributions; SNItrans: D = 0.25, P < 0.001; n = 183 or 191 neurons from 3 mice; SNIcrush: D = 0.22, P < 0.001, n = 319 or 325 neurons from 3 mice. (E) Experimental approach for whole DRG volumetric analyses after SNItrans. (F) Representative 3D rendering of TDP-43 profiles and corresponding nuclear spot profiles following Imaris-based spot detection feature. Scale bar = 100 µm. (G) Quantification of DRG nuclear spot volume ipsilateral and contralateral to SNItrans. Kolmogorov-Smirnov tests of cumulative distribution: D = 0.06, P < 0.001, n = 30,206 (contra) or 32,544 (ipsi) nuclei from 4 (contra) or 5 (ipsi) mice. (H) Total number of nuclear spots, by size, per DRG. Two-way RM ANOVA; size bin × injury interaction: F2,14 = 8.26, P = 0.004; n = 4 to 5 mice; Šídák multiple comparisons tests: ** P < 0.01. ANOVA, analysis of variance; DRG, dorsal root ganglion; FB, FastBlue; RM, repeated measures.\n\n<!-- image -->\n\n## 3.3. Spared nerve injury induces a loss of Trpm8+ and calcitonin gene-related peptide+ but not myelinated dorsal root ganglion neurons\n\nLoss restricted to nonpeptidergic nociceptors would not fully account for the degree of total neuron loss that we observed. Therefore, we studied a range of other subpopulations, both small and large in diameter, for their vulnerability to injury-induced loss. 
To investigate potential loss of Trpm8+ (cold-sensitive), calcitonin gene-related peptide+ (CGRP) (peptidergic), and myelinated subpopulations of DRG neurons following nerve injury, we applied our FB-labelling approach in Trpm8 FlpO ;RC::FLTG (FlpO-dependent tdTom expression), Calca CreERT2 ;Ai32 (Cre-dependent ChR2-YFP expression) and Thy1-CFP mice, respectively ( Figs. 4A-D ). Trpm8-tdTom was expressed", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed2.pdf" - }, - { - "text": "http://dx.doi.org/10.1097/j.pain.0000000000003321\n\nnerve injury in experimental rodent models. 24,50,53,56 Some studies have suggested that neuron loss occurs in certain patient cohorts, 48,66 but this is yet to be definitively demonstrated in humans. In rodents, most studies support a preferential loss of small cells that give rise to unmyelinated fibers 53 but some contrasting studies describe the preferential loss of large cells 6 or loss of cells of all sizes. 46 Variation is evident across studies in terms of experimental species, age, type of injury, and quantification methods. 56 Shi et al. 50 used stereological counting methods to identify a 54% loss of DRG neuron number 4 weeks after 'mid-thigh' sciatic nerve transection in C57BL/6 mice. Estimates for the degree of loss following commonly used nerve injury paradigms (eg, spared nerve injury [SNI] and sciatic nerve crush) are not available and because of the neurochemical changes following injury and the loss of subpopulation marker gene expression, 5,44,50 the vulnerability of molecularly defined subpopulations has not been characterized. Moreover, more recent studies have cast doubt on the extent or even presence of DRG neuron death following nerve injury. One study which developed a deep learning approach to assess rat DRG cellular plasticity found no loss of neurons up to 2 weeks post-SNI, 49 while another observed no loss of genetically labelled damaged DRG neurons 2 months after sciatic nerve crush. 
44\n\nThe issue of whether neuron loss occurs, and if so, in what subpopulations, is important. It will likely have implications for our understanding of reinnervation and functional recovery in patients. Furthermore, better insight will provide critical context for those investigating the plasticity that occurs following nerve injury and may inform therapeutic targeting of sensory neuron populations.\n\nAn expanding repertoire of transgenic recombinase driver lines now makes it possible to permanently label DRG neuron subpopulations and study their fate in rodent nerve injury paradigms. The aim of this study was to use this technology to characterize", - "page_start": 0, - "page_end": 0, - "source_file": "pubmed2.pdf" - }, - { - "text": "injury (Fig. S6A-C, http://links.lww.com/PAIN/C84), indicating that any loss of neurons within specific neuronal subpopulations was not biased towards soma size. Collectively, these data show that unrepaired axonal damage to peripheral sensory neurons induces a partial loss of Trpm8+ and CGRP+ subpopulations, but no major loss of myelinated afferents.\n\nBased on our findings of preferential loss of nonpeptidergic nociceptors, we re-analyzed a previous population-specific transcriptomic dataset of mouse DRG neurons following nerve injury for potential upregulation of cell death pathways (Fig. S7, http://links.lww.com/PAIN/C84). 3 We found that early after injury (3 days post-SNItrans), nonpeptidergic (MrgD CreERT2 -expressing) neurons showed enhanced enrichment of GO terms associated with apoptosis, in contrast to a broad population of nociceptors (labelled with Scn10a CreERT2 ), peptidergic nociceptors (Calca CreERT2 ), C-LTMRs (Th CreERT2 ), and Aβ-RA (rapidly adapting) and Aδ-LTMRs (Aδ/Aβ-LTMR, Ntrk2 CreERT2 ;Advillin FlpO ), in which there was less or no enrichment of cell death pathways. By 4 weeks, only C-LTMR and Aδ/Aβ-LTMR subtypes show any overrepresentation of cell death pathways (in the populations studied). 
Both injury-specific and apoptotic signatures in nonpeptidergic neurons were no longer significantly enriched, consistent with a loss of axotomized nonpeptidergic afferents by this late timepoint postinjury. These data suggest that apoptotic pathways are upregulated acutely after injury in a cell-type-specific manner.\n\n## 3.4. Mrgprd dorsal root ganglion neurons are sensitive to loss in vitro\n\nEarlier studies postulated that a lack of neurotrophic support underlies neuronal loss, which is supported by the observation that exogenous GDNF treatment at the time of injury, or shortly after, rescues the loss of IB4-binding central terminals posttransection. 5 We sought to use the DRG neurons from MrgD CreERT2 ;Ai32 mice to test this postulate and establish an in vitro platform capable of probing the molecular basis of loss, with axonal transection during isolation providing a correlate for in vivo nerve injury ( Figs. 5A-E ). Twenty-four hours after plating, YFP was expressed by 16.3 ± 1.3% of DRG neurons, which was reduced to 11.8 ± 1.7% after 28 days of culture in the presence of exogenous GFs, NGF and GDNF ( Fig. 5F ). However, in the absence of GFs, YFP+ neurons only accounted for 1.7 ± 0.6% of neurons after 28 days, accompanied by an apparent reduction in the overall number of neurons within the culture, despite all conditions being seeded at the same initial density ( Figs. 5C and F ). YFP+ cell loss was partially rescued by the presence of GDNF, but not NGF alone, in the culture media ( Figs. 5D-F ). These results contrasted with experiments using neurons derived from Calca CreERT2 ;Ai32 mice, in which we observed no change in the proportion of neurons that were Calca-YFP+ after 28 days in culture, regardless of exogenous GF addition ( Figs. 5G-L ). 
Collectively, these data support the use of DRG cultures to probe the mechanisms underlying selective loss of sensory neurons following nerve injury and suggest a role for trophic support, particularly by GDNF signaling, in preventing the loss of nonpeptidergic nociceptors.\n\n## 4. Discussion\n\nWe present data herein to support the hypothesis that traumatic nerve injury in rodents leads to a profound loss of small-diameter DRG neurons. Taking advantage of newly", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed2.pdf" - }, - { - "text": "- [47] Schmitz C, Hof PR. Design-based stereology in neuroscience. Neuroscience 2005;130:813-31.\n - [48] Schulte A, Degenbeck J, Aue A, Schindeh utte M, Schlott F, Schneider M, Monoranu CM, Bohnert M, Pham M, Antoniadis G, Blum R, Rittner HL. Humandorsalroot ganglia after plexus injury: either preservation or loss of the multicellular unit. bioRxiv 2023.02.06.526934.\n - [49] Schulte A, Lohner H, Degenbeck J, Segebarth D, Rittner HL, Blum R, Aue A. Unbiased analysis of the dorsal root ganglion after peripheral nerve injury: no neuronal loss, no gliosis, but satellite glial cell plasticity. PAIN 2023;164:728-40.\n - [50] Shi TJS, Tandrup T, Bergman E, Xu ZQD, Ulfhake B, H okfelt T. Effect of peripheral nerve injury on dorsal root ganglion neurons in the C57 BL/6J\n - mouse: marked changes both in cell numbers and neuropeptide expression. Neuroscience 2001;105:249-63.\n - [51] Song H, Yao E, Lin C, Gacayan R, Chen MH, Chuang PT. Functional characterization of pulmonary neuroendocrine cells in lung development, injury, and tumorigenesis. Proc Natl Acad Sci 2012;109:17531-6.\n - [52] Takasu K, Sakai A, Hanawa H, Shimada T, Suzuki H. Overexpression of GDNF in the uninjured DRG exerts analgesic effects on neuropathic pain following segmental spinal nerve ligation in mice. J Pain 2011;12: 1130-1139.\n - [53] Tandrup T, Woolf CJ, Coggeshall RE. 
Delayed loss of small dorsal root ganglion cells after transection of the rat sciatic nerve. J Comp Neurol 2000;422:172-80.\n - [54] Terenghi G, Hart A, Wiberg M. The nerve injury and the dying neurons: diagnosis and prevention. J Hand Surg Eur Vol 2011;36:730-4.\n - [55] Usoskin D, Furlan A, Islam S, Abdo H, Lonnerberg P, Lou D, HjerlingLeffler J, Haeggstrom J, Kharchenko O, Kharchenko PV, Linnarsson S, Ernfors P. Unbiased classification of sensory neuron types by large-scale single-cell RNA sequencing. Nat Neurosci 2015;18:145-53.\n - [56] Vestergaard S, Tandrup T, Jakobsen J. Effect of permanent axotomy on number and volume of dorsal root ganglion cell bodies. J Comp Neurol 1997;388:307-12.\n - [57] Wall PD, Gutnick M. Properties of afferent nerve impulses originating from a neuroma. Nature 1974;248:740-43.\n - [58] Wang C, Gu L, Ruan Y, Geng X, Xu M, Yang N, Yu L, Jiang Y, Zhu C, Yang Y, Zhou Y, Guan X, Luo W, Liu Q, Dong X, Yu G, Lan L, Tang Z. Facilitation of MrgprD by TRP-A1 promotes neuropathic pain. FASEB J 2019;33: 1360-73.\n - [59] Wang H, Zylka MJ. Mrgprd-expressing polymodal nociceptive neurons innervate most known classes of substantia gelatinosa neurons. J Neurosci 2009;29:13202-9.\n - [60] Wang R, Guo W, Ossipov MH, Vanderah TW, Porreca F, Lai J. Glial cell line-derived neurotrophic factor normalizes neurochemical changes in injured dorsal root ganglion neurons and prevents the expression of experimental neuropathic pain. Neuroscience 2003; 121:815-24.\n - [61] Wang X, Archibald ML, Stevens K, Baldridge WH, Chauhan BC. Cyan fluorescent protein (CFP) expressing cells in the retina of Thy1-CFP transgenic mice before and after optic nerve injury. Neurosci Lett 2010; 468:110-4.\n - [62] Warwick C, Cassidy C, Hachisuka J, Wright MC, Baumbauer KM, Adelman PC, Lee KH, Smith KM, Sheahan TD, Ross SE, Koerber HR. MrgprdCre lineage neurons mediate optogenetic allodynia through an emergent polysynaptic circuit. 
PAIN 2021;162:2120-31.\n - [63] Weir GA, Middleton SJ, Clark AJ, Daniel T, Khovanov N, McMahon SB, Bennett DL. Using an engineered glutamate-gated chloride channel to silence sensory neurons and treat neuropathic pain at the source. Brain 2017;140:2570-85.", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed2.pdf" - }, - { - "text": "platform to help delineate the precise cell death pathways and signaling cascades engaged (which could then be experimentally manipulated). Such studies should consider that plasticity may evolve over time. The loss of IB4+ central terminals is transient following crush and has even been observed to reverse at longer timepoints following SNItrans. 36 These observations, in conjunction with ours of loss of neurons, raise the intriguing question of the source of such central reinnervation.\n\n## 4.4. Study limitations\n\nOur efforts focused on traumatic nerve injury paradigms owing to previous contrasting results using these robust and reproducible experimental models. We did not extend our studies to systemic neuropathy models, such as chemotherapy or diabetic neuropathy. A recent postmortem analysis reported a neuronal loss in the DRG from patients with painful diabetic peripheral neuropathy. 19 Transcriptional responses vary substantially across different nerve insults, 44 so it would be of interest to test whether neuronal loss and the subpopulation vulnerability reported in this study are common features across different types of insults.\n\nUsing multiple approaches, we assess the naïve mouse L4 DRG to contain approximately 8000 neurons, consistent with a previous estimate, 67 and observed a frank loss of small-diameter neurons following injury. However, the extent of loss observed using our semiautomated approach was less than that observed using manual techniques. 
67 Two major limitations in this study may explain this discrepancy: First, owing to technical issues, the cleared DRG dataset is unpaired ipsilateral-contralateral which adds larger variability. Second, the analysis method is prone to undercounting deep nuclei. The signal-to-noise is better for superficial nuclei and smaller tissue volumes. Given the reduction in DRG volume after SNItrans, nuclei in larger contralateral DRG may be undercounted.\n\nWhile we made efforts to profile the loss of several molecularly discrete sensory neuron populations, we acknowledge that not all subtypes were profiled. Furthermore, recent single-cell RNA sequencing has given us a more granular appreciation of the heterogeneity of sensory neurons. 42 Future studies could leverage our experimental approach and new transgenic lines to characterize the loss of neurons in more detail. Such experiments may be pertinent before embarking on molecular or functional profiling of populations post-nerve injury.\n\n## 4.5. Conclusions\n\nIn sum, we have provided data from multiple complementary experimental approaches to support the hypothesis that DRG neurons are lost following nerve injury in mice. We describe a substantial loss, which is biased towards specific subpopulations and particularly present in small-diameter nonpeptidergic nociceptive neurons.\n\n## Conflict of interest statement\n\nD.L.B. has acted as a consultant in the last 2 years for AditumBio, Biogen, Biointervene, Combigene, LatigoBio, GSK, Ionis, Lexicon therapeutics, Neuvati, Olipass, Orion, Replay, SC Health Managers, Theranexus, Third Rock Ventures, and Vida Ventures on behalf of Oxford University Innovation. D.L.B. has received research funding from Lilly and Astra Zeneca, and G.A.W. has received research funding from Ono Pharmaceutical. D.L.B. has received", - "page_start": 11, - "page_end": 11, - "source_file": "pubmed2.pdf" - }, - { - "text": "Figure 1. 
SNItrans induces death of small primary afferent neurons, accompanied by a reduction in volume, not cell density, of the dorsal root ganglion. (A) Approach to differentially label intact afferents with tdTomato and damaged afferents with GFP after peripheral nerve injury using the Avil FlpO ;Atf3 CreERT2 ;RC::FLTG mouse line and schematic of experimental timeline. (B) Representative image of GFP, tdTomato, and NeuN expression in an L4 DRG, 2 weeks after SNItrans. Scale bars = 100 µm. (C and D) Stereological quantification of the total number of DRG neurons (C) or number of axotomized and intact neurons (D) in the L4 DRG 1, 2, 4, and 8 weeks after SNItrans or contralateral (contra) to injury. (C) One-way ANOVA with Tukey posttests; F4,10 = 37.98, P < 0.001. (D) Two-way RM ANOVA; Timepoint × Color interaction F4,10 = 39.04, P < 0.001, n = 3 mice; Tukey posttests (between injured groups): † P < 0.05 vs contra, ‡ P < 0.05 vs 1-week. (E) Volume of the cell-containing DRG (ie, excluding white matter tracts) following SNItrans. One-way ANOVA with Tukey posttests; F4,10 = 21.25, P < 0.001, n = 3. (F) Neuronal density within the DRG following SNItrans. One-way ANOVA; F4,10 = 2.77, P = 0.09, n = 3. (G) Population distribution of uninjured and injured afferents by cross-sectional area, 1 and 8 weeks post-SNItrans. Kolmogorov-Smirnov tests of cumulative distributions; Uninjured: D = 0.08, P = 0.18; Injured: D = 0.32, P < 0.001; n = 310 to 427 neurons from 3 mice. * P < 0.05, ** P < 0.01, *** P < 0.001 vs contra. ANOVA, analysis of variance; DRG, dorsal root ganglion; GFP, green fluorescent protein.\n\n<!-- image -->\n\nprotein) neurons 28 days after sham surgery or SNItrans ( Figs. 3A and B ). SNItrans, but not sham, resulted in a significant decrease (54.0 ± 6.6%) in the total number of MrgD-YFP+ neurons in L4 DRG ( Fig. 
3C ).\n\nYellow fluorescent protein expression in MrgD ChR2-YFP mice is driven by the endogenous Mrgprd promoter, which has been reported to be upregulated or downregulated following axonal damage. 44,58 Such changes in promoter activity could affect the proportion of nonpeptidergic nociceptors identified by YFP expression. Therefore, to verify these findings, we used MrgD CreERT2 ;Ai32 mice and tamoxifen administration before injury, to permanently label Mrgprd-expressing afferents with ChR2-YFP ( Figs. 3D-F ). We then tested whether the proportion of cutaneous tibial afferents that were YFP+ was altered following nerve injury. Following hindpaw FB injection, ~15% of contralateral, FB-labelled DRG neurons expressed YFP. This was reduced to 6.0 ± 1.2% 28 days after SNIcrush injury and to only 1.7 ± 0.9% 28 days after SNItrans ( Fig. 3G ). Uptake by uninjured YFP+ neurons was equivalent 7 and 35 days after FB injection, demonstrating that this reduction was not because 7 days were insufficient for YFP+ neurons to fully uptake FB (Fig. S3C, http://links.lww.com/PAIN/C84). No significant difference in the percentage of FB-labelled YFP+ DRG neurons between ipsilateral and contralateral DRG was observed at 7 days following SNItrans (Figs. S4A and B, http://links.lww.com/PAIN/C84), demonstrating that loss occurred after this timepoint. Analysis of the cross-sectional soma area of FB-labelled, YFP+ neurons in uninjured DRG revealed an area of 361 ± 138 µm² (mean ± SD) (Fig. S4C, http://links.lww.com/PAIN/C84), which is a distribution profile matching those neurons presumed lost. 
Collectively, these data show that peripheral nerve injury results in a substantial loss of nonpeptidergic, Mrgprd -expressing neurons, with SNItrans (ie, an unrepaired axonal transection) resulting in an almost complete loss of this population.", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed2.pdf" - }, - { - "text": "neuron loss after nerve injury and to test the hypothesis that loss is not equally distributed across molecular populations.\n\n## 2. Methods\n\n## 2.1. Animals\n\nMice were housed in groups in humidity- and temperature-controlled rooms with free access to food and water, on a 12-hour light-dark cycle, and with environmental enrichment. Animal procedures were performed under a UK Home Office Project Licence and in accordance with the UK Home Office (Scientific Procedures) Act (1986). All studies were approved by the Ethical Review Process Applications Panel of the University of Glasgow or Oxford and conform to the ARRIVE guidelines. Experiments were performed on adult male and female mice aged 7 to 16 weeks at the start of the experiments. All experimental cohorts contained a mix of male and female mice, apart from the cohort of Mrgprd CreERT2 ;Ai32 mice that underwent SNIcrush surgery, which was exclusively female. Details of transgenic lines are provided in Table 1 . Tamoxifen was administered by i.p. injection of 20 mg/mL tamoxifen (Sigma-Aldrich) dissolved in wheat germ oil (doses described in Table 1 ). There were 2 instances where animals were excluded from data analysis: One (cyan fluorescent protein) Thy1-CFP died of unknown causes not related to the procedure and before the experimental endpoint, and one MrgD CreERT2 ;Ai32 exhibited no fluorophore expression and was therefore deemed to have been incorrectly genotyped. Group sizes were based on the extent of neuronal loss 28d following sciatic nerve transection identified by Shi et al. 
50 Given α = 0.05, power = 0.8, and an effect size of 4.81, power analysis projects that a group size of 3 mice would be needed.\n\n## Transgenic lines used in the study.\n\nTable 1", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed2.pdf" - } - ] - }, - { - "references": { - "source_file": "basic-english-language-skills.PDF", - "query": "Does the Oxbridge Academy have a guide on how to apply to college?", - "target_page": 21, - "target_passage": "To make the college registration process easier for you, we've compiled a comprehensive guide on how to register at Oxbridge Academy (www.oxbridgeacademy.co.za/enrol-now/).", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "<!-- image -->\n\n## TIPS FOR FILLING IN YOUR COLLEGE REGISTRATION FORM\n\nApplying for college (www.oxbridgeacademy.co.za/enrol-now/) can be a daunting experience. Not only do you need to choose a course, but you also need to make sure that you:\n\n - · meet the entry requirements\n - · meet the deadlines\n - · fill in the forms correctly\n - · send the forms to the right address\n - · include all the necessary attachments\n\nTo make the college registration process easier for you, we've compiled a comprehensive guide on how to register at Oxbridge Academy (www.oxbridgeacademy.co.za/enrol-now/). The guide also includes general tips that will be relevant to the application and registration processes at other colleges.\n\n## There are 4 steps you need to follow when you want to register as a student at Oxbridge Academy:\n\n - 1. Select Your Course\n - 2. Fill in Your Student Details\n - 3. Select Your Delivery Option\n - 4. 
Pay Your Registration Fee and Send in Your Form\n\n<!-- image -->", - "page_start": 20, - "page_end": 20, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## Did you enjoy reading this book?\n\nJoin our online social community and share your opinion:\n\nwww.facebook.com/oxbridgeacademysa twitter.com/oxbridgeEdu www.linkedin.com/company/oxbridge-academy\n\nOxbridge Academy is an established distance learning college offering skills courses, national qualifications, and internationally recognised courses to students in South Africa and abroad.\n\nWith our head office in Stellenbosch in the Western Cape, we cater to our students' needs by recruiting industry-expert tutors to provide academic assistance via telephone and e-mail, as well as by designing our study material in such a way that it is clear, simple, and easy for our students to understand.\n\nWith us, studying from home is easy, affordable, and convenient.\n\n## CONTACT NUMBERS:\n\nTel: 021 1100 200 Tel:+2721 883 2454 (international) Fax: 086 111 2121\n\nFax: +2721 883 2378 (international)\n\nWhatsapp: 0605671585 Email: info@oxbridgeacademy.co.za\n\nPostal Address:\n\nPO Box 12723, Die Boord, Stellenbosch, 7613\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nWe are registered with the Department of Higher Education and Training as a Private College in terms of Section 31(6)(a) of the Continuing Education and Training Act, 2006 (Act No. 16 of 2006). Registration No. 
2009/FE07/070.", - "page_start": 58, - "page_end": 58, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "3\n\n4\n\n<!-- image -->\n\nSend your registration form to the registrations office at Oxbridge Academy via one of the following channels:\n\nFax:\n\n086 262 5550\n\nPost: PO Box 12723, Die Boord, 7613 E-mail: registrar@oxbridgeacademy.co.za\n\n6", - "page_start": 26, - "page_end": 26, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## STEP 1 - SELECT YOUR COURSE\n\nOxbridge Academy Short Course: Marketing Management\n\nADV101\n\nBefore you start filling in the registration form, you need to choose your course. Once you've identified the course that you would like to study, remember to check that you meet the entry requirements.\n\nYou can find the course name and course code for your chosen course on the relevant detailed course information page on our website. Have a look at the example in the screenshot below (the course name and course code are circled in red):\n\n<!-- image -->\n\nPlease make sure to check the accreditation status of your chosen course. Some of our courses are non-credit bearing skills development courses, which are neither accredited by external bodies nor registered on the NQF. Please go to our website: oxbridgeacademy.co.za for more information about our skills development courses.", - "page_start": 21, - "page_end": 21, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## STEP 4 - PAY YOUR REGISTRATION FEE AND SEND IN YOUR FORM\n\nDifferent courses have different registration fees. Please check the course fees list (www.oxbridgeacademy.co.za/Documents/ Price-list-2015.pdf) to find out how much you need to pay to register for your chosen course, and pay this amount using the banking details provided at the bottom of the registration form. 
Remember to attach your proof of payment.\n\nIf you are under the age of 18, your parent or guardian will need to sign this section of the form to state that they are aware of your registration with Oxbridge Academy, and that they do not have any objections. If you are unemployed, you will need a guarantor to sign this section of the form. Your parent or guarantor will be held responsible if you miss any of your payments in relation to your course fees.\n\n<!-- image -->", - "page_start": 25, - "page_end": 25, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "<!-- image -->\n\n## CHAPTER 7:\n\n## HOW TO ASK FOR HELP FROM YOUR TUTOR\n\n<!-- image -->\n\nAs a student, you are going to experience times when you need help with your studies. You might be unsure about an assignment question, you might be confused by a particular concept, or you might be stressed about the upcoming exams.\n\nAnd if you are studying via distance learning (www.oxbridgeacademy.co. za/distance-learning/), where you don't have any face-to-face interaction with lecturers, you will need to rely on your tutors for the necessary academic support.", - "page_start": 32, - "page_end": 32, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## IN THIS E-BOOK, WE'LL BE HELPING YOU TO:\n\n - · Develop your basic English language skills.\n - · Improve your English grammar.\n\nApply your language and communication skills in a business contexT. ( www.oxbridgeacademy.co.za/find-a- course/business-administrationcourses/)\n\n'Grammar is a litmus test. If job hopefuls can't distinguish between 'to' and too', their applications go into the bin'\n\nKyle Wiens, CEO of iFixit\n\n<!-- image -->\n\n'Grammar often seems to be a low priority in education. 
Are school undervaluing grammar, given that employers may rule out applications with sloppy writing?'\n\nThe New York Times", - "page_start": 5, - "page_end": 5, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "- (7) In this paragraph-\n - (a) 'boarding school' means a school or college, which-\n - (i) provides accommodation for its pupils or, as the case may be, students on its own premises, or\n - (ii) arranges accommodation for its pupils or students to be provided elsewhere (other than in connection with a residential trip away from the school);\n - (b) 'school' means-", - "page_start": 79, - "page_end": 79, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "## HERE ARE 10 TIPS FOR HOW YOU CAN ACHIEVE HIGHER MARKS FOR YOUR WRITTEN ASSIGNMENTS:\n\n## 1. Read (and follow) the instructions carefully.\n\nIf you are an Oxbridge Academy student, the general assignment guidelines will be provided in your 'Success' Study Guide. Specific instructions will also be included at the beginning of each of your assignments.\n\n## 2. Read the questions carefully.\n\nMake sure you understand what is being asked of you, so that you focus on answering the right questions, instead of providing irrelevant information.\n\n## 3. Remember that presentation is important.\n\nNeatness, spelling, and the structure of your assignment will all count toward the mark that you receive for your assignment.\n\n## 4. Use your course material and other external sources to find answers to the assignment questions.\n\nBut make sure to use your own words - don't just copy. You need to show the person marking your assignment that you have developed a sound understanding of the subject.\n\n## 5. When you use external resources, remember to reference them properly, and to include them in a bibliography.\n\nIf you don't, you may be guilty of plagiarism (www.oxforddictionaries. com/definition/english/plagiarism), which is a serious offence.\n\n - 6. 
Always hand in your own work, and make sure that you use your own words when you formulate your answers.\n\n## 7. When it comes to essay questions:\n\n - · Plan/outline your answer before doing the final draft.\n - · Remember that essays have titles, introductions, bodies, and conclusions.\n - · Use headings and paragraphs to structure your answer.", - "page_start": 37, - "page_end": 37, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "<!-- image -->\n\n## CHAPTER 8:\n\n## TIPS FOR COMPLETING YOUR WRITTEN ASSIGNMENTS\n\n<!-- image -->\n\nDepending on which course you study, you will either be assessed by means of written assignments, or through a combination of written assignments and exams. Assignments not only help to deepen your understanding of the work, but they often also count toward your final mark.\n\nIt is therefore important that you put effort into your assignments, and that you complete them to the best of your ability.\n\nWe realise that, like many other students, you might be unsure of how to go about completing your assignments, or that you might be afraid of failure.\n\nIf you are an Oxbridge Academy student, we'd like you to know that we are here to help you every step of the way, and that we will give you the opportunity to resubmit your assignments if you don't achieve a pass mark the first time around.", - "page_start": 36, - "page_end": 36, - "source_file": "basic-english-language-skills.PDF" - } - ] - }, - { - "references": { - "source_file": "basic-english-language-skills.PDF", - "query": "I have trouble writing effective summaries in English, do you have any tips?", - "target_page": 29, - "target_passage": "To make a good summary, you need to: • Keep it brief. • Make sure to use main headings and keywords. • Focus on the main ideas. • Classify and organise the information in a logical manner. • Use your own words where possible. • Include examples. 
• Remember that your summaries are there to help you", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## TABLE OF CONTENTS:\n\n- 1. General Language Tips to Get You Started\n- 2. Parts of Speech\n- 3. Punctuation\n- 4. Commonly Confused Words and Phrases\n- 5. Tips for Filling in Your College Registration Form\n- 6. Learn How to Summarise Your Study Material\n- 7. How to Ask for Help from Your Tutor\n- 8. Tips for Completing Your Written Assignments\n- 9. Tips for Answering Exam Questions\n- 10. Language Skills at Work - How to Write a Cover Letter\n- 11. Language Skills at Work - How to Write a Resignation Letter\n- 12. Language Skills at Work - Sending E-mails to Your Colleagues", - "page_start": 2, - "page_end": 2, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "- 19. You cannot use a dictionary when summarising your study material.\n - 20. Plagiarism is not a serious offence.\n - 21. When writing an exam, you should always answer the questions in numerical order.\n - 22. E-mail etiquette is important in the workplace.\n - 23. Mind maps help you to understand the relationships between con -cepts.\n - 24. When you answer an essay question, you should try to include as much information as possible.\n\n## Do the following:\n\n - 25. Create a mind map to summarise Chapter 7 (How to Ask for Help from Your Tutor). (5)\n - 26. List 3 things you need to do if you want to earn good marks for your written assignments. (3)\n - 27. List 5 important things to keep in mind when writing a cover letter.\n\n(5)\n\n - 28. List 5 of the things that you should include in a resignation letter.\n\n(5)\n\n - 29. List 3 methods you can use to summarise your study material. (3)\n - 30. Give 2 examples of how good language skills can benefit your ca -reer. (2)\n - 31. 
Complete the following sentence:\n\nSummarising your study material gives you the opportunity to", - "page_start": 57, - "page_end": 57, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## SUMMARIES\n\n## General Tips for Making Summaries\n\n - · Underline or highlight key points as you work through your study material, and make notes.\n - · When you come across a word or concept you don't understand, look it up in a dictionary, or do some research on the concept, and add your own definition to your summary.\n\n<!-- image -->", - "page_start": 31, - "page_end": 31, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## To start off with, here are a few tips for improving your general language and communication skills:\n\n - 1. Read as much as possible. Reading improves your vocabulary, and helps you to become familiar with sentence structure, word order, and the correct use of punctuation.\n - 2. Invest in a good dictionary. When you are unsure of the meaning of a word, or when you come across an unfamiliar word, make sure to look it up in your dictionary.\n - 3. Keep a journal. This will give you an opportunity to practice your writing skills on a regular basis.\n\n<!-- image -->", - "page_start": 6, - "page_end": 6, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## HERE ARE 10 TIPS FOR HOW YOU CAN ACHIEVE HIGHER MARKS FOR YOUR WRITTEN ASSIGNMENTS:\n\n## 1. Read (and follow) the instructions carefully.\n\nIf you are an Oxbridge Academy student, the general assignment guidelines will be provided in your 'Success' Study Guide. Specific instructions will also be included at the beginning of each of your assignments.\n\n## 2. Read the questions carefully.\n\nMake sure you understand what is being asked of you, so that you focus on answering the right questions, instead of providing irrelevant information.\n\n## 3. 
Remember that presentation is important.\n\nNeatness, spelling, and the structure of your assignment will all count toward the mark that you receive for your assignment.\n\n## 4. Use your course material and other external sources to find answers to the assignment questions.\n\nBut make sure to use your own words - don't just copy. You need to show the person marking your assignment that you have developed a sound understanding of the subject.\n\n## 5. When you use external resources, remember to reference them properly, and to include them in a bibliography.\n\nIf you don't, you may be guilty of plagiarism (www.oxforddictionaries. com/definition/english/plagiarism), which is a serious offence.\n\n - 6. Always hand in your own work, and make sure that you use your own words when you formulate your answers.\n\n## 7. When it comes to essay questions:\n\n - · Plan/outline your answer before doing the final draft.\n - · Remember that essays have titles, introductions, bodies, and conclusions.\n - · Use headings and paragraphs to structure your answer.", - "page_start": 37, - "page_end": 37, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## 9. Use correct grammar and spelling.\n\nThis will contribute to the clarity of your answers, and will prevent the person marking your paper from having to guess what you mean.\n\n## 10. For longer questions and essay-style questions: plan your answers before you start writing.\n\nThis will help you to formulate logical arguments, as well as to structure your answers clearly. In essay questions, you will get marks for using the correct format, which includes making sure that you have an introduction, sub-headings and paragraphs, and a conclusion.\n\n## 11. Where relevant, give examples.\n\nThis will help to demonstrate that you understand the topic.\n\n## 12. 
If you are writing an open-book exam, keep in mind that you won't have enough time to look up all the answers.\n\nMake sure that you know your work, and that you know where to look for key information. These types of exams are more focused on testing your understanding than on testing your knowledge, which means that you need to have a thorough grasp of the work.\n\n - 13. If you have to answer multiple-choice questions, make sure that you read the questions very carefully.\n\nTry to think of the correct answer before you read through the options, as you are less likely to become confused. When in doubt, go with your first instinct. If there is more than one correct answer, go with the an -swer that appears to be most correct.\n\n - 14. If you start running out of time towards the end of the exam, write short notes as answers to each of the remaining questions, instead of trying to answer each question perfectly.\n\nThis way, you should still earn some marks for writing down the most important points.\n\n - 15. If you have time left at the end of the exam, go back and read through your answers to make sure that you are happy with them.", - "page_start": 43, - "page_end": 43, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "<!-- image -->\n\n## CHAPTER 1:\n\n## GENERAL LANGUAGE TIPS TO GET YOU STARTED\n\nThis chapter focuses on the importance of language skills in the workplace, and covers basic tips for how you can improve your command of the English language.\n\n<!-- image -->\n\n'The English language is nobody's special property. It is the property of the imagination. It is the property of the language itself'\n\nDerek Walcott", - "page_start": 3, - "page_end": 3, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "## CHAPTER 9:\n\n## TIPS FOR ANSWERING EXAM QUESTIONS\n\n<!-- image -->\n\nYou're sitting at a table in a room full of students, hunched over your exam paper, with your pen in hand. 
Your brain feels fried, and your hand is starting to cramp. You look at the clock, and you realise that you have only ten minutes left to answer Question 5b - which counts for 50 marks.\n\nExams can be a stressful experience. To help reduce the stress and anxiety surrounding exams, and to help you achieve the best possible marks, we've compiled a list of exam-writing tips for you.\n\n## IMPROVE YOUR MARKS!", - "page_start": 41, - "page_end": 41, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "- · Each paragraph should contain one main thought or idea, and there should be a logical link between each paragraph and the next.\n - · Make sure that you focus on answering the question - only include relevant information, and remember to present logical arguments in support of your answer.\n - 8. Proofread your assignment before handing it in. Tip: read your answers out loud to make sure that they sound logical.\n\n## 9. Always keep a copy or electronic backup of your assignment.\n\nThis way, you won't have to start over if your computer crashes, or\n\nredo the whole assignment if the original goes missing.\n\n - 10. When you get your assignment back from your tutor: Read through the feedback, and learn from your mistakes. 
This will help you to prepare for your exams (if you have to write them), as well as to help you achieve better marks in future assignments.\n\n## TYPES OF QUESTIONS THAT YOU WILL FREQUENTLY COME ACROSS IN ASSIGNMENTS\n\nIn your assignments, you will often be asked to write short paragraphs or longer essays in which you have to 'explain' a particular concept, 'identify' certain features, or 'prove' a certain point.\n\nIt's sometimes difficult to figure out exactly what these questions mean -- which is why we are providing you with the following explanations:\n\n<!-- image -->\n\n<!-- image -->", - "page_start": 38, - "page_end": 38, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "- /SM590000 Troubleshooting tips", - "page_start": 358, - "page_end": 358, - "source_file": "sg246915.pdf" - } - ] - }, - { - "references": { - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf", - "query": "Is exposure to risk factors that may affect mental wellbeing at work comparable across European countries?", - "target_page": 25, - "target_passage": "The country data vary significantly. Sweden, Greece and Luxembourg report over two-thirds such exposures, and Germany, Lithuania and Czechia one-third or less.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "<!-- image -->\n\nIn 2007, 2013 and 2020, Eurostat asked employed persons in its ad hoc surveys to the Labour Force Survey (LFS) whether they had '… exposure to risk factors that can adversely affect mental wellbeing' . 10 In 2007 and 2013, the questions covered four items (time pressure and overload of work, violence or threat of violence, harassment and bullying, other factors). In the 2020 survey, 11 'Mental well-being' was operationalised by an additional four response options, resulting in a total of eight options: 12\n\n - 1. Severe time pressure or overload of work;\n - 2. Violence or threat of violence;\n - 3. 
Harassment or bullying;\n - 4. Poor communication or cooperation within the organisation;\n - 5. Having to deal with difficult customers, patients, pupils etc.;\n - 6. Job insecurity;\n - 7. Lack of autonomy, or lack of influence over the work pace or work processes; and\n - 8. Another significant risk factor for mental well-being.\n\nForty-five per cent of the employed persons reported being exposed to risk factors that can adversely affect mental wellbeing. The country data vary significantly. Sweden, Greece and Luxembourg report over two-thirds such exposures, and Germany, Lithuania and Czechia one-third or less. 13", - "page_start": 24, - "page_end": 24, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## ILO 'List of Occupational Diseases Recommendation'\n\n## 2.4. Mental and behavioural disorders\n\n - · 2.4.1. Post-traumatic stress disorder\n - · 2.4.2. Other mental or behavioural disorders not mentioned in the preceding item where a direct link is established scientifically, or determined by methods appropriate to national conditions and practice, between the exposure to risk factors arising from work activities and the mental and behavioural disorder(s) contracted by the worker\n\nAnd there are also emerging and new risks where health data will not be available until a certain number of workers are exposed for quite a while . Some prominent examples are nanotechnologies, the significant increase of new chemically based technologies, vision impairment due to long hours of work under artificial light at the same distance with small digital equipment, 183 more exposure to 'global' biological agents due to more interactional tasks, and travel and transport between countries and continents. On that note, the Covid-19 pandemic could also be used as an example. 
In 2022, the Commission proposed an update of the Recommendation on the ESOD to recognise Covid-19 as an occupational disease for workers particularly concerned: health and social care, home help or where there is a proven risk of infection (during a pandemic) in other sectors 184 .\n\nIt adds to these difficulties that workers are often not only exposed to one disease causing exposure but to several exposures at the same time (exposure is understood here in a broad sense: ranging from long working hours over postures and movements to harassment and violence and to noise and chemical and biological substances, etc.). In theory, a single risk - if below the threshold limit values and in line with legislation and standards will not cause harm - given that it is the only exposure . The impact of this single exposure is not strong enough to generate a disease on the level of severity of a recognised occupational disease. A combination of several risks might add several exposures, worsen the impact and cause serious harm.\n\nQuite well studied is the increased prevalence of musculoskeletal diseases, if not only ergonomic risks but also high psychosocial risks are prevalent at the workplace. 185 Research has also found unexpected connections like the synergistic effect of noise and certain chemicals on hearing impairments. Such outcomes of multi-risk profiles are often particularly difficult to identify and understand. Obviously, most sectors and occupations involve workplaces with multi-risk profiles . Some prominent major risks in certain sectors or occupations are:", - "page_start": 75, - "page_end": 75, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## 3.1 Psychosocial risks at work\n\nDuring the last 30 years, the scientific, political and practical discussions on psychosocial risks and preventive measures against psychosocial risks have gained strong importance. 
After a period of doubts and resistance, today they are regarded as risks of the same severity as the classical physical safety and health risks. 4 (Chapter 1 covers the psychosocial risk aspect; for the prevalence of mental diseases and the burden of mental diseases see Chapter 2.2. 5 )\n\nLooking at the steady increase of certain psychosocial risk indicators at workplace level, either the risks have increased and/or the number of people working in occupations with higher psychosocial risks has increased. 6,7 This is valid, for example, for the indicator time pressure, for example, in delivery services, transport, and often also clerical work; the workforce has grown in sectors where emotional demands from dealing with difficult clients, customers, pupils or patients are common; there are also more workers employed (or self-employed) in interactional occupations, for example, in call centres, or in occupations with a high level of emotional tensions, for example, education, health and care.\n\nFigure 2: Risk factors that can adversely affect mental wellbeing - EWCS 8 and ESENER 9\n\n<!-- image -->\n\nA major difference between the ESENER and the EWCS survey is the respondent. In ESENER those persons who are most familiar with OSH or responsible for OSH in an enterprise were asked whether a certain risk factor exists in the enterprise; in the EWCS survey workers themselves were asked whether they are exposed to a risk factor.", - "page_start": 23, - "page_end": 23, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Some of these groups are directly addressed by European and national legislation , for example, workers with disabilities, young workers or pregnant women. 
For other groups of workers, for example, for women or migrant workers, the legislative protection is formulated as a general 'equal treatment' prescription, like to provide preventive measures for all groups in an enterprise (Framework Directive, Article 15 'Risk groups'), or to provide solutions that fit to the individual (Framework Directive, Art. 6.2.d.). There are some prescriptions that refer to specific preventive activities, for example, to provide written instructions in different languages for safe work with chemicals.\n\n## 3.6 Conclusions\n\nThe exposure to psychosocial risks is increasing, with mental health prevalence still emerging. Major work-related exposures have grown in the past 15 to 25 years that is, time pressure, difficult clients, longer working hours and poor communication. There is also some evidence that countries with overaverage employment in sectors like health and care or other human and client-oriented services (education, social work, tourism, entertainment) suffer from longer working hours and more mental burden. The northern countries are at the top of the countries with highest mental burden. The southern countries have a high share of specific psychosocial risks related to work in tourism and entertainment, characterised by atypical working times and issues with difficult clients.\n\n## EU-OSHA found in its ESENER 2014 data analysis: 112\n\n'Concerning the sectors, national context appears to be related to differences in psychosocial risk management in all types of organisations, although in some sectors this relationship is weak. In the agriculture, forestry and fishing sector and the sectors of mining, construction, electricity, trade, transport, and accommodation and food, the low level of psychosocial risk management is observed also in a favourable national context. 
An explanation for this finding might relate to the large proportion of small organisations in these sectors, which, as concluded earlier, have poorer psychosocial risk management independently of the national context.'\n\nThere is a stable block of 'conventional' physical health risks - ergonomics and risk from the work environment - and ergonomic risks that did not significantly change since 1990. It varies between 15% for exposure to smoke, fumes and dusts to over 60% for repetitive hand/arm movements. Ergonomic risks develop in two directions: 1) traditional risks stagnate in total, that is, lifting and moving heavy loads, painful or tiring positions, and shifts between sectors (from industry to transport, health and care); 2) risks of inactivity and highly repetitive hand/arm movements increase. Beside sectoral and occupational differences, it can be noted that in general higher percentages of exposed employed persons (workers and self-employed) are working in eastern and southern Member States.\n\nSince 2006 the average working time per week went down by 15 minutes for employees, and a slight reduction of most atypical - or unsocial - working times can be observed. Work intensification has emerged until 2005 but seems to stagnate since then. 
There are strong indications but no quantitative evidence on the extent to which working long hours, work at atypical times and probably also work with higher risks were transferred to workers in non-standard types of employment .", - "page_start": 58, - "page_end": 58, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Figure 3: 'Exposure to risk factors adversely affecting mental wellbeing' - LFS Ad hoc survey 2020 14\n\n<!-- image -->\n\nESENER 2019 reveals that several psychosocial risk factors are reported to be present in a significant share of establishments in the EU27, namely having to deal with difficult customers, patients and pupils (59%) and time pressure (45%).\n\nThe aspects 'Difficult clients', 'Poor communication' and 'Long working hours' are major psychosocial risks. The increase of workforce in communicative and client-oriented occupations - social work, education, tourism and entertainment, health and care - during the last 30 years adds to the conventional work with clients in service, sales and health occupations.\n\nThe next table shows the top seven EU Member states with the highest share of these risks for all sectors and for the sector 'Human health and social work activities' (HHSW).\n\nTable 1: Psychosocial risks, Top countries 'All Sectors' and 'Human health and social work' - ESENER 2019\n\nDifficult customers, patients and pupils ('clients') seem to be the most widespread psychosocial burden, with workers in Portugal, Malta and Cyprus are most exposed. 
In the sector HHSW, eastern European countries are much more present, Slovenia at the top, followed by Portugal, Estonia, Poland and Bulgaria.", - "page_start": 25, - "page_end": 25, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## 3 Status of working conditions\n\nThis chapter on health and safety-related working conditions provides an overview on status and development of working conditions; it is mainly based on the indicators that were selected for the data visualisation in the OSH Barometer . This is a quite limited selection of major data; in surveys and statistics many more indicators on working conditions are provided, particularly at national level.\n\nPractically all working conditions influence mental health , that is, they involve psychosocial risks , and all also involve 'physical risks' , including safety aspects of these risks. Mental health risks are illustrated in the OSH Barometer by datasets on time pressure, poor communication, dealing with difficult clients, discrimination and harassment, and similar. Physical risks include datasets on accidents at work, exposures to chemical and biological substances, exposure to noise, vibrations, high or low temperatures, and working tasks with ergonomic risks, like carrying, lifting heavy loads or work in tiring or painful positions; and also permanent physical inactivity, mainly sitting or long standing. 2\n\nThe figure below shows the percentage of enterprises reporting OSH risks 'present in the establishment', compared between 2014 and 2019 (ESENER) and covering mental and physical risks. 
3\n\nFigure 1: Risk factors present (% of establishments) - ESENER 2014 and 2019\n\n<!-- image -->\n\nNote: Prolonged sitting was a new item in the 2019 survey.\n\nBetween 2014 and 2019, some risk factors increased, like 'Repetitive hand and arm movements', 'Lifting or moving people of heavy loads', and 'Having to deal with difficult customer, patient and pupils; many others showed no changes, like 'Risk of accidents with machines or hand tools', 'Chemical or biological substances', and 'Loud noise', or minor decreases like 'Risk of accidents with vehicles'.", - "page_start": 22, - "page_end": 22, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- 2 Methodological remark: Many workers in the service sectors have similar physically demanding work like workers in manufacturing, construction and agriculture. The statistical assignment of enterprises of a certain type to the service sectors and the sectors industry/construction/agriculture is a too rough approach to describe and analyse working conditions, particularly if more detailed data on working conditions are available. 
For that reason, when talking about health outcomes, in this report often more informative categories are used, for example, managerial jobs (LFS, Eurostat terminology), or high-, medium- and low-skilled clerical work (EWCS), or high-skilled manual and low-skilled manual work (Eurostat), independent on the sector where this work is performed.\n - 3 EU-OSHA - European Agency for Safety and Health at Work: Third European Survey of Enterprises on New and Emerging Risks (ESENER 3), ESENER Data visualisation, section 'Comparisons 2014-2019'; for 'Prolonged sitting' value from 'Data visualisation 2019' not from 'Comparisons'.\n - 4 Some of the very first OSH regulations on psychosocial risks at workplaces were issued by Denmark in the early 1980s, dealing with monotony at work, stress, risk of violence at work and risks of working alone.\n - 5 Psychosocial risks are regarded as reason, and mental health/disease as consequence or outcome of these risks.\n - 6 OSHWiki, 2022: Psychosocial issues - the changing world of work; OSHWiki, 2022: Psychosocial risks and workers health\n - 7 EU-OSHA, 2007: Expert forecast on emerging psychosocial risks related to occupational safety and health data for 2015: Eurofound: European Working Conditions Survey - Data Visualisation; Data for 2005: Eurofound:\n - 8 Eurofound, 2017: Sixth European Working Conditions Survey - Overview report (2017 Update) (p. 48). 
Raw Fourth European Working Conditions Survey\n - 9 EU-OSHA: ESENER Data visualisation, Comparisons 2014-2019.\n - 10 Due to the change of possible response items, the data for the three surveys cannot be compared; the number of mental risk factors increased from three in 2007 and 2013 to eight in 2020.\n - 11 Eurostat, 2021: EU labour force survey 2020 module on accidents at work and other work-related health problems : assessment report : 2021 edition", - "page_start": 140, - "page_end": 140, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "<!-- image -->\n\nIf a risk assessment is conducted just for compliance purposes , and not used appropriately for the successful management of OSH and reduction of accidents and occupational diseases, the risk assessment may lose its dynamic nature, and findings may be neither implemented nor communicated appropriately to employees.\n\nThe types of risks included in risk assessments are related to the risk profiles of different sectors, for example, it is likely that risk assessments in heavy industries and manual occupations focus more on safety risks. However, while sectoral risk profiles will naturally bias the identification of risks, smaller establishments seem to have less of a focus on MSDs or psychosocial risk factors , which would suggest that they are less well recognised or understood, in particular for MSEs. 415 Establishments also report that psychosocial risk factors are more difficult to manage than other OSH risks, while as business size grows, so does the proportion of respondents who perceive psychosocial risks as more difficult to manage than other OSH risks. 416\n\nESENER 2019 shows that a reluctance to talk openly about these issues seems to be the main difficulty for addressing psychosocial risks (60% of establishments in the EU27). 
This, as with all the other difficulties considered (lack of awareness among staff/management and lack of expertise or specialist support), is reported in all enterprise sizes but more frequently as establishment size grows.\n\nSpecifically, among those establishments that report having to deal with difficult customers, patients or pupils, 51% of those employing 20 or more workers report having a procedure in place to deal with possible cases of threats, abuse or assaults by clients, patients or other external persons. This share rises to 74% among establishments in human health and social work activities.\n\nThe development of concrete outputs such as measures to better manage risks that can result in musculoskeletal diseases has actually seen a decline between 2014 and 2019, as follows:\n\n - · 85% to 77% on the measure of 'provision of equipment to help with the lifting or moving of loads or other physical heavy work'; 417\n - · 73% to 67% concerning 'provision of ergonomic equipment'; and\n - · 66% to 60% regarding 'encouraging regular breaks for people in uncomfortable or static postures including prolonged sitting'. 418", - "page_start": 127, - "page_end": 127, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- 47 Adăscăliței et al., 2021: The intensification of work in Europe: A multilevel analysis\n - 48 EU-OSHA, 2002: Report - New forms of contractual relationships and the implications for occupational safety and health (p. 
7).\n - 49 Eurofound, 2011: Impact of subcontracting on working conditions\n - 50 Koranyi et al., 2018: Precarious employment and occupational accidents and injuries - a systematic review\n - 51 ILO Indicator description: Occupational injuries\n - 52 See the diagrams and country data in the OSH Barometer under: https://visualisation.osha.europa.eu/oshbarometer/\n - 53 Tynes et al., 2017: Physical working conditions as covered in European monitoring questionnaires 54 EU-OSHA: Third European Survey of Enterprises on New and Emerging Risks (ESENER 3) - first findings, 2019, p. 3 and ESENER Data visualisation, section 'Comparisons 2014-2019', section 'Risk factors present in the establishment', Export data\n - 55 EU-OSHA calculations based on EWCS raw data.\n - 56 Eurostat, LFS Ad hoc modules: Persons reporting exposure to risk factors that can adversely affect physical health by sex, age and factor\n\n57 In the LFS-survey the respondents had to decide which of 11 possible risk factors is the most 'serious one'. Quote: 'Eurostat proposed to implement the exposure to risk factors for physical health at work by using one question that strictly reflects the variable or twelve questions asking for the presence of any of the eleven risk factors and then ask for the most serious one.'\n\nIn the EWCS and ESENER all reported risk factors were registered.", - "page_start": 142, - "page_end": 142, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Concerning the complaints about poor communication and cooperation within the organisation, all three Nordic EU Member States are represented in the seven countries with the highest burden, together with several central European countries. This is valid for both selected groupings, 'All sectors' and 'HHSW'.\n\nRegarding long or irregular working hours , we see a mix of countries from all regions. 
The order of countries in the sector HHSW - a mixture of countries from the East, South and North - is probably due to specific sectoral regulations of working times. Sweden is at the top in HHSW with 57%, followed by Denmark, Cyprus, Latvia and Czechia, all between 44% and 48%.\n\nMany analyses of psychosocial risks include other relevant factors like decision latitude (or decision authority) and skill discretion (level of skill and creativity required on the job). In a long-term analysis of the responses to the EWCS between 1995 and 2015, the authors conclude: 15\n\n'Our findings suggest that work stress generally increased from 1995 to 2015, and that the increase was mostly driven by psychological demands. People working in lower-skilled occupations had generally higher levels of job strain and effort-reward imbalance, as well as they tend to have a steeper increase in job strain than people working in higher-skilled occupations. Most of the change occurred from 1995 to 2005.'\n\nAccording to this study, the differences between the skills groups are significant, below illustrated for the development of 'Psychological demands' and 'Job strain' ; for these two indicators high-skilled and low-skilled manual workers are at the top of the scale.\n\nFigure 4: Psychosocial risk factors - Differences between skill groups (Job strain)\n\n<!-- image -->", - "page_start": 26, - "page_end": 26, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf", - "query": "Has the average working week for employees working full-time decreased since 2006?", - "target_page": 31, - "target_passage": ". 
The statistical data (Eurostat) show a slight decrease of the average weekly working time for full-time employees (15-64 years) from 40.2 to 39.9 hours between 2006 and 2019.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "Figure 7: Psychosocial risk factors - Differences between skill groups (Skill discretion)\n\n<!-- image -->\n\nFor 'Decision authority' and 'Skill discretion', the authors found a stable situation since 1995, even a small rise of skill discretion for manual workers after 2010. Regarding 'Psychological demands' and 'Job strain', the major increase for all groups took place between 1995 and 2005. This growth decelerated after 2005, this observation is also valid for other working conditions, like work intensity.\n\n## 3.1.1 Working time in hours and at atypical times\n\nToo many hours of working time and/or working hours at atypical or unsocial times can put the mental and the physical health of humans at risk. It is also regarded as a major contributing factor to work accidents , due to fatigue or exhaustion. 16\n\nThe main indicator to describe working time is the number of the weekly average working hours of full-time employees. However, regarding its impact on health and safety, other aspects of working time are of the same relevance :\n\n - · How long is the average working day?\n - · At which times and days is this work done (typical, atypical times)?\n - · How often do long working hours take place?\n - · Is the work split between two jobs?\n - · How flexible are start and end?\n - · How intense is the work during this time (breaks, deadlines)?\n - · Which groups of workers have standard working times and which do not (e.g. depending on the sector or the type of contract, e.g. 
sub-contracted workers or self-employed)?\n\nThere is a slight trend towards fewer working hours for full-time employees (not 'Employed persons') in the EU27; between 2006 and 2019 the average weekly working time dropped from 40.2 to 39.9 hours, a decrease of approximately 15 minutes. 17\n\nRegarding the weekly hours, there are no striking differences between the EU27 Member States. In 2019, Cyprus, Austria and Malta with a high share of workers in the sector of tourism (accommodation) had the highest number of working hours per week (above 41 hours), and Denmark, the Netherlands and Italy the lowest number (39 or fewer) (full-time, employees, 15-64 years, all NACE codes). 18", - "page_start": 28, - "page_end": 28, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Figure 9: Average working time and work during unsocial hours - Eurostat LFS\n\n<!-- image -->\n\nTwo country examples might illustrate these developments (all data for 2019): Slovakia, a country with a high share of process-based industries, reports that 15.0% of its workforce is working at night and 29% in shifts; for the EU27 this rate is 5.2% respectively and 18.3%. 25 Regarding work on Sundays three other countries are at the top of the EU27, the Netherlands, Ireland and Spain; they report between 18% and 21% (EU27 average = 13.5%); all three countries have an above-average share of sectors like transport, tourism and agriculture. 26\n\nFor all these types of work it should be take into account that other groups of workers under nonstandard types of employment contracts (self-employed, agency workers, students, pensioners, undeclared workers) might have taken over work at these atypical working times.\n\nConcluding, it can be stated that there is a slight trend towards a reduction of weekly working hours for regularly employed workers, including a stable commuting time. Working hours at atypical times show a mixed picture. 
Looking at most types of employees, atypical working time decreased, except work on Sundays . For self-employed with employees, the working time at atypical hours is in general at a higher level. The number of employees in night work is decreasing. More employees in service and client-related occupations at night or in shifts but also here the atypical times are slightly decreasing.\n\nProbably these changes mirror the structural economic changes , that is, the shift of workforce between sectors. Night work was common in many industries as part of a three 8-hours shifts, not only in industries with permanent production processes (steel, chemicals, etc.). 27 Moreover night work is and was common in essential services like health, transport, technical infrastructure and security. The", - "page_start": 31, - "page_end": 31, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Figure 8: Hours worked per week of full-time employment, EU27 - Eurostat\n\n<!-- image -->\n\nThe commuting time between home and workplace is quite stable; in 2005 at EU27 level, it stood at 42.4 minutes, and in 2015 Eurostat reports 40.2 minutes (time for both ways, to the workplace and back). 19\n\nWork at atypical working times is in general regarded as a working condition with negative health impact, called work extensity . The two major indicators of atypical working times are work at 'atypical working times' and 'long working hours' .\n\nEurostat reports for 'Employment at atypical working time' 20 a minor decrease between 2011 and 2019, from 38.8% to 37.2% (EU27), for all employed workforce and all types of such atypical time. 21 Some groups of self-employed show a higher rate of atypical working times but also for most of the categories of self-employed the rates decreased during the period 2011 to 2019. High managerial selfemployed had a slight increase from 42.1% to 43.2% in this period. 
For the low managerial selfemployed Eurostat finds a decrease from 69.2% to 64.5%. The figures for small entrepreneurs dropped slightly from 56.6% to 54.1%, the same applies for employed persons in personal care work with a minor change (50.6% to 49.8%). Agricultural self-employed had the highest level of such working times; they showed a decrease from 68.4% to 63.4%.\n\nThe length of the daily or weekly working time, its allocation over the 24 hours of a day or at night are important factors for health and wellbeing. The statistical data (Eurostat) show a slight decrease of the average weekly working time for full-time employees (15-64 years) from 40.2 to 39.9 hours between 2006 and 2019. 22 The data also document slight increases and decreases of work at atypical times (response option for frequency: 'usual'). 23 In 2006 and 2019, the following percentages of all employed persons worked at atypical times: on Saturdays the percentage decreased from 28% to 25%, working on Sundays remained stable at around 13.5%, working in the evenings decreased from 19% to 15%, work at night fell from 7% to 5% and shift work increased slightly from 17% to 18%. 24", - "page_start": 30, - "page_end": 30, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "In several occupations, classical safety risks often add to the above-mentioned exposures , that is, slips, trips and falls, risks related to moving parts of machinery, moving vehicles, exposure to hot, cold, or hazardous materials, loud noise, chemical or biological substances, and in general physically exhaustive work.\n\nA certain ergonomic risk of many administrative and supervisory jobs is physical inactivity (61%), in practice meaning sitting most of the working time in front of digital equipment, sitting to make phone calls or sitting in meetings. 
Not only administrative tasks but also many occupations in transport and industry require prolonged sitting (transport, cashiers, parts assembly, etc.).\n\nIn the 10-year period before 2005, EU-wide surveys found a significant increase in work intensity. Major differences in work intensity and working time patterns can be seen between occupations, forms of work, sectors and enterprise size, for example. The length of the daily or weekly working time and its allocation with the 24 hours of a day or at night are important factors for health and wellbeing. The Eurostat data show a slight decrease in the average weekly working time for full-time employees (15-64 years) from 40.2 to 39.9 hours between 2006 and 2019.\n\nEurostat reports for all types of 'employment at atypical working time' a minor decrease between 2011 and 2019, from 38.8% to 37.2% (EU27 average), for all employed workforce and all types of such atypical time. The data also document slight increases or decreases of the different types of work during atypical times > on Saturdays the percentage decreased from 28% to 25%, working in the evenings decreased from 19% to 15%, working on Sundays remained stable at around 13.5%, work at night fell from 7% to 5%, and shift work increased slightly from 17% to 18%. Some groups of self-employed show a higher rate of atypical working times: for high-managerial self-employed , this rate is 43.2% and for low-managerial self-employed 64.5%.\n\nSignificant differences also exist between eastern/southern and central/northern/western European countries. More physical and ergonomic risks (except inactivity) are reported from eastern and southern EU Member States but more emotional demands (e.g. difficult clients, poor communication and long working hours) in northern and central European countries. 
One of the major reasons might be the reallocation of industrial production to eastern countries after the EU extension to 24 and later to 27 Member States.\n\n## Conditions of employment and workforce development\n\nDuring the past decades and at faster pace after 1990, a greater variety of non-standard contractual relations has emerged. Typical characteristics of non-standard work are part-time work, temporary (or fixed-term) work, seasonal work, casual work, home-based work, telework, self-employment or family work. Currently, high public awareness is directed to those types of non-standard work that are connected either to new forms of contracts (voucher, platform, zero-hours, portfolio, etc.) or increasing types of work not bound to the premises of the employer (mobile, at home, at client's place), mostly made possible by the increased use of modern information and communication technologies (ICT). These forms of work often have as a - additional - major characteristic a less clear employerworker relationship .\n\nHowever, in 2019 the conventional employment contract still accounted for around 86% of the workforce (EU27), 9% are 'own-account' workers, that is, self-employed without employees. The remaining 4% were self-employed with employees (employers) and less than 1% were contributing family workers. 
Of all employed workers, 17.2% worked part-time and 13.3% had temporary contracts.", - "page_start": 10, - "page_end": 10, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "methodology, the OSH practitioners who were asked in ESENER seem to have a different view on time pressure than the workers themselves who are respondents in the LFS.\n\nFigure 15: Percentage of employed persons with working time under pressure (per country, sum of responses 'Always' and 'Often') - LFS Ad hoc 2019\n\n<!-- image -->\n\nOne hypothesis to explain the increased time pressure is to draw a direct connection between short weekly working time and more intense work ; or in other words, a short weekly working time leads to more intensification of work or more long hours or atypical working times ('trading flexibility for effort'). 38\n\nThe analysis of EU survey data shows a mixed picture : Firstly, ESENER data corroborate this hypothesis, the three countries with highest percentage of work under time constraints - that is, Finland, Sweden and Denmark - all have working hours under the EU average. Secondly, LFS data show a different picture; a country like Greece has the longest working hours and also reports the highest time pressure, the same 'combination' - but less extreme - applies to Austria, Cyprus and Malta. Trends of low or less than average working time and no time constraints are reported for Lithuania, and medium working time and low time constraints for Italy and Ireland.\n\nAn analysis of EWCS data concluded 39 that in general intensity increases with long working hours, in enterprises with 1-19 the work intensity index (on a scale between 0 and 12) is 4.4, in larger enterprises with above 40 employees it is 6.3. 
This is in line with ESENER data that corroborate the importance of the size of the enterprise for time pressure and long working hours.\n\nLiterature - from very diverse disciplines - on work intensification points to reasons for intensification on developments as: 40\n\n - · Economic developments, particularly the dominance of neoliberalist policies and enhanced competition between workers, companies and states; reduction of state influence and privatisation. 41\n - · Pressure due to substantial organisational changes, for example, introduction of short-term economic objectives in enterprise policies, 42 expansion into new markets or new countries, acquiring other enterprises or merging, being acquired, restructuring of management or of basic staff working conditions (contracts, working time, flexibility). 43\n - · Decrease of trade union influence or worker participation regarding labour relations.\n - · Liberalisation of labour legislation, creation of 'new forms of work' and new contract types, beyond the permanent full-time employment. 44\n - · New forms of management, application of management concepts like just-in-time production or lean management, higher flexibility of production and higher customer orientation, 45", - "page_start": 35, - "page_end": 35, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Figure 18: Employment types in EU27, development 2005 to 2022 65 - Eurostat\n\n<!-- image -->\n\nThe minor deviation of the sum of the different types of employment to the 100% 'Employed persons' is due to 'No response' answers. The data of part-time employees and of employees with a temporary contract are for the full year 2019, not for Q4.\n\nThe group 'employees' is characterised by two major contractual distinctions that are important for OSH: 1) full- or part-time work, and 2) the time limit of the contract (indefinite or temporary). 
Moreover, in many Member States there are major differences between employment contracts of private employers in comparison to public employers.\n\n## Definitions Eurostat 66\n\nEmployers = self-employed with employee: employing one or more employees: persons who work in their own business, professional practice or farm for the purpose of earning a profit and who employ at least one other person.\n\nSelf-employed: not employing any employees (self-employed without employees): persons who work in their business, professional practices or farm for the purpose of earning a profit and who employ no other persons.\n\nEmployees: persons who work for a public or private employer and who receive compensation in the form of wages, salaries, fees, gratuities, payment by result or in kind. Contributing family workers: persons who help another member of the family to run a farm or business, provided they are not classed as employees.", - "page_start": 46, - "page_end": 46, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- 21 Eurostat: Ad hoc module 2019 on work organisation and working time arrangements. Employment at an atypical working time (time period start with 2011), here and here\n - 22 Eurostat Data for 2019: Average number of usual weekly hours of work in main job, by sex, professional status, full-time/part-time and economic activity (from 2008 onwards, NACE Rev. 2). here Filter: Employees, Full-time, All NACE, EU27 2019 Q4.\n\nEurostat Data for 2006: Average number of usual weekly hours of work in main job, by sex, professional status, full-time/part-time and economic activity (1998-2008, NACE Rev. 
1.1), here Filter: Employees, Full-time, All NACE, EU27 2019 Q4.", - "page_start": 141, - "page_end": 141, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "number of workers in industry decreased, but the number of workers in the above-mentioned service sectors increased.\n\n## 3.1.2 Work intensity\n\nThere are numerous references showing that during the period between 1990 and 2005 work intensity has considerably increased . 28\n\nFor example, Eurofound has analysed the responses to the two EWCS questions on high speed at work and tight deadlines. The EWCS found a significant increase of work intensity between 1991 and 2005. In 1991, 'Working at a very high speed' was for the majority of respondents not an issue. Fifty-two per cent of the workers responded to this statement 'Never' or 'Almost never'; in 1991, 24% worked at high speed and responded 'Around ¾ of the time', 'Almost all of the time' and 'All of the time'; until 2005 this response rate went up by 11% to 35%.\n\nWorking to tight deadlines was not an issue for 34% in 1990, and in 2005 only for 19%, a reduction of 15%. The percentage of the sum of responses 'Around ¾ of the time', 'Almost all of the time' or 'All of the time' to this question on tight deadlines increased between 1991 and 2005 from 29% to 37%. Regarding these two indicators, work intensity has evidently increased between 1991 and 2005. 29\n\n<!-- image -->\n\nAfter that first period between 1991 and 2005, this development seems to stagnate between 2005 and 2015 . 30 The responses 'Almost all of the time' or 'All of the time' vary only slightly, between 33% and 37% depending on year and question ('Working at high speed' or 'Working to tight deadlines').\n\nDifferences can be seen regarding sector, company size and occupation. Regarding work intensity , ESENER enterprise data on time pressure for the EU27 indicate a slight increase of 2.3% between 2014 and 2019 from 43% to 45%. 
31 Interestingly, according to ESENER, time pressure drastically increases with the size of the enterprise . In enterprises with 5 - 9 employees, 39% report time pressure, and in enterprises with above 250 employees 69%. 32 The same applies for long working hours, where enterprises with 5 - 9 employees report 19% 'long working hours', and in enterprises with above 250 employees this percentage increases to about 39% (EU27, 2019). 33", - "page_start": 32, - "page_end": 32, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- 12 Eurostat: Persons reporting exposure to risk factors that can adversely affect mental well-being by sex, age and factor, data here and explanatory metadata here\n - 13 It has to be noted that in 2007 and 2013 the interviews were done face-to-face. In 2020 the interviews were conducted either face-to-face or by phone, depending on the public health measures in each country. The responses were influenced by work under conditions of the pandemic.\n - 14 Eurostat: Persons reporting exposure to risk factors that can adversely affect mental well-being by sex, age and educational attainment level\n - 15 Rigó et al., 2021: Work stress on rise? Comparative analysis of trends in work stressors using the European working conditions survey\n - 16 WHO/ILO, 2021: WHO/ILO joint estimates of the work-related burden of disease and injury, 2000-2016: Global monitoring report (p. 35ff).\n - 17 Eurostat provide data for the periods before and after the NACE revision in 2008. Data for 2019: Average number of usual weekly hours of work in main job, by sex, professional status, full-time/part-time and economic activity (from 2008 onwards, NACE Rev. 2), here Filter: Full-time, 15-64 years, all NACE sectors. Data for 2006: Average number of usual weekly hours of work in main job, by sex, professional status, full-time/part-time and economic activity (1998-2008, NACE Rev. 
1.1), here\n - 18 Eurostat, 2018: How many hours do Europeans work per week? Average number of usual weekly hours of work in main job, by sex, professional status, full-time/part-time and economic activity (from 2008 onwards, NACE Rev. 2) - hours[lfsa\\_ewhun2], here\n - 19 Mean duration of commuting time one-way between work and home by sex and age (source: Eurofound), Here\n - 20 Eurostat definition: The atypical work distinguishes between 'evening or night work', 'Saturday or Sunday working', and 'shift work'. Data for 2020 are available but indicate a strong reduction of atypical working times, the reason is probably that sectors with a high rate of atypical working times like tourism, transport, entertainment, hotels and restaurants could not work as in previous years, and also production lines in industry, often shift work, were stopped.", - "page_start": 140, - "page_end": 140, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Some EU OSH legislation may be adapted and modernised to cope with the changes in technologies, employment conditions, longer working life, and a growing share of mobile and remote work. Many of these changes in the world of work have caused higher insecurity, less clear employer-worker relations, and a higher burden of psychosocial and ergonomic risks.\n\n## Which are the areas of concern?\n\nIncomplete compliance with OSH regulation is more noticeable in certain sectors and types of work. Most of these types of work - mobile and home-based work, domestic work, care work and long-term domestic care work, seasonal work, platform work, non-voluntary self-employed - are growing in terms of workforce. 
But many of these work and employment formats are until now not covered in the same", - "page_start": 17, - "page_end": 17, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf", - "query": "What is the definition of a work accident according to the International Labour Organisation?", - "target_page": 38, - "target_passage": "ILO Definition of accident: ‘An occupational accident is an unexpected and unplanned occurrence, including acts of violence, arising out of or in connection with work, which results in one or more workers incurring a personal injury, disease or death.’", - "chunk_present": { - "presence": true, - "index": 5 - } - }, - "top_chunk": [ - { - "text": "8.7 Take immediate and effective measures to eradicate forced labour, end modern slavery and human trafficking and secure the prohibition and elimination of the worst forms of child labour, including recruitment and use of child soldiers, and by 2025 end child labour in all its forms\n\n8.8 Protect labour rights and promote safe and secure working environments for all workers, including migrant workers, in particular women migrants, and those in precarious employment\n\nThe WHO is following a global approach towards occupational health . 
They summarised their base of evidence on global working conditions in some key facts : 332\n\n - · In many countries more than half of workers are employed in the informal sector with no social protection for seeking health care and lack of regulatory enforcement of occupational health and safety standards.\n - · Occupational health services to advise employers on improving working conditions and monitoring the health of workers cover mostly big companies in the formal sector and more than 85% of workers in small enterprises, informal sector, agriculture and migrants worldwide do not have any occupational health coverage.\n - · Work-related health problems result in an economic loss of 4-6% of GDP for most countries. The basic health services to prevent occupational and work-related diseases cost on average between US$ 18 and US$ 60 (purchasing power parity) per worker.\n - · About 70% of workers do not have any insurance to compensate them in case of occupational diseases and injuries.\n - · Research has demonstrated that workplace health initiatives can help reduce sick leave absenteeism by 27% and health-care costs for companies by 26%.\n\nBased on this evidence, the WHO Global Assembly agreed on a 'Worker health global plan of action' in 2007 333 (updated 2013) that included targets like better prevention at workplaces, that is, Objective 2: to protect and promote health at the workplace. The WHO has worked together with the ILO to estimate the burden of diseases from work and published the 'WHO/ILO joint estimates of the work-related burden of disease and injury'.\n\nWhen looking at the work of global institutions during the past two to three decades - and for the ILO also much further back - many important agreements, conventions, government actions and global business programmes have been negotiated, agreed and issued. The objectives and necessary measures at a global level have been made much more concrete by these efforts. 
OSH and working conditions are on the agenda of these organisations, and general and concrete targets and indicators have been set. The task is the implementation of these principles and programmes in every region and country of the world in a way that it reaches all workplaces.\n\n<!-- image -->\n\nOSH Barometer - OSH Infrastructure - International organisations and international programmes https://visualisation.osha.europa.eu/osh-barometer/osh-infrastructure/international-organisations https://visualisation.osha.europa.eu/osh-barometer/osh-infrastructure/international-programmes\n\nESENER - Data visualisation\n\nhttps://visualisation.osha.europa.eu/esener/en/survey/datavisualisation/2019", - "page_start": 116, - "page_end": 116, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "segmentation of enterprises into profit centres, quality management obligations, externalisation/subcontracting of service areas like cleaning, canteen, security and so on.\n\n - · Increased communication and interdependency, time coordination and synchronisation requirements between units, enterprises and in supply chains.\n - · Less direct supervision and more objective and results-based management.\n - · Last but not least the massive introduction of ICT and other work-intensifying technologies.\n\n<!-- image -->\n\nThe main reasons for stagnation after 2005 might be that many of the above-mentioned concepts or policies were developed or had their peak during the 1980s, 1990s or the first decade of the 21st century. Some of them lost their dynamic (e.g. privatisation), or have become a kind of standard (management by objectives), or were widely implemented in the first decade of the 21 st century (ICT facilities at most workplaces); also, some negative impacts on working time were mitigated by state interventions (i.e. the EU Working time directive 46 ) or labour agreements. 
47\n\nOf particular interest for OSH probably is that the changes in labour legislation, the production in international supply chains and technological improvements were sufficiently developed to shift quite a relevant part of work to other types of contracts, that is, to subcontractors, self-employed or temporary agent workers and other forms of non-standard work contracts. Reasons were economic savings but also better management of intense work periods, peak times and risky work .\n\nThese developments are probably the main reason that work intensity stayed at a similar level for the employed workers with a standard contract while the working conditions of other types of work degraded. EU-OSHA has taken this conclusion already in 2002 in its report 48 on 'New Forms of Contractual Relationships and the Implications for Occupational Safety and Health':\n\n - '1. the transfer of risks in the (practical) conditions of work to non-permanent employees and to subcontractors;\n - 2. segmentation in the workforce based on differences in contractual conditions of employment (working hours, job insecurity, and qualifications).\n\nIn the first scenario, risks directly related to working conditions (bad ambient and ergonomic conditions)", - "page_start": 36, - "page_end": 36, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## 6 OSH legislation and OSH infrastructure in the EU\n\n## 6.1 Foundation, legislation, compliance and supervision\n\nThe ethical and economic importance of safe and healthy working conditions led to an integration of this target in international conventions and agreements; it is also embedded in the treaties of the EU.\n\nUN has included 'Safe and secure work environment' as an indicator for Goal 8 of their 17 global 'Sustainable Development Goals ' for 2030. Goal 8 aims to 'Promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all' . 
334 It requests in its target 8.8 to 'Protect labour rights and promote safe and secure working environments for all workers, including migrant workers, in particular women migrants, and those in precarious employment.'\n\nThe Preamble to the Constitution 335 of the ILO includes as an objective ' … the protection of the worker against sickness, disease and injury arising out of his employment ...' . In 2022, the objective of a safe and healthy working environment became part of the 'Declaration on Fundamental Principles and Rights at Work', adding OSH to the existing four basic principles, that is, 1) freedom of association and right to collective bargaining, 2) the elimination of all forms of forced or compulsory labour, 3) the effective abolition of child labour, and 4) the elimination of discrimination. Between the year of the foundation in 1919 and today, the ILO agreed on more than 40 conventions and recommendations addressing OSH, be it either general provisions or provisions for specific groups and sectors or specific risks. 336\n\nThe EU and its predecessors have enshrined health and safety of workers in their founding treaties . Already in 1951, it was stated in Article 3 of the European Coal and Steel Community (ECSC) Treaty that 'The institutions of the Community shall, within the limits of their respective powers, in the common interest … promote improved working conditions and an improved standard of living for the workers in each of the industries for which it is responsible …' . 337 During the development of the European institutions and the EU from those years until today, references to working conditions and safety and health were always part of the treaties, and also in the latest Treaty of Lisbon from 2009. 338\n\nIn Article 151 of the Lisbon Treaty, it is stated that 'The Union and the Member States, shall have as their objectives the promotion of employment, improved living and working conditions …' . 
The areas of such promotion are set out in Article 153 , where two bullet points refer to OSH: (a) improvement in particular of the working environment to protect workers' health and safety; (b) working conditions. In 2017, the European Commission launched an initiative to agree on the 'European Pillar of Social Rights' (EPSR), comprising 20 key principles guiding the EU in the field of social policy. 339 These pillars were agreed by the Member States; Principle 10 refers to a ' Healthy, safe and well-adapted work environment and data protection.'\n\nThese European and international agreements and treaties regard safety and health as essential for human development, a basic human right . The main reasoning is to eliminate or reduce as much as possible suffering, sickness, disability and death of workers. Often the reasoning refers to intertwined objectives, that is, to economic growth (UN), or to reduce the economic burden of incomplete health and safety at work, be it the burden for enterprises or the society as a whole, that is, by 'Promotion of employment' (Lisbon Treaty) or by 'Prolongation of the participation in the labour market' (EPSR) or 'Data protection' (EPSR).", - "page_start": 117, - "page_end": 117, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "ICOH stated in its Centennial Declaration:\n\n'The globalization process has not succeeded in equalising the conditions of work but in fact the opposite has occurred; the gaps are increasing. Poverty, inequality and under-development are closely associated with the poor safety, health and social conditions of work, as they are also linked with illiteracy, lack of education, poor access to health services and low or non-existent social protection. 323\n\nInternational organisations like the ILO, WHO and UN have also taken up the task to promote OSH worldwide . 
The ILO has established a system of conventions; their implementation is monitored in the signature states. 324 The ILO has issued and decided on nine 'Fundamental conventions' that have been signed by 92% of the ILO member states. 325 These fundamental conventions are:\n\n - 1. Freedom of Association and Protection of the Right to Organise Convention, 1948 (No. 87);\n - 2. Right to Organise and Collective Bargaining Convention, 1949 (No. 98);\n - 3. Forced Labour Convention, 1930 (No. 29) (and its 2014 Protocol);\n - 4. Abolition of Forced Labour Convention, 1957 (No. 105);\n - 5. Minimum Age Convention, 1973 (No. 138);\n - 6. Worst Forms of Child Labour Convention, 1999 (No. 182);\n - 7. Equal Remuneration Convention, 1951 (No. 100);\n - 8. Discrimination (Employment and Occupation) Convention, 1958 (No. 111); and\n - 9. (since 2022) Two conventions on Occupational Safety and Health, that is, C-155 Occupational Safety and Health Convention, 326 and C-187 Promotional Framework for OSH Convention. 327\n\nThe ILO also promotes the 'Decent work' approach to improve working conditions, covering aspects like fair income, social protection for families, better prospects for personal development and social integration, and equal opportunities and treatment. In the frame of this approach, the ILO has developed flagship programmes like 'Safety and Health for all' 328 and the 'Global Action for Prevention on Occupational Safety and Health' (OSH-GAP) , a programme to support and promote OSH globally. 
329 Its priorities are:", - "page_start": 115, - "page_end": 115, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "- 359 Fundamental Rights Agency (FRA), here, section on Trafficking and labour exploitation\n - 360 Special Eurobarometer 498: Undeclared Work in the European Union\n - 361 European Commission, Directorate-General for Employment, Social Affairs and Inclusion et al., 2018: An evaluation of the scale of undeclared work in the European Union and its structural determinants : estimates using the labour input method, here\n - 362 ELA: European Platform tackling undeclared work\n - 363 The OSH Barometer contains a special section on enforcement capacities, here\n - 364 SLIC, 2015: Common Principles for Labour Inspection in Relation to Health and Safety In the Workplace\n - 365 Cardiff University et al., 2011: Contract to assess the potential impact of emerging trends and risks on labour inspection methodologies in the domain of occupational health and safety,\n - European Federation of Public Service Unions (EPSU), 2012: A mapping report on Labour Inspection Services in 15 European countries (p. 13ff).", - "page_start": 153, - "page_end": 153, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "are shifted towards non-permanent workers and subcontractors, who have less protection and/or knowledge to cope with these risks. This scenario is not easy to verify in quantitative data, although it is frequently stated in case study research.'\n\nAlso, Eurofound draws such conclusions on the impact of subcontracting on working conditions : 'First, employees in subcontracting perceive higher health and safety risks, notably through more workrelated accidents and increased time pressure. Second, there are a number of psychological risk factors, such as perceived economic insecurity and worries about losing one's job, that are more likely among subcontracting workers.' 
49\n\nThere is even an evident relation between such forms of employment and higher rates of work accidents . In a first systematic review the authors conclude: 50\n\n'This review supports an association between some of the dimensions of precarious employment and occupational injuries; most notably for multiple jobholders and employees of temp agencies or subcontractors at the same worksite. However, results for temporary employment are inconclusive.'\n\nOSH Barometer - Mental risks:\n\n<!-- image -->\n\nhttps://visualisation.osha.europa.eu/osh-barometer/working-conditions-preventions/workingconditions\n\nESENER - Data visualisation:\n\nhttps://visualisation.osha.europa.eu/esener/en/survey/datavisualisation/2019\n\n## 3.2 Physical health risks at work\n\nRisks at work that can result in physical harm can be divided into safety and health risks .\n\nThe main result of insufficient safety is a work accident. A work accident has as immediate consequences either a personal injury, a disease, or death of one or more workers. Eurostat distinguishes between non-fatal and fatal work accidents, and for the majority of sectors it provides also the duration of the absence due to the accident - an indicator for the severity of the injury. Non-fatal accidents at work can cause medium- or long-term health consequences, and in the worst case a permanent disability.\n\nILO Definition of accident: 'An occupational accident is an unexpected and unplanned occurrence, including acts of violence, arising out of or in connection with work, which results in one or more workers incurring a personal injury, disease or death.' 51\n\nPhysical health risks can be caused by a variety of circumstances and exposures or by inadequate ergonomics . 
Natural circumstances at work can pose such health risks, that is, temperature, storms and floods, unsafe terrain, biological agents and so on; or the risks are due to manmade circumstances, that is, work in buildings, on roofs and towers, on traffic routes, under artificial ventilation. Exposure is a general term to describe the interaction between environment / emissions / contaminants and the human organism. In a workplace context, 'exposure' mainly covers emissions from machinery or from tools and materials, for example, noise, vibration, dust, electromagnetic fields and chemical substances.\n\nRisks from inadequate ergonomics harm in particular the musculoskeletal system. Ergonomic risks of manual work are typically caused by repetitive hand and arm movements, tiring positions, for example, permanent kneeling or overhead work, lifting and moving of heavy loads, or of patients and so on. A certain ergonomic risk is physical inactivity , in practice sitting most of the working time. Not only administrative tasks but also many occupations in service or industry require permanent sitting, for example, drivers, cashiers, part assembly operators and so on (often called 'sedentary occupations').", - "page_start": 37, - "page_end": 37, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Still, not only preventive measures but also other non-OSH-related developments worked in the same direction. The shrinkage of the workforce in certain sectors, for example, mining, textile, agriculture, and specific high-risk subsectors of manufacturing, that is, shipyards or foundries, has led to a reduction of the workforce in particularly dangerous working conditions. 
The production of these sectors was - partly or fully - relocated to other regions of the world, and EU enterprises import the needed products as part of global supply chains.\n\n## Major economic changes of sectors with over average work accident rates\n\nThe decrease of production in the mining and textile sectors was replaced by the import of mining or textile products. Nowadays the share of workforce in these sectors is much smaller in the EU than 30 years ago. In the EU28 in 2019, mining and quarrying employed 392,000 people, or 0.2% of all employed persons,128 and the textile industry 129 employed 1.5 million people, or 0.7% of all employed persons. 130\n\nThe share of employees in agriculture, also a sector with high accident rates, dropped mainly due to automation from 6.5% in 2005 to 4.5% in 2019 131 (worldwide still at 27% 132 ). In construction, another sector with work accident rates over average, the employment is quite stable in the past 25 years and fell only from 6.9% to 6.5%. Some specific works with high accident risk have been outsourced to other regions, well-known examples are the dangerous shipwrecking but also recycling of plastics and electric and electronic devices. 133\n\nThe decline of these sectors and the growth of workforce in other sectors like wholesale, transport, education, health and care shifted the safety risks of working conditions. Several EU Member States also observe a growth of road transport-related accidents during work. 134\n\n## 4.1.1 Non-fatal work accidents\n\n## DEFINITIONS\n\nEurostat has developed the European Statistics on Accidents at Work, or ESAW, methodology to harmonise the monitoring of work accidents. 
This methodology describes how accidents at work have to be reported and defines several terms and conditions.\n\n## What is an accident?\n\n'Accident at work' is defined in the ESAW methodology 135 as a 'discrete occurrence in the course of work which leads to physical or mental harm.'\n\n## When is a non-fatal work accident counted?\n\nESAW counts a work accident 'if the resumption of work occurred 5 days after the work accident' ; Chapter 4.2 of the ESAW Methodology 2012 explains: 'Accidents at work with more than three calendar days' absence from work: Only full calendar days of absence from work have to be considered, excluding the day of the accident. Consequently, more than three calendar days' means 'at least four calendar days', which implies that only if the victim resumes work on the fifth (or subsequent) working day after the date on which the accident occurred should the incident be included.'\n\nExempted are: Commuting accidents, self-inflicted injuries (e.g. suicides), and strictly natural causes that injure people at their workplaces (e.g. earthquakes, floods).\n\nThe total number of reported non-fatal accidents for the EU27 was 3,140,950 in 2019. 136 As mentioned in the introduction to this chapter, the incident rates of non-fatal accidents fell in about 25 years from 4,089 (year 1998 137 ) to 1,713 (2019), that is, it decreased about 58% . 138 The greatest part of this decrease took place between 1998 and 2010 , 139 the incidence rate halved to 2,021, a drop of 51% . Still, between 2010 and 2019 the incidence rate for the EU27 fell from 2,021 incidents per 100,000 workers to 1,713, a drop of a further 15% (taking 2010 as the reference year). 
140", - "page_start": 63, - "page_end": 63, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "## 1 Executive summary\n\n## How can the 'state of OSH' in the EU be assessed?\n\nThis report describes the state of OSH in the EU , and accordingly the trends and the developments , that is, the changes in state over time. The report refers to different periods in time, mostly to the situation between 2005 - after the substantive enlargement of the EU in 2004 - and 2019; if the use of earlier or more recent start or endpoints was reasonable and data were available, a different time frame was applied.\n\nTwo criteria were crucial for the selection of these indicators: availability of reliable data and the relevance of the indicators. An ideal and complete set of indicators would cover even more indicators than presented in this report, but major limits were set by the availability of reliable data.\n\nThe main data sources comprise a large variety of quantitative datasets , for example, Eurostat statistics and EU-wide surveys (e.g. EU-OSHA's European Survey of Enterprises on New and Emerging Risks (ESENER), Eurofound's European Working Conditions Survey (EWCS), Eurostat's Labour Force Survey (LFS) and its ad hoc modules, and the Flash Eurobarometer, detailed background reports on risks, groups of workers, OSH systems and infrastructures (e.g. by EU-OSHA, Eurofound, the Fundamental Rights Agency, etc.), and evaluations and assessments of the level of implementation of OSH directives (e.g. by the Directorate-General for Employment, Social Affairs and Inclusion (DG EMPL) or the Senior Labour Inspectors Committee (SLIC) surveys facilitated by the National Labour Inspectorates). 
Regarding the description of developments beyond the EU, data were taken from the International Labour Organisation (ILO), the World Health Organisation (WHO), the International Social Security Association (ISSA), the United Nations (UN), the Organisation for Economic Co-operation and Development (OECD), the International Commission on Occupational Health (ICOH) and the International Association of Labour Inspection (IALI).\n\nPlease note that Eurostat employment data and ICOH data were retrieved in 2023. Current figures might slightly deviate due to updates and corrections.\n\n## Working conditions - Risk factors at work\n\nShifts in work tasks and workforce between sectors, technological progress and the development of higher skill levels have led to less work in manual occupations and more work in administrative (clerical, professional, managerial, etc.) occupations as well as in client-oriented and communicative occupations.\n\nConsequently, these developments caused a shift of risks to psychosocial and emotional challenges . This can be documented by the growing percentage of workers who report difficult clients (60%), long or irregular working hours (22%), and poor communication in the organisation (18%) (all data from ESENER 2019 or EWCS 2015) The OSH risks for these occupations - gradually but also significantly - shifted from safety risks to health risks. 
The psychosocial risks for mental health and the emotional challenges increased; they clearly correlate with more work in emotionally demanding and/or client-oriented sectors, be it in tourism, entertainment or education, public transport, social work, or health and care.", - "page_start": 9, - "page_end": 9, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "methodology, the OSH practitioners who were asked in ESENER seem to have a different view on time pressure than the workers themselves who are respondents in the LFS.\n\nFigure 15: Percentage of employed persons with working time under pressure (per country, sum of responses 'Always' and 'Often') - LFS Ad hoc 2019\n\n<!-- image -->\n\nOne hypothesis to explain the increased time pressure is to draw a direct connection between short weekly working time and more intense work ; or in other words, a short weekly working time leads to more intensification of work or more long hours or atypical working times ('trading flexibility for effort'). 38\n\nThe analysis of EU survey data shows a mixed picture : Firstly, ESENER data corroborate this hypothesis, the three countries with highest percentage of work under time constraints - that is, Finland, Sweden and Denmark - all have working hours under the EU average. Secondly, LFS data show a different picture; a country like Greece has the longest working hours and also reports the highest time pressure, the same 'combination' - but less extreme - applies to Austria, Cyprus and Malta. Trends of low or less than average working time and no time constraints are reported for Lithuania, and medium working time and low time constraints for Italy and Ireland.\n\nAn analysis of EWCS data concluded 39 that in general intensity increases with long working hours, in enterprises with 1-19 the work intensity index (on a scale between 0 and 12) is 4.4, in larger enterprises with above 40 employees it is 6.3. 
This is in line with ESENER data that corroborate the importance of the size of the enterprise for time pressure and long working hours.\n\nLiterature - from very diverse disciplines - on work intensification points to reasons for intensification on developments as: 40\n\n - · Economic developments, particularly the dominance of neoliberalist policies and enhanced competition between workers, companies and states; reduction of state influence and privatisation. 41\n - · Pressure due to substantial organisational changes, for example, introduction of short-term economic objectives in enterprise policies, 42 expansion into new markets or new countries, acquiring other enterprises or merging, being acquired, restructuring of management or of basic staff working conditions (contracts, working time, flexibility). 43\n - · Decrease of trade union influence or worker participation regarding labour relations.\n - · Liberalisation of labour legislation, creation of 'new forms of work' and new contract types, beyond the permanent full-time employment. 44\n - · New forms of management, application of management concepts like just-in-time production or lean management, higher flexibility of production and higher customer orientation, 45", - "page_start": 35, - "page_end": 35, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "The quantitative amount of other non-standard types of work often is not well statistically monitored but based on estimates. There are several difficulties to generate reliable statistical data: many of these types of work are less regular, they can be below the level of notification obligations or fully undeclared, obligations for statistical notification are not issued or not followed, and/or these types of work are done in parallel together with other forms of work or income. 
The OECD estimates that the group with the highest share of these types of work are pensioners who perform such work as a second job. 67\n\nA special case of contract types are posted and seasonal workers 68 in the EU. Their numbers have been estimated, for example, in annual reports on 'Labour mobility'. 69\n\nTable 8: Posted and seasonal workers - Eurostat\n\nRegarding posted workers a recently published report shows the diversity of working conditions, a low availability of data and the high risk of infringements. 70\n\nConcerning seasonal workers several deficits of OSH were made public, particularly during the COVID-19 pandemic. The European Commission guidance on 'Seasonal workers' states: 71\n\n - '… due to the nature of their work, seasonal workers are often more vulnerable than other workers to situations such as precarious working and living conditions, infringement of labour law, inadequate social security coverage, as well as undeclared work. Practices that ensure that employers and workers are provided with correct information and assistance can prevent or address these issues.'\n\nOverall, the OECD estimates that the quantitative amount of all types of non-standard work is more than 30% of employment: 'All types of non-standard work combined, non-standard employment accounts for more than one-third of employment in OECD countries.' 72\n\nThe OECD also estimates that in 2019 the new forms of non-standard work account for 0.5% to 3% of total employment . 73 They highlight the change from traditional non-standard work to new forms: 'Non-standard work is undergoing substantive transformation. In recent years, the decline of some types of self-employment including in agriculture has been partly offset by the emergence and expansion of new forms of non-standard work, in particular jobs relying on new technologies, such as platform-based taxi-like drivers. 
While today this type of work accounts for only 0.5-3% of total employment in developed countries, it is of considerable importance for young people who rely on new forms of work more frequently than older generations and some of whom seem to set a higher value on work autonomy.' 74\n\nThe extrapolation of these OECD estimates to the EU27 indicates that the amount of new forms of non-standard work (i.e. beyond temporary contract or part-time) would be in a range between 1 million and 6 million persons in the EU27.\n\nEurofound distinguishes in its 2020 report on 'New forms of employment' 75 between nine different types: ICT-based Mobile work, Platform work, Casual work, Employee sharing, Job sharing, Voucher-based work, Collaborative employment, Interim management and Portfolio work. They report several estimates about the scale of these types of work per Member State but they do not present final quantitative estimates for the EU level.\n\nObviously, as the term 'non-standard' already indicates, these types of work and their consequences are much less documented and less visible than regular forms of employment. 
To gain a better quantitative picture of the safety and health situation under such working conditions based on administrative data , advanced administrative and research efforts would be necessary, 76 for example, a strong collaboration between labour inspections (and other OSH authorities) and those authorities that are supervising, enforcing and policing labour law and obligatory social security regulations, from employment services to police forces.", - "page_start": 47, - "page_end": 47, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "infographic5.pdf", - "query": "Was knowledge domain agnosticism a goal in the development of OLAF?", - "target_page": 1, - "target_passage": "Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations represented largely depend on one or more business use cases. As we designed our framework with industry application in mind, we need to consider it within its real-world usage context.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "The general problem of simulating (or creating) intelligence has been broken into subproblems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research. [a]\n\n## Reasoning and problem-solving\n\nEarly researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions. [13] By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics. 
[14]\n\nMany of these algorithms are insufficient for solving large reasoning problems because they experience a \"combinatorial explosion\": They become exponentially slower as the problems grow. [15] Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments. [16] Accurate and efficient reasoning is an unsolved problem.\n\n## Knowledge representation\n\nKnowledge representation and knowledge engineering [17] allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval, [18] scene interpretation, [19] clinical decision support, [20] knowledge discovery (mining \"interesting\" and actionable inferences from large databases), [21] and other areas. [22]\n\nA knowledge base is a body of knowledge represented in a form that can be used by a program. An ontology is the set of objects, relations, concepts, and properties used by a particular domain of knowledge. 
[23] Knowledge bases need to represent things such as objects, properties, categories, and relations between objects; [24] situations, events, states, and time; [25] causes and effects; [26] knowledge about knowledge (what we know about what other people\n\nAn ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.\n\n<!-- image -->\n\nknow); [27] default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing); [28] and many other aspects and domains of knowledge.\n\nAmong the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous); [29] and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as \"facts\" or \"statements\" that they could express verbally). [16] There is also the difficulty of knowledge acquisition, the problem of obtaining knowledge for AI applications. [c]\n\n## Planning and decision-making", - "page_start": 1, - "page_end": 1, - "source_file": "wikipedia3.pdf" - }, - { - "text": "| STATE OF THE ART | | | | | | OLAF IN A PRACTICAL CONTEXT | |", - "page_start": 0, - "page_end": 0, - "source_file": "infographic5.pdf" - }, - { - "text": "<!-- image -->\n\nFigure 4.15 Domain and Range inferred by the reasoner\n\n<!-- image -->\n\nIt is possible to specify more than one class as the domain or range of a property. One of the most common mistakes of new users is to do this and expect that the resulting domain/range is the union of the two classes. However, note that next to the Domain and Range in the Description view it says (intersection). This is because the semantics of having 2 or more classes as the domain or range is the intersection of those classes not the union. 
E.g., if one defined the domain for a property to be Pizza and then added another domain IceCream that would mean that for something to be in the domain of that property it would have to be an instance of both Pizza and IceCream not (as people often expect) the union of those two sets which would be either the class Pizza or the class IceCream . Also, note that the domain and range are for inferencing, they are not data integrity constraints. This distinction will be explained in more detail below in the section on SHACL.", - "page_start": 28, - "page_end": 28, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "meta-problem will solve or dissolve the hard problem. A weaker line holds that it will not remove the hard problem, but it will constrain the form of a solution.\n\nIn other words, the 'strong line' holds that the solution to the meta-problem would provide an explanation of our beliefs about consciousness that is independent of consciousness. That would debunk our beliefs about consciousness, in the same way that explaining beliefs about god in evolutionary terms may provide arguments against theism itself. [144]\n\n## In popular culture\n\nTom Stoppard's play The Hard Problem , first produced in 2015, is named after the hard problem of consciousness, which Stoppard defines as having \"subjective First Person experiences\". [145]\n\n## See also\n\n<!-- image -->\n\n## Philosophy portal\n\n - Animal consciousness\n - Artificial consciousness\n - Binding problem\n - Blindsight\n - Chinese room\n - Cogito, ergo sum\n - Cryonics\n - Free will\n - Ideasthesia\n - Introspection\n - Knowledge by acquaintance\n - List of unsolved problems in biology\n - Mind-body problem\n - Phenomenalism\n - Philosophy of self\n - Primary-secondary quality distinction\n - Problem of mental causation\n - Problem of other minds\n - Vertiginous question\n - Von Neumann-Wigner interpretation\n\n## Notes\n\n - 1. 
\"But, without any delusive representations of images or phantasms, I am most certain that I am, and that I know and delight in this. In respect to these truths I am not at all afraid of the arguments of the Academians, who say, What if you are deceived? For if I am deceived, I am. For he who is not, cannot be deceived...\"\n - 2. There has been debate over how best to characterize James' position. The Stanford Encyclopedia of Philosophy states: \"James's commitment to panpsychism remains somewhat controversial, since he also advanced a cogent set of objections against a version of the view, which he labelled the 'mind dust' theory, in chapter six of The Principles of Psychology ([1890] 1981). These objections are the inspiration for the so-called 'combination problem', around which much of the twenty first century literature on panpsychism focuses.\"\n\n## References", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia2.pdf" - }, - { - "text": "| Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. 
Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE |", - "page_start": 0, - "page_end": 0, - "source_file": "infographic5.pdf" - }, - { - "text": "of being conscious is merely an error in perception, held by brains which evolved to hold erroneous and incomplete models of their own internal workings, just as they hold erroneous and incomplete models of their own bodies and of the external world. [77][78]\n\n## Criticisms\n\nThe main criticisms of eliminative materialism and illusionism hinge on the counterintuitive nature of the view. Arguments of this form are called Moorean Arguments . A Moorean argument seeks to undermine the conclusion of an argument by asserting that the negation of that conclusion is more certain than the premises of the argument. 
[79]\n\nThe roots of the Moorean Argument against illusionism extend back to Augustine of Hippo who stated that he could not be deceived regarding his own existence, since the very act of being deceived secures the existence of a being there to be the recipient of that deception. [note 1][80]\n\nIn the Early-Modern era, these arguments were repopularized by René Descartes, who coined the now famous phrase \"Je pense, donc je suis\" (\"I think, therefore I am\"). [81] Descartes argued that even if he was maximally deceived (because, for example, an evil demon was manipulating all his senses) he would still know with certainty that his mind exists, because the state of being deceived requires a mind as a prerequisite. [82]\n\nThis same general argumentative structure is still in use today. For example, in 2002 David Chalmers published an explicitly Moorean argument against illusionism. The argument goes like this: The reality of consciousness is more certain than any theoretical commitments (to, for example, physicalism) that may be motivating the illusionist to deny the existence of consciousness. The reason for this is because we have direct \"acquaintance\" with consciousness, but we do not have direct acquaintance with anything else (including anything that could inform our beliefs in consciousness being an illusion). In other words: consciousness can be known directly, so the reality of consciousness is more certain than any philosophical or scientific theory that says otherwise. [83] Chalmers concludes that \"there is little doubt that something like the Moorean argument is the reason that most people reject illusionism and many find it crazy.\" [84]\n\nEliminative materialism and illusionism have been the subject of criticism within the popular press. One highly cited example comes from the philosopher Galen Strawson who wrote an article in the New York Review of Books titled \"The Consciousness Deniers\". 
In it, Strawson describes illusionism as the \"silliest claim ever made\", next to which \"every known religious belief is only a little less sensible than the belief that the grass is green.\" [85] Another notable example comes from Christof Koch (a neuroscientist and one of the leading proponents of Integrated Information Theory) in his popular science book The Feeling of Life Itself . In the early pages of the book, Koch describes eliminativism as the \"metaphysical counterpart to Cotard's syndrome, a psychiatric condition in which patients deny being alive.\" [86] Koch takes the prevalence of eliminativism as evidence that \"much of twentieth-century analytic philosophy has gone to the dogs\". [87]\n\n## Type-B Materialism\n\nType-B Materialism, also known as Weak Reductionism or A Posteriori Physicalism , is the view that the hard problem stems from human psychology, and is therefore not indicative of a genuine ontological gap between consciousness and the physical world. [43] Like Type-A Materialists, Type-B Materialists are", - "page_start": 9, - "page_end": 9, - "source_file": "wikipedia2.pdf" - }, - { - "text": "impossible within the bounds of nature but possible within the bounds of logic. [47] This would imply that facts about experience are not logically entailed by the \"physical\" facts. Therefore, consciousness is irreducible. In Chalmers' words, \"after God (hypothetically) created the world, he had more work to do.\" [48] Daniel Dennett, a philosopher of mind, criticised the field's use of \"the zombie hunch\" which he deems an \"embarrassment\" [49] that ought to \"be dropped like a hot potato\". [29]\n\n## Knowledge argument\n\nThe knowledge argument, also known as Mary's Room , is another common thought experiment: A hypothetical neuroscientist named Mary has lived her whole life in a black-and-white room and has never seen colour before. She also happens to know everything there is to know about the brain and colour perception. 
[50] Chalmers believes [48] that when Mary sees the colour red for the first time, she gains new knowledge - the knowledge of \"what red looks like\" - which is distinct from, and irreducible to, her prior physical knowledge of the brain or visual system. A stronger form of the knowledge argument [50] claims not merely that Mary would lack subjective knowledge of \"what red looks like,\" but that she would lack knowledge of an objective fact about the world: namely, \"what red looks like,\" a non-physical fact that can be learned only through direct experience (qualia). Others, such as Thomas Nagel, take a \"physicalist\" position, disagree with the argument in its stronger and/or weaker forms. [50] For example, Nagel put forward a \"speculative proposal\" of devising a language that could \"explain to a person blind from birth what it is like to see.\" [31] The knowledge argument implies that such a language could not exist.\n\n## Philosophical responses\n\nDavid Chalmers' formulation of the hard problem of consciousness provoked considerable debate within philosophy of mind, as well as scientific research. [43]\n\nThe hard problem is considered a problem primarily for physicalist views of the mind (the view that the mind is a physical object or process), since physical explanations tend to be functional, or structural. Because of this, some physicalists have responded to the hard problem by seeking to show that it dissolves upon analysis. Other researchers accept the problem as real and seek to develop a theory of consciousness' place in the world that can solve it, by either modifying physicalism or abandoning it in favour of an alternative ontology (such as panpsychism or dualism). 
A third response has been to accept the hard problem as real but deny human cognitive faculties can solve it.\n\nA diagram showing the relationship between various views concerning the relationship between consciousness and the physical world\n\n<!-- image -->\n\nPhilPapers is an organization that archives academic philosophy papers and periodically surveys professional philosophers about their views. It can be used to gauge professional attitudes towards the hard problem. As of the 2020 survey results, it seems that the majority of philosophers (62.42%) agree that the hard problem is real, with a substantial minority that disagrees (29.76%). [25]", - "page_start": 5, - "page_end": 5, - "source_file": "wikipedia2.pdf" - }, - { - "text": "Attitudes towards physicalism also differ among professionals. In the 2009 PhilPapers survey, 56.5% of philosophers surveyed subscribed to physicalism and 27.1% of philosophers surveyed rejected physicalism. 16.4% fell into the \"other\" category. [51] In the 2020 PhilPapers survey, 51.93% of philosophers surveyed indicated that they \"accept or lean towards\" physicalism and 32.08% indicated that they reject physicalism. 6.23% were \"agnostic\" or \"undecided\". [25]\n\nDifferent solutions have been proposed to the hard problem of consciousness. The sections below taxonomizes the various responses to the hard problem. The shape of this taxonomy was first introduced by Chalmers in a 2003 literature review on the topic. [52] The labelling convention of this taxonomy has been incorporated into the technical vocabulary of analytic philosophy, being used by philosophers such as Adrian Boutel, [53] Raamy Majeed, [54] Janet Levin, [55] Pete Mandik & Josh Weisberg, [56] Roberto Pereira, [57] and Helen Yetter-Chappell. [58]\n\n## Type-A Materialism\n\nType-A materialism (also known as reductive materialism or a priori physicalism ) is a view characterized by a commitment to physicalism and a full rejection of the hard problem. 
By this view, the hard problem either does not exist or is just another easy problem, because every fact about the mind is a fact about the performance of various functions or behaviours. So, once all the relevant functions and behaviours have been accounted for, there will not be any facts left over in need of explanation. [52] Thinkers who subscribe to type-A materialism include Paul and Patricia Churchland, Daniel Dennett, Keith Frankish, and Thomas Metzinger.\n\nSome type-A materialists believe in the reality of phenomenal consciousness but believe it is nothing extra in addition to certain functions or behaviours. This view is sometimes referred to as strong reductionism . [43][52] Other type-A materialists may reject the existence of phenomenal consciousness entirely. This view is referred to as eliminative materialism or illusionism. [59][60][61]\n\n## Strong reductionism\n\nMany philosophers have disputed that there is a hard problem of consciousness distinct from what Chalmers calls the easy problems of consciousness. Some among them, who are sometimes termed strong reductionists , hold that phenomenal consciousness (i.e., conscious experience) does exist but that it can be fully understood as reducible to the brain. [43]\n\nBroadly, strong reductionists accept that conscious experience is real but argue it can be fully understood in functional terms as an emergent property of the material brain. [43] In contrast to weak reductionists (see above), strong reductionists reject ideas used to support the existence of a hard problem (that the same functional organization could exist without consciousness, or that a blind person who understood vision through a textbook would not know everything about sight) as simply mistaken intuitions. [43][52]\n\nA notable family of strong reductionist accounts are the higher-order theories of consciousness. 
[62][43] In 2005, the philosopher Peter Carruthers wrote about \"recognitional concepts of experience\", that is, \"a capacity to recognize [a] type of experience when it occurs in one's own mental life,\" and suggested that such a capacity could explain phenomenal consciousness without positing qualia. [63] On the higher-order view, since consciousness is a representation, and representation is fully functionally analyzable, there is no hard problem of consciousness. [43]", - "page_start": 6, - "page_end": 6, - "source_file": "wikipedia2.pdf" - }, - { - "text": "## 3.3 World knowledge\n\nThe bulk of evidence about commonsense knowledge captured in BERT comes from practitioners using it to extract such knowledge. One direct probing study of BERT reports that BERT struggles with pragmatic inference and role-based event knowledge (Ettinger, 2019). BERT also struggles with abstract attributes of objects, as well as visual and perceptual properties that are likely to be assumed rather than mentioned (Da and Kasai, 2019).\n\nThe MLM component of BERT is easy to adapt for knowledge induction by filling in the\n\nKG\n\nDante\n\nborn-in\n\nFlorence\n\nFigure 1:\n\n<!-- image -->\n\nQuerying knowledge bases (KB) and lan-\n\nguage models (LM) for factual knowledge. Figure 2: BERT world knowledge (Petroni et al., 2019)\n\nvast amounts of linguistic knowledge (Peters et al., 2018b; Goldberg, 2019; Tenney et al., 2019) useful for downstream tasks. This knowledge is usually accessed either by conditioning on latent context representations produced by the original model or by using the original model weights to initialize a task-specific model which is then further fine-tuned. This type of knowledge transfer is crucial for current state-of-the-art results on a wide range of tasks. In contrast, knowledge bases are e ective soblanks (e.g. \"Cats like to chase [\\_\\_\\_]\"). Petroni et al. 
(2019) showed that, for some relation types, vanilla BERT is competitive with methods relying on knowledge bases (Figure 2), and Roberts et al. (2020) show the same for open-domain QA using T5 model (Raffel et al., 2019). Davison et al. (2019) suggest that it generalizes better to unseen data. In order to retrieve BERT's knowledge, we need good template sentences, and there is work on their automatic extraction and augmentation (Bouraoui et al., 2019; Jiang et al., 2019b).\n\nff lutions for accessing annotated gold-standard relational data by enabling queries such as (D ante , born-in , X ). However, in practice we often need to extract relational data from text or other modalities to populate these knowledge bases. This requires complex NLP pipelines involving entity extraction, coreference resolution, entity linking and relation extraction (Surdeanu and Ji, 2014)components that often need supervised data and fixed schemas. Moreover, errors can easily propagate and accumulate throughout the pipeline. Instead, we could attempt to query neural language models for relational data by asking them to fill in masked tokens in sequences like 'Dante was born However, BERT cannot reason based on its world knowledge . Forbes et al. (2019) show that BERTcan \"guess\" the affordances and properties of many objects, but can not reason about the relationship between properties and affordances. For example, it 'knows\" that people can walk into houses, and that houses are big, but it cannot infer that houses are bigger than people. Zhou et al. (2020) and Richardson and Sabharwal (2019) also show that the performance drops with the number of necessary inference steps. 
Some of BERT's world knowledge success comes from learning stereotypical associations (Poerner et al., 2019), e.g., a person with an Italian-sounding name is predicted to be Italian, even when it is incorrect.\n\n## 3.4 Limitations\n\nMultiple probing studies in section 3 and section 4 report that BERT possesses a surprising amount of syntactic, semantic, and world knowledge. However, Tenney et al. (2019a) remarks, 'the fact that a linguistic pattern is not observed by our probing classifier does not guarantee that it is not there, and the observation of a pattern does not tell us how it is used.\" There is also the issue of how complex a probe should be allowed to be (Liu et al., 2019a). If a more complex probe recovers more information, to what extent are we still relying on the original model?\n\nFurthermore, different probing methods may lead to complementary or even contradictory conclusions, which makes a single test (as in most stud-\n\n(\n\nDante\n\n,\n\nborn-in\n\n,\n\nX\n\n)\n\nSymbolic\n\nMemory Access\n\nFlorence", - "page_start": 2, - "page_end": 2, - "source_file": "arxiv2_taclccby4_license.pdf" - }, - { - "text": "Finding a provably correct or optimal solution is intractable for many important problems. [15] Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s and most successful AI programs in the 21st century are examples of soft computing with neural networks.\n\n## Narrow vs. general AI\n\nAI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals. 
[378][379] General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The sub-field of artificial general intelligence studies this area exclusively.\n\n## Machine consciousness, sentience, and mind\n\nThe philosophy of mind does not know whether a machine can have a mind, consciousness and mental states, in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. Russell and Norvig add that \"[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on.\" [380] However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction.\n\n## Consciousness\n\nDavid Chalmers identified two problems in understanding the mind, which he named the \"hard\" and \"easy\" problems of consciousness. [381] The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While human information processing is easy to explain, human subjective experience is difficult to explain. For example, it is easy to imagine a colorblind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like . 
[382]\n\n## Computationalism and functionalism\n\nComputationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind-body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam. [383]\n\nPhilosopher John Searle characterized this position as \"strong AI\": \"The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.\" [ac] Searle challenges this claim with his Chinese room argument, which attempts to", - "page_start": 25, - "page_end": 25, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "infographic5.pdf", - "query": "Is OLAF a specific strategy for ontological learning or is it a toolbox of different strategies?", - "target_page": 1, - "target_passage": "Our vision is to implement a toolbox of methods we can gather to build pipelines. ", - "chunk_present": { - "presence": true, - "index": 3 - } - }, - "top_chunk": [ - { - "text": "| Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. 
Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE | Most ontology learning systems do not consider the targeted ontology- based system. Though an ideal ontology should model a domain in an application-independent manner, in practice, concepts and relations ONTOLOGY LEARNING FRAMEWORK ARCHITECTURE |", - "page_start": 0, - "page_end": 0, - "source_file": "infographic5.pdf" - }, - { - "text": "next section. Which option you choose for your ontology will depend on the specific requirements you have as well as the standards established by your organization or organizations that you work with.\n\nFinally, another name related concept you should be aware of is the concept of a namespace. If you have worked with most modern programming languages such as Python or Java, you are already familiar with the concept of a namespace. The concept is identical in OWL. 
A namespace is used to avoid naming conflicts between different ontologies. For example, you may have a class called Network in an ontology about telecommunications. You might also have a class called Network in an ontology about graph theory. The two concepts are related but are different. Just as with programming languages you use namespace prefixes to determine what specific namespace a name refers to. E.g., in this example you might have the prefix tc for the Telecom ontology and gt for the Graph Theory ontology. Thus, when you referred to the Network class for the Telecom ontology you would use tc:Network and gt:Network for the graph theory class.\n\nNote that you already have some experience with other namespaces. The OWL namespace prefix is owl and is used to refer to classes such as owl:Thing and owl:Nothing . The Resource Description Framework Schema (RDFS) is a model that OWL is built on top of and thus some properties that ontologies use such as rdfs:label leverage this namespace.\n\nIn the bottom view of the Active ontology tab there is a tab called Ontology Prefixes. This tab shows all the current namespace mappings in your ontology. There are certain concepts from OWL, RDF, RDFS, XML and XSD that are required for every ontology, so those namespaces are by default mapped in every new Protégé ontology. There is also a mapping to the empty string for whatever the namespace is for your ontology. This allows you to display and refer to entities in your ontology without entering a namespace prefix. If you look at that tab now you should see a row where the first column is blank, and the second column has the base IRI for your ontology. It should be the same IRI as the Ontology IRI at the top of the Active ontology tab, except it also has a # sign at the end. 
E.g., the Pizza tutorial developed for this tutorial has an IRI of: http://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial and the row that has a blank first column in Ontology Prefixes has the IRI:\n\nhttp://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial#.", - "page_start": 61, - "page_end": 61, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "To understand what is going on you first need to understand that each SPARQL query consists of two parts. The first part at the beginning consists of several namespace prefixes. These statements consist of the prefix used for a particular namespace as well as the IRI associated with this namespace. Recall that these concepts were described in chapter 7. You may be wondering where all these prefixes came from since you didn't add them to your ontology. The answer is that every OWL ontology comes with a set of namespaces and prefixes that are required to define the ontology.\n\nAlso, to understand SPARQL you need to 'peak under the hood' of OWL. So far, we have been discussing concepts in purely logical and set theoretic terms, i.e., at the semantic level. However, like any language or database there is a lower level that describes how the concepts are mapped to actual data. In a relational database the fundamental construct to represent data is a table. In OWL the fundamental construct is a triple. OWL is actually built on top of RDFS which is a language built on top of RDF. RDF (Resource Description Framework) is a language to describe graphs (in the mathematical sense of the term). I.e., to describe nodes and links.\n\nThe foundation for RDF graphs are triples consisting of a subject, predicate, and object. This results in what is called an undirected or network graph because objects can be subjects and vice versa. Whenever you define a property in OWL you are defining a predicate. An individual can be a subject or an object (or both). 
E.g., in our ontology Customer1 purchasedPizza AmericanaHotPizza1 . In this example Customer1 is the subject, purchasedPizza is the predicate and AmericanaHotPizza1 is the object.\n\nHowever, classes and properties themselves are also represented as triples. So for example, when you create the class Pizza what Protégé does for you is to add the triple: Pizza rdf:type owl:Class to the ontology. I.e., the Pizza entity is of type (is an instance of) owl:Class . Similarly when you add NamedPizza as a subclass of Pizza , Protégé adds the triple: NamedPizza rdfs: s ubClassOf Pizza .\n\nHopefully, now you can make some sense of this initial query. The query is looking for all the entities that are the subjects of triples where the predicate is rdfs: s ubClassOf and the object is any other entity. The ? before a name indicates that the name is a wildcard that can match anything that fits with the rest of the pattern. This is part of the power of SPARQL, one can match a Subject, an Object, a Predicate or even all three. Making all 3 parts of the pattern wildcards would return every triple in the graph (in this case our entire Pizza ontology) being searched. You may notice that in some cases the object is simply the name of a class while in others it is a class expression with an orange circle in front of it. This is because when defining classes using DL axioms Protégé creates anonymous classes that correspond to various DL axioms.\n\nThe SELECT part of a SPARQL query determines what data to display. The WHERE part of a query determines what to match in the query. If you want to display everything matched in the WHERE clause you can just use a * for the SELECT clause. The initial default query in this tab is set up with no knowledge of the specific ontology. I.e., it will return all the classes that are subclasses of other classes regardless of the ontology. To get information about Pizzas the first thing we need to do is to add another prefix to the beginning of the query. 
In our case the Pizza ontology has been set up with a mapping to the prefix pizza (you can see this in the ontology prefixes tab in the Active ontology tab discussed in chapter 7). So, add the following to the SPARQL query after the last PREFIX statement:\n\n## PREFIX pizza: <http://www.semanticweb.org/pizzatutorial/ontologies/2020/PizzaTutorial#>", - "page_start": 68, - "page_end": 68, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "| ConceptNet-based extraction Grouping terms based on synonyms | | Concept/Relation Extraction | OLAF | Our vision is to implement a gather to build pipelines ontology. | . These pipelines can be run, optimised and analysed to learn the best possible | Ressource | Algorithm implemented Upcoming implementation |", - "page_start": 0, - "page_end": 0, - "source_file": "infographic5.pdf" - }, - { - "text": "| STATE OF THE ART | | | | | | OLAF IN A PRACTICAL CONTEXT | |", - "page_start": 0, - "page_end": 0, - "source_file": "infographic5.pdf" - }, - { - "text": "During the last two decades, nearly all EU Member States have developed strategic approaches, mostly called 'National OSH Strategies' or 'National OSH Plans'. In most cases, these strategies have helped to identify and mitigate recognised structural weaknesses of the national OSH system, for example, low levels of implementation of existing legislation, insufficient reporting and monitoring tools, or specific sector or risk-related actions, and finally also regulatory improvements. The EU OSH strategies and OSH strategic frameworks have often been used as orientation for objectives and actions of national strategies; the first started in 2002 ('Communication from the Commission - Adapting to change in work and society: a new Community strategy on health and safety at work 2002-2006'). 
The latest EU Strategic Framework on Health and Safety at Work 2021-2027 puts the focus on changes; it is titled 'Occupational safety and health in a changing world of work' and focuses on three key objectives for the coming years: (1) anticipating and managing change in the new world of work brought about by the", - "page_start": 15, - "page_end": 15, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "example, in Sweden. 378 Meanwhile, the spectrum of guidance developed regarding work-related psychosocial risks is very wide; it covers aspects such as job satisfaction (overall level of wellbeing), engagement, performance and work-related stress, 379 and also discrimination, harassment, aggression and violence. 380\n\n## 6.2 EU and national OSH strategies\n\nThe EU and many Member States applied and apply strategic approaches , based on EU or national evidence of the state of OSH. OSH strategies are a steering instrument to focus the activities of all actors on major recognised deficits of OSH infrastructures or processes. 381\n\nThe newest EU Strategic Framework on Health and Safety at Work 2021-2027 puts the focus on change, with the title 'Occupational safety and health in a changing world of work' . 382 Consequently, the strategic framework focuses on three key objectives for these years:\n\n - · anticipating and managing change in the new world of work brought about by the green, digital and demographic transitions;\n - · improving prevention of workplace accidents and illnesses;\n - · increasing preparedness for any potential future health crises.\n\nThe proposed focus areas and actions are related to these three objectives. Under the first key objective there are actions like 'Modernising and simplifying EU OSH rules in the context of the green and digital transitions'; a special focus is on psychosocial and ergonomic risks. 
The second objective promotes a vision zero approach to work-related deaths, particularly referring to hazardous substances and cardiovascular diseases, the promotion of health at work and inclusive workplaces for all. 383\n\nThe third objective responds to the impact of the pandemic situation in 2020 and 2021. It includes the development of emergency procedures for future similar situations ('Health crisis'). The Strategic Framework repeats and corroborates the value of research and data-based evidence by stating: 'Research and data collection, both at EU and national level, are a pre-condition for the prevention of work-related diseases and accidents. Scientific advice and the latest technological developments feed into OSH legislation and policy.'\n\nAlso, many Member States have agreed on provision of better data as an objective in their national strategies. 384 The EU strategy often gives orientation for the development of national OSH strategies. Under the last strategy period, 24 of the 27 Member States had applied a strategy. Many national OSH strategies contained similar targets. EU-OSHA published an overview report on national strategies, and the OSH Barometer contains as one indicator a harmonised overview on the aspects of national strategies. 385\n\nOSH strategies are regarded as an important and innovative policy area, a chance for better collaboration, and also a very relevant joint national OSH activity. Those strategies help in priority setting and focused action on weaknesses. Strategies were often agreed in social dialogue processes, and many strategy actors also developed new and better monitoring instruments and indicators. 386 Labour inspections play an important or essential role in most of these strategies. 
387\n\n<!-- image -->\n\nOSH Barometer Steering of OSH, National strategies:\n\nhttps://visualisation.osha.europa.eu/osh-barometer/osh-steering/national-strategies\n\nOSHWiki: Section 'OSH System at national level', descriptions of the OSH Systems of the EU Member States: https://oshwiki.eu/wiki/Category:OSH\\_systems\\_at\\_national\\_level", - "page_start": 123, - "page_end": 123, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n## OLAF : Ontology Learning Applied Framework\n\nMarion SCHAEFFER (marion.schaeffer@insa-rouen.fr) - Matthias SESBOUE (matthias.sesboue@insa-rouen.fr) Jean-Philippe KOTOWICZ - Nicolas DELESTRE - Cecilia ZANNI-MERK\n\nSince the beginning of the century, research on ontology learning has gained popularity. Automatically extracting and structuring knowledge relevant to a domain of interest from unstructured data is a major scientific challenge. We propose a new approach with a modular ontology learning framework considering tasks from data pre-processing to axiom extraction. Whereas previous contributions considered ontology learning systems as tools to help the domain expert, we developed the proposed framework with full automation in mind. An implementation as an opensource and collaborative python library is available at https://gitlab.insa-rouen.fr/msesboue/ontology-learning.\n\n<!-- image -->\n\n<!-- image -->", - "page_start": 0, - "page_end": 0, - "source_file": "infographic5.pdf" - }, - { - "text": "## Chapter 13 Conclusion: Some Personal Thoughts and Opinions\n\nThis tutorial is just the entry point to a technology that is entering the Slope of Enlightenment in the Gartner technology hype cycle [Gartner Hype Cycle]. Tim Berners-Lee published his paper on the Semantic Web [Berners-Lee 2001] way back in 2001. 
At least in my experience for most large US corporations the excitement around Machine Learning seemed for a while to eclipse serious interest in OWL, SPARQL, and other Semantic Web technologies in the United States. Then influential technology companies such as Google [Singhal 2012], Facebook [Olanof 2013], and Amazon [Neptune 2017] started to embrace the technology using the term Knowledge Graphs [Noy 2019] and the corporate world is finally realizing that machine learning and knowledge graphs are complimentary not competitive technologies.\n\nThe term knowledge graph itself can be used in different ways. The best definition I've heard is that an ontology provides the vocabulary (i.e., essentially the T-Box) and a knowledge graph is an ontology combined with data (A-Box). Although in the corporate world I often hear people simply talk about knowledge graphs without much interest in the distinction between the vocabulary and the data.\n\nThere are a number of vendors emerging who are using the technology in very productive ways and are providing the foundation for federated knowledge graphs that can scale to hundreds of millions of triples or more and provide a framework for all corporate data. I've listed several in the bibliography but those are only the ones I've had some experience with. I'm sure there are many others. One of the products I've had the best experience with is the AllegroGraph triplestore and the Gruff visualization tool from Franz Inc. Although Allegro is a commercial tool, the free version supports most of the core capabilities of the commercial version. I've found the Allegro triplestore easy to use on a Windows PC with the Docker tool to emulate a Linux server.\n\nI first started working with classification-based languages when I worked at the Information Sciences Institute (ISI) and used the Loom language [Macgregor 91] to develop B2B systems for the US Department of Defense and their contractors. 
Since then, I've followed the progress of the technology, especially the DARPA knowledge sharing initiative [Neches 91] and always thought there was great promise in the technology. When I first discovered Protégé it was a great experience. It is one of the best supported and most usable free tools I've ever seen, and it always surprised me that there weren't more corporate users leveraging it in major ways. I think we are finally starting to see this happen and I hope this tutorial helps in a small way to accelerate the adoption of this powerful and robust tool.", - "page_start": 88, - "page_end": 88, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "## SO WHAT EXACTLY IS A SUMMARY?\n\nA summary is more than just a condensed or shortened version of your work. A summary requires you to analyse your study material, to identify the key concepts, and to explain it in your own words.\n\n## To make a good summary, you need to:\n\n - · Keep it brief.\n - · Make sure to use main headings and keywords.\n - · Focus on the main ideas.\n - · Classify and organise the information in a logical manner.\n - · Use your own words where possible.\n - · Include examples.\n - · Remember that your summaries are there to help you.\n\n## YOU CAN MAKE YOUR SUMMARIES IN DIFFERENT FORMATS. HERE ARE SOME EXAMPLES:\n\n## Mind Maps (Spider Diagrams)\n\nA mind map is a visual expression of thoughts, ideas and concepts. It usually takes the form of a diagram, with the main concept in the centre, and the related concepts branching out from there. 
Here is an example:\n\n<!-- image -->", - "page_start": 28, - "page_end": 28, - "source_file": "basic-english-language-skills.PDF" - } - ] - }, - { - "references": { - "source_file": "infographic5.pdf", - "query": "Is Text2Onto still updated nowadays?", - "target_page": 1, - "target_passage": "But it is not maintained since 2011.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Note: At this point, the update is 50% completed. You now have one node from each iogrp updated with the new code manually. Always leave the configuration node for last during a manual software update.", - "page_start": 721, - "page_end": 721, - "source_file": "sg247938.pdf" - }, - { - "text": "- /SM590000 Full Text Index Highlight\n - This field is optional. It returns the text that surrounds the matching text. It represents the context in which the text was found. This field can be only displayed in the hit list. Highlighting is not supported for XML documents.", - "page_start": 366, - "page_end": 366, - "source_file": "sg246915.pdf" - }, - { - "text": "- 2. 
A second write is run to perform the actual update to the database.", - "page_start": 461, - "page_end": 461, - "source_file": "sg247938.pdf" - }, - { - "text": "| Type of content | Tags |\n|------------------------|-----------------------------|\n| Table without Alt Text | |\n| | <Table> |\n| | <THead> |\n| | <TR> |\n| | <TH> |\n| | text content |\n| | <TH> |\n| | text content |\n| | <TBody> |\n| | <TR> |\n| | <TH> |\n| | text content |\n| | <TD> |\n| | text content |\n| | <TFoot> |\n| | <TR> |\n| | <TH> |\n| | text content |\n| | text content |\n| Table Header Cell | |\n| | <TH> |\n| | Scope=Row, Column, or Both |\n| Table Merged Cell | |\n| | <TH> or <TD> |\n| | Column span= c |\n| | Row span= r |\n| Group without Alt Text | |\n| | Alt= alt text |\n| | <TOCI> |\n| | Link - OBJR |\n| | <Span> |", - "page_start": 51, - "page_end": 51, - "source_file": "office-pdf.pdf" - }, - { - "text": "| Type of content | Tags |\n|----------------------------------------------------|-----------------------------------------------|\n| Table | |\n| | <Table> |\n| | <THead> |\n| | <TR> |\n| | <TH> |\n| | text content |\n| | <TH> |\n| | <TBody> |\n| | <TR> |\n| | <TH> |\n| | text content |\n| | text content |\n| Table Header Cell | |\n| | <TH> |\n| Decorative Graphical Object | no tags |\n| Graphical Object with Alt Text | |\n| | <Figure> |\n| Graphical Object other than Shape without Alt Text | <Figure> |\n| Shape without Alt Text with text | |\n| | <Sect> text content |\n| Shape with Alt Text with text | |\n| | <Figure> Alt= alt text + text (shape type) |", - "page_start": 58, - "page_end": 58, - "source_file": "office-pdf.pdf" - }, - { - "text": "Table 8: Prompts used for the evaluation of e5-mistral-7b-instruct .\n\n| Task type | Prompt |\n|---------------------|-------------------------------------------------------|\n| Classification | \"Classify the following task: \" |\n| Clustering | \"Identify the topic or theme based on the text: \" |\n| Retrieval | \"Retrieve 
semantically similar text: \" |\n| Reranking | \"Re-rank the following text: \" |\n| Pair Classification | \"Classify the following pair of text: \" |\n| STS | \"Determine the similarity between the following text: |\n| Summarization | \"Summarize the following text: \" |\n| Bitext Mining | \"Translate the following text: \" |", - "page_start": 19, - "page_end": 19, - "source_file": "arxiv4.pdf" - }, - { - "text": "Figure 13-16 New V8.1 update pause options\n\n<!-- image -->\n\n - 7. After the update packages upload, the update test utility looks for any known issues that might affect a concurrent update of your system. Click Read more (see Figure 13-17 on page 692).", - "page_start": 712, - "page_end": 712, - "source_file": "sg247938.pdf" - }, - { - "text": "The node\\_id is the name of the node on which the statistics were collected. The date is in the form < yymmdd > and the time is in the form < hhmmss > . The following example shows an MDisk statistics file name:\n\nNm\\_stats\\_113986\\_161024\\_151832\n\nExample A-3 shows typical MDisk, volume, node, and disk drive statistics file names.\n\nExample A-3 File names of per node statistics\n\nIBM\\_Storwize:ITSO-V7k:superuser>lsdumps -prefix /dumps/iostats id filename 0 Nd\\_stats\\_7822DFF-2\\_181101\\_173808 1 Nn\\_stats\\_7822DFF-2\\_181101\\_173808 2 Nv\\_stats\\_7822DFF-2\\_181101\\_173808 3 Nm\\_stats\\_7822DFF-2\\_181101\\_173808 4 Nm\\_stats\\_7822DFF-2\\_181101\\_175308 5 Nv\\_stats\\_7822DFF-2\\_181101\\_175308 6 Nd\\_stats\\_7822DFF-2\\_181101\\_175308 7 Nn\\_stats\\_7822DFF-2\\_181101\\_175308 ... 
60 Nm\\_stats\\_7822DFF-2\\_181101\\_212314 61 Nn\\_stats\\_7822DFF-2\\_181101\\_212314 62 Nd\\_stats\\_7822DFF-2\\_181101\\_212314\n\n63 Nv\\_stats\\_7822DFF-2\\_181101\\_212314\n\nTip: The performance statistics files can be copied from the Storwize V7000 nodes to a local drive on your workstation by using pscp.exe (included with PuTTY) from an MS-DOS command line, as shown in this example:\n\nC:\\Program Files\\PuTTY> pscp -unsafe -load ITSO-V7K superuser@192.168.100.100:/dumps/iostats/* c:\\statsfiles\n\nUse the -load parameter to specify the session that is defined in PuTTY.\n\nSpecify the -unsafe parameter when you use wildcards.\n\nYou can obtain PuTTY from this website.\n\n## Real-time performance monitoring\n\nIBM Storwize V7000 supports real-time performance monitoring. Real-time performance statistics provide short-term status information for the Storwize V7000. The statistics are shown as graphs in the management GUI, or can be viewed from the CLI.\n\nWith system-level statistics, you can quickly view the CPU usage and the bandwidth of volumes, interfaces, and MDisks. Each graph displays the current bandwidth in megabytes per second (MBps) or I/Os operations per second (IOPS), and a view of bandwidth over time.\n\nEach node collects various performance statistics (mostly at 5-second intervals) and the statistics that are available from the config node in a clustered environment. This information can help you determine the performance effect of a specific node.", - "page_start": 764, - "page_end": 764, - "source_file": "sg247938.pdf" - }, - { - "text": "Note that as with object properties defining a domain and/or range is optional. 
In general, it is a good practice to do so as it can lead to finding errors in your ontology during the modeling phase rather than at run time.\n\n## 5.2 Customizing the Protégé User Interface\n\nIn order to demonstrate our new data property, we will need to create some instances of the Pizza class and set the value of the data property hasCaloricContent. One of the advantages of Protégé is that it is highly customizable to your specific requirements and work style. There are many views that are available that aren't included in the default Protégé environment because it would be too cluttered. In addition, all of the views that you have already used can be resized, removed, or added to existing tabs. You can also create completely new tabs of your own.\n\nAs an example, we are going to first bring up a new major tab called Individuals by class. This tab can be useful to create individuals and to add or edit their object and data property values. We are going to customize this tab to make it easier to use by adding a new view to it.\n\nTo begin use the menu option Window>Tabs>Individuals by class to bring up this new tab. Of course, if it already exists in your UI simply select it.\n\nWe want to make add a new view as an additional sub-tab in the view that currently has the Annotations and Usage, tabs near the upper right corner 8 . Once you are in the Individuals by class tab select Window>Views>Individual views>Individuals by type (inferred). This will give you a blue outline of the new view. As you move the outline around the existing window it will change depending how you move it, indicating how it will fit into the existing tab after you click. When the blue outline looks like figure 5.2 click left and you will see the new view added as another sub-tab.\n\nAfter you click your UI should now look similar to figure 5.3. 
If you clicked somewhere else you can just go to the new view and delete it by clicking the X in the upper right corner of the view and then redo it and position it correctly. At first it may seem a bit unintuitive but after you do it a few times it becomes very easy to position new views.\n\nWith this new view you can see the instances of each class displayed beneath the class. Each class can be expanded or contracted to view or hide its particular instances. Since we don't have many instances in our ontology yet the usefulness of this new view isn't that obvious but as we add more instances and as you deal with larger real ontologies in the future, this view can be very helpful to find specific instances of a class. Note that the UI just shows the most direct class (or classes) that the Individual is an instance of. For example, we currently just have three individuals, the three instances of Spiciness : Hot , Medium , and Mild . These are also instances of owl:Thing (as are all instances) however the UI only displays them as instances of Spiciness since it is implicit that they are also instances of all the superclasses of Spiciness .", - "page_start": 50, - "page_end": 50, - "source_file": "Protege5NewOWLPizzaTutorialV3.pdf" - }, - { - "text": "## Summary of changes\n\nThis section describes the technical changes made in this edition of the book and in previous editions. 
This edition might also include minor corrections and editorial changes that are not identified.\n\nSummary of Changes for SG24-7938-07\n\nfor Implementing the IBM Storwize V7000 with IBM Spectrum Virtualize V8.2.1 as created or updated on November 7, 2019.\n\n## June 2019, Eighth Edition\n\nThis revision includes the following new and changed information.\n\n## New information\n\n - /SM590000 Add new look GUI\n - /SM590000 Hot Spare node\n - /SM590000 RAS line items\n\n## Changed information\n\n - /SM590000 Added new GUI windows throughout", - "page_start": 20, - "page_end": 20, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_RCI_2013.pdf", - "query": "What was the proportion of revenue generated by wireless telecommunications operations in 2009?", - "target_page": 91, - "target_passage": "6,685", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "Network revenue was higher this year compared to last year. This was the net effect of:\n\n - GLYPH<129> higher data revenue related to an increase in subscriber levels and higher usage of wireless data services\n - GLYPH<129> partially offset by our introduction of new lower priced US and international roaming plans and rates which offer consumers more value, and\n - GLYPH<129> the continued adoption of customer friendly simplified plans, which often bundle in certain features like voicemail, caller ID and long distance that we have charged for separately in the past.\n\nExcluding the decline in US and international roaming revenue this year, network revenue would have increased 1 % .\n\nData revenue was 17 % higher this year mainly because of the continued penetration and growing use of smartphones, tablet devices and wireless laptops, which increased the use of e-mail, wireless, Internet access, text messaging and other wireless data services. 
Data revenue represented approximately 47 % of total network revenue this year, compared to approximately 41 % last year.\n\nPostpaid churn was 1.24 % this year, compared to 1.29 % in 2012. The lower churn rate is partly attributable to the new simplified plans and the roaming plans we introduced.\n\nGross postpaid subscriber additions were 1.4 million this year, or 3 % lower than last year, which reduced net postpaid subscriber additions to 228,000, despite a lower postpaid churn. We believe the industry transition from three year to two year plans resulting from the recent adoption of the Canadian Radio-television and Telecommunications Commission (CRTC) Wireless Code may have slowed our overall wireless subscriber growth from the second half of the year. See 'Regulation in Our Industry' for more information on the Wireless Code.\n\nWe activated and upgraded approximately 2.7 million smartphones this year, compared to approximately 2.9 million in 2012. Approximately 34 % of these were for new subscribers. The decrease was mainly because there was a 10 % reduction in hardware upgrades by existing subscribers during the year, which we also believe is at least partly due to the move from three to two year contracts and the associated pricing changes.\n\nThe percentage of subscribers with smartphones increased to 75 % of our overall postpaid subscriber base, compared to 69 % at the end of 2012. 
Smartphone subscribers typically generate significantly higher ARPU and are less likely to churn.\n\n<!-- image -->\n\nThe decrease in prepaid subscriber net additions was mainly because of increasing competition at the lower end of the wireless market where prepaid products are mainly sold.\n\nBlended ARPU was down slightly this year compared to last year because the voice component declined at a faster rate than the data component increased.\n\n<!-- image -->\n\n<!-- image -->\n\n## Lower Equipment Sales\n\nEquipment sales (net of subsidies) include revenue from sales to:\n\n - GLYPH<129> independent dealers, agents and retailers\n - GLYPH<129> directly to subscribers through fulfillment by Wireless' customer service groups, websites, telesales and corporate stores.", - "page_start": 43, - "page_end": 43, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## INDUSTRY TRENDS\n\nThe telecommunications industry in Canada, and our business segments, is affected by several overarching trends.\n\n## CHANGING TECHNOLOGIES AND CONSUMER DEMANDS\n\nConsumer demand for mobile devices, digital media and on-demand content across platforms is pushing providers to build networks that can provide more data faster, cheaper and more easily. Increased adoption of smartphones and double digit growth in our data revenue continued this year, reflecting expanded use of applications, mobile video, messaging and other wireless data.\n\n## COMPETITION\n\nCompetition in wireless from national and regional operators as well as smaller new entrants changes how we compete for wireless services. This puts downward pressure on pricing affecting profit margins and impacts customer churn.\n\nTraditional wireline telephone and television services are now offered over the Internet, opening the door to more non-traditional competitors, and changing how traditional providers compete. 
This is changing the mix of packages and pricing that service providers offer, affecting profit margins and customer churn.\n\n## WIRELE SS TREND S\n\nMore sophisticated wireless networks, devices and applications are making it easier and faster to receive data, driving growth in wireless data services.\n\nWireless providers are investing in the next generation of broadband wireless data networks, such as LTE, to support the growing data demand.\n\nWireless market penetration in Canada is approximately 80 % of the population, and is expected to grow at an estimated 2 % annually.\n\nThe new CRTC code of conduct has limited wireless term contracts to two years from three years. Although the code of conduct has only been in place for a month, we believe this is currently reducing churn and slowing growth in the wireless marketplace.\n\n## CABLE TREND S\n\nYounger generations are increasingly using the Internet and social media as a substitute for traditional wireline telephone services, and televised content is increasingly available online, both on wireline and on wireless devices.\n\nWe face new competition from companies like Skype and Vonage, who market Voice over Internet Protocol (VoIP) telephony services, and Netflix and Apple TV, who provide televised content over the Internet.\n\nNorth American cable companies are improving their cable networks and expanding their service offerings to include Internet, digital cable and VoIP telephony services, while competition from telco IPTV deployments and non-facilities based service providers continues to cause pricing pressures which negatively impacts revenue growth.\n\nIn the media industry, there continues to be a shift towards on-line media consumption by consumers which in turn drives advertisers to spend more on-line versus traditional media. 
In addition, there are more media competitors as additional on-line media companies enter the market, including large global companies.\n\n## REGULATION\n\nMost areas of our business are highly regulated, which affects who we compete with, the programming we can offer, where and how we use our networks, how we build our businesses and the spectrum we purchase. The telecommunications industry is being affected by more regulation and more reviews of the current regulations.\n\n## ECONOMIC CONDITIONS\n\nOur businesses are affected by general economic conditions and consumer confidence and spending, especially in our Media segment, where advertising revenue is directly affected by the economy.\n\n## BU S INE SS S OLUTION S TREND S\n\nCompanies are using fibre-based access and cloud computing to capture and share information in more volume and detail. This, combined with the rise of multimedia and Internet-based applications, is driving exponential growth in data demand.", - "page_start": 34, - "page_end": 34, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "The Company records equipment revenue from the sale of handsets and accessories to subscribers in its retail stores and to local distributors in its territories upon delivery. The Company does not record equipment revenue on handsets and accessories purchased from national third-party retailers, those purchased though the Company's business-tobusiness sales force, or directly from Sprint by subscribers in its territories. The Company believes the equipment revenue and related cost of equipment associated with the sale of wireless handsets and accessories is a separate earnings process from the sale of wireless services to subscribers. For competitive marketing reasons, the Company sells wireless handsets at prices lower than the cost. In certain instances the Company may offer larger handset discounts as an incentive for the customer to agree to a multi-year service contract. 
The Company also sells wireless handsets to existing customers at a loss in handset sales and the corresponding cost in cost of goods, and accounts for these transactions separately from agreements to provide customers wireless service. These transactions are viewed as a cost to retain the existing customers and deter churn.\n\nFor the Company's wireless customers that purchase and activate their service through a channel not covered by EITF 00-21, the wireless customers generally pay an activation fee to the Company when they initiate service. The Company defers the activation fee revenue (except when a special promotion reduces or waives the fee) over the average life of its subscribers, which is estimated to be 30 months. The Company recognizes service revenue from its subscribers as they use the service. The Company provides a reduction of recorded revenue for billing adjustments and the portion of revenue (8%) that is retained by Sprint. The Company also reduces recorded revenue for rebates and discounts given to subscribers on wireless handset sales in accordance with (\"EITF\") Issue No. 01-9 'Accounting for Consideration Given by a Vendor to a Subscriber (Including a Reseller of the Vendor's Products).' The Company\n\n<!-- image -->\n\n■", - "page_start": 45, - "page_end": 45, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "Wireless revenues from the Company's paging operation were $0.3 million, a decrease of $0.1 million as the local customer base increasingly chose alternative digital wireless services. Paging service subscribers declined by 7.8% in 2002 from 3,190 subscribers to 2,940 subscribers.\n\nWithin wireline revenues, the Telephone operation contributed $22.5 million, an increase of $0.9 million, or 4.0%. Telephone access revenues were $10.9 million, an increase of $1.4 million or 14.8%. 
The growth in access revenues was driven by a 38.4% increase in access minutes of use on the Company's network and an increased percentage of minutes in the intrastate jurisdiction, where rates are higher than the interstate jurisdiction. On January 1, 2002 the Federal subscriber line charge (SLC) for residential customers increased from $3.50 to $5.00 per month. The SLC\n\n■", - "page_start": 50, - "page_end": 50, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "The Company participates in the telecommunications industry, which requires substantial investment in fixed assets or plant. This significant capital requirement may preclude profitability during the initial years of operation. The strategy of the Company is to grow and diversify the business by adding services and geographic areas that can leverage the existing plant, but to do so within the opportunities and constraints presented by the industry. For many years the Company focused on reducing reliance on the regulated telephone operation, which up until 1981 was the primary business within the Company. This initial diversification was concentrated in other wireline businesses, such as the cable television and regional fiber facility businesses, but in 1990 the Company made its first significant investment in the wireless sector through its former investment in the Virginia 10 RSA Limited partnership. By 1998, revenues of the regulated telephone operation had decreased to 59.2% of total revenues. In that same year more than 76.6% of the Company's total revenue was generated by wireline operations, and initiatives were already underway to make wireless a more significant contributor to total revenues.\n\nDuring the 1990's significant investments were made in the cellular and PCS (wireless) businesses. The VA 10 RSA cellular operation, in which the Company held a 66% interest and was the general partner, experienced rapid revenue growth and excellent margins in the late 1990's. 
The cellular operation covered only six counties, and became increasingly dependent on roaming revenues. Management believed the roaming revenues and associated margins would be unsustainable as other wireless providers increasingly offered nationally-branded services with significantly reduced usage charges. To position it to participate in the newer, more advanced, digital wireless services, in 1995 the Company entered the PCS business through an affiliation with American Personal Communications (APC), initiating service along the Interstate 81 corridor from Harrisonburg, Virginia to Chambersburg, Pennsylvania. This territory was a very close match to the Company's fiber network, thereby providing economic integration that might not be available to other wireless carriers. In 1999, the Company entered a new affiliation arrangement with Sprint, the successor to APC (which introduced the Company to a nationally-branded wireless service) and expanded the PCS footprint further into Central Pennsylvania. The Company's combined capital investment in 2000 and 2001 in the PCS operation was $45.1 million.\n\n■", - "page_start": 40, - "page_end": 40, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## Results of Continuing Operations\n\n## 2003 compared to 2002\n\nTotal revenue was $105.9 million in 2003, an increase of $12.9 million or 13.9%. Total revenues included $70.0 million of wireless revenues, an increase of $12.0 million or 20.7%; wireline revenues of $29.0 million, an increase of $0.3 million or 0.9%; and other revenues of $7.0 million, an increase of $0.6 million or 9.7%.\n\nWithin wireless revenues, the PCS operation contributed $69.8 million, an increase of $11.6 million, or 20.8%. PCS service revenues were $44.4 million, an increase of $10.9 million or 32.4%. Service revenue growth was driven by the increase in subscribers, totaling 85,139 at December 31, 2003, an increase of 17,297 or 25.5%, compared to 67,842 subscribers at year-end 2002. 
The company had churn of 2.1% in 2003 compared to 2.8% in 2002. The decline in the churn rate is the result of tightening the credit screening for new subscribers as well as continued efforts to improve the after sales support. Competition in the wireless industry continues to have a significant impact on the results of the Company's PCS operation.\n\nPCS travel revenue, including reseller revenue, which is compensation between Sprint and its PCS Affiliates for use of the other party's network, was $16.8 million, an increase of $0.3 million or 1.8%. Travel revenue is impacted by the geographic size of the Company's network service area, the overall number of Sprint wireless customers, their travel patterns and the travel exchange rate. The rate received on travel was $0.058 per minute in 2003, compared to $0.10 per minute in 2002. As a part of the amended management agreement signed on January 30, 2004, Sprint and the Company agreed to maintain the travel rate at $0.058 per minute through December 31, 2006.\n\n<!-- image -->\n\n■", - "page_start": 46, - "page_end": 46, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## SHENANDOAH TELECOMMUNICATIONS COMPANY AND SUBSIDIARIES MANAGEMENT'S DISCUSSION AND ANALYSIS OF FINANCIAL CONDITION AND RESULTS OF OPERATIONS\n\nmarketing Sprint PCS. If financial difficulties are experienced by Sprint or any Affiliate, it could have an adverse impact on the Company's results. The Company's PCS network is part of Sprint's nationwide wireless network. The network is owned and operated by Sprint and its Affiliates. The financial viability of Sprint and its Affiliates is critical to the success of operating and\n\nThe current competitive nature of the wireless industry may prompt major wireless providers to strive for financial improvements through industry consolidation. Such consolidation could include Sprint. 
It is not clear to what extent consolidation may occur or which companies will be involved, but certain consolidation transactions may have an adverse impact on the operating results and valuation of the Company's wireless operations.\n\nThe Company's access revenue may be adversely impacted by legislative or regulatory actions that decrease access rates or exempt certain traffic from paying access to the Company's regulated telephone network. The Federal Communications Commission is currently reviewing the issue of Voice Over Internet Protocol (VOIP) as it relates to access charges. An unfavorable finding may have an adverse effect on the Company's telephone operations.\n\nThere has been a trend for incumbent local exchange carriers to see a decrease in access lines due to the effect of wireless and wireline competition, a slow down in the economy, and the elimination of a second line dedicated to dial up Internet as customers migrate to broadband connections. Although the Company has not seen a material reduction in its number of access lines to date, it experienced line decreases in each of the last two quarters. There is a significant risk that this trend could have a material adverse effect on the Company's telephone operations in the future.\n\nOn May 24, 2004, Local Number Portability (LNP) will be required in the Company's local wireline service area. The Company's customers will be able to retain their existing wireline phone number and use it to obtain service from a competing wireline or wireless provider in the service area. At this time, the Company cannot estimate the potential impact on its telephone operations. If a significant number of customers disconnect the Company's service, it will have an adverse impact on the Company's telephone operating results.\n\nThe Company's revenue from fiber leases may be adversely impacted by further erosion in demand or in price competition for these facilities. 
There is also the potential for additional bankruptcies of the Company's customers. The Company monitors each of its fiber lease customers closely to minimize the risk related to this business.\n\nThe Company operates the cable television system in Shenandoah County, Virginia. The Company has seen increased competition from satellite providers that are larger and have cost advantages over the Company in the procurement of programming. The continued success of the satellite television providers may have an adverse impact on the Company's cable television results.\n\nThe Company currently has a 12-month, $1.2 million contract with the Virginia Department of Transportation (VDOT) to provide 511 Travel services in the I-81 corridor of Virginia. This contract expires in February 2005. VDOT has recently requested a proposal for a three-year contract with two two-year extensions to extend 511 services to the entire state. Although the Company plans to submit a proposal for the new VDOT contract, there is no certainty that the Company will be selected to provide these services after the end of its current contract.", - "page_start": 56, - "page_end": 56, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "## Wireless\n\nThe trends in Wireless revenue and adjusted operating profit reflect:\n\n - GLYPH<129> the growing number of wireless voice and data subscribers\n - GLYPH<129> decreased churn\n - GLYPH<129> higher usage of wireless data\n - GLYPH<129> higher handset subsidies as more consumers shift to smartphones\n - GLYPH<129> a slight decrease in blended ARPU due to changes in wireless price plans.\n\nWe continue to target higher value postpaid subscribers, which has contributed to the significantly heavier mix of postpaid versus prepaid subscribers. 
Growth in our customer base and overall market penetration have resulted in higher costs over time for customer service, retention, credit and collection; however, most of the cost increases have been offset by gains in operating efficiencies.\n\nWireless' operating results are influenced by the timing of our marketing and promotional expenditures and higher levels of subscriber additions and related subsidies, resulting in higher subscriber acquisition and activation-related expenses in certain periods. This increased activity generally occurs in the third and fourth quarters, and can also occur or be accentuated by the launch of popular new wireless handset models.\n\n## Cable\n\nThe trends in Cable services revenue and operating profit increases are primarily due to:", - "page_start": 58, - "page_end": 58, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "Wireless Local Number Portability (WLNP) permits a subscriber to change wireless service providers in the same market area while retaining their existing telephone number. This Federal Communications Commission mandate was effective November 24, 2003 in the 100 largest metropolitan areas and will be effective in all areas of the United States on May 24, 2004. Although the initial impact of WLNP appears to be insignificant, there may be a significant future impact to the Company's operation. As a result of WLNP, portions of the PCS subscriber base may migrate to other wireless providers, thereby contributing to increased churn. Alternatively, the implementation of WLNP may allow the Company to attract additional subscribers from other wireless providers.\n\nThe Company has limited control over the service plans and marketing promotions offered to Sprint customers in the competitive wireless telecommunications industry. Sprint controls the marketing plans, advertising message and market promotions offered in the Company's market area. 
As a result, the plans and promotions offered may have a material adverse effect on the Company's results of operations.\n\nThe Company relies on Sprint for the development of new products and services to remain competitive in the wireless industry. Examples of these services are text messaging, video, and push to talk walkie-talkie features. If these services do not work properly or if Sprint should not continue to develop new competitive products, the results could have a material adverse impact on the results of the Company.\n\nThe Company is required to participate in national and regional third party distribution programs formulated and negotiated by Sprint. Sprint has entered into reseller agreements which may impact the Company. These distribution and reseller programs may have an adverse effect on the results of the Company.\n\n<!-- image -->\n\n■", - "page_start": 55, - "page_end": 55, - "source_file": "NASDAQ_SHEN_2003.pdf" - }, - { - "text": "Business Solutions generates revenue from services and equipment sales.\n\nNext generation revenue is generated by the provision of high-speed, high-reliability data and voice communications, provided on Rogers advanced IP and Ethernet and Cloud platforms and mainly over the extensive Rogers fibre, cable and wireless networks. Next generation revenue also includes Data Centre services revenue from the 2013 dates of business acquisitions.\n\nLegacy revenue is generated mainly by long distance, switched voice services and lower speed data communications, provided over TDM and end of life data platforms with client access primarily delivered through the use of third-party networks and tariffed ILEC services.\n\nBusiness Solutions continues to focus mainly on next generation IPbased services, and on leveraging higher margin on-net and near-net service revenue opportunities, using existing network facilities to expand offerings to the medium and large sized enterprise, public sector and carrier markets. 
Next generation services now represent 59 % of total service revenue.\n\nRevenue from the lower margin off-net legacy business generally includes local and long-distance voice services and legacy data services which often use facilities that are leased rather than owned.\n\nFollowing our recent data centre business acquisitions, Business Solutions is now also focused on data centre colocation, hosting, cloud and disaster recovery services.\n\n## Higher Operating Revenue\n\nOperating revenue was 7 % higher this year compared to last year, the net result of:\n\n - GLYPH<129> higher revenue from next generation services, which grew by 31 % , reflecting the impact of our acquisitions of Blackiron and Pivot Data Centres\n - GLYPH<129> continued execution of our plan to grow higher margin on-net and next generation IP-based services revenue\n - GLYPH<129> partially offset by ongoing decline in the legacy voice and data business, a trend management expects to continue as customers move to faster and more reliable IP services.\n\n## Higher Operating Expenses\n\nWe assess Business Solutions operating expenses in two categories:\n\n - GLYPH<129> the cost of operating and maintaining telecom and data networking equipment\n - GLYPH<129> all other expenses involved in day-to-day operations, to service existing subscriber relationships and attract new subscribers.\n\nOperating expenses were higher this year, the net result of:", - "page_start": 49, - "page_end": 49, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_RCI_2013.pdf", - "query": "What has Rogers Communications done to improve its television platform?", - "target_page": 2, - "target_passage": "Launched NextBox 3.0 delivering a superior TV experience and leveraged the success of Rogers AnyPlace TV, our Internet and mobile on-demand TV service.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## ROGERS COMMUNICATIONS INC. 
AT A GLANCE\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n## ROGERS COMMUNICATIONS\n\nRogers Communications (TSX: RCI; NYSE: RCI) is a diversi/fied Canadian telecommunications and media company. As discussed in the following pages, Rogers Communications is engaged in the telecom and media businesses through its primary operating segments Rogers Wireless, Rogers Cable, Rogers Business Solutions and Rogers Media.\n\nROGERS COMMUNICATIONS\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nBUSINESS SOLUTIONS\n\n## WIRELESS SEGMENT\n\nRogers Wireless provides wireless voice and data communications services across Canada to approximately 9.5 million customers under the Rogers Wireless, Fido and chatr brands. Rogers Wireless is Canada's largest wireless provider and the only national carrier operating on the combined global standard GSM/HSPA+/LTE technology platforms. Rogers Wireless is Canada's leader in innovative wireless services, and provides customers with the best and latest wireless devices and applications and the fastest network speeds. Rogers Wireless also provides seamless wireless roaming across the U.S. and more than 200 other countries, and is the Canadian leader in the deployment of mobile commerce and machineto-machine communications.\n\n## CABLE AND BUSINESS SOLUTIONS SEGMENTS\n\nRogers Cable is a leading Canadian cable services provider, whose service territory covers approximately 4.0 million homes in Ontario, New Brunswick and Newfoundland representing approximately 30% of the Canadian cable market. Our advanced digital hybrid /fibre-coax network provides market leading highspeed broadband Internet access speeds, the most innovative selection of digital television and online viewing and telephony services to millions of residential and small business customers. 
Together with Rogers Business Solutions, it also provides scalable carrier-grade business telecom, networking, hosting and managed data services, and IP connectivity and solutions to medium and large enterprise, government and carrier customers.\n\n## MEDIA SEGMENT\n\nRogers Media is Canada's premier destination for category-leading television and radio broadcasting, sports entertainment, publishing, and digital media properties. Television assets include national City network which reaches more than 80% of Canadians, /five OMNI Television multilingual channels, seven regional and national Sportsnet channels, as well as specialty channels FX Canada, OLN, The Biography Channel and G4. Rogers Media also owns The Shopping Channel, Canada's only nationally televised and online shopping service. It operates more than 50 Canadian radio stations, publishes 50+ well known consumer and business magazines, and owns a suite of digital media properties. Media owns the Toronto Blue Jays Baseball Club and Rogers Centre, Canada's largest sports and entertainment facility. Rogers also holds a 37.5% investment in Maple Leaf Sports & Entertainment, owner of NHL Toronto Maple Leafs, NBA Toronto Raptors and MLS Toronto FC.", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "ROGERS COMMUNICATIONS INC. 2013 ANNUAL REPORT\n\n<!-- image -->", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "<!-- image -->\n\nOur new wireless Share Everything plans were Canada's first to let individuals, families and small businesses share wireless data and unlimited nationwide talk and text, with up to 10 wireless devices. 
Rogers recently further enhanced its exciting One Number service by introducing smartphone apps which enable customers to use mobile data or Wi-Fi to talk, text and video chat using their existing Rogers wireless number from any device.\n\nWe also keep customers informed and entertained with Rogers nextgeneration NextBox 3.0 TV experience which allows customers to view and record up to eight HD programs simultaneously, store hundreds of hours of content and enjoy whole-home PVR capability. And with Rogers Anyplace TV, it's also a wireless experience where viewers can navigate their cable guide, use a virtual remote, set PVR recordings and stream live or on-demand content from a tablet, smartphone, laptop or gaming console.\n\nRogers continues to be Canada's innovation leader in rapidly growing areas such as wireless machine-to-machine communications, remote home monitoring and automation, mobile payments, in-car infotainment and telematics, and digital media. As well, Rogers has deployed a suite of unique local digital services that create virtual marketplaces for bringing consumers and businesses together and provide location-based targeted offers.\n\nThese are just a few examples of the ways Rogers continues to innovate and lead the way, introducing wireless, broadband and digital technologies and services that fundamentally change the way customers stay connected, informed and entertained anywhere they are. Canadians know there's one thing to be certain of - if they're with Rogers, they'll never miss a thing.", - "page_start": 18, - "page_end": 18, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "With Canada's first and fastest LTE wireless network - the global gold standard in wireless network technology - Rogers makes 'placeshifting' a reality so customers can connect to their communications, information and entertainment from almost anywhere, easily and seamlessly. 
With Rogers, watching TV on the train, conducting a virtual white-boarding session from the beach, disarming a home monitoring system from a smartphone, or answering a home phone from 5,000 kilometers away are becoming everyday activities. Rogers customers no longer have to pick up the phone to check their voicemail; they don't need to be in town to catch their local news; and they don't have to be at their PCs to access their e-mail. And with Rogers, businesses no longer need to work in traditional offices because we help them to quickly set up virtual workspaces, with complete access to customers, colleagues, files and corporate applications, so they are as productive on the road as they are in the office.\n\nAnd now, small businesses as well as households can enjoy the flexibility and value of Rogers new Wireless Home and Small Business Phone products as well.\n\nCustomers know that Rogers makes it easy and seamless to connect with the same personalized information, communications and entertainment experiences no matter where they are - at work, at school, at home or away, including when travelling to more than 200 countries around the world. And they know that only Rogers is there first with innovative new services, such as mobile TV, remote home monitoring, and Rogers One Number, which allows them to switch calls between their wireless device, computer, and home phone without interruption; manage e-mails, text messages and voicemail; hold live video chats; and combine and sync contacts from across multiple devices - no matter where they are.", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "<!-- image -->\n\n## LEADING CONTENT\n\n<!-- image -->\n\nROGERS IS COMMITTED TO DELIVERING WORLD-CLASS CONTENT AND EXPERIENCES TO CONSUMERS AND ADVERTISING SOLUTIONS TO BUSINESSES. 
THE COMPANY HAS A STRONG LEGACY OF BUILDING POWERFUL MEDIA BRANDS WITH COMPELLING CONTENT THAT RESONATES WITH AUDIENCES ACROSS MULTIPLE PLATFORMS ON ANY DEVICE.\n\nToday, businesses across Canada connect with customers through Rogers category-leading television and radio assets, sports entertainment, televised and online shopping, publishing, and digital media properties as the one-stop solution for all their local and national advertising needs.\n\nRogers Media is Canada's premier combination of diversified broadcast, specialty, sports, print and online media assets which together touch nearly 90% of Canadians every week. This includes over 50 popular AM and FM radio stations across Canada. In television, it includes the seven station City network which broadcasts intensely local, urban-oriented", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "<!-- image -->\n\nknows how businesses work, we also offer a choice of specifically designed plans and options that allow users to share buckets of voice and data, connect directly with team members, establish wireless backup for point-of-sale and other systems, and roam frequently with cost certainty.\n\nFor hundreds of thousands of smaller businesses located in and around Rogers cable footprint, Rogers offers a compelling set of wired telephony and Internet solutions that provide enterprise-grade dependability and value. With voice, data, hosting and online security solutions built specifically for business, Rogers provides a single reliable source for innovative, dependable communications solutions that are backed up by around-the-clock live agent support.\n\nLarger enterprises also increasingly rely on Rogers to deliver corporatecritical voice, Internet, networking and managed data centre solutions\n\nacross its fibre-optic network that connects thousands of commercial and municipal buildings. 
These next generation on-net services for enterprise customers are backed by dedicated, around-the-clock support and connectivity to Rogers high-speed national fibre-optic backbone that provides redundancy as well as seamless connectivity into the United States and Europe.\n\nRogers also provides the most extensive set of advanced wireless machine-to-machine connectivity solutions which help businesses to increase productivity, reduce costs and optimize operations. As well, Rogers remains at the forefront of mobile commerce and electronic payments solutions in the Canadian market.\n\nBusinesses across Canada also connect with customers through Rogers leading media brands as the one-stop solution for all their local and national radio, television, online and print advertising needs.", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "<!-- image -->\n\n## BUSINESS SOLUTIONS\n\n<!-- image -->\n\nIN TODAY'S FAST-PACED DIGITAL WORLD OF BUSINESS, THE ABILITY TO COMMUNICATE AND ACCESS INFORMATION ANYTIME, ANYPLACE IS A COMPETITIVE ADVANTAGE THAT BUSINESS PROFESSIONALS LOOK TO ROGERS TO PROVIDE. ROGERS ENSURES THE INFORMATION THAT DRIVES COMMERCE FORWARD IS ALWAYS ON HAND AND HELPS BUSINESSES DEFINE HOW TO WIN IN THE DIGITAL WORLD.\n\nRogers provides a single reliable source for advanced business-focused voice, Internet and data networking solutions designed specifically for the most demanding of wireless and wired commercial requirements.\n\nBusinesses across Canada rely on Rogers for its national wireless network, world-leading LTE technology, seamless global connectivity, and the broadest array of wireless applications and devices, because they know that their mobility and remote connectivity needs are always covered with the most advanced solutions available. 
Because Rogers", - "page_start": 13, - "page_end": 13, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "<!-- image -->\n\n## CONNECTED HOME\n\n<!-- image -->\n\nROGERS CONTINUES TO DEFINE HOW FAMILIES COME TOGETHER AND CONNECT WITH THEIR WORLD. MILLIONS OF CANADIANS DEPEND ON ROGERS TO KEEP THEM INFORMED, CONNECTED AND ENTERTAINED WITH A COMBINATION OF THE FASTEST INTERNET SPEEDS AND THE MOST INNOVATIVE TELEVISION, TELEPHONY AND HOME MONITORING SOLUTIONS AVAILABLE.\n\nThe core of Rogers connected home strategy is to provide customers with the fastest broadband connections, together with the ability to seamlessly shift - to shift time, to shift screens and to shift places so they access what they want, when they want, on the screen of their choice.\n\nRogers offers the best in on-demand, sports, movies, specialty, episodic and multicultural programming. Customers can schedule, pause, rewind", - "page_start": 11, - "page_end": 11, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## DIRECTORS OF ROGERS COMMUNICATIONS INC.\n\nAS OF FEBRUARY 11, 2014\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n## DIRECTORS\n\n - 1 Alan D. Horn, CPA, CA\n\nChairman, President and Chief Executive Of/ficer,\n\n - Rogers Telecommunications Ltd.\n - 2 Peter C. Godsoe, O.C., O. Ont.\n\nLead Director,\n\nCompany Director\n\n## 14 Guy Laurence*\n\nPresident and Chief Executive Of/ficer, Rogers Communications\n\n - 3 Charles William David Birchall Vice Chairman, Barrick Gold Corporation\n - 4 Stephen A. Burch Chairman, University of Maryland Medical Systems\n - 5 John H. Clappison, FCPA, FCA Company Director\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n - 6 Thomas I. Hull Chairman and Chief Executive Of/ficer,\n\nThe Hull Group of Companies\n\n## 18\n\n - Philip B. 
Lind, CM* Executive Vice President, Regulatory and Vice Chairman, Rogers Communications\n\n## 7 John A. MacDonald\n\nCompany Director\n\n - 8 Isabelle Marcoux\n - Chair,\n\nTranscontinental Inc.\n\n - 9 The Hon. David R. Peterson, PC, QC Senior Partner and Chairman,\n\nCassels Brock & Blackwell LLP\n\n## 22 Edward S. Rogers*\n\nDeputy Chairman and Executive Vice President, Emerging Business, Corporate Development, Rogers Communications\n\n - * Management Directors are pictured on the following page.\n\n## 10 Loretta A. Rogers\n\n - Company Director\n\n## 11\n\n - Martha L. Rogers Doctor of Naturopathic Medicine\n\n## 23\n\n - Melinda M. Rogers* Senior Vice President, Strategy and Development, Rogers Communications\n\n## 12\n\n## 13\n\n - Dr. Charles Sirois Chief Executive Of/ficer,\n\nTelesystem Ltd.\n\n - John H. Tory, O. Ont. Company Director\n\nFor detailed biographical information of Rogers Directors, go to rogers.com/investors", - "page_start": 23, - "page_end": 23, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Governance and Risk Management\n\n## GOVERNANCE AT ROGERS\n\nRogers is a family-founded, family-controlled company, and we take pride in our proactive and disciplined approach to ensuring that our governance structure and practices instil the confidence of our shareholders.\n\nWith the passing in December 2008 of our founder and previous CEO, Ted Rogers, his voting control of Rogers Communications passed to a trust whose beneficiaries are members of the Rogers family. The trust holds voting control of Rogers Communications for the benefit of successive generations of the Rogers family. The Rogers family are substantial stakeholders, and owned approximately 28 % of our equity as of December 31, 2013.\n\nOur Board of Directors is made up of four members of the Rogers family, and another 13 directors who bring a mix of experience as business leaders in North America. 
All of our directors are firmly committed to strong oversight and the ongoing creation of shareholder value. The Board as a whole is committed to sound corporate governance, and continually reviews its governance practices and benchmarks them against acknowledged leaders and evolving legislation. The Board believes that Rogers' governance system is effective and that there are appropriate structures and procedures in place.\n\n## G overnance Best Practices\n\nThe majority of our directors are independent and we have adopted many best practices for effective governance:", - "page_start": 74, - "page_end": 74, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_RCI_2013.pdf", - "query": "Until what NHL season will the Vancouver's ice hockey team be a Rogers Communications partner?", - "target_page": 39, - "target_passage": "Sportsnet announced a 10-year partnership extension with the Vancouver Canucks through the 2022-2023 NHL seasons", - "chunk_present": { - "presence": true, - "index": 1 - } - }, - "top_chunk": [ - { - "text": "- GLYPH<129> Advanced our strategy of delivering highly sought-after sports content anywhere, anytime, on any platform and strengthening the value of our sports brand by entering into an exclusive 12-year licensing agreement with the NHL which begins with the 2014-2015 season and grants Rogers the following:\n - -national rights across television broadcasts, wireless and mobile tablets and Internet streaming\n - -national rights to all regular season games, all playoff games and the Stanley Cup Final, and all special events and nongame events (e.g. 
NHL All-Star Game, NHL Draft) - in multiple languages\n - -out-of-market rights for all regional games\n - -ownership of all linear and digital highlights, including condensed games and video archives\n - -NHL broadcast assets: Rogers to operate NHL Centre Ice and NHL Game Centre Live\n - -sponsorship rights to the NHL Shield logo as an official partner of the NHL\n - -Canadian representation of ad sales for NHL.com\n - -ownership of all commercial inventories for the television broadcasts", - "page_start": 51, - "page_end": 51, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## MEDIA\n\n - GLYPH<129> Exclusive NHL 12-year licensing agreement to broadcast national NHL games beginning with the 2014-2015 season was signed. The agreement grants Rogers the exclusive distribution of all national live and in-progress regular season and playoff games within Canada, in multiple languages, across all platforms. We executed separate agreements to sublicense certain of these broadcasting rights to TVA Sports and CBC.\n - GLYPH<129> Sportsnet 360 was launched, which is comprised of the rebranded theScore assets. The acquisition of theScore received final regulatory approval in the first half of this year.\n - GLYPH<129> Sportsnet announced a 10-year partnership extension with the Vancouver Canucks through the 2022-2023 NHL seasons, continuing a 14-year network tradition as the regional television broadcaster of Canucks hockey. The new agreement features a comprehensive suite of multimedia rights including television, online and mobile, delivering up to 60 regular season Vancouver Canucks games each season. 
Sportsnet is also the official regional television broadcast rights holder for the Toronto Maple Leafs, Calgary Flames and Edmonton Oilers.\n - GLYPH<129> Next Issue Canada, an innovative, all-you-can-read subscription digital magazine service that provides consumers with exclusive and unlimited access to a catalogue of more than 100 premium Canadian and US titles was launched. Next Issue Canada delivers access to our leading publishing brands alongside many of the most popular US magazine titles.\n - GLYPH<129> The Shopping Channel launched a brighter, easier, and more engaging multi-channel retail experience and a refreshed on-air and online look, an all-new mobile app, special-themed programming and improved shipping. The leading interactive and only national Canadian multi-channel retailer also added on-air social media engagement, new leading brands and more celebrity guest appearances.\n - GLYPH<129> Sportsnet announced an eight-year multi-platform broadcast rights extension with MLB Properties and MLB Advanced Media to show live and in-progress regular season and playoff baseball games and highlights within Canada.", - "page_start": 38, - "page_end": 38, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "<!-- image -->\n\nprogramming across the country's largest markets, as well as five OMNI Television stations which deliver multilingual news, information and entertainment to Canada's multiple language communities.\n\nThe Sportsnet specialty network provides sports programming across Canada through its four regional television channels and its nationallydistributed Sportsnet ONE, Sportsnet World, and Sportsnet 360 stations. Rogers also owns other Canadian specialty television channels, including FX Canada, OLN, The Biography Channel and G4.\n\nThe Shopping Channel - Canada's only nationally televised and Internet shopping service - is a leading interactive multi-channel retailer, offering a vast assortment of exclusive products and top brand names. 
As one of Canada's most innovative and diversified retailers, it provides customers with exceptional selections in health/beauty, jewelry, home/lifestyle, fashion/accessories, and electronics.\n\nRogers also publishes many well-known consumer magazines, such as Maclean's, Chatelaine, FLARE, L'actualité, and Canadian Business, and is the leading publisher of a number of industry, medical and financial publications. Rogers also controls a suite of fast-growing digital media assets, including 90+ owned and 300+ premium partnership online sites, as well as the recently launched Next Issue Canada digital magazine platform which provides 100+ of North America's most celebrated titles on an unlimited anytime, anywhere basis.\n\nIn sports entertainment, Rogers owns the Toronto Blue Jays baseball team and Rogers Centre stadium, Canada's largest sports and entertainment facility and home field of the Blue Jays. Rogers also holds a 37.5% investment in Maple Leaf Sports & Entertainment which owns the NHL Maple Leafs, NBA Raptors, MLS Toronto FC and a number of other sports related assets.", - "page_start": 16, - "page_end": 16, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## ROGERS COMMUNICATIONS INC. AT A GLANCE\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n## ROGERS COMMUNICATIONS\n\nRogers Communications (TSX: RCI; NYSE: RCI) is a diversi/fied Canadian telecommunications and media company. As discussed in the following pages, Rogers Communications is engaged in the telecom and media businesses through its primary operating segments Rogers Wireless, Rogers Cable, Rogers Business Solutions and Rogers Media.\n\nROGERS COMMUNICATIONS\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nBUSINESS SOLUTIONS\n\n## WIRELESS SEGMENT\n\nRogers Wireless provides wireless voice and data communications services across Canada to approximately 9.5 million customers under the Rogers Wireless, Fido and chatr brands. 
Rogers Wireless is Canada's largest wireless provider and the only national carrier operating on the combined global standard GSM/HSPA+/LTE technology platforms. Rogers Wireless is Canada's leader in innovative wireless services, and provides customers with the best and latest wireless devices and applications and the fastest network speeds. Rogers Wireless also provides seamless wireless roaming across the U.S. and more than 200 other countries, and is the Canadian leader in the deployment of mobile commerce and machineto-machine communications.\n\n## CABLE AND BUSINESS SOLUTIONS SEGMENTS\n\nRogers Cable is a leading Canadian cable services provider, whose service territory covers approximately 4.0 million homes in Ontario, New Brunswick and Newfoundland representing approximately 30% of the Canadian cable market. Our advanced digital hybrid /fibre-coax network provides market leading highspeed broadband Internet access speeds, the most innovative selection of digital television and online viewing and telephony services to millions of residential and small business customers. Together with Rogers Business Solutions, it also provides scalable carrier-grade business telecom, networking, hosting and managed data services, and IP connectivity and solutions to medium and large enterprise, government and carrier customers.\n\n## MEDIA SEGMENT\n\nRogers Media is Canada's premier destination for category-leading television and radio broadcasting, sports entertainment, publishing, and digital media properties. Television assets include national City network which reaches more than 80% of Canadians, /five OMNI Television multilingual channels, seven regional and national Sportsnet channels, as well as specialty channels FX Canada, OLN, The Biography Channel and G4. Rogers Media also owns The Shopping Channel, Canada's only nationally televised and online shopping service. 
It operates more than 50 Canadian radio stations, publishes 50+ well known consumer and business magazines, and owns a suite of digital media properties. Media owns the Toronto Blue Jays Baseball Club and Rogers Centre, Canada's largest sports and entertainment facility. Rogers also holds a 37.5% investment in Maple Leaf Sports & Entertainment, owner of NHL Toronto Maple Leafs, NBA Toronto Raptors and MLS Toronto FC.", - "page_start": 3, - "page_end": 3, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## G rowing Dividends\n\n - GLYPH<129> We increased our annualized dividend rate in February 2013 by 10 % to $1.74 per Class A Voting and Class B Non-Voting share and paid a quarterly dividend of $0.435 per share during 2013. We further increased our annualized dividend on February 12, 2014, by 5 % to $1.83.\n\n## New CEO\n\n - GLYPH<129> Guy Laurence joined Rogers in December 2013, as our new President and Chief Executive Officer, succeeding Nadir Mohamed who retired from Rogers. Mr. Laurence brings 30 years of global experience in the telecommunications and media industries.\n\n## S ignificant Developments\n\n - GLYPH<129> Exclusive 12-year licensing agreement to broadcast national NHL games, beginning with the 2014-2015 season was signed. The agreement grants Rogers the exclusive distribution rights of all national regular season and playoff games within Canada, in multiple languages, across all platforms. At the same time, we executed separate agreements to sublicence certain of these broadcasting rights to TVA Sports and CBC.\n - GLYPH<129> Strategic acquisitions of Score Media Inc. (theScore), Mountain Cablevision Ltd. 
(Mountain Cable), Blackiron Data ULC (Blackiron) and Pivot Data Centres were completed.\n - GLYPH<129> Rogers First Rewards, a new loyalty program allowing customers to earn points on their eligible purchases and redeem them online for a wide selection of Rogers products and services, was launched in the Greater Toronto Area, Ottawa, Kingston, Sudbury and other cities throughout Ontario. We also received regulatory approval to launch a Rogers credit card which augments this loyalty program and will accelerate the rate at which customers earn points.\n\n<!-- image -->\n\n<!-- image -->", - "page_start": 31, - "page_end": 31, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## ACQUISITIONS\n\n - GLYPH<129> Closed our agreement to acquire Metro 14 Montreal for $10 million on February 4, 2013, and relaunched the station as City Montreal, expanding the City broadcast TV network into the largest market in Quebec and increasing the City television network reach to over 80 % of Canadian households.\n - GLYPH<129> Finalized our purchase of theScore, Canada's third largest specialty sports channel, for $167 million. 
We later rebranded theScore as Sportsnet 360.\n\n## NHL", - "page_start": 51, - "page_end": 51, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "- -rights to sublicense broadcasting rights to TVA and CBC\n - -rights to use the Hockey Night In Canada brand through the CBC sublicense agreement.", - "page_start": 51, - "page_end": 51, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Higher Operating Expenses\n\nWe assess Media operating expenses in four areas:\n\n - GLYPH<129> the cost of broadcast content (including sports programming)\n - GLYPH<129> the cost of retail products sold by The Shopping Channel and Sports Entertainment\n - GLYPH<129> Blue Jays player payroll\n - GLYPH<129> all other expenses involved in day-to-day operations.\n\nOperating expenses were 8 % higher than 2012, mainly because of higher programming costs at Sportsnet, higher Toronto Blue Jays player salaries, higher merchandise spending at The Shopping Channel and costs associated with our launch of Next Issue Canada.\n\nThe higher programming costs this year are a combination of lower costs in 2012 because of the NHL player lockout, and higher costs this year because more hockey games than normal were aired because of the compressed NHL hockey schedule due in part to upcoming winter Olympics. Approximately $62 million of Media's year over year increase in operating expense this year resulted from the 2012 NHL lockout and the timing of games aired in 2013. 
Player salaries at the Toronto Blue Jays were $34 million higher this year.\n\n<!-- image -->\n\n## Lower Adjusted Operating Profit\n\nAdjusted operating profit was down compared to last year mainly because of revenue and expenses changes described above.\n\nExcluding the impact of the 2012 NHL lockout and the compressed NHL schedule:\n\n - GLYPH<129> operating revenue would have been 4 % higher this year compared to last year, instead of 5 % higher as reported\n - GLYPH<129> adjusted operating profit would have been 7 % higher this year compared to last year, instead of 15 % lower as reported.\n\nExcluding the acquisition of theScore:", - "page_start": 52, - "page_end": 52, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "Operating revenue was 5 % higher this year, mainly because of:\n\n - GLYPH<129> higher subscription and advertising revenue generated by the Sportsnet properties, including the acquisition of theScore, and overall growth in distribution of our other specialty channels\n - GLYPH<129> higher advertising revenue of $21 million resulting from timing of NHL hockey games. Advertising revenue last year was lower than normal due to the NHL player lockout which resulted in no NHL games being aired, and higher than normal this year due to the compressed 2012-2013 season which started in January 2013 and the compressed 2013-2014 NHL schedule in advance of the upcoming winter Olympics\n - GLYPH<129> higher attendance and merchandise sales at Blue Jays games\n - GLYPH<129> higher sales at The Shopping Channel.\n\nThe increases in revenue were partially offset by continuing volatility in advertising spending across most industry sectors, driven by a continued slow economy.", - "page_start": 51, - "page_end": 51, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "<!-- image -->\n\n## LEADING CONTENT\n\n<!-- image -->\n\nROGERS IS COMMITTED TO DELIVERING WORLD-CLASS CONTENT AND EXPERIENCES TO CONSUMERS AND ADVERTISING SOLUTIONS TO BUSINESSES. 
THE COMPANY HAS A STRONG LEGACY OF BUILDING POWERFUL MEDIA BRANDS WITH COMPELLING CONTENT THAT RESONATES WITH AUDIENCES ACROSS MULTIPLE PLATFORMS ON ANY DEVICE.\n\nToday, businesses across Canada connect with customers through Rogers category-leading television and radio assets, sports entertainment, televised and online shopping, publishing, and digital media properties as the one-stop solution for all their local and national advertising needs.\n\nRogers Media is Canada's premier combination of diversified broadcast, specialty, sports, print and online media assets which together touch nearly 90% of Canadians every week. This includes over 50 popular AM and FM radio stations across Canada. In television, it includes the seven station City network which broadcasts intensely local, urban-oriented", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_RCI_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_EMMS_2004.pdf", - "query": "I am a shareholder of Emmis Communication, but I will be available from the 20th of June to the 4th of July, will the Annual Meeting take place during this period?", - "target_page": 6, - "target_passage": "The Annual Meeting of shareholders will be held at 10 a.m. Central Time on Wednesday, June 30, 2004, at Emmis’ Corporate office.", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "In other words, you can count on Emmis to continue to do what it has always done: Outperform.\n\nThank you for your belief and investment in Emmis.\n\n<!-- image -->\n\n<!-- image -->\n\nJeffrey H. Smulyan\n\nchairman & ceo emmis communications", - "page_start": 4, - "page_end": 4, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## about emmis\n\nEmmis Communications (NASDAQ: EMMS) owns 23 FM and 4 AM domestic radio stations serving the nation's largest markets of New York, Los Angeles and Chicago as well as Phoenix, St. Louis, Austin, Indianapolis and Terre Haute, Ind. 
In addition, Emmis owns 16 television stations, award-winning regional and specialty magazines, a radio network, international radio interests, and ancillary businesses in broadcast sales and publishing.\n\nEmmis was founded in 1980, and the company launched its first radio station, WENS-FM, in July 1981. As Emmis (the Hebrew word for 'truth') acquired more radio stations across the nation, it established a reputation for sound operations and emerged as a radio industry leader and innovator. Emmis was the first broadcast company to own toprated radio stations in both L.A. and New York, and it pioneered such concepts as the all-sports format.\n\nThe company launched its magazine division in 1988 with the purchase of Indianapolis Monthly , and moved into the world of international radio in 1997, when it was awarded a license to operate a national radio network in Hungary. In 1998, Emmis expanded into television by buying six television stations in markets throughout the United States. In the last six years, the company has added properties in each of its divisions.\n\nWith its emphasis on solid operations, integrity, community involvement and fun, the company's culture has been repeatedly lauded by both its employees and its peers. Trade publications have regularly cited the company's leaders as being among the best in the business.\n\nEmmis became a public company in 1994. It maintains its worldwide headquarters in Indianapolis, where the company was founded.\n\nThis annual report contains certain non-GAAP measures. 
For a presentation of the directly comparable GAAP measure and a reconciliation of the non-GAAP measures to the GAAP measures, see the attachment to the back of our Form 10-K in this Annual Report.", - "page_start": 1, - "page_end": 1, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## Corporate Office\n\nOne Emmis Plaza, 40 Monument Circle, Suite 700, Indianapolis, Indiana 46204,\n\n317.266.0100.\n\n<!-- image -->\n\n<!-- image -->\n\n## Business\n\nEmmis Communications (NASDAQ: EMMS) is a diversified media firm with awardwinning radio broadcasting, television broadcasting and magazine publishing operations. Emmis' 23 FM and 4 AM domestic radio stations serve the nation's largest markets of New York, Los Angeles and Chicago as well as Phoenix, St. Louis, Austin, Indianapolis and Terre Haute, Ind. The company's 16 television stations are located in Albuquerque, N.M.; Fort Myers, Fla.; Green Bay, Wis.; Honolulu; Huntington, W.Va.; Mobile, Ala./Pensacola, Fla.; New Orleans; Omaha, Neb.; Orlando, Fla.; Portland, Ore.; Terre Haute, Ind.; Topeka, Kan.; Tucson, Ariz.; and Wichita, Kan. Emmis also publishes Indianapolis Monthly, Texas Monthly, Cincinnati, Atlanta, Los Angeles and Country Sampler Group magazines; has a 59.5% interest in Sláger Rádió, a national radio network in Hungary; operates nine FM radio stations serving more than 50 percent of the population in the Flanders region of Belgium; and has ancillary businesses in broadcast sales, publishing and interactive products.\n\n## Transfer Agent Register\n\nWachovia Bank N.A., Shareholder Services Group, 1525 West W.T. Harris Blvd., 3c3, Charlotte, North Carolina 28288-1153.\n\n## Annual Meeting\n\nThe Annual Meeting of shareholders will be held at 10 a.m. 
Central Time on Wednesday, June 30, 2004, at Emmis' Corporate office.\n\n## Form 10-K\n\nA copy of the Annual Report on Form 10-K for the fiscal year ended February 29, 2004, which was filed with the Securities and Exchange Commission, will be sent to shareholders without charge upon written request to Kate Healey, Emmis Communications Corporation, One Emmis Plaza, 40 Monument Circle, Suite 700, Indianapolis, Indiana 46204, or ir@emmis.com.\n\n## Market and Dividend Information\n\nThe Company's Class A Common Stock is traded in the over-the-counter market and is quoted on the National Association of Securities Dealers Automated Quotation (NASDAQ) National Market System under the symbol EMMS.\n\nThe following table sets forth the high and low bid prices of the Class A Common Stock for the periods indicated. No dividends were paid during any such periods.\n\n| Quarter Ended | High | Low |\n|-----------------|--------|-------|\n| May 2002 | 31.85 | 26.15 |\n| August 2002 | 30.15 | 11.65 |\n| November 2002 | 24.05 | 14.25 |\n| February 2003 | 24.86 | 17.82 |\n| May 2003 | 21.24 | 14.84 |\n| August 2003 | 23.87 | 18.68 |\n| November 2003 | 24.06 | 18 |\n| February 2004 | 28.65 | 22.74 |\n\nOn April 23, 2004, there were approximately 4,841 record holders of the Class A Common Stock and one record holder of the Class B Common Stock.\n\nEmmis intends to retain future earnings for use in its business and does not anticipate paying any dividends on shares of its common stock in the foreseeable future.\n\n## Executive Officers\n\nJeffrey H. Smulyan Chairman of the Board, President and Chief Executive Officer\n\nWalter Z. Berger\n\nExecutive Vice President, Chief Financial Officer and Treasurer\n\nRandall Bongarten Television Division President\n\nRichard F. Cummings Radio Division President\n\nGary L. Kaseff\n\nExecutive Vice President, General Counsel\n\nPaul W. 
Fiddick International Division President\n\nMichael Levitan Senior Vice President, Human Resources\n\nGary Thoe\n\nPublishing Division President\n\n## Board of Directors\n\nJeffrey H. Smulyan Chairman of the Board, President and Chief Executive Officer\n\nSusan B. Bayh\n\nFormer Commissioner of the International Joint Commission of the United States and Canada\n\nWalter Z. Berger\n\nExecutive Vice President,\n\nChief Financial Officer and Treasurer\n\nGary L. Kaseff Executive Vice President, General Counsel\n\nRichard A. Leventhal\n\nPresident and Majority Owner,\n\nLMCS, LLC\n\nPeter A. Lund\n\nMedia consultant and former President of CBS Inc.", - "page_start": 5, - "page_end": 5, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "- (5) The C hairm an and the m em bers of the C om m ission shall hold office for a period of tw o successive lives of P arliam ent.\n - (6) A person shall not be qualified to be appointed as a m em ber of the Independent E lectoral C om m ission if-\n - ( a ) he or she has been declared insolvent or adjudged or otherw ise declared", - "page_start": 29, - "page_end": 29, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "- (i) a m em ber nom inated under paragraph ( e ) m ay be rem oved from office by the rest of the m em bers of the C om m ission acting together only for inability of the m em ber to discharge the functions of his or her office w hether arising from infirm ity of m ind or body or any other cause or for gross m isbehaviour; or\n - (ii) a m em ber appointed under paragraph ( f ) m ay be rem oved from office by the President only for inability of the m em ber to discharge the functions of his or her office w hether arising from infirm ity of m ind or body or any other cause or for gross m isbehaviour.\n - (3) A m em ber of the C om m ission shall not enter upon the duties of his or her office until he or she has taken and subscribed such oath for the due execution of his or her office as m ay be 
prescribed by P arliam ent.\n - (4) The Judicial S ervice C om m ission shall not be subject to the direction or control of any other person or authority in the exercise of its functions under this C onstitution.", - "page_start": 44, - "page_end": 44, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "any m em ber and its proceedings shall not be invalidated by the presence or participation of any person not entitled to be present at or to participate in those proceedings.\n\n - (6) The decisions of the C om m ission shall be by the vote of a m ajority of the m em bers present, and in the event of an equality of votes, the C hairm an shall have a casting vote.\n\n## 104. A ppointm ent, etc., of judicial officers", - "page_start": 45, - "page_end": 45, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "- (8) A m em ber of the P ublic S ervice C om m ission shall not be rem oved from office except in accordance w ith the provisions of this section.\n - (9) If the office of C hairm an of the P ublic S ervice C om m ission is vacant or if the person holding that office is for any reason unable to perform the functions of his or her office, then, until a person has been appointed to and has assum ed the functions of that office or until the person holding that office has resum ed those functions, as the case m ay be, those functions shall be perform ed by such one of the other m em bers of the C om m ission as m ay be designated in that behalf by the P resident.\n - (10) If at any tim e there are less than tw o m em bers of the P ublic S ervice C om m ission besides the C hairm an or if any such m em ber is appointed to act as C hairm an or is for any reason unable to perform the functions of his or her office, the President m ay appoint a person w ho is qualified for appointm ent as a m em ber of the C om m ission to act as a m em ber, and any person so appointed shall, subject to the provisions of subsection (5)( b ) of this section, 
continue to act until the office in w hich he or she is acting is filled, or as the case m ay be, until the holder thereof resum es his or her functions or until his or her appointm ent to act is revoked by the P resident.", - "page_start": 46, - "page_end": 46, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "Provided that no person shall be disqualified from holding the office of C hairm an or m em ber of a D elim itation C om m ission by reason only of the fact that he or she has been the S peaker of the N ational A ssem bly if he or she w as elected to that office from am ongst persons w ho w ere not M em bers of the N ational A ssem bly.\n\n - (6) The office of C hairm an or other m em ber of the D elim itation C om m ission shall becom e vacant if circum stances arise that, w ere he or she not C hairm an or m em ber of the D elim itation C om m ission, w ould disqualify him or her for appointm ent as such.\n - (7) If, after the appointm ent of the D elim itation C om m ission and before the C om m ission has subm itted its report under section 65, the office of C hairm an or any other m em ber of the C om m ission falls vacant or the holder of the office becom es unable for any reason to discharge his or her functions as a m em ber of the C om m ission, the Judicial S ervice C om m ission m ay, subject to the provisions of subsections (3) to (5) of this section, appoint another person to be a m em ber of the C om m ission:\n\nProvided that a m em ber appointed under this section because of the inability of som e other m em ber to discharge his or her functions shall cease to be a m em ber of the C om m ission w hen, in the opinion of the Judicial S ervice C om m ission, that other m em ber is able to resum e his or her functions as a m em ber of the C om m ission.\n\n## 65. 
R eport of C om m ission\n\n - (1) W henever a D elim itation C om m ission has been appointed the C om m ission shall as soon as practicable subm it to the P resident a report w hich shall state w hether any alteration is necessary to the boundaries of the constituencies in order to give effect to subsection (2) of this section or in consequence of any alteration in the num ber of seats of E lected M em bers in the N ational A ssem bly and w here any alteration is necessary shall include a list of the constituencies delim ited by the C om m ission and a", - "page_start": 28, - "page_end": 28, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "- (6) S ubject to subsection (7) of this section a m em ber of the P ublic S ervice C om m ission m ay be rem oved from office by the P resident for inability to discharge the functions of his or her office (w hether arising from infirm ity of body or m ind or any other cause) or for m isbehaviour.\n - (7) If the P resident considers that the question of rem oving a m em ber of the Public S ervice C om m ission under subsection (6) of this section ought to be investigated, then-\n - ( a ) the P resident shall appoint a tribunal w hich shall consist of a C hairm an and not less than tw o other m em bers selected by the C hief Justice from am ong persons w ho hold or have held high judicial office; and\n - ( b ) the tribunal shall enquire into the m atter and report on the facts thereof to the President and recom m end to him or her w hether the m em ber ought to be rem oved under subsection (6) of this section, and the P resident shall act in accordance w ith that recom m endation.", - "page_start": 46, - "page_end": 46, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "SHAREHOLDER INFORMATION", - "page_start": 90, - "page_end": 90, - "source_file": "NYSE_JWN_2014.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_EMMS_2004.pdf", - "query": "Who is the President of the TV Department 
of Emmis Communications?", - "target_page": 6, - "target_passage": "Randall Bongarten Television Division President", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "In other words, you can count on Emmis to continue to do what it has always done: Outperform.\n\nThank you for your belief and investment in Emmis.\n\n<!-- image -->\n\n<!-- image -->\n\nJeffrey H. Smulyan\n\nchairman & ceo emmis communications", - "page_start": 4, - "page_end": 4, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## about emmis\n\nEmmis Communications (NASDAQ: EMMS) owns 23 FM and 4 AM domestic radio stations serving the nation's largest markets of New York, Los Angeles and Chicago as well as Phoenix, St. Louis, Austin, Indianapolis and Terre Haute, Ind. In addition, Emmis owns 16 television stations, award-winning regional and specialty magazines, a radio network, international radio interests, and ancillary businesses in broadcast sales and publishing.\n\nEmmis was founded in 1980, and the company launched its first radio station, WENS-FM, in July 1981. As Emmis (the Hebrew word for 'truth') acquired more radio stations across the nation, it established a reputation for sound operations and emerged as a radio industry leader and innovator. Emmis was the first broadcast company to own toprated radio stations in both L.A. and New York, and it pioneered such concepts as the all-sports format.\n\nThe company launched its magazine division in 1988 with the purchase of Indianapolis Monthly , and moved into the world of international radio in 1997, when it was awarded a license to operate a national radio network in Hungary. In 1998, Emmis expanded into television by buying six television stations in markets throughout the United States. 
In the last six years, the company has added properties in each of its divisions.\n\nWith its emphasis on solid operations, integrity, community involvement and fun, the company's culture has been repeatedly lauded by both its employees and its peers. Trade publications have regularly cited the company's leaders as being among the best in the business.\n\nEmmis became a public company in 1994. It maintains its worldwide headquarters in Indianapolis, where the company was founded.\n\nThis annual report contains certain non-GAAP measures. For a presentation of the directly comparable GAAP measure and a reconciliation of the non-GAAP measures to the GAAP measures, see the attachment to the back of our Form 10-K in this Annual Report.", - "page_start": 1, - "page_end": 1, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## Corporate Office\n\nOne Emmis Plaza, 40 Monument Circle, Suite 700, Indianapolis, Indiana 46204,\n\n317.266.0100.\n\n<!-- image -->\n\n<!-- image -->\n\n## Business\n\nEmmis Communications (NASDAQ: EMMS) is a diversified media firm with awardwinning radio broadcasting, television broadcasting and magazine publishing operations. Emmis' 23 FM and 4 AM domestic radio stations serve the nation's largest markets of New York, Los Angeles and Chicago as well as Phoenix, St. Louis, Austin, Indianapolis and Terre Haute, Ind. The company's 16 television stations are located in Albuquerque, N.M.; Fort Myers, Fla.; Green Bay, Wis.; Honolulu; Huntington, W.Va.; Mobile, Ala./Pensacola, Fla.; New Orleans; Omaha, Neb.; Orlando, Fla.; Portland, Ore.; Terre Haute, Ind.; Topeka, Kan.; Tucson, Ariz.; and Wichita, Kan. 
Emmis also publishes Indianapolis Monthly, Texas Monthly, Cincinnati, Atlanta, Los Angeles and Country Sampler Group magazines; has a 59.5% interest in Sláger Rádió, a national radio network in Hungary; operates nine FM radio stations serving more than 50 percent of the population in the Flanders region of Belgium; and has ancillary businesses in broadcast sales, publishing and interactive products.\n\n## Transfer Agent Register\n\nWachovia Bank N.A., Shareholder Services Group, 1525 West W.T. Harris Blvd., 3c3, Charlotte, North Carolina 28288-1153.\n\n## Annual Meeting\n\nThe Annual Meeting of shareholders will be held at 10 a.m. Central Time on Wednesday, June 30, 2004, at Emmis' Corporate office.\n\n## Form 10-K\n\nA copy of the Annual Report on Form 10-K for the fiscal year ended February 29, 2004, which was filed with the Securities and Exchange Commission, will be sent to shareholders without charge upon written request to Kate Healey, Emmis Communications Corporation, One Emmis Plaza, 40 Monument Circle, Suite 700, Indianapolis, Indiana 46204, or ir@emmis.com.\n\n## Market and Dividend Information\n\nThe Company's Class A Common Stock is traded in the over-the-counter market and is quoted on the National Association of Securities Dealers Automated Quotation (NASDAQ) National Market System under the symbol EMMS.\n\nThe following table sets forth the high and low bid prices of the Class A Common Stock for the periods indicated. 
No dividends were paid during any such periods.\n\n| Quarter Ended | High | Low |\n|-----------------|--------|-------|\n| May 2002 | 31.85 | 26.15 |\n| August 2002 | 30.15 | 11.65 |\n| November 2002 | 24.05 | 14.25 |\n| February 2003 | 24.86 | 17.82 |\n| May 2003 | 21.24 | 14.84 |\n| August 2003 | 23.87 | 18.68 |\n| November 2003 | 24.06 | 18 |\n| February 2004 | 28.65 | 22.74 |\n\nOn April 23, 2004, there were approximately 4,841 record holders of the Class A Common Stock and one record holder of the Class B Common Stock.\n\nEmmis intends to retain future earnings for use in its business and does not anticipate paying any dividends on shares of its common stock in the foreseeable future.\n\n## Executive Officers\n\nJeffrey H. Smulyan Chairman of the Board, President and Chief Executive Officer\n\nWalter Z. Berger\n\nExecutive Vice President, Chief Financial Officer and Treasurer\n\nRandall Bongarten Television Division President\n\nRichard F. Cummings Radio Division President\n\nGary L. Kaseff\n\nExecutive Vice President, General Counsel\n\nPaul W. Fiddick International Division President\n\nMichael Levitan Senior Vice President, Human Resources\n\nGary Thoe\n\nPublishing Division President\n\n## Board of Directors\n\nJeffrey H. Smulyan Chairman of the Board, President and Chief Executive Officer\n\nSusan B. Bayh\n\nFormer Commissioner of the International Joint Commission of the United States and Canada\n\nWalter Z. Berger\n\nExecutive Vice President,\n\nChief Financial Officer and Treasurer\n\nGary L. Kaseff Executive Vice President, General Counsel\n\nRichard A. Leventhal\n\nPresident and Majority Owner,\n\nLMCS, LLC\n\nPeter A. Lund\n\nMedia consultant and former President of CBS Inc.", - "page_start": 5, - "page_end": 5, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "Gary L. Kaseff Executive Vice President, General Counsel\n\nRichard A. Leventhal\n\nPresident and Majority Owner,\n\nLMCS, LLC\n\nPeter A. 
Lund\n\nMedia consultant and former President of CBS Inc.\n\nGreg A. Nathanson Media consultant and former President of Fox Television Stations and Emmis Television\n\nFrank V. Sica\n\nSenior Advisor\n\nSoros Fund Management LLC\n\nLawrence B. Sorrel Managing Partner and Co-CEO Tailwind Capital Partners", - "page_start": 5, - "page_end": 5, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "- (8) A m em ber of the P ublic S ervice C om m ission shall not be rem oved from office except in accordance w ith the provisions of this section.\n - (9) If the office of C hairm an of the P ublic S ervice C om m ission is vacant or if the person holding that office is for any reason unable to perform the functions of his or her office, then, until a person has been appointed to and has assum ed the functions of that office or until the person holding that office has resum ed those functions, as the case m ay be, those functions shall be perform ed by such one of the other m em bers of the C om m ission as m ay be designated in that behalf by the P resident.\n - (10) If at any tim e there are less than tw o m em bers of the P ublic S ervice C om m ission besides the C hairm an or if any such m em ber is appointed to act as C hairm an or is for any reason unable to perform the functions of his or her office, the President m ay appoint a person w ho is qualified for appointm ent as a m em ber of the C om m ission to act as a m em ber, and any person so appointed shall, subject to the provisions of subsection (5)( b ) of this section, continue to act until the office in w hich he or she is acting is filled, or as the case m ay be, until the holder thereof resum es his or her functions or until his or her appointm ent to act is revoked by the P resident.", - "page_start": 46, - "page_end": 46, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "- (i) a m em ber nom inated under paragraph ( e ) m ay be rem oved from office by the rest of the m em bers of the C 
om m ission acting together only for inability of the m em ber to discharge the functions of his or her office w hether arising from infirm ity of m ind or body or any other cause or for gross m isbehaviour; or\n - (ii) a m em ber appointed under paragraph ( f ) m ay be rem oved from office by the President only for inability of the m em ber to discharge the functions of his or her office w hether arising from infirm ity of m ind or body or any other cause or for gross m isbehaviour.\n - (3) A m em ber of the C om m ission shall not enter upon the duties of his or her office until he or she has taken and subscribed such oath for the due execution of his or her office as m ay be prescribed by P arliam ent.\n - (4) The Judicial S ervice C om m ission shall not be subject to the direction or control of any other person or authority in the exercise of its functions under this C onstitution.", - "page_start": 44, - "page_end": 44, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "Provided that no person shall be disqualified from holding the office of C hairm an or m em ber of a D elim itation C om m ission by reason only of the fact that he or she has been the S peaker of the N ational A ssem bly if he or she w as elected to that office from am ongst persons w ho w ere not M em bers of the N ational A ssem bly.\n\n - (6) The office of C hairm an or other m em ber of the D elim itation C om m ission shall becom e vacant if circum stances arise that, w ere he or she not C hairm an or m em ber of the D elim itation C om m ission, w ould disqualify him or her for appointm ent as such.\n - (7) If, after the appointm ent of the D elim itation C om m ission and before the C om m ission has subm itted its report under section 65, the office of C hairm an or any other m em ber of the C om m ission falls vacant or the holder of the office becom es unable for any reason to discharge his or her functions as a m em ber of the C om m ission, the Judicial S ervice C om 
m ission m ay, subject to the provisions of subsections (3) to (5) of this section, appoint another person to be a m em ber of the C om m ission:\n\nProvided that a m em ber appointed under this section because of the inability of som e other m em ber to discharge his or her functions shall cease to be a m em ber of the C om m ission w hen, in the opinion of the Judicial S ervice C om m ission, that other m em ber is able to resum e his or her functions as a m em ber of the C om m ission.\n\n## 65. R eport of C om m ission\n\n - (1) W henever a D elim itation C om m ission has been appointed the C om m ission shall as soon as practicable subm it to the P resident a report w hich shall state w hether any alteration is necessary to the boundaries of the constituencies in order to give effect to subsection (2) of this section or in consequence of any alteration in the num ber of seats of E lected M em bers in the N ational A ssem bly and w here any alteration is necessary shall include a list of the constituencies delim ited by the C om m ission and a", - "page_start": 28, - "page_end": 28, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "## 68. 
Tenure of office of M em bers\n\n(1) The seat of an E lected M em ber or a S pecially E lected M em ber of the N ational A ssem bly shall becom e vacant-\n\n - ( a ) upon the dissolution of P arliam ent;\n - ( b ) if he or she is absent from the sittings of the A ssem bly for such period and in such circum stances as m ay be prescribed in the rules of procedure of the Assem bly;\n - ( c ) subject to the provisions of subsections (2) to (3) of this section, if any circum stances arise that, if he or she w ere not a M em ber of the A ssem bly, w ould cause him or her to be disqualified for election thereto.\n\n(2) If circum stances such as are referred to in paragraph ( c ) of the preceding subsection arise in relation to a M em ber of the A ssem bly by virtue of the fact that he or she is declared insolvent, adjudged to be of unsound m ind, sentenced to death or im prisonm ent, or convicted of an election offence and it is open to the M em ber to appeal against the decision (either w ith the leave of the court or other authority or w ithout such leave), he or she shall forthw ith cease to perform his or her functions as a M em ber of the Assem bly but, subject to the next follow ing subsection, he or she shall not vacate his or her seat until the expiration of a period of 30 days thereafter:\n\nProvided that the S peaker m ay, at the request of the M em ber, from tim e to tim e extend that period for further periods of 30 days to enable the M em ber to pursue an appeal against the decision, so, how ever, that extensions of tim e exceeding in the aggregate 150 days shall not be given w ithout the approval of the A ssem bly signified by resolution.", - "page_start": 32, - "page_end": 32, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "- (6) S ubject to subsection (7) of this section a m em ber of the P ublic S ervice C om m ission m ay be rem oved from office by the P resident for inability to discharge the functions of his or her office (w hether 
arising from infirm ity of body or m ind or any other cause) or for m isbehaviour.\n - (7) If the P resident considers that the question of rem oving a m em ber of the Public S ervice C om m ission under subsection (6) of this section ought to be investigated, then-\n - ( a ) the P resident shall appoint a tribunal w hich shall consist of a C hairm an and not less than tw o other m em bers selected by the C hief Justice from am ong persons w ho hold or have held high judicial office; and\n - ( b ) the tribunal shall enquire into the m atter and report on the facts thereof to the President and recom m end to him or her w hether the m em ber ought to be rem oved under subsection (6) of this section, and the P resident shall act in accordance w ith that recom m endation.", - "page_start": 46, - "page_end": 46, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\nDear Shareholders,\n\nOn our year-end conference call, I said that last year was the best in Emmis Communications' history. And while that might have sounded like the usual Wall Street hyperbole - like any other CEO bragging about his company's performance - the difference is, I believed it. And I still do.\n\nBut I've been in this business long enough to know two things for sure: What I believe is not as important as what I can prove, and what we did last year is only meaningful if it reflects on how we will do in the coming year. The good news is, Emmis does have the results to back up my high praise, and what we did to perform last year does directly relate to how we'll perform in the year ahead.\n\n## The best year\n\nThe bottom line is this: Emmis Communications turned in a remarkable performance last year. Again and again, and by a number of measures, we outperformed our peers, our markets and our own solid track record.\n\nAnd we did this in a year that was challenging in just about every way. 
The economy was unstable, public companies came under continuing scrutiny, indecency issues hounded broadcasters, competition for tight ad dollars increased and technology continued to reshape the media world.\n\nBut our people refused to be slowed by those challenges. Instead, they worked through them. They innovated, hustled and focused. And they produced.\n\nOur radio division's revenue growth led our markets and the industry - in our fiscal year, our group was up 4.5 percent while our markets were up 2.7 percent and the industry only 1 percent. Based on this kind of performance, we have consistently ranked among the nation's leaders in per-station revenue, and we continue to produce top-rated programming in markets across the nation.\n\nOur TV performance was even more impressive. The Emmis television group's revenues were up 0.5 percent in calendar 2003, a year when our markets saw a 2.3 percent decrease in revenues, and the industry experienced a 4.7 percent revenue decline. This industry-leading result made us one of the few groups in the nation to post positive growth. In addi-\n\ntion, we gained revenue share at 11 of our 13 measured stations and held the line on expenses, giving us a 1.2 percent increase in fiscal-year cash flow.\n\nOur publishing and international divisions also posted strong results. In a tough publishing market, our magazines boosted their division's revenues by 4.6 percent over last year and increased cash flow by 3.3 percent. Our international division turned in a revenue increase of 27 percent and a cash flow increase of 31 percent.\n\nIn addition to boosting performance in our divisions, we honed our corporate operations by continuing to build one of the most adept and hardest-working corporate groups in American media. 
With this team in place, we've brought our leverage and cost of capital down to more manageable levels, found ways to combat the continually increasing costs of health insurance and, in a truly top-notch effort, smoothly integrated our new Austin radio properties - in just under a year as a part of Emmis, the Austin properties are enjoying significant ratings and revenue increases.\n\nOf course, for you, the real bottom line on our performance is its impact on your investment. I'm proud to say that we saw a 27 percent increase in our share price over the course of the last fiscal year - we ended fiscal '03 at 19.79, and closed the book on fiscal '04 at 25.17.\n\n## How we did it\n\nOperationally, we were on top of our game last year. However, as I said, I know that the past year's performance really only matters if it reflects on what we'll do in the coming year. The good news is, it does. We performed at these high levels not by doing something unusual, but by operating the way Emmis has always operated, and the way we always will.", - "page_start": 3, - "page_end": 3, - "source_file": "NASDAQ_EMMS_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "NASDAQ_EMMS_2004.pdf", - "query": "Does the radio station 93.7 in Austin belong to Emmis Communication?", - "target_page": 7, - "target_passage": "KLBJ-FM (93.7), Album Oriented Rock", - "chunk_present": { - "presence": true, - "index": 6 - } - }, - "top_chunk": [ - { - "text": "## about emmis\n\nEmmis Communications (NASDAQ: EMMS) owns 23 FM and 4 AM domestic radio stations serving the nation's largest markets of New York, Los Angeles and Chicago as well as Phoenix, St. Louis, Austin, Indianapolis and Terre Haute, Ind. 
In addition, Emmis owns 16 television stations, award-winning regional and specialty magazines, a radio network, international radio interests, and ancillary businesses in broadcast sales and publishing.\n\nEmmis was founded in 1980, and the company launched its first radio station, WENS-FM, in July 1981. As Emmis (the Hebrew word for 'truth') acquired more radio stations across the nation, it established a reputation for sound operations and emerged as a radio industry leader and innovator. Emmis was the first broadcast company to own toprated radio stations in both L.A. and New York, and it pioneered such concepts as the all-sports format.\n\nThe company launched its magazine division in 1988 with the purchase of Indianapolis Monthly , and moved into the world of international radio in 1997, when it was awarded a license to operate a national radio network in Hungary. In 1998, Emmis expanded into television by buying six television stations in markets throughout the United States. In the last six years, the company has added properties in each of its divisions.\n\nWith its emphasis on solid operations, integrity, community involvement and fun, the company's culture has been repeatedly lauded by both its employees and its peers. Trade publications have regularly cited the company's leaders as being among the best in the business.\n\nEmmis became a public company in 1994. It maintains its worldwide headquarters in Indianapolis, where the company was founded.\n\nThis annual report contains certain non-GAAP measures. 
For a presentation of the directly comparable GAAP measure and a reconciliation of the non-GAAP measures to the GAAP measures, see the attachment to the back of our Form 10-K in this Annual Report.", - "page_start": 1, - "page_end": 1, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## Corporate Office\n\nOne Emmis Plaza, 40 Monument Circle, Suite 700, Indianapolis, Indiana 46204,\n\n317.266.0100.\n\n<!-- image -->\n\n<!-- image -->\n\n## Business\n\nEmmis Communications (NASDAQ: EMMS) is a diversified media firm with awardwinning radio broadcasting, television broadcasting and magazine publishing operations. Emmis' 23 FM and 4 AM domestic radio stations serve the nation's largest markets of New York, Los Angeles and Chicago as well as Phoenix, St. Louis, Austin, Indianapolis and Terre Haute, Ind. The company's 16 television stations are located in Albuquerque, N.M.; Fort Myers, Fla.; Green Bay, Wis.; Honolulu; Huntington, W.Va.; Mobile, Ala./Pensacola, Fla.; New Orleans; Omaha, Neb.; Orlando, Fla.; Portland, Ore.; Terre Haute, Ind.; Topeka, Kan.; Tucson, Ariz.; and Wichita, Kan. Emmis also publishes Indianapolis Monthly, Texas Monthly, Cincinnati, Atlanta, Los Angeles and Country Sampler Group magazines; has a 59.5% interest in Sláger Rádió, a national radio network in Hungary; operates nine FM radio stations serving more than 50 percent of the population in the Flanders region of Belgium; and has ancillary businesses in broadcast sales, publishing and interactive products.\n\n## Transfer Agent Register\n\nWachovia Bank N.A., Shareholder Services Group, 1525 West W.T. Harris Blvd., 3c3, Charlotte, North Carolina 28288-1153.\n\n## Annual Meeting\n\nThe Annual Meeting of shareholders will be held at 10 a.m. 
Central Time on Wednesday, June 30, 2004, at Emmis' Corporate office.\n\n## Form 10-K\n\nA copy of the Annual Report on Form 10-K for the fiscal year ended February 29, 2004, which was filed with the Securities and Exchange Commission, will be sent to shareholders without charge upon written request to Kate Healey, Emmis Communications Corporation, One Emmis Plaza, 40 Monument Circle, Suite 700, Indianapolis, Indiana 46204, or ir@emmis.com.\n\n## Market and Dividend Information\n\nThe Company's Class A Common Stock is traded in the over-the-counter market and is quoted on the National Association of Securities Dealers Automated Quotation (NASDAQ) National Market System under the symbol EMMS.\n\nThe following table sets forth the high and low bid prices of the Class A Common Stock for the periods indicated. No dividends were paid during any such periods.\n\n| Quarter Ended | High | Low |\n|-----------------|--------|-------|\n| May 2002 | 31.85 | 26.15 |\n| August 2002 | 30.15 | 11.65 |\n| November 2002 | 24.05 | 14.25 |\n| February 2003 | 24.86 | 17.82 |\n| May 2003 | 21.24 | 14.84 |\n| August 2003 | 23.87 | 18.68 |\n| November 2003 | 24.06 | 18 |\n| February 2004 | 28.65 | 22.74 |\n\nOn April 23, 2004, there were approximately 4,841 record holders of the Class A Common Stock and one record holder of the Class B Common Stock.\n\nEmmis intends to retain future earnings for use in its business and does not anticipate paying any dividends on shares of its common stock in the foreseeable future.\n\n## Executive Officers\n\nJeffrey H. Smulyan Chairman of the Board, President and Chief Executive Officer\n\nWalter Z. Berger\n\nExecutive Vice President, Chief Financial Officer and Treasurer\n\nRandall Bongarten Television Division President\n\nRichard F. Cummings Radio Division President\n\nGary L. Kaseff\n\nExecutive Vice President, General Counsel\n\nPaul W. 
Fiddick International Division President\n\nMichael Levitan Senior Vice President, Human Resources\n\nGary Thoe\n\nPublishing Division President\n\n## Board of Directors\n\nJeffrey H. Smulyan Chairman of the Board, President and Chief Executive Officer\n\nSusan B. Bayh\n\nFormer Commissioner of the International Joint Commission of the United States and Canada\n\nWalter Z. Berger\n\nExecutive Vice President,\n\nChief Financial Officer and Treasurer\n\nGary L. Kaseff Executive Vice President, General Counsel\n\nRichard A. Leventhal\n\nPresident and Majority Owner,\n\nLMCS, LLC\n\nPeter A. Lund\n\nMedia consultant and former President of CBS Inc.", - "page_start": 5, - "page_end": 5, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "In other words, you can count on Emmis to continue to do what it has always done: Outperform.\n\nThank you for your belief and investment in Emmis.\n\n<!-- image -->\n\n<!-- image -->\n\nJeffrey H. Smulyan\n\nchairman & ceo emmis communications", - "page_start": 4, - "page_end": 4, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "## what it has always done: outperform.\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nIn addition, we commit ourselves to creating the best content in our markets. Our magazines routinely dominate their industry awards ceremonies - last year, Texas Monthly won a coveted National Magazine Award, and Emmis publications claimed more than half of the awards at the City and Regional Magazine competition. Our radio stations feature some of the industry's most popular personalities - in 2003, Emmis people and stations were awarded three Marconi Radio Awards. And our television operations are regularly honored by journalism organizations for their news gathering and community service. 
In short, we provide our markets with reliable, high-quality content - content that helps us assemble the audiences our advertisers want to reach.\n\nWe then generate revenue by overallocating to sales. We give our teams well-developed strategies, clearly defined brands and solid products. We build bigger, better sales forces and put a greater emphasis on local dollars than our competitors. We hire aggressive managers, set ambitious goals and then watch our people work harder and smarter than anyone else.\n\nWe also seize the right opportunities and make the most of them. As the cost of buying radio properties has gone through the roof, we have been careful about buying. However, when we had a chance to acquire the LBJ stations in Austin, we knew it was the right fit: good stations, a tremendous heritage and a great culture, all with an opportunity for growth. And we've already built on that group's track record - since we bought them, we've reformatted one station and quickly sent it to No. 1 in the market, and we've pushed revenues up 9 percent for the entire group.\n\nFinally, we innovate. Why has Emmis, traditionally a radio company, become the company to emulate in TV? Because we approached TV in a way it's never been approached before. Why do we operate leading hip-hop stations in markets across the nation? Because we pioneered the concept. Why have we created a new 'Music with Class' format in St. Louis' Red 104.1? Because we believe we see a new opportunity. We know that successful companies don't follow the pack. 
They lead it, and that's what we'll always do.\n\n## The year ahead\n\nThat last point - innovation - is an important one, especially for the future of Emmis, because we are planning something\n\n<!-- image -->\n\n<!-- image -->\n\nthat could change the face of American TV and once again demonstrate that Emmis is a company that leads the way.\n\nForty years ago, Americans began taking down their TV antennas and severing broadcasters' direct link to television audiences. Since then, the cable companies-the middlemen who replaced us-have created more than $300 billion of value for themselves. However, changes in technology have given broadcasters the ability to provide the American public with the most popular TV channels, without the middlemen and at a more reasonable price.\n\nWe are developing an innovative model that will leverage that technology to get broadcast companies back into the game. I believe it has the potential to revolutionize the television industry. I also believe it will add substantial value to your investment.\n\nWe unveiled this concept at the National Association of Broadcasters meeting in April. I am proud to say that 11 other television companies joined us at that meeting to express their support for what we're calling the Broadcasters' Initiative, and more are signing on each week. Once again, Emmis has leveraged innovation to take a leading role in our industries.\n\nWe'll continue to use innovation to push us forward. 
Meanwhile, we'll also build and maintain the best teams, produce the best media content, outhustle and outsell our competitors, seize the best opportunities and operate this company better than any other.\n\nIn other words, you can count on Emmis to continue to do what it has always done: Outperform.", - "page_start": 4, - "page_end": 4, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "| Los Angeles | Fox programming/local news | Los Angeles |\n| KPWR-FM (105.9), Hip-Hop/R&B | Honolulu, KGMB-TV (Channel 9), | Texas Monthly |\n| KZLA-FM (93.9), Country | CBS programming/local news | |\n| New York | Huntington/Charleston, W.Va., WSAZ-TV (Channel 3), | INTERNATIONAL |\n| WQCD-FM (101.9), Smooth Jazz | NBC programming/local news | Hungary, Sláger Rádió, Classic Rock/local programming |\n| | Mobile, Ala./Pensacola, Fla., WALA-TV (Channel 10), | Belgium, nine stations serving the Flanders region |\n| WQHT-FM (97.7), Hip-Hop | Fox programming/local news | |\n| WRKS-FM(98.7), Classic Soul/Today's R&B Phoenix | Mobile, Ala./Pensacola, Fla., WBPG-TV (Channel | RELATED BUSINESSES |\n| KKFR-FM(92.3), Rhythmic CHR | 55), WB programming | Emmis Books |\n| KKLT-FM (98.7), Adult Contemporary | New Orleans, WVUE-TV (Channel 8), | Emmis Interactive |\n| KMVP-AM (860), Sports | Fox programming/local news | RDS |\n| KTAR-AM (620), News/Talk/Sports | Omaha, Neb., KMTV-TV (Channel 3), | |\n| | CBS programming/local news | |", - "page_start": 6, - "page_end": 6, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "Provided that no person shall be disqualified from holding the office of C hairm an or m em ber of a D elim itation C om m ission by reason only of the fact that he or she has been the S peaker of the N ational A ssem bly if he or she w as elected to that office from am ongst persons w ho w ere not M em bers of the N ational A ssem bly.\n\n - (6) The office of C hairm an or other m em ber of the D elim itation C om m ission shall becom e vacant if circum 
stances arise that, w ere he or she not C hairm an or m em ber of the D elim itation C om m ission, w ould disqualify him or her for appointm ent as such.\n - (7) If, after the appointm ent of the D elim itation C om m ission and before the C om m ission has subm itted its report under section 65, the office of C hairm an or any other m em ber of the C om m ission falls vacant or the holder of the office becom es unable for any reason to discharge his or her functions as a m em ber of the C om m ission, the Judicial S ervice C om m ission m ay, subject to the provisions of subsections (3) to (5) of this section, appoint another person to be a m em ber of the C om m ission:\n\nProvided that a m em ber appointed under this section because of the inability of som e other m em ber to discharge his or her functions shall cease to be a m em ber of the C om m ission w hen, in the opinion of the Judicial S ervice C om m ission, that other m em ber is able to resum e his or her functions as a m em ber of the C om m ission.\n\n## 65. R eport of C om m ission\n\n - (1) W henever a D elim itation C om m ission has been appointed the C om m ission shall as soon as practicable subm it to the P resident a report w hich shall state w hether any alteration is necessary to the boundaries of the constituencies in order to give effect to subsection (2) of this section or in consequence of any alteration in the num ber of seats of E lected M em bers in the N ational A ssem bly and w here any alteration is necessary shall include a list of the constituencies delim ited by the C om m ission and a", - "page_start": 28, - "page_end": 28, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "| emmis entities | St. 
Louis | Orlando, Fla., WKCF-TV (Channel 18), |\n|----------------------------------------------------------------------------------|-----------------------------------------------------|-------------------------------------------------------|\n| | KFTK-FM (97.1), Talk | WB programming |\n| RADIO | KIHT-FM (96.3), Classic Hits | Portland, Ore., KOIN-TV (Channel 6), |\n| Austin | KPNT-FM (105.7), Alternative Rock | CBS programming/local news |\n| KDHT-FM (93.3), Rhythmic CHR | KSHE-FM (94.7), Album Oriented Rock | Terre Haute, Ind., WTHI-TV (Channel 10), |\n| KEYI-FM (103.5), Oldies | WRDA-FM (104.1), New Standards | CBS programming/local news |\n| KGSR-FM (107.1), Adult Alternative | Terre Haute | Topeka, Kan., KSNT-TV (Channel 27), |\n| KLBJ-AM (590), News/Talk | WTHI-FM (99.9), Country | NBC programming/local news |\n| | WWVR-FM (105.5), Classic Rock | Tucson, Ariz., KGUN-TV (Channel 9), |\n| KLBJ-FM (93.7), Album Oriented Rock | | ABC programming/local news |\n| KROX-FM (101.5), Alternative Rock | TELEVISION | Wichita, Kan., KSNW-TV (Channel 3), |\n| Chicago | Albuquerque, N.M., KRQE-TV (Channel 13), | NBC programming/local news |\n| WKQX-FM (101.1), Alternative Rock | CBS programming/local news | |\n| Indianapolis | Fort Myers, Fla., WFTX-TV (Channel 4), | PUBLISHING |\n| WENS-FM (97.1), Adult Contemporary | Fox programming/local news | Atlanta |\n| WIBC-AM (1070), News/Talk/Sports WNOU-FM (93.1), CHR | Green Bay, Wis., WLUK-TV (Channel 11), | Country Sampler |\n| | Fox programming/local news | Cincinnati |\n| WYXB-FM (105.7), Soft Adult Contemporary Network Indiana, Statewide news network | Honolulu, KHON-TV (Channel 2), | Indianapolis Monthly |\n| Los Angeles | Fox programming/local news | Los Angeles |", - "page_start": 6, - "page_end": 6, - "source_file": "NASDAQ_EMMS_2004.pdf" - }, - { - "text": "Abstract (continued)\n\nand safety via a novel evaluation framework. 
This study suggests the importance of a physician-inloop implementation design for this model and demonstrates an effective strategy to measure preimplementation patient safety of LLM models.\n\nJAMANetwork Open. 2024;7(12):e2448723. doi:10.1001/jamanetworkopen.2024.48723\n\n## Introduction\n\nHandoffs, where patient information is exchanged between health professionals during a transfer of clinical responsibility, have been identified as a critical source of medical errors. 1,2 The Joint Commission, the Accreditation Council for Graduate Medical Education, and the Association of American Medical Colleges have all recommended the development of high-quality and standardized handoff processes to address the substantial patient risk of this ubiquitous event. 3,4 Implementing handoff tools has previously demonstrated significant reductions in medical errors. 5,6 High-quality handoffs from emergency medicine (EM) to inpatient (IP) services (EM-to-IP) are challenged by medical complexity, diagnostic uncertainty, rapidly evolving care plans, and time constraints. 7-10 The EM-to-IP handoff structure is not well standardized, frequently communicated verbally, and poorly adhered to in emergency departments (EDs), including in medical centers with formalized handoff systems. 11-14 Prior research has demonstrated that suboptimal EM-to-IP handoff is associated with adverse events, EM leaders and front-line clinicians themselves view the EM-to-IP handoff as high risk, and an electronic health record (EHR)-based technology is commonly mentioned as the most desired assistive tool in improving ED transitions of care. 15-18 Limited work to date has demonstrated EMelectronic handoff tools as feasible, efficient, and effective. 
19-21 In April 2023, EM and internal medicine leadership of the study site collaboratively developed and launched a mandatory, EHR-based handoff workflow via a standardized EM-to-IP handoff note template, designed for realtime completion by the EM care team at time of admission. At 3 and 6 months postlaunch, informal evaluation of new EM-to-IP handoff notes through random medical record review and unstructured clinician feedback sessions revealed variable completeness, quality, and subsequent usefulness of the handoff notes.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed8.pdf" - }, - { - "text": "- (8) A m em ber of the P ublic S ervice C om m ission shall not be rem oved from office except in accordance w ith the provisions of this section.\n - (9) If the office of C hairm an of the P ublic S ervice C om m ission is vacant or if the person holding that office is for any reason unable to perform the functions of his or her office, then, until a person has been appointed to and has assum ed the functions of that office or until the person holding that office has resum ed those functions, as the case m ay be, those functions shall be perform ed by such one of the other m em bers of the C om m ission as m ay be designated in that behalf by the P resident.\n - (10) If at any tim e there are less than tw o m em bers of the P ublic S ervice C om m ission besides the C hairm an or if any such m em ber is appointed to act as C hairm an or is for any reason unable to perform the functions of his or her office, the President m ay appoint a person w ho is qualified for appointm ent as a m em ber of the C om m ission to act as a m em ber, and any person so appointed shall, subject to the provisions of subsection (5)( b ) of this section, continue to act until the office in w hich he or she is acting is filled, or as the case m ay be, until the holder thereof resum es his or her functions or until his or her appointm ent to act is revoked by the P resident.", - 
"page_start": 46, - "page_end": 46, - "source_file": "Botswana-constitution.pdf" - }, - { - "text": "The message format depends on the facility. The system can transmit syslog messages in the following formats:\n\n - - The concise message format provides standard detail about the event.", - "page_start": 745, - "page_end": 745, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed1.pdf", - "query": "What are the two components considered in the expected free energy?", - "target_page": 4, - "target_passage": "The former (utilitarian) objective is to realize one’s preferences, such as being satiated or safe, by minimizing the discrepancy between preferred sensa- tions (encoded as “priors over observations” in active inference) and current sensations in different modalities (e.g. interoceptive or exteroceptive). The latter (epistemic) objective is to reduce uncertainty about one’s estimated state", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## CLIMB PERFOLWANCE\n\nDuring climbing flight, the airplane gains potential energy by virtue of elevation. This increase in potential energy during a climb is provided by one, or a combination, of two means: (1) expenditure of propulsive energy above that required to maintain level flight or (2) expenditure of airplane kinetic energy, i.e., loss of velocity by a zoom. Zooming for altitude is a transient process of trading kinetic energy for potential energy and is of considerable importance for airplane configurations which can operate at very high levels of kinetic energy. However, the major portions of climb performance for most airplanes is a near steady process in which additional propulsive energy is converted into potential energy. 
The fundamental parts of airplane climb performance involve a flight condition where the airplane is in equilibrium but not at constant altitude.", - "page_start": 167, - "page_end": 167, - "source_file": "00-80T-80.pdf" - }, - { - "text": "FIG. 1: Effective McMillan-Mayer short-range pair potentials extracted from explicit solvent simulations using the HNC closure. (a) Cation anion, (b) cation cation, (c) anion anion, (d) cation anion RDF obtained from explicit solvent MD and implicit solvent MC simulations.\n\n<!-- image -->\n\npute all ion thermodynamic properties through implicit solvent MC simulations.\n\nThe second stage of our coarse-graining procedure consists in applying LPT, in order to deduce the best analytical model of electrolyte solutions which reproduces this molecular description. The principle of LPT is to describe the properties of a given system in terms of those of a well known reference system, with the difference between them treated as a perturbation in the reference potential. Assuming pairwise additive potentials, V ij = V (0) ij + ∆V ij , a first-order truncated expression for the free energy density of the system βf v is obtained,\n\nβf v /lessorsimilar βf (0) v + 1 2 β ∑ i,j ρ i ρ j ∫ d r g (0) ij ( r ) ∆V ij ( r ) (1)\n\nwhich depends only on the free-energy density f (0) v and RDF g (0) of the reference fluid, with β = ( k B T ) -1 and ρ i the concentration of species i . The Gibbs-Bogoliubov inequality [15] ensures that the right-hand side of Eq. (1) is actually a strict upper bound. Once a reference system has been chosen, the expression on the right-hand side of Eq. (1) must be minimized with respect to the parameters defining the reference. This procedure yields the best first-order approximation to the free energy of the system under consideration.\n\nFor a system of charged particles in solution, the natural reference is the PM, defined in terms of the charge and diameter ( σ i ) of each species. 
In this case, the perturbing potentials are just the short-range effective potentials computed above (∆ V ij = V SR ij ). We use the MSA [3] solution to the PM, since it provides analytical expressions for both the free energy and the RDF. The perturbation term is evaluated using an exponential approximation to the RDF obtained within the MSA, g ( r ) = exp [ g MSA ( r ) -1], which removes any unphysical negative regions and improves the comparison with HNC calculations.\n\nΦ\n\nFIG. 2: (Color online) (a) Osmotic coefficient Φ in the McMillan-Mayer frame of reference. (diamond) MC simulations, (dot dashed) MSA2, (dot) Debye Huckel Limiting law (DHLL), (cross) experiments (Ref. [18] with the McMillanMayer to Lewis Randall conversion). (b) Minimization diameters. (dot dashed) MSA2 and (diamond) MSA-fit.\n\n<!-- image -->\n\nWe first used LPT for a two-component system (Na + and Cl -free ions) within the MSA (model MSA2), for concentrations ranging from 0.1 to 2 . 0 mol l -1 . The minimization leads to almost constant diameters on the whole range of concentration: σ 1 = 3 . 67 ˚ A and σ 2 = 4 . 78 ˚ A. As shown in Fig. 2, these parameters yield osmotic coefficients close to MC calculations only at very low concentration, i.e., c ≤ 0 . 1 moll -1 (experimental values are given for indicative purposes only, since a perfect model will exactly match the MC results). For molar solutions, the LPT results differ considerably from MC calculations. This discrepancy can easily be understood by comparing the diameters found within the MSA2 calculation with the effective potentials given in Fig. 1. The anion/cation contact distance obtained within the MSA2 calculation is 4 . 2 ˚ A, which is in the region of the second minimum of the effective potential and corresponds to the situation where there is a single layer of water molecules between the ions. 
The first minimum of the potential, which corresponds to the contact ion pair (CIP) is thus completely ignored by the MSA2 calculation. If the MSA diameters are directly fitted to reproduce the MC osmotic pressure, much smaller values are obtained. These MSA-fit hydrated diameters, which are compared to the MSA2 diameters in the bottom part of Fig. 2, are averages of the CIP and the solvent-separated ion pair.", - "page_start": 1, - "page_end": 1, - "source_file": "1001.2648.pdf" - }, - { - "text": "FEP\n\nFree energy principle\n\nVFE\n\nVariational free energy\n\nEFE\n\nExpected free energy\n\nMCMC\n\nMarkov Chain Monte Carlo\n\nPOMDP\n\nPartially Observed Markov Decision Process\n\n## References\n\n - 1. Parr, T.; Pezzulo, G.; Friston, K.J. Active Inference: The Free Energy Principle in Mind, Brain, and Behavior ; The MIT Press: Cambridge, MA, USA, 2022. [CrossRef]\n - 2. Friston, K.; FitzGerald, T.; Rigoli, F.; Schwartenbeck, P.; O'Doherty, J.; Pezzulo, G. Active inference and learning. Neurosci. Biobehav. Rev. 2016 , 68 , 862-879. [CrossRef]\n - 3. Friston, K.; FitzGerald, T.; Rigoli, F.; Schwartenbeck, P.; Pezzulo, G. Active inference: A process theory. Neural Comput. 2017 , 29 , 1-49. [CrossRef]\n - 4. Friston, K.J.; Stephan, K.E. Free-energy and the brain. Synthese 2007 , 159 , 417-458. [CrossRef] [PubMed]\n - 5. Friston, K. The free-energy principle: A unified brain theory? Nat. Rev. Neurosci. 2010 , 11 , 127-138. [CrossRef] [PubMed]\n - 6. Friston, K. The free-energy principle: A rough guide to the brain? Trends Cogn. Sci. 2009 , 13 , 293-301. [CrossRef] [PubMed]\n - 7. Friston, K. A free energy principle for a particular physics. arXiv 2019 , arXiv:1906.10184. [CrossRef]\n - 8. Friston, K.; Da Costa, L.; Sajid, N.; Heins, C.; Ueltzhöffer, K.; Pavliotis, G.A.; Parr, T. The free energy principle made simpler but not too simple. Phys. Rep. 2023 , 1024 , 1-29. [CrossRef]\n - 9. Friston, K.; Kiebel, S. Predictive coding under the free-energy principle. Philos. 
Trans. R. Soc. B Biol. Sci. 2009 , 364 , 1211-1221. [CrossRef] [PubMed]\n - 10. Karl, F. A Free Energy Principle for Biological Systems. Entropy 2012 , 14 , 2100-2121. [CrossRef]\n - 11. Corcoran, A.W.; Pezzulo, G.; Hohwy, J. From allostatic agents to counterfactual cognisers: Active inference, biological regulation, and the origins of cognition. Biol. Philos. 2020 , 35 , 32. [CrossRef]\n - 12. Heins, C.; Millidge, B.; Da Costa, L.; Mann, R.P.; Friston, K.J.; Couzin, I.D. Collective behavior from surprise minimization. Proc. Natl. Acad. Sci. USA 2024 , 121 , e2320239121. [CrossRef] [PubMed]\n - 13. Patzelt, E.H.; Hartley, C.A.; Gershman, S.J. Computational Phenotyping: Using Models to Understand Individual Differences in Personality, Development, and Mental Illness. Personal. Neurosci. 2018 , 1 , e18. [CrossRef] [PubMed]", - "page_start": 29, - "page_end": 29, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "quantities as its target: the variational free energy ( VFE ) in the case of perception and the expected free energy ( EFE ) in the case of action. The VFE is the free energy associated with a given sensory observation and is resolved perceptually by updating beliefs about the environment. The EFE is the free energy that is expected in the future, contingent on a given policy or course of action. Choosing action policies associated with a low EFE lead to reducing uncertainty about the environment, as well as making preferred observations more likely.\n\n## 2.1. POMDPs in Active Inference\n\nIn AIF, the POMDP is one of the most common families of generative models used to make inferences about the environment. It is a Markovian discrete state-space model, where employing it means representing the environment and observations as inhabiting one among a set of possible (possibly multidimensional) states, and that the changes in these states can only depend on the system's previous state and the agent's actions. 
Environmental states are not directly observable, so they have to be inferred based on incoming sensory observations. In AIF for POMDPs and other generative models in general, both perception and action are cast as Bayesian inferences (see Sections 2.2 and 2.3), as well as the learning of parameters of the generative model (see Section 2.4). Crucially, an agent's generative model does not a priori have to be isomorphic to the true environment (i.e., the data-generating process), although this will generally lead to a successful inference, and that the generative model will therefore often come to resemble the environment through learning.\n\nAdiscrete state-space POMDP in AIF is conventionally defined by five main sets of parameters: A , B , C , D and E [1,33], see Figure 1. Together, these parametrise the agent's prior beliefs about the prior probability of different states in the environment, how states of the environment change and how they generate observations. Typically, they will be vectors, matrices or tensors; however, henceforth we denote them by their corresponding letter in bold. These make up the components needed for the agent to perform AIF.\n\nA , also called the observation model , represents the state-to-observation likelihood model. This describes how observations depend on or are generated by states of the environment. It is structured as a matrix with a column for each possible environmental state s , and a row for each possible observation o . Each column is then a categorical probability distribution over the observations that will occur given the environmental state (meaning that each column must contain non-negative values that sum to 1). If the observations are multidimensional (i.e., multiple observations are made at each time point), there is a matrix for each observation modality. If two or more states determine the observation, the likelihood model then becomes a tensor. 
If A is imprecise (i.e., the probabilities are highly entropic and evenly distributed), observations are taken to carry less information about the environment, in many cases leading to more uncertain inferences, and vice versa.", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "FIG. 3: Effective pair potentials derived for MSA3 and BIMSA3. (a) Cation anion (dashed line: without taking the pair into account), (b) pair cation, (c) pair anion, and (d) pair pair. The internal potential of the pair β ˜ V int ( r ) is set equal to βV eff ij ( r ) for distances less than 4 ˚ A.\n\n<!-- image -->\n\nrapolating the original potential at the barrier separating pairs from free ions (as shown in Fig. 3). We assume that the interaction potential is averaged over the rotational degrees of freedom of the CIP and thus pairwise additive. Hereafter, the quantities referring to such a three-component model are written with a tilda symbol. The short-range potentials involving the pair can be derived, in the infinite dilution limit, from an average of the contributing ion interactions. In Fourier space,\n\n˜ V SR 3 i ( k ) = w ( k / 2) [ V SR 1 i + V SR 2 i ] ( k ) , i = 1 , 2 (2a)\n\nwhere ˜ w ( r ) is the pair probability distribution\n\n˜ ˜ V SR 33 ( k ) = ˜ w ( k / 2) 2 [ V SR 11 + V SR 22 +2 V SR 12 ] ( k ) (2b)\n\n˜ w ( r ) = K -1 0 e -β ˜ V int ( r ) (2c)\n\n˜ V int ( r ) is the internal part of the pair potential (see Fig. 3), and K 0 is the association constant, defined as:\n\nK 0 = ∫ ∞ 0 d r 4 πr 2 e -β ˜ V int ( r ) = 0 . 43 L . mol -1 (3)\n\nThe excess free-energy density of the original system βf ex v is that of the three component mixture β ˜ f ex v plus a correction term\n\nβf ex v = β ˜ f ex v -˜ ρ 3 ln K 0 , (4)\n\nwhich is due to the change in standard chemical potential between the two component and three component models. 
It should be noted that the fraction of pairs is now an additional parameter in the minimization scheme, which serves to ensure chemical equilibrium. Within this representation, the pair can be modeled as a hard sphere (MSA3) or as a dumbbell-like CIP (BIMSA3) [4]. Since\n\nFIG. 4: (Color online) Excess free-energy density βf ex v as a function of the square root of the concentration √ c . (diamond) MC simulations, (dot dashed) MSA2, (dashed) MSA3, (solid) BIMSA3, (dot) DHLL, and (cross) experiments. The inset gives the fraction of pairs (MSA3, BIMSA3) as a function of √ c .\n\n<!-- image -->\n\nwe have no additional information, we consider only symmetric dumbbells. Furthermore, since analytic expressions for the RDF within BIMSA are not known, we approximate the dumbbell as a hard sphere when computing the perturbation term (this is not necessary for the reference term, since an expression for the free energy is available). Let ˜ σ c be the diameter of the cation (anion) within the dumbbell, the diameter of the hard sphere representing this dumbbell is taken to be σ 3 = 4 √ 2 π σ c [21].\n\n˜ ˜ Using these two reference systems, the threecomponent MSA3 and BIMSA3, we obtain results in much better agreement with the MC simulations, as shown in Fig. 4. The diameters obtained for species 1, 2, and 3 are 3.65, 4.79, and 5.76 ˚ A for MSA3 and 3.69, 4.75 and 6.19 ˚ A for BIMSA3. The free ion diameters are similar for MSA2, MSA3, and BIMSA3. The pair diameter is smaller when modeled as a hard sphere (MSA3) than when modeled as a dumbbell (BIMSA3). At high concentration (about 1 mol l -1 ), the MSA3 overestimates the free energy, because the excluded volume repulsion becomes too important for the pairs to be represented as hard spheres. The BIMSA3 model is the closest to the MC simulation results. It is worth noting that even at the lowest concentration considered, the fraction of pairs (shown in the insert of Fig. 
4), although less then 5%, has a non-negligible effect on the thermodynamics of the system.\n\nThis procedure also provides an accurate description of the structure over the whole range of concentrations. A development similar to the one that leads to Eq. (2) derives the average unpaired RDF from the corresponding paired quantities:", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2648.pdf" - }, - { - "text": "## Management's Discussion and Analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\n## Apartment Property Expenses\n\nSame store apartment property expenses increased 5.5% for the year ended December 31, 2013, due primarily to increased utility and fuel expenses as a result of high natural gas prices in Atlantic Canada, and higher electricity costs.\n\n## Utility and Fuel Expense - Same Store\n\nFor the years ended December 31,\n\n| | 2013 | 2012 | % change |\n|---------------------------------|---------|---------|------------|\n| natural gas | $4,565 | $2,729 | 67.3% |\n| oil | 1,523 | 2,095 | (27.3)% |\n| electricity | 5,197 | 4,671 | 11.3% |\n| Water | 3,582 | 3,474 | 3.1% |\n| other | 30 | 33 | (9.1)% |\n| Total utility and fuel expenses | $14,897 | $13,002 | 14.6% |\n\nKillam's apartment properties are heated with a combination of natural gas (55%), electricity (36%), oil (8%) and other sources (1%).\n\nElectricity costs at the unit level are usually paid directly by tenants, reducing Killam's exposure to the majority of the 4,500 units heated with electricity. Fuel costs associated with natural gas or oil fired heating plants are paid by Killam. As such, the Company is exposed to fluctuations in natural gas and oil costs, which represent 40.9% of total same store utility and fuel costs in 2013. 
Killam invests in green initiatives at its properties to maximize efficiencies, including converting many of its Halifax properties to natural gas from oil over the last three years as natural gas infrastructure has been expanded in the city. The decision to convert was supported by the substantial price difference between the cost of natural gas and oil in recent years.\n\nAs noted in the table above, Killam's utility and fuel expenses increased 14.6% in 2013 compared to 2012. The increase was primarily attributable to higher natural gas, electricity costs and water costs.\n\nKillam's natural gas expenses increased by 67.3% in 2013 due to higher gas prices in Atlantic Canada and an increase in properties burning natural gas following conversions of certain Halifax heating plants from oil to gas in 2012 and 2013. The reduction in oil expense in the quarter and year-to-date reflects this reduction in oil exposure.\n\nAs the following chart highlights, the per gigajoule (Gj) commodity cost for natural gas in New Brunswick and Nova Scotia was much higher than NYMEX in 2013 and less correlated to NYMEX than in previous years. (NYMEX is the New York Mercantile Exchange, a commodity futures exchange. Henry Hub, a gas distribution hub in Louisiana is the pricing point for natural gas futures contracts traded on NYMEX). The cost of natural gas in Atlantic Canada and New England experienced a spike from December 2012 until late spring 2013 and a second spike in December 2013, compared to other areas of Canada. Those spikes were both due to increased demand from utilities in Northeast New England and a shortage of gas pipeline capacity in Northeastern New England and Atlantic Canada. A temporary decline in gas supply off the coast of Nova Scotia further contributed to the high pricing in the first part of the year.\n\n## Historic Natural Gas Pricing ($ per Gj) Henry Hub Vs. 
Heritage Gas\n\n<!-- image -->", - "page_start": 37, - "page_end": 37, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## 2.3. Action in Active Inference\n\nAs with perception, action in AIF is guided by the minimisation of free energy. However, instead of VFE being minimised directly, it is the free energy that is expected to occur depending on the actions taken by the agent-the expected free energy or EFE -that is minimised. As stated below, choosing actions that minimise the EFE leads to a natural balance between exploration and exploitation, ensuring preferences are realised and ambiguity about the environment is minimised. In AIF, policies π are sequences of actions u . The policy length (also called the planning horizon or temporal depth) is the length of the policies being considered. The total number of policies therefore depends on the policy length and the number of different actions that can be made at each time step. An EFE is assigned to each policy π (denoted as G π ), where policies associated with a lower EFE are then more likely to be chosen.\n\nOne can rewrite the EFE in different ways to highlight different consequences of optimising it. Below, we show the two most crucial ways to rewrite it, taken from [1,33]. We denote the states and observations that are expected future outcomes of actions with (~). Additionally, we introduce a preference prior C that encodes the agent's preferences:\n\nG π = -E q ( ˜ o , ˜ s | π ) [ ln q ( ˜ s | ˜ o , π ) -ln q ( ˜ s | π )] ︸ ︷︷ ︸ Information gain -E q ( ˜ o | π ) [ ln p ( ˜ o | C )] ︸ ︷︷ ︸ Pragmatic value (17)\n\nThe expression above shows how minimising the EFE leads to a natural balance between information gathering and realising preferences. The first term on the right-hand side is the change in belief from the prior to the posterior under a given policy called the epistemic value or information gain. Optimising this value is what leads to (notably non-random) exploratory behaviour. 
The second term is the pragmatic value; minimising this value ensures that observations are in accordance with the preference prior C .\n\nAnother way to express the EFE is in terms of risk and ambiguity:\n\nG π = E q ( ˜ s | π ) [ H ( p ( ˜ o | ˜ s ))] ︸ ︷︷ ︸ Expected ambiguity + D KL [ q ( ˜ o | π ) ∥ p ( ˜ o | C )] ︸ ︷︷ ︸ Risk (outcomes) (18)\n\nHere, the first term on the right-hand side captures the expected entropy, or uncertainty, of the outcomes given the environmental states. Minimising this quantity ensures that the agent will seek states where observations can most clearly be used to distinguish between environmental states. The second term is the KL divergence of the expected observations from preferred observations, capturing the risk of making unwanted (i.e., a priori surprising) observations, which is also minimised by minimising the EFE .\n\n## 2.4. Learning in Active Inference\n\nIn AIF, the parameters of the generative model can also be updated via Bayesian-beliefupdating methods, a process called 'parameter learning' or sometimes just 'learning' [2]. In general, this is performed by introducing belief distributions over the possible values of the parameters that are subject to learning, and updating this distribution for each observation using Bayesian belief updating. This additionally implies introducing priors on the belief distributions. Depending on the type of generative model used, the belief distributions and their priors will take different forms, and so will their update equations. In the following, we demonstrate parameter learning specifically in the context of POMDPs.\n\nThe parameters that are subject to learning in POMDPs are usually the entries in the five matrices. Since the matrices consist of categorical probability distributions, it is natural to use Dirichlet distributions-distributions over categorical probability distributions-as belief distributions over their values [33,52]. 
Beliefs about each probability distribution", - "page_start": 9, - "page_end": 9, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "can be achieved only when the annihilation processes are enhanced by Higgs resonances. Therefore, the mass of the RH neutrino DM should be around a half of Higgs boson masses. We have also calculated the elastic scattering cross section between the DM particle and a proton and found it within the reach of future experiments for the direct DM search.\n\n## Appendix A: The Higgs sector\n\nThe Higgs potential (4) contains five parameters: m 2 1 , m 2 2 , λ 1 , λ 2 and λ 3 . These parameters can be rewritten in terms of two Higgs VEVs, two physical Higgs masses and the mixing angle between them. The stationary conditions are\n\nm 2 1 + λ 1 v 2 + 1 2 λ 3 v ' 2 = 0 , (A1)\n\nm 2 2 + λ 2 v 2 + 1 2 λ 3 v ' 2 = 0 . (A2)\n\nThe physical Higgs masses are given by Eqs. (8) and (9) with the mixing angle that θ satisfies\n\ntan 2 θ = -λ 3 vv ' ( λ 1 v 2 -λ 2 v ' 2 ) . (A3)\n\nHiggs self interaction terms are expressed as\n\nL int = λ 1 vφ 3 + λ 2 v ' ψ 3 + 1 2 λ 3 ( vφψ 2 + v ' ψφ 2 ) + 1 4 ( λ 1 φ 4 + λ 2 ψ 4 + λ 3 φ 2 ψ 2 ) , (A4)\n\nin terms of φ and ψ . With Eq. (7), these are rewritten in terms of h and H with θ as\n\nL int = [ λ 1 v cos 3 θ -λ 2 v ' sin 3 θ + 1 2 λ 3 ( v cos θ sin 2 θ -v ' sin θ cos 2 θ ) ] hhh + [ 3 λ 1 v cos 2 θ sin θ +3 λ 2 v ' sin 2 θ cos θ + 1 2 λ 3 ( v (sin 3 θ -2 cos 2 θ sin θ ) + v ' (cos 3 θ -2 sin 2 θ cos θ )) ] hhH + [ 3 λ 1 v cos θ sin 2 θ -3 λ 2 v ' sin θ cos 2 θ + 1 2 λ 3 ( v (cos 3 θ -2 sin 2 θ cos θ ) + v ' ( -sin 3 θ +2sin θ cos 2 θ )) ] hHH + [ λ 1 v sin 3 θ + λ 2 v ' cos 3 θ + 1 2 λ 3 ( v sin θ cos 2 θ + v ' sin 2 θ cos θ ) ] HHH +four point interactions . (A5)\n\nWe can read off a Higgs three point vertex from Eq. 
(A5).", - "page_start": 8, - "page_end": 8, - "source_file": "1002.2525.pdf" - }, - { - "text": "Log in\n\n<!-- image -->\n\nHome / Arts and Entertainment / New Artificial Intelligence Summit Series Begins With Energy\n\n<!-- image -->\n\nARTS AND ENTERTAINMENT\n\n## New Artificial Intelligence Summit Series Begins With Energy\n\n07/31/2024\n\n(AI) continues to transform the United States and the world. To promote and inform rapid advancements in AI and maintain America's global competitiveness, the Special Competitive Studies Project (SCSP), a nonprofit and nonpartisan initiative with a goal of making recommendations to strengthen America's long-term competitiveness in AI, announces the AI+ Summit Series.\n\nThe series kicks off with the topic of energy. The AI + Energy Summit, scheduled for September 26, 2024, in Washington, D.C., will bring together policy makers, energy industry leaders, top government and academic energy researchers, and technologists to address the challenges of AI's energy consumption and develop solutions for a resilient and abundant energy future. The event also aims to address the implications of AI and energy for national security and promote partnerships between AI and energy stakeholders.\n\nAI and other emerging technologies can help the United States take the lead in energy areas including maximizing energy efficiencies, discovering new materials, and enabling new forms of power generation. AI also has a role to play in overcoming energy challenges. The Department of Energy (DOE) already uses AI in several areas including advanced computing, emergency response, environmental modeling, climate forecasting, and materials research.\n\nSCSP's recent 'Action Plan for U.S. Leadership in Next-Generation Energy,' raises many issues related to AI and energy, including recommendations for the government to bring America forward. The AI+ Energy Summit will highlight these and other issues, and promote collaboration to solve problems. 
The stakes are high; if the U.S. falls short on energy, American adversaries could gain the upper hand in AI leadership, according to SCSP experts.\n\nVisit scsp.ai to learn more about the AI+Energy Summit and the SCSP's Next-Generation Energy Action Plan.\n\nArticle Link\n\nhttps://about.newsusa.com/new-artificial-intelligence-summit-series-begins-with…\n\n## RELATED ARTICLES\n\n<!-- image -->\n\n<!-- image -->\n\nMar 06, 2024\n\nCelebrate St. Patrick's Day with No Booze, Just Pure Irish Fun and Entertainment\n\nMar 06, 2024\n\nExplore Downtown San Pedro with Flair: Ride the Iconic Red Car Trolley for Free\n\nMar 06, 2024\n\nSay Hello to Your Big Break at the Stapleton Library Job Fair in Vocation, Trade, or Civil Service\n\n<!-- image -->\n\n<!-- image -->\n\nFeb 22, 2024\n\nRetrain Your Emotional Brain: A Natural Alternative to Weight Loss Drugs\n\n© Copyright NewsUSA 2025. All Rights Reserved.\n\nFeb 21, 2024\n\nSerial Entrepreneur Teaches Us How to Go the Distance in Business and in Life\n\nNEWSUSA\n\nMar 06, 2024\n\nLocal Artists Collaborate for a Unique Fusion of Groove and Collage\n\nFASHION\n\nBUSINESS\n\nINFOGRAPHIC\n\nENVIRONMENT\n\nHEALTH\n\nMONEY\n\nFOOD\n\nTRAVEL\n\nBRIDAL\n\nRECREATION\n\nTECHNOLOGY\n\nHOME\n\nEDUCATION\n\nARTS & ENTERTAINMENT\n\nAUTO\n\nCHILDREN\n\nFITNESS\n\nHOLIDAY\n\nINSURANCE\n\nLAWN & GARDEN\n\nLISTICLE\n\nNUTRITION\n\nPARENTING\n\nPETS\n\nSEASONAL\n\nSENIORS\n\nSPANISH\n\nTIPS AND HOW TO\n\nENTERTAINMENT\n\nCAREER\n\nCOMMUNITY\n\nFAMILY\n\nTIPS\n\nINTERNET\n\nHUMAN\\_INTEREST\n\nBEAUTY\n\nARTS\n\nREALESTATE\n\nSAFETY\n\nMEDICINE\n\nBOOK\\_REVIEW\n\nRECIPE\n\nAFRICAN\\_AMERICANS\n\nHOW\\_TO\n\nBYLINED\\_COLUMN\n\nCHARITY\n\nSPORTS\n\nHOME\\_IMPROVEMENT\n\nTECH\n\nWELLNESS\n\nARTS AND ENTERTAINMENT\n\nFOOD & DRINK\n\nREAL\\_ESTATE\n\nVETERANS\n\nOUTDOORS\n\nREAL ESTATE\n\nHUMAN INTEREST\n\nMONEY & FINANCE\n\nFASHION & BEAUTY\n\nMONEY AND FINANCE\n\nBOOKS & ENTERTAINMENT\n\nBOOKS\n\nARTS & ENTERTAINMENT\n\nCATEGORIES\n\nRECENT 
POSTS", - "page_start": 0, - "page_end": 0, - "source_file": "news1.pdf" - }, - { - "text": "the dominant dynamic process, but does not allow one to probe this assumption. In Section III B we show how one may develop a dynamical density functional theory (DDFT) that describes the system at a similar level to the KMC. However, the DDFT may also be easily extended to include other effects such as fluid diffusion, that the KMC does not incorporate.\n\n## A. Kinetic Monte Carlo model\n\nThe kinetic Monte Carlo model for two-dimensional dewetting nanofluids [33] was first proposed in Ref. [35] and extended to include next-nearest neighbour interactions in [37]. The two key assumptions used are: (i) the relevant processes can be mapped on to a two-dimensional lattice gas model, thereby neglecting continuous changes in the thickness of the evaporating film, and (ii) all relevant dynamics results from diffusing nanoparticles and evaporating/condensing solvent.\n\nThe model builds on an Ising-type model for the liquid-gas phase transition. The surface is divided up into a regular array of lattice sites whose size is dictated by the nanoparticles. One then considers each lattice site to be occupied either by a nanoparticle, liquid or vapour. This effectively maps the system onto a two-dimensional two-component lattice gas having two fields n and l . The resulting three possible states of a cell are: liquid ( l = 1 , n = 0 ), nanoparticle ( l = 0 , n = 1 ), and vapour ( l = 0 , n = 0 , i.e., cell empty). The energy of an overall configuration is given by the hamiltonian\n\nE = -ε nn 2 ∑ <ij> n i n j -ε nl 2 ∑ <ij> n i l j -ε ll 2 ∑ <ij> l i l j -µ ∑ i l i (3)\n\nwhere ∑ <ij> denotes a sum over nearest neighbour pairs and ε ll , ε nn and ε nl are the liquid-liquid, particle-particle and liquid-particle interaction energies, respectively. 
Fixing the three interaction strength parameters ε ll , ε nn , ε nl and the effective chemical potential µ determines the equilibrium state of the system. We choose ε ll as unit of energy - i.e. we set ε ll = 1 .\n\nThe hamiltonian determines the equilibrium state and the energy landscape of the system. However, as the system 'dries in' during the course of the solvent evaporation, the final nanoparticle configurations do not necessarily represent equilibrium structures. This implies that the system dynamics is of paramount importance. It is determined by the possible Monte Carlo moves, their relative frequencies, and the probabilities for their acceptance. Two types of moves are allowed: (i) evaporation/condensation of liquid and (ii) diffusion of nanoparticles within the liquid. A mobility M corresponds to the ratio of cycles of particle and solvent moves and reflects the physical ratio of", - "page_start": 8, - "page_end": 8, - "source_file": "1001.2669.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed1.pdf", - "query": "How could the heart rate be estimated by means of an active inference paradigm?", - "target_page": 6, - "target_passage": "The second panel of Fig. 2 shows the Shannon surprise of an inference model that estimates the current heart rate using the two standard components of a generative model. The for- mer component is the prior, which encodes the person’s a priori probabilistic belief (i.e. probability distribution) about her “nor- mal” heart rate range; here, the prior is a Gaussian centered on 67 and has a precision of 0.11. 
The latter component is the likeli- hood, which encodes the probabilistic mapping between sensory (heartbeat) observations and the hidden state (heart rate); here, the likelihood is a Gaussian centered on the current heart rate with an additional bias of 15 pulses, and the panel shows the results for 10 values for precision obtained by subdividing the range [0.1,10] into equal intervals.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "Finally, there is a third essential element that determines the accuracy of the inference: precision control. In predictive coding, the influence of prediction errors on inference is weighted by their precision, i.e. inverse variance (pink triangles in Fig. 1). This weighting would ensure that very reliable sensations have more impact on inference than unreliable sensations. However, precision (like all other variables) needs to be estimated, but this might be incorrect. An incorrect setting of precisions has been associated with various psychopathological conditions, such as psychosis (Adams et al. 2013), eating disorders (Barca and Pezzulo 2020), panic disorders (Maisto et al. 2021), symptom perception (Pezzulo et al. 2019), depression (Barrett et al. 2016), and many others (Khalsa et al. 2018, Paulus et al. 2019). Intuitively, assigning excessively high weight to noisy sensations yields an incorrect\n\ninference that tracks the noise rather than the correct state of the estimated variable system (i.e. overfitting), whereas assigning excessively low weight to sensations (or excessively high weight to prior knowledge) makes the system poorly responsive to incoming observations that might signal a change in the state of the system-and both are examples of aberrant inference (Friston et al. 2014).\n\nFigure 2 provides a formal illustration of the above by plotting some examples of Bayesian inference using generative models under various levels of precision of the model components. 
For simplicity, we focus on a simplified example of inference of an interoceptive variable: one's heart rate. Heart rate is a 'hidden variable' in Bayesian parlance since it is not directly observable but needs to be inferred through two sources of information: prior knowledge about the most likely heart rate and sensory (heartbeat) observations. The top panel of Fig. 2 shows a series of (noisy) heartbeat observations. In the beginning, they are in the normal range for an adult (time steps 1-10), then they increase significantly, simulating tachycardia (time steps 11-20), then they go back to the normal range (time steps 21-30), then they decrease significantly, simulating bradycardia (time steps 31-40), and finally, they go back to the normal range (time steps 41-50).\n\nThe second panel of Fig. 2 shows the Shannon surprise of an inference model that estimates the current heart rate using the two standard components of a generative model. The former component is the prior, which encodes the person's a priori probabilistic belief (i.e. probability distribution) about her 'normal' heart rate range; here, the prior is a Gaussian centered on 67 and has a precision of 0.11. The latter component is the likelihood, which encodes the probabilistic mapping between sensory (heartbeat) observations and the hidden state (heart rate); here, the likelihood is a Gaussian centered on the current heart rate with an additional bias of 15 pulses, and the panel shows the results for 10 values for precision obtained by subdividing the range [0.1,10] into equal intervals. The results shown in the second panel of Fig. 2 show that Shannon surprise increases dramatically during episodes of tachycardia and bradycardia, which are far from the normal range. The pattern of results is the same across all levels of likelihood precision. 
However, the inference with a very high precision (a precision of 10) tracks more closely the noise sensory signals and can therefore lead to more extreme results.", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed1.pdf" - }, - { - "text": "Figure 1. Depiction of a POMDP generative model. This encodes the agent's expectations about how the state s of the environment changes over time t , and how it generates observation o at each time step. A , also called the observation model, describes how environmental states give rise to observations. B , also called the transition model, describes how environmental states change over time, depending on action u (called policy π when structured into sequences). C is the preference prior, which encodes the agent's preferences for observations. This shapes the expected free energy G associated with each policy, which is used for policy selection. D encodes the agent's prior belief over environmental states before making any observations, and E is the prior over policies that determines the agent's preferences for policies in the absence of other motivation.\n\n<!-- image -->\n\n## 2.2. Perception in Active Inference\n\nIn AIF, perception is conceptualised as the result of variational (i.e., approximate) Bayesian inference, performed by minimising the VFE to optimise parameters of posterior beliefs about the environment. In exact Bayesian inference, we use a parametrised generative model m to make an optimal inference about state s of the environment based on observation o . 
This is performed by combining a prior belief over states p ( s | m ) ; a likelihood model p ( o | s , m ) ; and the model evidence p ( o | m ) , a normalisation term encoding the likelihood of receiving the given observations across all possible environmental states, as follows [1]:\n\np ( s | o , m ) = p ( o | s , m ) p ( s | m ) p ( o | m ) (1)\n\nThe posterior distribution over states given observations p ( s | o , m ) here represent the agent's beliefs about the environment. Forming beliefs in this way is thought to be the process that enables conscious, as well as unconscious, perception. The product of the likelihood model and prior is also called the joint likelihood p ( o , s | m ) , which fully defines the generative model, and which we use henceforth. In the following, for notational simplicity, we also omit denoting the dependency on the generative model m .\n\nCalculating the model evidence p ( o ) is often intractable, making exact Bayesian inference unfeasible. The way to circumvent this in AIF is to use a variational approximation to Bayesian inference [23,33,50,51]. This works by transforming the inference into an optimisation problem, specifically the minimisation of the VFE . First, an arbitrary probability distribution over environmental states q ( s ) , an approximate posterior that is used to approximate the exact posterior, is introduced. We then introduce the Kullback-Leibler (KL)", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "Figure 2. A simplified example of (Bayesian) inference of one's heart rate. First panel: simulated time series of heartbeat observations. Second panel: Shannon surprise of a generative model composed of a fixed prior about heart rate (a Gaussian with a mean of 67 and a precision of 0.11) and a likelihood (a Gaussian centered on the current heart rate with an additional bias of 15 pulses, with various precisions that vary between 0.47 and 10, see the legend). 
Third panel: Bayesian surprise, which measures the discrepancy between posterior and prior probabilities over time. Bottom panels: the two series of panels are organized in two (left and right) columns, which show the first five time steps of inference for the two cases with high precision (of 10) and low precision (of 0.1) of the likelihood, respectively. See the main text for an explanation and online article for colored version of this figure.\n\n<!-- image -->\n\nthe current model generate significant surprise, and sometimes, the surprise can remain relatively high for long periods before the model adapts (or the world changes), especially with some parameterizations of the generative model. This is particularly relevant in this context since active inference agents strive to minimize their surprise (and the long-term average of surprise, entropy, which is a measure of uncertainty) by changing their model, or changing the world, or both.\n\nSecond, these examples illustrate the importance of precision control and the appropriate setting of precision parameters in guiding inference. Remarkably, the inference can be more or less accurate or fast using the same data, depending on the precision parameters. Note that in Fig. 2, we manipulated only the precision of the likelihood. However, it would also be possible to manipulate the precision of the prior, together or in alternative to the precision of the likelihood. Generally speaking, when the precision of the", - "page_start": 6, - "page_end": 6, - "source_file": "pubmed1.pdf" - }, - { - "text": "The third panel shows the Bayesian surprise (or the KullbackLeibler divergence between posterior and prior probability distributions) over time. 
This is a measure of how much dissimilar the posterior and the prior are, and it always decreases as a result of inference, but note that it decreases much more rapidly when the precision of the likelihood is 10, which is another indication that the posterior is 'overfitting,' meaning that the inference result is excessively biased by the likelihood distribution.\n\nFinally, the two bottom series of panels are organized in two (left and right) columns, which show the first five time steps of inference for the two cases with high precision (of 10) and low precision (of 0.1) of the likelihood, respectively. In these plots, the prior distributions are in blue, the posterior distributions are in green, and the likelihoods are in red. It is possible to note that in the left (high precision) panels, the posterior inference closely follows the likelihood (it 'overfits') after five time steps and the inferred heart rate is slightly biased (i.e. it is 79). Differently, in the right (low precision) panels, the inference converges much slower to a high precision posterior, but without overfitting.\n\nThese simple examples of Bayesian inference illustrate two things. First, sensory observations that are unpredictable given", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed1.pdf" - }, - { - "text": "Equipped with a generative model like the one shown in Fig. 1, an active inference agent can continuously infer (and act upon) the state of the world and of the body, including the internal milieu, at multiple time scales. Of particular interest, here are multimodal inferences that unite exteroceptive and interoceptive sources of evidence. One example of this is the perception of faces expressing emotions. Two studies reported that", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed1.pdf" - }, - { - "text": "quantities as its target: the variational free energy ( VFE ) in the case of perception and the expected free energy ( EFE ) in the case of action. 
The VFE is the free energy associated with a given sensory observation and is resolved perceptually by updating beliefs about the environment. The EFE is the free energy that is expected in the future, contingent on a given policy or course of action. Choosing action policies associated with a low EFE lead to reducing uncertainty about the environment, as well as making preferred observations more likely.\n\n## 2.1. POMDPs in Active Inference\n\nIn AIF, the POMDP is one of the most common families of generative models used to make inferences about the environment. It is a Markovian discrete state-space model, where employing it means representing the environment and observations as inhabiting one among a set of possible (possibly multidimensional) states, and that the changes in these states can only depend on the system's previous state and the agent's actions. Environmental states are not directly observable, so they have to be inferred based on incoming sensory observations. In AIF for POMDPs and other generative models in general, both perception and action are cast as Bayesian inferences (see Sections 2.2 and 2.3), as well as the learning of parameters of the generative model (see Section 2.4). Crucially, an agent's generative model does not a priori have to be isomorphic to the true environment (i.e., the data-generating process), although this will generally lead to a successful inference, and that the generative model will therefore often come to resemble the environment through learning.\n\nAdiscrete state-space POMDP in AIF is conventionally defined by five main sets of parameters: A , B , C , D and E [1,33], see Figure 1. Together, these parametrise the agent's prior beliefs about the prior probability of different states in the environment, how states of the environment change and how they generate observations. Typically, they will be vectors, matrices or tensors; however, henceforth we denote them by their corresponding letter in bold. 
These make up the components needed for the agent to perform AIF.\n\nA , also called the observation model , represents the state-to-observation likelihood model. This describes how observations depend on or are generated by states of the environment. It is structured as a matrix with a column for each possible environmental state s , and a row for each possible observation o . Each column is then a categorical probability distribution over the observations that will occur given the environmental state (meaning that each column must contain non-negative values that sum to 1). If the observations are multidimensional (i.e., multiple observations are made at each time point), there is a matrix for each observation modality. If two or more states determine the observation, the likelihood model then becomes a tensor. If A is imprecise (i.e., the probabilities are highly entropic and evenly distributed), observations are taken to carry less information about the environment, in many cases leading to more uncertain inferences, and vice versa.", - "page_start": 4, - "page_end": 4, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "participants processed faces expressing fear (but not neutral faces or faces expressing other emotions) when their heart rate was high-hence congruent with the fearful expression (Pezzulo et al. 2018, Yu et al. 2021). The generative model shown in Fig. 1 could support this kind of inference by using interoceptive information from the heart (i.e. high heart rate) as evidence that 'there might be something fearful out there' (Pezzulo 2013). Another more complex example regards emotional awareness and self-awareness-which significantly engage the brain regions involved in interoception and the representation of physiological processes (Garfinkel et al. 2013). The generative model shown in Fig. 1 might support processes of emotional awareness in a way that is neither purely bottom-up (i.e. 
as if interoceptive signals cause emotional awareness) nor top-down (i.e. as if emotional awareness causes interoceptive signals), but rather through a circular causality between central predictions about bodily statethat engage autonomic reflexes-and interoceptive streams-that update the predictions (Seth and Friston 2016). In this perspective, any representation that induces interoceptive predictions could be associated with emotional or affective content; crucially, this is also the case with some aspects of self-awareness (e.g. recognizing one's own face) that require integrating interoceptive streams with concurrent exteroceptive (e.g. visual) and proprioceptive cues. These examples illustrate that the generative model of Fig. 1 natively implements both the multisensory integration required to unite (for example) interoceptive and exteroceptive streams and the active aspects that are supposed to support emotional and self-processing-and the construction of an 'embodied self' (i.e. the circular causality between engaging autonomic reflexes and capturing the ensuing interoceptive signals).\n\nIn general, the accuracy of the inference of hidden bodily states, the 'embodied self,' or other aspects of the model depends on the signal-to-noise ratio of the sensations and on the quality of the model. For example, it is difficult to self-localize in a city if it is dark (low signal-to-noise ratio) or if one does not know the city well (poor model). The inference of hidden bodily and emotional states might function in an analogous manner. If the quality of the afferent interoceptive (e.g. cardiac) signals is low, or if one has a poor model of how one's body functions, then it would estimate one's bodily states such as fatigue incorrectly (which in turn would also impair its adaptive regulation of the same bodily states). Interoceptive signals could be 'too noisy' for various reasons, which might be related to physiology, inflammation, or stress. 
The body model can be poor in various ways, too. For example, it could poorly characterize the statistical relations between interoceptive sensations and hidden bodily states (e.g. systematically mischaracterize high heart rate as caused by hunger but not fatigue or joy).", - "page_start": 5, - "page_end": 5, - "source_file": "pubmed1.pdf" - }, - { - "text": "uncertainty about one's estimated state. This means that active inference agents tend to avoid ambiguous states, encompassing the avoidance of ambiguous places where self-localization is challenging, ambiguous social situations where safety is uncertain, and ambiguous bodily states, such as unsure feelings of fatigue. However, one apparent exception to this aversion to ambiguity arises when exploring novel states implies the opportunity to learn new things and enhance one's model; see Friston et al. (2017) for a discussion. Furthermore, and importantly, active inference agents will actively operate in the environment to reduce their ambiguity; for example, by actively seeking informative sensations that disambiguate in which location they are (e.g. by looking for traffic signs), whether their social context is safe or unsafe (e.g. by trying to understand other's intentions from their facial expressions and actions), or whether they are currently fatigued (e.g. by putting attention to one's heart), happy, or sad.\n\nThe last examples-disambiguating one's fatigue and emotional states-may seem strange if one assumes that we do have direct access to the body- and allostasis-related states (e.g. states of satiation, thirst, and fatigue) and to our emotions (e.g. we automatically know whether we are happy or sad). 
However, one assumption of active inference is that one's bodily and emotional states are not necessarily observable but, instead, 'hidden states' that need to be inferred on the basis of sensations (especially, but not exclusively, of interoceptive sensations from the inside of the body) and of an implicit, unconscious model of how the body functions (Barrett and Simmons 2015, Pezzulo et al. 2015, Seth and Friston 2016). In other words, the same inferential process that allows active inference agents to estimate the hidden state of the external environment (e.g. the presence or absence of an object in the environment) is also used to estimate other hidden states, such as fatigue, happiness, or sadness. This implies that one can also be wrong, or be fooled, about these states; for example, we could experience the 'interoceptive illusion' of feeling more fatigued than our physiological parameters would afford (Iodice et al. 2019).", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed1.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nFigure 5. A learning for the actual reward condition (reward condition left). The agent correctly learned the probability of receiving rewards in the rewarding arm. It did not learn the probabilities of the non-rewarding arm since it did not explore that option. The color grading signifies the likelihood of an observation being generated by a specific state. The more saturated the color, the higher the likelihood.\n\n<!-- image -->\n\n## 4.3. Fitting the Model to the Data\n\nSimulations are useful for a variety of purposes, like exploring the consequences of different priors and parameters and establishing the face validity of hypothetical mechanisms underlying behavioural phenomena. 
However, we often want to use models to make inferences about specific observed phenomena, like the differences in behaviour between various populations, as in computational psychiatry [14]. One standard method here is model fitting, where we estimate the parameter values (e.g., prior beliefs) of an AIF model that are the most likely given some observed behaviour of a participant. This is often performed with approximate Bayesian methods. In the cognitive and behavioural sciences, the predominant method is Markov Chain Monte Carlo (MCMC) methods [34], which are slower but in the limit can estimate parameter posteriors without making assumptions about their functional form. An alternative, which is more often used in other fields and also available in ActiveInference is variational methods, which are faster but require making assumptions about the functional form of the posterior. In general, MCMC methods are favourable when making parameter inferences (i.e., comparing parameters of the same model fitted to different data, like two groups of subjects). When performing a Bayesian model comparison (i.e., comparing different models fitted to the same data), the different approaches rely on different approximations of the model evidence, with the variational", - "page_start": 23, - "page_end": 23, - "source_file": "pubmed7_cc4.pdf" - }, - { - "text": "prior is very high, the posterior will closely reflect the prior, rendering the inference rigid and incapable of adapting to changing environmental conditions-which might be especially problematic in periods of significant changes, such as adolescence or more simply when one changes city, working environment, and friends. Furthermore, as shown in Fig. 1, hierarchical predictive coding architectures have precision values associated with every hierarchical level (whereas, for simplicity, the inference shown in Fig. 2 is not hierarchical). 
The correct balance of precision parameters within and across layers is crucial for accurate inference, as it ensures that the correct levels of confidence are assigned to data and prior information.\n\nFinally, and importantly, aberrant precision control (as well as various combinations of other factors discussed earlier, such as noisy bodily sensations and poor bodily mode) can render inference not just incorrect but also highly ambiguous, leaving a person in a permanent condition of uncertainty about whether one is fatigued (when considering the bodily state), happy, or sad (when considering the emotional state), what kind of person one is or what are one's desires (when considering self-models), etc. Importantly, this condition of uncertainty is not limited to perceptual inference but has a cascade effect on decision-making and action selection. Indeed, an uncertain estimate of one's state automatically implies that one has low confidence in the effects of one's plans; for example, it renders more difficult the prediction of whether a run would be too fatiguing or a party too stressful. It is exactly this kind of uncertainty (about the present and the future, the body state or the outcomes of social interactions, etc.) that active inference agents strive to avoid.\n\n## Avoiding excessive uncertainty in maladaptive ways\n\nOur previous discussion clarified that active inference agents have sophisticated (hierarchically deep, temporally extended) models of themselves that permit making inferences at multiple levels about hidden bodily states (which comprise both the classical 'body schema' and other states that are relevant for allostasis, such as hunger, thirst, and fatigue) and other states related to the emotional and embodied self. These models are essential for ensuring effective regulation and control at multiple levels, from simple reflexes to sophisticated goal-directed behaviors (Tschantz et al. 2022). 
However, in some cases, the aforementioned inferential process might not work properly (e.g. if the sensory channels are too noisy or are assigned excessively high or low precision). As a consequence, a person could experience an excessive or irreducible uncertainty about her bodily and emotional states or about the self, which in turn translates into a loss of confidence about which future courses of action could produce desired outcomes. Crucially, active inference agents follow the imperative to avoid such an uncertainty about the present or the future. Normally, uncertainty minimization strategies are adaptive (e.g. seeking advice if one is uncertain about the direction of the preferred restaurant). However, in some conditions, such as when a person experiences excessive and irreducible uncertainty and when the uncertainty is particularly distressing or related to fundamental life concerns, she might potentially seek 'maladaptive' ways to reduce it-or methods that reduce uncertainty at the cost of hindering fundamental imperatives of well-being and survival (see also Linson et al. 
2020).\n\nIn this perspective, apparently paradoxical actions, such as food restriction and self-injurious behaviors, might be pursued", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed1.pdf" - } - ] - }, - { - "references": { - "source_file": "pubmed1.pdf", - "query": "At what stage of childhood does the construction of narrative identity take place?", - "target_page": 3, - "target_passage": "Among the challenges that adolescents have to face are the structuring of a “narrative identity” or self-story, featuring the development of a sense of personal identity that integrates past experiences with current, and future goals and meanings in a coherent whole over time ", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## NSSI in adolescence\n\nAdolescence is the period of developmental transition from childhood to adulthood, which might be stretched up to the early 20s due to current sociocultural changes (e.g. delays in completing education, occupational attainment, and parenthood) (Patton et al. 2018). Among the challenges that adolescents have to face are the structuring of a 'narrative identity' or self-story, featuring the development of a sense of personal identity that integrates past experiences with current, and future goals and meanings in a coherent whole over time (McAdams and McLean 2013, McLean and Lilgendahl 2019). The definition of the new boundaries of adolescents' personal identity involves significant changes in the", - "page_start": 2, - "page_end": 2, - "source_file": "pubmed1.pdf" - }, - { - "text": "reciprocity with caregivers and peers. 
Thus, in parallel to the negotiation of identity with caregivers (through a relative detachment from them, a renegotiation of intimacy, and the questioning of their confirmatory authority), the modifications of friendship structures-from childhood to adolescence-lay the ground for the progressive recognition of social contexts and peer relationships as the elite territories for the modulation and exploration of personal identity. The redefinition that the adolescent has to face in these territories of exploration (of the self as an individual separated from the other and of the self with the other) might pass through a phase of reduced coherence in the narration of the self and hence an increased level of uncertainty. Coherence in the self's narrative is considered a measure of well-being and has been associated with psychopathology in adulthood (Klimstra and Denissen 2017) and adolescence (Lind et al. 2020, Shiner et al. 2021). For example, narrative incoherence has been found to be associated with personality disorders in adolescents (Lind et al. 2019), where 'identity diffusion' (e.g. feelings of emptiness and being fragmented and lack of a sense of continuity over time) might be considered an expression of high levels of uncertainty of the self.\n\nEmotion-wise, a developmental trend toward an increased specificity of emotion-related maps of bodily sensations (Barca et al. 2023)-a proxy of interoceptive representations of emotions-has been reported from children aged 6years to adulthood (Hietanen et al. 2016). Pubertal changes encompass dramatic bodily and neuroendocrine system changes, comprising-but not reduced to-changes in the reproductive, adrenal, and growth axes (Cameron 2004). Thus, adolescents might face at least four sources of uncertainty: (i) the uncertainty due to physiological alterations related to bodily changes and to modification in hormonal levels leading to sexual maturity; (ii) the uncertainty in selfidentity (i.e. 
the structure of self-awareness) and personal identity (i.e, the narrative diachronic self) (Drummond 2021), which might be coupled with changes in body image and the development of gender identity; (iii) the uncertainty in affect regulation, with the emergence of new forms of affectivity as feelings of love and sexual attraction toward a partner; and (iv) uncertainty in the social context, with respect to their social status and role expectations in the adult society. Such high levels of uncertainty might lead to a poorly defined sense of self, with unclear boundaries and a sense of emptiness. In this context, pain becomes a possible way to recover a bodily sense of self, and self-injurious behavior might be instantiated as an attempt to reduce the rise in the levels of uncertainty in these (and potentially other) domains, toward the transition to adulthood (see Miller et al. 2020 for a closely related approach on addiction).\n\n## Active inference, interoceptive processing, and uncertainty reduction\n\nActive inference is based on the idea that in order to engage in adaptive allostatic regulation and goal-directed behavior, living organisms continuously strive to minimize the surprise of their sensations or, more formally, an upper bound to surprise: variational free energy (Parr et al. 2022). Notably, the (expected) free energy minimization processes that drive active inference jointly consider two complementary objectives. The former (utilitarian) objective is to realize one's preferences, such as being satiated or safe, by minimizing the discrepancy between preferred sensations (encoded as 'priors over observations' in active inference) and current sensations in different modalities (e.g. interoceptive or exteroceptive). The latter (epistemic) objective is to reduce", - "page_start": 3, - "page_end": 3, - "source_file": "pubmed1.pdf" - }, - { - "text": "## 3. 
Why Books are Important to Training AI\n\nDespite the proliferation of online content and some speculating that books would simply die out with the advent of the Internet, books remain a critical vehicle for disseminating 9 knowledge. The more scientists study how books can impact people, the less surprising this is. Our brains have been shown to interact with longform books in meaningful ways: we develop bigger vocabularies when we read books; we develop more empathy when we read literary fiction; and connectivity between different regions of our brain increases when we read. 10\n\nIn that light, it might be unsurprising that books are important for training AI models. A broadly accessible books dataset could be useful not only for building LLMs, but also for many other types of AI research and development.\n\n## Performance and Quality\n\nThe performance and versatility of an AI model can significantly depend on whether the training corpus includes books or not. Books are uniquely valuable for AI training due to several characteristics.\n\n- · Length: Books tend to represent longer-form content, and fiction books, in particular, represent long-form narrative. An AI trained on this longer-form, narrative type of content is able to make connections over a longer context, so instead of putting words together to form a single sentence, the AI becomes more able to string concepts together into a coherent whole; even after a book is divided into many 'chunks' before the process of tokenization, that will still provide long stretches of text that are longer than the average web page. While Web documents, for instance, tend to be longer than a single sentence, they are not typically hundreds of pages long like a book.\n- · Quality: The qualities of the training data impact the outputs a tool can produce. 
Consider an LLM trained on gibberish; it can learn the patterns of that gibberish and, in turn, produce related gibberish, but will not be very useful for writing an argument or a story, for instance. In contrast, training an LLM on books with well-constructed arguments or crafted stories could serve those purposes. While 'well-constructed' and 'crafted' are necessarily subjective, the traditional role of editors and the publishing process can provide a useful indicator for the quality of writing inside of books. What's more, metadata for books - information such as the title, author and year of publication - is often more comprehensive than metadata for information", - "page_start": 5, - "page_end": 5, - "source_file": "creative_common_ai.pdf" - }, - { - "text": "Management believes that building and buying new apartments has the opportunity to generate more stable cash flows and improved returns on investments over time compared to buying older buildings. Management acknowledges that there was a dilutive impact on FFO per share growth during the period of construction but believes the short-term impact is more than offset by the 10-15 years of nominal maintenance costs provided by a newly built project. Older buildings typically require a much higher capital spend per year, estimated at least $1,200 per unit per year, versus an estimated $300 per unit for new construction. Assuming similar NOI growth between an old and new building, the lower capital spend on the new build is expected to result in a higher return on the total investment in the property in the first 10 - 15 years of ownership. 
Management expects to provide disclosure regarding capital spend associated with its new development projects over the next few years to provide support for this theory and show the Company's ability to grow the return on investments of the new developments over time.", - "page_start": 48, - "page_end": 48, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "- (c) an accompanying child who is accompanying P or, where P is a child, is accompanying a person referred to in sub-paragraph (1)(b);\n - (d) a live donor who is attending a place for the purpose referred to in the definition of 'live donor' or is travelling directly between that place and the place where they are selfisolating.\n - (2) For the purposes of this paragraph-\n - (a) 'accompanying child', in relation to P, means a child who has arrived in England with P and for whom P has responsibility, or where P is a child, a child who has arrived in England with the person referred to in sub-paragraph (1)(b) and for whom that person has responsibility;\n - (b) 'healthcare' means all forms of healthcare provided for individuals, whether relating to mental or physical health, including healthcare in connection with giving birth;\n - (c) 'live donor' means a person who-\n - (i) has travelled to the United Kingdom for the purpose of donation of material which consists of or includes their human cells pursuant to arrangements made with a provider in the United Kingdom before travelling to the United Kingdom, and which are to be used by the provider for the purpose of providing healthcare, and\n - (ii) is in possession of written confirmation of the arrangements from the provider;", - "page_start": 43, - "page_end": 43, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "To deepen our understanding of what the participants perceive as meaningful, we turn to a theoretical perspective that integrates bodily capacities with the construction of meaning. 
Enactive theory emphasizes that making sense of the world depends essentially on the biological (living) body and the phenomenological (lived or experienced) body (19), which implies that the body is viewed as a neurobiological organism that is concurrently experiencing, expressing and social (embodiment) (20). Thus, what is experienced by an individual during an exercise intervention is constituted by her sensorimotor repertoire for perception and action in interactions with the requirements of the task and the context (21). From this perspective, dysfunctions related to MS, such as sensorimotor impairments, can in /uniFB02 uence how individuals with MS interpret and understand their participation in a PA intervention. Moreover, the notion of ' participatory sensemaking ' (22) extends the body into the social domain, enabling an understanding of how the interaction processes between two embodied individuals affect shared and individual meaning-making. These concepts may illuminate pwMS ' s experiences and direct the focus toward bodily, contextual, and interactional aspects that may generate new insights regarding sensorimotor exercise and high-intensity training as part of PA.\n\nThe aim of this study was to explore participants ' experiences of the content, delivery and setting of a new outdoor group intervention combining high-intensity training and detailed exercises to generate new knowledge about important aspects of exercise interventions for pwMS with low disability.", - "page_start": 1, - "page_end": 1, - "source_file": "pubmed13.pdf" - }, - { - "text": "- (ii) to access critical public services, including-\n - (aa) social services,\n - (bb) services provided to victims (such as victims of crime),\n - (iii) to move to a different place for self-isolation where it becomes impracticable to remain at the address at which they are self-isolating;\n - (j) for the purposes of, or connected with, undertaking a test in accordance with Schedule 8 or Schedule 10;\n 
- (k) if self-isolating in a goods vehicle by virtue of paragraph (3)(d)-\n - (i) for sanitary reasons,\n - (ii) to take exercise outside,\n - (iii) where required or permitted by that paragraph, to move to a different place for selfisolation,\n - (iv) to inspect the vehicle or its load or to carry out any other task required for the safe and continued operation of the vehicle, including refuelling, and\n - (v) for any other reason or purpose specified in this paragraph.\n - (12) For the purposes of this regulation, the place referred to in paragraph (3) includes the premises where P is self-isolating together with any garden, yard, passage, stair, garage, outhouse, or other appurtenance of such premises.\n - (13) If P is a child, any person who has custody or charge of P during P's period of self-isolation must ensure, so far as reasonably practicable, that P self-isolates in accordance with this regulation.\n - (14) If P has arrived from Wales or Scotland and is in England, temporarily, for a reason which would constitute an exception under paragraph (11), P is not required to comply with this regulation.\n - (15) If P is a person described-\n - (a) in paragraph 1(1) of Schedule 4-\n - (i) where P is a person described in paragraph 1(1)(a) to (k) of, and meets the conditions set out in paragraph 1(3) of, that Schedule, P is not required to comply with this regulation,\n - (ii) in any other case, paragraph (3)(b) and (c) does not apply to P;\n - (b) in paragraph 1(2) of Schedule 4 (essential work for foreign country etc), P is not required to comply with this regulation;\n - (c) in paragraph 33 of Schedule 4 (healthcare), paragraph (2) does not require P to remain in isolation in the circumstances set out in paragraph 33 of that Schedule;", - "page_start": 15, - "page_end": 15, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "The described emotional associations of these bodily changes are interesting. 
Achieving higher exercise intensities, easier movements, reduced pain and improved sensation lead to positive feelings and enhanced prospects for both PA and life, while for some individuals, a failure to achieve high-intensity or no immediate changes in impairments are associated with feelings of loss and negative prospects. This calls attention to acknowledging that sensorimotor capacities facilitate or constrain how an individual perceives the world, which is closely interlinked with feelings, and that in /uniFB02 uence why participants perceive what they do (34). These experiences necessitate that sensorimotor changes in pwMS involve not only their biological body but also their relational and self-individuating modes of operating in the world, including how an experience coheres with, for example, participants ' historical experiences (35). As we primarily regulate such modes to achieve an optimal positive mood state, this can also explain why only changes perceived as positive appear to enhance participants ' beliefs for the future (36). Negative experiences such as failure to achieve high intensity because the legs are not working in the last interval can thus be perceived as detrimental by pwMS.\n\nWe argue that participants ' perceived bodily changes affected their self-ef /uniFB01 cacy for being physically active. Self-ef /uniFB01 cacy involves an individual ' s perception of exerting control over his or her own actions (37) and has been extensively reported to be pertinent to PA engagement in pwMS (38, 39). However, selfef /uniFB01 cacy is theoretically described according to social cognitive theory (38). 
Our /uniFB01 ndings highlight how experiencing, expressing and socially interacting through the body (embodied experiences) shape individuals ' self-ef /uniFB01 cacy and suggest a crucial role of bodily perceptions in constituting self-ef /uniFB01 cacy for PA.\n\n## 4.2 Interactions and environment shape meaning making\n\nParticipants perceived the group setting to increase motivation, support, and commitment, which has been found in previously published work (16, 31).\n\nThe physiotherapist-participant interaction is acknowledged in exercise interventions for pwMS, pointing to professionals ' role in informing participants of exercise bene /uniFB01 ts in the management of MS, including the prescribing mode, frequency, intensity, and duration of exercise (40). Tailored interventions are supported", - "page_start": 7, - "page_end": 7, - "source_file": "pubmed13.pdf" - }, - { - "text": "## 5. Previous Projections\n\nAt the end of September 2014 the published prison population was within 1.8 % of the 2013 Scenario 2 (central) projection, and within 3.4 % of the 2013 Scenario 1 projection and 0.2 % of the 2013 Scenario 3 projection. This does not indicate which scenario the actual prison population will track going forward.\n\nDifferences between the 2013 projections and the actual population could be explained by changes, different to those projected, in overall demand, offence mix, age and gender of defendants, court routes, custody rates or sentence lengths.\n\nChart 3 plots the 2014 Central Scenario projection against the three 2013 prison population projections. The 2014-2020 Central Scenario projection is above all three scenarios from last year. The higher level of the new projections can be attributed to a more serious case mix coming into the courts with a resulting increase in average custodial sentence lengths. 
The projection for June 2019 in the Central Scenario this year is 10.2 % above the equivalent scenario (Scenario 2) last year.\n\nChart 3: Comparing 2013 and 2014 projections (November 2014 - December 2020)\n\n<!-- image -->", - "page_start": 14, - "page_end": 14, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "The ability to construct the permanent casino facility is currently subject to resolution of the Lac Vieux litigation. The 6th Circuit Court of Appeals has issued an injunction prohibiting the City and the developers from commencing construction pending further action of the 6th Circuit Court. Therefore, we do not know when we will be able to commence construction of, or complete, the permanent facility.", - "page_start": 38, - "page_end": 38, - "source_file": "NYSE_MGM_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "OTC_NSANY_2004.pdf", - "query": "What was the indicator related to increasing Nissan's research and development activities in terms of publication of scientific articles in 2004?", - "target_page": 46, - "target_passage": "And the number of research papers we present at societies such as The Japan Society of Mechanical Engineers rose dramatically in fiscal 2004. ", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "## FISCAL YEAR 2004 SHARE PERFORMANCE\n\nDESPITE NISSAN'S RECORD OPERATING RESULT IN FISCAL 2004, ITS STOCK PERFORMANCE RETURN WAS NEGATIVE AND LOWER THAN THE TOPIX INDEX. THE INVESTOR RELATIONS TEAM WAS STRENGTHENED AT THE START OF FISCAL 2005 TO BETTER ADDRESS THE NEEDS OF INVESTORS AND ENHANCE THEIR UNDERSTANDING OF NISSAN'S PERFORMANCE. INVESTORS WILL NOW BE ABLE TO GAIN A MORE IN-DEPTH VIEW OF THE COMPANY'S OPERATIONS AND PERFORMANCE INDICATORS.\n\n## Share Performance in Fiscal 2004\n\nNissan's share price began at ¥1,143 at the beginning of fiscal 2004 and ended the fiscal year at ¥1,099, generating a negative return of 3.85 percent. 
Total shareholder return (TSR) was -1.67 percent, while the dividend yield came to 2.18 percent (¥24 per share dividend, divided by the ¥1,099 closing price). Adverse movements in foreign exchange rates and commodity price hikes adversely affected Nissan's profitability, which was reflected in the share price. In addition, specific events relating directly to the company also had a negative impact. Later in this report, corporate officers will explain what actions Nissan has undertaken to ensure better performance.\n\n## Payout Policy\n\nNissan announced its NISSAN Value-Up three-year dividend policy, covering the period from fiscal 2005 to fiscal 2007, at the annual general meeting of shareholders on June 23, 2004. Nissan proposes a long-term dividend policy to provide more visibility and improve transparency into the ways in which Nissan rewards its shareholders. Nissan believes that a long-term dividend policy reduces uncertainty for investors who already own or are considering acquiring Nissan stock.\n\n## Fiscal Year 2004 Share Performance\n\n(Index: April 1, 2004=100)\n\n<!-- image -->\n\n## IR Activities\n\nUnder NISSAN Value-Up, the IR team's performance will be evaluated based on the price-earnings ratio (PER) and volatility relative to our major competitors. PER is used to measure how successfully the IR team manages market expectations about Nissan in order to maintain the Nissan share price close to an intrinsic value. The other measure, volatility, is used to measure the risk investors perceive when considering Nissan stock. If Nissan can successfully reduce volatility, the minimum return required by investors should decline. The IR team believes that a strengthening of disclosure activities is required to improve both measures. The team plans to disclose not only financial results but also more forward-looking information about Nissan fundamentals such as technology and product. 
Such forward-looking information helps investors to forecast future performance more precisely and reduces uncertainty about the future. As a consequence, Nissan will increase the number of investor conferences, events, and teleconferences during fiscal 2005.\n\n## Five-Year Share Performance\n\n<!-- image -->", - "page_start": 16, - "page_end": 16, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "Nissan Annual Report 2004\n\nc3", - "page_start": 112, - "page_end": 112, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## BUSINESS AND OTHER RISKS\n\nDue to changes in government regulations, information on risks involved in business operations has been disclosed in the Yukashoken-Houkokusho for the year ended March 31,2005 as follows:\n\n## Economic Factors\n\nThe demand for products manufactured by Nissan is affected by the economic conditions in each country or market in which they are offered for sale. Nissan conducts its operations all over the world and, in particular, in the major markets of North America, Europe, and Asia, to say nothing of Japan. While Nissan strives to develop a comprehensive and integrated projection of the global economic outlook, any greater-than-anticipated downturn in one of these markets may have a significant effect on Nissan financial position and results of operations.\n\n## International Activities and Overseas Expansion\n\nNissan's manufacturing and marketing activities outside Japan are conducted in the United States, in Europe, and in the developing and emerging markets of Asia. 
Nissan forecasts and evaluates a wide variety of risks inherent in doing business in such overseas markets including the following factors, each of which entails a greater-than-anticipated level of risk:\n\n - · Unfavorable political or economic factors\n - · Legal or regulatory changes\n - · Potentially adverse tax consequences\n - · Labor disputes including strikes\n - · Difficulties in recruiting and retaining personnel\n - · Social, political or economic turmoil due to terrorism, war, or other destabilizing factors.\n\n## Research and Development\n\nNissan's technology must be 'real world'-useful, pragmatic and easy to use. Nissan anticipates the nature and scope of the market demand, and then prioritizes and invests in new technologies. Nonetheless, any sudden and greater-than-anticipated changes in its business environment or in customer preferences may impact negatively on customer satisfaction with these new technologies.\n\n## Product Defects\n\nNissan places a high priority on safety and does its best to enhance safety from the standpoint of research and development, manufacturing and sales. Although Nissan takes out insurance policies to cover product liability, this does not necessarily mean that all potential defects and the related liabilities are fully covered. If Nissan were to implement strict product recalls for its customers, Nissan would incur significant additional expenses which could adversely affect its financial position and results of operations.\n\n## Fluctuation in Foreign Currency Exchange Rates\n\nNissan's Japanese operations export vehicles to various countries around the world. In general, the appreciation of the yen against other currencies adversely affects Nissan's financial results of operations and, on the contrary, the depreciation of the yen against other currencies favorably affects Nissan's financial results of operations. 
Any sharp appreciation of the currencies of those countries against the yen could lead to increases in both procurement and production costs which would adversely affect Nissan's competitiveness.\n\n## Derivatives", - "page_start": 72, - "page_end": 72, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## Derivatives\n\nNissan utilizes derivatives transactions for the purpose of hedging its exposure to fluctuation in foreign exchange rates, interest rates and commodity prices. While Nissan can hedge against these risks by using derivatives transactions, Nissan, by so doing, may miss the potential gains which could result from seizing the market opportunities to profit from such fluctuation in exchange rates and interest rates.\n\nIn addition, Nissan manages its exposure to credit risk by limiting its counterparties to financial institutions with high credit ratings. However, a default by any one of these counterparties could have an adverse effect on Nissan's financial position and operating results.\n\n## Lawsuits and Claims\n\nWith respect to various lawsuits and claims which Nissan encounters, the possibility exists that the position defended by Nissan will not be accepted\n\nand that the outcome may be significantly different from that anticipated. As a result, any such verdict or settlement could adversely affect Nissan's financial position and operating results.\n\n## Government Regulations\n\nThe automobile industry worldwide is influenced by a broad spectrum of regulations governing the emission levels of exhaust fumes, fuel economy guidelines, noise level limitations and safety standards, and Nissan expects these regulations to become increasingly stringent. 
In order to ensure compliance, it may be necessary for Nissan to make significant ongoing investments in these areas which would have an impact on its financial position and results of operations.\n\n## Intellectual Property Rights\n\nNissan owns a wide variety of proprietary technologies and has the expertise to differentiate Nissan's products making them unique from those of its competitors. These assets have proven their value in the growth of Nissan's business and will, no doubt, continue to be of value in the future. Nissan strives to protect its intellectual property assets; however, in certain markets, Nissan may encounter difficulty in fully protecting the proprietary rights to its own technologies. Cases may arise where Nissan finds itself unable to prohibit others from infringing on its intellectual property rights.\n\nThe Company has established Intellectual Property Rights Management Department for the purpose of protecting intellectual property rights in specific areas, strengthening activities to protect Nissan's intellectual property rights, and abstracting new intellectual property rights. And the department has been performing various activities to protect and create Nissan Brand.\n\n## Natural Disasters\n\nNissan's corporate headquarters and many of its manufacturing facilities are located in Japan, where the statistically proven probability of earthquakes is higher than in many other countries. Nissan has developed risk management guidelines relating to earthquake damage and the CEO has organized a global task force to direct disaster prevention and recovery activities. In addition, the Gruop has begun to strengthen its manufacturing facilities with anti-seismic reinforcement. 
However, if a severe earthquake were to hit one of Nissan's key facilities causing a halt in production, this would adversely affect Nissan's financial position and results of operations.\n\n## Sales Financing Business Risk\n\nSales financing is an integral part of Nissan's core business, providing strong support to its automotive sales, while maintaining high profitability and a sound and stable financial condition through strict risk management policies. However, the sales financing companies have a high exposure to interest-rate risk, residual value risk, and credit risk, any one of which may adversely affect Nissan's financial position and results of operations.\n\n## Counterparty Credit Risk", - "page_start": 72, - "page_end": 72, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "Under NISSAN Value-Up, we will work closely with Nissan Motor Co., Ltd. and Nissan North America to provide additional sales-financing capabilities in new global markets, which can be a key to increasing sales volume. To achieve the same kind of success we have achieved in our new Mexican sales-financing efforts under the NISSAN 180 plan, we will support the global Infiniti expansion and other geographic growth, including developing financial products for the light commercial vehicle market.'", - "page_start": 30, - "page_end": 30, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "<!-- image -->\n\nPURCHASING\n\n## More value, Higher quality, Win-win partnerships\n\n'The evolution that took place in Nissan's purchasing activities during the Nissan Revival Plan, or NRP, and continued through NISSAN 180, will stretch even further during NISSAN Value-Up. Why evolution and not revolution? Because the shift in purchasing that started six years ago was not a single action, it was a mindset change that continues to drive all our activities.\n\nPurchasing represents the single largest area of cost for Nissan. 
Through the NISSAN Value-Up business plan, we are determined to drive greater value from our purchasing activities and maintain the momentum built over the last six years.\n\nDuring the Nissan Revival Plan years, our focus was on catching up with the rest of the industry. NISSAN 180 was focused on reaching the benchmarks set during NRP and now as we enter the NISSAN Value-Up period, that focus evolves towards being the global cost leader.\n\nOne of the key breakthrough strategies of NISSAN Value-Up is the focus on new and emerging markets. On the sales side, markets like China, India, Russia and ASEAN represent significant opportunities for Nissan. On the purchasing side, we look at the cost competitiveness of these new markets and how we can increasingly use them to enhance our global competitiveness.\n\nOur strategy for what we call 'Leading Competitive Countries', or LCCs, is to focus on those markets that we see as trend leaders in both cost, quality and supply stability. We will focus first on China and then on ASEAN nations. This will bring cost advantages for our major regions, such as Japan, North America and Western Europe, making us more competitive. We're also investigating sourcing from Eastern Europe, the Mercosur trading zone, and India.\n\nHIROTO SAIKAWA Executive Vice President\n\n<!-- image -->\n\nOur Alliance with Renault has also provided substantial purchasing benefits and opportunities. Formed in 2001, the Renault Nissan Purchasing Organization, or RNPO, now accounts for over 70 percent of all purchasing for Nissan and Renault. Nissan will further benefit from RNPO through the utilization of Renault supply bases in certain LCCs.\n\nAlthough the turnaround in the Nissan business has been profound, we also recognize that our supplier partners have played a significant role. Going forward, we intend to reinforce those relationships, building value on both sides. 
For example, we are reinvigorating our innovative 3-3-3 engineering program.\n\nWe are also deploying a purchasing process that gets suppliers involved earlier and further upstream in the product development process, the concept of 'project partners'. This is a program that identifies key technologies and innovations that require substantial investments from both sides. Suppliers will be selected as project partners for a specific area and will work closer with us to develop lower cost and higher quality solutions. This win-win approach has already started with interior systems and chassis development projects.\n\nLast year, we faced several challenges with raw materials. Those risks-both price and supply related-are a factor that we have to recognize and address in the coming years. Last year, the pressure was concentrated on the supply side, going forward we see an increasingly challenging cost environment. Working closely with our key raw material suppliers as well as parts suppliers and accelerating our cost reduction countermeasures will be key during NISSAN Value-Up.\n\nOur purchasing philosophy at Nissan is focused on value, quality and relationships. We want our purchasing process to be transparent and proactive, and create more value for our suppliers and for the company.'", - "page_start": 49, - "page_end": 49, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "OUR WORLD\n\nNISSAN HAS A GLOBAL PRESENCE. BORN IN JAPAN, WE ARE PERFECTLY AT HOME IN THE U.S., THE UK, SPAIN, THAILAND, CHINA, EGYPT, BRAZIL AND WELL OVER 150 OTHER NATIONS WHERE NISSAN CARS AND THEIR COMPONENT PARTS ARE PRODUCED, SOLD AND DRIVEN. WITH NISSAN, DRIVING PLEASURE IS A SENSATION THAT KNOWS NO BORDERS. 
THIS IS THE NISSAN SHIFT\\_\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->", - "page_start": 59, - "page_end": 59, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "WHO WE ARE\n\nNISSAN IS ABOUT MEETING UNMET NEEDS, CRAFTING SINGULAR PRODUCTS AND TRANSFORMING BRAND STRENGTH AND INNOVATION INTO NEW BUSINESS OPPORTUNITIES. WE ARE NISSAN. WE ARE INFINITI. WE ARE NISSAN LIGHT COMMERCIAL VEHICLES, EXPANDING OUR RANGE. WE ARE NISSAN INDUSTRIAL MACHINERY, LEVERAGING OUR EXPERTISE TO BUILD FORKLIFTS AND MARINE PRODUCTS. AND WE ARE NISSAN FINANCIAL SERVICES, PROVIDING OUR CUSTOMERS WITH A COMPREHENSIVE LINEUP OF OFFERINGS. THIS IS THE NISSAN SHIFT\\_\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->", - "page_start": 17, - "page_end": 17, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "EUROPE\n\n<!-- image -->\n\n## Making Profit as a Smaller Player\n\nDOMINIQUE THORMANN Senior Vice President Nissan Europe\n\n<!-- image -->\n\n'Europe is one of the most fragmented automotive market in the world and a highly competitive one besides. Despite our relatively small size, however, we have begun to demonstrate that it is possible to make money in Europe. In fact, although Nissan does not yet deliver the levels of profitability here\n\nthat the U.S. or other markets generate, we surpassed our NISSAN 180 business targets in fiscal 2004. Our profitability is now on par with the best European manufacturers. Nissan has a foundation for increasing profitability further in the coming years in Europe.\n\nNissan is already an established name around the region, and the brand is strongly associated with 4x4 technology, off-road vehicles and pickup trucks. However, there is also a solid heritage built around the Micra, a model designed for urban driving. Both the first and second generations of this car were very successful, and the third generation is performing well. 
To leverage our 4x4 heritage and SUV strength into the passenger car segment, Nissan is developing a series of crossover vehicles that blend car-like performance with 4x4 versatility. The Qashqai concept vehicle introduced at the 2004 Geneva Motor Show is the first of these-smaller, more affordable, and better adapted to European roads. The Qashqai will go into production in our plant in Sunderland in the UK in early 2007. The Murano, launched this year, is a precursor to the Qashqai in the larger executive segment. Europeans have already taken to the Murano, driving sales far past our initial forecasts in all markets. This car is helping make Nissan a brand that people aspire to own.\n\nNissan is still a small player in the region, selling 550,000 cars across a very large and diverse territory that stretches from the Atlantic Ocean to Russia, and from Finland to Israel. In the past we covered the area through multiple distribution channels, which we are currently in the process of simplifying. A few aspects of the European market have made profitability more difficult to achieve. For example, automakers must provide models with much diversity: diesel and gasoline powertrains; manual and automatic transmissions. The cars must also be engineered to suit the high driving speeds typical in the region and ensure superior handling, which results in higher costs.\n\nAs in many other mature markets, an incentive war is raging in Europe. Nissan's position here, as elsewhere, is to use incentives selectively and to always protect profitability. Providing products which customers recognize and appreciate for their style and attributes rather than being the best deal is the foundation of Nissan's profitable growth. We now have a wide range of products, five of which were newly launched in 2005, including the Pathfinder and the Navara pickup. 
We will release the Micra C+C at the Frankfurt Motor Show in September, giving customers the option of a unique standard glass roof in a fully retracting hard convertible top.\n\nNissan's manufacturing still defines the leading edge in Europe. According to The Harbour Report , our plant in Sunderland is the most productive plant in Europe. Sunderland will start production on a new B-segment car based on the Tone concept car in early 2006, followed by the Qashqai crossover vehicle in early 2007. Our Barcelona plant, which manufactures SUVs, 4x4s and light commercial vehicles, will reach full capacity in mid-2005. Finally, our truck plant in Avila, Spain, which specializes in light-duty trucks, will start producing a replacement for the popular Cabstar in late 2006. This efficient production base is a critical part of our profitable growth scenario.", - "page_start": 62, - "page_end": 62, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "Eurostat has developed under the lead of UNECE a framework to assess the quality of employment in its multiple facets. 497 Eurostat describes this framework as set of 68 indicators 498 on seven dimensions 'that address employment quality from the perspective of the employed person. Its design also facilitates international comparisons.' 499 OSH is covered under the section 'Safety' and is based on four indicators and includes two outcome and two risk indicators: 1) Fatal occupational injuries / Number of fatal accidents at work (excluding traffic accidents); 2) Non-fatal occupational injuries / Number of non-fatal accidents at work; 3) Exposure to physical health risk factors; and 4) Exposure to mental health risk factors. 
Eurostat implements the OSH parts of this framework by its ESAW and by the OSH-related ad hoc modules to the LFS, called 'Accidents at work and other work-related health problems' (surveys in 2007, 2013 and 2020).\n\nFor more detailed monitoring at EU level, DG EMPL/ACSH and EU-OSHA developed a structural model that uses four groupings: Generic information on the basics of the OSH systems and on major context factors like age or sectoral structure, main policies for the Steering of OSH , an overview on relevant Working conditions and Prevention , and Outcomes , that is, accidents, diseases and wellbeing, and some elements of the OSH infrastructure and monitoring capacity . Currently, the OSH Barometer works with 16 quantitative and qualitative indicators in these four groupings. Some of these indicators are purely descriptive, like the short descriptions of OSH authorities, OSH institutions or OSH-related surveys, and others allow qualitative comparisons of structures and policies, for example, the indicator on 'National strategies' or 'Social dialogue'. Many indicators, for example, on working conditions or work accidents, are based on quantitative data from surveys and statistics. 
These indicators allow a comparison between sectors, occupations, types of enterprises, countries, for example.\n\n## CHAPTERS\n\n## INDICATORS\n\n## Generic information\n\nIndicator:\n\nOSH authorities (descriptive)\n\nIndicator:\n\nEconomic and sector profile (quantitative)\n\nIndicator:\n\nWorkforce profile (quantitative)\n\n## Steering of OSH\n\nIndicator:\n\nRegulation (descriptive)\n\nIndicator:\n\nNational strategies (descriptive)\n\nIndicator: Social dialogue (descriptive, composite indicator)\n\n## Working conditions and prevention\n\nIndicator:\n\nWorking conditions (quantitative)\n\nIndicator:\n\nPrevention in companies (quantitative)\n\nIndicator:\n\nWorker involvement (quantitative)\n\nIndicator: OSH culture and health awareness (quantitative)\n\n## Accidents, diseases and wellbeing\n\nIndicator:\n\nWork accidents (quantitative)\n\nIndicator:\n\nWork-related diseases (quantitative)\n\nIndicator: Health perception of workers (quantitative)", - "page_start": 137, - "page_end": 137, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - } - ] - }, - { - "references": { - "source_file": "OTC_NSANY_2004.pdf", - "query": "What was Nissan's vehicle production in Mexico in 2003?", - "target_page": 72, - "target_passage": "308,322", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "OUR WORLD\n\nNISSAN HAS A GLOBAL PRESENCE. BORN IN JAPAN, WE ARE PERFECTLY AT HOME IN THE U.S., THE UK, SPAIN, THAILAND, CHINA, EGYPT, BRAZIL AND WELL OVER 150 OTHER NATIONS WHERE NISSAN CARS AND THEIR COMPONENT PARTS ARE PRODUCED, SOLD AND DRIVEN. WITH NISSAN, DRIVING PLEASURE IS A SENSATION THAT KNOWS NO BORDERS. THIS IS THE NISSAN SHIFT\\_\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->", - "page_start": 59, - "page_end": 59, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "Under NISSAN Value-Up, we will work closely with Nissan Motor Co., Ltd. 
and Nissan North America to provide additional sales-financing capabilities in new global markets, which can be a key to increasing sales volume. To achieve the same kind of success we have achieved in our new Mexican sales-financing efforts under the NISSAN 180 plan, we will support the global Infiniti expansion and other geographic growth, including developing financial products for the light commercial vehicle market.'", - "page_start": 30, - "page_end": 30, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "| Nissan Chuo Parts Sales Co., Ltd. | Yokohama, Kanagawa | Sales of automobile repair parts | ¥545 | 80.61 |\n| US | | | | |\n| Nissan North America, Inc. | Gardena, California | Management of North American subsidiaries, manufacture and sales of automobiles and parts | $1,791 | 100.00 |\n| Nissan Motor Acceptance Corporation | Torrance California | Finance of wholesale and retail automobile sales in US | $499 | 100.00 |\n| Nissan Motor Corporation in Hawaii, Ltd. | Honolulu, Hawaii | Sales of automobiles and parts | $6 | 100.00 |\n| Nissan Capital of America, Inc. | Torrance, California | Financing for group companies | $1 | 100.00 |\n| Nissan Technical Center North America, Inc. | Farmington Hills Michigan | Research and development, testing | $16 | 100.00 |\n| Nissan Motor Insurance Corporation | Honolulu, Hawaii | Casualty insurance | $10 | 100.00 |\n| Nissan Forklift Co., North America | Marengo, Illinois | Manufacture and sales of forklifts and parts | $34 | 100.00 |\n| Canada | | | | |\n| Nissan Canada, Inc. | Mississauga, Ontario | Sales of automobiles and parts | CAN$68 | 100.00 |\n| Mexico | | | | |\n| Nissan Mexicana, S.A. de C.V. | Mexico D.F. 
| Manufacture and sales of automobiles and parts | P17,056 | 100.00 |", - "page_start": 107, - "page_end": 107, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "EUROPE\n\n<!-- image -->\n\n## Making Profit as a Smaller Player\n\nDOMINIQUE THORMANN Senior Vice President Nissan Europe\n\n<!-- image -->\n\n'Europe is one of the most fragmented automotive market in the world and a highly competitive one besides. Despite our relatively small size, however, we have begun to demonstrate that it is possible to make money in Europe. In fact, although Nissan does not yet deliver the levels of profitability here\n\nthat the U.S. or other markets generate, we surpassed our NISSAN 180 business targets in fiscal 2004. Our profitability is now on par with the best European manufacturers. Nissan has a foundation for increasing profitability further in the coming years in Europe.\n\nNissan is already an established name around the region, and the brand is strongly associated with 4x4 technology, off-road vehicles and pickup trucks. However, there is also a solid heritage built around the Micra, a model designed for urban driving. Both the first and second generations of this car were very successful, and the third generation is performing well. To leverage our 4x4 heritage and SUV strength into the passenger car segment, Nissan is developing a series of crossover vehicles that blend car-like performance with 4x4 versatility. The Qashqai concept vehicle introduced at the 2004 Geneva Motor Show is the first of these-smaller, more affordable, and better adapted to European roads. The Qashqai will go into production in our plant in Sunderland in the UK in early 2007. The Murano, launched this year, is a precursor to the Qashqai in the larger executive segment. Europeans have already taken to the Murano, driving sales far past our initial forecasts in all markets. 
This car is helping make Nissan a brand that people aspire to own.\n\nNissan is still a small player in the region, selling 550,000 cars across a very large and diverse territory that stretches from the Atlantic Ocean to Russia, and from Finland to Israel. In the past we covered the area through multiple distribution channels, which we are currently in the process of simplifying. A few aspects of the European market have made profitability more difficult to achieve. For example, automakers must provide models with much diversity: diesel and gasoline powertrains; manual and automatic transmissions. The cars must also be engineered to suit the high driving speeds typical in the region and ensure superior handling, which results in higher costs.\n\nAs in many other mature markets, an incentive war is raging in Europe. Nissan's position here, as elsewhere, is to use incentives selectively and to always protect profitability. Providing products which customers recognize and appreciate for their style and attributes rather than being the best deal is the foundation of Nissan's profitable growth. We now have a wide range of products, five of which were newly launched in 2005, including the Pathfinder and the Navara pickup. We will release the Micra C+C at the Frankfurt Motor Show in September, giving customers the option of a unique standard glass roof in a fully retracting hard convertible top.\n\nNissan's manufacturing still defines the leading edge in Europe. According to The Harbour Report , our plant in Sunderland is the most productive plant in Europe. Sunderland will start production on a new B-segment car based on the Tone concept car in early 2006, followed by the Qashqai crossover vehicle in early 2007. Our Barcelona plant, which manufactures SUVs, 4x4s and light commercial vehicles, will reach full capacity in mid-2005. 
Finally, our truck plant in Avila, Spain, which specializes in light-duty trucks, will start producing a replacement for the popular Cabstar in late 2006. This efficient production base is a critical part of our profitable growth scenario.", - "page_start": 62, - "page_end": 62, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "WHO WE ARE\n\nNISSAN IS ABOUT MEETING UNMET NEEDS, CRAFTING SINGULAR PRODUCTS AND TRANSFORMING BRAND STRENGTH AND INNOVATION INTO NEW BUSINESS OPPORTUNITIES. WE ARE NISSAN. WE ARE INFINITI. WE ARE NISSAN LIGHT COMMERCIAL VEHICLES, EXPANDING OUR RANGE. WE ARE NISSAN INDUSTRIAL MACHINERY, LEVERAGING OUR EXPERTISE TO BUILD FORKLIFTS AND MARINE PRODUCTS. AND WE ARE NISSAN FINANCIAL SERVICES, PROVIDING OUR CUSTOMERS WITH A COMPREHENSIVE LINEUP OF OFFERINGS. THIS IS THE NISSAN SHIFT\\_\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->", - "page_start": 17, - "page_end": 17, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "MANUFACTURING\n\n## Building on World-Class Productivity and Efficiency\n\n'By following the Nissan Production Way and the principle of doukiseisan -meaning synchronization with the customer-manufacturing at Nissan remains flexible and integrated, and keeps lead times short. The Nissan Production Way incorporates integration at the supplier, global and logistic levels. That is why we remain the most productive manufacturer in the world.\n\nWe've also become much more efficient, as our utilization rates show. In Japan, we were operating at 54 percent of capacity in 1999. In fiscal 2004 that figure increased to 86 percent, which is just about the maximum possible. During NISSAN Value-Up, we will increase our global utilization rate from approximately 74 percent to over 80 percent. We will not achieve that target by closing facilities, either. In fact, we've opened new plants in the U.S. and China, and increased capacity at our other facilities.\n\nManufacturing achieved a series of milestones during NISSAN 180. 
One of the biggest was opening the Canton plant in the U.S., which got up to speed quickly, launching five new vehicles in a period of just eight months. We built two plants in China, and restarted operations in Egypt. We dramatically expanded the Decherd, Tennessee engine plant in the U.S., and all engines for North America are now built at Decherd or at our plant in Mexico.\n\nWe also commenced cross-production with Renault: Nissan began building Renault's Platina in Mexico and its Traffic in Spain, while Renault began building our Pickup and Xterra at its factory in Brazil. We also started production of common engines with Renault, with our subsidiary Aichi Kikai and the Yokohama plant producing the four-cylinder engines used in our new Tiida, Note and Lafesta models. In Japan, we launched six new models in just six months-the Murano, Fuga, Lafesta, Tiida, Tiida Latio and Note. We also launched three vehicles-the Tiida, Teana and Tiida Latio-in China.\n\nWhile we were successful in Japan and China, we did have quality issues at the Canton facility. This was\n\nTADAO TAKAHASHI Executive Vice President\n\n<!-- image -->\n\nunfortunate, since it affected our ratings in the J. D. Power and Associates Initial Quality Study. We've since taken effective measures to resolve these problems. More importantly, we learned from them. We created new systems and new approaches to quality, which we then applied in Japan and to the new factories in China. Incidentally, the factories in China opened with no significant quality issues. This highlights one of our 'neverending' quests at Nissan, which is to identify problems and rapidly get solutions for them in place.\n\nWe do not rely solely on external quality evaluations. In cooperation with Renault, we created AVES, the Alliance Vehicle Evaluation System. AVES is a sophisticated process involving two people taking four to five hours to evaluate a vehicle. 
Because it is time-intensive, we also devised a short version of AVES that only takes an hour and can be done at the factory.\n\nThe second major area of focus is logistics, which is becoming more complicated. We send engine parts to the U.S., and soon we will be shipping more parts from leading competitive countries, or LCCs. During 2004, we encountered cargo-handling problems on the U.S. West Coast, which highlighted the need for a more sophisticated tracking system. If we had had such a system in place, we could have anticipated those problems and made the necessary adjustments.", - "page_start": 51, - "page_end": 51, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## AUTOMOBILES\n\n## Nissan\n\n<!-- image -->\n\n## Exceeding expectations -the Nissan automobile\n\nAt the center of everything we do stands the Nissan automobile. Our vehicles are the most tangible expression of our brand and the values of our company. We make cars that both inspire passion and exceed the expectations of our customers. Through bold and thoughtful designs, innovative technologies, and a richer and more rewarding driving experience, we are defining our unique place in the auto industry.\n\nOur product development philosophy differs from that which many of our competitors follow. Rather than focus on what the competition is providing, we concentrate on what they do not. We listen to drivers to discover their unmet needs and desires, and follow the most promising threads of emerging trends. Our designs are bold, geared to electrify and inspire. We see little point in building vehicles that please everyone but excite no one.\n\nThe appeal of a Nissan goes much deeper than the fine lines of its body and the gleam of its paint. We make some of the world's most advanced high-performance engines and transmissions. 
From our renowned VQ engine series to the latest in high technology, continuously variable transmissions (CVT), we blend driving pleasure with safety, fuel efficiency, and real-world environmental solutions.\n\nNissan has a long history of leadership and innovation in the automotive industry. We began our quest to create the best cars in the world in 1933, when the company was founded in Yokohama. The first Datsun passenger car rolled off the assembly line two years later. In the years since, we have fashioned a reputation for bold and innovative products. We were the first company to design, manufacture and export a small pickup truck from Japan to the United States, and to build and export a sports sedan, the Datsun 510. And we were the first to produce a true sports car that was also affordable, the Z. Today, we build equally exceptional vehicles in factories throughout the world that consistently rank in the top tier for efficiency, productivity and quality.\n\nIn the future, we will take the Nissan brand into new segments and markets. We will accelerate the pace of automotive evolution. And our products will continue to define our brand with clarity and consistency that brings lasting value to all our stakeholders.", - "page_start": 23, - "page_end": 23, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "| Nissan Europe S.A.S. | Trappes, France | Management of European manufacturing and sales | €1,626 | 100.00 |\n|--------------------------------------------------|-----------------------------|--------------------------------------------------------------------------|-----------|----------|\n| Nissan International Finance (Netherlands) B.V. | Amsterdam, The Netherlands | Financing for group companies | €13 | 100.00 |\n| Nissan France S.A. | Trappes, France | Sales of automobiles and parts | €4 | 94.77 |\n| Nissan Motor (GB) Ltd. | Rickmansworth, UK | Sales of automobiles and parts | £136 | 100.00 |\n| Nissan Holding (UK) Ltd. 
| Sunderland, UK | Holding company for English subsidiaries | €870 | 100.00 |\n| Nissan Italia S.p.A. | Rome, Italy | Sales of automobiles and parts | €5 | 100.00 |\n| Nissan Motor Manufacturing (UK) Ltd. | Sunderland, UK | Manufacture and sales of automobiles and parts | £250 | 100.00 |\n| Nissan Technical Center Europe Ltd. | Granfield, UK | Research and development, testing | £15 | 100.00 |\n| Nissan Forklift Europe B.V. | Amsterdam, The Netherlands | Sales of forklifts and parts | €6 | 100.00 |\n| Nissan Motor Iberica, S.A. | Barcelona, Spain | Manufacture and sales of automobiles and parts | €725 | 99.76 |\n| Nissan Motor Espana, S.A. | Barcelona, Spain | Sales of automobiles and parts | €12 | 100.00 |\n| Nissan Forklift Espana, S.A. | Noain, Spain | Manufacture and sales of forklifts and parts | €9 | 100.00 |\n| Australia | | | | |\n| Nissan Motor Co. (Australia) Pty. Ltd. | Dandenong, Victoria | Sales of automobiles and parts | A$290 | 100.00 |\n| New Zealand | | | | |\n| Nissan New Zealand Ltd. | Auckland | Managing New Zealand subsidiaries; automobile sales | NZ$51 | 100.00 |\n| South Africa | | | | |\n| Nissan Motor Company South Africa (Pty) Ltd. | Rosslyn | Managing South African subsidiaries; automobile manufacturing and sales | R39 | 100.00 |\n| Middle East | | | | |\n| Nissan Middle East F.Z.E. | Dubai, UAE | Automobile sales | Dh2 | 100.00 |\n| China | | | | |", - "page_start": 108, - "page_end": 108, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## Derivatives\n\nNissan utilizes derivatives transactions for the purpose of hedging its exposure to fluctuation in foreign exchange rates, interest rates and commodity prices. 
While Nissan can hedge against these risks by using derivatives transactions, Nissan, by so doing, may miss the potential gains which could result from seizing the market opportunities to profit from such fluctuation in exchange rates and interest rates.\n\nIn addition, Nissan manages its exposure to credit risk by limiting its counterparties to financial institutions with high credit ratings. However, a default by any one of these counterparties could have an adverse effect on Nissan's financial position and operating results.\n\n## Lawsuits and Claims\n\nWith respect to various lawsuits and claims which Nissan encounters, the possibility exists that the position defended by Nissan will not be accepted\n\nand that the outcome may be significantly different from that anticipated. As a result, any such verdict or settlement could adversely affect Nissan's financial position and operating results.\n\n## Government Regulations\n\nThe automobile industry worldwide is influenced by a broad spectrum of regulations governing the emission levels of exhaust fumes, fuel economy guidelines, noise level limitations and safety standards, and Nissan expects these regulations to become increasingly stringent. In order to ensure compliance, it may be necessary for Nissan to make significant ongoing investments in these areas which would have an impact on its financial position and results of operations.\n\n## Intellectual Property Rights\n\nNissan owns a wide variety of proprietary technologies and has the expertise to differentiate Nissan's products making them unique from those of its competitors. These assets have proven their value in the growth of Nissan's business and will, no doubt, continue to be of value in the future. Nissan strives to protect its intellectual property assets; however, in certain markets, Nissan may encounter difficulty in fully protecting the proprietary rights to its own technologies. 
Cases may arise where Nissan finds itself unable to prohibit others from infringing on its intellectual property rights.\n\nThe Company has established Intellectual Property Rights Management Department for the purpose of protecting intellectual property rights in specific areas, strengthening activities to protect Nissan's intellectual property rights, and abstracting new intellectual property rights. And the department has been performing various activities to protect and create Nissan Brand.\n\n## Natural Disasters\n\nNissan's corporate headquarters and many of its manufacturing facilities are located in Japan, where the statistically proven probability of earthquakes is higher than in many other countries. Nissan has developed risk management guidelines relating to earthquake damage and the CEO has organized a global task force to direct disaster prevention and recovery activities. In addition, the Gruop has begun to strengthen its manufacturing facilities with anti-seismic reinforcement. However, if a severe earthquake were to hit one of Nissan's key facilities causing a halt in production, this would adversely affect Nissan's financial position and results of operations.\n\n## Sales Financing Business Risk\n\nSales financing is an integral part of Nissan's core business, providing strong support to its automotive sales, while maintaining high profitability and a sound and stable financial condition through strict risk management policies. However, the sales financing companies have a high exposure to interest-rate risk, residual value risk, and credit risk, any one of which may adversely affect Nissan's financial position and results of operations.\n\n## Counterparty Credit Risk", - "page_start": 72, - "page_end": 72, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "## NISSAN Value-Up: Sustaining Performance\n\nNissan's position today is much different than it was six years ago or even three years ago. 
In 1999, we were in crisis, and the Nissan Revival Plan was needed to revive our company and build a future. In April 2002, when NISSAN 180 began, we wanted to complete the revival process, with an emphasis on profitable growth.\n\nNISSAN Value-Up is about sustaining performance. About taking all the gains we have made in connecting with our customers, in growing volumes, in creating value, in earning profits, in improving management- and then building upon these gains.\n\nWith NISSAN Value-Up, you will not see a radical break from NISSAN 180. This plan is evolutionary, not revolutionary. We will take the core elements that got us to this point-namely, more revenue, less cost, more quality and speed, and maximized Alliance benefit with Renaultand build upon them.\n\nNISSAN Value-Up has three critical commitments:\n\nProfit: Nissan will maintain the top level of operating profit margin among global automakers for each of the three years of the plan.\n\nVolume: Nissan will achieve global sales of 4.2 million units measured in fiscal 2008.\n\nROIC: Nissan will achieve a 20 percent ROIC on average over the course of the plan, based on the new formula that excludes cash on hand from the denominator.\n\nNISSAN Value-Up will oversee 28 new models, resulting in the start of production of 70 models worldwide, over two dozen more than the 44 production starts during NISSAN 180. Of the 28 new models, 18 will be replacements for existing models and 10 will be completely new 'conquest' models. 
We will enter more new segments, and we will introduce six models that will delight customers by being completely innovative in their concept and benefits.\n\nWe will pursue four major breakthroughs while implementing NISSAN Value-Up:\n\n - · Our Infiniti luxury brand will extend its reach into new markets such as China and Russia and continue to establish its credibility as a Tier-1 luxury player.\n - · We will develop our Light Commercial Vehicle (LCV) business into a fully competitive global operation through new market and product entries. By 2007, we plan to increase our LCV volume by 40 percent from fiscal 2004 to 434,000 units. During this period, operating margin is targeted to double from 4 percent to 8 percent.\n - · We will take a more efficient global sourcing approach to maximize our opportunities and minimize our overall costs as we grow. Our engineering, production and purchasing functions will continue their acceleration toward being fully integrated global operations.\n - · We will continue to invest in new and emerging markets, including China, India and Russia.", - "page_start": 11, - "page_end": 11, - "source_file": "OTC_NSANY_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_SEA_2014.pdf", - "query": "Why did Sundance Energy's oil sales improve in 2014?", - "target_page": 18, - "target_passage": "The increase in oil revenues was the result of increased oil production volumes ($81.3 million) offset by a decrease in product pricing ($15.7 million). ", - "chunk_present": { - "presence": true, - "index": 6 - } - }, - "top_chunk": [ - { - "text": "for a new energy future with greater natural gas usage and increased domestic oil production as two of its primary attributes, it is encouraging to see our political leadership finally grasp that natural gas stands alone as the only affordable, scalable and immediately available alternative to foreign oil and that U.S. 
oil production can be increased significantly in the years ahead.\n\nThe events of the past few months have unmistakably driven home the fact that it is insanity to rely on the Middle East to provide our economy's lifeline of oil. This should be especially obvious when one realizes that during the next 10 years, America will likely export at least another $4 trillion in national wealth to oil exporters around the world. Clearly, our country must demand from its leaders a new and more sustainable energy future.\n\n<!-- image -->\n\nAdvancing technology for cleaner operations: solar panels at a West Texas well power telemetry systems that provide pumpers with real-time information on oil and water tank levels to alarm them when levels near capacity, preventing tank spills.\n\nThe good news, however, is that America can now secure a new energy future thanks to Chesapeake and a handful of other leading U.S. E&P companies that have reinvented the process of finding natural gas and oil during the past five years. In doing so, we have discovered twice the resources of natural gas in the U.S. that Saudi Arabia possesses in oil. Furthermore, these same few companies that led the unconventional natural gas revolution have in just the past two years also reinvented the way in which we can find large new oil resources onshore in the U.S. In fact, I believe the U.S. can possibly increase its production of oil from the current 5.8 million barrels per day by 30-50% during the next 5-10 years, thereby potentially reaching the President's 2025 goal of reducing foreign oil imports by 33%, 5-10 years earlier than hoped.\n\nThe combination of these vast new discoveries of unconventional natural gas and liquids provides America with a unique future pathway toward greater energy independence, an industrial renaissance, economic rejuvenation and greater national security. I remain fully confident that the marketplace understands this and that over time the U.S. 
will more fully embrace and utilize clean, affordable, abundant American natural gas and increased domestic oil production as the best alternatives to burning environmentally challenged coal and expensive and dangerous foreign oil.\n\nThere is now a clear road ahead toward a more sustainable, affordable, dynamic and independent future if America embraces the remarkable gift of energy abundance that Chesapeake has helped discover in the U.S. You have my commitment, and the commitment of more than\n\nThe combination of these vast new discoveries of unconventional natural gas and liquids provides America with a unique future pathway toward greater energy independence, an industrial renaissance, economic rejuvenation and greater national security.\n\n10,000 other Chesapeake employees, that every day we are working hard to create shareholder value and a better future for our communities, our states and our country through the continued discovery and development of unconventional natural gas and liquids.\n\nBest regards,\n\n<!-- image -->\n\nAubrey K. McClendon\n\nChairman and Chief Executive Officer April 15, 2011", - "page_start": 16, - "page_end": 16, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "| Other interests | Oil and gas exploration and production | 31 |\n| Sorell Basin | Oil and gas exploration | 58 |\n| USA | | |\n| Gulf Coast | Oil and gas exploration and production | 39 |\n| Rocky Mountains | Oil and gas exploration and production | 50 |", - "page_start": 73, - "page_end": 73, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "TSR will be compared to a set of 22 oil and gas exploration and production companies headquartered in the United States and Australia. The Australian-headquartered companies are highlighted. The chart on the right depicts the TSR over a three year period ending 31 December 2014. 
Diamondback Energy Inc, Matador Resources Co and Midstates Petroleum Co Inc were excluded from the chart as there was not enough historical data to measure the defined TSR.\n\n| Company |\n|----------------------------|\n| Abraxas Petroleum Corp/NV |\n| Approach Resources Inc |\n| Austex Oil Ltd |\n| Beach Energy Ltd |\n| Bonanza Creek Energy Inc. |\n| Callon Petroleum CO/DE |\n| Carrizo Oil & Gas Inc |\n| Contango Oil & Gas Co |\n| Diamondback Energy Inc |\n| Emerald Oil Inc |\n| Lonestar Resources Ltd |\n| Matador Resources Co |\n| Midstates Petroleum Co Inc |\n| Panhandle Oil & Gas Inc |\n| Red Fork Energy Ltd |\n| Rex Energy Corp |\n| Sanchez Energy Corp |\n| Senex Energy Ltd |\n| Triangle Petroleum Corp |\n\n<!-- image -->\n\nRetirement and Other Benefits\n\nExecutive management participates in the same benefit plans and on the same basis as other employees. Those plans include health, dental and vision insurance (for which a premium contribution is required by the participant) and a 401(k) retirement plan under which the Company makes an annual contribution equal to 3 percent of the participant's eligible compensation.\n\nPost-Termination and Change In Control Benefits\n\nThe Managing Director's employment contract provides for payment of his base salary through the end of the contract term in the event he is terminated as a result of a change in control event. 
Additionally, in the event of a corporate take-over or change in control (as defined in the RSU Plan), our board in its discretion may cause all unvested RSUs to vest and be satisfied by the issue of one share each or provide for the cancellation of outstanding RSUs and a cash payment equal to the then-fair market value of the RSUs.", - "page_start": 39, - "page_end": 39, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "CHAIRMAN'S LETTER\n\n<!-- image -->\n\nDespite the reduction in crude oil and liquids prices towards the end of the year and continuing into 2015, the opertional performance and focused, value-adding transactions during the past year have positioned the Company very favourably for future growth in net asset value and shareholder returns.\n\n## Dear Fellow Shareholders,\n\nI am pleased to present Sundance Energy Australia Limited's Annual Report for the 12 months ended 31 December 2014. It has been another year of significant progress for Sundance across our portfolio of liquids rich oil and gas assets in the US.\n\nThe Company's strategic focus on growing production, cash flows and reserves from large, repeatable resource plays in North America continues to deliver positive results with growth in production, cash flows, and reserves.\n\nDuring late 2013 and 2014, we completed the divestment of our interest in the Williston Basin in North Dakota for $51 million which realised an internal rate of return of 45 percent; and also opportunistically divested our interest in the Denver-Julesburg Basin in Colorado for $114 million which realised an internal rate of return of 104 percent. 
These divestitures of smaller, less scalable positions enabled us to focus on developing and growing our assets in the Eagle Ford in Texas and our Mississippian/Woodford assets in Oklahoma.\n\nDespite the reduction in crude oil and liquids prices towards the end of the year and continuing into 2015, the operational performance and focused, value-adding transactions during the past year have positioned the Company very favourably for future growth in net asset value and shareholder returns.\n\n## A year of growing production, cash flow and reserves\n\nIn line with our strategy we continued to increase the level of company operated assets, and successfully maintained a very strong focus on optimising our operations and reducing costs. This resulted in an impressive improvement in well performance combined with a top tier cost structure.\n\nThrough our operated development program, we ended 2014 with record production of 9,434 barrels of oil equivalent per day (BOEPD) compared with an exit rate of 5,028 BOEPD in December 2013 and an average annual production of 6,635 BOEPD compared to 3,015 BOEPD in 2013. During 2014 we drilled and completed 42.7 net wells, primarily in the Eagle Ford, bringing our total well count to 81.3 by 31 December 2014. High value oil comprised approximately 69 percent of our total 2014 annual production and production from Sundance-operated projects accounted for 89 percent of total production for the year.\n\nCorresponding with the growth in annual production, the Company's full year revenues increased to $159.8 million and Adjusted EBITDAX increased to $126.4 million.\n\nThe Company's development program also generated significant growth in Constant Case reserves during the year. More details are contained elsewhere in this Annual Report, but in summary our 1P Reserves at the end of 2014 were 26.0 MBOE, 2P Reserves 54.1 MBOE, and 3P Reserves 147.7 MBOE. 
This compares with Reserves of 20.7 MBOE, 34.6 MBOE, and 92.8 MBOE, respectively, at the end of 2013.\n\nIn the current price environment, we have elected to scale back our drilling program to mainly concentrate on limited drilling obligations to hold Eagle Ford acreage. This will enable us to maintain our low leverage profile, which was approximately 1.03x debt to Adjusted EBITDAX at year end, and focus on growing our drilling inventory in an environment with less competition for leases and small acquisitions. Liquidity was $84 million at year end, with a borrowing base redetermination in 2015 expected to materially increase debt availability if the use of such funds is justified in line with our strategy.\n\n## The Eagle Ford - driving value and production growth", - "page_start": 3, - "page_end": 3, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "At year end, we had 197 gross 3P Reserves drilling locations across our Eagle Ford acreage where we continue to pursue operational and drilling efficiencies, opportunities to further improve well economics by improving recoveries and reducing costs. In 2014 this included a switch to pad drilling with zipper fracs and new completion techniques that have provided significant upside in production.\n\nDespite our current scaling back of drilling activity, we have set 2015 production guidance at 7,850 - 8,500 BOEPD, an increase from the previous year of some 13 - 17 percent, but a target that we believe is achievable while maintaining acceptable levels of liquidity given our demonstrated abilities and growing footprint in the Eagle Ford.\n\n## Safety and Environment\n\nSundance has a strong culture throughout the organisation of ensuring that high standards of safety are maintained and that our operations are conducted in an environmentally responsible way. 
During 2014 our comprehensive safety program was enhanced and further improvements will be a strong focus throughout 2015.\n\n## A strong financial position\n\nSundance is well placed for future growth in the Eagle Ford. The Company has a strong balance sheet to withstand the current low oil price environment, and our sound financial management strategy has seen the Company well supported by both new and existing investors in Australia and internationally.\n\nWe expect that Sundance will grow organically and also through further leasing or bolt-on acquisitions in our core Eagle Ford focus area within our current, conservative balance sheet parameters.\n\n## Positive outlook for 2015\n\nDespite the current oil pricing scenario, Sundance's medium-to-long term growth trajectory looks very positive.\n\nWe can demonstrate this through:\n\n- · A track record of capital efficient growth\n- · A track record of value creation\n- · Being a low cost/high margin operator\n- · Having top tier Eagle Ford assets with an extensive drilling inventory\n- · Having a clean balance sheet\n\nAs a mid-tier oil and gas producer and explorer in the S&P/ASX All Australian 200 index, and with the increasing interest and support from institutional and retail investors. I believe that Sundance will deliver significant long-term value from our assets for our shareholders.\n\n## Thank you for your support\n\nWe have had a busy year at Sundance and I would like to recognise the efforts and valued contribution of the Board of Directors, management team and all staff and contractors of the Company in helping us achieve our strategic goals. I am confident that we have the right team and excellent assets in place to execute our clear and focused strategy that we expect to deliver significant value for our shareholders.\n\nOn behalf of the Board and Company, I would like to thank our shareholders for your strong support of the Company throughout the year. 
We are committed to delivering long-term value for our shareholders and I look forward to reporting over the rest of the coming year on the continued value creation and growth of Sundance.\n\nYours sincerely,\n\n<!-- image -->\n\nMIKE HANNELL\n\nChairman\n\nThe Company has a strong balance sheet to withstand the current low oil price environment, and our sound financial management strategy has seen the Company well supported by both new and existing investors in Australia and internationally.", - "page_start": 4, - "page_end": 4, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "| Joint venture/area | Principal activities | Average interest % |\n|-----------------------------|--------------------------------------------------|----------------------|\n| Amadeus Basin | | |\n| Mereenie | Oil and gas production | 65 |\n| Mereenie Pipeline | Oil transportation | 65 |\n| Palm Valley | Gas production | 48 |\n| Browse Basin | Oil and gas exploration | 74 |\n| Carnarvon Basin | Oil and gas exploration and production | 32 |\n| Cooper Basin Downstream | Liquid hydrocarbon transportation and processing | 65 |\n| Cooper Basin Unit | | |\n| South Australia | Oil and gas production | 65 |\n| Queensland | Oil and gas production | 60 |\n| Cooper/Eromanga Basins | | |\n| South Australia | Oil and gas exploration and production | 65 |\n| Queensland, ATP 259P | Oil and gas exploration and production | 60 |\n| Other Eromanga | Oil and gas exploration and production | 74 |\n| Jackson Moonie Pipeline | Oil transportation | 83 |\n| Eastern Queensland | | |\n| Bowen Basin | Gas exploration and production | 50 |\n| Surat Basin | Oil and gas exploration and production | 48 |\n| Egypt | | |\n| Gulf of Suez | Oil and gas exploration | 50 |\n| Gippsland Basin | Oil and gas exploration and production | 35 |\n| Indonesia | | |\n| East Java Basin | Oil and gas exploration and production | 42 |\n| Kutei Basin | Oil and gas exploration | 35 |\n| West Natuna Basin | Oil and gas exploration and 
production | 6 |\n| West Papua | Oil and gas exploration | 20 |\n| Offshore Northern Australia | | |\n| Bonaparte Basin | Oil and gas exploration | 95 |\n| Houtman Basin | Oil and gas exploration | 42 |\n| Timor Gap | Oil and gas exploration and production | 17 |\n| Timor Sea | Oil and gas exploration and production | 22 |\n| Otway Basin | Oil and gas exploration and production | 36 |\n| Papua New Guinea | | |\n| PDL1 (Part Hides Field) | Oil and gas exploration | 31 |\n| Other interests | Oil and gas exploration and production | 31 |", - "page_start": 73, - "page_end": 73, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "The Eagle Ford contributed 4,187 Boe/d (68.1%) of total sales volume during the year ended 31 December 2014 compared to 1,371 Boe/d (46.4%) during the prior year. Mississippian/Woodford contributed 1,433 Boe/d (23.2%) of total sales volume during the year ended 31 December 2014 compared to 503 Boe/d (17.0%) during the prior year. Our sales volume is oil-weighted, with oil representing 75% and 77% of total sales volume for the year ended 31 December 2014 and 2013, respectively.\n\nOil sales. Oil sales increased by $65.6 million (82.7%) to $145.0 million for the year ended 31 December 2014 from $79.4 million for the prior year. The increase in oil revenues was the result of increased oil production volumes ($81.3 million) offset by a decrease in product pricing ($15.7 million). Oil production volumes increased 102.4% to 1,675,078 Bbls for the year ended 31 December 2014 compared to 827,432 Bbls for the prior year. The average price we realised on (NGL) the sale of our oil decreased by 9.8% to $86.56 per Bbl for the year ended 31 December 2014 from $95.92 per Bbl for the prior year.\n\nNatural gas sales. Natural gas sales increased by $3.4 million (122.1%) to $6.2 million for the year ended 31 December 2014 from $2.8 million for the prior year. 
The increase in natural gas revenues was primarily the result of increased production volumes ($2.6 million) and improved product pricing ($0.8 million). Natural gas production volumes increased 868,800 Mcf (93.0%) to 1,803,000 Mcf for the year ended 31 December 2014 compared to 934,200 Mcf for the prior year. The average price we realised on the sale of our natural gas increased by 15.1% to $3.42 per Mcf for the year ended 31 December 2014 from $2.97 per Mcf for the prior year.", - "page_start": 17, - "page_end": 17, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "Jeff Fisher Senior Vice President - Production\n\n<!-- image -->\n\n## What advantages does CHK's unique vertical integration strategy provide?\n\nChesapeake has built a large inventory of low-risk natural gas and liquids-rich plays that we plan to develop aggressively over the next two decades. As a result, we know that our company will consistently utilize a tremendous (and growing) amount of oilfield services for this resource development. This high level of planned drilling activity will create value for the provider of oilfield services, and Chesapeake's strategy is to capture a portion of this value for our shareholders rather than transfer it to third-party vendors whose interests and investments are not always aligned with ours. To date, Chesapeake has invested in drilling rigs, rental tools, water management equipment, trucking, compression equipment, midstream services, and most recently pressure pumping and fracture stimulation equipment. Chesapeake's activities require a high level of planning and project coordination that is best accomplished through vertical integration and ownership of the oilfield services we utilize. This approach creates a multitude of cost savings, an alignment of interests, operational synergies, greater capacity of equipment, increased safety and better coordinated logistics. 
In addition, Chesapeake's control of a large portion of the oilfield service equipment it utilizes provides a unique advantage to control the timing of leasehold development. Simply put, faster development of resources maximizes the present value of leasehold. This has been a key advantage for\n\nChesapeake over the past three years as the company has monetized leasehold investments at premium values through our joint ventures.\n\n## Will U.S. natural gas prices reconnect with world natural gas prices?\n\nNatural gas is a premium product and a cleaner-burning fuel than coal or oil-related products, including gasoline, diesel and heating oil. Despite this fact, over the past two years natural gas has received a low price in the U.S. market relative to coal and oil-related products, primarily as a result of a temporary surplus of production. This surplus has been principally caused by high levels of drilling activity as producers focused on holding by produc tion (HBP) leasehold in new highly productive, low cost natural gas shale plays. In essence, producers reinvented U.S. supply ahead of reinventing of U.S. demand. We believe HBP-incentivized drilling on natural gas plays will largely come to an end in 2012, and U.S. demand will soon also be reinvented to allow U.S. natural gas prices to reconnect to price parity with world natural gas prices that have risen to more than double U.S. natural gas prices.\n\nThis surge in world natural gas prices has been in response to $100+ oil prices and surging global liquefied natural gas (LNG) demand. In our view, the arbitrage in value between competing fuels is simply too wide. Capital and ideas will flow toward projects that make the most of this price disparity. Chesapeake and other companies are working to create the ability to export natural gas from the U.S. Gulf Coast and other regions in the form of LNG to premium Pacific Rim, European and South American markets, perhaps as soon as 2015. 
This initiative will also be aided by the widening of the Panama Canal to accommodate large LNG vessels. Furthermore, we believe that the\n\nJeff Mobley Senior Vice President -\n\n<!-- image -->\n\nInvestor Relations and Research", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "## Management's Discussion and Analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\n## Apartment Property Expenses\n\nSame store apartment property expenses increased 5.5% for the year ended December 31, 2013, due primarily to increased utility and fuel expenses as a result of high natural gas prices in Atlantic Canada, and higher electricity costs.\n\n## Utility and Fuel Expense - Same Store\n\nFor the years ended December 31,\n\n| | 2013 | 2012 | % change |\n|---------------------------------|---------|---------|------------|\n| natural gas | $4,565 | $2,729 | 67.3% |\n| oil | 1,523 | 2,095 | (27.3)% |\n| electricity | 5,197 | 4,671 | 11.3% |\n| Water | 3,582 | 3,474 | 3.1% |\n| other | 30 | 33 | (9.1)% |\n| Total utility and fuel expenses | $14,897 | $13,002 | 14.6% |\n\nKillam's apartment properties are heated with a combination of natural gas (55%), electricity (36%), oil (8%) and other sources (1%).\n\nElectricity costs at the unit level are usually paid directly by tenants, reducing Killam's exposure to the majority of the 4,500 units heated with electricity. Fuel costs associated with natural gas or oil fired heating plants are paid by Killam. As such, the Company is exposed to fluctuations in natural gas and oil costs, which represent 40.9% of total same store utility and fuel costs in 2013. Killam invests in green initiatives at its properties to maximize efficiencies, including converting many of its Halifax properties to natural gas from oil over the last three years as natural gas infrastructure has been expanded in the city. 
The decision to convert was supported by the substantial price difference between the cost of natural gas and oil in recent years.\n\nAs noted in the table above, Killam's utility and fuel expenses increased 14.6% in 2013 compared to 2012. The increase was primarily attributable to higher natural gas, electricity costs and water costs.\n\nKillam's natural gas expenses increased by 67.3% in 2013 due to higher gas prices in Atlantic Canada and an increase in properties burning natural gas following conversions of certain Halifax heating plants from oil to gas in 2012 and 2013. The reduction in oil expense in the quarter and year-to-date reflects this reduction in oil exposure.\n\nAs the following chart highlights, the per gigajoule (Gj) commodity cost for natural gas in New Brunswick and Nova Scotia was much higher than NYMEX in 2013 and less correlated to NYMEX than in previous years. (NYMEX is the New York Mercantile Exchange, a commodity futures exchange. Henry Hub, a gas distribution hub in Louisiana is the pricing point for natural gas futures contracts traded on NYMEX). The cost of natural gas in Atlantic Canada and New England experienced a spike from December 2012 until late spring 2013 and a second spike in December 2013, compared to other areas of Canada. Those spikes were both due to increased demand from utilities in Northeast New England and a shortage of gas pipeline capacity in Northeastern New England and Atlantic Canada. A temporary decline in gas supply off the coast of Nova Scotia further contributed to the high pricing in the first part of the year.\n\n## Historic Natural Gas Pricing ($ per Gj) Henry Hub Vs. Heritage Gas\n\n<!-- image -->", - "page_start": 37, - "page_end": 37, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "## Financial Position\n\nIn May 2014, the borrowing capacity under our credit facilities increased from an aggregate of $63 million to $135 million. 
The increase in the borrowing capacity was driven by the significant uplift of the Company's proved oil and gas reserves as at 31 December 2013. In conjunction with the increase in the Company's borrowing capacity, the Company expanded the syndicate of banks under the Senior Credit Facility. Bank of America Merrill Lynch and the Bank of Nova Scotia have now joined the bank group which is led by Wells Fargo.\n\nIn July 2014, the borrowing capacity increased an additional net $10 million, to $145 million, after taking into consideration the removal of proved oil and gas reserves associated with the DJ and Williston Basin dispositions and the development of proved oil and gas reserves in the Eagle Ford Formation.\n\nAt 31 December 2014, the Company had $130 million outstanding under our credit facilities and $15 million available under our borrowing capacity. Ending cash at 31 December 2014 was $69.2 million.\n\n## Cashflow\n\nCash provided by operating activities for the year ended 31 December 2014 increased 104.5% to $128.1 million compared to the prior year. This increase was primarily due to receipts from sales increasing $85.7 million, or 101.2%, to $170.4 million, while keeping payments to suppliers and employees relatively stable with an increase of $8.2 million, or 37.7%, to $30.0 million. See Review of Operations for more information.\n\nCash used in investing activities for the year ended 31 December 2014 increased $158.9 million, or 96.7%, to $323.2 million. This increase is due to successful implementation of the Company's strategy to develop and grow the reserves from our high working interest, repeatable resource plays, primarily in the Eagle Ford. Due to funding available to the Company through asset sales, capital raises and credit facilities, the Company was able to accelerate its 2015 drilling program into 2014. 
However, due to the reduction in crude oil prices in the fourth quarter of 2014 and continuing into early 2015, the Company will scale back its drilling program to concentrate on limited drilling obligations to hold Eagle Ford acreage during the 2015 year.\n\nCash provided by financing activities for the year ended 31 December 2014 increased $123.1 million, or 277.0%, to $167.6 million. This increase is a result of the increased availability and draws under the Company's credit facilities and proceeds received in a private placement of shares. In February 2014, the Company completed a private placement in which we sold 84.2 million ordinary shares at A$0.95 per share, resulting in net proceeds of approximately $68.4 million. The first tranche of 63.7 million shares was issued in March 2014 and the second tranche of 20.5 million shares was issued in April 2014.\n\n## Matters Subsequent to the End of the Financial Year\n\nSubsequent to 31 December 2014, an additional $13.9 million was drawn-down the credit facilities, bringing total outstanding debt to $143.9 million, with undrawn funds of $1.1 million.\n\nIn January 2015, the company acquired three leases totalling approximately 14,180 net acres in the Eagle Ford for approximately $13.4 million.\n\n## Future Developments, Prospects and Business Strategies\n\nThe Group's business strategies and prospects for growth in future financial years are presently concentrated on growing the value of the Group's current resource plays through direct leasing from mineral owners, small acquisitions of producing properties, drilling inventory within the Group's current balance sheet capabilities, and development of the Group's current acreage. 
Further information on likely development in the operations of the Group and expected results of operations has not been included because the Directors believe it would result in unreasonable prejudice to the Group.", - "page_start": 22, - "page_end": 22, - "source_file": "ASX_SEA_2014.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_SEA_2014.pdf", - "query": "I heard that Sundance Energy has acquired land in South Texas in July 2014, where is it?", - "target_page": 21, - "target_passage": "In July 2014, the Company completed the acquisition of approximately 5,700 net Eagle Ford acres in Dimmit County, South Texas", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "- (1) Acquired as a portfolio.\n - (2) Purchase price on acquisition does not include transaction-related costs.\n - (3) Killam entered into a 50/50 joint development agreement with another company for the purchase of this land. The $1.8 million purchase price represents\n - Killam's interest in the land.\n - (4) Included in the acquisition is 21,242 square feet of commercial space.\n\nIn addition to apartment acquisitions during 2013, Killam purchased a MHC in Antigonish with 65 sites and three parcels of land for future development. The parcel of land located in Cambridge is 5.2 acres and is zoned for a maximum height of seven stories and a density of 180 units. The parcel of land in Moncton is 0.8 acres and the land located at 1057 Barrington Street in Halifax is 0.7 acres and was purchased under a joint development agreement for the purpose of developing a six-story mixed-use building.", - "page_start": 47, - "page_end": 47, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "TSR will be compared to a set of 22 oil and gas exploration and production companies headquartered in the United States and Australia. The Australian-headquartered companies are highlighted. 
The chart on the right depicts the TSR over a three year period ending 31 December 2014. Diamondback Energy Inc, Matador Resources Co and Midstates Petroleum Co Inc were excluded from the chart as there was not enough historical data to measure the defined TSR.\n\n| Company |\n|----------------------------|\n| Abraxas Petroleum Corp/NV |\n| Approach Resources Inc |\n| Austex Oil Ltd |\n| Beach Energy Ltd |\n| Bonanza Creek Energy Inc. |\n| Callon Petroleum CO/DE |\n| Carrizo Oil & Gas Inc |\n| Contango Oil & Gas Co |\n| Diamondback Energy Inc |\n| Emerald Oil Inc |\n| Lonestar Resources Ltd |\n| Matador Resources Co |\n| Midstates Petroleum Co Inc |\n| Panhandle Oil & Gas Inc |\n| Red Fork Energy Ltd |\n| Rex Energy Corp |\n| Sanchez Energy Corp |\n| Senex Energy Ltd |\n| Triangle Petroleum Corp |\n\n<!-- image -->\n\nRetirement and Other Benefits\n\nExecutive management participates in the same benefit plans and on the same basis as other employees. Those plans include health, dental and vision insurance (for which a premium contribution is required by the participant) and a 401(k) retirement plan under which the Company makes an annual contribution equal to 3 percent of the participant's eligible compensation.\n\nPost-Termination and Change In Control Benefits\n\nThe Managing Director's employment contract provides for payment of his base salary through the end of the contract term in the event he is terminated as a result of a change in control event. 
Additionally, in the event of a corporate take-over or change in control (as defined in the RSU Plan), our board in its discretion may cause all unvested RSUs to vest and be satisfied by the issue of one share each or provide for the cancellation of outstanding RSUs and a cash payment equal to the then-fair market value of the RSUs.", - "page_start": 39, - "page_end": 39, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "During 2004, $28.2 million of the total purchase price paid for acquisitions and contingent payments to former owners was allocated to landÑll airspace. As of December 31, 2004, we had $743.6 million of landÑll development costs, net of accumulated depletion and amortization, which includes purchase price allocated to landÑll airspace as well as other capitalized landÑll costs. When a landÑll is acquired as part of a group of assets, purchase price is allocated to airspace based upon the discounted expected future cash Öows of the landÑll relative to the other assets within the acquired group and is adjusted for other non-depletable landÑll assets and liabilities acquired (primarily Ñnal capping, closure and post-closure liabilities). LandÑll purchase price is amortized using the units-of-consumption method over total available airspace, which includes probable expansion airspace where appropriate.", - "page_start": 40, - "page_end": 40, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## UNITED STATES OF AMERICA AVG WORKING INTEREST\n\n| | East Texas |\n|----------------------|--------------|\n| Black Horse* | 100.0 |\n| BP America | 25.0 |\n| Jefferson Co | 18.8 |\n| Knight | 30.0 |\n| South Texas | |\n| Bar Harbor | 25.0 |\n| BP Green* | 50.0 |\n| Coquat | 25.0 |\n| Cougar* | 100.0 |\n| Duncan Slough* | 66.2 |\n| E. 
Edinburgh | 20.8 |\n| Elsa | 25.0 |\n| Hall Ranch* | 57.5 |\n| Hordes Creek | 50.3 |\n| Lafite / Allen Dome* | 92.1 |\n| Markham | 16.0 |\n| Mikeska | 54.5 |\n| Mountainside | 20.8 |\n| Petru | 30.9 |\n| Raymondville | 25.3 |\n| Remmers* | 66.3 |\n| Riverdale | 23.1 |\n| Tidehaven* | 38.9 |\n| Verdad | 25.0 |\n| W. Mercedes | 25.0 |\n| South Louisiana | |\n| Howards Creek | 25.0 |\n| Montana | |\n| Deer Creek | 50.0 |\n\n - * Santos operated.\n - (I) Includes interests held by Basin Oil Pty Ltd. By contract dated 17 February 2005, Santos agreed to acquire Basin Oil Pty Ltd effective 1 January 2005. The transaction is expected to be completed in the second quarter of 2005.", - "page_start": 44, - "page_end": 44, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## NOTES TO THE CONSOLIDATED FINANCIAL STATEMENTS\n\n## NOTE 2 - BUSINESS COMBINATIONS\n\n## Acquisitions in 2014\n\nThere were no business acquisitions for the year ended 31 December 2014.\n\n## Acquisition in 2013\n\nOn 8 March 2013, the Company acquired 100% of the outstanding shares of Texon Petroleum Ltd (\"Texon\", whose name was changed to Armadillo Petroleum Ltd), an Australian corporation with oil and gas assets in the Eagle Ford formation in the United States. The Company acquired Texon to gain access to its existing production and drilling inventory in the Eagle Ford formation. As consideration for substantially all of the net assets of Texon, the Company issued 122.7 million ordinary shares (approximately 30.6% of the total outstanding shares immediately subsequent to the acquisition), which had a fair value of $132.1 million on the acquisition date and net cash consideration of $26.3 million for a total purchase price of $158.4 million. The net cash consideration includes a $141.0 million premerger purchase by the Company of certain Texon oil and gas properties, offset by $114.7 million of cash acquired at the time of the merger. 
The current income tax liability, included in accrued expenses, and deferred tax liability of $33.4 million and $16.9 million, respectively, are comprised of tax liabilities assumed as at the acquisition date and an increase in the tax liability related to the incremental acquisition date fair value of the acquired development and production and exploration and evaluation assets as compared to Texon's historical basis.\n\nThe following table reflects the final adjusted assets acquired and the liabilities assumed at their fair value or otherwise where specified by AASB 3/IFRS 3 Business Combinations (in thousands):\n\n| Fair value of assets acquired: | |\n|-------------------------------------------------|--------------|\n| Trade and other receivables | $ 5,604 |\n| Other current assets | 456 |\n| Development and production assets | 53,937 |\n| Exploration and evaluation assets | 150,474 |\n| Prepaid drilling and completion costs | 3,027 |\n| Amount attributable to assets acquired | 213,498 |\n| Fair value of liabilities assumed: | |\n| Trade and other payables | 119 |\n| Accrued expenses | 37,816 |\n| Restoration provision | 277 |\n| Deferred tax liabilities | 16,884 |\n| Amount attributable to liabilities assumed | 55,096 |\n| Net assets acquired | $ 158,402 |\n| Purchase price: | |\n| Cash and cash equivalents, net of cash acquired | $ 26,310 |\n| Issued capital | 132,092 |\n| Total consideration paid | $ 158,402 |\n\nThe net assets recognized in the 31 December 2013 financial statements were based on a provisional assessment of their fair value.", - "page_start": 74, - "page_end": 74, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "## Nordstrom, Inc.\n\n## Notes to Consolidated Financial Statements\n\nDollar and share amounts in millions except per share, per option and per unit amounts\n\nRent expense for 2014, 2013 and 2012 was as follows:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n|-----------------------------------|--------|--------|--------|\n| 
Minimum rent: | | | |\n| Store locations | $170 | $145 | $124 |\n| Offices, warehouses and equipment | 36 | 35 | 32 |\n| Percentage rent | 14 | 14 | 14 |\n| Property incentives | (83) | (69) | (65) |\n| Total rent expense | $137 | $125 | $105 |\n\nThe rent expense above does not include common area charges, real estate taxes and other executory costs, which were $88 in 2014, $81 in 2013 and $74 in 2012.\n\n## NOTE 11: COMMITMENTS AND CONTINGENT LIABILITIES\n\nOur estimated total purchase obligations, capital expenditure contractual commitments and inventory purchase orders were $2,092 as of January 31, 2015. In connection with the purchase of foreign merchandise, we have outstanding trade letters of credit totaling $1 as of January 31, 2015.\n\nPlans for our Manhattan full-line store, which we currently expect to open in late 2018 to 2019, ultimately include owning a condominium interest in a mixed-use tower and leasing certain nearby properties. As of January 31, 2015, we had approximately $125 of fee interest in land, which is expected to convert to the condominium interest once the store is constructed. We have committed to make future installment payments based on the developer meeting pre-established construction and development milestones. Our fee interest in the land is currently and will continue to be subject to lien by project development lenders until project completion or fulfillment of our existing installment payment commitment. In the unlikely event that this project is not completed, the opening may be delayed and we may potentially be subject to future losses or capital commitments in order to complete construction or to monetize our previous investments in the land.\n\n## NOTE 12: SHAREHOLDERS' EQUITY\n\nIn February 2013, our Board of Directors authorized a program to repurchase up to $800 of our outstanding common stock, through March 1, 2015. 
In September 2014, our Board of Directors authorized a new program to repurchase up to $1,000 of our outstanding common stock through March 1, 2016, in addition to the remaining amount available for repurchase under the previously authorized program. The following is a summary of the activity related to our share repurchase programs in 2012, 2013 and 2014:", - "page_start": 66, - "page_end": 66, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "## ACHIEVING INNOVATIVE COMMERCIALISATION\n\nOn top of exploration and new ventures growth opportunities, Santos has a large inventory of gas fields that are yet to be committed to gas contracts. These fields, known as contingent resources, represent significant opportunities for Santos.\n\nEach year Santos works towards commercialising these fields by finding new gas contracts or extending existing contracts so that they can be booked as Proven (1P) or Proven plus Probable (2P) reserves.\n\nSantos' contingent gas resources are largely located offshore southern Australia and Western Australia, in the Bonaparte Basin offshore northern Australia and onshore Papua New Guinea.\n\nSantos continued to deliver on gas commercialisation during 2004, commercialising 27 million boe during the year. Santos also achieved positive contract price reviews for gas sales that were well above the indexed levels.\n\n## UNIQUE ENERGY HUBS DELIVER GAS SWAPS\n\nSome of the most important gas commercialisation achievements for the year were the innovative gas swaps agreements that were only possible because of Santos' unique spread of assets across key Australian gas hubs.\n\nSantos and the other South West Queensland Gas Producers announced a coal seam methane gas swap in May to allow each party to supply the other party's contractual obligations in different states via the Moomba gas hub in central Australia. 
This arrangement for 200 PJ meant that Origin could avoid building a pipeline and that Santos could capture a share of the saving.\n\nGas swapping will commence in 2005 and could continue until the end of 2011.\n\nA second gas swap, from eastern Queensland to Gippsland, moved gas through three states and five joint ventures, expanding market horizons for partners and providing backup options to customers.\n\n## EXPANDED CASINO CONTRACT ENHANCES VALUE\n\nThe commercialisation of the Casino gas field in the Otway Basin, offshore southern Australia, continued during 2004 with an increase in the quantity of gas being sold under the initial term sheet signed in September 2003 with TXU for 293 PJ.\n\nWhen the project was sanctioned in October 2004, the joint venture announced an extension to the original Gas Sales Agreement to supply up to 420 PJ of gas, and possibly another 105 PJ, over 12 years for the Victorian or South Australian markets.\n\nThe Casino contracts are unique in that the reserves have been contracted prior to the field being fully appraised to confirm the quantity of gas available. This has allowed the joint venture to undertake appraisal drilling and near field exploration programs with the knowledge that all of the gas likely to be discovered will be taken, thereby significantly reducing the risk. This shortens the time from discovery to production and delivers profits to Santos and its shareholders sooner.\n\n## WA CONTRACTS FAST-TRACK JOHN BROOKES\n\nSantos and its co-venturer Apache won two significant gas contracts in Western Australia\n\n## ENERGY HUB STRATEGY\n\n<!-- image -->\n\nwhich resulted in the fast tracking and sanctioning of the John Brookes gas field in the Carnarvon Basin.\n\nThe successful appraisal of the field in late 2003 and early 2004 significantly increased the available gas reserves. 
The decision to bring the field into production by mid-2005 enabled active marketing of gas above that already allocated to support the declining East Spar field.\n\nIn a separate move, designed to enhance future commercialisation opportunities, the joint venture equity interests in the East Spar and the John Brookes fields were aligned through an acquisition program which created an important production hub at Varanus Island.\n\nJohn Brookes has an expected field life of more than 15 years which could be further extended by a development of the Reindeer field in later years.", - "page_start": 21, - "page_end": 21, - "source_file": "ASX_STO_2004.pdf" - }, - { - "text": "## REPUBLIC SERVICES, INC. AND SUBSIDIARIES\n\n## NOTES TO CONSOLIDATED FINANCIAL STATEMENTS\n\n(All tables in millions, except per share data) Ì (Continued)\n\n## Capitalized LandÑll Costs\n\nCapitalized landÑll costs include expenditures for land, permitting costs, cell construction costs and environmental structures. Capitalized permitting and cell construction costs are limited to direct costs relating to these activities, including legal, engineering and construction costs associated with excavation, natural and synthetic liners, construction of leachate collection systems, installation of methane gas collection and monitoring systems, installation of groundwater monitoring wells, and other costs associated with the development of the site. Interest is capitalized on landÑll construction projects while the assets are undergoing activities to ready them for their intended use. 
Capitalized landfill costs also include final capping, closure and post-closure assets accrued in accordance with SFAS 143 as discussed below.\n\nCosts related to acquiring land, excluding the estimated residual value of unpermitted, non-buffer land, and costs related to permitting and cell construction are depleted as airspace is consumed using the units-of-consumption method.\n\nCapitalized landfill costs may also include an allocation of purchase price paid for landfills. For landfills purchased as part of a group of several assets, the purchase price assigned to the landfill is determined based upon the discounted expected future cash flows of the landfill relative to the other assets within the acquired group. If the landfill meets the Company's expansion criteria, the purchase price is further allocated between permitted airspace and expansion airspace based upon the ratio of permitted versus probable expansion airspace to total available airspace. Landfill purchase price is amortized using the units-of-consumption method over the total available airspace including probable expansion airspace where appropriate.\n\n## Final Capping, Closure and Post-Closure Costs\n\nOn January 1, 2003, the Company changed the methodology it used to record final capping, closure and post-closure expense in accordance with SFAS 143. SFAS 143 does not change the basic landfill accounting policies followed by the Company and others in the waste industry. Through December 31, 2002, the industry has generally amortized capitalized costs and accrued future final capping, closure and post-closure obligations using the units-of-consumption method as cubic yards of available airspace are consumed over the life of the related landfill. 
This practice is referred to as life cycle accounting and will continue to be followed except as modified by SFAS 143 as discussed below.\n\nThe table below reflects significant changes between the Company's historical methodology and the methodology the Company currently uses to account for final capping, closure and post-closure activities and for methane gas collection systems:", - "page_start": 75, - "page_end": 75, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "## REPUBLIC SERVICES, INC. AND SUBSIDIARIES\n\n## NOTES TO CONSOLIDATED FINANCIAL STATEMENTS\n\n(All tables in millions, except per share data) — (Continued)\n\ndate of acquisition. The Company allocates the cost of the acquired business to the assets acquired and the liabilities assumed based on estimates of fair values thereof. These estimates are revised during the allocation period as necessary if, and when, information regarding contingencies becomes available to further define and quantify assets acquired and liabilities assumed. To the extent contingencies such as preacquisition environmental matters, litigation and related legal fees are resolved or settled during the allocation period, such items are included in the revised allocation of the purchase price. After the allocation period, the effect of changes in such contingencies is included in results of operations in the periods in which the adjustments are determined. The Company does not believe potential differences between its fair value estimates and actual fair values are material.\n\nThe Company acquired various solid waste businesses during the years ended December 31, 2004, 2003 and 2002. The aggregate purchase price paid for these transactions was $47.4 million, $51.5 million and $55.8 million, respectively.\n\nDuring 2004, 2003 and 2002, $28.2 million, $27.7 million and $5.1 million, respectively, of the total purchase price paid for acquisitions and contingent payments to former owners was allocated to landfill airspace. 
For landfills purchased as part of a group of several assets, the allocations of purchase price were based on the discounted expected future cash flow of each landfill relative to other assets within the acquired group and were adjusted for other non-depletable landfill assets and liabilities acquired (primarily final capping, closure and post-closure liabilities). Landfill purchase price is amortized using the units-of-consumption method over total available airspace, which includes probable expansion airspace where appropriate, and is included in property and equipment, net in the accompanying Consolidated Balance Sheets.\n\nThe following summarizes the preliminary purchase price allocations for business combinations accounted for under the purchase method of accounting:\n\n| | Years Ended December 31, | Years Ended December 31, | Years Ended December 31, |\n|----------------------------------------------------------------------|----------------------------|----------------------------|----------------------------|\n| | 2004 | 2003 | 2002 |\n| Property and equipment | $36.6 | $ 41.3 | $27.0 |\n| Intangible assets | 14.1 | 24.3 | 43.0 |\n| Restricted cash | .6 | — | — |\n| Working capital deficit | (3.4) | (14.9) | (8.9) |\n| Other assets (liabilities), net | (.6) | .8 | (5.3) |\n| Cash used in acquisitions, net of cash acquired | $47.3 | $ 51.5 | $55.8 |\n\nSubstantially all of the intangible assets recorded for these acquisitions are deductible for tax purposes.", - "page_start": 81, - "page_end": 81, - "source_file": "NYSE_RSG_2004.pdf" - }, - { - "text": "In order to construct, expand and operate a landfill, one or more construction or operating permits, as well as zoning and land use approvals, must be obtained. 
These are difficult and time-consuming to obtain, are often opposed by neighboring landowners and citizens' groups, may be subject to periodic renewal and are subject to modification and revocation by the issuing agency. In connection with our acquisition of existing landfills, it may be and on occasion has been necessary for our company to expend considerable time, effort and money to bring the acquired facilities into compliance with applicable requirements and to obtain the permits and approvals necessary to increase their capacity.\n\nMany of our facilities own and operate underground storage tanks which are generally used to store petroleum-based products. These tanks are generally subject to federal, state and local laws and regulations that mandate their periodic testing, upgrading, closure and removal, and that, in the event of leaks, require that polluted groundwater and soils be remediated. We believe that all of our underground storage tanks currently meet, in all material respects, all applicable regulations. If underground storage tanks we own or operate leak, and the leakage migrates onto the property of others, we could be liable for response costs and other damages to third parties. We are unaware of facts indicating that issues of compliance with regulations", - "page_start": 18, - "page_end": 18, - "source_file": "NYSE_RSG_2004.pdf" - } - ] - }, - { - "references": { - "source_file": "ASX_SEA_2014.pdf", - "query": "I am the CFO of Sundance Energy, will my base increase in 2015 as it did in 2014?", - "target_page": 31, - "target_passage": "No increases to Managing Director's or KMP's base salary", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "At year end, we had 197 gross 3P Reserves drilling locations across our Eagle Ford acreage where we continue to pursue operational and drilling efficiencies, opportunities to further improve well economics by improving recoveries and reducing costs. 
In 2014 this included a switch to pad drilling with zipper fracs and new completion techniques that have provided significant upside in production.\n\nDespite our current scaling back of drilling activity, we have set 2015 production guidance at 7,850 - 8,500 BOEPD, an increase from the previous year of some 13 - 17 percent, but a target that we believe is achievable while maintaining acceptable levels of liquidity given our demonstrated abilities and growing footprint in the Eagle Ford.\n\n## Safety and Environment\n\nSundance has a strong culture throughout the organisation of ensuring that high standards of safety are maintained and that our operations are conducted in an environmentally responsible way. During 2014 our comprehensive safety program was enhanced and further improvements will be a strong focus throughout 2015.\n\n## A strong financial position\n\nSundance is well placed for future growth in the Eagle Ford. The Company has a strong balance sheet to withstand the current low oil price environment, and our sound financial management strategy has seen the Company well supported by both new and existing investors in Australia and internationally.\n\nWe expect that Sundance will grow organically and also through further leasing or bolt-on acquisitions in our core Eagle Ford focus area within our current, conservative balance sheet parameters.\n\n## Positive outlook for 2015\n\nDespite the current oil pricing scenario, Sundance's medium-to-long term growth trajectory looks very positive.\n\nWe can demonstrate this through:\n\n- · A track record of capital efficient growth\n- · A track record of value creation\n- · Being a low cost/high margin operator\n- · Having top tier Eagle Ford assets with an extensive drilling inventory\n- · Having a clean balance sheet\n\nAs a mid-tier oil and gas producer and explorer in the S&P/ASX All Australian 200 index, and with the increasing interest and support from institutional and retail investors. 
I believe that Sundance will deliver significant long-term value from our assets for our shareholders.\n\n## Thank you for your support\n\nWe have had a busy year at Sundance and I would like to recognise the efforts and valued contribution of the Board of Directors, management team and all staff and contractors of the Company in helping us achieve our strategic goals. I am confident that we have the right team and excellent assets in place to execute our clear and focused strategy that we expect to deliver significant value for our shareholders.\n\nOn behalf of the Board and Company, I would like to thank our shareholders for your strong support of the Company throughout the year. We are committed to delivering long-term value for our shareholders and I look forward to reporting over the rest of the coming year on the continued value creation and growth of Sundance.\n\nYours sincerely,\n\n<!-- image -->\n\nMIKE HANNELL\n\nChairman\n\nThe Company has a strong balance sheet to withstand the current low oil price environment, and our sound financial management strategy has seen the Company well supported by both new and existing investors in Australia and internationally.", - "page_start": 4, - "page_end": 4, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "CHAIRMAN'S LETTER\n\n<!-- image -->\n\nDespite the reduction in crude oil and liquids prices towards the end of the year and continuing into 2015, the operational performance and focused, value-adding transactions during the past year have positioned the Company very favourably for future growth in net asset value and shareholder returns.\n\n## Dear Fellow Shareholders,\n\nI am pleased to present Sundance Energy Australia Limited's Annual Report for the 12 months ended 31 December 2014. 
It has been another year of significant progress for Sundance across our portfolio of liquids rich oil and gas assets in the US.\n\nThe Company's strategic focus on growing production, cash flows and reserves from large, repeatable resource plays in North America continues to deliver positive results with growth in production, cash flows, and reserves.\n\nDuring late 2013 and 2014, we completed the divestment of our interest in the Williston Basin in North Dakota for $51 million which realised an internal rate of return of 45 percent; and also opportunistically divested our interest in the Denver-Julesburg Basin in Colorado for $114 million which realised an internal rate of return of 104 percent. These divestitures of smaller, less scalable positions enabled us to focus on developing and growing our assets in the Eagle Ford in Texas and our Mississippian/Woodford assets in Oklahoma.\n\nDespite the reduction in crude oil and liquids prices towards the end of the year and continuing into 2015, the operational performance and focused, value-adding transactions during the past year have positioned the Company very favourably for future growth in net asset value and shareholder returns.\n\n## A year of growing production, cash flow and reserves\n\nIn line with our strategy we continued to increase the level of company operated assets, and successfully maintained a very strong focus on optimising our operations and reducing costs. This resulted in an impressive improvement in well performance combined with a top tier cost structure.\n\nThrough our operated development program, we ended 2014 with record production of 9,434 barrels of oil equivalent per day (BOEPD) compared with an exit rate of 5,028 BOEPD in December 2013 and an average annual production of 6,635 BOEPD compared to 3,015 BOEPD in 2013. During 2014 we drilled and completed 42.7 net wells, primarily in the Eagle Ford, bringing our total well count to 81.3 by 31 December 2014. 
High value oil comprised approximately 69 percent of our total 2014 annual production and production from Sundance-operated projects accounted for 89 percent of total production for the year.\n\nCorresponding with the growth in annual production, the Company's full year revenues increased to $159.8 million and Adjusted EBITDAX increased to $126.4 million.\n\nThe Company's development program also generated significant growth in Constant Case reserves during the year. More details are contained elsewhere in this Annual Report, but in summary our 1P Reserves at the end of 2014 were 26.0 MBOE, 2P Reserves 54.1 MBOE, and 3P Reserves 147.7 MBOE. This compares with Reserves of 20.7 MBOE, 34.6 MBOE, and 92.8 MBOE, respectively, at the end of 2013.\n\nIn the current price environment, we have elected to scale back our drilling program to mainly concentrate on limited drilling obligations to hold Eagle Ford acreage. This will enable us to maintain our low leverage profile, which was approximately 1.03x debt to Adjusted EBITDAX at year end, and focus on growing our drilling inventory in an environment with less competition for leases and small acquisitions. Liquidity was $84 million at year end, with a borrowing base redetermination in 2015 expected to materially increase debt availability if the use of such funds is justified in line with our strategy.\n\n## The Eagle Ford - driving value and production growth", - "page_start": 3, - "page_end": 3, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "FINANCIALS 2014", - "page_start": 10, - "page_end": 10, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "<!-- image -->\n\n## WHILE IT IS EARLY DAYS, I BELIEVE WE CAN EVOLVE THE BUSINESS IN A WAY THAT WILL BE EVEN MORE REWARDING FOR OUR CUSTOMERS, OUR SHAREHOLDERS AND EMPLOYEES.' 
'\n\nGUY LAURENCE\n\n## A MESSAGE FROM THE PRESIDENT & CEO\n\nAs I write these words after recently joining the company, I can say with genuine enthusiasm that it's great to be here at Rogers. I took this post because Rogers is a remarkable company with a rich history and an unrivalled mix of wireless, cable and media assets. It is a good match with my background and my experience.\n\nDuring the recruiting and onboarding process, I spent considerable time with the Rogers family, the Board of Directors and the leadership team. I am struck by their energy, passion and drive to win, which I think we can harness to do even greater things. I also value the support and longer-term focus of the founding Rogers family who own significant equity in the company.\n\nSince joining, I have criss-crossed Canada meeting my team, external stakeholders and customers. I have also conducted numerous business reviews, overseen the 700 MHz spectrum auction and reviewed the regulatory agenda. All this with the view to developing a detailed set of priorities and plans for the company going forward. After I complete this review in the Spring I will outline a detailed strategy and business plan working with my management team.\n\nRogers has many strengths and I intend to capitalize on them. This is a financially strong company with a solid balance sheet and investment grade credit ratings. We have highly advanced cable and wireless networks and a robust portfolio of media assets. We also have a strong pipeline of new products and services to offer to our customers and some of the most passionate, committed employees I have ever worked with.\n\nWhile it is early days, I believe we can evolve the business in a way that will be even more rewarding for our customers, our shareholders and employees. Our goal is clear - winning on a consistent basis. 
And while our industry faces the challenge of moderating growth and regulatory uncertainty, few industries are more dynamic and better at leveraging new technologies.\n\nTo win, we must put our customers' needs front and centre in everything we do. This means delivering a better and more consistent customer experience. It means strengthening our value proposition to make sure our customers can answer the question 'why Rogers?' As a company, we need to bring our collection of assets together in a way that strengthens and differentiates Rogers with our customers and our shareholders. We also need to align and focus our investments in key areas to accelerate our growth. Internally we need to execute with operational excellence. And we need to focus on clarifying accountabilities and strengthening our teams at all levels of the company.\n\nAs CEO, I will work to re-establish our leadership position and accelerate our growth. This will take time. It is a longterm effort that will require a clear strategy, rigorous prioritization and disciplined execution. It will not be easy, but it is the job I have signed up for, and it is a challenge I intend to meet head-on.\n\nI look forward to continuing Ted's legacy, and to leading Rogers through the next phase of growth and to serving you, our shareholders.\n\nThank you for your continued business, investment and support.\n\n<!-- image -->\n\nGUY LAURENCE\n\nPRESIDENT AND CHIEF EXECUTIVE OFFICER ROGERS COMMUNICATIONS INC.", - "page_start": 6, - "page_end": 6, - "source_file": "NYSE_RCI_2013.pdf" - }, - { - "text": "## Financial Position\n\nIn May 2014, the borrowing capacity under our credit facilities increased from an aggregate of $63 million to $135 million. The increase in the borrowing capacity was driven by the significant uplift of the Company's proved oil and gas reserves as at 31 December 2013. 
In conjunction with the increase in the Company's borrowing capacity, the Company expanded the syndicate of banks under the Senior Credit Facility. Bank of America Merrill Lynch and the Bank of Nova Scotia have now joined the bank group which is led by Wells Fargo.\n\nIn July 2014, the borrowing capacity increased an additional net $10 million, to $145 million, after taking into consideration the removal of proved oil and gas reserves associated with the DJ and Williston Basin dispositions and the development of proved oil and gas reserves in the Eagle Ford Formation.\n\nAt 31 December 2014, the Company had $130 million outstanding under our credit facilities and $15 million available under our borrowing capacity. Ending cash at 31 December 2014 was $69.2 million.\n\n## Cashflow\n\nCash provided by operating activities for the year ended 31 December 2014 increased 104.5% to $128.1 million compared to the prior year. This increase was primarily due to receipts from sales increasing $85.7 million, or 101.2%, to $170.4 million, while keeping payments to suppliers and employees relatively stable with an increase of $8.2 million, or 37.7%, to $30.0 million. See Review of Operations for more information.\n\nCash used in investing activities for the year ended 31 December 2014 increased $158.9 million, or 96.7%, to $323.2 million. This increase is due to successful implementation of the Company's strategy to develop and grow the reserves from our high working interest, repeatable resource plays, primarily in the Eagle Ford. Due to funding available to the Company through asset sales, capital raises and credit facilities, the Company was able to accelerate its 2015 drilling program into 2014. 
However, due to the reduction in crude oil prices in the fourth quarter of 2014 and continuing into early 2015, the Company will scale back its drilling program to concentrate on limited drilling obligations to hold Eagle Ford acreage during the 2015 year.\n\nCash provided by financing activities for the year ended 31 December 2014 increased $123.1 million, or 277.0%, to $167.6 million. This increase is a result of the increased availability and draws under the Company's credit facilities and proceeds received in a private placement of shares. In February 2014, the Company completed a private placement in which we sold 84.2 million ordinary shares at A$0.95 per share, resulting in net proceeds of approximately $68.4 million. The first tranche of 63.7 million shares was issued in March 2014 and the second tranche of 20.5 million shares was issued in April 2014.\n\n## Matters Subsequent to the End of the Financial Year\n\nSubsequent to 31 December 2014, an additional $13.9 million was drawn-down the credit facilities, bringing total outstanding debt to $143.9 million, with undrawn funds of $1.1 million.\n\nIn January 2015, the company acquired three leases totalling approximately 14,180 net acres in the Eagle Ford for approximately $13.4 million.\n\n## Future Developments, Prospects and Business Strategies\n\nThe Group's business strategies and prospects for growth in future financial years are presently concentrated on growing the value of the Group's current resource plays through direct leasing from mineral owners, small acquisitions of producing properties, drilling inventory within the Group's current balance sheet capabilities, and development of the Group's current acreage. 
Further information on likely development in the operations of the Group and expected results of operations has not been included because the Directors believe it would result in unreasonable prejudice to the Group.", - "page_start": 22, - "page_end": 22, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "## Our Goals for 2014\n\nComplete a minimum of $75 million in acquisitions.\n\nAcquire over 50% of 2014 acquisitions outside Atlantic Canada, with a focus in Ontario.\n\nGrow same store NOI by up to 2%.\n\nContinue to invest in development with two projects underway, managing projects on schedule and on budget.\n\nDevelopment program to a maximum of 5% of our balance sheet per year. We have three other development projects in various planning stages, but don't expect to begin construction on any additional new projects until late 2014 or into 2015.\n\n## Geographic Diversification is a Priority\n\nGeographic diversification is a priority for Killam. Our asset base in Atlantic Canada is the foundation of the Company; however, with Atlantic Canada representing only 5% of the Canadian rental market, our growth opportunities increase significantly by expanding our target markets outside of this region. With its strong operating platform, Killam can support a larger and more geographically diverse portfolio. We are actively growing a portfolio of apartments in Ontario in three target markets: Ottawa, the Greater Toronto Area, and Southwestern Ontario. An increased investment outside Atlantic Canada will increase not only Killam's growth potential, it will also expand the Company's diversification and exposure to higher growth markets.\n\nAcquisitions in Ontario represented 45% of acquisitions in 2013. In addition to 1,359 apartment units in the province, we also have 2,144 manufactured home community sites, representing 29% of the MHC NOI last year. 
Based on our current portfolio, 15% of Killam's 2014 NOI will be generated in Ontario, compared to our longer-term goal of generating 50% of NOI outside Atlantic Canada. We expect to reach this goal by focusing acquisition activity in Ontario, with the majority of future investment anticipated in the province over the next few years. We will look for additional development opportunities in Ontario and we are exploring opportunities in Western Canada, attracted by the strong population growth trends in Alberta's urban markets.\n\nI would like to thank all Killam employees for their contributions and commitment over the last year and our board of directors for their governance. Also, I would like to thank you, our shareholders, for your continued investment in Killam. I invite you to attend the Company's annual meeting on May 7, 2014 at 2:00 pm Atlantic Time at the Halifax Marriott Harbourfront Hotel, either in person or via webcast.\n\n<!-- image -->\n\nYours truly,\n\nPhilip Fraser", - "page_start": 10, - "page_end": 10, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "| Growth in Same Store Net Operating Income | |\n| 2013 Target | Same Store NOI growth of 0% to 1% (adjusted from 2% to 4% following Q2 2013). |\n| 2013 Performance | Consolidated same store NOI decreased by 0.4% for the year ended December 31, 2013. This decrease was driven by an increase in natural gas prices in Atlantic Canada during the peak heating season in the first quarter as well as another spike in pricing in New Brunswick in December 2013. This resulted in a 14.6% increase in utility and fuel expenses compared to 2012 within the apartment portfolio. An increase in net property revenues, as well as the management of other property operating expenses at levels consistent with 2012, helped to offset the impact of higher utility costs. 
|\n| 2014 Targets | |", - "page_start": 26, - "page_end": 26, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "TSR will be compared to a set of 22 oil and gas exploration and production companies headquartered in the United States and Australia. The Australian-headquartered companies are highlighted. The chart on the right depicts the TSR over a three year period ending 31 December 2014. Diamondback Energy Inc, Matador Resources Co and Midstates Petroleum Co Inc were excluded from the chart as there was not enough historical data to measure the defined TSR.\n\n| Company |\n|----------------------------|\n| Abraxas Petroleum Corp/NV |\n| Approach Resources Inc |\n| Austex Oil Ltd |\n| Beach Energy Ltd |\n| Bonanza Creek Energy Inc. |\n| Callon Petroleum CO/DE |\n| Carrizo Oil & Gas Inc |\n| Contango Oil & Gas Co |\n| Diamondback Energy Inc |\n| Emerald Oil Inc |\n| Lonestar Resources Ltd |\n| Matador Resources Co |\n| Midstates Petroleum Co Inc |\n| Panhandle Oil & Gas Inc |\n| Red Fork Energy Ltd |\n| Rex Energy Corp |\n| Sanchez Energy Corp |\n| Senex Energy Ltd |\n| Triangle Petroleum Corp |\n\n<!-- image -->\n\nRetirement and Other Benefits\n\nExecutive management participates in the same benefit plans and on the same basis as other employees. Those plans include health, dental and vision insurance (for which a premium contribution is required by the participant) and a 401(k) retirement plan under which the Company makes an annual contribution equal to 3 percent of the participant's eligible compensation.\n\nPost-Termination and Change In Control Benefits\n\nThe Managing Director's employment contract provides for payment of his base salary through the end of the contract term in the event he is terminated as a result of a change in control event. 
Additionally, in the event of a corporate take-over or change in control (as defined in the RSU Plan), our board in its discretion may cause all unvested RSUs to vest and be satisfied by the issue of one share each or provide for the cancellation of outstanding RSUs and a cash payment equal to the then-fair market value of the RSUs.", - "page_start": 39, - "page_end": 39, - "source_file": "ASX_SEA_2014.pdf" - }, - { - "text": "## Nordstrom, Inc.\n\n## Notes to Consolidated Financial Statements\n\nDollar and share amounts in millions except per share, per option and per unit amounts\n\nRent expense for 2014, 2013 and 2012 was as follows:\n\n| Fiscal year | 2014 | 2013 | 2012 |\n|-----------------------------------|--------|--------|--------|\n| Minimum rent: | | | |\n| Store locations | $170 | $145 | $124 |\n| Offices, warehouses and equipment | 36 | 35 | 32 |\n| Percentage rent | 14 | 14 | 14 |\n| Property incentives | (83) | (69) | (65) |\n| Total rent expense | $137 | $125 | $105 |\n\nThe rent expense above does not include common area charges, real estate taxes and other executory costs, which were $88 in 2014, $81 in 2013 and $74 in 2012.\n\n## NOTE 11: COMMITMENTS AND CONTINGENT LIABILITIES\n\nOur estimated total purchase obligations, capital expenditure contractual commitments and inventory purchase orders were $2,092 as of January 31, 2015. In connection with the purchase of foreign merchandise, we have outstanding trade letters of credit totaling $1 as of January 31, 2015.\n\nPlans for our Manhattan full-line store, which we currently expect to open in late 2018 to 2019, ultimately include owning a condominium interest in a mixed-use tower and leasing certain nearby properties. As of January 31, 2015, we had approximately $125 of fee interest in land, which is expected to convert to the condominium interest once the store is constructed. 
We have committed to make future installment payments based on the developer meeting pre-established construction and development milestones. Our fee interest in the land is currently and will continue to be subject to lien by project development lenders until project completion or fulfillment of our existing installment payment commitment. In the unlikely event that this project is not completed, the opening may be delayed and we may potentially be subject to future losses or capital commitments in order to complete construction or to monetize our previous investments in the land.\n\n## NOTE 12: SHAREHOLDERS' EQUITY\n\nIn February 2013, our Board of Directors authorized a program to repurchase up to $800 of our outstanding common stock, through March 1, 2015. In September 2014, our Board of Directors authorized a new program to repurchase up to $1,000 of our outstanding common stock through March 1, 2016, in addition to the remaining amount available for repurchase under the previously authorized program. The following is a summary of the activity related to our share repurchase programs in 2012, 2013 and 2014:", - "page_start": 66, - "page_end": 66, - "source_file": "NYSE_JWN_2014.pdf" - }, - { - "text": "We are targeting positive same store growth in 2014 of up to 2%. Year-over-year occupancy improvements and increased rental rates are expected to generate revenue growth. Increasing our leasing staff and refining our marketing and leasing process is proving effective, resulting in improved occupancy levels in many of our core markets, especially in Ontario and New Brunswick. A colder than normal winter this year (2014) is translating into increased energy consumption and continued volatility in natural gas prices in Atlantic Canada, expected to result in higher than normal heating costs. 
We continue to invest in energy and operational efficiencies which we expect will keep our controllable costs down throughout the year and partially offset higher heating costs.", - "page_start": 8, - "page_end": 8, - "source_file": "TSX_KMP_2013.pdf" - } - ] - }, - { - "references": { - "source_file": "sg247938.pdf", - "query": "What are the physical requirements for installing the Storwize V7000?", - "target_page": 70, - "target_passage": "You must consider several key factors when you are planning the physical site of a Storwize V7000 installation. The physical site must have the following characteristics: \u0002 Meets power, cooling, and location requirements of the Storwize V7000 nodes. \u0002 Has two separate power sources. \u0002 Sufficient rack space exists for the installation of controller and disk expansion enclosures. \u0002 Has sufficient maximum power rating of the rack. Plan your rack placement carefully to not exceed maximum power rating of the rack. For more information about the power and environmental requirements, see this website", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "When you plan deployment of Storwize V7000, identify networking technologies that you will use.\n\nNote: With Spectrum Virtualize V8.1.1.1 and later, RDMA (iSER) is supported by 25 Gb Ethernet iSCSI adapter cards with V7000 Gen2+ only. For more information, see 3.7.4, 'iSCSI Extensions for RDMA (iSER)' on page 62.\n\n## 3.4 Physical planning\n\nYou must consider several key factors when you are planning the physical site of a Storwize V7000 installation. The physical site must have the following characteristics:\n\n - Meets power, cooling, and location requirements of the Storwize V7000 nodes.\n - Has two separate power sources.\n - Sufficient rack space exists for the installation of controller and disk expansion enclosures.\n - Has sufficient maximum power rating of the rack. 
Plan your rack placement carefully to not exceed maximum power rating of the rack. For more information about the power and environmental requirements, see this website.\n\nYour Storwize V7000 2076-524 and Storwize V7000 2076-624 order includes a printed copy of the IBM Storwize V7000 Gen2 and Gen2+ Quick Installation Guide, which also provides information about environmental and power requirements.\n\n## 3.4.1 Cabling\n\nCreate a cable connection table that follows your environment's documentation procedure to track all of the following connections that are required for the setup:", - "page_start": 69, - "page_end": 69, - "source_file": "sg247938.pdf" - }, - { - "text": "## 4.1 Prerequisites\n\nBefore initializing and setting up the Storwize V7000, ensure that the following prerequisites are met:", - "page_start": 109, - "page_end": 109, - "source_file": "sg247938.pdf" - }, - { - "text": "- 6. If the previous preparation steps were followed, the Storwize V7000 is now seen as a host from the system to be migrated. LUs can then be mapped to the Storwize V7000. Map the external storage system by following the instructions that are shown in Figure 9-6.", - "page_start": 413, - "page_end": 413, - "source_file": "sg247938.pdf" - }, - { - "text": "- 4. Establish a secure connection between the client and Storwize V7000 system.", - "page_start": 777, - "page_end": 777, - "source_file": "sg247938.pdf" - }, - { - "text": "## 3.1 General planning rules\n\nImportant: At the time of this writing, the statements that are provided in this book are accurate but can change. 
Always verify any statements that are made in this book with the IBM Storwize V7000 supported hardware list, device driver, firmware, and recommended software levels information that are available at the following websites:\n\n - /SM590000 Support Information for Storwize V7000\n - /SM590000 IBM System Storage Interoperation Center (SSIC)\n\nTo maximize the benefit that is realized from the Storwize V7000, pre-installation planning must include several important steps. These steps ensure that the Storwize V7000 provides the best possible performance, reliability, and ease of management for your application needs. The correct configuration also helps minimize downtime by avoiding changes to the Storwize V7000 and the storage area network (SAN) environment to meet future growth needs.\n\nThis book is not intended to provide in-depth information about the described topics. For an enhanced analysis of advanced topics, see IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and Performance Guidelines , SG24-7521.\n\n## 3.1.1 Basic planning flow\n\nThe general rule of planning is to define your goals, and then, plan a solution that can be shown to meet these goals. Always remember to verify that each element of your configuration is supported.\n\nConsider the following points when planning for the Storwize V7000:", - "page_start": 65, - "page_end": 65, - "source_file": "sg247938.pdf" - }, - { - "text": "Hosts can be connected to Storwize V7000 system using any of the following protocols:", - "page_start": 339, - "page_end": 339, - "source_file": "sg247938.pdf" - }, - { - "text": "## 13.4.4 Updating IBM Storwize V7000 drive code\n\nAfter completing the Storwize V7000 software update as described in 13.4, 'Software update' on page 687, the firmware of the Storwize V7000 drives also must be updated. The upgrade test utility identified that downlevel drives are in the system, as shown in Figure 13-25. 
However, this fact does not stop the system software from being performed.\n\nFigure 13-25 Upgrade test utility drive firmware warning\n\n<!-- image -->\n\nTo update the IBM Storwize V7000 drive code, complete the following steps:", - "page_start": 717, - "page_end": 717, - "source_file": "sg247938.pdf" - }, - { - "text": "A thin-provisioned volume feature that is called zero detect provides clients with the ability to reclaim unused allocated disk space (zeros) when they are converting a fully allocated volume to a thin-provisioned volume by using volume mirroring.\n\n## 3.12 Host attachment planning\n\nThe typical FC host attachment to the Storwize V7000 is done through SAN fabric. However, the system allows direct attachment connectivity between its 8 Gb or 16 Gb Fibre Channel ports and host ports. No special configuration is required for host systems that are using this configuration. However, the maximum number of directly attached hosts is severely limited by the number of FC ports on Storwize V7000's nodes.\n\nThe Storwize V7000 imposes no particular limit on the distance between the Storwize V7000 nodes and host servers. However, for host attachment, the Storwize V7000 supports up to three ISL hops in the fabric. This capacity means that the server to the Storwize V7000 can be separated by up to five FC links, four of which can be 10 km long (6.2 miles) if long wave Small Form-factor Pluggables (SFPs) are used.\n\nFigure 3-9 shows an example of a supported configuration with Storwize V7000 nodes using shortwave SFPs.\n\nFigure 3-9 Example of host connectivity\n\n<!-- image -->\n\nIn Figure 3-9, the optical distance between Storwize V7000 Node 1 and Host 2 is slightly over 40 km (24.85 miles).\n\nTo avoid latencies that lead to degraded performance, avoid ISL hops whenever possible. 
In an optimal setup, the servers connect to the same SAN switch as the Storwize V7000 nodes.\n\nNote: Before attaching host systems to Storwize V7000, review the Configuration Limits and Restrictions for the IBM System Storage Storwize V7000 at this IBM Support web page.", - "page_start": 91, - "page_end": 91, - "source_file": "sg247938.pdf" - }, - { - "text": "- /SM590000 'Storwize V7000 performance overview' on page 740", - "page_start": 760, - "page_end": 760, - "source_file": "sg247938.pdf" - }, - { - "text": "## 3.18 Storwize V7000 configuration backup procedure\n\nSave the configuration before and after any change to the clustered system, such as adding nodes and back-end storage. Saving the configuration is a crucial part of Storwize V7000 management, and various methods can be applied to back up your Storwize V7000 configuration. The preferred practice is to implement an automatic configuration backup using the configuration backup command. Make sure that you save the configuration to storage that is not dependent on the SAN Virtualization Controller.\n\nFor more information, see Chapter 13, 'RAS, monitoring, and troubleshooting' on page 673.\n\n## 3.19 Performance considerations\n\nStorage virtualization with the Storwize V7000 improves flexibility and simplifies management of storage infrastructure, and can provide a substantial performance advantage. The Storwize V7000 caching capability and its ability to stripe volumes across multiple disk arrays are the reasons why usually significant performance improvements are observed when Storwize V7000 is used to virtualize midrange back-end storage subsystems.\n\nTip: Technically, almost all storage controllers provide both striping (in the form of RAID 5, RAID 6, or RAID 10) and a form of caching. The real benefit of Storwize V7000 is the degree to which you can stripe the data across disks in a storage pool, even if they are installed in different back-end storage systems. 
This technique maximizes the number of active disks available to service I/O requests. The Storwize V7000 provides more caching, but its impact is secondary for sustained workloads.\n\nTo ensure the performance that you want and verify the capacity of your storage infrastructure, analyze performance and capacity to reveal the business requirements of your storage environment. Use the analysis results and the guidelines in this chapter to design a solution that meets the business requirements of your organization.\n\nWhen considering performance for a system, always identify the bottleneck and, therefore, the limiting factor of a specific system. This is a multidimensional analysis that needs to be performed for each of your workload patterns. There can be different bottleneck components for different workloads.\n\nWhen you are designing a storage infrastructure with the Storwize V7000 or implementing a Storwize V7000 in an existing storage infrastructure, you must ensure that the performance and capacity of the SAN, back-end disk subsystems, and Storwize V7000 meets requirements for the set of known or expected workloads.\n\nThe following Storwize V7000 models are supported for V8.2.1:", - "page_start": 101, - "page_end": 101, - "source_file": "sg247938.pdf" - } - ] - }, - { - "references": { - "source_file": "sg247938.pdf", - "query": "Is '1oijizer--10108453535318919918883384---jhjjzhiuhzrh--14584joiz///KK ' valid for a pool?", - "target_page": 218, - "target_passage": "Naming rules: When you choose a name for a pool, the following rules apply: \u0002 Names must begin with a letter. \u0002 The first character cannot be numeric. \u0002 The name can be a maximum of 63 characters. \u0002 Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 - 9), underscore (_), period (.), hyphen (-), and space. \u0002 Names must not begin or end with a space. \u0002 Object names must be unique within the object type. 
For example, you can have a volume that is named ABC and a storage pool that is calledvolumes that are calledvolumes called ABC. \u0002 The default object name is valid (object prefix with an integer). \u0002 Objects can be renamed to their current names", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "user\\_public\\_key = \"ssh-rsa\n\nAAAAB3NzaC1yc2EAAAABIwAAAQEA09+YMqJ8VHX3HC7qy6HSxs3JjTGKbEgK+CExpf811uxsq+uJYbfXEKH19/NCf/U vpkozJBDDXDIxJ4uqOEBWDG4mUuu5U9a4lXgb6qaPYyXwVTygL/IcB0poSGEQQaJzhB05g71uZrya++sG1xHUjSQAQz hDuKrs4Bc3gcN4184UR+BX1pVgCls3NRn9hLrfLWS37M/kn+b/n6VMYYVpHsZ2XVydAn2nwuzktaEuWYaY/1cNd4xuu yVu08GQOon6t5KQ1EZBheADdSsyamulLqW9z4j6Y1wwDe4GPDc5zIW++ASDAZB0eEfbKGDLVdpFsI5YV8nLV1r/T0Y/ FiFZqQ== Bogdan Savu;IBMROO45771;IBMROZZ014E826;J;\"\n\ndns1 = \"192.168.11.210\" # DNS server 1\n\ndns\\_domain = \"domain.example.com\"\n\n# DNS Domain Name\n\n## #Network configuration\n\n#---------------------------------\n\nnet1\\_name = \"net\\_ocp\\_cluster1\" # Network Name\n\nnet1\\_vlan\\_id = \"1\" # VLAN ID\n\nnet1\\_subnet = \"192.168.11.0/21\"\n\n# Network/Mask\n\nnet1\\_gateway = \"192.168.11.1\"\n\n# Gateway\n\nnet1\\_start = \"192.168.11.223\"\n\n# First IP from Pool\n\nnet1\\_end = \"192.168.11.223\"\n\n# Last IP from Pool\n\n## #VM1 configuration (OCP - Master Nodes)\n\n#---------------------------------\n\nvm1\\_number = \"1\" # Number of VMs\n\nvm1\\_memory = \"32\" # Memory GB\n\nvm1\\_cpu = \"8\" # Virtual CPU\n\nvm1\\_vcpu\\_ratio = \"4\" # vCPU RATIO 1:4 1 vCPU = 0.25 eCPU (cores)\n\nvm1\\_name = \"bsocp\" # Hostname prefix\n\nvm1\\_first\\_ip = \"192.168.11.223\"\n\n# Fist IP from a consecutive pool of IPs\n\nvm1\\_image\\_name = \"xiv\\_p9\\_image\\_rhel76\" # The image name\n\nvm1\\_remote\\_restart = \"true\" # Enable Auto Remote Restart\n\nvm1\\_storage\\_name = \"xiv\\_StoragePool\" # Storage Template\n\nvm1\\_dockerdisk1 = \"0\" # Docker disk size in GB for ephemeral storage\n\n## #VM2 
configuration (OCP - Infra Nodes)\n\n#---------------------------------\n\nvm2\\_number = \"0\" # Number of VMs\n\nvm2\\_memory = \"16\" # Memory GB\n\nvm2\\_cpu = \"4\" # Virtual CPU\n\nvm2\\_vcpu\\_ratio = \"4\" # vCPU RATIO 1:4 1 vCPU = 0.25 eCPU (cores)\n\nvm2\\_name = \"infnode\" # Hostname prefix\n\nvm2\\_first\\_ip = \"192.168.11.205\"\n\n# Fist IP from a consecutive pool of IPs\n\nvm2\\_image\\_name = \"xiv\\_p9\\_image\\_rhel76\" # The image name\n\nvm2\\_remote\\_restart = \"true\" # Enable Auto Remote Restart\n\nvm2\\_storage\\_name = \"xiv\\_StoragePool\" # Storage Template\n\nvm2\\_dockerdisk1 = \"68\" # Docker disk size in GB for ephemeral storage\n\n#VM3 configuration (OCP - Workers(App) Nodes)\n\n#---------------------------------\n\nvm3\\_number = \"0\" # Number of VMs\n\nvm3\\_memory = \"32\" # Memory GB\n\nvm3\\_cpu = \"4\" # Virtual CPU\n\nvm3\\_vcpu\\_ratio = \"4\" # vCPU RATIO 1:4 1 vCPU = 0.25 eCPU (cores)\n\nvm3\\_name = \"appnode\" # Hostname prefix\n\nvm3\\_first\\_ip = \"192.168.11.208\"\n\n# Fist IP from a consecutive pool of IPs\n\nvm3\\_image\\_name = \"xiv\\_p9\\_image\\_rhel76\" # The image name\n\nvm3\\_remote\\_restart = \"false\" # Disable Auto Remote Restart\n\nvm3\\_storage\\_name = \"xiv\\_StoragePool\" # Storage Template\n\nvm3\\_dockerdisk1 = \"34\" # Docker disk size in GB for ephemeral storage\n\n#VM4 configuration (OCP - Load Balancer Node)\n\n#---------------------------------\n\nvm4\\_number = \"0\" # Number of VMs", - "page_start": 130, - "page_end": 130, - "source_file": "sg248459.pdf" - }, - { - "text": "ρ i ρ j g ij ( k ) = ˜ ρ 3 ˜ w ( k ) (1 -δ ij ) + ˜ ρ i ˜ ρ j ˜ g ij ( k ) + ˜ ρ 3 ˜ w ( k / 2) [ ˜ ρ i ˜ g 3 i + ˜ ρ j ˜ g 3 j ] ( k ) (5) + ˜ ρ 2 3 [ ˜ w ( k / 2)] 2 ˜ g 33 ( k )", - "page_start": 2, - "page_end": 2, - "source_file": "1001.2648.pdf" - }, - { - "text": "- 21. Beneciuk JM, Lentz TA, He Y, Wu SS, George SZ. 
Prediction of persistent musculoskeletal pain at 12 months: a secondary analysis of the Optimal Screening for Prediction of Referral and Outcome (OSPRO) validation cohort study. Phys Ther. 2018;98:290 -301.\n - 22. Freburger JK, Holmes GM, Agans RP, Jackman AM, Darter JD, Wallace AS, et al. The rising prevalence of chronic low back pain. Arch Intern Med. 2009; 169:251 -8.\n - 23. Carey TS, Freburger JK, Holmes GM, Jackman A, Knauer S, Wallace A, et al. Race, care seeking, and utilization for chronic back and neck pain: population perspectives. J Pain Off J Am Pain Soc. 2010;11:343 -50.\n - 24. Jensen MP, Turner JA, Romano JM, Fisher LD. Comparative reliability and validity of chronic pain intensity measures. Pain. 1999;83:157 -62.\n - 25. Bolton JE. Accuracy of recall of usual pain intensity in back pain patients. Pain. 1999;83:533 -9.\n - 26. Childs JD, Piva SR, Fritz JM. Responsiveness of the numeric pain rating scale in patients with low back pain. Spine. 2005;30:1331 -4.\n - 27. Vernon H. The neck disability index: state-of-the-art, 1991-2008. J Manip Physiol Ther. 2008;31:491 -502.\n - 28. Vernon H, Mior S. The neck disability index: a study of reliability and validity. J Manip Physiol Ther. 1991;14:409 -15.\n - 29. Hudson-Cook N, Tomes-Nicholson K, Breen A. A revised Oswestry disability questionnaire. In: Roland M, Jenner J, editors. Back pain: new approaches to rehabilitation and education. New York: Manchester University Press; 1989. p. 187 -204.\n - 30. Fritz JM, Irrgang JJ. A comparison of a modified Oswestry low back pain disability questionnaire and the Quebec back pain disability scale. Phys Ther. 2001;81:776 -88.\n - 31. Beaton DE, Wright JG, Katz JN, Upper Extremity Collaborative Group. Development of the QuickDASH: comparison of three item-reduction approaches. J Bone Joint Surg Am. 2005;87:1038 -46.\n - 32. Irrgang JJ, Anderson AF, Boland AL, Harner CD, Kurosaka M, Neyret P, et al. 
Development and validation of the international knee documentation committee subjective knee form. Am J Sports Med. 2001;29:600 -13.\n - 33. Butera KA, Lentz TA, Beneciuk JM, George SZ. Preliminary evaluation of a modified STarT back screening tool across different musculoskeletal pain conditions. Phys Ther. 2016;96:1251 -61.\n - 34. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40:373 -83.\n - 35. Katz JN, Chang LC, Sangha O, Fossel AH, Bates DW. Can comorbidity be measured by questionnaire rather than medical record review? Med Care. 1996;34:73 -84.\n - 36. George SZ, Beneciuk JM, Bialosky JE, Lentz TA, Zeppieri G, Pei Q, et al. Development of a review-of-systems screening tool for orthopaedic physical therapists: results from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) cohort. J Orthop Sports Phys Ther. 2015;45: 512 -26.\n - 37. Lentz TA, Beneciuk JM, Bialosky JE, Zeppieri G, Dai Y, Wu SS, et al. Development of a yellow flag assessment tool for orthopaedic physical therapists: results from the Optimal Screening for Prediction of Referral and Outcome (OSPRO) cohort. J Orthop Sports Phys Ther. 2016;46:327 -43.\n - 38. Beneciuk JM, Fritz JM, George SZ. The STarT back screening tool for prediction of 6-month clinical outcomes: relevance of change patterns in outpatient physical therapy settings. J Orthop Sports Phys Ther. 2014;44: 656 -64.", - "page_start": 13, - "page_end": 13, - "source_file": "pubmed5.pdf" - }, - { - "text": "- 36 . Tang L, Sun Z, Idnay B, et al. Evaluating large language models on medical evidence summarization. NPJ Digit Med . 2023;6(1):158. doi:10.1038/s41746-023-00896-7\n - 37 . Goswami J, Prajapati KK, Saha A, Saha AK. Parameter-efficient fine-tuning large language model approach for hospital discharge paper summarization. Appl Soft Comput . 2024;157:111531. 
doi:10.1016/j.asoc.2024.111531\n - 38 . Huang KT, Mehta NH, Gupta S, See AP, Arnaout O. Evaluation of the safety, accuracy, and helpfulness of the GPT-4.0 large language model in neurosurgery. J Clin Neurosci . 2024;123:151-156. doi:10.1016/j.jocn.2024.03.021\n - 39 . Giuffrè M, Kresevic S, You K, et al. Systematic review: the use of large language models as medical chatbots in digestive diseases. Aliment Pharmacol Ther . 2024;60(2):144-166. doi:10.1111/apt.18058\n - 40 . Tailor PD, Dalvin LA, Chen JJ, et al. A comparative study of responses to retina questions from either experts, expert-edited large language models or large language models alone. Ophthalmol Sci . 2024;4(4):100485. doi:10. 1016/j.xops.2024.100485\n - 41 . Zaretsky J, Kim JM, Baskharoun S, et al. Generative artificial intelligence to transform inpatient discharge summaries to patient-friendly language and format. JAMANetwOpen . 2024;7(3):e240357. doi:10.1001/ jamanetworkopen.2024.0357\n - 42 . Zhou C, Liu P, Xu P, et al. Lima: less is more for alignment. arXiv . Preprint posted online May 18, 2023. doi:10. 48550/arXiv.2305.11206", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed8.pdf" - }, - { - "text": "- 23. Stone MH , Sands WA , Pierce KC , Carlock J , Cardinale M , Newton RU. Relationship of maximum strength to weightlifting performance. Med Sci Sports Exerc 37: 1037 -1043, 2005. doi:10.1249/01.mss. 0000171621.45134.10.\n - 24. Beattie K , Carson BP , Lyons M , Kenny IC. The relationship between maximal strength and reactive strength. Int J Sports Physiol Perform 12: 548 -553, 2017. doi:10.1123/ijspp.2016-0216.\n - 25. Suarez DG , Carroll KM , Slaton JA , Rochau KG , Davis MW , Stone MH. Utility of a shortened isometric midthigh pull protocol for assessing rapid force production in athletes. JStrengthCondRes 36: 1819 -1825, 2022. doi:10.1519/jsc.0000000000003774.\n - 26. Suchomel TJ , Nimphius S , Stone MH. 
Scaling isometric mid-thigh pull maximum strength in division I athletes: are we meeting the assumptions? Sports Biomech 19: 532 -546, 2020. doi:10.1080/ 14763141.2018.1498910.\n - 27. Cunningham DJ , Shearer DA , Drawer S , Pollard B , Cook CJ , Bennett M , Russell M , Kilduff LP. Relationships between physical qualities and key performance indicators during match-play in senior international rugby union players. PLoS One 13: e0202811, 2018. doi:10.1371/journal.pone.0202811.\n - 28. Doyle TLA , Fain AC , Wills JA , Cooper D , Toonen K , Kamphius B. Measures of lower body strength associated with injuries in Australian special forces selection candidates. JApplBiomech 38: 255 -262, 2022. doi:10.1123/jab.2021-0134.\n - 29. Kawamori N , Rossi SJ , Justice BD , Haff EE , Pistilli EE , O ' Bryant HS , Stone MH , Haff GG. Peak force and rate of force development during isometric and dynamic mid-thigh clean pulls performed at various intensities. JStrengthCondRes 20: 483 -491, 2006. doi:10.1519/ 18025.1.\n - 30. Wang R , Hoffman JR , Tanigawa S , Miramonti AA , Monica MB , Beyer KS , Church DD , Fukuda DH , Stout JR. Isometric mid-thigh pull correlates with strength, sprint, and agility performance in collegiate rugby union players. JStrengthCondRes 30: 3051 -3056, 2016. doi:10.1519/jsc.0000000000001416.\n - 31. Haff GG , Stone M , O ' Bryant HS , Harman E , Dinan C , Johnson R , Han KH. Force-time dependent characteristics of dynamic and isometric muscle actions. J Strength Cond Res 11: 269 -272, 1997. doi:10.1519/1533-4287(1997)011 < 0269:FTDCOD > 2.3.CO;2.\n - 32. Mercer RAJ , Russell JL , McGuigan LC , Coutts AJ , Strack DS , McLean BD. Finding the signal in the noise -interday reliability and seasonal sensitivity of 84 countermovement jump variables in professional basketball players. JStrengthCondRes 37: 394 -402, 2023. doi:10.1519/jsc.0000000000004182.\n - 33. Cabarkapa D , Philipp N , Cabarkapa D , Eserhaut D , Fry A. 
Comparison of force-time metrics between countermovement vertical jump with and without an arm swing in professional male basketball players. Int J Strength Cond 3: 1 -7, 2023. doi:10.47206/ijsc. v3i1.197.\n - 34. Tillin NA , Pain MT , Folland J. Explosive force production during isometric squats correlates with athletic performance in rugby union players. J Sports Sci 31: 66 -76, 2013. doi:10.1080/02640414.2012.720704.\n - 35. Morris CG , Weber JA , Netto KJ. Relationship between mechanical effectiveness in sprint running and force-velocity characteristics of a countermovement jump in Australian rules football athletes. J Strength Cond Res 36: e59 -e65, 2022. doi:10.1519/ jsc.0000000000003583.\n - 36. Johnson DL , Bahamonde R. Power output estimate in university athletes. JStrengthCondRes 10: 161 -166, 1996. doi:10.1519/1533-4287 (1996)010 < 0161:poeiua > 2.3.co;2.\n - 37. Mkaouer B , Jemni M , Amara S , Chaab /C18 en H , Tabka Z. Kinematic and kinetic analysis of counter movement jump versus two different types of standing back somersault. Sci Gymnast J 4: 61 -71, 2012. 
https://www.fsp.uni-lj.si/en/research/scienti /uniFB01 c-magazines/scienceof-gymnastics/previous-issues/2012102209114244/.", - "page_start": 10, - "page_end": 10, - "source_file": "pubmed12.pdf" - }, - { - "text": "Monthly tables of overall projected prison population\n\nTable A14: Monthly values of the overall projected prison population (end of month figures)\n\n| | Sentencing Scenarios | Sentencing Scenarios | Sentencing Scenarios |\n|--------|------------------------|------------------------|------------------------|\n| | Scenario 1 | Central | Scenario 2 |\n| Nov-14 | 85,800 | 86,100 | 86,100 |\n| Dec-14 | 84,300 | 84,600 | 84,800 |\n| Jan-15 | 85,900 | 86,200 | 86,700 |\n| Feb-15 | 86,400 | 86,800 | 87,400 |\n| Mar-15 | 86,700 | 87,200 | 87,900 |\n| Apr-15 | 86,700 | 87,400 | 88,300 |\n| May-15 | 86,900 | 87,500 | 88,600 |\n| Jun-15 | 87,100 | 87,700 | 88,900 |\n| Jul-15 | 87,100 | 88,000 | 89,100 |\n| Aug-15 | 87,300 | 88,400 | 89,600 |\n| Sep-15 | 87,400 | 88,700 | 90,100 |\n| Oct-15 | 87,300 | 88,600 | 90,000 |\n| Nov-15 | 87,200 | 88,600 | 90,200 |\n| Dec-15 | 85,500 | 87,000 | 88,900 |\n| Jan-16 | 86,900 | 88,500 | 90,500 |\n| Feb-16 | 87,100 | 88,900 | 91,100 |\n| Mar-16 | 87,100 | 89,000 | 91,400 |\n| Apr-16 | 87,000 | 89,000 | 91,600 |\n| May-16 | 86,900 | 89,100 | 91,800 |\n| Jun-16 | 86,800 | 89,100 | 92,000 |\n| Jul-16 | 86,500 | 89,200 | 92,100 |\n| Aug-16 | 86,700 | 89,400 | 92,400 |\n| Sep-16 | 86,800 | 89,600 | 92,600 |\n| Oct-16 | 86,500 | 89,400 | 92,600 |\n| Nov-16 | 86,300 | 89,400 | 92,800 |\n| Dec-16 | 84,400 | 87,600 | 91,300 |\n| Jan-17 | 85,600 | 88,900 | 92,800 |\n| Feb-17 | 85,600 | 89,200 | 93,200 |\n| Mar-17 | 85,600 | 89,200 | 93,300 |\n| Apr-17 | 85,400 | 89,300 | 93,300 |\n| May-17 | 85,300 | 89,300 | 93,500 |\n| Jun-17 | 85,200 | 89,300 | 93,600 |\n| Jul-17 | 85,000 | 89,300 | 93,900 |\n| Aug-17 | 85,200 | 89,600 | 94,200 |\n| Sep-17 | 85,200 | 89,800 | 94,500 |\n| Oct-17 | 84,900 | 89,600 | 94,500 |\n| Nov-17 | 84,700 | 89,500 
| 94,600 |", - "page_start": 21, - "page_end": 21, - "source_file": "legal4_opengouvernementlicense.pdf" - }, - { - "text": "- [1] Abraira VE, Kuehn ED, Chirila AM, Springel MW, Toliver AA, Zimmerman AL, Orefice LL, Boyle KA, Bai L, Song BJ, Bashista KA, O'Neill TG, Zhuo J, Tsan C, Hoynoski J, Rutlin M, Kus L, Niederkofler V, Watanabe M, Dymecki SM, Nelson SB, Heintz N, Hughes DI, Ginty DD. The cellular and synaptic architecture of the mechanosensory dorsal horn. Cell 2017;168: 295-310.e19.\n - [2] Bailey AL, Ribeiro-Da-Silva A. Transient loss of terminals from nonpeptidergic nociceptive fibers in the substantia gelatinosa of spinal cord following chronic constriction injury of the sciatic nerve. Neuroscience 2006;138:675-90.\n - [3] Barry AM, Zhao N, Yang X, Bennett DL, Baskozos G. Deep RNA-seq of male and female murine sensory neuron subtypes after nerve injury. PAIN 2023;164:2196-215.\n - [4] Bell AM, Utting C, Dickie AC, Kucharczyk MW, Quillet R, GutierrezMecinas M, Razlan ANB, Cooper AH, Lan Y, Hachisuka J, Weir GA, Bannister K, Watanabe M, Kania A, Hoon MA, Macaulay IC, Denk F, Todd AJ. Deep sequencing of Phox2a nuclei reveals five classes of anterolateral system neurons. bioRxiv 2023.2023.08.20.553715.\n - [5] Bennett DL, Michael GJ, Ramachandran N, Munson JB, Averill S, Yan Q, McMahon SB, Priestley JV. A distinct subgroup of small DRG cells express GDNF receptor components and GDNF is protective for these neurons after nerve injury. J Neurosci 1998;18:3059-72.\n - [6] Bondok AA, Sansone FM. Retrograde and transganglionic degeneration of sensory neurons after a peripheral nerve lesion at birth. Exp Neurol 1984;86:322-30.\n - [7] Boucher TJ, Okuse K, Bennett DLH, Munson JB, Wood JN, McMahon SB. Potent analgesic effects of GDNF in neuropathic pain states. Science 2000;290:124-7.\n - [8] Bradbury EJ, Burnstock G, McMahon SB. The expression of P2X3 purinoreceptors in sensory neurons: effects of axotomy and glial-derived neurotrophic factor. 
Mol Cell Neurosci 1998;12:256-68.\n - [9] Br 'az JM, Basbaum AI. Triggering genetically-expressed transneuronal tracers by peripheral axotomy reveals convergent and segregated sensory neuron-spinal cord connectivity. Neuroscience 2009;163: 1220-32.\n - [10] Cobos EJ, Nickerson CA, Gao F, Chandran V, Bravo-Caparr 'os I, Gonz'alez-Cano R, Riva P, Andrews NA, Latremoliere A, Seehus CR, Perazzoli G, Nieto FR, Joller N, Painter MW, Ma CHE, Omura T, Chesler EJ, Geschwind DH, Coppola G, Rangachari M, Woolf CJ, Costigan M. Mechanistic differences in neuropathic pain modalities revealed by correlating behavior with global expression profiling. Cell Rep 2018;22: 1301-12.\n - [11] Coggeshall RE. A consideration of neural counting methods. Trends Neurosci 1992;15:9-13.\n - [12] Decosterd I, Woolf CJ. Spared nerve injury: an animal model of persistent peripheral neuropathic pain. PAIN 2000;87:149-58.\n - [13] Denk F, Ramer LM, Erskine ELKS, Nassar MA, Bogdanov Y, Signore M, WoodJN, McMahon SB, Ramer MS. Tamoxifen induces cellular stress in the nervous system by inhibiting cholesterol synthesis. Acta Neuropathol Commun 2015;3:74.\n - [14] Dobin A, Davis CA, Schlesinger F, Drenkow J, Zaleski C, Jha S, Batut P, Chaisson M, Gingeras TR. STAR: ultrafast universal RNA-seq aligner. Bioinformatics 2013;29:15-21.\n - [15] Faul F, Erdfelder E, Lang AG, Buchner A. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods 2007;39:175-91.\n - [16] Feng G, Mellor RH, Bernstein M, Keller-Peck C, Nguyen QT, Wallace M, Nerbonne JM, Lichtman JW, Sanes JR. Imaging neuronal subsets in transgenic mice expressing multiple spectral variants of GFP. Neuron 2000;28:41-51.\n - [17] Gangadharan V, Zheng H, Taberner FJ, Landry J, Nees TA, Pistolic J, Agarwal N, M annich D, Benes V, Helmstaedter M, Ommer B, Lechner SG, Kuner T, Kuner R. Neuropathic pain caused by miswiring and abnormal end organ targeting. 
Nature 2022;606:137-45.\n - [18] Guillery RW. On counting and counting errors. J Comp Neurol 2002;447: 1-7.", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed2.pdf" - }, - { - "text": "| id WWPN WWNN port\\_id owning\\_node\\_id current\\_node\\_id nportid host\\_io\\_permitted virtualized protocol 1 500507680140A288 500507680100A288 1 1 1 010A00 yes no scsi |\n|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| 2 500507680142A288 500507680100A288 1 1 1 010A02 yes yes scsi |\n| 3 500507680144A288 500507680100A288 1 1 1 010A01 yes yes nvme |\n| 4 500507680130A288 500507680100A288 2 1 1 010400 yes no scsi |\n| 5 500507680132A288 500507680100A288 2 1 1 010401 yes yes scsi |\n| 6 500507680134A288 500507680100A288 2 1 1 010402 yes yes nvme |\n| 7 500507680110A288 500507680100A288 3 1 1 010500 yes no scsi |\n| 8 500507680112A288 500507680100A288 3 1 1 010501 yes yes scsi |\n| 9 500507680114A288 500507680100A288 3 1 1 010502 yes yes nvme |\n| 10 500507680120A288 500507680100A288 4 1 1 010A00 yes no scsi 11 500507680122A288 500507680100A288 4 1 1 010A02 yes yes scsi |\n| 12 500507680124A288 500507680100A288 4 1 1 010A01 yes yes nvme |\n| 49 500507680C110009 500507680C000009 1 2 2 010500 yes no scsi 50 500507680C150009 500507680C000009 1 2 2 010502 yes yes scsi |\n| 51 500507680C190009 500507680C000009 1 2 2 010501 yes yes nvme |\n| 52 500507680C120009 500507680C000009 2 2 2 010400 yes no scsi |\n| 53 500507680C160009 500507680C000009 2 2 2 010401 yes yes scsi |", - "page_start": 346, - "page_end": 346, - "source_file": "sg247938.pdf" - }, - { - "text": "## Annex 4: Default values\n\n## 1. 
Fraction of carbon stored for reference approach\n\nBitumen - 1\n\nCoal oils and tars (from coking coal - 0.75\n\nEthane - 0.8\n\nGas/Diesel oil - 0.5\n\nLPG - 0.8\n\nLubricants - 0.5\n\nNaphtha - 0.8\n\nNatural gas - 0.33\n\n## 2. Conversion factors\n\n - a. CH4 volume  CH4 Gg = 0.67\n\n## b. Conversion factors for energy\n\n| From | To | Multiply by |\n|--------|------|-----------------|\n| J | TJ | 10 -12 |\n| KJ | TJ | 10 -9 |\n| MJ | TJ | 10 -6 |\n| GJ | TJ | 10 -3 |\n| TJ | TJ | 1 |\n| cal | TJ | 4.1868 x 10 -12 |\n| kcal | TJ | 4.1868 x 10 -9 |\n| Mcal | TJ | 4.1868 x 10 -6 |\n| Gcal | TJ | 4.1868 x 10 -3 |\n| Tcal | TJ | 4.1868 |\n| kWh | TJ | 3.6 x 10 -6 |\n| MWh | TJ | 3.6 x 10 -3 |\n| GWh | TJ | 3.6 |\n| Btu | TJ | 1.0551 x 10 -9 |\n| kBtu | TJ | 1.0551 x 10 -6 |\n| MBtu | TJ | 1.0551 x 10 -3 |\n| GBtu | TJ | 1.0551 |\n| toe | TJ | 41.868 x 10 -3 |\n| ktoe | TJ | 41.868 |\n| Mtoe | TJ | 4.1868 x 10 4 |\n| TJ | J | 10 12 |\n| TJ | KJ | 10 9 |\n| TJ | MJ | 10 6 |\n| TJ | GJ | 10 3 |\n| TJ | cal | 238.8 x 10 9 |\n| TJ | kcal | 238.8 x 10 6 |\n| TJ | Mcal | 238.8 x 10 3 |\n| TJ | Gcal | 238.8 |\n| TJ | Tcal | 238.8 x 10 -3 |\n| TJ | kWh | 277.8 x 10 3 |\n| TJ | MWh | 277.8 |\n| TJ | GWh | 277.8 x 10 -3 |\n| TJ | Btu | 947.8 x 10 6 |\n| TJ | kBtu | 947.8 x 10 3 |\n| TJ | MBtu | 947.8 |\n| TJ | GBtu | 947.8 x 10 -3 |\n| TJ | toe | 23.88 |\n| TJ | ktoe | 23.88 x x 10 -3 |\n| TJ | Mtoe | 23.88 x 10 -6 |", - "page_start": 48, - "page_end": 48, - "source_file": "maiis-user-manual.pdf" - }, - { - "text": "- 17. Pérez C, Navarro A, Saldaña MT, Wilson K, Rejas J. Modeling the predictive value of pain intensity on costs and resources utilization in patients with peripheral neuropathic pain. Clin J Pain. 2015;31:273 -9.\n - 18. Hill JC, Fritz JM. Psychosocial influences on low back pain, disability, and response to treatment. Phys Ther. 2011;91:712 -21.\n - 19. George SZ, Beneciuk JM, Lentz TA, Wu SS. 
The Optimal Screening for Prediction of Referral and Outcome (OSPRO) in patients with musculoskeletal pain conditions: a longitudinal validation cohort from the USA. BMJ Open. 2017;7:e015188.\n - 20. George SZ, Beneciuk JM, Lentz TA, Wu SS, Dai Y, Bialosky JE, Zeppieri G Jr. Optimal Screening for Prediction of Referral and Outcome (OSPRO) for Musculoskeletal Pain Conditions: Results From the Validation Cohort. J Orthop Sports Phys Ther. 2018;48(6):460 -75.", - "page_start": 12, - "page_end": 12, - "source_file": "pubmed5.pdf" - } - ] - }, - { - "references": { - "source_file": "news4.pdf", - "query": "I want to start a company that automates kitchen tasks, does that sound like a good idea for 2025?", - "target_page": 1, - "target_passage": "Smart home automation Smart home automation has been around for a while, but AI is taking it to the next level. Imagine a home that not only follows your commands, but also anticipates your needs. Enhanced smart home systems can learn your daily routines and adjust settings accordingly, from lighting and temperature to security and entertainment, making your home smarter and more responsive than ever before.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "<!-- image -->\n\n## ISSUE\n\nDecember 2024\n\n## CATEGORIES\n\nTechnology & Cybersecurity\n\nEditor's Picks Finance - Personal Home - Interior\n\n<!-- image -->\n\n## The top AI-powered tech trends in 2025\n\n<!-- image -->\n\n(NC) As we look ahead to 2025, artificial intelligence (AI) continues to revolutionize our lives. From enhancing our daily routines to transforming entire industries, AI's impact is undeniable.\n\nThese five innovations are set to shape our future, offering unprecedented convenience, efficiency and personalization.\n\n## AI-powered computing\n\nAI-powered computing, such as Intel-powered laptops - or AI PC - is at the forefront of technological advancement. But what, exactly, is an AI PC? 
They're computers that have AI built into their processors - also known as the brain of the computer - which optimizes performance, enhances security and provides a more personalized experience as they learn from your usage patterns. For consumers, this means faster, smarter and more secure computing tailored to your individual needs.\n\n## Smart home automation\n\nSmart home automation has been around for a while, but AI is taking it to the next level. Imagine a home that not only follows your commands, but also anticipates your needs. Enhanced smart home systems can learn your daily routines and adjust settings accordingly, from lighting and temperature to security and entertainment, making your home smarter and more responsive than ever before.\n\n## Health and wellness\n\nThe health-care industry is seeing significant transformation. AI-driven health and wellness applications can monitor vital signs, predict potential health issues, and even provide personalized fitness and nutrition plans. Wearable devices equipped with this technology can offer real-time health insights, helping individuals make informed decisions about their well-being.\n\n## Financial services\n\nAI is also making waves in the financial sector, offering smarter and more secure ways to manage money. From AI-driven investment platforms that provide personalized financial advice to fraud detection systems that protect against cyber threats, AI can analyze vast amounts of data to identify trends and make more informed financial decisions.\n\n## Enhanced education\n\nIn education, enhanced learning tools provide personalized learning experiences that adapt to each student's strengths and weaknesses. This technology can offer real-time feedback, helping students improve their skills more effectively. 
Additionally, AI can assist educators by automating administrative tasks and providing insights into student performance, allowing for more focused and effective teaching.\n\nLearn more at intel.com/aipc.\n\nwww.newscanada.com\n\nWord Count: 346\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nRADIO\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nEN", - "page_start": 0, - "page_end": 0, - "source_file": "news4.pdf" - }, - { - "text": "## PRACTICAL AND PROFESSIONAL\n\nSomething of a paradox, too; highly competitive but approachable; stylish but never a slave to fashion. I have a true talent for leadership. I'm stable, steady, reliable, and efficient. At the same time, I'm good-looking, good-natured, and good-humored. Seek successful business person driven by values, with a 'whatever it takes' attitude - just like me, practical and professional.\n\nTHE\n\nHON COMPANY\n\n<!-- image -->", - "page_start": 7, - "page_end": 7, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "<!-- image -->\n\nDigitalisation and its impact on economy and work is a major topic in political and scientific discussions. Obviously the term 'Digitalisation' 271 covers such a broad array of technologies and developments that statements on their impact on society, economy and work can rarely be simple and straightforward. 272 Digitalisation includes technical issues like 5G coverage, widespread connectivity, IoT and big data, wearables, semiconductor capacities, edge and cloud computing, AI, data handling issues, for example, of medical records, mobile devices and online platforms, and it triggers economic and societal changes, for example, of business models, skills development, education and digital government.\n\nDigital transformation is globally supported by governments using financial, political and legal measures. The European Commission launched in February 2020 the European Digital Strategy 2020-2025 . 
This strategy aims to promote a new generation of digital technologies.\n\nConcerning the overall impact of digitalisation on work , most researchers state a decrease of certain types of work and growth of others. Cedefop describes this as 'the great divide' and writes:\n\n'Cedefop's European skills and jobs (ESJ) survey reveals that more than 7 in 10 adult employees in the EU need at least some fundamental ICT level to be able to perform their jobs. Yet, about one in three of those employees are at risk of digital skill gaps. At the same time, almost half of all employees in lowskilled occupations do not require ICT skills to do their work. Cedefop … notes that 'the digital divide is alive and well. A strikingly high share of the EU adult workforce is still employed in a semi-analogue world, at the same time that others are faced with technological obsolescence.' 273\n\nA statement of two researchers from the Massachusetts Institute of Technology shortly summarises this:\n\n'Technologies such as payroll-processing and inventory-control software, factory automation, computercontrolled machining centers, and scheduling tools have replaced workers on the shop floor and in clerical tasks and rote information processing. By contrast, big data, analytics, and high-speed communications have enhanced the output of people with engineering, creative, and design skills and made them more valuable. The net effect has been to decrease the demand for low-skilled information workers while increasing the demand for highly skilled ones.' 274\n\nDigital technologies can enhance prevention at workplaces. They can help to separate workers from hazardous working situations, facilitate better and innovative ways of monitoring exposure, and might improve the quality of work by relieving workers from repetitive or routine tasks. 
Digital technologies may also create higher levels of autonomy and flexibility or facilitate the access of a more diverse workforce to the labour market, in particular vulnerable groups such as disabled people, ageing", - "page_start": 103, - "page_end": 103, - "source_file": "EN-Annex II - EU-OSHA websites, SM accounts and tools.pdf" - }, - { - "text": "Log in\n\n<!-- image -->\n\nHome / Arts and Entertainment / New Artificial Intelligence Summit Series Begins With Energy\n\n<!-- image -->\n\nARTS AND ENTERTAINMENT\n\n## New Artificial Intelligence Summit Series Begins With Energy\n\n07/31/2024\n\n(AI) continues to transform the United States and the world. To promote and inform rapid advancements in AI and maintain America's global competitiveness, the Special Competitive Studies Project (SCSP), a nonprofit and nonpartisan initiative with a goal of making recommendations to strengthen America's long-term competitiveness in AI, announces the AI+ Summit Series.\n\nThe series kicks off with the topic of energy. The AI + Energy Summit, scheduled for September 26, 2024, in Washington, D.C., will bring together policy makers, energy industry leaders, top government and academic energy researchers, and technologists to address the challenges of AI's energy consumption and develop solutions for a resilient and abundant energy future. The event also aims to address the implications of AI and energy for national security and promote partnerships between AI and energy stakeholders.\n\nAI and other emerging technologies can help the United States take the lead in energy areas including maximizing energy efficiencies, discovering new materials, and enabling new forms of power generation. AI also has a role to play in overcoming energy challenges. The Department of Energy (DOE) already uses AI in several areas including advanced computing, emergency response, environmental modeling, climate forecasting, and materials research.\n\nSCSP's recent 'Action Plan for U.S. 
Leadership in Next-Generation Energy,' raises many issues related to AI and energy, including recommendations for the government to bring America forward. The AI+ Energy Summit will highlight these and other issues, and promote collaboration to solve problems. The stakes are high; if the U.S. falls short on energy, American adversaries could gain the upper hand in AI leadership, according to SCSP experts.\n\nVisit scsp.ai to learn more about the AI+Energy Summit and the SCSP's Next-Generation Energy Action Plan.\n\nArticle Link\n\nhttps://about.newsusa.com/new-artificial-intelligence-summit-series-begins-with…\n\n## RELATED ARTICLES\n\n<!-- image -->\n\n<!-- image -->\n\nMar 06, 2024\n\nCelebrate St. Patrick's Day with No Booze, Just Pure Irish Fun and Entertainment\n\nMar 06, 2024\n\nExplore Downtown San Pedro with Flair: Ride the Iconic Red Car Trolley for Free\n\nMar 06, 2024\n\nSay Hello to Your Big Break at the Stapleton Library Job Fair in Vocation, Trade, or Civil Service\n\n<!-- image -->\n\n<!-- image -->\n\nFeb 22, 2024\n\nRetrain Your Emotional Brain: A Natural Alternative to Weight Loss Drugs\n\n© Copyright NewsUSA 2025. 
All Rights Reserved.\n\nFeb 21, 2024\n\nSerial Entrepreneur Teaches Us How to Go the Distance in Business and in Life\n\nNEWSUSA\n\nMar 06, 2024\n\nLocal Artists Collaborate for a Unique Fusion of Groove and Collage\n\nFASHION\n\nBUSINESS\n\nINFOGRAPHIC\n\nENVIRONMENT\n\nHEALTH\n\nMONEY\n\nFOOD\n\nTRAVEL\n\nBRIDAL\n\nRECREATION\n\nTECHNOLOGY\n\nHOME\n\nEDUCATION\n\nARTS & ENTERTAINMENT\n\nAUTO\n\nCHILDREN\n\nFITNESS\n\nHOLIDAY\n\nINSURANCE\n\nLAWN & GARDEN\n\nLISTICLE\n\nNUTRITION\n\nPARENTING\n\nPETS\n\nSEASONAL\n\nSENIORS\n\nSPANISH\n\nTIPS AND HOW TO\n\nENTERTAINMENT\n\nCAREER\n\nCOMMUNITY\n\nFAMILY\n\nTIPS\n\nINTERNET\n\nHUMAN\\_INTEREST\n\nBEAUTY\n\nARTS\n\nREALESTATE\n\nSAFETY\n\nMEDICINE\n\nBOOK\\_REVIEW\n\nRECIPE\n\nAFRICAN\\_AMERICANS\n\nHOW\\_TO\n\nBYLINED\\_COLUMN\n\nCHARITY\n\nSPORTS\n\nHOME\\_IMPROVEMENT\n\nTECH\n\nWELLNESS\n\nARTS AND ENTERTAINMENT\n\nFOOD & DRINK\n\nREAL\\_ESTATE\n\nVETERANS\n\nOUTDOORS\n\nREAL ESTATE\n\nHUMAN INTEREST\n\nMONEY & FINANCE\n\nFASHION & BEAUTY\n\nMONEY AND FINANCE\n\nBOOKS & ENTERTAINMENT\n\nBOOKS\n\nARTS & ENTERTAINMENT\n\nCATEGORIES\n\nRECENT POSTS", - "page_start": 0, - "page_end": 0, - "source_file": "news1.pdf" - }, - { - "text": "## HON INDUSTRIES 2003\n\n## ALLSTEEL: AN ANTIDOTE TO THE ORDINARY\n\n<!-- image -->\n\n## A CASE STUDY IN QUALITY\n\nGreat, high-quality design creates better work environments and happier end-users. Whether we're building lateral files (the first product for which we became known) or designing awardwinning seating, like our #19 ® chair, the Allsteel core message remains constant: the highest quality in functionality, durability, and service.\n\nToday's Allsteel is about a broad array of workplace furniture solutions: new, exciting panel and desking systems, storage, seating, and tables that offer a unique counterpoint to the sea of sameness provided by most office furniture. 
Working closely with architects and designers, we target the contract market, providing project-driven and design-oriented office solutions. Our rapid modeling and prototyping allows for equally rapid product development, a reflection of our agile, lean culture. As innovative as many of our products are, design innovation - for us - is simply what happens along the way to solving customer problems.\n\nSome of our products, like the #19 ® chair, are iconographically associated with the Allsteel name, and are quite influential in our brand building efforts. Our two newest enterprises are Terrace ® 2.6 - a fast-growing systems line providing enormous flexibility and durability - and Get Set TM - an incredibly versatile line of multi-purpose room tables, chairs, and\n\ncommunication products. All of our products respond completely to the needs of end-users because that's where the design process starts.\n\nIn all that we do, our main focus is to identify end-user problems and solve them better than anyone else. The majority of our customers are large corporations with multiple locations worldwide. According to the senior vice president responsible for the global design, construction, and project management of an internationally renowned financial services company, 'Allsteel offers extremely attractive, cost-effective furniture solutions. Your manufacturing and service are best in class you turn everything around with impressive swiftness. There's really not much in the market to beat you.'\n\nWell-designed, forward-thinking, and glad to be of service. 
Allsteel is proud to uphold our long heritage of quality.", - "page_start": 19, - "page_end": 19, - "source_file": "NYSE_HNI_2003.pdf" - }, - { - "text": "## Example:\n\n'I have been offered an opportunity to work as an IT Manager abroad, and I have decided to accept the offer.'\n\n## 4.\n\n## A sentence or two in which you thank your employer for the opportunities you have been given during your time with the organisation.\n\n## Example:\n\n'I would like to thank you for the wonderful opportunities you have given me, both to develop my skills, and to work with such knowledgeable and inspiring people.'\n\n## 5.\n\n## An offer to help with the transition.\n\nOnly include this if you are sincere, and don't make any promises that you won't be able to keep. You could, for example, assure your employer that you will finish your current projects or hand them over to a colleague. You could also offer to train the person who will be replacing you.\n\n## Example:\n\n'During the next two weeks, I will do everything I can to ensure a smooth transition for the company. If required, I am more than willing to assist with the hiring and training of the new Assistant IT Manager.'\n\n## 6.\n\n## A suitable closing.\n\nIt is important to use a closing that is appropriate in the circumstances. If you have a good relationship with your employer, you may want to wish him/her well for the future, and provide contact details that he/she can use to get in touch with you once you have left the organisation. You can then end your letter with a greeting such as 'Kind regards,' followed by your signature.", - "page_start": 49, - "page_end": 49, - "source_file": "basic-english-language-skills.PDF" - }, - { - "text": "<!-- image -->\n\nThe Process and Resource Management division is one of Nissan's greatest assets. They are sometimes perceived as too rigid, and it is true that the division has established quite a number of rules. 
However, I easily imagine what can happen to a company without rules. The point, really, is to keep the structure and provide some freedom when needed. The core creative divisions can add great value to a process, such as when they interact with the advanced engineering team. When the creative people are happy with what they have developed, however, someone has to support the complex process of creating added value. That responsibility belongs to the Process and Resource Management division. Otherwise, a nicely crafted process may never be implemented. But at Nissan, employees in the Process and Resource Management division serve as the guardians of the timelines and support the implementation of processes. If a process is not working as we planned, they get the project back on track in a smooth and efficient manner. If a process is no longer relevant, they quickly organize a taskforce to update it.\n\nSo Corporate Planning provides the direction, Design and Product Planning create products with value, and Market Intelligence and Process and Resource Management support the creative teams. Someone has to drive the implementation, and that role belongs to our six program directors in Program Management. The program directors are involved from the beginning. They are businesspeople, the CEOs of their own platform businesses. Each has a different part of the vehicle lineup, but the substance of their mutual targets and commitments is simple: profit. Program directors make it happen. They ensure that everybody in the Company keeps each project consistently profitable through\n\n## Nre Global Product Launches\n\n## 28 All-New Models\n\n<!-- image -->\n\n| New Products | NISSAN 180 31 | NISSAN Value-Up 28 28 |\n|----------------------|-----------------|-------------------------|\n| Start of Production | 44 | 70 70 |\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nall phases: planning, development and launch, right through to the end of the lifecycle. 
Our program directors are persuasive people with strong characters, special skills and attributes, and they are not afraid to challenge the system. Their diversity contributes tremendously to Nissan's success. The cumulative work of all these divisions results in a very consistent organization with an upstream process that creates value.\n\nLooking at Nissan's global output over the last six years, it is clear that some terrific products have been created, and the value of the Company as a whole is greater. There are many scorecards that reflect this, and our stakeholders certainly know Nissan's success first-hand. At the same time, we must prepare for the future. We need to reinforce the strength of our program management groups and establish more precise, accurate groups to standardize and improve processes for the future. Ironically, our achievements have created uncertainty for the future. Success creates risk, and the more we highlight our successes, the more we raise the anxiety level of investors. How can our new products be as good as those already released? How can we keep it all going?\n\nOne way to sustain our strong pace is to take greater advantage of the Alliance. The value is there, in areas such as purchasing, development, benchmarking, sales networks, market knowledge and even financial strategy. Yet we must maintain both a balance and a clear separation between the brand identities of Renault and Nissan. Neither company wants to make the same cars, or have the same corporate culture, or have its brand mistaken for the other. We will continue to derive benefits from this strategic partnership while remaining Nissan.", - "page_start": 36, - "page_end": 36, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "PLANNING\n\n## Building on Strengths and Being Innovative\n\n'The Planning Group covers a great deal of corporate territory and handles a number of key responsibilities within Nissan. 
Our Corporate Planning division, for example, oversees strategy, setting the Company's long-term course under the Executive Committee's direction. The two creative divisions, Design and Product Planning, create value for the customer. Together, those three divisions form the core of our group, surrounded by several other key divisions. Market Intelligence supports Design and Product Planning in customer understanding. The people in Process and Resource Management provide the practical direction and restraint a company of our size must have when deploying its resources. And Program Management drives the implementation process, turning the work of all the other divisions into reality.\n\nThe role of Corporate Planning is to look to the future and devise ways to take advantage of the business opportunities we identify. In the past, the division relied primarily on three-year plans such the Nissan Revival Plan and NISSAN 180. That strategy served the interests of Nissan stakeholders well. The Company is now sound, and the power and constancy of vision Corporate Planning provides will determine how well Nissan maintains its strength. However, in addition to the mid-term plan, we have now entered a phase that requires us to extend that vision and implement a longer-term plan. Corporate Planning is working closely with the Executive Committee on this matter.\n\nDesign and Product Planning are central to the creation of Nissan's strength. Both focus on satisfying the consumer's unmet needs, and create value in the process. Our product planning DNA is to identify and target our customers, and do it better than our competitors. Rather than simply throwing a product into the market and waiting for a response, we first seek a deep understanding of the expected response. Only then can we create a product consistent with that understanding.\n\nCARLOS TAVARES Executive Vice President\n\n<!-- image -->\n\nOne key for both creative divisions is to focus on 'customer clusters.' 
We refuse to spend our money to develop products that should please everyone. In fact, we may invest in a certain innovation because we understand that a particular subset of customers will appreciate the performance it provides. Our process is very focused, and may even target a smaller customer cluster that no one else is addressing. The marketing process for these two divisions is deep and accurate. This creates value through differentiation.\n\nThe NISSAN Value-Up plan is about focusing on strong products that reinforce our brand, pursuing new concepts and innovation, and expanding geographically in a stronger and faster way. During the Nissan Revival Plan and NISSAN 180, we introduced some influential and innovative modelsthe Murano, the Z, the FX and the X-TRAIL, to name a few. It would be a mistake not to capitalize on those successes and reinforce the brand. At the same time, we cannot rely solely on our current concepts. Launching a new product naturally requires significant expenditures, because awareness and understanding must be created for the new product. We must differentiate to succeed, devise new products and concepts, and venture into areas that others have not. During the NISSAN Value-Up period, we will offer products that build on past successes-without being conservative-as well as products that are new and innovative. Our brand pyramid shows us the way to be both 'bold and thoughtful.'", - "page_start": 35, - "page_end": 35, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision. [o] Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction. [248] Even when used in conventional warfare, it is unlikely that they will be unable to reliably choose targets and could potentially kill an innocent person. 
[248] In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons, however the United States and others disagreed. [249] By 2015, over fifty countries were reported to be researching battlefield robots. [250]\n\nAI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware. [251] All these technologies have been available since 2020 or earlier-AI facial recognition systems are already being used for mass surveillance in China. [252][253]\n\nThere many other ways that AI is expected to help bad actors, some of which can not be foreseen. For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours. [254]\n\n## Technological unemployment\n\nEconomists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment. [255]\n\nIn the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that \"we're in uncharted territory\" with AI. [256] A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in longterm unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. 
[257] Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at \"high risk\" of potential automation, while an OECD report classified only 9% of U.S. jobs as \"high risk\". [p][259] The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies. [255] In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence. [260][261]\n\nUnlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that \"the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution\" is \"worth taking seriously\". [262] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. [263]", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia3.pdf" - }, - { - "text": "show that even a computer capable of perfectly simulating human behavior would not have a mind. [387]\n\n## AI welfare and rights\n\nIt is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree. [388] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals. [389][390] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights. [389] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society. 
[391]\n\nIn 2017, the European Union considered granting \"electronic personhood\" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities. [392] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part to society on their own. [393][394]\n\nProgress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited. [390][389]\n\n## Future\n\n## Superintelligence and the singularity\n\nA superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. [379] If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an \"intelligence explosion\" and Vernor Vinge called a \"singularity\". [395]\n\nHowever, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do. [396]\n\n## Transhumanism\n\nRobot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger. 
[397]", - "page_start": 26, - "page_end": 26, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "news4.pdf", - "query": "I want to help my parents who are in residential care, are there any trendy AI-related devices I could help them with? ", - "target_page": 1, - "target_passage": "Wearable devices equipped with this technology can offer real-time health insights, helping individuals make informed decisions about their well-being", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "<!-- image -->\n\n## ISSUE\n\nDecember 2024\n\n## CATEGORIES\n\nTechnology & Cybersecurity\n\nEditor's Picks Finance - Personal Home - Interior\n\n<!-- image -->\n\n## The top AI-powered tech trends in 2025\n\n<!-- image -->\n\n(NC) As we look ahead to 2025, artificial intelligence (AI) continues to revolutionize our lives. From enhancing our daily routines to transforming entire industries, AI's impact is undeniable.\n\nThese five innovations are set to shape our future, offering unprecedented convenience, efficiency and personalization.\n\n## AI-powered computing\n\nAI-powered computing, such as Intel-powered laptops - or AI PC - is at the forefront of technological advancement. But what, exactly, is an AI PC? They're computers that have AI built into their processors - also known as the brain of the computer - which optimizes performance, enhances security and provides a more personalized experience as they learn from your usage patterns. For consumers, this means faster, smarter and more secure computing tailored to your individual needs.\n\n## Smart home automation\n\nSmart home automation has been around for a while, but AI is taking it to the next level. Imagine a home that not only follows your commands, but also anticipates your needs. 
Enhanced smart home systems can learn your daily routines and adjust settings accordingly, from lighting and temperature to security and entertainment, making your home smarter and more responsive than ever before.\n\n## Health and wellness\n\nThe health-care industry is seeing significant transformation. AI-driven health and wellness applications can monitor vital signs, predict potential health issues, and even provide personalized fitness and nutrition plans. Wearable devices equipped with this technology can offer real-time health insights, helping individuals make informed decisions about their well-being.\n\n## Financial services\n\nAI is also making waves in the financial sector, offering smarter and more secure ways to manage money. From AI-driven investment platforms that provide personalized financial advice to fraud detection systems that protect against cyber threats, AI can analyze vast amounts of data to identify trends and make more informed financial decisions.\n\n## Enhanced education\n\nIn education, enhanced learning tools provide personalized learning experiences that adapt to each student's strengths and weaknesses. This technology can offer real-time feedback, helping students improve their skills more effectively. Additionally, AI can assist educators by automating administrative tasks and providing insights into student performance, allowing for more focused and effective teaching.\n\nLearn more at intel.com/aipc.\n\nwww.newscanada.com\n\nWord Count: 346\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nRADIO\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nEN", - "page_start": 0, - "page_end": 0, - "source_file": "news4.pdf" - }, - { - "text": "<!-- image -->\n\n## Artificial intelligence\n\nArtificial intelligence ( AI ), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. 
It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. [1] Such machines may be called AIs.\n\nHigh-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). However, many AI applications are not perceived as AI: \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.\" [2][3]\n\nVarious subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics. [a] General intelligence-the ability to complete any task performed by a human on an at least equal level-is among the field's long-term goals. [4] To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. [b] AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields. [5]\n\nArtificial intelligence was founded as an academic discipline in 1956, [6] and the field went through multiple cycles of optimism throughout its history, [7][8] followed by periods of disappointment and loss of funding, known as AI winters. 
[9][10] Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques. [11] This growth accelerated further after 2017 with the transformer architecture, [12] and by the early 2020s many billions of dollars were being invested in AI and the field experienced rapid ongoing progress in what has become known as the AI boom. The emergence of advanced generative AI in the midst of the AI boom and its ability to create and modify content exposed several unintended consequences and harms in the present and raised concerns about the risks of AI and its long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.\n\n## Goals", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia3.pdf" - }, - { - "text": "## Existential risk\n\nIt has been argued AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist Stephen Hawking stated, \"spell the end of the human race\". [265] This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like \"self-awareness\" (or \"sentience\" or \"consciousness\") and becomes a malevolent character. [q] These sci-fi scenarios are misleading in several ways.\n\nFirst, AI does not require human-like sentience to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager). 
[267] Stuart Russell gives the example of household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that \"you can't fetch the coffee if you're dead.\" [268] In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is \"fundamentally on our side\". [269]\n\nSecond, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like ideologies, law, government, money and the economy are built on language; they exist because there are stories that billions of people believe. The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive. [270]\n\nThe opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI. [271] Personalities such as Stephen Hawking, Bill Gates, and Elon Musk, [272] as well as AI pioneers such as Yoshua Bengio, Stuart Russell, Demis Hassabis, and Sam Altman, have expressed concerns about existential risk from AI.\n\nIn May 2023, Geoffrey Hinton announced his resignation from Google in order to be able to \"freely speak out about the risks of AI\" without \"considering how this impacts Google.\" [273] He notably mentioned risks of an AI takeover, [274] and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in use of AI. [275]\n\nIn 2023, many leading AI experts endorsed the joint statement that \"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war\". [276]\n\nSome other researchers were more optimistic. 
AI pioneer Jürgen Schmidhuber did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making \"human lives longer and healthier and easier.\" [277] While the tools that are now being used to improve lives can also be used by bad actors, \"they can also be used against the bad actors.\" [278][279] Andrew Ng also argued that \"it's a mistake to fall for the doomsday hype on AI-and that regulators who do will only benefit vested interests.\" [280] Yann LeCun \"scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction.\" [281] In the early 2010s, experts argued that the risks are too distant in", - "page_start": 18, - "page_end": 18, - "source_file": "wikipedia3.pdf" - }, - { - "text": "show that even a computer capable of perfectly simulating human behavior would not have a mind. [387]\n\n## AI welfare and rights\n\nIt is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree. [388] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals. [389][390] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights. [389] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society. [391]\n\nIn 2017, the European Union considered granting \"electronic personhood\" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities. [392] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. 
They also noted that robots lacked the autonomy to take part to society on their own. [393][394]\n\nProgress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited. [390][389]\n\n## Future\n\n## Superintelligence and the singularity\n\nA superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. [379] If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an \"intelligence explosion\" and Vernor Vinge called a \"singularity\". [395]\n\nHowever, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do. [396]\n\n## Transhumanism\n\nRobot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger. [397]", - "page_start": 26, - "page_end": 26, - "source_file": "wikipedia3.pdf" - }, - { - "text": "A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision. [o] Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction. 
[248] Even when used in conventional warfare, it is unlikely that they will be unable to reliably choose targets and could potentially kill an innocent person. [248] In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons, however the United States and others disagreed. [249] By 2015, over fifty countries were reported to be researching battlefield robots. [250]\n\nAI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware. [251] All these technologies have been available since 2020 or earlier-AI facial recognition systems are already being used for mass surveillance in China. [252][253]\n\nThere many other ways that AI is expected to help bad actors, some of which can not be foreseen. For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours. [254]\n\n## Technological unemployment\n\nEconomists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment. [255]\n\nIn the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that \"we're in uncharted territory\" with AI. 
[256] A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in longterm unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. [257] Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at \"high risk\" of potential automation, while an OECD report classified only 9% of U.S. jobs as \"high risk\". [p][259] The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies. [255] In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence. [260][261]\n\nUnlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that \"the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution\" is \"worth taking seriously\". [262] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. [263]", - "page_start": 17, - "page_end": 17, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Some authors have suggested in practice, that the definition of AI is vague and difficult to define, with contention as to whether classical algorithms should be categorised as AI, [367] with many companies during the early 2020s AI boom using the term as a marketing buzzword, often even if they did \"not actually use AI in a material way\". [368]\n\n## Evaluating approaches to AI\n\nNo established unifying theory or paradigm has guided AI research for most of its history. 
[aa] The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term \"artificial intelligence\" to mean \"machine learning with neural networks\"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers.\n\n## Symbolic AI and its limits\n\nSymbolic AI (or \"GOFAI\") [370] simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. They were highly successful at \"intelligent\" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: \"A physical symbol system has the necessary and sufficient means of general intelligent action.\" [371]\n\nHowever, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level \"intelligent\" tasks were easy for AI, but low level \"instinctive\" tasks were extremely difficult. [372] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a \"feel\" for the situation, rather than explicit symbolic knowledge. [373] Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him. [ab][16]\n\nThe issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. 
Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence, [375][376] in part because subsymbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.\n\n## Neat vs. scruffy\n\n\"Neats\" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). \"Scruffies\" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s, [377] but eventually was seen as irrelevant. Modern AI has elements of both.\n\n## Soft vs. hard computing", - "page_start": 24, - "page_end": 24, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Artificial intelligent (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. 
Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks. [175][176][177]\n\nVincent van Gogh in watercolour created by generative AI software\n\n<!-- image -->\n\n## Other industry-specific tasks\n\nThere are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated \"AI\" in some offerings or processes. [178] A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management.\n\nAI applications for evacuation and disaster management are growing. AI has been used to investigate if and how people evacuated in large scale and small scale evacuations using historical data from GPS, videos or social media. Further, AI can provide real time information on the real time evacuation conditions. [179][180][181]\n\nIn agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increasing yield. Agronomists use AI to conduct research and development. AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water.\n\nArtificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for \"classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights.\" For example, it is used for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. 
Additionally, it could be used for activities in space, such as space exploration, including the analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.", - "page_start": 11, - "page_end": 11, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers. [300]\n\nThe UK AI Safety Institute released in 2024 a testing toolset called 'Inspect' for AI safety evaluations available under a MIT open-source licence which is freely available on GitHub and can be improved with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities. [301]\n\n## Regulation\n\nThe regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms. [302] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. [303] According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone. [304][305] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI. [306] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and\n\nThe first global AI Safety Summit was held in 2023 with a declaration calling for international cooperation.\n\n<!-- image -->\n\nVietnam. 
Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia. [306] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. [306] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI. [307] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years. [308] In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, governments officials and academics. [309] In 2024, the Council of Europe created the first international legally binding treaty on AI, called the \"Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law\". It was adopted by the European Union, the United States, the United Kingdom, and other signatories. [310]\n\nIn a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that \"products and services using AI have more benefits than drawbacks\". [304] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. [311] In a 2023 Fox News poll, 35% of Americans thought it \"very important\", and an additional 41% thought it \"somewhat important\", for the federal government to regulate AI, versus 13% responding \"not very important\" and 8% responding \"not at all important\". 
[312][313]", - "page_start": 20, - "page_end": 20, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Alternatively, dedicated models for mathematical problem solving with higher precision for the outcome including proof of theorems have been developed such as Alpha Tensor , Alpha Geometry and Alpha Proof all from Google DeepMind, [157] Llemma from eleuther [158] or Julius . [159]\n\nWhen natural language is used to describe mathematical problems, converters transform such prompts into a formal language such as Lean to define mathematical tasks.\n\nSome models have been developed to solve challenging problems and reach good results in benchmark tests, others to serve as educational tools in mathematics. [160]\n\n## Finance\n\nFinance is one of the fastest growing sectors where applied AI tools are being deployed: from retail online banking to investment advice and insurance, where automated \"robot advisers\" have been in use for some years. [161]\n\nWorld Pensions experts like Nicolas Firzli insist it may be too early to see the emergence of highly innovative AI-informed financial products and services: \"the deployment of AI tools will simply further automatise things: destroying tens of thousands of jobs in banking, financial planning, and pension advice in the process, but I'm not sure it will unleash a new wave of [e.g., sophisticated] pension innovation.\" [162]\n\n## Military\n\nVarious countries are deploying AI military applications. [163] The main applications enhance command and control, communications, sensors, integration and interoperability. [164] Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles. 
[163] AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, coordination and deconfliction of distributed Joint Fires between networked combat vehicles involving manned and unmanned teams. [164]\n\nAI has been used in military operations in Iraq, Syria, Israel and Ukraine. [163][165][166][167]\n\n## Generative AI\n\nIn the early 2020s, generative AI gained widespread prominence. GenAI is AI capable of generating text, images, videos, or other data using generative models, [168][169] often in response to prompts. [170][171]\n\nIn March 2023, 58% of U.S. adults had heard about ChatGPT and 14% had tried it. [172] The increasing realism and ease-of-use of AI-based text-to-image generators such as Midjourney, DALL-E, and Stable Diffusion sparked a trend of viral AI-generated photos. Widespread attention was gained by a fake photo of Pope Francis wearing a white puffer coat, the fictional arrest of Donald Trump, and a hoax of an attack on the Pentagon, as well as the usage in professional creative arts. [173][174]\n\n## Agents", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia3.pdf" - }, - { - "text": "models are prone to generating falsehoods called \"hallucinations\", although this can be reduced with RLHF and quality data. They are used in chatbots, which allow people to ask a question or request a task in simple text. [122][123]\n\nCurrent models and services include Gemini (formerly Bard), ChatGPT, Grok, Claude, Copilot, and LLaMA. [124] Multimodal GPT models can process different types of data (modalities) such as images, videos, sound, and text. 
[125]\n\n## Hardware and software\n\nIn the late 2010s, graphics processing units (GPUs) that were increasingly designed with AI-specific enhancements and used with specialized TensorFlow software had replaced previously used central processing unit (CPUs) as the dominant means for large-scale (commercial and academic) machine learning models' training. [126] Specialized programming languages such as Prolog were used in early AI research, [127] but general-purpose programming languages like Python have become predominant. [128]\n\nThe transistor density in integrated circuits has been observed to roughly double every 18 months-a trend known as Moore's law, named after the Intel co-founder Gordon Moore, who first identified it. Improvements in GPUs have been even faster. [129]\n\n## Applications\n\nAI and machine learning technology is used in most of the essential applications of the 2020s, including: search engines (such as Google Search), targeting online advertisements, recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic, targeted advertising (AdSense, Facebook), virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones, ADAS and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace and Google's FaceNet) and image labeling (used by Facebook, Apple's iPhoto and TikTok). The deployment of AI may be overseen by a Chief automation officer (CAO).\n\n## Health and medicine\n\nThe application of AI in medicine and medical research has the potential to increase patient care and quality of life. [130] Through the lens of the Hippocratic Oath, medical professionals are ethically compelled to use AI, if applications can more accurately diagnose and treat patients. [131][132]\n\nFor medical research, AI is an important tool for processing and integrating big data. 
This is particularly important for organoid and tissue engineering development which use microscopy imaging as a key technique in fabrication. [133] It has been suggested that AI can overcome discrepancies in funding allocated to different fields of research. [133] New AI tools can deepen the understanding of biomedically relevant pathways. For example, AlphaFold 2 (2021) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein. [134] In 2023, it was reported that AI-guided drug discovery helped find a class of antibiotics capable of killing two different types of drug-resistant bacteria. [135] In 2024, researchers used machine learning to accelerate the search for Parkinson's disease", - "page_start": 8, - "page_end": 8, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "news4.pdf", - "query": "Is the topic of finance trending among AI topics for 2015 in Canada?", - "target_page": 1, - "target_passage": "Financial services", - "chunk_present": { - "presence": true, - "index": 2 - } - }, - "top_chunk": [ - { - "text": "Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers. [300]\n\nThe UK AI Safety Institute released in 2024 a testing toolset called 'Inspect' for AI safety evaluations available under a MIT open-source licence which is freely available on GitHub and can be improved with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities. 
[301]\n\n## Regulation\n\nThe regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms. [302] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. [303] According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone. [304][305] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI. [306] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and\n\nThe first global AI Safety Summit was held in 2023 with a declaration calling for international cooperation.\n\n<!-- image -->\n\nVietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia. [306] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. [306] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI. [307] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years. [308] In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, governments officials and academics. [309] In 2024, the Council of Europe created the first international legally binding treaty on AI, called the \"Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law\". 
It was adopted by the European Union, the United States, the United Kingdom, and other signatories. [310]\n\nIn a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that \"products and services using AI have more benefits than drawbacks\". [304] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. [311] In a 2023 Fox News poll, 35% of Americans thought it \"very important\", and an additional 41% thought it \"somewhat important\", for the federal government to regulate AI, versus 13% responding \"not very important\" and 8% responding \"not at all important\". [312][313]", - "page_start": 20, - "page_end": 20, - "source_file": "wikipedia3.pdf" - }, - { - "text": "<!-- image -->\n\n## Artificial intelligence\n\nArtificial intelligence ( AI ), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. [1] Such machines may be called AIs.\n\nHigh-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). 
However, many AI applications are not perceived as AI: \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.\" [2][3]\n\nVarious subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics. [a] General intelligence-the ability to complete any task performed by a human on an at least equal level-is among the field's long-term goals. [4] To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. [b] AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields. [5]\n\nArtificial intelligence was founded as an academic discipline in 1956, [6] and the field went through multiple cycles of optimism throughout its history, [7][8] followed by periods of disappointment and loss of funding, known as AI winters. [9][10] Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques. [11] This growth accelerated further after 2017 with the transformer architecture, [12] and by the early 2020s many billions of dollars were being invested in AI and the field experienced rapid ongoing progress in what has become known as the AI boom. 
The emergence of advanced generative AI in the midst of the AI boom and its ability to create and modify content exposed several unintended consequences and harms in the present and raised concerns about the risks of AI and its long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.\n\n## Goals", - "page_start": 0, - "page_end": 0, - "source_file": "wikipedia3.pdf" - }, - { - "text": "<!-- image -->\n\n## ISSUE\n\nDecember 2024\n\n## CATEGORIES\n\nTechnology & Cybersecurity\n\nEditor's Picks Finance - Personal Home - Interior\n\n<!-- image -->\n\n## The top AI-powered tech trends in 2025\n\n<!-- image -->\n\n(NC) As we look ahead to 2025, artificial intelligence (AI) continues to revolutionize our lives. From enhancing our daily routines to transforming entire industries, AI's impact is undeniable.\n\nThese five innovations are set to shape our future, offering unprecedented convenience, efficiency and personalization.\n\n## AI-powered computing\n\nAI-powered computing, such as Intel-powered laptops - or AI PC - is at the forefront of technological advancement. But what, exactly, is an AI PC? They're computers that have AI built into their processors - also known as the brain of the computer - which optimizes performance, enhances security and provides a more personalized experience as they learn from your usage patterns. For consumers, this means faster, smarter and more secure computing tailored to your individual needs.\n\n## Smart home automation\n\nSmart home automation has been around for a while, but AI is taking it to the next level. Imagine a home that not only follows your commands, but also anticipates your needs. 
Enhanced smart home systems can learn your daily routines and adjust settings accordingly, from lighting and temperature to security and entertainment, making your home smarter and more responsive than ever before.\n\n## Health and wellness\n\nThe health-care industry is seeing significant transformation. AI-driven health and wellness applications can monitor vital signs, predict potential health issues, and even provide personalized fitness and nutrition plans. Wearable devices equipped with this technology can offer real-time health insights, helping individuals make informed decisions about their well-being.\n\n## Financial services\n\nAI is also making waves in the financial sector, offering smarter and more secure ways to manage money. From AI-driven investment platforms that provide personalized financial advice to fraud detection systems that protect against cyber threats, AI can analyze vast amounts of data to identify trends and make more informed financial decisions.\n\n## Enhanced education\n\nIn education, enhanced learning tools provide personalized learning experiences that adapt to each student's strengths and weaknesses. This technology can offer real-time feedback, helping students improve their skills more effectively. 
Additionally, AI can assist educators by automating administrative tasks and providing insights into student performance, allowing for more focused and effective teaching.\n\nLearn more at intel.com/aipc.\n\nwww.newscanada.com\n\nWord Count: 346\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nRADIO\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nEN", - "page_start": 0, - "page_end": 0, - "source_file": "news4.pdf" - }, - { - "text": "Alternatively, dedicated models for mathematical problem solving with higher precision for the outcome including proof of theorems have been developed such as Alpha Tensor , Alpha Geometry and Alpha Proof all from Google DeepMind, [157] Llemma from eleuther [158] or Julius . [159]\n\nWhen natural language is used to describe mathematical problems, converters transform such prompts into a formal language such as Lean to define mathematical tasks.\n\nSome models have been developed to solve challenging problems and reach good results in benchmark tests, others to serve as educational tools in mathematics. [160]\n\n## Finance\n\nFinance is one of the fastest growing sectors where applied AI tools are being deployed: from retail online banking to investment advice and insurance, where automated \"robot advisers\" have been in use for some years. [161]\n\nWorld Pensions experts like Nicolas Firzli insist it may be too early to see the emergence of highly innovative AI-informed financial products and services: \"the deployment of AI tools will simply further automatise things: destroying tens of thousands of jobs in banking, financial planning, and pension advice in the process, but I'm not sure it will unleash a new wave of [e.g., sophisticated] pension innovation.\" [162]\n\n## Military\n\nVarious countries are deploying AI military applications. [163] The main applications enhance command and control, communications, sensors, integration and interoperability. 
[164] Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles. [163] AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, coordination and deconfliction of distributed Joint Fires between networked combat vehicles involving manned and unmanned teams. [164]\n\nAI has been used in military operations in Iraq, Syria, Israel and Ukraine. [163][165][166][167]\n\n## Generative AI\n\nIn the early 2020s, generative AI gained widespread prominence. GenAI is AI capable of generating text, images, videos, or other data using generative models, [168][169] often in response to prompts. [170][171]\n\nIn March 2023, 58% of U.S. adults had heard about ChatGPT and 14% had tried it. [172] The increasing realism and ease-of-use of AI-based text-to-image generators such as Midjourney, DALL-E, and Stable Diffusion sparked a trend of viral AI-generated photos. Widespread attention was gained by a fake photo of Pope Francis wearing a white puffer coat, the fictional arrest of Donald Trump, and a hoax of an attack on the Pentagon, as well as the usage in professional creative arts. [173][174]\n\n## Agents", - "page_start": 10, - "page_end": 10, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- 265. Cellan-Jones (2014).\n - 266. Russell & Norvig 2021, p. 1001.\n - 267. Bostrom (2014).\n - 268. Russell (2019).\n - 269. Bostrom (2014); Müller & Bostrom (2014); Bostrom (2015).\n - 270. Harari (2023).\n - 271. Müller & Bostrom (2014).\n - 272. Leaders' concerns about the existential risks of AI around 2015: Rawlinson (2015), Holley (2015), Gibbs (2014), Sainato (2015)\n - 273. \" \"Godfather of artificial intelligence\" talks impact and potential of new AI\" (https://www.cbsne ws.com/video/godfather-of-artificial-intelligence-talks-impact-and-potential-of-new-ai). CBS News . 25 March 2023. 
Archived (https://web.archive.org/web/20230328225221/https://www. cbsnews.com/video/godfather-of-artificial-intelligence-talks-impact-and-potential-of-new-ai) from the original on 28 March 2023. Retrieved 28 March 2023.\n - 274. Pittis, Don (4 May 2023). \"Canadian artificial intelligence leader Geoffrey Hinton piles on fears of computer takeover\" (https://www.cbc.ca/news/business/ai-doom-column-don-pittis1.6829302). CBC . Archived (https://web.archive.org/web/20240707032135/https://www.cbc. ca/news/business/ai-doom-column-don-pittis-1.6829302) from the original on 7 July 2024. Retrieved 5 October 2024.\n - 275. \" '50-50 chance' that AI outsmarts humanity, Geoffrey Hinton says\" (https://www.bnnbloomb erg.ca/50-50-chance-that-ai-outsmarts-humanity-geoffrey-hinton-says-1.2085394). Bloomberg BNN . 14 June 2024. Retrieved 6 July 2024.\n - 276. Valance (2023).\n - 277. Taylor, Josh (7 May 2023). \"Rise of artificial intelligence is inevitable but should not be feared, 'father of AI' says\" (https://www.theguardian.com/technology/2023/may/07/rise-of-arti ficial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says). The Guardian . Archived (https://web.archive.org/web/20231023061228/https://www.theguardian.com/techn ology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-fatherof-ai-says) from the original on 23 October 2023. Retrieved 26 May 2023.\n - 278. Colton, Emma (7 May 2023). \" 'Father of AI' says tech fears misplaced: 'You cannot stop it' \" (https://www.foxnews.com/tech/father-ai-jurgen-schmidhuber-says-tech-fears-misplaced-can not-stop). Fox News . Archived (https://web.archive.org/web/20230526162642/https://www.fo xnews.com/tech/father-ai-jurgen-schmidhuber-says-tech-fears-misplaced-cannot-stop) from the original on 26 May 2023. Retrieved 26 May 2023.\n - 279. Jones, Hessie (23 May 2023). 
\"Juergen Schmidhuber, Renowned 'Father Of Modern AI,' Says His Life's Work Won't Lead To Dystopia\" (https://www.forbes.com/sites/hessiejones/20 23/05/23/juergen-schmidhuber-renowned-father-of-modern-ai-says-his-lifes-work-wont-leadto-dystopia). Forbes . Archived (https://web.archive.org/web/20230526163102/https://www.fo rbes.com/sites/hessiejones/2023/05/23/juergen-schmidhuber-renowned-father-of-modern-ai -says-his-lifes-work-wont-lead-to-dystopia/) from the original on 26 May 2023. Retrieved 26 May 2023.\n - 280. McMorrow, Ryan (19 December 2023). \"Andrew Ng: 'Do we think the world is better off with more or less intelligence?' \" (https://www.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f93 52be3). Financial Times . Archived (https://web.archive.org/web/20240125014121/https://ww w.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f9352be3) from the original on 25 January 2024. Retrieved 30 December 2023.\n - 281. Levy, Steven (22 December 2023). \"How Not to Be Stupid About AI, With Yann LeCun\" (http s://www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview). Wired . Archived (h ttps://web.archive.org/web/20231228152443/https://www.wired.com/story/artificial-intelligenc e-meta-yann-lecun-interview/) from the original on 28 December 2023. Retrieved 30 December 2023.", - "page_start": 44, - "page_end": 44, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Log in\n\n<!-- image -->\n\nHome / Arts and Entertainment / New Artificial Intelligence Summit Series Begins With Energy\n\n<!-- image -->\n\nARTS AND ENTERTAINMENT\n\n## New Artificial Intelligence Summit Series Begins With Energy\n\n07/31/2024\n\n(AI) continues to transform the United States and the world. 
To promote and inform rapid advancements in AI and maintain America's global competitiveness, the Special Competitive Studies Project (SCSP), a nonprofit and nonpartisan initiative with a goal of making recommendations to strengthen America's long-term competitiveness in AI, announces the AI+ Summit Series.\n\nThe series kicks off with the topic of energy. The AI + Energy Summit, scheduled for September 26, 2024, in Washington, D.C., will bring together policy makers, energy industry leaders, top government and academic energy researchers, and technologists to address the challenges of AI's energy consumption and develop solutions for a resilient and abundant energy future. The event also aims to address the implications of AI and energy for national security and promote partnerships between AI and energy stakeholders.\n\nAI and other emerging technologies can help the United States take the lead in energy areas including maximizing energy efficiencies, discovering new materials, and enabling new forms of power generation. AI also has a role to play in overcoming energy challenges. The Department of Energy (DOE) already uses AI in several areas including advanced computing, emergency response, environmental modeling, climate forecasting, and materials research.\n\nSCSP's recent 'Action Plan for U.S. Leadership in Next-Generation Energy,' raises many issues related to AI and energy, including recommendations for the government to bring America forward. The AI+ Energy Summit will highlight these and other issues, and promote collaboration to solve problems. The stakes are high; if the U.S. 
falls short on energy, American adversaries could gain the upper hand in AI leadership, according to SCSP experts.\n\nVisit scsp.ai to learn more about the AI+Energy Summit and the SCSP's Next-Generation Energy Action Plan.\n\nArticle Link\n\nhttps://about.newsusa.com/new-artificial-intelligence-summit-series-begins-with…\n\n## RELATED ARTICLES\n\n<!-- image -->\n\n<!-- image -->\n\nMar 06, 2024\n\nCelebrate St. Patrick's Day with No Booze, Just Pure Irish Fun and Entertainment\n\nMar 06, 2024\n\nExplore Downtown San Pedro with Flair: Ride the Iconic Red Car Trolley for Free\n\nMar 06, 2024\n\nSay Hello to Your Big Break at the Stapleton Library Job Fair in Vocation, Trade, or Civil Service\n\n<!-- image -->\n\n<!-- image -->\n\nFeb 22, 2024\n\nRetrain Your Emotional Brain: A Natural Alternative to Weight Loss Drugs\n\n© Copyright NewsUSA 2025. All Rights Reserved.\n\nFeb 21, 2024\n\nSerial Entrepreneur Teaches Us How to Go the Distance in Business and in Life\n\nNEWSUSA\n\nMar 06, 2024\n\nLocal Artists Collaborate for a Unique Fusion of Groove and Collage\n\nFASHION\n\nBUSINESS\n\nINFOGRAPHIC\n\nENVIRONMENT\n\nHEALTH\n\nMONEY\n\nFOOD\n\nTRAVEL\n\nBRIDAL\n\nRECREATION\n\nTECHNOLOGY\n\nHOME\n\nEDUCATION\n\nARTS & ENTERTAINMENT\n\nAUTO\n\nCHILDREN\n\nFITNESS\n\nHOLIDAY\n\nINSURANCE\n\nLAWN & GARDEN\n\nLISTICLE\n\nNUTRITION\n\nPARENTING\n\nPETS\n\nSEASONAL\n\nSENIORS\n\nSPANISH\n\nTIPS AND HOW TO\n\nENTERTAINMENT\n\nCAREER\n\nCOMMUNITY\n\nFAMILY\n\nTIPS\n\nINTERNET\n\nHUMAN\\_INTEREST\n\nBEAUTY\n\nARTS\n\nREALESTATE\n\nSAFETY\n\nMEDICINE\n\nBOOK\\_REVIEW\n\nRECIPE\n\nAFRICAN\\_AMERICANS\n\nHOW\\_TO\n\nBYLINED\\_COLUMN\n\nCHARITY\n\nSPORTS\n\nHOME\\_IMPROVEMENT\n\nTECH\n\nWELLNESS\n\nARTS AND ENTERTAINMENT\n\nFOOD & DRINK\n\nREAL\\_ESTATE\n\nVETERANS\n\nOUTDOORS\n\nREAL ESTATE\n\nHUMAN INTEREST\n\nMONEY & FINANCE\n\nFASHION & BEAUTY\n\nMONEY AND FINANCE\n\nBOOKS & ENTERTAINMENT\n\nBOOKS\n\nARTS & ENTERTAINMENT\n\nCATEGORIES\n\nRECENT POSTS", - "page_start": 0, - 
"page_end": 0, - "source_file": "news1.pdf" - }, - { - "text": "## our Goals for 2014\n\ncomplete a minimum of $75 million in acquisitions.\n\nacquire over 50% of 2014 acquisitions outside atlantic canada, with a focus in ontario.\n\nGrow same store noi by up to 2%.\n\ncontinue to invest in development with two projects underway, managing projects on schedule and on budget.\n\ndevelopment program to a maximum of 5% of our balance sheet per year. We have three other developments projects in various planning stages, but don't expect to begin construction on any additional new projects until late 2014 or into 2015.\n\n## Geographic Diversi/fication is a Priority\n\nGeographic diversi/fication is a priority for Killam. Our asset base in Atlantic Canada is the foundation of the Company; however, with Atlantic Canada representing only 5% of the Canadian rental market, our growth opportunities increase signi/ficantly by expanding our target markets outside of this region. With its strong operating platform, Killam can support a larger and more geographically diverse portfolio. We are actively growing a portfolio of apartments in Ontario in three target markets: Ottawa, the Greater Toronto Area, and Southwestern Ontario. An increased investment outside Atlantic Canada will increase not only Killam's growth potential, it will also expand the Company's diversi/fication and exposure to higher growth markets.\n\nAcquisitions in Ontario represented 45% of acquisitions in 2013. In addition to 1,359 apartment units in the province, we also have 2,144 manufactured home community sites, representing 29% of the MHC NOI last year. Based on our current portfolio, 15% of Killam's 2014 NOI will be generated in Ontario, compared to our longer-term goal of generating 50% of NOI outside Atlantic Canada. We expect to reach this goal by focusing acquisition activity in Ontario, with the majority of future investment anticipated in the province over the next few years. 
We will look for additional development opportunities in Ontario and we are exploring opportunities in Western Canada, attracted by the strong population growth trends in Alberta's urban markets.\n\nI would like to thank all Killam employees for their contributions and commitment over the last year and our board of directors for their governance. Also, I would like to thank you, our shareholders, for your continued investment in Killam. I invite you to attend the Company's annual meeting on May 7, 2014 at 2:00 pm Atlantic Time at the Halifax Marriott Harbourfront Hotel, either in person or via webcast.\n\n<!-- image -->\n\nYours truly,\n\nPhilip Fraser", - "page_start": 10, - "page_end": 10, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "- 160. Alex McFarland: 7 Best AI for Math Tools. (https://www.unite.ai/best-ai-for-math-tools/) Archived (https://web.archive.org/web/20240911125615/https://www.unite.ai/best-ai-for-mat h-tools/) 11 September 2024 at the Wayback Machine unite.ai. Retrieved 2024-08-07\n - 161. Matthew Finio & Amanda Downie: IBM Think 2024 Primer, \"What is Artificial Intelligence (AI) in Finance?\" 8 Dec. 2023\n - 162. M. Nicolas, J. Firzli: Pensions Age/European Pensions magazine, \"Artificial Intelligence: Ask the Industry\" May June 2024 https://videovoice.org/ai-in-finance-innovationentrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligence-act-wont-work-asintended/ Archived (https://web.archive.org/web/20240911125502/https://videovoice.org/ai-i n-finance-innovation-entrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligenceact-wont-work-as-intended/) 11 September 2024 at the Wayback Machine.\n - 163. Congressional Research Service (2019). Artificial Intelligence and National Security (https://f as.org/sgp/crs/natsec/R45178.pdf) (PDF). Washington, DC: Congressional Research Service.PD-notice\n - 164. Slyusar, Vadym (2019). Artificial intelligence as the basis of future control networks (Preprint). 
doi:10.13140/RG.2.2.30247.50087 (https://doi.org/10.13140%2FRG.2.2.30247.5 0087).\n - 165. Iraqi, Amjad (3 April 2024). \" 'Lavender': The AI machine directing Israel's bombing spree in Gaza\" (https://www.972mag.com/lavender-ai-israeli-army-gaza/). +972 Magazine . Retrieved 6 April 2024.\n - 166. Davies, Harry; McKernan, Bethan; Sabbagh, Dan (1 December 2023). \" 'The Gospel': how Israel uses AI to select bombing targets in Gaza\" (https://www.theguardian.com/world/2023/ dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets). The Guardian . Retrieved 4 December 2023.\n - 167. Marti, J Werner (10 August 2024). \"Drohnen haben den Krieg in der Ukraine revolutioniert, doch sie sind empfindlich auf Störsender - deshalb sollen sie jetzt autonom operieren\" (http s://www.nzz.ch/international/die-ukraine-setzt-auf-drohnen-die-autonom-navigieren-und-toet en-koennen-ld.1838731). Neue Zürcher Zeitung (in German). Retrieved 10 August 2024.\n - 168. Newsom, Gavin; Weber, Shirley N. (6 September 2023). \"Executive Order N-12-23\" (https:// www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-\\_-GGN-Signed.pdf) (PDF). Executive Department, State of California. Archived (https://web.archive.org/web/202402212 22035/https://www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-\\_-GGN-Signed.pd f) (PDF) from the original on 21 February 2024. Retrieved 7 September 2023.\n - 169. Pinaya, Walter H. L.; Graham, Mark S.; Kerfoot, Eric; Tudosiu, Petru-Daniel; Dafflon, Jessica; Fernandez, Virginia; Sanchez, Pedro; Wolleb, Julia; da Costa, Pedro F.; Patel, Ashay (2023). \"Generative AI for Medical Imaging: extending the MONAI Framework\". arXiv:2307.15208 (https://arxiv.org/abs/2307.15208) [eess.IV (https://arxiv.org/archive/eess.I V)].\n - 170. Griffith, Erin; Metz, Cade (27 January 2023). \"Anthropic Said to Be Closing In on $300 Million in New A.I. Funding\" (https://www.nytimes.com/2023/01/27/technology/anthropic-ai-fu nding.html). The New York Times . 
Archived (https://web.archive.org/web/20231209074235/h ttps://www.nytimes.com/2023/01/27/technology/anthropic-ai-funding.html) from the original on 9 December 2023. Retrieved 14 March 2023.\n - 171. Lanxon, Nate; Bass, Dina; Davalos, Jackie (10 March 2023). \"A Cheat Sheet to AI Buzzwords and Their Meanings\" (https://news.bloomberglaw.com/tech-and-telecom-law/a-c heat-sheet-to-ai-buzzwords-and-their-meanings-quicktake). Bloomberg News . Archived (http s://web.archive.org/web/20231117140835/https://news.bloomberglaw.com/tech-and-telecom -law/a-cheat-sheet-to-ai-buzzwords-and-their-meanings-quicktake) from the original on 17 November 2023. Retrieved 14 March 2023.", - "page_start": 38, - "page_end": 38, - "source_file": "wikipedia3.pdf" - }, - { - "text": "- 282. Arguments that AI is not an imminent risk: Brooks (2014), Geist (2015), Madrigal (2015), Lee (2014)\n - 283. Christian (2020), pp. 67, 73.\n - 284. Yudkowsky (2008).\n - 285. Anderson & Anderson (2011).\n - 286. AAAI (2014).\n - 287. Wallach (2010).\n - 288. Russell (2019), p. 173.\n - 289. Stewart, Ashley; Melton, Monica. \"Hugging Face CEO says he's focused on building a 'sustainable model' for the $4.5 billion open-source-AI startup\" (https://www.businessinsider. com/hugging-face-open-source-ai-approach-2023-12). Business Insider . Archived (https://w eb.archive.org/web/20240925013220/https://www.businessinsider.com/hugging-face-open-s ource-ai-approach-2023-12) from the original on 25 September 2024. Retrieved 14 April 2024.\n - 290. Wiggers, Kyle (9 April 2024). \"Google open sources tools to support AI model development\" (https://techcrunch.com/2024/04/09/google-open-sources-tools-to-support-ai-model-develop ment). TechCrunch . Archived (https://web.archive.org/web/20240910112401/https://techcrun ch.com/2024/04/09/google-open-sources-tools-to-support-ai-model-development/) from the original on 10 September 2024. Retrieved 14 April 2024.\n - 291. Heaven, Will Douglas (12 May 2023). 
\"The open-source AI boom is built on Big Tech's handouts. How long will it last?\" (https://www.technologyreview.com/2023/05/12/1072950/op en-source-ai-google-openai-eleuther-meta). MIT Technology Review . Retrieved 14 April 2024.\n - 292. Brodsky, Sascha (19 December 2023). \"Mistral AI's New Language Model Aims for Open Source Supremacy\" (https://aibusiness.com/nlp/mistral-ai-s-new-language-model-aims-for-o pen-source-supremacy). AI Business . Archived (https://web.archive.org/web/202409052126 07/https://aibusiness.com/nlp/mistral-ai-s-new-language-model-aims-for-open-source-supre macy) from the original on 5 September 2024. Retrieved 5 October 2024.\n - 293. Edwards, Benj (22 February 2024). \"Stability announces Stable Diffusion 3, a next-gen AI image generator\" (https://arstechnica.com/information-technology/2024/02/stability-announc es-stable-diffusion-3-a-next-gen-ai-image-generator). Ars Technica . Archived (https://web.ar chive.org/web/20241005170201/https://arstechnica.com/information-technology/2024/02/sta bility-announces-stable-diffusion-3-a-next-gen-ai-image-generator/) from the original on 5 October 2024. Retrieved 14 April 2024.\n - 294. Marshall, Matt (29 January 2024). \"How enterprises are using open source LLMs: 16 examples\" (https://venturebeat.com/ai/how-enterprises-are-using-open-source-llms-16-exa mples). VentureBeat . Archived (https://web.archive.org/web/20240926171131/https://ventur ebeat.com/ai/how-enterprises-are-using-open-source-llms-16-examples/) from the original on 26 September 2024. Retrieved 5 October 2024.\n - 295. Piper, Kelsey (2 February 2024). \"Should we make our most powerful AI models open source to all?\" (https://www.vox.com/future-perfect/2024/2/2/24058484/open-source-artificial -intelligence-ai-risk-meta-llama-2-chatgpt-openai-deepfake). Vox . 
Archived (https://web.archi ve.org/web/20241005170204/https://www.vox.com/future-perfect/2024/2/2/24058484/open-s ource-artificial-intelligence-ai-risk-meta-llama-2-chatgpt-openai-deepfake) from the original on 5 October 2024. Retrieved 14 April 2024.\n - 296. Alan Turing Institute (2019). \"Understanding artificial intelligence ethics and safety\" (https:// www.turing.ac.uk/sites/default/files/2019-06/understanding\\_artificial\\_intelligence\\_ethics\\_and \\_safety.pdf) (PDF). Archived (https://web.archive.org/web/20240911131935/https://www.turi ng.ac.uk/sites/default/files/2019-06/understanding\\_artificial\\_intelligence\\_ethics\\_and\\_safety. pdf) (PDF) from the original on 11 September 2024. Retrieved 5 October 2024.", - "page_start": 45, - "page_end": 45, - "source_file": "wikipedia3.pdf" - }, - { - "text": "Gleick, James, \"The Fate of Free Will\" (review of Kevin J. Mitchell, Free Agents: How Evolution Gave Us Free Will , Princeton University Press, 2023, 333 pp.), The New York Review of Books , vol. LXXI, no. 1 (18 January 2024), pp. 27-28, 30. \"Agency is what distinguishes us from machines. For biological creatures, reason and purpose come from acting in the world and experiencing the consequences. Artificial intelligences - disembodied, strangers to blood, sweat, and tears - have no occasion for that.\" (p. 30.)\n\nHalpern, Sue, \"The Coming Tech Autocracy\" (review of Verity Harding, AI Needs You: How We Can Change AI's Future and Save Our Own , Princeton University Press, 274 pp.; Gary Marcus, Taming Silicon Valley: How We Can Ensure That AI Works for Us , MIT Press, 235 pp.; Daniela Rus and Gregory Mone, The Mind's Mirror: Risk and Reward in the Age of AI , Norton, 280 pp.; Madhumita Murgia, Code Dependent: Living in the Shadow of AI , Henry Holt, 311 pp.), The New York Review of Books , vol. LXXI, no. 17 (7 November 2024), pp. 44-46. 
\"'We can't realistically expect that those who hope to get rich from AI are going to have the interests of the rest of us close at heart,' ... writes [Gary Marcus]. 'We can't count on governments driven by campaign finance contributions [from tech companies] to push back.'... Marcus details the demands that citizens should make of their governments and the tech companies. They include transparency on how AI systems work; compensation for individuals if their data [are] used to train LLMs (large language model)s and the right to consent to this use; and the ability to hold tech companies liable for the harms they cause by eliminating Section 230, imposing cash penalties, and passing stricter product liability laws... Marcus also suggests... that a new, AI-specific federal agency, akin to the FDA, the FCC, or the FTC, might provide the most robust oversight.... [T]he Fordham law professor Chinmayi Sharma... suggests... establish[ing] a professional licensing regime for engineers that would function in a similar way to medical licenses, malpractice suits, and the Hippocratic oath in medicine. 'What if, like doctors,' she asks..., 'AI engineers also vowed to do no harm?'\" (p. 46.)\n\nHenderson, Mark (24 April 2007). \"Human rights for robots? We're getting carried away\" (http:// www.thetimes.co.uk/tto/technology/article1966391.ece). The Times Online . London. Archived (https://web.archive.org/web/20140531104850/http://www.thetimes.co.uk/tto/techn ology/article1966391.ece) from the original on 31 May 2014. Retrieved 31 May 2014.\n\nHughes-Castleberry, Kenna, \"A Murder Mystery Puzzle: The literary puzzle Cain's Jawbone , which has stumped humans for decades, reveals the limitations of natural-languageprocessing algorithms\", Scientific American , vol. 329, no. 4 (November 2023), pp. 81-82. 
\"This murder mystery competition has revealed that although NLP (natural-language processing) models are capable of incredible feats, their abilities are very much limited by the amount of context they receive. This [...] could cause [difficulties] for researchers who hope to use them to do things such as analyze ancient languages. In some cases, there are few historical records on long-gone civilizations to serve as training data for such a purpose.\" (p. 82.)\n\nImmerwahr, Daniel, \"Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?\", The New Yorker , 20 November 2023, pp. 54-59. \"If by 'deepfakes' we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren't deep, and the deeps aren't fake. [...] A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of cartoons, especially smutty ones.\" (p. 59.)\n\nJohnston, John (2008) The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI , MIT Press.", - "page_start": 67, - "page_end": 67, - "source_file": "wikipedia3.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_CHK_2010.pdf", - "query": "Is there any chance that my cousin has been granted financial aid from Chesapeak Energy? He's studying at a college in Oklahoma.", - "target_page": 26, - "target_passage": "hat’s why we gave $1.0 million to establish the Chesapeake Energy dormitory for students at the Oklahoma School for Science and Mathematics (OSSM", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "We also hire locally whenever possible to help stimulate the local economy, and we provide training when the local work force isn't yet qualified for the jobs we have open. 
For example, when Chesapeake began operating in the Marcellus Shale of West Virginia and Pennsylvania, finding experienced rig workers was a challenge. To meet that need, Chesapeake's wholly owned subsidiary, Nomac Drilling, built the 40,000-square-foot Eastern Training Center and Housing Facility in Bradford County, near Sayre, Pennsylvania. The campus opened in 2010 and serves as a housing facility and training ground for 266 workers at a time. Nomac and Chesapeake host regular job fairs in the region and the lines of interested candidates often extend out the door.\n\n## Educational Impact\n\nWe are also proud to help prepare tomorrow's leaders today. In 2010 Chesapeake supported universities, schools, academic chairs, scholarships and other educational programs with contributions totaling $5.4 million.\n\nInvesting in programs that promote technology and innovation is a key to our country's success. That's why we gave $1.0 million to establish the Chesapeake Energy dormitory for students at the Oklahoma School for Science and Mathematics (OSSM), a public, tuition-free, residential high school located in Oklahoma City for juniors and seniors with exceptional abilities. The extremely competitive school is helping train the next generation of scientists and mathematicians.\n\nWe also established the Chesapeake Energy Presidential Scholars Program at the Oklahoma City University Meinders School of Business, making a $5.0 million commitment to be distributed over the next five years. The Chesapeake Scholars Program will provide up to $25,000 per year in tuition", - "page_start": 25, - "page_end": 25, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "to selected students pursuing careers in finance, economics, accounting, marketing, business administration, computer science and information technology. 
In addition, scholars will take part in a Chesapeake Presidential Leadership Course facilitated by faculty members in coordination with designated Chesapeake leadership coaches, including a Chesapeake senior vice president and OCU alumni.\n\nIn 2007 Chesapeake launched a scholarship program in Texas with an initial $1.25 million contribution, challenging the cities of Fort Worth and Dallas to match its gift within a year. The cities responded and matched the gift, so Chesapeake in 2008 added another $1.25 million to the fund, bringing the total to $3.75 million. The Chesapeake Scholarship Fund currently funds the cost of higher education for 48 minority students. The fund provides each student $20,000 a year for up to four years at the school of their choice. To date more than $1.0 million has been distributed to deserving local students.\n\nTo help ensure the training of qualified geologists, engineers, landmen and energy lawyers in the next generation, we award scholarships to students pursuing energy-related degrees. We also help mentor them through Chesapeake's Peak Program. Junior- and senior-level scholarship recipients are paired with Chesapeake employee mentors who help develop students' knowledge and provide career advice. There are currently 25 mentors and 40 scholarship recipients participating in the Peak Program.\n\nOur recruiting team also initiated a strategic military recruitment effort during the past two years to hire former military personnel to work in a variety of leadership and crew positions. This effort earned Chesapeake an honor from G.I. JOBS magazine when we were named a 2011 Top 100 Military-Friendly Employer. 
Chesapeake currently employs 37 men and women who formerly served as junior military officers and more than 100 former servicemen and servicewomen who joined the company through a program called Troops 2 Roughnecks.\n\nIn addition to our specific scholarship programs, one-time educational donations and recruitment efforts, in 2010 we gave more than $1.8 million to fund higher education for nearly 400 other students in 12 states through our Chesapeake Scholars program. Chesapeake's scholarships help recruit the best and brightest students and provide educational opportunities in communities where we operate. In Oklahoma City, more than 400 employees volunteer for up to an hour a week on company time at four local public schools. Chesapeake's program has grown to become the largest corporate mentoring program in Oklahoma.\n\n## Community Impact\n\nChesapeake employees have been enriching their hometowns as volunteers for many years. We formalized those efforts in 2009 by establishing an official employee volunteer program, the H.E.L.P. (Helping Energize Local Progress) Initiative, wherein employees are invited to volunteer each month for a variety of organizations from food pantries to animal shelters. Through that program, employees donated more than 26,000 hours to their communities in 2009.\n\nIn the summer of 2010, Chesapeake took the H.E.L.P. Initiative to a higher level through the launch of Operation Blue. From Memorial Day through Labor Day, each employee was given four hours of company time to complete the volunteer project of their choice. Our employees eagerly accepted the challenge, and in three months more than 4,900 employees donated 30,900 hours of service to 519 organizations in more than 96 communities across the country. 
Operation Blue is now an annual\n\nvolunteer program in which employees roll up their sleeves in the communities they call home.", - "page_start": 26, - "page_end": 26, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "## Management's Discussion and Analysis\n\nDollar amounts are in thousands of Canadian dollars (except as noted)\n\n## Apartment Property Expenses\n\nSame store apartment property expenses increased 5.5% for the year ended December 31, 2013, due primarily to increased utility and fuel expenses as a result of high natural gas prices in Atlantic Canada, and higher electricity costs.\n\n## Utility and Fuel Expense - Same Store\n\nFor the years ended December 31,\n\n| | 2013 | 2012 | % change |\n|---------------------------------|---------|---------|------------|\n| natural gas | $4,565 | $2,729 | 67.3% |\n| oil | 1,523 | 2,095 | (27.3)% |\n| electricity | 5,197 | 4,671 | 11.3% |\n| Water | 3,582 | 3,474 | 3.1% |\n| other | 30 | 33 | (9.1)% |\n| Total utility and fuel expenses | $14,897 | $13,002 | 14.6% |\n\nKillam's apartment properties are heated with a combination of natural gas (55%), electricity (36%), oil (8%) and other sources (1%).\n\nElectricity costs at the unit level are usually paid directly by tenants, reducing Killam's exposure to the majority of the 4,500 units heated with electricity. Fuel costs associated with natural gas or oil fired heating plants are paid by Killam. As such, the Company is exposed to fluctuations in natural gas and oil costs, which represent 40.9% of total same store utility and fuel costs in 2013. Killam invests in green initiatives at its properties to maximize efficiencies, including converting many of its Halifax properties to natural gas from oil over the last three years as natural gas infrastructure has been expanded in the city. 
The decision to convert was supported by the substantial price difference between the cost of natural gas and oil in recent years.\n\nAs noted in the table above, Killam's utility and fuel expenses increased 14.6% in 2013 compared to 2012. The increase was primarily attributable to higher natural gas, electricity costs and water costs.\n\nKillam's natural gas expenses increased by 67.3% in 2013 due to higher gas prices in Atlantic Canada and an increase in properties burning natural gas following conversions of certain Halifax heating plants from oil to gas in 2012 and 2013. The reduction in oil expense in the quarter and year-to-date reflects this reduction in oil exposure.\n\nAs the following chart highlights, the per gigajoule (Gj) commodity cost for natural gas in New Brunswick and Nova Scotia was much higher than NYMEX in 2013 and less correlated to NYMEX than in previous years. (NYMEX is the New York Mercantile Exchange, a commodity futures exchange. Henry Hub, a gas distribution hub in Louisiana is the pricing point for natural gas futures contracts traded on NYMEX). The cost of natural gas in Atlantic Canada and New England experienced a spike from December 2012 until late spring 2013 and a second spike in December 2013, compared to other areas of Canada. Those spikes were both due to increased demand from utilities in Northeast New England and a shortage of gas pipeline capacity in Northeastern New England and Atlantic Canada. A temporary decline in gas supply off the coast of Nova Scotia further contributed to the high pricing in the first part of the year.\n\n## Historic Natural Gas Pricing ($ per Gj) Henry Hub Vs. Heritage Gas\n\n<!-- image -->", - "page_start": 37, - "page_end": 37, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "volunteer program in which employees roll up their sleeves in the communities they call home.\n\nChesapeake's contributions take many forms: financial and equipment donations, volunteerism and scholarships. 
Last year, we made numerous in-kind donations of laptops, reconditioned Chesapeake fleet vehicles and subsidized office space. These contributions provide essential operating tools as nonprofit organizations across the nation attempt to serve more people - often with lower budgets - in tough economic times.\n\nFor example, in Louisiana we donated 12 vehicles in 2010, including one to the Panola College Oil and Natural Gas Technology Program, which teaches students about the natural gas industry and provides them with hands-on technical training. Across many of the company's operating areas, we've donated computers to deserving students, schools and organizations through Chesapeake's Discovering Tomorrow's Leaders program. In 2010 the company equipped 14 students with laptops and donated 70 computers to schools or supporting nonprofit organizations.\n\nChesapeake partners with other companies and organizations to meet basic, practical needs in hundreds of communities. An example is our\n\nPutting food on the table - Employees volunteer at the Regional Food Bank of Oklahoma as part of Operation Blue.\n\n<!-- image -->\n\nsponsorship of the annual Day of Caring at the Ganus Center of Harding University in White County, Arkansas. During the event, approximately 1,200 uninsured or underinsured residents received a day of free medical, dental and eye screenings.\n\nTo help cultivate an appreciation for the great outdoors, in 2010 Chesapeake provided $25,000 to REAL School Gardens, a Fort Worthbased organization that establishes gardens at approximately 70 lower income elementary schools in North Texas. At I.M. Terrell Elementary School, students, parents, teachers and volunteers from Chesapeake and other groups worked together to prepare vegetable gardens and flower beds. 
In addition to teamwork skills and gardening, students learned about nutrition and took home food from the garden's bounty.\n\nWe supported servicemen and servicewomen by partnering with the Shreveport Chapter of Operation Support Our Troops, Inc. Our contribution helped offset the postage to send more than 100 care packages to troops overseas. The shipment was the largest in the organization's history and included Christmas cards, games and nonperishable food items.\n\nBy investing in the communities where we operate and the people whose lives we touch, we ensure a stronger today and a more hopeful tomorrow.", - "page_start": 26, - "page_end": 26, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "- (7) In this paragraph-\n - (a) 'boarding school' means a school or college, which-\n - (i) provides accommodation for its pupils or, as the case may be, students on its own premises, or\n - (ii) arranges accommodation for its pupils or students to be provided elsewhere (other than in connection with a residential trip away from the school);\n - (b) 'school' means-", - "page_start": 79, - "page_end": 79, - "source_file": "uksi_20210582_en.pdf" - }, - { - "text": "ways to uplift the nation's spirits. ways to uplift the nation's spirits.\n\nAndo : Japan has achieved two miracles - the : Japan has achieved two miracles - the Meiji Restoration of 1868, and the economic Meiji Restoration of 1868, and the economic recovery following the end of World War II in recovery following the end of World War II in 1945. Both events are also regarded globally 1945. Both events are also regarded globally as being miraculous. 
as being miraculous.\n\nIn 1945, foreign diplomats and businessmen In 1945, foreign diplomats and businessmen visiting Japan were fully confident that the visiting Japan were fully confident that the country would recover as they surveyed the country would recover as they surveyed the ruins and the scorched earth around them, ruins and the scorched earth around them, because, in the words of one of them, 'People because, in the words of one of them, 'People really work hard and help each other, and really work hard and help each other, and children take heed of what their parents say children take heed of what their parents say and study hard. And because there is a and study hard. And because there is a sparkle in their eyes.' sparkle in their eyes.'\n\nThereafter, the Japanese worked furiously Thereafter, the Japanese worked furiously\n\n<!-- image -->\n\nuntil the country became an economic until the country became an economic juggernaut. However, in the early 1970s, juggernaut. However, in the early 1970s, people became complacent about their people became complacent about their affluence, and stopped working hard and affluence, and stopped working hard and making efforts. Children assumed that if they making efforts. Children assumed that if they went to a top-class university they would walk went to a top-class university they would walk into a top-class company and have nothing to into a top-class company and have nothing to worry about thereafter. So they started going worry about thereafter. So they started going to cram schools even before kindergarten. to cram schools even before kindergarten. I give lectures on the theme 'students born in I give lectures on the theme 'students born in and after 1980 are hopeless cases' (laughs). and after 1980 are hopeless cases' (laughs). 
That was because of the prevailing attitude at That was because of the prevailing attitude at the time that Japan the time that Japan's national development s national development would go on for ever and the economy would would go on for ever and the economy would remain stable. As a result, parents spoilt their remain stable. As a result, parents spoilt their children, and we saw more children who children, and we saw more children who could not do anything. Many such children could not do anything. Many such children are in their 30s now. are in their 30s now.\n\nAnd in this situation, the asset bubble burst And in this situation, the asset bubble burst [in the early 1990s], and the collapse of [in the early 1990s], and the collapse of Lehman [hit world markets] in 2008, and Lehman [hit world markets] in 2008, and now we have the earthquake and tsunami now we have the earthquake and tsunami disaster. It seems that everything that disaster. It seems that everything that happens these days merely makes us more happens these days merely makes us more anxious. I think everyone needs to hit the anxious. I think everyone needs to hit the 'reset' button in some sense. If we don 'reset' button in some sense. If we don't,t, more difficulties lie ahead. more difficulties lie ahead.\n\nMiyata : Indeed, prior to 1970, living : Indeed, prior to 1970, living standards or wage levels were very low, standards or wage levels were very low, but I think it was a very happy time. People but I think it was a very happy time. People believed that if they really worked hard, believed that if they really worked hard, their daily lives would improve and their their daily lives would improve and their\n\n## Takeshi Kunibe", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "- 5. 
Click OK to confirm the partnership relationship.", - "page_start": 593, - "page_end": 593, - "source_file": "sg247938.pdf" - }, - { - "text": "## Doing the right thing\n\nAt Killam we are investing in our communities, as well as our real estate. We believe that giving back to the community is an important part of being a responsible corporate citizen.\n\n## Supporting Killam Families with Scholarship program\n\nKillam's Scholarship Program awards three $3,000 scholarships to children or grandchildren of Killam employees on an annual basis. After a competitive application process in 2013, Bradley Price, Hayley Gillis and Georgia Telman were selected for demonstrating an outstanding combination of academic excellence and community involvement.\n\n## Home away from Home\n\nOn an annual basis, Killam donates six fully furnished apartments to hospitals in Halifax, Saint John, Moncton, Fredericton and Charlottetown. These units are used by families of patients who need to travel away from home for health care.\n\n## red Cross\n\nKillam has partnered with the Red Cross in many of its core markets. The Red Cross is on hand to help when emergencies and disasters impact communities. Over the last six years, Killam has provided the Red Cross with /financial assistance to fund their operations. In return, the Red Cross has provided emergency training to Killam sta/ff, helping us react e/ffectively to emergencies when required.\n\n## Supporting Higher education in atlantic Canada\n\nOn an annual basis, Killam's board of directors join together to support a common charity or organization. During 2013 the board members together donated $100,000 to establish an endowment at Mount Allison University in Sackville, New Brunswick, providing an annual entrance scholarship to the university. 
Previous $100,000 board donations supported the Boys and Girls Clubs of Prince Edward Island, the YMCA of Greater Halifax/Dartmouth and Saint Mary's University in Halifax.\n\n<!-- image -->\n\n## Caring for Kids\n\nDuring 2013 Killam organized the Caring for Kids Lottery, a fundraiser in support of the IWK Health Centre in Halifax. The IWK Health Centre provides quality medical care to women, children, youth and families in the Maritime provinces. Killam tenants supported the cause through the purchase of lottery tickets for the chance to win free rent for a year. All funds raised went directly to the IWK Foundation.\n\n<!-- image -->", - "page_start": 19, - "page_end": 19, - "source_file": "TSX_KMP_2013.pdf" - }, - { - "text": "- 4. Click OK and the key is uploaded.", - "page_start": 782, - "page_end": 782, - "source_file": "sg247938.pdf" - }, - { - "text": "<!-- image -->\n\nHome / Money / 3 Great Resources to Kick-Start Your Financial Planning Career\n\n<!-- image -->\n\nMONEY\n\n## 3 Great Resources to Kick-Start Your Financial Planning Career\n\n11/23/2022\n\n(NewsUSA) - Finding a rewarding career that offers growth potential, work-life balance and the satisfaction of helping others is a key priority for many job seekers. With those goals in mind, a career in financial planning should be a top contender, whether you are just starting out or looking to make a career change. But once you have decided that financial planning is the field for you, how do you get started? Here are three resources that can help you launch a successful financial planning career.\n\n- 1. Guide to Careers in Financial Planning. Based on interviews with leading financial services firms, this guide introduces you to the wide range of career opportunities in the financial planning profession. It identifies typical entry points and career tracks, explores the types of companies that hire financial planners and provides information on how to find financial planning career opportunities. 
It also includes resources such as a list of recommended questions to ask in a job interview.\n- 2. Scholarship Programs. Dozens of scholarship programs are available to support you on your professional journey. Some are offered directly through colleges and universities that have financial planning degree and certificate programs. Others are available through nonprofits and organizations like the CFP Board Center for Financial Planning, which administers 16 scholarship programs that help pay for the education and exam requirements to become a CERTIFIED FINANCIAL PLANNERTM professional. Financial services firms may offer scholarships or tuition reimbursements to employees to cover the costs of obtaining professional designations and credentials such as CFP® certification -- some of which may be required to advance within the company.\n- 3. Career Fairs. In-person and virtual career fairs provide valuable opportunities to connect with prospective employers. CFP Board's spring and fall career fairs are some of the most popular hiring events in the profession, with dozens of firms participating in these online exhibitions. Job seekers can visit employers' virtual exhibit booths and view open jobs and internships, apply for open positions and interact with employers through one-on-one video meetings and messaging. You can also visit the CFP Board Career Center to browse current job and internship opportunities in financial planning, as well as a collection of articles providing career guidance.\n\nOther top resources include career offices at your college or university, financial services companies' career websites and professional organizations that may have a local chapter near you.\n\nMaking the most of these resources will not only help you find a financial planning job, but also support your growth and development as a future financial planning professional. 
To learn more about CFP® certification, visit the CFP Board website.\n\nArticle Link\n\nhttps://about.newsusa.com/3-great-resources-to-kick-start-your-financial-planni…", - "page_start": 0, - "page_end": 0, - "source_file": "news3.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_SMFG_2011.pdf", - "query": "Has the Sumitomo Mitsui Financial Group offered help to the elderly?", - "target_page": 6, - "target_passage": "Currently, the proportion of people aged 65 or over in Japan has reached 23.4%*. SMFG will help create frameworks enabling the elderly to enjoy a vibrant lifestyle with peace of mind, through support for life-cycleframeworks enabling the elderly to enjoy a vibrant lifestyle with peace of mind, through support for life-cycle planning and other measures. The SMFG Group aims to create systems and a corporate culture that foster a soundplanning and other measures. The SMFG Group aims to create systems and a corporate culture that foster a sound balance between work and care needs, given that many group employees will later need to nurse ailing relatives.balance between work and care needs, given that many group employees will later need to nurse ailing relatives", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "<!-- image -->\n\nIn the past, the Sumitomo Group In the past, the Sumitomo Group programs to solve the problem of programs to solve the problem of mine, while the Mitsui Group set up mine, while the Mitsui Group set up give the poorest in society access to give the poorest in society access to corporate social responsibility corporate social responsibility philosophies of both the Sumitomo philosophies of both the Sumitomo years of their existence, we will years of their existence, we will problems facing the international problems facing the international service service operations.operations.\n\nundertook large-scale afforestation undertook large-scale afforestation pollution around 
the Besshi copper pollution around the Besshi copper the Mitsui Memorial Hospital to the Mitsui Memorial Hospital to basic medical care. Based on this basic medical care. Based on this DNA embedded in the business DNA embedded in the business and Mitsui groups over the 400 and Mitsui groups over the 400 continue to play our part in solving continue to play our part in solving community through our financial community through our financial", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## Corporate Outline (as of September 30, 2011)\n\nCompany Name\n\nBusiness Description\n\n - Established\n\nHead Office\n\nChairman of the Board\n\nPresident\n\nCapital\n\nStock Exchange Listings\n\n - Sumitomo Mitsui Financial Group, Inc. ::\n - Management of banking subsidiaries (under the stipulations of Japan's Banking Act) and of non-bank subsidiaries, as well as the performance of ancillary functions :\n - December 2, 2002 :\n - 1-2, Marunouchi 1-chome, Chiyoda-ku, Tokyo, Japan :\n\nMasayuki Oku :\n\n - Koichi Miyata (Concurrent Director at Sumitomo Mitsui Banking Corporation) :\n - ¥2,337.8 billion :\n\nTokyo Stock Exchange (First Section) :\n\nOsaka Securities Exchange (First Section) Nagoya Stock Exchange (First Section) Note: American Depositary Receipts (ADRs) are listed on the New York Stock Exchange.\n\n## Structure of Sumitomo Mitsui Financial Group (as of September 30, 2011)\n\n* SMFG plans to make PROMISE a wholly owned subsidiary in April 2012.\n\n<!-- image -->\n\n## Our CSR reporting\n\nAt Sumitomo Mitsui Financial Group, three kinds of CSR reports are compiled.\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n| | Covers CSR baselines and CSR activities at SMFG and its Group companies, Covers CSR baselines and CSR activities at SMFG and its Group companies, centered on specific examples centered on specific examples CSR report 2011 (digest version) | CSR disclosure through specific examples 
|\n|------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| information on CSR activities information on CSR activities CSR report 2011 statistical performance, online PDF file) | Comprehensive disclosure of CSR activities | Covers environment-related statistical data and gives more detailed Covers environment-related statistical data and gives more detailed (digest version with examples of activities and |\n| | This is the official version of our CSR report. Covers the full spectrum of This is the official version of our CSR report. 
Covers the full spectrum of CSR activities at SMFG CSR activities at SMFG CSR report (online version, Japanese only) www.smfg.co.jp/responsibility | Enriched CSR disclosure |\n\n## Editorial Policy\n\nThis report has been created in an effort to convey to our stakeholders the variety of our initiatives and the roles the SMFG Group is fulfilling as we work to create a sustainable society.", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## Commitment from the Top\n\nA Conversation with Tadao Ando, Takeshi Kunibe and Koichi Miyata\n\n## What can we do now to spur the reconstruction and revitalization of Japan, and help resolve global issues?\n\nUplifting the nation's spirits Uplifting the nation's spirits\n\nJapan is now facing a wide variety of problems, ranging from the reconstruction of the Tohoku region (the northeastern region of Japan) Japan is now facing a wide variety of problems, ranging from the reconstruction of the Tohoku region (the northeastern region of Japan) after the March 11 earthquake and tsunami ('the Great East Japan Earthquake') to a shrinking and aging population, with falling birth rates after the March 11 earthquake and tsunami ('the Great East Japan Earthquake') to a shrinking and aging population, with falling birth rates and increasing numbers of the aged. and increasing numbers of the aged.\n\nWe must now find ways for people to coexist in harmony with nature, based on a global perspective. We must now find ways for people to coexist in harmony with nature, based on a global perspective.\n\nSumitomo Mitsui Financial Group (SMFG) invited the world-famous architect Tadao Ando to join in a conversation on the issues facing society Sumitomo Mitsui Financial Group (SMFG) invited the world-famous architect Tadao Ando to join in a conversation on the issues facing society and the ways in which SMFG and its Group companies can bring their expertise to bear as a financial services group. 
and the ways in which SMFG and its Group companies can bring their expertise to bear as a financial services group.\n\n<!-- image -->\n\n## Tadao Ando\n\nArchitect. Professor Emeritus at the University of Tokyo, Representative and Vice-chairman of the Great East Japan Earthquake Reconstruction Design Council. Awarded the Order of Cultural Merit in 2010.\n\nOur measures to support reconstruction after the disastrous earthquake and tsunami Uplifting the nation's spirits\n\n̶ ̶ SMFG has the following priorities in its SMFG has the following priorities in its corporate social responsibility program: corporate social responsibility program: Reconstruction after the earthquake Reconstruction after the earthquake and tsunami, environmental measures, and tsunami, environmental measures, addressing the shrinking and aging addressing the shrinking and aging population, and global challenges. population, and global challenges. -\n\nKunibe : : Japan is facing a difficult period J a p a n i s f a c i ng a d i f f icu lt period with limited prospects for economic growth with limited prospects for economic growth due to a shrinking, aging population and due to a shrinking, aging population and a mature economy. Against this backdrop, a mature economy. Against this backdrop, the country was hit by the unprecedented the country was hit by the unprecedented catastrophe of the Great East Japan catastrophe of the Great East Japan Earthquake. We must face up to the new Earthquake. We must face up to the new challenges arising from this disaster. challenges arising from this disaster.\n\nI believe the time has come for us to I believe the time has come for us to reconsider what we can do in our capacity reconsider what we can do in our capacity as a financial institution to address a variety as a financial institution to address a variety of issues, including the four priorities. of issues, including the four priorities. 
Today I hope we can discuss not only the road Today I hope we can discuss not only the road to reconstruction after the disaster, but also to reconstruction after the disaster, but also\n\nways to uplift the nation's spirits. ways to uplift the nation's spirits.", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## Social Contribution Activities\n\n<!-- image -->\n\nSMFG as a corporate citizen: Working to create a prosperous society for all\n\n## SMFG and its Group companies participate in neighborhood cleanup programs\n\nIn fiscal 2010, 150 volunteers from the In fiscal 2010, 150 volunteers from the SMFG Group participated in beach cleanup SMFG Group participated in beach cleanup activities in Kanagawa and Hyogo prefectures activities in Kanagawa and Hyogo prefectures on 'SMFG Clean-up Day.' This initiative is on 'SMFG Clean-up Day.' This initiative is not simply a matter of picking up garbage. It not simply a matter of picking up garbage. It also involves inspections and analysis of also involves inspections and analysis of garbage to identify pointers for providing garbage to identify pointers for providing solutions for environmental issues in the solutions for environmental issues in the future. future.\n\nIn addition to beach cleanup activities in In addition to beach cleanup activities in Chiba and Hyogo prefectures by SMBC Chiba and Hyogo prefectures by SMBC Friend Securities, Group companies of Friend Securities, Group companies of Cedyna, Sumitomo Mitsui Finance & Leasing, Cedyna, Sumitomo Mitsui Finance & Leasing, the Japan Research Institute and SMBC the Japan Research Institute and SMBC Nikko Securities carry out ongoing cleanup Nikko Securities carry out ongoing cleanup and other activities in the areas around their and other activities in the areas around their offices and branches. 
offices and branches.\n\nThe Minato Bank and Kansai Urban Banking The Minato Bank and Kansai Urban Banking Corporation also engage in cleanup activities Corporation also engage in cleanup activities around Suma Beach and Lake Biwa, to around Suma Beach and Lake Biwa, to protect the regional environment. protect the regional environment.\n\n## Supporting education in developing countries, together with our customers and employees\n\nCardholders and employees of Sumitomo Cardholders and employees of Sumitomo Mitsui Card joined a literary social contribution Mitsui Card joined a literary social contribution initiative by participating in the Books To initiative by participating in the Books To The People 2010 project operated by BOOKOFF The People 2010 project operated by BOOKOFF CORP. This project aims to provide CORP. This project aims to provide environ environments in which children can read books in ments in which children can read books in purpose-built facilities, through donations to purpose-built facilities, through donations to Room to Read, a non-governmental organi Room to Read, a non-governmental organization that supports education in developing zation that supports education in developing countries. These NGO donations are pegged countries. These NGO donations are pegged to total numbers of used books and other to total numbers of used books and other items purchased by cardholders. Through items purchased by cardholders. Through the Sumitomo Mitsui Card-operated online the Sumitomo Mitsui Card-operated online shopping mall POINT UP Mall, cardholders shopping mall POINT UP Mall, cardholders are encouraged to buy used books through are encouraged to buy used books through BOOKOFF, and employees collect and donate BOOKOFF, and employees collect and donate used books from their homes and companies. 
used books from their homes and companies.\n\n<!-- image -->\n\nCollection box for used books and other items installed in an employee canteen\n\n<!-- image -->\n\nSupporting education in developing countries\n\nGarbage was analyzed in the Kugenuma Beach cleanup event, in which SMFG and its Group companies participated\n\n## Donations through 'The World Bank Green Fund'", - "page_start": 13, - "page_end": 13, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## EXECUTIVES\n\nFrom left: Mitsuhiko Yamashita, Tadao Takahashi, Toshiyuki Shiga, Carlos Ghosn, Itaru Koeda, Hiroto Saikawa, Carlos Tavares\n\n<!-- image -->\n\n## BOARD OF DIRECTORS AND AUDITORS\n\n## Representative Board Members\n\nCarlos Ghosn\n\nPresident and Co-Chairman\n\nItaru Koeda\n\nCo-Chairman\n\nToshiyuki Shiga\n\nCo-Chairman\n\nBoard Members\n\nTadao Takahashi\n\nHiroto Saikawa\n\nMitsuhiko Yamashita\n\nCarlos Tavares\n\nShemaya Lévy\n\nPatrick Pélata\n\nAuditors\n\nHisayoshi Kojima\n\nShinji Ichishima\n\nKeishi Imamura\n\nHaruo Murakami\n\n## EXECUTIVE COMMITTEE MEMBERS\n\nCarlos Ghosn\n\nToshiyuki Shiga\n\nItaru Koeda\n\nTadao Takahashi\n\nHiroto Saikawa\n\nMitsuhiko Yamashita\n\nCarlos Tavares\n\nAlain-Pierre Raynaud\n\n(As of June 21, 2005)", - "page_start": 6, - "page_end": 6, - "source_file": "OTC_NSANY_2004.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nSumitomo Mitsui Financial Group CSR Report\n\nDigest version\n\n<!-- image -->", - "page_start": 0, - "page_end": 0, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "This report has been created in an effort to convey to our stakeholders the variety of our initiatives and the roles the SMFG Group is fulfilling as we work to create a sustainable society.\n\nWe have aimed to present the information clearly, so that readers may understand our attitude that the fulfillment of CSR is\n\nthe essence of business itself, and our initiatives act upon this.\n\nOur CSR 
Report 2011 (digest version), launched last fiscal year, is intended to present more concise reports of the Group's CSR activities, with a focus on specific activities of interest. To complement this, we have also posted online our CSR Report 2011 (digest version, with examples of activities and statistical performance), with more detailed information on CSR activities and statistical data omitted in the CSR Report 2011 (digest version).\n\nWe disclose the full range of our CSR activities as a Group on our website in the official-use version of our CSR Report (in Japanese only). It is recommended that you read it in combination with the above two digest versions in order to understand our CSR and other activities in greater detail.\n\nFrom the current fiscal year, we are including third-party opinions in the website version.\n\n## Scope of this Report\n\n - GLYPH<129> Sumitomo Mitsui Financial Group, Inc.\n - GLYPH<129> Sumitomo Mitsui Banking Corporation\n - GLYPH<129> SMFG Card & Credit, Inc.\n - GLYPH<129> Sumitomo Mitsui Card Company, Limited\n - GLYPH<129> Cedyna Financial Corporation\n - GLYPH<129> Sumitomo Mitsui Finance and Leasing Co., Ltd.\n - GLYPH<129> The Japan Research Institute, Limited\n - GLYPH<129> SMBC Friend Securities Co., Ltd.\n - GLYPH<129> SMBC Nikko Securities Inc.\n - GLYPH<129> THE MINATO BANK, LTD.\n - GLYPH<129> Kansai Urban Banking Corporation\n - GLYPH<129> Other Group companies\n\n## Company name abbreviations and other special terminology\n\nThroughout this report, 'Sumitomo Mitsui Financial Group' or 'SMFG' refers to the holding company alone. 
'The SMFG Group' refers to the holding company and its primary domestic and international subsidiaries and affiliates.\n\n## Reference guidelines\n\nGlobal Reporting Initiative (GRI) Sustainability Reporting Guidelines 2006 (G3)\n\n - * Global Reporting Initiative (GRI): Established as an international standard for sustainability reporting, compilers set up an international organization (GRI) in 1997 to encourage its adoption worldwide.\n\n## About this Report\n\nPeriod Covered\n\nPublication Date of Japanese Document\n\nContact\n\n - : April 1, 2010 to March 31, 2011 ( 'Fiscal 2010' )\n - : December 2011\n - :\n\nNote: Certain items in this report refer to activities taking place after April 2011.\n\n - Group CSR Department, Sumitomo Mitsui Financial Group, Inc. 1-2 Marunouchi 1-chome, Chiyoda-ku, Tokyo 100-0005 TEL: +81-3-3282-8111", - "page_start": 15, - "page_end": 15, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "Miyata : In the same way, other SMFG : In the same way, other SMFG Group companies have been sending out Group companies have been sending out volunteers, and providing donations not only volunteers, and providing donations not only as a company, but also through individual as a company, but also through individual employees. SMBC was at the heart of all these employees. SMBC was at the heart of all these activities, and this was a good opportunity activities, and this was a good opportunity for us to appreciate anew how our business for us to appreciate anew how our business contributes to the public good. contributes to the public good.\n\n<!-- image -->\n\n## Koichi Miyata\n\nPresident Sumitomo Mitsui Financial Group, Inc.\n\nThe SMFG Group has 62,000 employees, The SMFG Group has 62,000 employees, 'stepping up to the plate and working hard 'stepping up to the plate and working hard to give something back to society.' I think it to give something back to society.' 
I think it is important to develop ways of making this is important to develop ways of making this a shared aspiration of all the employees of a shared aspiration of all the employees of\n\nthe Group. the Group.", - "page_start": 2, - "page_end": 2, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nEurope\n\n## Donations to charity groups\n\nEmployees of Sumitomo Mitsui Banking Corporation Europe Employees of Sumitomo Mitsui Banking Corporation Europe (SMBCE) conducted volunteer activities in their time off. (SMBCE) conducted volunteer activities in their time off. SMBCE contributes to charitable organizations through an SMBCE contributes to charitable organizations through an in-house fund and also uses a matching gifts program under in-house fund and also uses a matching gifts program under\n\nwhich it donates a which it donates a certain amount for certain amount for every donation made every donation made by its employees. by its employees.\n\nEmployee volunteers who participated in landscape improvement projects\n\n<!-- image -->\n\nEurope\n\n## Donation for a Japanese-language speech contest\n\nThe European office of the Japan Research Institute (JRI) The European office of the Japan Research Institute (JRI) made a donation in support of a Japanese-language speech made a donation in support of a Japanese-language speech contest. contest.\n\nMozambique\n\n## UNICEF support initiatives\n\nThrough the Climate & Children Supporters project, the bank Through the Climate & Children Supporters project, the bank has supported UNICEF projects in Mozambique benefitting has supported UNICEF projects in Mozambique benefitting\n\nchildren and improving children and improving the water-supply and the water-supply and sanitary environment. 
sanitary environment.\n\n*Please see this website for further details (in Japanese): www.smbc.co.jp/ccs/\n\nⓒ ⓒ UNICEF Mozambique/Arild Drivdal\n\n<!-- image -->\n\n<!-- image -->", - "page_start": 14, - "page_end": 14, - "source_file": "NYSE_SMFG_2011.pdf" - }, - { - "text": "## Environmental Activities\n\nInternational initiatives in Asian countries and others\n\n## Taking a leading role in environmental businesses in Asia\n\n## Promoting energy-saving and low-emission industries in China\n\n## Support for adoption of electric vehicles and car-sharing\n\nThe SMFG Group supports environmental The SMFG Group supports environmental businesses in the rapidly growing markets of businesses in the rapidly growing markets of Southeast Asia from various perspectives. Southeast Asia from various perspectives. For example in Malaysia, SMBC signed an For example in Malaysia, SMBC signed an operational alliance on environmental operational alliance on environmental businesses with the Federation of Malaysian businesses with the Federation of Malaysian Manufacturers in April 2010, and in October Manufacturers in April 2010, and in October that year acted as main sponsor for Malaysia that year acted as main sponsor for Malaysia's first large-scale international environmental first large-scale international environmental exhibition, International Greentech & Eco exhibition, International Greentech & Eco products Exhibition & Conference Malaysia products Exhibition & Conference Malaysia\n\n2010 (IGEM). At this event, a keynote 2010 (IGEM). At this event, a keynote speech was given by Chairman Teisuke speech was given by Chairman Teisuke Kitayama, and SMBC and Sumitomo Mitsui Kitayama, and SMBC and Sumitomo Mitsui Finance & Leasing opened booths. Finance & Leasing opened booths. 
The The exhibition, visited on successive days exhibition, visited on successive days by by Malaysia Malaysia's King, prime minister, some of s K ing, prime minister, some of the regional Kings of Malaysia, t he regional Kings of Malaysia, and and cabinet ministers, raised awareness cabinet ministers, raised awareness of of environmental businesses in the nation. environmental businesses in the nation. At the same time, in April 2011, the bank At the same time, in April 2011, the bank's s Malaysia unit Sumitomo Mitsui Banking Malaysia unit Sumitomo Mitsui Banking Corporation Malaysia Berhad began Corporation Malaysia Berhad began operations. This unit is broadening support operations. This unit is broadening support measures to contribute to the development measures to contribute to the development of environmental businesses in Malaysia. of environmental businesses in Malaysia. Meanwhile, in August 2010, the Japan Meanwhile, in August 2010, the Japan\n\n<!-- image -->\n\nResearch Institute, SMBC and a number of Research Institute, SMBC and a number of other companies publicly recruited by Japan other companies publicly recruited by Japan's s New Energy and Industrial Technology New Energy and Industrial Technology Development Organization (NEDO) were Development Organization (NEDO) were jointly commissioned to carry out basic jointly commissioned to carry out basic research into Malaysia research into Malaysia's Green Township s Green Township concept, a national town-planning project concept, a national town-planning project backed by NEDO. 
backed by NEDO.\n\nLooking ahead, SMBC plans to jointly Looking ahead, SMBC plans to jointly compile an action plan with the Malaysian compile an action plan with the Malaysian government and related enterprises for government and related enterprises for establishment of 'green townships' based establishment of 'green townships' based on the cities Putrajaya and Cyberjaya Prime on the cities Putrajaya and Cyberjaya Prime Minister Najib Razak is promoting. It also Minister Najib Razak is promoting. It also plans to propose specific projects in the plans to propose specific projects in the concept. concept.\n\n<!-- image -->", - "page_start": 12, - "page_end": 12, - "source_file": "NYSE_SMFG_2011.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_CHK_2010.pdf", - "query": "Does Chesapeake Energy have a project to reduce excessive water use?", - "target_page": 28, - "target_passage": "Created to meet the challenge of reducing our water usage, Chesapeake’s Aqua Renew® program uses state-of-the-art technology to recycle pro- duced water.", - "chunk_present": { - "presence": true, - "index": 0 - } - }, - "top_chunk": [ - { - "text": "## INVESTING IN OUR WORLD AND OUR PEOPLE »\n\nAs we explore for and produce clean, affordable, abundant, American natural gas, we provide an important solution to our nation's energy challenges and its quest for energy independence. With at least a 200year supply of natural gas located right here in the U.S., this versatile fuel can be used to not only heat homes, create electricity and meet America's transportation needs, but also to fuel the country's future by creating jobs and stimulating local and national economies through investment and taxes.\n\n## Environmentally Friendly Operations\n\nAt Chesapeake, we realize that the way a great product is produced is as important as the product itself. 
For example, we have helped pioneer the use of multiwell padsites to drill up to 16 wells from a single location, greatly reducing our land and road use and overall environmental footprint. We use the latest horizontal and directional drilling technology to place wells at a safe distance from homes, schools and businesses. In addition, we build and maintain access roads and work to eliminate soil erosion near our sites, as well as restore local vegetation.\n\nWe implement advanced, modern protective measures known as Best Management Practices (BMPs) to help ensure energy development is conducted in an environmentally responsible manner. Procedures are implemented throughout our operations to protect freshwater aquifers and reduce environmental impacts. BMPs protect wildlife, air quality, water and landscapes as we work to develop vitally needed domestic energy sources.\n\nImplemented throughout the entire life cycle of a well, BMPs can be as simple as strategically placing a berm, or land barrier, on locations to control surface water runoff. Others involve cutting-edge operational technologies such as utilizing the most advanced techniques offered in drilling fluids, well casing and cement design. Regardless of complexity, all BMPs are based on the idea that the environmental footprint of\n\nenergy development should be as small and temporary as possible. These practices are continually evolving and further improving as Chesapeake and the industry develop new innovative techniques and approaches to business.\n\nIn addition to our BMPs, Chesapeake has also initiated several innovative internal programs focused on water recycling and greener hydraulic fracturing processes.\n\n## Aqua Renew ®\n\nCreated to meet the challenge of reducing our water usage, Chesapeake's Aqua Renew ® program uses state-of-the-art technology to recycle pro-\n\nduced water. 
Since the company's preliminary reclamation project in\n\n<!-- image -->\n\n<!-- image -->\n\n2006, our focus on water reuse and conservation has become a companywide endeavor, stretching from the Barnett Shale of North Texas to the Marcellus Shale of northern Pennsylvania.\n\nThe Aqua Renew program has yet to find a limit to how much recycled water could be used without compromising well production. In fact, our Marcellus Shale operations are treating and recycling virtually 100% of produced water (more than 10 million gallons per month) for reuse in our hydraulic fracturing operations. Properly conducted modern fracking is a highly engineered, controlled, sophisticated and safe procedure.\n\nWith such large volumes of recycled water, the company is seeing more than just environmental advantages. We estimate that this\n\nGreen operations - Chesapeake's Best Management Practices ensure our operations are as environmentally friendly as possible, while protecting our employees, neighbors and the areas where we operate.\n\n<!-- image -->", - "page_start": 27, - "page_end": 27, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "wet natural gas and dry natural gas), similar to the components of the Eagle Ford Shale. We have made a large commitment to this play and have acquired approximately 1.2 million net leasehold acres and expect to increase this total to as much as 1.5 million net leasehold acres in the coming months. We are currently using three rigs to evaluate the play and believe our leasehold could support the drilling of up to 12,000 net wells. This is an area where we anticipate bringing in a joint venture partner late in 2011 or early in 2012.\n\n## Our People\n\nGreat assets cannot exist without great people, so we take great pride in hiring, training, motivating, rewarding and retaining what we regard\n\n<!-- image -->\n\nas the best employees in the industry. 
From our beginning 22 years ago with 10 employees in Oklahoma City to employing more than 10,000 people across 15 states today, Chesapeake has always focused on building first-class human resources within a distinctive corporate culture. Talk to Chesapeake employees and you will note genuine pride and great enthusiasm about the company and the critical role that we play in delivering increasing quantities of clean and affordable American natural gas and valuable and reliable liquids to energy consumers across the country.\n\nChesapeake employees are distinctive in other ways as well. They are much younger than the industry average, with half of our almost 4,000 Oklahoma City-based headquarters employees 33 years old or younger. Their enthusiasm and willingness to learn create an\n\natmosphere of vitality and energy at Chesapeake, important ingredients of our distinctive culture. These attributes, along with a vibrant and attractive corporate headquarters campus, low levels of bureaucracy, great assets and a well-executed corporate strategy combine to create our culture of success and innovation.\n\nThis has generated extremely positive external feedback as Chesapeake was recently recognized for the fourth consecutive year as one of the FORTUNE 100 Best Companies to Work For ®(3) in the U.S. In fact, we moved up to #32 overall and #1 in our industry - we are very proud of having created and sustained what is now considered the best place to work in all of the U.S. 
energy production industry.\n\nIn addition, we were honored in December 2010 at the 12th Annual Platts Global Energy Awards as finalists for CEO of the Year, Community\n\nFrom our beginning 22 years ago with 10 employees in Oklahoma City to employing more than 10,000 people across 15 states today, Chesapeake has always focused on building first-class human resources within a distinctive corporate culture.\n\n<< A Chesapeake rig drills in the Marcellus Shale, where the company is the leading leasehold owner, largest producer and most active driller.\n\nDevelopment Program of the Year, Deal of the Year, Energy Producer of the Year and the Industry Leadership Award. Chesapeake was one of only two companies selected as a finalist in five or more categories. The company was also honored in 2010 with a Certificate of Recognition for our military reserve recruiting efforts, named a 2010 Best Diversity Company by Engineering & Information Technology Magazine and recognized for Best Investor Relations in Energy Sector and Best Investor Relations Website at the 2010 IR Magazine U.S. Awards.\n\n## Recent Events and a Better Way Forward\n\nYou may be aware that I have been outspoken in attempting to persuade our country's political leadership to recognize that the discovery of vast resources of unconventional natural gas and oil in the U.S. is a complete game changer for our country from an economic, national security and environmental perspective. After two years of my best efforts and the efforts of many others in the industry, most notably T. Boone Pickens,", - "page_start": 13, - "page_end": 13, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "We also hire locally whenever possible to help stimulate the local economy, and we provide training when the local work force isn't yet qualified for the jobs we have open. For example, when Chesapeake began operating in the Marcellus Shale of West Virginia and Pennsylvania, finding experienced rig workers was a challenge. 
To meet that need, Chesapeake's wholly owned subsidiary, Nomac Drilling, built the 40,000-square-foot Eastern Training Center and Housing Facility in Bradford County, near Sayre, Pennsylvania. The campus opened in 2010 and serves as a housing facility and training ground for 266 workers at a time. Nomac and Chesapeake host regular job fairs in the region and the lines of interested candidates often extend out the door.\n\n## Educational Impact\n\nWe are also proud to help prepare tomorrow's leaders today. In 2010 Chesapeake supported universities, schools, academic chairs, scholarships and other educational programs with contributions totaling $5.4 million.\n\nInvesting in programs that promote technology and innovation is a key to our country's success. That's why we gave $1.0 million to establish the Chesapeake Energy dormitory for students at the Oklahoma School for Science and Mathematics (OSSM), a public, tuition-free, residential high school located in Oklahoma City for juniors and seniors with exceptional abilities. The extremely competitive school is helping train the next generation of scientists and mathematicians.\n\nWe also established the Chesapeake Energy Presidential Scholars Program at the Oklahoma City University Meinders School of Business, making a $5.0 million commitment to be distributed over the next five years. The Chesapeake Scholars Program will provide up to $25,000 per year in tuition", - "page_start": 25, - "page_end": 25, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "to selected students pursuing careers in finance, economics, accounting, marketing, business administration, computer science and information technology. 
In addition, scholars will take part in a Chesapeake Presidential Leadership Course facilitated by faculty members in coordination with designated Chesapeake leadership coaches, including a Chesapeake senior vice president and OCU alumni.\n\nIn 2007 Chesapeake launched a scholarship program in Texas with an initial $1.25 million contribution, challenging the cities of Fort Worth and Dallas to match its gift within a year. The cities responded and matched the gift, so Chesapeake in 2008 added another $1.25 million to the fund, bringing the total to $3.75 million. The Chesapeake Scholarship Fund currently funds the cost of higher education for 48 minority students. The fund provides each student $20,000 a year for up to four years at the school of their choice. To date more than $1.0 million has been distributed to deserving local students.\n\nTo help ensure the training of qualified geologists, engineers, landmen and energy lawyers in the next generation, we award scholarships to students pursuing energy-related degrees. We also help mentor them through Chesapeake's Peak Program. Junior- and senior-level scholarship recipients are paired with Chesapeake employee mentors who help develop students' knowledge and provide career advice. There are currently 25 mentors and 40 scholarship recipients participating in the Peak Program.\n\nOur recruiting team also initiated a strategic military recruitment effort during the past two years to hire former military personnel to work in a variety of leadership and crew positions. This effort earned Chesapeake an honor from G.I. JOBS magazine when we were named a 2011 Top 100 Military-Friendly Employer. 
Chesapeake currently employs 37 men and women who formerly served as junior military officers and more than 100 former servicemen and servicewomen who joined the company through a program called Troops 2 Roughnecks.\n\nIn addition to our specific scholarship programs, one-time educational donations and recruitment efforts, in 2010 we gave more than $1.8 million to fund higher education for nearly 400 other students in 12 states through our Chesapeake Scholars program. Chesapeake's scholarships help recruit the best and brightest students and provide educational opportunities in communities where we operate. In Oklahoma City, more than 400 employees volunteer for up to an hour a week on company time at four local public schools. Chesapeake's program has grown to become the largest corporate mentoring program in Oklahoma.\n\n## Community Impact\n\nChesapeake employees have been enriching their hometowns as volunteers for many years. We formalized those efforts in 2009 by establishing an official employee volunteer program, the H.E.L.P. (Helping Energize Local Progress) Initiative, wherein employees are invited to volunteer each month for a variety of organizations from food pantries to animal shelters. Through that program, employees donated more than 26,000 hours to their communities in 2009.\n\nIn the summer of 2010, Chesapeake took the H.E.L.P. Initiative to a higher level through the launch of Operation Blue. From Memorial Day through Labor Day, each employee was given four hours of company time to complete the volunteer project of their choice. Our employees eagerly accepted the challenge, and in three months more than 4,900 employees donated 30,900 hours of service to 519 organizations in more than 96 communities across the country. 
Operation Blue is now an annual\n\nvolunteer program in which employees roll up their sleeves in the communities they call home.", - "page_start": 26, - "page_end": 26, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "Jeff Fisher Senior Vice President - Production\n\n<!-- image -->\n\n## What advantages does CHK's unique vertical integration strategy provide?\n\nChesapeake has built a large inventory of low-risk natural gas and liquids-rich plays that we plan to develop aggressively over the next two decades. As a result, we know that our company will consistently utilize a tremendous (and growing) amount of oilfield services for this resource development. This high level of planned drilling activity will create value for the provider of oilfield services, and Chesapeake's strategy is to capture a portion of this value for our shareholders rather than transfer it to third-party vendors whose interests and investments are not always aligned with ours. To date, Chesapeake has invested in drilling rigs, rental tools, water management equipment, trucking, compression equipment, midstream services, and most recently pressure pumping and fracture stimulation equipment. Chesapeake's activities require a high level of planning and project coordination that is best accomplished through vertical integration and ownership of the oilfield services we utilize. This approach creates a multitude of cost savings, an alignment of interests, operational synergies, greater capacity of equipment, increased safety and better coordinated logistics. In addition, Chesapeake's control of a large portion of the oilfield service equipment it utilizes provides a unique advantage to control the timing of leasehold development. Simply put, faster development of resources maximizes the present value of leasehold. 
This has been a key advantage for\n\nChesapeake over the past three years as the company has monetized leasehold investments at premium values through our joint ventures.\n\n## Will U.S. natural gas prices reconnect with world natural gas prices?\n\nNatural gas is a premium product and a cleaner-burning fuel than coal or oil-related products, including gasoline, diesel and heating oil. Despite this fact, over the past two years natural gas has received a low price in the U.S. market relative to coal and oil-related products, primarily as a result of a temporary surplus of production. This surplus has been principally caused by high levels of drilling activity as producers focused on holding by produc tion (HBP) leasehold in new highly productive, low cost natural gas shale plays. In essence, producers reinvented U.S. supply ahead of reinventing of U.S. demand. We believe HBP-incentivized drilling on natural gas plays will largely come to an end in 2012, and U.S. demand will soon also be reinvented to allow U.S. natural gas prices to reconnect to price parity with world natural gas prices that have risen to more than double U.S. natural gas prices.\n\nThis surge in world natural gas prices has been in response to $100+ oil prices and surging global liquefied natural gas (LNG) demand. In our view, the arbitrage in value between competing fuels is simply too wide. Capital and ideas will flow toward projects that make the most of this price disparity. Chesapeake and other companies are working to create the ability to export natural gas from the U.S. Gulf Coast and other regions in the form of LNG to premium Pacific Rim, European and South American markets, perhaps as soon as 2015. This initiative will also be aided by the widening of the Panama Canal to accommodate large LNG vessels. 
Furthermore, we believe that the\n\nJeff Mobley Senior Vice President -\n\n<!-- image -->\n\nInvestor Relations and Research", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "Jeff Mobley Senior Vice President -\n\n<!-- image -->\n\nInvestor Relations and Research\n\ncurrent price disparity between natural gas and oil will increasingly lead to greater use of natural gas in the U.S. transportation system. Whether it be compressed natural gas (CNG) for medium and light-duty vehicles, LNG for heavy-duty vehicles or the commercialization of gas-to-liquids (GTL) natural gas refineries that supplement the U.S. liquid fuel supply stream, we believe that the marketplace will increasingly utilize and embrace natural gas. Chesapeake is working with industry, public policymakers and potential partners on each of these demand reinvention opportunities. Natural gas is clean, affordable, abundant and American. Why shouldn't it trade at a BTU premium in the years ahead?\n\nNick Dell'Osso\n\n<!-- image -->\n\nExecutive Vice President and Chief Financial Officer\n\n## Why is an investment grade rating on its debt securities important to CHK?\n\nWe believe that Chesapeake will benefit in multiple ways from an investment grade rating on our debt securities, which we hope to achieve in 2012 or 2013. First, a higher rating would obviously lower the company's borrowing costs over time. In addition, other less easily quantifiable benefits will also accrue to Chesapeake. Higher debt ratings would result in lower costs on long-term firm transportation contracts that we enter into in order to market our natural gas and oil production as well as facilitate our ability to enter into long-term contracts to sell our natural gas production to international buyers in the form of LNG. 
An improved rating will also enhance Chesapeake's ability to further attract world-class energy companies to participate in our joint venture projects, which profitably monetize a portion of our leasehold investments and also accelerate the development of our resource base. Finally, and perhaps most importantly, we believe that reduced financial leverage and an invest ment grade rating will lead to a higher stock price and provide further interest from worldwide equity investors.", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "## LIQUIDS-RICH AREAS\n\n<!-- image -->\n\nAnadarko Basin The Anadarko Basin is home to four of Chesapeake's liquids-rich plays, which we anticipate will become significant contributors to our growth in the years ahead. Chesapeake was one of the first to utilize modern horizontal drilling methods and has assembled an unrivaled leasehold position in numerous horizontal liquids-rich plays in the basin. Chesapeake will continue drilling with a focus on the Granite Wash, where rates of return are the highest in our company, and with an increasing focus on the Cleveland, Tonkawa and Mississippian liquids-rich unconventional plays. We estimate we could drill up to 11,400 net wells on our Anadarko Basin acreage in the future and plan to utilize an average of 31 operated rigs in 2011 to further develop our current 1.7 million net leasehold acres. 5\n\n<!-- image -->\n\n<!-- image -->\n\nEagle Ford Shale As part of a growing emphasis on increasing oil and natural gas liquids production, Chesapeake has built the industry's second-largest leasehold position in the Eagle Ford Shale play in South Texas. In 2010 Chesapeake increased its leasehold from 80,000 net acres at the beginning of the year to more than 600,000 net acres. 
In November 2010, Chesapeake completed a $2.2 billion Eagle Ford Shale joint venture agreement with Beijing-based CNOOC Limited (NYSE:CEO), whereby CNOOC acquired a 33.3% interest in 600,000 net leasehold acres in the Eagle Ford Shale. CNOOC paid Chesapeake approximately $1.12 billion in cash at closing and will pay 75% of Chesapeake's share of drilling and completion expenditures until the $1.08 billion carry obligation has been funded, which Chesapeake expects to occur by year-end 2012. Our focus has been in the wet gas and oil prone portions of the play. We estimate we could drill up to 5,500 net wells on our Eagle Ford acreage and plan to utilize an average of 23 operated rigs in 2011 to further develop our leasehold position in the Eagle Ford Shale. In addition, we believe that the Pearsall Shale should be prospective for natural gas underneath approximately 75% of our Eagle Ford leasehold. 6\n\n<!-- image -->\n\nPermian Basin Chesapeake has built a strong position of approx imately 1.2 million net leasehold acres in the Permian Basin including 560,000 net leasehold acres in the Bone Spring, Avalon, Wolfcamp and Wolfberry unconventional liquids plays. This area has the potential to deliver significant upside as we move toward increasing our oil production substantially in the years ahead. We have developed multiple new horizontal oil projects in this area, where we plan to utilize an average of approximately eight operated rigs in 2011 to further develop our leasehold in the Permian and Delaware basins and estimate we could drill up to 4,400 net wells. 7\n\n<!-- image -->\n\n<!-- image -->\n\nRockies Chesapeake is the second-largest leasehold owner in the Niobrara Shale, Frontier and Codell plays in the Powder River and Denver Julesburg (DJ) basins of Wyoming and Colorado. 
In February 2011, Chesapeake completed a $1.3 billion joint venture agreement with CNOOC, whereby CNOOC acquired a 33.3% interest in Chesapeake's approximately 800,000 net leasehold acres in the Powder River and DJ basins. CNOOC paid Chesapeake approximately $570 million in cash at closing and will pay an additional $697 million in carries by funding 66.7% of Chesapeake's 8\n\nNote: Figures do not add to company totals.\n\n - * Compared to last year\n - ** % of company total\n - *** Bossier Shale acreage overlaps with Haynesville Shale acreage\n\nNM Not meaningful\n\nshare of drilling and completion expenditures, which Chesapeake expects to occur by year-end 2014. We plan to utilize an average of approximately 11 rigs in 2011 to develop our current 535,000 net leasehold acres with our partner and estimate that we could drill up to 7,600 net wells.\n\n<!-- image -->\n\n## 2010 Total Production:\n\n145 bcfe, +4%, 14%", - "page_start": 20, - "page_end": 20, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "## INVESTING IN OUR COMMUNITIES »\n\nChesapeake's sense of civic commitment provides a bountiful harvest of benefits to cities large and small. We partner with groups and organizations across all of our operating areas to improve the communities our employees, contractors, vendors, land and mineral owners call home. We believe the success of our business depends on the strength, goodwill and vitality of those communities. Most importantly, we believe it is the responsibility of every successful business to share success with its neighbors.\n\nIn 2010 we gave more than $25 million to charitable organizations and projects across our operating areas, primarily focusing on community development, education, health and medical and social services.\n\n## Economic Impact\n\nWhile much of the U.S. 
is still struggling to recover from the economic recession, the positive impact of natural gas and oil operations has provided a valuable economic recovery stimulus for states that are home to exploration and development activities. As the nation's second-largest producer of natural gas, a Top 15 producer of liquids and most active driller of new wells, Chesapeake's arrival in a new play stimulates economic activity, augments personal income through jobs and royalty payments, generates substantial tax revenue and sustains communities throughout its operating areas.\n\nIn addition to the general economic impact of our activities on local economies, the company's tax contributions are substantial. In 2010 Chesapeake paid approximately $675 million in taxes, including ad valorem, severance, sales, employer, and corporate income and franchise taxes. These taxes pay for ongoing government services and also build and maintain schools, recreational facilities, and parks and roads - at a time when state and local governments are still feeling the pinch of recession. We are proud to support America's economy with our growth while also helping to protect the environment through the greater use of clean-burning natural gas and reducing the country's dependence on expensive foreign oil.\n\nChesapeake also makes contributions that help improve lives and economies in cities where we operate: $25 million in 2010 alone. For example, this past year we donated $200,000 to establish the Chesapeake Environmental and Recycling Center at Goodwill Industries of Central Oklahoma. 
The center will provide an additional 80 jobs to disabled Oklahomans, as well as help Goodwill recycle 10 million pounds a year, which\n\n## Chesapeake's $25 million of charitable giving in 2010\n\n - Community Development\n\nEducation\n\n - Health and Medical\n - Social Services\n\n<!-- image -->\n\n<!-- image -->\n\n<!-- image -->\n\nequates to one-third of the goods that otherwise would have been destined for Oklahoma City-area landfills. In West Virginia, we helped fund construction of the Morgantown Market\n\nEquipping the next generation - West Virginia students hold their new laptops from Chesapeake as part of the company's Discovering Tomorrow's Leaders program.\n\n<!-- image -->\n\nPlace, a permanent site for the city's farmers' market, creating more business opportunities for local farmers.\n\nChesapeake also supports local chambers of commerce and city councils in all of its operating areas. In the Haynesville Shale last year, we awarded grants to the Shelby County, Sabine Parish and Coushatta-Red River chambers of commerce to help fund tourism, business communications and chamber events. In Texas, we assisted more than 250 civic, professional and community service organizations throughout Johnson, Tarrant and western Dallas counties, and sponsored memberships in 35 local Texas chambers of commerce. 
By helping local chambers and businesses grow and thrive, we are creating stronger economies.", - "page_start": 25, - "page_end": 25, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "## DEAR FELLOW SHAREHOLDERS »\n\n2010 was a very important year of transition and achievement for Chesapeake, a year in which we initiated three very important strategic shifts: from asset gathering to asset harvesting, from focusing exclusively on natural gas to a balanced focus on natural gas and liquids and from having a leveraged balance sheet to one worthy of an investment grade rating.\n\n<!-- image -->\n\nHome to three distinct forms of hydrocarbons: dry natural gas, natural gas liquids and oil, the Eagle Ford Shale in South Texas epitomizes Chesapeake's shift to a balanced focus on natural gas and liquids.\n\n2010 also marked a truly transformative year for our industry. We and a handful of our peers enhanced our capabilities to find and produce significant new resources of oil and natural gas liquids (collectively, 'liquids') in unconventional formations. Chesapeake and these other companies combined creativity, innovation and technology to reinvent the way that our industry explores for and produces natural gas and liquids.\n\nFurthermore, 2010 was the year when global energy companies more fully recognized the importance of these developments and the tremendous opportunities that have emerged in the U.S. Through a wide variety of transactions, including several led by Chesapeake, the global energy industry made it clear that the assets owned by Chesapeake and some of its peers are the most attractive in the world. This realization has already increased the value of highquality unconventional assets in the U.S. and, in time, should lead to higher\n\nstock prices for the leading U.S. onshore E&P companies, especially Chesapeake. 
Simply put, the global energy industry is beating a path to our door, and we are welcoming it with open arms.\n\nBefore we move ahead, I want to emphasize that even though 2010 was a year of transition and achievement, our stock price was essentially unchanged. Nevertheless, it was still a very strong year for the company operationally and financially. Here are the year's highlights for your review:\n\n - >> Average daily natural gas and oil production increased 14% from 2.5 billion cubic feet of natural gas equivalent (bcfe) in 2009 to 2.8 bcfe in 2010;\n - >> Proved natural gas and oil reserves increased 20% in 2010, from 14.3 trillion cubic feet of natural gas equivalent (tcfe) to 17.1 tcfe;\n - >> Reserve replacement for 2010 reached 375% at a drilling, completion and net acquisition cost of only $0.76 per thousand cubic feet of natural gas equivalent (mcfe) (1) ;\n - >> Realized hedging gains were $2.1 billion;\n - >> Revenues increased 22% to $9.4 billion;\n - >> Adjusted ebitda (2) increased 15% to $5.1 billion;\n - >> Operating cash flow (2) increased 5% to $4.5 billion; and\n - >> Adjusted earnings per fully diluted share (2) increased 16% to $2.95.", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "for a new energy future with greater natural gas usage and increased domestic oil production as two of its primary attributes, it is encouraging to see our political leadership finally grasp that natural gas stands alone as the only affordable, scalable and immediately available alternative to foreign oil and that U.S. oil production can be increased significantly in the years ahead.\n\nThe events of the past few months have unmistakably driven home the fact that it is insanity to rely on the Middle East to provide our economy's lifeline of oil. 
This should be especially obvious when one realizes that during the next 10 years, America will likely export at least another $4 trillion in national wealth to oil exporters around the world. Clearly, our country must demand from its leaders a new and more sustainable energy future.\n\n<!-- image -->\n\nAdvancing technology for cleaner operations: solar panels at a West Texas well power telemetry systems that provide pumpers with real-time information on oil and water tank levels to alarm them when levels near capacity, preventing tank spills.\n\nThe good news, however, is that America can now secure a new energy future thanks to Chesapeake and a handful of other leading U.S. E&P companies that have reinvented the process of finding natural gas and oil during the past five years. In doing so, we have discovered twice the resources of natural gas in the U.S. that Saudi Arabia possesses in oil. Furthermore, these same few companies that led the unconventional natural gas revolution have in just the past two years also reinvented the way in which we can find large new oil resources onshore in the U.S. In fact, I believe the U.S. can possibly increase its production of oil from the current 5.8 million barrels per day by 30-50% during the next 5-10 years, thereby potentially reaching the President's 2025 goal of reducing foreign oil imports by 33%, 5-10 years earlier than hoped.\n\nThe combination of these vast new discoveries of unconventional natural gas and liquids provides America with a unique future pathway toward greater energy independence, an industrial renaissance, economic rejuvenation and greater national security. I remain fully confident that the marketplace understands this and that over time the U.S. 
will more fully embrace and utilize clean, affordable, abundant American natural gas and increased domestic oil production as the best alternatives to burning environmentally challenged coal and expensive and dangerous foreign oil.\n\nThere is now a clear road ahead toward a more sustainable, affordable, dynamic and independent future if America embraces the remarkable gift of energy abundance that Chesapeake has helped discover in the U.S. You have my commitment, and the commitment of more than\n\nThe combination of these vast new discoveries of unconventional natural gas and liquids provides America with a unique future pathway toward greater energy independence, an industrial renaissance, economic rejuvenation and greater national security.\n\n10,000 other Chesapeake employees, that every day we are working hard to create shareholder value and a better future for our communities, our states and our country through the continued discovery and development of unconventional natural gas and liquids.\n\nBest regards,\n\n<!-- image -->\n\nAubrey K. McClendon\n\nChairman and Chief Executive Officer April 15, 2011", - "page_start": 16, - "page_end": 16, - "source_file": "NYSE_CHK_2010.pdf" - } - ] - }, - { - "references": { - "source_file": "NYSE_CHK_2010.pdf", - "query": "Has the CEO of Chesapeake Energy met with the US President about America's energy production?", - "target_page": 16, - "target_passage": "I am pleased to report that we have apparently finally convinced President Barack Obama and Congressional leadership to recognize that the energy path America is on today is completely unsustainable.", - "chunk_present": { - "presence": false, - "index": null - } - }, - "top_chunk": [ - { - "text": "wet natural gas and dry natural gas), similar to the components of the Eagle Ford Shale. 
We have made a large commitment to this play and have acquired approximately 1.2 million net leasehold acres and expect to increase this total to as much as 1.5 million net leasehold acres in the coming months. We are currently using three rigs to evaluate the play and believe our leasehold could support the drilling of up to 12,000 net wells. This is an area where we anticipate bringing in a joint venture partner late in 2011 or early in 2012.\n\n## Our People\n\nGreat assets cannot exist without great people, so we take great pride in hiring, training, motivating, rewarding and retaining what we regard\n\n<!-- image -->\n\nas the best employees in the industry. From our beginning 22 years ago with 10 employees in Oklahoma City to employing more than 10,000 people across 15 states today, Chesapeake has always focused on building first-class human resources within a distinctive corporate culture. Talk to Chesapeake employees and you will note genuine pride and great enthusiasm about the company and the critical role that we play in delivering increasing quantities of clean and affordable American natural gas and valuable and reliable liquids to energy consumers across the country.\n\nChesapeake employees are distinctive in other ways as well. They are much younger than the industry average, with half of our almost 4,000 Oklahoma City-based headquarters employees 33 years old or younger. Their enthusiasm and willingness to learn create an\n\natmosphere of vitality and energy at Chesapeake, important ingredients of our distinctive culture. These attributes, along with a vibrant and attractive corporate headquarters campus, low levels of bureaucracy, great assets and a well-executed corporate strategy combine to create our culture of success and innovation.\n\nThis has generated extremely positive external feedback as Chesapeake was recently recognized for the fourth consecutive year as one of the FORTUNE 100 Best Companies to Work For ®(3) in the U.S. 
In fact, we moved up to #32 overall and #1 in our industry - we are very proud of having created and sustained what is now considered the best place to work in all of the U.S. energy production industry.\n\nIn addition, we were honored in December 2010 at the 12th Annual Platts Global Energy Awards as finalists for CEO of the Year, Community\n\nFrom our beginning 22 years ago with 10 employees in Oklahoma City to employing more than 10,000 people across 15 states today, Chesapeake has always focused on building first-class human resources within a distinctive corporate culture.\n\n<< A Chesapeake rig drills in the Marcellus Shale, where the company is the leading leasehold owner, largest producer and most active driller.\n\nDevelopment Program of the Year, Deal of the Year, Energy Producer of the Year and the Industry Leadership Award. Chesapeake was one of only two companies selected as a finalist in five or more categories. The company was also honored in 2010 with a Certificate of Recognition for our military reserve recruiting efforts, named a 2010 Best Diversity Company by Engineering & Information Technology Magazine and recognized for Best Investor Relations in Energy Sector and Best Investor Relations Website at the 2010 IR Magazine U.S. Awards.\n\n## Recent Events and a Better Way Forward\n\nYou may be aware that I have been outspoken in attempting to persuade our country's political leadership to recognize that the discovery of vast resources of unconventional natural gas and oil in the U.S. is a complete game changer for our country from an economic, national security and environmental perspective. After two years of my best efforts and the efforts of many others in the industry, most notably T. 
Boone Pickens,", - "page_start": 13, - "page_end": 13, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "for a new energy future with greater natural gas usage and increased domestic oil production as two of its primary attributes, it is encouraging to see our political leadership finally grasp that natural gas stands alone as the only affordable, scalable and immediately available alternative to foreign oil and that U.S. oil production can be increased significantly in the years ahead.\n\nThe events of the past few months have unmistakably driven home the fact that it is insanity to rely on the Middle East to provide our economy's lifeline of oil. This should be especially obvious when one realizes that during the next 10 years, America will likely export at least another $4 trillion in national wealth to oil exporters around the world. Clearly, our country must demand from its leaders a new and more sustainable energy future.\n\n<!-- image -->\n\nAdvancing technology for cleaner operations: solar panels at a West Texas well power telemetry systems that provide pumpers with real-time information on oil and water tank levels to alarm them when levels near capacity, preventing tank spills.\n\nThe good news, however, is that America can now secure a new energy future thanks to Chesapeake and a handful of other leading U.S. E&P companies that have reinvented the process of finding natural gas and oil during the past five years. In doing so, we have discovered twice the resources of natural gas in the U.S. that Saudi Arabia possesses in oil. Furthermore, these same few companies that led the unconventional natural gas revolution have in just the past two years also reinvented the way in which we can find large new oil resources onshore in the U.S. In fact, I believe the U.S. 
can possibly increase its production of oil from the current 5.8 million barrels per day by 30-50% during the next 5-10 years, thereby potentially reaching the President's 2025 goal of reducing foreign oil imports by 33%, 5-10 years earlier than hoped.\n\nThe combination of these vast new discoveries of unconventional natural gas and liquids provides America with a unique future pathway toward greater energy independence, an industrial renaissance, economic rejuvenation and greater national security. I remain fully confident that the marketplace understands this and that over time the U.S. will more fully embrace and utilize clean, affordable, abundant American natural gas and increased domestic oil production as the best alternatives to burning environmentally challenged coal and expensive and dangerous foreign oil.\n\nThere is now a clear road ahead toward a more sustainable, affordable, dynamic and independent future if America embraces the remarkable gift of energy abundance that Chesapeake has helped discover in the U.S. You have my commitment, and the commitment of more than\n\nThe combination of these vast new discoveries of unconventional natural gas and liquids provides America with a unique future pathway toward greater energy independence, an industrial renaissance, economic rejuvenation and greater national security.\n\n10,000 other Chesapeake employees, that every day we are working hard to create shareholder value and a better future for our communities, our states and our country through the continued discovery and development of unconventional natural gas and liquids.\n\nBest regards,\n\n<!-- image -->\n\nAubrey K. 
McClendon\n\nChairman and Chief Executive Officer April 15, 2011", - "page_start": 16, - "page_end": 16, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "Jeff Mobley Senior Vice President -\n\n<!-- image -->\n\nInvestor Relations and Research\n\ncurrent price disparity between natural gas and oil will increasingly lead to greater use of natural gas in the U.S. transportation system. Whether it be compressed natural gas (CNG) for medium and light-duty vehicles, LNG for heavy-duty vehicles or the commercialization of gas-to-liquids (GTL) natural gas refineries that supplement the U.S. liquid fuel supply stream, we believe that the marketplace will increasingly utilize and embrace natural gas. Chesapeake is working with industry, public policymakers and potential partners on each of these demand reinvention opportunities. Natural gas is clean, affordable, abundant and American. Why shouldn't it trade at a BTU premium in the years ahead?\n\nNick Dell'Osso\n\n<!-- image -->\n\nExecutive Vice President and Chief Financial Officer\n\n## Why is an investment grade rating on its debt securities important to CHK?\n\nWe believe that Chesapeake will benefit in multiple ways from an investment grade rating on our debt securities, which we hope to achieve in 2012 or 2013. First, a higher rating would obviously lower the company's borrowing costs over time. In addition, other less easily quantifiable benefits will also accrue to Chesapeake. Higher debt ratings would result in lower costs on long-term firm transportation contracts that we enter into in order to market our natural gas and oil production as well as facilitate our ability to enter into long-term contracts to sell our natural gas production to international buyers in the form of LNG. 
An improved rating will also enhance Chesapeake's ability to further attract world-class energy companies to participate in our joint venture projects, which profitably monetize a portion of our leasehold investments and also accelerate the development of our resource base. Finally, and perhaps most importantly, we believe that reduced financial leverage and an invest ment grade rating will lead to a higher stock price and provide further interest from worldwide equity investors.", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "We also hire locally whenever possible to help stimulate the local economy, and we provide training when the local work force isn't yet qualified for the jobs we have open. For example, when Chesapeake began operating in the Marcellus Shale of West Virginia and Pennsylvania, finding experienced rig workers was a challenge. To meet that need, Chesapeake's wholly owned subsidiary, Nomac Drilling, built the 40,000-square-foot Eastern Training Center and Housing Facility in Bradford County, near Sayre, Pennsylvania. The campus opened in 2010 and serves as a housing facility and training ground for 266 workers at a time. Nomac and Chesapeake host regular job fairs in the region and the lines of interested candidates often extend out the door.\n\n## Educational Impact\n\nWe are also proud to help prepare tomorrow's leaders today. In 2010 Chesapeake supported universities, schools, academic chairs, scholarships and other educational programs with contributions totaling $5.4 million.\n\nInvesting in programs that promote technology and innovation is a key to our country's success. That's why we gave $1.0 million to establish the Chesapeake Energy dormitory for students at the Oklahoma School for Science and Mathematics (OSSM), a public, tuition-free, residential high school located in Oklahoma City for juniors and seniors with exceptional abilities. 
The extremely competitive school is helping train the next generation of scientists and mathematicians.\n\nWe also established the Chesapeake Energy Presidential Scholars Program at the Oklahoma City University Meinders School of Business, making a $5.0 million commitment to be distributed over the next five years. The Chesapeake Scholars Program will provide up to $25,000 per year in tuition", - "page_start": 25, - "page_end": 25, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "Log in\n\n<!-- image -->\n\nHome / Arts and Entertainment / New Artificial Intelligence Summit Series Begins With Energy\n\n<!-- image -->\n\nARTS AND ENTERTAINMENT\n\n## New Artificial Intelligence Summit Series Begins With Energy\n\n07/31/2024\n\n(AI) continues to transform the United States and the world. To promote and inform rapid advancements in AI and maintain America's global competitiveness, the Special Competitive Studies Project (SCSP), a nonprofit and nonpartisan initiative with a goal of making recommendations to strengthen America's long-term competitiveness in AI, announces the AI+ Summit Series.\n\nThe series kicks off with the topic of energy. The AI + Energy Summit, scheduled for September 26, 2024, in Washington, D.C., will bring together policy makers, energy industry leaders, top government and academic energy researchers, and technologists to address the challenges of AI's energy consumption and develop solutions for a resilient and abundant energy future. The event also aims to address the implications of AI and energy for national security and promote partnerships between AI and energy stakeholders.\n\nAI and other emerging technologies can help the United States take the lead in energy areas including maximizing energy efficiencies, discovering new materials, and enabling new forms of power generation. AI also has a role to play in overcoming energy challenges. 
The Department of Energy (DOE) already uses AI in several areas including advanced computing, emergency response, environmental modeling, climate forecasting, and materials research.\n\nSCSP's recent 'Action Plan for U.S. Leadership in Next-Generation Energy,' raises many issues related to AI and energy, including recommendations for the government to bring America forward. The AI+ Energy Summit will highlight these and other issues, and promote collaboration to solve problems. The stakes are high; if the U.S. falls short on energy, American adversaries could gain the upper hand in AI leadership, according to SCSP experts.\n\nVisit scsp.ai to learn more about the AI+Energy Summit and the SCSP's Next-Generation Energy Action Plan.\n\nArticle Link\n\nhttps://about.newsusa.com/new-artificial-intelligence-summit-series-begins-with…\n\n## RELATED ARTICLES\n\n<!-- image -->\n\n<!-- image -->\n\nMar 06, 2024\n\nCelebrate St. Patrick's Day with No Booze, Just Pure Irish Fun and Entertainment\n\nMar 06, 2024\n\nExplore Downtown San Pedro with Flair: Ride the Iconic Red Car Trolley for Free\n\nMar 06, 2024\n\nSay Hello to Your Big Break at the Stapleton Library Job Fair in Vocation, Trade, or Civil Service\n\n<!-- image -->\n\n<!-- image -->\n\nFeb 22, 2024\n\nRetrain Your Emotional Brain: A Natural Alternative to Weight Loss Drugs\n\n© Copyright NewsUSA 2025. 
All Rights Reserved.\n\nFeb 21, 2024\n\nSerial Entrepreneur Teaches Us How to Go the Distance in Business and in Life\n\nNEWSUSA\n\nMar 06, 2024\n\nLocal Artists Collaborate for a Unique Fusion of Groove and Collage\n\nFASHION\n\nBUSINESS\n\nINFOGRAPHIC\n\nENVIRONMENT\n\nHEALTH\n\nMONEY\n\nFOOD\n\nTRAVEL\n\nBRIDAL\n\nRECREATION\n\nTECHNOLOGY\n\nHOME\n\nEDUCATION\n\nARTS & ENTERTAINMENT\n\nAUTO\n\nCHILDREN\n\nFITNESS\n\nHOLIDAY\n\nINSURANCE\n\nLAWN & GARDEN\n\nLISTICLE\n\nNUTRITION\n\nPARENTING\n\nPETS\n\nSEASONAL\n\nSENIORS\n\nSPANISH\n\nTIPS AND HOW TO\n\nENTERTAINMENT\n\nCAREER\n\nCOMMUNITY\n\nFAMILY\n\nTIPS\n\nINTERNET\n\nHUMAN\\_INTEREST\n\nBEAUTY\n\nARTS\n\nREALESTATE\n\nSAFETY\n\nMEDICINE\n\nBOOK\\_REVIEW\n\nRECIPE\n\nAFRICAN\\_AMERICANS\n\nHOW\\_TO\n\nBYLINED\\_COLUMN\n\nCHARITY\n\nSPORTS\n\nHOME\\_IMPROVEMENT\n\nTECH\n\nWELLNESS\n\nARTS AND ENTERTAINMENT\n\nFOOD & DRINK\n\nREAL\\_ESTATE\n\nVETERANS\n\nOUTDOORS\n\nREAL ESTATE\n\nHUMAN INTEREST\n\nMONEY & FINANCE\n\nFASHION & BEAUTY\n\nMONEY AND FINANCE\n\nBOOKS & ENTERTAINMENT\n\nBOOKS\n\nARTS & ENTERTAINMENT\n\nCATEGORIES\n\nRECENT POSTS", - "page_start": 0, - "page_end": 0, - "source_file": "news1.pdf" - }, - { - "text": "## DEAR FELLOW SHAREHOLDERS »\n\n2010 was a very important year of transition and achievement for Chesapeake, a year in which we initiated three very important strategic shifts: from asset gathering to asset harvesting, from focusing exclusively on natural gas to a balanced focus on natural gas and liquids and from having a leveraged balance sheet to one worthy of an investment grade rating.\n\n<!-- image -->\n\nHome to three distinct forms of hydrocarbons: dry natural gas, natural gas liquids and oil, the Eagle Ford Shale in South Texas epitomizes Chesapeake's shift to a balanced focus on natural gas and liquids.\n\n2010 also marked a truly transformative year for our industry. 
We and a handful of our peers enhanced our capabilities to find and produce significant new resources of oil and natural gas liquids (collectively, 'liquids') in unconventional formations. Chesapeake and these other companies combined creativity, innovation and technology to reinvent the way that our industry explores for and produces natural gas and liquids.\n\nFurthermore, 2010 was the year when global energy companies more fully recognized the importance of these developments and the tremendous opportunities that have emerged in the U.S. Through a wide variety of transactions, including several led by Chesapeake, the global energy industry made it clear that the assets owned by Chesapeake and some of its peers are the most attractive in the world. This realization has already increased the value of highquality unconventional assets in the U.S. and, in time, should lead to higher\n\nstock prices for the leading U.S. onshore E&P companies, especially Chesapeake. Simply put, the global energy industry is beating a path to our door, and we are welcoming it with open arms.\n\nBefore we move ahead, I want to emphasize that even though 2010 was a year of transition and achievement, our stock price was essentially unchanged. Nevertheless, it was still a very strong year for the company operationally and financially. 
Here are the year's highlights for your review:\n\n - >> Average daily natural gas and oil production increased 14% from 2.5 billion cubic feet of natural gas equivalent (bcfe) in 2009 to 2.8 bcfe in 2010;\n - >> Proved natural gas and oil reserves increased 20% in 2010, from 14.3 trillion cubic feet of natural gas equivalent (tcfe) to 17.1 tcfe;\n - >> Reserve replacement for 2010 reached 375% at a drilling, completion and net acquisition cost of only $0.76 per thousand cubic feet of natural gas equivalent (mcfe) (1) ;\n - >> Realized hedging gains were $2.1 billion;\n - >> Revenues increased 22% to $9.4 billion;\n - >> Adjusted ebitda (2) increased 15% to $5.1 billion;\n - >> Operating cash flow (2) increased 5% to $4.5 billion; and\n - >> Adjusted earnings per fully diluted share (2) increased 16% to $2.95.", - "page_start": 5, - "page_end": 5, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "## Chesapeake Energy Corporation is the second-largest producer of natural gas, a Top 15 producer of oil and natural gas liquids and the most active driller of new wells in the U.S.\n\nHeadquartered in Oklahoma City, the company's operations are focused on discovering and developing unconventional natural gas and oil fields onshore in the U.S. Chesapeake owns leading positions in the Barnett, Haynesville, Bossier, Marcellus and Pearsall natural gas shale plays and in the Granite Wash, Cleveland, Tonkawa, Mississippian, Bone Spring, Avalon, Wolfcamp, Wolfberry, Eagle Ford,\n\n<!-- image -->\n\n## CONTENTS\n\n- 1 Financial Review\n- 4 Letter to Shareholders\n- 16 Operating Areas\n- 20 Investor Q&A\n- 22 Social Responsibility\n- 24 Community Relations\n- 26 Environmental, Health & Safety\n- 28 Board of Directors\n- 28 Governance\n- 29 Officers\n- 30 Employees\n- 45 Form 10-K\n\nInside Back Cover\n\nCorporate Information\n\nNiobrara and Utica unconventional liquids-rich plays. 
The company has also vertically integrated its operations and owns substantial midstream, compression, drilling and oilfield service assets. Chesapeake's stock is listed on the New York Stock Exchange under the symbol CHK. Further information is available at www.chk.com where Chesapeake routinely posts announcements, updates, events, investor information, presentations and press releases.\n\nON THE COVER Moving west, a Chesapeake rig drills toward the Niobrara Shale in the Powder River Basin of southeastern Wyoming, one of several new liquids-rich plays that are enabling the company to increase its profitability and return on capital.", - "page_start": 1, - "page_end": 1, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "to selected students pursuing careers in finance, economics, accounting, marketing, business administration, computer science and information technology. In addition, scholars will take part in a Chesapeake Presidential Leadership Course facilitated by faculty members in coordination with designated Chesapeake leadership coaches, including a Chesapeake senior vice president and OCU alumni.\n\nIn 2007 Chesapeake launched a scholarship program in Texas with an initial $1.25 million contribution, challenging the cities of Fort Worth and Dallas to match its gift within a year. The cities responded and matched the gift, so Chesapeake in 2008 added another $1.25 million to the fund, bringing the total to $3.75 million. The Chesapeake Scholarship Fund currently funds the cost of higher education for 48 minority students. The fund provides each student $20,000 a year for up to four years at the school of their choice. To date more than $1.0 million has been distributed to deserving local students.\n\nTo help ensure the training of qualified geologists, engineers, landmen and energy lawyers in the next generation, we award scholarships to students pursuing energy-related degrees. We also help mentor them through Chesapeake's Peak Program. 
Junior- and senior-level scholarship recipients are paired with Chesapeake employee mentors who help develop students' knowledge and provide career advice. There are currently 25 mentors and 40 scholarship recipients participating in the Peak Program.\n\nOur recruiting team also initiated a strategic military recruitment effort during the past two years to hire former military personnel to work in a variety of leadership and crew positions. This effort earned Chesapeake an honor from G.I. JOBS magazine when we were named a 2011 Top 100 Military-Friendly Employer. Chesapeake currently employs 37 men and women who formerly served as junior military officers and more than 100 former servicemen and servicewomen who joined the company through a program called Troops 2 Roughnecks.\n\nIn addition to our specific scholarship programs, one-time educational donations and recruitment efforts, in 2010 we gave more than $1.8 million to fund higher education for nearly 400 other students in 12 states through our Chesapeake Scholars program. Chesapeake's scholarships help recruit the best and brightest students and provide educational opportunities in communities where we operate. In Oklahoma City, more than 400 employees volunteer for up to an hour a week on company time at four local public schools. Chesapeake's program has grown to become the largest corporate mentoring program in Oklahoma.\n\n## Community Impact\n\nChesapeake employees have been enriching their hometowns as volunteers for many years. We formalized those efforts in 2009 by establishing an official employee volunteer program, the H.E.L.P. (Helping Energize Local Progress) Initiative, wherein employees are invited to volunteer each month for a variety of organizations from food pantries to animal shelters. Through that program, employees donated more than 26,000 hours to their communities in 2009.\n\nIn the summer of 2010, Chesapeake took the H.E.L.P. 
Initiative to a higher level through the launch of Operation Blue. From Memorial Day through Labor Day, each employee was given four hours of company time to complete the volunteer project of their choice. Our employees eagerly accepted the challenge, and in three months more than 4,900 employees donated 30,900 hours of service to 519 organizations in more than 96 communities across the country. Operation Blue is now an annual\n\nvolunteer program in which employees roll up their sleeves in the communities they call home.", - "page_start": 26, - "page_end": 26, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "## Strong Partners\n\nOver the past few years, in addition to gathering the industry's best assets, Chesapeake has also built the industry's finest collection of global energy partners and energy stock investors. We have now entered into transactions with PXP, BP, Statoil, Total, CNOOC and BHP Billiton. Collectively, we have sold these companies certain assets for total consideration of $20.5 billion in the form of cash and drilling and completion carries for which our net cost was only $6.1 billion resulting in overall value creation of $14.4 billion. While these transactions have been very\n\n<!-- image -->\n\nrewarding to our buyers, they have been truly outstanding for Chesapeake, providing us an attractive source of capital, a reduction of risk, a quick recovery of our leasehold investment in new plays and a much greater ability to capture a large resource base with decades of highly profitable drilling opportunities.\n\nIn addition, we are the only U.S. E&P company that has attracted to its stock ownership roster some of the world's leading governmentsponsored investors: Temasek Holdings (Singapore), China Investment Corporation, Korea Investment Corporation and Abu Dhabi Investment Authority. 
Along with our largest shareholder, Memphis, Tennessee-based Southeastern Asset Management (12%), these shareholders are some of the world's largest and most astute investors, who also happen to manage some of the world's largest pools of capital and have a very long-term investment horizon. Their support is an important validation of our strategy.\n\n## Short-Term Pain for Long-Term Gain\n\nDespite this all-star lineup of global partners and investors, some other investors have not yet fully recognized the benefits of our industry leadership in acquiring unconventional natural gas and liquids assets. Whether it was our leveraged balance sheet during recent tough recessionary times, our heavy focus on natural gas during a time of persistent market pessimism about natural gas prices or our large capital investments in undeveloped liquids-rich leasehold undertaken to enable Chesapeake to remain an industry leader in the years ahead, it is clear that we were less popular in the stock market in 2010 than we were in 2009, when our stock price increased by 60%.\n\nThrough a wide variety of transactions, including several led by Chesapeake, the global energy industry made it clear that the assets owned by Chesapeake and some of its peers are the most attractive in the world.\n\n<< Aubrey K. McClendon, Co-Founder, Chairman and Chief Executive Officer\n\nWe anticipated that some market unpopularity in 2010 would likely be the price we would pay as we positioned Chesapeake to be the leader not only in unconventional U.S. natural gas, but also in unconventional U.S. liquids. 
However, now that we have largely completed the investments needed to accomplish this transition to a portfolio balanced with liquids, the rebound in our stock price could be sharp as investors begin to focus more clearly on Chesapeake's three-way transition from an asset gatherer to an asset harvester, from less natural gas exposure to more liquids exposure and from a leveraged balance sheet to one worthy of an investment grade rating.\n\nAccordingly, in early January 2011, we announced our '25/25 Plan,' a two-year plan designed to reduce our long-term debt by 25% while still growing the company's production by 25%. We designed this plan to articulate very clearly the benefits of becoming an asset harvester", - "page_start": 6, - "page_end": 6, - "source_file": "NYSE_CHK_2010.pdf" - }, - { - "text": "Jeff Fisher Senior Vice President - Production\n\n<!-- image -->\n\n## What advantages does CHK's unique vertical integration strategy provide?\n\nChesapeake has built a large inventory of low-risk natural gas and liquids-rich plays that we plan to develop aggressively over the next two decades. As a result, we know that our company will consistently utilize a tremendous (and growing) amount of oilfield services for this resource development. This high level of planned drilling activity will create value for the provider of oilfield services, and Chesapeake's strategy is to capture a portion of this value for our shareholders rather than transfer it to third-party vendors whose interests and investments are not always aligned with ours. To date, Chesapeake has invested in drilling rigs, rental tools, water management equipment, trucking, compression equipment, midstream services, and most recently pressure pumping and fracture stimulation equipment. Chesapeake's activities require a high level of planning and project coordination that is best accomplished through vertical integration and ownership of the oilfield services we utilize. 
This approach creates a multitude of cost savings, an alignment of interests, operational synergies, greater capacity of equipment, increased safety and better coordinated logistics. In addition, Chesapeake's control of a large portion of the oilfield service equipment it utilizes provides a unique advantage to control the timing of leasehold development. Simply put, faster development of resources maximizes the present value of leasehold. This has been a key advantage for Chesapeake over the past three years as the company has monetized leasehold investments at premium values through our joint ventures.\n\n## Will U.S. natural gas prices reconnect with world natural gas prices?\n\nNatural gas is a premium product and a cleaner-burning fuel than coal or oil-related products, including gasoline, diesel and heating oil. Despite this fact, over the past two years natural gas has received a low price in the U.S. market relative to coal and oil-related products, primarily as a result of a temporary surplus of production. This surplus has been principally caused by high levels of drilling activity as producers focused on holding by production (HBP) leasehold in new highly productive, low cost natural gas shale plays. In essence, producers reinvented U.S. supply ahead of reinventing U.S. demand. We believe HBP-incentivized drilling on natural gas plays will largely come to an end in 2012, and U.S. demand will soon also be reinvented to allow U.S. natural gas prices to reconnect to price parity with world natural gas prices that have risen to more than double U.S. natural gas prices.\n\nThis surge in world natural gas prices has been in response to $100+ oil prices and surging global liquefied natural gas (LNG) demand. In our view, the arbitrage in value between competing fuels is simply too wide. Capital and ideas will flow toward projects that make the most of this price disparity. 
Chesapeake and other companies are working to create the ability to export natural gas from the U.S. Gulf Coast and other regions in the form of LNG to premium Pacific Rim, European and South American markets, perhaps as soon as 2015. This initiative will also be aided by the widening of the Panama Canal to accommodate large LNG vessels. Furthermore, we believe that the\n\nJeff Mobley Senior Vice President - Investor Relations and Research\n\n<!-- image -->", - "page_start": 22, - "page_end": 22, - "source_file": "NYSE_CHK_2010.pdf" - } - ] - } - ] -] \ No newline at end of file