content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
After a screenshot is taken, you can get details about the screenshot via the after:screenshot plugin event. This event is called whether a screenshot is taken with cy.screenshot() or as a result of a test failure. The event is called after the screenshot image is written to disk. This allows you to record those details or manipulate the image as needed. You can also return updated details about the image. Usage Using your pluginsFile you can tap into the after:screenshot event. // cypress/plugins/index.js const fs = require('fs') module.exports = (on, config) => { on('after:screenshot', (details) => { // details will look something like this: // { // size: 10248 // takenAt: '2018-06-27T20:17:19.537Z' // duration: 4071 // dimensions: { width: 1000, height: 660 } // multipart: false // pixelRatio: 1 // name: 'my-screenshot' // specName: 'integration/my-spec.js' // testFailure: true // path: '/path/to/my-screenshot.png' // scaled: true // blackout: [] // } // example of renaming the screenshot file const newPath = '/new/path/to/screenshot.png' return new Promise((resolve, reject) => { fs.rename(details.path, newPath, (err) => { if (err) return reject(err) // because we renamed/moved the image, resolve with the new path // so it is accurate in the test results resolve({ path: newPath }) }) }) }) } You can return an object or a promise that resolves an object from the callback function. Any type of value other than an object will be ignored. The object can contain the following properties: - path: absolute path to the image - size: size of the image file in bytes - dimensions: width and height of the image in pixels (as an object with the shape { width: 100, height: 50 }) If you change any of those properties of the image, you should include the new values in the returned object, so that the details are correctly reported in the test results. For example, if you crop the image, return the new size and dimensions of the image. The properties will be merged into the screenshot details and passed to the onAfterScreenshot callback (if defined with Cypress.Screenshot.defaults() and/or cy.screenshot()). Any other properties besides path, size, and dimensions will be ignored.
https://docs.cypress.io/api/plugins/after-screenshot-api.html
2019-12-05T19:30:55
CC-MAIN-2019-51
1575540482038.36
[]
docs.cypress.io
Gather metrics from your Nomad clusters to: Nomad emits metrics to Datadog via DogStatsD. To enable the Nomad integration, install the Datadog Agent on each client and server host. Once the Datadog Agent is installed, add a Telemetry stanza to the Nomad configuration for your clients and servers: telemetry { publish_allocation_metrics = true publish_node_metrics = true datadog_address = "localhost:8125" disable_hostname = true collection_interval = "10s" } Next, reload or restart the Nomad agent on each host. You should now begin to see Nomad metrics flowing to your Datadog account. The Nomad check does not include any events. The Nomad check does not include any service checks. Need help? Contact Datadog support. Mistake in the docs? Feel free to contribute!
https://docs.datadoghq.com/integrations/nomad/
2019-12-05T19:55:42
CC-MAIN-2019-51
1575540482038.36
[]
docs.datadoghq.com
ZRemesher is designed to create new topology over nearly any existing model. It can be a model with many subdivision levels, a scanned model of all triangles or even one that is a mix of quads and triangles such as from a DynaMesh model. ZRemesher has a maximum polygons limit that you can input into its algorithm. Keep in mind that complex mathematic formulas are required, which means a large amount of memory. The more polygons in your model, the more memory will be needed for computing. If your computer has enough memory, ZRemesher can work with models up to 8 million of vertices. This requires 4Gb for ZRemesher itself, plus additional memory for ZBrush, your operating system and any background applications. Attempting to work with more polygons could alter the stability of ZRemesher and ZBrush. If your model input has a polygon count higher than this limit (or if your computer doesn’t have enough RAM), go to a lower subdivision level or decimate your model with the Decimation Master plug-in before using ZRemesher. Keep in mind that your ultimate goal with ZRemesher will usually be to retopologize your model to a lower polygon count for exporting, creating a mesh for multiple subdivision levels, or for cleaning up the model for better sculpting flow. We recommend that for the sake of speed you reduce your polygon count to a manageable level before using ZRemesher. Please refer to the Tips and Tricks section for more information.
http://docs.pixologic.com/user-guide/3d-modeling/topology/zremesher/high-polycounts/
2019-12-05T19:34:48
CC-MAIN-2019-51
1575540482038.36
[]
docs.pixologic.com
DeleteApplication Deletes the specified application. Kinesis Data Analytics halts application execution and deletes the application. Request Syntax { "ApplicationName": " string", "CreateTimestamp": number} Request Parameters The request accepts the following data in JSON format. - ApplicationName The name of the application to delete. Type: String Length Constraints: Minimum length of 1. Maximum length of 128. Pattern: [a-zA-Z0-9_.-]+ Required: Yes - CreateTimestamp Use the DescribeApplicationoperation to get this value. Type: Timestamp Required: Yes Response Elements If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Errors - ConcurrentModificationException Exception thrown as a result of concurrent modifications to an application. This error can be the result of attempting to modify an application without using the current application ID. HTTP Status Code: 400 - InvalidApplicationConfigurationException The user-provided application configuration is not valid.:
https://docs.aws.amazon.com/kinesisanalytics/latest/apiv2/API_DeleteApplication.html
2019-12-05T20:28:14
CC-MAIN-2019-51
1575540482038.36
[]
docs.aws.amazon.com
All content with label cacheloader+client_server+cloud+data_grid+deadlock+gridfs+gui_demo+hibernate_search+import+infinispan+installation+jpa+repeatable_read+s3+setup. Related Labels: podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, intro, contributor_project, pojo_cache, archetype, lock_striping, jbossas, nexus, guide, schema, listener, cache, amazon, memcached, grid, test, jcache, api, xsd, ehcache, maven, documentation, wcm, youtube, userguide, write_behind, ec2, s, streaming, hibernate, getting, aws, interface, custom_interceptor, clustering, eviction, large_object, fine_grained, concurrency, out_of_memory, examples, jboss_cache, index, events, batch, hash_function, configuration, buddy_replication, loader, pojo, write_through, remoting, mvcc, tutorial, notification, presentation, xml, read_committed, jbosscache3x, distribution, started, cachestore, resteasy, cluster, development, permission, websocket, async, transaction, interactive, xaresource, build, gatein, searchable, demo, client, non-blocking, migration, filesystem, tx, user_guide, eventing, student_project, testng, infinispan_user_guide, standalone, hotrod, webdav, docs, batching, consistent_hash, store, jta, faq, as5, 2lcache, jsr-107, docbook, lucene, jgroups, locking, rest, hot_rod more » ( - cacheloader, - client_server, - cloud, - data_grid, - deadlock, - gridfs, - gui_demo, - hibernate_search, - import, - infinispan, - installation, - jpa, - repeatable_read, - s3, - setup ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/cacheloader+client_server+cloud+data_grid+deadlock+gridfs+gui_demo+hibernate_search+import+infinispan+installation+jpa+repeatable_read+s3+setup
2019-12-05T19:54:50
CC-MAIN-2019-51
1575540482038.36
[]
docs.jboss.org
Configuration Guide¶ The static configuration for Placement lives in two main files: placement.conf and policy.json. These are described below. Configuration¶ Config Reference: A complete reference of all configuration options available in the placement.conffile. Sample Config File: A sample config file with inline documentation. Policy¶ Placement, like most OpenStack projects, uses a policy language to restrict permissions on REST API actions. Policy Reference: A complete reference of all policy points in placement and what they impact. Sample Policy File: A sample placement policy file with inline documentation.
https://docs.openstack.org/placement/train/configuration/index.html
2019-12-05T20:05:43
CC-MAIN-2019-51
1575540482038.36
[]
docs.openstack.org
Contents Now Platform Capabilities Previous Topic Next Topic Connect Support Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Connect Support Connect Support is a real-time messaging tool that enables support agents to easily keep track of their support cases, quickly find solutions, and resolve problems quickly. Connect Support builds on the messaging platform provided with Connect. For general information about the Connect interface, setup, and administration, see Connect. When Connect Support is enabled, users designated as support agents have access to the support tab of the Connect sidebar. Features include: Administrators can create chat queues and enable users to access live support. Support agents can monitor the queues to provide instant support. Drag-and-drop sharing of links, files, and records. UI16 or UI15 is required to use Connect Support. Note: Connect Support does not replace legacy chat but offers some of the same functionality. The features should not be used concurrently. Monitor incoming Connect Support conversationsIn the support tab of the Connect sidebar, you can monitor the queues for which you are an agent and accept incoming conversations.Share knowledge in a Connect Support conversationThe support view of the Connect workspace has a built-in knowledge tool that makes it easy to search for knowledge articles and share them in a conversation.Transfer a Connect Support conversation to a different agent or queueYou can transfer a Connect Support conversation to a different agent in the queue or to a different queue.Add a user to a Connect Support conversationYou can add additional users to a Connect Support conversation. Escalate a Connect Support conversationIf an escalation path is defined for a Connect Support conversation, you can use a shortcut to escalate a Connect Support conversation to a different queueCreate an incident from a Connect Support conversationYou can use a shortcut to create an incident on behalf of a user, directly from a Connect Support conversation.Connect Support chat statesConnect Support chats move through specific states. Connect Support administrationAdministrators can configure various performance settings and features of Connect Support.Content packs and in-form analytics for Service Desk ChatContent packs contain preconfigured best practice dashboards. These dashboards contain actionable data visualizations that help you improve your business processes and practices.Connect Support and Service PortalUse Connect Support in your portal to allow your users to ask questions or submit requests to support agents. Activate Connect Support for Service PortalActivate the Connect Support and Service Portal integration plugin so you can add the Connect Support widget to a portal page. On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/jakarta-servicenow-platform/page/use/collaboration/concept/c_ConnectSupport.html
2019-12-05T20:02:31
CC-MAIN-2019-51
1575540482038.36
[]
docs.servicenow.com
We've made getting started with Codecov as easy as possible. The rest of this page will outline important first steps to getting your first repository up and running on codecov. Trying to get Started with Codecov Enterprise? Many of the instructions below can be used with our on-premises offering with minimal changes. If you're attempting to setup an on-premises install of Codecov Enterprise, be sure to read our Enterprise Install Guide Here are the things you'll need or want to have in place before using Codecov: An account on GitHub, GitLab or Bitbucket. Codecov relies on Git-based codehosts to run. Code coverage report(s) being generated by your test suite in the applicable programming language. Codecov ingests these reports to provide our product. The centrally supported code coverage report format is a .xmlformat. [Really nice to have] A continuous integration provider. This will automatically run tests, generate coverage and upload that coverage to Codecov for every commit. Basic Usage - Sign up on codecov.io and link either your GitHub, GitLab, or Bitbucket account. - Once linked, Codecov will automatically sync all the repositories to which you have access. - You can click on an organization from your dashboard to access its repositories, or navigate directly to a specific repository using:<repo-provider>/<account-name>/<repo-name>. Example:. A repository that has no coverage reports uploaded to codecov. Note that the repository upload token is displayed here. - Run your normal test suite to generate code coverage reports in .xmlformat, likely through a CI build process. - Use the bash uploader and a repository upload token, to upload your coverage report(s) to Codecov. - Navigate back to the repository on Codecov, and you should see coverage information. - Add a Team Bot so codecov can push notifications and properly interact with repository providers. Don't Want to Use The Bash Uploader Directly? Codecov has a number of framework/language specific implementations for uploading coverage reports. You can access many of these from our list of Supported Languages Advanced Usage - Get codecov comments on your Pull Requests: (GitLab/Bitbucket) use a Team Bot, (GitHub) use the codecov GitHub integration. - Enforce relative or absolute thresholds during your CI build using the Codecov YAML - Use flags to categorize reports in a single repository. This is great for monorepos or projects that include different types of test coverage (e.g., unit, end to end, integration). - Use codecov with the Sourcegraph Browser Extension. Note that on-premises versions of repository providers (e.g. GitHub Enterprise) are currently not supported with the Codecov Sourcegraph Browser Extension. What languages does Codecov support? See Supported Languages Will Codecov work with my CI provider? See the List of CI Providers that work with Codecov out of the box, but all CI providers are able to be detected. Additionally, you can read more about how Codecov fits in with your CI provider in the CI Provider Relationship Will Codecov work with my repository provider? Codecov works with Bitbucket, GitLab, and GitHub; and supports each provider's on-premises offering. What's Next Consult our FAQ for even more helpful information
https://docs.codecov.io/docs
2019-12-05T19:43:53
CC-MAIN-2019-51
1575540482038.36
[array(['https://files.readme.io/a1f8ca7-codecov-uploadreports.png', 'A repository that has no coverage reports uploaded to codecov. Note that the repository upload token is displayed here.'], dtype=object) ]
docs.codecov.io
Security for Internet of Things (IoT) from the ground up IoT solution accelerators provide a secure and private Internet of Things cloud solution. The solution accelerators deliver a complete end-to-end solution, with security built into every stage from the ground up. At Microsoft, developing secure software is part of the software engineering practice, rooted in Microsoft's solution accelerators offer unique features that make provisioning, connecting to, and storing data from IoT devices easy and transparent and, most of all, secure. This article examines the Azure IoT solution accelerators security features and deployment strategies to ensure security, privacy, and compliance challenges are addressed. Introduction that. Secure infrastructure from the ground up The Microsoft Cloud infrastructure supports more than one billion customers in 127 countries/regions. Drawing on Microsoft's decades-long experience building enterprise software and running some of the largest online services in the world, the Microsoft Cloud provides higher levels of enhanced security, privacy, compliance, and threat mitigation practices than most customers could achieve on their own. The Security Development Lifecycle (SDL) provides a mandatory company-wide development process that embeds security requirements into the entire software lifecycle. To help ensure that operational activities follow the same level of security practices, SDL uses rigorous security guidelines laid out in Microsoft's Operational Security Assurance (OSA) process. Microsoft also works with third-party audit firms for ongoing verification that it meets its compliance obligations, and Microsoft engages in broad security efforts through the creation of centers of excellence, including the Microsoft Digital Crimes Unit, Microsoft Security Response Center, and Microsoft Malware Protection Center. Microsoft Azure - secure IoT infrastructure for your business Microsoft Azure offers a complete cloud solution, one that combines a constantly growing collection of integrated cloud services—analytics, machine learning, storage, security, networking, and web—with an industry-leading commitment to the protection and privacy of your data. Microsoft's assume breach strategy uses a dedicated red team of software security experts who simulate attacks, testing the ability of Azure to detect, protect against emerging threats, and recover from breaches. Microsoft's global incident response team works around the clock to mitigate the effects of attacks and malicious activity. The team follows established procedures for incident management, communication, and recovery, and uses discoverable and predictable interfaces with internal and external partners. Microsoft's systems provide continuous intrusion detection and prevention, service attack prevention, regular penetration testing, and forensic tools that help identify and mitigate threats. Multi-factor authentication provides an extra layer of security for end users to access the network. And for the application and the host provider, Microsoft offers access control, monitoring, anti-malware, vulnerability scanning, patches, and configuration management. The solution accelerators take advantage of the security and privacy built into the Azure platform along with the SDL and OSA processes for secure development and operation of all Microsoft software. These procedures provide infrastructure protection, network protection, and identity and management features fundamental to the security of any solution. The Azure IoT Hub within the IoT solution accelerators solution accelerators, this article breaks down the suite into the three primary security areas. Secure device provisioning and authentication The solution accelerators secure devices while they are out in the field by providing a unique identity key for each device, which can be used by the IoT infrastructure to communicate with the device while it is in operation. The process is quick and easy to set up. The generated key with a user-selected device ID forms the basis of a token used in all communication between the device and the Azure IoT Hub. Device IDs can be associated with a device during manufacturing (that is, flashed in a hardware trust module) or can use an existing fixed identity as a proxy (for example (for example, an authenticated field engineer physically configures a new device while communicating with: Devices do not accept unsolicited network connections. They establish all connections and routes in an outbound-only fashion. For a device to receive a command from the backend, the device must initiate a connection to check for any pending commands to process. Once a connection between the device and IoT Hub is securely established, messaging from the cloud to the device and device to the cloud can be sent transparently. Devices only connect to or establish routes to well-known services with which they are peered, such as an Azure IoT Hub. System-level authorization and authentication use per-device identities, making access credentials and permissions near-instantly revocable. Secure connectivity Durability of messaging is an important feature of any IoT solution. The need to durably deliver commands and/or receive data from devices is underlined by the fact that IoT devices are connected over the Internet, or other similar networks that communication path between devices and Azure IoT Hub, or between gateways and Azure IoT Hub, is secured using industry-standard Transport Layer Security (TLS) with Azure IoT Hub authenticated using X.509 protocol. In order to protect devices from unsolicited inbound connections, Azure IoT Hub does not open any connection to the device. The device initiates all connections. Azure IoT Hub durably stores messages for devices and waits for the device to connect. These commands are stored for two days, enabling devices connecting sporadically, due to power or connectivity concerns, to receive these commands. Azure IoT Hub maintains a per-device queue for each device. Secure processing and storage in the cloud From encrypted communications to processing data in the cloud, the solution accelerators help keep data secure. It provides flexibility to implement additional encryption and management of security keys. Using Azure Active Directory (AAD) for user authentication and authorization, Azure IoT solution accelerators can provide a policy-based authorization model for data in the cloud, enabling easy access management that can be audited and reviewed. This model also enables near-instant revocation of access to data in the cloud, and of devices connected to the Azure IoT solution accelerators. reprovisioned. Data can be stored in Azure Cosmos DB or in SQL databases, enabling definition of the level of security desired. Additionally, Azure provides a way to monitor and audit all access to your data to alert you of any intrusion or unauthorized access. Conclusion solution accelerators build in security measures by design, enabling secure monitoring of assets to improve efficiencies, drive operational performance to enable innovation, and employ advanced data analytics to transform businesses. With its layered approach towards security, multiple security features, and design patterns, the solution accelerators help deploy an infrastructure that can be trusted to transform any business. Additional information Each solution accelerator creates instances of Azure services, such as: Azure IoT Hub: Your gateway that connects the cloud to devices. You can scale to millions of connections per hub and process massive volumes of data with per-device authentication support helping you secure your solution. Azure Cosmos DB: A scalable, fully-indexed database service for semi-structured data that manages metadata for the devices you provision, such as attributes, configuration, and security properties. Azure Cosmos DB offers high-performance and high-throughput processing, schema-agnostic indexing of data, and a rich SQL query interface. Azure Stream Analytics: Real-time stream processing in the cloud that enables you to rapidly develop and deploy a low-cost analytics solution to uncover real-time insights from devices, sensors, infrastructure, and applications. The data from this fully-managed service can scale to any volume while still achieving high throughput, low latency, and resiliency. Azure App Services: A cloud platform to build powerful web and mobile apps that connect to data anywhere; in the cloud or on-premises. Build engaging mobile apps for iOS, Android, and Windows. Integrate with your Software as a Service (SaaS) and enterprise applications with out-of-the-box connectivity to dozens of cloud-based services and enterprise applications. Code in your favorite language and IDE—.NET, Node.js, PHP, Python, or Java—to build web apps and APIs faster than ever. Logic Apps: The Logic Apps feature of Azure App Service helps integrate your IoT solution to your existing line-of-business systems and automate workflow processes. Logic Apps enables developers to design workflows that start from a trigger and then execute a series of steps—rules and actions that use powerful connectors to integrate with your business processes. Logic Apps offers out-of-the-box connectivity to a vast ecosystem of SaaS, cloud-based, and on-premises applications. Azure Blob storage: Reliable, economical cloud storage for the data that your devices send to the cloud. Next steps Read about IoT Hub security in Control access to IoT Hub in the IoT Hub developer guide. Feedback
https://docs.microsoft.com/en-us/azure/iot-fundamentals/iot-security-ground-up?context=azure/iot-hub/rc/rc
2019-12-05T19:46:23
CC-MAIN-2019-51
1575540482038.36
[array(['../includes/media/iot-security-ground-up/securing-iot-ground-up-fig3.png', 'Azure IoT solution accelerators'], dtype=object) ]
docs.microsoft.com
GUID_AVC_CLASS The GUID_AVC_CLASS device interface class is defined for audio video control (AV/C) devices that are supported by the AVStream architecture. Remarks The system-supplied AV/C client driver Avc.sys registers an instance of GUID_AVC_CLASS to represent an external AV/C unit on a 1394 bus. For information about the device interface class for virtual AV/C devices, see GUID_VIRTUAL_AVC_CLASS. Requirements See also Send comments about this topic to Microsoft
https://docs.microsoft.com/en-us/windows-hardware/drivers/install/guid-avc-class
2018-01-16T14:25:12
CC-MAIN-2018-05
1516084886436.25
[]
docs.microsoft.com
When it is first launched, the connector will ask you for your Memsource credentials. The connector uses the Memsource API, that is available to all user roles (Admin, PM, and Linguist) of all Memsource paid editions. Once you are connected to Memsource your projects will appear in a tree, arranged by Project Status (NEW, ASSIGNED, COMPLETED). By default, the connector shows the latest 100 projects to which your Memsource user has access. If you wish to work with older projects, choose Settings->Project Scope and then choose Latest 500 Projects, Latest 1000 Projects, or All Projects. If you have a large number of projects and choose to show all projects, the refresh of the project list may take longer. Your selection for Project Scope will be saved in future sessions. To run a QA pass in Xbench, select a project in the tree, and click Run QA. The following window will appear, where you will be able to select the language and the workflow step for the QA pass. If the project has Termbases attached and you are an Admin or PM user with export rights for the full termbase, two additional check boxes will appear in this dialog: After you click OK, the connector will retrieve any required files from the cloud and will launch an Xbench project ready for QA. When you find in Xbench a translation issue that you wish to fix, just right-click and then choose Edit Segment and the Memsource Editor will open right at the segment that needs to be fixed. To compare the files in two workflow steps, select the project in the project tree and choose Compare Workflow Steps. Please note that the Compare Workflow Steps only appears if the project has several workflow steps defined. Choose the language to compare, and the old and new workflow steps and then click OK. Your default browser will open with the comparison report. The comparison report will be stored in the My Documents folder. You can get to this folder with Windows Start->Documents. The ApSIC Xbench Connector is handy to keep an eye on the progress in your Memsource projects. If you wish to see the translation progress of each document, you need to first enable it in Settings -> Job Progress. When Job Progress is enabled, a Progress column appears in the document list. To update the progress data, or to see if a new project is listed in the Projects tree, click Refresh or press F5. If you wish to edit a document online with the Memsource WebEditor, just select the document, right-click and then choose Edit with Memsource WebEditor. Also, if you wish to manage the project, select the project in the tree, right-click and then choose Manage in Memsource.com.
https://docs.xbench.net/connector-memsource/use-connector/
2018-01-16T13:25:55
CC-MAIN-2018-05
1516084886436.25
[]
docs.xbench.net
Language: Run Stages This version of Puppet is not included in Puppet Enterprise. The latest version of PE includes Puppet 4.10. Default main Stage By default there is only one stage (named “ main”). All resources are automatically associated with this stage unless explicitly assigned to a different one. If you do not use run stages, every resource is in the main stage. Custom Stages Additional stages are declared as normal resources. Each additional stage must have an order relationship with another stage, such as Stage['main']. As with normal resources, these relationships. In order to assign a class to a stage, you must use the resource-like class declaration syntax. You cannot assign classes to stages with the include function. Limitations and Known Issues - You cannot assign a class to a run stage when declaring it with include. - You cannot subscribe to or notify resources across a stage boundary. - Classes that contain other classes (with either the containfunction or the anchor pattern) can sometimes behave badly if declared with a run stage — if the contained class is.
https://docs.puppet.com/puppet/4.1/lang_run_stages.html
2018-01-16T13:16:05
CC-MAIN-2018-05
1516084886436.25
[]
docs.puppet.com
This). Use this section of the interface to create new SSH key pairs, which include a public key and a private key. To generate a new SSH key pair, perform the following steps: To use a custom key name, enter the key name in the Key Name (This value defaults to id_rsa): text box. Select the desired key size. Click Generate Key. The interface will display the saved location of the key. To import an existing SSH key, perform the following steps: Click Import Key. To use a custom key name, enter the key name in the Choose a name for this key (defaults to id_dsa) text box. Paste the public and private keys into the appropriate text boxes. The Public Keys and Private Keys tables display the following information about your existing keys:
https://docs.cpanel.net/plugins/viewsource/viewpagesrc.action?pageId=2442527
2018-01-16T13:33:02
CC-MAIN-2018-05
1516084886436.25
[]
docs.cpanel.net
Installed). Label[sa_ci_type_icon] Maps CI types to icons which represent CI types in the business service map. Menu Action[sa_context_menu] Contains data on configurable menu options for CIs in the business service map. Custom Operations[sa_custom_operation] Contains data on planned enhancements in the Pattern Designer. Not in use in this release. Custom Operation Parameters[sa_custom_operation_param] Contains data on planned enhancements in the Pattern Designer. Not in use in this release. Custom Parsing Strategies[sa_custom_parsing_strategy] Contains data on planned enhancements in the Pattern Designer. Not in use in this release.. Boundary endpoints[sa_m2m_boundary_ep_service] Maps boundary endpoints to business service. Entry Point[sa_m2m_service_entry_point] Maps entry points to business service. Manual Connections[sa_manual_connections] Contains data on manual connections created by user.: Service Mapping > Administration > Properties. sa.debugger.max_timeout Maximum timeout (in seconds) since the last server activity during a Pattern Debugger run. Type: integer Default value: 120 Other possible values: any number higher than 60 Location: Service Mapping > Administration > Properties. sa.max_concurrent_connections The maximum number of concurrent tasks sent to an individual host. Type: integer Default value: 50 Other possible values: any number higher than 1 Scripts installed with Service Mapping Service Mapping adds the following client scripts. Client script Table Description IP source change NAT[cmdb_ci_translation_rule] Verifies that the source IP entered during a rule change is correct. IP Validation NAT[cmdb_ci_translation_rule] Validates the entered IP. IP Target change NAT[cmdb_ci_translation_rule] Verifies that the new target IP entered during a rule change is correct. Validate host TCP[cmdb_ci_endpoint_tcp] Validates the host attribute for the TCP entry point type. Enable view map Business Service[cmdb_ci_service_discovered] when you click an entry point link on the Entry Points tab on the Business Service form. ShowEditPatternButton Discovery patterns [sa_pattern] Controls display of the Edit button in the Pattern Designer module. Validation functions Endpoint[cmdb_ci_endpoint] A set of functions for validation of several attributes for different entry point types. Remove "--None--" from OS Types Uploaded file[sa_uploaded_file] Prevents the value — None — from being added to the OS Types list. Validate URL HTTP(S)[cmdb_ci_endpoint_http] Validates the URL attribute for the HTTP(S) entry point type. QBS - remove options from table drop down Technical Service[cmdb_ci_query_based_service] Removes redundant options from the table selection list in the Technical Business Service form. add url params Endpoint[cmdb_ci_endpoint] Saves data entered in the Edit entry point dialog box to the related business service. Enable Run Discovery button Business Service[cmdb_ci_service_discovered] Disables the Run Discovery button on the service map page while the discovery process is running. Validate port TCP[cmdb_ci_endpoint_tcp] Validates the port attribute for the TCP entry point type. Remove "--None--" from OS Architecture Uploaded file[sa_uploaded_file] Prevents the value — None — from being added to the OS Architecture list..
https://docs.servicenow.com/bundle/jakarta-it-operations-management/page/product/service-mapping/reference/r_PropertiesInstalSM.html
2018-01-16T13:29:30
CC-MAIN-2018-05
1516084886436.25
[]
docs.servicenow.com
Add a resource script to a resource block A resource script operates on a resource during deployment or returns data to the CMDB after a resource is deployed. Before you beginRole required: sn_cmp.cloud_service_designer Procedure Navigate to Cloud Management > Cloud Admin Portal >Extend Cloud Management entitiesExample: create a catalog item from an ARM template
https://docs.servicenow.com/bundle/kingston-it-operations-management/page/product/cloud-management-v2/task/add-resource-script-resource-block.html
2018-06-18T03:34:03
CC-MAIN-2018-26
1529267860041.64
[]
docs.servicenow.com
You can run a workflow from the Orchestrator client to test all JDBC example workflows in one full cycle. Prerequisites Verify that the user account you are logged in with has the necessary permissions to run JDBC workflows. Procedure - Click the Workflows view in the Orchestrator client. - In the workflows hierarchical list, expand to navigate to the Full JDBC cycle example workflow. - Right-click the Full JDBC cycle example workflow and select Start workflow. - Provide the required information, and click Next. - Type a database connection URL. - Type a user name to access the database. - Type a password to access the database. - Type the values to be used as entries in the database. - Click Submit to run the workflow.
https://docs.vmware.com/en/vRealize-Orchestrator/7.2/com.vmware.vrealize.orchestrator-use-plugins.doc/GUID-C6E8C344-BD9B-4D9A-B691-A9AB665CCBE6.html
2018-06-18T03:48:19
CC-MAIN-2018-26
1529267860041.64
[]
docs.vmware.com
Customize IAM Roles You may want to customize IAM roles and permissions for your requirements. For example, if your application does not use EMRFS consistent view, you may not want to allow Amazon EMR to access Amazon DynamoDB. To customize permissions, we recommend that you create new roles and policies. Begin with the permissions in the managed policies for the default roles (for example, AmazonElasticMapReduceforEC2Role and AmazonElasticMapReduceRole). Then, copy and paste the contents to new policy statements, modify the permissions as appropriate, and attach the modified permissions policies to the roles that you create. You must have the appropriate IAM permissions to work with roles and policies. For more information, see Allow Users and Groups to Create and Modify Roles. If you create a custom EMR role for EC2, follow the basic work flow, which automatically creates an instance profile of the same name. Amazon EC2 allows you to create instance profiles and roles with different names, but Amazon EMR does not support this configuration, and it results in an "invalid instance profile" error when you create the cluster. Important Inline policies are not automatically updated when service requirements change. If you create and attach inline policies, be aware that service updates might occur that suddenly cause permissions errors. For more information, see Managed Policies and Inline Policies in the IAM User Guide and Specify Custom IAM Roles When You Create a Cluster. For more information about working with IAM roles, see the following topics in the IAM User Guide:
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-iam-roles-custom.html
2018-06-18T04:14:16
CC-MAIN-2018-26
1529267860041.64
[]
docs.aws.amazon.com
Setting Your Admin Avatar Because many people may contribute to the Guides of your portal, your developers might want to be able to quickly see who the author is before reading a guide. For this reason, we identify the author not just with their name, but also with an avatar. Our avatars use Gravatar (globally recognized avatar). If you have a Gravatar account, you may have noticed you already have an avatar in place. In order to see what you have for an avatar currently, as well as make any updates, you should navigate to. Here, you can see all of your information as well as make changes. If you don't have an avatar image, or would like to change it, you can click the image and it will redirect you to. Create an account in Gravatar and upload your image. Once it's been uploaded, give it a few minutes to populate into the developer portal and you're all set.
https://docs.gelato.io/guides/setting-your-admin-avatar
2018-06-18T04:00:38
CC-MAIN-2018-26
1529267860041.64
[]
docs.gelato.io
Webhooks Gelato offers Webhooks so you can get notified every time a new developer registers on your portal. You can find Webhooks under the "Developer Settings" section under Portal Settings. Each time a Developer is created, Gelato will send a POST request to your Webhook target URL containing the new developer's details. They'll look something like this: { "developer": { "id": "123", "email": "[email protected]", "first_name":"Jason", "last_name":"TheApe", "access_approved":false, "username":"jason", "description":"Signed up about 5 years ago from Somewhere Mysterious", "avatar":null }, "action": "developer_create" } You can test this using the "TEST" button. If you'd like to verify that the request has come from Gelato, you can ask us to sign the Webhook request by specifying a secret. We'll then include a X-Hub-Signature header which is: sha1=SIGNATURE SIGNATURE is a SHA1 HMAC signature, using the hook's body as the data and the secret you specified above as the key (this follows the Authenticated Content Distribution section from the PubSubHubbub Spec). Note that this signature method is exactly the same as what's used by Github, and there's plenty of sample code available.
https://docs.gelato.io/guides/webhooks
2018-06-18T03:58:42
CC-MAIN-2018-26
1529267860041.64
[]
docs.gelato.io
Trial (Try and Buy) Mode for Presence Builder f you have Plesk with Presence Builder and Business Manager, you can tune your system to offer. Plesk helps such customers by giving them full access to Presence Builder functions to create sites at no charge. To claim the created sites the customers subscribe to a hosting plan in Plesk or upgrade the existing one. The more people you attract by this promotion, the more hosting subscriptions with Presence Builder you will have. The Try and Buy mode targets two different groups: - Existing customers who want to have a website but are unable to launch Presence Builder due to various limitations. For example, their hosting plan does not provide this option, or they have exceeded the number of sites to publish. By using the Try and Buy, you can upsell Presence Builder to these customers. Learn more in the section Configuring the Try and Buy for Existing Customers. - New customers who have created a trial site and now want to claim it with a new hosting plan. Learn how to configure the Try and Buy mode for such customers in the section Configuring the Try and Buy for Potential Customers. The scenario for upselling to existing customers works this way: - Links to launch Presence Builder become available to such customers on a number of pages in Customer Panel. The customers see the links and click them. - Plesk forwards these customers to Presence Builder where they create trial sites. - When the site is ready and the customers click Publish, they are prompted to upgrade their subscription. - If the upgrade is successfully completed, the site can be published to the upgraded hosting account. The scenario for selling hosting with Presence Builder to potential customers is as follows: - You enable public access to the Presence Builder trial mode in Plesk by selecting the corresponding checkbox in the Server Administration Panel > Tools & Settings > Try and Buy Mode Settings. If this mode is activated, everyone who follows a special promotional link is able to create a trial site. - You obtain the promotional link from the Plesk GUI and put this link on your website so that potential customers can access it. - When the customers click the link, they are forwarded to Presence Builder where they create trial sites. The editor interface is identical to the one that appears in step 2. - As soon as the site is ready, the customers click Publish, and Plesk requests them to buy a hosting plan. - The customers subscribe to a new plan and are free to publish the site. Note: All notifications that customers receive while using the Try and Buy mode can be customized. Learn more in the section Customizing Trial Mode Notifications.
https://docs.plesk.com/en-US/12.5/advanced-administration-guide-linux/trial-try-and-buy-mode-for-presence-builder.67078/
2018-06-18T04:03:53
CC-MAIN-2018-26
1529267860041.64
[]
docs.plesk.com
Restoring Backups You can restore backups made in Plesk 12.5 or earlier versions of Plesk, but no earlier than Plesk 8.6. To restore data from backup files, go to Websites & Domains >, you can select which objects to restore from a backup. You can restore a particular site, file, database, and so on. This helps you restore only what you need to restore, without overwriting other objects. For example, if you want to restore only a DNS zone of one domain, example.tld, there is no need to restore configurations of of all domains will be restored. To restore specific objects of a particular type, such as a mailbox or DNS zone of a domain: - Select the option Selected objects. - Select the type of object to restore. For example, Database. - Select the subscription to which the object belongs in the Subscription box (applicable for all object types except for the Subscription). Click in the box to see the list of all subscriptions or type first letters of the subscription name, and Plesk will find matches. For example, example.com. - Select one or more objects of the selected type. For example, wordpress_database_8. Similarly, when restoring your account and websites at the Customer Panel > Account, you can select objects to restore in the same way as in the Backup Manager wizard. You can restore the following types of objects from a backup: - Subscription - Site - DNS zone - Database - SSL into the Password field. Note that Plesk is unable to check whether the password you enter is wrong: The backup will be successfully restored but the data will be corrupted. If you wish to reset user passwords, clear the Provide the passwords option. If any errors or conflicts occur during restoration of data, the wizard will prompt you to select an appropriate resolution. Follow the instructions provided on the screen to complete the wizard.
https://docs.plesk.com/en-US/12.5/reseller-guide/website-management/backing-up-and-recovering-websites/restoring-data/restoring-backups.65200/
2018-06-18T04:05:37
CC-MAIN-2018-26
1529267860041.64
[]
docs.plesk.com
This document is a reference to the elements of the Page Template XML format. Refer to the Introduction to Page Templates. A page template describes a particular way or set of ways of organizing data from some sources on a page. It has the following structure: <page id="[id]"> <!-- The top-level section of this page --> [page-element]* </page>…where each [page-element]is one of: where the nested elements above are: source.[sourceName].[name]. All tags may also include a description attribute to document the use of the tag. Tags and attributes are described in detail in the following. An id has the format id ::= name(:major(.minor(.micro(.qualifier)?)?)?)? name ::= identifier major ::= integer minor ::= integer micro ::= integer qualifier ::= identifierAny omitted numeric version component missing is taken to mean 0, while a missing qualifier is taken to mean the empty string. The root tag of a page template. Defines a page, is also its root section. Attributes and subtags are the same as for section, with the exception that the id attribute is mandatory for a page. A representation of an area of screen real-estate. At runtime a section will contain content from various sources. The final renderer will render the section with its data items and/or subsections in an area of screen real-estate determined by its containing tag. Attributes: A data source whose data should be placed in the containing section. Attributes: A renderer to use to render a section of a data item (hit) of a particular type. Attributes: A choice between multiple alternative (lists of) page elements. A resolver chooses between the possible alternatives for each request at runtime. The alternative tag is used to enclose an alternative. If an alternative consists of just one page element tag, the enclosing alternative tag may be skipped. Attributes: Either:or Specify all the alternatives of a choice as a mapping function of elements to placeholders. A map is a convenience shorthand of writing many alternatives in the case where a collection of elements should be mapped to a set of placeholders with the constraint that each placeholder should get a unique element. This is useful e.g in the case where a set of sources are to be mapped to a set of sections. Attributes:Contained tags Includes the page elements contained directly in the page element in the given page template (the page tag itself is not included). Inclusion works exactly as if the include tag was literally replaced by the content of the included page. Attributes:
https://docs.vespa.ai/documentation/reference/page-templates-syntax.html
2018-06-18T04:00:16
CC-MAIN-2018-26
1529267860041.64
[]
docs.vespa.ai
Murano provides a very powerful and flexible platform to automate the provisioning, deployment, configuration and lifecycle management of applications in OpenStack clouds. However, the flexibility comes at cost: to manage an application with Murano one has to design and develop special scenarios which will tell Murano how to handle different aspects of application lifecycle. These scenarios are usually called “Murano Applications” or “Murano Packages”. It is not hard to build them, but it requires some time to get familiar with Murano’s DSL to define these scenarios and to learn the common patterns and best practices. This article provides a basic introductory course of these aspects and aims to be the starting point for the developers willing to learn how to develop Murano Application packages with ease. The course consists of the following parts: Before you proceed, please ensure that you have an OpenStack cloud (devstack-based will work just fine) and the latest version of Murano deployed. This guide assumes that the reader has a basic knowledge of some programming languages and object-oriented design and is a bit familiar with the scripting languages used to configure Linux servers. Also it would be beneficial to be familiar with YAML format: lots of software configuration tools nowadays use YAML, and Murano is no different. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/murano/queens/admin/appdev-guide/step-by-step/step_by_step.html
2018-06-18T03:19:33
CC-MAIN-2018-26
1529267860041.64
[]
docs.openstack.org
<. - topic/section troubleshooting/troubleSolution See troubleshooting. The following attributes are available on this element: Universal attribute group and outputclass.
http://docs.oasis-open.org/dita/dita/v1.3/errata01/os/complete/part3-all-inclusive/langRef/technicalContent/troubleSolution.html
2018-06-18T03:51:09
CC-MAIN-2018-26
1529267860041.64
[]
docs.oasis-open.org
tenantId: <> (1) subscriptionId: <> (2) aadClientId: <> (3) aadClientSecret: <> (4) aadTenantId: <> (5) resourceGroup: <> (6) cloud: <> (7) location: <> (8) vnetName: <> (9) securityGroupName: <> (10) primaryAvailabilitySetName: <> (11)) During advanced installations, Azure can be configured using the following parameters, which are configurable in the inventory file. # Cloud Provider Configuration # # Note: You may make use of environment variables rather than store # sensitive configuration within the ansible inventory. # For example: openshift_cloudprovider_kind=azure openshift_cloudprovider_azure_client_id="{{ lookup('env','AZURE_CLIENT_ID') }}" openshift_cloudprovider_azure_client_secret="{{ lookup('env','AZURE_CLIENT_SECRET') }}" openshift_cloudprovider_azure_tenant_id="{{ lookup('env','AZURE_TENANT_ID') }}" openshift_cloudprovider_azure_subscription_id="{{ lookup('env','AZURE_SUBSCRIPTION_ID') }}" openshift_cloudprovider_azure_resource_group=openshift openshift_cloudprovider_azure_location=australiasoutheast.
https://docs.openshift.org/latest/install_config/configuring_azure.html
2018-06-18T04:10:09
CC-MAIN-2018-26
1529267860041.64
[]
docs.openshift.org
JString::str split This Namespace has been archived - Please Do Not Edit or Create Pages in this namespace. Pages contain information for a Joomla! version which is no longer supported. It exists only as a historical reference, will not be improved and its content may be incomplete. J />
https://docs.joomla.org/API17:JString::str_split
2017-01-16T21:43:24
CC-MAIN-2017-04
1484560279368.44
[]
docs.joomla.org
Senate Record of Committee Proceedings Committee on Judiciary and Public Safety Senate Bill 135 Relating to: revocation of operating privilege for certain offenses related to operating while intoxicated, operating after revocation, making an appropriation, and providing a criminal penalty. By Senators Wanggaard and Marklein; cosponsored by Representatives Spiros, Jacque, Berceau, Allen, Horlacher, Katsma, Kleefisch, Novak, Skowronski and Thiesfeldt. March 29, 2017 Referred to Committee on Judiciary and Public Safety September 05, 2017 Public Hearing Held Present: (5) Senator Wanggaard; Senators Stroebel, Risser, L. Taylor and Testin. Absent: (0) None. Excused: (0) None. Appearances For · Kris Franceschi - Ritz Holman CPAs · Tricia Knight - Ritz Holman CPAs · Bill Mayer - Ritz Holman CPAs · Senator Van Wanggaard Appearances Against · None. Appearances for Information Only · None. Registrations For · Alice O'Connor - WI Chiefs of Police Association · Representative John Spiros · Representative Andre Jacque · Danielle Decker - City of Milwaukee · LeRoy Schmidt - WICPA Registrations Against · Jordan Lamb - Association of State Prosecutors Registrations for Information Only · None. October 19, 2017 Executive Session Held Present: (5) Senator Wanggaard; Senators Stroebel, Risser, L. Taylor and Testin. Absent: (0) None. Excused: (0) None. Moved by Senator Wanggaard, seconded by Senator Testin that Senate Bill 135 be recommended for passage. Ayes: (4) Senator Wanggaard; Senators Stroebel, Risser and Testin. Noes: (1) Senator L. Taylor. PASSAGE RECOMMENDED, Ayes 4, Noes 1 ______________________________ Valirie Maxim Committee Clerk
http://docs.legis.wisconsin.gov/2017/related/records/senate/judiciary_and_public_safety/1407287
2018-07-16T02:32:52
CC-MAIN-2018-30
1531676589172.41
[]
docs.legis.wisconsin.gov
Enable Network protection Applies to: - Windows 10, version 1709 and later - Windows Server 2016 Audience - Enterprise security administrators Manageability available with - Group Policy - PowerShell - Configuration service providers for mobile device management Supported in Windows 10 Enterprise, Network protection is a feature that is part of Windows Defender Exploit Guard. It helps to prevent employees from using any application to access dangerous domains that may host phishing scams, exploits, and other malicious content on the Internet. This topic describes how to enable Network protection with Group Policy, PowerShell cmdlets, and configuration service providers (CSPs) for mobile device management (MDM). Enable and audit Network protection You can enable Network protection in either audit or block mode with Group Policy, PowerShell, or MDM settings with CSP. For background information on how audit mode works, and when you might want to use it, see the audit Windows Defender Exploit Guard topic. Use Group Policy to enable or audit Network protection > Network protection. Double-click the Prevent users and apps from accessing dangerous websites setting and set the option to Enabled. In the options section you must specify one of the following: - Block - Users will not be able to access malicious IP addresses and domains - Disable (Default) - The Network protection feature will not work. Users will not be blocked from accessing malicious domains - Audit Mode - If a user visits a malicious IP address or domain, an event will be recorded in the Windows event log but the user will not be blocked from visiting the address. Important To fully enable the Network protection feature, you must set the Group Policy option to Enabled and also select Block in the options drop-down menu. Use PowerShell to enable or audit Network protection - Type powershell in the Start menu, right click Windows PowerShell and click Run as administrator Enter the following cmdlet: Set-MpPreference -EnableNetworkProtection Enabled You can enable the feauting in audit mode using the following cmdlet: Set-MpPreference -EnableNetworkProtection AuditMode Use Disabled insead of AuditMode or Enabled to turn the feature off. Use MDM CSPs to enable or audit Network protection Use the ./Vendor/MSFT/Policy/Config/Defender/EnableNetworkProtection configuration service provider (CSP) to enable and configure Network protection.
https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-exploit-guard/enable-network-protection
2018-07-16T03:19:34
CC-MAIN-2018-30
1531676589172.41
[]
docs.microsoft.com
DB17-jwd) DB17) DB17) DB17-wjd) DB17-lcconfigurations DB17-lc) DB17-l) DB17-lc) DB17-lsetup Modified 2018-06-24 by Andrea Censi This is the description of the saviors obstacle avoidance demo. Duckiebot in configuration DB17-wjd. Camera calibration completed. Wheel calibration completed. Joystick demo) successfully launched. Modified 2018-06-22 by Andrea Censi As it can be inferred from the video, the duckiebot should stop if an obstacle (duckie or cone) is placed in front of the Duckiebot. Modified 2018-06-22 by Andrea Censi Duckietown built to specifications. No special requirements like april-tags, traffic lights or similar needed. To demonstrate functionality, place obstacles (duckies S/M/L or cones) on driving lane. Best performance is achieved when obstacles are placed on the straights, not immediately after a curve. Modified 2018-06-22 by Andrea Censi Currently the bot has to be on the devel-saviors-23feb branch on git. Furthermore the additional package sk-image has to be installed: duckiebot $ sudo apt-get install python-skimage Modified 2018-06-22 by Andrea Censi Check: Joystick is turned on Check: Sufficient battery charge on duckiebot Modified 2018-06-22 by Andrea Censi Step by step instructions to run demo: Step 1: On the duckiebot, navigate to DUCKIETOWN_ROOT and run duckiebot $ source environment.sh duckiebot $ catkin_make -C catkin_ws/ duckiebot $ make demo-lane-following Wait for a couple of seconds until everything has been properly launched. Step 2: In a second terminal on the duckiebot, run: duckiebot $ roslaunch obst_avoid obst_avoid_lane_follow_light.launch veh:=robot_name This launches the obstacle avoidance node, wait again until it’s properly started up. Step 3: Press the X button on the joystick to generate an anti-instagram transformation. Within about the next 10 seconds in the terminal of Step2 this YELLOW message should appear: !!!!!!!!!!!!!!!!!!!!TRAFO WAS COMPUTED SO WE ARE READY TO GO!!!!!!!!!!!! Step 4: To (optionally) visualise output of the nodes run the following commands on your notebook: laptop $ source set_ros_master.sh robot_name laptop $ roslaunch obst_avoid obst_avoid_visual.launch veh:=robot_name laptop $ rviz Topics of interest are: /robot_name/obst_detect_visual/visualize_obstacles (Markers which show obstacles, visualize via rviz!), /robot_name/obst_detect_visual/image/compressed (Image with obstacle detection overlay, visualize via rqt!), /robot_name/obst_detect_visual/bb_linelist (bounding box of obstacle detection, visualize via rqt), /robot_name/duckiebot_visualizer/segment_list_markers (line segments). Step 5: To drive press R1 to start lane following. Duckiebot stops if obstacle detected and in reach of the duckiebot. Removal of the obstacle should lead to continuation of lane following. Modified 2018-06-22 by Andrea Censi P: Objects aren’t properly detected, random stops on track. S: Make sure that anti instagram was run properly. Repeat Step 3 if needed. P: Duckiebot crashes obstacles. S: Might be due to processes not running fast enough. Check if CPU load is too high, reduce if needed. Modified 2018-06-24 by Andrea Censi More information and details about our software packages can be found in our README on Github or in our Final Report. No questions found. You can ask a question on the website.
https://docs.duckietown.org/opmanual_duckiebot/out/demo_saviors.html
2018-07-16T02:33:22
CC-MAIN-2018-30
1531676589172.41
[]
docs.duckietown.org
Group Pickup Group Pickup# About Group Pickup# Pickup a call in the group/user/device configured. Schema# Validator for the group_pickup callflow's data object Usage# When a device is ringing in the office but no one is able to pick it up, it can be helpful to let others pick up the ringing line from their phone (versus hopping over desks and chairs, attempting to catch the call before it stops ringing). Which devices are checked# The first thing to decide is the scope of the pickup group - it can be a single device, a user's device(s), or a group's device(s). You define the appropriate device_id, user_id or group_id in the action's data. Note Preference is given to the most restrictive option if more than one are defined - so device_id is used before user_id is used before group_id. Which devices can pickup ringing calls# It might not always be preferred to have anyone in the office able to call the group_pickup callflow. You can restrict the device, user, or group of users/devices who can utilize the callflow by defining the appropriate approved_* field (following similar preference which is used if multiple exist in the action's data). Example# Define a callflow with extension 8000 that allows the sales group's lines to be picked up by the sales group's devices: {"numbers":["8000"] ,"flow":{ "module":"group_pickup" ,"data":{ "group_id":"{SALES_GROUP_ID}" ,"approved_group_id":"{SALES_GROUP_ID}" } } } The group_pickup action is a terminal one; no children will be evaluated and can be omitted.
https://docs.2600hz.com/supported/applications/callflow/doc/group_pickup/
2018-07-16T02:45:38
CC-MAIN-2018-30
1531676589172.41
[]
docs.2600hz.com
Command line User Guide¶ Overview¶ oxford_asl is an automated command line utility that can process ASL data to produce a calibrated map of resting state tissue perfusion. It also includes a range of other useful analysis methods inlcuding amongst others: - motion correction - registration to a structural image (and thereby a template space) - partial volume correction - distorition correction If you have ASL data to analyse, oxford_asl is most likely the tool you will want to use, unless you want a graphical user interface. In practice, the GUI in BASIL is largely a means to construct the right call to oxford_asl. What you will need¶ As a minimum to use oxford_asl all you need are some ASL data (label and control pairs). In practice you will also most probably want: - a calibration image: normally a proton-density-weighted image (or a close match) acquired with the same readout parameters as the main ASL data. Only once you have a calibration image can you get perfusion in absolute units. - a structural image: it is helpful to have a structral image to pass to oxford_asland if your data incldues this we strongly suggest you do use it with oxford_asl. By preference, we strongly suggest you process your structural image with fsl_anatbefore passing those results to oxford_asl. This is a good way to get all of the useful information that oxford_aslcan use, and you can scrutinise this analysis first to check you are happy with it before starting your ASL analysis. - multi-delay ASL: the methods in oxford_aslare perfectly applicable to the widely used single delay/PLD ASL acquisition. But, they offer particular advantages if you have multi-delay/PLD data. Things to note¶ To produce the most robust analysis possible oxford_asl includes a number of things in the overall analysis pipeline that you might want to be aware of: - spatial regularisation: this feature is now enabled by default for all analyses and applies to the estimated perfusion image. We do not recommend smoothing your data prior to passing to oxford_asl. If you really want to, only do ‘sub-voxel’ level of smoothing. - masking: oxford_aslwill attempt to produce a brain mask in which perfusion quantification will be performed. This is normally derived from any structural images with which it is provided (highly recommened), via registration. Therefore, if the registration is poor there will be an impact on the quality of the mask. Where no structural information is provided, the mask will be derived from the ASL data via brain extraction, this can be somewhat variable depending upon your data. It is thus always worth examining the mask created. oxford_aslprovides the option to input your own mask where you are not satisfied with the one automatically generated or you need a specific mask for your study. oxford_aslperforms the final registration using the perfusion image and the BBR cost function. We have found this to be reliable, as long as the perfusion image is of sufficient quality. In practice, an initial registration is done earlier in the pipeline using the raw ASL images and this is used in the mask generation step. You should always inspect the quality of the final registered images. - multiple repeats: ASL data typically contains many repeats of the same measurement to increase the overall signal-to-noise ratio of the data. You should provide this data to oxford_asl, and not average over all the repeats beforehand (unlike earlier versions of the tool). oxford_aslnow inlcudes a pipeline where it intially analyses the data having done averaging over the repeats, followed by a subsequent analysis with all the data - to achieve both good robustness and accuracy. If your data has already had the repeats averaged, it is still perfectly reasonable to do analysis with oxford_asl, if you have very few measurements in the data to pass to oxford_aslyou might want to use the special ‘noise prior’ option, since this sets information needed for spatial regularisation. - Avanced analyses: Partial volume correction, or analysis of the data into separate epochs, are avaialbe as advanecd supplementary analyses in oxford_asl. If you choose these options oxford_aslwill always run a conventional analysis first, this is used to intialise the subsequent analyses. This also means that you can get both conventional and advanced results in a single run of oxford_asl. - Multi-stage analysis: By default oxford_asl will analyse the data in multiple-stages where appropriate in an attempt to get as accurate and robust a result as possible. The main example of this is a preliminary analysis with the data having been averaged over multiple-repeats (see above). But, this also applies to the registration (see above). This does mean that you might find some differences in the results than if you did an analysis of the data yourself using a combination of other command line tools. Typical Usage¶ oxford_asl -i [asl_data] -o [output_dir] <data parameters> <analysis options> -c [M0_calib] <calibration parameters> –fslanat [fsl_anat_output_dir] This command would analyse the ASL data, including calculation of perfusion in absolute (ml/100g/min) units using the calibration data, and register the results to the strcutural image, as well as producing perfusion maps in MNI152 standard space. In general, the use of an fsl_anat analysis of a structural image with oxford_asl is recommended, but it is not required: perfusion can be calculated in the native space without the structural information. Output¶ in the calib subdirectory you will find: - In reference region (single) mode:. - In voxelwise mode (automatic when no structural image is provided): a M0.nii.gz image will be produced. Various subdirectories are created: native_spacein which perfusion and arrival time images in the native resolution of the ASL data are saved. struct_spaceprovides results in the same space as the structural image (if supplied). std_spaceprovides results in MNI152 standard space (if an fsl_anatresults directory has been provided). If you find the registration to be unsatisfactory, a new registration can be performed without having to repeat the main analysis using the results in native_space. Region analysis¶ If the --region-analysis option is specified an additional directory native_space/region_analysis will be created containing three files: - region_analysis.csv- This file contains region analysis for all voxels within the brain mask - region_analysis_gm.csv- This file contains sub-region analysis for all voxels with at least 80% GM (i.e. near ‘pure’ GM voxels) - region_analysis_wm.csv- This file contains sub-region analysis for all voxels with at least 90% WM (i.e. near ‘pure’ WM voxels) Region analysis is performed by using the registration from the structural image to standard space from an fsl_anat run. Hence --fslanat must be used in order to run region analysis. The output files are in comma-separated format, suitable for loading into most spreadsheet or data processing applications. Within each region the following information is presented: - Nvoxels- The number of voxels identified as being within this region - Mean, Std, Median, IQR- Standard summary statistics for the perfusion values within this region - Precision-weighted mean- The mean perfusion weighted by voxelwise precision (1/std.dev) estimates. This measure takes into account the confidence of the inference in the value returned for each voxel and is a standard measure used in meta-analysis to combine results of varying levels of confidence. - I2- A measure of heterogeneity for the voxels within the region expressed as a percentage. A high value of I2 suggests that there is significant variation in perfusion within the region that is not attributable to the inferred uncertainty in the estimates. For a definition of I2 and an overview of its use in meta-analyses, see The regions defined are taken from the Harvard-Oxford cortical and subcortical atlases. Standard space regions are transformed to native ASL space and voxels with probability fraction > 0.5 are considered to lie within a region. At least 10 voxels must be found in order for statistics to be presented. In addition, statistics are presented for ‘generic’ GM and WM regions. For each tissue type, two such regions are defined, one with ‘some’ of the tissue present (e.g. at least 10% GM), and one intended to capture ‘pure’ tissue types (e.g. at least 80% GM). Note that there is an overlap here with the separate output files for GM and WM which are explicitly based on the ‘pure’ tissue type subregions. Usage¶ Typing oxford_asl with no options will give the basic usage information, further options are revleaed by typing oxford_asl --more. Main options Acquisition specific There are a number of acquisition sepecific parameters that you should set to describe your data to oxford_asl. Note, it is highly unlikely that the defaults for all of these parameters will be correct for your data - in particular you should pay attention to the follwing options. There are further acquisition specific parameters that you might need to invoke depending upon your data, although the defaults here are more likely to apply. Structural image The inclusion of a structural image is optional but highly recommended, as various useful pieces of information can be extracted when this image is used as part of oxford_asl, and partial volume correction can be done. Generally, we recommend the use of fsl_anat to process the structural image prior to use with oxford_asl. the Hardvard-Oxford standard atlas. Calibration Most commonly you will have a calibration image that is some form of (approximately) proton-density-weighted image and thus will use the -c option. There are further advanced/extended options for calibraiton: There are some extended options (to be used alongside a structural image) for the purposes of registration. Kinetic Analysis Distortion Correction Distortion correction for (EPI) ASL images follows the methodology used in BOLD EPI distortion correction. Using a separately acquired fieldmap (structural image is required), this can in principle be in any image space (not necessarily already alinged with the ASL or structural image), the syntax follows epi_reg: Further information on fieldmaps can be found under the fsl_prepare_fieldmap documentation on the FSL webpages. Using phase-encode-reversed calibration image (a la topup): For topup the effective EPI echo spacing is converted to total readout time by multiplication by the number of slices (minus one) in the encode direction. Earlier versions of oxford_asl (pre v3.9.22) interpreted the --echospacing parameter as total readout time when supplied with a phase-encode-reversed calibration image. Partial volume correction Correction for the effect of partial voluming of grey and white matter, and CSF can be performed using oxford_asl to get maps of ‘pure’ grey (and white) matter perfusion. When partial volume correction is performed a separate subdirectory ( pvcorr) within the main results subdirectories will appear with the corrected perfusion images in: in this directory the perfusion.nii.gz image is for grey matter, perfusion_wm.nii.gz contains white matter estimates. Note that, the non-corrected analysis is always run prior to partial volume correction and thus you will also get a conventional perfusion image. Epoch analysis The data can also be analysed as separate epochs based on the different measurements (volumes) within the ASL data. This can be a useful way of examining changes in perfusion over the duration of the acquisition, although shorter epochs will contain fewer measurements and thus be more noisy. Epoch analysis is always preceeded by a conventional analysis of the full data and thus the conventional perfusion image will also be generated from the full dataset.
https://asl-docs.readthedocs.io/en/latest/oxford_asl_userguide.html
2022-05-16T22:14:27
CC-MAIN-2022-21
1652662512249.16
[]
asl-docs.readthedocs.io
This is the new Export to CSV Smart Service. If you need to export to Excel, use the Export to Excel Smart Service. The Export Data Store Entity to CSV Smart Service allows designers to safely export large datasets. It can be used to export data from Appian that can then be imported into other third-party applications. A designer may want to export all data or just updates made within the last day. The Smart Service returns a document in a CSV format. Exports the selected data store entity to CSV. This function will only execute inside a saveInto or a Web API. a!exportDataStoreEntityToCsv( entity, selection, aggregation, filters, documentName, documentDescription, saveInFolder, documentToUpdate, includeHeader, csvDelimiter, onSuccess, onError) Copy and paste an example into an Appian Expression Editor to experiment with it. Replace the ENTITY and FOLDER with the appropriate constants for your application. The following configurations and expected behavior apply when using the Export Data Store Entity to CSV Smart Service from the process modeler:
https://docs.appian.com/suite/help/21.2/Export_To_CSV_Smart_Service.html
2022-05-16T21:28:59
CC-MAIN-2022-21
1652662512249.16
[]
docs.appian.com
by default: The XML order is important for some elements - e.g. picklists. If we were to re-order. If this not the behavior you need - there is the option to Hide reordering in the following way: From November 2021, Gearset has started to set the hide re-ordering by default or ignore the re-ordering on the following metadata types: Clean data service Connected app or applying the "re-ordering logic" when determining the Difference Type in the Difference type column metadata types: Layout If you think there are other metadata types that Gearset should hide for you by default, please get in touch with us.
https://docs.gearset.com/en/articles/2413367-why-is-xml-order-of-some-metadata-in-the-comparison-results-different
2022-05-16T22:38:49
CC-MAIN-2022-21
1652662512249.16
[]
docs.gearset.com
- repo, a changelog is being made available to more easily follow along with updates throughout the alpha period.
https://docs.gitlab.com/12.10/charts/releases/alpha.html
2022-05-16T21:18:19
CC-MAIN-2022-21
1652662512249.16
[]
docs.gitlab.com
Azure Iaa SVMRecovery Inputs CSMObject. Region Property Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Required. region of AzureIaaSVMRecoveryInputsObject. public string Region { get; set; } member this.Region : string with get, set Public Property Region As String Property Value - System.String
https://docs.azure.cn/zh-cn/dotnet/api/microsoft.azure.management.backupservices.models.azureiaasvmrecoveryinputscsmobject.region?view=azure-dotnet
2022-05-16T21:39:53
CC-MAIN-2022-21
1652662512249.16
[]
docs.azure.cn
Command Class Represents a command. Namespace: DevExpress.Utils.Commands Assembly: DevExpress.Data.v20.2.dll Declaration Remarks The command pattern enables you to separate the object that invokes the operation from the one that knows how to perform it. A Command class holds the information on the command source (Command.CommandSourceType and data used in user interface elements to which the command is attached (Command.Description, Command.Image, Command.LargeImage and Command.MenuCaption). The command’s Command Command.ForceExecute method, which ignores the command state, should be used in this instance. For information and examples on commands in the XtraRichEdit Suite review the Commands document. Related GitHub Examples The following code snippets (auto-collected from DevExpress Examples) contain references to the Command class. Note The algorithm used to collect these code examples remains a work in progress. Accordingly, the links and snippets below may produce inaccurate results. If you encounter an issue with code examples below, please use the feedback form on this page to report the issue.
https://docs.devexpress.com/CoreLibraries/DevExpress.Utils.Commands.Command?v=20.2
2022-05-16T21:04:17
CC-MAIN-2022-21
1652662512249.16
[]
docs.devexpress.com
Administration - Start or stop services - Modify the default administrator password - Upgrade Re:dash - Create and restore application backups - Upload files using SFTP - - Modify the default administrator password - Create and restore PostgreSQL backups - Configure pgAdmin 4 - Connect to Redis from a different machine - Secure Redis - Create a Redis cluster apache gonit postgresql redis
https://docs.bitnami.com/aws/apps/redash/administration/
2022-05-16T20:50:07
CC-MAIN-2022-21
1652662512249.16
[]
docs.bitnami.com
HTTP Server Setup This documentation provides example configurations for both nginx and Apache, though any HTTP server which supports WSGI should be compatible. Info For the sake of brevity, only Ubuntu 20.04 instructions are provided here. These tasks are not unique to NetBox and should carry over to other distributions with minimal changes. Please consult your distribution's documentation for assistance if needed. Obtain an SSL Certificate To enable HTTPS access to NetBox, you'll need a valid SSL certificate. You can purchase one from a trusted commercial provider, obtain one for free from Let's Encrypt, or generate your own (although self-signed certificates are generally untrusted). Both the public certificate and private key files need to be installed on your NetBox server in a location that is readable by the netbox user. The command below can be used to generate a self-signed certificate for testing purposes, however it is strongly recommended to use a certificate from a trusted authority in production. Two files will be created: the public certificate ( netbox.crt) and the private key ( netbox.key). The certificate is published to the world, whereas the private key must be kept secret at all times. sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \ -keyout /etc/ssl/private/netbox.key \ -out /etc/ssl/certs/netbox.crt The above command will prompt you for additional details of the certificate; all of these are optional. HTTP Server Installation Option A: nginx Begin by installing nginx: sudo apt install -y nginx Once nginx is installed, copy the nginx configuration file provided by NetBox to /etc/nginx/sites-available/netbox. Be sure to replace netbox.example.com with the domain name or IP address of your installation. (This should match the value configured for ALLOWED_HOSTS in configuration.py.) sudo cp /opt/netbox/contrib/nginx.conf /etc/nginx/sites-available/netbox Then, delete /etc/nginx/sites-enabled/default and create a symlink in the sites-enabled directory to the configuration file you just created. sudo rm /etc/nginx/sites-enabled/default sudo ln -s /etc/nginx/sites-available/netbox /etc/nginx/sites-enabled/netbox Finally, restart the nginx service to use the new configuration. sudo systemctl restart nginx Option B: Apache Begin by installing Apache: sudo apt install -y apache2 Next, copy the default configuration file to /etc/apache2/sites-available/. Be sure to modify the ServerName parameter appropriately. sudo cp /opt/netbox/contrib/apache.conf /etc/apache2/sites-available/netbox.conf Finally, ensure that the required Apache modules are enabled, enable the netbox site, and reload Apache: sudo a2enmod ssl proxy proxy_http headers sudo a2ensite netbox sudo systemctl restart apache2 Confirm Connectivity At this point, you should be able to connect to the HTTPS service at the server name or IP address you provided. Info Please keep in mind that the configurations provided here are bare minimums required to get NetBox up and running. You may want to make adjustments to better suit your production environment. Warning Certain components of NetBox (such as the display of rack elevation diagrams) rely on the use of embedded objects. Ensure that your HTTP server configuration does not override the X-Frame-Options response header set by NetBox. Troubleshooting If you are unable to connect to the HTTP server, check that: - Nginx/Apache is running and configured to listen on the correct port. - Access is not being blocked by a firewall somewhere along the path. (Try connecting locally from the server itself.) If you are able to connect but receive a 502 (bad gateway) error, check the following: - The WSGI worker processes (gunicorn) are running ( systemctl status netboxshould show a status of "active (running)") - Nginx/Apache is configured to connect to the port on which gunicorn is listening (default is 8001). - SELinux is not preventing the reverse proxy connection. You may need to allow HTTP network connections with the command setsebool -P 1
https://docs.netbox.dev/en/stable/installation/5-http-server/
2022-05-16T22:03:58
CC-MAIN-2022-21
1652662512249.16
[]
docs.netbox.dev
How to handle test failures¶ Stopping after the first (or N) failures¶ To stop the testing process after the first (N) failures: pytest -x # stop after first failure pytest --maxfail=2 # stop after two failures Using pdb — The Python Debugger with pytest¶ Dropping to pdb on failures¶ Python comes with a builtin Python debugger called pdb. pytest allows one to drop into the pdb prompt via a command line option: pytest --pdb This will invoke the Python debugger on every failure (or KeyboardInterrupt)."',) Dropping to pdb at the start of a test¶ pytest allows one to drop into the pdb prompt immediately at the start of each test via a command line option: pytest --trace This will invoke the Python debugger at the start of every test. Setting breakpoints¶ To set a breakpoint in your code use the native Python import pdb;pdb.set_trace() call in your code and pytest automatically disables its output capture for that test: Output capture in other tests is not affected. Any prior test output that has already been captured and will be processed as such. Output capture gets resumed when ending the debugger session (via the continuecommand). Using the builtin breakpoint function¶ Python 3.7 introduces a builtin breakpoint() function. Pytest supports the use of breakpoint() with the following behaviours: - When breakpoint()is called and PYTHONBREAKPOINTis set to the default value, pytest will use the custom internal PDB trace UI instead of the system default Pdb. - When tests are complete, the system will default back to the system Pdbtrace UI. - With --pdbpassed to pytest, the custom internal Pdb trace UI is used with both breakpoint()and failed tests/unhandled exceptions. - --pdbclscan be used to specify a custom debugger class. Fault Handler¶ New in version 5.0. The faulthandler standard module can be used to dump Python tracebacks on a segfault or after a timeout. The module is automatically enabled for pytest runs, unless the -p no:faulthandler is given on the command-line. Also the faulthandler_timeout=X configuration option can be used to dump the traceback of all threads if a test takes longer than X seconds to finish (not available on Windows). Note This functionality has been integrated from the external pytest-faulthandler plugin, with two small differences: To disable it, use -p no:faulthandlerinstead of --no-faulthandler: the former can be used with any plugin, so it saves one option. The --faulthandler-timeoutcommand-line option has become the faulthandler_timeoutconfiguration option. It can still be configured from the command-line using -o faulthandler_timeout=X. Warning about unraisable exceptions and unhandled thread exceptions¶ New in version 6.2. Note These features only work on Python>=3.8. Unhandled exceptions are exceptions that are raised in a situation in which they cannot propagate to a caller. The most common case is an exception raised in a __del__ implementation. Unhandled thread exceptions are exceptions raised in a Thread but not handled, causing the thread to terminate uncleanly. Both types of exceptions are normally considered bugs, but may go unnoticed because they don’t cause the program itself to crash. Pytest detects these conditions and issues a warning that is visible in the test run summary. The plugins are automatically enabled for pytest runs, unless the -p no:unraisableexception (for unraisable exceptions) and -p no:threadexception (for thread exceptions) options are given on the command-line. The warnings may be silenced selectively using the pytest.mark.filterwarnings mark. The warning categories are pytest.PytestUnraisableExceptionWarning and pytest.PytestUnhandledThreadExceptionWarning.
https://docs.pytest.org/en/7.1.x/how-to/failures.html
2022-05-16T22:01:52
CC-MAIN-2022-21
1652662512249.16
[]
docs.pytest.org
Crate microcrates_bytes [−] [src] Provides abstractions for working with bytes. The bytes crate provides an efficient byte buffer structure ( Bytes) and traits for working with buffer implementations ( Buf, BufMut). Bytes Bytes is an efficient container for storing and operating on continguous BytesMut is used first and written to. For example: use microcrates_bytes::{BytesMut, BufMut, BigEndian}; let mut buf = BytesMut::with_capacity(1024); buf.put(&b"hello world"[..]); buf.put_u16::<BigEndian>(1234); let a = buf.take(); assert_eq!(a, b"hello world\x04\xD2"[..]); buf.put(&b"goodbye world"[..]); let b = buf.take();. Buf, BufMut These two traits provide read and write access to buffers. The underlying storage may or may not be in contiguous memory. For example, Bytes is a buffer that guarantees contiguous memory, but a rope stores the bytes in disjoint chunks. Buf and BufMut maintain cursors tracking the current position in the underlying byte storage. When bytes are read or written, the cursor is advanced. Relation with Read and Write At first glance, it may seem that Buf and BufMut overlap in functionality with std::io::Read and std::io::Write. However, they serve different purposes. A buffer is the value that is provided as an argument to Read::read and Write::write. Read and Write may then perform a syscall, which has the potential of failing. Operations on Buf and BufMut are infallible.
https://docs.rs/microcrates-bytes/latest/microcrates_bytes/
2022-05-16T21:55:41
CC-MAIN-2022-21
1652662512249.16
[]
docs.rs
There are two ways to configure a line line. [Series #]with #as the index number of the data value. For example, [Series 1]. true. datacontains a!recordData or a record type reference, the categories and series parameters are ignored. a!measure()to perform a calculation on a single field. If you need to perform multiple calculations within the same chart, use categoriesand seriesto configure your chart.: Displays the following: See also: a!queryRecordType() Function, Tempo Report Best Practices
https://docs.appian.com/suite/help/20.4/Line_Chart_Component.html
2022-05-16T21:42:52
CC-MAIN-2022-21
1652662512249.16
[]
docs.appian.com
Cloud Functions Particle.syncTime() Particle.syncTime, syncTime Synchronize the time with the Particle Device Device Cloud Particle.syncTime(); lastSync = millis(); } } Note that this function sends a request message to the Cloud and then returns. The time on the device will not be synchronized until some milliseconds later when the Cloud responds with the current time between calls to your loop. See Particle.syncTimeDone(), Particle.timeSyncedLast(), Time.isValid() and Particle.syncTimePending() for information on how to wait for request to be finished. Synchronizing time does not consume Data Operations from your monthly or yearly quota. However, for cellular devices they do use cellular data, so unnecessary time synchronization can lead to increased data usage, which could result in hitting the monthly data limit for your account. For more information about real-time clocks on Particle devices, see Learn more about real-time clocks.
https://docs.particle.io/cards/firmware/cloud-functions/particle-synctime/
2022-05-16T22:13:41
CC-MAIN-2022-21
1652662512249.16
[]
docs.particle.io
-D on UNIX or Ctrl-Z, Enter). Raises. sys.path.. See also runpy.run_path() Equivalent functionality directly available to Python code If no interface option is given, -i is implied, sys.argv[0] is an empty string ( sys.path. Also, tab-completion and history editing is automatically enabled, if available on your platform (see Readline configuration). "") and the current directory will be added to the start of See also Changed in version 3.4: Automatic enabling of tab-completion and history editing. 1.1.2. Generic options¶ 1.1.3. Miscellaneous options¶ - -b¶ Issue a warning when comparing bytesor bytearraywith stror byteswith int. Issue an error when the option is given twice ( -bb). - -B¶¶ Turn on parser debugging output (for expert only, depending on compilation options). See also PYTHONDEBUG. - -E¶ Ignore all PYTHON*environment variables, e.g. PYTHONPATHand PYTHONHOME, that might be set. See also the -Pand -I(isolated) options. - -i¶¶ Run Python in isolated mode. This also implies -E, -Pand -soptions. In isolated mode sys.pathcontains neither the script’s directory nor the user’s site-packages directory. All PYTHON*environment variables are ignored, too. Further restrictions may be imposed to prevent the user from injecting malicious code. New in version 3.4. - -O¶ Remove assert statements and any code conditional on the value of __debug__. Augment the filename for compiled (bytecode) files by adding .opt-1before the .pycextension (see PEP 488). See also PYTHONOPTIMIZE. - -OO¶ Do -Oand also discard docstrings. Augment the filename for compiled (bytecode) files by adding .opt-2before the .pycextension (see PEP 488). - -P¶ Don’t prepend a potentially unsafe path to sys.path: python -m modulecommand line: Don’t prepend the current working directory. python script.pycommand line: Don’t prepend the script’s directory. If it’s a symbolic link, resolve symbolic links. python -c codeand python(REPL) command lines: Don’t prepend an empty string, which means the current working directory. See also the PYTHONSAFEPATHenvironment variable, and -Eand -I(isolated) options. New in version 3.11. - -R¶2) complexity. See for details. PYTHONHASHSEEDallows you to set a fixed value for the hash seed secret. Changed in version 3.7: The option is no longer ignored. New in version 3.2.3. - -s¶ Don’t add the user site-packages directoryto sys.path. - -S¶ Disable the import of the module siteand the site-dependent manipulations of sys.paththat it entails. Also disable these manipulations if siteis explicitly imported later (call site.main()if you want them to be triggered). - -u¶ered. - -v¶. Changed in version 3.10: The sitemodule reports the site-specific paths and .pthfiles being processed. See also PYTHONVERBOSE. - -W arg¶is the same as -Wignore. The full form of argument is: action:message:category:module:lineno Empty fields match all values; trailing empty fields may be omitted. For example -W ignore::DeprecationWarningignores all DeprecationWarning warnings. The action field is as explained above but only applies to warnings that match the remaining fields. The message field must match the wholeoptions can be given; when a warning matches more than one option, the action for the last matching option is performed. Invalid -Woptions are ignored (though, a warning message is printed about invalid options when the first warning is issued). Warnings can also be controlled using the PYTHONWARNINGSenvironment variable and from within a Python program using the warningsmodule. For example, the warnings.filterwarnings()function can be used to use a regular expression on the warning message. See The Warnings Filter and Describing Warning Filters for more details. - -x¶ Skip the first line of the source, allowing use of non-Unix forms of #!cmd. This is intended for a DOS specific hack only. - -X¶ the Python UTF-8 Mode. -X utf8=0explicitly disables Python UTF-8 Mode (even when it would otherwise activate automatically). -X pycache_prefix=PATHenables writing .pycfiles to a parallel tree rooted at the given directory instead of to the code tree. See also PYTHONPYCACHEPREFIX. -X warn_default_encodingissues a EncodingWarningwhen the locale-specific default encoding is used for opening files. See also PYTHONWARNDEFAULTENCODING. -X no_debug_rangesdis. See also PYTHONNODEBUGRANGES. -X frozen_modulesdetermines whether or not frozen modules are ignored by the import machinery. A value of “on” means they get imported and “off” means they are ignored. The default is “on” if this is an installed Python (the normal case). If it’s under development (running from the source tree) then the default is “off”. Note that the “importlib_bootstrap” and “importlib_bootstrap_external” frozen modules are always used, even if this flag is set to “off”.. New in version 3.10: The -X warn_default_encodingoption. Deprecated since version 3.9, removed in version 3.10: The -X oldparseroption. New in version 3.11: The -X no_debug_rangesoption. New in version 3.11: The -X frozen_modulesoption. 1.1.4. Options you shouldn’t use¶. - PYTHONSAFEPATH¶ If this is set to a non-empty string, don’t prepend a potentially unsafe path to sys.path: see the -Poption for details. New in version 3.11. - PYTHON. - macOS. - PYTHONDONTWRITEBYTECODE¶ If this is set to a non-empty string, Python won’t try to write .pycfiles on the import of source modules. This is equivalent to specifying the -Boption. - PYTHONPY. Changed in version 3.6: On Windows, the encoding specified by this variable is ignored for interactive console buffers unless PYTHONLEGACYWINDOWSSTDIOis also specified. Files and pipes redirected through the standard streams are not affected. -OS. - PYTHONWARNINGS¶ This is equivalent to the -Woption. If set to a comma separated string, it is equivalent to specifying -Wmultiple times, error handler, enable the Python UTF-8 Mode. If set to 0, disable the Python UTF-8 Mode. Setting any other non-empty string causes an error during interpreter initialisation. New in version 3.7. - PYTHONWARNDEFAULTENCODING¶ If this environment variable is set to a non-empty string, issue a EncodingWarningwhen the locale-specific default encoding is used. See Opt-in EncodingWarning for details. New in version 3.10. - PYTHONNODEBUGRANGES¶ If this variable is set, it dis. New in version 3.11. 1.2.1. Debug-mode variables¶ - PYTHONTHREADDEBUG¶ If set, Python will print threading debug info into stdout. Need a debug build of Python. Deprecated since version 3.10, will be removed in version 3.12. - PYTHONDUMPREFS¶ If set, Python will dump objects and reference counts still alive after shutting down the interpreter. Need Python configured with the --with-trace-refsbuild option. - PYTHONDUMPREFSFILE=FILENAME¶ If set, Python will dump objects and reference counts still alive after shutting down the interpreter into a file called FILENAME. Need Python configured with the --with-trace-refsbuild option. New in version 3.11.
https://docs.python.org/3.11/using/cmdline.html
2022-05-16T21:24:30
CC-MAIN-2022-21
1652662512249.16
[]
docs.python.org
overview darktable-chart is a tool for extracting luminance and color values from images of color reference cards such as IT8.7/1 charts. Its main purpose is to compare a source image (typically a largely unprocessed raw image) to a target image (typically a JPEG image created in-camera) and produce a darktable style that is able to use the luminance and color values of the source image to produce the target image. This style employs the tone curve module, the input color profile module, and the color look up table module for that purpose. Some cameras offer various film simulation modes of your choice. With the help of darktable-chart and the underlying modules you can create styles that replicate these film simulations from within darktable.
https://docs.darktable.org/usermanual/3.6/en/special-topics/darktable-chart/overview/
2022-05-16T22:18:14
CC-MAIN-2022-21
1652662512249.16
[]
docs.darktable.org
Edit PowerPoint Presentations This example demonstrates standard open-edit-save cycle with Presentation documents, using different options on every step. Introduction Presentation documents are presented by many formats: PPT, PPTX, PPTM, PPS(X/M), POT(X/M) and other, which are supported by GroupDocs.Editor as a separate format family among all others. Same like for all other family formats, Presentation family has its own load, edit and save options. Presentation format has significant distinction from the WordProcessing and all textual formats, and at the same time big similarity with Spreadsheet formats, — it has no pages, but instead of pages it has slides (like Spreadsheet has tabs). Like tabs in Spreadsheets, slides in Presentations are completely separate one from each other and has no valid representation in HTML markup, so the only way to edit slides is to edit them separately, one slide per one editing procedure. As a result, a Presentation document with multiple slides is loaded into the Editor class. Then, in order to open document for editing by calling an Editor.Edit method, user should select desired slide for editing by specifying its index in the PresentationEditOptions. As a result, user will obtain an instance of EditableDocument class, that holds that edited slide. For editing another slide, user should perform another call of the Editor.Edit method with another slide index, and obtain a new instance of EditableDocument class. Finally, when saving edited slides back to Presentation format, user should save every EditableDocument instance separately. Like all Office OOXML formats, Presentation documents can be encrypted with password. GroupDocs.Editor supports opening password-protected Presentation documents (password should be specified in the PresentationLoadOptions) and creating password-protected documents (password should be specified in the PresentationSaveOptions). First step for edit a document is to load it. In order to load presentation to the Editor class, user should use the PresentationLoadOptions class. It is not necessary in some general cases — even without PresentationLoadOptions the GroupDocs.Editor is able to recognize presentation format and apply appropriate default load options automatically. But when presentation is encoded, the PresentationLoadOptions is the only way to set a password and load the document properly. If document is encoded, but password is not set in the PresentationLoadOptions, or PresentationLoadOptions is not specified at all, then PasswordRequiredException exception will be thrown. If password is set, but is incorrect, then IncorrectPasswordException will be thrown. If password is set, but the document is not encoded, then password will be ignored. Below is an example of loading a presentation document from file path with load options and password. String inputPptxPath = "C://input//presentation.pptx"; PresentationLoadOptions loadOptions = new PresentationLoadOptions(); loadOptions.setPassword("password"); Editor editor = new Editor(inputPptxPath, loadOptions); Edit PowerPoint presentation For opening Presentation document for edit a PresentationEditOptions class should be used. It has two properties: SlideNumber of Integer type, and a ShowHiddenSlides, that is a boolean flag. SlideNumber is a zero-based index of a slide, that allows to specify and select one particular slide from a presentation to edit. If lesser then 0, the first slide will be selected (same as SlideNumber = 0). If greater then amount of all slides in presentation, the last slide will be selected. If input presentation contains only single slide, this option will be ignored, and this single slide will be edited. By default is 0, that implies first slide for edit. ShowHiddenSlides is a boolean flag, which specifies whether the hidden slides should be included or not. Default is false — hidden slides are not shown and exception will be thrown while trying to edit them. So, if input Presentation has 3 slides, where 2nd is hidden, user has specified SlideNumber = 1 (2nd slide) and simultaneously ShowHiddenSlides = false, the InvalidOperationException will be thrown. If parameterless overload of the Editor.Edit method is used, the default PresentationEditOptions instance will be used: first slide and disabled ShowHiddenSlides. Code example below demonstrates all options, described above. Let’s assume, that Presentation document with at least 3 slides is already loaded into the Editor class. //parameterless overload is used => default PresentationEditOptions is applied, which means 1st slide EditableDocument firstSlide = editor.edit(); PresentationEditOptions editOptions2 = new PresentationEditOptions(); editOptions2.setSlideNumber(1); //index is 0-based, so this is 2nd slide EditableDocument secondSlide = editor.edit(editOptions2); PresentationEditOptions editOptions3 = new PresentationEditOptions(); editOptions3.setSlideNumber(2); //index is 0-based, so this is 3rd slide editOptions3.setShowHiddenSlides(true); //if 3rd slide is hidden, it will be opened anyway EditableDocument thirdSlide = editor.edit(editOptions3); Save Presentation after edit PresentationSaveOptions class is designed for saving the edited Presentation documents. This class has one constructor, which has one parameter — a Presentation format, in which the output document should be saved. This output format is represented by the PresentationFormats struct. After creating an instance, output format can be obtained or modified later, through the OutputFormat property. This is the only parameter, which is necessary for saving the document: all others are optional and may be omitted. Along with format, user is able to specify a password through the Password string property — in this case output document will be encoded and protected with this password. By default password is not set, which implies that output document will be unprotected. This is shown in example below, where it is supposed that a user has an EditableDocument instance with edited slide. PresentationSaveOptions saveOptions = new PresentationSaveOptions(PresentationFormats.Pptm); saveOptions.setPassword("new password"); EditableDocument afterEdit = /* obtain it from somewhere */; FileStream outputStream = /* obtain it from somewhere */; //saving edited slide to specified stream in PPTM format and with password encoding editor.save(afterEdit, outputStream, saveOptions);
https://docs.groupdocs.com/editor/java/edit-powerpoint/
2022-05-16T22:43:46
CC-MAIN-2022-21
1652662512249.16
[]
docs.groupdocs.com
System Modes Automatic mode SYSTEM_MODE(AUTOMATIC), AUTOMATIC The automatic mode (threading disabled) of connectivity (with threading disabled) provides the default behavior of the device, which is that: SYSTEM_MODE(AUTOMATIC); void setup() { // This won't be called until the device is connected to the cloud } void loop() { // Neither will this } - When the device starts up, it automatically tries to connect to Wi-Fi or Cellular and the Particle Device Cloud. - Once a connection with the Particle Device Cloud has been established, the user code starts running. - Messages to and from the Cloud are handled in between runs of the user loop; the user loop automatically alternates with Particle.process(). Particle.process()is also called during any delay() of at least 1 second. - If the user loop blocks for more than about 20 seconds, the connection to the Cloud will be lost. To prevent this from happening, the user can call Particle.process()manually. - If the connection to the Cloud is ever lost, the device will automatically attempt to reconnect. This re-connection will block from a few milliseconds up to 8 seconds. SYSTEM_MODE(AUTOMATIC)does not need to be called, because it is the default state; however the user can invoke this method to make the mode explicit.
https://docs.particle.io/cards/firmware/system-modes/automatic-mode/
2022-05-16T22:00:26
CC-MAIN-2022-21
1652662512249.16
[]
docs.particle.io
Contents: Contents: Tip: Click the logo at the top of the menu bar to return to the Self-Managed Enterprise Edition Self-Managed Enterprise Edition or jump to creating flows and importing datasets. Tip: When keyboard shortcuts are enabled, press ?in the application to see the available shortcuts. Individual users must enable them. See User Profile Page. - Import Data: Import new datasets into Trifacta Self-Managed Enterprise Edition. Self-Managed Enterprise Edition. - Click a job ID to view its details. See Job Details Page. - Click the name of the flow to open it. See Flow View Page. - Click the name of a recipe to select it in Flow View. See Flow View Page. Actions: Resources Use the links on the left to learn more about wrangling your data. Menu Bar From the left side of the screen, you can access the top-level pages of Trifacta Self-Managed Enterprise Edition. Flows Use the Flows page to create and manage your flows. See Flows Page. - A flow is a container for one or more datasets. See Create Flow Page. Plans. Jobs After you finish building your transformation recipe, you can run jobs to execute the recipe against your dataset. The Jobs page shows status and history of your jobs. See Jobs Page.: If you have two open tabs for work on the same dataset, changes made in one browser tab may not be reflected in the other browser tab until you refresh the page. Overwriting results from one tab through another is certainly possible. This page has no comments.
https://docs.trifacta.com/pages/viewpage.action?pageId=155401917
2022-05-16T23:37:06
CC-MAIN-2022-21
1652662512249.16
[]
docs.trifacta.com
You're reading the documentation for a development version. For the latest released version, please have a look at Galactic. Building ROS 2 on macOS Table of Contents System requirements We currently support macOS Mojave (10.14). The Rolling Ridley distribution will change target platforms from time to time as new platforms become available. Most people will want to use a stable ROS distribution. Install prerequisites brew install==0.1.1 flake8-builtins \ flake8-class-newline flake8-comprehensions flake8-deprecated \ flake8-docstrings flake8-import-order flake8-quotes \ importlib-metadata lark==1.1.1 lxml matplotlib mock mypy==0.931 netifaces \ nose pep8 psutil pydocstyle pydot pygraphviz pyparsing==2.4.7 \ pytest-mock rosdep rosdistro setuptools==59.6.0 vcstool Please ensure that the $PATHenvironment variable contains the install location of the binaries (default: $HOME/Library/Python/<version>/bin) Create a workspace and clone all repos: mkdir -p ~/ros2_rolling/src cd ~/ros2_rolling wget vcs import src < ros2.repos Install additional DDS vendors (optional) If you would like to use another DDS or RTPS vendor besides the default, you can find instructions here. Build the ROS 2 code Run the colcon tool to build everything (more on using colcon in this tutorial): cd ~/ros2_rolling/ Source the ROS 2 setup file: . ~/ros2_rolling/install/setup.bash This will automatically set up the environment for any DDS vendors that support was built for. Try some examples
http://docs.ros.org/en/rolling/Installation/macOS-Development-Setup.html
2022-05-16T20:54:37
CC-MAIN-2022-21
1652662512249.16
[]
docs.ros.org
DSE version not supported by LCM LCM indicates that the current version of DSE is unsupported. When attempting to edit a Config Profile in Lifecycle Manager (LCM), the following error message might display: Unsupported DSE Version DSE 6.1.2 is not supported by this version of LCM LCM stores metadata about DataStax Enterprise (DSE) configuration parameters in definition files. These errors indicate that the specified version of DSE is not listed in the definition files available to OpsCenter and LCM. This situation can occur for the following reasons: Upgrading OpsCenter If the version of DSE is End Of Life (EOL) or End Of Service Life (EOSL), and you completed a major release upgrade of OpsCenter, then OpsCenter cannot access definition files for that DSE version. Check the supported software page for supported DSE versions. If your DSE version is unsupported, upgrade DSE to a supported version or downgrade OpsCenter to a version that supports the currently installed DSE version. Updating definition files offline By default, OpsCenter attempts to download definition files automatically from DataStax. If network connectivity or security policies prevent this outbound HTTPS connection from completing, definitions cannot be updated automatically. The result is that newly released DSE versions will not be supported by OpsCenter and LCM. Because DataStax cannot control the order and timing in which these updates are applied, it can lead to the following scenarios: Manually updating definition files improperly can cause this error to display. Consider the following example: - OpsCenter administrator manually updates OpsCenter definition files. These definition files include support for DSE 6.1.2, which was released more recently than the installed OpsCenter version. - OpsCenter user creates a configuration profile in LCM for DSE 6.1.2. - OpsCenter administrator upgrades OpsCenter to a newer version, but one which still predates DSE 6.1.2. The definition files bundled with this version of OpsCenter do not include DSE 6.1.2 support. - OpsCenter user attempts to edit the previously created configuration profile, or run a job on a cluster with that profile applied. LCM cannot access metadata associated with DSE 6.1.2 because the definition files are not included, which leads to errors. In this scenario, manually updating definition files again should restore the ability for LCM to view, edit, and run jobs associated with the DSE 6.1.2 configuration profile.
https://docs.datastax.com/en/dse-trblshoot/doc/troubleshooting/opsc/definitionsNotSupported.html
2022-05-16T21:45:37
CC-MAIN-2022-21
1652662512249.16
[]
docs.datastax.com
debops.secret¶ debops.secret role enables you to have a separate directory on the Ansible Controller (different than the playbook directory and inventory directory) which can be used as a handy "workspace" for other roles. Some usage examples of this role in DebOps include: - password lookups, either from current role, or using known location of passwords from other roles, usually dependencies (for example debops.mariadbrole can manage an user account in the database with random password and other role can lookup that password to include in a generated configuration file); - secure file storage, for example for application keys generated on remote hosts ( debops.boxbackuprole retrieves client keys for backup purposes), for that reason secret directory should be protected by an external means, for example encrypted filesystem (currently there is no encryption provided by default); - secure workspace ( debops.boxbackuprole, again, uses the secret directory to create and manage Root CA for backup servers – client and server certificates are automatically downloaded to Ansible Controller, signed and uploaded to destination hosts); - simple centralized backup (specific roles like debops.sshd, debops.pkiand debops.monkeyspherehave separate task lists that are invoked by custom playbooks to allow backup and restoration of ssh host keys and SSL certificates. Generated .tar.gz files are kept on Ansible Controller in secret directory). debops.secret - Manage sensitive data in the filesystem Copyright (C) 2013
https://docs.debops.org/en/stable-3.0/ansible/roles/secret/index.html
2022-05-16T22:10:30
CC-MAIN-2022-21
1652662512249.16
[]
docs.debops.org
Merging Tables Introduction DITA Merge supports CALS table and DITA simple table processing. The CALS table processing ensures that when syntactically and semantically valid input tables are provided, the result will be a valid CALS table. The OASIS CALS table model documentation defines validity. Simple changes to the table, such as changing the contents of an entry, adding a row or column, are generally represented as fine grain changes. Some types. Table processing configuration In DITA Merge, CALS tables and DITA simple table processing are configured separately. The following section talks about how to turn table processing on and off and set different CALS table processing modes. CALS table The CALS table processing is enabled/disabled using setCalsTableProcessing.. Warning report mode This mode specifies the way in which invalid table warnings should be reported. Different options such as comments, message or processing instruction are available to report warning. This can be configured using setWarningReportMode. CALS table validation level The CALS invalid table behaviour depends on the CALS table validation level. The CALS table validation level can either be STRICT or RELAXED. This can be configured using setCalsValidationLevel. DITA Simple table The DITA Simple table processing is enabled/disabled using setHtmlTableProcessing.
https://docs.deltaxml.com/dita-merge/5.0/Merging-Tables.2444692069.html
2022-05-16T20:53:18
CC-MAIN-2022-21
1652662512249.16
[]
docs.deltaxml.com
Using Deltas for XML Versioning (diff and patch) Introduction DeltaXML can be used to provide a 'diff and patch' capability for any XML document or data file. The diff would be generated by comparing two versions and producing a delta that contains only the differences, known as a changes-only delta. The patch capability is provided by the recombine operation. This 'diff and patch' capability can be useful where updates to XML files need to be sent using minimum bandwidth, for example in mobile or satellite applications. Another application is in situations where it is necessary to keep a full audit trail of changes to an XML file using minimum storage space. The DeltaXML DeltaV2 format comes in two main flavours: full context and changes-only. The full context delta efficiently stores the entire contents of the input files. It can be used to create a difference report or a document with changes marked-up. The changes-only delta format stores only what has changed and the necessary context for those changes. It is an ideal format for storing differences between different versions of XML documents. XML Compare comes bundled with a recombiner which can recreate Document ‘B’ with Document ‘A’ and the changes-only delta. The recombiner can also be used in reverse to recreate Document ‘A’ from Document ‘B’ and the changes-only delta. This process is heavily used at DeltaXML as part of our comprehensive roundtrip test suite (to ensure the comparator is 100% accurate). More usefully, these tools can be used when there are many versions of a document. There are various strategies that can be used to achieve this. Using Changes-only Generated Deltas Against Version 1 The most obvious approach is storing the original version of the document, and the changes-only delta between the versions. Any version can then easily be recreated by using the recombiner with the stored version and the relevant changes-only delta. The big issue with this approach is that as the document gets more revised the deltas will get larger and larger, potentially to the point where the delta is effectively the entire document. Using Changes-only Deltas Generated Against the Previous Version Another possible approach is to store the first and last versions, and changes-only deltas between each version and the last. Any version can then be recreated by either forward recombining from version 1, or reverse recombining form the latest version. The issue here is that the more versions you have, the more recombinations it will potentially take to reconstruct a version. The sensible solution here would be to store the first, latest and each nth version (what n is depends on your system’s use/data). Lexical Preservation XML Compare expects to be run in an environment where either its inputs or its output may have been processed by XSLT filters, which in particular conforms to the XSLT processing model's XDM tree model (as specified in XQuery 1.0 and XPath 2.0 Data Model). This model does not contain entries that correspond to all the XML node types, such as 'DOCTYPEs', 'entities', and 'CDATA sections and ignorable whitespace', which are removed, expanded and converted to text characters respectively as part of the XSLT parsing. Our lexical preservation filters can preserve the four mentioned XML node types, by converting them to and from XML element nodes. These preservation filters also enable processing instructions and comments, which are otherwise ignored by our comparator technology, to be retained and compared in a similar manner. For further information please refer to the How to Preserve Doctype Information sample, the How to Preserve Processing Instructions and Comments sample, and the LexicalPreservation Java API documentation. In order for the recombiner to work with filtered documents, its inputs need to be identical to those used to generate the delta. The sample ensures this by storing the marked-up version (i.e. after filtering with the LexicalPreservation filter), see graphic below (the inputs with '(Preserved)' are marked-up with the lexical preservation). The output filter is only run as the last stage when retrieving a version from the system, see below: Implementation The sample, which is available from Bitbucket, is an implementation of the second approach discussed in the previous section. It is made up of three classes: DeltasForVersioning.java - this class stores a list of Versions and offers methods for management. Its constructor offers a quick way to disable the lexical preservation filtering (the default is for it to be enabled). The important methods are: addVersion(File)- this adds the Fileas a version of the document. verifyDeltaForInput(File, File, File)- validates that the changes-only deltaV2 is valid and can be used to recreate either of the inputs, this is called by addVersion(File). This throws a InvalidChangesOnlyDeltaExceptionwhen the changes-only deltaV2 fails the validation. retrieveVersionForwards(int, File)- this retrieves the requested version of the document using forward recombine, starting with the first version of the document. retrieveVersionBackwards(int, File)- this retrieves the requested version of the document using reverse recombine, starting with the latest version. runLexicalPreservationInfilter(File input)- this runs the lexical preservation input filter, and returns the marked-up version of the input. runLexicalPreservationOutfilter(File input, File output)- this runs the lexical preservation output filter, and returns the non-marked-up version of the input (i.e. the expanded lexical content is back in its original form. Version.java - instances of this class store details about a version of the document (mainly the generated changes-only delta file). InvalidChangesOnlyDeltaException.java - an exception thrown when the generated changes-only deltaV2 is invalid. Limitations There are a few limitations with the sample implementation, including: Can only be used from PipelinedComparatorS9 - DocumentComparator does not support producing changes-only deltas The sample is implemented in Java. The Version object is storing the full copy of each document version, this should be a trivial thing to clean up. There is no persistence, so the version model needs to be built up each run. The LexicalPreservation output filter requires a Saxon PE license as the DeltaXML OEM license cannot be used in sample code. If this is a problem, please contact us./deltas-for-versioning.
https://docs.deltaxml.com/xml-compare/11.0/Using-Deltas-for-XML-Versioning-(diff-and-patch).2666145276.html
2022-05-16T22:20:48
CC-MAIN-2022-21
1652662512249.16
[]
docs.deltaxml.com
What is CPQ? CPQ (or Configure Price Quote) allows sales teams to generate quotes for products through the Salesforce UI. To create a quote you must first choose products, some of which are required to be sold together. For this reason a central concept in Salesforce CPQ is the bundle, which is simply a group of products we know should be sold together. When the sales rep chooses to sell the bundle, the product configuration interface brings in all the related products. When the Product Configuration page loads, what we see is really a page built from records—Product records, Option records, and Feature records. The configuration page itself is not actually a record, so the configuration page is not an object with its own fields. For more info about how CPQ works, we suggest completing this trail on Trailhead. Pre-deployment steps CPQ consists of both metadata and data, so before running a data deployment, you should first align the source and target metadata to make sure that all the necessary containers (metadata) are there to receive the deployed records (data). You can do so easily by running a metadata comparison using Gearset. Please also note that data deployments in Gearset are disabled by default. To enable data deployments, a team owner must manually enable them via the account page in the app. When you enable the data loader, deployments to Production will also be disabled by default, and can similarly be enabled by a team owner via the account page. Data Deployment Configuration Once you have selected a source and target, Gearset will perform a Salesforce listMetadata call to retrieve the object list from the two orgs, and display the results in a tree view. A data deployment configuration in Gearset has 3 steps. Step 1 - select the top-level objects for your data deployment Top-level objects are ones that Gearset will use as a starting point to analyse your data model. This selection of the top-level objects depends on the objective of your data deployment. At this stage you can choose to use the object filter settings to specify which records you would like to deploy by field value. You can also use complex filters to retrieve a specific subset of records. Once you select a top-level object, you will see any referenced objects automatically detected by Gearset. The Schema Builder in Salesforce (from Setup) is a handy tool for explaining the direction data flows through your system, allowing you to visualise the relationship between objects. Object relationships are a special field type (the lookup or the master-detail) that connect two objects together. The Gearset data loader detects referenced objects and brings related records into the deployment, but this referencing only works in one direction. We have written a separate support article with more tips about what to select at the top-level when configuring a CPQ data deployment: LINK. Step 2: Indirect dependencies and matching existing records. At this second step of the configuration, Gearset will traverse the dependency graph to determine any further indirect dependencies necessary to deploy records from the selected objects. You will also be able to define how to match existing records. On the left-hand side are the top-level objects that you selected in the previous step. On the right side are all the direct and indirect dependencies. For each object, you can choose to upsert records by selecting an external ID field. The ID field must be specified as an "external ID" or "lookup ID" field within Salesforce. If no such fields exist on the object, no matching of existing records will occur - all records will be inserted into the target as new. If you are deploying between 2 related orgs you might be able to upsert based on the SalesforceID. This will be available if: source is backup and target is the original org source is a sandbox, target is a sandbox, they’re sandboxes of the same org source is a prod org, target is a sandbox, target is a sandbox of the source org However, if you are deploying from an org to a non related one you will need to create an External ID field. CPQ objects often uses a randomly generated field to identify new records (for example the Name field of a Product Option record will be a randomly generated sequence, such as PO-000076). Unfortunately randomly generated fields cannot be used as external IDs, as they cannot be created by users. On several forums online, other Salesforce users have suggested a workaround: create a new text field (marked as External ID) and have an automated script copy and paste the number randomly generated by Salesforce into this new field. Let's use Product Option as an example. If we deploy from an org to its sandbox, we may be able to upsert based on SalesforceID. If we are deploying between 2 unrelated orgs, the Salesforce ID would not be available for upsert. In this case we need to have an External ID field. The best field for this would be the Name field, which is unique. However this is a field managed by Salesforce, and it is populated with a random generated text. This type of field cannot be used as an External ID for matching records. Moreover, as it's auto generated by Salesforce, the record PO-000076 in the source would be assigned to a different number (for example PO-000089) in the target, and this number cannot be modified (i.e. updated with the name that you want). Without another field for matching the two records, at every deployment you will create duplicate records because the random name will not be matched between source and target. In the example below we created an External ID field (Unique ID) and populated this field by simply copying and pasting the randomly generated Option Name field. After the deployment to the target, even if Salesforce assigns a different number to the record in the Name field, the Unique ID will keep track of what was the original name and match the records. Not all CPQ objects will require a new External ID field. When you see only create new records as an option for matching, this means that no such existing External ID field exists. If creating new records is not the desired outcome, it will be necessary to create an External ID. Step 3 - The deployment plan and templates The deployment plan is a collection of steps we run to move data from the source to the target. It is also at this stage that it is possible to create a configuration template. Data Deployment Templates save you time when setting up a new data deployment, and ensure that the same data configuration is used every time. Once you've configured a CPQ data deployment, you're able to save the configuration for that deployment as a new template (using the box at the bottom of the summary page). You can also save a template from a completed deployment. Creating a template from a deployment configuration will save: The selected objects Record limits on those objects Record filters on those objects External IDs used for matching records on those objects Data masking settings on those objects Selecting one of those templates will apply a previously saved configuration to your new data deployment (to replicate, for example, a Product bundle deployment). If you're not entirely happy with the configuration from the template, then you can still make changes to the deployment (but those changes won't update the selected template). In some cases, objects or fields from the template may not exist.. If there are no External ID fields this might be switched to “Create new records”. Gearset can also disable validation rules and triggers for you before you start your data deployment.
https://docs.gearset.com/en/articles/5211258-cpq-data-deployment
2022-05-16T21:09:00
CC-MAIN-2022-21
1652662512249.16
[]
docs.gearset.com
Azure DevOps Services With the Azure DevOps extension for Azure Command Line Interface (CLI), you can manage many Azure DevOps Services from the command line. CLI commands enable you to streamline your tasks with faster and flexible interactive canvas, bypassing user interface workflows. Note The Azure DevOps Command Line Interface (CLI) is only available for use with Azure DevOps Services. The Azure DevOps extension for the Azure CLI does not support any version of Azure DevOps Server. To start using the Azure DevOps extension for Azure CLI, perform the following steps: Install Azure CLI: Follow the instructions provided in Install the Azure CLI to set up your Azure CLI environment. At a minimum, your Azure CLI version must be 2.10.1.. To sign in using a Personal Access Token (PAT), see Sign in via Azure DevOps Personal Access Token (PAT). Configure defaults: We recommend you set the default configuration for your organization and project. Otherwise, you can set these within the individual commands themselves. az devops configure --defaults organization= project=ContosoWebApp Command usage Adding the Azure DevOps Extension adds devops, pipelines, artifacts, boards, and repos groups. For usage and help content for any command, enter shows the details of build with id 1 on the command-line and also opens it in the default browser. Related articles Feedback Submit and view feedback for
https://docs.microsoft.com/en-us/azure/devops/cli/?toc=%2Fazure%2Fdevops%2Fdev-resources%2Ftoc.json&bc=%2Fazure%2Fdevops%2Fdev-resources%2Fbreadcrumb%2Ftoc.json&view=azure-devops
2022-05-16T21:10:13
CC-MAIN-2022-21
1652662512249.16
[]
docs.microsoft.com
DocumentDB Driver Table of Contents You can manage your DocumentDB migrations by using the Mongock drivers for MongoDB # Compatibility As AWS DocumentDB relies on the MongoDB driver/api, you can use Mongock to manage your migration in the same way you would do with MongoDB, using one of the drivers Mongock provides for MongoDB. You can see how to use it in our MongoDB driver section # Resources As mentioned, AWS uses the MongoDB driver to operate with the database. However, AWS DocumentDB is mainly intended to be used inside the trusted network, which means that in order to connect from outside of the VPN in which the database resides,some extra work related to security is required. The next resources can be helpful to address some of the common issues.
https://docs.mongock.io/v5/driver/documentdb/index.html
2022-05-16T20:55:36
CC-MAIN-2022-21
1652662512249.16
[]
docs.mongock.io
AN032 Calling API from Web Page This example illustrates a few techniques for calling the Particle API from a web page. The first few examples use the Blink an LED from the cloud firmware. You can use this firmware on most Particle devices, both Wi-Fi and Cellular, including the Argon, Boron, Photon, and Electron. Embedding the token in the HTML DO NOT USE THIS TECHNIQUE Really, this is a horrible idea. It's tempting because it's so easy, but you really should not do this. In this example we copy and paste our access token and device ID into the HTML source. The problem is that anyone can just "View Source" on the web page and get our access token. With that token, they can log into the Console, manage devices, delete source code from the Web IDE, pretty much everything. If you are using source code management like GitHub, it's easy to accidentally share your account access token if you paste it into your code and have to remove it every time you commit a change. Log in from page A more secure option is to prompt the user for their username and password and use that to log into the Particle cloud. This example: - Uses Particle API JS to interact with the Device from Javascript. - Uses jQuery to handle some AJAX (asynchronous HTTP requests) and web page manipulation. - Prompt for login to a Particle developer account. - Handles MFA (multi-factor authentication) if is is enabled on the account. - Stores a short-lived access token in browser local storage. Download this file and save it to disk on your computer. Then double-click to open it in your browser. The technique in this example works from file:// URLs so you don't need to host the page on an actual web server to use it. The user interface isn't very pretty because we've left out the CSS styling, but it works. The HTML The HTML page is really three separate user interfaces that are either displayed or hidden as needed. mainDivis the main user interface. If you want to play around with adding some other controls, here's where to put them. loginDivis where the user is prompted to log in. otpDivis where the user is prompted to enter their MFA token. <body> <div id="mainDiv" style="display: none;"> <h3>Control an LED!</h3> <form> <p>Device: <select id="deviceSelect"></select></p> <p><button id="ledOnButton">LED On</button> <button id="ledOffButton">LED Off</button></p> <p> </p> <p><span id="statusSpan"></span></p> <p> </p> <p>Logged in as <span id="userSpan"></span> <button id="logoutButton">Log out</button></p> </form> </div> <div id="loginDiv"> <h3>Login to Particle</h3> <div id="loginFailedDiv" style="display: none;"> <p>Login failed, please try again.</p> </div> <form id="loginForm"> <p>Username (Email): <input type="text" id="userInput" /></p> <p>Password: <input type="password" id="passwordInput" /></p> <p><input type="submit" value="Login" /></p> </form> </div> <div id="otpDiv" style="display: none;"> <form id="otpForm"> <p>One-time password from your authenticator app: <input type="text" id="otpInput" /></p> <p><input type="submit" value="Login" /></p> </form> </div> </body> These are the external Javascript requirements for jQuery and Particle API JS. <script src=" <script src=" The Javascript There's a fair amount of Javascript embedded in the top of the HTML file, but it breaks down into manageable chunks: In jQuery, this marks the beginning of the function that is called when the HTML document and its scripts have been loaded. $(document).ready(function () { This is a single-page app that handles all operations using AJAX (asynchronous Javascript) and never reloads the page. Here's how we attach a function to be called when the Login form is submitted in jQuery. This also prevents the page from changing. $('#loginForm').submit(function (e) { e.preventDefault(); We hide the login page. We'll bring it back if login fails, or we'll display the OTP (MFA/one time password) page or the main page. This application uses your browser session storage. When you close the browser window, the storage goes away, which helps keep your access token secure. On your own computer, you could switch the locations that use sessionStorage to use localStorage which persists across closing the browser. This would eliminate the need to log in every time. You may need to adjust the lifetime of the localStorage and your Particle token to fit your needs. $('#loginDiv').css('display', 'none'); $('#loginFailedDiv').css('display', 'none'); sessionStorage.particleUser = $('#userInput').val(); This chunk of code handles logging in. While the Particle API JS has a login function, it does not work with MFA. This code works for accounts with MFA enabled or disabled. If you want to change the lifetime of the access token, change the value in expires_in. It's in seconds (3600 = 1 hour). The code that handles the 403 error is used for MFA. It enables the otpDiv for the user to enter their one-time password. And on success, it calls the loginSuccess() method with the access token we received. $.ajax({ data: { 'client_id': 'particle', 'client_secret': 'particle', 'expires_in': 3600, 'grant_type': 'password', 'password': $('#passwordInput').val(), 'username': $('#userInput').val() }, error: function (jqXHR, textStatus, errorThrown) { if (jqXHR.status === 403) { // Got a 403 error, MFA required. Show the MFA/OTP page. mfa_token = jqXHR.responseJSON.mfa_token; $('#otpDiv').css('display', 'inline'); return; } console.log('error ' + textStatus, errorThrown); $('#loginDiv').css('display', 'inline'); $('#loginFailedDiv').css('display', 'inline'); }, method: 'POST', success: function (data) { loginSuccess(data.access_token); }, url: ' }); In the MFA case, this code handles making the second part of the login process. It passes the mfa_token we got in the first step (that returned the 403) and adds in the otp that the user entered in the text box. $('#otpForm').submit(function (e) { // Login on the OTP/MFA form e.preventDefault(); $('#otpDiv').css('display', 'none'); $.ajax({ data: { 'client_id': 'particle', 'client_secret': 'particle', 'grant_type': 'urn:custom:mfa-otp', 'mfa_token': mfa_token, 'otp': $('#otpInput').val() }, error: function (jqXHR, textStatus, errorThrown) { // Invalid MFA token $('#loginDiv').css('display', 'inline'); $('#loginFailedDiv').css('display', 'inline'); }, method: 'POST', success: function (data) { loginSuccess(data.access_token); }, url: ' }); }); The Particle API JS does not have a function to delete the current access token but it's easy to do with AJAX. $('#logoutButton').on('click', function (e) { // Logout button clicked e.preventDefault(); // Delete the access token from local session storage const accessToken = sessionStorage.particleToken; delete sessionStorage.particleToken; delete sessionStorage.particleUser; // Invalidate the token on the cloud side $.ajax({ data: { 'access_token': accessToken }, method: 'DELETE', complete: function () { // Show the login page $('#mainDiv').css('display', 'none'); $('#loginDiv').css('display', 'inline'); $('#loginFailedDiv').css('display', 'none'); }, url: ' }); }); This is where we handle the two user-oriented buttons. The ledControl() function is called to do either the on or off operation. $('#ledOnButton').on('click', function (e) { e.preventDefault(); ledControl('on'); }); $('#ledOffButton').on('click', function (e) { e.preventDefault(); ledControl('off'); }); And this code will attempt to use a previously saved token. This makes it possible to refresh the page and not have to reenter your password every time. if (sessionStorage.particleToken) { // We have a Particle access token in the session storage. Probably // refreshed the page, so try to use it. You don't need to log in // every time, you can reuse the access token if it has not expired. $('#loginDiv').css('display', 'none'); getDevices(); } There are three cases that lead to calling getDevices(): - Already have a token (page refresh) - Successfully logged in in one step (MFA disabled) - Successfully logged in after the OTP entered (MFA enabled) The getDevices() function uses Particle API JS to call the listDevices() function. It uses the access token stored in the session storage. It's possible that this token has expired (the code currently sets the expiration to 1 hour), and if so it brings back up the login window. If it's successful, then it shows the mainDiv and loads the device list into the popup using loadDeviceList(). The Particle API JS is asynchronous, and uses promises. The parts in the then are called later. The first function is called on successful completion, or the second is called on error. function getDevices() { // Request the device list from the cloud particle.listDevices({ auth: sessionStorage.particleToken }).then( function (data) { // Success! Show the main page $('#mainDiv').css('display', 'inline'); // Load the device selector popup loadDeviceList(data.body); }, function (err) { // Failed to retrieve the device list. The token may have expired // so prompt for login again. $('#mainDiv').css('display', 'none'); $('#loginDiv').css('display', 'inline'); $('#loginFailedDiv').css('display', 'inline'); } ); } The loadDeviceList() function initializes the "logged in as" field then builds the device select popup menu. It only includes devices that implement the led Particle.function, and also includes a label if the device is currently offline. function loadDeviceList(deviceList) { let html = ''; $('#userSpan').text(sessionStorage.particleUser); deviceList.forEach(function (dev) { if (dev.functions.includes('led')) { html += '<option value="' + dev.id + '">' + dev.name + (dev.online ? '' : ' (offline)') + '</option>'; } }); $('#deviceSelect').html(html); if (html == '') { $('#statusSpan').text('No device are running led control firmware'); } else { $('#statusSpan').text(''); } } And last but not least, the function that is called to turn on or off the LED: This gets the selected Device ID from the select, and then calls the led particle function on it with a parameter of on or off. function ledControl(cmd) { // Used to turn on or off the LED by using the Particle.function "led" const deviceId = $('#deviceSelect').val(); $('#statusSpan').text(''); particle.callFunction({ deviceId, name: 'led', argument: cmd, auth: sessionStorage.particleToken }).then( function (data) { $('#statusSpan').text('Call completed'); }, function (err) { $('#statusSpan').text('Error calling device: ' + err); } ); } Simple server These techniques work fine when the user can log into an account, but what if you want a public page where an anonymous user can make a function call without exposing your access token? In this case, you may want to use your own server that acts as front-end for allowed calls. This allows precise control over what is allowed, and allows the access token to be stored in the back-end, safely away from prying eyes looking at the HTML source. This example uses Express JS in node.js. You can download the files associated with this app note as a zip file. This example is in the server directory. This is the node server source: This is a node app so it includes a package.json file with its dependencies. To install these dependencies, use: cd server npm install Then run it. The preferred way is to use environment variables: export AUTH_TOKEN=fe12630d2dbbd1ca6e8e28bd5a4b953dd3f1c53f export DEVICE_ID=1d002a000547343232363230 node app.js Or for Windows: set AUTH_TOKEN=fe12630d2dbbd1ca6e8e28bd5a4b953dd3f1c53f set DEVICE_ID=1d002a000547343232363230 node app.js You can also do this using command line options, but this is less secure. node app.js --auth-token fe12630d2dbbd1ca6e8e28bd5a4b953dd3f1c53f --device-id 1d002a000547343232363230 Then open the site in a browser: The source in app.js is a pretty normal express app. It serves the files in the public directory just like a regular web server. The only special code, and the reason for having the server in the first place, is to handle the special /led URL. What this does is translates calls to the /led URL into Particle API calls using the specified access token and Device ID. This keeps the actual token and device ID safely in the server, while allowing it to be called from an anonymous browser. app.post('/led', function (req, res) { const arg = req.body.arg; particle.callFunction({ deviceId: deviceId, name: 'led', argument: arg, auth: token }) .then(function (data) { // console.log('call success'); res.json({ ok: true }); }, function (err) { // console.log('call failure', err); res.json({ ok: false, err }); }); }); This is the HTML source. This is much simpler than even the first example because all of the API logic has been moved into the server. The only noticeable change (other than removing a lot of stuff) is the ledControl() function. Instead of calling the Particle API, it calls the led function on the server that it is served from. Note that this does not have any authentication, but your token is still safe because it's stored in the server, not in your browser. function ledControl(cmd) { // Used to turn on or off the LED by using the Particle.function "led" $('#statusSpan').text(''); $.ajax({ contentType: "application/json; charset=utf-8", data: JSON.stringify({ 'arg': cmd }), error: function (jqXHR, textStatus, errorThrown) { $('#statusSpan').text('Error calling led server'); }, method: 'POST', success: function (data, textStatus, jqXJHR) { const respObj = jqXJHR.responseJSON; if (!respObj.ok) { $('#statusSpan').text('Error calling device: ' + respObj.err.body.error); } }, url: '/led' }); } Sensor Page What if you wanted to send data in the other direction - from the device to a web page? You can do that too! This example demonstrates a few useful techniques: - Smoothing data in device firmware - Publishing values - Subscribing to events from a web page It's based on the login example above, so a lot of the code will already be familiar. You can download the files associated with this app note as a zip file. This example is in the two sensorPage files (.cpp and .htm). The Circuit For testing this, instead of using an actual sensor I used a potentiometer. One side is connected to GND (black), the other side to 3V3 (red), and the center wiper is connected to A0 (orange). A better illustration of how you would use this would be a tank level sensor, but this particular sensor is almost 4 feet long and is incredibly unwieldy when sitting on my desk. Device Firmware A few things to note in this firmware: This configures which pin the analog sensor is connected to. const pin_t SENSOR_PIN = A0; A sensor can sometime have a little jitter to it. Averaging the samples can reduce this. This code is where you configure how many samples to average. It's being run at 1000 samples per second and this seems to be a good compromise with RAM usage. const size_t NUM_SAMPLES_TO_AVERAGE = 50; int16_t samples[NUM_SAMPLES_TO_AVERAGE]; size_t sampleIndex = 0; Additionally, this code only publishes when: - The change from the last publish exceeds MIN_DELTA_TO_PUBLISH(on a scale of 0 - 4095). - It's been MIN_PUBLISH_PERIODsince the last publish. It's 1 second here ( 1s) but you might want to make this longer. - You're connected to the Particle cloud (breathing cyan). const int MIN_DELTA_TO_PUBLISH = 5; const std::chrono::milliseconds MIN_PUBLISH_PERIOD = 1s; This is the event name that is published. Make sure it doesn't conflict with any events you are using. This is also in the HTML/Javascript code. const char *EVENT_NAME = "sensorValueEvent"; In loop() we check the sensor using analogRead(). The samples array is the samples. We write to the array in a circular fashion. sampleIndex starts out at zero and increments for each sample added. However, we take it modulo NUM_SAMPLES_TO_AVERAGE using the NUM_SAMPLES_TO_AVERAGE. %operator, so the array index will always be in the range of 0 <= index < Once we have enough samples we sum the entire sample buffer and divide to get the average (mean). void loop() { samples[sampleIndex++ % NUM_SAMPLES_TO_AVERAGE] = (int16_t) analogRead(SENSOR_PIN); if (sampleIndex >= NUM_SAMPLES_TO_AVERAGE) { // Sum the recent samples to calculate the mean int sum = 0; for(size_t ii = sampleIndex - NUM_SAMPLES_TO_AVERAGE; ii < sampleIndex; ii++) { sum += (int) samples[ii % NUM_SAMPLES_TO_AVERAGE]; } int mean = (sum / NUM_SAMPLES_TO_AVERAGE); The rest of the code checks to see if the delta from last publish is large enough and enough time has elapsed, and the device is connected to the cloud. If all three are true, then the value is published! int delta = abs(mean - lastValue); if (delta > MIN_DELTA_TO_PUBLISH && millis() - lastPublish >= MIN_PUBLISH_PERIOD.count() && Particle.connected()) { // lastPublish = millis(); lastValue = mean; Particle.publish(EVENT_NAME, String(mean)); Log.info("published %d", mean); } Web Page As in the earlier example, download the file and double-click to open in your web browser. When a device running the sensor firmware above comes online, it will start publishing values and the page will automatically update. If you refresh the page you may have to move the potentiometer as for simplicity the code does not send up values when the value does not change. That would be a good enhancement. As you add new devices, new rows are added to the table. As the values change, the progress bar updates. This works for both cellular and Wi-Fi. It's normally pretty fast, within a second or so, but if a cellular device has been idle for a while it may take longer for the first update (up to 10 seconds) but then subsequent updates will be much faster. Most of the code is the same as the login page example above, however there are a few differences: Upon retrieving the device list, we build a hash which maps the device IDs to device names for devices in the account that logged in. deviceList = data.body; deviceIdToName = {}; deviceList.forEach(function(dev) { deviceIdToName[dev.id] = dev.name; }); Also we open up a new server-sent-events data stream to monitor this account's event stream for events whose names begin with sensorValueEvent. particle.getEventStream({ name: 'sensorValueEvent', auth: sessionStorage.particleToken }).then( function (stream) { console.log('starting event stream'); stream.on('event', function (eventData) { showSensor(eventData) }); }); The showSensor() function maps the Device ID (that we get from SSE) into the device name. It also grabs the value from the eventData. The value is an ASCII representation of the numeric value, base 10. function showSensor(eventData) { // eventData.coreid = Device ID const deviceId = eventData.coreid; // We retrieved the device list at startup to validate the access token // and also to be able to map device IDs to device names. const deviceName = deviceIdToName[deviceId] || deviceId; // eventData.data = event payload const sensorValue = parseInt(eventData.data); This code checks to see if there's a progress bar for this device already. If not, it adds a row to the sensorTable for this device with the device name and a new progress bar. if ($('#prog' + deviceId).length == 0) { // Add a row let html = '<tr><td>' + deviceName + '</td><td><progress id="prog' + deviceId + '" value="0" max="4095"></progress></td></tr>'; $('#sensorTable > tbody').append(html); } Finally, whether the row is new or not, the progress bar is updated with the value. $('#prog' + deviceId).val(sensorValue); Log in manually (not using single sign-on)
https://docs.particle.io/datasheets/app-notes/an032-calling-api-from-web-page/
2022-05-16T22:39:21
CC-MAIN-2022-21
1652662512249.16
[]
docs.particle.io
Press Release MONTGOMERY, AL– November 18, 2021- Trustmark is pleased to announce a community-wide partnership with Operation HOPE and City of Montgomery to offer opportunities for financial education and counseling. Representatives from Trustmark joined local leaders from the City of Montgomery, along with representatives from Operation HOPE to make the partnership announcement today. “Trustmark has a long-standing commitment to helping our clients establish financial success to build better lives,” said Duane Dewey, Trustmark CEO. “We believe Trustmark’s partnership with Operation HOPE and the City of Montgomery will expand opportunities for financial empowerment to individuals, businesses and families in the Montgomery community.” An Operation HOPE Inside location is now located at Trustmark’s Carmichael Road location in Montgomery. “Trustmark associates will continue to look for opportunities to offer financial solutions to all of our customers and offering this unique counseling service through Operation HOPE is yet another way of offering tools for financial success,” said Tod Etheredge, Trustmark Montgomery President. “This partnership allows us to make lasting change in the communities we serve here in Montgomery.” Trustmark associates and the Operation HOPE Wellbeing Coach will work with customers in Montgomery to create customized financial plans and small business development. “Operation HOPE is on a mission to help every American reach their potential and fully participate in the greatest economy on earth,” said John Hope Bryant, Operation HOPE CEO and founder. “I began Operation HOPE nearly 30 years ago with a vision that financial literacy can change the fortunes of those who are less fortunate. I am excited to launch this initiative and build on that vision with this new partnership with Trustmark and the City of Montgomery.” This is the third Operation HOPE partnership with Trustmark; Operation HOPE coaches are located at Trustmark’s Poplar Plaza location in Memphis, Tennessee and at Trustmark’s Financial CORE location at Metro Center in Jackson, Mississippi. Leaders with the City of Montgomery say they welcome the partnership with Trustmark and Operation HOPE. “My administration has prioritized working with Trustmark and Operation HOPE to ensure everyone in Montgomery has the chance to live, learn and earn,” said Steven Reed, Mayor of Montgomery. “This partnership will significantly increase accessibility to life-changing fiscal tools and resources that change the financial trajectory for many residents and entrepreneurs in our community.” About Trustmark FBBINSURANCE. Visit trustmark.com for more information. Contact Information: Melanie MorganDirector of Corporate Communications & Marketing601.208.2979
https://docs.publicnow.com/viewDoc?hash_primary=49872BA62B7EFD8B319106754DFA120BBC435585
2022-05-16T22:31:08
CC-MAIN-2022-21
1652662512249.16
[]
docs.publicnow.com
Protocol Specification Preface This document is the protocol specification for a permissioned blockchain implementation for industry use-cases. It is not intended to be a complete explanation of the implementation, but rather a description of the interfaces and relationships between components in the system and the application. Intended Audience The intended audience for this specification includes the following groups: - Blockchain vendors who want to implement blockchain systems that conform to this specification - Tool developers who want to extend the capabilities of the fabric - Application developers who want to leverage blockchain technologies to enrich their applications Table of Contents 1. Introduction 2. Fabric - 2.1 Architecture - 2.1.1 Membership Services - 2.1.2 Blockchain Services - 2.1.3 Chaincode Services - 2.1.4 Events - 2.1.5 Application Programming Interface - 2.1.6 Command Line Interface - 2.2 Topology - 2.2.1 Single Validating Peer - 2.2.2 Multiple Validating Peers - 2.2.3 Multichain 3. Protocol - 3.1 Message - 3.1.1 Discovery Messages - 3.1.2 Transaction Messages - 3.1.2.1 Transaction Data Structure - 3.1.2.2 Transaction Specification - 3.1.2.3 Deploy Transaction - 3.1.2.4 Invoke Transaction - 3.1.2.5 Query Transaction - 3.1.3 Synchronization Messages - 3.1.4 Consensus Messages - 3.2 Ledger - 3.2.1 Blockchain - 3.2.1.1 Block - 3.2.1.2 Block Hashing - 3.2.1.3 NonHashData - 3.2.1.4 Transaction - 3.2.2 World State - 3.2.2.1 Hashing the world state - 3.2.2.1.1 Bucket-tree - 3.3 Chaincode - 3.3.1 Virtual Machine Instantiation - 3.3.2 Chaincode Protocol - 3.3.2.1 Chaincode Deploy - 3.3.2.2 Chaincode Invoke - 3.3.2.3 Chaincode Query - 3.3.2.4 Chaincode State - 3.4 Pluggable Consensus Framework - 3.4.1 Consenter interface - 3.4.2 Consensus Programming Interface - 3.4.3 Inquirer interface - 3.4.4 Communicator interface - 3.4.5 SecurityUtils interface - 3.4.6 LedgerStack interface - 3.4.7 Executor interface - 3.4.7.1 Beginning a transaction batch - 3.4.7.2 Executing transactions - 3.4.7.3 Committing and rolling-back transactions - 3.4.8 Ledger interface - 3.4.8.1 ReadOnlyLedger interface - 3.4.8.2 UtilLedger interface - 3.4.8.3 WritableLedger interface - 3.4.9 RemoteLedgers interface - 3.4.10 Controller package - 3.4.11 Helper package - 3.5 Events - 3.4.1 Event Stream - 3.4.2 Event Structure - 3.4.3 Event Adapters 4. Security - 4. Security - 4.1 Business security requirements - 4.2 User Privacy through Membership Services - 4.2.1 User/Client Enrollment Process - 4.2.2 Expiration and revocation of certificates - 4.2.3 Online wallet service - 4.3 Transaction security offerings at the infrastructure level - 4.3.1 Security lifecycle of transactions - 4.3.2 Transaction confidentiality - 4.3.2.1 Confidentiality against users - 4.3.2.2 Confidentiality against validators - 4.3.3 Invocation access control - 4.3.4 Replay attack resistance - 4.4 Access control features on the application - 4.4.1 Invocation access control - 4.4.2 Read access control - 4.5 Online wallet service - 4.6 Network security (TLS) - 4.7 Restrictions in the current release - 4.7.1 Simplified client - 4.7.1 Simplified transaction confidentiality 5. Byzantine Consensus - 5.1 Overview - 5.2 Core PBFT 6. Application Programming Interface - 6.1 REST Service - 6.2 REST API - 6.3 CLI 7. Application Model - 7.1 Composition of an Application - 7.2 Sample Application 8. Future Directions - 8.1 Enterprise Integration - 8.2 Performance and Scalability - 8.3 Additional Consensus Plugins - 8.4 Additional Languages 9.1 Authors 9.2 Reviewers 9.3 Acknowledgements 10. References 1. Introduction This document specifies the principles, architecture, and protocol of a blockchain implementation suitable for industrial use-cases. 1.1 What is the fabric? The fabric. Transactions are secured, private, and confidential. Each participant registers with proof of identity to the network membership services to gain access to the system. Transactions are issued with derived certificates unlinkable to the individual participant, offering a complete anonymity on the network. Transaction content is encrypted with sophisticated key derivation functions to ensure only intended participants may see the content, protecting the confidentiality of the business transactions. The ledger allows compliance with regulations as ledger entries are auditable in whole or in part. In collaboration with participants, auditors may obtain time-based certificates to allow viewing the ledger and linking transactions to provide an accurate assessment of the operations. The fabric is an implementation of blockchain technology, where Bitcoin could be a simple application built on the fabric. It is a modular architecture allowing components to be plug-and-play by implementing this protocol specification. It features powerful container technology to host any main stream language for smart contracts development. Leveraging familiar and proven technologies is the motto of the fabric architecture. 1.2 Why the fabric? Early blockchain technology serves a set of purposes but is often not well-suited for the needs of specific industries. To meet the demands of modern markets, the fabric is based on an industry-focused design that addresses the multiple and varied requirements of specific industry use cases, extending the learning of the pioneers in this field while also addressing issues such as scalability. The fabric provides a new approach to enable permissioned networks, privacy, and confidentially on multiple blockchain networks. 1.3 Terminology The following terminology is defined within the limited scope of this specification to help readers understand clearly and precisely the concepts described here. Transaction is a request to the blockchain to execute a function on the ledger. The function is implemented by a chaincode. Transactor is an entity that issues transactions such as a client application. Ledger is a sequence of cryptographically linked blocks, containing transactions and current world state. World State is the collection of variables containing the results of executed transactions. Chaincode is an application-level code (a.k.a. smart contract) stored on the ledger as a part of a transaction. Chaincode runs transactions that may modify the world state. Validating Peer is a computer node on the network responsible for running consensus, validating transactions, and maintaining the ledger. Non-validating Peer is a computer node on the network which functions as a proxy connecting transactors to the neighboring validating peers. A non-validating peer doesn’t execute transactions but does verify them. It also hosts the event stream server and the REST service. Permissioned Ledger is a blockchain network where each entity or node is required to be a member of the network. Anonymous nodes are not allowed to connect. Privacy is required by the chain transactors to conceal their identities on the network. While members of the network may examine the transactions, the transactions can’t be linked to the transactor without special privilege. Confidentiality is the ability to render the transaction content inaccessible to anyone other than the stakeholders of the transaction. Auditability of the blockchain is required, as business usage of blockchain needs to comply with regulations to make it easy for regulators to investigate transaction records. 2. Fabric The fabric is made up of the core components described in the subsections below. 2.1 Architecture The reference architecture is aligned in 3 categories: Membership, Blockchain, and Chaincode services. These categories are logical structures, not a physical depiction of partitioning of components into separate processes, address spaces or (virtual) machines. 2.1.1 Membership Services Membership provides services for managing identity, privacy, confidentiality and auditability on the network. In a non-permissioned blockchain, participation does not require authorization and all nodes can equally submit transactions and/or attempt to accumulate them into acceptable blocks, i.e. there are no distinctions of roles. Membership services combine elements of Public Key Infrastructure (PKI) and decentralization/consensus to transform a non-permissioned blockchain into a permissioned blockchain. In the latter, entities register in order to acquire long-term identity credentials (enrollment certificates), and may be distinguished according to entity type. In the case of users, such credentials enable the Transaction Certificate Authority (TCA) to issue pseudonymous credentials. Such credentials, i.e., transaction certificates, are used to authorize submitted transactions. Transaction certificates persist on the blockchain, and enable authorized auditors to cluster otherwise unlinkable transactions. 2.1.2 Blockchain Services Blockchain services manage the distributed ledger through a peer-to-peer protocol, built on HTTP/2. The data structures are highly optimized to provide the most efficient hash algorithm for maintaining the world state replication. Different consensus (PBFT, Raft, PoW, PoS) may be plugged in and configured per deployment. 2.1.3 Chaincode Services Chaincode services provides a secured and lightweight way to sandbox the chaincode execution on the validating nodes. The environment is a “locked down” and secured container along with a set of signed base images containing secure OS and chaincode language, runtime and SDK layers for Go, Java, and Node.js. Other languages can be enabled if required. 2.1.4 Events Validating peers and chaincodes can emit events on the network that applications may listen for and take actions on. There is a set of pre-defined events, and chaincodes can generate custom events. Events are consumed by 1 or more event adapters. Adapters may further deliver events using other vehicles such as Web hooks or Kafka. 2.1.5 Application Programming Interface (API) The primary interface to the fabric is a REST API and its variations over Swagger 2.0. The API allows applications to register users, query the blockchain, and to issue transactions. There is a set of APIs specifically for chaincode to interact with the stack to execute transactions and query transaction results. 2.1.6 Command Line Interface (CLI) CLI includes a subset of the REST API to enable developers to quickly test chaincodes or query for status of transactions. CLI is implemented in Golang and operable on multiple OS platforms. 2.2 Topology A deployment of the fabric can consist of a membership service, many validating peers, non-validating peers, and 1 or more applications. All of these components make up a chain. There can be multiple chains; each one having its own operating parameters and security requirements. 2.2.1 Single Validating Peer Functionally, a non-validating peer is a subset of a validating peer; that is, every capability on a non-validating peer may be enabled on a validating peer, so the simplest network may consist of a single validating peer node. This configuration is most appropriate for a development environment, where a single validating peer may be started up during the edit-compile-debug cycle. A single validating peer doesn’t require consensus, and by default uses the noops plugin, which executes transactions as they arrive. This gives the developer an immediate feedback during development. 2.2.2 Multiple Validating Peers Production or test networks should be made up of multiple validating and non-validating peers as necessary. Non-validating peers can take workload off the validating peers, such as handling API requests and processing events. The validating peers form a mesh-network (every validating peer connects to every other validating peer) to disseminate information. A non-validating peer connects to a neighboring validating peer that it is allowed to connect to. Non-validating peers are optional since applications may communicate directly with validating peers. 2.2.3 Multichain Each network of validating and non-validating peers makes up a chain. Many chains may be created to address different needs, similar to having multiple Web sites, each serving a different purpose. 3. Protocol The fabric’s peer-to-peer communication is built on gRPC, which allows bi-directional stream-based messaging. It uses Protocol Buffers to serialize data structures for data transfer between peers. Protocol buffers are a language-neutral, platform-neutral and extensible mechanism for serializing structured data. Data structures, messages, and services are described using proto3 language notation. 3.1 Message Messages passed between nodes are encapsulated by Message proto structure, which consists of 4 types: Discovery, Transaction, Synchronization, and Consensus. Each type may define more subtypes embedded in the payload. message Message { enum Type { UNDEFINED = 0; DISC_HELLO = 1; DISC_DISCONNECT = 2; DISC_GET_PEERS = 3; DISC_PEERS = 4; DISC_NEWMSG = 5; CHAIN_STATUS = 6; CHAIN_TRANSACTION = 7; CHAIN_GET_TRANSACTIONS = 8; CHAIN_QUERY = 9; SYNC_GET_BLOCKS = 11; SYNC_BLOCKS = 12; SYNC_BLOCK_ADDED = 13; SYNC_STATE_GET_SNAPSHOT = 14; SYNC_STATE_SNAPSHOT = 15; SYNC_STATE_GET_DELTAS = 16; SYNC_STATE_DELTAS = 17; RESPONSE = 20; CONSENSUS = 21; } Type type = 1; bytes payload = 2; google.protobuf.Timestamp timestamp = 3; } The payload is an opaque byte array containing other objects such as Transaction or Response depending on the type of the message. For example, if the type is CHAIN_TRANSACTION, the payload is a Transaction object. 3.1.1 Discovery Messages Upon start up, a peer runs discovery protocol if CORE_PEER_DISCOVERY_ROOTNODE is specified. CORE_PEER_DISCOVERY_ROOTNODE is the IP address of another peer on the network (any peer) that serves as the starting point for discovering all the peers on the network. The protocol sequence begins with DISC_HELLO, whose payload is a HelloMessage object, containing its endpoint: message HelloMessage { PeerEndpoint peerEndpoint = 1; uint64 blockNumber = 2; } message PeerEndpoint { PeerID ID = 1; string address = 2; enum Type { UNDEFINED = 0; VALIDATOR = 1; NON_VALIDATOR = 2; } Type type = 3; bytes pkiID = 4; } message PeerID { string name = 1; } Definition of fields: PeerIDis any name given to the peer at start up or defined in the config file PeerEndpointdescribes the endpoint and whether it’s a validating or a non-validating peer pkiIDis the cryptographic ID of the peer addressis host or IP address and port of the peer in the format ip:port blockNumberis the height of the blockchain the peer currently has If the block height received upon DISC_HELLO is higher than the current block height of the peer, it immediately initiates the synchronization protocol to catch up with the network. After DISC_HELLO, peer sends DISC_GET_PEERS periodically to discover any additional peers joining the network. In response to DISC_GET_PEERS, a peer sends DISC_PEERS with payload containing an array of PeerEndpoint. Other discovery message types are not used at this point. 3.1.2 Transaction Messages There are 3 types of transactions: Deploy, Invoke and Query. A deploy transaction installs the specified chaincode on the chain, while invoke and query transactions call a function of a deployed chaincode. Another type in consideration is Create transaction, where a deployed chaincode may be instantiated on the chain and is addressable. This type has not been implemented as of this writing. 3.1.2.1 Transaction Data Structure Messages with type CHAIN_TRANSACTION or CHAIN_QUERY carry a Transaction object in the payload: message Transaction { enum Type { UNDEFINED = 0; CHAINCODE_DEPLOY = 1; CHAINCODE_INVOKE = 2; CHAINCODE_QUERY = 3; CHAINCODE_TERMINATE = 4; } Type type = 1; string uuid = 5; bytes chaincodeID = 2; bytes payloadHash = 3; ConfidentialityLevel confidentialityLevel = 7; bytes nonce = 8; bytes cert = 9; bytes signature = 10; bytes metadata = 4; google.protobuf.Timestamp timestamp = 6; } message TransactionPayload { bytes payload = 1; } enum ConfidentialityLevel { PUBLIC = 0; CONFIDENTIAL = 1; } Definition of fields: - type - The type of the transaction, which is 1 of the following: - UNDEFINED - Reserved for future use. - CHAINCODE_DEPLOY - Represents the deployment of a new chaincode. - CHAINCODE_INVOKE - Represents a chaincode function execution that may read and modify the world state. - CHAINCODE_QUERY - Represents a chaincode function execution that may only read the world state. - CHAINCODE_TERMINATE - Marks a chaincode as inactive so that future functions of the chaincode can no longer be invoked. - chaincodeID - The ID of a chaincode which is a hash of the chaincode source, path to the source code, constructor function, and parameters. - payloadHash - Bytes defining the hash of TransactionPayload.payload. - metadata - Bytes defining any associated transaction metadata that the application may use. - uuid - A unique ID for the transaction. - timestamp - A timestamp of when the transaction request was received by the peer. - confidentialityLevel - Level of data confidentiality. There are currently 2 levels. Future releases may define more levels. - nonce - Used for security. - cert - Certificate of the transactor. - signature - Signature of the transactor. - TransactionPayload.payload - Bytes defining the payload of the transaction. As the payload can be large, only the payload hash is included directly in the transaction message. More detail on transaction security can be found in section 4. 3.1.2.2 Transaction Specification A transaction is always associated with a chaincode specification which defines the chaincode and the execution environment such as language and security context. Currently there is an implementation that uses Golang for writing chaincode. Other languages may be added in the future. message ChaincodeSpec { enum Type { UNDEFINED = 0; GOLANG = 1; NODE = 2; } Type type = 1; ChaincodeID chaincodeID = 2; ChaincodeInput ctorMsg = 3; int32 timeout = 4; string secureContext = 5; ConfidentialityLevel confidentialityLevel = 6; bytes metadata = 7; } message ChaincodeID { string path = 1; string name = 2; } message ChaincodeInput { string function = 1; repeated string args = 2; } Definition of fields: - chaincodeID - The chaincode source code path and name. - ctorMsg - Function name and argument parameters to call. - timeout - Time in milliseconds to execute the transaction. - confidentialityLevel - Confidentiality level of this transaction. - secureContext - Security context of the transactor. - metadata - Any data the application wants to pass along. The peer, receiving the chaincodeSpec, wraps it in an appropriate transaction message and broadcasts to the network. 3.1.2.3 Deploy Transaction Transaction type of a deploy transaction is CHAINCODE_DEPLOY and the payload contains an object of ChaincodeDeploymentSpec. message ChaincodeDeploymentSpec { ChaincodeSpec chaincodeSpec = 1; google.protobuf.Timestamp effectiveDate = 2; bytes codePackage = 3; } Definition of fields: - chaincodeSpec - See section 3.1.2.2, above. - effectiveDate - Time when the chaincode is ready to accept invocations. - codePackage - gzip of the chaincode source. The validating peers always verify the hash of the codePackage when they deploy the chaincode to make sure the package has not been tampered with since the deploy transaction entered the network. 3.1.2.4 Invoke Transaction Transaction type of an invoke transaction is CHAINCODE_INVOKE and the payload contains an object of ChaincodeInvocationSpec. message ChaincodeInvocationSpec { ChaincodeSpec chaincodeSpec = 1; } 3.1.2.5 Query Transaction A query transaction is similar to an invoke transaction, but the message type is CHAINCODE_QUERY. 3.1.3 Synchronization Messages Synchronization protocol starts with discovery, described above in section 3.1.1, when a peer realizes that it’s behind or its current block is not the same with others. A peer broadcasts either SYNC_GET_BLOCKS, SYNC_STATE_GET_SNAPSHOT, or SYNC_STATE_GET_DELTAS and receives SYNC_BLOCKS, SYNC_STATE_SNAPSHOT, or SYNC_STATE_DELTAS respectively. The installed consensus plugin (e.g. pbft) dictates how synchronization protocol is being applied. Each message is designed for a specific situation: SYNC_GET_BLOCKS requests for a range of contiguous blocks expressed in the message payload, which is an object of SyncBlockRange. The correlationId specified is included in the SyncBlockRange of any replies to this message. message SyncBlockRange { uint64 correlationId = 1; uint64 start = 2; uint64 end = 3; } A receiving peer responds with a SYNC_BLOCKS message whose payload contains an object of SyncBlocks message SyncBlocks { SyncBlockRange range = 1; repeated Block blocks = 2; } The start and end indicate the starting and ending blocks inclusively. The order in which blocks are returned is defined by the start and end values. For example, if start=3 and end=5, the order of blocks will be 3, 4, 5. If start=5 and end=3, the order will be 5, 4, 3. SYNC_STATE_GET_SNAPSHOT requests for the snapshot of the current world state. The payload is an object of SyncStateSnapshotRequest message SyncStateSnapshotRequest { uint64 correlationId = 1; } The correlationId is used by the requesting peer to keep track of the response messages. A receiving peer replies with SYNC_STATE_SNAPSHOT message whose payload is an instance of SyncStateSnapshot message SyncStateSnapshot { bytes delta = 1; uint64 sequence = 2; uint64 blockNumber = 3; SyncStateSnapshotRequest request = 4; } This message contains the snapshot or a chunk of the snapshot on the stream, and in which case, the sequence indicate the order starting at 0. The terminating message will have len(delta) == 0. SYNC_STATE_GET_DELTAS requests for the state deltas of a range of contiguous blocks. By default, the Ledger maintains 500 transition deltas. A delta(j) is a state transition between block(i) and block(j) where i = j-1. The message payload contains an instance of SyncStateDeltasRequest message SyncStateDeltasRequest { SyncBlockRange range = 1; } A receiving peer responds with SYNC_STATE_DELTAS, whose payload is an instance of SyncStateDeltas message SyncStateDeltas { SyncBlockRange range = 1; repeated bytes deltas = 2; } A delta may be applied forward (from i to j) or backward (from j to i) in the state transition. 3.1.4 Consensus Messages Consensus deals with transactions, so a CONSENSUS message is initiated internally by the consensus framework when it receives a CHAIN_TRANSACTION message. The framework converts CHAIN_TRANSACTION into CONSENSUS then broadcasts to the validating nodes with the same payload. The consensus plugin receives this message and process according to its internal algorithm. The plugin may create custom subtypes to manage consensus finite state machine. See section 3.4 for more details. 3.2 Ledger The ledger consists of two primary pieces, the blockchain and the world state. The blockchain is a series of linked blocks that is used to record transactions within the ledger. The world state is a key-value database that chaincodes may use to store state when executed by a transaction. 3.2.1 Blockchain 3.2.1.1 Block The blockchain is defined as a linked list of blocks as each block contains the hash of the previous block in the chain. The two other important pieces of information that a block contains are the list of transactions contained within the block and the hash of the world state after executing all transactions in the block. message Block { version = 1; google.protobuf.Timestamp timestamp = 2; bytes transactionsHash = 3; bytes stateHash = 4; bytes previousBlockHash = 5; bytes consensusMetadata = 6; NonHashData nonHashData = 7; } message BlockTransactions { repeated Transaction transactions = 1; } version- Version used to track any protocol changes. timestamp- The timestamp to be filled in by the block proposer. transactionsHash- The merkle root hash of the block’s transactions. stateHash- The merkle root hash of the world state. previousBlockHash- The hash of the previous block. consensusMetadata- Optional metadata that the consensus may include in a block. nonHashData- A NonHashDatamessage that is set to nil before computing the hash of the block, but stored as part of the block in the database. BlockTransactions.transactions- An array of Transaction messages. Transactions are not included in the block directly due to their size. 3.2.1.2 Block Hashing - The previousBlockHashhash is calculated using the following algorithm. Serialize the Block message to bytes using the protocol buffer library. Hash the serialized block message to 512 bits of output using the SHA3 SHAKE256 algorithm as described in FIPS 202. The transactionHashis the root of the transaction merkle tree. Defining the merkle tree implementation is a TODO. The stateHashis defined in section 3.2.2.1. 3.2.1.3 NonHashData The NonHashData message is used to store block metadata that is not required to be the same value on all peers. These are suggested values. message NonHashData { google.protobuf.Timestamp localLedgerCommitTimestamp = 1; repeated TransactionResult transactionResults = 2; } message TransactionResult { string uuid = 1; bytes result = 2; uint32 errorCode = 3; string error = 4; } localLedgerCommitTimestamp- A timestamp indicating when the block was commited to the local ledger. TransactionResult- An array of transaction results. TransactionResult.uuid- The ID of the transaction. TransactionResult.result- The return value of the transaction. TransactionResult.errorCode- A code that can be used to log errors associated with the transaction. TransactionResult.error- A string that can be used to log errors associated with the transaction. 3.2.1.4 Transaction Execution A transaction defines either the deployment of a chaincode or the execution of a chaincode. All transactions within a block are run before recording a block in the ledger. When chaincodes execute, they may modify the world state. The hash of the world state is then recorded in the block. 3.2.2 World State The world state of a peer refers to the collection of the states of all the deployed chaincodes. Further, the state of a chaincode is represented as a collection of key-value pairs. Thus, logically, the world state of a peer is also a collection of key-value pairs where key consists of a tuple {chaincodeID, ckey}. Here, we use the term key to represent a key in the world state i.e., a tuple {chaincodeID, ckey} and we use the term cKey to represent a unique key within a chaincode. For the purpose of the description below, chaincodeID is assumed to be a valid utf8 string and ckey and the value can be a sequence of one or more arbitrary bytes. 3.2.2.1 Hashing the world state During the functioning of a network, many occasions such as committing transactions and synchronizing peers may require computing a crypto-hash of the world state observed by a peer. For instance, the consensus protocol may require to ensure that a minimum number of peers in the network observe the same world state. Since, computing the crypto-hash of the world state could be an expensive operation, this is highly desirable to organize the world state such that it enables an efficient crypto-hash computation of the world state when a change occurs in the world state. Further, different organization designs may be suitable under different workloads conditions. Because the fabric is expected to function under a variety of scenarios leading to different workloads conditions, a pluggable mechanism is supported for organizing the world state. 3.2.2.1.1 Bucket-tree Bucket-tree is one of the implementations for organizing the world state. For the purpose of the description below, a key in the world state is represented as a concatenation of the two components ( chaincodeID and ckey) separated by a nil byte i.e., key = chaincodeID+ nil+ cKey. This method models a merkle-tree on top of buckets of a hash table in order to compute the crypto-hash of the world state. At the core of this method, the key-values of the world state are assumed to be stored in a hash-table that consists of a pre-decided number of buckets ( numBuckets). A hash function ( hashFunction) is employed to determine the bucket number that should contain a given key. Please note that the hashFunction does not represent a crypto-hash method such as SHA3, rather this is a regular programming language hash function that decides the bucket number for a given key. For modeling the merkle-tree, the ordered buckets act as leaf nodes of the tree - lowest numbered bucket being the left most leaf node in the tree. For constructing the second-last level of the tree, a pre-decided number of leaf nodes ( maxGroupingAtEachLevel), starting from left, are grouped together and for each such group, a node is inserted at the second-last level that acts as a common parent for all the leaf nodes in the group. Note that the number of children for the last parent node may be less than maxGroupingAtEachLevel. This grouping method of constructing the next higher level is repeated until the root node of the tree is constructed. An example setup with configuration {numBuckets=10009 and maxGroupingAtEachLevel=10} will result in a tree with number of nodes at different level as depicted in the following table. For computing the crypto-hash of the world state, the crypto-hash of each bucket is computed and is assumed to be the crypto-hash of leaf-nodes of the merkle-tree. In order to compute crypto-hash of a bucket, the key-values present in the bucket are first serialized and crypto-hash function is applied on the serialized bytes. For serializing the key-values of a bucket, all the key-values with a common chaincodeID prefix are serialized separately and then appending together, in the ascending order of chaincodeIDs. For serializing the key-values of a chaincodeID, the following information is concatenated: 1. Length of chaincodeID (number of bytes in the chaincodeID) - The utf8 bytes of the chaincodeID - Number of key-values for the chaincodeID - For each key-value (in sorted order of the ckey) - Length of the ckey - ckey bytes - Length of the value - value bytes For all the numeric types in the above list of items (e.g., Length of chaincodeID), protobuf’s varint encoding is assumed to be used. The purpose of the above encoding is to achieve a byte representation of the key-values within a bucket that can not be arrived at by any other combination of key-values and also to reduce the overall size of the serialized bytes. For example, consider a bucket that contains three key-values namely, chaincodeID1_key1:value1, chaincodeID1_key2:value2, and chaincodeID2_key1:value1. The serialized bytes for the bucket would logically look as - 12 + chaincodeID1 + 2 + 4 + key1 + 6 + value1 + 4 + key2 + 6 + value2 + 12 + chaincodeID2 + 1 + 4 + key1 + 6 + value1 If a bucket has no key-value present, the crypto-hash is considered as nil. The crypto-hash of an intermediate node and root node are computed just like in a standard merkle-tree i.e., applying a crypto-hash function on the bytes obtained by concatenating the crypto-hash of all the children nodes, from left to right. Further, if a child has a crypto-hash as nil, the crypto-hash of the child is omitted when concatenating the children crypto-hashes. If the node has a single child, the crypto-hash of the child is assumed to be the crypto-hash of the node. Finally, the crypto-hash of the root node is considered as the crypto-hash of the world state. The above method offers performance benefits for computing crypto-hash when a few key-values change in the state. The major benefits include - Computation of crypto-hashes of the unchanged buckets can be skipped - The depth and breadth of the merkle-tree can be controlled by configuring the parameters numBuckets and maxGroupingAtEachLevel. Both depth and breadth of the tree has different implication on the performance cost incurred by and resource demand of different resources (namely - disk I/O, storage, and memory) In a particular deployment, all the peer nodes are expected to use same values for the configurations numBuckets, maxGroupingAtEachLevel, and hashFunction. Further, if any of these configurations are to be changed at a later stage, the configurations should be changed on all the peer nodes so that the comparison of crypto-hashes across peer nodes is meaningful. Also, this may require to migrate the existing data based on the implementation. For example, an implementation is expected to store the last computed crypto-hashes for all the nodes in the tree which would need to be recalculated. 3.3 Chaincode Chaincode is an application-level code deployed as a transaction (see section 3.1.2) to be distributed to the network and managed by each validating peer as isolated sandbox. Though any virtualization technology can support the sandbox, currently Docker container is utilized to run the chaincode. The protocol described in this section enables different virtualization support implementation to plug and play. 3.3.1 Virtual Machine Instantiation A virtual machine implements the VM interface: type VM interface { build(ctxt context.Context, id string, args []string, env []string, attachstdin bool, attachstdout bool, reader io.Reader) error start(ctxt context.Context, id string, args []string, env []string, attachstdin bool, attachstdout bool) error stop(ctxt context.Context, id string, timeout uint, dontkill bool, dontremove bool) error } The fabric instantiates the VM when it processes a Deploy transaction or other transactions on the chaincode while the VM for that chaincode is not running (either crashed or previously brought down due to inactivity). Each chaincode image is built by the build function, started by start and stopped by stop function. Once the chaincode container is up, it makes a gRPC connection back to the validating peer that started the chaincode, and that establishes the channel for Invoke and Query transactions on the chaincode. 3.3.2 Chaincode Protocol Communication between a validating peer and its chaincodes is based on a bidirectional gRPC stream. There is a shim layer on the chaincode container to handle the message protocol between the chaincode and the validating peer using protobuf message. message ChaincodeMessage { enum Type { UNDEFINED = 0; REGISTER = 1; REGISTERED = 2; INIT = 3; READY = 4; TRANSACTION = 5; COMPLETED = 6; ERROR = 7; GET_STATE = 8; PUT_STATE = 9; DEL_STATE = 10; INVOKE_CHAINCODE = 11; INVOKE_QUERY = 12; RESPONSE = 13; QUERY = 14; QUERY_COMPLETED = 15; QUERY_ERROR = 16; RANGE_QUERY_STATE = 17; } Type type = 1; google.protobuf.Timestamp timestamp = 2; bytes payload = 3; string uuid = 4; } Definition of fields: - Type is the type of the message. - payload is the payload of the message. Each payload depends on the Type. - uuid is a unique identifier of the message. The message types are described in the following sub-sections. A chaincode implements the Chaincode interface, which is called by the validating peer when it processes Deploy, Invoke or Query transactions. type Chaincode interface { i Init(stub *ChaincodeStub, function string, args []string) ([]byte, error) Invoke(stub *ChaincodeStub, function string, args []string) ([]byte, error) Query(stub *ChaincodeStub, function string, args []string) ([]byte, error) } Init, Invoke and Query functions take function and args as parameters to be used by those methods to support a variety of transactions. Init is a constructor function, which will only be invoked by the Deploy transaction. The Query function is not allowed to modify the state of the chaincode; it can only read and calculate the return value as a byte array. 3.3.2.1 Chaincode Deploy Upon deploy (chaincode container is started), the shim layer sends a one time REGISTER message to the validating peer with the payload containing the ChaincodeID. The validating peer responds with REGISTERED or ERROR on success or failure respectively. The shim closes the connection and exits if it receives an ERROR. After registration, the validating peer sends INIT with the payload containing a ChaincodeInput object. The shim calls the Init function with the parameters from the ChaincodeInput, enabling the chaincode to perform any initialization, such as setting up the persistent state. The shim responds with RESPONSE or ERROR message depending on the returned value from the chaincode Init function. If there are no errors, the chaincode initialization is complete and is ready to receive Invoke and Query transactions. 3.3.2.2 Chaincode Invoke When processing an invoke transaction, the validating peer sends a TRANSACTION message to the chaincode container shim, which in turn calls the chaincode Invoke function, passing the parameters from the ChaincodeInput object. The shim responds to the validating peer with RESPONSE or ERROR message, indicating the completion of the function. If ERROR is received, the payload contains the error message generated by the chaincode. 3.3.2.3 Chaincode Query Similar to an invoke transaction, when processing a query, the validating peer sends a QUERY message to the chaincode container shim, which in turn calls the chaincode Query function, passing the parameters from the ChaincodeInput object. The Query function may return a state value or an error, which the shim forwards to the validating peer using RESPONSE or ERROR messages respectively. 3.3.2.4 Chaincode State Each chaincode may define its own persistent state variables. For example, a chaincode may create assets such as TVs, cars, or stocks using state variables to hold the assets attributes. During Invoke function processing, the chaincode may update the state variables, for example, changing an asset owner. A chaincode manipulates the state variables by using the following message types: PUT_STATE Chaincode sends a PUT_STATE message to persist a key-value pair, with the payload containing PutStateInfo object. message PutStateInfo { string key = 1; bytes value = 2; } GET_STATE Chaincode sends a GET_STATE message to retrieve the value whose key is specified in the payload. DEL_STATE Chaincode sends a DEL_STATE message to delete the value whose key is specified in the payload. RANGE_QUERY_STATE Chaincode sends a RANGE_QUERY_STATE message to get a range of values. The message payload contains a RangeQueryStateInfo object. message RangeQueryState { string startKey = 1; string endKey = 2; } The startKey and endKey are inclusive and assumed to be in lexical order. The validating peer responds with RESPONSE message whose payload is a RangeQueryStateResponse object. message RangeQueryStateResponse { repeated RangeQueryStateKeyValue keysAndValues = 1; bool hasMore = 2; string ID = 3; } message RangeQueryStateKeyValue { string key = 1; bytes value = 2; } If hasMore=true in the response, this indicates that additional keys are available in the requested range. The chaincode can request the next set of keys and values by sending a RangeQueryStateNext message with an ID that matches the ID returned in the response. message RangeQueryStateNext { string ID = 1; } When the chaincode is finished reading from the range, it should send a RangeQueryStateClose message with the ID it wishes to close. message RangeQueryStateClose { string ID = 1; } INVOKE_CHAINCODE Chaincode may call another chaincode in the same transaction context by sending an INVOKE_CHAINCODE message to the validating peer with the payload containing a ChaincodeSpec object. QUERY_CHAINCODE Chaincode may query another chaincode in the same transaction context by sending a QUERY_CHAINCODE message with the payload containing a ChaincodeSpec object. 3.4 Pluggable Consensus Framework The consensus framework defines the interfaces that every consensus plugin implements: consensus.Consenter: interface that allows consensus plugin to receive messages from the network. consensus.CPI: Consensus Programming Interface ( CPI) is used by consensus plugin to interact with rest of the stack. This interface is split in two parts: consensus.Communicator: used to send (broadcast and unicast) messages to other validating peers. consensus.LedgerStack: which is used as an interface to the execution framework as well as the ledger. As described below in more details, consensus.LedgerStack encapsulates, among other interfaces, the consensus.Executor interface, which is the key part of the consensus framework. Namely, consensus.Executor interface allows for a (batch of) transaction to be started, executed, rolled back if necessary, previewed, and potentially committed. A particular property that every consensus plugin needs to satisfy is that batches (blocks) of transactions are committed to the ledger (via consensus.Executor.CommitTxBatch) in total order across all validating peers (see consensus.Executor interface description below for more details). Currently, consensus framework consists of 3 packages consensus, controller, and helper. The primary reason for controller and helper packages is to avoid “import cycle” in Go (golang) and minimize code changes for plugin to update. controllerpackage specifies the consensus plugin used by a validating peer. helperpackage is a shim around a consensus plugin that helps it interact with the rest of the stack, such as maintaining message handlers to other peers. There are 2 consensus plugins provided: pbft and noops: pbftpackage contains consensus plugin that implements the PBFT [1] consensus protocol. See section 5 for more detail. noopsis a ‘’dummy’‘ consensus plugin for development and test purposes. It doesn’t perform consensus but processes all consensus messages. It also serves as a good simple sample to start learning how to code a consensus plugin. 3.4.1 Consenter interface Definition: type Consenter interface { RecvMsg(msg *pb.Message) error } The plugin’s entry point for (external) client requests, and consensus messages generated internally (i.e. from the consensus module) during the consensus process. The controller.NewConsenter creates the plugin Consenter. RecvMsg processes the incoming transactions in order to reach consensus. See helper.HandleMessage below to understand how the peer interacts with this interface. 3.4.2 CPI interface Definition: type CPI interface { Inquirer Communicator SecurityUtils LedgerStack } CPI allows the plugin to interact with the stack. It is implemented by the helper.Helper object. Recall that this object: - Is instantiated when the helper.NewConsensusHandleris called. - Is accessible to the plugin author when they construct their plugin’s consensus.Consenterobject. 3.4.3 Inquirer interface Definition: type Inquirer interface { GetNetworkInfo() (self *pb.PeerEndpoint, network []*pb.PeerEndpoint, err error) GetNetworkHandles() (self *pb.PeerID, network []*pb.PeerID, err error) } This interface is a part of the consensus.CPI interface. It is used to get the handles of the validating peers in the network ( GetNetworkHandles) as well as details about the those validating peers ( GetNetworkInfo): Note that the peers are identified by a pb.PeerID object. This is a protobuf message (in the protos package), currently defined as (notice that this definition will likely be modified): message PeerID { string name = 1; } 3.4.4 Communicator interface Definition: type Communicator interface { Broadcast(msg *pb.Message) error Unicast(msg *pb.Message, receiverHandle *pb.PeerID) error } This interface is a part of the consensus.CPI interface. It is used to communicate with other peers on the network ( helper.Broadcast, helper.Unicast): 3.4.5 SecurityUtils interface Definition: type SecurityUtils interface { Sign(msg []byte) ([]byte, error) Verify(peerID *pb.PeerID, signature []byte, message []byte) error } This interface is a part of the consensus.CPI interface. It is used to handle the cryptographic operations of message signing ( Sign) and verifying signatures ( Verify) 3.4.6 LedgerStack interface Definition: type LedgerStack interface { Executor Ledger RemoteLedgers } A key member of the CPI interface, LedgerStack groups interaction of consensus with the rest of the fabric, such as the execution of transactions, querying, and updating the ledger. This interface supports querying the local blockchain and state, updating the local blockchain and state, and querying the blockchain and state of other nodes in the consensus network. It consists of three parts: Executor, Ledger and RemoteLedgers interfaces. These are described in the following. 3.4.7 Executor interface Definition: type Executor interface { BeginTxBatch(id interface{}) error ExecTXs(id interface{}, txs []*pb.Transaction) ([]byte, []error) CommitTxBatch(id interface{}, transactions []*pb.Transaction, transactionsResults []*pb.TransactionResult, metadata []byte) error RollbackTxBatch(id interface{}) error PreviewCommitTxBatchBlock(id interface{}, transactions []*pb.Transaction, metadata []byte) (*pb.Block, error) } The executor interface is the most frequently utilized portion of the LedgerStack interface, and is the only piece which is strictly necessary for a consensus network to make progress. The interface allows for a transaction to be started, executed, rolled back if necessary, previewed, and potentially committed. This interface is comprised of the following methods. 3.4.7.1 Beginning a transaction batch BeginTxBatch(id interface{}) error This call accepts an arbitrary id, deliberately opaque, as a way for the consensus plugin to ensure only the transactions associated with this particular batch are executed. For instance, in the pbft implementation, this id is the an encoded hash of the transactions to be executed. 3.4.7.2 Executing transactions ExecTXs(id interface{}, txs []*pb.Transaction) ([]byte, []error) This call accepts an array of transactions to execute against the current state of the ledger and returns the current state hash in addition to an array of errors corresponding to the array of transactions. Note that a transaction resulting in an error has no effect on whether a transaction batch is safe to commit. It is up to the consensus plugin to determine the behavior which should occur when failing transactions are encountered. This call is safe to invoke multiple times. 3.4.7.3 Committing and rolling-back transactions RollbackTxBatch(id interface{}) error This call aborts an execution batch. This will undo the changes to the current state, and restore the ledger to its previous state. It concludes the batch begun with BeginBatchTx and a new one must be created before executing any transactions. PreviewCommitTxBatchBlock(id interface{}, transactions []*pb.Transaction, metadata []byte) (*pb.Block, error) This call is most useful for consensus plugins which wish to test for non-deterministic transaction execution. The hashable portions of the block returned are guaranteed to be identical to the block which would be committed if CommitTxBatch were immediately invoked. This guarantee is violated if any new transactions are executed. CommitTxBatch(id interface{}, transactions []*pb.Transaction, transactionsResults []*pb.TransactionResult, metadata []byte) error This call commits a block to the blockchain. Blocks must be committed to a blockchain in total order. CommitTxBatch concludes the transaction batch, and a new call to BeginTxBatch must be made before any new transactions are executed and committed. 3.4.8 Ledger interface Definition: type Ledger interface { ReadOnlyLedger UtilLedger WritableLedger } Ledger interface is intended to allow the consensus plugin to interrogate and possibly update the current state and blockchain. It is comprised of the three interfaces described below. 3.4.8.1 ReadOnlyLedger interface Definition: type ReadOnlyLedger interface { GetBlock(id uint64) (block *pb.Block, err error) GetCurrentStateHash() (stateHash []byte, err error) GetBlockchainSize() (uint64, error) } ReadOnlyLedger interface is intended to query the local copy of the ledger without the possibility of modifying it. It is comprised of the following functions. GetBlockchainSize() (uint64, error) This call returns the current length of the blockchain ledger. In general, this function should never fail, though in the unlikely event that this occurs, the error is passed to the caller to decide what if any recovery is necessary. The block with the highest number will have block number GetBlockchainSize()-1. Note that in the event that the local copy of the blockchain ledger is corrupt or incomplete, this call will return the highest block number in the chain, plus one. This allows for a node to continue operating from the current state/block even when older blocks are corrupt or missing. GetBlock(id uint64) (block *pb.Block, err error) This call returns the block from the blockchain with block number id. In general, this call should not fail, except when the block queried exceeds the current blocklength, or when the underlying blockchain has somehow become corrupt. A failure of GetBlock has a possible resolution of using the state transfer mechanism to retrieve it. GetCurrentStateHash() (stateHash []byte, err error) This call returns the current state hash for the ledger. In general, this function should never fail, though in the unlikely event that this occurs, the error is passed to the caller to decide what if any recovery is necessary. 3.4.8.2 UtilLedger interface Definition: type UtilLedger interface { HashBlock(block *pb.Block) ([]byte, error) VerifyBlockchain(start, finish uint64) (uint64, error) } UtilLedger interface defines some useful utility functions which are provided by the local ledger. Overriding these functions in a mock interface can be useful for testing purposes. This interface is comprised of two functions. HashBlock(block *pb.Block) ([]byte, error) Although *pb.Block has a GetHash method defined, for mock testing, overriding this method can be very useful. Therefore, it is recommended that the GetHash method never be directly invoked, but instead invoked via this UtilLedger.HashBlock interface. In general, this method should never fail, but the error is still passed to the caller to decide what if any recovery is appropriate. VerifyBlockchain(start, finish uint64) (uint64, error) This utility method is intended for verifying large sections of the blockchain. It proceeds from a high block start to a lower block finish, returning the block number of the first block whose PreviousBlockHash does not match the block hash of the previous block as well as an error. Note, this generally indicates the last good block number, not the first bad block number. 3.4.8.3 WritableLedger interface Definition: type WritableLedger interface { PutBlock(blockNumber uint64, block *pb.Block) error ApplyStateDelta(id interface{}, delta *statemgmt.StateDelta) error CommitStateDelta(id interface{}) error RollbackStateDelta(id interface{}) error EmptyState() error } WritableLedger interface allows for the caller to update the blockchain. Note that this is NOT intended for use in normal operation of a consensus plugin. The current state should be modified by executing transactions using the Executor interface, and new blocks will be generated when transactions are committed. This interface is instead intended primarily for state transfer or corruption recovery. In particular, functions in this interface should NEVER be exposed directly via consensus messages, as this could result in violating the immutability promises of the blockchain concept. This interface is comprised of the following functions. PutBlock(blockNumber uint64, block *pb.Block) error This function takes a provided, raw block, and inserts it into the blockchain at the given blockNumber. Note that this intended to be an unsafe interface, so no error or sanity checking is performed. Inserting a block with a number higher than the current block height is permitted, similarly overwriting existing already committed blocks is also permitted. Remember, this does not affect the auditability or immutability of the chain, as the hashing techniques make it computationally infeasible to forge a block earlier in the chain. Any attempt to rewrite the blockchain history is therefore easily detectable. This is generally only useful to the state transfer API. ApplyStateDelta(id interface{}, delta *statemgmt.StateDelta) error This function takes a state delta, and applies it to the current state. The delta will be applied to transition a state forward or backwards depending on the construction of the state delta. Like the `Executor` methods, `ApplyStateDelta` accepts an opaque interface `id` which should also be passed into `CommitStateDelta` or `RollbackStateDelta` as appropriate. CommitStateDelta(id interface{}) error This function commits the state delta which was applied in `ApplyStateDelta`. This is intended to be invoked after the caller to `ApplyStateDelta` has verified the state via the state hash obtained via `GetCurrentStateHash()`. This call takes the same `id` which was passed into `ApplyStateDelta`. RollbackStateDelta(id interface{}) error This function unapplies a state delta which was applied in `ApplyStateDelta`. This is intended to be invoked after the caller to `ApplyStateDelta` has detected the state hash obtained via `GetCurrentStateHash()` is incorrect. This call takes the same `id` which was passed into `ApplyStateDelta`. EmptyState() error This function will delete the entire current state, resulting in a pristine empty state. It is intended to be called before loading an entirely new state via deltas. This is generally only useful to the state transfer API. 3.4.9 RemoteLedgers interface Definition: type RemoteLedgers interface { GetRemoteBlocks(peerID uint64, start, finish uint64) (<-chan *pb.SyncBlocks, error) GetRemoteStateSnapshot(peerID uint64) (<-chan *pb.SyncStateSnapshot, error) GetRemoteStateDeltas(peerID uint64, start, finish uint64) (<-chan *pb.SyncStateDeltas, error) } The RemoteLedgers interface exists primarily to enable state transfer and to interrogate the blockchain state at other replicas. Just like the WritableLedger interface, it is not intended to be used in normal operation and is designed to be used for catchup, error recovery, etc. For all functions in this interface it is the caller’s responsibility to enforce timeouts. This interface contains the following functions. GetRemoteBlocks(peerID uint64, start, finish uint64) (<-chan *pb.SyncBlocks, error) This function attempts to retrieve a stream of *pb.SyncBlocksfrom the peer designated by peerIDfor the range from startto finish. In general, startshould be specified with a higher block number than finish, as the blockchain must be validated from end to beginning. The caller must validate that the desired block is being returned, as it is possible that slow results from another request could appear on this channel. Invoking this call for the same peerIDa second time will cause the first channel to close. ``` GetRemoteStateSnapshot(peerID uint64) (<-chan *pb.SyncStateSnapshot, error) ``` This function attempts to retrieve a stream of *pb.SyncStateSnapshotfrom the peer designated by peerID. To apply the result, the existing state should first be emptied via the WritableLedger EmptyStatecall, then the contained deltas in the stream should be applied sequentially. GetRemoteStateDeltas(peerID uint64, start, finish uint64) (<-chan *pb.SyncStateDeltas, error) This function attempts to retrieve a stream of `*pb.SyncStateDeltas` from the peer designated by `peerID` for the range from `start` to `finish`. The caller must validated that the desired block delta is being returned, as it is possible that slow results from another request could appear on this channel. Invoking this call for the same `peerID` a second time will cause the first channel to close. 3.4.10 controller package 3.4.10.1 controller.NewConsenter Signature: func NewConsenter(cpi consensus.CPI) (consenter consensus.Consenter) This function reads the peer.validator.consensus value in core.yaml configuration file, which is the configuration file for the peer process. The value of the peer.validator.consensus key defines whether the validating peer will run with the noops consensus plugin or the pbft one. (Notice that this should eventually be changed to either noops or custom. In case of custom, the validating peer will run with the consensus plugin defined in consensus/config.yaml.) The plugin author needs to edit the function’s body so that it routes to the right constructor for their package. For example, for pbft we point to the obcpft.GetPlugin constructor. This function is called by helper.NewConsensusHandler when setting the consenter field of the returned message handler. The input argument cpi is the output of the helper.NewHelper constructor and implements the consensus.CPI interface. 3.4.11 helper package 3.4.11.1 High-level overview A validating peer establishes a message handler ( helper.ConsensusHandler) for every connected peer, via the helper.NewConsesusHandler function (a handler factory). Every incoming message is inspected on its type ( helper.HandleMessage); if it’s a message for which consensus needs to be reached, it’s passed on to the peer’s consenter object ( consensus.Consenter). Otherwise it’s passed on to the next message handler in the stack. 3.4.11.2 helper.ConsensusHandler Definition: type ConsensusHandler struct { chatStream peer.ChatStream consenter consensus.Consenter coordinator peer.MessageHandlerCoordinator done chan struct{} peerHandler peer.MessageHandler } Within the context of consensus, we focus only on the coordinator and consenter fields. The coordinator, as the name implies, is used to coordinate between the peer’s message handlers. This is, for instance, the object that is accessed when the peer wishes to Broadcast. The consenter receives the messages for which consensus needs to be reached and processes them. Notice that fabric/peer/peer.go defines the peer.MessageHandler (interface), and peer.MessageHandlerCoordinator (interface) types. 3.4.11.3 helper.NewConsensusHandler Signature: func NewConsensusHandler(coord peer.MessageHandlerCoordinator, stream peer.ChatStream, initiatedStream bool, next peer.MessageHandler) (peer.MessageHandler, error) Creates a helper.ConsensusHandler object. Sets the same coordinator for every message handler. Also sets the consenter equal to: controller.NewConsenter(NewHelper(coord)) 3.4.11.4 helper.Helper Definition: type Helper struct { coordinator peer.MessageHandlerCoordinator } Contains the reference to the validating peer’s coordinator. Is the object that implements the consensus.CPI interface for the peer. 3.4.11.5 helper.NewHelper Signature: func NewHelper(mhc peer.MessageHandlerCoordinator) consensus.CPI Returns a helper.Helper object whose coordinator is set to the input argument mhc (the coordinator field of the helper.ConsensusHandler message handler). This object implements the consensus.CPI interface, thus allowing the plugin to interact with the stack. 3.4.11.6 helper.HandleMessage Recall that the helper.ConsesusHandler object returned by helper.NewConsensusHandler implements the peer.MessageHandler interface: type MessageHandler interface { RemoteLedger HandleMessage(msg *pb.Message) error SendMessage(msg *pb.Message) error To() (pb.PeerEndpoint, error) Stop() error } Within the context of consensus, we focus only on the HandleMessage method. Signature: func (handler *ConsensusHandler) HandleMessage(msg *pb.Message) error The function inspects the Type of the incoming Message. There are four cases: - Equal to pb.Message_CONSENSUS: passed to the handler’s consenter.RecvMsgfunction. - Equal to pb.Message_CHAIN_TRANSACTION(i.e. an external deployment request): a response message is sent to the user first, then the message is passed to the consenter.RecvMsgfunction. - Equal to pb.Message_CHAIN_QUERY(i.e. a query): passed to the helper.doChainQuerymethod so as to get executed locally. - Otherwise: passed to the HandleMessagemethod of the next handler down the stack. 3.5 Events The event framework provides the ability to generate and consume predefined and custom events. There are 3 basic components: - Event stream - Event adapters - Event structures 3.5.1 Event Stream An event stream is a gRPC channel capable of sending and receiving events. Each consumer establishes an event stream to the event framework and expresses the events that it is interested in. the event producer only sends appropriate events to the consumers who have connected to the producer over the event stream. The event stream initializes the buffer and timeout parameters. The buffer holds the number of events waiting for delivery, and the timeout has 3 options when the buffer is full: - If timeout is less than 0, drop the newly arriving events - If timeout is 0, block on the event until the buffer becomes available - If timeout is greater than 0, wait for the specified timeout and drop the event if the buffer remains full after the timeout 3.5.1.1 Event Producer The event producer exposes a function to send an event, Send(e *pb.Event), where Event is either a pre-defined Block or a Generic event. More events will be defined in the future to include other elements of the fabric. message Generic { string eventType = 1; bytes payload = 2; } The eventType and payload are freely defined by the event producer. For example, JSON data may be used in the payload. The Generic event may also be emitted by the chaincode or plugins to communicate with consumers. 3.5.1.2 Event Consumer The event consumer enables external applications to listen to events. Each event consumer registers an event adapter with the event stream. The consumer framework can be viewed as a bridge between the event stream and the adapter. A typical use of the event consumer framework is: adapter = <adapter supplied by the client application to register and receive events> consumerClient = NewEventsClient(<event consumer address>, adapter) consumerClient.Start() ... ... consumerClient.Stop() 3.5.2 Event Adapters The event adapter encapsulates three facets of event stream interaction: - an interface that returns the list of all events of interest - an interface called by the event consumer framework on receipt of an event - an interface called by the event consumer framework when the event bus terminates The reference implementation provides Golang specific language binding. EventAdapter interface { GetInterestedEvents() ([]*ehpb.Interest, error) Recv(msg *ehpb.Event) (bool,error) Disconnected(err error) } Using gRPC as the event bus protocol allows the event consumer framework to be ported to different language bindings without affecting the event producer framework. 3.5.3 Event Structure This section details the message structures of the event system. Messages are described directly in Golang for simplicity. The core message used for communication between the event consumer and producer is the Event. message Event { oneof Event { //consumer events Register register = 1; //producer events Block block = 2; Generic generic = 3; } } Per the above definition, an event has to be one of Block or Generic. As mentioned in the previous sections, a consumer creates an event bus by establishing a connection with the producer and sending a Register event. The Register event is essentially an array of Interest messages declaring the events of interest to the consumer. message Interest { enum ResponseType { //don't send events (used to cancel interest) DONTSEND = 0; //send protobuf objects PROTOBUF = 1; //marshall into JSON structure JSON = 2; } string eventType = 1; ResponseType responseType = 2; } Events can be sent directly as protobuf structures or can be sent as JSON structures by specifying the responseType appropriately. Currently, the producer framework can generate a Block or a Generic event. A Block is a message used for encapsulating properties of a block in the blockchain. 4. Security This section discusses the setting depicted in the figure below. In particular, the system consists of the following entities: membership management infrastructure, i.e., a set of entities that are responsible for identifying an individual user (using any form of identification considered in the system, e.g., credit cards, id-cards), open an account for that user to be able to register, and issue the necessary credentials to successfully create transactions and deploy or invoke chaincode successfully through the fabric. * Peers, that are classified as validating peers, and non-validating peers. Validating peers (also known as validators) order and process (check validity, execute, and add to the blockchain) user-messages (transactions) submitted to the network. Non validating peers (also known as peers) receive user transactions on behalf of users, and after some fundamental validity checks, they forward the transactions to their neighboring validating peers. Peers maintain an up-to-date copy of the blockchain, but in contradiction to validators, they do not execute transactions (a process also known as transaction validation). * End users of the system, that have registered to our membership service administration, after having demonstrated ownership of what is considered identity in the system, and have obtained credentials to install the client-software and submit transactions to the system. * Client-software, the software that needs to be installed at the client side for the latter to be able to complete his registration to our membership service and submit transactions to the system. * Online wallets, entities that are trusted by a user to maintain that user’s credentials, and submit transactions solely upon user request to the network. Online wallets come with their own software at the client-side, that is usually light-weight, as the client only needs to authenticate himself and his requests to the wallet. While it can be the case that peers can play the role of online wallet for a set of users, in the following sessions the security of online wallets is detailed separately. Users who wish to make use of the fabric, open an account at the membership management administration, by proving ownership of identity as discussed in previous sections, new chaincodes are announced to the blockchain network by the chaincode creator (developer) through the means of a deployment transaction that the client-software would construct on behalf of the developer. Such transaction is first received by a peer or validator, and afterwards circulated in the entire network of validators, this transaction is executed and finds its place to the blockchain network. Users can also invoke a function of an already deployed chain-code through an invocation transaction. The next section provides a summary of the business goals of the system that drive the security requirements. We then overview the security components and their operation and show how this design fulfills the security requirements. 4.1 Business security requirements This section presents business security requirements that are relevant to the context of the fabric. Incorporation of identity and role management. In order to adequately support real business applications it is necessary to progress beyond ensuring cryptographic continuity. A workable B2B system must consequently move towards addressing proven/demonstrated identities or other attributes relevant to conducting business. Business transactions and consumer interactions with financial institutions need to be unambiguously mapped to account holders. Business contracts typically require demonstrable affiliation with specific institutions and/or possession of other specific properties of transacting parties. Accountability and non-frameability are two reasons that identity management is a critical component of such systems. Accountability means that users of the system, individuals, or corporations, who misbehave can be traced back and be set accountable for their actions. In many cases, members of a B2B system are required to use their identities (in some form) to participate in the system, in a way such that accountability is guaranteed. Accountability and non-frameability are both essential security requirements in B2B systems and they are closely related. That is, a B2B system should guarantee that an honest user of such system cannot be framed to be accused as responsible for transactions originated by other users. In addition a B2B system should be renewable and flexible in order to accommodate changes of participants’s roles and/or affiliations. Transactional privacy. In B2B relationships there is a strong need for transactional privacy, i.e., allowing the end-user of a system to control the degree to which it interacts and shares information with its environment. For example, a corporation doing business through a transactional B2B system requires that its transactions are not visible to other corporations or industrial partners that are not authorized to share classified information with. Transactional privacy in the fabric is offered by the mechanisms to achieve two properties with respect to non authorized users: Transaction anonymity, where the owner of a transaction is hidden among the so called anonymity set, which in the fabric, is the set of users. Transaction unlinkability, where two or more transactions of the same user should not be linked as such. Clearly depending on the context, non-authorized users can be anyone outside the system, or a subset of users. Transactional privacy is strongly associated to the confidentiality of the content of a contractual agreement between two or more members of a B2B system, as well as to the anonymity and unlinkability of any authentication mechanism that should be in place within transactions. Reconciling transactional privacy with identity management. As described later in this document, the approach taken here to reconcile identity management with user privacy and to enable competitive institutions to transact effectively on a common blockchain (for both intra- and inter-institutional transactions) is as follows: add certificates to transactions to implement a “permissioned” blockchain utilize a two-level system: (relatively) static enrollment certificates (ECerts), acquired via registration with an enrollment certificate authority (CA). transaction certificates (TCerts) that faithfully but pseudonymously represent enrolled users, acquired via a transaction CA. offer mechanisms to conceal the content of transactions to unauthorized members of the system. Audit support. Commercial systems are occasionally subjected to audits. Auditors in such cases should be given the means to check a certain transaction, or a certain group of transactions, the activity of a particular user of the system, or the operation of the system itself. Thus, such capabilities should be offered by any system featuring transactions containing contractual agreements between business partners. 4.2 User Privacy through Membership Services Membership Services consists of an infrastructure of several entities that together manage the identity and privacy of users on the network. These services validate user’s identity, register the user in the system, and provide all the credentials needed for him/her to be an active and compliant participant able to create and/or invoke transactions. A Public Key Infrastructure (PKI) is a framework based on public key cryptography that ensures not only the secure exchange of data over public networks but also affirms the identity of the other party. A PKI manages the generation, distribution and revocation of keys and digital certificates. Digital certificates are used to establish user credentials and to sign messages. Signing messages with a certificate ensures that the message has not been altered. Typically a PKI has a Certificate Authority (CA), a Registration Authority (RA), a certificate database, and a certificate storage. The RA is a trusted party that authenticates users and vets the legitimacy of data, certificates or other evidence submitted to support the user’s request for one or more certificates that reflect that user’s identity or other properties. A CA, upon advice from an RA, issues digital certificates for specific uses and is certified directly or hierarchically by a root CA. Alternatively, the user-facing communications and due diligence responsibilities of the RA can be subsumed as part of the CA. Membership Services is composed of the entities shown in the following figure. Introduction of such full PKI reinforces the strength of this system for B2B (over, e.g. Bitcoin). Root Certificate Authority (Root CA): entity that represents the trust anchor for the PKI scheme. Digital certificates verification follows a chain of trust. The Root CA is the top-most CA in the PKI hierarchy. Registration Authority (RA): a trusted entity that can ascertain the validity and identity of users who want to participate in the permissioned blockchain. It is responsible for out-of-band communication with the user to validate his/her identity and role. It creates registration credentials needed for enrollment and information on root of trust. Enrollment Certificate Authority (ECA): responsible for issuing Enrollment Certificates (ECerts) after validating the registration credentials provided by the user. Transaction Certificate Authority (TCA): responsible for issuing Transaction Certificates (TCerts) after validating the enrollment credentials provided by the user. TLS Certificate Authority (TLS-CA): responsible for issuing TLS certificates and credentials that allow the user to make use of its network. It validates the credential(s) or evidence provided by the user that justifies issuance of a TLS certificate that includes specific information pertaining to the user. In this specification, membership services is expressed through the following associated certificates issued by the PKI: Enrollment Certificates (ECerts) ECerts are long-term certificates. They are issued for all roles, i.e. users, non-validating peers, and validating peers. In the case of users, who submit transactions for candidate incorporation into the blockchain and who also own TCerts (discussed below), there are two possible structure and usage models for ECerts: Model A: ECerts contain the identity/enrollmentID of their owner and can be used to offer only nominal entity-authentication for TCert requests and/or within transactions. They contain the public part of two key pairs – a signature key-pair and an encryption/key agreement key-pair. ECerts are accessible to everyone. Model B: ECerts contain the identity/enrollmentID of their owner and can be used to offer only nominal entity-authentication for TCert requests. They contain the public part of a signature key-pair, i.e., a signature verification public key. ECerts are preferably accessible to only TCA and auditors, as relying parties. They are invisible to transactions, and thus (unlike TCerts) their signature key pairs do not play a non-repudiation role at that level. Transaction Certificates (TCerts) TCerts are short-term certificates for each transaction. They are issued by the TCA upon authenticated user-request. They securely authorize a transaction and may be configured to not reveal the identities of who is involved in the transaction or to selectively reveal such identity/enrollmentID information. They include the public part of a signature key-pair, and may be configured to also include the public part of a key agreement key pair. They are issued only to users. They are uniquely associated to the owner – they may be configured so that this association is known only by the TCA (and to authorized auditors). TCerts may be configured to not carry information of the identity of the user. They enable the user not only to anonymously participate in the system but also prevent linkability of transactions. However, auditability and accountability requirements assume that the TCA is able to retrieve TCerts of a given identity, or retrieve the owner of a specific TCert. For details on how TCerts are used in deployment and invocation transactions see Section 4.3, Transaction Security offerings at the infrastructure level. TCerts can accommodate encryption or key agreement public keys (as well as digital signature verification public keys). If TCerts are thus equipped, then enrollment certificates need not also contain encryption or key agreement public keys. Such a key agreement public key, Key_Agreement_TCertPub_Key, can be generated by the transaction certificate authority (TCA) using a method that is the same as that used to generate the Signature_Verification_TCertPub_Key, but using an index value of TCertIndex + 1 rather than TCertIndex, where TCertIndex is hidden within the TCert by the TCA for recovery by the TCert owner. The structure of a Transaction Certificate (TCert) is as follows: TCertID – transaction certificate ID (preferably generated by TCA randomly in order to avoid unintended linkability via the Hidden Enrollment ID field). Hidden Enrollment ID: AES_EncryptK(enrollmentID), where key K = [HMAC(Pre-K, TCertID)]256-bit truncation and where three distinct key distribution scenarios for Pre-K are defined below as (a), (b) and (c). Hidden Private Keys Extraction: AES_EncryptTCertOwner_EncryptKey(TCertIndex || known padding/parity check vector) where || denotes concatenation, and where each batch has a unique (per batch) time-stamp/random offset that is added to a counter (initialized at 1 in this implementation) in order to generate TCertIndex. The counter can be incremented by 2 each time in order to accommodate generation by the TCA of the public keys and recovery by the TCert owner of the private keys of both types, i.e., signature key pairs and key agreement key pairs. Sign Verification Public Key – TCert signature verification public key. Key Agreement Public Key – TCert key agreement public key. Validity period – the time window during which the transaction certificate can be used for the outer/external signature of a transaction. There are at least three useful ways to consider configuring the key distribution scenario for the Hidden Enrollment ID field: (a) Pre-K is distributed during enrollment to user clients, peers and auditors, and is available to the TCA and authorized auditors. It may, for example, be derived from Kchain (described subsequently in this specification) or be independent of key(s) used for chaincode confidentiality. (b) Pre-K is available to validators, the TCA and authorized auditors. K is made available by a validator to a user (under TLS) in response to a successful query transaction. The query transaction can have the same format as the invocation transaction. Corresponding to Example 1 below, the querying user would learn the enrollmentID of the user who created the Deployment Transaction if the querying user owns one of the TCerts in the ACL of the Deployment Transaction. Corresponding to Example 2 below, the querying user would learn the enrollmentID of the user who created the Deployment Transaction if the enrollmentID of the TCert used to query matches one of the affiliations/roles in the Access Control field of the Deployment Transaction. Example 1: Example 2: (c) Pre-K is available to the TCA and authorized auditors. The TCert-specific K can be distributed the TCert owner (under TLS) along with the TCert, for each TCert in the batch. This enables targeted release by the TCert owner of K (and thus trusted notification of the TCert owner’s enrollmentID). Such targeted release can use key agreement public keys of the intended recipients and/or PKchain where SKchain is available to validators as described subsequently in this specification. Such targeted release to other contract participants can be incorporated into a transaction or done out-of-band. If the TCerts are used in conjunction with ECert Model A above, then using (c) where K is not distributed to the TCert owner may suffice. If the TCerts are used in conjunction with ECert Model A above, then the Key Agreement Public Key field of the TCert may not be necessary. The Transaction Certificate Authority (TCA) returns TCerts in batches, each batch contains the KeyDF_Key (Key-Derivation-Function Key) which is not included within every TCert but delivered to the client with the batch of TCerts (using TLS). The KeyDF_Key allows the TCert owner to derive TCertOwner_EncryptKey which in turn enables recovery of TCertIndex from AES_EncryptTCertOwner_EncryptKey(TCertIndex || known padding/parity check vector). TLS-Certificates (TLS-Certs) TLS-Certs are certificates used for system/component-to-system/component communications. They carry the identity of their owner and are used for network level security. This implementation of membership services provides the following basic functionality: there is no expiration/revocation of ECerts; expiration of TCerts is provided via the validity period time window; there is no revocation of TCerts. The ECA, TCA, and TLS CA certificates are self-signed, where the TLS CA is provisioned as a trust anchor. 4.2.1 User/Client Enrollment Process The next figure has a high-level description of the user enrollment process. It has an offline and an online phase. Offline Process: in Step 1, each user/non-validating peer/validating peer has to present strong identification credentials (proof of ID) to a Registration Authority (RA) offline. This has to be done out-of-band to provide the evidence needed by the RA to create (and store) an account for the user. In Step 2, the RA returns the associated username/password and trust anchor (TLS-CA Cert in this implementation) to the user. If the user has access to a local client then this is one way the client can be securely provisioned with the TLS-CA certificate as trust anchor. Online Phase: In Step 3, the user connects to the client to request to be enrolled in the system. The user sends his username and password to the client. On behalf of the user, the client sends the request to the PKI framework, Step 4, and receives a package, Step 5, containing several certificates, some of which should correspond to private/secret keys held by the client. Once the client verifies that the all the crypto material in the package is correct/valid, it stores the certificates in local storage and notifies the user. At this point the user enrollment has been completed. Figure 4 shows a detailed description of the enrollment process. The PKI framework has the following entities – RA, ECA, TCA and TLS-CA. After Step 1, the RA calls the function “AddEntry” to enter the (username/password) in its database. At this point the user has been formally registered into the system database. The client needs the TLS-CA certificate (as trust anchor) to verify that the TLS handshake is set up appropriately with the server. In Step 4, the client sends the registration request to the ECA along with its enrollment public key and additional identity information such as username and password (under the TLS record layer protocol). The ECA verifies that such user really exists in the database. Once it establishes this assurance the user has the right to submit his/her enrollment public key and the ECA will certify it. This enrollment information is of a one-time use. The ECA updates the database marking that this registration request information (username/password) cannot be used again. The ECA constructs, signs and sends back to the client an enrollment certificate (ECert) that contains the user’s enrollment public key (Step 5). It also sends the ECA Certificate (ECA-Cert) needed in future steps (client will need to prove to the TCA that his/her ECert was created by the proper ECA). (Although the ECA-Cert is self-signed in the initial implementation, the TCA and TLS-CA and ECA are co-located.) The client verifies, in Step 6, that the public key inside the ECert is the one originally submitted by the client (i.e. that the ECA is not cheating). It also verifies that all the expected information within the ECert is present and properly formed. Similarly, In Step 7, the client sends a registration request to the TLS-CA along with its public key and identity information. The TLS-CA verifies that such user is in the database. The TLS-CA generates, and signs a TLS-Cert that contains the user’s TLS public key (Step 8). TLS-CA sends the TLS-Cert and its certificate (TLS-CA Cert). Step 9 is analogous to Step 6, the client verifies that the public key inside the TLS Cert is the one originally submitted by the client and that the information in the TLS Cert is complete and properly formed. In Step 10, the client saves all certificates in local storage for both certificates. At this point the user enrollment has been completed. In this implementation the enrollment process for validators is the same as that for peers. However, it is possible that a different implementation would have validators enroll directly through an on-line process. Client: Request for TCerts batch needs to include (in addition to count), ECert and signature of request using ECert private key (where Ecert private key is pulled from Local Storage). TCA generates TCerts for batch: Generates key derivation function key, KeyDF_Key, as HMAC(TCA_KDF_Key, EnrollPub_Key). Generates each TCert public key (using TCertPub_Key = EnrollPub_Key + ExpansionValue G, where 384-bit ExpansionValue = HMAC(Expansion_Key, TCertIndex) and 384-bit Expansion_Key = HMAC(KeyDF_Key, “2”)). Generates each AES_EncryptTCertOwner_EncryptKey(TCertIndex || known padding/parity check vector), where || denotes concatenation and where TCertOwner_EncryptKey is derived as [HMAC(KeyDF_Key, “1”)]256-bit truncation. Client: Deriving TCert private key from a TCert in order to be able to deploy or invoke or query: KeyDF_Key and ECert private key need to be pulled from Local Storage. KeyDF_Key is used to derive TCertOwner_EncryptKey as [HMAC(KeyDF_Key, “1”)]256-bit truncation; then TCertOwner_EncryptKey is used to decrypt the TCert field AES_EncryptTCertOwner_EncryptKey(TCertIndex || known padding/parity check vector); then TCertIndex is used to derive TCert private key: TCertPriv_Key = (EnrollPriv_Key + ExpansionValue) modulo n, where 384-bit ExpansionValue = HMAC(Expansion_Key, TCertIndex) and 384-bit Expansion_Key = HMAC(KeyDF_Key, “2”). 4.2.2 Expiration and revocation of certificates It is practical to support expiration of transaction certificates. The time window during which a transaction certificate can be used is expressed by a ‘validity period’ field. The challenge regarding support of expiration lies in the distributed nature of the system. That is, all validating entities must share the same information; i.e. be consistent with respect to the expiration of the validity period associated with the transactions to be executed and validated. To guarantee that the expiration of validity periods is done in a consistent manner across all validators, the concept of validity period identifier is introduced. This identifier acts as a logical clock enabling the system to uniquely identify a validity period. At genesis time the “current validity period” of the chain gets initialized by the TCA. It is essential that this validity period identifier is given monotonically increasing values over time, such that it imposes a total order among validity periods. A special type of transactions, system transactions, and the validity period identified are used together to announce the expiration of a validity period to the Blockchain. System transactions refer to contracts that have been defined in the genesis block and are part of the infrastructure. The validity period identified is updated periodically by the TCA invoking a system chaincode. Note that only the TCA should be allowed to update the validity period. The TCA sets the validity period for each transaction certificate by setting the appropriate integer values in the following two fields that define a range: ‘not-before’ and ‘not-after’ fields. TCert Expiration: At the time of processing a TCert, validators read from the state table associated with the ledger the value of ‘current validity period’ to check if the outer certificate associated with the transaction being evaluated is currently valid. That is, the current value in the state table has to be within the range defined by TCert sub-fields ‘not-before’ and ‘not-after’. If this is the case, the validator continues processing the transaction. In the case that the current value is not within range, the TCert has expired or is not yet valid and the validator should stop processing the transaction. ECert Expiration: Enrollment certificates have different validity period length(s) than those in transaction certificates. Revocation is supported in the form of Certificate Revocation Lists (CRLs). CRLs identify revoked certificates. Changes to the CRLs, incremental differences, are announced through the Blockchain. 4.3 Transaction security offerings at the infrastructure level Transactions in the fabric are user-messages submitted to be included in the ledger. As discussed in previous sections, these messages have a specific structure, and enable users to deploy new chaincodes, invoke existing chaincodes, or query the state of existing chaincodes. Therefore, the way transactions are formed, announced and processed plays an important role to the privacy and security offerings of the entire system. On one hand our membership service provides the means to authenticate transactions as having originated by valid users of the system, to disassociate transactions with user identities, but while efficiently tracing the transactions a particular individual under certain conditions (law enforcement, auditing). In other words, membership services offer to transactions authentication mechanisms that marry user-privacy with accountability and non-repudiation. On the other hand, membership services alone cannot offer full privacy of user-activities within the fabric. First of all, for privacy provisions offered by the fabric to be complete, privacy-preserving authentication mechanisms need to be accompanied by transaction confidentiality. This becomes clear if one considers that the content of a chaincode, may leak information on who may have created it, and thus break the privacy of that chaincode’s creator. The first subsection discusses transaction confidentiality. Enforcing access control for the invocation of chaincode is an important security requirement. The fabric exposes to the application (e.g., chaincode creator) the means for the application to perform its own invocation access control, while leveraging the fabric’s membership services. Section 4.4 elaborates on this. Replay attacks is another crucial aspect of the security of the chaincode, as a malicious user may copy a transaction that was added to the Blockchain in the past, and replay it in the network to distort its operation. This is the topic of Section 4.3.3. The rest of this Section presents an overview of how security mechanisms in the infrastructure are incorporated in the transactions’ lifecycle, and details each security mechanism separately. 4.3.1 Security Lifecycle of Transactions Transactions are created on the client side. The client can be either plain client, or a more specialized application, i.e., piece of software that handles (server) or invokes (client) specific chaincodes through the blockchain. Such applications are built on top of the platform (client) and are detailed in Section 4.4. Developers of new chaincodes create a new deploy transaction by passing to the fabric infrastructure: the confidentiality/security version or type they want the transaction to conform with, the set of users who wish to be given access to parts of the chaincode and a proper representation of their (read) access rights the chaincode specification, code metadata, containing information that should be passed to the chaincode at the time of its execution (e.g., configuration parameters), and * transaction metadata, that is attached to the transaction structure, and is only used by the application that deployed the chaincode. Invoke and query transactions corresponding to chaincodes with confidentiality restrictions are created using a similar approach. The transactor provides the identifier of the chaincode to be executed, the name of the function to be invoked and its arguments. Optionally, the invoker can pass to the transaction creation function, code invocation metadata, that will be provided to the chaincode at the time of its execution. Transaction metadata is another field that the application of the invoker or the invoker himself can leverage for their own purposes. Finally transactions at the client side, are signed by a certificate of their creator and released to the network of validators. Validators receive the confidential transactions, and pass them through the following phases: pre-validation phase, where validators validate the transaction certificate against the accepted root certificate authority, verify transaction certificate signature included in the transaction (statically), and check whether the transaction is a replay (see, later section for details on replay attack protection). consensus phase, where the validators add this transaction to the total order of transactions (ultimately included in the ledger) pre-execution phase, where validators verify the validity of the transaction / enrollment certificate against the current validity period, decrypt the transaction (if the transaction is encrypted), and check that the transaction’s plaintext is correctly formed(e.g., invocation access control is respected, included TCerts are correctly formed); mini replay-attack check is also performed here within the transactions of the currently processed block. execution phase, where the (decrypted) chaincode is passed to a container, along with the associated code metadata, and is executed commit* phase, where (encrypted) updates of that chaincodes state is committed to the ledger with the transaction itself. 4.3.2 Transaction confidentiality Transaction confidentiality requires that under the request of the developer, the plain-text of a chaincode, i.e., code, description, is not accessible or inferable (assuming a computational attacker) by any unauthorized entities(i.e., user or peer not authorized by the developer). For the latter, it is important that for chaincodes with confidentiality requirements the content of both deploy and invoke transactions remains concealed. In the same spirit, non-authorized parties, should not be able to associate invocations (invoke transactions) of a chaincode to the chaincode itself (deploy transaction) or these invocations to each other. Additional requirements for any candidate solution is that it respects and supports the privacy and security provisions of the underlying membership service. In addition, it should not prevent the enforcement of any invocation access control of the chain-code functions in the fabric, or the implementation of enforcement of access-control mechanisms on the application (See Subsection 4.4). In the following is provided the specification of transaction confidentiality mechanisms at the granularity of users. The last subsection provides some guidelines on how to extend this functionality at the level of validators. Information on the features supported in current release and its security provisions, you can find in Section 4.7. The goal is to achieve a design that will allow for granting or restricting access to an entity to any subset of the following parts of a chain-code: 1. chaincode content, i.e., complete (source) code of the chaincode, 2. chaincode function headers, i.e., the prototypes of the functions included in a chaincode, 3. chaincode [invocations &] state, i.e., successive updates to the state of a specific chaincode, when one or more functions of its are invoked 4. all the above Notice, that this design offers the application the capability to leverage the fabric’s membership service infrastructure and its public key infrastructure to build their own access control policies and enforcement mechanisms. 4.3.2.1 Confidentiality against users To support fine-grained confidentiality control, i.e., restrict read-access to the plain-text of a chaincode to a subset of users that the chaincode creator defines, a chain is bound to a single long-term encryption key-pair (PKchain, SKchain). Though initially this key-pair is to be stored and maintained by each chain’s PKI, in later releases, however, this restriction will be moved away, as chains (and the associated key-pairs) can be triggered through the Blockchain by any user with special (admin) privileges (See, Section 4.3.2.2). Setup. At enrollment phase, users obtain (as before) an enrollment certificate, denoted by Certui for user ui, while each validator vj obtain its enrollment certificate denoted by Certvj. Enrollment would grant users and validators the following credentials: - Users: a. claim and grant themselves signing key-pair (spku, ssku), b. claim and grant themselves encryption key-pair (epku, esku), c. obtain the encryption (public) key of the chain PKchain - Validators: a. claim and grant themselves signing key-pair (spkv, sskv), b. claim and grant themselves an encryption key-pair (epkv, eskv), c. obtain the decryption (secret) key of the chain SKchain Thus, enrollment certificates contain the public part of two key-pairs: one signature key-pair [denoted by (spkvj,sskvj) for validators and by (spkui, sskui) for users], and an encryption key-pair [denoted by (epkvj,eskvj) for validators and (epkui, eskui) for users] Chain, validator and user enrollment public keys are accessible to everyone.). The following section provides a high level description of how transaction format accommodates read-access restrictions at the granularity of users. Structure of deploy transaction. The following figure depicts the structure of a typical deploy transaction with confidentiality enabled. One can notice that a deployment transaction consists of several sections: Section general-info: contains the administration details of the transaction, i.e., which chain this transaction corresponds to (chained), the type of transaction (that is set to ‘’deplTrans’‘), the version number of confidentiality policy implemented, its creator identifier (expressed by means of transaction certificate TCert of enrollment certificate Cert), and a Nonce, that facilitates primarily replay-attack resistance techniques. Section code-info: contains information on the chain-code source code, and function headers. As shown in the figure below, there is a symmetric key used for the source-code of the chaincode (KC), and another symmetric key used for the function prototypes (KH). A signature of the creator of the chaincode is included on the plain-text code such that the latter cannot be detached from the transaction and replayed by another party. Section chain-validators: where appropriate key material is passed to the validators for the latter to be able to (i) decrypt the chain-code source (KC), (ii) decrypt the headers, and (iii) encrypt the state when the chain-code has been invoked accordingly(KS). In particular, the chain-code creator generates an encryption key-pair for the chain-code it deploys (PKC, SKC). It then uses PKC to encrypt all the keys associated to the chain-code: SKc for the users to be able to read any message associated to that chain-code (invocation, state, etc), KC for the user to be able to read only the contract code, KH for the user to only be able to read the headers, KS for the user to be able to read the state associated to that contract. Finally users are given the contract’s public key PKc, for them to be able to encrypt information related to that contract for the validators (or any in possession of SKc) to be able to read it. Transaction certificate of each contract user is appended to the transaction and follows that user’s message. This is done for users to be able to easily search the blockchain for transactions they have been part of. Notice that the deployment transaction also appends a message to the creator uc of the chain-code, for the latter to be able to retrieve this transaction through parsing the ledger and without keeping any state locally. The entire transaction is signed by a certificate of the chaincode creator, i.e., enrollment or transaction certificate as decided by the latter. Two noteworthy points: Messages that are included in a transaction in an encrypted format, i.e., code-functions, code-hdrs, are signed before they are encrypted using the same TCert the entire transaction is signed with, or even with a different TCert or the ECert of the user (if the transaction deployment should carry the identity of its owner. A binding to the underlying transaction carrier should be included in the signed message, e.g., the hash of the TCert the transaction is signed, such that mix\&match attacks are not possible. Though we detail such attacks in Section 4.4, in these cases an attacker who sees a transaction should not be able to isolate the ciphertext corresponding to, e.g., code-info, and use it for another transaction of her own. Clearly, such an ability would disrupt the operation of the system, as a chaincode that was first created by user A, will now also belong to malicious user B (who is not even able to read it). To offer the ability to the users to cross-verify they are given access to the correct key, i.e., to the same key as the other contract users, transaction ciphertexts that are encrypted with a key K are accompanied by a commitment to K, while the opening of this commitment value is passed to all users who are entitled access to K in contract-users, and chain-validator sections. In this way, anyone who is entitled access to that key can verify that the key has been properly passed to it. This part is omitted in the figure above to avoid confusion. Structure of invoke transaction. A transaction invoking the chain-code triggering the execution of a function of the chain-code with user-specified arguments is structured as depicted in the figure below. Invocation transaction as in the case of deployment transaction consists of a general-info section, a code-info section, a section for the chain-validators, and one for the contract users, signed altogether with one of the invoker’s transaction certificates. General-info follows the same structure as the corresponding section of the deployment transaction. The only difference relates to the transaction type that is now set to ‘’InvocTx’‘, and the chain-code identifier or name that is now encrypted under the chain-specific encryption (public) key. Code-info exhibits the same structure as the one of the deployment transaction. Code payload, as in the case of deployment transaction, consists of function invocation details (the name of the function invoked, and associated arguments), code-metadata provided by the application and the transaction’s creator (invoker’s u) certificate, TCertu. Code payload is signed by the transaction certificate TCertu of the invoker u, as in the case of deploy transactions. As in the case of deploy transactions, code-metadata, and tx-metadata, are fields that are provided by the application and can be used (as described in Section 4.4), for the latter to implement their own access control mechanisms and roles. Finally, contract-users and chain-validator sections provide the key the payload is encrypted with, the invoker’s key, and the chain encryption key respectively. Upon receiving such transactions, the validators decrypt [code-name]PKchain using the chain-specific secret key SKchain and obtain the invoked chain-code identifier. Given the latter, validators retrieve from their local storage the chaincode’s decryption key SKc, and use it to decrypt chain-validators’ message, that would equip them with the symmetric key KI the invocation transaction’s payload was encrypted with. Given the latter, validators decrypt code-info, and execute the chain-code function with the specified arguments, and the code-metadata attached(See, Section 4.4 for more details on the use of code-metadata). While the chain-code is executed, updates of the state of that chain-code are possible. These are encrypted using the state-specific key Ks that was defined during that chain-code’s deployment. In particular, Ks is used the same way KiTx is used in the design of our current release (See, Section 4.7). Structure of query transaction. Query transactions have the same format as invoke transactions. The only difference is that Query transactions do not affect the state of the chaincode, and thus there is no need for the state to be retrieved (decrypted) and/or updated (encrypted) after the execution of the chaincode completes. 4.3.2.2 Confidentiality against validators This section deals with ways of how to support execution of certain transactions under a different (or subset) sets of validators in the current chain. This section inhibits IP restrictions and will be expanded in the following few weeks. 4.3.3 Replay attack resistance In replay attacks the attacker “replays” a message it “eavesdropped” on the network or ‘’saw’‘ on the Blockchain. Replay attacks are a big problem here, as they can incur into the validating entities re-doing a computationally intensive process (chaincode invocation) and/or affect the state of the corresponding chaincode, while it requires minimal or no power from the attacker side. To make matters worse, if a transaction was a payment transaction, replays could potentially incur into the payment being performed more than once, without this being the original intention of the payer. Existing systems resist replay attacks as follows: Record hashes of transactions in the system. This solution would require that validators maintain a log of the hash of each transaction that has ever been announced through the network, and compare a new transaction against their locally stored transaction record. Clearly such approach cannot scale for large networks, and could easily result into validators spending a lot of time to do the check of whether a transaction has been replayed, than executing the actual transaction. Leverage state that is maintained per user identity (Ethereum). Ethereum keeps some state, e.g., counter (initially set to 1) for each identity/pseudonym in the system. Users also maintain their own counter (initially set to 0) for each identity/pseudonym of theirs. Each time a user sends a transaction using an identity/pseudonym of his, he increases his local counter by one and adds the resulting value to the transaction. The transaction is subsequently signed by that user identity and released to the network. When picking up this transaction, validators check the counter value included within and compare it with the one they have stored locally; if the value is the same, they increase the local value of that identity’s counter and accept the transaction. Otherwise, they reject the transaction as invalid or replay. Although this would work well in cases where we have limited number of user identities/pseudonyms (e.g., not too large), it would ultimately not scale in a system where users use a different identifier (transaction certificate) per transaction, and thus have a number of user pseudonyms proportional to the number of transactions. Other asset management systems, e.g., Bitcoin, though not directly dealing with replay attacks, they resist them. In systems that manage (digital) assets, state is maintained on a per asset basis, i.e., validators only keep a record of who owns what. Resistance to replay attacks come as a direct result from this, as replays of transactions would be immediately be deemed as invalid by the protocol (since can only be shown to be derived from older owners of an asset/coin). While this would be appropriate for asset management systems, this does not abide with the needs of a Blockchain systems with more generic use than asset management. In the fabric, replay attack protection uses a hybrid approach. That is, users add in the transaction a nonce that is generated in a different manner depending on whether the transaction is anonymous (followed and signed by a transaction certificate) or not (followed and signed by a long term enrollment certificate). More specifically: - Users submitting a transaction with their enrollment certificate should include in that transaction a nonce that is a function of the nonce they used in the previous transaction they issued with the same certificate (e.g., a counter function or a hash). The nonce included in the first transaction of each enrollment certificate can be either pre-fixed by the system (e.g., included in the genesis block) or chosen by the user. In the first case, the genesis block would need to include nonceall , i.e., a fixed number and the nonce used by user with identity IDA for his first enrollment certificate signed transaction would be nonceround0IDA <- hash(IDA, nonceall),where IDA appears in the enrollment certificate. From that point onward successive transactions of that user with enrollment certificate would include a nonce as follows nonceroundiIDA <- hash(nonceround{i-1}IDA),that is the nonce of the ith transaction would be using the hash of the nonce used in the {i-1}th transaction of that certificate. Validators here continue to process a transaction they receive, as long as it satisfies the condition mentioned above. Upon successful validation of transaction’s format, the validators update their database with that nonce. Storage overhead: on the user side: only the most recently used nonce, on validator side: O(n), where n is the number of users. - Users submitting a transaction with a transaction certificate should include in the transaction a random nonce, that would guarantee that two transactions do not result into the same hash. Validators add the hash of this transaction in their local database if the transaction certificate used within it has not expired. To avoid storing large amounts of hashes, validity periods of transaction certificates are leveraged. In particular validators maintain an updated record of received transactions’ hashes within the current or future validity period. Storage overhead (only makes sense for validators here): O(m), where m is the approximate number of transactions within a validity period and corresponding validity period identifier (see below). 4.4 Access control features on the application An application, is a piece of software that runs on top of a Blockchain client software, and, performs a special task over the Blockchain, i.e., restaurant table reservation. Application software have a version of developer, enabling the latter to generate and manage a couple of chaincodes that are necessary for the business this application serves, and a client-version that would allow the application’s end-users to make use of the application, by invoking these chain-codes. The use of the Blockchain can be transparent to the application end-users or not. This section describes how an application leveraging chaincodes can implement its own access control policies, and guidelines on how our Membership services PKI can be leveraged for the same purpose. The presentation is divided into enforcement of invocation access control, and enforcement of read-access control by the application. 4.4.1 Invocation access control To allow the application to implement its own invocation access control at the application layer securely, special support by the fabric must be provided. In the following we elaborate on the tools exposed by the fabric to the application for this purpose, and provide guidelines on how these should be used by the application for the latter to enforce access control securely. Support from the infrastructure. For the chaincode creator, let it be, uc, to be able to implement its own invocation access control at the application layer securely, special support by the fabric must be provided. More specifically fabric layer gives access to following capabilities: The client-application can request the fabric to sign and verify any message with specific transaction certificates or enrollment certificate the client owns; this is expressed via the Certificate Handler interface The client-application can request the fabric a unique binding to be used to bind authentication data of the application to the underlying transaction transporting it; this is expressed via the Transaction Handler interface Support for a transaction format, that allows for the application to specify metadata, that are passed to the chain-code at deployment, and invocation time; the latter denoted by code-metadata. The Certificate Handler interface allows to sign and verify any message using signing key-pair underlying the associated certificate. The certificate can be a TCert or an ECert. // CertificateHandler exposes methods to deal with an ECert/TCert type CertificateHandler interface { // GetCertificate returns the certificate's DER GetCertificate() []byte // Sign signs msg using the signing key corresponding to the certificate Sign(msg []byte) ([]byte, error) // Verify verifies msg using the verifying key corresponding to the certificate Verify(signature []byte, msg []byte) error // GetTransactionHandler returns a new transaction handler relative to this certificate GetTransactionHandler() (TransactionHandler, error) } The Transaction Handler interface allows to create transactions and give access to the underlying binding that can be leveraged to link application data to the underlying transaction. Bindings are a concept that have been introduced in network transport protocols (See, known as channel bindings, that. Transaction bindings offer the ability to uniquely identify the fabric layer of the transaction that serves as the container that application data uses to be added to the ledger. // TransactionHandler represents a single transaction that can be uniquely determined or identified by the output of the GetBinding method. // This transaction is linked to a single Certificate (TCert or ECert). type TransactionHandler interface { // GetCertificateHandler returns the certificate handler relative to the certificate mapped to this transaction GetCertificateHandler() (CertificateHandler, error) // GetBinding returns a binding to the underlying transaction (container) GetBinding() ([]byte, error) // NewChaincodeDeployTransaction is used to deploy chaincode NewChaincodeDeployTransaction(chaincodeDeploymentSpec *obc.ChaincodeDeploymentSpec, uuid string) (*obc.Transaction, error) // NewChaincodeExecute is used to execute chaincode's functions NewChaincodeExecute(chaincodeInvocation *obc.ChaincodeInvocationSpec, uuid string) (*obc.Transaction, error) // NewChaincodeQuery is used to query chaincode's functions NewChaincodeQuery(chaincodeInvocation *obc.ChaincodeInvocationSpec, uuid string) (*obc.Transaction, error) } For version 1, binding consists of the hash(TCert, Nonce), where TCert, is the transaction certificate used to sign the entire transaction, while Nonce, is the nonce number used within. The Client interface is more generic, and offers a mean to get instances of the previous interfaces. type Client interface { ... // GetEnrollmentCertHandler returns a CertificateHandler whose certificate is the enrollment certificate GetEnrollmentCertificateHandler() (CertificateHandler, error) // GetTCertHandlerNext returns a CertificateHandler whose certificate is the next available TCert GetTCertificateHandlerNext() (CertificateHandler, error) // GetTCertHandlerFromDER returns a CertificateHandler whose certificate is the one passed GetTCertificateHandlerFromDER(der []byte) (CertificateHandler, error) } To support application-level access control lists for controlling chaincode invocation, the fabric’s transaction and chaincode specification format have an additional field to store application-specific metadata. This field is depicted in both figures 1, by code-metadata. The content of this field is decided by the application, at the transaction creation time. The fabric layer treats it as an unstructured stream of bytes. message ChaincodeSpec { ... ConfidentialityLevel confidentialityLevel; bytes metadata; ... } message Transaction { ... bytes payload; bytes metadata; ... } To assist chaincode execution, at the chain-code invocation time, the validators provide the chaincode with additional information, like the metadata and the binding. Application invocation access control. This section describes how the application can leverage the means provided by the fabric to implement its own access control on its chain-code functions. In the scenario considered here, the following entities are identified: C: is a chaincode that contains a single function, e.g., called hello; uc: is the C deployer; ui: is a user who is authorized to invoke C‘s functions. User uc wants to ensure that only ui can invoke the function hello. Deployment of a Chaincode: At deployment time, uc has full control on the deployment transaction’s metadata, and can be used to store a list of ACLs (one per function), or a list of roles that are needed by the application. The format which is used to store these ACLs is up to the deployer’s application, as the chain-code is the one who would need to parse the metadata at execution time. To define each of these lists/roles, uc can use any TCerts/Certs of the ui (or, if applicable, or other users who have been assigned that privilege or role). Let this be TCertui. The exchange of TCerts or Certs among the developer and authorized users is done through an out-of-band channel. Assume that the application of uc‘s requires that to invoke the hello function, a certain message M has to be authenticated by an authorized invoker (ui, in our example). One can distinguish the following two cases: M is one of the chaincode’s function arguments; M is the invocation message itself, i.e., function-name, function-arguments. Chaincode invocation: To invoke C, ui‘s application needs to sign M using the TCert/ECert, that was used to identify ui‘s participation in the chain-code at the associated deployment transaction’s metadata, i.e., TCertui. More specifically, ui‘s client application does the following: Retrieves a CertificateHandler for Certui, cHandler; obtains a new TransactionHandler to issue the execute transaction, txHandler relative to his next available TCert or his ECert; gets txHandler‘s binding by invoking txHandler.getBinding(); signs ‘M || txBinding’ by invoking cHandler.Sign(‘M || txBinding’), let sigma be the output of the signing function; issues a new execute transaction by invoking, txHandler.NewChaincodeExecute(…). Now, sigma can be included in the transaction as one of the arguments that are passed to the function (case 1) or as part of the code-metadata section of the payload(case 2). Chaincode processing: The validators, who receive the execute transaction issued ui, will provide to hello the following information: The binding of the execute transaction, that can be independently computed at the validator side; The metadata of the execute transaction (code-metadata section of the transaction); The metadata of the deploy transaction (code-metadata component of the corresponding deployment transaction). Notice that sigma is either part of the arguments of the invoked function, or stored inside the code-metadata of the invocation transaction (properly formatted by the client-application). Application ACLs are included in the code-metadata section, that is also passed to the chain-code at execution time. Function hello is responsible for checking that sigma is indeed a valid signature issued by TCertui, on ‘M || txBinding’. 4.4.2 Read access control This section describes how the fabric’s infrastructure offers support to the application to enforce its own read-access control policies at the level of users. As in the case of invocation access control, the first part describes the infrastructure features that can be leveraged by the application for this purpose, and the last part details on the way applications should use these tools. For the purpose of this discussion, we leverage a similar example as before, i.e., C: is a chaincode that contains a single function, e.g., called hello; uA: is the C‘s deployer, also known as application; ur: is a user who is authorized to read C‘s functions. User uA wants to ensure that only ur can read the function hello. Support from the infrastructure. For uA to be able to implement its own read access control at the application layer securely, our infrastructure is required to support the transaction format for code deployment and invocation, as depicted in the two figures below. More specifically fabric layer is required to provide the following functionality: Provide minimal encryption capability such that data is only decryptable by a validator’s (infrastructure) side; this means that the infrastructure should move closer to our future version, where an asymmetric encryption scheme is used for encrypting transactions. More specifically, an asymmetric key-pair is used for the chain, denoted by Kchain in the Figures above, but detailed in Section Transaction Confidentiality. The client-application can request the infrastructure sitting on the client-side to encrypt/decrypt information using a specific public encryption key, or that client’s long-term decryption key. The transaction format offers the ability to the application to store additional transaction metadata, that can be passed to the client-application after the latter’s request. Transaction metadata, as opposed to code-metadata, is not encrypted or provided to the chain-code at execution time. Validators treat these metadata as a list of bytes they are not responsible for checking validity of. Application read-access control. For this reason the application may request and obtain access to the public encryption key of the user ur; let that be PKur. Optionally, ur may be providing uA with a certificate of its, that would be leveraged by the application, say, TCertur; given the latter, the application would, e.g., be able to trace that user’s transactions w.r.t. the application’s chain-codes. TCertur, and PKur, are exchanged in an out-of-band channel. At deployment time, application uA performs the following steps: Uses the underlying infrastructure to encrypt the information of C, the application would like to make accessible to ur, using PKur. Let Cur be the resulting ciphertext. (optional) Cur can be concatenated with TCertur Passes the overall string as ‘’Tx-metadata’‘ of the confidential transaction to be constructed. At invocation time, the client-application on ur‘s node, would be able, by obtaining the deployment transaction to retrieve the content of C. It just needs to retrieve the tx-metadata field of the associated deployment transaction, and trigger the decryption functionality offered by our Blockchain infrastrucure’s client, for Cur. Notice that it is the application’s responsibility to encrypt the correct C for ur. Also, the use of tx-metadata field can be generalized to accommodate application-needs. E.g., it can be that invokers leverage the same field of invocation transactions to pass information to the developer of the application, etc. Important Note: It is essential to note that validators do not provide any decryption oracle to the chain-code throughout its execution. Its infrastructure is though responsible for decrypting the payload of the chain-code itself (as well as the code-metadata fields near it), and provide those to containers for deployment/execution. 4.5 Online wallet service This section describes the security design of a wallet service, which in this case is a node with which end-users can register, store their key material and through which they can perform transactions. Because the wallet service is in possession of the user’s key material, it is clear that without a secure authorization mechanism in place a malicious wallet service could successfully impersonate the user. We thus emphasize that this design corresponds to a wallet service that is trusted to only perform transactions on behalf of its clients, with the consent of the latter. There are two cases for the registration of an end-user to an online wallet service: - When the user has registered with the registration authority and acquired his/her <enrollID, enrollPWD>, but has not installed the client to trigger and complete the enrollment process; - When the user has already installed the client, and completed the enrollment phase. Initially, the user interacts with the online wallet service to issue credentials that would allow him to authenticate to the wallet service. That is, the user is given a username, and password, where username identifies the user in the membership service, denoted by AccPub, and password is the associated secret, denoted by AccSec, that is shared by both user and service. To enroll through the online wallet service, a user must provide the following request object to the wallet service: AccountRequest /* account request of u \*/ { OBCSecCtx , /* credentials associated to network \*/ AccPub<sub>u</sub>, /* account identifier of u \*/ AccSecProof<sub>u</sub> /* proof of AccSec<sub>u</sub>\*/ } OBCSecCtx refers to user credentials, which depending on the stage of his enrollment process, can be either his enrollment ID and password, <enrollID, enrollPWD> or his enrollment certificate and associated secret key(s) (ECertu, sku), where sku denotes for simplicity signing and decryption secret of the user. The content of AccSecProofu is an HMAC on the rest fields of request using the shared secret. Nonce-based methods similar to what we have in the fabric can be used to protect against replays. OBCSecCtx would give the online wallet service the necessary information to enroll the user or issue required TCerts. For subsequent requests, the user u should provide to the wallet service a request of similar format. TransactionRequest /* account request of u \*/ { TxDetails, /* specifications for the new transaction \*/ AccPub<sub>u</sub>, /* account identifier of u \*/ AccSecProof<sub>u</sub> /* proof of AccSec<sub>u</sub> \*/ } Here, TxDetails refer to the information needed by the online service to construct a transaction on behalf of the user, i.e., the type, and user-specified content of the transaction. AccSecProofu is again an HMAC on the rest fields of request using the shared secret. Nonce-based methods similar to what we have in the fabric can be used to protect against replays. TLS connections can be used in each case with server side authentication to secure the request at the network layer (confidentiality, replay attack protection, etc) 4.6 Network security (TLS) The TLS CA should be capable of issuing TLS certificates to (non-validating) peers, validators, and individual clients (or browsers capable of storing a private key). Preferably, these certificates are distinguished by type, per above. TLS certificates for CAs of the various types (such as TLS CA, ECA, TCA) could be issued by an intermediate CA (i.e., a CA that is subordinate to the root CA). Where there is not a particular traffic analysis issue, any given TLS connection can be mutually authenticated, except for requests to the TLS CA for TLS certificates. In the current implementation the only trust anchor is the TLS CA self-signed certificate in order to accommodate the limitation of a single port to communicate with all three (co-located) servers, i.e., the TLS CA, the TCA and the ECA. Consequently, the TLS handshake is established with the TLS CA, which passes the resultant session keys to the co-located TCA and ECA. The trust in validity of the TCA and ECA self-signed certificates is therefore inherited from trust in the TLS CA. In an implementation that does not thus elevate the TLS CA above other CAs, the trust anchor should be replaced with a root CA under which the TLS CA and all other CAs are certified. 4.7 Restrictions in the current release This section lists the restrictions of the current release of the fabric. A particular focus is given on client operations and the design of transaction confidentiality, as depicted in Sections 4.7.1 and 4.7.2. - Client side enrollment and transaction creation is performed entirely by a non-validating peer that is trusted not to impersonate the user. See, Section 4.7.1 for more information. - A minimal set of confidentiality properties where a chaincode is accessible by any entity that is member of the system, i.e., validators and users who have registered through Hyperledger Fabric’s Membership Services and is not accessible by anyone else. The latter include any party that has access to the storage area where the ledger is maintained, or other entities that are able to see the transactions that are announced in the validator network. The design of the first release is detailed in subsection 4.7.2 - The code utilizes self-signed certificates for entities such as the enrollment CA (ECA) and the transaction CA (TCA) - Replay attack resistance mechanism is not available - Invocation access control can be enforced at the application layer: it is up to the application to leverage the infrastructure’s tools properly for security to be guaranteed. This means, that if the application fails to bind the transaction binding offered by the fabric, secure transaction processing may be at risk. 4.7.1 Simplified client Client-side enrollment and transaction creation are performed entirely by a non-validating peer that plays the role of an online wallet. In particular, the end-user leverages their registration credentials 4.7.2 Simplified transaction confidentiality Disclaimer: The current version of transaction confidentiality is minimal, and will be used as an intermediate step to reach a design that allows for fine grained (invocation) access control enforcement in a subsequent release. In its current form, confidentiality of transactions is offered solely at the chain-level, i.e., that the content of a transaction included in a ledger, is readable by all members of that chain, i.e., validators and users. At the same time, application auditors who are not members of the system can be given the means to perform auditing by passively observing the blockchain data, while guaranteeing that they are given access solely to the transactions related to the application under audit. State is encrypted in a way that such auditing requirements are satisfied, while not disrupting the proper operation of the underlying consensus network. More specifically, currently symmetric key encryption is supported in the process of offering transaction confidentiality. In this setting, one of the main challenges that is specific to the blockchain setting, is that validators need to run consensus over the state of the blockchain, that, aside from the transactions themselves, also includes the state updates of individual contracts or chaincode. Though this is trivial to do for non-confidential chaincode, for confidential chaincode, one needs to design the state encryption mechanism such that the resulting ciphertexts are semantically secure, and yet, identical if the plaintext state is the same. To overcome this challenge, the fabric utilizes a key hierarchy that reduces the number of ciphertexts that are encrypted under the same key. At the same time, as some of these keys are used for the generation of IVs, this allows the validating parties to generate exactly the same ciphertext when executing the same transaction (this is necessary to remain agnostic to the underlying consensus algorithm) and offers the possibility of controlling audit by disclosing to auditing entities only the most relevant keys. Method description: Membership service generates a symmetric key for the ledger (Kchain) that is distributed at registration time to all the entities of the blockchain system, i.e., the clients and the validating entities that have issued credentials through the membership service of the chain. At enrollment phase, user obtain (as before) an enrollment certificate, denoted by Certui for user ui , while each validator vj obtains its enrollment certificate denoted by Certvj. Entity enrollment would be enhanced, as follows.). In order to defeat crypto-analysis and enforce confidentiality, the following key hierarchy is considered for generation and validation of confidential transactions: To submit a confidential transaction (Tx) to the ledger, a client first samples a nonce (N), which is required to be unique among all the transactions submitted to the blockchain, and derive a transaction symmetric key (KTx) by applying the HMAC function keyed with Kchain and on input the nonce, KTx= HMAC(Kchain, N). From KTx, the client derives two AES keys: KTxCID as HMAC(KTx, c1), KTxP as HMAC(KTx, c2)) to encrypt respectively the chain-code name or identifier CID and code (or payload) P. c1, c2 are public constants. The nonce, the Encrypted Chaincode ID (ECID) and the Encrypted Payload (EP) are added in the transaction Tx structure, that is finally signed and so authenticated. Figure below shows how encryption keys for the client’s transaction are generated. Arrows in this figure denote application of an HMAC, keyed by the key at the source of the arrow and using the number in the arrow as argument. Deployment/Invocation transactions’ keys are indicated by d/i respectively. To validate a confidential transaction Tx submitted to the blockchain by a client, a validating entity first decrypts ECID and EP by re-deriving KTxCID and KTxP from Kchain and Tx.Nonce as done before. Once the Chaincode ID and the Payload are recovered the transaction can be processed. When V validates a confidential transaction, the corresponding chaincode can access and modify the chaincode’s state. V keeps the chaincode’s state encrypted. In order to do so, V generates symmetric keys as depicted in the figure above. Let iTx be a confidential transaction invoking a function deployed at an early stage by the confidential transaction dTx (notice that iTx can be dTx itself in the case, for example, that dTx has a setup function that initializes the chaincode’s state). Then, V generates two symmetric keys KIV and Kstate as follows: - It computes as KdTx , i.e., the transaction key of the corresponding deployment transaction, and then Nstate = HMAC(Kdtx ,hash(Ni)), where Ni is the nonce appearing in the invocation transaction, and hash a hash function. - It sets Kstate = HMAC(KdTx, c3 || Nstate), truncated opportunely deeding on the underlying cipher used to encrypt; c3 is a constant number - It sets KIV = HMAC(KdTx, c4 || Nstate); c4 is a constant number In order to encrypt a state variable S, a validator first generates the IV as HMAC(KIV, crtstate) properly truncated, where crtstate is a counter value that increases each time a state update is requested for the same chaincode invocation. The counter is discarded after the execution of the chaincode terminates. After IV has been generated, V encrypts with authentication (i.e., GSM mode) the value of S concatenated with Nstate(Actually, Nstate doesn’t need to be encrypted but only authenticated). To the resulting ciphertext (CT), Nstate and the IV used is appended. In order to decrypt an encrypted state CT|| Nstate’ , a validator first generates the symmetric keys KdTX‘ ,Kstate‘ using Nstate’ and then decrypts CT. Generation of IVs: In order to be agnostic to any underlying consensus algorithm, all the validating parties need a method to produce the same exact ciphertexts. In order to do so, the validators need to use the same IVs. Reusing the same IV with the same symmetric key completely breaks the security of the underlying cipher. Therefore, the process described before is followed. In particular, V first derives an IV generation key KIV by computing HMAC(KdTX, c4 || Nstate ), where c4 is a constant number, and keeps a counter crtstate for the pair (dTx, iTx) with is initially set to 0. Then, each time a new ciphertext has to be generated, the validator generates a new IV by computing it as the output of HMAC(KIV, crtstate) and then increments the crtstate by one. Another benefit that comes with the above key hierarchy is the ability to enable controlled auditing. For example, while by releasing Kchain one would provide read access to the whole chain, by releasing only Kstate for a given pair of transactions (dTx,iTx) access would be granted to a state updated by iTx, and so on. The following figures demonstrate the format of a deployment and invocation transaction currently available in the code. One can notice that both deployment and invocation transactions consist of two sections: Section general-info: contains the administration details of the transaction, i.e., which chain this transaction corresponds to (is chained to), the type of transaction (that is set to ‘’deploymTx’‘ or ‘’invocTx’‘), the version number of confidentiality policy implemented, its creator identifier (expressed by means of TCert of Cert) and a nonce (facilitates primarily replay-attack resistance techniques). Section code-info: contains information on the chain-code source code. For deployment transaction this is essentially the chain-code identifier/name and source code, while for invocation chain-code is the name of the function invoked and its arguments. As shown in the two figures code-info in both transactions are encrypted ultimately using the chain-specific symmetric key Kchain. 5. Byzantine Consensus The pbft package is an implementation of the seminal PBFT consensus protocol [1], which provides consensus among validators despite a threshold of validators acting as Byzantine, i.e., being malicious or failing in an unpredictable manner. In the default configuration, PBFT tolerates up to t<n/3 Byzantine validators. In the default configuration, PBFT is designed to run on at least 3t+1 validators (replicas), tolerating up to t potentially faulty (including malicious, or Byzantine) replicas. 5.1 Overview The pbft plugin provides an implementation of the PBFT consensus protocol. 5.2 Core PBFT Functions The following functions control for parallelism using a non-recursive lock and can therefore be invoked from multiple threads in parallel. However, the functions typically run to completion and may invoke functions from the CPI passed in. Care must be taken to prevent livelocks. 5.2.1 newPbftCore Signature: func newPbftCore(id uint64, config *viper.Viper, consumer innerCPI, ledger consensus.Ledger) *pbftCore The newPbftCore constructor instantiates a new PBFT box instance, with the specified id. The config argument defines operating parameters of the PBFT network: number replicas N, checkpoint period K, and the timeouts for request completion and view change duration. The arguments consumer and ledger pass in interfaces that are used to query the application state and invoke application requests once they have been totally ordered. See the respective sections below for these interfaces. 6. Application Programming Interface The primary interface to the fabric is a REST API. The REST API allows applications to register users, query the blockchain, and to issue transactions. A CLI is also provided to cover a subset of the available APIs for development purposes. The CLI enables developers to quickly test chaincodes or query for status of transactions. Applications interact with a non-validating peer node through the REST API, which will require some form of authentication to ensure the entity has proper privileges. The application is responsible for implementing the appropriate authentication mechanism and the peer node will subsequently sign the outgoing messages with the client identity. The fabric API design covers the categories below, though the implementation is incomplete for some of them in the current release. The REST API section will describe the APIs currently supported. - Identity - Enrollment to acquire or to revoke a certificate - Address - Target and source of a transaction - Transaction - Unit of execution on the ledger - Chaincode - Program running on the ledger - Blockchain - Contents of the ledger - Network - Information about the blockchain peer network - Storage - External store for files or documents - Event Stream - Sub/pub events on the blockchain 6.1 REST Service The REST service can be enabled (via configuration) on either validating or non-validating peers, but it is recommended to only enable the REST service on non-validating peers on production networks. func StartOpenchainRESTServer(server *oc.ServerOpenchain, devops *oc.Devops) This function reads the rest.address value in the core.yaml configuration file, which is the configuration file for the peer process. The value of the rest.address key defines the default address and port on which the peer will listen for HTTP REST requests. It is assumed that the REST service receives requests from applications which have already authenticated the end user. 6.2 REST API You can work with the REST API through any tool of your choice. For example, the curl command line utility or a browser based client such as the Firefox Rest Client or Chrome Postman. You can likewise trigger REST requests directly through Swagger. To obtain the REST API Swagger description, click here. The currently available APIs are summarized in the following section. 6.2.1 REST Endpoints - Block - GET /chain/blocks/{block-id} - Blockchain - GET /chain - Chaincode - POST /chaincode - Network - GET /network/peers - Registrar - POST /registrar - GET /registrar/{enrollmentID} - DELETE /registrar/{enrollmentID} - GET /registrar/{enrollmentID}/ecert - GET /registrar/{enrollmentID}/tcert - Transactions - GET /transactions/{UUID} 6.2.1.1 Block API - GET /chain/blocks/{block-id} Use the Block API to retrieve the contents of various blocks from the blockchain. The returned Block message structure is defined in section 3.2.1.1. Block Retrieval Request: GET host:port/chain/blocks/173 Block Retrieval Response: { "transactions": [ { ==" } ], "stateHash": "7ftCvPeHIpsvSavxUoZM0u7o67MPU81ImOJIO7ZdMoH2mjnAaAAafYy9MIH3HjrWM1/Zla/Q6LsLzIjuYdYdlQ==", "previousBlockHash": "lT0InRg4Cvk4cKykWpCRKWDZ9YNYMzuHdUzsaeTeAcH3HdfriLEcTuxrFJ76W4jrWVvTBdI1etxuIV9AO6UF4Q==", "nonHashData": { "localLedgerCommitTimestamp": { "seconds": 1453758316, "nanos": 250834782 } } } 6.2.1.2 Blockchain API - GET /chain Use the Chain API to retrieve the current state of the blockchain. The returned BlockchainInfo message is defined below. message BlockchainInfo { uint64 height = 1; bytes currentBlockHash = 2; bytes previousBlockHash = 3; } height- Number of blocks in the blockchain, including the genesis block. currentBlockHash- The hash of the current or last block. previousBlockHash- The hash of the previous block. Blockchain Retrieval Request: GET host:port/chain Blockchain Retrieval Response: { "height": 174, "currentBlockHash": "lIfbDax2NZMU3rG3cDR11OGicPLp1yebIkia33Zte9AnfqvffK6tsHRyKwsw0hZFZkCGIa9wHVkOGyFTcFxM5w==", "previousBlockHash": "Vlz6Dv5OSy0OZpJvijrU1cmY2cNS5Ar3xX5DxAi/seaHHRPdssrljDeppDLzGx6ZVyayt8Ru6jO+E68IwMrXLQ==" } 6.2.1.3 Chaincode API - POST /chaincode Use the Chaincode API to deploy, invoke, and query chaincodes. The deploy request requires the client to supply a path parameter, pointing to the directory containing the chaincode in the file system. The response to a deploy request is either a message containing a confirmation of successful chaincode deployment or an error, containing a reason for the failure. It also contains the generated chaincode name in the message field, which is to be used in subsequent invocation and query transactions to uniquely identify the deployed chaincode. To deploy a chaincode, supply the required ChaincodeSpec payload, defined in section 3.1.2.2. Deploy Request: POST host:port/chaincode { "jsonrpc": "2.0", "method": "deploy", "params": { "type": "GOLANG", "chaincodeID":{ "path":"github.com/hyperledger/fabic/examples/chaincode/go/chaincode_example02" }, "ctorMsg": { "function":"init", "args":["a", "1000", "b", "2000"] } }, "id": "1" } Deploy Response: { "jsonrpc": "2.0", "result": { "status": "OK", "message": " }, "id": 1 } With security enabled, modify the required payload to include the secureContext element passing the enrollment ID of a logged in user as follows: Deploy Request with security enabled: POST host:port/chaincode { "jsonrpc": "2.0", "method": "deploy", "params": { "type": "GOLANG", "chaincodeID":{ "path":"github.com/hyperledger/fabic/examples/chaincode/go/chaincode_example02" }, "ctorMsg": { "function":"init", "args":["a", "1000", "b", "2000"] }, "secureContext": "lukas" }, "id": "1" } The invoke request requires the client to supply a name parameter, which was previously returned in the response from the deploy transaction. The response to an invocation request is either a message containing a confirmation of successful execution or an error, containing a reason for the failure. To invoke a function within a chaincode, supply the required ChaincodeSpec payload, defined in section 3.1.2.2. Invoke Request: POST host:port/chaincode { "] } }, "id": "3" } Invoke Response: { "jsonrpc": "2.0", "result": { "status": "OK", "message": "5a4540e5-902b-422d-a6ab-e70ab36a2e6d" }, "id": 3 } With security enabled, modify the required payload to include the secureContext element passing the enrollment ID of a logged in user as follows: Invoke Request with security enabled: { "] }, "secureContext": "lukas" }, "id": "3" } The query request requires the client to supply a name parameter, which was previously returned in the response from the deploy transaction. The response to a query request depends on the chaincode implementation. The response will contain a message containing a confirmation of successful execution or an error, containing a reason for the failure. In the case of successful execution, the response will also contain values of requested state variables within the chaincode. To invoke a query function within a chaincode, supply the required ChaincodeSpec payload, defined in section 3.1.2.2. Query Request: POST host:port/chaincode/ { "] } }, "id": "5" } Query Response: { "jsonrpc": "2.0", "result": { "status": "OK", "message": "-400" }, "id": 5 } With security enabled, modify the required payload to include the secureContext element passing the enrollment ID of a logged in user as follows: Query Request with security enabled: { "] }, "secureContext": "lukas" }, "id": "5" } 6.2.1.4 Network API Use the Network API to retrieve information about the network of peer nodes comprising the blockchain fabric. The /network/peers endpoint returns a list of all existing network connections for the target peer node. The list includes both validating and non-validating peers. The list of peers is returned as type PeersMessage, containing an array of PeerEndpoint, defined in section 3.1.1. message PeersMessage { repeated PeerEndpoint peers = 1; } Network Request: GET host:port/network/peers Network Response: { "peers": [ { "ID": { "name": "vp1" }, "address": "172.17.0.4:30303", "type": 1, "pkiID": "rUA+vX2jVCXev6JsXDNgNBMX03IV9mHRPWo6h6SI0KLMypBJLd+JoGGlqFgi+eq/" }, { "ID": { "name": "vp3" }, "address": "172.17.0.5:30303", "type": 1, "pkiID": "OBduaZJ72gmM+B9wp3aErQlofE0ulQfXfTHh377ruJjOpsUn0MyvsJELUTHpAbHI" }, { "ID": { "name": "vp2" }, "address": "172.17.0.6:30303", "type": 1, "pkiID": "GhtP0Y+o/XVmRNXGF6pcm9KLNTfCZp+XahTBqVRmaIumJZnBpom4ACayVbg4Q/Eb" } ] } 6.2.1.5 Registrar API (member services) - POST /registrar - GET /registrar/{enrollmentID} - DELETE /registrar/{enrollmentID} - GET /registrar/{enrollmentID}/ecert - GET /registrar/{enrollmentID}/tcert Use the Registrar APIs to manage end user registration with the certificate authority (CA). These API endpoints are used to register a user with the CA, determine whether a given user is registered, and to remove any login tokens for a target user from local storage, preventing them from executing any further transactions. The Registrar APIs are also used to retrieve user enrollment and transaction certificates from the system. The /registrar endpoint is used to register a user with the CA. The required Secret payload is defined below. The response to the registration request is either a confirmation of successful registration or an error, containing a reason for the failure. message Secret { string enrollId = 1; string enrollSecret = 2; } enrollId- Enrollment ID with the certificate authority. enrollSecret- Enrollment password with the certificate authority. Enrollment Request: POST host:port/registrar { "enrollId": "lukas", "enrollSecret": "NPKYL39uKbkj" } Enrollment Response: { "OK": "Login successful for user 'lukas'." } The GET /registrar/{enrollmentID} endpoint is used to confirm whether a given user is registered with the CA. If so, a confirmation will be returned. Otherwise, an authorization error will result. Verify Enrollment Request: GET host:port/registrar/jim Verify Enrollment Response: { "OK": "User jim is already logged in." } Verify Enrollment Request: GET host:port/registrar/alex Verify Enrollment Response: { "Error": "User alex must log in." } The DELETE /registrar/{enrollmentID} endpoint is used to delete login tokens for a target user. If the login tokens are deleted successfully, a confirmation will be returned. Otherwise, an authorization error will result. No payload is required for this endpoint. Remove Enrollment Request: DELETE host:port/registrar/lukas Remove Enrollment Response: { "OK": "Deleted login token and directory for user lukas." } The GET /registrar/{enrollmentID}/ecert endpoint is used to retrieve the enrollment certificate of a given user from local storage. If the target user has already registered with the CA, the response will include a URL-encoded version of the enrollment certificate. If the target user has not yet registered, an error will be returned. If the client wishes to use the returned enrollment certificate after retrieval, keep in mind that it must be URL-decoded. Enrollment Certificate Retrieval Request: GET host:port/registrar/jim/ecert Enrollment Certificate Retrieval Response: { "OK": "-----BEGIN+CERTIFICATE-----%0AMIIBzTCCAVSgAwIBAgIBATAKBggqhkjOPQQDAzApMQswCQYDVQQGEwJVUzEMMAoG%0AA1UEChMDSUJNMQwwCgYDVQQDEwNPQkMwHhcNMTYwMTIxMDYzNjEwWhcNMTYwNDIw%0AMDYzNjEwWjApMQswCQYDVQQGEwJVUzEMMAoGA1UEChMDSUJNMQwwCgYDVQQDEwNP%0AQkMwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAARSLgjGD0omuJKYrJF5ClyYb3sGEGTU%0AH1mombSAOJ6GAOKEULt4L919sbSSChs0AEvTX7UDf4KNaKTrKrqo4khCoboMg1VS%0AXVTTPrJ%2BOxSJTXFZCohVgbhWh6ZZX2tfb7%2BjUDBOMA4GA1UdDwEB%2FwQEAwIHgDAM%0ABgNVHRMBAf8EAjAAMA0GA1UdDgQGBAQBAgMEMA8GA1UdIwQIMAaABAECAwQwDgYG%0AUQMEBQYHAQH%2FBAE0MAoGCCqGSM49BAMDA2cAMGQCMGz2RR0NsJOhxbo0CeVts2C5%0A%2BsAkKQ7v1Llbg78A1pyC5uBmoBvSnv5Dd0w2yOmj7QIwY%2Bn5pkLiwisxWurkHfiD%0AxizmN6vWQ8uhTd3PTdJiEEckjHKiq9pwD%2FGMt%2BWjP7zF%0A-----END+CERTIFICATE-----%0A" } The /registrar/{enrollmentID}/tcert endpoint retrieves the transaction certificates for a given user that has registered with the certificate authority. If the user has registered, a confirmation message will be returned containing an array of URL-encoded transaction certificates. Otherwise, an error will result. The desired number of transaction certificates is specified with the optional ‘count’ query parameter. The default number of returned transaction certificates is 1; and 500 is the maximum number of certificates that can be retrieved with a single request. If the client wishes to use the returned transaction certificates after retrieval, keep in mind that they must be URL-decoded. Transaction Certificate Retrieval Request: GET host:port/registrar/jim/tcertAQfwJORRED9RAsmSl%2FEowq1STBb%0A%2FoFteymZ96RUr%2BsKmF9PNrrUNvFZFhvukxZZjqhEcGiQqFyRf%2FBnVN%2BbtRzMowSRWQFmErr0SmQO9AFP4GJYzQ%0APQMmcsCjKiJf%2Bw1df%2FLnXunCsCUlf%2FalIUaeSrT7MAoGCCqGSM49BAMDA0gAMEUC%0AIQC%2FnE71FBJd0hwNTLXWmlCJff4Yi0J%2BnDi%2BYnujp%2Fn9nQIgYWg0m0QFzddyJ0%2FF%0AKzIZEJlKgZTt8ZTlGg3BBrgl7qY%3D%0A-----END+CERTIFICATE-----%0A" ] } Transaction Certificate Retrieval Request: GET host:port/registrar/jim/tcert?count=5" ] } 6.2.1.6 Transactions API - GET /transactions/{UUID} Use the Transaction API to retrieve an individual transaction matching the UUID from the blockchain. The returned transaction message is defined in section 3.1.2.1. Transaction Retrieval Request: GET host:port/transactions/f5978e82-6d8c-47d1-adec-f18b794f570e Transaction Retrieval Response: { ==" } 6.3 CLI The CLI includes a subset of the available APIs to enable developers to quickly test and debug chaincodes or query for status of transactions. CLI is implemented in Golang and operable on multiple OS platforms. The currently available CLI commands are summarized in the following section. 6.3.1 CLI Commands To see what CLI commands are currently available in the implementation, execute the following: $ peer You will receive a response similar to below: Usage: peer [command] Available Commands: node node specific commands. network network specific commands. chaincode chaincode specific commands. help Help about any command Flags: -h, --help[=false]: help for peer --logging-level="": Default logging level and overrides, see core.yaml for full syntax Use "peer [command] --help" for more information about a command. Some of the available command line arguments for the peer command are listed below: -c- constructor: function to trigger in order to initialize the chaincode state upon deployment. -l- language: specifies the implementation language of the chaincode. Currently, only Golang is supported. -n- name: chaincode identifier returned from the deployment transaction. Must be used in subsequent invoke and query transactions. -p- path: identifies chaincode location in the local file system. Must be used as a parameter in the deployment transaction. -u- username: enrollment ID of a logged in user invoking the transaction. Not all of the above commands are fully implemented in the current release. The fully supported commands that are helpful for chaincode development and debugging are described below. Note, that any configuration settings for the peer node listed in the core.yaml configuration file, which is the configuration file for the peer process, may be modified on the command line with an environment variable. For example, to set the peer.id or the peer.addressAutoDetect settings, one may pass the CORE_PEER_ID=vp1 and CORE_PEER_ADDRESSAUTODETECT=true on the command line. 6.3.1.1 node start The CLI node start command will execute the peer process in either the development or production mode. The development mode is meant for running a single peer node locally, together with a local chaincode deployment. This allows a chaincode developer to modify and debug their code without standing up a complete network. An example for starting the peer in development mode follows: peer node start --peer-chaincodedev To start the peer process in production mode, modify the above command as follows: peer node start 6.3.1.2 network login The CLI network login command will login a user, that is already registered with the CA, through the CLI. To login through the CLI, issue the following command, where username is the enrollment ID of a registered user. peer network login <username> The example below demonstrates the login process for user jim. peer network login jim The command will prompt for a password, which must match the enrollment password for this user registered with the certificate authority. If the password entered does not match the registered password, an error will result. 22:21:31.246 [main] login -> INFO 001 CLI client login... 22:21:31.247 [main] login -> INFO 002 Local data store for client loginToken: /var/hyperledger/production/client/ Enter password for user 'jim': ************ 22:21:40.183 [main] login -> INFO 003 Logging in user 'jim' on CLI interface... 22:21:40.623 [main] login -> INFO 004 Storing login token for user 'jim'. 22:21:40.624 [main] login -> INFO 005 Login successful for user 'jim'. You can also pass a password for the user with -p parameter. An example is below. peer network login jim -p 123456 6.3.1.3 chaincode deploy The CLI deploy command creates the docker image for the chaincode and subsequently deploys the package to the validating peer. An example is below. peer chaincode deploy -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -c '{"Function":"init", "Args": ["a","100", "b", "200"]}' With security enabled, the command must be modified to pass an enrollment id of a logged in user with the -u parameter. An example is below. peer chaincode deploy -u jim -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -c '{"Function":"init", "Args": ["a","100", "b", "200"]}' Note: If your GOPATH environment variable contains more than one element, the chaincode must be found in the first one or deployment will fail. 6.3.1.4 chaincode invoke The CLI invoke command executes a specified function within the target chaincode. An example is below. peer chaincode invoke -n <name_value_returned_from_deploy_command> -c '{"Function": "invoke", "Args": ["a", "b", "10"]}' With security enabled, the command must be modified to pass an enrollment id of a logged in user with the -u parameter. An example is below. peer chaincode invoke -u jim -n <name_value_returned_from_deploy_command> -c '{"Function": "invoke", "Args": ["a", "b", "10"]}' 6.3.1.5 chaincode query The CLI query command triggers a specified query method within the target chaincode. The response that is returned depends on the chaincode implementation. An example is below. peer chaincode query -l golang -n <name_value_returned_from_deploy_command> -c '{"Function": "query", "Args": ["a"]}' With security enabled, the command must be modified to pass an enrollment id of a logged in user with the -u parameter. An example is below. peer chaincode query -u jim -l golang -n <name_value_returned_from_deploy_command> -c '{"Function": "query", "Args": ["a"]}' 7. Application Model 7.1 Composition of an Application 7.2 Sample Application 8. Future Directions 8.1 Enterprise Integration 8.2 Performance and Scalability 8.3 Additional Consensus Plugins 8.4 Additional Languages 9.1 Authors The following authors have written sections of this document:. 9.2 Reviewers The following reviewers have contributed to this document: Frank Lu, John Wolpert, Bishop Brock, Nitin Gaur, Sharon Weed, Konrad Pabjan. 9.3 Acknowledgements The following contributors have provided invaluable technical input to this specification: Gennaro Cuomo, Joseph A Latone, Christian Cachin 10. References [1] Miguel Castro, Barbara Liskov: Practical Byzantine fault tolerance and proactive recovery. ACM Trans. Comput. Syst. 20(4): 398-461 (2002) [2] Christian Cachin, Rachid Guerraoui, Luís E. T. Rodrigues: Introduction to Reliable and Secure Distributed Programming (2. ed.). Springer 2011, ISBN 978-3-642-15259-7, pp. I-XIX, 1-367 [3] Tushar Deepak Chandra, Vassos Hadzilacos, Sam Toueg: The Weakest Failure Detector for Solving Consensus. J. ACM 43(4): 685-722 (1996) [4] Cynthia Dwork, Nancy A. Lynch, Larry J. Stockmeyer: Consensus in the presence of partial synchrony. J. ACM 35(2): 288-323 (1988) [5] Manos Kapritsos, Yang Wang, Vivien Quéma, Allen Clement, Lorenzo Alvisi, Mike Dahlin: All about Eve: Execute-Verify Replication for Multi-Core Servers. OSDI 2012: 237-250 [6] Pierre-Louis Aublin, Rachid Guerraoui, Nikola Knezevic, Vivien Quéma, Marko Vukolic: The Next 700 BFT Protocols. ACM Trans. Comput. Syst. 32(4): 12:1-12:45 (2015) [7] Christian Cachin, Simon Schubert, Marko Vukolić: Non-determinism in Byzantine Fault-Tolerant Replication
https://openblockchain.readthedocs.io/en/latest/protocol-spec/
2022-05-16T22:18:43
CC-MAIN-2022-21
1652662512249.16
[]
openblockchain.readthedocs.io
lighttable view layout 🔗left panel From top to bottom: - import - Import images from the filesystem or a connected camera. - collections - Filter the images displayed in the lighttable center panel – also used to control the images displayed in the filmstrip and timeline modules. - recently used collections - View recently used collections of images. - image information - Display image information. - lua scripts installer (optional) - Install lua scripts. 🔗right panel From top to bottom: - Select images in the lighttable using simple criteria. - selected images(s) - Perform actions on selected images. - history stack - Manipulate the history stack of selected images. - styles - Store an image’s history stack as a named style and apply it to other images. - metadata editor - Edit metadata for selected images. - tagging - Tag selected images. - geotagging - Import and apply GPX track data to selected images. - export - Export selected images to local files or external services. 🔗bottom panel From left to right: - star ratings - Apply star ratings to images. - color labels - Apply color categories to images. - mode selector - Choose a lighttable mode. - zoom - Adjust the size of thumbnails. - enable focus-peaking mode - Highlight the parts of the image that are in focus. - set display profile - Set the display profile of your monitor(s).
https://docs.darktable.org/usermanual/3.6/en/lighttable/lighttable-view-layout/
2022-05-16T21:33:27
CC-MAIN-2022-21
1652662512249.16
[]
docs.darktable.org
unbounded colors Screens and most image file formats can only encode RGB intensities confined within a certain range. For example, images encoded on 8 bits can only contain values from 0 to 255, images on 10 bits from 0 to 1023, and so on… Graphic standards postulate that the maximum of that range, no matter its actual value, will always represent the maximum brightness that the display medium is able to render, usually between 100 and 160 Cd/m² (or nits) depending on the actual standard. We generally call this maximum “100 % display-referred”. The minimum of the range, encoded 0 no matter the bit-depth used, becomes then “0 % display-referred”. 100 % encodes pure white, 0 % encodes pure black. This is a limitation for image processing applications, because it means that any pixel lying outside of this range will be clipped to the nearest bound, resulting in non-recoverable loss of data (colors and/or textures). For the longest time, image processing software too was bounded to this limitation for technical reasons, and some still is, but now by design choice. As a result, they would clip RGB intensities at 100 % display-referred between image operations. darktable uses floating-point arithmetic inside its color pipeline, which means it can handle any RGB value internally, even those outside the display-referred range, as long as it is positive. Only at the very end of the pipeline, before the image is saved to a file or sent to display, are the RGB values clipped if needed. Pixels that can take values outside of the display range are said to have “unbounded colors”. One could choose to clamp (i.e. confine) those values to the allowed range at every processing step or choose to carry on with them, and clamp them only at the last step in the pipeline. However, it has been found that processing is less prone to artifacts if the unbounded colors are not clamped but treated just like any other color data. At the end of the pipeline, modules like filmic can help you to remap RGB values to the display-referred range while maximizing the data preservation and avoiding hard clipping, which is usually not visually pleasing. However, at all times in the pipeline, you must ensure that you do not create negative RGB values. RGB intensities encode light emissions and negative light does not exist. Those modules that rely on a physical understanding of light to process pixels will fail if they encounter a non-physical light emission. For safety, negative RGB values are still clipped whenever they might make the algorithms fail, but the visual result might look degraded. Negative values can be produced when abusing the black level in exposure or the offset in color balance and care should be taken when using these modules.
https://docs.darktable.org/usermanual/3.6/en/special-topics/color-management/unbounded-colors/
2022-05-16T20:54:25
CC-MAIN-2022-21
1652662512249.16
[]
docs.darktable.org
Additional Resources Additional Resources Some additional resources are provided to assist with the use of the DITA compare product. A deltaxml.ditaval file for highlighting the identified changes. A guide for generating PDF and XHTML output with change markup using the DITA Open Toolkit, is available on our website (DITA-OT change markup Guide). Some oXygen CSS files for colouring changes identified using the DITA markup's revand statusattributes in oXygen editor's Author mode. Instructions for installing these CSS files is provided in the Adding change markup colouring for DITA maps in oXygen Section of this document. The DeltaXML oXygen Adaptor is available from our website. It provides some additional map-styling and enables our DITA compare product to be run from within oXygen editor. For information on running DITA Compare product please see the User Guide and the Samples and Guides page. Adding change markup colouring for DITA maps in oXygen See also Styling changes with CSS in Oxygen Editor The default behaviour of the DITA map comparator is to use DITA's 'rev' and 'status' attributes to record which topics have been added, deleted, changed, or left unchanged. The CSS files in this directory can be used to highlight these changes within oXygen's author view of the map. The following steps enabled the provided css to be used to highlight DITA markup changes when using oXyegen Editor in Author mode. Copy the css files contained in the css directory to oXygen application's frameworks\dita\css_classed directory. The CSS files to copy are: coloured_revisions.css - provides the styling for both topic and map additions, deletions, changed, and unchanged markup. dita_coloured_revisions.css - merges our coloured_revisions.css with oXygen's dita.css. map_edit_coloured_revisions.css - merges our coloured_revisions.css with oXygen's map_edit.css. map_show_link_text_coloured_revisions.css - merges our coloured_revisions.css with oXygen's map_show_link_text.css. Configure the oXygen Editor application Select the 'DITA Map' entry in the list and press the 'Edit' button Select the 'Author' entry tab. Select the CSS tab and press the + symbol under the list Enter the URI for the stylesheet (e.g. ${frameworks}/dita/css/map_edit_coloured_revisions.css) and a title for it (e.g. Edit Attribute in-place with coloured revisions) If you want to keep the standard CSS as the default, select the Alternate checkbox to make this CSS selectable rather than default Repeat this process for the other CSS files, as illustrated in the figure below. Press OK (3 times to close all preference windows). In Author view for DITA Map files you should now be able to select 'Edit Attribute in-place with coloured revisions' from the 'Styles' drop-down list, located in the toolbar at the top of the window. The new CSS style will now be applied to the current content.
https://docs.deltaxml.com/dita-compare/9.0/Additional-Resources.2440365170.html
2022-05-16T22:43:39
CC-MAIN-2022-21
1652662512249.16
[]
docs.deltaxml.com
Attribution If you're sending attribution information to RevenueCat through the Purchases SDK, we will display the latest information from the network in the Attribution card for the customer. Below are the possible attribution fields that can be displayed. Keep in mind that RevenueCat itself is not an attribution network, and will only display the information that available from the network you're using (e.g. Appsflyer, Adjust, etc.). See our attribution integrations to start sending this information. Next Steps Updated 7 months ago Did this page help you?
https://docs.revenuecat.com/docs/attribution-card
2022-06-25T05:26:34
CC-MAIN-2022-27
1656103034170.1
[array(['https://files.readme.io/85524ac-Screen_Shot_2020-09-17_at_11.38.26_AM.png', 'Screen Shot 2020-09-17 at 11.38.26 AM.png'], dtype=object) array(['https://files.readme.io/85524ac-Screen_Shot_2020-09-17_at_11.38.26_AM.png', 'Click to close...'], dtype=object) ]
docs.revenuecat.com
Monitor infrastructure This page documents an earlier version of InfluxDB. InfluxDB v2.3 is the latest stable version. View this page in the v2.3 Docker Use the Docker Monitoring template to monitor your Docker containers. Monitor vSphere Use the vSphere Dashboard for InfluxDB v2 template to monitor your vSphere host. Monitor Windows Use the Windows System Monitoring template to monitor your Windows system..
https://test2.docs.influxdata.com/influxdb/v2.0/monitor-alert/templates/infrastructure/
2022-06-25T04:12:17
CC-MAIN-2022-27
1656103034170.1
[]
test2.docs.influxdata.com
Turn on autofill Autofill is available only on devices. - Tap > >Autofill. - Select theAutofill from the BlackBerry Keyboardcheckbox. - TapOpen settings. - TapPassword Keeper autofill. - Turn on the switch. - TapOK. - If prompted, enter your screen lock, and then tapNext. This turns offSecure start-up. To keep your device secure, turn it back on: - Tap >Security>Screen lock. - Enter your pattern, PIN, or password. - TapPattern,PIN, orPassword. - SelectRequire Pattern/PIN/Password to start device. - TapContinue>OK. - Enter your pattern, PIN, or password, and then tapContinue. - Enter your pattern, PIN, or password again, and then tapOK. Turn on autofill for username and password fields If your device is running 8.0 or later, you can use the autofill feature by tapping a username and password field in an app or website. This feature might not be available for some web browsers. - Tap > >Autofill. - Select theAutofill from a username or password fieldcheckbox. - SelectPassword Keeper autofill>OK. - When you are using an app or website that has a saved username and password, tap on the field. - If prompted, tap orTap to unlock. Enter yourBlackBerry Password Keeperpassword. - Tap on one of the suggested saved usernames and passwords. - TapAssociate.Once associated, the username and password will be suggested the next time you tap on the field in an app or website.
https://docs.blackberry.com/en/apps-for-android/password-keeper/latest/help/rjn1466175559820
2022-06-25T04:39:52
CC-MAIN-2022-27
1656103034170.1
[]
docs.blackberry.com
Managing Users & Licenses Overview You'll manage your DayBack subscription and invite other users to your group from the admin settings inside DayBack. Any DayBack administrator can do this. DayBack for Salesforce relies on the Salesforce App Exchange to allocate users. More on that below. In this article Purchasing DayBack Purchasing To purchase DayBack you'll first need to start a trial. If you have one gong already, log-in and head to admin settings and click on the "Users & Billing" tab in the settings left-hand sidebar. (If you don't have a trial, please start one here.) Once you're logged in, click "Purchase DayBack" and you're almost done. How many users to purchase Note that DayBack will set up your purchase with as many users as you've invited to your trial: you can change that number of users when finalizing your subscription as shown above. If you reduce the number of users, DayBack will deactivate the last few users you've invited so that the number of active users matches the number of users you've purchased. You can then deactivate and reactive users until you have the correct users active. Note that you can add users at any time. Adding, Inviting, and Removing Users Inviting users and managing their access DayBack admins (you can have more than one) and which users have read-only access. In FileMaker, you can set up DayBack so that you don't need to invite each user. With this enabled, FileMaker will create a new user record for each FileMaker user who visits the calendar, until you exceed the number of users in your subscription. Learn how to set that up here: creating and signing-in users automatically in FileMaker. When you invite a new user to your group DayBack will send them a short email letting them they've been invited and telling them the name of your group. The email also contains instructions for logging in using their email address and a new password created for them by DayBack. Once they log in they'll be able to change this password on their own Admin / Settings screen. Adding more users to your plan Removing users Note: Removing and deactivating users here won't automatically change the number of users billed on your subscription. That number is always shown here as "X user licenses purchased" at the top of the screen. Click "Manage Subscription" then "Configure" to increase or decrease the number of users on your plan. Canceling or Pausing Your Account You can pause or cancel your account at any time from the admin settings section inside DayBack. The short video below shows you how. Click "Manage Subscription" and then click the pause or cancel button on your account. Note that DayBack does not issue refunds for the unused time on monthly or yearly subscriptions when you pause or cancel. Salesforce users will want to contact us in addition to uninstalling the DayBack package to cancel their account-- unfortunately, there is no way to do this from inside Salesforce. Managing to. Tab Visibility: Custom profiles may not have tab visibility of DayBack by default. This will prevent users with this profile from accessing the DayBack app. To fix this, make sure DayBack is set to Default On under the tab settings for the assigned profile. You can find Salesforce's instructions for modifying tab permissions here: Show or hide Tabs for Users Modifying Your Billing Information, Adding More Licenses, and Cancelling While you start your DayBack trial via App Exchange, please call or email us for any modifications to your subscription--including adding or reducing the number of users, or canceling your subscription. You can reach us at 855-733-3263, by email at [email protected], or here. Entering a Licenses in DayBack Classic (for FileMaker 13-18) Here's how to enter your license for the older DayBack Classic connecting to FileMaker CWP. (The new DayBack for FileMaker 19 and higher uses the license management described at the top of the page.).
https://docs.dayback.com/article/82-managing-licenses
2022-06-25T05:32:53
CC-MAIN-2022-27
1656103034170.1
[]
docs.dayback.com
Choose a Data Source - 2 minutes to read Note The default Report Designer implementation uses the Report Wizard (Fullscreen) version. To use the popup version instead, disable the ReportDesignerWizardSettings.UseFullscreenWizard option. On this wizard page, you can select an available data source or create a new one to assign to the report. Selecting an Existing Data Source To use an existing data source, select the first option and choose the required item from the list. - If you invoke the Report Wizard by selecting the New via Wizard command in the Report Designer Menu, this list displays data sources defined as default data sources. If you invoke the Report Wizard by selecting the Design in Report Wizard menu command, the list contains data sources defined for the Report Designer as default data sources, as well as data sources added to the report using the SQL Data Source Wizard. When the report and report designer have data sources with identical names, this page displays the report’s data source. Click Next to proceed to the next wizard page: Choose Columns (Multi-Query Version). Creating a New Data Source To create a new data source, select the second option on this page. In this case, you follow the steps in the SQL Data Source Wizard to add a new data source and then proceed to the Choose Columns (Multi-Query Version) page on completion. Note Creating a new data source is only available if you provided the Web Report Designer with default data connections.
https://docs.devexpress.com/XtraReports/17665/web-reporting/gui/wizards/report-wizard-popup/data-bound-report/choose-a-data-source
2022-06-25T04:53:00
CC-MAIN-2022-27
1656103034170.1
[array(['/XtraReports/images/web-designer-report-wizard-choose-data-source127886.png', 'web-designer-report-wizard-choose-data-source'], dtype=object) ]
docs.devexpress.com
docker pull snyk/broker:github-com. The following environment variables are mandatory to configure the Broker client: BROKER_TOKEN- the Snyk Broker token, obtained from your Snyk Org settings view (app.snyk.io). GITHUB_TOKEN- a personal access token with full repo, read:organd admin:repo_hookscopes. PORT- the local port at which the Broker client accepts connections. Default is 8000. BROKER_CLIENT_URL- the full URL of the Broker client as it will be accessible to GitHub.com webhooks, such as accept.jsonfor Snyk IaC, Code, Open Source and Container for GitHub is attached. Add Projects. accept.json. This should be in a private array: docker logs <container id>where container id is the GitHub Broker container ID to look for any errors
https://docs.snyk.io/features/snyk-broker/snyk-broker-set-up-examples/broker-example-how-to-setup-broker-with-jira
2022-06-25T04:52:49
CC-MAIN-2022-27
1656103034170.1
[]
docs.snyk.io
Reclusters tables that were previously clustered with CLUSTER. clusterdb [<connection-option> ...] [--verbose | -v] [--table | -t <table>] [[--dbname | -d] <dbname] clusterdb [<connection-option> ...] [--all | -a] [--verbose | -v] clusterdb -? | --help clusterdb -V | --version. PGDATABASE. If that is not set, the user name specified for the connection is used. clusterdbgenerates and sends to the server. -tswitches. clusterdbversion and exit. cluster, and if that does not exist, template1will be used. To cluster the database test: clusterdb test To cluster a single table foo in a database named xyzzy: clusterdb --table foo xyzzyb
https://docs.vmware.com/en/VMware-Tanzu-Greenplum/6/greenplum-database/GUID-utility_guide-ref-clusterdb.html
2022-06-25T05:27:13
CC-MAIN-2022-27
1656103034170.1
[]
docs.vmware.com
To create section 10 (1) (d) of article V of the constitution; Relating to: prohibiting the governor from using the partial veto to increase state expenditures (first consideration). Amendment Histories Joint Resolution Text (PDF: ) SJR59 ROCP for Committee on Rules (PDF: ) SJR59 ROCP for Committee on Insurance, Financial Services, Government Oversight and Courts On 10/23/2019 (PDF: ) LC Bill Hearing Materials Wisconsin Ethics Commission information 2019 Assembly Joint Resolution 108 - A - Rules
https://docs.legis.wisconsin.gov/2019/proposals/reg/sen/joint_resolution/sjr59
2021-05-06T10:18:24
CC-MAIN-2021-21
1620243988753.91
[]
docs.legis.wisconsin.gov
Leia currently provides two main objects in the LeiaNativeSDK. These are the following: LeiaCameraData LeiaCameraView Together, they allow applications to make full use of the Leia display with minimal changes to their current applications. There are also 2 Java objects to note: LeiaDisplayManager SimpleDisplayQuery While these are not strictly part of the LeiaNativeSDK, these are the classes that allow you to get information about the Leia hardware from the Android system, and this information is needed by the LeiaNativeSDK. Usage of these objects are discussed at the end of this section. The LeiaCameraData struct is a storage variable that the rest of the Native SDK can use to better understand the application. It stores values relevant to the perspective matrices which will be generated for your application later. By keeping this data together, it allows the LeiaCameraView struct to stay small, enabling it to scale better based on the necessary number of cameras and removing duplicate data. An important part of the LeiaNativeSDK is to provide perspective matrice, simplifying the work behind creating 3D effects. Each LeiaCameraView struct contains a perspective matrix. Your application will use a 2D array of these and the LeiaNativeSDK library will be able to properly fill the matrices inside for you to use in your render loop. This matrix will replace the standard camera projection matrix your application would normally use.
https://docs.leialoft.com/developer/android-sdk/working-with-the-code
2021-05-06T09:31:19
CC-MAIN-2021-21
1620243988753.91
[]
docs.leialoft.com
Set up for live events in Microsoft Teams When you're setting up for live events, there are several steps that you must take. Step 1: Set up your network for live events in Teams Live events produced in Teams require you to prepare your organization's network for Teams. Step 2: Get and assign licenses Ensure you have correct license assignments for who can create and schedule live events and who can watch live events. Step 3: Set up live events policies Live events policies are used to control who in your organization can hold live events and the features that are available in the events they create. You can use the default policy or create one or more custom live events policies. After you create a custom policy, assign it to a user or groups of users in your organization. Note Users in your organization will get the global (Org-wide default) policy unless you create and assign a custom policy. By default in the global policy, live event scheduling is enabled for Teams users, live captions and subtitles (transcription) is turned off, everyone in the organization can join live events, and the recording setting is set to always record. Create or edit a live events policy In the left navigation of the Microsoft Teams admin center, go to Meetings > Live events policies. Do one of the following options: - If you want to edit the existing default policy, choose Global (Org-wide default). - If you want to create a new custom policy, choose Add. - If you want to edit a custom policy, select the policy, and then choose Edit. Here are the settings you can change to fit the needs of your organization. You can also do this by using Windows PowerShell. For more information, see Use PowerShell to set live events policies in Teams. Assign a live events policy to users If you created a custom live events policy, assign it to users for the policy to be active.. Enable users to schedule events that were produced with an external app or device For users to schedule events produced with an external app or device, you must also do the following steps: Enable Microsoft Stream for users in your organization. Stream is available as part of eligible Microsoft 365 or Office 365 subscriptions or as a standalone service. Stream isn't included in Business Essentials or Business Premium plans. See Stream licensing overview for more details. Note, and some time in early 2021 we'll require all customers to use OneDrive for Business and SharePoint for new meeting recordings. Learn more about how you can assign licenses to users so that users can access Stream. Ensure Stream isn't blocked for the users as defined in this article. Ensure users have live event creation permission in Stream. By default, administrators can create events with an external app or device. Stream administrator can enable additional users for live event creation in Stream. Ensure live event organizers have consented to the company policy set by Stream admin. If a Stream administrator has set up a company guidelines policy and requires employees to accept this policy before saving content, then users must do so before creating a live event (with an external app or device) in Teams. Before you roll out the live events feature in the organization, make sure users who will be creating these live events have consented to the policy. Step 4: Set up a video distribution solution for live events in Teams Playback of live event videos uses adaptive bitrate streaming (ABR) but it's a unicast stream, meaning every viewer is getting their own video stream from the internet. For live events or videos sent out to large portions of your organization, there could be a significant amount of internet bandwidth consumed by viewers. For organizations that want to reduce this internet traffic for live events, live events solutions are integrated with Microsoft's trusted video delivery partners offering software defined networks (SDNs) or enterprise content delivery networks (eCDNs). These SDN/eCDN platforms enable organizations to optimize network bandwidth without sacrificing end user viewing experiences. Our partners can help enable a more scalable and efficient video distribution across your enterprise network. Purchase and set up your solution outside of Teams Get expert help with scaling video delivery by leveraging Microsoft's trusted video delivery partners. Before you can enable a video delivery provider to be used with Teams, you must purchase and set up the SDN/eCDN solution outside and separate from Teams. The following SDN/eCDN solutions are pre-integrated and can be set up to be used with Stream. Microsoft customers. Learn more. Kollective is a cloud-based, smart peering distribution platform that leverages your existing network infrastructure to deliver content, in many forms, (live streaming video, on-demand video, software updates, security patches, etc.) faster, more reliably and with less bandwidth. Our secure platform is trusted by the world's largest financial institutions and with no additional hardware, setup and maintenance are easy. Learn more. Ramp OmniCache provides next-generation network distribution and ensures seamless delivery of video content across global WANs, helping event producers optimize network bandwidth and support successful live event broadcasts and on-demand streaming. The support for Ramp OmniCache for live events produced in Teams is coming soon. Learn more. Riverbed, the industry standard in network optimization, is extending its acceleration solutions to Microsoft Teams and Stream. Now Microsoft 365 customers can confidently accelerate 365 traffic including Teams and Stream along with a wealth of other leading enterprise SaaS services to increase workforce productivity from anywhere. Teams and Stream acceleration can be enabled through an effortless setup that comes with all the assurance of Riverbed’s world-class support and ongoing investment. Note Your chosen SDN or eCDN solution is subject to the selected 3rd party provider's terms of service and privacy policy, which will govern your use of the provider's solution. Your use of the provider's solution will not be subject to the Microsoft volume licensing terms or Online Services Terms. If you do not agree to the 3rd party provider's terms, then don't enable the solution in Teams. After you set up the SDN or eCDN solution, you're ready to configure the provider for live events in Teams. Next steps Go to Configure live events settings in Teams.
https://docs.microsoft.com/en-us/MicrosoftTeams/teams-live-events/set-up-for-teams-live-events?WT.mc_id=M365-MVP-4039827
2021-05-06T11:09:49
CC-MAIN-2021-21
1620243988753.91
[]
docs.microsoft.com
Lösungskinetik von Gips und Anhydrit Jeschke, Alexander A Univ. Bremen Monography Verlagsversion Deutsch Jeschke, Alexander A, 2002: Lösungskinetik von Gips und Anhydrit. Univ. Bremen, 120 S., DOI 10.23689/fidgeo-210. The dissolution kinetics of gypsum and anhydrite have been measured under various conditions. For gypsum an almost linear rate equation R=ks1(1-c/ceq)n1 is valid, where R is the surface rate, n1 1 is the kinetics order, c is the total calcium concentration at the surface, and ceq the equilibrium concentration with respect to gypsum. For the determination of the entire dissolution kinetics a batch set-up was used. This batch experiment reveals a dissolution rate equation R=ks1(1-c/ceq)n1 which switch close to equilibrium to a nonlinear rate equation R=ks2(1-c/ceq)n2 with n2 4.5. The experimentally observed dissolution rates from the batch experiment could be fitted by only minor variations with a mixed kinetics model. The rotating disk experiment on anhydrite reveals a surface controlled rate equation. For anhydrite the observed experimentally dissolution rates by a batch experiment are described by R=ks(1-c/ceq})n, where ks is the surface rate constant and n 4.2 is the kinetic order. Furthermore, a method for the determination of the rate equation parameters was developed ... Statistik:View Statistics Collection Subjects:Experimentelle Geochemie Sulfate, Chromate, Molybdate und Wolframate {Mineralogie} Mineralchemie
https://e-docs.geo-leo.de/handle/11858/00-1735-0000-0001-315A-5
2021-05-06T09:07:55
CC-MAIN-2021-21
1620243988753.91
[]
e-docs.geo-leo.de
Modules¶ Apt Configure¶ Summary: configure apt. entries. Each entry under sources key supports variable replacements for the following strings: - $MIRROR - $PRIMARY - $SECURITY - $RELEASE Internal name: cc_apt_configure Module frequency: per instance Supported distros: ubuntu, debian Config keys: apt: preserve_sources_list: <true, whcih talbes run growpart on mode key is set to auto, then any available utility (either growpart or existance look for the first existing device in: - /dev/sda - /dev/vda - /dev/xvda - /dev/sda1 - /dev/vda1 - /dev/xvda1:: per instance Supported distros: all Config keys: locale: <locale str> locale_configfile: <path to locale config file> /etc/ntp.conf.dist: centos, debian, fedora, opensuse, sles, ubuntu - Config schema: ntp: (object/null) pools: (array of string) List of ntp pools. If both pools and servers are empty, 4 default pool servers will be provided of the format {0-3}.{distro}.pool.ntp.org. servers: (array of string) List of ntp servers. If both pools and servers are empty, 4 default pool servers will be provided with the format {0-3}.{distro}.pool.ntp.org. Examples: ntp: a form that the shutdown utility recognizes. The most common format is the form +5 for 5 minutes. See man shutdown for more options.). Internal name: cc_puppet Module frequency: per instance Supported distros: all Config keys: puppet: install: <true/false> version: <version> conf: agent: server: "puppetmaster.example.org" certname: "%i.%f" ca_cert: | -------BEGIN CERTIFICATE------- <cert data> -------END CERTIFICATE------- resovlconf, and similarly RedHat will use sysconfig, this module is likely to be of little use unless those are configured correctly. Note For RedHat:HatHat. all commands must be proper yaml, so you have to quote any characters yaml would eat (‘:’ can be problematic). Internal name: cc_salt_minion Module frequency: per instance Supported distros: all Config keys: salt_minion: conf: master: salt.example.com. Internal name: cc_scripts_per_instance Module frequency: per instance Supported distros: all Scripts Per Once¶ Summary: run one time scripts Any scripts in the scripts/per-once directory on the datasource will be run only once..... Snappy¶ Summary: snappy modules allows configuration of snappy. ... SSH Authkey Fingerprints¶ Summary: log fingerprints of user ssh keys Write fingerprints of authorized keys for each user to log. This is enabled by default, but can be disabled using no_ssh_fingerprints. The hash type for the keys can be specified, but defaults to md5. login. Mark user inactive. Default: false - - ssh-import-id: Optional. SSH id to import for user. Default: none - sudo: Optional. Sudo rule to use, or list of sudo rules to use. Default: none. -. Internal name: cc_users_groups Module frequency: per instance Supported distros: all Config keys: groups: - <group>: [<user>, <user>] - <group> users: - default - standargs. to specify binary data, use the yaml option !!binary Internal name: cc_write_files Module frequency: per instance Supported distros: all Config keys:' -': fedora, rhel Config keys: yum_repos: <repo-name>: baseurl: <repo url> name: <repo name> enabled: <true/false> # any repository configuration options (see man yum.conf)
https://cloudinit.readthedocs.io/en/18.1/topics/modules.html
2021-05-06T09:46:47
CC-MAIN-2021-21
1620243988753.91
[]
cloudinit.readthedocs.io
REPL Configuration Behavior on connect Normally, when you first establish a REPL connection, the REPL buffer is auto-displayed in a separate window. You can suppress this behavior) Customizing the Return key’s behavior behavior — you can now replicate it by setting the built-in option scroll-conservatively, for example: (add-hook 'cider-repl-mode-hook '(lambda () (setq scroll-conservatively 101))) Auto-trimming the REPL buffer As noted previously, the REPL buffer’s performance will degrade if its size is allowed to grow infinitely. You can obviously clear the REPL manually from time to time, but CIDER also has some auto-trimming functionality that can simplify the process for you. Auto-trimming can be enabled by setting cider-repl-buffer-size-limit to an integer. By setting a limit to the number of characters in the buffer, the buffer can be trimmed (from its beginning) after each evaluation if the set limit has been exceeded. Here’s how you can set the size limit in your Emacs config: (setq cider-repl-buffer-size-limit 100000) Result Prefix You can change the string used to prefix REPL results: (setq cider-repl-result-prefix ";; => ") Which then results in the following REPL output: user> (+ 1 2) ;; => 3 By default, REPL results have no prefix. Set ns in REPL By default cider-repl-set-ns won’t require the target ns, just set it. That’s done with the assumption that you’ve probably evaluated the ns in question already before switching to it (e.g. by doing C-c C-k ( cider-load-buffer) in its source buffer). If you want to change this behavior (to avoid calling cider-repl-set-ns and then (require 'my-ns) manually), you can set: (setq cider-repl-require-ns-on-set t) Customizing the initial REPL namespace Normally, the CIDER REPL will start in the user namespace. You can supply an initial namespace for REPL sessions in the repl-options section of your Leiningen project configuration: :repl-options {:init-ns 'my-ns} behavior use: (setq cider-repl-use-clojure-font-lock nil) You can temporarily disable the Clojure font-locking by using M-x cider-repl-toggle-clojure-font-lock or the REPL shortcut toggle-font-lock. Keep in mind that by default cider-repl-input-face simply makes the input bold and cider-repl-result-face is blank (meaning it doesn’t really apply any font-locking to results), so you might want to adjust those faces to your preferences. Some Emacs color themes might be providing different defaults for them. Font-locking of Results There are a few things you need to keep in mind about Clojure font-locking of results: When streaming is enabled only single-chunk results will be font-locked as Clojure, as each chunk is font-locked by itself and the results can’t really be combined The font-locking of results is an expensive operation which involves copying the value to a temporary buffer, where we check its integrity and do the actual font-locking. By default CIDER instructs nREPL to stream data in 4K chunks, but you can easily modify this: ;; let's stream data in 8K chunks (setq cider-print-buffer-size (8 * 1024)) Setting this to nil will result in using nREPL’s default buffer-size of 1024 bytes. The smaller the print buffer size the faster you’ll get feedback/updates in the REPL, so generally it’s a good idea to stick to some relatively small size. Pretty printing in the REPL By default the REPL always prints the results of your evaluations using the printing function specified by cider-print-fn. You can temporarily disable this behavior and revert to the default behavior (equivalent to clojure.core/pr) using M-x cider-repl-toggle-pretty-printing or the REPL shortcut toggle-pprint. can be rendered as images in the REPL. You can enable this behavior like this: (setq cider-repl-use-content-types t) Alternatively, you can toggle this behavior on and off using M-x cider-repl-toggle-content-types or the REPL shortcut toggle-content-types. REPL type detection Normally CIDER would detect automatically the type of a REPL (Clojure or ClojureScript), based on information it receives from the track-state middleware, that’s part of cider-nrepl. In some rare cases (e.g. a bug in cider-nrepl or shadow-cljs) this auto-detection might fail and return the wrong type (e.g. Clojure instead of ClojureScript). You can disable the auto-detection logic like this: (setq cider-repl-auto-detect-type nil) Afterwards you can use cider-repl-set-type to set the right type manually..
https://docs.cider.mx/cider/1.1/repl/configuration.html
2021-05-06T10:40:49
CC-MAIN-2021-21
1620243988753.91
[]
docs.cider.mx
Genesys App Automation Platform Deployment Guide Welcome to the Genesys App Automation Platform Deployment Guide. This document describes the necessary steps to complete a new installation of the Genesys App Automation Platform (GAAP) software or upgrade an existing version. It pertains to installing or upgrading to GAAP version 3.6.000.05 (also known as Oz). Who should use this document? The intended audience for this document are users responsible for deploying GAAP and associated components in the customer environment. What is in this document? This document explains the following topics: - Hardware and Software Specifications - Provides information on required hardware and software to support the GAAP installation. - Pre-Installation Checklist - Lists steps you must complete before installing GAAP. - New Installation - Provides information in installing a new instance of GAAP. - Existing Installation - Provides information on upgrading an existing instance of GAAP. - Post-Installation Configuration - Lists steps you must complete after installing GAAP. This page was last edited on April 9, 2020, at 09:14.
https://docs.genesys.com/Documentation/GAAP/9.0.0/Dep/Welcome
2021-05-06T10:58:40
CC-MAIN-2021-21
1620243988753.91
[]
docs.genesys.com
Under File backup in the Program settings you can set specific settings for a backup. Prefer external USB drive to store image files If you enable this function, external USB drives will be the preferred target for storing images. Under the option Check file backup automatically after its creation you can determine if image files should be checked for errors immediately after they are created. By selecting Automatically overwrite existing image files, the prompt asking if you want to overwrite an existing older image file with the same name will be suppressed. This function is very useful when, for example, you're imaging data a number of times daily, and the image is automatically being named after the Day/Month/Year. Under Behavior you can adapt the program user guidance individually. In the DropDown menu, you can select what will happen when you double-click a file or folder.
https://docs.oo-software.com/en/oodiskimage7/program-settings/change-default-settings-for-file-backup
2021-05-06T10:02:42
CC-MAIN-2021-21
1620243988753.91
[]
docs.oo-software.com
Unified ensure. - Up-to-date Linux Kernel and software patches - Multiple NIC support for Internet and intranet traffic - Disabled SSH - Disabled FTP, Telnet, Rlogin, or Rsh services - Disabled unwanted services
https://docs.vmware.com/en/Unified-Access-Gateway/3.10/com.vmware.uag-310-deploy-config.doc/GUID-8996B593-6700-4493-8209-62A31C64836E.html
2021-05-06T10:43:23
CC-MAIN-2021-21
1620243988753.91
[]
docs.vmware.com
VMware Tools supports shared folders and cut and paste operations between the guest operating system and the machine from which you launch the vCloud Director Web console. vCloud Director depends on VMware Tools to customize the guest OS. Using VMware Tools, you can move the pointer in and out of the virtual machine console window. A virtual machine must be powered on to install VMware Tools. For information about installing VMware Tools on your operating system, see the Installing and Configuring VMware Tools Guide.
https://docs.vmware.com/en/VMware-Cloud-Director/9.7/com.vmware.vcloud.user.doc/GUID-61B230A7-188B-4F56-B969-6D0582FCEA72.html
2021-05-06T09:55:55
CC-MAIN-2021-21
1620243988753.91
[]
docs.vmware.com
Reflection Cubemaps¶ Specular Indirect Lighting is stored in an array of cubemaps. These are defined by the Reflection Cubemap objects. They specify where to sample the scene’s lighting and where to apply it. Siehe auch Screen Space Reflections are much more precise than reflection cubemaps. If enabled, they have priority and cubemaps are used as a fall back if a ray misses. If Ambient Occlusion is enabled, it will be applied in a physically plausible manner to specular indirect lighting. Bemerkung The cube probes are encoded into tetrahedral maps. Some distortions may occur on the negative Z hemisphere. Those are more visible with higher roughness values. Blending¶ The lighting values from a Reflection Cubemap will fade outwards until the volume bounds are reached. They will fade into the world’s lighting or another Reflection Cubemap’s lighting. If multiple Reflection Cubemaps overlap, smaller (in volume) ones will always have more priority. If an object is not inside any Reflection Cubemap influence, or if the indirect lighting has not been baked, the world’s cubemap will be used to shade it. Referenz - Panel - Distance A probe object only influences the lighting of nearby surfaces. This influence zone is defined by the Distance parameter and object scaling. The influence distance varies is a bit, depending on the probe type. For Reflection Cubemaps the influence volume can either be a box or a sphere centered on the probe’s origin. - Falloff Percentage of the influence distance during which the influence of a probe fades linearly. - Intensity Intensity factor of the recorded lighting. Making this parameter anything other than 1.0 is not physically correct. Use it for tweaking or artistic purposes. - Clipping Define the near and far clip distances when capturing the scene. - Visibility Collection Sometimes,. Bemerkung This is only a filtering option. That means if an object is not visible at render time it won’t be visible during the probe render. Custom Parallax¶ Referenz - Panel By default, the influence volume is also the parallax volume. The parallax volume is a volume on which is projected the recorded lighting. It should roughly fit it surrounding area. In some cases it may be better to adjust the parallax volume without touching the influence parameters. In this case, just enable the Custom Parallax and change the shape and distance of the parallax volume independently.
https://docs.blender.org/manual/de/dev/render/eevee/light_probes/reflection_cubemaps.html
2021-05-06T09:50:53
CC-MAIN-2021-21
1620243988753.91
[]
docs.blender.org
Static Subgraph Optimizations: Usage¶ Note This is an experimental feature and so the API might change in the future as it is developed. This feature intends to improve runtime performance by optimizing the execution of the static subgraphs in a model. When this feature is enabled, the first iteration runs as normal except that an execution trace is also collected. The trace is then used to generate optimized code that is will be called instead of the define-by-run code starting from the second iteration. Basic usage¶ To enable static graph optimizations, it is only necessary to add the chainer.static_graph() decorator to a chain’s __call__() method. We will now show how the Chainer MNIST example can be modified to use this feature. The modified version with static subgraph optimizations is located at examples/static_graph_optimizations/mnist. The first step is to import the necessary packages: Since the neural network model MLP corresponds to a static graph, we can annotate it as a static graph by using the chainer.static_graph() decorator on the chain’s __call__() method. This lets the framework know that that the define-by-run code of the chain always creates the same graph (that is, it always performs the same sequence of computations) each time it is called. We will refer to such a chain as a static chain in the documentation. Note If your model’s define-by-run code has any control flow operations that could cause it to potentially call different Chainer functions/links each time it is called, then you cannot use this decorator. Note There are currently some restrictions on how variables can be passed into a static chain’s __call__() method. Refer to the documentation of chainer.static_graph() for details. Recall that the define-by-run code of a static chain’s __call__() method only actually runs during the first iteration and is then replaced by optimized static schedule code. The current implementation only knows how to do this auto-replacement for calls to Chainer functions and links. Any other code that the user puts in __call__() (which we refer to as “side-effect code”) will only ever get called once by default, since the define-by-run code is only executed during the first iteration. In order to make sure such “side effect” code actually gets called each iteration, we need to put it inside a function or method decorated by static_code(). We expect there will rarely be a need to use side-effect code but for completeness, an example of a model that uses it is available in the MLPSideEffect Chain of the static graph MNIST example. In this example, we only need to use chainer.static_graph() on the model chain, since the whole model is static. However, in more general dynamic models, each of the largest static subgraphs (which should each be written as a chain) should also use chainer.static_graph(). Note Nested application of chainer.static_graph() is not allowed. That is, if a chainer.static_graph()-decorated chain calls another chains, only the outermost chain should use the decorator. Calling a static chain multiple times in the same iteration¶ In a general dynamic graph network, it is not possible to know in advance how many times a static chain will be called in any particular iteration. Note that during training, it is necessary to maintain separate internal state (such as intermediate activations) for each of these calls so that the gradients can be computed in the backward pass. So, although the layer functions of the static schedule will be identical each time the same static chain is called, any internal state must be distinct. It is also possible that a static chain could be called multiple times with inputs of different shapes and/or types during the same iteration. To avoid confuction, “static schedule” will refer to both the functions and any corresponding internal state such as activations. If backpropagation mode is disabled ( chainer.config.enable_backprop is False), it is safe for the implementation to simply compute a static schedule for the first call and reuse it for subsequent calls, provided that the cached schedule is compatible with the input shapes/types. However, during training, it is necessary to maintain distinct internal state for each call in order to compute the gradients for the backward pass, which prevents us from reusing the same static schedule for each of the multiple calls of a static chain in an iteration. The current implementation handles this issues as follows. A cache of static schedules, which is initially empty, is associated with each static chain. The size of this cache will be equal to the maximum number of times that the static chain has been called in any previous iteration, and the cache is reset whenever certain chain configuration flags change, such as training mode and backpropagation model. At the start of a given iteration, all cached schedules are available for use and the number of available schedules is decremented each time the static chain is called. If the chain is called when the cache is size zero, then its define-by-run code will execute to create a new schedule cache. In order for such an implementation to work, each static chain must be notified when the forward pass has ended (or when the forward pass is started) so that all cached schedules can be made available for use again. In the current implementation, this is accomplished by calling the backward() method on a loss variable in the model. This is expected to handle the typical use cases. However, in some models it may be necessary to perform multiple forward passes before calling backward(). In such a case, to signel to a static chain that the forward pass (and the iteration) has ended, call my_chain.schedule_manager.end_forward(). The schedule_manager attribute of a static chain is an instance of a class called StaticScheduleFunction that will be available after the chain has been called. Effects on model debugging¶ Note that since the code in the static chain’s __call__() only runs during the first iteration, you will only be able to debug this code as define-by-run during the first iteration. It is assumed that if the chain is actually is static, any problems in its define-by-run code should be apparent during the first iteration and it should not be (as) necessary to debug this code in later iterations. However, this feature does provide some functionality to help with debugging. For example, it is possible to obtain and inspect the current static schedules. It is also possible to directly step through the code of the static schedule if you wish (by debugging the forward() method of StaticScheduleFunction in static_graph). Disabling the static subgraph optimization¶ It is possible to turn off the static subgraph optimization feature by setting the chainer.config.use_static_graph to False. If set to False, the chainer.static_graph() decorator will simply call the wrapped function without any further side effects. Limitations and future work¶ Optimization switches to let the user select the trade-off between runtime performance and memory usage: The current implementation achieves its speedups mainly by reducing the amount of Python code that needs to run, but does not yet implement advanced optimizations for memory usage or runtime performance. Ideally, the user should be able to adjust performance tuning parameters to control the trade-off between memory consumption and runtime performance. Incompatibility with GRU and LSTM links: This feature requires that all input variables to a chain need to explicitly appear in the arguments to the chain’s __call__()method. However, the GRU and LSTM links with state maintain variable attributes of the chain for the RNN state variables. Design changes to support such links and/or modifications to these links are being considered. These links may still be used with the current implementation, as long as the corresponding RNN is unrolled inside of a static chain. For an example of this, see the modified ptb example at examples/static_graph_optimizations/ptb Memory usage: The current implementation caches all static schedules which can lead to high memory usage in some cases. For example, separate schedules are created when the training mode or mini-batch size changes. Advanced graph optimizations: Advanced optimizations such as fusion of operations is not yet implemented. Constraints on arguments to a static chain: The current version requires that all input variables used inside __call__()of a static chain must either appear in the arguments of this method or be defined in the define-by-run code. Furthermore, any variables that appear in the arguments list must appear by themselves or be contained inside a list or tuple. Arbitrary levels of nesting are allowed. Model export: In the case where the complete computation graph for the model is static, it should be possible in principle to export the static schedule in a format that can be run on other platforms and languages. One of the other original motivations for this feature was to support exporting static Chainer models to run on C/C++ and/or optimize the static schedule execution code in Cython/C/C++. However, it seems that ONNX is now fulfilling this purpose and there is a separate ONNX exporter already in development for Chainer. Perhaps these two features can be merged at some point in the future. Double-backward support: This feature was designed to support double-backward (gradient of gradient) but it has not been tested. ChainerX is not supported. If you have code written using this feature but would like to run the model with ChainerX, please set the chainer.config.use_static_graphconfiguration to False. The code should then work without any additional changes. Examples¶ For additional examples that use this feature, refer to the examples in examples/static_graph_optimizations.
https://docs.chainer.org/en/latest/reference/static_graph.html
2021-05-06T09:56:06
CC-MAIN-2021-21
1620243988753.91
[]
docs.chainer.org
Stargazer is an open source Stargaze blockchain indexer. It is similar to a subgraph in the Ethereum ecosystem. Third parties wanting to build Stargaze clients can easily do so via Stargazer instead of querying full nodes directly. Stargazer indexes blockchain data into a Postgres database, allowing fast and easy querying. This significantly reduces the go-to-market for Stargaze clients.
https://docs.stargaze.fi/products/stargazer
2021-05-06T10:24:09
CC-MAIN-2021-21
1620243988753.91
[]
docs.stargaze.fi
You can specify a new size for a Writable Volume using the App Volumes Manager and App Volumes increases the .vmdk file to the new size. Important: You cannot expand a Writable Volume if your Machine Manager is configured as VHD In-Guest Services. This feature is available only on vCenter Server. See Types of Hypervisor Connections and Machine Manager Configurations and Configure and Register the Machine Manager. Procedure - From the App Volumes Manager console, select . - Select a Writable Volume from the list and click Expand.A Confirm Expand window is displayed. - Enter the new size for the volume and click Expand.You must enter a size that is at least 1 GB greater than the current size of the Writable Volume. Results The Writable Volume file is expanded to the new size the next time the user logs in to the virtual machine.
https://docs.vmware.com/en/VMware-App-Volumes/4/com.vmware.appvolumes.admin.doc/GUID-30A27F69-DFFA-45ED-B7EC-4E64BBBEE54C.html
2021-05-06T11:05:41
CC-MAIN-2021-21
1620243988753.91
[]
docs.vmware.com
The upgrade sequence upgrades the Management Plane at the end. When the Management Plane upgrade is in progress, avoid any configuration changes from any of the nodes. Prerequisites Verify that the NSX Edge cluster is upgraded successfully. See Upgrade NSX Edge Cluster. If you are upgrading from a version earlier than NSX-T Data Center 3.0, provision a secondary disk of exactly 100 GB capacity on all NSX Manager appliances. Reboot the appliance if the secondary disk is not detected by the Upgrade Coordinator. Procedure - Backup the NSX Manager.See the NSX-T Data Center Administration Guide. - Click Start to upgrade the Management plane. - Accept the upgrade notification.You can safely ignore any upgrade related errors such as, HTTP service disruption that appears at this time. These errors appear because the Management plane is rebooting during the upgrading.Wait until all the nodes are upgraded. It may take several minutes for the cluster to reach a stable state. - Monitor the upgrade progress from the NSX Manager CLI for the orchestrator node.get upgrade progress-status - In the CLI, log in to the NSX Manager to verify that the services have started and to check the cluster status. - get service When the services start, the Service state appears as running. Some of the services include, SSH, install-upgrade, and manager. get servicelists the IP address of the orchestrator node. See Enabled on. Use this IP address throughout the upgrade process.Note: Ensure that you do not use any type of Virtual IP address to upgrade NSX-T Data Center. If the services are not running, troubleshoot the problem. See the NSX-T Data Center Troubleshooting Guide. - get cluster status If the group status is not Stable, troubleshoot the problem. See the NSX-T Data Center Troubleshooting Guide. What to do next Perform post-upgrade tasks or troubleshoot errors depending on the upgrade status. See Post-Upgrade Tasks or Troubleshooting Upgrade Failures.
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/upgrade/GUID-16DC1E8F-D5C3-4544-BE72-2E26528D8F27.html
2021-05-06T11:08:29
CC-MAIN-2021-21
1620243988753.91
[]
docs.vmware.com
This function retrieves all the connected devices for a device using the ID and type. API int Iotc_GetDevices ( IotcSession * iotcSession, IotcDeviceId * parentId ) Description The devices would be returned with a pointer to IotcDeviceSet in the IotcGetResponse with IOTC_GET_DEVICES as the response message type. Parameters iotcSession The current IotcSession to be used. parentId Device ID for which the connected device IDs must be retrieved. Parent topic: Functions
https://docs.vmware.com/en/VMware-Pulse-IoT-Center/2.0.0/pulse-api/GUID-107668F4-3A8F-4722-9EFA-5396DF5BB74E.html
2021-05-06T09:38:37
CC-MAIN-2021-21
1620243988753.91
[]
docs.vmware.com
8. Extensions¶ This chapter describes Mercurial extensions that are shipped with TortoiseHg binary packages for Windows. These external extensions are included as a convenience to users, so they can be easily enabled as soon as they are needed. 8.1. Hgfold¶. 8.2. Perfarce¶ This extension is documented in Perfarce (Perforce) section of Use with other VCS systems chapter. 8.3. Mercurial-Keyring¶ - Mercurial Keyring home page - Keyring Extension wiki page). 8.4. projrc¶ projrc is an extension that makes Mercurial look for and parse .hg/projrc for additional configuration settings.The file is transferred on clone and on pull (but never on push), after confirmation by the user, from a list of servers that ‘’’must’’’ be configured by the user. For security reasons the user ‘’’must’’’ also select which ‘. In particular, it can be used to remap subrepository sources, as explained on Mercurial’s SubrepoRemappingPlan. Configuration This extension (as most other extensions) is disabled by default. To use and configure you must first enable it on the Settings/Extensions panel. When the extension is enabled you will see a new entry, “Projrc” on the settings dialog. This let’s you configure the extension by setting the following settings: Request confirmation If True (the default) you’ll get a prompt whenever the extension detects changes to the remote server’s .hg/projrc file. If false, the extension will automatically accept any change to the remote .hg/projrc file. - Servers - This setting is a comma separated list of glob patterns matching the server names of the servers that the projrc file will be pulled from. Unless this setting is set, no .hg/projrc files will be ever transferred from any servers. - Include - This key lets you control which sections and which keys will be accepted from the remote projrc files. This is a a comma separated list of glob patterns that match the section or key names that will be included. Keys names must be specified with their section name followed by a ‘.’ followed by the key name (e.g. “’’diff.git’’”). To allow all sections and all keys you can set this setting to “*” (without the quotes). - Exclude - This setting is similar to the “’’Include’’” setting but it has the opposite effect. It sets an “exclude list” of settings that will not be transferred from the common projrc files. The exclude list has the same syntax as the include list. If an exclusion list is set but the inclusion list is empty or not set all non excluded keys will be included. - Update on incoming - Control whether the .hg/projrc file will be updated on incoming. It can have the following values: - never: The default. Show whether the remote projrc file has changed, but do not update (nor ask to update) the local projrc file. -. If False (the default) you’ll get a prompt whenever the extension detects changes to the remote server’s .hg/projrc file. If false, the extension will automatically accept any change to the remote .hg/projrc file..com.*’’”), with the longest matching pattern being the most explicit; - section level matches (e.g. “’’ui’’”); - global (”’’*’’”) matches. If a key matches both an include and an exclude (glob) pattern of the same length, the key is ‘’included’’ (i.e. inclusion takes precedence over exclusion). Usage Once enabled and properly configured, the extension will look for .hg/projrc files whenever you clone or pull from one of the repositories specified on its “servers” configuration key. Whenever the extension detects changes to the remote projrc file (e.g. when you do not have a .hg/projrc file yet, or when the contents of said file have changed on the server), you’ll receive a warning unless you have set the “Require confirmation” setting to False (in which case the extension assumes that you accept the changes). If you accept the changes your local .hg/projrc file will be updated, and its settings will be taken into account by mercurial and TortoiseHg. If a local repository has a .hg/projrc file, you’ll see an extra panel on the setting dialog. The title of the extra panel is “project settings (.hg/projrc)”. The “project settings” panel is a read-only panel that shows the settings that are set on the local .hg/projrc file. Although you can update your local version of the .hg/projrc file, the panel is read only to indicate that you cannot change the remote repository’s settings, and that if the remote repository settings change your local copy will be updated on the next pull (if you allow it). The “project settings” settings panel is shown between the “global settings” panel and the “repository settings” panel, indicating that its settings are applied _after_ the global settings but _before_ the local repository settings (i.e the settings specified in the repository .hg/hgrc file). Additional Information For the most up to date information regarding this extension, to see several detailed usage examples and to learn how to use it and configure it from the command line, please go to the extension’s Wiki.
https://tortoisehg.readthedocs.io/en/latest/extensions.html
2021-05-06T10:16:21
CC-MAIN-2021-21
1620243988753.91
[]
tortoisehg.readthedocs.io
Accessing and navigating the consoles You can accomplish most of your capacity planning, application monitoring, and infrastructure management activities from the TrueSight Capacity Optimization console. Use the TrueSight console to access and work with capacity views and the Investigate tool. If you are new to the product, the following topics will introduce you to the consoles:For more information, see the following sections: - Accessing and navigating the TrueSight console - Accessing and navigating the TrueSight Capacity Optimization console - Other user interfaces To access the TrueSight console from the Capacity Optimization console In the Home page of the TrueSight Capacity Optimization console, under the TrueSight console section, click the link here, as shown in the following screenshot: Was this page helpful? Yes No Submitting... Thank you In case, if we change the name/alias or setup a loadbalancer URL for the Truesight console, is there a way to change this link without reinstalling the datahub? Hi Meyyappan, I checked with the Engineering team. It seems that there is no other way to change this link without reinstalling the datahub. Regards Bipin Inamdar
https://docs.bmc.com/docs/btco113/accessing-and-navigating-the-consoles-775472040.html
2021-05-06T10:39:29
CC-MAIN-2021-21
1620243988753.91
[]
docs.bmc.com
Models Hevo Models help you transform the data loaded by Hevo pipelines in the Destination data warehouse into a form conducive for a BI tool to perform analytics and reporting. Hevo allows you to leverage the SQL query capability of the data warehouse to build the required data-model (dimensions and facts). You can define a Hevo Model using an SQL query (SELECT query). At runtime, Hevo executes the query against the data warehouse, and results are exported to another table in the warehouse. A complex SQL query (including joins, aggregates) can also be provided for more complex requirements. Hevo supports all query capabilities supported by your data warehouse. To access Models, click Transform in the Asset Palette. It opens the Models List View page by default. You can view your existing models or create new ones. Several new features have been introduced in Models in Release 1.44. To leverage these features in designing and using Models, you can update your legacy models to the new version of Models. Read about Updating Legacy Models. See Also - Key Features - Types of Models - Familiarizing with the Models UI - Creating and Modifying Models - Previewing a Model - Viewing the Query History for a Model - Legacy Models
https://docs.hevodata.com/transform/models/
2021-05-06T09:45:32
CC-MAIN-2021-21
1620243988753.91
[array(['https://res.cloudinary.com/hevo/image/upload/v1611564327/hevo-docs/Transform3294/transform-page.png', 'Transform UI'], dtype=object) ]
docs.hevodata.com
24.2.5. Georeferencer Plugin¶ The Georeferencer Plugin is a tool for generating world files for rasters. It allows you to reference rasters to geographic or projected coordinate systems by creating a new GeoTiff or by adding a world file to the existing image. The basic approach to georeferencing a raster is to locate points on the raster for which you can accurately determine coordinates. Features Table Georeferencer: Georeferencer Tools 24.2 Eklentiler Menüsü) and click on , which appears in the QGIS menu bar. The Georeferencer Plugin dialog appears as shown in figure_georeferencer_dialog. For this example, we are using a topo sheet of South Dakota from SDGS. It can later be visualized together with the data from the GRASS spearfish60 location. You can download the topo sheet here:. 24.2 figure_georeferencer_add_points). For this procedure you have three options:. With the button, you can move the GCPs in both windows, if they are at the wrong place. Continue entering points. You should have at least four points, and the more coordinates you can provide, the better the result will be. There are additional tools on the plugin dialog to zoom and pan. 24.2.5.1.2. Defining the transformation settings¶ After you have added your GCPs to the raster image, you need to define the transformation settings for the georeferencing process. 24.2. 24.2.5.1.2.2. Define the Resampling method¶ possible to choose between five different resampling methods: Nearest neighbour Linear Cubic Cubic Spline Lanczos 24.2.5.1.2.3. Projeksiyonlarla Çalışma).. 24.2.5.1.3. Show and adapt raster properties¶ Clicking on the Raster properties option in the Settings menu opens the Layer properties dialog of the raster file that you want to georeference. 24.2.5.1.4. Configure the georeferencer¶ 24.2.5.1.5. Running the transformation¶ After all GCPs have been collected and all transformation settings are defined, just press the Start georeferencing button to create the new georeferenced raster.
https://docs.qgis.org/3.10/tr/docs/user_manual/plugins/core_plugins/plugins_georeferencer.html
2021-05-06T08:55:57
CC-MAIN-2021-21
1620243988753.91
[]
docs.qgis.org
asset path name, or null, or an empty string if the asset does not exist. Returns the path name relative to the project folder where the asset is stored. All paths are relative to the project folder, for example: "Assets/MyTextures/hello.png".)); } }
https://docs.unity3d.com/2019.4/Documentation/ScriptReference/AssetDatabase.GetAssetPath.html
2021-05-06T10:47:47
CC-MAIN-2021-21
1620243988753.91
[]
docs.unity3d.com
K3 is a low-code, real-time streaming ETL platform for enterprise users.K3 is a low-code, real-time streaming ETL platform for enterprise users. This user guide will introduce you to the intuitive and flexible functionality of K3. It is a step-by-step guide that explains how K3 works and answers the vast majority of questions. Frequently Used SynonymsFrequently Used Synonyms - ROUTE: ETL Pipeline, Data Path, Message Bus, Topic, Message Bus Topic, Data Pipe - SOURCE: Producer, Data Source, Raw Data, Input Data - TARGET: Consumer, Data Target, Transformed Data, Mapped Data, Data Output - SOURCE FIELDS: K3 schema for initial data record - TARGET FIELDS: K3 schema for transformed data records
https://docs.broadpeakpartners.com/a/1084379-k3-overview
2021-05-06T09:27:03
CC-MAIN-2021-21
1620243988753.91
[]
docs.broadpeakpartners.com
Creating an Oracle user for the repository database instance Contributors Download PDF of this page An Oracle user is required to log in to and access the repository database instance. You must create this user with connect and resource privileges. Log in to SQL *Plus: sqlplus '/ as sysdba' Create a new user and assign an administrator password to that user: create useruser_name identified by admin_password default tablespace tablespace_name quota unlimited on tablespace_name; user_name is the name of the user you are creating for the repository database. admin_password is the password you want to assign to the user. tablespace_name is the name of the tablespace created for the repository database. Assign connect and resource privileges to the new Oracle user: grant connect, resource to user_name;
https://docs.netapp.com/us-en/snapmanager-oracle/unix-installation-cmode/task_creating_an_oracle_user_for_the_repository_database_instance.html
2021-05-06T10:57:10
CC-MAIN-2021-21
1620243988753.91
[]
docs.netapp.com
While there’s emphatically nothing awry with developing, it can go for certain vexing outcomes. Crepey skin being one of them. Right when skin loses collagen and elastin—proteins that are responsible for warding off hardly perceivable contrasts—the skin can appear to be saggy, crinkled, and free. In light of everything, crepey skin isn’t viable with wrinkles. “Wrinkling is related to genetic characteristics, facial turn of events, and some sun hurt,” explains board-affirmed dermatologist Ranella Hirsch. “Crepey skin, on the other hand, is associated particularly to sun hurt whereby the sun is isolating the elastin.” You can check the Crepe Erase review here to realize what is the item about. It’s the best treatment for crepey skin. Luckily, there are an enormous number of treatment choices and trimmings that work to both thwart and shroud the presence of crepeyness. We tapped Hirsch similarly as board-ensured dermatologist Joshua Zeichner to fill us in on the most capable strategy to discard crepey skin. Crepey skin, which can appear wherever on the body, is thin skin that looks like solidly stuffed, united wrinkles and takes after crepe paper. Generally achieved by a breakdown of collagen and elastin, crepey skin will as a rule look and feel thin, fragile and crinkled. The most frail areas where [crepey skin loves to show up are under the eyes, around the knees, the backs of our upper arms, and the décolletage. We should delve fairly more significant into what causes crepey skin and ways we can treat and hinder it.
https://crepey-skin-remedies.readthedocs.io/en/latest/index.html
2021-05-06T09:39:53
CC-MAIN-2021-21
1620243988753.91
[]
crepey-skin-remedies.readthedocs.io
Exercise - Create a new connector in a solution In this exercise, you will create your first custom connector for an existing API called Contoso Invoicing. Important Use a test environment with Microsoft Dataverse provisioned. If you don't have a test environment, you can sign up for the Power Apps Community Plan. Task 1: Review the API To review the API, follow these steps: Go to Contoso Invoicing. Select the documentations link. Review the available operations. Select to expand and review each operation. Close the documentation browser tab or window. Select the Open API definition link. The following image shows an example of the Open API version of what was shown on the documentations page. Right-click and select Save as. Save the file locally. You will use this file later in the exercise. Close the definition browser tab or window. Select the API Key link. Copy and save your API key because you will need it later. Select Return to home. Select Download Logo. Save the logo image locally; you will use it later. Task 2: Create a new solution To create a new solution, follow these steps: Go to Power Apps maker portal and make sure that you are in the correct environment. Select Solutions > + New solution. Enter Contoso invoicing for the Display name, select CDS Default Publisher for Publisher, and then select Create. When you are working with a real project, it's best to create your own publisher. Do not navigate away from this page after selecting Create. Task 3: Create a new connector To create a new connector, follow these steps: Select to open the Contoso invoicing solution that you created. Select + New > Other > Custom connector. Enter Contoso invoicing for the Connector name and then select to Upload the image. Select the connector logo image that you downloaded in Task 1: Review the API. Enter #175497 for Icon background color. Enter Contoso Invoicing API for Description. Enter contosoinvoicingtest.azurewebsites.net for Host. Select Create connector. Do not navigate away from this page. Task 4: Import the Open API definition To import the Open API definition, follow these steps: Select the arrow next to Connector Name. Select the ellipsis (...) button of the connector and then select Update from OpenAPI file. Select Import. Select the swagger.json file that you downloaded in Task 1: Review the API and then select Open. Select Continue. Fill in the host URL as contosoinvoicingtest.azurewebsites.netand then select Security. Notice that the field is filled out from the imported file. Do not navigate away from this page. Task 5: Review and adjust definitions To review and adjust definitions, follow these steps: Select the Definition tab. Take a few minutes to review the operations that were imported. Notice the orange triangle next to GetInvoice that indicates a warning. Select the GetInvoice operation. Notice that the operation indicates a missing Summary. Enter Get Invoice as the Summary to improve the usability. Notice the blue information circle on the PayInvoice operation and that it indicates a missing Description. Enter Pay an invoice as the Description. Delete both NewInvoice operations because you won't use them. Select the GetInvoiceSchema operation. Modify the Visibility option to internal so that people don't see it in their action list. Select Update connector. Do not navigate away from this page. Task 6: Test the connector To test the connector, follow these steps: Select the Test tab. Select + New connection. Paste in the API Key that you saved in Task 1: Review the API and then select Create connection. Select the Refresh button. Select ListInvoice > Test Operation. You should see some invoice data in the body area. Need help? See our troubleshooting guide or provide specific feedback by reporting an issue.
https://docs.microsoft.com/en-us/learn/modules/get-started-custom-connector/4-build
2021-05-06T11:00:59
CC-MAIN-2021-21
1620243988753.91
[]
docs.microsoft.com
It is possible to download the TrilioVault logs directly through the TrilioVault web gui. To download logs throught the TrilioVault web gui: Login into the TrilioVault web gui Go to "Logs" Choose the log to be downloaded Each log for every TrilioVault Appliance can be downloaded seperatly or a zip of all logfiles can be created and downloaded This will download the current log files. Already rotated logs need to be downloaded through SSH from the TrilioVault appliance directly. All logs, including rotated old logs, can be found at: /var/logs/workloadmgr/
https://docs.trilio.io/openstack/administration-guide/available-downloads-from-the-triliovault-cluster
2021-05-06T10:05:44
CC-MAIN-2021-21
1620243988753.91
[]
docs.trilio.io
Information for "Extension Installer/Writing a new installer adapter" Basic information Display titleExtension Installer/Writing a new installer adapter Default sort keyExtension Installer/Writing a new installer adapter Page length (in bytes)1,976 Page ID2368Pasamio (Talk | contribs) Date of page creation01:24, 15 July 2008 Latest editorTom Hutchison (Talk | contribs) Date of latest edit07:10, 19 October 2012 Total number of edits7 Total number of distinct authors2 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’
https://docs.joomla.org/index.php?title=Extension_Installer/Writing_a_new_installer_adapter&action=info
2015-10-04T05:16:00
CC-MAIN-2015-40
1443736672441.2
[]
docs.joomla.org
Difference between revisions of "JModelAdmin::populateState" From Joomla! Documentation Revision as of 19::populateState Description Stock method to auto-populate the model state. Description:JModelAdmin::populateState [Edit Descripton] protected function populateState () See also JModelAdmin::populateState source code on BitBucket Class JModelAdmin Subpackage Application - Other versions of JModelAdmin::populateState SeeAlso:JModelAdmin::populateState [Edit See Also] User contributed notes <CodeExamplesForm />
https://docs.joomla.org/index.php?title=API17:JModelAdmin::populateState&diff=next&oldid=57361
2015-10-04T06:05:33
CC-MAIN-2015-40
1443736672441.2
[]
docs.joomla.org
Getting Started with Email Notifications This article is not about the email functionality of User Management which uses predefined templates. However, some of the information presented here is applicable to it, such as how to edit the template. Using the Email Notifications feature in Telerik Platform you can send automated email messages to users of your app based on templates. Consider these examples: - You have a monthly email bulletin that you send to subscribers - You want to let your users know about an exciting offer - You need to notify your users about an upcoming outage in a service that you are offering through your app In this tutorial you will learn how to create an email template featuring dynamically populated content and how to send emails based on it. Steps at a glance: - Create an Email Template - Fill In the Template Form - Set Up the Template Parameters - Declare a Cloud Function that Sends Email Notifications - Execute the Cloud Function - Evaluate the Response Prerequisites - A Telerik Platform account with a Telerik Platform Professional subscription or higher - A Telerik Platform app - The Notifications service enabled Create an Email Template The Email Notifications feature in Telerik Platform requires you to create an email template before basing an email message on it. Templates provide the benefit of reusing your messages, dynamically populating them with event-specific data as you go. To create an email template: - Log in to the Telerik Platform portal. - Click your app. - Navigate to Notifications > Email Notifications > Templates. Click the Add an email template button. You will see the Create an email template form. Fill In the Template Form You have two options filling in the template form: providing fixed data or inserting placeholders. Placeholders allow you to set the field value later, when sending the email message. These are the fields you can modify: - Name—A name that you use to identify the email template in the UI and to refer to it in your code. Cannot contain blank characters. Example: "NotifyAdminTemplate". - Subject—The subject line that the email recipient will see. - From email—The sender's email address that the email recipient will see. - From name—The sender's name. - Reply to—The email address that the recipient should use for sending replies. - CC—A list of email addresses that are to receive a carbon copy of the email. - BCC—A list of email addresses that are to receive a blind carbon copy of the email. - Message—The body of the email message. You can use placeholders to dynamically set the data in any field except for the Name field. There are two types of placeholders that you can use: System placeholders System placeholders are populated with built-in variables that you can manage on Emails → Settings (or if you have added the User Management service, you can also use Users → Email Settings to set the same values). - {{DefaultFromEmail}} - {{DefaultFromName}} - {{DefaultReplyToEmail}} Custom placeholders In addition to system placeholders, you can define an arbitrary number of your own placeholders. To create and use a custom placeholder, take these steps: Insert a placeholder in a field of your choosing using the following format: {{YourPlaceholderName}} Set the placeholder value in the contextparameter of your payload (see next step). Examples include: - {{NotificationSubject}} - {{MessageBody}} An example of a populated template form is shown on the figure below: Click Save when you are ready. Use the Design view of the template editor to configure the template with the options available in the editor or the HTML view to add your custom HTML markup and CSS styles. Adding placeholders is supported in both views. Set Up the Template Parameters After you have created the template, you can refer to it by name in your code to send email notifications. But first you need to set its parameters. You typically do this in a JSON-formatted payload to a RESTful call or by setting variables in your code. These are the parameters you need to set: templateName—the unique name you gave to the template at the time of creating it recipients—an array of email addresses which you intend to send the email to context—an object containing key-value pairs that binds placeholder names to actual values that you want to use in the email The following example shows a JSON object that sets the values for the example template from the previous step. { "TemplateName": "NotifyAdminTemplate", "Recipients": ["[email protected]"], "Context": { "NotificationSubject": "New items require your attention", "MessageBody": "You have new items for approval." } } Don't forget to enter the actual recipient addresses in the Recipients array prior to using the payload. Declare a Cloud Function that Sends Email Notifications By creating a cloud function that sends an email from your template you define a RESTful endpoint that you can access whenever you need to notify your users about something. (Find out how to create a cloud function in Implementing Cloud Functions.) The following code snippet provides example code that you can include in your cloud function to send an email notification based on a payload like the one from the previous step: Everlive.CloudFunction.onRequest(function(request, response, done) { if (request.action === 'POST') { var templateName = request.data.TemplateName; var recipients = request.data.Recipients; var context = request.data.Context; Everlive.Email.sendEmailFromTemplate(templateName, recipients, context, function(err, res) { if (err) { response.body = err; done(); } else { response.body = res; done(); } }); } else { response.body = { "Response": "Please make a POST request and provide the required parameters." }; response.statusCode = 404; done(); } }); You can also send Email notifications using the Backend Services RESTful API. See RESTful API: Sending Email Programmatically for more information. Execute the Cloud Function - Click Execute. - In the window that appears, choose the POST verb and pass the object that sets the template variables as request body. Evaluate the Response The cloud function that sends the email notification is set to return the full execution result. It contains the HTTP response code among other details. Example: { "statusCode": 200, "body": "{\"Result\":null}", "headers": { "server": "nginx", "date": "Mon, 30 Mar 2015 10:56:57 GMT", "content-type": "application/json; charset=utf-8", "content-length": "15", "connection": "keep-alive", "x-powered-by": "Everlive", "access-control-allow-credentials": "true" }, "data": { "Result": null } } If you want to, you can change the code to return only a part of the full response or to return a custom response.
http://docs.telerik.com/platform/backend-services/android/email-templates/email-notifications-getting-started
2017-04-23T09:57:50
CC-MAIN-2017-17
1492917118519.29
[array(['images/email-template-complete-gs.png', 'A sample email template A sample email template.'], dtype=object) array(['images/send-email-from-template-parameters.png', 'Sending an email notification from a cloud function Sending an email notification from a cloud function'], dtype=object) ]
docs.telerik.com
Deletes an environment reference from a project in the Integration Services catalog. Syntax delete_environment_reference [ @reference_id = ] reference_id Arguments [ @reference_id = ] reference_id The unique identifier of the environment reference. The reference_id is bigint. Return Code Value 0 (success) Result Sets None Permissions This stored procedure requires one of the following permissions: MODIFY permission on the project Membership to the ssis_admin database role Membership to the sysadmin server role Errors and Warnings The following list describes some conditions that may raise an error or warning: The environment reference identifier is not valid The user does not have the appropriate permissions
https://docs.microsoft.com/en-us/sql/integration-services/system-stored-procedures/catalog-delete-environment-reference-ssisdb-database
2017-04-23T10:41:13
CC-MAIN-2017-17
1492917118519.29
[array(['../../includes/media/yes.png', 'yes'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object)]
docs.microsoft.com
Updating the class reference¶ Godot Engine provides an extensive panel of nodes and singletons that you can use with GDScript to develop your games. All those classes are listed and documented in the class reference, which is available both in the online documentation and offline from within the engine. The class reference is however not 100% complete. Some methods, constants and signals have not been described yet, while others may have their implementation changed and thus need an update. Updating the class reference is a tedious work and the responsibility of the community: if we all partake in the effort, we can fill in the blanks and ensure a good documentation level in no time! Important notes: - To coordinate the effort and have an overview of the current status, we use a collaborative pad. Please follow the instructions there to notify the other documentation writers about what you are working on. - We aim at completely filling the class reference in English first. Once it nears 90-95% completion, we could start thinking about localisating in other languages. Workflow for updating the class reference¶ The source file of the class reference is an XML file in the main Godot source repository on GitHub, doc/base/classes.xml. As of now, it is relatively heavy (more than 1 MB), so it can’t be edited online using GitHub’s text editor. The workflow to update the class reference is therefore: - Fork the upstream repository and clone your fork. - Edit the doc/base/classes.xmlfile, commit your changes and push to your fork. - Make a pull request on the upstream repository. The following section details this workflow, and gives also additional information about how to synchronise the XML template with the current state of the source code, or what should be the formatting of the added text. Important: The class reference is also available in the online documentation, which is hosted alongside the rest of the documentation in the godot-docs git repository. The rST files should however not be edited directly, they are generated via a script from the doc/base/classes.xml file described above. Getting started with GitHub¶ This section describes step-by-step the typical workflow to fork the git repository, or update an existing local clone of your fork, and then prepare a pull request. Fork Godot Engine¶ First of all, you need to fork the Godot Engine on your own GitHub repository. You will then need to clone the master branch of Godot Engine in order to work on the most recent version of the engine, including all of its features. git clone Then, create a new git branch that will contain your changes. git checkout -b classref-edit The branch you just created is identical to current master branch of Godot Engine. It already contains a doc/ folder, with the current state of the class reference. Note that you can set whatever name you want for the branch, classref-edit is just an example. Keeping your local clone up-to-date¶ If you already have a local clone of your own fork, it might happen pretty fast that it will diverge with the upstream git repository. For example other contributors might have updated the class reference since you last worked on it, or your own commits might have been squashed together when merging your pull request, and thus appear as diverging from the upstream master branch. To keep your local clone up-to-date, you should first add an upstream git remote to work with: git remote add upstream git fetch upstream You only need to run this once to define this remote. The following steps will have to be run each time you want to sync your branch to the state of the upstream repo: git pull --rebase upstream/master This command would reapply your local changes (if any) on top of the current state of the upstream branch, thus bringing you up-to-date with your own commits on top. In case you have local commits that should be discarded (e.g. if your previous pull request had 5 small commits that were all merged into one bigger commit in the upstream branch), you need to reset your branch: git fetch upstream git reset --hard upstream/master Warning: The above command will reset your branch to the state of the upstream/master branch, i.e. it will discard all changes which are specific to your local branch. So make sure to run this before making new changes and not afterwards. Alternatively, you can also keep your own master branch ( origin/master) up-to-date and create new branches when wanting to commit changes to the class reference: git checkout master git branch -d my-previous-doc-branch git pull --rebase upstream/master git checkout -b my-new-doc-branch In case of doubt, ask for help on our IRC channels, we have some git gurus there. doc/base/classes.xml The doc/base/classes.xml/base). Editing the doc/base/classes.xml file¶ This file is generated and updated by Godot Engine. It is used by the editor as base for the Help section. You can edit this file using your favourite text editor. If you use a code editor, make sure that it won’t needlessly change the indentation behaviour (e.g. change all tabs to spaces). Formatting of the XML file¶ Here is an example with the Node2D class: ="set_rot"> <argument index="0" name="rot" type="float"> </argument> <description> Set the rotation of the 2d node. </description> </method> <method name="set_scale"> <argument index="0" name="scale" type="Vector2"> </argument> <description> Set the scale of the 2d node. </description> </method> <method name="get_pos" qualifiers="const"> <return type="Vector2"> </return> <description> Return the position of the 2D node. </description> </method> <method name="get_rot" qualifiers="const"> <return type="float"> </return> <description> Return the rotation of the 2D node. </description> </method> <method name="get_scale" qualifiers="const"> <return type="Vector2"> </return> <description> Return the scale of the 2D node. </description> </method> <method name="rotate"> <argument index="0" name="degrees" type="float"> </argument> <description> </description> </method> <method name="move_local_x"> <argument index="0" name="delta" type="float"> </argument> <argument index="1" name="scaled" type="bool" default="false"> </argument> <description> </description> </method> <method name="move_local_y"> <argument index="0" name="delta" type="float"> </argument> <argument index="1" name="scaled" type="bool" default="false"> </argument> <description> </description> </method> <method name="get_global_pos" qualifiers="const"> <return type="Vector2"> </return> <description> Return the global position of the 2D node. </description> </method> <method name="set_global_pos"> <argument index="0" name="arg0" type="Vector2"> </argument> <description> </description> </method> <method name="set_transform"> <argument index="0" name="xform" type="Matrix32"> </argument> <description> </description> </method> <method name="set_global_transform"> <argument index="0" name="xform" type="Matrix32"> </argument> <description> </description> </method> <method name="edit_set_pivot"> <argument index="0" name="arg0" type="Vector2"> </argument> <description> </description> </method> </methods> <constants> </constants> </class> As you can see, some methods in this class have no description (i.e. there is no text between their marks). This can also happen for the description and brief_description of the class, but in our example they are already filled. Let’s edit the description of the rotate() method: <method name="rotate"> <argument index="0" name="degrees" type="float"> </argument> <description> Rotates the node of a given number of degrees. </description> </method> That’s all! You simply have to write any missing text between these marks: - <description></description> - <brief_description></brief_description> - <constant></constant> - <member></member> - <signal></signal> Describe clearly and shortly what the method does, or what the constant, member variable or signal mean. You can include an example of use if needed. Try to use grammatically correct English, and check the other descriptions to get an impression of the writing style. For setters/getters, the convention is to describe in depth what the method does in the setter, and to say only the minimal in the getter to avoid duplication of the contents. I don’t know what this method does!¶ Not a problem. Leave it behind for now, and don’t forget to notify the missing methods when you request a pull of your changes. Another editor will take care of it. If you wonder what a method does, you can still have a look at its implementation in Godot Engine’s source code on GitHub. Also, if you have a doubt, feel free to ask on the Q&A website and on IRC (freenode, #godotengine).
http://docs.godotengine.org/en/stable/contributing/updating_the_class_reference.html
2017-04-23T09:49:45
CC-MAIN-2017-17
1492917118519.29
[]
docs.godotengine.org
Send Docs Feedback 14.01.05.00 - Apigee Edge on-premises release notes On Monday, March 10, 2014, we released a new patch for the on-premises?)
http://docs.apigee.com/developer-services/content/14010500-apigee-edge-premises-release-notes?rate=i-eSJpJ3_Y3WYX7wXdVkkYPxRd2m4_sbmh-4wLzldTU
2016-10-21T11:13:07
CC-MAIN-2016-44
1476988717963.49
[]
docs.apigee.com