content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
See: Save to Google Pay. To activate this feature, define an Origin URL. - Go to Settings » API. - Click Edit next to Save to Google Pay. - Enter the root URL where you want to place your Save to Google Pay button, e.g.,. Note The Origin URL is a list of domains to whitelist JSON Web Token (JWT) saving functionality. The Save to Google Pay button will not render if this field is not filled out properly. - Click Save. Edit or Delete Origin URL - Go to Settings » API. - Click Edit next to Save to Google Pay. - Make your changes, then click Save.
https://docs.airship.com/tutorials/manage-project/wallet/google-pay/
2020-01-17T22:57:25
CC-MAIN-2020-05
1579250591234.15
[]
docs.airship.com
_Exception. Stack Trace Property Definition Provides COM objects with version-independent access to the StackTrace property. public: property System::String ^ StackTrace { System::String ^ get(); }; public string StackTrace { get; } member this.StackTrace : string Public ReadOnly Property StackTrace As String Property Value A string that describes the contents of the call stack, with the most recent method call appearing first. Remarks This method is for access to managed classes from unmanaged code and should not be called from managed code. The Exception.StackTrace property gets a string representation of the frames on the call stack at the time the current exception was thrown.
https://docs.microsoft.com/en-us/dotnet/api/system.runtime.interopservices._exception.stacktrace?view=netframework-4.8
2020-01-17T21:51:48
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
Remove a Handler Mapping (IIS 7) Applies To: Windows 7, Windows Server 2008, Windows Server 2008 R2, Windows Vista If you create a handler mapping for a site or an application, and later decide that you do not need the mapping, you can remove it from the list of handler mappings for that site or application. Important If you remove a handler mapping, the request type could be served by another handler, which may create a security issue. For example, if you remove a handler mapping for a handler that serves .aspx files, the StaticFile handler will serve the file if there are no other handler mappings that match the request for the .aspx file name extension. Prerequisites For information about the levels at which you can perform this procedure, and the modules, handlers, and permissions that are required to perform this procedure, see Handler Mappings Feature Requirements (IIS 7). Exceptions to feature requirements - None To remove a handler mapping. In the grid, select a handler mapping. In the Actions pane, click Remove and then click Yes. Command Line To remove a handler mapping, use the following syntax: appcmd set config /section:handlers /-[name='string'] The variable name string is the name of the handler mapping that you want to delete. For example, to remove a handler mapping named ImageCopyrightHandler, type the following at the command prompt, and then press ENTER: appcmd set config /section:handlers /-[name='ImageCopyrightHandler'].Remove method (IIS) HttpHandlersSection.Remove method Handler Mappings in IIS 7
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754894%28v%3Dws.10%29
2020-01-17T22:50:52
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
Logging and Metrics Page last updated: This documentation describes logging and metrics in Cloud Foundry Application Runtime (CFAR). It includes topics related to monitoring, event logging, CFAR data sources, and viewing logs and metrics. It also includes information about the Loggregator system, which aggregates and streams logs and metrics from apps and platform components in CFAR. Contents Logging and Metrics Overview of Logging and Metrics: This topic provides an overview of logging and metrics in CFAR. App Logging in CFAR: This topic describes log types and their messages. It also explains how to view logs from the cf CLI. - Security Event Logging: This topic describes how to enable and interpret security event logging for the Cloud Controller, the User Account and Authentication (UAA) server, and CredHub. Loggregator Loggregator Architecture: This topic describes the architecture of the Loggregator system. Loggregator Guide for CFAR Operators: The topic provides information about configuring Loggregator to avoid data loss with high volumes of logging and metrics data. Deploying a Nozzle to the Loggregator Firehose: This topic describes deploying a nozzle application to the Loggregator Firehose. Installing the Loggregator Firehose Plugin for cf CLI: This topic describes how to use the Loggregator Firehose Plugin for cf CLI to access the output of the Firehose.
https://docs.cloudfoundry.org/loggregator/index.html
2020-01-17T21:00:31
CC-MAIN-2020-05
1579250591234.15
[]
docs.cloudfoundry.org
The command filesystem.listfiles is used to list the directory files by saving them in a file. At this moment this command is not supported by Telium 2 terminals. In the example above, a file called files.dat will be created, and this file will save the name and size of all files that are in the directory being analyzed, in this case the directory F. As the command return is an integervariable, the command inttostring is used to convert the variable that will be displayed using the command display.
https://docs.cloudwalk.io/en/posxml/commands/filesystem.listfiles
2020-01-17T21:59:15
CC-MAIN-2020-05
1579250591234.15
[]
docs.cloudwalk.io
. After you add a Search web part (such as the Content Search web part) to a page, to configure the web part, you select both a control display template and an item display template, as shown in Figure 1. Figure 1. Tool pane of Content Search web part >. Figure 2. Combined HTML output of a control display template and item display template For more information about display templates, see the "Search-driven Web Parts and display templates" section in Overview of the SharePoint page model. Understanding the display template structure: .. <!--#_ if(!linkURL.isEmpty) { _#--> <a class="cbs-pictureImgLink" href="_#= linkURL =#_" title="_#= $htmlEncode(line1.defaultValueRenderer(line1)) =#_" id="_#= pictureLinkId =#_"> <!--#_ } _#--> Mapping input properties and getting their values =#_" /> Using jQuery with display templates =#_') Create a display template Before you can create a display template by using the following procedure, you must have a mapped network drive that points to the Master Page Gallery. For more information, see How to: Map a network drive to the SharePoint. At this point, SharePoint. -. See also Feedback
https://docs.microsoft.com/en-us/sharepoint/dev/general-development/sharepoint-design-manager-display-templates?redirectedfrom=MSDN
2020-01-17T23:19:52
CC-MAIN-2020-05
1579250591234.15
[array(['../images/115_content_search_web_part_tool_pane.gif', 'Tool pane of Content Search web part'], dtype=object) array(['../images/sp15con_createdisplaytemplatesp2013_figure02.png', 'Combined HTML output of a control display template and item display template'], dtype=object) ]
docs.microsoft.com
openATTIC Web UI Tests - E2E Test Suite¶ This section describes how our E2E test environment is set up, as well as how you can run our existing E2E tests on your openATTIC system and how to write your own tests. If you are looking for Web UI Unit tests documentation please refer to openATTIC Web UI Tests - Unit Note Protractor and most of its dependencies will be installed locally when you execute npm install on webui/. - (optional) npm install -g [email protected] apt-get install openjdk-8-jre-headless oracle-java8-installer - = { urls: { base: '<proto://addr:port>', ui: '/openattic/#/', api: '/openattic/api/' }, //leave this if you want to use openATTIC's default user for login username: 'openattic', password: 'openattic', }; }()); If you are using a Vagrant box, then you have to set urls.ui to /#/ and urls.api to /api/. Go to webui/ and type the following command: $ webdriver-manager start or: $ npm run webdriver Make Protractor Execute the Tests¶ Use a separate tab/window, go to webui/ and type: $ npm run protractor (-- --suite <suiteName>) Important Without a given suite protractor will execute all tests (and this will probably take a while!) Starting Only a Specific Test Suite¶ If you only want to test a specific action, you can run i.e. $ npm run protractor -- --suite general. Available test cases can be looked up in protractor.conf.js, i.e.: suites: { //suite name : '/path/to/e2e-test/file.e2e.js' general : '../e2e/base/general/**/general) than closing the browser. Just closing the browser window causes every single test to fail because protractor now tries to execute the tests and can not find the browser window anymore. E2E-Test Directory and File Structure¶ In directory e2e/ the following directories can be found: +-- base | '-- auth | '-- datatable | '-- general | '-- pagination | '-- pools | '-- settings | '-- taskqueue | '-- users +-- ceph | `-- iscsi | `-- nfs | `-- pools | `-- rbds | `-- rgw Most of the directories contain a *form setLocation function, you just have to add helpers. to the function: helpers.setLocation( location [, dialogIsShown ] ). The following helper functions are implemented: setLocation leaveForm checkForUnsavedChanges get_list_element get_list_element_cells delete_selection hasClass When using more than one helper function in one file, please make sure that you use the right order of creating and deleting functions in beforeAll and afterAll. If you need to navigate to a specific menu entry (every time!) where your tests should take place, you can make use of: beforeEach(function(){ //always navigates to menu entry "ISCSI" before executing the actions //defined in 'it('', function(){});' element(by.css('.tc_menuitem_ceph_iscsi')).click(); }); Style Guide - General e2e.js File Structure / Architecture¶ You should follow the official Protractor style guide. Here are a few extra recommendations: - describeshould contain a general description of what is going to be tested (functionality) in this spec file i.e. the site, menu entry (and its content), panel, wizard etc. example: “should test the user panel and its functionalities” - itshould) - If something has to be done frequently and across multiple spec filesfunction in the tests where it’s required. Therefore you just have to require the common.jsfile in the spec file and call the create_userfunction in the beforeAll function. This procedure is a good way to prevent duplicated code. (for examples see common.js-> loginfunction) - Make use of the beforeAll/ afterAllfunctions if possible. Those functions allow you to do some steps (which are only required once) before/after anything else in the spec file is going to be executed. For example, if you need to login first before testing anything, you can put this step in a beforeAllfunction. Also, using a beforeAllinstead of a beforeEachsaves a lot of time when executing tests. Furthermore, it’s not always necessary to repeat a specific step before each ìtsection. The afterAllfunction - Always navigate to the page which should be tested before each test to make sure that the page is in a “clean state”. This can be done by putting the navigation part in a beforeEachfunction - which ensures that itsections do not depend on each other as well. - Make sure that written tests do work in the latest version of Chrome and Firefox - The name of folders/files should tell what the test is about (i.e. folder “user” contains “user_add.e2e.js”).
https://docs.openattic.org/en/stable/developer_docs/dev_e2e.html
2020-01-17T22:00:55
CC-MAIN-2020-05
1579250591234.15
[]
docs.openattic.org
Did you find the need of making invoice for your business? Invoice is a list of goods sent or services provided. Along with that list, you will also get a statement of the sum due. Now that you know how necessary this invoice is, you must have been wondering how you can make one yourself, right? You don’t have to worry since you can easily make it with Google Docs. Well, there is procedure to make Google Docs invoice template. If you are interested in knowing it, let us tell you about the steps in this opportunity. Here we go below then. Step 1 for Google Docs Invoice Template There are some ways you can do to make invoice template. Sure, computers are that of modern devices, but people find it complicated to make the invoice with them. Then, we begin to wish if there is easier way to do so. It would be to use Google Docs if you ask that. In order to get access to Google Docs, Google account is necessary for that. Google Docs invoice account is not difficult to make. As long as you have Google email, you will have Google account made and ready right away.. Once you find them, clicking one of them will bring you to the invoice worksheet. Then, you are all ready to fill it in with your information. If it is as easy as that, it makes wonderful way to get the job done, right? Step 3 for Google Docs Invoice Template You see, invoices might look different from one to another. Why, of course, it is because there are a number of invoice templates to choose one from. Since you have the options, you are free to make your choice. Each Google account invoice template makes great invoice look. So, you just need to choose the one that meets your needs and expectations. Then, you should proceed with filling the information in and get the invoice done. It is convenient to make business invoices this way, indeed. Step 4 for Google Docs Invoice Template When it comes to choosing the template, you must have your own needs and preferences, right? Yes, every one of templates available in Google Docs is wonderful and all. However, not all templates could be the best for you if you consider the needs and the expectations. Each person has their own, so you should make the choice on your own too. As long as it meets the requirements, you have found the Google Docs invoice template that would be the best for you. That’s how you should do it. Contractor Agreement Invoice 1 Google Blank Invoice templates 1 Google Business Invoice templates 1 Google Contractor Invoice Form 1 Google Docs Service Invoice templates 1 Google Docs templatess Invoice 1 Google Drive Invoice templates 1 Google Invoice Payment Schedule 1 Google Invoice templates 1 1 Google Invoice templates Example 1 Google Invoice templatess 1 1 Google Spreadsheet Invoice Form 1 Printable Google Invoice templates 1 Sample Google Invoice templates 1
http://templatedocs.net/google-docs-invoice-template
2020-01-17T22:17:20
CC-MAIN-2020-05
1579250591234.15
[array(['http://templatedocs.net/wp-content/uploads/2019/04/Contractor-Agreement-Invoice-1.jpg', 'Contractor Agreement Invoice 1 Contractor Agreement Invoice 1'], dtype=object) array(['http://templatedocs.net/wp-content/uploads/2019/04/Google-Blank-Invoice-templates-1.jpg', 'Google Blank Invoice templates 1 Google Blank Invoice templates 1'], dtype=object) array(['http://templatedocs.net/wp-content/uploads/2019/04/Google-Business-Invoice-templates-1.jpg', 'Google Business Invoice templates 1 Google Business Invoice templates 1'], dtype=object) array(['http://templatedocs.net/wp-content/uploads/2019/04/Google-Contractor-Invoice-Form-1.jpg', 'Google Contractor Invoice Form 1 Google Contractor Invoice Form 1'], dtype=object) array(['http://templatedocs.net/wp-content/uploads/2019/04/Google-Docs-Service-Invoice-templates-1.jpg', 'Google Docs Service Invoice templates 1 Google Docs Service Invoice templates 1'], dtype=object) array(['http://templatedocs.net/wp-content/uploads/2019/04/Google-Docs-templatess-Invoice-1.jpg', 'Google Docs templatess Invoice 1 Google Docs templatess Invoice 1'], dtype=object) array(['http://templatedocs.net/wp-content/uploads/2019/04/Google-Drive-Invoice-templates-1.jpg', 'Google Drive Invoice templates 1 Google Drive Invoice templates 1'], dtype=object) array(['http://templatedocs.net/wp-content/uploads/2019/04/Google-Invoice-Payment-Schedule-1.jpg', 'Google Invoice Payment Schedule 1 Google Invoice Payment Schedule 1'], dtype=object) array(['http://templatedocs.net/wp-content/uploads/2019/04/Google-Invoice-templates-1-1.jpg', 'Google Invoice templates 1 1 Google Invoice templates 1 1'], dtype=object) array(['http://templatedocs.net/wp-content/uploads/2019/04/Google-Invoice-templates-Example-1.jpg', 'Google Invoice templates Example 1 Google Invoice templates Example 1'], dtype=object) array(['http://templatedocs.net/wp-content/uploads/2019/04/Google-Invoice-templatess-1-1.jpg', 'Google Invoice templatess 1 1 Google Invoice templatess 1 1'], dtype=object) array(['http://templatedocs.net/wp-content/uploads/2019/04/Google-Spreadsheet-Invoice-Form-1.jpg', 'Google Spreadsheet Invoice Form 1 Google Spreadsheet Invoice Form 1'], dtype=object) array(['http://templatedocs.net/wp-content/uploads/2019/04/Printable-Google-Invoice-templates-1.jpg', 'Printable Google Invoice templates 1 Printable Google Invoice templates 1'], dtype=object) array(['http://templatedocs.net/wp-content/uploads/2019/04/Sample-Google-Invoice-templates-1.jpg', 'Sample Google Invoice templates 1 Sample Google Invoice templates 1'], dtype=object) ]
templatedocs.net
See the influence of push notifications on in-app actions. Adobe Analytics provides behavioral analytics and content performance measurement for websites and apps. Airship Real-Time Data Streaming provides a live view into your application, adding the “messaging perspective” to your Adobe Analytics information. This integration enables a complete view of the user experience by combining Adobe Analytics data with user-specific interactions such as push sends, direct and indirect opens, and uninstalls. Via the Real-Time Data Streaming, Airship provides exclusive mobile engagement data that is not otherwise available from the Adobe Analytics SDK, including: - Send Events: A device was sent a notification. Includes the push ID and the user that push ID was sent to. - Control Events: A device, targeted as part of an A/B Test, was sorted into the control group. It received no notification, helping you answer the question: "What would have happened had you not sent a notification?" - Direct and Indirect Opens: Opens that were either caused by a notification, or which occurred within 12 hours of a notification. - In-App Message Events: Displays, resolutions (timeout or user action), expiration. - Message Center Events: Delivery, read, deleted. - Uninstall Events: The user uninstalled the application. You can also export users from Adobe Analytics and send messages to them in Airship. In addition, we augment Adobe's device reporting with information on in-app behavior defined by the Airship SDK. In particular, we provide information pertaining to our Message Center and In-App Messaging products. We can tell you when a Message Center message was delivered (which happens via a different mechanism than the push notification), read, and deleted. We can also describe what happened to an in-app notification: whether it was displayed or expired before it could be, and what happened to it after it was displayed—did the user dismiss it, interact with it, or allow it to resolve itself? Setup Client Code Fetch the Adobe Analytics visitor ID for your user, then associate it with the Airship channel ID. See ID Matching for details about this feature. iOS // Get the Adobe visitor ID let visitorID = ADBMobile.visitorMarketingCloudID() // Add the visitor ID to the current associated identifiers let identifiers = UAirship.shared().analytics.currentAssociatedDeviceIdentifiers() identifiers.setIdentifier(visitorID, forKey:"AA_visitorID") // Associate the identifiers UAirship.shared().analytics.associateDeviceIdentifiers(identifiers) // Get the Adobe visitor ID NSString *visitorID = [ADBMobile visitorMarketingCloudID]; // Add the visitor ID to the current associated identifiers UAAssociatedIdentifiers *identifiers = [[UAirship shared].analytics currentAssociatedDeviceIdentifiers]; [identifiers setIdentifier:visitorID forKey:@"AA_visitorID"]; // Associate the identifiers [[UAirship shared].analytics associateDeviceIdentifiers:identifiers]; Android // Get the Adobe visitor ID String visitorId = Visitor.getMarketingCloudId(); // Add the visitor ID to the current associated identifiers UAirship.shared().getAnalytics() .editAssociatedIdentifiers() .addIdentifier("AA_visitorID", visitorId) .apply(); Dashboard - Go to Settings » Real-Time Data Streaming. - Under Real-Time Data Streaming, click Adobe Analytics. Tip Previously configured integrations are listed under Enabled Integrations. - Configure a new Adobe Analytics integration: - Enter a user-friendly name and description. - Enter your Adobe Analytics Report Suite ID and Tracking Server URL. If you do not have these, contact your iOS or Android developer. Important Make sure to include http:// in the URL, e.g.,. - Enter your Adobe Analytics Tracking Server URL/ - Choose one or more event type: - Opens - Sends - Control - Uninstalls - Message Center Read, Delivery, and Delete Events - In-App Message Expiration, Resolution, and Display Events - Click the Save button. Now that you have configured your app and your integration, you can create event-based Segments in Adobe Analytics. See Recommended Attribute Mapping for recommended mapping of Data Streaming events to Adobe Analytics events. Recommended Attribute Mapping Capture what data is valuable to you by using Adobe Analytics context data and processing rules. Airship uses the Data Insertion API to send data to Adobe Analytics. Airship uses the contextData field so that we can include every piece of relevant data without taking up the limited number of variables and properties Adobe Analytics supports. In Adobe Analytics you can set processing rules so that every field on a data streaming event maps to an Adobe Analytics concept. Once your Adobe Analytics administrator has set up the processing rules, you can view Airship connect data in your Adobe Analytics dashboard. We derive contextData keys from the original data streaming events by flattening the JSON objects. For example, the push identifier of the notification that caused an open is triply nested: { "body": { "triggering_push": { "push_id": "d99bd842-f816-4560-bc59-b057f7c0e164" } } } This field would be available for mapping as body.triggering_push.push_id. When mapped to a variable (sProp, eVar, event, or other Adobe Analytics concept), the variable’s value would be d99bd842-f816-4560-bc59-b057f7c0e164. The above JSON example is a partial object. Actual JSON objects coming from Real-Time Data Streaming are much more complex. To see all the available fields, read the Data Streaming API Reference. contextData can take a long time to appear in the Adobe Analytics dashboard. Please allow at least one hour before contacting Airship Support. Known issue: If no processing rules have been defined, no contextData variables will be available for Mapping. As a workaround until Adobe Analytics fixes this issue, add a trivial rule that does nothing, then proceed establishing your mapping as normal. Sample Data sent to Adobe <?xml version="1.0"?> <request> <reportSuiteID>your-report-suite-id</reportSuiteID> <scXmlVer>1.0</scXmlVer> <pageName>OPEN-ios</pageName> <timestamp>2016-05-19T22:42:36.946Z</timestamp> <contextData> <id>03f13497-1e13-11e6-bc8d-001018948f58</id> <offset>444449</offset> <occurred>2016-05-19T22:42:36.946Z</occurred> <processed>2016-05-19T22:42:51.511Z</processed> <device.ios_channel>ec2816a3-72c7-4b9b-9ee6-ae31229f28bd</device.ios_channel> <device.named_user_id>mtr1234</device.named_user_id> <device.identifiers.com.urbanairship.limited_ad_tracking_enabled>true</device.identifiers.com.urbanairship.limited_ad_tracking_enabled> <device.identifiers.session_id>72153D08-B6A2-4900-9045-6776734B183B</device.identifiers.session_id> <device.identifiers.com.urbanairship.idfa>E51089C4-DD2D-44AA-BCD0-092DD6A16085</device.identifiers.com.urbanairship.idfa> <device.identifiers.com.urbanairship.vendor>DFEF1B87-2253-423C-97D4-003AA36F5C25</device.identifiers.com.urbanairship.vendor> <device.attributes.locale_variant/> <device.attributes.app_version>215</device.attributes.app_version> <device.attributes.device_model>iPhone6,1</device.attributes.device_model> <device.attributes.connection_type>CELL</device.attributes.connection_type> <device.attributes.app_package_name>com.urbanairship.internalsampleapp</device.attributes.app_package_name> <device.attributes.iana_timezone>America/Los_Angeles</device.attributes.iana_timezone> <device.attributes.push_opt_in>true</device.attributes.push_opt_in> <device.attributes.locale_country_code>US</device.attributes.locale_country_code> <device.attributes.device_os>9.3.2</device.attributes.device_os> <device.attributes.locale_timezone>-25200</device.attributes.locale_timezone> <device.attributes.carrier>T-Mobile</device.attributes.carrier> <device.attributes.locale_language_code>en</device.attributes.locale_language_code> <device.attributes.location_enabled>false</device.attributes.location_enabled> <device.attributes.background_push_enabled>true</device.attributes.background_push_enabled> <device.attributes.ua_sdk_version>7.1.0</device.attributes.ua_sdk_version> <device.attributes.location_permission>UNPROMPTED</device.attributes.location_permission> <body.session_id>787bf679-4732-4722-b761-bcd9b1e7a1eb</body.session_id> <body.last_delivered.push_id>685783c0-68b1-4d05-bc94-98e90d943fd1</body.last_delivered.push_id> <body.last_delivered.variant_id>1</body.last_delivered.variant_id> <body.last_delivered.time>2016-05-19T18:04:40.049Z</body.last_delivered.time> <type>OPEN</type> </contextData> <visitorID>ec2816a372c74b9b9ee6ae31229f28bd</visitorID> </request>
https://docs.airship.com/partners/adobe-analytics/
2020-01-17T22:56:14
CC-MAIN-2020-05
1579250591234.15
[]
docs.airship.com
Custom fields are metadata that help you integrate Orbitera billing and marketplace data with third-party systems. There are two types of custom fields: - Customer fields – Unique metadata fields that are applied to Orbitera-defined customers. - Cloud account fields – In Orbitera, a customer can be associated with multiple cloud accounts. Cloud account fields allow you to differentiate accounts from each other for a given customer. Custom fields are not visible to or editable by end users. This is important for SAP integrations, for example, because you cannot allow your customers to edit resource labels if those labels are necessary inputs to your billing system. Any project codes and internal codes are meaningless to Orbitera, but they might, for example, tell your internal systems which business unit owns the customer relationship, what their compensation schema is relative to that usage for that business unit, and how the invoice is routed through your internal systems. How custom fields work First, you attach custom fields as metadata to a customer or cloud account. Then, your existing third-party systems – such as ERP or billing systems – can translate this metadata into relevant fields. SAP account Example A commonly used customer field for a SAP account might be named and labeled something like SAP_ID_Internal, where Internal can be any code. This field would not be visible to a customer, but would be attached to a customer entity for integration with an SAP system. Configure a custom field OR Go to Cloud Account Fields - Click Add. Enter a descriptive name. The name is a unique key that you can use in querying the API. Enter a descriptive label. The label is the human-readable name that appears in reports and dashboards. (Optional) By default, custom fields are included in billing reports as an added column that you can use to group, sort, and filter data. Including custom fields in billing reports is relevant for IaaS resellers, for example. You can disable this feature by clicking the slider. Click Save. Populate custom fields After you create a custom field, it is available to be populated with metadata values within each customer or partner. Custom Cloud Account Fields function in the same way, except the metadata is attached to an account within a customer entity. Once populated, you can use these fields to filter and query reports and dashboards. In addition, you can add custom fields to invoice headers. In Settings > Billing > Invoice Options, select Include custom fields in invoice header. For customers - Edit a customer and select Custom fields or Cloud accounts. - Enter customer-specific or account-specific data into the custom fields. For partners - Edit a partner and select Custom fields. - Enter partner-specific data into the custom fields. Example Let’s say you have many customers that are covered by a limited number of sales people. For commission reports, you need to allocate each customer to a salesperson. To accomplish this, create a custom customer field named "Salesperson" and define this value within each Orbitera customer. After the field is defined and populated, you can customize your Orbitera dashboards and reports to group content by these values. Create the custom field Populate the custom field Use the custom field in a deployment Learn more about passing custom fields in API calls.
https://docs.orbitera.com/guides/customers/custom-fields
2020-01-17T22:46:42
CC-MAIN-2020-05
1579250591234.15
[array(['/images/screen-customer-label.png', 'Customer label'], dtype=object) array(['/images/screen-define-customer-fields.png', 'Define customer fields'], dtype=object) array(['/images/screen-customer-fields.png', 'Assign customer fields'], dtype=object) array(['/images/screen-customer-field-deployment.png', 'Use customer fields'], dtype=object) ]
docs.orbitera.com
Orbitera makes it relatively easy to create your own branded marketplace. You can think of a white-label marketplace as an out-of-the-box sales channel for selling products, services, and applications to cloud consumers and customers. For non-cloud products, too You can also use white-label marketplaces for non-cloud products. Imagine, for example, a device ecosystem that incorporates cloud service and hardware into a sophisticated distribution network. Low burden A marketplace serves as a storefront for products and services that customers can consume. You can have this with a lower operational burden of provisioning, launching, and maintaining your own storefront. Self service.
https://docs.orbitera.com/guides/marketplace
2020-01-17T22:44:42
CC-MAIN-2020-05
1579250591234.15
[]
docs.orbitera.com
Files modules¶ - acl – Set and retrieve file ACL information - archive – Creates a compressed archive of one or more files or trees - assemble – Assemble configuration files from fragments - blockinfile – Insert/update/remove a text block surrounded by marker lines - copy – Copy files to remote locations - fetch – Fetch files from remote nodes - file – Manage files and file properties - find – Return a list of files based on specific criteria - ini_file – Tweak settings in INI files - iso_extract – Extract files from an ISO image - lineinfile – Manage lines in text files - patch – Apply patch files using the GNU patch tool - read_csv – Read a CSV file - replace – Replace all instances of a particular string in a file using a back-referenced regular expression - stat – Retrieve file or file system status - synchronize – A wrapper around rsync to make common tasks in your playbooks quick and easy - tempfile – Creates temporary files and directories - template – Template a file out to a remote server - unarchive – Unpacks an archive after (optionally) copying it from the local machine - xattr – Manage user defined extended attributes - xml – Manage bits and pieces of XML files or strings Note - (D): This marks a module as deprecated, which means a module is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.
https://docs.ansible.com/ansible/latest/modules/list_of_files_modules.html
2020-01-17T22:45:10
CC-MAIN-2020-05
1579250591234.15
[]
docs.ansible.com
All content with label 2lcache+as5+client+datagrid+gridfs+hibernate_search+infinispan+installation+replication+scala+tutorial. Related Labels: expiration, publish, coherence, interceptor, server, recovery, transactionmanager, dist, release, query, jbossas, lock_striping, nexus, guide, schema, listener, httpd, cache, amazon, s3, memcached, grid, ha, jcache,, jboss_cache, import, index, events, batch, hash_function, configuration, buddy_replication, loader, xa, write_through, cloud, remoting, mvcc, notification, read_committed, xml, jbosscache3x, distribution, started, cachestore, data_grid, cacheloader, resteasy, cluster,, - hibernate_search, - infinispan, - installation, - replication, - scala, - tutorial ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/2lcache+as5+client+datagrid+gridfs+hibernate_search+infinispan+installation+replication+scala+tutorial
2020-01-17T22:17:01
CC-MAIN-2020-05
1579250591234.15
[]
docs.jboss.org
All content with label as5+concurrency+events+gui_demo+hot_rod+infinispan+installation+jboss_cache+listener+migration+release+scala+transaction+transactionmanager. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, replication, recovery, dist, partitioning, query, deadlock, archetype, lock_striping, jbossas, nexus, guide, schema, cache, amazon, s3, grid, test, jcache, api, xsd, ehcache, maven, documentation, wcm, write_behind, ec2, 缓存, s, hibernate, getting, interface, custom_interceptor, setup, clustering, eviction, gridfs, out_of_memory, examples, import, index, configuration, hash_function, batch, buddy_replication, loader, xa, cloud, mvcc, tutorial, notification, read_committed, xml, jbosscache3x, distribution, meeting, started, cachestore, data_grid, cacheloader, hibernate_search, cluster, br, development, websocket, async, interactive, xaresource, build, gatein, searchable, demo, cache_server, ispn, client, jpa, filesystem, tx, eventing, client_server, infinispan_user_guide, standalone, webdav, repeatable_read, hotrod, snapshot, docs, consistent_hash, batching, jta, faq, 2lcache, jsr-107, lucene, jgroups, locking, rest more » ( - as5, - concurrency, - events, - gui_demo, - hot_rod, - infinispan, - installation, - jboss_cache, - listener, - migration, - release, - scala, - transaction, - transactionmanager ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/as5+concurrency+events+gui_demo+hot_rod+infinispan+installation+jboss_cache+listener+migration+release+scala+transaction+transactionmanager
2020-01-17T21:56:02
CC-MAIN-2020-05
1579250591234.15
[]
docs.jboss.org
December 2015 Volume 30 Number 13 Test Run - Introduction to Spark for .NET Developers By James McCaffrey | December 2015 best way to see where this article is headed is to take a look at the demo interactive session shown in Figure 1. From a Windows command shell running in administrative mode, I started a Spark environment by issuing a spark-shell command. Figure 1 Spark in Action The spark-shell command generates a Scala interpreter that runs in the shell and in turn issues a Scala prompt (scala>). Scala is a scripting language that’s based on Java. There are other ways to interact with Spark, but using a Scala interpreter is the most common approach, in part because the Spark framework is written mostly in Scala. You can also interact with Spark by using Python language commands or by writing Java programs. Notice the multiple warning messages in Figure 1. These messages are very common when running Spark because Spark has many optional components that, if not found, generate warnings. In general, warning messages can be ignored for simple scenarios. The first command entered in the demo session is: scala> val f = sc.textFile("README.md") This can be loosely interpreted to mean, “Store into an immutable RDD object named f the contents of file README.md.” Scala objects can be declared as val or var. Objects declared as val are immutable and can’t change.. Text file README.md (the .md extension stands for markdown document) is located in the Spark root directory C:\spark_1_4_1. If your target file is located somewhere else, you can provide a full path such as C:\\Data\\ReadMeToo.txt. The second command in the demo session is: scala> val ff = f.filter(line => line.contains("Spark")) This means, “Store into an immutable RDD object named ff only those lines from object f that have the word ‘Spark’ in them.” The filter function accepts what’s called a closure. You can think of a closure as something like an anonymous function. Here, the closure accepts a dummy string input parameter named line and returns true if line contains “Spark,” false otherwise. Because “line” is just a parameter name, I could’ve used any other name in the closure, for example: ln => ln.contains("Spark") Spark is case-sensitive, so the following would generate an error: ln => ln.Contains("Spark") Scala has some functional programming language characteristics, and it’s possible to compose multiple commands. For example, the first two commands could be combined into one as: val ff = sc.textFile("README.md").filter(line => lne.contains("Spark")) The final three commands in the demo session are: scala> val ct = ff.count() scala> println(ct) scala> :q The count function returns the number of items in an RDD, which in this case is the number of lines in file README.md that contain the word Spark. There are 19 such lines. To quit a Spark Scala session, you can type the :q command. Installing Spark on a Windows Machine There are four main steps for installing Spark on a Windows machine. First, you install a Java Development Kit (JDK) and the Java Runtime Environment (JRE). Second, you install the Scala language. Third, you install the Spark framework. And fourth, you configure the host machine system variables. The Spark distribution comes in a compressed .tar format, so you’ll need a utility to extract the Spark files. I recommend installing the open source 7-Zip program before you begin. Although Spark and its components aren’t formally supported on a wide range of Windows OS versions, I’ve successfully installed Spark on machines running Windows 7, 8, 10, and Server 2008 and 2012. The demo shown in Figure 1 is running on a Windows 8.1 machine. You install the JDK by running a self-extracting executable, which you can find by doing an Internet search. I used version jdk-8u60-windows-x64.exe. When installing the 64-bit version of the JDK, the default installation directory is C:\Program Files\Java\jdkx.x.x_xx\, as shown in Figure 2. I recommend that you don’t change the default location. Figure 2 The Default JDK Location Installing the JDK also installs an associated JRE. After the installation finishes, the default Java parent directory will contain both a JDK directory and an associated JRE directory, as shown in Figure 3. .png "Java JDK and JRE Installed to C:\Program Files\Java\") Figure 3 Java JDK and JRE Installed to C:\Program Files\Java\ Note that your machine will likely also have a Java directory with one or more 32-bit JRE directories at C:\Program Files (x86). It’s OK to have both 32-bit and 64-bit versions of JREs on your machine, but I recommend using only the 64-bit version of the Java JDK. Installing Scala The next step is to install the Scala language, but before you do so, you must go to the Spark download site (described in the next section of this article) and determine which version of Scala to install. The Scala version must be compatible with the Spark version you’ll install in the following step. Unfortunately, information about Scala-Spark version compatibility is scanty. When I was installing the Spark components (quite some time ago by the time you read this), the current version of Spark was 1.5.0, but I couldn’t find any information about which version of Scala was compatible with that version of Spark. Therefore, I looked at the previous version of Spark, which was 1.4.1, and found some information on developer discussion Web sites that suggested that version 2.10.4 of Scala was likely compatible with version 1.4.1 of Spark. Installing Scala is easy. The installation process simply involves running an .msi installer file. The Scala installation wizard guides you through the process. Interestingly, the default installation directory for Scala is in 32-bit directory C:\Program Files (x86)\ rather than in 64-bit directory C:\Program Files\ (see Figure 4). .png "Scala Installs to C:\Program Files (x86)\scala\") Figure 4 Scala Installs to C:\Program Files (x86)\scala\ If you intend to interact with Spark by writing Java programs rather than by using Scala commands, you’ll need to install an additional tool called the Scala Simple Build Tool (SBT). Interacting with Spark through compiled Java programs is much, much more difficult than using the interactive Scala. Installing Spark The next step is to install the Spark framework. First, though, be sure you have a utility program such as 7-Zip that can extract .tar format files. The Spark installation process is manual—you download a compressed folder to your local machine, extract the compressed files, and then copy the files to a root directory. This means that if you wish to uninstall Spark you simply delete the Spark files. You can find the Spark site at spark.apache.org. The download page allows you to select a version and a package type. Spark is a computing framework and requires a distributed file system (DFS). By far the most common DFS used with the Spark framework is the Hadoop distributed file system (HDFS). For testing and experimentation purposes, such as the demo session in Figure 1, you can install Spark on a system that doesn’t have a DFS. In this scenario, Spark will use the local file system. If you haven’t extracted .tar files before, you might find the process a bit confusing because you typically have to extract twice. First, download the .tar file (mine was named spark-1.4.1-bin-hadoop2.6.tar) to any temporary directory (I used C:\Temp). Next, right-click on the .tar file and select “Extract files” from the context menu and extract to a new directory inside the temporary directory. The first extraction process creates a new compressed file without any file extension (in my case spark-1.4.1-bin-hadoop2.6). Next, right-click on that new file and select “Extract files” again from the context menu and extract to a different directory. This second extraction will produce the Spark framework files. Create a directory for the Spark framework files. A common convention is to create a directory named C:\spark_x_x_x, where the x values indicate the version. Using this convention, I created a C:\spark_1_4_1 directory and copied the extracted files into that directory, as shown in Figure 5. .png "Manually Copy Extracted Spark Files to C:\spark_x_x_x\") Figure 5 Manually Copy Extracted Spark Files to C:\spark_x_x_x\ Configuring Your Machine After installing Java, Scala and Spark, the last step is to configure the host machine. This involves downloading a special utility file needed for Windows, setting three user-defined system environment variables, setting the system Path variable, and optionally modifying a Spark configuration file. Running Spark on Windows requires that a special utility file named winutils.exe be in a local directory named C:\hadoop. You can find the file in several places by doing an Internet search. I created directory C:\hadoop and then found a copy of winutils.exe at and downloaded the file into the directory. Next, create and set three user-defined system environment variables and modify the system Path variable. Go to Control Panel | System | Advanced System Settings | Advanced | Environment Variables. In the User Variables section, create three new variables with these names and values: JAVA_HOME C:\Program Files\Java\jdk1.8.0_60 SCALA_HOME C:\Program Files (x86)\scala HADOOP_HOME C:\hadoop Then, in the System Variables, edit the Path variable by adding the location of the Spark binaries, C:\spark_1_4_1\bin. Be careful; you really don’t want to lose any values in the Path variable. Note that the Scala installation process will have already added the location of the Scala binaries for you (see Figure 6). Figure 6 Configuring Your System After you set up your system variables, I recommend modifying the Spark configuration file. Go to the root directory C:\spark_1_4_1\config and make a copy of file log4j.properties.template. Rename that copy by removing the .template extension. Edit the first configuration entry from log4j.rootCategory=INFO to log4j.rootCategory=WARN. The idea is that by default Spark spews all kinds of informational messages. Changing the logging level from INFO to WARN greatly reduces the number of messages and makes interacting with Spark less messy. The Hello World of Spark The Hello World example of distributed computing is to calculate the number of different words in a data source. Figure 7 shows the word-count example using Spark. Figure 7 Word Count Example Using Spark The Scala shell is sometimes called a read, evaluate, print loop (REPL) shell. You can clear the Scala REPL by typing CTRL+L. The first command in Figure 7 loads the contents of file README.md into an RDD named f, as explained previously. In a realistic scenario, your data source could be a huge file spread across hundreds of machines, or could be in a distributed database such as Cassandra. The next command is: scala> val fm = f.flatMap(line => line.split(" ")) The flatMap function call splits each line in the f RDD object on blank space characters, so resulting RDD object fm will hold a collection of all the words in the file. From a developer’s point of view, you can think of fm as something like a .NET List<string> collection. The next command is: scala> val m = fm.map(word => (word, 1)) The map function creates an RDD object that holds pairs of items, where each pair consists of a word and the integer value 1. You can see this more clearly if you issue an m.take(5) command. You’ll see the first five words in file README.md, and a 1 value next to each word. From a developer’s point of view, m is roughly a List<Pair> collection in which each Pair object consists of a string and an integer. The string (a word in README.md) is a key and the integer is a value, but unlike many key-value pairs in the Microsoft .NET Framework, duplicate key values are allowed in Spark. RDD objects that hold key-value pairs are sometimes called pair RDDs to distinguish them from ordinary RDDs. The next command is: scala> val cts = m.reduceByKey((a,b) => a + b) The reduceByKey function combines the items in object m by adding the integer values associated with equal key values. If you did a cts.take(10) you’d see 10 of the words in README.md followed by the number of times each word occurs in the file. You might also notice that the words in object cts aren’t necessarily in any particular order. The reduceByKey function accepts a closure. You can use an alternate Scala shortcut notation: scala> val cts = m.reduceByKey(_ + _) The underscore is a parameter wild card, so the syntax can be interpreted as “add whatever two values are received.” Notice that this word-count example uses the map function followed by the reduceByKey function. This is an example of the MapReduce paradigm. The next command is: scala> val sorted = cts.sortBy(item => item._2, false) This command sorts the item in the cts RDD, based on the second value (the integer count) of the items. The false argument means to sort in descending order, in other words from highest count to lowest. The Scala shortcut syntax form of the sort command would be: scala> val sorted = cts.sortBy(_._2, false) Because Scala has many functional language characteristics and uses a lot of symbols instead of keywords, it’s possible to write Scala code that’s very unintuitive. The final command in the Hello World example is to display the results: scala> sorted.take(5).foreach(println) This means, “Fetch the first five objects in the RDD object named sorted, iterate over that collection, applying the println function to each item.” The results are: (,66) (the,21) (Spark,14) (to,14) (for,11) This means there are 66 occurrences of the empty/null word in README.md, 21 occurrences of the word “the,”, 14 occurrences of “Spark” and so on. Wrapping Up The information presented in this article should get you up and running if you want to experiment with Spark on a Windows machine. Spark is a relatively new technology (created at UC Berkeley in 2009), but interest in Spark has increased dramatically over the past few months, at least among my colleagues. In a 2014 competition among Big Data processing frameworks, Spark set a new performance record, easily beating the previous record set by a Hadoop system the year before. Because of its exceptional performance characteristics, Spark is particularly well-suited for use with machine learning systems. Spark supports an open source library of machine learning algorithms named MLib.: Gaz Iqbal and Umesh Madan
https://docs.microsoft.com/en-us/archive/msdn-magazine/2015/december/test-run-introduction-to-spark-for-net-developers
2020-01-17T22:30:04
CC-MAIN-2020-05
1579250591234.15
[array(['images/mt595756.mccaffrey_figure1-firstexample_hires%28en-us%2cmsdn.10%29.png', 'Spark in Action Spark in Action'], dtype=object) array(['images/mt595756.mccaffrey_figure2-installjdk_hires%28en-us%2cmsdn.10%29.png', 'The Default JDK Location The Default JDK Location'], dtype=object) array(['images/mt595756.McCaffrey_Figure3-JavaInstallDirectories_hires(en-us,MSDN.10', 'Java JDK and JRE Installed to C:\\Program Files\\Java\\'], dtype=object) array(['images/mt595756.McCaffrey_Figure5-ScalaInstallDirectories_hires(en-us,MSDN.10', 'Scala Installs to C:\\Program Files (x86)\\scala\\'], dtype=object) array(['images/mt595756.McCaffrey_Figure6-SparkFilesInstalled_hires(en-us,MSDN.10', 'Manually Copy Extracted Spark Files to C:\\spark_x_x_x\\'], dtype=object) array(['images/mt595756.mccaffrey_figure7-configuration_hires%28en-us%2cmsdn.10%29.png', 'Configuring Your System Configuring Your System'], dtype=object) array(['images/mt595756.mccaffrey_figure8-wordcountexample_hires%28en-us%2cmsdn.10%29.png', 'Word Count Example Using Spark Word Count Example Using Spark'], dtype=object) ]
docs.microsoft.com
August 2018 Volume 33 Number 8 [Data Points] Deep Dive into EF Core HasData Seeding By Julie Lerman | August 2018 | Get the Code The ability to seed data when migrations are run is a feature that disappeared in the transition from Entity Framework 6 (EF6) to Entity Framework Core (EF Core). With the latest version of EF Core, 2.1, seeding has made a comeback, yet in a very different form. In this article, you’ll learn how the new seeding feature works, as well as scenarios where you may or may not want to use it. Overall, the new mechanism of seeding is a really handy feature for models that aren’t overly complex and for seed data that remains mostly static once it’s been created. Basic Seeding Functionality In EF6 (and earlier versions) you added logic into the DbMigrationConfiguration.Seed method to push data into the database any time migrations updated the database. For a reminder of what that looks like, check out the Microsoft ASP.NET documentation on seeding with EF6 at bit.ly/2ycTAIm. In EF Core 2.1, the seeding workflow is quite different. There is now Fluent API logic to define the seed data in OnModelCreating. Then, when you create a migration, the seeding is transformed into migration commands to perform inserts, and is eventually transformed into SQL that that particular migration executes. Further migrations will know to insert more data, or even perform updates and deletes, depending on what changes you make in the OnModelCreating method. If you happened to read the July 2018 Data Points (msdn.com/magazine/mt847184), you may recall the Publications model I used to demonstrate query types. I’ll use that same model for these examples. In fact, I slid some data seeding into the July download sample! I’ll start from a clean slate here, though. The three classes in my model are Magazine, Article and Author. A magazine can have one or more articles and an article can have one author. There’s also a PublicationsContext that uses SQLite as its data provider and has some basic SQL logging set up. Seeding Data for a Single Entity Type Let’s start by seeing what it looks like to provide seed data for a magazine—at its simplest. The key to the new seeding feature is the HasData Fluent API method, which you can apply to an Entity in the OnModelCreating method. Here’s the structure of the Magazine type: public class Magazine { public int MagazineId { get; set; } public string Name { get; set; } public string Publisher { get; set; } public List<Article> Articles { get; set; } } It has a key property, MagazineId, two strings and a list of Article types. Now let’s seed it with data for a single magazine: protected override void OnModelCreating (ModelBuilder modelBuilder) { modelBuilder.Entity<Magazine> ().HasData (new Magazine { MagazineId = 1, Name = "MSDN Magazine" }); } A couple things to pay attention to here: First, I’m explicitly setting the key property, MagazineId. Second, I’m not supplying the Publisher string. Next, I’ll add a migration, my first for this model. I happen to be using Visual Studio Code for this project, which is a .NET Core app, so I’m using the CLI migrations command, “dotnet ef migrations add init.” The resulting migration file contains all of the usual CreateTable and other relevant logic, followed by code to insert the new data, specifying the table name, columns and values: migrationBuilder.InsertData( table: "Magazines", columns: new[] { "MagazineId", "Name", "Publisher" }, values: new object[] { 1, "MSDN Magazine", null }); Inserting the primary key value stands out to me here—especially after I’ve checked how the MagazineId column was defined further up in the migration file. It’s a column that should auto-increment, so you may not expect that value to be explicitly inserted: MagazineId = table.Column<int>(nullable: false) .Annotation("Sqlite:Autoincrement", true) Let’s continue to see how this works out. Using the migrations script command, “dotnet ef migrations script,” to show what will be sent to the database, I can see that the primary key value will still be inserted into the key column: INSERT INTO "Magazines" ("MagazineId", "Name", "Publisher") VALUES (1, 'MSDN Magazine', NULL); That’s because I’m targeting SQLite. SQLite will insert a key value if it’s provided, overriding the auto-increment. But what about with a SQL Server database, which definitely won’t do that on the fly? I switched the context to use the SQL Server provider to investigate and saw that the SQL generated by the SQL Server provider includes logic to temporarily set IDENTITY_INSERT ON. That way, the supplied value will be inserted into the primary key column. Mystery solved! You can use HasData to insert multiple rows at a time, though keep in mind that HasData is specific to a single entity. You can’t combine inserts to multiple tables with HasData. Here, I’m inserting two magazines at once: modelBuilder.Entity<Magazine>() .HasData(new Magazine{MagazineId=2, Name="New Yorker"}, new Magazine{MagazineId=3, Name="Scientific American"} ); What About Non-Nullable Properties? Remember that I’ve been skipping the Publisher string property in the HasData methods, and the migration inserts null in its place. However, if I tell the model builder that Publisher is a required property, in other words, that the database column is non-nullable, HasData will enforce that. Here’s the OnModelBuilding code I’ve added to require Publisher: modelBuilder.Entity<Magazine>().Property(m=>m.Publisher).IsRequired(); Now, when I try to add a migration to account for these new changes (the IsRequired method and seeding two more magazines), the migrations add command fails, with a very clear error message: "The seed entity for entity type 'Magazine' cannot be added because there was no value provided for the required property 'Publisher'." This happened because the two new magazines I’m adding don’t have a Publisher value. The same would happen if you tried to skip the MagazineId because it’s an integer, even though you know that the database will provide the value. EF Core also knows that the database will generate this value, but you’re still required to provide it in HasData. The need to supply required values leads to another interesting limitation of the HasData feature, which is that there’s a possibility it will conflict with a constructor. Imagine I have a constructor for Magazine that takes the magazine’s name and publisher’s name: public Magazine(string name, string publisher) { Name=name; Publisher=publisher; } As the database will create the key value (MagazineId), there’s no reason I’d have MagazineId as a parameter of such a constructor. Thanks to another new feature of EF Core 2.1, I no longer have to add a parameterless constructor to the class in order for queries to materialize magazine objects. That means the constructor is the only option for me to use in my HasData method: modelBuilder.Entity<Magazine>() .HasData(new Magazine("MSDN Magazine", "1105 Media")); But, again, this will fail because I’m not supplying a value for the non-nullable MagazineId property. There’s a way around this, however, which takes advantage of the EF Core shadows property feature—using anonymous types instead of explicit types. HasData with Anonymous Types The ability to seed with anonymous types instead of explicit types solves a lot of potential roadblocks with HasData. The first is the one I just explained, where I created a constructor for the Magazine class and there’s no way to set the non-nullable MagazineId when seeding with HasData. Instead of instantiating a Magazine, you can instantiate an anonymous type and supply the MagazineId, without worrying about whether the property is public or private, or even exists! That’s what I’m doing in the following method call: modelBuilder.Entity<Magazine>() .HasData(new {MagazineId=1, Name="MSDN Mag", Publisher="1105 Media"}); The migration code will correctly insert that data into the magazines table, and running the migrations update database command works as expected: migrationBuilder.InsertData( table: "Magazines", columns: new[] { "MagazineId", "Name", "Publisher" }, values: new object[] { 1, "MSDN Mag", "1105 Media" }); You’ll see a few more roadblocks that the anonymous type solves further on. What About Private Setters? The limitation posed by the required primary key stands out because Magazine uses an integer as a key property. I’ve written many solutions, however, that use Guids for keys and my domain logic ensures that a Guid value is created when I instantiate an entity. With this setup, I can protect any properties by using private setters, yet still get the key property populated without exposing it. But there’s a problem for HasData. First, let’s see the effect and then explore (and solve) the problem. As an example, I’ve transformed Magazine in Figure 1 so that MagazineId is a Guid, the setters are private and the only way (so far) to set their values is through the one and only constructor. Figure 1 The Magazine Type with a Guid Key, Private Setters and a Parameter Constructor public class Magazine { public Magazine(string name, string publisher) { Name=name; Publisher=publisher; MagazineId=Guid.NewGuid(); } public Guid MagazineId { get; private set; } public string Name { get; private set; } public string Publisher { get; private set; } public List<Article> Articles { get; set; } } Now I’m assured that when I create a new Magazine object a MagazineId value will be created, as well: modelBuilder.Entity<Magazine>().HasData(new Magazine("MSDN Mag", "1105 Media"); The migration generates the following InsertData method for Magazine, using the Guid created in the constructor: migrationBuilder.InsertData( table: "Magazines", columns: new[] { "MagazineId", "Name", "Publisher" }, values: new object[] { new Guid("8912aa35-1433-48fe-ae72-de2aaa38e37e"), "MSDN Mag", "1105 Media" }); However, this can cause a problem for the migration’s ability to detect changes to the model. That Guid was auto-generated when I created the new migration. The next time I create a migration a different Guid will be generated and EF Core will see this as a change to the data, delete the row created from the first migration and insert a new row using the new Guid. Therefore, you should use explicit Guids when seeding with HasData, never generated ones. Also, you’ll need to use the anonymous type again, rather than the constructor, because MagazineId doesn’t have a public setter: var mag1=new {MagazineId= new Guid("0483b59c-f7f8-4b21-b1df-5149fb57984e"), Name="MSDN Mag", Publisher="1105 Media"}; modelBuilder.Entity<Magazine>().HasData(mag1); Keep in mind that explicitly creating Guids in advance could get cumbersome with many rows. Seeding Related Data Using HasData to seed related data is very different from inserting related data with DbSets. I stated earlier that HasData is specific to a single entity. That means you can’t build graphs as parameters of HasData. You can only provide values for properties of that Entity. Therefore, if you want to add a Magazine and an Article, these tasks need to be performed with separate HasData methods—one on Entity<Magazine> and one on Entity<Article>. Take a look at the schema of the Article class: public class Article { public int ArticleId { get; set; } public string Title { get; set; } public int MagazineId { get; set; } public DateTime PublishDate { get; set; } public int? AuthorId { get; set; } } Notice that the MagazineId foreign key is an int and, by default, that’s non-nullable. No article can exist without a Magazine identified. However, the AuthorId is a nullable int, therefore it’s possible to have an article that hasn’t yet had an author assigned. This means that when seeding an Article, in addition to the required ArticleId value, you must supply the MagazineId. But you’re not required to supply the AuthorId. Here’s code to add an article where I’ve supplied the key value (1), the value of an existing magazine’s ID (1) and a Title—I didn’t provide an AuthorId or a date: modelBuilder.Entity<Article>().HasData( new Article { ArticleId = 1, MagazineId = 1, Title = "EF Core 2.1 Query Types"}); The resulting migration code is as follows: migrationBuilder.InsertData( table: "Articles", columns: new[] { "ArticleId", "AuthorId", "MagazineId", "PublishDate", "Title" }, values: new object[] { 1, null, 1, new DateTime(1, 1, 1, 0, 0, 0, 0, DateTimeKind.Unspecified), "EF Core 2.1 Query Types" }); The migration is adding null for AuthorId, which is fine. I didn’t supply a PublishDate, so it’s defaulting to the minimal .NET date value (01/01/0001). If I were to add a MagazineId or an AuthorId that doesn’t yet exist, it won’t be caught until the SQL is run against the database, triggering a referential integrity error. If you’ve followed my work for a while, you may know that I’m a big fan of using foreign key properties for related data, rather than navigation properties. But there are scenarios where you may prefer not to have the foreign key property in your dependent type. EF Core can handle that thanks to shadow properties. And once again, anonymous types come to the rescue with HasData to seed the related data that requires you to supply the value of the foreign key column. Seeding Owned Entities Owned entities, also known as owned types, are the way EF Core lets you map non-entity types, replacing the complex type feature of Entity Framework. I wrote about this new support in the April 2018 Data Points column (msdn.com/magazine/mt846463). Because an owned type is specific to the entity that owns it, you’ll need to do the data seeding as part of the definition of the type as a property of an entity. You can’t just populate it from modelBuilder the way you do for an entity. To demonstrate, I’ll introduce a new type in my model, Publisher: public class Publisher { public string Name { get; set; } public int YearFounded { get; set; } } Notice it has no key property. I’ll use Publisher as a property of Magazine in place of the Publisher string and, at the same time, revert to a simpler Magazine class: public class Magazine { public int MagazineId { get; set; } public string Name { get; set; } public Publisher Publisher { get; set; } public List<Article> Articles { get; set; } } Two important points to remember are that you can only provide properties for one entity type with HasData and that the Model Builder treats an owned type as a separate entity. In this case, that means you can’t populate a magazine and its publisher in a single Entity<Magazine>.HasData method. Instead, you have to identify the owned property (even if you’ve configured it elsewhere) and append HasData to it. I’ll first provide some Magazine data: modelBuilder.Entity<Magazine> () .HasData (new Magazine { MagazineId = 1, Name = "MSDN Magazine" }); Seeding the owned type is a little tricky only because it may not be something you can intuit. Because the model builder will treat Publisher as a related object in order to persist it properly, it needs to know the value of the MagazineId that owns it. As there’s no MagazineId property in Publisher—EF Core uses its shadow property feature to infer a MagazineId property. In order to set that property, you’ll need to instantiate an anonymous type rather than a Publisher. If you tried to instantiate a Publisher, it wouldn’t accept the MagazineId property in the initializer: modelBuilder.Entity<Magazine> () .OwnsOne (m => m.Publisher) .HasData (new { Name = "1105 Media", YearFounded = 2006, MagazineId=1 }); When I create a migration to take this pairing into account, the resulting InsertData method knows to insert all of the values—the properties of Magazine and its owned type, Publisher—into the Magazine table: migrationBuilder.InsertData( table: "Magazines", columns: new[] { "MagazineId", "Name", "Publisher_Name", "Publisher_YearFounded" }, values: new object[] { 1, "MSDN Magazine", "1105 Media", 2006 }); This works out easily enough when my classes are simple, although you may reach some limitations with more complicated classes. No Migrations? EnsureCreated Does the Job Finally, we’ve reached the point where you get to see the dual nature of the new seeding mechanism. When you’re using database providers with migrations commands, the migrations will contain the logic to insert, update or delete seed data in the database. But at run time, there’s only one way to trigger HasData to be read and acted upon, and that’s in response to the DbContext.Database.EnsureCreated method. Keep in mind that EnsureCreated won’t run migrations if the database already exists. The provider that really benefits from this is the InMemory provider. You can explicitly create and seed InMemory databases in your tests by calling EnsureCreated. Unlike the Migrate command, which runs migrations—and will execute and seed methods in those migrations—EnsureCreated creates a database using the model described by the context class. And whatever provider you’re using, that will also cause HasData methods to insert data at the same time. To demonstrate, I’ve modified the PublicationsContext by adding a new constructor to allow for injecting a provider by adding an explicit public parameterless constructor to allow for passing in pre-configured options: public PublicationsContext (DbContextOptions<PublicationsContext> options) : base (options) { } public PublicationsContext () { } And I’ve added logic to skip the UseSqlite method in OnConfiguring if the options have already been configured: protected override void OnConfiguring (DbContextOptionsBuilder optionsBuilder) { if (!optionsBuilder.IsConfigured) { optionsBuilder.UseSqlite (@"Filename=Data/PubsTracker.db"); } optionsBuilder.UseLoggerFactory (MyConsoleLoggerFactory); } Note that I moved the UseLoggerFactory command to run after the IsConfigured check. If it comes before the check, IsConfigured returns true. I started out that way and it took me a while to figure out what was wrong. My automated test sets up the options to use the InMemory provider. Next, critical to the seeding, it calls EnsureCreated and then tests to see if there is, indeed, some data already available: public void CanRetrieveDataCreatedBySeeding () { var options = new DbContextOptionsBuilder<PublicationsContext> () .UseInMemoryDatabase ("RetrieveSeedData").Options; using (var context = new PublicationsContext (options)) { context.Database.EnsureCreated(); var storedMag = context.Magazines.FirstOrDefault (); Assert.Equal ("MSDN Magazine", storedMag.Name); } } The test passes because EnsureCreated forced the HasData methods to push the seed data into the InMemory database. A Variety of Use Cases, but Not All of Them Even though you’ve seen some of the limitations of using HasData in a few more-complex scenarios, it’s definitely a nice improvement over the workflow that existed in earlier versions of EF. I really appreciate that I now have more control over the data flow by tying the insert, update and delete statements to individual migrations, rather than having to worry about upserts on every migration. The syntax is much cleaner, as well. Most important is the dual nature of this feature that not only allows you to get the seed data into your development (or even production) database, but also means that by calling EnsureCreated, you can seed the InMemory data to provide a consistent base of seed data that will be relevant for each test. But HasData isn’t a silver bullet. Keep in mind that this feature is best for seed data that will remain static once it’s been inserted into the database. Also, watch out for HasData migrations that could override data you’ve seeded outside of migrations. As I explained earlier with the Guids, HasData doesn’t work well with computed data. Andriy Svyryd from the EF team adds, “For seeding testing data that’s not expected to be maintained between migrations or to have more complex logic, like computing seed values from the current state of the database, it’s still possible and encouraged to just create a new instance of the context and add the data using SaveChanges.” As another alternative, I’ve heard from readers that my method for seeding with JSON data is still working nicely, even with EF Core 2.1. I wrote about this in a blog post at bit.ly/2MvTyhM. If you want to stay informed on how HasData will evolve, or even on issues that users are discovering, keep an eye on the GitHub repository at bit.ly/2l8VrEy and just filter on HasData. Discuss this article in the MSDN Magazine forum
https://docs.microsoft.com/en-us/archive/msdn-magazine/2018/august/data-points-deep-dive-into-ef-core-hasdata-seeding
2020-01-17T22:17:39
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
Windows 7: Deployment Applies To: Windows 7 This collection provides information about application compatibility, and guidance about how to deploy, upgrade, or migrate to the Windows® 7 operating system. Application Compatibility Read about the steps that you should take to prepare for application-compatibility testing and evaluation during a Windows 7 operating system-deployment project: Getting Started with Application Compatibility in a Windows Deployment Download the ACT 5.5 and its associated tools from the following Web site: Microsoft Application Compatibility Toolkit 5.5 Learn how to embark on the application compatibility process for testing new releases of Internet Explorer with the applications running in your organization: Addressing Application Compatibility When Migrating to Internet Explorer 8 Learn about how to reduce the cost of deployment for Windows 7 by accelerating the mitigation of blocking application compatibility issues, including understanding how shims work, when to consider applying shims, and how to manage the shims that you apply: Managing Shims in an Enterprise Read an overview about the new deployment features in Windows 7: Windows 7 Desktop Deployment Overview Learn answers to common questions about deploying Windows 7: Windows 7 Deployment Frequently Asked Questions Learn the high-level steps for IT professionals to perform an enterprise-scale desktop deployment project—starting with Windows XP and moving to Windows 7: Deploying Windows 7 from A to Z Learn about the new features related to VHD files for virtual machine deployment in Windows 7: Virtual Hard Disks in Windows 7 and Windows Server 2008 R2
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-7/dd349337(v=ws.10)?redirectedfrom=MSDN
2020-01-17T22:33:17
CC-MAIN-2020-05
1579250591234.15
[array(['images/dd349337.364dc276-e8a1-489a-8f01-89a5dd259406%28ws.10%29.gif', None], dtype=object) ]
docs.microsoft.com
noop Description The noop command is an internal command that is used for debugging. If you are looking for a way to add comments to your search, see Add comments to searches in the Search Manual. Syntax noop Optional arguments - search_optimization - Syntax: true | false - Description: Specifies if search optimization is enabled for the search. - Default: true Troubleshooting search optimization In some very limited situations, the optimization that is built into the search processor might not optimize a search correctly. To turn off optimization for a specific search, the last command in your search criteria should be |noop search_optimization!
https://docs.splunk.com/Documentation/Splunk/7.0.0/SearchReference/Noop
2020-01-17T22:52:15
CC-MAIN-2020-05
1579250591234.15
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
TFDataPipeline¶ TensorFlow data pipeline¶ This module covers all related code to handle the data loading, preprocessing, chunking, batching and related things in TensorFlow, i.e. the TensorFlow data pipeline from the Dataset. Some related documents: Data shuffling¶ - The sequence shuffling is implemented as part of the Dataset, although we could also use a tf.RandomShuffleQueue on sequence level for training. - Chunk shuffling can be done with another tf.RandomShuffleQueue for training. - Frame shuffling only makes sense for non-recurrent networks. It also only makes sense if we already did any windowing beforehand. It also only makes sense in training. In that case, we could do chunking just with chunk size 1 and chunk step 1, or maybe chunk size = context window size, and then the frame shuffling is just chunk shuffling, thus we do not need any separate frame shuffling logic. Generic pipeline¶ - We initialize the Dataset for some epoch. The Dataset could shuffle the sequence order based on the epoch. Then we iterate over the sequences/entries of the dataset (seq_idx), starting with seq_idx = 0, checking Dataset.is_less_than_num_seqs. We get the data for each data key (e.g. “data” or “classes”) as Numpy arrays with any shape. They don’t need to have the same number of time frames, although in the common case where we have a frame-wise alignment of class labels, they would have. In any case, we basically have dict[str,numpy.ndarray], the seq_idx and also a seq_tag. - We could implement another sequence shuffling with tf.RandomShuffleQueue in training. - We do chunking of the data of each sequence, i.e. selecting chunks of chunk size frames, iterating over the frames of a sequence with stride = chunk step. While doing chunking, we could add more context around each chunk (zero padding at the borders) if needed, e.g. if we use windowing or convolution with padding=”valid”, such that the output of the net will match that of the targets. However, this depends on where the data is used in the network; maybe it is used at multiple points? - We can do chunk shuffling with another tf.RandomShuffleQueue in training. - We build up batches from the chunks. First the simple method via feed_dict and placeholders¶ This is implemented in FeedDictDataProvider. The input data which (and optionally the targets) can be represented with tf.placeholder and feed via feed_dict from tf.Session.run which does one train/eval/forward step. In this case, any preprocessing such as chunking and batching must be done beforehand via Numpy. This was the initial implementation and is also the standard implementation for the Theano backend. This is not optimal because the tf.Session.run first has to copy the data from CPU to GPU and then can do whatever it is supposed to do (e.g. one train step). So copying and then the calculation is done in serial but it could be done in parallel with some other method which we will discuss below. Also the preprocessing could involve some more complex operations which could be slow with Python + Numpy. Also the chunk shuffling is more difficult to implement and would be slower compared to a pure TF solution. Implementation via TF queues¶ In QueueDataProvider. This is currently incomplete. Also, instead of finishing this, probably using the tf.dataset is the better approach. Some use case¶ Conv net training. For every sequence, window around every frame for context. Window must belong together, no unnecessary zero padding should be done introduced by chunking. Thus, windowing must be done before chunking, or additional zero-padding must be added before chunking. Then formulated differently: Chunking with step 1, output for a chunk is a single frame. It also means that the windowing can not be part of the network because we will get only chunks there, or the windowing makes only sense with padding=”valid”, otherwise we would get way too much zero-padding also at the border of every chunk. The same is true for convolution, pooling and others. I.e. padding in time should always be in “valid” mode. If we feed in a whole sequence, must return the whole sequence, in recog, forwarding, search or eval. With padding=”valid”, the output has less time frames, exactly context-size less frames. Conv should use padding=”valid” anyway to save computation time, and only explicitly pad zeros where needed. In recog, the input-format is (batch, time + context_size, …) which is zero-padded by context_size additional frames. So, have context_size as an additional global option for the network (could be set explicitly in config, or calculated automatically at construction). When chunking for such case, we also should have chunks with such zero-paddings so that recog matches. So, basically, a preprocessing step, before chunking, in both training and recog, is to add zero-padding of size context_size to the input, then we get the output of the expected size. Pipeline implementation¶ - One thread which goes over the Dataset. No need for different training/eval queue, no random-shuffle-queue, seq-shuffling is done by the Dataset. Here we can also handle the logic to add the context_size padding to the input. Maybe use Dataset.iterate_seqs which gets us the offsets for each chunk. We can then just add the context_size to each. After that, chunking can be done (can be done in the same thread just at the final step). - Another thread TFBatchingQueue, which collects seqs or chunks and prepares batches. It depends on whether the full network is recurrent or not. - class TFDataPipeline. PipeBase[source]¶ Abstract base class for a pipe. - class TFDataPipeline. PipeConnectorBase[source]¶ Base class for pipe connector. is_running(self)[source]¶ E.g. for pipe_in/pipe_out model: If the pipe_in has data, we increase our counter by 1, then dequeue from pipe_in, do sth and queue to pipe_out, and only then decrease the counter again. Thus, if we return False, we have ensured that the pipe_out already has the data, or there is no data anymore. If we return True, we will ensure that we will push more data to pipe_out at some point. - class TFDataPipeline. DatasetReader(extern_data, dataset, coord, feed_callback, with_seq_tag=False, with_seq_idx=False, with_epoch_end=False)[source]¶ Reads from Dataset into a queue. - class TFDataPipeline. MakePlaceholders(data_keys, extern_data, with_batch)[source]¶ Helper to create TF placeholders. - class TFDataPipeline. TFDataQueues(extern_data, capacity=100, seed=1, with_batch=False, enqueue_data=None)[source]¶ Generic queues which differ between train/eval queues. - class TFDataPipeline. TFChunkingQueueRunner(extern_data, make_dequeue_op, target_queue, chunk_size=None, chunk_step=None, context_window=None, source_has_epoch_end_signal=False)[source]¶ Implements chunking in pure TF. I.e. we get full sequences of varying lengths as input (from a queue), and we go over it with stride = chunk step, and extract a window of chunk size at each position, which we feed into the target queue. Optionally, for each chunk, we can add more frames (context window) around the chunk. - class TFDataPipeline. TFBatchingQueue(data_queues, batch_size, max_seqs, capacity=10)[source]¶ Wrapper around tf.PaddingFIFOQueue with more control. Gets in data via TFDataQueues without batch-dim, and adds the batch-dim, according to batch_size and max_seqs. Output can be accessed via self.output_as_extern_data. This will represent the final output used by the network, controlled by QueueDataProvider. - class TFDataPipeline. QueueOutput[source]¶ Queue output - class TFDataPipeline. CpuToDefaultDevStage(input_data, names, dtypes, extern_data, data_keys)[source]¶ Copy from CPU to the device (e.g. GPU) (if needed). - class TFDataPipeline. DataProviderBase(extern_data, data_keys)[source]¶ Base class which wraps up the logic in this class. See derived classes. have_more_data(self, session)[source]¶ It is supposed to return True as long as we want to continue with the current epoch in the current dataset (train or eval). This is called from the same thread which runs the main computation graph (e.g. train steps). get_feed_dict(self, single_threaded=False)[source]¶ Gets the feed dict for TF session run(). Note that this will block if there is nothing in the queue. The queue gets filled by the other thread, via self.thread_main(). - class TFDataPipeline. FeedDictDataProvider(tf_session, dataset, batches, enforce_min_len1=False, capacity=10, tf_queue=None, batch_slice=None, **kwargs)[source]¶ This class will fill all the placeholders used for training or forwarding or evaluation etc. of a TFNetwork.Network. It will run a background thread which reads the data from a dataset and puts it into a queue. get_next_batch(self, consider_batch_slice)[source]¶ This assumes that we have more data, i.e. self.batches.has_more(). have_more_data(self, session)[source]¶ If this returns True, you can definitely read another item from the queue. Threading safety: This assumes that there is no other consumer thread for the queue. get_feed_dict(self, single_threaded=False)[source]¶ Gets the feed dict for TF session run(). Note that this will block if there is nothing in the queue. The queue gets filled by the other thread, via self.thread_main(). - class TFDataPipeline. QueueDataProvider(shuffle_train_seqs=False, **kwargs)[source]¶ This class is supposed to encapsulate all the logic of this module and to be used by the TF engine. It gets the train and dev dataset instances. High-level (not differentiating between train/eval) queues: 1. sequence queue (filled by the data from Dataset) 2. chunk queue (filled by chunking, and maybe context window) 3. batch queue (constructed batches from the chunks) 4. staging area (e.g. copy to GPU) Creates the queues and connector instances (which will be the queue runners). Thus this will be created in the current TF graph, and you need to create a new instance of this class for a new TF graph. This is also only intended to be recreated when we create a new TF graph, so all other things must be created while it exists. have_more_data(self, session)[source]¶ It is supposed to return True as long as we want to continue with the current epoch in the current dataset (train or eval). We want to continue if we still can do a next self.final_stage.dequeue op with the current dataset. This is called from the same thread which runs the main computation graph (e.g. train steps), as well as from the final stage thread.
https://returnn.readthedocs.io/en/latest/api/TFDataPipeline.html
2020-01-17T22:54:33
CC-MAIN-2020-05
1579250591234.15
[]
returnn.readthedocs.io
Configure iOS You will need your Apple Push Notification Service (APNs): - Push SSL certificate (See: Get your APNs Push SSL certificate) - Certificate password (See: Export the certificate as a .p12 file) - Go to Settings » Channels » Mobile App. - Click Configure for iOS and enter the certificate password and upload the certificate .p12 file. - Click Save. Configure Amazon You will need your Amazon Device Messaging (ADM) OAuth Client Credentials: - Client ID - Client secret You can find these in the Amazon developer console. See: Get Your OAuth Credentials and API Key. - Go to Settings » Channels » Mobile App. - Click Configure for Amazon and enter your client ID and client secret. - Click Save. Configure Android You will need your Firebase Cloud Messaging (FCM): - Server key - Package name You can find the server key in the Firebase console in Project settings » Cloud Messaging. - Go to Settings » Channels » Mobile App. - Click Configure for Android and enter your server key and package name. - Click Save. Configure Windows You will need your Windows Push Notification Services (WNS): - Package security identifier (SID) - Secret key You can find these in the Partner Center; follow the instructions on the App Management - WNS/MPNS page. - Go to Settings » Channels » Mobile App. - Click Configure for Windows and enter your SID and secret key. - Click Save. Configure Apple News You will need your Apple News Publisher: - Channel ID - API key - API key secret You can find the channel ID and API key in the Settings tab in News Publisher. The API key secret is shown only when it is first issued. - Go to Settings » Channels » Mobile App. - Click Configure for Apple News and enter a channel name and your News Publisher channel ID, API key, and API secret The Apple News channel name is used for preview purposes in the dashboard. - Check the box for each of the Supported Countries you have configured in your News Publisher channel settings. - Click Save. Your News Publisher channel ID has no relation to an Airship Channel ID. We validate the country selection you click Send Now or Send When Live as the final step in creating an Apple News notification. If a country selected here is not configured in your News Publisher channel settings, you will see an error in the dashboard when attempting to send to that country. Edit or Delete a Mobile App Channel - Go to Settings » Channels » Mobile App. - Click Edit. - Make your changes and click Save, or click remove this service.
https://docs.airship.com/tutorials/manage-project/messaging/channels/apps/
2020-01-17T21:54:48
CC-MAIN-2020-05
1579250591234.15
[]
docs.airship.com
Discovery Run node A Discovery Run is a scan of one or more Discovery endpoints, specified as an IP address or addresses or ranges which are scanned as an entity. These ranges might be being scanned locally on from the appliance, or be the result of appliance consolidation. For each Discovery Run, a Discovery Run node is created which records information such as the user who started the run, the start and end time, and so on. A Discovery Run contains a number of Discovery Access nodes. In turn, Discovery Access nodes contain all other non-integration DDD nodes. Additionally a Discovery Run node can also contain Provider Access nodes. A Provider Access node is created for a particular IP address as a result of a SQL Discovery access. In turn, Provider Access nodes contain all other Integration related DDD nodes. Discovery Run lifecycle The following section describes the scenarios in which a Discovery Run is created or destroyed. DDD nodes are never updated. Creation A Discovery Run node is created when a Discovery Run starts; when a range that has been configured to be scanned starts actually scanning or, if the appliance is a Consolidation Appliance, when a Scanning Appliance's scan starts to be consolidated. The Discovery Run remains open until all of the endpoints (IP addresses) in the range have been scanned or consolidated. Removal Discovery Runs are removed when all of their contained Discovery Access have been destroyed through the Aging process. When the Discovery Run node is destroyed any linked Integration Access nodes and their associated DDD nodes are also destroyed. Discovery Run node attributes The attributes of a Discovery Run node are as described in the following table below: Starting from v10.0 the Discovery Run node has no run_type attribute. Discovery Run node relationships The relationships of a Discovery Run node are as described in the following table:
https://docs.bmc.com/docs/discovery/113/discovery-run-node-788110486.html
2020-01-17T23:21:11
CC-MAIN-2020-05
1579250591234.15
[]
docs.bmc.com
. Driver configuration Use sslOptions property in the ClientOptions to enable client TLS/SSL encryption: const client = new Client({ contactPoints, localDataCenter, sslOptions: { rejectUnauthorized: true }}); await client.connect(); You can define the same object properties as the options in the standard Node.js tls.connect() method. The main difference is that server certificate validation against the list of supplied CAs is disabled by default. You should specify rejectUnauthorized: true in your settings to enable it. Enabling client certificate authentication Much like in Node.js standard tls module, you can use cert and key properties to provide the certificate chain and private key. Additionally, you can override the trusted CA certificates using ca property: const sslOptions = { // Necessary only if the server requires client certificate authentication. key: fs.readFileSync('client-key.pem'), cert: fs.readFileSync('client-cert.pem'), // Necessary only if the server uses a self-signed certificate. ca: [ fs.readFileSync('server-cert.pem') ], rejectUnauthorized: true }; const client = new Client({ contactPoints, localDataCenter, sslOptions });
https://docs.datastax.com/en/developer/nodejs-driver/4.0/features/tls/
2020-01-17T22:12:22
CC-MAIN-2020-05
1579250591234.15
[]
docs.datastax.com
All content with label cloud+coherence+data_grid+distribution+documentation+gridfs+index+infinispan+interactive+jcache+jsr-107+locking, youtube, userguide, write_behind, 缓存, ec2, streaming, hibernate, aws, interface, clustering, setup, eviction, large_object, out_of_memory, concurrency, jboss_cache, import, events, hash_function, configuration, batch, buddy_replication, loader, colocation, remoting, mvcc, tutorial, notification, presentation, murmurhash2, xml, read_committed, jbosscache3x, meeting, cachestore, cacheloader, resteasy, hibernate_search, cluster, development, br, websocket, async, transaction, xaresource, build, hinting, searchable, demo, installation, ispn, client, migration, non-blocking, rebalance, filesystem, jpa, tx, user_guide, gui_demo, eventing, student_project, client_server, infinispan_user_guide, murmurhash, standalone, snapshot, hotrod, webdav, docs, consistent_hash, batching, store, jta, faq, 2lcache, as5, lucene, jgroups, rest, hot_rod more » ( - cloud, - coherence, - data_grid, - distribution, - documentation, - gridfs, - index, - infinispan, - interactive, - jcache, - jsr-107, - locking, - repeatable_read, - scala ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/cloud+coherence+data_grid+distribution+documentation+gridfs+index+infinispan+interactive+jcache+jsr-107+locking+repeatable_read+scala
2020-01-17T21:12:06
CC-MAIN-2020-05
1579250591234.15
[]
docs.jboss.org
All content with label async+client+hot_rod+infinispan+jboss_cache+jbosscache3x+jta+listener+mvcc+nexus+out_of_memory+release+scala+write_behind+缓存. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, replication, recovery, transactionmanager, dist, partitioning, deadlock, archetype, jbossas, lock_striping, guide, schema, cache, s3, amazon, memcached, grid, test, jcache, api, ehcache, maven, documentation, ec2, hibernate, interface, custom_interceptor, setup, clustering, eviction, concurrency, import, index, events, hash_function, configuration, batch, buddy_replication, loader, xa, write_through, cloud, remoting, tutorial, notification, read_committed, xml, distribution, meeting, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, br, development, websocket, transaction, interactive, xaresource, build, searchable, demo, installation, cache_server, command-line, non-blocking, migration, jpa, filesystem, tx, gui_demo, eventing, shell, client_server, testng, infinispan_user_guide, standalone, hotrod, webdav, snapshot, repeatable_read, docs, consistent_hash, batching, faq, 2lcache, as5, jsr-107, jgroups, lucene, locking, rest more » ( - async, - client, - hot_rod, - infinispan, - jboss_cache, - jbosscache3x, - jta, - listener, - mvcc, - nexus, - out_of_memory, - release, - scala, - write_behind, - 缓存 ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/labels/viewlabel.action?ids=4456514&ids=4456499&ids=4456485&ids=4456479&ids=4456491&ids=4456573&ids=4456487&ids=4456527&ids=4456586&ids=4456463&ids=4456533&ids=4456467&ids=4456506&ids=4456559&ids=4456590
2020-01-17T21:21:03
CC-MAIN-2020-05
1579250591234.15
[]
docs.jboss.org
dcountif() (aggregation function) Returns an estimate of the number of distinct values of Expr of rows for which Predicate evaluates to true. Read about the estimation accuracy. Syntax summarize dcountif(Expr, Predicate, [ , Accuracy] ) Arguments - Expr: Expression that will be used for aggregation calculation. - Predicate: Expression that will be used to filter rows. - Accuracy, if specified, controls the balance between speed and accuracy. 0= the least accurate and fastest calculation. 1.6% error 1= the default, which balances accuracy and calculation time; about 0.8% error. 2= accurate and slow calculation; about 0.4% error. 3= extra accurate and slow calculation; about 0.28% error. 4= super accurate and slowest calculation; about 0.2% error. Returns Returns an estimate of the number of distinct values of Expr of rows for which Predicate evaluates to true in the group. Example PageViewLog | summarize countries=dcountif(country, country startswith "United") by continent Tip: Offset error dcountif() might result in a one-off error in the edge cases where all rows pass, or none of the rows pass, the Predicate expression
https://docs.microsoft.com/en-us/azure/kusto/query/dcountif-aggfunction
2020-01-17T23:37:28
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
Code Code groups are the building blocks of code access security policy. Each policy level consists of a root code group that can have child code groups. Each child code group can have their own child code groups; this behavior extends to any number of levels, forming a tree. Each code group has a membership condition that determines if a given assembly belongs to it based on the evidence for that assembly. Only those code groups whose membership conditions match a given assembly's evidence will be applied. If a matching code group has child code groups, then those children whose membership conditions also match the supplied evidence will likewise be applied.
https://docs.microsoft.com/en-us/dotnet/api/system.security.policy.codegroup?redirectedfrom=MSDN&view=netframework-4.8
2020-01-17T21:58:17
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
The instructions on this page take you through the steps for upgrading from MB 3.1.0 to MB 3.2.0. Note that you cannot rollback the upgrade process. However, it is possible to restore a backup of the previous database and restart the upgrade progress. Preparing to upgrade The following prerequisites must be completed before upgrading: Make a backup of the databases used for MB 3.1.0. Also, copy the <MB_HOME_3.1.0>directory in order to backup the product configurations. Download WSO2 Message Broker 3.2.0 from. NOTE: The downtime is limited to the time taken for switching databases when in the production environment. Upgrading from MB 3.1.0 to MB 3.2.0 WSO2 MB 3.2.0 comes with several database changes compared to WSO2 MB 3.1.0 in terms of the data format used for storing. We are providing a simple tool that you can easily download and run to carry out this upgrade. Follow the steps given below to upgrade from WSO2 MB 3.1.0 to WSO2 MB 3.2.0. - Disconnect all the subscribers and publishers for WSO2 MB 3.2.0. - Shut down the server. - Run the migration script to update the database: - Open a terminal and navigate to the <MB_HOME>/dbscripts/mb-store/ migration-3.1.0_to_3.2.0directory. - Run the migration script relevant to your database type. For example, if you are using an Oracle, use the following script: oracle-mb.sql. - Download and run the migration tool: Download the migration tool. Unzip the org.wso2.mb.migration.tool.zipfile. The directory structure of the unzipped folder is as follows: TOOL_HOME |-- lib <folder> |-- config.properties <file> |-- tool.sh <file> |-- README.txt <file> |-- org.wso2.carbon.mb.migration.tool.jar - Download the relevant database connector and copy it to the libdirectory in the above folder structure. For example, if you are upgrading your MySQL databases, you can download the MySQL connector JAR from and copy it to the libdirectory. Open the config.properties file from the org.wso2.mb.migration.tool.zip file that you downloaded in step 4 above and update the database connection details shown below. #Configurations for the database dburl= driverclassname= dbuser= dbpassword= The parameter in the above file are as follows: - dburl: The URL for the database. For example, jdbc:mysql://localhost/wso2_mb - driverclassname:The database driver class. For example, com.mysql.jdbc.Driver for MySQL. - dbuser: The user name for connecting to the database. - dbpassword: The password for connecting to the database. - Update the datasource connection for the MB database in the master-datasources.xmlfile (stored in the <MB_HOME_320>/repository/conf/datasourcesdirectory). Run the migration tool: If you are on a Linux environment, open a command prompt and execute the following command: tool.sh. - If you are on a non-Linux environment, execute org.wso2.carbon.mb.migration.tool.jarmanually. - Start WSO2 MB 3.2.0. - Reconnect all the publishers and subscribers to MB 3.2.0. Testing the upgrade Verify that all the required scenarios are working as expected as shown below. This confirms that the upgrade is successful. - Make sure that the server starts up fine without any errors. - Verify that the Users and Roles are picked up: - Navigate to Configure -> Accounts & Credentials -> Users and Roles - Verify that the list of users and roles are shown correctly. - View the permissions of a chosen role, and make sure that the permissions are correct. - Verify that all the messages to the Queues and Topics have been successfully published in the MB 3.2.0 instance. - Run some of the samples to see that the system is working fine.
https://docs.wso2.com/pages/viewpage.action?pageId=64290819
2020-01-17T21:17:26
CC-MAIN-2020-05
1579250591234.15
[]
docs.wso2.com
Get Feeds¶ Feeds is meant to be installed on your server and run periodically in a cron job or similar job scheduler. The easiest way to install Feeds is via pip in a virtual environment. Feeds does not provide any releases yet, so one might directly install the current master branch: $ git clone $ cd feeds $ python3 -m venv venv $ source bin/activate $ pip install -e . After installation feeds is available in your virtual environment. Feeds supports Python 3.5+.
https://pyfeeds.readthedocs.io/en/latest/get.html
2020-01-17T21:49:59
CC-MAIN-2020-05
1579250591234.15
[]
pyfeeds.readthedocs.io
postgresql_slot – Add or remove replication slots from a PostgreSQL database¶ New in version 2.8. Requirements¶ The below requirements are needed on the host that executes this module. - psycopg2 Notes¶ Note - Physical replication slots were introduced to PostgreSQL with version 9.4, while logical replication slots were added beginning with version 10.0. - - PostgreSQL pg_replication_slots view reference - Complete reference of the PostgreSQL pg_replication_slots view. - PostgreSQL streaming replication protocol reference - Complete reference of the PostgreSQL streaming replication protocol documentation. - PostgreSQL logical replication protocol reference - Complete reference of the PostgreSQL logical replication protocol documentation. Examples¶ - name: Create physical_one physical slot if doesn't exist become_user: postgres postgresql_slot: slot_name: physical_one db: ansible - name: Remove physical_one slot if exists become_user: postgres postgresql_slot: slot_name: physical_one db: ansible state: absent - name: Create logical_one logical slot to the database acme if doesn't exist postgresql_slot: name: logical_slot_one slot_type: logical state: present output_plugin: custom_decoder_one db: "acme" - name: Remove logical_one slot if exists from the cluster running on another host and non-standard port postgresql_slot: name: logical_one login_host: mydatabase.example.org port: 5433 login_user: ourSuperuser login_password: thePassword state: absent Return Values¶ Common return values are documented here, the following are the fields unique to this module: Status¶ - This module is not guaranteed to have a backwards compatible interface. [preview] - This module is maintained by the Ansible Community. [community]
https://docs.ansible.com/ansible/latest/modules/postgresql_slot_module.html
2020-01-17T21:43:53
CC-MAIN-2020-05
1579250591234.15
[]
docs.ansible.com
Retrieves the names of all job resources in this AWS account, or the resources with the specified tag. This operation allows you to see which resources are available in your account, and their names. This operation takes the optional Tags field, which you can use as a filter on the response so that tagged resources can be retrieved as a group. If you choose to use tags filtering, only resources with the tag are retrieved. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. list-jobs [--next-token <value>] [--max-results <value>] [--tags <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>] --next-token (string) A continuation token, if this is a continuation request. --max-results (integer) The maximum size of a list to return. --tags (map) Specifies to return only these tagged resources..
https://docs.aws.amazon.com/cli/latest/reference/glue/list-jobs.html
2020-01-17T21:14:20
CC-MAIN-2020-05
1579250591234.15
[]
docs.aws.amazon.com
Everytime you build your app through Monaca IDE, Monaca saves your build history where you can see your build detail and error logs. You can view your build history as following: From the Monaca Cloud IDE menu, go to Build → Build History . Your built history will be displayed as following. Click on Delete button to remove particular item from your history or Download button to download the built app. Once an item is removed, it cannot be undone.
https://docs.monaca.io/en/products_guide/monaca_ide/build/build_history/
2020-01-17T22:24:31
CC-MAIN-2020-05
1579250591234.15
[]
docs.monaca.io
Troubleshoot SAML SSO Here are some common issues and how to resolve them. Error message: SAML fails to verify assertions You see the following error message: Failed to verify the assertion - The 'Audience' field in the saml response from the IdP does not match the configuration Mitigation 1. Verify the message and get more information by running the following search: index=_internal host=sh* sourcetype=splunkd SAML You should see the following: 09-18-2017 14:58:06.939 +0000 ERROR Saml - Failed to verify the assertion - The 'Audience' field in the saml response from the IdP does not match the configuration, Error details=Expected=https://<instance_name>.com, found=https://<wrong_instance_name>.com/ 2. Modify authentication.conf with the entityId found in the error message in step 1. [saml] entityId= https://<instance_name>.com/ (found from ERROR message) 3. Reload authentication.conf from Splunk Web at Settings > Access Controls > Authentication Method > Reload Authentication configuration Error message: Leaf certificate does not match You receive the following message: No leaf certificate matched one from the assertion This error occurs when the signature certificate on Splunk does not match the certificate that the IdP uses to sign SAML messages. Mitigation If your signature verification certificate is a self-signed certificate: Make sure that the certificate specified in the idpCertPath attribute in authentication.conf is the same as the certificate the IdP uses to sign SAML messages. You can use OpenSSL to determine the details of the certificate that Splunk uses for signature verification. For example, the following command: openssl x509 -in etc/auth/idpCerts/idpCert.pem -text -noout | grep 'Serial\|Issuer:\|Subject:' Should produce information similar to this: Make sure that the signing certificates match and are consistently named. For example, a simple chain would have three files in the following order: - the root CA, for example: " cert_1.pem" - the intermediate certificate, for example: " cert_2.pem" - the leaf certificate or the signing certificate, for example: " cert_3.pem" In this example, make sure that the " cert_3.pem" (the leaf) is the same certificate that the IdP uses to sign responses. If you have multiple chains, or chains with more than one intermediate CA In most cases, the certificate chain consist of a single root certificate, a single intermediate certificate, and a single signing certificate. However, you may Issue: You experience the following message ERROR AuthenticationManagerSAML - Requesting user info from ID returned an error. Error in Attribute query request, AttributeQueryTransaction err=Cannot resolve hostname, AttributeQueryTransaction descr=Error resolving: Name or service not known, AttributeQueryTransaction statusCode=502 Mitigation - Make sure that the cipherSuiteis specified correctly in the SAML stanza. For example: cipherSuite = TLSv1+MEDIUM:@STRENGTH cipherSuite = ALL:!aNULL:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM - Make sure all SOAP password requirements are met. - Make sure your SSL settings for SAML are configured correctly in authentication.conf. Issue: You experience the following message: ERROR AuthenticationManagerS You experience the following message: ERROR UserManagerPro - user="samluser1" had no roles Mitigation Make sure that rolemap_SAML contains the correct role mapping with ";" at the end of each role name. User cannot login User cannot log in after successful assertion validation. No valid Splunk role is found in the local mapping or in the assertion. Mitigation - Make sure that rolemap_SAMLstanza contains proper mapping between roles returned from IdP and the appropriate Splunk role. - Make sure there are no spaces between, before, or after each role defined in authentication.conf. For example: user = User;Employee User cannot access SAML login page Authentication is configured as SAML and the settings appear to be correct, but the login screen shows the page for Splunk authentication instead. Mitigation - Make sure that in web.conf, appServerPortsis set to a valid port and not '0'. - Make sure web.confdoes case-sensitive IdPs that expect Splunk software You can add the SAML users as native Splunk users. API and CLI commands cannot be performed by users that are defined only in SAML. This is because the user password is never sent in the SAML assertion. Regarding the last issue described on this page "Authentication is configured as SAML and the settings appear to be correct, but the login screen shows the page for Splunk authentication instead", this problem also occurs when web.conf contains a value for 'trustedIP'. We had to remove that entry from our Splunk 6.5.5 installation to get the SP-Initiated SAML to work properly. Hi Gertkt, Thanks for the feedback! I've updated the documentation.
https://docs.splunk.com/Documentation/Splunk/7.2.9/Security/TroubleshootSAMLSSO
2020-01-17T22:12:54
CC-MAIN-2020-05
1579250591234.15
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
You are viewing documentation for version 3 of the AWS SDK for Ruby. Version 2 documentation can be found here. Class: Aws::Pinpoint::Types::GetSegmentRequest - Defined in: - gems/aws-sdk-pinpoint/lib/aws-sdk-pinpoint/types.rb Overview Note: When making an API call, you may pass GetSegmentRequest data as a hash: { application_id: "__string", # required segment_id: "__string", # required }
http://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/Pinpoint/Types/GetSegmentRequest.html
2017-12-11T08:04:42
CC-MAIN-2017-51
1512948512584.10
[]
docs.aws.amazon.com
Lists the controls on the action band. property ActionControls [const Index: Integer]: TCustomActionControl; __property TCustomActionControl ActionControls[int const Index]; Use ActionControls to access any of the controls available on the action band. Each control in the list is the user interface element generated to represent one of the TActionClientItem objects that the action manager stores for children of this action band. The TActionClientItem objects are children of the component that is the value of this action band's ActionClient property. Index is the index of the child control, where 0 specifies the first control, 1 specifies the second control, and so on.
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/ActnMan_TCustomActionBar_ActionControls.html
2017-12-11T07:33:30
CC-MAIN-2017-51
1512948512584.10
[]
docs.embarcadero.com
The local registry acts as a memory registry where you can store text strings, XML strings, and URLs. If there is no need to use this data, you can delete a local registry entry from the system. Follow the instructions below to delete local registry entries. 1. Sign In. Enter your user name and password to log on to the ESB Management Console. 2. Click the "Main" button to access the "Manage" menu. 3. Click on "Local Entries," in the left "Service Bus" menu, to access the "Manage Local Registry Entries" page. 4. The "Manage Local Registry Entries" page appears. 5. Select the "Available Localentries" tab. 6. Click on the "Delete" link of a particular local entry in the "Actions" column. 7. Confirm your request in the "WSO2 Carbon" window. Click "Yes."
https://docs.wso2.com/display/ESB481/Deleting+a+Local+Entry
2017-12-11T07:18:00
CC-MAIN-2017-51
1512948512584.10
[]
docs.wso2.com
Specifies the connection protection options supported by the System.Web.Security.ActiveDirectoryMembershipProvider class. The System.Web.Security.ActiveDirectoryConnectionProtection enumeration is used in an application's configuration file to set the protocol used to secure communications between an System.Web.Security.ActiveDirectoryMembershipProvider object and an Active Directory or Active Directory Application Mode server. The enumeration indicates the type of connection security the provider established based on the connectionProtection attribute of the membership configuration element. The connectionProtection attribute can be set only to either "None" or "Secure".
http://docs.go-mono.com/monodoc.ashx?link=T%3ASystem.Web.Security.ActiveDirectoryConnectionProtection
2017-12-11T07:36:50
CC-MAIN-2017-51
1512948512584.10
[]
docs.go-mono.com
- Cache loader is Infinispan's connection to a (persistent) data store. Cache loader fetches data from a store when that data is not in the cache, and when modifications are made to data in the cache the CacheLoader is called to store those modifications back to the store. Cache loaders are associated with individual caches, i.e. different caches from the same cache manager might have different cache store configurations. Configuration Cache loaders can be configured in a chain. Cache read operations. - passivation (false by default) has a significant impact on how Infinispan interacts with the loaders, and is discussed in the next paragraph. - shared (false by default). - preload (false by default) if true, when the cache starts, data stored in the cache loader will be pre-loaded into memory. This is particularly useful when data in the cache loader is needed immediately after startup and you want to avoid cache operations being delayed as a result of loading this data lazily. Can be used to provide a 'warm-cache' on startup, however there is a performance penalty as startup time is affected by this process. Note that preloading is done in a local fashion, so any data loaded is only stored locally in the node. No replication or distribution of the preloaded data happens. Also, Infinispan only preloads up to the maximum configured number of entries in eviction. - class attribute (mandatory) defines the class of the cache loader implementation. - fetchPersistentState (false by default) determines whether or not to fetch the persistent state of a cache when joining a cluster. The aim here is to take the persistent state of a cache and apply it to the local cache store of the joining node. Hence, if cache store is configured to be shared, since caches access the same cache store, fetch persistent state is ignored. Only one configured cache loader may set this property to true; if more than one cache loader does so, a configuration exception will be thrown when starting your cache service. - purgeSynchronously will control whether the expiration takes place in the eviction thread, i.e. if purgeSynchronously (false by default) is set to true, the eviction thread will block until the purging is finished, otherwise would return immediately. If the cache loader supports multi-threaded purge then purgeThreads (1 by default) are used for purging expired entries. There are cache loaders that support multi-threaded purge (e.g. FileCacheStore) and caches that don't (e.g. JDBCCacheStore); check the actual cache loader configuration in order to see that. - ignoreModifications(false by default) determines whether write methods are pushed down to the specific cache loader. Situations may arise where transient application data should only reside in a file based cache loader on the same server as the in-memory cache, for example, with a further JDBC based cache loader used by all servers in the network. This feature allows you to write to the 'local' file cache loader but not the shared JDBCCacheLoader. - purgeOnStatup empties the specified cache loader (if ignoreModifications is false) when the cache loader starts up. - - singletonStore (default for enabled is false) element enables modifications to be stored by only one node in the cluster, the coordinator. Essentially, whenever any data comes in to some node it is always replicated(or distributed) so as to keep the caches in-memory states in sync; the coordinator, though, has the sole responsibility of pushing that state to disk. This functionality can be activated setting the enabled attribute to true in all nodes, but again only the coordinator of the cluster will the modifications in the underlying cache loader as defined in loader element. You cannot define a shared and with singletonStore enabled at the same time. - pushStateWhenCoordinator (true by default) If true, when a node becomes the coordinator, it will transfer in-memory state to the underlying cache loader. This can be very useful in situations where the coordinator crashes and the new coordinator is elected. - async element has to do with cache store persisting data (a)synchronously to the actual store. It is discussed in detail here. Cache Passivation A cache loader and its children have been loaded, they're removed from the cache loader and a notification is emitted to the cache listeners that the entry has been activated. In order to enable passivation just set passivation to true (false by default). When passivation is used, only the first cache loader configured is used and all others are ignored.: - Insert keyOne - Insert keyTwo - Eviction thread runs, evicts keyOne - Read keyOne - Eviction thread runs, evicts keyTwo - Remove keyTwo When passivation is disabled : - Memory: keyOne Disk: keyOne - Memory: keyOne, keyTwo Disk: keyOne, keyTwo - Memory: keyTwo Disk: keyOne, keyTwo - Memory: keyOne, keyTwo Disk: keyOne, keyTwo - Memory: keyOne Disk: keyOne, keyTwo - Memory: keyOne Disk: keyOne When passivation is enabled : - Memory: keyOne Disk: - Memory: keyOne, keyTwo Disk: - Memory: keyTwo Disk: keyOne - Memory: keyOne, keyTwo Disk: - Memory: keyOne Disk: keyTwo - Memory: keyOne Disk: File system based cache loaders Infinispan ships with several cache loaders that utilize the file system as a data store. They all require a location attribute, which maps to a directory to be used as a persistent store. (e.g., location="/tmp/myDataStore" ). - FileCacheStore: a simple filesystem-based implementation. Usage on shared filesystems like NFS, Windows shares, etc. should be avoided as these do not implement proper file locking and can cause data corruption. File systems are inherently not transactional, so when attempting to use your cache in a transactional context, failures when writing to the file (which happens during the commit phase) cannot be recovered. requires a commercial license if distributed with an application (see for details). For detailed description of all the parameters supported by the stores, please consult the javadoc. JDBC based cache loaders Based on the type of keys to be persisted, there are three JDBC cache loaders: - JdbcBinaryCacheStore - can store any type of keys. It stores all the keys that have the same hash value (hashCode method on key) in the same table row/blob, having as primary key the hash value. While this offers great flexibility (can store any key type), it impacts concurrency/throughput. E.g. If storing k1,k2 and k3 keys, and keys had same hash code, then they'd persisted in the same table row. Now, if 3 threads try to concurrently update k1, k2 and k3 respectively, they would need to do it sequentially since these threads would be updating the same row. - JdbcStringBasedCacheStore - stores each key in its own row, increasing throughput under concurrent load. In order to store each key in its own column, it relies on a (pluggable) bijection that maps the each key to a String object. The bijection is defined by the Key2StringMapper interface. Infinispans ships a default implementation (smartly named DefaultKey2StringMapper) that knows how to handle primitive types. - JdbcMixedCacheStore - it is a hybrid implementation that, based on the key type, delegates to either JdbcBinaryCacheStore or JdbcStringBasedCacheStore. Which JDBC cache loader should I use? It is generally preferable to use JdbcStringBasedCacheStore when you are in control of the key types, as it offers better throughput under heavy load. One scenario in which it is not possible to use it though, is when you can't write an Key2StringMapper to map the keys to to string objects (e.g. when you don't have control over the types of the keys, for whatever reason). Then you should use either JdbcBinaryCacheStore or JdbcMixedCacheStore. The later is preferred to the former when the majority of the keys are handled by JdbcStringBasedCacheStore, but you still have some keys you cannot convert through Key2StringMapper. Connection management (pooling) In order to obtain a connection to the database all the JDBC cache loaders rely on an ConnectionFactory implementation. The connection factory implementations: - PooledConnectionFactory is a factory based on C3P0. Refer to javadoc for details on configuring it. - ManagedConnectionFactory is a connection factory that can be used within managed environments, such as application servers. It knows how to look into the JNDI tree at a certain location (configurable) and delegate connection management to the DataSource. Refer to javadoc javadoc for details on how this can be configured. - SimpleConnectionFactory. Sample configurations Bellow is an sample configuration for the JdbcBinaryCacheStore. For detailed description of all the parameters used refer to the javadoc. Please note the use of multiple XML schemas, since each cachestore has its own schema. Bellow is an sample configuration for the JdbcStringBasedCacheStore. For detailed description of all the parameters used refer to the javadoc. Bellow is an sample configuration for the JdbcMixedCacheStore. For detailed description of all the parameters used refer to the javadoc. Finally, below is an example of a JDBC cache store with a managed connection factory, which is chosen implicitly by specifying a datasource JNDI location: Cloud cache loader The CloudCacheStore implementation utilizes JClouds to communicate with cloud storage providers such as Amazon's S3, Rackspace's Cloudfiles or any other such provider supported by JClouds.. For a list of configuration refer to the javadoc. Remote cache loader The RemoteCacheStore is a cache loader implementation that stores data in a remote infinispan cluster. In order to communicate with the remote cluster, the RemoteCacheStore uses the HotRod client/server architecture. HotRod bering the load balancing and fault tolerance of calls and the possibility to fine-tune the connection between the RemoteCacheStore and the actual cluster. Please refer to HotRod for more information on the protocol, client and server configuration. For a list of RemoteCacheStore configuration refer to the javadoc. Example: In this sample configuration, the remote cache store is configured to use the remote cache named "mycache" on servers "one" and "two". It also configures connection pooling and provides a custom transport executor. Additionally the cache store is asynchronous. Cassandra cache loader The CassandraCacheStore was introduced in Infinispan 4.2. Read the specific page for details on implementation and configuration. Cluster cache loader The ClusterCacheLoader is a cache loader implementation that retrieves data from other cluster members..
https://docs.jboss.org/author/display/ISPN/Cache+Loaders+and+Stores
2017-12-11T07:52:19
CC-MAIN-2017-51
1512948512584.10
[]
docs.jboss.org
Breaking: #72405 - Removed traditional BE modules handling¶ See Issue #72405 Description¶ The traditional way of registering backend modules done via custom mod1/index.php and mod1/conf.php has been removed. Impact¶ Calling ExtensionManagementUtility::addModulePath() will result in a fatal error. Additionally, all modules that are registered via ExtensionManagementUtility::addModule() and setting a path will not be registered properly anymore. $TBE_MODULES['_PATHS'] is always empty now. Additionally, the options script and navFrameScript and navFrameScriptParam will have no effect anymore when registering a module. Affected Installations¶ Any installation using an extension that registers a module via the traditional way using standalone scripts.
https://docs.typo3.org/typo3cms/extensions/core/8.7/Changelog/8.0/Breaking-72405-RemovedTraditionalBEModulesHandling.html
2017-12-11T07:35:12
CC-MAIN-2017-51
1512948512584.10
[]
docs.typo3.org
Breaking: #73763 - Removed backPath from PageRenderer¶ See Issue #73763 Description¶ The PageRenderer class responsible for rendering Frontend output and Backend modules has no option to resolve the so-called backPath anymore. The second parameter has been dropped from the constructor method. Additionally the public property backPath as well as the method PageRenderer->setBackPath() have been removed. Impact¶ Calling the constructor of PageRenderer with a second parameter, or setting PageRenderer->backPath has no effect anymore. Calling PageRenderer->setBackPath() directly will result in a PHP error.
https://docs.typo3.org/typo3cms/extensions/core/8.7/Changelog/8.0/Breaking-73763-RemovedBackPathFromPageRenderer.html
2017-12-11T07:35:24
CC-MAIN-2017-51
1512948512584.10
[]
docs.typo3.org
Product Index The set of textures for the Changing Fantasy Suit for Michael contains 1 unique set of minutely detailed textures, perfect for the creation of a great variety of fantasy heroes. This pak includes special material poses, which make the application of each texture set and transparency map to the models very quick and easy. The textures have been painstakingly created at the high resolutions (2048 by 1536 pixels) necessary for close up renderings. 2 separate paks available; collect them.
http://docs.daz3d.com/doku.php/public/read_me/index/896/start
2017-12-11T07:46:52
CC-MAIN-2017-51
1512948512584.10
[]
docs.daz3d.com
Bro Package Manager¶ The Bro Package Manager makes it easy for Bro users to install and manage third party scripts as well as plugins for Bro and BroControl. The command-line tool is preconfigured to download packages from the Bro package source , a GitHub repository that has been set up such that any developer can request their Bro package be included. See the README file of that repository for information regarding the package submission process. See the package manager documentation for further usage information, how-to guides, and walkthroughs. For offline reading, it’s also available in the docs/ directory of the source code distribution.
http://bro-package-manager.readthedocs.io/en/stable/
2017-12-11T07:14:28
CC-MAIN-2017-51
1512948512584.10
[]
bro-package-manager.readthedocs.io
Using Aspose.Cells for Java with ColdFusion This article gives the basic information and code segment that ColdFusion developers need to use Aspose.Cells for Java in there ColdFusion application. This article shows how to create a simple web page using ColdFusion and use Aspose.Cells for Java to generate a simple Excel File. Aspose.Cells: The Real Product With Aspose.Cells developers can export data, format spreadsheets in every detail and at every level, import images, import charts, create charts, manipulate charts, stream Microsoft Excel data, save in various formats including XLS, CSV, SpreadsheetML, TabDelimited, TXT, XML (Aspose.Pdf integrated) and many more. To find out more about the product information, features and for a programmer’s guide, refer to the Aspose.Cells documentation and online featured demos. You can download and evaluate it for free. Prerequisites To use Aspose.Cells for Java in ColdFusion applications, copy the Aspose.Cells.jar file to the {InstallationFolder\}\wwwroot\WEB-INF\lib folder. Do not forget to restart the ColdFusion application server after putting new JARs in the lib folder. Using Aspose.Cells for Java & ColdFusion to Create an Excel file Below, we create a simple application that generates an empty Microsoft Excel file, inserts some content and saves it as an XLS file. Following is the actual code (ColdFusion & Aspose.Cells for Java). After executing the code, an Excel file, output.xls, is generated. Generated output.xls Java <html> <head><title>Hello World!</title></head> <body> <b>This example shows how to create a simple MS Excel Workbook using Aspose.Cells</b> <cfset workbook=CreateObject("java", "com.aspose.cells.Workbook").init()> <cfset worksheets = workbook.getWorksheets()> <cfset sheet= worksheets.get("Sheet1")> <cfset cells= sheet.getCells()> <cfset cell= cells.getCell(0,0)> <cfset cell.setValue("Hello World!")> <cfset workbook.save("C:\output.xls")> </body> </html> Summary This article explains how to use Aspose.Cells for Java with ColdFusion. Aspose.Cells offers great flexibility and provides outstanding speed, efficiency and reliability. Aspose.Cells has benefited from years of research, design and careful tuning. We welcome queries, comments and suggestions in the Aspose.Cells Forum.
https://docs.aspose.com/cells/java/using-aspose-cells-for-java-with-coldfusion/
2022-06-25T10:30:07
CC-MAIN-2022-27
1656103034930.3
[array(['using-aspose-cells-for-java-with-coldfusion_1.png', 'todo:image_alt_text'], dtype=object) ]
docs.aspose.com
3. Creating the GBank Solution Creating An ‘Auto Loan’ Case - Navigate to Case Setup > Case Types and click on the New Case Type button Enter Auto Loan as the Case Type Name. The Code field will be auto-populated using the Name field subtracting dash (-) and any spaces or special characters, it also changes to uppercase. You can override Code Value (ensuring no spaces or special characters are used). Code Value cannot be changed after saving - Type ‘car’ in the Icon field and select the automobile car icon as a pictorial representation of our Case Type. - Add some additional styling to the icon that represents the Case Type by choosing the alternative Color code B51A2E. You can use the palette to change the color. - Create the Resolution Codes for the Case Type. Case Resolution Codes appear as a list of drop-down values that the caseworker can select when closing a Case to indicate what the outcome was. - Add the Case Resolution Codes identified in Reviewing the GBank Auto Loan Case Scenario chapter by clicking on the plus icon. - Name: Approved ✅ | Icon: thumbs-up 👍 | Color Theme: Green - Name: Not Approved❌ | Icon: thumbs-down 👎 | Color Theme: Red - Type Auto Loan Application Process for GBank as the Description of the Case Type. This is particularly useful for Case Workers when several Case Type options are available when first creating a Case, so your description helps explain when to use this type. - Select Major as the Default Priority. When a case of this type is created, it automatically will be assigned this priority. Note the caseworker can override this at the execution time for case creation. - Leave Procedure blank by default. When you leave the Procedures fields blank, the system will automatically create it and link it accordingly. - Ensure the Data Model checkbox is selected to use the Master Data Management feature in further steps. - Leave Custom Processing and Document Management Defaults sections blank by default. DO NOT change anything here for now. - Check your screen with the following image and select Save when done. If there are no errors in the process of creating the Case Type, you will see the following summary screen after a short amount of time. - Click on the Milestone Diagram link (Auto Loan) to validate that in the process of creation of a Case Type the default Milestone diagram is also created. You can modify this milestone definition later. - Close this window - Click on the Procedure link (Auto Loan) to see that in the process of creation of a Case Type the Procedure is created. The default procedure will be empty. Close the window. - Finally click on the Data Model link (Auto Loan) to see that in the process of creating a Case Type, the Data Model is also created with two elements: Case and the Business Object Auto Loan. We will add more elements to the Data Model in the next steps. Next Steps 4. Editing the Master Data Model (MDM)
https://docs.eccentex.com/doc1/3-creating-the-gbank-solution
2022-06-25T11:33:12
CC-MAIN-2022-27
1656103034930.3
[]
docs.eccentex.com
Before you install OKD, decide what kind of installation process to follow and make sure you that you have all of the required resources to prepare the cluster for users. Before you install an OKD cluster, you need to select the best installation instructions to follow. Think about your answers to the following questions to select the best option. If you want to install and manage OKD yourself, you can install it on the following platforms: Amazon Web Services (AWS) Microsoft Azure Microsoft Azure Stack Hub Google Cloud Platform (GCP) OpenStack oVirt IBM Z and LinuxONE IBM Z and LinuxONE for Fedora KVM IBM Power VMware vSphere VMware Cloud (VMC) on AWS Bare metal or other platform agnostic infrastructure You can deploy an OKD 4 cluster to both on-premise hardware and to cloud hosting services, but all of the machines in a cluster must be in the same datacenter or cloud hosting service. If you want to use OKD OKD 3 and want to try OKD 4, you need to understand how different OKD 4 is. OKD 4 weaves the Operators that package, deploy, and manage Kubernetes applications and the operating system that the platform runs on, Fedora CoreOS (FCOS), together seamlessly. Instead of deploying machines and configuring their operating systems so that you can install OKD on them, the FCOS operating system is an integral part of the OKD cluster. Deploying the operating system for the cluster machines as part of the installation process for OKD. See Comparing OpenShift Container Platform 3 and OpenShift Container Platform 4. Because you need to provision machines as part of the OKD cluster installation process, you cannot upgrade an OKD 3 cluster to OKD 4. Instead, you must create a new OKD 4 cluster and migrate your OKD 3 workloads to them. For more information about migrating, see OpenShift Migration Best Practices. Because you must migrate to OKD 4, you can use any type of production cluster installation process to create your new cluster. Because the operating system is integral to OKD, it is easier to let the installation program for OKD OKD OKD clusters on them. You can use the installer-provisioned infrastructure method to create appropriate machine instances on your hardware for OpenStack, OpenStack with Kuryr, OpenStack on SR-IOV, oVirt, OpenStack, OpenStack on SR-IOV, oVirt, IBM Z or LinuxONE, IBM Z or LinuxONE with Fedora KVM, IBM Power, or vSphere, use the specific installation instructions to deploy your cluster. If you use other supported hardware, follow the bare metal installation procedure. For some of these platforms, such as OpenStack, Fedora KVM, IBM Power, vSphere, VMC on AWS, or bare metal. You can also install a cluster into a restricted network using installer-provisioned infrastructure by following detailed instructions for AWS, GCP, VMC on AWS, OpenStack, oVirt, and vSphere. If you need to deploy your cluster to an AWS GovCloud region, AWS China region, or Azure government region, you can configure those custom regions during an installer-provisioned infrastructure.
https://docs.okd.io/4.9/installing/installing-preparing.html
2022-06-25T11:24:47
CC-MAIN-2022-27
1656103034930.3
[]
docs.okd.io
📄️ Installing packages With Replit, you can use most packages available in Python and JavaScript. Replit will install many packages on the fly just by importing them in code. You can read more about how we do this using a universal package manager. 📄️ Working with shortcuts Be more productive with Replit by learning the code editor’s powerful shortcuts for editing, writing, and inspecting code. 📄️ Replit libraries While you can use nearly any package or library on Replit, we have also built several of our own. You can read more about these here. 📄️ Configuring a Repl Every new repl comes with a .replit and a replit.nix file that let you configure your repl to do just about anything in any language! 📄️ Using given the same instructions (and inputs, for example, the same Nix package set), it will give you the exact same result, regardless of when or where you run it. 📄️ Using Git with Replit There are a few ways to use Git and Replit together, using either the GUI controls built into the Replit IDE or the Replit shell. 📄️ Git commands reference guide You can keep track of changes to your repls using Git. Here are some basic and advanced Git commands you might find useful. Note that these are not intended as a replacement for the Git reference docs, but rather a simpler version of the most commonly used commands. 📄️ Running GitHub repositories on Replit GitHub repositories can be run automatically on Replit. Head to to import a repository. Any public repository under 500 MB can be cloned, and subscribing to our hacker plan unlocks private repos after authenticating with GitHub. 📄️ GitHub Authentication Errors in Replit While interacting with our Git plugin and GitHub integration, you may run into error messages that look like this. 📄️ Getting repl metadata In some cases, it's useful to automatically retrieve metadata about a repl from within that repl. 📄️ Debugging Repls that are written in the following languages can use a built-in, multiplayer debugger: 📄️ Secrets and environment variables Sensitive information such as credentials and API keys should be separate from your codebase so that you can share your code with others while ensuring that they cannot access your services, such as your user database. 📄️ Using repl history To help ensure you never lose work, Replit auto-saves your code as you write. If you ever lose an edit to your code that you'd like to recover, repl history is there to help. 📄️ Switch themes To change the theme of the Replit Workspace, navigate to the main menu by pressing the hamburger menu in the top left corner of your Workspace. 📄️ Add a Made with Replit badge to your Webview Now you can add a "Made with Replit" badge to your public Repl's webview.
https://docs.replit.com/category/programming--ide
2022-06-25T11:29:38
CC-MAIN-2022-27
1656103034930.3
[]
docs.replit.com
Pivot Grid Module - 5 minutes to read The Pivot Grid module is a comprehensive data analysis, data mining, and visual reporting solution for XAF applications. The module contains List Editors that adapt DevExpress WinForms Pivot Grid and ASP.NET Pivot Grid controls for XAF. The feature enables you to summarize large amounts of data in a multi-dimensional pivot table where you can sort, group, and filter the data. The Pivot Chart Module can also visualize the data in 2D or 3D graphics charts. End-users can customize the table's layout according to their analysis requirements with simple drag-and-drop operations. For the Pivot Grid module's demonstration, access the List Editors | PivotGrid section in the Feature Center application supplied with XAF. The Feature Center demo is installed in %PUBLIC%\Documents\DevExpress Demos 18.2\Components\eXpressApp Framework\FeatureCenter by default. The ASP.NET version of this demo is available online at. IMPORTANT Mobile applications do not support the Pivot Grid module. Getting Started You can add the Pivot Grid module to an existing project by following the How to: Display a List View as a Pivot Grid Table and Chart tutorial.). DevExpress Controls Used by the Pivot Grid Module The Pivot Grid module uses the following DevExpress controls: You can access these controls and change their behavior in code. For more details, study the How to: Access the List Editor's Control topic. Pivot Grid Module Components The Pivot Grid module consists of the following platform-agnostic and platform-specific components: You can add this module to your XAF application's platform-agnostic module project in the Module Designer. The module adds references to the DevExpress.ExpressApp.PivotGrid.v18.2.dll assembly. PivotGridWindowsFormsModule You can add this module to your WinForms module project in the Module Designer or the WinForms application project in the Application Designer. The module references the DevExpress.ExpressApp.PivotGrid.v18.2.dll and DevExpress.ExpressApp.PivotGrid.Win.v18.2.dll assemblies. You can add this module to your ASP.NET module project in the Module Designer or application project in the Application Designer. The module references the DevExpress.ExpressApp.PivotGrid.v18.2.dll and DevExpress.ExpressApp.PivotGrid.Web.v18.2.dll assemblies. For the Pivot Grid Module's best performance, add it to platform-specific projects only and customize the module there. Do not use the PivotGridModule component intended for the base module project. Pivot Grid Module Settings The entities below allow you to adjust the Pivot Grid module's settings in the Application Model: These settings are available in the Application Model's Views | <ListView> | PivotSettings node. The IPivotSettings.CustomizationEnabled property is set to true by default and allows end-users to modify their pivot table's settings. IMPORTANT Set the IPivotSettings.CustomizationEnabled property value depending on your application's prevailing usage scenario. - Choose false if you want to present data to end users in a specific layout. The false setting gives you full control over the pivot table's behavior and prevents users from changing it. - Choose true if end users need to frequently modify the pivot table's data layout. Note that once the user changes the pivot table's settings, they are recorded to the Application Model's top-layer .xafml file and you cannot overwrite them. In some cases, an end user's settings may merge with the modifications the developer specified in lower-level *.xafml files, and cause unpredictable behavior. The end user can get your default settings by right-clicking the pivot table and selecting **Reset View Settings* in the context menu. The IPivotSettings.Settings property value is a complex XML-formatted string. To manage these settings, click the Settings’ ellipsis button ( ). The button invokes the PivotGrid designer that allows you to modify the pivot table's layout and its other preferences. Pivot Grid Module List Editors The Pivot Grid module ships with the following List Editors: You can set the IModelListView.EditorType property value to one of these editors in the Application Model as the screenshots below illustrate. The EditorType property value is set to "DevExpress.ExpressApp.PivotGrid.Win.PivotGridListEditor" in the WinForms module project. The EditorType property value is set to "DevExpress.ExpressApp.PivotGrid.Web.ASPxPivotGridListEditor" in the ASP.NET module project. NOTE Alternatively, you can invoke the Model Editor from the application projects and change the EditorType property value there.
https://docs.devexpress.com/eXpressAppFramework/113303/concepts/extra-modules/pivot-grid-module?v=18.2
2021-02-25T08:19:29
CC-MAIN-2021-10
1614178350846.9
[array(['/eXpressAppFramework/images/pivotgridlisteditor116777.png?v=18.2', 'PivotGridListEditor'], dtype=object) array(['/eXpressAppFramework/images/pivotgridmodulesintoolbox131707.png?v=18.2', 'PivotGridModulesInToolbox'], dtype=object) array(['/eXpressAppFramework/images/pivotgridsettingsappmodel131932.png?v=18.2', 'PivotGridSettingsAppModel'], dtype=object) array(['/eXpressAppFramework/images/ellipsis131933.png?v=18.2', 'Ellipsis'], dtype=object) array(['/eXpressAppFramework/images/pivotgrid013-designer131901.png?v=18.2', 'PivotGrid013-Designer'], dtype=object) array(['/eXpressAppFramework/images/pivotgrid008-changeeditorwin131893.png?v=18.2', 'PivotGrid008-ChangeEditorWin'], dtype=object) array(['/eXpressAppFramework/images/pivotgrid-009changewebeditor131894.png?v=18.2', 'PivotGrid-009ChangeWebEditor'], dtype=object) ]
docs.devexpress.com
Add Image Watermarks Introduction This REST API allows adding image watermarks to the document. With this API you can add image watermarks with the following features: - Image watermark supports various image formats: PNG, GIF, TIFF, JPG. - You may upload the desired image to the Storage and then pass the path as a parameter of Watermark operation; - There are many watermark positioning and transforming properties; - There are format-specific options. These options allow to leverage specific format features and often allow to make watermarks stronger; - For protected documents, it is required to provide the password. The following example demonstrates how to add watermark to the document. Here you can see how to create a watermark from the image file, protect the document and it’s inner images. cURL Example Request * join several documents into one curl -v "" \ -X POST \ -H "Content-Type: application/json" \ -H "Accept: application/json" \ -H "Authorization: Bearer <jwt token>" \ -d "{ "FileInfo": { "FilePath": "documents\\sample.pdf", "StorageName": "" }, "WatermarkDetails": [ { "ImageWatermarkOptions": { "Image": { "FilePath": "watermark_images\\sample_watermark.png", "StorageName": "" } } } ], "ProtectLevel": 2 }" Response { { "downloadUrl": "", "path": "watermark/added_watermark/documents/sample_pdf/sample.pdf" }. If you use SDK, it adds the Watermark API calls and lets you use GroupDocs Cloud features in a native way for your preferred language. SDK Examples C# Java Java ~{~{/box}}
https://docs.groupdocs.cloud/watermark/add-image-watermarks/
2021-02-25T07:52:46
CC-MAIN-2021-10
1614178350846.9
[]
docs.groupdocs.cloud
[−][src]Crate cursive_markup cursive-markup provides the MarkupView for cursive that can render HTML or other markup. Quickstart To render an HTML document, create a MarkupView with the html method, configure the maximum line width using the set_maximum_width method and set callbacks for the links using the on_link_select and on_link_focus methods. Typically, you’ll want to wrap the view in a ScrollView and add it to a Cursive instance. // Create the markup view let html = "<a href=''>Rust</a>"; let mut view = cursive_markup::MarkupView::html(&html); view.set_maximum_width(120); // Set callbacks that are called if the link focus is changed and if a link is selected with // the Enter key view.on_link_focus(|s, url| {}); view.on_link_select(|s, url| {}); // Add the view to a Cursive instance use cursive::view::{Resizable, Scrollable}; let mut s = cursive::dummy(); s.add_global_callback('q', |s| s.quit()); s.add_fullscreen_layer(view.scrollable().full_screen()); s.run(); You can use the arrow keys to navigate between the links and press Enter to trigger the on_link_select callback. For a complete example, see examples/browser.rs, a very simple browser implementation. Components The main component of the crate is MarkupView. It is a cursive view that displays hypertext: a combination of formatted text and links. You can use the arrow keys to navigate between the links, and the Enter key to select a link. The displayed content is provided and rendered by a Renderer instance. If the html feature is enabled (default), the html::Renderer can be used to parse and render an HTML document with html2text. But you can also implement your own Renderer. MarkupView caches the rendered document ( RenderedDocument) and only invokes the renderer if the width of the view has been changed. HTML rendering To customize the HTML rendering, you can change the TextDecorator that is used by html2text to transform the HTML DOM into annotated strings. Of course the renderer must know how to interpret the annotations, so if you provide a custom decorator, you also have to provide a Converter that extracts formatting and links from the annotations.
https://docs.rs/cursive-markup/0.1.0/cursive_markup/index.html
2021-02-25T07:21:36
CC-MAIN-2021-10
1614178350846.9
[]
docs.rs
Use the following steps to reduce the number of messages recorded in var/log/messages files on Red Hat Linux agents: Edit the uptimeagent or uptmagnt file under /etc/xinetd.d and add the following lines between the {} characters: log_on_success -= PID log_on_success -= EXIT log_on_success -= HOST DURATION log_on_failure -= HOST Restart the xinetd process. To test if the agent is still logging messages, run a tail -f on messages and then poll the Agent from the Uptime Infrastructure Monitor user interface to see if this action logs a new message.
http://docs.uptimesoftware.com/display/KB/Suppress+Linux+agent+inetd+uptmagnt+connection+warnings
2021-02-25T07:36:01
CC-MAIN-2021-10
1614178350846.9
[]
docs.uptimesoftware.com
Anaconda Environments (AEN 4.1.1 recognize it and all project members have access to it. This introduction to conda environments is specifically for AEN users. See conda for full conda documentation,. To limit their read, write and/or execute permissions from the Workbench app, select the notebook name, then select Permissions from the resulting drop-down menu.. From To use your new environment with Jupyter Notebooks, open the notebook app and open a new notebook by clicking the NEW button. In the resulting drop-down menu, under Notebooks, the environment you just created appears. Select that environment to activate it. “my_env” in a project named “test1” that includes NumPy and SciPy and want to use this environment in your notebook, select “Python [conda env:test1-my_env]” from the notebook’s top Kernel menu. The notebook code will run in that environment and.
https://docs.anaconda.com/ae-notebooks/4.1.1/user/anaconda/
2021-02-25T07:45:19
CC-MAIN-2021-10
1614178350846.9
[]
docs.anaconda.com
A page to tell the world about your project and engage supporters. Accept donations and sponsorship. Reimburse expenses and pay invoices. Automatic financial reporting and accountability. Levels or rewards for backers and sponsors. Tickets sales go straight to your Project budget. Give the gift of giving. Sponsor Projects on behalf of a company. Keep your supporters in the loop about your achievements and news. Create an umbrella entity to support a community of Projects. Tell the world who you are and show off the Projects you're managing or supporting. A community forum for your Project.
https://docs.donatepr.com/product/product
2021-02-25T07:58:00
CC-MAIN-2021-10
1614178350846.9
[]
docs.donatepr.com
We had worked with hundreds of home-improvement businesses in the past months and one thing we have noticed is that contractors that linked their Ccino Financing Page to their business websites have significantly more traffic volume and number of leads compared with the rest of the businesses. Increases more traffic and convert more website visitors Give your customers an easier way to find your financing page Establish web presence and competitive advantage There are several ways you can link your Financing Page, and we will walk you through each one of these methods so you can pick the best way: Attach your Ccino customer brochure to your business site Create a link to your Financing Website Create a "Financing" section This is the simplest way to let your customers know you now provide financing. Having a "Financing" tab in your navigation bar and when customers enter, they can see the PDF right away. Sample code for your site: <code><a href="your-financing-brochure-link" target="_blank">Financing</a> The idea is similar, creating a link to your financing website so that your customers can go directly to the site. Additionally, you can add a few sentences explaining how the financing works to customers who might be hesitating: <code>Financing Available - Get your competitive rates under 60 seconds. <a href="your-financing-website-link" target="_blank">Financing</a> Having a "Financing" section for your website creates the most value but it also takes time. We have prepared a template for you. We’ve teamed up with Ccino, penalties Step 1: Complete A Short Online Form Just answer a few questions to see available rates without affecting your credit score. Step 2: Review Loan Options If eligible, you’ll get personalized options from multiple lending partners. You can then apply for your chosen loan. Step 3: Get Funded As Soon As 24 Hours If approved, you can get money in your account the next day. Check your rates under 60 seconds - no impact on your credit score. <code><a href="your-financing-website-link" target="_blank">Financing</a> --- Congrats! Now you have completed all the getting started content and now it's time to take actions and start getting more sales coming your way. If you have any questions, please email us at [email protected]
https://docs.getccino.com/getting-started/list-your-financing-page-on-your-business-site
2021-02-25T07:14:20
CC-MAIN-2021-10
1614178350846.9
[]
docs.getccino.com
How to Change Logo Hit Theme Options > General Options. Here you can set your mobile logo and menu logo. Then Publish. Doc navigation← How to edit blog post meta info?How To Change GDPR Text & Link? → Was this article helpful to you? Yes No How can we help? Name Email subject message
https://docs.hibootstrap.com/docs/varn-theme-documentation/faqs/how-to-change-logo/
2021-02-25T07:24:22
CC-MAIN-2021-10
1614178350846.9
[]
docs.hibootstrap.com
Import from Sketch App Using our Sketch plugin, you can copy your Artboards, Groups, Assets, and Layers from the Sketch App directly into Kodika. Install Sketch Plugin Notes: - You will need to have Kodika Studio on your Mac. Kodika on iPad is not supported yet. - You will need to have Sketch App on your Mac. Sketch Cloud does not support plugins. Download You do not need to install anything in Kodika Studio, but you will need to install our Sketch plugin into the Sketch app. - Download the latest release of Export to Kodika.io Plugin. - Unzip it. - Double click on the extracted sketch-kodika-plugin.sketchplugin and follow the steps in the Sketch App to install the plugin. - Ready to use it! How to Import your designs In Sketch App: - Select the element(s) that you want to import/copy. Check also Supported Elements - Go to Plugins > Export to Kodika.io > Copy Selected Elements. Tip: You can use the Shift+ Command+ Kshortcut. In Kodika Studio: - Open the Screen or the Cell that you want to paste into. - Optional: Select the Group that you want to be the parent of the pasted Views. Root/Top view will be used if nothing is selected. - Right-click and select Paste. Tip: You can use the CMD+ Vshortcut. Supported Elements Artboard You can copy only one Artboard each time. Artboards are pasted as a new Group. Group You can select one or more Groups. Imported groups will keep their size and position. Exportable Every Sketch Element that has been marked as Exportable, will be imported as Image Asset and then will be displayed using an Image View. Text You can select one or more Texts. Texts inside Groups will be copied automatically. Each Text element is imported as a Label and its text style is applied in the Label’s properties. Imported properties - Font. If you do not have a Font installed in the Kodika Studio, an alert will be displayed. Tip: You can Install custom fonts from the Install Fonts feature in Application Info. - Text Color - Horizontal and Vertical Alignment - Auto size Symbols Sketch Symbols are handled similar to Groups. You can copy both Symbols and Symbol Instances. Shape Shares and Shape-Paths are pasted either as Views (if possible) or are imported as Image Assets. Imported properties - Fill Color - Opacity - Border - Shadow Note: If Fill Color can not be imported into Kodika, an Image Asset will be imported instead. Constraints If you have enabled Resizing Constraints in Sketch, they will be imported as Pin Constraints in Kodika. What’s next? After you have successfully imported your Designs you can connect your Screens using Screen Navigations and populate them with data from Kodika Server using a Datasource.
https://docs.kodika.io/integrations-plugins/import-from-sketch
2021-02-25T08:06:25
CC-MAIN-2021-10
1614178350846.9
[]
docs.kodika.io
Frequently Asked Questions¶ Why don’t you use SQLite for the song database?¶ Although the song data Quod Libet stores would benefit from a relational database, it does not have a predefined schema, and opts to let users define their own storage keys. This means relational databases based on SQL, which require predefined schemata, cannot be used directly. What about <my favourite NoSQL DB> then?¶ This gets asked fairly often. MongoDB, CouchDB etc are indeed a closer match to the existing setup, but there is significant work porting to any of these, and each comes with a compatibility / maintenance cost. There has to be a genuine case for the benefits outweighing the migration cost. Any environment variables I should know about?¶ - QUODLIBET_TEST_TRANS When set to a string will enclose all translatable strings with that string. This is useful for testing how the layout of the user interface behaves with longer text as can occur with translations and to see if all visible text is correctly marked as translatable. QUODLIBET_TEST_TRANS=XXX - QUODLIBET_DEBUG - When in the environment gives the same result as if --debugwas passed. - QUODLIBET_BACKEND Can be set to the audio backend, overriding the value present in the main config file. Useful for quickly testing a different audio backend. QUODLIBET_BACKEND=xinebe ./quodlibet.py - QUODLIBET_USERDIR Can be set to a (potentially not existing) directory which will be used as the main config directory. Useful to test Quod Libet with a fresh config, test the initial user experience, or to try out things without them affecting your main library. QUODLIBET_USERDIR=foo ./quodlibet.py
https://quodlibet.readthedocs.io/en/latest/development/faq.html
2021-02-25T08:14:39
CC-MAIN-2021-10
1614178350846.9
[]
quodlibet.readthedocs.io
Viewing Atlas Entity Audits In Data Catalog, Atlas audits help Data Stewards to identify and track the entity changes or modifications that are performed over a period of time. Information about the Atlas entity audit events are displayed for each entity in the Asset Details page in Data Catalog. Using this information, Data Stewards can distinguish between entity audits and data audits that emanate from Ranger. On the Asset Details page, a new tab called Metadata Audits displays information related to the selected entity type and about the events that occurred based on the user activities. Clicking on Metadata Audits, tab, you can view manage information about: - The user who made the changes to the specific entity - The time when the entity was changed - The kind of change that was made to the entity - Any other relevant changes pertaining to the audit entries The changes that can be identified for: - Created entities and related updates - Tagged entities - Labeled entities - Export and Import operations For example, the following image displays information about the Atlas audit events that are performed by each Atlas user that is displayed in the Asset Details page in Data Catalog. Clicking on any line item displays the JSON format, which is directly derived from Atlas, in other words the source of data available in Atlas. Use the toggle icon (on the top-right corner) for viewing Atlas Audits in different formats. By default, you can view Metadata Audits in tabular format in the Asset Details page and when you toggle the view icon, you can view the Timeline format. The events are listed as timelines in this format. Clicking on a user in the Timeline format displays the JSON data, which is again derived from Atlas.
https://docs.cloudera.com/data-catalog/cloud/managing/topics/dc-atlas-audits.html
2021-02-25T08:06:28
CC-MAIN-2021-10
1614178350846.9
[array(['../images/atlas-metadata-audits-tab.png', None], dtype=object) array(['../images/atlas-audit-first.png', None], dtype=object) array(['../images/atlas-audit-second.png', None], dtype=object) array(['../images/atlas-audit-timeline-view.png', None], dtype=object) array(['../images/atlas-audits-click-user-timeline.png', None], dtype=object) ]
docs.cloudera.com
If you want to close down your Project, you can either archive or delete it. Either way, you first need to zero the balance. Go to your Project's page, click on the gear icon next to your logo and head to the Advanced page to access those options. If your Project does not have any transactions or financial activity, you can delete it. Deleting a Project will remove its data, including memberships, payment methods, etc. If your Project has transactions associated with it, you will have to archive it. The reason we cannot delete transactions is the financial ledger must retain its accuracy and integrity. Your income is someone else's expense, and your expenses are someone else's income. If your Project has transactions, you can archive it instead of deleting. This will mark the Project as inactive and prevent any new donations.
https://docs.donatepr.com/projects/closing-a-project
2021-02-25T07:14:49
CC-MAIN-2021-10
1614178350846.9
[]
docs.donatepr.com
Overview These instructions help you set up Okta as your third-party identity provider for use with the Kong OIDC and Portal Application Registration plugins. Define an authorization server and create a custom claim in Okta Follow these steps to set up an authorization server in Okta for all authorization types. Click API > Authorization Servers. Notice that you already have an authorization server set up named default. This example uses the default auth server. You can also create as many custom authorization servers as necessary to fulfill your requirements. For more information, refer to the Okta developer documentation. Click default to view the details for the default auth server. Take note of the IssuerURL, which you will use to associate Kong with your authorization server. Click the Claims tab. - Click Add Claim. Add a custom claim called application_idthat will attach any successfully authenticated application’s idto the access token. - Enter application_idin the Name field. - Ensure the Include in token typeselection is Access Token. - Enter app.clientIdin the Value field. - Click Create. Now that you have created a custom claim, you can associate the client_idwith a Service via the Application Registration plugin. Start by creating a Service in Kong Manager. Create a Service and a Route and instantiate an OIDC plugin on that Service. You can allow most options to use their defaults. In the Config.Issuerfield, enter the Issuer URL of the Authorization server from your identity provider. In the Config.Consumer Claimfield, enter your <application_id>. Tip: Because Okta’s discovery document does not include all supported auth types by default, ensure the config.verify_parametersoption is disabled. The core configuration should be: { "issuer": "<auth_server_issuer_url>", "verify_credentials": false, "consumer_claim": "<application_id>", } - Configure a Portal Application Registration plugin on the Service as well. See Application Registration. Register an application in Okta Follow these steps to register an application in Okta and associate the Okta application with an application in the Kong Developer Portal. - Click Applications > + Add Application. Depending on which authentication flow you want to implement, the setup of your Okta application will vary: - Client Credentials: Select Machine-to-Machinewhen prompted for an application type. You will need your client_idand client_secretlater on when you authenticate with the proxy. Implicit Grant: Select Single-Page App, Native, or Webwhen prompted for an application type. Make sure Implicitis selected for Allowed grant types. Enter the Login redirect URIs, Logout redirect URIs, and Initiate login URIfields with the correct values, depending on your application’s routing. The Implicit Grant flow is not recommended if the Authorization Code flow is possible. Authorization Code: Select Single-Page App, Native, or Webwhen prompted for an application type. Make sure Authorization Codeis selected for Allowed grant types. Enter the Login redirect URIs, Logout redirect URIs, and Initiate login URIfields with the correct values, depending on your application’s routing. Associate the identity provider application with your Kong application Now that the application has been configured in Okta, you need to associate the Okta application with the corresponding application in Kong’s Developer Portal. This example assumes Client Credentials is the chosen OAuth flow. - In the Kong Dev Portal, create an account if you haven’t already. - After you’ve logged in, click My Apps. - On the Applications page, click + New Application. Complete the Name and Description fields. Paste the client_idof your corresponding Okta (or other identity provider) application into the Reference Id field. Now that the application has been created, developers can authenticate with the endpoint using the supported and recommended third-party OAuth flows.
https://docs.konghq.com/enterprise/2.3.x/developer-portal/administration/application-registration/okta-config/
2021-02-25T07:56:42
CC-MAIN-2021-10
1614178350846.9
[]
docs.konghq.com
Disable application registration for a Service. Disabling application registration deletes all plugins that were initially enabled for a Service. You cannot manually delete a plugin that was automatically enabled by app registration, such as the acl and key-auth or openid-connect plugins that were automatically enabled in tandem when app registration was enabled. When app registration is enabled, you cannot disable those authentication plugins either. If you attempt to do so, a modification not allowed when application registration message appears. The only way to (indirectly, automatically) delete the automatically enabled authentication plugins is to disable app registration. Any other plugins that were enabled manually, such as rate-limiting, remain enabled. The main reason for disabling app registration for a Service is when an API no longer requires authentication. If you want to disable Auto Approve at the Service level, disable app registration and then enable it again with the Auto Approve toggle set to disabled. You can enable application registration again any time at your discretion. From the Konnect menu, click Services. The Services page is displayed. Depending on your view, click the tile for the Service in cards view or the row for the Service in table view. The Overview page for the service is displayed. From the Actions menu, click Disable app registration. You are prompted to confirm the disable action. Click Disable.
https://docs.konghq.com/konnect/dev-portal/administrators/app-registration/disable-app-reg/
2021-02-25T07:28:12
CC-MAIN-2021-10
1614178350846.9
[]
docs.konghq.com
Troubleshooting Remedy Smart Reporting Installation and Upgrade issues This topic covers the troubleshooting steps for the issues that you can face during installation or upgrading Remedy Smart Reporting. Installation fails with Windows authentication user Issue symptoms While installing Remedy Smart Reporting with Windows authentication, the installation fails with the following error: Failed to execute changeStartTypeAndStart. Issue cause Remedy Smart Reporting installer restarts the BMC Smart Reporting service at the time of installation. However, if a Windows user does not have access to restart the Windows service, then the installation fails. Issue workaround Provide the following permission to the user to start and stop services on the local machine: Local Security Policy -> Local Policies -> User Rights Assignment -> Log on as a Service Additionally, ensure that the domain user is a part of the Administrator group and is the owner of Remedy Smart Reporting database. Remedy Smart Reporting installation with external Tomcat fails Issue symptoms Remedy Smart Reporting installation with external Tomcat fails with the following error message: Tomcat directory is invalid Issue cause The possible root causes are: - The Version.bat file is missing. - Some of the Tomcat binaries are missing. Issue workaround Copy the missing binaries from the working environment or reinstall Apache Tomcat and install Remedy Smart Reporting. To install Remedy Smart Reporting with external Tomcat, you must update the context.xml file by performing the following steps: - Navigate to <external tomcat installation directory>\conf\ context.xml. For the existing Context parameter, set the xmlBlockExternal to false. For example: <Context xmlBlockExternal="false"> Upgrade fails with "Not Able to Connect to Provided Smart Reporting URL" error Issue symptoms While upgrading to Remedy Smart Reporting version 9.1 Service Pack 2 with SSL enabled on the existing setup, you get the following error message: Not able to connect to Provided Smart Reporting url The installer logs displays the following error: SEVERE,com.bmc.install.product.arsuitekit.platforms.arsystemservers.reporting.task.ReportingAdminUserDeatilsValidationTask, THROWABLE EVENT {Description=[Cannot connect to Reporting server],Detail=[Redirection to SSL failed for provided Smart Reporting URL]}, Throwable=[com.bmc.inapp.reporting.tools.srwsclient.SmartReportingWSClientException: Redirection to SSL failed for provided Smart Reporting URL Issue cause Upgrade installer does not connect to the existing setup when SSL is enabled. Issue workaround - Disable SSL in Remedy Smart Reporting Tomcat by commenting out the SSL connector section. - Restart the Remedy Smart Reporting server. - Disable the Auto Forwarding of SSL parameter in the web.xml file. Upgrade issues when you change the hostname of the database Issue symptoms When you upgrade Remedy Smart Reporting, the database details displayed on the installer screen are incorrect. Issue cause When Remedy Smart Reporting is installed for the first time, the ARSystemInstalledConfiguration.xml file is created in the < Install Directory>\Program Files\BMC Software\ARSystem directory. This file stores the complete information that is provided during the installation, including the database details. After the installation is complete, if you reconfigure Remedy Smart Reporting with a different database or change the database server hostname, then the details are updated in the ARSystemInstalledConfiguration.xml file. While upgrading Remedy Smart Reporting, the installer reads the existing deployment information from the ARSystemInstalledConfiguration.xml file. If the database details are different in the web.xml file and the ARSystemInstalledConfiguration.xml file, then the installer displays the incorrect database details during the upgrade. Issue workaround Whenever you change the Remedy Smart Reporting database details in the web.xml file, ensure that you make the same changes in the ARSystemInstalledConfiguration.xml because the upgrade installer connects to the database using this file. - Open the ARSystemInstalledConfiguration.xml file located at: - For Windows: <Install Directory>\ProgramFiles\BMC Software\ARSystem\ - For Linux: <Install Directory>\opt\bmc\arsystem\ - Search for the Remedy Smart Reporting database details and update them as per the web.xml file. Schema version check error during Remedy Smart Reporting installation Issue symptoms During the installation of a secondary node in a clustered environment, the following error message is displayed: Schema version check during the upgrade Issue cause Remedy Smart Reporting 9.1.00 is installed as a primary node and then upgraded to version 9.1.04. Additionally, Remedy Smart Reporting 9.1.04 is installed as a secondary node and points to the primary Remedy Smart Reporting database. Issue workaround Install Remedy Smart Reporting 9.1.04 as a secondary node with a blank or dummy database. After the successful installation, change the secondary node database details. For more information, see How To Change Smart Reporting Repository Database HostName On Existing Deployment. Best Practice Using Remedy Smart Reporting with Tomcat version 8 Issue symptoms You get the following error message while using Remedy Smart Reporting with Tomcat version 8: This page isn’t working. If the problem continues, contact the site owner. HTTP ERROR 400 Issue workaround (For Tomcat 8 and later) Check if the generated URL has the | (pipe) character symbol. If yes, since Tomcat 8 and later does not support the pipe character in the URL, it blocks all the requests. Follow the steps given below to allow this character in the URL: - Go to the \ApacheTomcat\Conf\ directory. - Open the catalina.properties file. - Uncomment or add the following property: tomcat.util.http.parser.HttpParser.requestTargetAllow=| - Save the file. - Restart Tomcat. Warning Using this option will expose the server to CVE-2016-6816.
https://docs.bmc.com/docs/brid1808/troubleshooting-remedy-smart-reporting-installation-and-upgrade-issues-877696449.html
2021-02-25T08:49:09
CC-MAIN-2021-10
1614178350846.9
[]
docs.bmc.com
Virtual machine flavors All virtual machine flavors supported on a given Compute Canada cloud can be obtained from the OpenStack Command Line Clients with the following command: [name@server ~]$ openstack flavor list --sort-column RAM Virtual machine flavors have names like: c2-7.5gb-92 p1-0.75gb By convention the prefix "c" designates "compute" and "p" designates "persistent". The prefix is followed by the number of virtual CPUs, then the amount of RAM after the dash. If a second dash is present it is followed by the size of secondary ephemeral disk in gigabytes. A virtual machine of "c" flavor is intended for jobs of finite lifetime and for development and testing tasks. It starts from a qcow2-format image. Its disks reside on the local hardware running the VM and have no redundancy (raid0). The root disk is typically 20GB in size. "c" flavor VMs also have an secondary ephemeral data disk. These storage devices are created and destroyed with the instance. The Arbutus cloud treats “c” flavors differently as they have no over-commit on CPU so are targeted towards CPU intensive tasks. A virtual machine of "p" flavor is intended to run for an indeterminate length of time. There is no predefined root disk. The intended use of "p" flavors is that they should be booted from a volume, in which case the instance will be backed by the Ceph storage system and have greater redundancy and resistance to failure than a "c" instance. We recommend using a volume size of at least 20GB for the persistent VM root disk. The Arbutus cloud treats “p” flavors differently as they will be on compute nodes with a higher level of redundancy (disk and network) and do over-commit the CPU so are geared towards web servers, data base servers and instances that have a lower CPU or bursty CPU usage profile in general.
https://docs.computecanada.ca/wiki/Virtual_machine_flavors/en
2021-02-25T07:59:27
CC-MAIN-2021-10
1614178350846.9
[]
docs.computecanada.ca
- TiDB Data Migration Documentation TiDB Data Migration (DM) is an integrated data migration task management platform that supports the full data migration and the incremental data replication from MySQL/MariaDB into TiDB. It can help to reduce the operations cost and simplify the troubleshooting process. Note: DM migrates data to TiDB in the form of SQL statements, so each version of DM is compatible with all versions of TiDB. In the production environment, it is recommended to use the latest released version of DM. To install DM, see DM download link. Architecture The Data Migration tool includes three components: DM-master, DM-worker, and dmctl. DM-master DM-master manages and schedules the operation of data migration tasks. - Storing the topology information of the DM cluster - Monitoring the running state of DM-worker processes - Monitoring the running state of data migration tasks - Providing a unified portal for the management of data migration tasks - Coordinating the DDL migration of sharded tables in each instance under the sharding scenario DM-worker DM-worker executes specific data migration tasks. - Persisting the binlog data to the local storage - Storing the configuration information of the data migration subtasks - Orchestrating the operation of the data migration subtasks - Monitoring the running state of the data migration subtasks After DM-worker is started, it automatically migrates the upstream binlog to the local configuration directory (the default migration directory is <deploy_dir>/relay_log if DM is deployed using DM-Ansible). For details about DM-worker, see DM-worker Introduction. For details about the relay log, see Relay Log. dmctl dmctl is the command line tool used to control the DM cluster. - Creating/Updating/Dropping data migration tasks - Checking the state of data migration tasks - Handling the errors during data migration tasks - Verifying the configuration correctness of data migration tasks Data migration features This section describes the data migration features provided by the Data Migration tool. Schema and table routing The schema and table routing feature means that DM can migrate a certain table of the upstream MySQL or MariaDB instance to the specified table in the downstream, which can be used to merge or migrate the sharding data. Block and allow lists migration at the schema and table levels The block and allow lists filtering rule of the upstream database instance tables is similar to MySQL replication-rules-db/ replication-rules-table, which can be used to filter or only migrate all operations of some databases or some tables. Binlog event filtering Binlog event filtering is a more fine-grained filtering rule than the block and allow lists filtering rule. You can use statements like INSERT or TRUNCATE TABLE to specify the binlog events of schema/table that you need to migrate or filter out. Sharding support DM supports merging the original sharded instances and tables into TiDB, but with some restrictions. Usage restrictions Before using the DM tool, note the following restrictions: - Database version - DDL syntax - Sharding - Operations - Switching DM-worker connection to another MySQL instance
https://docs.pingcap.com/tidb-data-migration/v1.0/
2021-02-25T08:23:30
CC-MAIN-2021-10
1614178350846.9
[array(['https://download.pingcap.com/images/tidb-data-migration/dm-architecture.png', 'Data Migration architecture'], dtype=object) ]
docs.pingcap.com
7.1. Fermentrack Architecture¶ Todo Rewrite/fix architecture.rst to match the mkdocs/markdown version The Fermentrack stack is based on a front end application, a controller, and a firmware running on the device that handles reading temperatures, switching cooling and heating etc. Everything but the firmware part is running under a process manager which takes care of launching the front end and brewpi.py controller scripts. See [components](components.md) documentation for links and licenses. ## The webserver nginx and chaussette WSGI server Used to proxy http requests to chaussette over WSGI to the Fermentrack django application. ## cron Used to start the Fermentrack stack, it starts the Circus process manager via a @reboot job, it also checks the status of circus every 10 seconds, if it not running it will start it. All this is handled by a script: updateCronCircus.sh Supports the following arguments: {start|stop|status|startifstopped|add2cron} where: - start - will start circusd and all the services - stop - will quit circusd and all processes (note it would be started again in 10 minutes) - status - will output a status of all processes running (see below) - startifstopped - will start the process manager if stopped (called from cron every 10 minutes) - add2cron - if crontab entries are missing, it will add them back. Crontab entries added with add2cron: @reboot ~/fermentrack/brewpi-script/utils/updateCronCircus.sh start */10 * * * * ~/fermentrack/brewpi-script/utils/updateCronCircus.sh startifstopped Example status output: $ ~/fermentrack/brewpi-script/utils/updateCronCircus.sh status Fermentrack: active brewpi-spawner: active circusd-stats: active dev-brewpi1: active ## The process manager circus Fermentrack is started at boot with the help of cron (see cron), the process manager handles all the different processes needed by Fermentrack. - Fermentrack - The django application (web interface) runs under chaussette - brewpi-spawner - An internal Fermentrack process for spawning controller scripts for controlling controllers like brewpi-esp8266. - circusd-stats - An Internal circus process for stats, not used yet. - dev-brewpi1 - Is a controller script spawned by brewpi-spawner, handing a controller. Circus documentation can be found here. ## Logging - Circus process manager logs: - /home/fermentrack/fermentrack/log/circusd.log - Controller script (brewpi.py) log: - /home/fermentrack/fermentrack/log/dev-[name]-stdout.log - Controller script (brewpi.py) error/info log: - /home/fermentrack/fermentrack/log/dev-[name]-stderr.log - Controller script spawner: - /home/fermentrack/fermentrack/log/fermentrack-brewpi-spawner.log - Fermentrack django application: - /home/fermentrack/fermentrack/log/fermentrack.log Logs are rotated every 2MB and the last 5 are saved with a number suffix.
http://docs.fermentrack.com/en/dev/develop/architecture.html
2021-02-25T07:38:46
CC-MAIN-2021-10
1614178350846.9
[array(['img/fermentrack.png', 'Fermentrack Architecture'], dtype=object)]
docs.fermentrack.com
Important You are viewing documentation for an older version of Confluent Platform. For the latest, click here. KSQL Custom Function Reference (UDF and UDAF)¶¶ Folow these steps to create your custom functions: Write your UDF or UDAF class in Java. - If your Java class is a UDF, mark it with the @UdfDescriptionand @Udfannotations. - If your class is a UDAF, mark it with the @UdafDescriptionand @UdafFactoryannotations. For more information, see Example UDF class and Example UDAF class. Deploy the JAR file to the KSQL extensions directory. For more information, see Deploying. Implement a User-defined Function (UDF and UDAF). Creating UDF and UDAFs¶¶¶¶: compile 'io.confluent.ksql:ksql-udf:5.2.0' To compile with the latest version of ksql-udf: compile 'io.confluent.ksql:ksql-udf:+' If you’re using Maven to build your UDF or UDAF, specify the ksql-udf dependency in your POM file: <!-- Specify the repository for Confluent dependencies --> <repositories> <repository> <id>confluent</id> <url></url> </repository> </repositories> <!-- Specify the ksql-udf dependency --> <dependencies> <dependency> <groupId>io.confluent.ksql</groupId> <artifactId>ksql-udf</artifactId> <version>5.2.0</version> </dependency> </dependencies> UdfDescription Annotation¶ The @UdfDescription annotation is applied at the class level and has four fields, two of which are required. The information provided here is used by the SHOW FUNCTIONS and DESCRIBE FUNCTION <function> commands. Udf Annotation¶ The @Udf annotation is applied to public methods of a class annotated with @UdfDescription. Each annotated method will become an invocable function in KSQL. The annotation only has a single field description that is optional. You can use this to better describe what a particular version of the UDF does, for example: @Udf(description = "Returns a substring of str that starts at pos" + " and continues to the end of the string") public String substring(final String str, final int pos) @Udf(description = "Returns a substring of str that starts at pos and is of length len") public String substring(final String str, final int pos, final int len) UdfParameter Annotation¶ The @UdfParameter annotation is optional and is applied to the parameters of methods annotated with @Udf. KSQL will use the additional information in the @UdfParameter annotation to provide users with richer information about the method when, for example, they execute DESCRIBE FUNCTION on the method. The annotation has two parameters: value is the name of the parameter and description which can be used to better describe what the parameter does, for example: @Udf public String substring( @UdfParameter("str") final String str, @UdfParameter(value = "pos", description = "Starting position of the substring") final int pos) If your Java8 class is compiled with the -parameter compiler flag, the name of the parameter will be inferred from the method declaration. Configurable UDF¶ If the UDF class needs access to the KSQL server configuration it can implement io.confluent.common.Configurable, e.g. @UdfDescription(name = "MyFirstUDF", description = "multiplies 2 numbers") public class SomeConfigurableUdf implements Configurable { private String someSetting = "a.default.value"; @Override public void configure(final Map<String, ?> map) { this.someSetting = (String)map.get("ksql.functions.myfirstudf.some.setting"); } ... } For security reasons, only settings whose name is prefixed with ksql.functions.<lowercase-udfname>. or ksql.functions._global_. will be propagated to the Udf. UDAFs¶ To create a UDAF you need to create a class that is annotated with @UdafDescription. Each method in the class that is used as a factory for creating an aggregation must be public static, be annotated with @UdafFactory, and must return either Udaf or TableUdaf. The class you create represents a collection of UDAFs all with the same name but may have different arguments and return types. Example UDAF class¶ The class below creates a UDAF named my_sum. The name of the UDAF is provided in the name parameter of the UdafDescription annotation. This name is case-insensitive and is what can be used to call the UDAF. The UDAF can be invoked in four ways: - With a Long (BIGINT) column, returning the aggregated value as Long (BIGINT). Can also be used to support table aggregations as the return type is TableUdafand therefore supports the undooperation. - with an Integer column returning the aggregated value as Long (BIGINT). - with a Double column, returning the aggregated value as Double. - with a String (VARCHAR) and an initializer that is a String (VARCHAR), returning the aggregated String (VARCHAR) length as a Long (BIGINT). @UdafDescription(name = "my_sum", description = "sums") public class SumUdaf { @UdafFactory(description = "sums longs") // Can be used with table aggregations public static TableUdaf<Long, Long> createSumLong() { return new TableUdaf<Long, Long>() { @Override public Long undo(final Long valueToUndo, final Long aggregateValue) { return aggregateValue - valueToUndo; } @Override public Long initialize() { return 0L; } @Override public Long aggregate(final Long value, final Long aggregate) { return aggregate + value; } @Override public Long merge(final Long aggOne, final Long aggTwo) { return aggOne + aggTwo; } }; } @UdafFactory(description = "sums int") public static TableUdaf<Integer, Long> createSumInt() { return new TableUdaf<Integer, Long>() { @Override public Long undo(final Integer valueToUndo, final Long aggregateValue) { return aggregateValue - valueToUndo; } @Override public Long initialize() { return 0L; } @Override public Long aggregate(final Integer current, final Long aggregate) { return current + aggregate; } @Override public Long merge(final Long aggOne, final Long aggTwo) { return aggOne + aggTwo; } }; } @UdafFactory(description = "sums double") public static Udaf<Double, Double> createSumDouble() { return new Udaf<Double, Double>() { @Override public Double initialize() { return 0.0; } @Override public Double aggregate(final Double val, final Double aggregate) { return aggregate + val; } @Override public Double merge(final Double aggOne, final Double aggTwo) { return aggOne + aggTwo; } }; } // This method shows providing an initial value to an aggregated, i.e., it would be called // with my_sum(col1, 'some_initial_value') @UdafFactory(description = "sums the length of strings") public static Udaf<String, Long> createSumLengthString(final String initialString) { return new Udaf<String, Long>() { @Override public Long initialize() { return (long) initialString.length(); } @Override public Long aggregate(final String s, final Long aggregate) { return aggregate + s.length(); } @Override public Long merge(final Long aggOne, final Long aggTwo) { return aggOne + aggTwo; } }; } } UdafDescription Annotation¶ The @UdafDescription annotation is applied at the class level and has four fields, two of which are required. The information provided here is used by the SHOW FUNCTIONS and DESCRIBE FUNCTION <function> commands. UdafFactory Annotation¶ The @UdafFactory annotation is applied to public static methods of a class annotated with @UdafDescription. The method must return either Udaf, or, if it supports table aggregations, TableUdaf. Each annotated method is a factory for an invocable aggregate function in KSQL. The annotation only has a single field description that is required. You can use this to better describe what a particular version of the UDF does, for example: @UdafFactory(description = "Sums BIGINT columns.") public static TableUdaf<Long, Long> createSumLong(){...} @UdafFactory(description = "Sums the length of VARCHAR columns".) public static Udaf<String, Long> createSumLengthString(final String initialString) Supported Types¶ The types supported by UDFs are currently limited to: Note: Complex types other than List and Map are not currently supported Deploying¶ To deploy your UD(A)Fs you need to create a jar containing all of the classes required by the UD(A)Fs. If you depend on third-party libraries then this should be an uber-jar containing those libraries. Once the jar is created you need to deploy it to each KSQL server instance. The jar should be copied to the ext/ directory that is part of the KSQL distribution. The ext/ directory can be configured via the ksql.extension.dir. The jars in the ext/ directory are only scanned at start-up, so you will need to restart your KSQL server instances to pick up new UD(A)Fs. It is important to ensure that you deploy the custom jars to each server instance. Failure to do so will result in errors when processing any statements that try to use these functions. The errors may go unnoticed in the KSQL CLI if the KSQL server instance it is connected to has the jar installed, but one or more other KSQL servers don’t have it installed. In these cases the errors will appear in the KSQL server log (ksql.log) . The error would look something like: [2018-07-04 12:37:28,602] ERROR Failed to handle: Command{statement='create stream pageviews_ts as select tostring(viewtime) from pageviews;', overwriteProperties={}} (io.confluent.ksql.rest.server.computation.StatementExecutor:210) io.confluent.ksql.util.KsqlException: Can't find any functions with the name 'TOSTRING' The servers that don’t have the jars will not process any queries using the custom UD(A)Fs. Processing will continue, but it will be restricted to only the servers with the correct jars installed. Usage¶ Once your UD(A)Fs are deployed you can call them in the same way you would invoke any of the KSQL built-in functions. The function names are case-insensitive. For example, using the multiply example above: CREATE STREAM number_stream (int1 INT, int2 INT, long1 BIGINT, long2 BIGINT) WITH (VALUE_FORMAT = 'JSON', KAFKA_TOPIC = 'numbers'); SELECT multiply(int1, int2), MULTIPLY(long1, long2) FROM number_stream; KSQL Custom Functions and Security¶ Blacklisting¶ In some deployment environments it may be necessary to restrict the classes that UD(A)Fs have access to as they may represent a security risk. To reduce the attack surface of KSQL UD(A)Fs you can optionally blacklist classes and packages such that they can’t be used from a UD(A)F. There is an example blacklist that is found in the file resource-blacklist.txt that is in the ext/ directory. All the entries in it are commented out, but it demonstrates how you can use the blacklist. This file contains an entry per line, where each line is a class or package that should be blacklisted. The matching of the names is based on a regular expression, so if you have an entry, java.lang.Process java.lang.Process This would match any paths that begin with java.lang.Process, i.e., java.lang.Process, java.lang.ProcessBuilder etc. If you want to blacklist a single class, i.e., java.lang.Compiler, then you would add: java.lang.Compiler$ Any blank lines or lines beginning with # are ignored. If the file is not present, or is empty, then no classes are blacklisted. Security Manager¶ By default KSQL installs a simple java security manager for UD(A)F execution. The security manager blocks attempts by any UD(A)Fs to fork processes from the KSQL server. It also prevents them from calling System.exit(..). The security manager can be disabled by setting ksql.udf.enable.security.manager to false.
https://docs.confluent.io/5.2.0/ksql/docs/developer-guide/udf.html
2021-02-25T07:38:47
CC-MAIN-2021-10
1614178350846.9
[]
docs.confluent.io
Kong Gateway (Enterprise) uses common terms for entities and processes that have a specific meaning in context. This topic provides a conceptual overview of terms, and how they apply to Kong’s use cases. Admin An Admin is a Kong Gateway user account capable of accessing the Admin API or Kong Manager. With RBAC and Workspaces, access can be modified and limited to specific entities. Authentication Authentication is the process by which a system validates the identity of a user account. It is a separate concept from authorization. API gateway authentication is an important way to control the data that is allowed to be transmitted to and from your APIs. An API may have a restricted list of identities that are authorized to access it. Authentication is the process of proving an identity. Authorization Authorization is the system of defining access to certain resources. In Kong Gateway, Role-Based Access Control (RBAC) is the main authorization mode. To define authorization to an API, it is possible to use the ACL Plugin in conjunction with an authentication plugin. Beta A Beta designation in Kong software means the functionality of a feature or release version is of high quality and can be deployed in a non-production environment. Note the following when using a Beta feature or version: - A Beta feature or version should not be deployed in a production environment. - Beta customers are encouraged to engage Kong Support to report issues encountered in Beta testing. Support requests should be filed with normal priority, but contractual SLA’s will not be applicable for Beta features. - Support is not available for data recovery, rollback, or other tasks when using a Beta feature or version. - User documentation might not be complete or reflect entire functionality. A Beta feature or version is made available to the general public for usability testing and to gain feedback about the feature or version before releasing it as a production-ready, stable feature or version. Client A Kong Client refers to the downstream client making requests to Kong’s proxy port. It could be another service in a distributed application, a user’s identity, a user’s browser, or a specific device. Consumer A Consumer object represents a client of a Service. A Consumer is also the Admin API entity representing a developer or machine using the API. When using Kong, a Consumer only communicates with Kong which proxies every call to the said upstream API. You can either rely on Kong as the primary datastore, or you can map the consumer list with your database to keep consistency between Kong and your existing primary datastore. Host A Host represents the domain hosts (using DNS) intended to receive upstream traffic. In Kong, it is a list of domain names that match a Route object. Methods Methods represent the HTTP methods available for requests. It accepts multiple values, for example, GET, DELETE. Its default value is empty (the HTTP method is not used for routing). Permission A Permission is a policy representing the ability to create, read, update, or destroy an Admin API entity defined by endpoints. Plugin Plugins provide advanced functionality and extend the use of Kong Gateway, allowing you to add new features to your gateway. Plugins can be configured to run in a variety of contexts, ranging from a specific route to all upstreams. Plugins can perform operations in your environment, such as authentication, rate-limiting, or transformations on a proxied request. Proxy Kong is a reverse proxy that manages traffic between clients and hosts. As a gateway, Kong’s proxy functionality evaluates any incoming HTTP request against the Routes you have configured to find a matching one. If a given request matches the rules of a specific Route, Kong processes proxying the request. Because each Route is linked to a Service, Kong runs the plugins you have configured on your Route and its associated Service and then proxies the request upstream. Proxy Caching One of the key benefits of using a reverse proxy is the ability to cache frequently-accessed content. The benefit is that upstream services do not need to waste computation on repeated requests. One of the ways Kong delivers performance is through Proxy Caching, using the Proxy Cache Advanced Plugin. This plugin supports performance efficiency by providing the ability to cache responses based on requests, response codes and content type. Kong receives a response from a service and stores it in the cache within a specific timeframe. For future requests within the timeframe, Kong responds from the cache instead of the service. The cache timeout is configurable. Once the time expires, Kong forwards the request to the upstream again, caches the result, and then responds from the cache until the next timeout. The plugin can store cached data in-memory. The tradeoff is that it competes for memory with other processes, so for improved performance, use Redis for caching. Rate Limiting Rate Limiting allows you to restrict how many requests your upstream services receive from your API consumers, or how often each user can call the API. Rate limiting protects the APIs from inadvertent or malicious overuse. Without rate limiting, each user may request as often as they like, which can lead to spikes of requests that starve other consumers. After rate limiting is enabled, API calls are limited to a fixed number of requests per second. In this workflow, we are going to enable the Rate Limiting Advanced Plugin. This plugin provides support for the sliding window algorithm to prevent the API from being overloaded near the window boundaries and adds Redis support for greater performance. Role A Role is a set of permissions that may be reused and assigned to Admins. For example, this diagram shows multiple admins assigned to a single shared role that defines permissions for a set of objects in a workspace. Route A Route, also referred to as Route object, defines rules to match client requests to upstream services. Each Route is associated with a Service, and a Service may have multiple Routes associated with it. Routes are entry-points in Kong and define rules to match client requests. Once a Route is matched, Kong proxies the request to its associated Service. See the Proxy Reference for a detailed explanation of how Kong proxies traffic. Service A Service, also referred to as a Service object, is the upstream APIs and microservices Kong manages. Examples of Services include a data transformation microservice, a billing API, and so on. The main attribute of a Service is its URL (where Kong should proxy traffic to), which can be set as a single string or by specifying its protocol, host, port and path individually. The URL can be composed by specifying a single string or by specifying its protocol, host, port, and path individually. Before you can start making requests against a Service,. Stable A Stable release designation in Kong software means the functionality of the version is of high quality, production-ready, and released as general availability (GA). The version has been thoroughly tested, considered reliable to deploy in a production environment, and is fully supported. If updates or bug fixes are required, a patch version or minor release version is issued and fully supported. Super Admin A Super Admin, or any Role with read and write access to the /admins and /rbac endpoints, creates new Roles and customize Permissions. A Super Admin can: - Invite and disable other Admin accounts - Assign and revoke Roles to Admins - Create new Roles with custom Permissions - Create new Workspaces Tags are customer defined labels that let you manage, search for, and filter core entities using the ?tags querystring parameter. Each tag must be composed of one or more alphanumeric characters, \_\, -, . or ~. Most core entities can be tagged via their tags attribute, upon creation or edition. Teams Teams organize developers into working groups, implements policies across entire environments, and onboards new users while ensuring compliance. Role-Based Access Control (RBAC) and Workspaces allow users to assign administrative privileges and grant or limit access privileges to individual users and consumers, entire teams, and environments across the Kong platform. Upstream An Upstream object refers to your upstream API/service sitting behind Kong, to which client requests are forwarded. An Upstream object represents a virtual hostname and can be used to load balance incoming requests over multiple services (targets). For example, an Upstream named service.v1.xyz for a Service object whose host is service.v1.xyz. Requests for this Service object would be proxied to the targets defined within the upstream. Workspaces Workspaces enable an organization to segment objects and admins into namespaces. The segmentation allows teams of admins sharing the same Kong cluster to adopt roles for interacting with specific objects. For example, one team (Team A) may be responsible for managing a particular service, whereas another team (Team B) may be responsible for managing another service. Many organizations have strict security requirements. For example, organizations need the ability to segregate the duties of an administrator to ensure that a mistake or malicious act by one administrator does not cause an outage.
https://docs.konghq.com/enterprise/2.3.x/introduction/key-concepts/
2021-02-25T07:44:42
CC-MAIN-2021-10
1614178350846.9
[array(['/assets/images/docs/ee/proxy.png', 'Proxy'], dtype=object) array(['/assets/images/docs/ee/proxy-caching.png', 'Proxy caching'], dtype=object) array(['/assets/images/docs/ee/proxy-caching2.png', 'Proxy caching without service'], dtype=object) array(['/assets/images/docs/ee/rate-limiting.png', 'Rate limiting'], dtype=object) array(['/assets/images/docs/ee/role.png', 'Role'], dtype=object) array(['/assets/images/docs/ee/super-admin.png', 'Super Admin'], dtype=object) array(['/assets/images/docs/ee/workspaces.png', 'Workspaces'], dtype=object) ]
docs.konghq.com
, label="default") returns none - Parameters: - points_r - A list of tuples representing rescaled landmark points - centroid_r - A tuple representing the rescaled centroid point - bline_r - A tuple representing the rescaled baseline point - label - Optional label parameter, modifies the variable name of observations recorded. (default label="default") - Context: - Used to estimate the distance and angles of landmark points relative to shape reference landmarks (centroid and pot height aka baseline) - Output data stored: Data ('vert_ave_c', 'hori_ave_c', 'euc_ave_c', 'ang_ave_c', 'vert_ave_b', 'hori_ave_b', 'euc_ave_b', 'ang_ave_b') automatically gets stored to the Outputsclass when this function is ran. These data can always get accessed during a workflow (example below). For more detail about data output see Summary of Output Observations Input rescaled points, centroid and baseline points from plantcv import plantcv as pcv # Set global debug behavior to None (default), "print" (to file), # or "plot" (Jupyter Notebooks or X11) pcv.params.debug = "print" # Identify acute vertices (tip points) of an object # Results in set of point values that may indicate tip points pcv.landmark_reference_pt_dist(points_r=points_r, centroid_r=centroid_r, bline_r=bline_r, label="default") # Access data stored out from landmark_reference_pt_dist avg_vert_distance = pcv.outputs.observations['default']['vert_ave_c']['value'] Representation of many data points collected in two treatment blocks throughout time
https://plantcv.readthedocs.io/en/stable/landmark_reference_pt_dist/
2021-02-25T07:59:00
CC-MAIN-2021-10
1614178350846.9
[array(['../img/documentation_images/landmark_reference_pt_dist/lrpd_example_image.jpg', 'Screenshot'], dtype=object) array(['../img/documentation_images/landmark_reference_pt_dist/lrpd_output.jpg', 'Screenshot'], dtype=object) ]
plantcv.readthedocs.io
Enabling two factor authentication (2FA) in Metalcloud¶ Multi-factor authentication is an optional protection mechanism that greatly improves the security of your account against stolen credentials, phishing attacks etc. Make sure you are logged in into your account. Navigate to Account Security. Alternatively go to Infrastructure Editor > Account Settings Click on the Authenticator link Install one of the authenticator applications on your smartphone. We recommend Google Authenticator. Click on Setup a new authenticator On your smartphone, open Google Authenticator, click on the “+” button. On your smartphone, in Google Authenticator, click on the “Scan barcode” button. On your smartphone, in Google Authenticator scan the barcode on the screen. On your smartphone, in Google Authenticator identify the number generated by the two factor auth. Note that these expire quickly. Scroll down on the authenticator page and enter your password and the number generated by the authenticator. Congratulations, 2FA is now enabled on your account. Everytime you login you will be required to enter the nmber generated by the application. If you’ve lost your smartphone or need reset the authenticator for any reason use Authenticator Reset link.
https://docs.bigstep.com/en/latest/advanced/enabling_two_factor_authentication.html
2021-02-25T07:03:20
CC-MAIN-2021-10
1614178350846.9
[array(['../_images/enabling_2fa13.png', '../_images/enabling_2fa13.png'], dtype=object) ]
docs.bigstep.com
Hello All, I built a Serial driver. In Visual Studio 2019, I used "Desktop" as Target Platform. Now when I am running WHQL tests on my driver, at that time "ApiValidator" test is failing. It says API calls are not universal. Here I want to say, I don't want to make any universal driver. So Microsoft will allow this? Microsoft will provide certificate to my driver?
https://docs.microsoft.com/en-us/answers/questions/276843/whql-certification-unsupported-api-call.html
2021-02-25T07:53:26
CC-MAIN-2021-10
1614178350846.9
[]
docs.microsoft.com
Multiday recordings¶ In the matlab version of suite2p, Henry Dalgleish wrote the utility “registers2p” for multiday alignment, but it has not been ported to python. I recommend trying to run all your recordings together (add all the separate folders to data_path). This has worked well for people who have automated online registration on their microscope to register day by day (scanimage 2018b (free) offers this capability). I highly recommend checking this out - we have contributed to a module in that software for online Z-correction that has greatly improved our recording quality. However, if there are significant non-rigid shifts between days (angle changes etc) and low SNR then concatenating recordings and running them together will not work so well. In this case, (if you have a matlab license) here is a package written by Adam Ranson which is based on similar concepts as ‘registers2p’ by Henry Dalgleish that takes the output of suite2p-python directly:.
https://suite2p.readthedocs.io/en/latest/multiday.html
2021-02-25T08:13:23
CC-MAIN-2021-10
1614178350846.9
[]
suite2p.readthedocs.io
Troubleshooting (AEN 4.2.1)¶ This troubleshooting guide provides you with ways to deal with issues that may occur with your AEN installation. - General troubleshooting steps - Browser error: too many redirects - Error: unix:////opt/wakari/wakari-server/etc/supervisor.sock no such file - Error: “Data Center Not Found” when deleting a project - Forgotten administrator password - Log files being deleted - Error: This socket is closed - Service error 502: Cannot connect to the application manager - 502 communication error on Amazon web services (AWS) - Invalid username - Notebook Error: Cannot download notebook as PDF via LaTeX - Unresponsive wk-serverthread without error messages - Unresponsive wk-gatewaythread without error messages General troubleshooting steps¶ - Clear browser cookies. When you change the AEN configuration or upgrade AEN, cookies remaining in the browser can cause issues. Clearing cookies and logging in again can help to resolve problems. - Make sure NGINX and MongoDB are running. - Make sure that AEN services are set to start at boot, on all nodes. - Make sure that services are running as expected. If any services are not running or are missing, restart them. - Check for and remove extraneous processes. - Check the connectivity between nodes. - Check the configuration file syntax. - Check file ownership. - Verify that POSIX ACLs are enabled. Browser error: too many redirects¶ Error: unix:////opt/wakari/wakari-server/etc/supervisor.sock no such file¶ This is a supervisorctl error. Error: “Data Center Not Found” when deleting a project¶ Forgotten administrator password¶ Use ssh to log into the server as root. Run: /opt/wakari/wakari-server/bin/wk-server-admin reset-password -u SOME_USER -p SOME_PASSWORD NOTE: Replace SOME_USER with the administrator username and SOME_PASSWORD with the password. Log into AEN as the administrator user with the new password. Alternatively you may add an administrator user: Use ssh to log into the server as root. Run: /opt/wakari/wakari-server/bin/wk-server-admin add-user SOME_USER --admin -p SOME_PASSWORD -e YOUR_EMAIL NOTE: Replace SOME_USER with the username, replace SOME_PASSWORD with the password, and replace YOUR_EMAIL with your email address. Log into AEN as the administrator user with the new password. Log files being deleted¶ Log files are being deleted. NOTE: Locations of AEN log files for each process and application are shown in the node sections in Concepts. Cause¶ AEN installers log into /tmp/wakari\_{server,gateway,compute}.log. If the log files grow too large, they might be deleted. Solution¶ To set the logs to be more or less verbose, Jupyter Notebooks uses Application.log_level. To make the logs less verbose than the default, but still informative, set Application.log_level to ERROR. Error: This socket is closed¶ You receive the “This socket is closed” error message when you try to start an application. Cause¶ When the supervisord process is killed, information sent to the standard output stdout and the standard error stderr is held in a pipe that will eventually fill up. Once full, attempting to start any application will cause the “This socket is closed” error. Solution¶ To prevent this issue: - Follow the instructions in Managing services to stop and restart processes. - Do not stop or kill supervisord without first stopping wk-compute and any other processes that use it. To resolve the “This socket is closed” error: Stop wk-compute by running sudo kill -9. Restart the supervisord and wk-compute processes: sudo /etc/init.d/wakari-compute stop sudo /etc/init.d/wakari-compute start Service error 502: Cannot connect to the application manager¶ Gateway node displays “Service Error 502: Can not connect to the application manager.” 502 communication error on Amazon web services (AWS)¶ You receive the “502 Communication Error: This gateway could not communicate with the Wakari server” error message. Cause¶ An AEN gateway cannot communicate with the Wakari server on AWS. There may be an issue with the IP address of the Wakari server. Invalid username¶ Cause¶ The username does not follow 1 or more of these rules: - Must be at least 3 characters and no more than 25 characters. - The first character must be a letter (A-Z) or a digit (0-9). - Other characters can be a letter, digit, period (.), underscore (_) or hyphen (-). - The POSIX standard specifies that these characters are the portable filename character set, and that portable usernames have the same character set. Notebook Error: Cannot download notebook as PDF via LaTeX¶ CentOS/6 Solution¶ Install TeXLive from the TUG site. Follow the described steps. The installation may take some time. Add the installation to the PATHin the file /etc/profile.d/latex.sh. Add the following, replacing the year and architecture as needed: PATH=/usr/local/texlive/2017/bin/x86_64-linux:$PATH Restart the compute node. Unresponsive wk-server thread without error messages¶ Cause¶ Two things can cause the wk-server thread to freeze without error messages: - LDAP freezing - MongoDB freezing If LDAP or MongoDB are configured with a long timeout, Gunicorn can time out first and kill the LDAP or MongoDB process. Then the LDAP or MongoDB process dies without logging a timeout error. Unresponsive wk-gateway thread without error messages¶ Cause¶ If TLS is configured with a passphrase protected private key, wk-gateway will freeze without any error messages.
https://docs.anaconda.com/ae-notebooks/4.2.1/admin-guide/troubleshooting/
2021-02-25T08:38:21
CC-MAIN-2021-10
1614178350846.9
[]
docs.anaconda.com
sceval¶ gempa module to evaluate the goodness of origins setting the origin status or comments Description¶ sceval provides multiple. sceval also provides the plugin evsceval for scevent [2] for evaluating origin comments and status. Station selection by QC parameters¶ During real-time evaluation of origins by station - distance statistics sceval accounts for relevant data conditions given by the waveform quality control (QC) parameters and the bindings configuration (CONFIG). Only stations with status enabled and which have a binding of the setup configured by setupName are considered. Typically, setupName defines the phase detector, e.g. scautopick. Use default for all streams configured by the global bindings. Thus, setupName is used to define the module which delivers the picks. The restriction to a pick module allows to confine sceval specfically to a particular processing pipeline. From the accepted streams only those are considered where the QC parameters meet the value ranges defined in qc.parameters. Station status updates (station enabled or disabled) are received and considered through the message group CONFIG. When origins arrive with arrivals from disabled stations, then the stations are automatically activated. In real time, the QC parameters are provided by scqc [4] through the QC message groups of the SeisComP messaging. sceval must therefore subsrcribe to the message groups of the sending QC system and of the modules sending the origins and arrivals, e.g.: - QC - CONFIG - LOCATION Note - The names of the message groups may be different by configuration of scmaster and the sending modules. - scqc must be running at all times when running sceval in real time and it must compute the requested QC parameters. - To ignore unavailable QC parameters activate Evaluation methods¶ Various evaluation methods evaluate the quality of the origins setting either the status of the origin or adding a comment to the origin. The comment can be evaluated by scevent [2] using the plugin evsceval. The plugin is provided by the sceval package. The origin evaluation is performed in the following order: - minPhase: Number of used phases below threshold, set the origin status - minDepth: Minimum depth, set the origin status - maxDepth: Maximum depth, set the origin status - maxRMS: Maximum RMS, set the origin status - minPhaseConfirm: Number of used phases reached threshold, set the origin status - maxGap: Maximum azimuthal gap, add an origin comment - Station - distance evaluation: set the origin status (main evaluation method) - Extended gap criterion: set the origin status The evaluation methods set the status of an origin to confirmed or to rejected or add a comment to the origin. If the method neither allow confirmation or rejection, the origin status remains unchanged. The evaluation is finished when one method explicitly declares the status of an origin. Only the extended Gap check may re-evaluate origins after the station - distance evaluation. The re-evaluation by the extended Gap check allows to confirm origins which where rejeted before. Such origins are typically found in remote areas, e.g. mid-ocean ridges. By default manually processed origins are excluded from evaluation. Manually processed origins can be explicitly evaluated by setting the origin.manual parameter. For evaluation of origins by their arrivals only arrivals with weight > 0 are considered. minPhase: Minimum number of used phases¶ Declares: rejection An origin is rejected if the number of used phases (arrivals) is less than minPhase. Unused phases, i.e. phases with zero weight, are not counted. minPhase may be over-ruled by distanceProfilesMinPhase: Unless distanceProfilesMinPhase <= 0, the test is skipped if the number of used phases >= distanceProfilesMinPhase allowing for the station - distance evaluation. minDepth: Minimum origin depth¶ Declares: rejection An origin is rejected if the depth of the origin is less than minDepth. maxDepth: Maximum origin depth¶ Declares: rejection An origin is rejected if the depth of the origin is larger than maxDepth. Background¶. maxRMS: Maximum RMS residual¶ Declares: rejection An origin is rejected if the RMS residual of the origin is larger than maxRMS. Background¶ Large RMS residuals may be indicative of low-quality origins. While modules like scautoloc or scanloc check the RMS as a quality measure before sending an origin others may not do so. Origins with large RMS can be safely rejected by sceval but are kept in the database for later evaluation. minPhaseConfirm: Number of used phases reaches a threshold¶ Declares: confirmation An origin is confirmed if the number of used phases (arrivals) is equal or larger than minPhaseConfirm. Unused phases, i.e. phases with zero weight, are not counted. Background¶ The methods applies to areas with dense station coverage but variable data quality. It usually happens that origins with few arrivals nearby the event are confirmed by the Station - distance evaluation. If the event is large enough it may be recorded at more distance stations with sufficiently low background noise while others with higher noise at similar distance do not provide detections. In such cases sceval may not confirm such origins and scevent may prefer the confirmed origin with viewer arrivals over the origin with more arrivals. However, if the number of arrivals is large, you may wish to safely confirm an origin. Set minPhaseConfirm sufficiently high and according to the network layout and quality to cope with such situations. maxGap: Maximum azimuthal gap¶ Declares: property - sets the comment maxGap. Origins receive the comment “maxGap” if the maximum azimuthal gap exceeds maxGap. The comment is evaluated by scevent [2] if the plugin evsceval loaded. Background¶ The maximum azimuthal gap is the largest azimuthal distance between two neighboring stations with respect to the origin. The stations are sorted by their azimuth (Figure: Gap criterion). Station - distance evaluation¶.weights. Background¶ - i is the index over the intervals within the configurable distance range, - stationCount is the number of available stations, - arrivalCount is the number of associated arrivals and - weight is the configurable weight ( distanceProfile.$name.weights) within a defined distance interval. Setup¶. Extended gap criterion¶ Declares: confirmation Origins are confirmed if the number of used observed phases (arrivals) : gapMinPhase AND the maximum azimuthal gap : maxGap. Setting gapMinPhase <= 0 disables the extended gap criterion. Unused phases, i.e. phases with zero weight, are not counted. The confirmation overwrites the origin status if set to rejected by the station - distance evaluation. Background¶. Origin comments in Events List¶ The different algorithms add comment fields to the origin objects which can be used to identify the processing. When an evaluation methods triggers an ection, a comment indicating the algorithm is added to the origin parameters: - scevalMethod - the evaluation algorithm that has changed the origin status - mismatchScore - the mismatch score resulting from the Station - Distance evaluation - maxGap - comment indicating that the station GAP exceeds the configured maxGap. The comments can be shown in the Events list, e.g. of scolv, in a custom column. Configuration of Events list: Showing the mismatchScore in the Events table. Add to scolv.cfgor to global.cfg: eventlist.customColumn = "mismatchScore" # Define the default value if no comment is present eventlist.customColumn.default = "-" # Define the comment id to be used eventlist.customColumn.originCommentID = mismatchScore Show the comment scevalMethod in the Events table. eventlist.customColumn = "evaluation method" # Define the default value if no comment is present eventlist.customColumn.default = "-" # Define the comment id to be used eventlist.customColumn.originCommentID = scevalMethod Playbacks¶ sceval can be used in real-time playbacks or XML-based offline playbacks. Considerations¶ At the time of the playback, some waveform QC parameters, e.g. latency, availabilty, delay, measured at the time of waveform recording may not be available preventing stations to be active for evaluation. The data availability, delay and latency are generally different from values at the time of the data recording. Activate qc.sloppyor use --sloppyto activate stations as soon as wavform QC parameters are received and they are in range. In XML-based real-time playbacks without waveform and offline XML playbacks the waveform QC parameters are generally unavailable for the considered XML data. Activate qc.noQCor use --noQCto ignore all QC messages and to activate all stations which have bindings parameters defined by setupName. In tuning mode or when using XML playback, QC messages are generally ignored and all stations which have bindings parameters defined by setupNameare active. Real-time playbacks¶ sceval can be used in real-time playbacks of waveforms or in XML playbacks using playback_picks provided with the scanloc package. Waveform QC parameters may be unrealistic during such playbacks. Therefore, the QC parameter ranges defined in qc.parameters should be configured with -inf,inf and the parameter qc.sloppy should be activated by configuration or as commandline option. XML-based offline playbacks¶ sceval can be used in offline playbacks for fast data processing of event XML files. The evaluation of waveform QC parameters is inactive during XML-based offline playbacks. scevent [2] considers the origin status when determining the preferred origin. With the evsceval plugin it may also set the event certainty and type. Therefore, in XML-based playbacks sceval should be executed after running the locator, e.g. scanloc, but before scevent (compare with scanloc playbacks [5]): scanloc --ep picks.xml -d type://user:passwd@host/database --debug > origins.xml sceval --ep origins.xml -d type://user:passwd@host/database --debug > origins_eval.xml ... scmag ... scevent --ep mags.xml -d type://user:passwd@host/database --debug > events.xml You may also confine evaluation to one or more origins. Provide the option -O with a comma-separated list of their IDs, e.g.: sceval -d localhost --ep origins.xml -O Origin/abc,Origin/xyz > origins_sceval.xml Plugin: evsceval¶ evsceval is a plugin provided by sceval for scevent [2]. Description¶: - maxGap comment found Percentage check: The percentage of rejected origins of an event if the event has no manual origins. If the percentage exceeds a mismatchScore.rejectedthe. Configuration¶ The evsceval plugin is configured in the module configuration of scevent [2]. Adjust cevent.cfg: Load the evsceval plugin by adding to the global configuration or to the global configuration of scevent. plugins = ${plugins},evsceval sceval.setEventType = true Configure sceval.maxGapdefining the event type set if the preferred origin of an event has a maxGap comment. sceval sets the maxGap comment if the GAP criterion is met. Currently supported value are: “not existing”, “not locatable”,”outside of network interest”,”earthquake”, “induced earthquake”,”quarry blast”,”explosion”,”chemical explosion”, “nuclear explosion”,”landslide”,”rockslide”,”snow avalanche”,”debris avalanche”, “mine collapse”,”building collapse”,”volcanic eruption”,”meteor impact”,”plane crash”, “sonic boom”,”duplicate”,”other”. sceval.maxGap = "not locatable" Configure sceval.rejectedto set the event certainty. sceval.rejected = 25 Configure the parameters related to the multiple-agency check. sceval.multipleAgency.targetAgency = agency, agency1 sceval.multipleAgency.originStatus = reported Note scevent fails to set the event type if the evsceval plugin is loaded but sceval.maxGap is not defined. The plugin overwrites the event type set before. The type of events which have manual origins will not be changed. Tuning: during runtime¶ The results from the station - distance evaluation, specifically the mismatchScore can be observed during runtime without setting the origin status. The mismatchScore can be observed and compared with the event status set by operators. Eventually, the score limits, mismatchScore.confirmed and mismatchScore.rejected can be configured after a comparison. Deactivate mismatchScore.use to calculate the mismatchScore without setting the event status. Active a custom column in the Events table in scolv to view the mismatchScore. Tuning: sceval-tune¶ The tuning mode assists in providing the configuration parameters for a particular profile of the parameter distanceProfiles in the station - distance evaluation. For tuning sceval provides the auxiliary tool sceval-tune as a wrapper to run sceval in tuning mode. In tuning mode sceval considers manually evaluated events/origins to find the best-performing weight profile and threshold values for automatic confirmation or rejection of origins by sceval. The profile and threshold values can be used to configure sceval. Known limitations¶ In the current version the tuning mode has limitations: - The extended GAP evaluation is not considered. - QC parameters are not considered. Work Flow¶and mismatchScore.rejected. The best-performing weight profile and the threshold values are determined by optimizing the automatic w.r.t. the manual evaluation. As a measure, the mistfit is minimized. The configuration parameters and quality parameters are output to the command line. Based on the suggested configuration parameters, the percentages indicate the number of events that would be confirmed or rejected by the automatic system w.r.t. the manual evaluation. The main plot figure is generated to review the performance of the tuning and the configuration parameters. The optional additional plot figure shows the results for the 9 best-performing weight profiles. Two possible values for mismatchScore.rejectedare provided based on the confirmed or on the unflagged origins. Both values should be similar but the value for the confirmed origins should typically be chosen. If mismatchScore.confirmedand mismatchScore.rejectedare largely separated, many origins may remain unflagged by sceval. Increasing mismatchScore.confirmedor decreasing mismatchScore.rejectedwill. Tuning results¶ sceval-tune provides the configuration parameters and the tuning statistics on the command terminal where it was executed. In addition, one main and one additional (–addplots) figure are created for visual inspection of the tuning results. Figure 9: Main figure showing results for the best-performing weight profile. Solid line: mismatchScore values based on flagged events. Dashed line: mismatchScore values based on flagged and unflagged events. References¶ sceval has been demonstrated, promoted and discussed with scientists and the SeisComP community at international science conferences, e.g.: - D. Roessler, B. Weber, E. Ellguth, J. Spazier the team of gempa: Evaluierung der Qualität und Geschwindigkeit von Erdbebendetektionen in SeisComP3, 2017, Bad Breisig, Germany, AG Seismologie meeting - D. Roessler, B. Weber, E. Ellguth, J. Spazier the team of gempa: EVALUATION OF EARTHQUAKE DETECTION PERFORMANCE IN TERMS OF QUALITY AND SPEED IN SEISCOMP3 USING NEW MODULES QCEVAL, NPEVAL AND SCEVAL , 2017, New Orleans, USA, AGU Fall Meeting, abstract S13B-0648. Configuration¶ etc/defaults/global.cfg etc/defaults/sceval.cfg etc/global.cfg etc/sceval.cfg ~/.seiscomp/global.cfg ~/.seiscomp/sceval.cfg sceval inherits global options. minPhase¶ Type: integer Minimum number of phase arrivals (P or S) used for locating. Origins with arrivals fewer than minPhase and fewer then distanceProfilesMinPhase are rejected. Only consider arrivals with weight > 0. Default is 0. minDepth¶ Type: double Unit: km Minimum depth criterion: origins with depth less than minDepth are rejected. Default is -10.0. maxDepth¶ Type: double Unit: km Maximum depth criterion: origins with depth greater than maxDepth are rejected. Default is 745.0. maxRMS¶ Type: double Unit: s Maximum RMS: origins with RMS residual larger than maxRMS are rejected. Default is 3.5. maxGap¶ Type: double Unit: deg Gap criterion: maximum allowed azimuthal gap between adjacent stations providing arrivals to one origin. Origins with a larger gap receive a comment. The status remains unchanged. Only consider arrivals with weight > 0. Default is 360.0. minPhaseConfirm¶ Type: integer Minimum phase confirmation: Origins having at least the configured number of arrivals (associated phase picks) are confirmed. -1 disables the method. Default is -1. gapMinPhase¶ Type: integer Extended gap criterion: Origins with gap lower than maxGap and more than gapMinPhases arrivals are confirmed. gapMinPhase <= 0: ignore check. Default is -1. distanceProfiles¶ Type: list:string Registration of distance profiles for station - distance evaluation. An empty list disables the station - distance evaluation. distanceProfilesMinPhase¶ Type: integer Minimum number of P-phase arrivals for applying the station - distance evaluation. Only consider arrivals with weight > 0. distanceProfilesMinPhase <= 0: ignore parameter. Default is 0. setupName¶ Type: string Config setup name used for the initial setup of the active station list. “scautopick”: consider all stations with scautopick bindings and the streams defined therein. “default”: consider all stations with global bindings and the streams defined therein. Default is scautopick. Note qc.* Waveform quality control (QC) parameters. Ensure, sceval connects to the basic message groups, e.g. QC,CONFIG,LOCATION. qc.parameters¶ Type: list:string Defines QC parameters to observe and the value ranges to consider streams. Each QC parameter is associated with a value range. If any of the defined ranges is exceeded, the corresponding station is disabled. Use ‘-Inf’ resp. ‘Inf’ if no upper or lower bound should exist. scqc must be running. Typical parameters: rms, latency, delay, availability, gaps count, overlaps count, timing quality, offset, spikes count. Find the examples in $SEISCOMP_ROOT/etc/defaults/sceval.cfg. To effectively disable QC parameters set the ranges to -Inf,Inf. qc.sloppy¶ Type: boolean At startup of sceval waveform QC parameters may be initially missing preventing immediate station evaluation. If sloppy is active, a station is activated as soon as one QC parameter arrives which is in range. Out-of-range parameters deactivate the station. Recommended use: waveform playbacks, frequent restarts of sceval. Consider adjusting qc.parameters, e.g. disable the availability check. Default is false. qc.noQC¶ Type: boolean Do not consider QC parameters. This will activate all stations which have a binding configuration defined by setupName. Activate in real-time playbacks without waveforms. Default is false. qc.useDatabase¶ Type: boolean Load QC parameters from the database during startup. Setting to true may slow down the start up. Default is false. Note origin.* Limit the range of origins to be evaluated. Type: string Use to filter incoming origins by author. If omitted no filtering is applied. origin.ignoreStatus¶ Type: list:string Ignore an origin if its status has any of the given states. The option is ignored for manual origins if origin.manual = true. Default is rejected,reported,preliminary,confirmed,reviewed,final. origin.manual¶ Type: boolean Enable evaluation of origins where the evaluation mode is set MANUAL. Default: disable. Check the box to enable the evaluation. Default is false. Note mismatchScore.* Threshold parameters for the station-distance evaluation. They apply to all distance weight profiles. mismatchScore.confirmed¶ Type: double Mismatch between active stations and used stations. If the score is less than or equal to the evaluation status is set to CONFIRMED. Default is 0.5. mismatchScore.rejected¶ Type: double Mismatch between active stations and used stations. If the score is greater than or equal to the evaluation status is set to REJECTED. Default is 0.7. mismatchScore.use¶ Type: boolean Calculate the mismatch score and set the origin status. Deactivating only calculates the score without setting the status. Deactivating is useful for testing and tuning. View the score in scolv. Default is true. Note distanceProfile.* Distance-weight profiles for the station-distance evaluation. = ... distanceProfile.$name.max¶ Type: double Unit: deg Upper epicentral distance of stations for using this profile. Command Line¶ Generic¶ --config-file arg¶ Use alternative configuration file. When this option is used the loading of all stages is disabled. Only the given configuration file is parsed and used. To use another name for the configuration create a symbolic link of the application or copy it, eg scautopick -> scautopick2. Verbosity¶ --print-component arg¶ For each log entry print the component right after the log level. By default the component output is enabled for file output but disabled for console output. --component arg¶ Limits the logging to a certain component. This option can be given more than once. Origin¶ Filter incoming origins by author given in the supplied list. If omitted no filtering is applied. -O , --origins string¶ origin ID(s). Only process origins which have one of the given IDs. Provide a comma-separated list for multiple IDs. Only in offline playbacks with –ep. Mode¶ --sloppy ¶ Enable if sceval is used in a real-time waveform playback to disable waiting for waveform QC messages. Waveform QC messages may be missing or take some time to be created. Arteficial gaps may result from repeated real-time waveform playbacks. Consider adjusting qc.parameters, e.g. disable the availability check. Playbacks without waveforms: Set qc.parameter ranges to -inf,inf.
https://docs.gempa.de/sceval/current/apps/sceval.html
2021-02-25T07:58:05
CC-MAIN-2021-10
1614178350846.9
[array(['../_images/math/312db775fb2b6648a6d8fe690f3e8ce1cf86175e.png', 'mismatchScore =\\ &\\frac{\\sum_{i=1}^{10}weight_i*\\frac{stationCount_i - arrivalCount_i}{stationCount_i}}{\\sum_{i=1}^{10}weight_i}'], dtype=object) array(['../_images/sceval_tune_main.png', '../_images/sceval_tune_main.png'], dtype=object)]
docs.gempa.de
Introduction Kong Gateway (Enterprise) is Kong’s API gateway with enterprise functionality. As part of Kong Konnect, the gateway brokers an organization’s information across all services by allowing customers to manage the full lifecycle of services and APIs. On top of that, it enables users to simplify the management of APIs and microservices across hybrid-cloud and multi-cloud deployments. Kong Gateway is designed to run on decentralized architectures, leveraging workflow automation and modern GitOps practices. With Kong Gateway, users can: - Decentralize applications/services and transition to microservices - Create a thriving API developer ecosystem - Proactively identify API-related anomalies and threats - Secure and govern APIs/services, and improve API visibility across the entire organization Kong Gateway is a combination of several features and modules built on top of the open-sourced Kong Gateway, as shown in the diagram and described in the next section, Kong Gateway (Enterprise) Features. Kong Gateway Enterprise Features Kong Gateway (Enterprise) features are described in this section, including modules and plugins that extend and enhance the functionality of the Kong Konnect platform. Kong Gateway (OSS) Kong Gateway (OSS) is a lightweight, fast, and flexible cloud-native API gateway. It’s easy to download, install, and configure to get up and running once you know the basics. The gateway runs in front of any RESTful API and is extended through modules and plugins which provide extra functionality beyond the core platform. Kong Admin API Kong Admin API provides a RESTful interface for administration and configuration of Services, Routes, Plugins, and Consumers. All of the tasks you perform in the Kong Manager can be automated using the Kong Admin API. For more information, see Kong Admin API. Kong Developer Portal Kong Developer Portal (Kong Dev Portal) is used to onboard new developers and to generate API documentation, create custom pages, manage API versions, and secure developer access. For more information, see Kong Developer Portal. Kong Immunity Kong Immunity uses machine learning to autonomously identify service behavior anomalies in real-time to improve security, mitigate breaches and isolate issues. Use Kong Immunity to autonomously identify service issues with machine learning-powered anomaly detection. For more information, see Kong Immunity. Kubernetes Ingress Controller Kong for Kubernetes Enterprise (K4K8S) is a Kubernetes Ingress Controller. A Kubernetes Ingress Controller is a proxy that exposes Kubernetes services from applications (for example, Deployments, ReplicaSets) running on a Kubernetes cluster to client applications running outside of the cluster. The intent of an Ingress Controller is to provide a single point of control for all incoming traffic into the Kubernetes cluster. For more information, see Kong for Kubernetes. Kong Manager Kong Manager is the Graphical User Interface (GUI) for Kong Gateway (Enterprise). It uses the Kong Admin API under the hood to administer and control Kong Gateway (OSS). Use Kong Manager to organize teams, adjust policies, and monitor performance with just a few clicks. Group your teams, services, plugins, consumer management, and more exactly how you want them. Create new routes and services, activate or deactivate plugins in seconds. For more information, see the Kong Manager Guide. Kong Plugins Kong Gateway plugins provide advanced functionality to better manage your API and microservices. With turnkey capabilities to meet the most challenging use cases, Kong Gateway (Enterprise) plugins ensure maximum control and minimizes unnecessary overhead. Enable features like authentication, rate-limiting, and transformations by enabling Kong Gateway (Enterprise) plugins through Kong Manager or the Admin API. For more information on which plugins are Enterprise-only, see the Kong Hub. Kong Vitals Kong Vitals provides useful metrics about the health and performance of your Kong Gateway (Enterprise) nodes, as well as metrics about the usage of your gateway-proxied APIs. You can visually monitor vital signs and pinpoint anomalies in real-time, and use visual API analytics to see exactly how your APIs and Gateway are performing and access key statistics. Kong Vitals is part of the Kong Manager UI. For more information, see Kong Vitals. Kong Studio Kong Studio enables spec-first development for all REST and GraphQL services. With Kong Studio, organizations can accelerate design and test workflows using automated testing, direct Git sync, and inspection of all response types. Teams of all sizes can use Kong Studio to increase development velocity, reduce deployment risk, and increase collaboration. For more information, see Kong Studio. Try Kong Gateway (Enterprise) Kong Gateway (Enterprise) is bundled with Kong Konnect. Here are a couple of ways to try Konnect, or just the gateway alone: - Try out Kong for Kubernetes Enterprise using a live tutorial at - If you are interested in evaluating Konnect locally, the Kong sales team manages evaluation licenses as part of a formal sales process. The best way to get started with the sales process is to request a demo and indicate your interest.
https://docs.konghq.com/enterprise/2.3.x/introduction/
2021-02-25T07:58:51
CC-MAIN-2021-10
1614178350846.9
[array(['/assets/images/docs/ee/introduction.png', 'Introduction to Kong Gateway (Enterprise)'], dtype=object)]
docs.konghq.com
About the Sound Editor When you double-click on a sound layer in the Timeline view, the Sound Element Editor appears. - Mute/Unmute: This button mutes and unmutes the sound layer during the scene play back. - Layer Name: This field displays the layer's name. - Sound Editor: This button opens the Sound editor. - Start Frame/End Frame: These fields determine the start and end frame of the sound file. - Detect: This button launches the automated lip-sync detection. - Map: This button opens the Map Lip-sync dialog box. - Mouth Shapes: This section shows the automated lip-sync detection during the scene play back. Click on the thumbnail image of each mouth to change the phoneme assigned to the current frame.
https://docs.toonboom.com/help/harmony-17/essentials/sound/about-sound-editor.html
2021-02-25T07:58:07
CC-MAIN-2021-10
1614178350846.9
[array(['../Resources/Images/HAR/Sketch/HAR11/Sketch_soundEditor_view.png', None], dtype=object) ]
docs.toonboom.com
standing experience, you will need to leave the Origin of the Set Tracking Origin node to the default of Floor Level. Click for full image..
https://docs.unrealengine.com/en-US/SharingAndReleasing/XRDevelopment/VR/SteamVR/HowTo/StandingCamera/index.html
2021-02-25T08:41:49
CC-MAIN-2021-10
1614178350846.9
[array(['./../../../../../../../Images/SharingAndReleasing/XRDevelopment/VR/DevelopVR/ContentSetup/VR_Standing_Experiance.jpg', 'VR_Standing_Experiance.png'], dtype=object) ]
docs.unrealengine.com
Allowed daily CPU seconds notice from WebHost You may receive a warning from your webhost about your site exceeding the Allowed CPU seconds. This may happen very rarely during your first backup and it would depend upon the site size and the number of files on your website. Not to worry, since the first backup takes into account all of your files and processes them to be uploaded on your cloud storage, the number of calls will be higher. The subsequent backups will happen real quick and will in no way exceed the daily allowed CPU seconds. Note: On earlier versions, there would be a progress bar that would constantly auto-refresh adding excess load to the existing load. We've changed it to refresh manually thus reducing the CPU usage.
https://docs.wptimecapsule.com/article/17-allowed-daily-cpu-seconds-notice-from-webhost
2021-02-25T07:32:01
CC-MAIN-2021-10
1614178350846.9
[]
docs.wptimecapsule.com
a resource-based policy to a private CA. A policy can also be applied by sharing a private CA through AWS Resource Access Manager (RAM). For more information, see Attach a Policy for Cross-Account Access . The policy can be displayed with GetPolicy and removed with DeletePolicy . About Policies See also: AWS API Documentation See 'aws help' for descriptions of global parameters. put-policy --resource-arn <value> --policy <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --resource-arn (string) The Amazon Resource Number (ARN) of the private CA to associate with the policy. The ARN of the CA can be found by calling the ListCertificateAuthorities action. --policy (string) The path and file name of a JSON-formatted IAM policy to attach to the specified private CA resource. If this policy does not contain all required statements or if it includes any statement that is not allowed, the PutPolicy action returns an InvalidPolicyException . For information about IAM policy and statement structure, see Overview of JSON.
https://docs.aws.amazon.com/cli/latest/reference/acm-pca/put-policy.html
2021-02-25T09:11:17
CC-MAIN-2021-10
1614178350846.9
[]
docs.aws.amazon.com
.. include:: ../../Includes.txt ==================================================== Feature: #26 - Add addToCart form framework finisher ==================================================== See :issue:`26` Description =========== In order to allow to individualize events when adding them to the cart, a new addToCart finisher for the form framework allow to load a form and submit the form with the selected event. The fields are handled as frontend variants in the cart product. They have no intended impact on the price or stock handling. An example form template 'Cart Events - Example' can be used to create different forms for different events. It can also serve as a template for manually creating forms. .. IMPORTANT:: An update of the database is required. As this field is new there are no problems to be expected. .. NOTE:: The form is currently loaded via AJAX into a
https://docs.typo3.org/typo3cms/extensions/cart_events/2.2.0/_sources/Changelog/2.1/Feature-26-AddToCartFormFrameworkFinisher.rst.txt
2021-02-25T07:56:54
CC-MAIN-2021-10
1614178350846.9
[]
docs.typo3.org
Component best practices This page gives guidance on the best ways to use components. This will help reduce your game’s bandwidth and help your game scale. When to use events You should use an event when you want to broadcast information between workers about a transient occurrence (something that has happened which does not need to be persisted). This information will only be sent to the other workers which also this event and play a muzzle flash animation when it is triggered. An event is appropriate here instead of a property because no persistent state (state which is required after this event has happened) has been changed on the player that is shooting the gun. Metadata used for the visualisation, such as bullet type, can also be included in the event. If you wanted players to be able to communicate using emotes (displays of emotion), events would be suitable to trigger different animations on the player. You would want to use an event instead of a property because the emote does not need to be persisted. There is no state that needs to be changed on the player. When to use commands You should use a command when you want to communicate with while the command is being sent, the command will respond with a failure.) Therefore, properties that change in the command’s handler should be in the same component as the command. As an example of what that would look like, say you had entities that could be set on fire: type Void {} component Ignitable { id = 1001; bool is_on_fire = 1; command Void ignite(Void); } is_on_fire and ignite should be in the same component, because when the ignite command is received, the worker will have write authority on the Ignitable component and can therefore set is_on_fire to true. Properties that change atomically When you change a component, you specify which properties have changed, and then send the update in a ComponentUpdate operation. The operation will contain all of the properties that have changed. Therefore, if you need to make an atomic change to an entity, where multiple properties need to be changed at the same time, you must send these properties in the same ComponentUpdate operation. You must therefore put these properties in the same component. Optimising bandwidth The vast majority of worker communication comes from keeping the state of the entities synchronised across workers. This takes the form of ComponentUpdate operations. Therefore, it is important to reduce the bandwidth used by component updates if you want to optimise your game. There are two main ways to do this: reduce the rate of component updates and reduce the size of each component update. 1. Reducing the rate of component updates Only send component updates when you need to If a component’s data is only needed by it’s authoritative worker, you don’t necessarily need to send every update to SpatialOS. Instead, you can just change the data locally and send the up-to-date component at a low frequency, for example, every few seconds. You can also use the AuthorityLossImminent state to send all of the component updates when the worker is about to lose authority. For example, an NPC might have a large NPCStateMachine component, with many AI-related properties. Other workers that have this entity checked out do not need to know about these properties or receive updates whenever the properties change, but the data must be persisted when the NPC crosses a worker boundary or when a snapshot is taken. Sending the up-to-date NPCStateMachine component every few seconds, instead of whenever it changes, means that in the case of a worker failure or if a snapshot is taken, SpatialOS contains data that is no older then a few seconds. You can also use the AuthorityLossImminent state to determine if the worker is about to lose authority of the NPCStateMachine component. If the worker is about to lose authority, you should keep the NPCStateMachine component synchronised with SpatialOS by sending every property change in a component update. This ensures that SpatialOS can send the most up-to-date data to the worker which gains authority. Update components as infrequently as possible Typically, there are a few components which will make up most of the component updates. These are often components that encode transform properties such as the position and rotation of the characters in your game. There are techniques you can use to require less frequent transform updates, for example: Client-side interpolation Interpolate the position and rotation of the characters between component updates. Client-side prediction When you interpolate between component updates, you need to visualise your characters with a delay. To avoid this, you can instead predict the current position and rotation of your characters based on the previous component updates. When a new component update comes in, you can then correct your prediction. The effectiveness of this technique will depend on how accurate your predictions are. Both of these techniques allow for smooth movement on the position = 1; } Then periodically, you update the position and rotation of each NPC: SendComponentUpdate(NPCRotation.rotation, npc.rotation); SendComponentUpdate(NPCPosition.position, npc.position); This will send two component updates, each with an overhead, to SpatialOS. Instead, if you merge these components into one, you can use one component update instead: component NPCTransform { id = 1002; Quaternion rotation = 1; Vector3f position = 2; } 2. Reducing the size of each component update Component updates contain the properties of the component that have been changed. Therefore, to reduce the size of each component update, you need to reduce the size in bytes of the component’s properties. For example, say each character in your game has a Rotation component, with the rotation encoded as a Quaternion: type Quaternion { double x = 1; double y = 2; double z = 3; double w = 4; } component Rotation { id = 1002; // 32 bytes Quaternion rotation = 1; } With this implementation, each rotation value in the component updates will contain 4 doubles, which has a size of 32 bytes. Now it may be the case that in your game, characters can only rotate around one axis, for example, the y-axis. In this case, you can just transmit the y Euler angle. This can be encoded and decoded into a Quaternion locally by each worker. You can also make this a float instead of a double, as you may not need very high accuracy. The Rotation component now becomes: component Rotation { id = 1002; // 4 bytes float y_rotation = 1; } Now, each rotation value in the component updates contains only 1 float, which has a size of 4 bytes. Protobuf encoding We can do better than 4 bytes though. Component properties get encoded into a protobuf message, which is then sent to and from SpatialOS. In a protobuf message, every value a float or double takes will have a size of 4 or 8 bytes respectively. For integers, however, the smaller the integer value, the less bits the integer will use. Therefore, if you represent your Euler angle as a variable size integer, the value will be encoded using fewer bytes. You can choose the accuracy that you want, for example, 1 integer value representing 0.1 degrees. This, therefore, means that our integer value will range from 0 to 3600. Protobuf can then encode this value as a variable size integer, using 1 or 2 bytes. component Rotation { id = 1002; // 1 or 2 bytes uint32 y_rotation_x10 = 1; } Custom transform component As you’ve seen above, you can use techniques such as integer quantisation to reduce the size of rotation and position components. If you are updating these at a high frequency, it is important to optimise the encoding for these as much as possible. You have also seen that properties that change together, such as position and rotation, should be in the same component. Therefore, it is important to create and use your own custom, game-specific, transform component for high-frequency transform updates instead of the built-in Position component, which is 24 bytes. However, SpatialOS uses the Position component for tasks such as load balancing your game and updating the inspector. Therefore, it is important to still keep this component updated at a low frequency, for example, every two seconds. Redundant properties Removing redundant properties is another way to reduce the size of component updates. Often, there are properties that can be inferred from other properties, or the local state of the entity. Take this simple example: component PlayerInfo { id = 1001; Date birthday = 1; int32 age = 2; } The player’s age can be inferred from the other properties, therefore you should not include it in the component. This example sounds trivial, but often there are properties such as an explicit state machine state that can be inferred from other data.
https://docs.improbable.io/reference/13.0/shared/design/component-best-practices
2018-08-14T08:58:57
CC-MAIN-2018-34
1534221208750.9
[]
docs.improbable.io
Traffic Ops - Default Profiles¶ Traffic Ops has the concept of Parameters and Profiles, which are an integral function within Traffic Ops. To get started, a set of default Traffic Ops profiles need to be imported into Traffic Ops to get started to support Traffic Control components Traffic Router, Traffic Monitor, and Apache Traffic Server. Download Default Profiles from here Minimum Traffic Ops Profiles needed¶ - EDGE_ATS_<version>_<platform>_PROFILE.traffic_ops - MID_ATS_<version>_<platform>_PROFILE.traffic_ops - TRAFFIC_MONITOR_PROFILE.traffic_ops - TRAFFIC_ROUTER_PROFILE.traffic_ops - TRAFFIC_STATS_PROFILE.traffic_ops Steps to Import a Profile¶ - Navigate to ‘Parameters->Select Profile’ - Click the “Import Profile” button at the bottom - Choose the specific profile you want to import from your download directory - Click ‘Submit’ - Continue these steps for each Minimum Traffic Ops Profiles needed above
http://traffic-control-cdn.readthedocs.io/en/release-2.1.0/admin/traffic_ops/default_profiles.html
2018-08-14T08:52:25
CC-MAIN-2018-34
1534221208750.9
[]
traffic-control-cdn.readthedocs.io
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Container for the parameters to the DeleteAlias operation. Deletes the specified Lambda function alias. For more information, see Introduction to AWS Lambda Aliases. This requires permission for the lambda:DeleteAlias action. Namespace: Amazon.Lambda.Model Assembly: AWSSDK.Lambda.dll Version: 3.x.y.z The DeleteAliasRequest type exposes the following members This operation deletes a Lambda function alias var response = client.DeleteAlias(new DeleteAliasRequest { FunctionName = "myFunction", Name = "alias" });
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Lambda/TDeleteAliasRequest.html
2018-08-14T09:04:35
CC-MAIN-2018-34
1534221208750.9
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Constructs AmazonLexModelBuildingServiceClient with the credentials loaded from the application's default configuration, and if unsuccessful from the Instance Profile service on an EC2 instance. Example App.config with credentials set. Namespace: Amazon.LexModelBuildingService Assembly: AWSSDK.LexModelBuildingService.dll Version: 3.x.y.z .NET Standard: Supported in: 1.3 .NET Framework: Supported in: 4.5, 4.0, 3.5
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/LexModelBuildingService/MLexModelBuildingServicector.html
2018-08-14T09:11:21
CC-MAIN-2018-34
1534221208750.9
[]
docs.aws.amazon.com
In Gothru Overlay editor you are able to create a multilevel menu via Overlay Menu Settings (Plugin Manager > Menu > Side/Top Menu) as well as via the Active Plugin (Active Plugins > Side/Top. After adding some menu lists, select the menu list that you want to use as a submenu and drag them below the target menu or parent menu. In this example, we will add Business class bar, Business class seat, and Business class bed to the target menu, SAS Business Class. 4. Click on the "in" button on the intended sub-menus. You will see that the submenus automatically added to the target menu or parent menu. 5. Click "Save" and “Publish” to see the results.
https://docs.gothru.co/submenu/
2021-01-15T17:30:55
CC-MAIN-2021-04
1610703495936.3
[]
docs.gothru.co
Using Node.js Modules with Azure applications This document provides guidance on using Node.js modules with applications hosted on Azure. It provides guidance on ensuring that your application uses a specific version of a module as well as using native modules with Azure. If you are already familiar with using Node.js modules, package.json and npm-shrinkwrap.json files, the following information provides a quick summary of what is discussed in this article: Azure App Service understands package.json and npm-shrinkwrap.json files and can install modules based on entries in these files. Azure Cloud Services expects all modules to be installed on the development environment, and the node_modules directory to be included as part of the deployment package. It is possible to enable support for installing modules using package.json or npm-shrinkwrap.json files on Cloud Services; however, this configuration requires customization of the default scripts used by Cloud Service projects. For an example of how to configure this environment, see Azure Startup task to run npm install to avoid deploying node modules Note Azure Virtual Machines are not discussed in this article, as the deployment experience in a VM is dependent on the operating system hosted by the Virtual Machine. Node.js Modules Modules are loadable JavaScript packages that provide specific functionality for your application. Modules are usually installed using the npm command-line tool, however some modules (such as the http module) are provided as part of the core Node.js package. When modules are installed, they are stored in the node_modules directory at the root of your application directory structure. Each module within the node_modules directory maintains its own directory that contains any modules that it depends on, and this behavior repeats for every module all the way down the dependency chain. This environment allows each module installed to have its own version requirements for the modules it depends on, however it can result in quite a large directory structure. Deploying the node_modules directory as part of your application increases the size of the deployment when compared to using a package.json or npm-shrinkwrap.json file; however, it does guarantee that the versions of the modules used in production are the same as the modules used in development. Native Modules App Service does not support all native modules, and might fail when compiling modules with specific prerequisites. While some popular modules like MongoDB have optional native dependencies and work fine without them, two workarounds proved successful with almost all native modules available today: (the current values can be checked on runtime from properties process.arch and process.version). Azure App Service can be configured to execute custom bash or shell scripts during deployment, giving you the opportunity to execute custom commands and precisely configure the way npm install is being run. For a video showing how to configure that environment, see Custom Website Deployment Scripts with Kudu. Using a package.json file The package.json file is a way to specify the top level dependencies your application requires so that the hosting platform can install the dependencies, rather than requiring you to include the node_modules. Note When deploying to Azure App Service, if your package.json file references a native module you might see an error similar to the following example when publishing the application using Git: npm ERR! [email protected] install: 'node-gyp configure build' npm ERR! 'cmd "/c" "node-gyp configure build"' failed with 1 Using a npm-shrinkwrap.json file command will use the versions currently installed in the node_modules folder, and record these versions to the npm-shrinkwrap.json file. After the application has been deployed to the hosting environment, the npm install command is used to parse the npm-shrinkwrap.json file and install all the dependencies listed. For more information, see npm-shrinkwrap. Note When deploying to Azure App Service, if your npm-shrinkwrap.json file references a native module you might see an error similar to the following example when publishing the application using Git: npm ERR! [email protected] install: 'node-gyp configure build' npm ERR! 'cmd "/c" "node-gyp configure build"' failed with 1 Next steps Now that you understand how to use Node.js modules with Azure, learn how to specify the Node.js version, build and deploy a Node.js web app, and How to use the Azure Command-Line Interface for Mac and Linux. For more information, see the Node.js Developer Center.
https://docs.microsoft.com/en-us/azure/nodejs-use-node-modules-azure-apps
2021-01-15T19:28:52
CC-MAIN-2021-04
1610703495936.3
[]
docs.microsoft.com
Mod operator (Visual Basic) Divides two numbers and returns only the remainder. Syntax result = number1 Mod number2 Parts result Required. Any numeric variable or property. number1 Required. Any numeric expression. number2 Required. Any numeric expression. Supported types All numeric types. This includes the unsigned and floating-point types and Decimal. Result The result is the remainder after number1 is divided by number2. For example, the expression 14 Mod 4 evaluates to 2. Note There is a difference between remainder and modulus in mathematics, with different results for negative numbers. The Mod operator in Visual Basic, the .NET Framework op_Modulus operator, and the underlying rem IL instruction all perform a remainder operation. The result of a Mod operation retains the sign of the dividend, number1, and so it may be positive or negative. The result is always in the range (- number2, number2), exclusive. For example: Public Module Example Public Sub Main() Console.WriteLine($" 8 Mod 3 = {8 Mod 3}") Console.WriteLine($"-8 Mod 3 = {-8 Mod 3}") Console.WriteLine($" 8 Mod -3 = {8 Mod -3}") Console.WriteLine($"-8 Mod -3 = {-8 Mod -3}") End Sub End Module ' The example displays the following output: ' 8 Mod 3 = 2 ' -8 Mod 3 = -2 ' 8 Mod -3 = 2 ' -8 Mod -3 = -2 Remarks If either number1 or number2 is a floating-point value, the floating-point remainder of the division is returned. The data type of the result is the smallest data type that can hold all possible values that result from division with the data types of number1 and number2. If number1 or number2 evaluates to Nothing, it is treated as zero. Related operators include the following: The \ Operator (Visual Basic) returns the integer quotient of a division. For example, the expression 14 \ 4evaluates to 3. The / Operator (Visual Basic) returns the full quotient, including the remainder, as a floating-point number. For example, the expression 14 / 4evaluates to 3.5. Attempted division by zero If number2 evaluates to zero, the behavior of the Mod operator depends on the data type of the operands: - An integral division throws a DivideByZeroException exception if number2cannot be determined in compile-time and generates a compile-time error BC30542 Division by zero occurred while evaluating this expressionif number2is evaluated to zero at compile-time. - A floating-point division returns Double.NaN. Equivalent formula The expression a Mod b is equivalent to either of the following formulas: a - (b * (a \ b)) a - (b * Fix(a / b)) Floating-point imprecision When you work with floating-point numbers, remember that they do not always have a precise decimal representation in memory. This can. Example The following example uses the Mod operator to divide two numbers and return only the remainder. If either number is a floating-point number, the result is a floating-point number that represents the remainder. Debug.WriteLine(10 Mod 5) ' Output: 0 Debug.WriteLine(10 Mod 3) ' Output: 1 Debug.WriteLine(-10 Mod 3) ' Output: -1 Debug.WriteLine(12 Mod 4.3) ' Output: 3.4 Debug.WriteLine(12.6 Mod 5) ' Output: 2.6 Debug.WriteLine(47.9 Mod 9.35) ' Output: 1.15 Example. firstResult = 2.0 Mod 0.2 ' Double operation returns 0.2, not 0. secondResult = 2D Mod 0.2D ' Decimal operation returns 0.
https://docs.microsoft.com/en-us/dotnet/visual-basic/language-reference/operators/mod-operator
2021-01-15T19:11:01
CC-MAIN-2021-04
1610703495936.3
[]
docs.microsoft.com
A file path defines the location of a file in a web site's folder structure. A file path is used when we link to such external files like: - web pages - images - JavaScripts - style sheets There exist absolute and relative file paths. Absolute File Paths¶ An absolute file path is the URL to access an internet file. Example of an absolute file path:¶ Try it Yourself » <html> <head> <title>Title of the document</title> </head> <body> <h2>Absolute File Path Example</h2> <img src="" alt="Sea" style="width:300px"> </body> </html> Relative File Paths¶ A relative file path mentions a file that is relative to the current page. In the example below, the file path points out a file in the images folder that is located at the root of the current web: Example of a relative file path:¶ Try it Yourself » <html> <head> <title>Title of the document</title> </head> <body> <h2>Relative File Path Example</h2> <img src="/build/images/smile-small.jpg" alt="Smile" width="290" height="260"> </body> </html>
https://www.w3docs.com/learn-html/html-file-paths.html
2021-01-15T17:17:00
CC-MAIN-2021-04
1610703495936.3
[]
www.w3docs.com
Qubole Data Service Documentation¶ Important We would like to publicly and unequivocally acknowledge that a few words and phrases in terminology used in our industry and subsequently adopted by Qubole over the last decade is insensitive, non-inclusive, and harmful. We are committed to inclusivity and correcting these terms and the negative impressions that they have facilitated. Qubole is actively replacing the following terms in our documentation and education materials: - Master becomes Coordinator - Slave becomes Worker - Whitelist becomes Allow List or Allowed - Blacklist becomes Deny List or Denied These terms have been pervasive for too long in this industry which is wrong and we will move as fast as we can to make necessary corrections, while we need your patience we will not take it for granted. Please do not hesitate to reach out if you feel there are other areas for improvement, if you feel we are not doing enough or moving fast enough or if you want to discuss anything further in this area.
https://docs-gcp.qubole.com/en/latest/
2021-01-15T18:33:37
CC-MAIN-2021-04
1610703495936.3
[]
docs-gcp.qubole.com
Artifact Store¶ Closely related to the Metadata Store is the Artifact Store. It will store all intermediary pipeline step results, a binary representation of your source data as well as the trained model. You have the following options to configure the Artifact Store: Local (Default) Remote (Google Cloud Storage) Soon: S3 compatible backends Local (Default)¶ By default, ZenML will use the .zenml directory created when you run zenml init at the very beginning. All artifacts and inputs will be persisted there. Using the default Artifact Store can be a limitation to the integrations you might want to use. Please check the documentation of the individual integrations to make sure they are compatible. Remote (GCS/S3)¶ Many experiments and many ZenML integrations require a remote Artifact Store to reliable retrieve and persist pipeline step artifacts. Especially dynamic scenarios with heterogenous environments will be only possible when using a remote Artifact Store. Configuring a remote Artifact Store for ZenML is a one-liner using the CLI: zenml config artifacts set gs://your-bucket/sub/dir
https://docs.zenml.io/repository/artifact-store.html
2021-01-15T17:18:19
CC-MAIN-2021-04
1610703495936.3
[]
docs.zenml.io
To connect to a file source, please follow the below steps – - Login to Intellicus – Navigate – Administration – Configure – Databases Tab - Click Add - The page will display the following options Figure 1: Adding a file system-based connection Connection Properties Make sure you uncheck read only in the settings of this connection as it will be accessed by both Intellicus and Data Science environment to exchange data. Once you have given the required details, you can Test your connection if it has been successfully created, Save it once you get the message ‘Connection Test Succeeded.’ Cancel if you want to start afresh. You can Delete a connection once you have saved it.
https://docs.intellicus.com/documentation/using-intellicus-19-1/data-science-with-intellicus-python-19-1/creating-connections-19-1/connecting-to-a-file-source-19-1/
2021-01-15T17:59:37
CC-MAIN-2021-04
1610703495936.3
[array(['https://docs.intellicus.com/wp-content/uploads/2019/13/Addingafileconnection_python.png', 'Creating a file connection'], dtype=object) ]
docs.intellicus.com
Attention This documentation is under active development, meaning that it can change over time as we refine it. Please email [email protected] if you require assistance, or have suggestions to improve this documentation. Machine Learning on M3¶ Software¶ There are a number of machine learning packages available on M3. TensorFlow¶ To use TensorFlow on M3: # Loading module module load tensorflow/2.0.0-beta1 # Unloading module module unload tensorflow/2.0.0-beta1 PyTorch¶ To use PyTorch on M3: # Loading module module load pytorch/1.3-cuda10 # Unloading module module unload pytorch/1.3-cuda10 Keras¶ Keras uses Tensorflow as a backend, and has advised users to use tf.keras going forward as it is better maintained and integrates well with Tensorflow features. To read more about these recommendations, see. This means to use Keras on M3, you will need to load Tensorflow: Then in your Python code, import Keras from Tensorflow and code as usual. # Import Keras from tensorflow import keras # Coding in Keras here Reference datasets¶ MASSIVE hosts Machine Learning related data collections in the interest of reducing the pressure on user storage, minimising download wait times, and providing easy access to researchers. Currently hosted data collections are listed below, and we are open to hosting others that are valuable to the community. If you would like to request a data collection be added, or see more information about the data collections hosted on M3, see Data Collections on M3. Quick guide for checkpointing¶ Why checkpointing?¶ Checkpoints in Machine/Deep Learning experiments prevent you from losing your experiments due to maximum walltime reached, blackout, OS faults or other types of bad errors. Sometimes you want just to resume a particular state of the training for new experiments or try different things. Pytorch:¶ First of all define a save_checkpoint function which handles all the instructions about the number of checkpoints to keep and the serialization on file: def save_checkpoint(state, condition, filename='/output/checkpoint.pth.tar'): """Save checkpoint if the condition achieved""" if condition: torch.save(state, filename) # save checkpoint else: print ("=> Validation condition not meet") Then, inside the training (usually a for loop with the number of epochs), we define the checkpoint frequency (at the end of every epoch) and the information (epochs, model weights and best accuracy achieved) we want to save: # Training the Model for epoch in range(num_epochs): train(...) # Train acc = eval(...) # Evaluate after every epoch # Some stuff with acc(accuracy) ... # Get bool not ByteTensor is_best = bool(acc.numpy() > best_accuracy.numpy()) # Get greater Tensor to keep track best acc best_accuracy = torch.FloatTensor(max(acc.numpy(), best_accuracy.numpy())) # Save checkpoint if is a new best save_checkpoint({ 'epoch': start_epoch + epoch + 1, 'state_dict': model.state_dict(), 'best_accuracy': best_accuracy }, is_best) To resume a checkpoint, before the training we have to load the weights and the meta information we need: checkpoint = torch.load(resume_weights) start_epoch = checkpoint['epoch'] best_accuracy = checkpoint['best_accuracy'] model.load_state_dict(checkpoint['state_dict']) print("=> loaded checkpoint '{}' (trained for {} epochs)".format(resume_weights, checkpoint['epoch'])) Keras¶ Keras provides a set of functions called callback: you can think of it as events that triggers at certain training state. The callback we need for checkpointing is the ModelCheckpoint which provides all the features we need according to the checkpoint strategy adopted. from keras.callbacks import ModelCheckpoint # Checkpoint In the /output folder filepath = "/output/mnist-cnn-best.hdf5" # Keep only a single checkpoint, the best over test accuracy. checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max') # Train model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test), callbacks=[checkpoint]) # <- Apply our checkpoint strategy Keras models have the load_weights() method which load the weights from a hdf5 file. To load the model’s weight you have to add this line just after the model definition: ... # Model Definition model.load_weights(resume_weights) Tensorflow¶ Tensorflow 2 encourages uses to take advantage of the Keras API, and likewise uses Keras to save checkpoints while training a model. As described above, Keras provides a set of functions called callback: you can think of callbacks as events that triggers at a certain training state. The callback we need for checkpointing is the ModelCheckpoint. This code shows how to save the weights of a model at regular epoch intervals using the Keras API in Tensorflow 2. # Define where the checkpoints should be stored # This saves the checkpoints from each epoch with a corresponding name checkpoint_filename = "./checkpoints/cp--{epoch:04d}.cpkt" checkpoint_dir = os.path.dirname(checkpoint_filename) # Create callbacks to save the weights every 5 epochs cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_filename, verbose = 1, save_weights_only = True, # How often to save save_freq = 5) # Train the model, using our callback to save the weights every 5 epochs model.fit(x_train, y_train, batch_size = batch_size, epochs = epochs, validation_data = (x_test, y_test), callbacks = [cp_callback]) Once we’ve done this, we can load the weights into a model to resume training or start a new model with them. # To load the weights into a model, first get the latest checkpoint latest_cp = tf.train.latest_checkpoint(checkpoint_dir) # Load the latest weights into the model from the checkpoint new_model.load_weights(latest_cp)
https://docs.massive.org.au/communities/machine-learning.html
2021-01-15T18:13:22
CC-MAIN-2021-04
1610703495936.3
[]
docs.massive.org.au
The Print Composer Composer, you need to load some raster or vector layers in the QGIS map canvas and adapt their properties to suit your own convenience. After everything is rendered and symbolized to your liking, click the New Print Composer icon in the toolbar or choose File ‣ New Print Composer. You will be prompted to choose a title for the new Composer. To demonstrate how to create a map please follow the next instructions. You can add multiple elements to the Composer. It is also possible to have more than one map view or legend or scale bar in the Print Composer canvas, on one or several pages. Each element has its own properties and, in the case of the map, its own extent. If you want to remove any elements from the Composer canvas you can do that with the Delete or the Backspace key. The Composer Manager is the main window to manage print composers in the project. It helps you add new print composer, duplicate an existing one, rename or delete it. To open the composer manager dialog, click on the Composer Manager button in the toolbar or choose Composer ‣ Composer Manager. It can also be reached from the main window of QGIS with Project ‣ Composer Manager. Figure Composer Manager: The Print Composer Manager The composer manager lists in its upper part all the available print composers in the project. The bottom part shows tools that help to: With the Composer Manager, it’s also possible to create new print composers as an empty composer or from a saved template. By default, QGIS will look for templates in user directory (~/.qgis2/composer_templates) or application’s one (ApplicationFolder/composer_templates). QGIS will retrieve all the available templates and propose them in the combobox. The selected template will be used to create a new composer when clicking Add button. You can also save composer templates in another folder. Choosing specific in the template list offers the ability to select such template and use it to create a new print composer.
https://docs.qgis.org/2.14/ru/docs/user_manual/print_composer/overview_composer.html
2021-01-15T19:08:57
CC-MAIN-2021-04
1610703495936.3
[]
docs.qgis.org
Best practices FAQ 3. Best practices / FAQ What's the best sequence of setting up ACaaS for a new building? Install the cylinders Create locks and lock groups in ACaaS (or import them) Create users an assign roles in order to login (frontend, apps) Map the cylinders to locks using the BlueID lock installation app Connect mobile devices to ACaaS either by sending invitation e-mails to residents, and using the BlueID keys app or by using your custom app with our SDK to connect the device to our backend Create keys for residents/lock-groups Should I use predefined or customized roles for users? Every ACaaS tenant comes with a set of predefined roles so that you - as a customer - don't have to worry about the correct set of permissions. The most powerful user is the "administrator" who is allowed to view and use all the features we offer. You should limit the access to this role in any case. E.g., this user is also allowed to use the NFC writer app, but sharing the credentials might become dangerous. We highly recommend to use the permissions and roles where necessary. It is possible to operate ACaaS with the predefined roles and limit the permissions to whats necessary at the same time. If the predefined roles do not match your requirements create your own customized roles and assign them. Take your time to find the best fit in order to prevent misusage of the system.
https://docs.blueid.net/userGuides/accessGUI/Best%2Bpractices%2BFAQ/
2021-01-15T17:03:05
CC-MAIN-2021-04
1610703495936.3
[]
docs.blueid.net
Design Hub This application requires a configuration file to be passed as a command line argument. The configuration file holds all the key settings regarding database connection, networking, security, persistence, plugins and more. The minimum configuration requires database details, JChem Microservices details, a license and a port to be specified, but a more general sample of typical options is below. Note that some values point to files or other services. These should be changed to match your environment - of course you can store your ChemAxon license files or Design Hub plugins anywhere. See below in the spreadsheet for your full list of options: { "jchemMicroServices": "", "databaseHost": "db" "databasePort": 5432, "databaseName": "designhub", "databasePassword": "CHANGETHIS", "secretKey": "CHANGETHIS", "authentication": { "internal": { "type": "local", "label": "Testing domain", "accounts": [{ "username": "demo", "password": "demo" }] } } }. Design Hub requires JChem Microservices (DB, IO, Structure Manipulation and optionally Markush Enumeration modules) to function. Using Design Hub to it in the configuration’s converterService option. Design Hub": "/config/certs/example-com-key.pem", "cert": "/config": "/config/certs/example-com-key.pem", "cert": "/config": "/config/certs/example-com-key.pem", "cert": "/config/certs/example-com-cert.pem" } } Next, add a secretKeyand configure your identity provider in authentication: config.json: { "port": 443, "hostname": "example.com", "tls": { "key": "/config/certs/example-com-key.pem", "cert": "/config/certs/example-com-cert.pem" }, "secretKey": "lalilulelo", "authentication": { "internal": { "type": "saml", "label": "Example", "entryPoint": "" } } } Using the domain chosen in the authentication settings, in this example "internal", you can now register the Design Hub1/util/convert/clean", "molconvertws": "/rest-v1/util/calculate/molExport", "reactionconvertws": "/rest-v1/util/calculate/reactionExport", "stereoinfows": "/rest-v1/util/calculate/cipStereoInfo" }, "displaySettings": { "toolbars": "reporting" } }
https://docs.chemaxon.com/display/lts-fermium/design-hub-configuration-guide.md
2021-01-15T17:00:49
CC-MAIN-2021-04
1610703495936.3
[]
docs.chemaxon.com
- Created by Unknown User (willem), last modified by Umut Uyurkulak on Nov 13, 2019 Overview Inside the Material Editor you can browse, create, edit and assign materials to objects, terrain, etc. It also allows you to copy existing materials and use them as a starting point for your specific material needs. Materials files (*.mtl) are ideally located in the same folder as the object they were created for. However, as they can be freely assigned to any objects inside your game project, the only restriction is to place them inside the game project folder structure. Editing materials not only allows you to choose textures and shaders, but also fine-tune placement, animation of textures and tweaking parameters relevant for a chosen shader type. For more information about material effects, shaders, and texture maps, see the sub-pages of the Legacy Material Editor. To open the Material Editor, go to Tools → Material Editor. Click on the name of the area names below to learn more about them. 1. Menu Accessed via the icon situated at the top-right corner of the tool, the Menu contains the following options: File Menu Edit Menu Material Menu Toolbars When a tool has a toolbar, whether this is a default one or a custom one, the options above are also available when right-clicking in the toolbar area (only when a toolbar is already displayed). Window Menu Help Menu 2. Materials Overview In this panel, the material (or in the case of multi-materials, the sub-materials that are contained in your material) are shown.. - When the Sync Selection button in the toolbar is active, selecting a different asset in a tool-specific Asset Browser will instantly open it. This button makes it very easy to cycle through different assets and edit them on the fly. - You can drag an asset from the standalone Asset Browser and drop it onto the asset picker fields on the Material Editor's Properties panel. If the asset is compatible, e.g. textures, the field will be highlighted in green and the asset will be assigned to it; otherwise, it will be highlighted in red and the asset will not be assigned. Right Click Menu When right clicking on a (sub-)material, the following options appear: 4. Properties The Material Editor has basic material properties that are common to all materials, as well as shader parameters that can be adjusted if shaders are enabled. Material Settings Opacity Settings These settings are important if using alpha channels for transparency, for example when creating leaves and wire fences. Lighting Settings This section controls the material color and specular settings. Texture Maps In the material editor are different slots for texture maps. With texture maps, users can control different shader effects. To add a texture to the material slot, click the Browse button ( ) or copy and paste the texture path into an empty slot. The Texture Map slot layout is fairly straightforward: the left column displays which type of texture map the slot is associated with and the middle column is where the texture map file is located. Hovering over the name of a texture gives you a preview of that texture. The table below shows the textures available for the Illum shader. The types of Texture Maps available will be different depending on which Shader you have selected. All of these Texture Maps (for example Diffuse, Normal, Specular) have the following options when they're expanded: The rotator and oscillator functions are only available for diffuse and decal textures because of technical limitations. You can create many interesting effects like animated glow by using the decal slot: Advanced Settings Shader Params The Shader Params differ depending on the shader that is selected. See this page (and subpages) for more detailed information for each separate shader. Shader Generation Params The Shader Generation Params differ depending on the shader that is selected. See this page (and subpages) for more detailed information for each separate shader. Vertex Deformation The Vertex Deformation feature offers the opportunity to influence geometry. You can choose between different types, and each of them will deform your model in a different way. Vertex deformation is applied in the direction of the vertex normal. Layer Presets 5. Preview In the Preview panel, you can see what the material you have selected looks like. You can also see the material on several predefined objects, as well as any object in your project, to get a better idea of what it would look like in-game. When right-clicking on a preview, a context menu appears: Texture Tooltips In the new Material Editor, there are two different tooltips that can be displayed, a small one, with many details about the texture, and a large one, that displays the texture image in a bigger window, showing the image in much more detail. This larger tooltip can be displayed by holding CTRL: Small tooltip: Large tooltip:
https://docs.cryengine.com/display/CEMANUAL/Material+Editor
2021-01-15T18:21:03
CC-MAIN-2021-04
1610703495936.3
[array(['/download/attachments/35259544/material_editor_reference.jpg?version=1&modificationDate=1557483958000&api=v2', None], dtype=object) array(['/download/attachments/35259544/drag_and_drop_assets_me.gif?version=1&modificationDate=1560775106000&api=v2', None], dtype=object) array(['/download/attachments/35259544/effectexample.gif?version=1&modificationDate=1570020689000&api=v2', None], dtype=object) array(['/download/attachments/35259544/SmallTooltip.jpg?version=2&modificationDate=1547201912000&api=v2', None], dtype=object) array(['/download/attachments/35259544/LargeTooltip.jpg?version=2&modificationDate=1547202959000&api=v2', None], dtype=object) ]
docs.cryengine.com
If at any point you'd like to change the application name, add/remove API Keys, and modify Workflow management for Predict, you may do so by visiting the application details page and changing the values. If you'd like to delete an application, you may do so at any time by visiting the application details page and pressing the 'Delete application' button. You'll be asked to confirm your change. Please note that once you delete an application, we cannot recover it. You will also lose all images, concepts and models associated with that application. Proceed with caution.
https://docs.clarifai.com/getting-started/applications/application-settings
2021-01-15T17:53:50
CC-MAIN-2021-04
1610703495936.3
[]
docs.clarifai.com
Domains Settings/Status Configure and manage email domains protected by Email Gateway. Go to. Add a domain To view the instructions: - Expand Configure External Dependencies. - Under Inbound Settings, click the link for your chosen provider. - Use the information to help you configure your email domain. Click Outbound Settings to view your outbound relay host. To add a domain: - Click Add Domain. - In the Email Domain text field enter your email domain. Example: example.com. Domain ownership must be verified before mail will be delivered through Sophos Central. To verify domain ownership, you need to add a TXT record to your domain. Adding this record will not affect your email or other services. - Click Verify Domain Ownership. - Use the details given in Verify Domain Ownership to add the TXT record to your Domain Name Server (DNS).Note This can take up to ten minutes to take effect. - Click Verify.Caution You cannot save an unverified domain. You must correct any issues with the domain ownership verification. - Select the direction you want to configure the domain for. If you select Inbound and Outbound you will need to select an outbound gateway from the drop-down list. If you select Custom Gateway, at least one IP/CIDR (subnet range) is required. Enter the IP and CIDR and click Add. You can add multiple IP addresses/ranges. - Select whether you wish to use a mail host or a mail exchange (MX) record in the Inbound destination drop-down list.Note You must use a mail exchange record if you want to use multiple destinations. - If you selected Mail Host enter an IP address or an FQDN (fully-qualified domain name) in the IP/FQDN text field. Example: 111.111.11.111 or [email protected]. - If you selected MX enter an FQDN in the MX text field. Example: [email protected]. - In the Port text field enter the port information for your email domain. - Expand Information to configure External Dependencies. The Mail Routing Settings tab shows the Sophos delivery IP addresses and MX record values used for configuring mail flow for your region. - Make a note of the appropriate settings so that you know where to allow SMTP traffic from. - Ensure that you configure your mail flow for Email Security. - Click Save to validate your settings. - Click the Base Policy link to configure spam protection. You can add extra domains at any time. Delete a domain To delete a domain, click on the gray cross to the right of the domain you wish to remove. Edit a domain To edit a domain, click on the domain name in the list, change the settings and click Save.
https://docs.sophos.com/central/Customer/help/en-us/central/Customer/concepts/eg_EmailDomains.html
2021-01-15T18:23:15
CC-MAIN-2021-04
1610703495936.3
[]
docs.sophos.com
This plugin integrates Affiliates Enterprise with AddToAny -The Universal Sharing Platform- and their Share Buttons by AddToAny WordPress plugin. With Affiliates Enterprise and the Share Buttons by AddToAny plugin, the extension requires no specific setup – URLs to pages shared are converted into affiliate links automatically when affiliates are logged in. Access to this extension is included with Affiliates Enterprise.
http://docs.itthinx.com/document/affiliates-enterprise/setup/settings/integrations/addtoany/
2018-09-18T17:29:38
CC-MAIN-2018-39
1537267155634.45
[]
docs.itthinx.com
Newsletters are story containers, they have a title and a description. They combine one or more stories into a publication that will be sent out to newsletter subscribers through a campaign. Managing Newsletters There is a section dedicated to creating, editing and managing newsletters Newsletters > Newsletters. In this section, you can create and edit your newsletters, assign recipients including Newsletter subscribers and members of specific groups. When you add a new newsletter there, give it a title and an optional description. Newsletter subscribers will receive the newsletter by default. You can also select specific groups and exclude normal subscribers from receiving the newsletter. Note that to target specific user groups, the Groups plugin is required. For example, let’s say you want to make an announcement to the marketing department and only members of the Marketing group should receive this announcement. You would create one story that contains the announcement and assign the story to a new newsletter that is not sent to subscribers but only to the Marketing group. Add the newsletter to a new campaign and run it. Announcement made! Creating newsletters in Stories A newsletter can also be created while editing a story – you can use the Newsletters box for that:
http://docs.itthinx.com/document/groups-newsletters/newsletters/
2018-09-18T17:07:43
CC-MAIN-2018-39
1537267155634.45
[array(['http://docs.itthinx.com/wp-content/uploads/2015/04/Add-New-Newsletter.png', 'Add New Newsletter'], dtype=object) array(['http://docs.itthinx.com/wp-content/uploads/2015/04/Screen-Shot-2015-04-15-at-16.36.25-300x280.png', 'Screen Shot 2015-04-15 at 16.36.25'], dtype=object) array(['http://docs.itthinx.com/wp-content/uploads/2015/04/Newsletters-Box.png', 'Newsletters Box'], dtype=object) ]
docs.itthinx.com
CruiseControl Cru. What is Continuous Integration? Continuous Integration by Martin Fowler and Matthew Foemmel.. Related links - CruiseControl.NET: port of CruiseControl to the .NET platform - CruiseControl.rb: port of CruiseControl to the Ruby/Rails platform - ConfigurationGUI: a Java WebStart Swing GUI for creating cruisecontrol configuration files and monitoring project status. - CCScrape: a Java WebStart application that makes easy work of driving XFDs from your CruiseControl build results - see the CruiseControl wiki for a list of other 3rd Party Tools that work with CruiseControl.
http://docs.huihoo.com/cruisecontrol/2.7/
2008-05-16T20:44:09
crawl-001
crawl-001-009
[]
docs.huihoo.com
“If Numerai’s model does well, they can attract AUM [assets under management]. If AUM increases, payouts will increase. If payouts increase, we’re more profitable. If we’re more profitable, then we’re happy. I want that positive feedback loop to have zero interference.” “Your first year of trading is your tuition. You’re probably going to lose all of it, but you’re going to learn a lot.” — Arbitrage “If only we could create such a thing.” — Arbitrage, full of sorrow .” “I don’t know, I have no idea because I don’t optimize sharpe.” -Arbitrage
https://docs.numer.ai/community-content/numerai-community-office-hours/office-hours-recaps/ohwa-s01e08
2022-05-16T21:01:27
CC-MAIN-2022-21
1652662512249.16
[]
docs.numer.ai
Organizations Organizations equip companies deploying large fleets of Particle devices with sophisticated, Enterprise-grade administrative tools and capabilities. An Organization represent a top-level shared account, distinct from any individual user, that can own many products, have a team, and a centralized billing account. With an Organization, you can: - Centralize visibility and control by housing many Particle projects underneath a single, shared account - Collaborate at scale through tiered team management and cascading role based access controls (RBAC) Organizations are available in the Growth and Enterprise Tiers. Products that were started in the Free Tier in an individual account can be migrated to an Organization when upgrading to Growth or Enterprise. Setting up your Organization When you become an Enterprise customer with Particle, an Organization will be created on your behalf. At this time, you will provide the Particle team with some important information that will be used to configure your Organization properly: - Name: The name you'd like to use for your Organization, most often matching the name of your company. - Owner: The Particle username of the person you'd like to act as the account owner. This is a person that will have the highest levels of permissions for the Organization and all Products underneath it. - Products to transfer: You likely have started scaling one or more deployments using Products as a self-service customer before becoming an Enterprise customer. These Products will be re-associated to be owned by the Organization -- with access to those Products governed by RBAC rules of the org. Organization vs. Sandbox Spaces With an Organization, you will now have two distinct Spaces within the Console: - Sandbox space: Everything owned by your Particle user account intended for personal use of the platform. Personal devices, SIMs, and Products are associated and are owned by a single individual. The Sandbox space can contain up to 100 cellular or Wi-Fi devices in any combination at no additional charge. - Organization space: Everything owned by an Organization intended for business use of the platform. This is a shared account meant to centralize visibility and access across Particle deployments amongst a group of team members. Providing these two contexts allows you to use a single account for multiple purposes -- namely for both business and personal use. In this way, you can think of your Particle account as your identity on the platform, under which are the contexts in which you use the product. In the Console, you will notice a dropdown in the top-left corner of the navigation: This is your space switcher that will allow you to easily toggle between your Organization and Sandbox (Personal) spaces. When the space switcher is expanded, you will see the Organization(s) that you are a member of: Organization owned Products Clicking on the name of an Organization will expose the space for that org. You will be directed to the list of Products that belong to the Organization. You can also click on the Products icon () in the sidebar to reveal this list at any time. All team members of the Organization will have access to these products in accordance with their role. Platform usage associated with devices and SIM data usage housed in these products will be automatically billed to the Organization based on the parameters of your Enterprise agreement with Particle. New Products can be created from the Console (by clicking on the + New Product button here) or the API that will be assigned to the Organization. Team Management With an Organization, you get improved team management and access controls capabilities. Each Organization has a team that can be managed to grant access to the Products owned by the Organization. This is most useful for core team members at your company that should have visibility into all the IoT activity of your organization. In the Organization space, click on the Team icon () in the sidebar to reveal your Organization's team: Note: When an Organization is first created on your behalf, a single team member will be added -- the user who you've indicated should be the Owner of the org. That Owner should then use the team management tools in the Console to invite other team members to the Organization. Tiered Access One of the main benefits of having an Organization team is the introduction of tiered levels of access. This allows you to take a single action to grant access to many Products, instead of needing to re-invite a team member separately each time. Specifically, permissions at the Organization level cascade down to the Product level. That is, if you are invited to an Organization team, you will automatically inherit access to the Products belonging to the Organization. However, there may be cases when you specifically do not want to grant Organization-wide access -- but instead are seeking to give access to specific Products. A separate, distinct team can also be defined at the Product level to enable single-Product collaborators. This many be because you work with technical contractors that engage for a short period of time in a specific context. Or, you may have departments with team members who only need access to a subset of the projects associated with your Organization. To see the full list of who has access to a given Product, click on it from the list on the Products view (), then click on the Product's Team icon () in the sidebar. This will show you both the Product team as well as the users who have inherited access to this Product based on the Organization that it belongs to: The combination of the Organization and Product Teams give you the flexibility and fine-grained control needed to define the right hierarchy and levels of access to meet your company's needs. Product Collaborator View Once you are invited to collaborate on a product team (vs being part of an organization team), and accepted the invitation the product will show up in your Sandbox space with the following distinctions: - If you are collaborating on a product which belong to an organization it will show up in your Sandbox space with a suitcase icon next to the name of the organization that invited you. - If you are collaborating on a product which belong to an individual - as his/her sandbox product - it will show up in your Sandbox space with a person icon next to the Particle username (email) of the individual that invited you. - If it is your own sandbox product it will show up with the same person icon and your Particle username next to it. Here's an example: The product on the left belongs to the organization "Hex goods", which invited the user to collaborate on that specific product only. The product on the right belongs to the user [email protected] Roles Each person on the Organization team has a role that dictates their level of permissions. The role assigned to an Organization team member impacts both their permissions as it relates to the Organization as well as each Product belonging to the org. This means team members will have the same role both at the Organization and Product level. There are four available roles: -. For a comprehensive discussion of roles and permissions, please check out the tutorial on Team Access Controls. To give a specific example, if you are invited to an Organization team with the Administrator role, you will automatically inherit access to the Products belonging to the Organization with the same role (Administrator). At the Organization level, being an Administrator means you can take all actions like inviting new members to the org team. At the Product level, you can take all fleet management actions like releasing a version of firmware to devices as well as the ability to manage the Product team. Inviting Team Members You can click on Invite Team Member to add new members to the Organization team. Remember that this will grant this user access to all Products owned by the org at the specified role. If you instead only want to provide access to single Products, you should invite the user as a Product team member instead. When a person is invited to an Organization team, any existing memberships to Product teams belonging to the Organization will be removed — this is because an Organization team membership will grant access to all Products owned by the org.
https://docs.particle.io/tutorials/product-tools/organizations/
2022-05-16T22:27:58
CC-MAIN-2022-21
1652662512249.16
[]
docs.particle.io