content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Windows User Guide¶
Connect dcrd to the Decred network¶
Step One
Note that all
dcrd,
dcrwallet, and
dcrctl commands must be executed in the directory where your Decred files are! Start
dcrd:
C:\Decred> dcrd -u <username> -P <password>
Start dcrwallet:
C:\Decred> dcrwallet -u <username> -P <password>
Step Two
Generate a new wallet address:
C:\Decred> dcrctl -u <username> -P <password> --wallet getnewaddress
Copy the new address (output from the last command). Close/stop
dcrd and
dcrwallet by pressing
ctrl+c in each window.
Step Three
Restart
dcrd using the command:
C:\Decred> dcrd --miningaddr <new address from step two or your web client wallet address>
Right-click on
start_local.bat and click
Edit. Change the username and password to match the credentials used in step 1. Save and close start_local.bat For reference, here is the command in start_local.bat:
C:\Decred> cgminer --blake256 -o -u <username> -p <password> --cert "%LOCALAPPDATA%\Dcrd\rpc.cert"
Step Two
If dcrd is not finished syncing to the blockchain, wait for it to finish, then proceed to the next step. When it is finished, it will show:
[INF] BMGR: Processed 1 block in the last 5m34.49s
Step Three
Double click on
start_local.bat.
cgminer should open. double clicking
cgminer.exe. This concludes the basic solo cgminer setup guide. For more information on cgminer usage and detailed explanations on program functions, refer to the official cgminer README. | https://docs.decred.org/mining/proof-of-work/solo-mining/cgminer/windows/ | 2017-06-22T18:20:26 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.decred.org |
Overview
Thank:
Powerful DataBinding to Objects, Collections, XML and WCF services - Binding Telerik RadBreadcrumb is as simple as setting a single property. The binding sources which the breadcrumb supports include Objects, XML, WCF services.
Auto-complete - Telerik RadBreadcrumb control features an auto-complete searching mechanism
TextMode and UI Navigation- The Telerik RadBreadcrumb control enhances further your web application’s capabilities through the rich navigation functionality. Your users can navigate in the control by entering a destination in a text mode or by choosing an item from the RadBreadcrumbItem's children collection.
Styling and Appearance - The RadBreadcrumb control ships with several pre-defined themes that can be used to style it. Furthermore, Telerik's unique style building mechanism allows you to change the skin’s color scheme with just a few clicks.
Keyboard Support - Navigate through the nodes of the breadcrumb without using the mouse. The keyboard can replace the mouse by allowing you to perform navigation, expanding and selecting the breadcrumb items.
Item Images - RadBreadcrumb gives you the ability to define images for each item
Enhanced Routed Events Framework - To help your code become even more elegant and concise, we. | http://docs.telerik.com/devtools/wpf/controls/radbreadcrumb/overvew | 2017-06-22T18:33:51 | CC-MAIN-2017-26 | 1498128319688.9 | [array(['images/breadcrumb_wpf_icon.png', 'breadcrumb wpf icon'],
dtype=object)
array(['images/radbreadcrumb-overview-1.png', None], dtype=object)] | docs.telerik.com |
Smart Tags
In R1 2017 we introduced the Smart Tags feature which provides easy access to the most important documentation articles related to each control. Though the feature is especially useful for users that have recently started using Telerik controls, it might come handy for experienced users as it provides easy navigation to the control's documentation and demos.
As shown in Figure 1, when you focus a certain Telerik control in the Visual Studio Designer, the Smart Tag arrow would appear on the right top corner of the control.
Figure 1: Smart Tag icon on RadGridView:
Unfolding the Smart Tag presents you with a list of different useful resources as well as the possibility to search the forum for certain topics. Figure 2 shows the default menu for the RadGridView control. The appearance and resources vary depending on the control.
Figure 2: Available resources for the RadGridView control:
| http://docs.telerik.com/devtools/wpf/smarttags-overview | 2017-06-22T18:33:24 | CC-MAIN-2017-26 | 1498128319688.9 | [array(['images/smarttag.png', 'Smart Tag icon on RadGridView'],
dtype=object)
array(['images/smarttag2.png',
'Available resources for the RadGridView control'], dtype=object)] | docs.telerik.com |
Recently Viewed Topics
On the top navigation bar, click the Scans button.
The My Scans page appears.
On the left pane, click Target Groups.
The Target Groups page appears.
In the list of target groups, click the target group that you want to modify.
The Edit Target Group page appears, where you can manage the target group settings.. | https://docs.tenable.com/cloud/Content/Scans/EditATargetGroup.htm | 2017-06-22T18:30:10 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.tenable.com |
Submitting Data¶
To submit your test data to Treeherder, you have two options:
This is the new process Task Cluster.
This is historically how projects and users have submitted data to Treeherder. This requires getting Hawk credentials approved by a Treeherder Admin. There is a client library to help make this easier. However, there is no schema to validate the payload against. But using the client to build your payload will help you get it in the accepted form. Your data only goes to the host you send it to. Dev instances can not subscribe to this data.", "destinations": [ 'treeherder' ], "projects": [ 'mozilla-inbound._' ], },
Treeherder will bind to the exchange looking for all combinations of routing
keys from
destinations and
projects listed above. For example with
the above config, we will only load jobs with routing keys of
treeherder.mozilla-inbound._
If you want all jobs from your exchange to be loaded, you could simplify the config by having values:
"destinations": [ '#' ], "projects": [ '#' ],
If you want one config to go to Treeherder Staging and a different one to go to Production, please specify that in the bug. You could use the same exchange with different routing key.
Using the Python Client¶
There are two types of data structures you can submit with the Python client: job and resultset collections. The client provides methods for building a data structure that treeherder will accept. Data structures can be extended with new properties as needed, there is a minimal validation protocol applied that confirms the bare minimum parts of the structures are defined.
See the Python client section for how to control which Treeherder instance will be accessed by the client.
Authentication is covered here.
Job Collections¶
Job collections can contain test results from any kind of test. The revision provided should match the associated revision in the resultset structure. The revision is the top-most revision in the push. The job_guid provided can be any unique string of 50 characters at most. A job collection has the following data structure.
[ { 'project': 'mozilla-inbound', 'revision': ', 'product_name': 'spidermonkey', 'reason': 'scheduler', 'who': 'spidermonkey_info__mozilla-inbound-warnaserr', 'desc': 'Linux x86-64 mozilla-inbound spidermonkey_info-warnaserr build', 'name': 'SpiderMonkey --enable-sm-fail-on-warnings Build', # The symbol representing the job displayed in # treeherder.allizom.org 'job_symbol': 'e', # The symbol representing the job group in # treeherder.allizom.org 'group_symbol': 'SM', 'group_name': 'SpiderMonkey', 'submit_timestamp': 1387221298, 'start_timestamp': 1387221345, 'end_timestamp': 1387222817, 'state': 'completed', 'result': 'success', 'machine': 'bld-linux64-ec2-104', 'build_platform': { 'platform':'linux64', 'os_name': 'linux', 'architecture': 'x86_64' }, 'machine_platform': { 'platform': 'linux64', 'os_name': 'linux', 'architecture': 'x86_64' }, 'option_collection': {'opt': True}, # jobs can belong to different tiers # setting the tier here will determine which tier the job # belongs to. However, if a job is set as Tier of 1, but # belongs to the Tier 2 profile on the server, it will still # be saved as Tier 2. 'tier': 2, # the ``name`` of the log can be the default of "buildbot_text" # however, you can use a custom name. See below. 'log_references': [ { 'url': '...', 'name': 'buildbot_text' } ], # The artifact can contain any kind of structured data associated with a test. 'artifacts': [{ 'type': 'json', 'name': '', 'blob': { my json content here} }], # List of job guids that were coalesced to this job 'coalesced': [] }, ... ]
see Specifying Custom Log Names for more info.
Usage¶
If you want to use TreeherderJobCollection to build up the job data structures to send, do something like this:
from thclient import (TreeherderClient, TreeherderClientError, TreeherderJobCollection) tjc = TreeherderJobCollection() for data in dataset: tj = tjc.get_job() tj.add_revision( data['revision'] ) tj.add_project( data['project'] ) tj.add_coalesced_guid( data['coalesced'] ) tj.add_job_guid( data['job_guid'] ) tj.add_job_name( data['name'] ) tj.add_job_symbol( data['job_symbol'] ) tj.add_group_name( data['group_name'] ) tj.add_group_symbol( data['group_symbol'] ) tj.add_description( data['desc'] ) tj.add_product_name( data['product_name'] ) tj.add_state( data['state'] ) tj.add_result( data['result'] ) tj.add_reason( data['reason'] ) tj.add_who( data['who'] ) tj.add_tier( 1 ) tj.add_submit_timestamp( data['submit_timestamp'] ) tj.add_start_timestamp( data['start_timestamp'] ) tj.add_end_timestamp( data['end_timestamp'] ) tj.add_machine( data['machine'] ) tj.add_build_info( data['build']['os_name'], data['build']['platform'], data['build']['architecture'] ) tj.add_machine_info( data['machine']['os_name'], data['machine']['platform'], data['machine']['architecture'] ) tj.add_option_collection( data['option_collection'] ) tj.add_log_reference( 'buildbot_text', data['log_reference'] ) # data['artifact'] is a list of artifacts for artifact_data in data['artifact']: tj.add_artifact( artifact_data['name'], artifact_data['type'], artifact_data['blob'] ) tjc.add(tj) client = TreeherderClient(client_id='hawk_id', secret='hawk_secret') client.post_collection('mozilla-central', tjc)
If you don’t want to use TreeherderJobCollection to build up the data structure to send, build the data structures directly and add them to the collection.
from thclient import TreeherderClient, TreeherderJobCollection tjc = TreeherderJobCollection() for job in job_data: tj = tjc.get_job(job) # Add any additional data to tj.data here # add job to collection tjc.add(tj) client = TreeherderClient(client_id='hawk_id', secret='hawk_secret') client.post_collection('mozilla-central', tjc)
Job artifacts format¶
Artifacts can have name, type and blob. The blob property can contain any valid data structure accordingly to type attribute. For example if you use the json type, your blob must be json-serializable to be valid. The name attribute can be any arbitrary string identifying the artifact. Here is an example of what a job artifact looks like in the context of a job object:
[ { 'project': 'mozilla-inbound', 'revision_hash': ', # ... # other job properties here # ... 'artifacts': [ { "type": "json", "name": "my first artifact", 'blob': { k1: v1, k2: v2, ... } }, { 'type': 'json', 'name': 'my second artifact', 'blob': { k1: v1, k2: v2, ... } } ] } }, ... ]
A special case of job artifact is a “Job Info” artifact. This kind of artifact will be retrieved by the UI and rendered in the job detail panel. This is what a Job Info artifact looks like:
{ "blob": { "job_details": [ { "url": "", "value": "website", "content_type": "link", "title": "Mozilla home page" }, { "value": "bar", "content_type": "text", "title": "Foo" }, { "value": "This is <strong>cool</strong>", "content_type": "raw_html", "title": "Cool title" } ], }, "type": "json", "name": "Job Info" }
All the elements in the job_details attribute of this artifact have a mandatory title attribute and a set of optional attributes depending on content_type. The content_type drives the way this kind of artifact will be rendered. Here are the possible values:
Text - This is the simplest content type you can render and is the one used by default if the content type specified is not recognised or is missing.
This content type renders as:
<label>{{title}}</label><span>{{value}}</span>
Link - This content type renders as an anchor html tag with the following format:
{{title}}: <a title="{{value}}" href="{{url}}" target="_blank">{{value}}</a>
Raw Html - The last resource for when you need to show some formatted content.
Some Specific Collection POSTing Rules¶
Treeherder will detect what data is submitted in the
TreeherderCollection
and generate the necessary artifacts accordingly. The outline below describes
what artifacts Treeherder will generate depending on what has been submitted.
See Schema Validation for more info on validating some specialized JSON data.
JobCollections¶
Via the
/jobs endpoint:
- Submit a Log URL with no
parse_statusor
parse_statusset to “pending”
- This will generate
text_log_summaryand
Bug suggestionsartifacts
- Current Buildbot workflow
- Submit a Log URL with
parse_statusset to “parsed” and a
text_log_summaryartifact
- Will generate a
Bug suggestionsartifact only
- Desired future state of Task Cluster
- Submit a Log URL with
parse_statusof “parsed”, with
text_log_summaryand
Bug suggestionsartifacts
- Will generate nothing
- Submit a
text_log_summaryartifact
- Will generate a
Bug suggestionsartifact if it does not already exist for that job.
- Submit
text_log_summaryand
Bug suggestionsartifacts
- Will generate nothing
- This is Treeherder’s current internal log parser workflow
Specifying Custom Log Names¶
By default, the Log Viewer expects logs to have the name of
buildbot_text
at this time. However, if you are supplying the
text_log_summary artifact
yourself (rather than having it generated for you) you can specify a custom
log name. You must specify the name in two places for this to work.
- When you add the log reference to the job:
tj.add_log_reference( 'my_custom_log', data['log_reference'] )
- In the
text_log_summaryartifact blob, specify the
lognameparam. This artifact is what the Log Viewer uses to find the associated log lines for viewing.
{ "blob":{ "step_data": { "steps": [ { "errors": [ ], "name": "step", "started_linenumber": 1, "finished_linenumber": 1, "finished": "2015-07-08 06:13:46", "result": "success", } ], "errors_truncated": false }, "logurl": "", "logname": "my_custom_log" }, "type": "json", "id": 10577808, "name": "text_log_summary", "job_id": 1774360 }(See other entries in that file for examples of the data to fill.)
- | http://treeherder.readthedocs.io/submitting_data.html | 2017-06-22T18:20:53 | CC-MAIN-2017-26 | 1498128319688.9 | [] | treeherder.readthedocs.io |
Select a value from the Layers and Properties list to control how to export object styles from Revit Architecture to AutoCAD (or other CAD applications).
When you export a Revit view to DWG or DXF, each Revit category is mapped to an AutoCAD layer, as specified in the Export Layer dialog. In AutoCAD, the layer controls the display of the entities (Revit elements), including their colors, line weights, and line styles. In Revit Architecture, you define object styles in the Object Styles dialog. (See Object Styles.) The Layers and Properties setting determines what happens to a Revit element if it has attributes (object styles) that differ from those defined for its category. In AutoCAD and in Revit Architecture, view-specific element graphics are referred to as overrides.
Select one of the following values:
For example, suppose that, in a Revit Architecture project, most walls display with solid black lines, with a line weight of 5. In a floor plan, however, you have changed the view-specific element graphics for one wall to use dashed blue lines, with a line weight of 7.
When you export this view to DWG or DXF and, for Layers and Properties, select: | http://docs.autodesk.com/REVIT/2011/ENU/filesUsersGuide/WS1a9193826455f5ff104d7f510f19418261-6e66.htm | 2014-12-18T12:27:45 | CC-MAIN-2014-52 | 1418802766292.96 | [] | docs.autodesk.com |
Use a canvas strategy
The large area of a typical computer interface allows you to present an application with a mix of content and UI components. The same application that is created for the BlackBerry PlayBook to do this is to categorize your canvas as either continuous or discrete. A continuous canvas contains content that can be arbitrarily subdivided (for example, a map or a building blueprint). A discrete canvas contains content that has obvious, defined subcomponents (for example, a deck of cards, a contact list, or an eBook).
On a continuous canvas, consider allowing users to move (pan) slowly through the content, move quickly, zoom in and out, and perhaps rotate. On a discrete canvas, you should also consider allowing users to move (shift) slowly or quickly through the content, but there are likely some other actions that you might want to enable using gestures. For example, you can flip over a card, navigate within a contact list, or jump to the next chapter in a document.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/zh-tw/developers/deliverables/43087/Use_a_canvas_strategy_1562260_11.jsp | 2014-12-18T12:37:52 | CC-MAIN-2014-52 | 1418802766292.96 | [array(['bma1320689624402_lowres_en-us.jpg',
'This image shows an example of a canvas strategy.'], dtype=object)] | docs.blackberry.com |
.
Setting as the default,. | http://docs.codehaus.org/pages/viewpage.action?pageId=231082607 | 2014-12-18T12:35:23 | CC-MAIN-2014-52 | 1418802766292.96 | [] | docs.codehaus.org |
Upgrade Guide
Local Navigation
Configure permissions for the Windows account
On each computer that you want to install the BlackBerry® Enterprise Server Express components on, you must configure permissions for the Windows® account that you want to use to install the BlackBerry Enterprise Server Express components and run the services for the BlackBerry Enterprise Server Express.
Without the correct permissions, the BlackBerry Enterprise Server Express cannot run.
- Right-click My Computer. Click Manage.
- In the left pane, expand Local Users and Groups.
- Navigate to the Groups folder.
- In the right pane, double-click Administrators.
- Click Add.
- In the Enter the object names to select field, type the Windows account name that you want the services for the BlackBerry Enterprise Server Express to use (for example, BESAdmin).
- Click OK.
- Click Apply.
- Click OK.
- On the taskbar, click Start > Programs > Administrative Tools > Local Security Policy.
- Configure the following permissions for the Windows account:
- On the taskbar, click Start > Programs > Administrative Tools > Computer Management.
- Add the Windows account to the local administrators group.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/admin/deliverables/27981/Configure_permissions_for_the_MS_Windows_account_899838_11.jsp | 2014-12-18T12:43:47 | CC-MAIN-2014-52 | 1418802766292.96 | [] | docs.blackberry.com |
User Guide
Local Navigation
Use a shortcut for switching typing input languages when you're typing
- On the Home screen or in a folder, click the Options icon.
- Click Typing and Language > Language and Method.
- Press the
key > Save.
Next topic: Troubleshooting: Language
Previous topic: Change the language
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/33213/Use_shortcut_for_switching_langs_70_1658393_11.jsp | 2014-12-18T12:36:48 | CC-MAIN-2014-52 | 1418802766292.96 | [] | docs.blackberry.com |
The Intelligent Document Processing (IDP) application is primarily a mechanism to extract information from documents in order to digitize that data.
This process requires users to:
This document teaches users how to use IDP to upload, classify, and reconcile documents, as well as how to view information about processed documents, such as document status and the extracted information. It also provides an overview of the document processing metrics.
Not all users will have access to all actions and views. See the groups reference page for more information on what type of access each security group provides.
IDP transforms unstructured data from PDF documents into structured data. The application only accepts documents in PDF format, so before you get started, you may need to convert your documents. Appian Community offers multiple plug-ins to convert documents from other file formats to PDF:
We don't recommend converting Excel files to PDF for use in IDP. Instead, Appian can parse information from Excel files using rules. Use the Excel Tools plug-in to extract information from this file format.
If you need to upload documents manually, IDP features an easy-to-use upload form for document processing. However, you can also use it as a subprocess in a larger workflow. Furthermore, you can upload documents from external systems automatically, using a web API.
To upload documents manually:
After you upload the documents, the classification and extraction process will start.
The status of each document displays on the DOCUMENTS tab, along with other information and metrics about each document.
On the top-right corner of the page, click refresh
to view the updated status. You can also use the filters at the top of the page to find certain documents.
The possible statuses are:
The Reconciled By column shows who completed the reconciliation task for a document. If the document type has automatic validations set up, the reconciliation task is skipped and Straight Through Processed appears in this column. If automatic validation fails, a user will need to complete the reconciliation task and their name appears in this column.
After documents are uploaded, tasks are assigned to users so that they can classify documents and confirm or correct the extracted information. See Tasks for more information on using tasks in Appian.
If there are any documents that are in the Pending Classification or Pending Reconciliation status, you can classify and reconcile them in the TASKS tab.
In this tab, you can search for tasks by Task Name and filter by Task Type (Classification or Reconciliation), Document Channel (if configured), or who the task is Assigned To.
Classification tasks are represented by icons:
For documents that didn't meet the minimum confidence threshold during auto-classification, a task will automatically be created for a user to manually classify the document.
These documents will be in the Pending Classification status.
To complete a manual classification task:
If the document is invalid, click INVALIDATE. For example, if an unsigned process order is uploaded instead of a signed one, you can classify it as invalid. Invalid documents won't go through data extraction and reconciliation.
All documents need to be reviewed by a user for accuracy and to fill in any missing fields. This is called reconciliation. After a document is uploaded, the Appian Document Extraction runs, extracting data from the document. Extraction usually takes about 2 - 5 minutes. When it is finished, a reconciliation task is automatically generated.
While the data is extracting, these documents will be in the Auto-Extracting status. After extraction is complete, they will be in the Pending Reconciliation status.
To complete the reconciliation task:
The status for reconciled documents will change to Completed and the data extracted will be written to the database.
To view the information that was input for a document, go to the DOCUMENTS tab and click the document name.
The Summary tab lists Overview information about the document at the top of the page. It also displays the data that was extracted from the document, along with a document viewer.
To edit the information that was extracted, click the edit button. Then update the information in the fields.
The METRICS tab is used for reporting and governance so that users can see how well the application is performing.
You can filter the information by Document Channel (if configured) and Document Type, as well as only show information for documents processed in the Past 3 Months or the Past 6 Months.
The first section on this page shows some key performance indicators for documents that have completed processing, including:
The next section displays charts that show:
At the bottom of the page is a grid that shows the metadata for processed documents.
On This Page | https://docs.appian.com/suite/help/21.3/idp-1.6/idp-user-guide.html | 2021-11-27T12:11:23 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.appian.com |
To ensure traceability in processes, output needs to be versioned and tracked as it is handed from one process stage to another. With CloudBees CD/RO Artifact Management, implementing your processes is auditable and secure.
This tutorial shows how to produce and consume artifacts as part of a CloudBees CD/RO process.
To view an example procedure for this tutorial , go to the automation platform UI > Projects > EC-Tutorials- <version> , and click the procedure name.
An explanation of what you see—the following concepts are illustrated in this procedure:
The first five steps are simple ec-perl scripts that create files and directories as use as an artifact.
The first step, "Create myartifact.txt file", creates a text file in a workspace that can be added as an artifact. Click this step to go to the Edit Step page to see the full "dummy" script used to set up the artifact in the Command(s) field. This script creates a file named
myartifact.txt.
The next four steps create a directory, a subdirectory and the second text file in it.
The "Publish artifact" step calls the EC-Artifact:Publish procedure. This step is configured to place the file
myartifact.txt, created in the Setup step, into an artifact.
To do this, the EC-Artifact: Publish procedure requires the following information: The artifact where you want to publish. This is in the format
Artifact Group:Artifact Key. In this case, the name of the artifact (where you want to publish) is EC-Tutorials:MyArtifact. The artifact version being published. The version is in the form of
major.minor.patch-qualifier-buildnumberor
major.minor.patch.buildnumber-qualifier. For this tutorial, the version is set to
1.0.$[jobId]-tutorial. Using the
$[jobId]property expansion results in a new artifact being created every time the procedure is run. ** The repository server from which you need to retrieve the artifact. This tutorial uses a repository named
defaultthat was created when CloudBees CD/RO was installed.
+ To see how this information is specified in this step, click the "Publish artifact" step name to go to the Edit Step page.
The "Retrieve artifact" step calls the EC-Artifact: Retrieve procedure. This step takes two required parameters and several optional parameters (see the Edit Step page/Parameter section for this step). The required parameters are:
Artifact – EC-Tutorials:MyArtifact (this is the name of the artifact to retrieve).
Version –
1.0.$[jobId]-tutorial(this is the artifact version to be retrieved—either an exact version or a version range). This tutorial uses an exact version.
The optional parameters are:
Retrieve to directory – the location where the artifact files are downloaded when they are retrieved. This is where the retrieved artifact versions are stored.
Overwrite – the default is update where the artifact files are updated when they are retrieved. The other values are true (the files for the retrieved artifact versions are overwritten) or false (the files for the the retrieved artifact versions are not overwritten).
The Retrieved Artifact Location Property parameter is the name or property sheet path that the step uses to create a property sheet. The default value is ` /myJob/retrievedArtifactVersions/$[assignedResourceName]`.
This property sheet has information about the retrieved artifact versions, including their location in the file system. It displays the location from where the artifact is to be retrieved and retrieves the property value in the
/myJob/retrievedArtifactVersions/$[assignedResourceName].
For the Filter(s) parameter, enter search filters, one per line, applicable to querying the CloudBees CD/RO database for the artifact version to retrieve.
For more information about these parameters, see Retrieve Artifact Version Step.
The "Output location that artifact was retrieved to" step prints the message "Artifact extracted to: <directory where the retrieved artifact will be located>".
Click Run to run this sample procedure and see the resulting job on the Job Details page.
Implementation
To publish and retrieve artifacts, apply these concepts to your artifact projects, creating the steps as required to your procedure.
Related information
Artifact Management —Help topic
Artifact Details —Help topic
Job Details —Help topic
Publish Artifact Version step —Help topic
Retrieve Artifact Version step —Help topic | https://docs.cloudbees.com/docs/cloudbees-cd/10.1/automation-platform/help-tutorial-pubretrieveartifact | 2021-11-27T11:17:47 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.cloudbees.com |
Corelight Suricata Alerts
Overview
This article explains how to ingest your Corelight Suricata alerts to Hunters. Corelight Suricata alerts are a different data type than regular open source Suricata alerts (described here), since they're passed through the Zeek processing engine and are outputted in Zeek format, as explained here.
For Hunters to integrate with your Corelight Suricata logs, the logs should be collected to a Storage Service (e.g. to an S3 bucket or Azure Blob Storage) shared with Hunters.
Expected Log Format
The expected log format is JSON, which is configurable as part of the Corelight Suricata solution.
Below is an example of a currently supported log line:
{"_path":"suricata_corelight","_system_name":"sys-01","_write_ts":"2021-10-01T00:00:00.853803Z","ts":"2021-10-01T00:00:00. | https://docs.hunters.ai/wiki/Corelight-Suricata-Alerts.56885277.html | 2021-11-27T10:39:57 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.hunters.ai |
Jamf
Overview
Jamf is the most prominent way to manage MacOS devices in an enterprise organization. As such, logs pulled from the Jamf API provide important information regarding the organizational MacOS devices being used, which is all the more important as these MacOS endpoints are usually not a part of a managed Active Directory network (as opposed to Windows enterprise fleets).
For example, the Jamf Computers API allows establishing a contextual list of all endpoints belonging to the organization, which enables detection of access to organizational resources or SaaS applications done from an unmanaged device.
Additional important contextual information pulled from the Jamf API includes user lists, policies, managed scripts, network segments and more.
Supported data types
Computers
Mac Applications
Network Segments
Packages
Policies
Scripts
Users
Sending data to Hunters
Prerequisites
In order to intergate your JAMF instance with Hunters, you will need to follow these steps in order to create an appropriate user and an API key.
Login to jamf and go to the Settings section.
Go to Accounts. it can be found in All settings or System Settings tabs and under Jamf Pro User Accounts & Groups.
Add a new user.
Choose create account. Select Create Standard Account, and then click Next.
Fill out the new user account form. Please make sure that:
Access level is Full Access
Privilege Set is Auditor
Access Status is Enabled
Copy the Username and Password for the next stage and click save.
Generate Authentication Key - generate a basic authentication token by encoding the
<username>:<password>in Base64 using the username and password from last step.
One way to do it is to open a (macos or linux) terminal and run this command:
echo -n "username:password" | base64
Get API domain copy the api host address from your browser address bar when in the jamf console | https://docs.hunters.ai/wiki/Jamf.6291554.html | 2021-11-27T11:41:03 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.hunters.ai |
NAV 2017 CU1 on Azure
Last.
Building the image
A number of pre-requisites are installed before installing NAV on the image. All of these have changed in NAV 2017 CU1. Here's a list of what a NAV 2017 CU1 image consists of:
- Windows Server 2012 R2 Datacenter (might try 2016 for CU2)
- All updates from Windows Update
- PowerShellGet (PackageManagement_x64.msi) for PowerShell Gallery (from here:)
- NuGet Package Provider 2.8.5.201 or higher (Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force)
- Search Service (Install-WindowsFeature Search-Service)
- Download all DVD's to C:\NAVDVD\<countrycode>)
- .net 4.5.2 (\Prerequisite Components\Microsoft .NET Framework 4.5.2\dotNetFx452_Full_x86_x64.exe on the DVD)
- SQL Express 2016 (instance name NAVDEMO)
- SharePoint Online Management Shell ()
- Install full NAV W1 on the server to standard folders and standard ports, using localhost\NAVDEMO as the database server using Windows Authentication.
- Excluding the following installation options: ADCS, Outlook Addin, Excel Addin, Outlook Integration
- Copy the content of the DEMO folder to C:\DEMO
Basically that's it. When people select a different localization of NAV to use in the C:\DEMO\Initialize script, the script will remove the W1 database, restore the new database (from the DVD in C:\NAVDVD) and run all the local installers from the DVD (C:\NAVDVD\<countrycode>\Installers\)
Now of course there are some tweeks because creating an image, because I have to do SYSPREP, which removes users and settings, so I have to make all administrators sysadmins on the SQL Server (as the normal administrator who installs SQL Express will be gone).
Earlier versions of the image used WebPI (Web Platform Installer) which caused new users created on the server after spinning up a VM to be unable to logon if they were not administrators. Earlier versions also had a lot of other pre-requisites - primarily because we didn't have the PowerShell Gallery.
At this time, you might be wondering why I am explaining how to build the image? - hold that thought for one second...
Unfortunately no CZ (Czech) localization on NAV 2017 CU1
Unfortunately the Czech localization is NOT on NAV 2017 CU1.
NAV 2017CU1 is based on W1 build 14199.
Due to a late bug in the Czech app, we had to take a new build for the Czech Localization. This build was created two days later on W1 build 14300. Unfortunately during these two days, the database version number was bumped.
This means that it is not possible to restore the CZ database and run the local installers to change the country code on the image. We build 14199 cannot use/mount a database from W1 14300. This is unfortunately a hard stop and I was forced to remove the CZ build from the image.
If you want to create a CZ demo environment with NAV 2017 CU1 - you will have to create the NAV VM on azure manually following the steps in the section about building the image.
Aka.ms/...
When deploying a new image to Azure, it is available almost immediately in the Classic Portal. It is also available to templated deployment (like).
At this time, the following two Short URLs are working with NAV 2017CU1. - gives you a VM on Azure without running any scripts. You can run the demo scripts manually if you like. - gives you an initialized VM on Azure and runs a number of demo scripts to setup a demo environment.
Note The deployments done under these short Urls can potentially patch the files in the DEMO folder if there are any pending bug fixes to these files. This is the recommanded way of deploying NAV on Azure, you will always get the latest.
What else?
All DEMO scripts have been re-visited and updated. Earlier you had warnings and verbose messages flooding the screen and you couldn't really determine whether or not there were problems or everything was OK. As an example, the initialize output could look like this:
Every installation script in the DEMO folder used to have its own HelperFunctions.ps1. A lot of these had the same HelperFunctions. These have now been moved to a Common HelperFunctions.ps1 script. This supports things like logging to the DEMO status file and ensuring that a lot of the warnings and verbose messages that rolled over the screen whenever running a script, now is at a minimum and the necessary things are logged to ensure that it is easier to locate any problems on your VM.
Using the PowerShell Gallery
Some scripts, that needed to download and install PowerShell modules (like AzureRM) are now using the PowerShell Gallery to do this:
if (Get-Module -ListAvailable -Name AzureRM) { Log "AzureRM Powershell module already installed" } else { Log "Install AzureRM PowerShell Module" Install-Module -Name "AzureRM"-Repository "PSGallery" -Force } Import-Module -Name "AzureRM"
Other scripts, like the AzureSQL installer requires the DAC Framework and uses NuGet package provider to download this:
function Install-DACFx { $packageName = "Microsoft.SqlServer.DacFx.x64" if (Get-Package -Name $packageName) { Log "$PackageName Powershell module already installed" } else { Log "Install $PackageName PowerShell Module" Register-PackageSource -Name NuGet -Location -Provider NuGet -Trusted -Verbose | Out-Null Install-Package -Name $packageName -MinimumVersion 130.3485.1 -ProviderName NuGet -Force | Out-Null } (Get-Package -name $packageName).Version }
Office 365 Single Sign On
A number of people reported, that the way the AAD (Azure Active Directory) App for Single Sign On was created, it was impossible for non-admin O365 users to access NAV. The reason for this was, that in the NAV 2017RTM image, I changed from using the Set-NAVSingleSignOn Cmdlet, to do things in PowerShell. The PowerShell code did set the ClientServicesFederationMetadataLocation and the WSFederationLoginEndpoint to use a specific AAD Tenant (instead of Common) and as a result of this, the AAD App had to request permissions, that only an O365 Admin could give.
In CU1 - these keys are:
<add key="ClientServicesFederationMetadataLocation" value="" /> <add key="WSFederationLoginEndpoint" value="<Web Client URL>" />
and the permissions required from the user accessing the app is just: Sign you in and read your profile:
The PowerShell Code for creating an AAD App, assigning permissions and adding a Logo can be found in c:\demo\O365 Integration\HelperFunctions.ps1 in the method called Setup-AadApps.
Extensions
The Azure Gallery Image contains three sample extensions (also in NAV 2016)
- BingMaps Integration
- O365 Integration
- MSBand Integration
For NAV 2017 CU1, these extensions have gone through a major overhaul:
- Objects, controls, variables and all have been renumbered to be in the 50000 range
- Exposing Web Services is now part of the extension (earlier this was done in PowerShell)
- The Control Add-ins are now part of the extension (earlier this was done in PowerShell)
- Local translations are now part of the extension (ealier this was .txt files copied to the Service Translations folder in PowerShell)
Looking at the BingMaps Sources folder gives you the source of the BingMaps Extension:
Definitions
- All .DELTA files are source code DELTAs.
- The .zip files are Control Add-ins.
- The .xml files are Web Services definitions
- The .txt files are translations
and here I need to express my apology to my icelandic friends that I do not have any icelandic translations. The primary reason for this is, that the Cognitive Translation Service on Azure doesn't support Icelandic and I found out of this too late. I am sure that some of my icelandic friends will help me translate the texts for the extensions to icelandic and I will include this in future CU's.
The BingMaps Extension has also been made more fail safe. This means that a simple geocoding error won't give a hard error, but will only cause that customer not to be geocoded.
Extension Development Shell (now with translation support)
The Extension Development Shell on the Azure Gallery Image is a lightweight set of scripts, that helps me develop the extensions on the Gallery Image.
After installing the Extension Development Shell you will see a list of available commands:
Nothing looks different since the NAV 2017RTM image here - but a lot have changed behind the scenes.
An AppFolder still refers to a folder under C:\DEMO, meaning that BingMaps is the App Folder name of the BingMaps extension. The CmdLets are still called the same, but a lot of the internals have changes.
The CmdLets now have fewer parameters and instead they assume, that in the App Folder, there is an AppSettings.ps1 available. If AppSettings is not available it will default values for you.
Lets try to setup a development environment for the BingMaps extension. Write the following commands:
New-DevInstance -DevInstance dev -Language w1 Import-AppFolderDeltas -DevInstance dev -AppFolder BingMaps
(Yes, I know that language should have been countrycode)
The New-DevInstance will:
- Create a new Service Tier for development (using W1)
- Create a Web Client for this dev environment
- Export all object files (we need them later)
- Export all language files (we need them later)
- Create shortcuts on the desktop for accessing the development environment, the Web Client and the Windows Client.
The Import-AppFolderDeltas will:
- Merge the .DELTA files in the AppFolder\Sources with the exported object files
- Import and Compile the merged objects
If you open the DEV Development Environment now, you will see:
(sorry for the Version List mismatch)
Now you can modify objects as you like and test things. In BingMaps Events you can see how to subscribe to the RegisterServiceConnection and have your Extension appear in the Service Connections list.
You will also find a sample on how to subscribe to the OnAfterActionEvent on the Extension Details Page in order to show a dialog when users install your extension and ask whether they want to setup the service connection right away.
When you have done your modifications you can open the extensions development shell again and use one of these CmdLets
Update-AppFolderDeltas -DevInstance dev -AppFolder BingMaps Update-AppFolderNavX -DevInstance dev -AppFolder BingMaps
The Update-AppFolderDeltas Cmdlet will:
- Export object files from modified objects
- Compare your modified objects to original objects and create .DELTA files
- Export language files from modified objects
- Compare your modified texts to original and create <OBJ>-strings.txt files for each object
- Compare translations for all languages mentioned AppLanguages (in AppSettings.ps1) and
- Remove removed strings or strings that have been translated in the object
- Add new strings (and make a translation suggestion using the Azure Translation Service if you have provided a free key. See c:\demo\Common\HelperFunctions.ps1::TranslateText for implementation details)
- Update files in the Sources folder.
Note: If you modify the translation files, and fix translation errors, your translations will NOT be overridden by the Azure Translated strings:-)
The Update-AppFolderNavX Cmdlet will:
- Do everything the Update-AppFolderDeltas does
- Create a new .NavX file in the AppFolder.
So far so good.
If you want to test your extension, you can spin up a new test environment using
New-DevInstance -DevInstance devdk -Language DK Install-AppFolderNavX -DevInstance devdk -AppFolder BingMaps
Note: The first time spinning up a dev instance in new language will take some time.
Where the Install-AppFolderNavX Cmdlet will:
- Install the .NavX from the AppFolder to the corresponding development instance.
After this, you can test the extension on the Danish version of NAV 2017 CU1.
Important note: The Extensions Development Shell is not a full blown development environment, that should be used for all extension development. It is a light weight development environment, which can be used as inspiration for people when building up their Development Environments - there are a LOT of things that could be added to this.
2nd very Important note: The Extension Development Shell has nothing to do with the upcoming New Developer Experience in Visual Studio Code.
If you want to clean up, use:
Remove-AllDevInstances
which will do exactly as the name suggests.
You can of course edit the O365 Extension by using "O365 Integration" instead of BingMaps.
Enough for now.
Enjoy
Freddy Kristiansen
Technical Evangelist | https://docs.microsoft.com/en-us/archive/blogs/freddyk/nav-2017-cu1-on-azure | 2021-11-27T13:26:20 | CC-MAIN-2021-49 | 1637964358180.42 | [array(['https://msdnshared.blob.core.windows.net/media/2016/12/runinitialize.png',
'runinitialize'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2016/12/permissions.png',
'permissions'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2016/12/bingmapssources.png',
'bingmapssources'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2016/12/extensiondevelopmentshell.png',
'extensiondevelopmentshell'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2016/12/devenvbingmaps.png',
'devenvbingmaps'], dtype=object) ] | docs.microsoft.com |
$
Create a directory for your components:
$ mkdir my_components && cd my_components
Download the example back-end application:
$ git clone backend
Change to the back-end source directory:
$ cd backend
Check that you have the correct files in the directory:
$ --s2i frontend
Change the current directory to the front-end directory:
$ cd frontend
List the contents of the directory to see that the front end is a Node.js application.
$ ls
README.md openshift server.js views helm package.json tests
Create a component configuration of Node.js component-type named
frontend:
$ odo create --s2i to interact. OpenShift Container Platform provides linking mechanisms to publish communication bindings from a program to its clients.
List all the components that are running on the cluster:
$ odo list
OpenShift Components: APP NAME PROJECT TYPE SOURCETYPE STATE app backend testpro openjdk18 binary Pushed app frontend testpro nodejs local Pushed
Link the current front-end component to the back end:
$.
Navigate to the
frontend directory:
$ cd frontend
Create an external URL for the application:
$.
Use the
odo app delete command to delete your application.. | https://docs.openshift.com/container-platform/4.9/cli_reference/developer_cli_odo/creating_and_deploying_applications_with_odo/creating-a-multicomponent-application-with-odo.html | 2021-11-27T12:05:23 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.openshift.com |
agent tree provides access to the following functions:
Search for endpoints: Locate specific endpoints by typing search criteria in the text box.
Synchronize with OfficeScan: Synchronize the plug-in program’s agent tree with the OfficeScan server’s agent tree. For details, see Synchronizing the Agent Tree.
Administrators can also manually search the agent tree to locate endpoints or domains. Specific computer information displays in the table on the right. | https://docs.trendmicro.com/en-us/enterprise/endpoint-application-control-officescan-plug-in/client_management_ch_intro/cl_tr_tsk_sp.aspx | 2021-11-27T10:57:38 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.trendmicro.com |
Contents:
Contents:
This Trifacta Wrangler, you can use the following techniques to address some of the issues you might encounter in the standardization of units and values for numeric types.
Numeric precision
In Trifacta Wrangler,.
NOTE: When aggregation is applied, a new table of data is generated with the columns that you specifically select for inclusion.
For more information, see Pivot Data.
This page has no comments. | https://docs.trifacta.com/pages/viewpage.action?pageId=174747814&navigatingVersions=true | 2021-11-27T11:51:53 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.trifacta.com |
Local to remote
When developers write code they typically begin with a simple serial code and build upon it until all of the required functionality is present. The following set of examples were developed to demonstrate this iterative process of evolving a simple serial program to an efficient, fully-distributed HPX application. For this demonstration, we implemented a 1D heat distribution problem. This calculation simulates the diffusion of heat across a ring from an initialized state to some user-defined point in the future. It does this by breaking each portion of the ring into discrete segments and using the current segment’s temperature and the temperature of the surrounding segments to calculate the temperature of the current segment in the next timestep as shown by Fig. 2 below.
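The update applied to each segment is a simple three-point stencil. As a rough sketch (the constants and their values here are assumptions, not taken verbatim from the example sources), the heat operator referenced throughout the examples looks something like this:

    // Assumed simulation constants: heat transfer coefficient, time step, and
    // grid spacing. The examples keep these configurable; the values here are
    // only placeholders.
    double const k = 0.5;
    double const dt = 1.0;
    double const dx = 1.0;

    // Our operator: compute the new temperature of one grid point from its own
    // previous value and the values of its left and right neighbors.
    double heat(double left, double middle, double right)
    {
        return middle + (k * dt / (dx * dx)) * (left - 2 * middle + right);
    }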
We parallelize this code over a sequence of eight examples, each described in turn below.
The first example is straight serial code. In this code we instantiate a vector
U that contains two vectors of doubles as seen in the structure
stepper.
    struct stepper
    {
        // Our partition type
        typedef double partition;

        // Our data for one time step
        typedef std::vector<partition> space;

        // Do all the work on 'nx' data points for 'nt' time steps
        space do_work(std::size_t nx, std::size_t nt)
        {
            // U[t][i] is the state of position i at time t.
            std::vector<space> U(2);
            for (space& s : U)
                s.resize(nx);

            // Initial conditions: f(0, i) = i
            for (std::size_t i = 0; i != nx; ++i)
                U[0][i] = double(i);

            // Actual time step loop
            for (std::size_t t = 0; t != nt; ++t)
            {
                space const& current = U[t % 2];
                space& next = U[(t + 1) % 2];

                next[0] = heat(current[nx - 1], current[0], current[1]);

                for (std::size_t i = 1; i != nx - 1; ++i)
                    next[i] = heat(current[i - 1], current[i], current[i + 1]);

                next[nx - 1] = heat(current[nx - 2], current[nx - 1], current[0]);
            }

            // Return the solution at time-step 'nt'.
            return U[nt % 2];
        }
    };
Each element in the vector of doubles represents a single grid point. To
calculate the change in heat distribution, the temperature of each grid point,
along with its neighbors, is passed to the function
heat. In order to
improve readability, references named
current and
next are created
which, depending on the time step, point to the first and second vector of
doubles. The first vector of doubles is initialized with a simple heat ramp.
After calling the heat function with the data in the
current vector, the
results are placed into the
next vector.
In example 2 we employ a technique called futurization. Futurization is a method
by which we can easily transform a code that is serially executed into a code
that creates asynchronous threads. In the simplest case this involves replacing
a variable with a future to a variable, a function with a future to a function,
and adding a
.get() at the point where a value is actually needed. The code
below shows how this technique was applied to the
struct stepper.
    struct stepper
    {
        // Our partition type
        typedef hpx::shared_future<double> partition;

        // Our data for one time step
        typedef std::vector<partition> space;

        // Do all the work on 'nx' data points for 'nt' time steps
        hpx::future<space> do_work(std::size_t nx, std::size_t nt)
        {
            using hpx::dataflow;
            using hpx::unwrapping;

            // U[t][i] is the state of position i at time t.
            std::vector<space> U(2);
            for (space& s : U)
                s.resize(nx);

            // Initial conditions: f(0, i) = i
            for (std::size_t i = 0; i != nx; ++i)
                U[0][i] = hpx::make_ready_future(double(i));

            auto Op = unwrapping(&stepper::heat);

            // Actual time step loop
            for (std::size_t t = 0; t != nt; ++t)
            {
                space const& current = U[t % 2];
                space& next = U[(t + 1) % 2];

                // WHEN U[t][i-1], U[t][i], and U[t][i+1] have been computed, THEN we
                // can compute U[t+1][i]
                for (std::size_t i = 0; i != nx; ++i)
                {
                    next[i] = dataflow(hpx::launch::async, Op,
                        current[idx(i, -1, nx)], current[i], current[idx(i, +1, nx)]);
                }
            }

            // Now the asynchronous computation is running; the above for-loop does not
            // wait on anything. There is no implicit waiting at the end of each timestep;
            // the computation of each U[t][i] will begin as soon as its dependencies
            // are ready and hardware is available.

            // Return the solution at time-step 'nt'.
            return hpx::when_all(U[nt % 2]);
        }
    };
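The snippet above indexes the neighbors through a small helper, idx, which is not shown here. A sketch of such a ring-index helper (wrapping around at both ends of the ring) might look like:

    // Return the index of the neighbor of i in direction dir (-1 or +1),
    // wrapping around the ends of the ring.
    inline std::size_t idx(std::size_t i, int dir, std::size_t size)
    {
        if (i == 0 && dir == -1)
            return size - 1;
        if (i == size - 1 && dir == +1)
            return 0;
        return i + dir;
    }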
In example 2, we redefine our partition type as a
shared_future and, in
main, create the object
result, which is a future to a vector of
partitions. We use
result to represent the last vector in a string of
vectors created for each timestep. In order to move to the next timestep, the
values of a partition and its neighbors must be passed to
heat once the
futures that contain them are ready. In HPX, we have an LCO (Local Control
Object) named Dataflow that assists the programmer in expressing this
dependency. Dataflow allows us to pass the results of a set of futures to a
specified function when the futures are ready. Dataflow takes three types of
arguments, one which instructs the dataflow on how to perform the function call
(async or sync), the function to call (in this case
Op), and futures to the
arguments that will be passed to the function. When called, dataflow immediately
returns a future to the result of the specified function. This allows users to
string dataflows together and construct an execution tree.
After the values of the futures in dataflow are ready, the values must be pulled
out of the future container to be passed to the function
heat. In order to
do this, we use the HPX facility
unwrapping, which underneath calls
.get() on each of the futures so that the function
heat will be passed
doubles and not futures to doubles.
By setting up the algorithm this way, the program will be able to execute as quickly as the dependencies of each future are met. Unfortunately, this example runs terribly slowly. This increase in execution time is caused by the overheads needed to create a future for each data point. Because the work done within each call to heat is very small, the overhead of creating and scheduling each of the three futures is greater than that of the actual useful work! In order to amortize the overheads of our synchronization techniques, we need to be able to control the amount of work that will be done with each future. We call this amount of work per overhead grain size.
In example 3, we return to our serial code to figure out how to control the
grain size of our program. The strategy that we employ is to create “partitions”
of data points. The user can define how many partitions are created and how many
data points are contained in each partition. This is accomplished by creating
the
struct partition, which contains a member object
data_, a vector of
doubles that holds the data points assigned to a particular instance of
partition.
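Based only on that description, a minimal sketch of such a partition type (the constructor and accessors are assumed details, not taken verbatim from the example sources) could look like:

    // A simple container for the data points assigned to one partition.
    struct partition
    {
        explicit partition(std::size_t size = 0, double initial_value = 0.0)
          : data_(size, initial_value)
        {
        }

        std::size_t size() const { return data_.size(); }

        double& operator[](std::size_t i) { return data_[i]; }
        double operator[](std::size_t i) const { return data_[i]; }

    private:
        std::vector<double> data_;   // the data points held by this partition
    };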
In example 4, we take advantage of the partition setup by redefining
space
to be a vector of shared_futures with each future representing a partition. In
this manner, each future represents several data points. Because the user can
define how many data points are in each partition, and, therefore, how
many data points are represented by one future, a user can control the
grain size of the simulation. The rest of the code is then futurized in the same
manner as example 2. It should be noted how strikingly similar
example 4 is to example 2.
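In terms of types, the change amounts to roughly the following sketch (the exact aliases used in the example sources may differ):

    // Each future now stands for a whole partition of data points rather than
    // for a single double.
    typedef hpx::shared_future<partition> partition_future;
    typedef std::vector<partition_future> space;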
Example 4 finally shows good results. This code scales equivalently to the OpenMP version. While these results are promising, there are more opportunities to improve the application’s scalability. Currently, this code only runs on one locality, but to get the full benefit of HPX, we need to be able to distribute the work to other machines in a cluster. We begin to add this functionality in example 5.
In order to run on a distributed system, a large amount of boilerplate code must
be added. Fortunately, HPX provides us with the concept of a component,
which saves us from having to write quite as much code. A component is an object
that can be remotely accessed using its global address. Components are made of
two parts: a server and a client class. While the client class is not required,
abstracting the server behind a client allows us to ensure type safety instead
of having to pass around pointers to global objects. Example 5 renames example
4’s
struct partition to
partition_data and adds serialization support.
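Adding serialization support means letting HPX archive the member data so that a partition_data can be shipped between localities. A sketch of what this might look like (member names as assumed above):

    struct partition_data
    {
        // ... constructors and accessors as before ...

    private:
        // Serialization support: allow HPX to marshal this type so it can be
        // sent to other localities.
        friend class hpx::serialization::access;

        template <typename Archive>
        void serialize(Archive& ar, unsigned /* version */)
        {
            ar & data_;
        }

        std::vector<double> data_;
    };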
Next, we add the server side representation of the data in the structure
partition_server.
Partition_server inherits from
hpx::components::component_base, which contains a server-side component
boilerplate. The boilerplate code allows a component’s public members to be
accessible anywhere on the machine via its Global Identifier (GID). To
encapsulate the component, we create a client side helper class. This object
allows us to create new instances of our component and access its members
without having to know its GID. In addition, we are using the client class to
assist us with managing our asynchrony. For example, our client class
partition's member function get_data() returns a future to the result of the server-side member function partition_data get_data(). This client class inherits its boilerplate code from
hpx::components::client_base.
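To make the shape of this pattern concrete, here is a compressed, self-contained sketch of a server component and its client wrapper. It follows the structure described above but simplifies the example: the payload is a plain std::vector<double> rather than partition_data, and macro or header spellings may vary slightly between HPX versions.

#include <hpx/hpx_main.hpp>
#include <hpx/hpx.hpp>
#include <hpx/include/components.hpp>

#include <cstddef>
#include <iostream>
#include <vector>

// Server side: public members exposed as actions are callable from any
// locality through the component's Global Identifier (GID).
struct partition_server
  : hpx::components::component_base<partition_server>
{
    partition_server() = default;

    partition_server(std::size_t size, double value)
      : data_(size, value)
    {
    }

    std::vector<double> get_data() const
    {
        return data_;
    }
    HPX_DEFINE_COMPONENT_DIRECT_ACTION(partition_server, get_data, get_data_action);

private:
    std::vector<double> data_;
};

// Registration boilerplate, required once per component and action type.
typedef hpx::components::component<partition_server> partition_server_type;
HPX_REGISTER_COMPONENT(partition_server_type, partition_server);

typedef partition_server::get_data_action get_data_action;
HPX_REGISTER_ACTION(get_data_action);

// Client side: hides the GID and manages the asynchrony of remote calls.
struct partition
  : hpx::components::client_base<partition, partition_server>
{
    typedef hpx::components::client_base<partition, partition_server> base_type;

    partition() = default;

    // Create a new component instance on the given locality.
    partition(hpx::id_type where, std::size_t size, double value)
      : base_type(hpx::new_<partition_server>(where, size, value))
    {
    }

    // Returns a future: the caller decides when to wait for the data.
    hpx::future<std::vector<double>> get_data() const
    {
        return hpx::async(get_data_action(), get_id());
    }
};

int main()
{
    partition p(hpx::find_here(), 10, 1.0);    // created on this locality
    std::cout << p.get_data().get().size() << " data points\n";
    return 0;
}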
In the structure
stepper, we have also had to make some changes to
accommodate a distributed environment. In order to get the data from a
particular neighboring partition, which could be remote, we must retrieve the data from all
of the neighboring partitions. These retrievals are asynchronous and the function
heat_part_data, which, amongst other things, calls
heat, should not be
called unless the data from the neighboring partitions have arrived. Therefore,
it should come as no surprise that we synchronize this operation with another
instance of dataflow (found in
heat_part). This dataflow receives futures
to the data in the current and surrounding partitions by calling
get_data()
on each respective partition. When these futures are ready, dataflow passes them
to the
unwrapping function, which extracts the shared_array of doubles and
passes them to the lambda. The lambda calls
heat_part_data on the
locality on which the middle partition resides.
Although this example could run distributed, it only runs on one
locality, as it always uses
hpx::find_here() as the target for the
functions to run on.
In example 6, we begin to distribute the partition data on different nodes. This
is accomplished in
stepper::do_work() by passing the GID of the
locality where we wish to create the partition to the partition
constructor.
for (std::size_t i = 0; i != np; ++i)
    U[0][i] = partition(localities[locidx(i, np, nl)], nx, double(i));
We distribute the partitions evenly based on the number of localities used,
which is described in the function
locidx. Because some of the data needed
to update the partition in
heat_part could now be on a new locality,
we must devise a way of moving data to the locality of the middle
partition. We accomplished this by adding a switch in the function
get_data() that returns the end element of the
buffer data_ if it is
from the left partition or the first element of the buffer if the data is from
the right partition. In this way only the necessary elements, not the whole
buffer, are exchanged between nodes. The reader should be reminded that this
exchange of end elements occurs in the function
get_data() and, therefore, is
executed asynchronously.
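The idea behind that switch can be sketched as follows (written as a free-standing function for brevity; in the example it is a member of the server component and the enumeration value is passed in by the caller):

#include <vector>

enum partition_type { left_partition, middle_partition, right_partition };

// Return only what the requesting neighbour needs from this partition's buffer.
std::vector<double> get_data(std::vector<double> const& data_, partition_type t)
{
    switch (t)
    {
    case left_partition:
        // We act as the requester's left neighbour: only our last element is needed.
        return std::vector<double>(1, data_.back());
    case right_partition:
        // We act as the requester's right neighbour: only our first element is needed.
        return std::vector<double>(1, data_.front());
    case middle_partition:
    default:
        // The requester's own data: hand over the whole buffer.
        return data_;
    }
}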
Now that we have the code running in distributed, it is time to make some
optimizations. The function
heat_part spends most of its time on two tasks:
retrieving remote data and working on the data in the middle partition. Because
we know that the data for the middle partition is local, we can overlap the work
on the middle partition with that of the possibly remote call of
get_data().
This algorithmic change, which was implemented in example 7, can be seen below:
// The partitioned operator, it invokes the heat operator above on all elements
// of a partition.
static partition heat_part(partition const& left,
    partition const& middle, partition const& right)
{
    using hpx::dataflow;
    using hpx::unwrapping;

    hpx::shared_future<partition_data> middle_data =
        middle.get_data(partition_server::middle_partition);

    hpx::future<partition_data> next_middle = middle_data.then(
        unwrapping(
            [middle](partition_data const& m) -> partition_data
            {
                HPX_UNUSED(middle);

                // All local operations are performed once the middle data of
                // the previous time step becomes available.
                std::size_t size = m.size();
                partition_data next(size);
                for (std::size_t i = 1; i != size-1; ++i)
                    next[i] = heat(m[i-1], m[i], m[i+1]);
                return next;
            }
        )
    );

    return dataflow(
        hpx::launch::async,
        unwrapping(
            [left, middle, right](partition_data next, partition_data const& l,
                partition_data const& m, partition_data const& r) -> partition
            {
                HPX_UNUSED(left);
                HPX_UNUSED(right);

                // Calculate the missing boundary elements once the
                // corresponding data has become available.
                std::size_t size = m.size();
                next[0] = heat(l[size-1], m[0], m[1]);
                next[size-1] = heat(m[size-2], m[size-1], r[0]);

                // The new partition_data will be allocated on the same locality
                // as 'middle'.
                return partition(middle.get_id(), std::move(next));
            }
        ),
        std::move(next_middle),
        left.get_data(partition_server::left_partition),
        middle_data,
        right.get_data(partition_server::right_partition)
    );
}
Example 8 completes the futurization process and utilizes the full potential of
HPX by distributing the program flow to multiple localities, usually defined as
nodes in a cluster. It accomplishes this task by running an instance of HPX main
on each locality. In order to coordinate the execution of the program,
the
struct stepper is wrapped into a component. In this way, each
locality contains an instance of stepper that executes its own instance
of the function
do_work(). This scheme does create an interesting
synchronization problem that must be solved. When the program flow was being
coordinated on the head node, the GID of each component was known. However, when
we distribute the program flow, each partition has no notion of the GID of its
neighbor if the next partition is on another locality. In order to make
the GIDs of neighboring partitions visible to each other, we created two buffers
to store the GIDs of the remote neighboring partitions on the left and right
respectively. These buffers are filled by sending the GID of newly created
edge partitions to the right and left buffers of the neighboring localities.
In order to finish the simulation, the solution vectors named
result are then
gathered together on locality 0 and added into a vector of spaces
overall_result using the HPX functions
gather_id and
gather_here.
Example 8 completes this example series, which takes the serial code of example 1 and incrementally morphs it into a fully distributed parallel code. This evolution was guided by the simple principles of futurization, the knowledge of grainsize, and utilization of components. Applying these techniques easily facilitates the scalable parallelization of most applications. | https://hpx-docs.stellar-group.org/branches/master/html/examples/1d_stencil.html | 2021-11-27T11:26:58 | CC-MAIN-2021-49 | 1637964358180.42 | [] | hpx-docs.stellar-group.org |
assertion¶
The assertion library implements the macros
HPX_ASSERT and
HPX_ASSERT_MSG. Those two macros can be used to implement assertions which are turned off during a release build.
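For example (a minimal sketch; the assertions compile away entirely unless HPX was built with assertions enabled, as in a Debug build):

#include <hpx/assert.hpp>

int divide(int numerator, int denominator)
{
    // Both checks vanish from release builds.
    HPX_ASSERT(denominator != 0);
    HPX_ASSERT_MSG(denominator != 0, "denominator must be non-zero");
    return numerator / denominator;
}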
By default, the location and function where the assert has been called from are
displayed when the assertion fires. This behavior can be modified by using
hpx::assertion::set_assertion_handler. When HPX initializes, it uses
this function to specify a more elaborate assertion handler. If your application
needs to customize this, it needs to do so before calling
hpx::hpx_init,
hpx::hpx_main or using the C-main
wrappers.
See the API reference of the module for more details. | https://hpx-docs.stellar-group.org/branches/master/html/libs/core/assertion/docs/index.html | 2021-11-27T10:59:07 | CC-MAIN-2021-49 | 1637964358180.42 | [] | hpx-docs.stellar-group.org |
UMI Barcoded Illumina MiSeq 2x250 BCR mRNA¶
Overview of Experimental Data¶
This example uses publicly available data from:
B cells populating the multiple sclerosis brain mature in the draining cervical lymph nodes.Stern JNH, Yaari G, and Vander Heiden JA, et al.Sci Transl Med. 2014. 6(248):248ra107. doi:10.1126/scitranslmed.3008879.
Which may be downloaded from the NCBI Sequence Read Archive under BioProject accession ID: PRJNA248475. For this example, we will use the first 25,000 sequences of sample M12 (accession: SRR1383456), which may be downloaded using fastq-dump from the SRA Toolkit:
fastq-dump --split-files -X 25000 SRR1383456
Primers sequences are available online at the supplemental website for the publication.
Read Configuration¶
Schematic of the Illumina MiSeq 2x250 paired-end reads with a 15 nucleotide UMI barcode preceding the C-region primer sequence.¶
Example Data¶
We have hosted a small subset of the data (Accession: SRR1383456) on the pRESTO website in FASTQ format with accompanying primer files. The sample data set and workflow script may be downloaded from here:
Stern, Yaari and Vander Heiden masking 1 sequence
with the 15 nucleotide UMI that precedes the C-region primer
(
MaskPrimers score --barcode):
To summarize these steps, the ParseLog.py tool is used to build a tab-delimited file from the MaskPrimers.py log:
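A typical invocation looks like the following (the -f field list shown here is an assumption based on the fields MaskPrimers.py normally writes to its log):

ParseLog.py -l MP1.log MP2.log -f ID PRIMER BARCODE ERROR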
Containing the following information:
Note
For this data set the UMI is immediately upstream of the C-region primer.
Another common approach for UMI barcoding involves placing the UMI
immediately upstream of a 5’RACE template switch site. Modifying the
workflow is simple for this case. You just need to replace the V-segment
primers with a fasta file containing the TS sequences and move the
--barcode argument to the
appropriate read:
MaskPrimers.py score -s R1_quality-pass.fastq -p CPrimers.fasta \
    --start 0 --mode cut --outname R1 --log MP1.log
MaskPrimers.py score -s R2_quality-pass.fastq -p TSSites.fasta \
    --start 17 --barcode --mode cut --maxerror 0.5 \
    --outname R2 --log MP2.log
In the above we have moved the UMI annotation to read 2, increased
the allowable error rate for matching the TS site
(
--maxerror 0.5),
cut the TS site (
--mode cut),
and increased the size of the UMI from 15 to 17 nucleotides
(
--start 17). Before assembling the paired-end reads, the BARCODE annotation identified by MaskPrimers.py must first be copied to the read 2 mate-pair of each read 1 sequence. If the data were obtained from EBI/ENA rather than the SRA, then the appropriate argument would be --coord illumina.
Note
If you have followed the 5’RACE modification above, then you must also
modify the first PairSeq.py step to copy the UMI from read 2 to read 1,
instead of vice versa (
--2f BARCODE):
PairSeq.py -1 R1_primers-pass.fastq -2 R2_primers-pass.fastq \
    --2f BARCODE --coord sra

In the example data used here,
this step was not necessary due to the aligned primer design for the 45
V-segment primers, though this does require that the V-segment primers be
masked, rather than cut, during the MaskPrimers.py step
(
--mode mask).
See also
If your data requires alignment, then you can create multiple aligned UMI read groups as follows:
AlignSets.py muscle -s R1_primers-pass_pair-pass.fastq --bf BARCODE \
    --exec ~/bin/muscle --outname R1 --log AS1.log
AlignSets.py muscle -s R2_primers-pass_pair-pass.fastq --bf BARCODE \
    --exec ~/bin/muscle --outname R2 --log AS2.log
Where the
--bf BARCODE defines the field
containing the UMI and
--exec ~/bin/muscle
is the location of the MUSCLE executable.
For additional details see the section on fixing UMI alignments.. As the accuracy of the primer assignment in read 1 is critical
for correct isotype identification, additional filtering of read 1 is carried out
during this step by specifying the --prcons 0.6 threshold. Mate-pairs are then assembled using the align subcommand of AssemblePairs.py. If the mate-pairs do not overlap, the reference subcommand of AssemblePairs.py can use the ungapped V-segment reference sequences to properly space non-overlapping reads. Or, if all else fails, the join subcommand can be used to simply stick mate-pairs together end-to-end with some intervening gap.
Deduplication and filtering¶
Combining UMI read group size annotations¶
In the final stage of the workflow, the high-fidelity Ig repertoire is
obtained by a series of filtering steps. First, duplicate sequences are removed, with the requirement that duplicates share the same isotype primer annotation (--uf PRCONS). Performance statistics for the MiSeq workflow are presented below. Performance was measured on a 64-core system with 2.3GHz AMD Opteron(TM) 6276 processors and 512GB of RAM, with memory usage measured at peak utilization. The data set contained 1,723,558 x 2 raw reads, and required matching of 1 C-region primer, 45 V-segment primers, and averaged 24.3 reads per UMI.
DoubleAnimationUsingKeyFrames Class
Definition
public : sealed class DoubleAnimationUsingKeyFrames : Timeline, IDoubleAnimationUsingKeyFrames
struct winrt::Windows::UI::Xaml::Media::Animation::DoubleAnimationUsingKeyFrames : Timeline, IDoubleAnimationUsingKeyFrames
public sealed class DoubleAnimationUsingKeyFrames : Timeline, IDoubleAnimationUsingKeyFrames
Public NotInheritable Class DoubleAnimationUsingKeyFrames Inherits Timeline Implements IDoubleAnimationUsingKeyFrames
<DoubleAnimationUsingKeyFrames> oneOrMoreDoubleKeyFrames </DoubleAnimationUsingKeyFrames>
- Inheritance: DoubleAnimationUsingKeyFrames derives from Timeline
Windows 10 requirements
Examples
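As an illustration of how the class is typically used (this sketch is not taken from the original sample; the names and values are arbitrary), the following storyboard animates the X value of a TranslateTransform with three different key frame types:

<StackPanel>
    <StackPanel.Resources>
        <Storyboard x:
            <DoubleAnimationUsingKeyFrames Storyboard.
                                           Storyboard.
                <!-- Interpolate linearly to 300 over the first second. -->
                <LinearDoubleKeyFrame KeyTime="0:0:1" Value="300" />
                <!-- Jump directly to 500 at the two-second mark. -->
                <DiscreteDoubleKeyFrame KeyTime="0:0:2" Value="500" />
                <!-- Return to 0 along a spline between seconds two and four. -->
                <SplineDoubleKeyFrame KeyTime="0:0:4" Value="0" KeySpline="0.6,0.0 0.9,0.0" />
            </DoubleAnimationUsingKeyFrames>
        </Storyboard>
    </StackPanel.Resources>
    <Rectangle Width="50" Height="50" Fill="SteelBlue">
        <Rectangle.RenderTransform>
            <TranslateTransform x:
        </Rectangle.RenderTransform>
    </Rectangle>
</StackPanel>

The storyboard is then started from code-behind with MoveStoryboard.Begin().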
Constructors
Properties
Methods
Events
See also
1) Trial Balance: Shows all debit and credit entries in a double-entry accounting system. (You can set a date range to view the trial balance report for a specific period.)
Accounts --> Reports --> Financial Statements --> Trial Balance
Before viewing any report, the following reporting window format appears, as shown below:
View of Trial Balance Report:
2) Balance Sheet: Shows the summarized statement which includes Assets, Liabilities & Equity
Accounts --> Reports --> Financial Statements --> Balance Sheet
3) Income Statement: Shows all revenues and expenses in a reporting format. A date range can be set to view the report for a particular period.
Accounts --> Reports --> Financial Statements --> Income Statement
4) Cashflow Statement: Shows the cash flow report (operating, investing, and financing activities).
The GetData wizard retrieves database data from an ODBC data source. Using the GetData wizard, DriveWorks allows you to connect to existing data within your company. The wizard walks you through the database connection, assists in the selection of the table or view from which to pull the data, and then helps sort and filter the data accordingly.
This is then a dynamic read only link to the data, and can again be used to create lists or lookups within DriveWorks. You can choose when this data gets updated, whether it’s as a manual process or performed automatically. This function is more suited to retrieving large amounts of data.
If the data in the database changes then updating this link in DriveWorks either manually or automatically will refresh the Table in DriveWorks with the new data. Using this method, it is important that the table structure of the external data is accessible and understood.
GETDATA( ["Database"], ["Username"], ["Password"], [Table], [Field], [Where] )
The GetData wizard produces the query language required to connect, filter and display the data. The following explains each step required.
The DSN name of the database to connect to
The username to gain access to the database
The password to gain access to the database
The table that contains the data in the database
The field in the table the data is to be retrieved from
The Where clause relies on some knowledge of the SQL Query language. Examples of typical Where clauses are listed below:
CustomerName Like ‘A%’
Where CustomerName is the field in the table of the database. Like is the comparison operator. % is a wild card symbol.
This Where clause will display all customers in the CustomerName field where the name begins with A
CustomerName Like ‘ “&LetterReturn & “%’
This Where clause will display all customers in the CustomerName field where the name begins with a letter selected from a control on the user form called LetterReturn.
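Putting the parameters together, a complete call built by the wizard might look like the following (the DSN, credentials, table, and field names are placeholders, not values from this topic):

GetData("SalesDSN", "dbuser", "dbpassword", "Customers", "CustomerName", "CustomerName Like '" & LetterReturn & "%'")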
Other comparison operators that can be used with the where clause are listed below:
See also
How To: Troubleshoot SQL Connection | http://docs.driveworkspro.com/Topic/GetData | 2019-02-16T06:16:13 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.driveworkspro.com |
Google Cloud Storage
Last updated October 03, 2018
Google Cloud Storage (GCS) can be used as an origin server with your Fastly services once you set up and configure your GCS account and link it to a Fastly service. It can also be configured to use private content. This speeds up your content delivery and reduces your origin's workload and response times with the dedicated links between Google and Fastly's POPs.
TIP: Google offers a Cloud Accelerator integration discount that applies to any Google Cloud Platform product. If you’re a Fastly customer and would like to take advantage of this discount, email [email protected].
Using GCS as an origin server
To make your GCS data available through Fastly, follow the steps below.
Setting up and configuring your GCS account
Create a bucket to store your origin's data. The Create a bucket window appears.
- Use Google's Search Console to verify ownership of your domain name, if you have not already done so. See the instructions on Google's website.
- Fill out the Create a bucket fields as follows:
- In the Name field, type your domain name (e.g.,
example.comor
images.example.com) to create a domain-named bucket. Remember the name you type. You'll need it to connect your GCS bucket to your Fastly service.
- In the Default storage class area, select Regional.
- From the Regional location menu, select a location to store your content. Most customers select a region close to the interconnect location they specify for shielding.
- Click the Create button.
You should now add files to your bucket and make them externally accessible by selecting the Public link checkbox next to each of the files.
Adding your GCS bucket as an origin server
To add your GCS bucket as an origin server, follow the instructions for connecting to origins. You'll add specific details about your origin server when you fill out the Create a host fields:
- In the Name field, type the name of your server (for example,
Google Cloud Storage).
- In the Address field, type
storage.googleapis.com.
- In the Transport Layer Security (TLS) area, leave the Enable TLS? default set to Yes to secure the connection between Fastly and your origin.
- In the Transport Layer Security (TLS) area, type
storage.googleapis.comin the Certificate hostname field.
- From the Shielding menu, select an interconnect location from the list of shielding locations.
Interconnect locations
Interconnect locations allow you to establish direct links with Google's network edge when you choose your shielding location. By selecting one of the locations listed below, you will be eligible to receive discounted pricing from Google CDN Interconnect for traffic traveling from Google Cloud Platform to Fastly's network. Most customers select the interconnect closest to their GCS bucket's region.
Interconnects exist in the following locations within North America:
- Ashburn (DCA)
- Ashburn (IAD)
- Atlanta (ATL)
- Chicago (MDW)
- Dallas (DFW)
- Los Angeles (LAX)
- New York (JFK)
- Seattle (SEA)
- San Jose (SJC)
- Toronto (YYZ)
Interconnects outside of North America exist in:
- Amsterdam (AMS)
- Frankfurt (FRA)
- Frankfurt (HHN)
- Hong Kong (HKG)
- London (LCY)
- London (LHR)
- Madrid (MAD)
- Paris (CDG)
- Singapore (SIN)
- Stockholm (BMA)
- Tokyo (NRT)
Review our caveats of shielding and select an interconnect accordingly.
Setting the default host for your service to your GCS bucket
- Log in to the Fastly web interface and click the Configure link.
- From the service menu, select the appropriate service.
- Click the Configuration button and then select Clone active. The Domains page appears.
- Click the Settings link. The Settings page appears.
Click the Override host switch. The Override host header field appears.
- In the Override host header field, type the name of the override host for this service. The name you type should match the name of the bucket you created in your GCS account and will take the format
<your bucket name>.storage.googleapis.com. For example, if your bucket name is
test123, your override hostname would be
test123.storage.googleapis.com.
- Click the Save button. The new override host header appears in the Override host section.
Creating domains for GCS
- Log in to the Fastly web interface and click the Configure link.
- From the service menu, select the appropriate service.
- Click the Configuration button and then select Clone active. The Domains page appears.
Click the Create domain button. The Create a domain page appears.
- In the Domain Name field, type the name users will type in their browsers to access your site.
- In the Comment field, optionally type a comment that describes your domain.
- Click the Create button. A new domain appears on the Domains page.
- Because GCS responds to different hostnames than your Fastly service, click the Create domain button to create a second domain.
- In the Domain Name field of the second domain you create, type the same value as the default host you created earlier (e.g.,
<your bucket name>.storage.googleapis.com) and click the Create button. A second new domain appears on the Domains page. Shielding POPs need this additional domain so they can route requests correctly. (See Caveats of shielding for more information.)
- Click the Activate button to deploy your configuration changes.
- Add a CNAME DNS record for the domain if you haven't already done so.
You can use
http://<domain>.global.prod.fastly.net/<filename> to access the files you uploaded.
Setting the Cache-Control header for your GCS bucket
GCS performs its own caching, which may complicate efforts to purge cache. To avoid potential problems, we recommend using the gsutil command line utility to set the Cache-Control header for one or more files in your GCS bucket:
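For example, a command of the following form applies the header to every object in the bucket (the exact values are an assumption consistent with the note below: max-age=0 for GCS itself, s-maxage=86400 for Fastly; adjust the object pattern as needed):

gsutil setmeta -h "Cache-Control: max-age=0, s-maxage=86400" gs://<bucket>/*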
Replace
<bucket> in the example above with your GCS bucket's name. Note that
max-age should instruct GCS to cache your content for zero seconds, and Fastly to cache your content for one day. See Google's setmeta docs for more information.
Changing the default TTL for your GCS bucket
If you want to change the default TTL for your GCS bucket, if at all, keep the following in mind:
- Your GCS account controls the default TTL for your GCS content. GCS currently sets the default TTL to 3600 seconds. Changing the default TTL will not override the default setting in your GCS account.
- To override the default TTL set by GCS from within the Fastly web interface, create a new cache setting and enter the TTL there.
- To override the default TTL in GCS, download the gsutil tool and then change the cache-control headers to delete the default TTL or change it to an appropriate setting.
Using GCS with private objects
To use Fastly with GCS private objects, be sure you've already made your GCS data available to Fastly by pointing to the right GCS bucket, then follow the steps below.
Setting up interoperable access
By default, GCS authenticates requests using OAuth2, which Fastly does not support. To access private objects on GCS, your project must have HMAC authentication enabled and interoperable storage access keys (an "Access Key" and "Secret" pair) created. Do this by following the steps below.
- Open the Google Cloud Platform console and select the appropriate project.
- Click Settings. The Settings appear with the Project Access controls highlighted.
- Click the Interoperability tab. The Interoperability API access controls appear.
- If you have not set up interoperability before, click Enable interoperability access.
Click Make
<PROJECT-ID> your default project for interoperable access. If that project already serves as the default project, that information appears instead.
Click Create a new key. An access key and secret code appear.
- Save the access key and secret code that appear. You'll need these later when you're creating an authorization header.
Setting up Fastly to use GCS private content
To use GCS private content with Fastly, create two headers, a Date header (required Authorization Signature) and an Authorization header.
Creating a Date new header page appears.
- Fill out the Create a new header fields as follows:
- In the Name field, type
Date.
- From the Type menu, select Request, and from the Action menu, select Set.
- In the Destination field, type
http.Date.
- In the Source field, type
now.
- From the Ignore if set menu, select No.
- In the Priority field, type
10.
- Click the Create button. A new Date header appears on the Content page. You will use this later within the Signature of the Authorization header.
Creating an Authorization header
Click the Create header button again to create another new header. The Create a header page appears.
- Fill out the Create a header fields as follows:
- In the Name field, type
Authorization.
- From the Type menu, select Request, and from the Action menu, select Set.
- In the Destination field, type
http.Authorization.
- From the Ignore if set menu, select No.
- In the Priority field, type
20.
In the Source field, type the header authorization information using the following format:
replacing
<access key>,
<GCS secret>, and
<GCS bucket name>with the information you gathered before you began. For example:
- Click the Create button. A new Authorization header appears on the Content page.
- Click the Activate button to deploy your configuration changes.
Getting support for Puppet Enterprise
Puppet Enterprise versions reach End-of-life (EOL) according to the support lifecycle, and we encourage upgrading to the latest version.
See also the operating system support lifecycle.
Getting support
Getting support for Puppet Enterprise is easy; it is available both from Puppet and from the community:
- Reporting issues to the Puppet customer support portal.
- Joining the Puppet Enterprise user group.
- Seeking help from the Puppet open source community.
See the support lifecycle page for more details.
Reporting issues to the customer support portal
Paid support
Puppet provides two levels of commercial support offerings for Puppet Enterprise: Standard and Premium. Both offerings allow you to report your support issues to our confidential customer support portal. You will receive an account and log-on for this portal when you purchase Puppet Enterprise.
Customer support portal:
The PE support script
When seeking support, you may be asked to run an information-gathering support script. Among the data it collects is the output of the PuppetDB /summary-stats endpoint: non-identifying data that communicates table and data sizes for troubleshooting.
Join the Puppet Enterprise user group
- Click on “Sign in and apply for membership.”
- Click on “Enter your email address to access the document.”
- Enter your email address.
Your request to join will be sent to Puppet for authorization and you will receive an email when you’ve been added to the user group.
Getting support from the existing Puppet community
As a Puppet Enterprise customer you are more than welcome to participate in our large and helpful open source community as well as report issues against the open source project.
Puppet open source user group:
Puppet Developers group:
Report issues with the open source Puppet project: | https://docs.puppet.com/pe/2016.1/overview_getting_support.html | 2019-02-16T04:52:54 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.puppet.com |
Droplet¶
The
Droplet is a service container that gives you access to many of Vapor's facilities. It is responsible for registering routes, starting the server, appending middleware, and more.
Tip
Normally applications will only have one Droplet. However, for advanced use cases, it is possible to create more than one.
Initialization¶
As you have probably already seen, the only thing required to create an instance of
Droplet is to import Vapor.
import Vapor

let drop = try Droplet()

// your magic here

try drop.run()
Creation of the
Droplet normally happens in the
main.swift file.
Note
For the sake of simplicity, most of the documentations sample code uses just the
main.swift file. You can read more about packages and modules in the Swift Package Manager conceptual overview.
Environment¶
The
environment is accessible via the config of the droplet.
It contains the current environment your application is running in. Usually development, testing, or production.
if drop.config.environment == .production { ... }
The environment affects Config and Logging. The environment is
development by default. To change it, pass the
--env= flag as an argument.
vapor run serve --env=production
If you are in Xcode, you can pass arguments through the scheme editor.
Warning
Debug logs can reduce the number of requests your application can handle per second. Enable production mode to silence non-critical logs.
Config Directory¶
The
workDir property contains a path to the current working directory of the application. Vapor uses this property to find the folders related to your project, such as
Resources,
Public, and
Config.
print(drop.config.workDir) // /var/www/my-project/
Vapor automatically determines the working directory in most situations. However, you may need to manually set it for advanced use cases.
You can override the working directory through the
Droplet's initializer, or by passing the
--workdir argument.
vapor run serve --workdir="/var/www/my-project"
Modifying Properties¶
The
Droplet's properties can be changed programmatically or through configuration.
Programmatic¶
Properties on the
Droplet are constant and can be overridden through the init method.
let drop = try Droplet(server: MyServerType.self)
Here the type of server the
Droplet uses is changed to a custom type. When the
Droplet is run, this custom server type will be booted instead of the default server.
Warning
Using the init method manually can override configured properties.
Configurable¶
If you want to modify a property of the
Droplet only in certain cases, you can use
addConfigurable. Say for example you want to email error logs to yourself in production, but you don't want to spam your inbox while developing.
let config = try Config()
config.addConfigurable(log: MyEmailLogger.init, name: "email")

let drop = Droplet(config)
The
Droplet will continue to use the default logger until you modify the
Config/droplet.json file to point to your email logger. If this is done in
Config/production/droplet.json, then your logger will only be used in production.
{ "log": "email" }
Supported Properties¶
Example¶
Let's create a custom logger to demonstrate Vapor's configurable properties.
AllCapsLogger.swift
final class AllCapsLogger: LogProtocol {
    var enabled: [LogLevel] = []

    func log(_ level: LogLevel, message: String, file: String, function: String, line: Int) {
        print(message.uppercased() + "!!!")
    }
}

extension AllCapsLogger: ConfigInitializable {
    convenience init(config: Config) throws {
        self.init()
    }
}
Now add the logger to the Droplet using the
addConfigurable method for logs.
main.swift
let config = try Config()
config.addConfigurable(log: AllCapsLogger.init, name: "all-caps")

let drop = try Droplet(config)
Whenever the
"log" property is set to
"all-caps" in the
droplet.json, our new logger will be used.
Config/development/droplet.json
{ "log": "all-caps" }
Here we are setting our logger only in the
development environment. All other environments will use Vapor's default logger.
Config Initializable¶
For an added layer of convenience, you can allow your custom types to be initialized from configuration files.
In our previous example, we initialized an
AllCapsLogger before adding it to the Droplet.
Let's say we want to allow our project to configure how many exclamation points get added with each log message.
AllCapsLogger.swift
final class AllCapsLogger: LogProtocol {
    var enabled: [LogLevel] = []

    let exclamationCount: Int
    init(exclamationCount: Int) {
        self.exclamationCount = exclamationCount
    }

    func log(_ level: LogLevel, message: String, file: String, function: String, line: Int) {
        print(message.uppercased() + String(repeating: "!", count: exclamationCount))
    }
}

extension AllCapsLogger: ConfigInitializable {
    convenience init(config: Config) throws {
        let count = config["allCaps", "exclamationCount"]?.int ?? 3
        self.init(exclamationCount: count)
    }
}
Note
The first parameter to
config is the name of the file.
Now that we have conformed our logger to
ConfigInitializable, we can pass just the type name to
addConfigurable.
main.swift
let config = try Config()
config.addConfigurable(log: AllCapsLogger.init, name: "all-caps")

let drop = try Droplet(config)
Now if you add a file named
allCaps.json to the
Config folder, you can configure the logger.
allCaps.json
{ "exclamationCount": 5 }
With this configurable abstraction, you can easily change how your application functions in different environments without needing to hard code these values into your source code. | https://docs.vapor.codes/2.0/vapor/droplet/ | 2019-02-16T05:41:36 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.vapor.codes |
Rich Content
If a Message Part contains a large amount of content (> 2KB), it will be sent as rich content instead. This has no effect on how you send a Message.
Sending Rich Content
Sending large messages works the same way as standard messages:
// Creates a message part with an image and sends it to the specified conversation
NSData *imageData = UIImagePNGRepresentation(image);
LYRMessagePart *messagePart = [LYRMessagePart messagePartWithMIMEType:@"image/png" data:imageData];
LYRMessage *message = [layerClient newMessageWithParts:@[messagePart] options:nil error:nil];

// Sends the specified message
NSError *error = nil;
BOOL success = [conversation sendMessage:message error:&error];
if (success) {
    NSLog(@"Message enqueued for delivery");
} else {
    NSLog(@"Message send failed with error: %@", error);
}
Receiving Rich Content
There are some differences in how such a Message would be received:
Once the rich content has been downloaded (whether automatically or manually with the
downloadContent method), it can be filtered by its MIME type and handled appropriately.
// Filter for PNG message part
NSPredicate *predicate = [NSPredicate predicateWithFormat:@"MIMEType == %@", @"image/png"];
LYRMessagePart *messagePart = [[message.parts filteredArrayUsingPredicate:predicate] firstObject];

// If it's ready, render the image from the message part
if (!(messagePart.transferStatus == LYRContentTransferReadyForDownload ||
      messagePart.transferStatus == LYRContentTransferDownloading)) {
    self.imageView.image = [UIImage imageWithData:messagePart.data];
}
Monitoring transfer progress
As Rich Content may involve upload of large files, being able to monitor the progress of sending these Messages can be significant. You can access the progress of any message part upload (while sending) or download (while receiving) via its progress.
LYRMessagePart *messagePart = message.parts.firstObject;

// Auto-download
LYRProgress *progress1 = messagePart.progress;

// Manual download
NSError *error;
LYRProgress *progress2 = [messagePart downloadContent:&error];
The
progress property on the message part will reflect upload progress when sending a message.
The
fractionCompleted property of
LYRProgress reflects the download/upload progress as a percent value (ranging from 0.0 to 1.0). The
LYRProgress object also declares the
LYRProgressDelegate protocol, which you can implement to be notified of progress changes.
@interface MyAwesomeApp () <LYRProgressDelegate>
@end

@implementation MyAwesomeApp

- (void)progressDidChange:(LYRProgress *)progress
{
    NSLog(@"Transfer progress changed %f", progress.fractionCompleted);
}

@end
Best practice
The
LYRProgressdoesn’t contain a reference back to the message part that it’s associated with. A common UI is to show a progress bar in the view that will load the contents of the message part once it’s downloaded. If you are doing this, set the progress delegate to the content view itself, and update the progress bar when the progress delegate method is called.
Aggregate progress
You can get the progress of multiple operations in one progress object by combining
LYRProgress objects into a
LYRAggregateProgress object (a subclass of
LYRProgress). As each individual
LYRProgress object updates, the aggregate progress will update accordingly based on the total size of all the operations.
LYRMessagePart *part1 = self.message.parts[0];
LYRMessagePart *part2 = self.message.parts[1];
LYRMessagePart *part3 = self.message.parts[2];

NSError *error;
LYRProgress *progress1 = [LYRClient downloadContent:part1 error:&error];
LYRProgress *progress2 = [LYRClient downloadContent:part2 error:&error];
LYRProgress *progress3 = [LYRClient downloadContent:part3 error:&error];

LYRAggregateProgress *aggregateProgress = [LYRAggregateProgress aggregateProgressWithProgresses:@[progress1, progress2, progress3]];
Transfer status
LYRMessagePart objects provide a
transferStatus property, which can be one of five values:
LYRContentTransferAwaitingUpload: The content has been saved in the local queue but hasn’t had a chance to start uploading yet
LYRContentTransferUploading: Content is currently uploading
LYRContentTransferReadyForDownload: Content is fully uploaded and ready on Layer servers but hasn’t had a chance to start downloading to the device yet
LYRContentTransferDownloading: Content is currently downloading
LYRContentTransferComplete: Content has finished uploading or downloading
The
transferStatus value is useful for certain use cases, such as only showing fully-downloaded messages. It is queryable.
Background transfers
You can continue a download or upload while the app is in the background by enabling background transfers.
layerClient.backgroundContentTransferEnabled = YES;
If you enable background content transfer, your app delegate must implement this method and call
handleBackgroundContentTransfersForSession:completion: on the Layer client:
- (void)application:(UIApplication *)application handleEventsForBackgroundURLSession:(NSString *)identifier completionHandler:(void (^)())completionHandler
{
    // You'll have to get access to your `layerClient` instance
    // This may be a property in your app delegate
    // Or accessible via a `LayerController` or similar class in your app
    [layerClient handleBackgroundContentTransfersForSession:identifier completion:^(NSArray *changes, NSError *error) {
        NSLog(@"Background transfers finished with %lu change(s)", (unsigned long)changes.count);
        completionHandler();
    }];
}
How to Publish Configuration Manager Site Information to Active Directory Domain Services
Applies To: System Center Configuration Manager 2007, System Center Configuration Manager 2007 R2, System Center Configuration Manager 2007 R3, System Center Configuration Manager 2007 SP1, System Center Configuration Manager 2007 SP2
Note.
To enable a Configuration Manager site to publish site information to Active Directory Domain Services
In the Configuration Manager console, navigate to System Center Configuration Manager / Site Database / Site Management / <site code> - <site name>.
Right-click <site code> - <site name>, and click Properties.
On the Advanced tab of site properties, select the Publish this site in Active Directory Domain Services check box.
Click OK.
See Also
Other Resources
How to Extend the Active Directory Schema for Configuration Manager
For additional information, see Configuration Manager 2007 Information and Support.
To contact the documentation team, email [email protected]. | https://docs.microsoft.com/en-us/previous-versions/system-center/configuration-manager-2007/bb680711(v=technet.10) | 2019-02-16T05:02:07 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.microsoft.com |
How to create and configure Azure Integration Runtime
The Integration Runtime (IR) is the compute infrastructure used by Azure Data Factory to provide data integration capabilities across different network environments. For more information about IR, see Integration runtime.
Azure IR provides a fully managed compute to natively perform data movement and dispatch data transformation activities to compute services like HDInsight. It is hosted in Azure environment and supports connecting to resources in public network environment with public accessible endpoints.
This document introduces how you can create and configure Azure Integration Runtime.
Default Azure IR
By default, each data factory has an Azure IR in the backend that supports operations on cloud data stores and compute services in public network. The location of that Azure IR is auto-resolve. If connectVia property is not specified in the linked service definition, the default Azure IR is used. You only need to explicitly create an Azure IR when you would like to explicitly define the location of the IR, or if you would like to virtually group the activity executions on different IRs for management purpose.
Create Azure IR
Integration Runtime can be created using the Set-AzureRmDataFactoryV2IntegrationRuntime PowerShell cmdlet. To create an Azure IR, you specify the name, location and type to the command. Here is a sample command to create an Azure IR with location set to "West Europe":
Set-AzureRmDataFactoryV2IntegrationRuntime -DataFactoryName "SampleV2DataFactory1" -Name "MySampleAzureIR" -ResourceGroupName "ADFV2SampleRG" -Type Managed -Location "West Europe"
For Azure IR, the type must be set to Managed. You do not need to specify compute details because it is fully managed elastically in cloud. Specify compute details like node size and node count when you would like to create Azure-SSIS IR. For more information, see Create and Configure Azure-SSIS IR.
You can configure an existing Azure IR to change its location using the Set-AzureRmDataFactoryV2IntegrationRuntime PowerShell cmdlet. For more information about the location of an Azure IR, see Introduction to integration runtime.
Use Azure IR
Once an Azure IR is created, you can reference it in your Linked Service definition. Below is a sample of how you can reference the Azure Integration Runtime created above from an Azure Storage Linked Service:
{ "name": "MyStorageLinkedService", "properties": { "type": "AzureStorage", "typeProperties": { "connectionString": { "value": "DefaultEndpointsProtocol=https;AccountName=myaccountname;AccountKey=...", "type": "SecureString" } }, "connectVia": { "referenceName": "MySampleAzureIR", "type": "IntegrationRuntimeReference" } } }
Next steps
See the following articles on how to create other types of integration runtimes:
Connect overlay

The Connect overlay appears over the standard user interface. It consists of the Connect sidebar and any Connect mini windows that are open.

Figure 1. Connect overlay

Note: An administrator can disable the Connect overlay so users can only use the Connect workspace, a full-screen interface with additional Connect tools.

Connect sidebar
The Connect sidebar is the primary interface for Connect Chat and Connect Support. It lists your conversations and provides access to create new conversations.

Connect mini windows
When you open a Connect Chat or Connect Support conversation in the Connect overlay, it opens in a Connect mini window. Each mini window contains a header, a conversation area, and a message field.
Please consult the Online Help File for the most up to date information and functionality.
The DriveWorks SOLIDWORKS CAM PowerPack plug-in extends a DriveWorks implementation by adding CAM related Generation Tasks.
Once downloaded, double click the DriveWorks-SolidWorksCamPowerPack-[version number].msi file to begin the installation process.
DriveWorks and SolidWorks should be closed while installing the plug-in.
Once installed the plug-in is automatically loaded in DriveWorks.
The plug-in is uninstalled from Windows Programs and Features and will be listed as DriveWorks-SolidWorksCamPowerPack [version number].
Once the SOLIDWORKS CAM PowerPack is installed, the following tasks will be available from Generation Tasks.
It is common to include these tasks in the Post Drive Tasks area of the Generation Tasks. This will allow the part to be driven by DriveWorks before any CAM data is created.
Each SOLIDWORKS CAM task is a separate task so other generation tasks can be run in between or after any CAM operation.
It is recommended that the tasks are run in the following order:
This topic describes how to modify a Microsoft SQL Server Agent proxy in SQL Server 2016 by using SQL Server Management Studio or Transact-SQL.
In This Topic
Before you begin:
Limitations and Restrictions
To modify a SQL Server Agent proxy, using:
SQL Server Management Studio
Before You Begin.
Using SQL Server Management Studio.
Using Transact-SQL
To modify a SQL Server Agent proxy
In Object Explorer, connect to an instance of Database Engine.
On the Standard bar, click New Query.
Copy and paste the following example into the query window and click Execute.
-- Disables the proxy named 'Catalog application proxy'.
USE msdb ;
GO

EXEC dbo.sp_update_proxy
    @proxy_name = 'Catalog application proxy',
    @enabled = 0;
GO
For more information, see sp_update_proxy (Transact-SQL). | https://docs.microsoft.com/en-us/sql/ssms/agent/modify-a-sql-server-agent-proxy | 2017-05-22T17:30:03 | CC-MAIN-2017-22 | 1495463605485.49 | [] | docs.microsoft.com |
Welcome to Vitalus’ documentation!¶
Vitalus is a rsync wrapper. Rsync is a good atomic tool, but it needs to be wrapped to have a real backup solution. Backup solutions are generally too basic or very difficult. This one fits my needs.
Contents:
Philosophy¶
- I want to centralize my backup in a unique script to achieve many tasks.
- I want to backup my desktop on external disks.
- I want to backup external disks to other external disks.
- I want to backup my /home/user on hosts accessible via ssh to a local disk.
- I want to backup my destop to hosts accessible via ssh.
- I want to keep increment if I need it.
- I want to adjust the frequency of copies for each task. The script starts much more frequently.
- I want readable log files telling me if everything goes fine.
- ...
Functionalities¶
This is just another way to express the philosophy :)
- Manage different tasks
- rsync from and to local disks
- rsync from SSH to local disk
- rsync from local to SSH almost supported
- Check disk space (local disks)
- Keep track of time evolution (increments done with hard links), not possible via SSH for the moment.
- Old increments deleted (keeping a minimal amount of increments)
- Rotated logs (general + one per task)
How to install?¶
See How to install?.
How to setup?¶
In my use case, I have a cron job running every hour. IMHO, this is quite atomic. Then, the script decides which task has to be done.
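For example, an hourly run can be declared with a crontab entry like the one below (the interpreter and script path are placeholders for your own backup script that configures and runs Vitalus):

# m h dom mon dow  command
0 * * * *          /usr/bin/python3 /home/user/my_backup.py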
About ssh¶
Keys must be configured with an empty passphrase. Add in your ~/.ssh/config, something like
Host sciunto.org
    IdentityFile ~/.ssh/key-empty-passphrase
Source or destination must have the format: login@server:path | http://vitalus.readthedocs.io/en/latest/ | 2017-05-22T17:23:31 | CC-MAIN-2017-22 | 1495463605485.49 | [] | vitalus.readthedocs.io |
After you specify whether you want to run the copy operation immediately, and after you optionally save the package that the wizard created, the SQL Server Import and Export Wizard shows Complete the Wizard. On this page, you review the choices that you made in the wizard, and then click Finish to start the copy operation.
Screen shot of the Complete the Wizard page
The following screen shot shows a simple example of the Complete the Wizard page of the wizard.
Review the options you selected
Review the summary and verify the following information:
- The source and destination of the data to copy.
- The data to copy.
- Whether the package will be saved.
- Whether the package will be run immediately.
What's next?
After you review the choices that you made in the wizard and click Finish, the next page is Performing Operation. On this page, you see the progress and the result of the operation that you configured on the preceding pages. For more info, see Performing Operation.
See also
Get started with this simple example of the Import and Export Wizard | https://docs.microsoft.com/en-us/sql/integration-services/import-export-data/complete-the-wizard-sql-server-import-and-export-wizard | 2017-05-22T17:30:43 | CC-MAIN-2017-22 | 1495463605485.49 | [array(['media/complete.png',
'Complete the Wizard page of the Import and Export Wizard Complete the Wizard page of the Import and Export Wizard'],
dtype=object) ] | docs.microsoft.com |
Box Connector Release Notes
Anypoint Connector for Box provides as a bi-directional gateway between Box, a secure content management and collaboration platform, and Mule.
Version 3.1.0 - March 8, 2017
Features
The following operations now support pagination:
Folders
Get Folder’s Items
Get Trashed Items
Get Folder Collaborations
Groups
Get Groups for an Enterprise
Get Memberships for Group
Get User’s Memberships.
Users
Get Enterprise Users
Improvement of exception messages: in addition to the HTTP status code, error messages also return the complete description of the failure cause.
Fields are now validated before sending the request: previously only a HTTP 400 response was returned.
New operation
Search with Parameters: unlike the search provided by the Box SDK, which still remains as an operation but deprecated, it provides all the parameters supported by the API, except for
mdfiltersand
filters.
Version 3.0.0 - August 11, 2016
Version 2.5.2 - April 23, 2015
Community
Version 2.5.2 - Fixed in this release
Retrieval of Remote User Id to enable integration with Dataloader.
Version 2.4.1 - September 25, 2013
See Also
Learn how to Install Anypoint Connectors using Anypoint Exchange.
Read more about Box Connector.
Access MuleSoft’s Forum to pose questions and get help from Mule’s broad community of users.
To access MuleSoft’s expert support team, subscribe to Mule ESB Enterprise and log in to MuleSoft’s Customer Portal. | https://docs.mulesoft.com/release-notes/box-connector-release-notes | 2017-05-22T17:23:56 | CC-MAIN-2017-22 | 1495463605485.49 | [] | docs.mulesoft.com |
Understanding Application SSH
This document describes details about the Pivotal Web Services (PWS) SSH components for access to deployed application instances. Pivotal Web Services (PWS) supports native SSH access to applications and load balancing of SSH sessions with the load balancer for your PWS deployment.
The SSH Overview document describes procedural and configuration information about application SSH access.
SSH Components
The PWS SSH includes the following central components, which are described in more detail below:
- An implementation of an SSH proxy server.
- A lightweight SSH daemon.
If these components are deployed and configured correctly, they provide a simple and scalable way to access the containers of apps and other long-running processes (LRPs).
SSH Daemon
The SSH daemon is a lightweight implementation that is built around the Go SSH library. It supports command execution, interactive shells, local port forwarding, and secure copy. The daemon is self-contained and has no dependencies on the container root file system.
The daemon is focused on delivering basic access to application instances in PWS. It is intended to run as an unprivileged process, and interactive shells and commands will run as the daemon user. The daemon only supports one authorized key, and it is not intended to support multiple users.
The daemon can be made available on a file server and Diego LRPs that want to use it can include a download action to acquire the binary and a run action to start it. PWS applications will download the daemon as part of the lifecycle bundle.
SSH Proxy Authentication
The SSH proxy hosts the user-accessible SSH endpoint and is responsible for authentication, policy enforcement, and access controls in the context of PWS.
What is machine learning?
Machine learning is a technique of data science that helps computers learn from existing data in order to forecast future behaviors, outcomes, and trends.
These forecasts or predictions from machine learning can make apps and devices smarter.
For a brief overview, try the video series Data Science for Beginners. Without using jargon or math, Data Science for Beginners introduces machine learning and steps you through a simple predictive model.
What is Machine Learning in the Microsoft Azure cloud?
Azure Machine Learning not only provides tools to model predictive analytics, but also provides a fully managed service you can use to deploy your predictive models as ready-to-consume web services.
What is predictive analytics?
Predictive analytics uses math formulas called algorithms that analyze historical or current data to identify patterns or trends in order to forecast future events.
Tools to build complete machine learning solutions in the cloud
Azure Machine Learning has everything you need to create complete predictive analytics solutions in the cloud, from a large algorithm library, to a studio for building models, to an easy way to deploy your model as a web service. Quickly create, test, operationalize, and manage predictive models.
Machine Learning Studio: Create predictive models
In Machine Learning Studio, you can quickly create predictive models by dragging, dropping, and connecting modules. You can experiment with different combinations, and try it out for free.
In Cortana Intelligence Gallery, you can try analytics solutions authored by others or contribute your own. Post questions or comments about experiments to the community, or share links to experiments via social networks such as LinkedIn and Twitter.
Use a large library of Machine Learning algorithms and modules in Machine Learning Studio to jump-start your predictive models. Choose from sample experiments, R and Python packages, and best-in-class algorithms from Microsoft businesses like Xbox and Bing. Extend Studio modules with your own custom R and Python scripts.
Operationalize predictive analytics solutions by publishing your own
The following tutorials show you how to operationalize your predictive analytics models:
- Deploy web services
- Retrain models through APIs
- Manage web service endpoints
- Scale a web service
- Consume web services
You can get started by following a tutorial and by building on samples.
'What is machine learning? Basic workflow to operationalize predictive analytics on Azure Machine Learning.'],
dtype=object) ] | docs.microsoft.com |
Changes the owner of an object in the current database.
Important. sp_changeobjectowner changes both the schema and the owner. To preserve compatibility with earlier versions of SQL Server, this stored procedure will only change object owners when both the current owner and the new owner own schemas that have the same name as their database user names.
Important
A new permission requirement has been added to this stored procedure.
Applies to: SQL Server (SQL Server 2008 through current version).
Transact-SQL Syntax Conventions
Syntax
sp_changeobjectowner [ @objname = ] 'object' , [ @newowner = ] 'owner'
Arguments
[ .
Return Code Values
0 (success) or 1 (failure)
Remarks.
To change the owner of a securable, use ALTER AUTHORIZATION. To change a schema, use ALTER SCHEMA.
Permissions
Requires membership in the db_owner fixed database role, or membership in both the db_ddladmin fixed database role and the db_securityadmin fixed database role, and also CONTROL permission on the object.
Examples
The following example changes the owner of the
authors table to
Corporate\GeorgeW.
EXEC sp_changeobjectowner 'authors', 'Corporate\GeorgeW'; GO
See Also
ALTER SCHEMA (Transact-SQL)
ALTER DATABASE (Transact-SQL)
ALTER AUTHORIZATION (Transact-SQL)
sp_changedbowner (Transact-SQL)
System Stored Procedures (Transact-SQL) | https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-changeobjectowner-transact-sql | 2017-05-22T17:38:20 | CC-MAIN-2017-22 | 1495463605485.49 | [array(['../../includes/media/yes.png', 'yes'], dtype=object)
array(['../../includes/media/no.png', 'no'], dtype=object)
array(['../../includes/media/no.png', 'no'], dtype=object)
array(['../../includes/media/no.png', 'no'], dtype=object)] | docs.microsoft.com |
About Custom Policies
If you want an API policy that isn’t included in the default set of policies, you can create your own custom policy. A custom policy requires the following files:
Policy Definition - YAML file that describes the policy and its configurable parameters
Policy Configuration - XML file with the backend processes that implement the policy
A custom policy also must contain a pointcut declaration]
You can create a custom policy if you use one of the following runtimes:
Mule 3.8 or later unified runtime
API Gateway runtime 1.3 or later
In Studio 6.1 and later, you can use the Studio custom policy editor (Beta).
Limitations
Custom policies must be self-contained. From a custom policy, do not reference another dependent policy, a connector on the app, or a shared connector.
You create a custom policy by leveraging the elements in Mule Runtime to evaluate and process HTTP calls and responses. The policy filters calls to an API and matches one of the query parameters in the request to a configurable, regular expression. Unmatched requests are rejected. You set the HTTP status and the payload to indicate an error whenever a request does not match the conditions of the filter.
Applying a Custom Policy
To make a custom policy available to users, you add the policy to Anypoint Platform in API Manager. On the API version details page of an API, users can then choose Policies, select the custom policy from the list, and apply the policy to the API.
If you deploy the API on a private server using a .zip file that you downloaded from Anypoint Platform, the new policy is available for on-premises users to apply. Even if the proxy was already deployed before creating the policy, there’s no need to re-download or re-deploy anything. The new policy automatically downloads to the
/policies folder, in the location where the API Gateway or Mule 3.8 unified runtime is installed. Configure your organization’s Client ID and Token in the
wrapper.conf file.
Failed Policies
In Mule Runtime 3.8 and later and API Gateway Runtime 2.1 and later, when an online policy is malformed and it raises a parse exception, it’s stored under
failedPolicies directory inside
policies directory, waiting to be reviewed. In the next poll for policies it won’t be parsed. If you delete that policy, it is deleted from that folder too. If the folder has no policies, it is deleted. | https://docs.mulesoft.com/api-manager/applying-custom-policies | 2017-05-22T17:23:02 | CC-MAIN-2017-22 | 1495463605485.49 | [] | docs.mulesoft.com |
The basic units of OpenShift and Kubernetes add the ability to orchestrate Docker containers across multi-host installations.
Though you do not directly interact with Docker tools when using OpenShift, understanding Docker’s capabilities and terminology is important for understanding its role in OpenShift and how your applications function inside of containers. Docker is available as part of RHEL 7, as well as CentOS and Fedora, so you can experiment with it separately from OpenShift. Refer to the article Get Started with Docker Formatted Container Images on Red Hat Systems for a guided introduction.
Docker containers are based on Docker images. A Docker image is a binary that includes all of the requirements for running a single Docker container, as well as metadata describing its needs and capabilities. You can think of it as a packaging technology. Docker containers only have access to resources defined in the image, unless you give the container additional access when creating it. By deploying the same image in multiple containers across multiple hosts and load balancing between them, OpenShift can provide redundancy and horizontal scaling for a service packaged into an image.
You can use Docker directly to build images, but OpenShift also supplies builders that assist with creating an image by adding your code or configuration to existing images.
Since, Docker.
A Docker registry is a service for storing and retrieving Docker images. A
registry contains a collection of one or more Docker image repositories. Each
image repository contains one or more tagged images. Docker provides its own
registry, the Docker Hub, but you may
also use private or third-party registries. Red Hat provides a Docker registry
at
registry.access.redhat.com for subscribers. OpenShift can also supply its
own internal registry for managing custom Docker images.
The relationship between Docker containers, images, and registries is depicted in the following diagram: | https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/containers_and_images.html | 2017-05-22T17:14:16 | CC-MAIN-2017-22 | 1495463605485.49 | [] | docs.openshift.com |
Storage as a Service
- Overview
- Benefits
- Using Storage Service Applications
- Web-Based Storage Browser
- Storage Service Scenarios
Overview
The CloudCenter platform decouples shared storage from CloudCenter and allows you to attach your own external Artifact Repository to store and access files. To this effect, the CloudCenter platform provides a File System service to build and own storage. This feature, also referred to as storage as a service or storage service, provides read-write storage access across users and deployments.
Benefits
The benefits of using storage as a service include:
- The ability to mount storage with multiple disks and encryption.
- The ability to control the cost of storage by providing the storage service at the application level (rather than at the system level).
- The flexibility to use the storage service as part of one application or as part of the deployment service for multiple applications.
Job-based applications (not N-tier applications) that require persistent storage can use the Storage as a Service application.
Using Storage Service Applications
You can model and deploy N-tier applications for a single tier the following storage service options:
The storage repository is automatically mounted when you reboot a running storage service application. The NFS or CephFS storage service is mounted in the /shared path and files can be accessed or written to/from /shared path.
Web-Based Storage Browser
CloudCenter launches the elFinder web-based storage browser using your SSO credentials (see SSO AD or SAML SSO or Use Case: Shibboleth SSO for additional details) so you can drag and drop files when using storage services. You do not need to explicitly login to the storage browser.
Storage Service Permissions
All users within the tenant can read (view) this storage service by default.
To allow specific user groups to have full access (read and write) to this storage service, add the group name to the User Groups (Write Permission) field.
By default, the follow conditions apply to this storage service:
Users from different tenants cannot access this service and associated storage browser.
Storage space owners have read/write access.
Users in any user group that was included in the User Groups (Write Permission) field have read/write access.
Other users within this tenant have read only access.
Configuring Storage Services
To use this feature, you must verify the following requirements:
- Allow the storage browser UI to be accessed by the he CloudCenter UI user. All users in a tenant can view this service by default.
- Specify user groups that can have full access (read and write) to this storage service and click Save App.
End-to-End Storage Configuration Use Case
To configure NFS storage using the storage mounting feature, follow this process.
- Login as the tenant admin and create a shared deployment environment, called StorageNFS.
- Log into the CCM UI as a the tenant administrator.
- Click Deployment > Deployment Environments > Add New Deployment Environment.
- Assign the following values (see Deployment Environment for additional context):
- Name = StorageNFS
- Select the required Cloud Region and Account
- Identify the Default Cloud Region, if applicable.
- Select the Default Instance Type
- Click Save. The Deployment Environments page refreshes to display the new deployment environment StorageNFS.
- Share StorageNFS with users (see Permission Control for additional context).
- Click the Share Action icon for the StorageNFS deployment environment.
- In the Share popup, verify that you are in the Users tab (default) and select Share with all users.
- Leave the Access permissions as View (default).
- Change the User's Deployment permissions to Access.
- Change the Others' Deployment permissions to Access.
- Click Save.
- Model an application using NFS storage service and grant read/write access to the user called group GroupNFS.
- In the CCM UI, click Applications > Model > N-Tier Execution.
- In the Topology Modeler,
- Click the Basic Information tab and assign the following values (see Model an Application for additional context).
- Web App Name = NFS Demo
- Version = 1.0
- Click the Topology Modeler tab.
- Click File System in the Services pane.
- Drag and drop the NFS service into the Graphical Workflow pane.
- Click the dropped NFS service box to access the corresponding properties in the Properties pane.
- In the General Settings section, assign the following values:
- In the Hardware Specification section, assign the following value:
- Memory = 256 MB
- Click Save as App. StorageNFS now displays in the Applications page along with other applications.
- Launch a new job from the modeled NFS Storage application into StorageNFS.
- Click the StorageNFS application to deploy it and assign the following values (see Deploy the Application for additional context).
- Deployment Name = NFS-Demo1
- In the nfs_0 section click Advanced and assign the following values (see IP Allocation Mode > NIC Configuration for additional context).
- Cluster = cluster1
- Datastore = DatastoreCluster
- Resource Pool = CliQr
- Target Deployment Folder = /test2
- Verify that the Network Interface Controller is displaying the configured number of NIC cards and the Network is identified in the Attach Network Interfaces section.
- Click Submit.
- The page refreshes to display the deployed job details.
- Access the NFS storage using the administrator account and upload data.
- In the CCM UI, access the Deployment page for the NFS-Demo1 deployment and click the Access NFS-Demo1 button.
- CloudCenter launches the web browser and displays the Your connection is not private... screen.
- Like Proceed to <URL> (unsafe), if you wish to proceed.
- The storage browser displays.
- Right-click and select New folder from the available options.
- Name this folder Test1.
- Create a second new folder and name it Test2.
- Select and drill into Test1.
- Right-click and select Upload files.
- In the Upload files popup, click Select files to upload.
- Select the required file(s) from the Browser file selection window. For example, Sample.png and Sample.dll.
- Log out of the storage browser and login again.
- Click Logout in the storage browser.
- Access the CCM UI and click the Access NFS-Demo-1 button to login again.
The following image provides an example of the Access NFS button:
- Verify that your files and folders are displaying as configured.
- Log out of the administrator account.
- Click Logout in the storage browser.
- Try accessing the storage browser URL and verify that the permissions are secure.
- From another browser, log in as a standard user (User1).
- In the CCM UI click Deployments.
- Expand the + icon for NFS-Demo1 and click the NFS-Demo1_run_1 link. The "NFS-Demo1_run_1" Deployment Details page displays.
- Click the Access NFS-Demo1 button. You should see the There is a problem with this website's security certificate... screen.
- If you want to proceed, click Continue to this webpage (not recommended).
- Verify that your files and folders are displaying as configured and that this user has view access to the storage browser and the files and folders.
- Add User1 to GroupNFS.
- Access the CCM UI as the tenant administrator.
- Click Admin > Groups
- Locate GroupNFS in the list and click the Edit link for GroupNFS.
- In the Edit User Group page, locate the Associated Users section.
- In the Add user to this group field within this section, select User1 from the list of users that display when you start typing us...(be aware that only users within this tenant can be added to this group).
- Click Save.
- User1 can now access the NFS storage in read/write mode.
- Login to the CCM UI as User1.
- Click Deployments.
- Expand the + icon for NFS-Demo1 and click the NFS-Demo1_run_1 link. The "NFS-Demo1_run_1" Deployment Details page displays.
- Click the Access NFS-Demo1 button.
- Verify that your files and folders are displaying as configured and that this user has READ/WRITE access to the storage browser and the files and folders.
Storage Service Scenarios
Enterprises can use the storage service in multiple scenarios:
- To write application logs and other output files to a persistent storage.
- To write output files when benchmarking applications or to run job-based applications.
- Other scenarios identified in this section.
Scenario 1: Unique Storage Service Per Deployment
In this scenario, the storage service is unique for each deployment. The storage settings are fine tuned based on the application dependencies and requirements. The application only needs a simple node initialization script to mount the storage and the Storage IP address can be passed as an environment variable through the NFS service.
Scenario 2: Storage Service Shared across Deployments
In this scenario, the storage service is shared across permitted deployments in your enterprise. The storage settings are fine tuned based on the application dependencies and requirements. Your enterprise can have multiple storage services available and you can set the instance at deployment time. The application only needs a simple node initialization script to mount the storage and the Storage IP address can be passed as an environment variable through the he CloudCenter platform.
Scenario 3: Storage Service Shared with Users
In this scenario, the storage service is shared with all users. The storage settings are fine tuned based on the application dependencies and requirements. Your enterprise can have multiple storage services available and then you can set the NFS service IP address or DNS at deployment time. The application only needs a simple node initialization script to mount the storage and the Storage IP address can be passed as a custom parameter.
- No labels | http://docs.cliqr.com/display/CCD48/Storage+as+a+Service | 2017-05-22T17:31:46 | CC-MAIN-2017-22 | 1495463605485.49 | [array(['/download/attachments/1084674/Screen%20Shot%202015-07-20%20at%208.50.16%20AM.png?version=1&modificationDate=1437411401000&api=v2',
None], dtype=object)
array(['/download/attachments/1084674/Screen%20Shot%202015-07-20%20at%209.06.13%20AM.png?version=1&modificationDate=1437411394000&api=v2',
None], dtype=object)
array(['/download/attachments/1084674/Screen%20Shot%202015-10-28%20at%2012.48.11%20PM.png?version=1&modificationDate=1446061876000&api=v2',
None], dtype=object)
array(['/download/attachments/1084674/Screen%20Shot%202015-07-20%20at%209.30.53%20AM.png?version=1&modificationDate=1437411398000&api=v2',
None], dtype=object)
array(['/download/attachments/1084674/Screen%20Shot%202015-07-20%20at%209.36.52%20AM.png?version=1&modificationDate=1437411393000&api=v2',
None], dtype=object)
array(['/download/attachments/1084674/Screen%20Shot%202015-07-20%20at%209.53.02%20AM.png?version=1&modificationDate=1437411400000&api=v2',
None], dtype=object) ] | docs.cliqr.com |
(PECL paradox >= 1.4.0)
px_update_record — Updates record in paradox database
$pxdoc, array
$data, int
$num).
Nota:.
Retorna
TRUE em caso de sucesso ou
FALSE em caso de falha.
px_insert_record() - Inserts record into paradox database | http://docs.php.net/manual/pt_BR/function.px-update-record.php | 2017-05-22T17:29:18 | CC-MAIN-2017-22 | 1495463605485.49 | [] | docs.php.net |
When designing your target SuperSTAR database, in most cases you will want to create references between the fact table columns and the classification tables that contain the descriptions of the values in the fact table.
For example, you might have a fact table column called Gender that contains codes like M, F and U, and create a reference to a classification table containing the descriptions Male, Female, and Unknown:
However, you can also create classification tables directly from fact table columns. When you do this, SuperCHANNEL will automatically create the classifications based on the values in the fact table (it uses the "add to classification" cleansing action to do this during the build).
Although you can create a classification table from any fact table column, it is most useful in situations where the fact table column contains the actual descriptions, rather than a code.
For example, the Customer table in the sample retail banking database contains a column called Company, which contains values such as Individual or Company:
To include this column in the SXV4, you can create a classification table in SuperCHANNEL, as follows:
Right-click the fact table column in the Target View and select Create Classification.
The Create Classification dialog displays.
Enter a name for the classification table and click OK. The table name must be unique (it cannot be the same as any existing tables in the source database).
This label will appear by default in the drop-down list in SuperWEB2 (although you can override the drop-down labels if necessary):
SuperCHANNEL automatically creates the classification table with code and name columns:
SuperCHANNEL automatically sets the cleansing action on the fact table column to add to classification. Leave this setting unchanged.
This means that during the build, SuperCHANNEL will automatically populate both the name and code columns in the new classification table with the values from the fact table column: | http://docs.spacetimeresearch.com/display/SuperSTAR98/Create+Classification+Tables+-+SuperCHANNEL | 2017-05-22T17:17:57 | CC-MAIN-2017-22 | 1495463605485.49 | [] | docs.spacetimeresearch.com |
Troubleshooting: BBM
I can't add a BBM contact
-
.
I.
An app keeps prompting me:
- In the Settings app, tap Security and Privacy > Application Permissions.
- Tap the app's name.
- Change the options. For example, limit certain actions or disconnect the app from BBM.
- Tap
.
I accidentally blocked an invitation
- In BBM, swipe down from the top of the screen.
- Tap
> Blocked Contacts and Updates.
- Touch and hold a name.
- Tap
.
I can't use BBM
- Check that your device is connected to a wireless network.
- Check that other apps that require Internet access are working. For example, open the BlackBerry Browser and go to a website.
- Open BBM. If prompted, complete any setup steps.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/60523/laf1334697770228.jsp | 2014-11-21T02:46:36 | CC-MAIN-2014-49 | 1416400372542.20 | [] | docs.blackberry.com |
Writing a pyglet application¶
Getting started with a new library or framework can be daunting, especially when presented with a large amount of reference material to read. This chapter gives a very quick introduction to pyglet without going into too much detail.
Hello, World¶()
This will enter pyglet’s default event loop, and let pyglet respond to
application events such as the mouse and keyboard.
Your event handlers will now be called as required, and the
run() method will return only when all application
windows have been closed.
If you are coming from another library, you may be used to writing your own event loop. This is possible to do with pyglet as well, but it is generally discouraged; see The application event loop for details.
Image viewer¶
Most games and applications will need to load and display images on the screen. In this example we’ll load an image from the application’s directory and display it within the window:
import pyglet window = pyglet.window.Window() image = pyglet.resource.image('kitten.png') @window.event def on_draw(): window.clear() image.blit(0, 0) pyglet.app.run()
We used the
image() function of
pyglet.resource.
Handling mouse and keyboard events¶. An easy way to find the event names and parameters you need is to add the following lines to your program:
event_logger = pyglet.window.event.WindowEventLogger() window.push_handlers(event_logger)
This will cause all events received on the window to be printed to the console.
An example program using keyboard and mouse events is in examples/programming_guide/events.py
Playing sounds and music¶
pyglet makes it easy to play and mix multiple sounds together.
pyglet.media.load().
By default, audio is streamed when playing. This works well for longer music
tracks. Short sounds, such as a gunfire shot used in a game, should instead be
fully decoded in memory before they are used. This allows them to play more
immediately and incur less of a CPU performance penalty. It also allows playing
the same sound repeatedly without reloading it..
Where to next?¶
The examples above have shown you how to display something on the screen, and perform a few basic tasks. You’re probably left with a lot of questions about these examples, but don’t worry. The remainder of this programming guide goes into greater technical detail on many of pyglet’s features. If you’re an experienced developer, you can probably dive right into the sections that interest you.
For new users, it might be daunting to read through everything all at once. If you feel overwhelmed, we recommend browsing through the beginnings of each chapter, and then having a look at a more in-depth example project. You can find an example of a 2D game in the In-depth game example section.
To write advanced 3D applications or achieve optimal performance in your 2D applications, you’ll need to work with OpenGL directly. If you only want to work with OpenGL primitives, but want something slightly higher-level, have a look at the Graphics module.
There are numerous examples of pyglet applications in the
examples/
directory of the documentation and source distributions. If you get
stuck, or have any questions, join us on the mailing list or Discord! | https://pyglet.readthedocs.io/en/latest/programming_guide/quickstart.html | 2020-07-02T10:27:08 | CC-MAIN-2020-29 | 1593655878639.9 | [] | pyglet.readthedocs.io |
Installing the TFS Process Template Editor
After an extended absence, I returned to
hacking Domain-Specific Language Tools for Visual Studio 2005 Redistributable Components. No problemo... but still no Editor. I then repaired the Power Tool... but still no Editor. It was only after an uninstall/reinstall cycle of the Power Tool did the Editor appear on the Team menu of Visual Studio.
The moral of the story... RTFM and install the DSL Tools before the Power Tool. | https://docs.microsoft.com/en-us/archive/blogs/anlynes/installing-the-tfs-process-template-editor | 2020-07-02T09:50:17 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.microsoft.com |
Overview
When setting up your sending domain, you'll need to add specific DNS records to authenticate your domain and get the best deliverability possible for your emails.
Update your DNS information
To get your domain authenticated, go to Account Settings
> Domains & IPs. From there, send the information to your DNS administrator or copy the fields for use in your DNS provider. To email them to your DNS administrator, select the email the full list of DNS records to a teammate option, and enter the email address of your administrator.
Once your administrator has updated the DNS records, you can return to the Domain & IPs screen and click Verify all DNS Records at the top of the table. The authentication usually takes 10-15 seconds. Once verified, the status for all the records should become green checkboxes.
If the entries do not verify, the DNS entries may still be propagating. This propagation period typically takes no longer than 30 minutes. If this period hasn't passed, please wait and try again. If you have issues with this step, please reach out to Support with questions.
Next: Setup Your Sender Profile | https://docs.zaius.com/hc/en-us/articles/360012277794-Authenticate-your-sending-domain | 2020-07-02T08:10:41 | CC-MAIN-2020-29 | 1593655878639.9 | [array(['/hc/article_attachments/360024091133/mceclip0.png', None],
dtype=object)
array(['/hc/article_attachments/360022648193/mceclip0.png', None],
dtype=object) ] | docs.zaius.com |
API Reference
Image Height
h
The height of the output image. Primary mode is a positive integer, interpreted as pixel height. The resulting image will be
h pixels tall.
A secondary mode is a float between
0.0 and
1.0, interpreted as a ratio in relation to the source image size. For example, an
h value of
0.5 will result in an output image half the height of the source image.
If only one dimension is specified, the other dimension will be calculated according to the aspect ratio of the input image. If both width and height are omitted, then the source image’s dimensions are used.
If the
fit parameter is set to
clip or
max, then the actual output height may be equal to or less than the dimensions you specify. If you’d like to resize using only
h, please use
fit=clip to ensure that the other dimension will be accurately calculated.
Note: The maximum output image size on imgix is 8192×8192 pixels. All output images will be sized down to accomodate this limit. | https://docs.imgix.com/apis/url/size/h | 2018-09-18T18:27:31 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.imgix.com |
All content with label as5+gridfs+infinispan+listener+non-blocking.
Related Labels:
expiration, publish, datagrid, interceptor, server, transactionmanager, release,, migration, jpa, filesystem, tx, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, snapshot, webdav, hotrod, docs, consistent_hash, batching, store, jta, faq, 2lcache, jsr-107, jgroups, lucene, locking, hot_rod
more »
( - as5, - gridfs, - infinispan, - listener, - non-blocking )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/as5+gridfs+infinispan+listener+non-blocking | 2018-09-18T18:29:10 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.jboss.org |
Shape Keys Panel¶
Reference
Shape Keys panel.
Settings¶
- Active Shape Key Index
A List Views.
- Value
- Current Value of the Shape Key (0.0 to 1.0).
- Mute (eye icon)
- This visually disables the shape key in the 3D View.
- Specials
- Transfer Shape Key
- Transfer the active Shape Key from a different object. Select two objects, the active Shape Key is copied to the active object.
- Join as Shapes
- Transfer the Current Shape from a different object. Select two objects, the Shape is copied to the active object.
- Mirror Shape Key
- If your mesh is nice and symmetrical, in Object Mode, you can mirror the shape keys on the X axis. This will not work unless the mesh vertices are perfectly symmetrical. Use thetool in Edit Mode.
- Mirror Shape Key (Topology)
- This is the same as Mirror Shape Key though it detects the mirrored vertices based on the topology of the mesh. The mesh vertices do not have to be perfectly symmetrical for this one to work.
- New Shape From Mix
- Add a new shape key with the current deformed shape of the object.
- Delete All Shapes
- Delete all shape keys.
- Relative
- Set the shape keys to Relative or Absolute.
- Show Only (pin icon)
- Show the shape of the active shape key without interpolation in the 3D View. Show Only is enabled while the object is in Edit Mode, unless the setting below is enabled.
- Edit Mode
- Modify the shape key while the object is in Edit Mode.
Relative Shape Keys¶
Relative shape keys deform from a selected shape key. By default, all relative shape keys deform from the first shape key called the Basis shape key.
Relative Shape Keys options.
- Clear Weights
X
- Set all values to zero.
- Value
- The value of the active shape key.
- Range
- Min and Max range of the active shape key value.
- Blend
- Vertex Group
- Limit the active shape key deformation to a vertex group.
- Relative
- Select the shape key to deform from.
Absolute Shape Keys¶
Absolute shape keys deform from the previous and to the next shape key. They are mainly used to deform the object into different shapes over time.
Absolute Shape Keys options.
- Reset Timing (clock icon)
- Reset the timing for absolute shape keys.
- Interpolation
This controls the interpolation between shape keys.
Linear, Cardinal, Catmull-Rom, B-Spline
Different types of interpolation.The red line represents interpolated values between keys (black dots).
- Evaluation Time
- This is used to control the shape key influence.
Examples¶
Reset Timing¶
For example, if you have the shape keys, Basis, Key_1, Key_2, in that order.
Reset Timing will loop the shape keys, and set the shape keyframes to +0.1:
- Basis 0.1
- Key_1 0.2
- Key_2 0.3
Evaluation Time will show this as frame 100:
- Basis 10.0
- Key_1 20.0
- Key_2 30.0 | https://docs.blender.org/manual/en/dev/animation/shape_keys/shape_keys_panel.html | 2018-09-18T17:54:47 | CC-MAIN-2018-39 | 1537267155634.45 | [array(['../../_images/animation_shape-keys_shape-keys-panel_basis.png',
'../../_images/animation_shape-keys_shape-keys-panel_basis.png'],
dtype=object)
array(['../../_images/animation_shape-keys_shape-keys-panel_relative.png',
'../../_images/animation_shape-keys_shape-keys-panel_relative.png'],
dtype=object)
array(['../../_images/animation_shape-keys_shape-keys-panel_absolute.png',
'../../_images/animation_shape-keys_shape-keys-panel_absolute.png'],
dtype=object) ] | docs.blender.org |
All content with label infinispan+jcache+jsr-107.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, deadlock, archetype, lock_striping, jbossas, nexus, guide, listener, cache,
amazon, grid, test, api, xsd, ehcache, maven, documentation, write_behind, ec2, 缓存, hibernate, aws, interface, custom_interceptor, clustering, setup, eviction, gridfs, concurrency, out_of_memory, jboss_cache, import, index, events, batch, hash_function, configuration, buddy_replication, write_through, cloud, tutorial, jbosscache3x, xml, read_committed, distribution, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, websocket, transaction, interactive, xaresource, searchable, demo, installation, scala, client, migration, non-blocking, jpa, filesystem, tx, gui_demo, eventing, client_server, testng, infinispan_user_guide, hotrod, snapshot, webdav, repeatable_read, docs, batching, consistent_hash, store, jta, faq, 2lcache, as5, jgroups, lucene, locking, rest, hot_rod
more »
( - infinispan, - jcache, - jsr-107 )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/infinispan+jcache+jsr-107 | 2018-09-18T18:19:59 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.jboss.org |
What's New in Exchange Server many new features for each server role. This topic discusses the new and improved features that are added when you install Exchange 2007 SP1.
To download Exchange 2007 SP1, see Exchange Server 2007 Downloads.
New Deployment Options
You can install Exchange 2007 SP1 on a computer that is running the Windows Server 2008 operating system. For more information about the installation prerequisites for installing Exchange 2007 SP1 on a Windows Server 2008 computer, see How to Install Exchange 2007 SP1 and SP2 Prerequisites on Windows Server 2008 or Windows Vista. For more information about the supported operating systems for Exchange 2007 SP1, see Exchange 2007 System Requirements.
If Exchange 2007 and SP2.
Client Access Server Role Improvements
Exchange ActiveSync in Exchange 2007 SP1 includes the following enhancements for the administrator and for the end user:
An Exchange ActiveSync default mailbox policy is created.
Enhanced Exchange ActiveSync mailbox policy settings have been added.
Remote Wipe confirmation has been added.
Direct Push performance enhancements have been added.
For more information about the new Exchange ActiveSync features in Exchange 2007 SP1 see New Client Access Features in Exchange 2007 SP1.
Outlook Web Access Office 2007 file formats. as follows:
Ability to integrate with custom message types in the Exchange store so that they are displayed correctly in Outlook Web Access
Ability to customize the Outlook Web Access user interface to seamlessly integrate custom applications together with Outlook Web Access
For more information about new Outlook Web Access features in Exchange 2007 SP1 see New Client Access Features in Exchange 2007 SP1.
POP3/IMAP4
A new administration user interface has been added to the Exchange Management Console for the POP3 and IMAP4 protocols. This administration user interface enables you to configure the following settings for POP3 and IMAP4 for your individual Client Access server:
Port settings
Authentication settings
Connection settings
Message and calendar settings
For more information about new POP3 and IMAP4 features in Exchange 2007 SP1 see New Client Access Features in Exchange 2007 SP1.
Improvements in Transport
Exchange 2007 SP1 includes the following improvements to core transport functionality:
Back pressure algorithm improvements
The addition of transport configuration options to the Exchange Management Console
Exchange 2007 SP1 includes the following enhancements to message processing and routing functionality on the Hub Transport server role:
Priority queuing
Message size limits on Active Directory site links
Message size limits on routing group connectors
The addition of Send connector configuration options to the Exchange Management Console
The addition of the Windows Rights Management Services (RMS) agent
X.400 authoritative domains
Transport rules are now able to act on Unified Messaging messages
Exchange 2007 SP1 includes the following enhancements to the Edge Transport server role:
Improvements to the following EdgeSync cmdlets:
Start-EdgeSynchronization cmdlet
Test-EdgeSynchronization cmdlet
Improvements to the cloned configuration scripts
For more information about improvements to the Transport server roles in Exchange 2007 SP1, see New Transport Features in Exchange 2007 SP1.
Mailbox Server Role Improvements
Exchange 2007 SP1 introduces several new features for the Mailbox server role including the following:
Public folder management by using the Exchange Management Console
New public folder features
Mailbox management improvements
Ability to import and export mailbox by using .pst files
Changes to Messaging Records Management (MRM)
New performance monitor counters for online database defragmentation
For more information about the Mailbox server role improvements in Exchange 2007 SP1, see New Mailbox Features in Exchange 2007 SP1.
High Availability:
Standby continuous replication
Support for Windows Server 2008
Support for multi-subnet failover clusters
Support for Dynamic Host Configuration Protocol (DHCP) IPv4
Support for IPv6
New quorum models (disk and file share witness)
Continuous replication (log shipping and seeding) over redundant cluster networks in a cluster continuous replication environment
Reporting and monitoring improvements
Performance improvements
Transport dumpster improvements
Exchange Management Console improvements
For more information about the high availability features in Exchange 2007 SP1, see New High Availability Features in Exchange 2007 SP1.
Unified Messaging Server Role Improvements.
Development Improvements
Exchange 2007 SP1 introduces several enhancements to the Exchange API set. The most significant of those changes are to the Exchange Web Services.
Exchange Web Services
Exchange 2007 SP1 introduces the following new functionality and improvements to the Exchange Web Services API. The following list identifies functionality now available in Exchange 2007 SP1:
Support for public folder access. Public folders can now be created, deleted, edited, and synchronized by using the Exchange Web Services.
Improved delegate access.
Delegate management.
Item identifier translation between identifier formats.
Folder level permissions.
Proxy to the best Client Access server.
For more information about Microsoft Exchange development and enhancements made to the Microsoft Exchange APIs, visit the Exchange Server Developer Center.
For More Information
For more information about each server role that is included in Exchange 2007, see the following topics: | https://docs.microsoft.com/en-us/previous-versions/office/exchange-server-2007/bb676323(v=exchg.80) | 2018-09-18T17:56:22 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.microsoft.com |
Contacting BigPanda From Inside The Application
When you log in to BigPanda, the Messenger button appears in the bottom corner of the screen. Use this feature to send and receive messages directly with the BigPanda team.
Sending Messages
- Click the Messenger button.The message window opens on the right side of the screen.
- Click New Conversation or select an existing conversation.
- Enter your message in the text field at the bottom of the window.
You can attach files by clicking on the paper clip in the lower right. All file types are supported and can be up to 20MB in size.
Click Send:
- The message is sent to the BigPanda team.
The BigPanda team responds within the time specified in your support tier SLA. Refer to your contract for guaranteed response times on support services.
- The response message appears in the conversation and you receive an email.
You can continue to correspond with the BigPanda team from the message window or by responding to the email. The BigPanda team may open a support ticket to get help from additional team members, if necessary. A link to the ticket will be available in the conversation.
Viewing Existing Conversations
You can see any conversations you've had with BigPanda from inside the application by clicking the Messenger button. If you're viewing an existing conversation, click the back arrow
to return to the list of conversations.
Viewing Announcements
BigPanda sometimes sends you a message to announce a new feature or service that is available to you. The announcement can appear across the full screen, as a window in the top right corner, or as a small text popup beside the Messenger button.
- Click the announcement to read it.
- (Optional) Send a message to provide feedback or ask a question.
- Click the X in the top right to close the message window.
The message is saved to your list of conversations.
Using BigPanda Docs and API Reference
BigPanda offers support documentation for common questions and API reference information for developers.
BigPanda Docs––comprehensive reference guides for how to use BigPanda features and functions.
BigPanda API Reference––REST API structures, example code, JSON objects, and parameters.
You can search the Docs and API Reference together or filter results to see only one or the other.
Opening a Ticket
You can submit a question or request directly to our technical support team.
- From BigPanda Docs and FAQs, click the Help button in the bottom right.
- Follow the direct link.
- Send an email to [email protected].
View your open tickets by logging in with the email address you used to contact BigPanda. | https://docs.bigpanda.io/docs/getting-help | 2018-09-18T18:13:57 | CC-MAIN-2018-39 | 1537267155634.45 | [array(['https://bigpanda.io/docs/download/attachments/23003722/Messenger.png?version=1&modificationDate=1479234211536&api=v2',
None], dtype=object) ] | docs.bigpanda.io |
Managing Avatars on the Platform
This topic includes information about managing avatars on the platform, including:
- Resources that Have Avatars on the Platform
- Supported File Types and File Size
- Working With Resource Images
- Managing Caching for Avatars
- Usage Note: Difference Between getAvatar and getDefaultAvatar
- Process Flow: Example
Resources that Have Avatars on the Platform
The platform supports upload of an image for use as an avatar for the following resource types:
- APIs
- Apps
- Groups
- Users
Supported File Types and File Size
Supported image files must be no more than 4MB in size, and must be one of these supported image types:
- JPG
- GIF
- TIFF
- BMP
- PNG
Working With Resource Images
The Dropbox service provides operations for uploading images to the platform. Working with images might include such activities as uploading an image, cropping it, and saving the cropped portion as the avatar for a resource.
Below is a suggested implementation that uses several of the Dropbox operations, in sequence, to complete these tasks.
The services that support managing these various resources include operations that allow you to add, update, or delete avatars or get avatar information:
- DELETE /api/users/{UserID}/picture (Users only; deletes an avatar. For APIs and Apps services, you cannot delete an avatar, only replace it).
- GET {service}/{ID}/avatars/{AvatarVersionID}.png
- GET {service}/{ID}/avatar: Retrieves the default avatar image for the specified resource.
Managing Caching for Avatars
Because many different resources, such as APIs, apps, and users, can have a different avatar, it's possible that a platform resource, such as an API's Board page with many comments and notifications, might include many different images. For maximum efficiency, avatars can be cached.
Whenever the avatar for a resource is changed, the URL is updated. An updated image will always have a different URL. This means that you can cache avatars indefinitely without risk that you might be referencing an outdated avatar.
Usage Note: Difference Between getAvatar and getDefaultAvatar
There are two operations available for retrieving the avatar associated with a resource, with one key difference between them:
- GET /api/{service}/{ID}/avatar (getDefaultAvatar): This operation simply returns the avatar for the resource, such as app, API, or user.
- GET {service}/{ID}/avatars/{AvatarVersionID}.png (getAvatar): Each time a new avatar is set up for a resource, the platform assigns an AvatarVersionID. If the avatar is changed, the new avatar has a new AvatarVersionID. By specifying the AvatarVersionID you can retrieve the avatar for a resource even if it isn't the current avatar.
Process Flow: Example
Let's say a user, Jane Mead, adds an avatar to her user profile. She then decides to change the image; she removes the existing avatar and adds a different one. This process flow uses the following operations:
- User uploads an image from disk: POST /api/dropbox/pictures. Image is saved to database and PictureID is returned.
- Image is retrieved for user to crop: GET /api/dropbox/pictures/{pictureId}.
- User crops picture: PUT /api/dropbox/pictures/{pictureId}. Cropped image is saved to database and PictureID is returned (same PictureID).
- Cropped image is retrieved for use: GET /api/users/UserID/avatar.
- User record is updated: PUT /api/users/{UserID}.
- Updated user record is retrieved: GET /api/users/{UserID}.
- User deletes avatar: DELETE {service}/{ID}/picture.
- User record is updated: PUT /api/users/{UserID}.
- Updated user record is retrieved: GET /api/users/{UserID}.
- User uploads a second image from disk: steps 1-6 above are repeated. | http://docs.akana.com/cm/api/aaref/Ref_ManagingAvatarsOnThePortal.htm | 2018-09-18T18:36:26 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.akana.com |
Selecting an AI Navigation Type
Each AI agent needs to have a navigation type assigned, either animate (human-based) or inanimate (vehicle-based). The following AI agent properties are relevant from a navigation perspective:
AgentType - MediumSizedCharacters or VehicleMedium
voxelSize - 0.125m x 0.125m x 0.125m minimum
radius - agent radius, in voxels
climbableHeight - maximum climbable height of maximum slope, in voxels
maxWaterHeight - maximum walkable water depth, in voxels
To assign a navigation type for an AI agent
In Lumberyard Editor, click Tools, Other, DataBase View.
On the Entity Library tab, click the Load Library button and select your asset file.
Under Class Properties pane, for Navigation Type, make a selection. This sets the navigation type for all AI agents.
In Rollup Bar, under Objects, AI, NavigationArea, NavigationArea Params, make a selection. | https://docs.aws.amazon.com/lumberyard/latest/userguide/ai-nav-agent-types.html | 2018-09-18T17:46:10 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.aws.amazon.com |
Cloud Canvas Built-In Roles and Policies
You can use the built-in Cloud Canvas roles and policies to manage resource and deployment permissions for your project.
Roles
You can use the AWS::IAM:Role resource to define roles in your
project-template.json or
deployment-access-template.json files. Cloud Canvas Resource Manager defines
the following roles for you:
The configuration file in which you define a role determines the resources to which the role provides access.
You can use the
lmbr_aws command line tool to manage the role definitions in
the
project-template.json and
deployment-access-template.json files. For more information, see Using the Cloud Canvas Command Line to Manage Roles and
Permissions.
Implicit Roles
Some Cloud Canvas custom resources also create roles. For example, when a Lambda function
is
executed, it assumes the role that the
Custom::LambdaConfiguration resource
creates. When API Gateway invokes a Lambda function or accesses other resources, it
assumes the
role that the
Custom::ServiceApi resource creates. Including these custom
resources in a
resource-group-template.json file causes these implicit
roles to be created (and deleted when the resource is deleted). For information on
implicit
role names, see Implicit
Role Mappings.
Managed Policies
You can use
AWS::IAM::ManagedPolicy resources to define permissions that
are shared across any number of roles. Cloud Canvas defines the following managed
policies for
you:
The
ProjectAdmin and
DeploymentAdmin roles are granted the
same permissions as the
ProjectOwner and
DeploymentOwner roles,
minus any permissions specifically denied by the
ProjectAdminRestrictions and
DeploymentAdminRestrictions managed policies, respectively. In effect an,
"admin" is granted all the permissions of an "owner" minus any special actions that
the
"admin" should not be able to perform.
Role Mapping Metadata
The
AbstractRole property in the
Permission metadata object does
not directly specify the actual role that receives the described permission. These
values must
be mapped to actual IAM roles. This makes it possible to setup roles in whatever way
makes
sense for your project. It also removes the need to modify the permissions defined
by
individual resource groups.
The ability to map abstract roles to actual IAM roles is important when you use a cloud gem across multiple projects or from a third party. Cloud gems acquired from a third party might have roles that are different from the roles that you use in your organization. (A cloud gem is a Lumberyard gem that uses the AWS resources defined by a Cloud Canvas Resource Group. For more information, see Cloud Gems.)
The
Custom::AccessControl resource looks for CloudCanvas
RoleMappings metadata on
AWS::IAM::Role resources to determine
which abstract roles map to that physical role. In the following example, the
CustomerSupport abstract role from all resource groups is mapped to the
DevOps physical role.
... "DevOps": { "Type": "AWS::IAM::Role", "Properties": { "Path": { "Fn::Join": [ "", [ "/", { "Ref": "ProjectStack" }, "/", { "Ref": "DeploymentName" }, "/" ]] } }, "Metadata": { "CloudCanvas": { "RoleMappings": [ { "AbstractRole": [ "*.CustomerSupport" ], "Effect": "Allow" } ] } } }, ...
Each Cloud Canvas
RoleMapping metadata object can have the following
properties.
You can use the
lmbr_aws command line tool to manage
RoleMappings metadata on role resource definitions in the
project-template.json and
deployment-access-template.json files. For more information, see Using
the Cloud Canvas Command Line to Manage Roles and Permissions.
Default Role Mappings
Cloud Canvas defines role mappings for the following roles:
Implicit Role Mappings
As mentioned in Implicit
Roles, role mappings
are automatically defined for the implicit roles created by Cloud Canvas resources
like
Custom::LambdaConfiguration. These mappings are only used with permission
metadata in the same
resource-group-template.json file as the custom
resource that creates the role. The name of the abstract role used in permission metadata
to
reference an implicit role depends on the custom resource type. | https://docs.aws.amazon.com/lumberyard/latest/userguide/cloud-canvas-built-in-roles-and-policies.html | 2018-09-18T17:42:24 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.aws.amazon.com |
Community Advertising¶:
- Various Python conferences from PyCon to smaller regional conferences
- Many topic or framework specific conferences such as Djangocon US and Sustain
- Python Software Foundation
- Mozilla
- Beeware - tools and libraries for building native apps in Python
- Godot - an open source game engine
- Kiwi TCMS - an open source test case management system
- World Possible - an education non-profit
- Write the Docs - a series of events for documentatarians
Get in touch. We can help!¶
If you run a conference, non-profit, or a funding strapped open source project, we would love to help you get the word out. Make sure you qualify for our community ads and send us an email to be considered for the program.
If you have any feedback about our community advertising program, we’d love to hear from you too. | http://blog.readthedocs.com/community-ads-2018/ | 2018-09-18T18:22:09 | CC-MAIN-2018-39 | 1537267155634.45 | [] | blog.readthedocs.com |
aquarest spas as one of the first manufacturers hot tubs we design our to maximize personal relaxation family entertainment and hydrotherapy home depot.
Related Post
Tv Stand On Wheels Boy Play Tents Toy Race Track Exerpeutic Aero Air Ellipticals Adidas Ladies Golf Shoes Large Building Blocks Trifold Sleeping Mats 3 Ring Binders With Zipper Gorilla Playset Stamina In Motion Elliptical Trainer Remote Control Toy Trucks Coleman Canopy Battery Powered Rideon Seventh Generation Ultra Power Plus Chess And Checkers Set | http://top-docs.co/aquarest-spas/aquarest-spas-as-one-of-the-first-manufacturers-hot-tubs-we-design-our-to-maximize-personal-relaxation-family-entertainment-and-hydrotherapy-home-depot/ | 2018-09-18T17:52:57 | CC-MAIN-2018-39 | 1537267155634.45 | [array(['http://top-docs.co/wp-content/uploads/2018/05/aquarest-spas-as-one-of-the-first-manufacturers-hot-tubs-we-design-our-to-maximize-personal-relaxation-family-entertainment-and-hydrotherapy-home-depot.jpg',
'aquarest spas as one of the first manufacturers hot tubs we design our to maximize personal relaxation family entertainment and hydrotherapy home depot aquarest spas as one of the first manufacturers hot tubs we design our to maximize personal relaxation family entertainment and hydrotherapy home depot'],
dtype=object) ] | top-docs.co |
h1. How do I back up Textpattern? [todo]
Currently, there is no built-in backup or export function in Textpattern. You can use several tools designed for the purpose:
phpMyAdmin is supplied by most web hosts. The phpMyAdmin FAQ has a brief “explanation”: of how to back up and restore.
mysqldump generates SQL backups that can be restored using phpMyAdmin or with the mysql command-line client. See the “MySQL manual”: for details.
rss_admin_db_manager is a Textpattern plugin that includes backup and restore functions. Read more “here”:.
If your server runs Unix, and has a cron function, here’s a sample crontab entry for an automatic weekly backup:
bq.
If you don’t know what a crontab is, or how to test one, we recommend instead using one of the other options listed above.
AutoMySQLBackup is an open source Unix shell script which automates the process of rotating daily, weekly, and monthly “MySQL backups”:.
The remark regarding crontabs applies here as well. | https://docs.textpattern.io/faqs/how-do-i-back-up-textpattern | 2017-11-17T23:02:58 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.textpattern.io |
Product: Carrara 8 Pro
Product Code: ca_ap38f
DAZ Original: Yes
Created by: DAZ 3D
Build: 8.0
Version Released: May 16, 2010
Please report any issues besides the ones listed above to the DAZ 3D Bug Database. When submitting more than one issue, please submit one report for each issue. For multiple issues, you do not need to include all of your hardware configuration information or company contact information. The DAZ 3D Bug Database requires a separate account than that of the DAZ 3D website.
Thank you for helping to make Carrara a powerful and enjoyable tool!
Visit our site for further technical support questions or concerns:
Thank you and enjoy your new products!
DAZ Productions Technical Support
12637 South 265 West #300
Draper, UT 84020
Phone:(801) 495-1777
TOLL-FREE 1-800-267-5170 | http://docs.daz3d.com/doku.php/artzone/azproduct/10608 | 2017-11-17T23:07:24 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.daz3d.com |
Advanced Features¶
Byte Pair Encoding¶
This is explained in the machine translation tutorial.
Dropout¶
Neural networks with a large number of parameters have a serious problem with an overfitting. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. But during the test time, the dropout is turned off. More information in
If you want to enable dropout on an encoder or on the decoder, you can simply add dropout_keep_prob to the particular section:
[encoder] class=encoders.recurrent.SentenceEncoder dropout_keep_prob=0.8 ...
or:
[decoder] class=decoders.decoder.Decoder dropout_keep_prob=0.8 ...
Pervasive Dropout¶
Detailed information in
If you want allow dropout on the recurrent layer of your encoder, you can add use_pervasive_dropout parameter into it and then the dropout probability will be used:
[encoder] class=encoders.recurrent.SentenceEncoder dropout_keep_prob=0.8 use_pervasive_dropout=True ... | http://neural-monkey.readthedocs.io/en/latest/features.html | 2017-11-17T22:43:38 | CC-MAIN-2017-47 | 1510934804019.50 | [] | neural-monkey.readthedocs.io |
sequence_annotations¶
This protocol defines annotations on GA4GH genomic sequences It includes two types of annotations: continuous and discrete hierarchical.
The discrete hierarchical annotations (called Features) are derived from the Sequence Ontology (SO) and GFF3 work
The goal is to be able to store annotations using the GFF3 and SO conceptual model, although there is not necessarly a one-to-one mapping in Protobuf messages to GFF3 records.
The minimum requirement is to be able to accurately represent the current state of the art annotation data and the full SO model. Feature is the core generic record which corresponds to the a GFF3 record.
The continuous data (called Continuous) represents numerical data associated with each base position, such as is stored in BigWig, Wiggle or BedGraph Formats | https://ga4gh-schemas.readthedocs.io/en/latest/schemas/sequence_annotations.proto.html | 2017-11-17T22:46:45 | CC-MAIN-2017-47 | 1510934804019.50 | [] | ga4gh-schemas.readthedocs.io |
The Sixcycle coach dashboard shows a Rhythm metric for each athlete.
The volume bar shows the percentage of recently applied training volume that the athlete has executed. The arrow on the right shows the athlete's current trend, be it increasing, maintaining or decreasing.
In the example image above, the athlete has executed approximately 75% of her recent training volume but is on a downwards trend in the near term.
For those familiar with the Stress & Form chart shown on on the athlete Calendar and Charting pages, you will be interested to learn that the volume bar is correlated to the 6wSB/Fitness metric, whereas the trend arrow is correlated to the 1xSB/Fatigue metric. This means that the Rhythm metric shows both the relationship between the athlete's medium and near term training rhythm, along with their overall Fitness and Fatigue levels. | http://docs.sixcycle.com/coaching-tools/the-rhythm-metric | 2017-11-17T23:12:20 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.sixcycle.com |
RadChartView: Palettes
In this article, you will learn to use the predefined palettes in RadChartView for Android and also how to create custom palettes.
Default Palette
In order to provide the default styles for its series, RadChartView uses palettes. Each palette defines a set of styles for the different series and axes types. Here's a demonstration for some of the colors which are provided by the default palette:
Creating Custom Palettes
All chart objects (RadPieChartView and RadCartesianChartView) have a method setPalette(ChartPalette) which allows you to set a new palette that you have defined. Here's an example of setting a custom palette to an instance of RadCartesianChartView. We will start with the chart that we have created in the Bar Series example:
// Use a copy of the existing palette in order to avoid redefining the whole palette. // We are only interested in changing the color of the bar series. ChartPalette customPalette = chartView.Palette.ClonePalette(); // Get the entry for the first bar series. PaletteEntry barEntry = customPalette.GetEntry(ChartPalette.BarFamily); barEntry.Fill = Color.Green; // Also if there are more than one bar series we can get the entry // for any of them with their index in the collection. // Edit the entry for the second bar series. barEntry = customPalette.GetEntry(ChartPalette.BarFamily, 1); barEntry.Fill = Color.Cyan; chartView.Palette = customPalette;
It is important to note that the chart palette will override all settings set manually by the developer. To prevent the chart palette from overriding the manual settings, developers must call the chartElement.setCanApplyPalette(false). This prevents the palette from being applied to the given chart element (axis, series etc.) and the developer can take full control over the visual customization.
Custom Palette Family
Finally, developers can take advantage of custom palette families. Each chart element has a setPaletteFamily(String) method that can be used to set a custom family. This is useful when we want to style different elements in the same way. For example in a scenario where we have multiple axes, each series can be paired with its relevant axis by using the same color. To apply the fill or stroke to both the axis and the series developers must call series.setPaletteFamily("CustomFamily") and axis.setPaletteFamily("CustomFamily"). | https://docs.telerik.com/devtools/xamarin/nativecontrols/android/chart/chart-palettes.html | 2017-11-17T23:17:51 | CC-MAIN-2017-47 | 1510934804019.50 | [array(['images/chart-palettes-1.png',
'Demo of Palettes in the chart. TelerikUI-Chart-Palettes'],
dtype=object) ] | docs.telerik.com |
As an account Administrator, you can keep track of board engagement and identify areas of opportunity. Quickly view and download reports of your board's activity for meetings, people, polls, and goals to share with staff and board members.
Visit our Reports Overview Help Article to start using Reports today!
Our Reports Feature includes:
Activity Snapshot
People Reports
Meetings Reports
Polls Reports
Goals Reports
Each report can be viewed or downloaded organization-wide or by group. Slice reports by 30 days, 90 days, year and all time intervals.
Organizations on the Essentials plan can access reports for the past 30 or 90 days, and organizations on the Professional or Enterprise plans can access reports for any time period since joining Boardable (past 30 days, past 90 days, past year, and all time).
Related Articles
Reports Feature Overview & Resource Links
Reports Overview: Account Administrators: report on up-to-date board health, board engagement, and monitor and identify areas of opportunity for improvement. | https://docs.boardable.com/en/articles/3606966-release-notes-reporting | 2022-08-08T08:35:14 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.boardable.com |
The PCA function (Principal Component Analysis (PCA)) outputs a set of principal components, and each principal component is a linear combination of the set of original predictors.
In the PCA example output table pca_health_ev_scaled (see Output), the first-ranked principal component is:
-0.082 * age + 0.387 * bmi + (-0.0935) * bloodpressure + 0.042 * glucose …
The PCA_Plot function uses these coefficients from (the output table of the PCA function) to compute a principal component score for each observation. If the PCA function returned n principal components, the PCA_Plot function calculates n scores for each observation. That is, the n principal components replace the original, larger set of predictors for subsequent analyses.
The version of PCA_Reduce, a component of the PCA function, must be AA 6.21 or later. | https://docs.teradata.com/r/Teradata-Aster-Analytics-Foundation-User-GuideUpdate-2/September-2017/Statistical-Analysis/PCAPlot | 2022-08-08T06:52:23 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.teradata.com |
The FFT function uses a Fast Fourier Transform (FFT) algorithm to compute the discrete Fourier Transform (DFT) of each signal in one or more input table columns. A signal can be either real or complex, and can have one, two, or three dimensions. If the signal length is not a power of two, the function either pads or truncates it to the closest power of two.
The DFT of a time sequence of length N, 0..N-1, is:
X(k) = Xk(N - k)
where k ϵ 0..N-1.
Therefore:
- The FFT of a time sequence of length 1 is the one-element sequence itself.
- The FFT of a time sequence of length 2 has only real values.
- The FFT of a time sequence of length 4 or greater has conjugate symmetry.
To recover the original signals, use the IFFT function. | https://docs.teradata.com/r/Teradata-Aster-Analytics-Foundation-User-GuideUpdate-2/September-2017/Time-Series-Path-and-Attribution-Analysis/Fast-Fourier-Transform-Functions/FFT | 2022-08-08T08:24:15 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.teradata.com |
Risk Matrix used in the RAID Log
A RAID Log is a great tool for managing Project Risk.
RAID stands for Risks, Assumptions, Issues, Dependencies.
BEWARE – it is hard to keep track of these aspects of your project in your head. You can keep track of them using a LOG TEMPLATE for your own safety.
Risks (R in RAID)
Your project risks are the “issues waiting to happen”.
i.e. Ask yourself “What could go wrong?”, and the list of items in answer to that are your risks.
e.g. when planning for a race, an example risk could be “My shoes fail during the race”
RISK SHEET – Excel RAID log & Dashboard Template
Assumptions (A in RAID)
Assumptions are items that you believe to be fine,… but that may not be. Assumptions are aspects of the environment, or of the surroundings to your project that you believe will be in a certain state.
The purpose of tracking assumptions is that you need to be prepared for your assumptions being wrong.
Issues (I in RAID)
Issues are the things which are actually going wrong – i.e. Risks that have been realised, and have turned into issues.
If you were lucky with your Risks identification earlier, you may already be prepared to deal with the issues 🙂
Dependencies (D in RAID)
Dependencies are items being delivered- or supplied- from elsewhere, and that may not be directly in your control.
i.e. in order for your project to deliver, your dependencies must be present / delivered / supported.
Dependencies are quite frequently what cause project failure – track these carefully!
RAID Log Template
This Excel Template is a handy format which allows you to track your RAID items, their status, and assign them to owners.
Some examples templates in the “Risk” area
Further Reading about Raid Log:
RAID logs are often called Risk Registers. Read about Risk Registers here on WIkipedia. | https://business-docs.co.uk/blog/raid-log-manage-risk/ | 2021-02-24T21:07:20 | CC-MAIN-2021-10 | 1614178347321.0 | [array(['http://i17yj3r7slj2hgs3x244uey9z-wpengine.netdna-ssl.com/wp-content/uploads/edd/2016/04/BDUK-33-RAID-LOG-15-RISK-MATRIX-286x300.png',
'Risk Heatmap Score Matrix Risk Matrix used in the RAID Log'],
dtype=object)
array(['http://i17yj3r7slj2hgs3x244uey9z-wpengine.netdna-ssl.com/wp-content/uploads/edd/2016/04/BDUK-33-RAID-LOG-15-RISKS-300x142.png',
'RAID Log RISK SHEET - Excel RAID log & Dashboard Template'],
dtype=object) ] | business-docs.co.uk |
Optimizing queries using partition pruning
When predicate push-down optimization is not applicable—for example, if all stripes contain records that match the predicate condition—a query with a WHERE clause might need to read the entire data set. This becomes a bottleneck over a large table. Partition pruning is another optimization method; it exploits query semantics to avoid reading large amounts of data unnecessarily.
Partition pruning is possible when data within a table is split across multiple logical partitions. Each partition corresponds to a particular value of a partition column and is stored as a subdirectory within the table root directory on HDFS. Where applicable, only the required partitions (subdirectories) of a table are queried, thereby avoiding unnecessary I/O.
Spark supports saving data in a partitioned layout seamlessly, through the partitionBy method available during data source write operations. To partition the "people" table by the “age” column, you can use the following command:
people.write.format("orc").partitionBy("age").save("peoplePartitioned")
As a result, records are automatically partitioned by the age field and then saved into
different directories: for example,
peoplePartitioned/age=1/,
peoplePartitioned/age=2/, and so on.
After partitioning the data, subsequent queries can omit large amounts of I/O when the
partition column is referenced in predicates. For example, the following query automatically
locates and loads the file under
peoplePartitioned/age=20/and omits all
others:
val peoplePartitioned = spark.read.format("orc").load("peoplePartitioned") peoplePartitioned.createOrReplaceTempView("peoplePartitioned") spark.sql("SELECT * FROM peoplePartitioned WHERE age = 20") | https://docs.cloudera.com/runtime/7.2.7/developing-spark-applications/topics/spark-optimizing-queries-partition-pruning.html | 2021-02-24T21:07:37 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.cloudera.com |
Step 1: Access using a web browser (Chrome, Mozilla Firefox, Safari, etc.) and log in using your Kinderpedia credentials.
Step 2: Go to "Video Conferences" (on the left side of the page) and select the green "Activate Video Meetings" button.
Step 3: After you click on "Activate video meetings, you will receive an email to activate the account at the email address you entered into the Kinderpedia account, and the status on the "Kinderpedia"> "Video Conferences" page will be updated to "Account status: Waiting".
Step 4:.
Step 5: On the Zoom account activation page you will see 3 options: "Sign In With Google", "Sign In With Facebook" and "Sign Up With a Password".
From here, you must select "Sign Up with a Password".
Step 6: Enter a password for the Zoom account, then click on the orange button "Continue".Now, the Zoom account is successfully created and you can return to the Kinderpedia page.
On the Kinderpedia page, if you select "Video Conferences", you will notice that the status has been updated: "Account Status: Account Activated".
Also in the "Video Conferences" section, you will see 3 buttons (from left to right): The first one will send you to a tutorial on "How do I add an activity?", The second one to a tutorial on "How to plan an activity", and using the the third button called "Plan Now", you can schedule a video conference. If you have any questions about how to add and plan an activity, use the tutorials mentioned earlier.
Step 7: Schedule the video conference
To schedule a video conference go to the "Activities" module on the left side of the page or click on "Plan now" from the "Video Conferences" module (see the picture above). Here, you have to select the activity for which you want to schedule video conferencing by clicking on it. In the example below I will click on the activity called "Arts & Crafts".
After you click on the activity, you will see a button called "Schedule a meeting", click on that button to schedule the conference.
After you click on "Schedule a meeting", that button will change to "Start meeting". To enter the conference, click on "Start meeting". A new window will open in your browser to launch the Zoom application. If you do not have Zoom already installed, select "download & run Zoom" (if you have Zoom already installed, skip this step because the application will run automatically) and install the application. After you install the application, it will run and enter the conference automatically.
For more assistance regarding the Zoom application, you can access the following link:
These were all the necessary steps. For a better clarification, we also have a video tutorial below. | https://docs.kinderpedia.co/en/articles/3806823-i-am-a-teacher-manager-how-do-i-connect-my-kinderpedia-account-to-the-zoom-account-and-how-do-i-schedule-a-conference | 2021-02-24T20:53:23 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.kinderpedia.co |
How to display values from custom managed properties in search results - option 1
This is a blog post in the series "How to change the way search results are displayed in SharePoint Server 2013." | https://docs.microsoft.com/en-us/archive/blogs/tothesharepoint/how-to-display-values-from-custom-managed-properties-in-search-results-option-1 | 2021-02-24T20:58:49 | CC-MAIN-2021-10 | 1614178347321.0 | [array(['https://msdnshared.blob.core.windows.net/media/TNBlogsFS/prod.evol.blogs.technet.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/66/12/7103.SearchResultListItem.png',
'Values of custom properties displayed in search results'],
dtype=object) ] | docs.microsoft.com |
You can restart the vRealize Suite Lifecycle Manager server immediately or schedule weekly server restarts.
Procedure
- Click Settings and click the System Settings tab.
- To restart the server immediately, click RESTART SERVER.
- To schedule a weekly server restart, select Schedule a restart and select the day of the week and time for the weekly restart.
- Click SAVE. | https://docs.vmware.com/en/vRealize-Suite/2017/com.vmware.vrsuite.lcm.13.doc/GUID-9C403DA4-92CE-4D00-8055-8541D6F7CBCE.html | 2021-02-24T21:28:56 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.vmware.com |
How to display values from custom managed properties in search results – option 2
This | https://docs.microsoft.com/en-us/archive/blogs/tothesharepoint/how-to-display-values-from-custom-managed-properties-in-search-results-option-2 | 2021-02-24T20:57:53 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.microsoft.com |
Validation
This page provides solutions to common issues you may encounter while implementing the client-side validation.
Common Validation Issues
Validation Tooltips Are Shown in Widget Wrappers When Using the Validator
By default, the Tooltip is added right after the input so that if the input is used for a widget, the Tooltip is added inside the wrapper element and is not displayed correctly.
Solution
Customize the Tooltip position by using either of the following approaches:
Use the
ValidationMessageor
ValidationMessageForhelpers for the property.
@Html.Kendo().NumericTextBoxFor(model => model.UnitPrice) @Html.ValidationMessageFor(model => model.UnitPrice)
Use the approach demonstrated in the introductory article on the Kendo UI Validator to add a placeholder.
Widgets Are Hidden after Postbacks When Using jQuery Validation
If the client-side validation does not prevent the form to be posted and the server-side validation fails for a property, the
input-validation-error class is added to the input. For styling purposes, custom classes assigned to the inputs are copied to the wrapper element and because all elements with the error class will be hidden on validation, the widget will be hidden too.
Solution
To avoid this behavior, either implement a client-side validation for the rule that caused the validation to fail on the server, or remove the class from the wrapper elements after the initialization of the widgets.
@using (Html.BeginForm()) { //omitted for brevity } <script type="text/javascript"> $(function () { $(".k-widget").removeClass("input-validation-error"); }); </script>
Globalized Dates and Numbers Are Not Recognized As Valid When Using the Validator
The Kendo UI Validator uses the current Kendo UI culture to determine whether a value is in a valid format.
Solution
In order for the values to be recognized as valid, use the same culture on the client and on the server as described in the article on globalization.
If the above solution is not feasible, because a custom date format is used, then the build-in
mvcdate rule that comes from
kendo.aspnetmvc.min.js needs to be overridden.
<script src="../kendo/js/kendo.aspnetmvc.min.js"></script> <script> kendo.ui.validator.rules.mvcdate = function (input) { //use the custom date format here //kendo.parseDate - return input.val() === "" || kendo.parseDate(input.val(), "dd/MM/yyyy") !== null; } </script>
Globalized Dates and Numbers Are Not Recognized As Valid When Using jQuery Validation
The jQuery validation does not support globalized dates and numbers.
Solution
In order for the values to be recognized as valid when using a non-default culture, override the Validator date and number methods.
jQuery.extend(jQuery.validator.methods, { date: function (value, element) { return this.optional(element) || kendo.parseDate(value) != null; }, number: function (value, element) { return this.optional(element) || kendo.parseFloat(value) != null; } });
See Also
- Common
- Common Issues in Kendo UI
- JavaScript Errors
- Performance Issues
- Scheduler
- Common Issues in Kendo UI Upload
- Common Issues Related to Styling, Appearance, and Rendering | https://docs.telerik.com/aspnet-mvc/troubleshoot/troubleshooting-validation | 2021-02-24T21:41:25 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.telerik.com |
Parameters [Builder]¶
Please consider the following:
- Python does not recognize operating system environment variables, please use full paths in the parameters file (no
$HOMEetc).
- These parameters determine how scenery objects are generated offline as described in chapter Scenery Generation.
- All decimals need to be with “.” — i.e. local specific decimal separators like “,” are not accepted.
- Unless specified otherwise then all length is in metres and areas are in square metres.
- You do not have to specify all parameters in your
params.pyfile. Actually it is better only to specify those parameters, which you want to actively control — the rest just gets the defaults.
View a List of Parameters¶
The final truth about parameters is stored in
parameters.py — unfortunately the content in this chapter of the manual might be out of date (including default values).
It might be easiest to read
parameters.py directly. Python code is easy to read also for non-programmers. Otherwise you can run the following to see a listing:
/usr/bin/python3 /home/vanosten/develop_vcs/osm2city/parameters.py -d
If you want to see a listing of the actual parameters used during scenery generation (i.e. a combination of the defaults with the overridden values in your
params.py file, then you can run the following command:
/usr/bin/python3 --file /home/pingu/development/osm2city/parameters.py -f LSZS/params.py
Important Parameters¶
Minimal Set¶
See also Setting a Minimal Set of Parameters.
Buildings¶
Diverse Parameters¶
Parameters which influence the number of buildings from OSM taken to output.
In order to reduce the total number of nodes of the buildings mesh and thereby reducing both disk space volume and rendering demands as well as to simplify the rendering of roofs, the geometry of buildings is simplified as follows:
- Only if not part of a building parent
- Only if no inner circles
- Only if a multiple of 4 nodes gets reduced and always 4 neighbouring points are removed at the same time (e.g. something that looks like a balcony from above, but can also point inwards into the building)
- If points get removed, which are also part of a neighbour building, then the simplification is not accepted.
- The tolerance of the below parameters is respected.
Level of Details of Buildings¶
The more buildings you have in LOD detailed, the less resources for rendering are used. However you might find it “irritating” the more buildings suddenly appear. Experiment with the settings in FlightGear, see also Adjusting Visibility of Scenery Objects.
Building Levels and Height¶
In OSM the height of a building can be described using the following keys:
building:height
roof:height
height(the total of building_height and roof_height, but most often used alone)
building:levels
roof:levels(not used in osm2city)
levels
Most often none of these features are tagged and then the number of levels are determined based on the settlement type and the corresponding
BUILDING_NUMBER_LEVELS_* parameter. The height is always calculated as the product of the number of levels times parameter
BUILDING_LEVEL_HEIGHT_*. If only the height is given, then the levels are calculated by simple rounding — and this level value is then used for calculating the height. The reason for this is that some uniformity in building heights/values is normally observed in the real world — and because the generic textures used have a defined height per level.
An exception to this is made for building parts in a relationship (Simple 3D buildings), as the heights in this case might be necessary to be correct (e.g. a dome on a church).
There is some randomness about the number of levels within the same settlement type, which is determined by using a dictionary of level=ratio pairs, like:
BUILDING_NUMBER_LEVELS_CENTRE = {4: 0.2, 5: 0.7, 6: 0.1}
meaning that there is a ratio 0f 0.2 for 4 levels, a ratio of 0.7 for 5 levels and a ratio of 0.1 for 6 levels. I.e. the keys are integers for the number of levels and the values are the ratio, where the sum of ratios must be 1.
Visibility Underground Buildings¶
There seem to be challenges with consistency etc. in OSM in terms of deciding, whether something is under the ground and therefore
osm2city should not render it.
According to findings in the FG Forum and OSM there are different tags used, some of them better suited than others according to OSM documentation:
location=undergroundor
location=indoorseems to be a correct way (key:location)
indoorhas also some usage (key:indoor)
tunnelis according to taginfo (taginfo tunnel combinations) used max 1000 times together with buildings
levelwith negative values. NB: not to be confused with
levelsand
building:levels(see chapter Building Levels and Height)
layeris not to be used to determine visibility (key:layer)
European Style Inner Cities (Experimental)¶
Given the available textures in
osm2city-data and the in general limited tagging of buildings in OSM as of 201x, European cities look wrong, because there are too many modern facades used and too many flat roofs.
The following parameters try to “fix” this by adding OSM-tags
roof:colour=red and
roof:shape=gabled to all those buildings, which do not have parents or pseudo-parents (i.e. nor relationships or parts in OSM), but which share node references with other buildings. So typically what is happening in blocks in inner cities in Europe.
Example of using the flag set to True in a part of Prague:
vs. setting it to False (default):
Roofs on Buildings¶
Below you will find quite a lot of parameters deciding what type of roofs should be generated on buildings. To understand the basic concepts, you should understand OSM Simple 3D buildings. With
complex roof below all those roof types, which are not flat/horizontal are meant.
The following parameters decide whether a complex roof should be used on top of a building at all.
Finally the following parameters let you play around with how complex roofs are done.
Overlap Check for Buildings and Roads¶
Overlap checks try to omit overlap of buildings generated based on OSM data with static object as well as shared objects (depending on parameter
OVERLAP_CHECK_CONSIDER_SHARED) in the default scenery (defined by
PATH_TO_SCENERY).
If parameter
PATH_TO_SCENERY_OPT is not None, then also object from that path are considered (e.g. for Project3000).
Examples of overlap objects based on static objects at LSZS (light grey structures at bottom of buildings):
An example of difficult to handle situations is PHNL (Hawaii), where several objects inside the ac-file have rotations. The picture below shows the blocked areas (green) automatically detected. E.g. the large green area top left is a static object for the Admiral Bernard Chick Clarey Bridge. At the PHNL airport the green stripes parallel to the runway should actually be rather large areas covering most of the hangar areas of the airport.
The next image shows the blocked areas by parameter as well as the code to define these areas. There are at least two problems with this: (a) such issues cannot be detected automatically and (b) it is tedious and error prone to define these polygons. This is more a proof of concept and a “hack” for PHNL than anything else.
Remark in the parameters below that for roads only the bridge has been excluded - that is because it has been tested visually (again: tedious and slow).
_admiral_clarey_bridge = [(-157.953, 21.3693), (-157.953, 21.3671), (-157.9356, 21.3690), (-157.9346, 21.3711)] _phnl_airport = [(-157.9015, 21.3354), (-157.9264, 21.3393), (-157.9272, 21.3347), (-157.9459, 21.3393), (-157.9516, 21.3331), (-157.9558, 21.3371), (-157.9589, 21.3373), (-157.9694, 21.3287), (-157.9503, 21.3038), (-157.9048, 21.3040)] OVERLAP_CHECK_EXCLUDE_AREAS_ROADS = [_admiral_clarey_bridge] OVERLAP_CHECK_EXCLUDE_AREAS_BUILDINGS = [_admiral_clarey_bridge, _phnl_airport]
Rectify Buildings¶
Rectifies angles of corners in buildings to 90 degrees as far as possible (configurable). This operation works on existing buildings as mapped in OSM. It corrects human errors during mapping, when angles are not straight 90 degrees (which they are in reality for the major part of corners). I.e. there is no new information added, only existing information corrected.
This operation is mainly used for eye-candy and to allow easier 3-D visualization. It can be left out if you feel that the OSM mappers have done a good job / used good tooling. On the other hand the processing time compared to other operations is negligible.
The following picture shows an example of a rectified building with a more complex layout. The results are more difficult to predict the more corners there are. The red line is the original boundary, the green line the rectified boundary. Green circles are at corners, where the corner’s angle is different from 90 degrees but within a configurable deviation (typically between 5 and 10 degrees). Corners shared with other buildings are not changed by the rectify algorithm (not shown here).
Please note that if you are annoyed with angles in OSM, then you have to rectify them manually in OSM. One way to do that is to use JOSM and related plugins.
Generating Buildings Where OSM is Missing Buildings¶
It is possible to let
osm2city generate buildings, where it is plausible that there in reality would be buildings, but buildings were not mapped in OSM. The following set of parameters make some customisation to specific areas possible. However parts of the processing is rather hard-coded (e.g. the available buildings are defined in code in module
owbb/would_be_buildings.py in function
_read_building_models_library(). Still the results are much better than an empty scene.
No additional buildings are generated inside zones for aerodromes.
A lot of processing is dependent on land-use information (see e.g. Land-use Handling and Land-use Parameters). For a short explanation of the process used see Generate Would-Be Buildings.
In settlement areas an attempt is made to have the same terrace houses or apartment buildings along both sides of a way.
The first set of parameters determines the overall placement heuristics:
The second set of parameters determines the type of residential buildings to use and the distances between the buildings and the street as well as what happens in the backyard. The
width of a building is along the street, the
depth of a building is away from the street (front door to back door).
Finally a set of parameters for industrial buildings:
Linear Objects (Roads, Railways)¶
Parameters for roads, railways and related bridges. One of the challenges to show specific textures based on OSM data is to fit the texture such that it drapes ok on top of the scenery. Therefore several parameters relate to enabling proper draping.
With residuals:
After adjusted MAX_SLOPE_* and POINTS_ON_LINE_DISTANCE_MAX parameters:
Land-Use¶
Land-use data is only used for built-up area in
osm2city. All other land-use is per the scenery in FlightGear. The main use of the land-use information processed is to determine building types, building height etc. for those buildings (often the majority), where this information is lacking and therefore must be obtained by means of heuristics. See Land-use for an overall description.
Complement OSM Land-Use Information¶
These operations complement land-use information from OSM based on some simple heuristics, where there currently are no land-use zones for built-up areas in OSM: If there are clusters of buildings outside of registered OSM land-use zones, then zones are added based on clusters of buildings and buffers around them. The land-use type is based on information of building types, amenities etc. — if available.
On the left side of the picture below the original OSM-data is shown, where there only is one land-use zone (green), but areas with buildings outside of land-use zones as well as several streets without buildings (which from an arial picture actually have lots of buildings — they have just not been mapped in OSM.
On the right side of the picture the pink areas are generated based on building clusters and the yellow zone is from CORINE data.
Generating Areas Where Roads are Lit¶
Land-use information is used to determine which roads are lit during the night (in addition to those roads which in OSM are explicitly tagged as being lit).
The resulting built-up areas are also used for finding city and town areas — another reason why the values should be chosen conservative, i.e. large.
Other Parameters¶
Detailed Features¶
The following parameters determine, whether specific features for procedures
pylons respectively
details will be generated at all.
Database¶
OSM data is read from a PostGIS database. See also OSM Data in Database.
Skipping Specific Buildings and Roads/Railways¶
There might be situations, when you need to skip certain buildings or roads/railways, because e.g. the overlap checking does not work or the OSM features simply do not fit with the FlightGear scenery. Often it should be checked, whether the OSM data really is correct (if not, then please directly update the source in OSM) or the FlightGear scenery data is not correct (if not, then please check, whether source data can be improved, such that future versions of the scenery are more in line with reality and thereby with OSM data).
In order to temporarily exclude certain buildings or roads/railways, you can use parameter
SKIP_LIST. For buildings you can either specify the OSM id or (if available) the value of the
name tag. For roads/railways only the OSM id can be used.
E.g.
SKIP_LIST = ['St. Leodegar im Hof (Hofkirche)', 87220999]
On the other hand side there might be situations, where certain STG-entries should not be checked for overlap checking. For that situation parameter
SKIP_LIST_OVERLAP can be used as a list of
*.ac or
*.xml file names which should not be used for overlap tests
Clipping Region¶
The boundary of a scenery as specified by the parameters boundary command line argument is not necessarily sharp. As described in Getting OpenStreetMap Data it is recommended to use
completeWays=yes, when manipulating/getting OSM data - this happens also to be the case when using the OSM Extended API to retrieve data. However there are no parameters to influence the processing of OSM nodes and OSM ways depending on whether they are inside / outside the boundary or intersecting.
The processing is as follows:
- buildings.py: if the first node is inside the boundary, then the whole building is processed — otherwise not
- roads.py: if not entirely inside then split at boundary, such that the first node is always inside and the last is either inside by default or the first node outside for splitting.
- piers.py: as above for piers
- platforms.py: as above for platforms
- pylons.py
- storage tanks: if the centroid is inside the boundary, then the whole storage tank is processed — otherwise not
- wind turbines and chimneys: no checking because the source data for OSM should already be extracted correctly
- aerial ways: if the first node is inside the boundary, then the whole aerial way is processed — otherwise not (assuming that aerial ways are short)
- power lines and railway overhead lines: as for roads. If the last node was split, then no shared model is placed assuming it is continued in another tile (i.e. optimized for batch processing across tiles) | https://osm2city.readthedocs.io/en/latest/parameters.html | 2021-02-24T19:58:55 | CC-MAIN-2021-10 | 1614178347321.0 | [array(['_images/force_european_true.png',
'_images/force_european_true.png'], dtype=object)
array(['_images/force_european_false.png',
'_images/force_european_false.png'], dtype=object)
array(['_images/lszs_hull_front.png', '_images/lszs_hull_front.png'],
dtype=object)
array(['_images/lszs_hull_back.png', '_images/lszs_hull_back.png'],
dtype=object)
array(['_images/blocked_areas_phnl.png', '_images/blocked_areas_phnl.png'],
dtype=object)
array(['_images/blocked_areas_with_excludes_phnl.png',
'_images/blocked_areas_with_excludes_phnl.png'], dtype=object)
array(['_images/rectify.png', '_images/rectify.png'], dtype=object)
array(['_images/elev_residuals.png', '_images/elev_residuals.png'],
dtype=object)
array(['_images/no_elev_residuals.png', '_images/no_elev_residuals.png'],
dtype=object)
array(['_images/landuse.png', '_images/landuse.png'], dtype=object)] | osm2city.readthedocs.io |
The default access policy set applies to all applications and desktops in your catalog. You can also set access policies for individual applications or desktops, which override the default access policy.
You can configure application policies for desktops and applications from the application configuration page or from the Policies page.
For detailed information on access policies and how they are applied, see the VMware Identity Manager Administration Guide.
Procedure
- To select an access policy for a specific application from the application configuration page, follow these steps.
- In the VMware Identity Manager console, click the tab.
- Click the application.
- Click Edit.Certain fields on the application page are now editable.
- In the Access Policies section, select the access policy for the application.
- Click Save at the top of the page.
- To apply an access policy to one or more applications and desktops from the Policies page, follow these steps.
- In the VMware Identity Manager console, navigate to the page.
- Click a policy to edit or click Add Policy to create a new policy.
- In the Definition page of the wizard, in the Applies to section, select the applications and desktops to which you want to apply the policy.
- In the Applies to section, select the applications to which you want to apply the policy.
- Save your changes. | https://docs.vmware.com/en/VMware-Workspace-ONE-Access/19.03/com.vmware.wsp-resource/GUID-30C64153-F00E-41E6-BF6B-115593D0BF3B.html | 2021-02-24T20:45:37 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.vmware.com |
Determining PATROL security levels
You can secure the data that passes between BMC PATROL components and also restrict unauthorized users from accessing your data by implementing PATROL security. You can select from five security levels when you install PATROL.
Note
Agents, BMC console servers, and BMC consoles must operate at the same security level to communicate with each other. When you install agents, console servers, or consoles that need to communicate with previously installed versions of these components, check the security level of the previously installed components and be sure to install the new ones at the same level.
For more information about implementing and using PATROL security, see the PATROL Security User Guide at PDFs.
To check the security level of a previously installed agent, console server, or console
- From the command line navigate to the path on the computer that you want to check.
- (Windows)
%BMC_ROOT\..\common\security\bin\platform
- (UNIX)
$BMC_ROOT/../common/security/bin/platform
- Run the following command:
esstool policy -a
The security level of the current computer is displayed in the security level field of the output.
Note
If your environment contains a firewall, see Configuring a firewall. | https://docs.bmc.com/docs/PATROL4BSA/82/determining-patrol-security-levels-142508614.html | 2021-02-24T19:56:50 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.bmc.com |
The Denied Requests Log contains data for cases where the number of checkout requests exceeded the number of available licenses in a given month, resulting in an inability for users to check out a license. Denied Requests pages are accessed from the License Statistics UI, as described in Denials, and can be downloaded in the formats described in Downloading License Statistics data.
Denied Requests Log generation works only with the following license managers, and you must meet the requirements given in Denied Requests Log requirements to generate denied request data for these license managers.
- LM-X License Manager
- FLEXlm/FlexNet
- IBM LUM
- Reprise License Manager (RLM)
- Sentinel RMS
Denials are imported only when enabled for the license server. See Importing license server data for more information.
By reviewing the denial log data in the License Statistics UI (or downloaded data; for example, charts created from a downloaded Excel file), you can easily see how many license checkout requests were denied and how often the denials took place.
The Denied Requests Log output is entirely reliant on the contents of the debug log file (for FLEXlm/FlexNet and RLM) or the server's data (IBM LUM). For example, FLEXlm/FlexNet normally overwrites debug logs each time the server is restarted. Therefore, the Denied Requests Log output contains information only for the period of time during which the server ran continuously.
License Statistics cannot control or modify the content, format, or behavior of the debug logs or server data. To learn more about manipulating this data, or for any issues with the content of Denied Requests Log output, please refer to your license server documentation. | https://docs.x-formation.com/display/LICSTAT/Denied+Requests+Log | 2021-02-24T19:54:47 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.x-formation.com |
Script dependencies
Deploy a VM and run code on the VMDeploy a VM and run code on the VM
Since we are working with a virtualized mini computer in CKB VM, there’s nothing stopping us from embedding another VM as a CKB script that runs on CKB VM. In this article we will explore this VM on top of VM path.
Through this method, we can have JavaScript on CKB via duktape, Ruby on CKB via mruby, we can even have Bitcoin Script or EVM on chain by just compiling those VMs and storing them as scripts on CKB. This compatibility ensures CKB VM can both help to preserve legacy code and build a diversified ecosystem.
All languages are treated equal on CKB, giving freedom to blockchain contract developers to build on top of CKB however they feel is best.
To use duktape on CKB, first you need to compile the duktape VM itself into a RISC-V executable binary:
$ git clone $ cd ckb-duktape $ git submodule init $ git submodule update $ sudo docker run --rm -it -v `pwd`:/code nervos/ckb-riscv-gnu-toolchain:xenial bash [email protected]0d31cad7a539:~# cd /code [email protected]0d31cad7a539:/code# make riscv64-unknown-elf-gcc -Os -DCKB_NO_MMU -D__riscv_soft_float -D__riscv_float_abi_soft -Iduktape -Ic -Wall -Werror c/entry.c -c -o build/entry.o riscv64-unknown-elf-gcc -Os -DCKB_NO_MMU -D__riscv_soft_float -D__riscv_float_abi_soft -Iduktape -Ic -Wall -Werror duktape/duktape.c -c -o build/duktape.o riscv64-unknown-elf-gcc build/entry.o build/duktape.o -o build/duktape -lm -Wl,-static -fdata-sections -ffunction-sections -Wl,--gc-sections -Wl,-s [email protected]0d31cad7a539:/code# exit exit $ ls build/duktape build/duktape*
Here we use the ruby SDK to interact with CKB, please refer to the official README for how to set it up. Then deploy the duktape script code in a CKB cell:
pry(main)> duktape_data = File.read("../ckb-duktape/build/duktape") pry(main)> duktape_data.bytesize => 269064 pry(main)> duktape_tx_hash = wallet.send_capacity(wallet.address, CKB::Utils.byte_to_shannon(280000), CKB::Utils.bin_to_hex(duktape_data)) pry(main)> duktape_data_hash = CKB::Blake2b.hexdigest(duktape_data) pry(main)> duktape_cell_dep = CKB::Types::CellDep.new(out_point: CKB::Types::OutPoint.new(tx_hash: duktape_tx_hash, index: 0))
The duktape script code now requires one argument: the JavaScript source you want to execute
pry(main)> duktape_hello_type_script = CKB::Types::Script.new(code_hash: duktape_data_hash, args: CKB::Utils.bin_to_hex("CKB.debug(\"I'm running in JS!\")"))
Notice that with a different argument, you can create a different duktape powered type script for a different use case:
pry(main)> duktape_hello_type_script = CKB::Types::Script.new(code_hash: duktape_data_hash, args: CKB::Utils.bin_to_hex("var a = 1;\nvar b = a + 2;"))
This demonstrates the differences mentioned above on script code vs script: here duktape serves as script code providing a JavaScript engine, while a different script leveraging the duktape script code serves a different function on chain.
Now we can create a cell with the duktape type script attached:
pry(main)> tx = wallet.generate_tx(wallet2.address, CKB::Utils.byte_to_shannon(200)) pry(main)> tx.cell_deps.push(duktape_out_point.dup) pry(main)> tx.outputs.type = duktape_hello_type_script.dup pry(main)> tx.witnesses[0] = "0x" pry(main)> tx = tx.sign(wallet.key, api.compute_transaction_hash(tx)) pry(main)> api.send_transaction(tx) => "0x2e4d3aab4284bc52fc6f07df66e7c8fc0e236916b8a8b8417abb2a2c60824028"
We can see that the script executes successfully and if you have the ckb-script module’s log level set to debug in your ckb.toml file, you will also notice the following log:
2019-07-15 05:59:13.551 +00:00 http.worker8 DEBUG ckb-script script group: c35b9fed5fc0dd6eaef5a918cd7a4e4b77ea93398bece4d4572b67a474874641 DEBUG OUTPUT: I'm running in JS!
Now you have successfully deployed a JavaScript engine on CKB, and have also run JavaScript-based script on CKB! Feel free to try any JavaScript code you want here.
Dynamic linkingDynamic linking
There are two dynamic linking functions implemented in nervosnetwork/ckb-c-stdlib, which are
ckb_dlopen() and
ckb_dlsym().
ckb_dlopen() loads the dynamic library from a cell by its data hash and returns an opaque "handle" for the dynamic library.
ckb_dlsym() takes a "handle" of a dynamic library returned by
ckb_dlopen() and the symbol name, and returns the address where that symbol is loaded into memory.
nervosnetwork/ckb-miscellaneous-scripts has a simple example for using these two functions.
int ckb_dlopen(const uint8_t *dep_cell_data_hash, uint8_t *aligned_addr, size_t aligned_size, void **handle, size_t *consumed_size); void *ckb_dlsym(void *handle, const char *symbol);
How dependencies workHow dependencies work
There are two different dependency fields in the transaction data structure:
cell_deps and
header_deps.
cell_deps allow scripts in the transaction to access (read-only) referenced live cells.
header_deps allow scripts in the transaction to access (read-only) data of referenced past block headers of the blockchain.
Please refer to the CKB Transaction Structure RFC for more details. | http://docs-old.nervos.org/technical-concepts/script-dependencies | 2021-02-24T20:51:47 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs-old.nervos.org |
Types and signatures¶
Rationale¶
As an optimizing compiler, Numba needs to decide on the type of each variable to generate efficient machine code. Python’s standard types are not precise enough for that, so we had to develop our own fine-grained type system.
You will encounter Numba types mainly when trying to inspect the results of Numba’s type inference, for debugging or educational purposes. However, you need to use types explicitly if compiling code ahead-of-time.
Signatures¶
A signature specifies the type of a function. Exactly which kind of signature is allowed depends on the context (AOT or JIT compilation), but signatures always involve some representation of Numba types to specify the concrete types for the function’s arguments and, if required, the function’s return type.
An example function signature would be the string
"f8(i4, i4)"
(or the equivalent
"float64(int32, int32)") which specifies a
function taking two 32-bit integers and returning a double-precision float.
Basic types¶
The most basic types can be expressed through simple expressions. The
symbols below refer to attributes of the main
numba module (so if
you read “boolean”, it means that symbol can be accessed as
numba.boolean).
Many types are available both as a canonical name and a shorthand alias,
following Numpy’s conventions.
Numbers¶
The following table contains the elementary numeric types currently defined by Numba and their aliases.
Arrays¶
The easy way to declare array types is to subscript an elementary type according to the number of dimensions. For example a 1-dimension single-precision array:
>>> numba.float32[:] array(float32, 1d, A)
or a 3-dimension array of the same underlying type:
>>> numba.float32[:, :, :] array(float32, 3d, A)
This syntax defines array types with no particular layout (producing code
that accepts both non-contiguous and contiguous arrays), but you can
specify a particular contiguity by using the
::1 index either at
the beginning or the end of the index specification:
>>> numba.float32[::1] array(float32, 1d, C) >>> numba.float32[:, :, ::1] array(float32, 3d, C) >>> numba.float32[::1, :, :] array(float32, 3d, F))
Advanced types¶
For more advanced declarations, you have to explicitly call helper functions or classes provided by Numba.
Warning
The APIs documented here are not guaranteed to be stable. Unless necessary, it is recommended to let Numba infer argument types by using the signature-less variant of @jit.)
Numpy scalars¶
Instead of using
typeof(), non-trivial scalars such as
structured types can also be constructed programmatically.
numba.
from_dtype(dtype)¶
Create a Numba type corresponding to the given Numpy dtype:
>>> struct_dtype = np.dtype([('row', np.float64), ('col', np.float64)]) >>> ty = numba.from_dtype(struct_dtype) >>> ty Record([('row', '<f8'), ('col', '<f8')]) >>> ty[:, :] unaligned array(Record([('row', '<f8'), ('col', '<f8')]), 2d, A)
- class
numba.types.
NPDatetime(unit)¶
Create a Numba type for Numpy datetimes of the given unit. unit should be a string amongst the codes recognized by Numpy (e.g.
Y,
M,
D, etc.).
- class
numba.types.
NPTimedelta(unit)¶
Create a Numba type for Numpy timedeltas of the given unit. unit should be a string amongst the codes recognized by Numpy (e.g.
Y,
M,
D, etc.).
See also
Numpy datetime units.
Arrays¶
Optional types¶
Type annotations¶
numba.extending.
as_numba_type(py_type)¶
Create a Numba type corresponding to the given Python type annotation.
TypingErroris raised if the type annotation can’t be mapped to a Numba type. This function is meant to be used at statically compile time to evaluate Python type annotations. For runtime checking of Python objects see
typeofabove.
For any numba type,
as_numba_type(nb_type) == nb_type.
>>> numba.extending.as_numba_type(int) int64 >>> import typing # the Python library, not the Numba one >>> numba.extending.as_numba_type(typing.List[float]) ListType[float64] >>> numba.extending.as_numba_type(numba.int32) int32
as_numba_typeis automatically updated to include any
@jitclass.
>>> @jitclass ... class Counter: ... x: int ... ... def __init__(self): ... self.x = 0 ... ... def inc(self): ... old_val = self.x ... self.x += 1 ... return old_val ... >>> numba.extending.as_numba_type(Counter) instance.jitclass.Counter#11bad4278<x:int64>
Currently
as_numba_typeis only used to infer fields for
@jitclass. | https://numba.readthedocs.io/en/stable/reference/types.html | 2021-02-24T20:05:44 | CC-MAIN-2021-10 | 1614178347321.0 | [] | numba.readthedocs.io |
Applications used in this site
What makes us go
Echo Knowledgebase
Great way to Build Documentation for your Business
Best way for me to present my thoughts in a way that is elegant, searchable and organized. I have been using this software since 2017 and their growth and vision with creating the best documentation software has been amazing.
Applications used here on this site are:
- Echo Knowledgebase
- Advanced Search
- Article Rating and Feedback
- Elegant Layouts
- Multiple Knowledge Bases
- Widgets
Asset Cleanup Pro
Plugin, JS and CSS cleanup
This has to be one of my favorite site enhancement and plugin cleanup applications – Period.
Every plugin you install loads its resources on every page. With this plugin you can control what you load, when you load it and how you load it.
This is controllable across the whole site or just on specific pages. | https://docs.itme.guru/applications-used-in-the-site/ | 2021-05-06T03:44:49 | CC-MAIN-2021-21 | 1620243988725.79 | [] | docs.itme.guru |
Table of Contents
Kirkbymoorside Town Council
Planning application consultation during the COVID-19 emergency
The District Council will continue to determine planning applications during the coronavirus outbreak and the current lockdown arrangements. Local Parish and Town Councils will be consulted as normal on all planning applications which are received during this time. Whilst the Local Planning Authority understands that many Local Councils would prefer that the determination of planning applications is held in abeyance at this time, the Government has made it clear that it expects the planning system to continue to function and for planning decisions to be prioritised over other planning related work. All planning applications which are approved during this period will play an important role in supporting local businesses and the local economy once the current restrictions are lifted.
During this COVID-19 emergency period the views of the councillors will be collated and a response agreed using email or telephone exchange.
Members of the public are encouraged to submit their views to the Planning Authority via email [email protected] by the specified closing date for observations.
Current Planning Applications Closing date for observations 22nd April 2020 Planning decision: APPROVED
20/00412/HOUSE | Erection of single storey rear garden room extension | Kings Lea Vivers Place Kirkbymoorside YO62 6EA Closing date for observations 1st June 2020 The date of the next meeting of the Kirkbymoorside Town Council Planning Committee will be determined by government guidance.
Related Documents Draft Minutes of the Planning Committee 16th March 2020
Notice issued by Mrs L Bolland, Clerk to Kirkbymoorside Town Council | https://docs.kirkbymoorsidetowncouncil.gov.uk/doku.php/agendaplanning2020-cv19 | 2021-05-06T03:09:56 | CC-MAIN-2021-21 | 1620243988725.79 | [] | docs.kirkbymoorsidetowncouncil.gov.uk |
Converts flat XML (.xml) files to Gettext PO format, a simple monolingual and single-level XML.
flatxml2po [options] <xml> <po> po2flatxml [options] <po> <xml> [-t <base-xml>]
Where:
Options (flatxml2po):
Options (po2flatxml):
Check flat XML format document to see to which extent the XML format is supported.
This example looks at roundtrip of flat XML translations as well as recovery of existing translations.
First we need to create a set of POT files.:
flatxml2po -P lang/en pot/
All .xml files found in the
lang/en directory are converted to Gettext POT
files and placed in the
pot directory.
If you are translating for the first time then you can skip the next step. If you need to recover your existing translations then we do the following:
flatxml2po -t lang/en lang/zu po-zu/
Using the English XMLflatxml -t lang/en po-zu/ lang/zu
Your translations found in the Zulu PO directory,
po-zu, will be converted
to XML using the files in
lang/en as templates and placing your new
translations in
lang/zu.
To update your translations simply redo the POT creation step and make use of pot2po to bring your translation up-to-date. | http://docs.translatehouse.org/projects/translate-toolkit/en/latest/commands/flatxml2po.html | 2021-05-06T04:20:25 | CC-MAIN-2021-21 | 1620243988725.79 | [] | docs.translatehouse.org |
What is a USD file?
A file with .usd extension is a Universal Scene Description file format that encodes data for the purpose of data interchanging and augmenting between digital content creation applications. Developed by Pixar, USD provides the ability to interchange elemental assets (such as models) or animation. USD enables assembly and organization of any number of 3D scene elements such as virtual sets, scenes, and shots to transmit them from application to application. Some of the applciations that can be open USD files include Pixar Animation Studios USD and NVIDIA Omniverse.
USD File Format
USD files can have binary format (also known as Crate files) or ASCII-backed files. Both these file formats are interchangeable where the references can be linked to .usd assets without changing the sources. USD consists of a set of C++ libraries with Python bindings for scripting.
USD Data Types
The fundamental data types supported by the USD file format are listed in the following table.
USD Example
An example of a USD file in plain ASCII file format is as following.
#usda 1.0 class "_class_Planet" { bool has_life = False } def Xform "SolarSystem" { def "Earth" ( references = @./planet.usda@</Planet> ) { bool has_life = True string color = "blue" } def "Mars" ( references = @./planet.usda@</Planet> ) { string color = "red" } def "Saturn" ( references = @./planet.usda@</Planet> variants = { string rings = "with_rings" } ) { string color = "beige" } }
#usda 1.0 class "_class_Planet" { } def Sphere "Planet" ( inherits = </_class_Planet> kind = "model" variantSets = "rings" variants = { string rings = "none" } ) { variantSet "rings" = { "none" { bool has_rings = False } "with_rings" { bool has_rings = True } } } | https://docs.fileformat.com/3d/usd/ | 2021-05-06T04:34:40 | CC-MAIN-2021-21 | 1620243988725.79 | [] | docs.fileformat.com |
Everyone can make user-based settings themselves. You must click the little arrow next to your name at the top right corner of the screen to do so.
A menu box will open listing all companies whose user you are. Here you can also change the company where work is being done.
The next menu item is registering a new company.
My devices
This page shows all of the devices that the user has used when logging into Envoice.
The list of devices also has a button to log out the device. This is useful if the user for some reason no longer uses the device.
By clicking the button “My devices”, the following view opens:
If the user account has been created by using mobile ID or ID card, an option to create a password shall appear on the page. A password is necessary for logging into the mobile application.
My rules
Each user can create the following personal approval rules for themselves.
Creating an unusual invoice rule
Establish a sum rule for yourself, i.e. a limit above which an invoice needs your attention ("Make it unusual"). An invoice that exceeds the limit will then be displayed in the respective subdivision of the approval view.
Creating an automatic approval rule
You can make the approval of an invoice automatic up to a certain sum or limit: "Make automatic". This can be done generally for all invoices as well as on a per-supplier basis.
Settings
Under personal settings you can change your e-mail address and user password and choose the language in which you wish to use Envoice. | http://docs.envoice.eu/en/articles/1191727-personal-settings | 2021-05-06T05:38:52 | CC-MAIN-2021-21 | 1620243988725.79 | [] | docs.envoice.eu |
This error occurs when the purchaser's bank has prevented the transaction from being authorised and is blocking the card from being charged. It could indicate that the card has been stolen.
If you suspect that the transaction is fraudulent, you may wish to block the user from transacting on your platform.
For more information on how to manage fraud online, refer to this article. | https://docs.assemblypayments.com/en/articles/2037324-why-am-i-getting-a-make-payment-base-credit-card-authorization-failed-pick-up-card-response | 2021-05-06T03:52:26 | CC-MAIN-2021-21 | 1620243988725.79 | [] | docs.assemblypayments.com |
What are direct debits?
A direct debit is a request made to a bank account to take funds. This type of payment method is pulled from a bank account, rather than being pushed like EFT or BPay payments are. In the US a Direct Debit is referred to as "ACH pull".
How do they work with the API?
To initiate a direct debit request on a given bank account, you must first have permission from the account holder in the form of a direct debit authority (DDA). Assembly has an API endpoint /direct_debit_authority to make this process easier.
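For orientation only, a request to that endpoint might look roughly like the curl sketch below; the host, authentication style and field names shown here are assumptions for illustration and must be checked against the Assembly API reference:

# Hypothetical sketch - verify the endpoint, fields and authentication in the API docs.
curl -X POST "https://test.api.promisepay.com/direct_debit_authority" \
  -u "platform_email:api_key" \
  -d "account_id=USERS_BANK_ACCOUNT_ID" \
  -d "amount=10000"   # authorised amount in cents (assumed)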
Which countries are they supported in?
In the United States, ACH payments are restricted to business bank accounts and won't work on personal bank accounts. In Australia and New Zealand, you can perform a direct debit on both business and personal bank accounts.
What is the timeframe for a DD?
In Australia and New Zealand a direct debit will take 3 business days to clear before funds are available to you in Assembly. If a direct debit fails, the time taken to get funds will increase the total payment time in your platform. In such a scenario you may wish to use faster payment methods like credit cards or BPay if available.
See the article on payment processing for more information.
What appears on my customer's bank statement?
Your customers will see something like the following in their bank account after a direct debit has been completed: PromisePay MyWidgets or Assembly MyWidgets, where MyWidgets is the name of your business.
Things to look out for.
When creating direct debit authority requests and attempting to initiate a payment from a bank account, there are a number of things that can cause the payment to fail, including:
Incorrect bank account details. This is the most common error generated by direct debit requests and can be mitigated against in two ways. Firstly, have your users double-check their bank account details before submission to the Assembly API. Secondly, you can verify the bank account using Assembly's penny credit API.
Insufficient funds. At the time of the direct debit request being made, there were not enough funds to complete the transaction. This can happen primarily if there is not enough money in the account provided, but can also happen if multiple direct debit requests were made by multiple organisations on the bank account provided. For example, someone may have their phone bill, internet bill, gas bill and rent all taken from a bank account on the same day. Get your users to confirm they have the funds available before creating the request for payment, and if possible avoid providing goods or services until payment has been made.
Wrong account type. When creating the bank account record, if you pass "checking" when the account should have been "savings" this can cause the request to fail. Make sure your users verify the details of the bank account before creating the record via the API.
Accounts that block direct debit requests. Many online saver accounts offered by banks will not allow direct debits to be performed and such requests will generate an error. To avoid this from happening, make sure that when your users are adding a bank account they are providing an account which can be debited from.
Other banking errors. These type of errors often generate a "refer to customer" error from the person's bank which is an indication that something else has gone wrong with the bank account. Often, it is that an account is not available to be debited from (see directly above) or that for whatever reason the bank has blocked the debit request. In these situations, it is best to have your user contact their bank directly to identify what has gone wrong.
Developer documentation on the direct debits API.
For technical information on using direct debits with the Assembly API, refer to the developer documentation.
Use the following topics to help identify and solve problems that you might encounter when using or administering the App Visibility Manager and Real End User Monitoring Software Edition component products.
Troubleshooting App Visibility Manager
Troubleshooting TrueSight Synthetic Monitor
Troubleshooting Real End User Experience Monitoring Software Edition
Browser problems when using Online Technical Documentation portal
Troubleshooting the Presentation Server and the TrueSight console
Troubleshooting an App Visibility Manager deployment
Add Bugsnag error monitoring to your Symfony applications.
This library supports Symfony versions 2, 3, 4 and 5, running PHP 5.5+.
This guide is for Symfony 4 and 5. You can find information on how to get started with Symfony versions 2 and 3 in the legacy Symfony section.
Add bugsnag/bugsnag-symfony to your composer.json:
$ composer require "bugsnag/bugsnag-symfony:^1.6"
The Bugsnag bundle will automatically be registered in your config/bundles.php file.
To associate your application with a project in your Bugsnag dashboard, you'll need to set your Integration API Key in your config/packages/bugsnag.yaml file:
bugsnag:
    api_key: YOUR-API-KEY-HERE
You can find your API key in Project Settings.
For a list of available options, see the configuration options reference.
In order to manually use the Bugsnag client in any of your controllers you will need to acquire it from the service container. In a controller extending a Symfony Controller:
$this->get('bugsnag')->notifyException(
    new Exception('Example exception!')
);
If a controller extends AbstractController, the service can be acquired via Symfony's dependency injection component, following the information in that guide.
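For example, a minimal sketch of constructor injection in a controller extending AbstractController; this assumes the bundle's Bugsnag\Client service can be autowired (depending on your configuration you may need to define a service alias for it):

<?php

namespace App\Controller;

use Bugsnag\Client;
use Exception;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;

class ExampleController extends AbstractController
{
    private $bugsnag;

    // The Bugsnag client is provided by the service container.
    public function __construct(Client $bugsnag)
    {
        $this->bugsnag = $bugsnag;
    }

    public function index()
    {
        $this->bugsnag->notifyException(new Exception('Example exception!'));
        // ... build and return a Response as usual
    }
}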
After completing installation and basic configuration, unhandled exceptions in your Symfony app will be automatically reported to your Bugsnag dashboard.
Bugsnag will increase the PHP memory limit when your app runs out of memory to ensure events can be delivered.
This is enabled by default as part of the basic configuration steps. To disable this, or change the amount of extra memory Bugsnag allocates, see the memory limit increase configuration option.
This feature relies on Symfony's ErrorHandler component, which was added in Symfony 4.4. Older versions of Symfony use the Debug component, which does not inform Bugsnag of out-of-memory exceptions.
In Symfony 4 and 5, application-wide callbacks should be registered within the boot function of your src/Kernel.php file:

public function boot()
{
    parent::boot();

    // Example callback: attach extra metadata to every report before it is sent.
    $this->container->get('bugsnag')->registerCallback(function ($report) {
        $report->setMetaData([
            'account' => [
                'name' => 'Acme Co.',
                'paying_customer' => true,
            ],
        ]);
    });
}
For the list of existing meters see the tables under the Measurements page of Ceilometer in the Administrator Guide.
Ceilometer is designed to collect measurements from OpenStack services and from other external components. If you would like to add new meters to the currently existing ones, you need to follow the guidelines given in this section.
Three types of meters are defined in Ceilometer: Cumulative (increasing over time, e.g. instance hours), Gauge (discrete items such as floating IPs or image uploads, and fluctuating values such as disk I/O), and Delta (values changing over time, e.g. bandwidth).
When you're about to add a new meter, choose the type from the above list that is applicable.
If you plan on adding meters, please follow the project's meter naming conventions.
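As a rough illustration of what a new notification-based meter can look like, the snippet below sketches a declarative definition in the meters.yaml style used by Ceilometer's notification agent; treat the exact keys and JSONPath expressions as assumptions and check the current reference before use:

metric:
  # Example gauge meter derived from an image.upload notification.
  - name: 'image.size'
    event_type: 'image.upload'
    type: 'gauge'
    unit: 'B'
    volume: $.payload.size
    resource_id: $.payload.id
    project_id: $.payload.owner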
aStatus
Applies To: Windows 10, Windows 7, Windows 8, Windows 8.1, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows Server Technical Preview, Windows Vista
The aStatus structure is an optional array that contains error codes returned by Message Queuing.
An aStatus array is included as a member in the following property structures:
MQQUEUEPROPS (queue properties)
MQMSGPROPS (message properties)
MQMGMTPROPS (queue and computer management properties).
MQQMPROPS (queue manager properties).
MQPRIVATEPROPS (private computer properties)
Position i in an aStatus array is a reported status code of the property whose identifier and value are in position i in the corresponding aPropID and aPropVar arrays.
Message Queuing errors are divided into four categories (by increasing order of severity); the category can be determined by looking at the upper two bits of the error code (success = 00, informational = 01, warning = 10, fatal = 11).
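For instance, a minimal sketch in C of examining the per-property status codes after retrieving queue properties (error handling trimmed for brevity):

#include <windows.h>
#include <mq.h>

void CheckQueueProperties(LPCWSTR wszFormatName)
{
    const DWORD cProp = 2;
    QUEUEPROPID aPropId[2] = { PROPID_Q_LABEL, PROPID_Q_QUOTA };
    MQPROPVARIANT aPropVar[2];
    HRESULT aStatus[2];

    aPropVar[0].vt = VT_NULL;   /* let Message Queuing allocate the label string */
    aPropVar[1].vt = VT_NULL;

    MQQUEUEPROPS props;
    props.cProp    = cProp;
    props.aPropID  = aPropId;
    props.aPropVar = aPropVar;
    props.aStatus  = aStatus;   /* optional, but required to see per-property results */

    HRESULT hr = MQGetQueueProperties(wszFormatName, &props);
    if (FAILED(hr))
        return;

    for (DWORD i = 0; i < cProp; i++)
    {
        /* The upper two bits of each status code give the severity:
           00 = success, 01 = informational, 10 = warning, 11 = fatal. */
        DWORD severity = ((DWORD)aStatus[i]) >> 30;
        if (severity != 0)
        {
            /* Property i was not retrieved exactly as requested. */
        }
    }

    /* If PROPID_Q_LABEL succeeded, free the string Message Queuing allocated. */
    if (SUCCEEDED(aStatus[0]) && aPropVar[0].vt == VT_LPWSTR)
        MQFreeMemory(aPropVar[0].pwszVal);
}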
Requirements
Windows NT/2000/XP: Included in Windows NT 4.0 SP3 and later.
Windows 95/98/Me: Included in Windows 95 and later.
Header: Declared in Mq.h.
See Also
Message Queuing Properties
Message Queuing Structures
aPropID
aPropVar
MQMGMTPROPS
MQMSGPROPS
MQPRIVATEPROPS
MQQMPROPS
MQQUEUEPROPS | https://docs.microsoft.com/en-us/previous-versions/windows/desktop/msmq/ms707055(v=vs.85) | 2021-01-16T03:58:21 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.microsoft.com |
Zivver WebApp
Remove a Zivver account
Introduction
This document explains how you delete your Zivver account. You can do this if you no longer want to use your Zivver account or if you have a new account and no longer wish to use your old Zivver account.
If you delete your Zivver account, all messages and settings will be lost!
Delete your account
- Log in to the WebApp.
- Click Settings on the bottom left of your screen.
- At the bottom, click the DELETE ACCOUNT button.
A confirmation pop-up appears.
- Enter your password.
- Click OK.
Your account has been successfully deleted.
Accounts created by a Zivver administrator cannot be deleted by users. | https://docs.zivver.com/en/user/webapp/references/deleting-your-zivver-account.html | 2021-01-16T01:55:51 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.zivver.com |
Standalone
3DS SDK makes it easy to implement native 3D Secure 2 authentication in a mobile application.
NOTE: This guide is specific for the cases when Open Payment Platform (OPP) is not going to be used for some reason. Otherwise, refer to one of the integration types with MSDK.
3DS SDK provides the following features:
- Collecting and encrypting user's device data
- Performing security checks
- Performing challenge process (including presenting UI and communication with ACS)
Features that are NOT included into the scope of SDK:
- Performing authentication request to the 3DS Server
Requirements
Import libraries
Initialize the 3DS service
Initialization phase includes fetching the actual config data from the server.
Create 3DS transaction
After the shopper has entered card details and clicked Pay, use the 3DS service to create a 3DS transaction for the specific payment brand. Store a reference to the transaction; it will be needed later to initiate the challenge process.
Send authentication parameters
Calling getAuthRequestParams() will encrypt shopper device data and other important information needed for the 3DS Server to authenticate a transaction. It will return a JSON string which should be sent to the Server.
E.g. the Platform expects it as the threeDSecure.deviceInfo parameter in the payment submission request.
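A rough sketch of such a payment submission (server-to-server over the Platform API) is shown below; only the threeDSecure.deviceInfo parameter is taken from the text above, while the host, entityId, card fields and token are placeholders that depend on your integration:

curl https://test.oppwa.com/v1/payments \
  -d "entityId=YOUR_ENTITY_ID" \
  -d "amount=92.00" \
  -d "currency=EUR" \
  -d "paymentBrand=VISA" \
  -d "paymentType=DB" \
  -d "card.number=4200000000000000" \
  -d "card.holder=Jane Jones" \
  -d "card.expiryMonth=05" \
  -d "card.expiryYear=2034" \
  -d "card.cvv=123" \
  --data-urlencode "threeDSecure.deviceInfo=<JSON returned by getAuthRequestParams()>" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"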
Display processing view
It's also required to show an appropriate processing view while communicating with the Server. You can use the processing view provided by the SDK.
Handle authentication response
If the card is enrolled for 3D Secure 2, the Server will return the 3DS authentication status and the client authentication response, which is required for the challenge flow.
E.g. the Platform parameters look like:
<Result name="clientAuthResponse">{"messageType":"AuthResponse","messageVersion":"2.1.0",...}</Result>
<Result name="transactionStatus">C</Result>
Check authentication status
First, check the authentication status. It is a one-character string with the following possible values: Y – authentication successful, N – not authenticated, U – authentication could not be performed, A – attempted processing, C – challenge required, R – authentication rejected.
Depending on the status, start the challenge or finish the checkout process:
Frictionless flow
Frictionless flow means that authentication is done. The payment will be completed or rejected depending on authentication result and system configuration. Request payment status to get the result of the transaction.
Challenge flow
For the challenge flow you will need to pass the clientAuthResponse received from the Server.
After you create subnets for one or more managed clusters or projects as described in Create subnets or Automate multiple subnet creation using SubnetPool, follow the procedure below to create L2 templates for a managed cluster. This procedure contains exemplary L2 templates for the following use cases:
To create an L2 template for a new managed cluster:
Log in to a local machine where your management cluster kubeconfig is located and where kubectl is installed.
Note
The management cluster kubeconfig is created during the last stage of the management cluster bootstrap.
Inspect the existing L2 templates to select the one that fits your deployment:
kubectl --kubeconfig <pathToManagementClusterKubeconfig> \ get l2template -n <ProjectNameForNewManagedCluster>
Create an L2 YAML template specific to your deployment using one of the exemplary templates:
L2 template example with bonds and bridges
L2 template example for automatic multiple subnet creation
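For orientation, a schematic skeleton of such an L2 template is sketched below. It only illustrates the overall shape (metadata labels, the l3Layout section, and a netplan template); the exact apiVersion, field names, and lookup functions must be taken from the exemplary templates and the L2Template API section:

apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  name: example-l2template
  namespace: my-project                  # project of the managed cluster
  labels:
    cluster.sigs.k8s.io/cluster-name: my-cluster
    ipam/DefaultForCluster: "1"          # only on the default template of the cluster
spec:
  l3Layout:
    - subnetName: lcm-nw                 # subnet created earlier for this cluster
      scope: namespace
  npTemplate: |
    version: 2
    ethernets:
      {{nic 0}}:
        dhcp4: false
        addresses:
          - {{ip "0:lcm-nw"}}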
Note
You can create several L2 templates with different configurations to be applied to different nodes of the same cluster. In this case:
- First create the default L2 template for a cluster. It will be used for machines that do not have L2templateSelector. Verify that the unique ipam/DefaultForCluster label is added to the first L2 template of the cluster.
- Set a unique name and add a unique label to the metadata section of each L2 template of the cluster.
- To select a particular L2 template for a machine, use either the L2 template name or label in the L2templateSelector section of the corresponding machine configuration file. If you use an L2 template for only one machine, set name. For a group of machines, set label.
For details about configuration of machines, see Deploy a machine to a specific bare metal host.
Add or edit the mandatory parameters in the new L2 template.
The following tables provide the description of the mandatory parameters and the l3Layout section parameters in the example templates mentioned in the previous step. For more details about the L2Template custom resource (CR), see the L2Template API section.
The following table describes the main lookup functions for an L2 template.
Note
Every subnet referenced in an L2 template can have either a global or namespaced scope. In the latter case, the subnet must exist in the same project where the corresponding cluster and L2 template are located.
Add the L2 template to your management cluster:
kubectl --kubeconfig <pathToManagementClusterKubeconfig> apply -f <pathToL2TemplateYamlFile>
Optional. Further modify the template:
kubectl --kubeconfig <pathToManagementClusterKubeconfig> \ -n <ProjectNameForNewManagedCluster> edit l2template <L2templateName>
Proceed with creating a managed cluster as described in Create a managed cluster. The resulting L2 template will be used to render the netplan configuration for the managed cluster machines.
The workflow of the netplan configuration using an L2 template is as follows:
- The kaas-ipam service uses the data from BareMetalHost, the L2 template, and subnets to generate the netplan configuration for every cluster machine.
- The generated netplan configuration is saved in the status.netconfigV2 section of the IpamHost resource. If the status.l2RenderResult field of the IpamHost resource is OK, the configuration was rendered in the IpamHost resource successfully. Otherwise, the status contains an error message.
- The baremetal-provider service copies data from the status.netconfigV2 of IpamHost to the Spec.StateItemsOverwrites['deploy']['bm_ipam_netconfigv2'] parameter of LCMMachine.
- The lcm-agent service on every host synchronizes the LCMMachine data to its host. The lcm-agent service runs a playbook to update the netplan configuration on the host during the pre-download and deploy phases.
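To verify the rendering result for a particular host, you can inspect the fields mentioned above directly, for example:

kubectl --kubeconfig <pathToManagementClusterKubeconfig> \
  -n <ProjectNameForNewManagedCluster> get ipamhost <hostName> \
  -o jsonpath='{.status.l2RenderResult}'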
Installing RapidMiner Radoop on RapidMiner Server
Prerequisites
The following requirements must be met before installing the RapidMiner Radoop extension on RapidMiner Server:
- RapidMiner Radoop Extension installed and tested on RapidMiner Studio. If necessary, see Configuring RapidMiner Radoop Connections to ensure that you have a valid connection to a Hadoop cluster in RapidMiner Studio.
Installing RapidMiner Radoop on RapidMiner Server and the connected Job Agent(s)
Installing the RapidMiner Radoop extension on RapidMiner Server requires that you copy files from your RapidMiner Studio configuration into your RapidMiner Server installation. The central resource management functionality will automatically synchronize the Radoop extension, Radoop licenses and connection definitions to all connected Job Agents.
You need to prepare with the following artifacts to accomplish the installation:
RapidMiner Radoop Extension (a JAR file). You can download RapidMiner Radoop extension from the Marketplace or you can get it on your desktop computer from your local .RapidMiner/configuration directory (created by RapidMiner Studio).
Radoop license (a license string and/or a .lic file). RapidMiner Radoop license needs manual installation on RapidMiner Server (note that a Radoop Basic license is not enough to use Radoop). You can obtain it from RapidMiner, or you can locate the license file on your desktop computer in your local .RapidMiner/configuration directory (created by RapidMiner Studio).
Radoop Connection definitions (an XML file). Locate the radoop_connections.xml file in your local .RapidMiner/configuration directory (created by RapidMiner Studio).
Installing RapidMiner Radoop on RapidMiner Server
Stop the server.
Copy the Radoop extension JAR file to the resources/extensions/ subfolder of your RapidMiner Server Home Directory.
Copy the radoop_connections.xml file into the .RapidMiner/ subfolder of your RapidMiner Server Home Directory.
Start the server.
On the Server Web UI, navigate to Administration > Manage Licenses and check your Radoop license under Active licenses. If it is a Radoop Basic license, click on Install License in the Actions menu (located on the right side by default) and paste your Radoop license in the text field.
Installing RapidMiner Radoop on RapidMiner Server Job Agents
The central resource management functionality of RapidMiner Server will automatically synchronize the Radoop extension, installed licenses, and connections described in your radoop_connections.xml to all connected Job Agents. Please make sure that central resource management is configured to sync the locations where you uploaded these artifacts (the default locations will already be covered out-of-the-box).
If you need instructions on how to set up Radoop on all Job Agents manually, you will find it in the previous version of this document.
Updating Radoop connections on RapidMiner Server
Radoop connections are stored in radoop_connections.xml on the server side (in the .RapidMiner/ subfolder of the RapidMiner Server Home Directory), but there is no GUI on the server to edit the connections. The recommended procedure is to edit connections on the client side using RapidMiner Studio and then upload them to the server as an XML file.
Follow these steps to apply your new connection definitions on your Server deployment:
Copy (overwrite) radoop_connections.xml in the .RapidMiner/ subfolder of the RapidMiner Server Home Directory.
To avoid a server restart - but still broadcast the changes - you need to manually trigger an update on all connected Job Agents by calling a Server REST API. To achieve this, you need to invoke the /executions/sync/update REST endpoint of the Server, with the "type":"EXECUTION_CONTEXT" parameter set and authentication in place. A successful trigger is indicated by a 2xx status code in the HTTP response. Here's an example using the command line:
curl "https://<your_server_address:port>/executions/sync/update" \
  -X POST \
  -d '{"type":"EXECUTION_CONTEXT"}' \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <JWT_token>" \
  -w "\nResponse HTTP status code: %{http_code}\n"
Alternatively, restart RapidMiner Server to apply the changes to Server and all connected Job Agents.
Changes to the radoop_connections.xml are applied immediately to all process executions started after the update. Already running processes remain unaffected.
Managing multiple Radoop connections on RapidMiner Server
The radoop_connections.xml file can list an arbitrary number of connections and should list all connections that may be used by any process submitted by any user to this Server. These connections may point to the same Hadoop cluster or may point to different clusters. Rapidminer Server administrator may define connections for the same user or for different users (see Managing multiple Hadoop users below).
To control the access rights to these connections on the RapidMiner Server - e.g. to restrict which user can use which connection when submitting processes to the RapidMiner Server - each connection should set the so called Access Whitelist field. See Access control on Radoop connections for details.
The connection names must be the same on the RapidMiner Server and in the RapidMiner Studio instance that submits the process to ensure correct process execution across the platform.
Once you have created a radoop_connections.xml file containing all desired connections, follow the procedure about Updating Radoop connections to apply changes on the Server.
Managing multiple Hadoop users on RapidMiner Server
In a multi-user Hadoop environment the RapidMiner Server administrator needs to manually edit the radoop_connections.xml file on Server to make sure that all connections are included and to ensure that users of RapidMiner platform are restricted to use solely their own identity on the Hadoop cluster (i.e. execute Spark jobs and Hive queries using their Hadoop access rights). After the changes has been made to radoop_connection.xml then follow the procedure about Updating Radoop connections to apply changes on the Server.
Two different configuration strategies are available:
- Dedicated Radoop connections. One for each Hadoop user.
- One connection with the credentials of a privileged Hadoop user, which is a user allowed to impersonate other users. (see Apache Hadoop user impersonation)
Option #1: Creating dedicated Radoop connections
This approach requires a dedicated connection definition for each Hadoop user. Administrators must take care of Radoop connection name conflicts and setting up individual Hadoop credentials for each Radoop connection. RapidMiner Studio users only need to have their own connection(s) in their local connection file on their client machine belonging to their Hadoop identity. On the RapidMiner Server side, there will be multiple connections defined in the connection file. An example for naming the connections:
clustername_username, where clustername is an identifier for the Hadoop cluster and username is an identifier for the user (e.g. that may be the same as the value of the Hadoop username field). The Edit XML... option on the Connection Settings dialog can be used to copy each user's connection entry into the merged radoop_connections.xml on the Server.
Although this strategy is the simplest to introduce, since it doesn't require any Hadoop cluster side setup, it may have its drawbacks. Administrators eventually have to keep several Radoop connections in sync that may only differ in their Hadoop credentials.
Option #2: Using Hadoop user impersonation in the Radoop connection
Hadoop user impersonation is available for Radoop connections. This approach enables the administrators to maintain a single Radoop connection with the credentials of a privileged Hadoop user, who is able to impersonate other Hadoop users.
This approach results in less maintenance and simpler access right management, while the credentials of the individual users (their encrypted passwords or keytabs) are not stored on the RapidMiner server.
Prerequisite Hadoop cluster side configuration for impersonation
On the Hadoop side, there should be a dedicated user (username can be e.g. privilegeduser), who has the rights to impersonate others. This configuration can be done based on the Hadoop documentation. In a simple case, the following snippet should be added to the core-site.xml in the Hadoop Configuration:
<property>
    <name>hadoop.proxyuser.privilegeduser.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.privilegeduser.groups</name>
    <value>*</value>
</property>
If HDFS Encryption (and KMS service) is enabled, the similar settings should be also ensured in the kms-site.xml. For detailed information please visit the KMS Proxyuser Configuration section on the KMS documentation page or follow the instructions of your Hadoop vendor.
Creating and testing an impersonated connection for RapidMiner Server
As a recommended approach, a connection should be constructed using RapidMiner Studio. You can find RapidMiner Server related settings on the RapidMiner Server tab of the Connection Settings dialog.
As on the screenshot above, the Enable impersonation on Server checkbox should be enabled and the credentials of the superuser should be entered to the Server Principal and Server Keytab File fields similar to the case with client users (presented in section Hadoop security configuration).
In case LDAP authentication is configured for Hive, the Hive Principal should be empty and the credentials of the privilegeduser should be entered into the Hive Username and Password fields (these two fields are only enabled if Hive Principal is empty).
The connection can be tested from RapidMiner Studio, if the networking setup allows connecting to the Hadoop cluster from the client hosts. If the Impersonated user for local testing field is set (e.g. scott is entered as username), then all the operations are submitted using the privilegeduser credentials, but impersonating the scott user and using its access rights. This field does not have an effect when running on RapidMiner Server: in that case, the effective user will always be the user who submitted the RapidMiner process.
Securing Radoop connections on RapidMiner Server
RapidMiner Server supports connections to Hadoop clusters with the same security settings as RapidMiner Studio, but you may need to manually edit the connection XML file (e.g. because of different file path settings on the server side). In general, connections should be constructed using RapidMiner Studio (using it as a "connection editor"), and the following additional steps should be considered.
Decrypting connection passwords
RapidMiner Radoop uses the local cipher.key file to encrypt and the key attribute of the radoop-entries tag in the XML file to decrypt the passwords in the radoop_connections.xml file by default. If the radoop_connections.xml contains entries from multiple users, there are two possible solutions:
- Creating every user's connection entry on the same computer (with the same cipher.key file), or
- Adding a key attribute to each radoop-connection-entry manually. Radoop will use the per-entry key attribute instead of the per-file key.
For example, users John and Scott have the following radoop_connections.xml files:
<radoop-entries key="JOHNS-KEY">
    <radoop-connection-entry>
        <name>connection-john</name>
        ...
    </radoop-connection-entry>
</radoop-entries>
<radoop-entries key="SCOTTS-KEY">
    <radoop-connection-entry>
        <name>connection-scott</name>
        ...
    </radoop-connection-entry>
</radoop-entries>
The merged radoop_connections.xml looks like the following:
<radoop-entries>
    <radoop-connection-entry key="JOHNS-KEY">
        <name>connection-john</name>
        ...
    </radoop-connection-entry>
    <radoop-connection-entry key="SCOTTS-KEY">
        <name>connection-scott</name>
        ...
    </radoop-connection-entry>
</radoop-entries>
Connection to Hadoop clusters with Kerberos authentication
For configuring a connection to a cluster with Kerberos authentication, see Hadoop security. Please take the following notes when using these connections through RapidMiner Server.
Connecting with Kerberos password
It is possible to use a password to connect to a Kerberized cluster. To make sure that the encrypted passwords in the connection XML can be decrypted on the Server, please refer to the Decrypting connection passwords section. Please note that on the Server side, using a keytab is recommended, as the ticket renewal is not supported in case of using a password.
Connecting with keytab file
Connections to a Kerberized cluster should specify the path to the user's keytab file instead of the password. This means that the keytab file must be accessible on the local file system of the Server. The path usually differs from the path on the local file system of the user using RapidMiner Studio. The RapidMiner Server administrator has to ensure that the keytabFile field of the radoop_connections.xml file on the Server points to the appropriate path on the Server. The keytab file itself should only be accessible to the user running RapidMiner Server.
Connecting to Hive with LDAP authentication
If LDAP is used for authentication to HiveServer2, then passwords should be entered similarly to the Kerberos passwords; please refer to the Decrypting connection passwords section. In case of impersonation, the provided Hive LDAP user should also have Hadoop proxyuser privileges.
Access control on Radoop connections
The availability of a Hadoop connection on RapidMiner Server can be restricted to certain users or groups. To define a group (or user) whitelist for a connection, add the accesswhitelist xml tag for the corresponding radoop-connection-entry in the radoop_connections.xml. The value of this property is an arbitrary regular expression (.* or * can be used for allowing all users). Only RapidMiner Server users whose group matches this expression are allowed to use the connection in a submitted process. If this optional accesswhitelist is not specified for a connection, then any user can use it in a process.
<radoop-connection-entry>
    ....
    <accesswhitelist>ds_group|dba_group|john|scott</accesswhitelist>
</radoop-connection-entry>
Change Radoop Proxy enabled connections
Radoop Proxy is automatically disabled when a process is executed on RapidMiner Server, because in a typical setup, RapidMiner Server runs inside the secure zone, that's why there is no need to route the traffic through the Proxy.
In case you have a custom manual Radoop Proxy installed on an edge node, and RapidMiner Server (besides Studio) can only reach the Hadoop cluster via this edge node (so it runs outside the secure zone), you need to enable Force Radoop Proxy on Server setting on the RapidMiner Server tab. This setting has no effect when running in Studio.
Alternatively, you can manually edit the radoop_connections.xml file on the Server. In this case add the forceproxyonserver tag with the value T.
<radoop-entries>
    <radoop-connection-entry>
        ...
        <forceproxyonserver>T</forceproxyonserver>
        ...
    </radoop-connection-entry>
</radoop-entries>
To apply the updated connection, follow the procedure about Updating Radoop connections.
The location of the Radoop Proxy connection specified in Studio for this connection needs to be the Remote Repository corresponding to this RapidMiner Server instance. Otherwise the process won't be able to find the proxy connection when running on the Server and will fail because of that.
User attributes report
View a snapshot of your users in both graphical and tabular format with the User Attributes report. For example, view the lifetime spend, the lifetime minutes spent in the app, the app version and the location of all your users for a particular user segment.
To access the User Attributes report, on the Analytics menu, select User attributes.
You can perform the following actions to customize the data displayed in the report:
- Select the user attribute on which the graph is based.
- Select the user segment for which you want to view data.
- Select the number of values displayed on the X-axis.
In addition, you can perform the following actions on the User Attributes screen:
- Print the report.
- Save report data in CSV file format.
The following image shows an example of a typical user report:
The X-axis displays the attribute value, such as the lifetime spend.
Data point markers on the chart enable you to view specific data for a particular attribute; placing your cursor on any data point marker on the chart displays a tooltip which details the number of users with the particular attribute value. You can also read this data in the table below the chart.
For information about the data source for this report, see What is the data source for the user attributes report?
Customizing report data
To customize the data displayed in the report use the filter lists at the top of the chart.
- Select a user attribute to show – select one of the user attributes to be included in the results:
- Lifetime spend – the total amount of money spent by each user.
- Lifetime minutes in app – the total number of minutes that the user has spent in the app.
- App version – the most recent version of your app that each user has used.
- Level – an increasing integer that represents the game level or progress through the app.
- Age, Gender or Location
- Values Displayed – select the number of attribute values displayed in your results.
- View segment – select the user segment for which you want to view user data.
Additional actions
The following sections provide information about the additional actions you can perform on the User Attributes report.
Printing reports
To print a user report, on the menu in the top right-hand corner of the chart, select Print. Depending on the type of browser you are using, the print screen or print dialog box displays. Define your required print settings and print the report.
Saving report data in CSV format
To save the report data in comma-separated values (CSV) file format, on the menu in the top right-hand corner of the chart, select Download CSV. The CSV file is downloaded to your local machine.
Data source
The lifetime spend, lifetime minutes in app, and app version data on the User Attributes screen is calculated automatically for you by Swrve. The other attributes (level, age, gender and location) must be supplied by your development team to Swrve during your Swrve integration process. | https://docs.swrve.com/user-documentation/analytics/user-attributes-report/ | 2021-01-16T03:03:33 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.swrve.com |
Logging & Diagnostics
The cbapi provides extensive logging facilities to track down issues communicating with the REST API and understand potential performance bottlenecks.
Enabling Logging
The cbapi uses Python's standard logging module for logging. To enable debug logging for the cbapi, you can do the following:
>>> import logging
>>> root = logging.getLogger()
>>> root.addHandler(logging.StreamHandler())
>>> logging.getLogger("cbapi").setLevel(logging.DEBUG)
Once debug logging is enabled, all REST API calls are logged, including the API endpoint, any data sent via POST or PUT, and the time it took for the call to complete:
>>> user.save()
Creating a new User object
Sending HTTP POST /api/user with {"email": "[email protected]", "first_name": "Jason", "global_admin": false, "id": null, "last_name": "Garman", "password": "cbisawesome", "teams": [], "username": "jgarman"}
HTTP POST /api/user took 0.079s (response 200)
Received response: {u'result': u'success'}
HTTP GET /api/user/jgarman took 0.011s (response 200)
Dos
- Uninstall hot fixes and add-ons, if any, before you start the TeamForge 18.3 upgrade.
Single Server Setup
You can install TeamForge on either RHEL/CentOS 7.5 or 6.10. Log on as root.
The host must be registered with the Red Hat Network if you are using Red Hat Enterprise Linux.
See the RHEL 7.5 profile:
compat-ctf-dc-media-1.0-1.el7.centos.noarch.rpm.
- Unpack the disconnected installation package.
rpm -ivh <package-name>
- Unpack the compat-ctf-dc-media-1.0-1.el7.centos.noarch.rpm package if you are installing TeamForge 18.3 on CentOS 7.
- Install the TeamForge application packages.
yum install teamforge
- Provision services.
teamforge provision
The TeamForge 18.3 installer expects the system locale to be LANG=en_US.UTF-8. TeamForge create runtime (teamforge provision) fails otherwise.
- Users are not getting email notifications for review requests and reviews. What should I do?
- For Jenkins plugin to notify TeamForge Webhooks Event Broker and to migrate Jenkins data from EventQ to TeamForge:
Also See…
FAQs on Install / Upgrade / Administration
Divi Block Reveal Image Module
Create engaging and fresh interactions using Divi Block Reveal Image Module. The effect first shows a decorative block element being drawn, and when it starts to decrease in size, it uncovers the image underneath. You get 4 types of animation to choose from.
Content Options
Image
Add the Actual Image here that will be shown after a cool animation.
Link
Open in Lightbox
Here you can choose whether or not the image should open in Lightbox. Note: if you select to open the image in Lightbox, URL options below will be ignored.
Open as Video Popup
Put the video link in the Image URL field: copy the video URL and paste it here. Supported: YouTube, Vimeo and Dailymotion. Then, if you click on the image, a video popup will be shown.
Block Reveal Animation
Block Reveal Animation style
Here you can set the Animation Direction. You can select where it will start from and where it will end. You have four options to choose from:
- Left to Right
- Right to Left
- Top to Bottom
- Bottom to Top
Block Reveal Color
Here you can change the color of the Block which will be shown first with an animation.
Delay
Adjust the Delay of the Block Reveal Animation here.
Animate in Viewport
The block reveal animation will only be shown when the module is in the viewport.
Design Options
Overlay
Image Overlay
Enable this switch if you want to add overlay color or icon or both to the image.
Overlay Color
Change the color of the overlay on the image to one of your own.
Use Icon
You can enable this option if you want to show an icon in the overlay. You can then choose any icon from the Divi Icon Library and change its color as well.
Overlay Rounded Corners, Border and Box Shadow
Here you can control the corner radius of the Image Overlay. Enable the link icon to control all four corners at once, or disable to define custom values for each. You can play around with adding borders and Box Shadow to the Image Overlay.
Add Rounded Corners, Border and Box Shadow to Image
Here you’ll find options to control the corner radius of the Image. You can play around with adding borders and Box Shadow to the Image under Design Options.
Advanced Options
Use the advanced options to give your Block Reveal Image module custom CSS ID's and Classes. Add some custom CSS for advanced styling and designate the module's visibility on certain devices.