NR 1.06(5)
(5)
Public rights features identified.
Public rights features are:
NR 1.06(5)(a)
(a)
Fish and wildlife habitat, including specific sites necessary for breeding, nesting, nursery and feeding.
NR 1.06 Note
Note:
Physical features constituting fish and wildlife habitat include stands of aquatic plants; riffles and pools in streams; undercut banks with overhanging vegetation or that are vegetated above; areas of lake or streambed where fish nests are visible; large woody cover.
NR 1.06(5)(b)
(b)
Physical features of lakes and streams that ensure protection of water quality.
NR 1.06 Note
Note:
Physical features that protect water quality include stands of aquatic plants (that protect against erosion and so minimize sedimentation), natural streambed features such as riffles or boulders (that cause turbulent stream flow and so provide aeration).
NR 1.06(5)(c)
(c)
Reaches of bank, shore or bed that are predominantly natural in appearance (not man-made or artificial) or that screen man-made or artificial features.
NR 1.06 Note
Note:
Reaches include those with stands of vegetation that include intermixed trees, shrubs and grasses; stands of mature pines or other conifer species; bog fringe; bluffs rising from the water's edge; beds of emergent plants such as wild rice, wild celery, reeds, arrowhead.
NR 1.06(5)(d)
(d)
Navigation thoroughfares or areas traditionally used for navigation during recreational boating, angling, hunting or enjoyment of natural scenic beauty.
NR 1.06 Note
Note:
Physical features indicative of navigation thoroughfares include shallow water areas typically used by wading anglers or areas frequently occupied by regularly repeated public uses such as water shows.
NR 1.06(6)
(6)
Basis of department determination.
The department shall base its identification of public rights features on factual information obtained from reputable sources, including:
NR 1.06(6)(a)
(a)
Field surveys and inspections, including historical surveys for fish, wildlife, rare species, aquatic plants, geologic features or water quality.
NR 1.06(6)(b)
(b)
Surveys or plans from federal, state or local agencies.
Note: All pages below are subject to having relevant Roles and Permissions.
From the Menu on the left go to Config > Setup > User Name and Formats.
From the dropdown menu, select the Name Format to be amended.
To amend the Format, clear the existing Data Fields shown in the Name Format Editor field by highlighting and deleting them, or by using your cursor to delete them.
Select an appropriate Name Field from the Name Format Fields dropdown.
Build up the Name as required ensuring a space is added between each Field, then click the Save button. An example of the Name Format will appear as each Field is added.
Reporting
No promises made in CFEngine imply automatic aggregation of data to a central location. In CFEngine Enterprise (our commercial version), an optimized aggregation of standardized reports is provided, but the ultimate decision to aggregate must be yours.
Monitoring and reporting capabilities in CFEngine depend on your installation:
Enterprise Edition Reporting
The CFEngine Enterprise edition offers a framework for configuration management that goes beyond building and deploying systems. Features include compliance management, reporting and business integration, and tools for handling the necessary complexity.
In a CFEngine Enterprise installation, the CFEngine Server aggregates information about the environment in a centralized database. By default data is collected every 5 minutes from all bootstrapped hosts and includes information about:
- logs about promises kept, not kept and repaired
- current host contexts and classifications
- variables
- software information
- file changes
This data can be mined using SQL queries and then used for inventory management, compliance reporting, system diagnostics, and capacity planning.
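As a rough illustration, a query against the hub's reporting database might look like the sketch below; the table and column names used here (hosts, hostname, ipaddress) are assumptions for illustration and should be checked against the Enterprise reporting schema documentation for your version.

-- Minimal sketch (assumed schema): list the hosts the hub knows about
SELECT hostname, ipaddress
FROM hosts
ORDER BY hostname;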
Access to the data is provided through:
Command-Line Reporting
Community Edition
Basic output to file or logs can be customized on a per-promise basis. Users can design their own log and report formats, but data processing and extraction from CFEngine's embedded databases must be scripted by the user.
Note:
If you have regular reporting needs, we recommend using our commercially-supported version of CFEngine, Enterprise. It will save considerable time and resources in programming, and you will have access to the latest developments through the software subscription. | https://docs.cfengine.com/docs/3.12/guide-reporting.html | 2020-11-23T22:18:05 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.cfengine.com |
GraphExpert Professional 1.6.0 documentation
With this setting, for example, users that utilize commas for decimals in GraphExpert Professional can choose to read and write files using dots for decimals, and vice versa.
Note
In the file import dialog (see Importing from file), GraphExpert Professional also lets you choose the numerical format used to read the file.
If your file contains the following within the first ten lines of the file:
#DataFileProperties: locale=EN
or:
#DataFileProperties: locale=EURO
this signals to the GraphExpert Professional file importing mechanism that the file uses English or European numerical formatting (as appropriate), and the default in the import dialog (see Importing from file) is set accordingly. The sample datafiles distributed with GraphExpert Professional contain this line, so that the samples work straightforwardly regardless of the application’s current region settings. This small addition to a text datafile lets the file itself express its own numerical formatting.
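For example, a plain-text datafile that declares English numerical formatting might begin like the sketch below; the column names and numbers are invented purely for illustration.

#DataFileProperties: locale=EN
x, y
1.00, 2.50
2.00, 3.75
3.00, 5.10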
SentinelOne Endpoint Detection and Response
SentinelOne Endpoint Detection and Response (EDR) is agent-based threat detection software that can address malware, exploit, and insider attacks on your network. InsightIDR features a SentinelOne event source that you can configure to parse SentinelOne EDR logs for virus infection documents.
You can learn more about SentinelOne EDR on their product website:
This SentinelOne event source configuration involves the following steps:
- Configure SentinelOne EDR to Send Logs to InsightIDR
- Configure the SentinelOne Event Source in InsightIDR
Configure SentinelOne EDR to Send Logs to InsightIDR
Before you configure the SentinelOne event source in InsightIDR, you need to configure SentinelOne EDR to send its logs to your collector. Consult your SentinelOne product documentation for instructions on how to do this:
Configure the SentinelOne Event Source in InsightIDR
After you’ve configured SentinelOne to send its logs to your collector, you can configure the event source in InsightIDR.
To configure this SentinelOne event source:
- From your InsightIDR dashboard, expand your left menu and click the Data Collection tab.
- On the “Data Collection Management” screen, expand the Setup Event Source dropdown and click Add Event Source.
- In the “Add Event Source” category window, browse to the “Security Data” section and click Virus Scan. The “Add Event Source” panel appears.
- Select your configured collector from the dropdown list. This should be the same collector that you configured SentinelOne to target for log ingestion.
- Expand the “Event Source” dropdown and select SentinelOne EDR.
- If desired, you can give your event source a custom name for reference purposes.
- Choose the timezone that matches the location of your event source logs.
- If desired, check the provided box to send unfiltered logs.
- Select a collection method and specify a port.
- If desired, you can choose to encrypt the event source if choosing TCP by downloading the Rapid7 Certificate.
- Click Save when finished.
inst/CITATION
Ross, Noam, Evan A. Eskew, and Nicolas Ray. 2019. citesdb: An R package to support analysis of CITES Trade Database shipment-level data. Journal of Open Source Software, 4(37), 1483,
@Article{, doi = {10.21105/joss.01483}, url = {}, year = {2019}, month = {May}, publisher = {The Open Journal}, volume = {4}, number = {37}, pages = {1483}, author = {Noam Ross and Evan A. Eskew and Nicolas Ray}, title = {citesdb: An R package to support analysis of CITES Trade Database shipment-level data}, journal = {Journal of Open Source Software}, }
UNEP-WCMC (Comps.) 2019. Full CITES Trade Database Download. Version 2019.2. CITES Secretariat, Geneva, Switzerland. Compiled by UNEP-WCMC, Cambridge, UK. Available at:.
@Misc{, author = {{UNEP-WCMC}}, title = {Full CITES Trade Database Download. Version 2019.2}, year = {2019}, institution = {CITES Secretariat}, address = {Geneva, Switzerland}, url = {}, }
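If the package is installed, the same citation entries can be printed from within R using the base citation() function:

# Print the citation entries recorded in the package's CITATION file
citation("citesdb")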
Noam Ross. Author, maintainer.
Evan A. Eskew. Author.
Nicolas Ray. Contributor.
UNEP World Conservation Monitoring Centre. Data contributor.
Maintainer of CITES Trade Database
Mauricio Vargas. Reviewer.
Reviewed package for rOpenSci:
Xavier Rotllan-Puig. Reviewer.
Reviewed package for rOpenSci:
EcoHealth Alliance. Copyright holder, funder. | https://docs.ropensci.org/citesdb/authors.html | 2020-11-23T21:30:09 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.ropensci.org |
Use the Environment tab to examine the status of the three badges as they relate to the objects in your environment hierarchy. You can then determine which objects are in a critical state for a particular badge. To view the relationships between your objects to determine whether an ancestor object that has a critical problem might be causing problems with the descendants of the object, use.
As you click each of the badges in the Environment tab, you see that several objects are experiencing critical problems with health. Others are reporting critical risk status.
Several objects are experiencing stress. You notice that you can reclaim capacity from multiple virtual machines and a host system, but the overall efficiency status for your environment displays no problems.
Prerequisites
Examine the status of your objects in views and heat maps. See Examine the Environment Details.
Procedure
- Click .
- Examine the USA-Cluster environment overview to evaluate the badge states of the objects in a hierarchical view.
- In the inventory tree, click USA-Cluster, and click the Environment tab.
- On the Badge toolbar, click through the three badges - Health, Risk, and Efficiency - and look for red icons to identify critical problems.As you click through the badges, you notice that your vCenter Server and other top-level objects appear to be healthy. However, you see that a host system and several virtual machines are in a critical state for health, risk, and efficiency.
- Point to the red icon for the host system to display the IP address.
- Enter the IP address in the search text box, and click the link that appears.The host system is highlighted in the inventory tree. You can then look for recommendations or alerts for the host system on the Summary tab.
- Examine the environment list and view the badge status for your objects to determine which objects are in a critical state.
- Click the Environment tab.
- Examine the badge states for the objects in USA-Cluster.
- Many of the objects display critical states for risk and health. You notice that multiple virtual machines and a host system named w2-vropsqe2-009 are critically affected. Because the host system is experiencing the most critical problems, and is likely affecting other objects, you must focus on resolving the problems with the host system.
- Click the host system named w2-vropsqe2-009, which is in a critical state, to locate it in the inventory tree.
- Click w2-vropsqe2-009 in the inventory tree, and click the Summary tab to look for recommendations and alerts to act on.
- Examine the relationship map.
- Click .
- In the inventory tree, click USA-Cluster, and view the map of related objects.In the relationship map, you can see that the USA-Cluster has an ancestor data center, one descendant resource pool, and two descendant host systems.
- Click the host system named w2-vropsqe2-009.The types and numbers of descendant objects for this host system appear in the list following. Use the descendant object list identify all the objects related to the host system that might be experiencing problems.
What to do next
Use the user interface to resolve the problems. See Fix the Problem. | https://docs.vmware.com/en/VMware-vRealize-Operations-Cloud/services/user-guide/GUID-996F4F1A-9B5A-47ED-A17D-75829B8079D7.html | 2020-11-23T23:02:21 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.vmware.com |
primary key columns. Different tables can use different keys.
SSTable data files are immutable once they have been flushed to disk and are only encrypted during the write to disk. To encrypt existing data, use nodetool upgradesstables with the -a option to rewrite the tables to disk with encryption.

Warning: Primary keys are stored in plain text. Do NOT put sensitive information in partition key or clustering columns.
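For example, to rewrite the existing SSTables of one table so they are written back to disk encrypted (the keyspace and table names below are placeholders):

# Rewrite all SSTables of the table, including those already at the current version
nodetool upgradesstables -a my_keyspace my_table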
Data that is not encrypted
- Table partition key columns
- Database files other than the commit log and SSTable data files
- DSEFS data files
- Spark spill files
Requirements
To use the DataStax Enterprise (DSE) Transparent Data Encryption (TDE) feature, enable the Java Cryptography Extension (JCE).
When using TDE on a secure local file system, encryption keys are stored remotely with KMIP encryption or locally with on-server encryption.
TDE limitations and recommendations
The following utilities cannot access encrypted data, but will operate on all unencrypted data.
Compression and encryption introduce performance overhead.
config_encryption_active is true in DSE and OpsCenter. For LCM limitations, see Configuration encryption.
TDE options
To get the full capabilities of TDE and to ensure full algorithm support, enable JCE Unlimited. | https://docs.datastax.com/en/security/5.1/security/secEncryptTDE.html | 2020-11-23T22:51:13 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.datastax.com |
Introduction
The Synthetics section displays the configured synthetic monitors. After creating a synthetic monitor, click the desired monitor to view details such as Overview, Metrics, and Monitors.
Dashboard
The dashboard displays the total number of configured synthetics, the status of each synthetic configured on the Infrastructure page, the Add and Delete buttons, and a Refresh button.
Status of synthetics
Important: Availability.
Overview.
Availability Log
View root cause
To view Root Cause in Availability Log:
- From the Availability Log section, click view Root Cause.
The Last Collected Root Cause Analysis window appears.
- From the Last Collected Root Cause Analysis window, analyze the below details from the configured location.
- Analysis started time – The Root Cause Analysis (RCA) collection start time.
- Analysis ended time – The RCA collection end time.
- Analysis elapsed time – The time taken to perform the RCA.
- Initial Validation – Displays the reason for the Down availability status for the selected synthetic monitor.
- TraceRoute Test – Displays details that allow you to debug the issue.
Last Collected RCA.
Traceroute Test
The TraceRoute Test section displays the following details:
- Route Info – The path taken by packets from the configured location to the host.
- Error Log – Displays that Traceroute is a failure.
Important: TraceRoute Test supports only 30 hops maximum and 60-byte packets.
Recent log
Recent Log displays information about the recently collected metrics.
Attributes
Attributes display all basic information on the configured synthetic monitor.
- URL: Refers to the URL of the website.
- Method: Refers to the REST method of the connected website.
- Status Code: Refers to the HTTP response code or Libcurl error code of the Website.
- Response Headers: Refers to the HTTP response headers of the website.
- Raw Data Response: Refers to the HTTP response provided by the website.
- Status: Refers to the success or failure of the test.
HTTP.
Important: To create a device management policy, you can filter synthetics either using the synthetic Name or Type when applying Resource filters as Filter Criteria. For example, to apply HTTPS templates, select HTTPS as the Synthetic Type for the resource type Synthetics.
Filter Criteria - Device Management Policy
Use the following synthetic types to filter:
- HTTPS
- HTTP
- TCP
- PING
- UDP
- DNS
- SMTP
- POP3
- IMAP
- SCRIPT
- SSL
- FTP
- SIP
- RTT
Assigning templates
OpsRamp can start monitoring only after assigning the templates to your synthetic monitor. You can assign only one template per synthetic monitor. Monitoring does not work as expected if you assign more than one template.
Important: You can assign templates that pertain to a specific synthetic monitor. For example, you can assign DNS Templates to DNS synthetic monitors and not to PING synthetic monitors.
You can create templates using Setup > Monitoring > Templates.
Notes
- You can select the following basic details while creating a template:
- Collector Type: Synthetics
- Applicable For: Synthetics
- Type: Select the desired synthetic monitor
- To receive alerts for configured metrics from all the configured locations assigned to a template, you must enable the Alert option and configure the Component Threshold for each metric while creating the template.
To assign templates:
- From Templates, click Assign Templates.
Apply Templates screen is displayed.
Apply Templates
- From Select Templates > Available templates, select the desired templates.
Selected templates display the chosen templates.
- Click Assign.
Enter Configurations section is displayed.
Enter Configurations
- Provide Value for the Assigned Templates and Configuration Parameters and click Submit.
The templates screen displays the selected templates.
Configuration Parameter and Value
The following table describes the configuration parameters for each monitor:
Unassigning templates
You can remove an assigned template from the monitor. Use the Unassign Templates option to unassign the templates from the synthetic monitors. OpsRamp removes every graph associated with the templates.
Unassign Templates
Monitors
Monitors allow you to configure alerts and edit threshold values assigned to any metric in the selected synthetic monitor.
Monitors
Availability rule
Availability lets you configure rules to confirm the status of the resources with the availability check depending on the critical alerts received for the metrics.
Availability Rule.
Parameters.
Notes
To add Notes:
- Click Add displayed on the Notes window.
- On the Notes window, provide the required details.
- Click Save.
You can modify existing notes from the Notes window using Edit Notes.
Articles
Articles display notes tagged to the configured synthetic monitor.
Articles
ADD/MODIFY
The ADD/MODIFY option allows you to add or remove articles for the selected synthetic monitor.
To ADD/MODIFY articles:
- Click ADD/MODIFY.
The Articles window appears.
- Select the check-box of desired articles from the Articles window.
- Click Add.
The Articles section displays the selected articles.
What to do next
- See Modify Metric Custom Notifications and Thresholds for information on customizing threshold values.
- See Create Templates.
- See Credential Set in SCRIPT – HTTP Synthetic Transaction Synthetic Monitor.
Understand WordPress filesystem permissions
Bitnami applies the following default permissions to WordPress files and directories:
- Files and directories are owned by user bitnami and group daemon.
- Directories are configured with permissions 775 by default.
- Files are configured with permissions 664 by default.
- The wp-config.php file is configured with permissions 640.
If permissions are wrong, use the chmod or chown commands to restore them to their initial state. For example, if TARGET is the WordPress application folder:
$ sudo chown -R bitnami:daemon TARGET $ sudo find TARGET -type d -exec chmod 775 {} \; $ sudo find TARGET -type f -exec chmod 664 {} \; $ sudo chmod 640 TARGET/wp-config.php | https://docs.bitnami.com/azure/apps/wordpress-multisite/administration/understand-file-permissions/ | 2020-11-23T22:34:21 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.bitnami.com |
Create company connect groups in Kaizala
Company connect groups are one-way communication channels. They allow an organization to broadcast important announcements, updates, and information to the workforce. An organization can create company connect groups for their employees, partners, and customers.
Step 1 – Create a hub-and-spoke group
A hub-and-spoke group in Kaizala is a unique group where admins can broadcast messages to all of its members, and members of the group can interact with the admins of the group on a one-to-one basis without their messages being visible to other group members.
Note
You can only create hub and spoke groups through the Kaizala management portal.
On the Kaizala management portal, from the left navigation bar, choose Groups.
Select Create Group, and from the drop-down menu, select Broadcast Group.
Enter the group name, a short and long description, and a welcome message.
Choose between two group types: Managed or Public.
- Managed groups allow the group admins to view, manage, and invite subscribers.
- Public groups allow subscribers to also invite other subscribers.
Step 2 – Add people to the group
If you want to add several users without using the comma separated list, use bulk upload.
After you create a broadcast group, you can add subscribers (employees, partners, or customers) to it. Once they have been added, the broadcast group will start showing up on their Kaizala app.
To add subscribers, select Manage Subscribers, and then select Add Subscribers.
On the Add Subscribers page, download the CSV template and follow the format to add your subscribers. Save the file when you're done.
Choose Select File to choose the file you just saved, and then click Add.
Step 3 – Onboard the content moderation team
Identify admins who will manage and moderate group content.
Key responsibilities of the group admin are:
- User engagement – share company information, articles, and updates.
- Content moderation – share and implement guidelines on appropriate usage.
- Helping users – show how to perform queries.
- User management – remove or add members.
Your corporate communications team or senior team members are most likely to fit these roles. Add these users as admins to the group under the Users tab.
Tip
- You can set up RSS feeds to automatically post organizational content from across channels such as social media, websites and blogs. Follow these steps here.
- Consider creating separate groups for company employees, suppliers, and partners. This will allow you to send relevant content to each group depending on the group members.
Next> Collect employee feedback | https://docs.microsoft.com/en-us/office365/kaizala/create-company-connect-groups | 2020-11-23T23:07:53 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.microsoft.com |
Document type: notice
There are 4084 pages with document type 'notice' in the GOV.UK search index.
Rendering apps
This document type is rendered by:
Supertypes
Example pages
- Notice 144: trade imports by post - how to complete customs documents
- Public Sector Decarbonisation Scheme (PSDS)
- Notice 143: a guide for international post users
- Coronavirus (COVID-19) Pubs Code Declaration No.2 November 2020
- Condition Data Collection 2 (CDC2): provisional school visits
- Pubs Code Declaration No.2 November 2020 summary
- Notice 5: Transfer of Residence - moving to or returning to the UK from outside the EU
- Social Housing Decarbonisation Fund Demonstrator
- Who should register for VAT (VAT Notice 700/1)
- VAT Notice 733: Flat Rate Scheme for small businesses
Source query from Search API | https://docs.publishing.service.gov.uk/document-types/notice.html | 2020-11-23T21:57:19 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.publishing.service.gov.uk |
# Installation
AdvancedOreGen works best along with a supported SkyBlock plugin.
Without a SkyBlock plugin, AdvancedOreGen would use the permission of the closest player mining on the generator.
Buy and download the latest version of the plugin jar file and place it into your plugin folder, then restart or reload your server. Keep in mind that you will only see the resource once you have created an account on spigotmc.org.
Before proceeding, make sure you meet the requirements detailed here
At a high level, the installation of AI Fabric needs to execute these steps:
Network Configuration
- The Orchestrator machine (domain and port) needs to be accessible from AI Fabric cluster.
- The SQL Server (domain/IP and port) needs to be accessible from AI Fabric cluster.
- Robots/Studio that will make use of AI Fabric need connectivity to the AI Fabric Linux machine.
For peripheral Document Understanding components (Data Manager and OCR Engines):
- Data Manager needs access to AI Fabric on premises at :<port_number>, or to public SaaS endpoints, in case prelabeling is needed (prelabeling is optional).
- Data Manager needs access to OCR engine :<port_number>. OCR engine might be UiPath Document OCR on premises, Omnipage OCR on premises, Google Cloud Vision OCR, Microsoft Read Azure, Microsoft Read on premises.
- Robots need access to OCR :<port_number>. Same OCR options as above, except for Omnipage, which is available in the Robots directly as an Activity Pack.
Connectivity Requirements
The AI Fabric Online install refers to an on-premises installation that downloads AI Fabric application and all related artifacts (e.g. machine learning models) from the internet.
Endpoints the Installer Connects
The AI Fabric installer downloads container images and machine learning models to populate your AI Fabric instance with ready-to-use machine learning (this includes Document Understanding models). For this reason, at installation time, the Linux machine needs access to these endpoints over https (port 443):
Endpoints GPU Installer Script Connects To
These endpoints only need to allow connections when using a GPU with AI Fabric. All GPU installation is done through our GPU installer script in 4. Run the AI Fabric Infrastructure Installer.
Endpoints Connected To at Runtime
At runtime, an AI Fabric that was installed via the online installer connects to these endpoints:
Global illumination is a group of techniques that model both direct and indirect lighting to provide realistic lighting results. Unity has two global illumination systems, which combine direct and indirect lighting.

The Baked Global Illumination system comprises lightmapping, Light Probes, and Reflection Probes. The Realtime Global Illumination system uses Enlighten; Enlighten is deprecated, and the Realtime Global Illumination system will soon be removed from Unity. For more information, see the Unity blog.
vSphere role-based access control (vSphere RBAC) for Azure VMware Solution
In Azure VMware Solution, vCenter has a built-in local user called cloudadmin that is assigned to the built-in CloudAdmin role. The local cloudadmin user is used to set up users in AD. In general, the CloudAdmin role creates and manages workloads in your private cloud. In Azure VMware Solution, the CloudAdmin role has vCenter privileges that differ from other VMware cloud solutions.
Note
Azure VMware Solution currently doesn't offer custom roles on vCenter or the Azure VMware Solution portal.
In a vCenter and ESXi on-premises deployment, the administrator has access to the vCenter [email protected] account. They can also have additional Active Directory (AD) users/groups assigned.
In an Azure VMware Solution deployment, the administrator doesn't have access to the administrator user account. But they can assign AD users and groups to the CloudAdmin role on vCenter.
The private cloud user doesn't have access to and can't configure specific management components supported and managed by Microsoft. For example, clusters, hosts, datastores, and distributed virtual switches.
Azure VMware Solution CloudAdmin role on vCenter
You can view the privileges granted to the Azure VMware Solution CloudAdmin role on your Azure VMware Solution private cloud vCenter.
Log into the SDDC vSphere Client and go to Menu > Administration.
Under Access Control, select Roles.
From the list of roles, select CloudAdmin and then select Privileges.
The CloudAdmin role in Azure VMware Solution has the following privileges on vCenter. Refer to the VMware product documentation for a detailed explanation of each privilege.
Next steps
Refer to the VMware product documentation for a detailed explanation of each privilege. | https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-role-based-access-control | 2020-11-23T23:21:08 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.microsoft.com |
sys.dm_hadr_cluster (Transact-SQL)
Applies to:
SQL Server (all supported versions).
Tip
Beginning in SQL Server 2014 (12.x),) | https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-hadr-cluster-transact-sql?view=sql-server-ver15 | 2020-11-23T23:35:32 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.microsoft.com |
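As a quick illustration, the view can be queried directly; verify the column list against the reference for your SQL Server version:

-- Inspect the WSFC cluster that hosts the Always On availability groups
SELECT cluster_name,
       quorum_type_desc,
       quorum_state_desc
FROM sys.dm_hadr_cluster;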
Catalogs
Description
A brief introduction to ZCatalogs, the Catalog Tool and what they're used for.
- Why ZCatalogs?
- Quick start
- Other catalogs
- Manually indexing object to a catalog
- Manually uncatalog object to a catalog
- Rebuilding a catalog
- Retrieving unique values from a catalog
- Minimal code for creating a new catalog
- Register a new catalog via portal_setup
- Map an catalog for an new type
- Additional info
Why ZCatalogs?
Plone is built on the CMF, which provides the catalog machinery (the Catalog Tool) that Plone uses for searching. If you want to perform advanced searches, AdvancedQuery, which has been included with Plone since the 3.0 release, is what you're looking for. See Boolean queries (AdvancedQuery) for a brief introduction. Searching the catalog returns 'brains' rather than the content objects themselves, which is good for performance for two reasons: first, results are loaded 'just in time' as your code requests each result, and second, retrieving a catalog brain doesn't wake up the objects themselves, avoiding a huge performance hit.
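As a short illustration (the index names and values below are just examples), a typical catalog query and brain iteration looks like this:

from Products.CMFCore.utils import getToolByName

# Query the catalog; keyword arguments are index names (example values)
portal_catalog = getToolByName(context, 'portal_catalog')
brains = portal_catalog(portal_type='Document', review_state='published')

for brain in brains:
    # Metadata columns are available directly on the brain (no object wake-up)
    print(brain.Title, brain.getURL())
    # Only call brain.getObject() when you really need the full object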
To see the ZCatalogs in action, fire up your favourite browser and open the ZMI. You'll see an object in the root of your Plone site named portal_catalog. This is the Catalog Tool, a Plone tool (like the Membership Tool or the Quickinstaller Tool) based on ZCatalogs. Remember that, as was said earlier, indexes are meant to be searched on, and metadata is used to retrieve certain attributes from the content object without waking it up.
Back in the management view of the Catalog Tool, if you click the Indexes or the Metadata tab you'll see the full list of currently available indexes and metadata fields, respectively, their types and more. There you can also add and remove indexes and metadata fields. If you're working in a test environment, you can use this management view to play with the catalog, but beware that indexes and metadata are usually added through GenericSetup and not through the ZMI.

object.reindexObject() is defined in CMFCatalogAware and will update the object's data in portal_catalog.
If your code uses additional catalogs, you need to manually update cataloged values after the object has been modified.
Example:
# Update email_catalog which maintains loggable email addresses
email_catalog = self.portal.email_catalog
email_catalog.reindexObject(myuserobject)

To retrieve all unique values stored in a given index, for example an index named 'price':

portal_catalog = self.portal.portal_catalog
portal_catalog.Indexes['price'].uniqueValues()

The result would be a listing of all the prices stored in the 'price' index.
Register a new catalog via portal_setup

In toolset.xml add these lines:

<?xml version="1.0"?>
<tool-setup>
  <required tool_
</tool-setup>

archetype_tool catalog map

archetype_tool maintains a map between content types and the catalogs that are interested in them. When an object is modified through Archetypes mechanisms, Archetypes posts a change notification to all catalogs enlisted.
See Catalogs tab on archetype_tool in Zope Management Interface.
Map a catalog for a new type

code

at = getToolByName(context, 'archetype_tool')
at.setCatalogsByType('MetaType', ['portal_catalog', 'mycatalog'])
Last updated: 21 Oct 2020
The component system
Components are packages of template, style, behaviour and documentation. Components live in your application unless needed by multiple applications, then they are shared using the govuk_publishing_components gem.
Component guides
Components in applications are documented in component guides using the govuk_publishing_components gem. It mounts a component guide at the path /component-guide.
Find components in these guides:
- govuk_publishing_components
For example, a lead paragraph component would be included in a template like this:
<%= render 'components/lead-paragraph', text: "A description is one or two leading sentences" %> | https://docs.publishing.service.gov.uk/manual/components.html | 2020-11-23T22:25:56 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.publishing.service.gov.uk |
Configure linting
This explains how to configure linting for a GOV.UK application. It is written with the expectation that you are configuring a conventional GOV.UK Rails application although the approaches can be applied to non-Rails applications by minor adjustments to the steps.
Linting Ruby
We use rubocop-govuk to lint Ruby projects.
This is installed by adding gem "rubocop" to your Gemfile and then creating a .rubocop.yml file in the root of your project:

inherit_gem:
  rubocop-govuk:
    - config/default.yml
    - config/rails.yml
    - config/rake.yml
    - config/rspec.yml

inherit_mode:
  merge:
    - Exclude
After running bundle install you can test the linting by running bundle exec rubocop.
Linting JavaScript and SCSS
We use StandardJS for JavaScript linting and Stylelint for SCSS linting, using the stylelint-config-gds configuration.
To enable these in a Rails application you will first need to install Yarn. Then you should create a package.json file in your project root. You can use the following template:

{
  "name": "My application",
  "description": "A brief description of the application's purpose",
  "private": true,
  "author": "Government Digital Service",
  "license": "MIT",
  "scripts": {
    "lint": "yarn run lint:js && yarn run lint:scss",
    "lint:js": "standard 'app/assets/javascripts/**/*.js' 'spec/javascripts/**/*.js'",
    "lint:scss": "stylelint app/assets/stylesheets/"
  },
  "stylelint": {
    "extends": "stylelint-config-gds/scss"
  }
}
The dependencies can then be installed:
yarn add --dev standard stylelint stylelint-config-gds
You can now test the linting by running yarn run lint.

To finish up you should add node_modules and yarn-error.log to your .gitignore file.
Configuring Rails
To configure this linting in Rails you should create a rake task for this in lib/tasks/lint.rake:

desc "Lint files"
task "lint" do
  sh "bundle exec rubocop"
  sh "yarn run lint" # lint JS and SCSS
end
You should then configure the default rake task for the application to include linting. For example, to run linting and RSpec as the default task add the following code to your Rakefile:

# undo any existing default tasks added by dependencies so we have control
Rake::Task[:default].clear if Rake::Task.task_defined?(:default)

task default: %i[lint spec]
You can confirm this works by running bundle exec rake and seeing your linting run followed by specs.
General
What data is available in CellBase?
How can I search for data?
What's a "client"? What's the difference between "clients", the RESTful API, and the command line? Which one should I use?
Do I need to specify which database to search? Do I need my own database?
I want a list of variants associated with my gene of interest. How can I do that?
How can I visualise the data?
Can I use any visual tool?
Yes, you can use Genome Maps to browse variants.
None of these questions helped me. Help!
PyCellbase
How can I use PyCellbase? How can I install?
Does pycellbase work on Python 2?
PyCellBase requires Python 3.x, although most of the code is fully compatible with Python 2.7.
I'm getting an error message. What do I do?
datacell
This package gives a simple embedded dataflow language based around the notion of “cells”, similar to the concept of a cell in a Spreadsheet.
A simple example:
Examples:
Defines a new cell name, whose value will be computed by body .... If the body uses any other cells, then the value of name will be recomputed if the value of those cells changes.
A cell's value is computed on demand, which is to say that body is only evaluated when the value of name is needed. The value of a cell is almost memoized, meaning body is only evaluated if the current value is not known or a cell used by the body has changed.
Change the value in cell, which must be an identifier defined by define-cell. The rules for when body is evaluated are the same as for define-cell.
Containers CI/CD Plugin
Overview
InsightVM features a container assessment plugin that you can utilize via a Continuous Integration, Continuous Delivery (CI/CD) tool. Use this plugin to analyze a container image for vulnerability assessment on the Insight platform.
The plugin has the following capabilities:
- Configure custom rules for compliant container images
- Trigger build actions based on compliance to your rules
- Generate an assessment report
Container assessment plugin results are available both through your CI/CD tool and the Builds tab of InsightVM’s “Containers” screen.
Whitelist Platform Traffic
The plugin uses the following hostnames to communicate with the Insight platform:
In order for the plugin to transmit data, you must configure your network to allow traffic to the corresponding hostname of your designated InsightVM region.
NOTE
The region you chose for your plugin MUST match the region that has been selected previously in InsightVM. The plugin cannot communicate through a different region.
Install and Configure the Jenkins Plugin
Complete the following steps to configure your Jenkins plugin:
Generate the Rapid7 API Key
To use the Jenkins plugin, you need the Rapid7 API key to access the Rapid7 platform.
In order to access the Rapid7 platform, you will need a Rapid7 Insight platform account, which is different from your InsightVM Rapid7 Security Console account.
Here's how to get a Rapid7 Insight platform account:
- If you are an InsightIDR, InsightAppSec, or InsightOps customer, you already have a Rapid7 Insight platform account. You can link your existing InsightVM Security Console to the Rapid7 Insight platform account you created with your other Rapid7 product by following the instructions here:
- If you are already using the cloud capabilities of InsightVM and don’t have a Rapid7 Insight platform account, contact the Rapid7 Technical Support team to request one.
Follow these steps to generate the Rapid7 API key:
- Go to
- Log in to your Rapid7 Insight platform account.
- Go to API Keys.
- Under Organization Key, click on New Organization Key.
- In the “Organization” dropdown menu, select your InsightVM organization name.
- Enter a name for your key in the field.
- Click Generate.
- Copy and save the provided API Key.
NOTE
This is the only time you will be able to see the API key, so store it in a safe place. If you misplace your API key, you can always generate a new one.
- Click Done.
- Your new API key will display in the “Organization Key” table.
Install the Jenkins Plugin
There are two ways to install the Jenkins plugin. Both ways require Jenkins administrative privileges. Jenkins version 2.89.3 is the minimum supported version.
NOTE
For the latest version information and dependency requirements, see.
Install the Jenkins Plugin Through the Jenkins Update Center
We recommend this method to install the Jenkins plugin, because it’s the simplest and most common way to install plugins. You must be a Jenkins administrator to navigate through this path:
Manage Jenkins > Manage Plugins
- In the "Filter" box, search for "InsightVM."
- Under the Under the Available tab, select the checkbox for the InsightVM Container Image Scanner.
- Click the desired install button.
Install the Jenkins Plugin Manually
Follow these steps for manual installation:
- Download the plugin from Jenkins website. Verify your download with SHA1.
- Log into Jenkins as an admin user.
- Click Manage Jenkins in your navigation menu.
- On the “Manage Jenkins” page, click Manage Plugins.
- Click the Advanced tab.
- Under the “Upload Plugin” section, click Choose file to browse to the Rapid7 Jenkins plugin.
- Click Upload.
- Select the “Restart Jenkins” option.
Configure the Jenkins Plugin

Use the API key and follow these steps:
- After Jenkins restarts, return to the “Manage Jenkins” page.
- Click Configure System.
- Scroll to the “Rapid7 InsightVM Container Assessment” section.
- In the “InsightVM Region” field, select the region that InsightVM uses to access the platform.
- In the “Insight Platform API Key” field, click Add. In the dropdown menu, select “Jenkins” to configure the Insight platform API key that you generated earlier.
- In the window that appears, complete these fields:
- In the “Domain” field, select "Global credentials (unrestricted)."
- In the “Kind” field, select "Secret text."
- In the "Scope" field, select “Global (Jenkins, nodes, items, all child items, etc).”
- In the “Secret” field, enter your API key.
- Leave the “ID” field blank.
- Enter a description for your reference.
- Click Add.
- Select your newly configured credential from the dropdown menu.
NOTE
Click Test Connection to verify that the region and token are valid.
- Click Save to complete your plugin configuration.
Job Setup
You must configure your build jobs to use the plugin after installation. Complete the steps for your chosen CI/CD tool:
Jenkins Build Job Setup
The plugin supports the following Jenkins build methods:
Freestyle Build
“Freestyle” is the classic job builder. Build steps can be added or removed via the user interface:
- In a new or existing job, click Add build step.
- Select Assess Container Image with Rapid7 InsightVM. This will add a build step with a blank configuration.
- Configure the items under “Options” as desired.
- Click Add under the respective “Rules” section to configure the conditions that will trigger a build action. Two rule types are available:
- “Threshold Rules” - Sets a numeric limit. Values that exceed this limit will trigger the build action.
- “Name Rules” - Matches text using the “contains” operator. A match will trigger the build action.
NOTE
The order of your configured rules will not matter when the job is run. Any individual rule can trigger the build action specified.
- Click Save when finished.
Pipeline Build
The “Pipeline” method involves generating build step scripts from the plugin and adding them to the existing Pipeline script:
- In a new or existing job, browse to the “Pipeline” section.
- Click Pipeline Syntax below the “Script” field.
- Open the dropdown next to “Sample Step” and select "assessContainerImage: Assess Container Image with Rapid7 InsightVM".
- Configure your build options and rules in the same manner as before.
- Click Generate Pipeline Script when finished.
- Add your new step script to the existing Pipeline script.
Pipeline Script
// Assess the image
assessContainerImage failOnPluginError: true,
    imageId: "${my_image.id}",
    thresholdRules: [
        exploitableVulnerabilities(action: 'Mark Unstable', threshold: '1')
    ],
    nameRules: [
        vulnerablePackageName(action: 'Fail', contains: 'nginx')
    ]
- See the following example for correct location and syntax:
Pipeline Script Example
node {
  // Define a variable to use across stages
  def my_image
  stage('Build') {
    // Get Dockerfile and code from Git (or other source control)
    checkout scm
    // Build Docker image and set image reference
    my_image = docker.build("test-app:${env.BUILD_ID}")
    echo "Built image ${my_image.id}"
  }
  stage('Test') {
    // Assess the image
    assessContainerImage failOnPluginError: true,
      imageId: "${my_image.id}",
      thresholdRules: [
        exploitableVulnerabilities(action: 'Mark Unstable', threshold: '1')
      ],
      nameRules: [
        vulnerablePackageName(action: 'Fail', contains: 'nginx')
      ]
  }
  stage('Deploy') {
    echo "Deploying image ${my_image.id} to somewhere..."
    // Push image and deploy a container
  }
}
- Click Save when finished.
NOTE
Threshold rules must be unique per type. For example, you cannot have two rules for Critical Vulnerabilities. Only one instance of the rule will be applied.
View Assessment Results
After running a build, you can view assessment results in the following ways:
Assessment Results in Jenkins
Jenkins will generate an assessment report once the build finishes.
- Open the individual build that ran the plugin.
- Click Rapid7 Assessment on your navigation menu.
Assessment Results in InsightVM
The results of your build jobs are viewable on the Builds tab of the “Containers” screen in InsightVM. See Containers Build Interface to learn more about this feature.
Legacy Imperva integration End-of-Life announcement
As of September 4, 2019, Rapid7 will start the End-of-Life (EOL) process for the legacy Imperva integration for InsightVM. The Imperva integration will no longer be publicly available for download on the Rapid7 website. To pursue integration opportunities between Imperva and Rapid7, please contact your Customer Success Manager (CSM). You can also view our other next-generation firewall (NGFW) and web application firewall (WAF) integration options shown on our Technology Partners page.
This EOL announcement only pertains to future deployments or feature requests.
Customers that currently have the Imperva integration configured will not see changes in functionality, but Rapid7 encourages migration to alternative integration options.
Schedule of Events
FAQs
What do I need to do if I still want to integrate with Imperva?
For more information and other integration options, please reach out to your CSM to receive a quote.
If I use the Imperva integration now, how will this impact me?
Any customers already utilizing this integration will not experience an interruption in service for a period of 12 months from the date of this announcement. You can contact your CSM for options after the last date of support.
Who can I contact if I have more questions that are not addressed in this announcement?
Contact your CSM. | https://docs.rapid7.com/insightvm/legacy-imperva-integration-end-of-life-announcement/ | 2020-11-23T22:00:42 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.rapid7.com |
Mapping is one of the most powerful features in Mason. You may use arrays and object keys to modify your UI dynamically using data from your server. Before mapping, make sure you've set up your Datasources.
Select any element in the Builder you'd like to map data to, and click the Mapping tab in the right sidebar. All mappable attributes will appear there. If there are no attributes, the element currently does not support mapping.
Mapping objects is the simplest use case. For each attribute, select a key path in the object using the dropdown menu to use that key's value as the attribute value.
Arrays, or object collections - an object with identical length keys and values that are objects - may be used as repeaters to duplicate parts of your UI once for each entry in the collection. Select the element you'd like to repeat, open the Mapping tab, and set the Repeat value to the path in your data you'd like to use to repeat. Only arrays and object collections will appear there. Once you have selected a repeater, the contents of the collection will be available for mapping to the descendants of the repeater.
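For example, a datasource response shaped like the invented sample below could be used as a repeater: set Repeat to products, and the name and price of each entry become available for mapping inside the repeated element.

{
  "products": [
    { "name": "Espresso", "price": "$3.00" },
    { "name": "Latte", "price": "$4.50" },
    { "name": "Mocha", "price": "$4.75" }
  ]
}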
If your datasource's response is a collection, you must use the top-level collection to repeat an element in your UI before you can use the datasource contents.
Arrays of primitives may also be used to repeat an element, and that element and all of its children may use the primitive value to map to attributes. | https://docs.trymason.com/development/mapping-data | 2020-11-23T22:07:15 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.trymason.com |
Multiple tiers can be defined; a storage tier is defined with the paths that determine the priority order of its data directories.
- paths is the section of file paths that defines the data directories for this tier of the disk configuration. Typically, list the fastest storage media first.
- 'tiering_strategy': 'TimeWindowStorageStrategy' uses TimeWindowStorageStrategy (TWSS) to determine which tier to move the data to. TWSS is a DSE Tiered Storage strategy that uses TimeWindowCompactionStrategy (TWCS).
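A sketch of what such a configuration might look like in dse.yaml is shown below; the strategy name and directory paths are examples, and the exact option layout should be verified against the dse.yaml reference for your DSE version.

# Hypothetical layout (verify against the dse.yaml reference)
tiered_storage_options:
    strategy1:
        tiers:
            - paths:            # fastest storage media first
                - /mnt/ssd1/data
                - /mnt/ssd2/data
            - paths:
                - /mnt/hdd1/data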
Advanced configuration
- Using an external PostgreSQL
- Using an external Redis
- Using an external Mattermost
- Using an external Gitaly
- Using an external object storage
- Using your own NGINX Ingress Controller
- After install, managing Persistent Volumes
- Using Red Hat UBI-based images
- Making use of GitLab Geo | https://docs.gitlab.com/12.10/charts/advanced/ | 2020-11-23T22:27:11 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.gitlab.com |
- General troubleshooting
- What does coordinator mean?
- Where are logs stored when run as a service?
- Run in --debug mode
- OffPeakTimezone
- Why can’t I run more than one instance of Runner?
- macOS troubleshooting
Troubleshoot GitLab Runner
This section can assist when troubleshooting GitLab Runner.
General troubleshooting
The following relate to general Runner troubleshooting.

Where are logs stored when run as a service?

When the GitLab Runner is run as a service on Windows, it logs to the System’s Event Log.

Run in --debug mode

It is possible to run GitLab Runner in debug/verbose mode. From a terminal, run:
gitlab-runner --debug run
I’m seeing x509: certificate signed by unknown authority
Please see the self-signed certificates documentation.

Why can’t I run more than one instance of Runner?

You can, but running multiple instances of Runner against the same config file can cause unexpected and hard-to-debug behavior. In GitLab Runner 12.2, only a single instance of Runner can use a specific config.toml file at one time.
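The OffPeakTimezone item listed in the contents above refers to the autoscaling settings kept in that same config.toml. A sketch of the relevant keys is shown below; the values are examples and the key names should be verified against the runner's advanced configuration documentation for your version.

[[runners]]
  name = "my-autoscale-runner"            # example name
  executor = "docker+machine"
  [runners.machine]
    OffPeakTimezone  = "Europe/Berlin"    # example timezone
    OffPeakIdleCount = 0
    OffPeakIdleTime  = 1200
    OffPeakPeriods   = ["* * 0-8,18-23 * * mon-fri *", "* * * * * sat,sun *"]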
# OddSlingers: Terms of Service

*Effective date: February 16, 2019*

By accessing OddSlingers.com (the "Site") you are indicating your agreement to be bound by these Terms. If you do not agree to be bound by these Terms, disconnect from the Site immediately. The Site is provided by Nycrean Technologies Inc, a company registered in Delaware, USA. If you have any questions, please contact us at:

- 1.0 DESCRIPTION OF THE SERVICE
    + 1.1 OddSlingers (also referred to in these terms as "we" or "us") will provide you with access to the Site. Access to the Site is free. Some other services provided by OddSlingers may not be free. If this is the case, OddSlingers will inform you before you pay for the other services that they are not free and make sure that it is clear what the charges are.
    + 1.2 In these Terms, Service refers to any services provided by us, including the provision of access to the Site.
- 2.0 ACCESS TO OUR SERVICE
    + 2.1 We make no promise that our Service will be uninterrupted or error free.
    + 2.2 Your access to the Service may be occasionally restricted to allow for repairs, maintenance or the introduction of new facilities or services.
    + OddSling play games on the website, but you may observe without registering.
    + 2.5 OddSlingers
- 3.0 LINKS TO OTHER WEB SITES AND SERVICES
- 4.0 CONTENT OF THE SITE
    + 4.1 The content made available in the Site is provided "as is" without warranties of any kind. Because of the nature of the information displayed on the Site, OddSlingers cannot verify whether it is accurate or up to date. If OddSlingers becomes aware that the content is inaccurate or out of date it will endeavour to remove it from the Site or take other appropriate action as soon as is reasonably possible, but it accepts no legal obligation to do so.
    + 4.2 OddSlingers does not warrant that the content on the Site will be free of errors or viruses or that it will not be defamatory or offensive.
    + 4.3 OddSling OddSlingers.
- 5. USE OF PLAY MONEY CHIPS
    + OddSlingers at any time.
    + 5.3 You may not sell your Play money chips to any person, company or entity.
    + 5.4 These terms may be amended in the future such that Play Money may also be purchased using real world currency and/or by successfully completing special offers or promotions. OddSlingers may charge fees for the license to use its Play Money or may award and/or distribute its Play Money at no charge, at OddSlingers's sole discretion.
    + 5.5 Purchases of any virtual currency, including, but not limited to Play Money, sold on OddSlingers, are purchases of a limited, non-transferable, revocable license. The license may be terminated immediately if your account is terminated for any reason, in OddSlingers's sole and absolute discretion, or if OddSlingers discontinues providing and/or dramatically alters the services on the Site.
    + 5.6 Play Money can be awarded based on successful completion of special offers or promotions at OddSlingers's sole discretion. These awards are dependent on successful completion of said offers, as defined by the company or affiliate providing the offer or promotion.
    + 5.7 You agree that any awards or granting of Play Money, or any virtual currency, on OddSlingers received from completing a special offer or promotion are dependent upon successful completion of said offer or promotion.
    + 5.8 You agree that OddSlingers has the right to manage, control, modify and/or eliminate Play Money as it sees fit at its sole discretion. Further, you agree that OddSlingers will have no liability to you based on OddSlingers.com's exercise of these rights.
    + 5.9 You agree that Play Money transfers between OddSlingers accounts may be blocked by OddSlingers without notice if suspicious activity is detected.
    + 5.10 You agree that any play deemed unethical by OddSlingers in its sole discretion (including, without limitation, team play, soft play, sale of chips between players or chip dumping) may result in the applicable player(s) accounts being terminated and further access to OddSlingers being suspended permanently with no obligation to the applicable players.
- 6. COPYRIGHT / TRADE MARKS
    + 6.1 All rights in the HTML, designs, programming and information posted by us on the Site are owned by OddSlingers.
- 7. LIABILITY
    + 7.1 We will not be liable for your loss of profits, wasted expenditure, corruption or destruction of data or any other loss which does not directly result from something we have done wrong, even if we are aware of the possibility of such damage arising.
    + 7.2 Oddslingers maintains no liability to users (including where we have been negligent) given we are a free site with no obligations of monetary value to users (from purchases or otherwise).
    + 7.3 OddSlingers expressly disclaims any and all warranties, including without limitation, any warranties as to the availability, accuracy, or content of information, products, or services, or any terms, conditions or warranties of quality, merchantability or fitness of the content on the Site for a particular purpose.
- 8. DATA PROTECTION
    + 8.1 We will respect your personal data and will comply with all applicable United States data protection legislation currently in force.
    + 8.2 We will use your personal information, which you provide to us and the records of your visits to the Site to constantly monitor and improve our Service and for marketing purposes in accordance with our Privacy Policy.
- 9. GENERAL
    + United States law. The Courts of the U.S. shall have exclusive jurisdiction over any disputes arising out of these Terms.
    + 9.3 OddSlingers reserves the right to modify these Terms and Conditions at any time by the posting of modified Terms and Conditions on this website.
Introduction
A sequence flow connects two elements in a process and varies according to requirements.
The following flows are supported:
- Conditional flow
- Default flow
- Sequence flow
Sequence flow
In a sequence flow, after the execution of an element in the process, all the outgoing sequences are followed.
The following screenshot shows a sequence flow in a process:
Sample Sequence Flow
Conditional flow
A conditional flow is for executing a process under certain conditions. The behavior is to evaluate the conditions on the outgoing sequence flows. When a condition evaluates to
true, that outgoing sequence flow is selected.
The following screenshot shows a conditional flow in a process:
Sample Conditional Flow
Default flow
The default flow refers to the flow that has to be followed when none of the conditions are met.
The following screenshot shows a conditional flow in a process:
Sample Default Flow
To change the flow:
- Click the flow.
- Click the settings icon.
- Select the required flow.
| https://docs.opsramp.com/solutions/remediation-automation/process-definitions/sequence-flow-reference/ | 2020-11-23T22:13:32 | CC-MAIN-2020-50 | 1606141168074.3 | [array(['https://docsmedia.opsramp.com/screenshots/Automation/example-sequence-flow-diagram.png',
'Sample Sequence Flow'], dtype=object)
array(['https://docsmedia.opsramp.com/screenshots/Automation/example-conditional-flow-diagram.png',
'Sample Conditional Flow'], dtype=object)
array(['https://docsmedia.opsramp.com/screenshots/Automation/sample-default-flow.png',
'Sample Default Flow'], dtype=object)
array(['https://docsmedia.opsramp.com/screenshots/Automation/changing-a-flow.png',
'Changing a Flow'], dtype=object) ] | docs.opsramp.com |
Overview
Gait is a common and early indicator for many neuromuscular diseases and a great means for orthotics' specialists to quantify lower limb related movements. Zeblok's proprietary Smart shoes provides access to 4 pressure zones and 3-axis 3D motion data real-time. You can visualize the data stream online on our secure cloud dashboard as well as download raw data for further analysis. We also provide data analytics results using our Bio-Informatics cloud which perform data crunching and gives a report within seconds. The smart shoes directly connect to your WiFi with simple setup process. With the help of Zeblok smart shoes, a clinician or researcher can simply ask the patient to wear the shoes and start walking while our smart electronics and sensors capture the walking data real-time and keeps a record of the session. The unique part of our technology is that unlike the Gait mats that are confined to a specific location or bulky electronics that go on top of your shoes, we have a seamless product that has the electronics embedded inside the shoes and doesn't require the user to worry about them. So, just wear them like any other pair of shoes and start collecting your data.
Features
Tech Specs | https://docs.zeblok.com/scientific/WIFI-smart-shoes.php | 2020-11-23T22:38:00 | CC-MAIN-2020-50 | 1606141168074.3 | [array(['https://docs.zeblok.com/admin/uploads/Shoes_Product.png', None],
dtype=object)
array(['https://docs.zeblok.com/admin/uploads/Shoes_Features.png',
'Shoes Features'], dtype=object) ] | docs.zeblok.com |
Steps On How To Request For A W-2 From A Former Employer
Changing jobs more often can help one to brand themselves and improve their internal consulting skills. When switching to a new job, there are crucial documents one needs to have such as the W-2. The reason why should have a W-2 is that employers must file the document for every employee. The W-2 provides information about the amount you were earning at the previous job and. This article here, will guide you on how to request a W-2 from a previous employer.
The first step to ask for a W-2 is checking with payroll. You can get a W-2 by simply sending an e-mail or calling the administrator for payroll. It is recommended to make sure the administrator has the appropriate email address. It is your obligation to make sure they have mailed the W-2 just in case you will.
When switching to a new job, there are crucial documents one needs to have such as the W-2. You required to have a W-2 because the very employer needs to file the document for every employee. The W-2 reports the income amount you were earning at your previous job and the withheld taxes of Medicare, state, security, social, and federal. Supplementary to that, the document offers information on further contributions relating to your healthcare and retirement you made in the course of that year. When switching to a new job, it is chiefly important to request a W-2 in time.
In the scenario that, your previous employer is not responding to your e-mails and calls, you should call the IRS. The process can be simplified for the IRS, by giving your identification number and the former company’s employer. The IRS will make an initiative to send a notice to your former employer. This step is critical because you will receive the W-2 document in time. You should put into consideration filing your taxes if you do not receive the W-2 after following these tips. | http://docs-prints.com/2020/10/03/my-most-valuable-advice-12/ | 2020-11-23T22:02:55 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs-prints.com |
CurveExpert Professional 2.7.3 documentation
After performing some analysis, a typical task is to write data, in some form, to disk for later retrieval. CurveExpert Professional supports several ways to extract data and graphs to a file or the clipboard.
If you want to save all of your data, results, graphs, and notes or function pickers.
To save your dataset into a text file, choose File-.
The same graph can be copied to the clipboard by right clicking and selecting Copy. | https://docs.curveexpert.net/curveexpert/pro/html/writingdata.html | 2020-11-23T22:56:34 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.curveexpert.net |
Self-signed certificates or custom Certification Authorities
Introduced in GitLab Runner 0.7.0
GitLab Runner allows you to configure certificates that are used to verify TLS peers when connecting to the GitLab server.
This solves the
x509: certificate signed by unknown authority problem when registering a runner.
Supported options for self-signed certificates
GitLab Runner supports the following options:
Default: GitLab Runner reads the system certificate store and verifies the GitLab server against the certificate authorities (CA) stored in the system.
GitLab Runner reads the PEM certificate (DER format is not supported) from a predefined file:
/etc/gitlab-runner/certs/hostname.crton *nix systems when GitLab Runner is executed as root.
~/.gitlab-runner/certs/hostname.crton *nix systems when GitLab Runner is executed as non-root.
./certs/hostname.crton other systems. If running Runner as a Windows service, this will not work. Use the last option instead.
If your server address is:, create the certificate file at:
/etc/gitlab-runner/certs/my.gitlab.server.com.crt.Note: You may need to concatenate the intermediate and server certificate for the chain to be properly identified..
Git cloning
The runner injects missing certificates to build the CA chain in build containers.
This allows
git clone and
artifacts to work with servers that do not use publicly
trusted certificates.
This approach is secure, but makes the runner a single point of | https://docs.gitlab.com/12.10/runner/configuration/tls-self-signed.html | 2020-11-23T22:41:05 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.gitlab.com |
A Datasource is any URL that provides data to a feature at runtime. In order to be used as a datasource, a URL must:
be publicly accessible using javascript in the browser (have appropriate cross-origin headers)
respond with a JSON payload
respond to a GET request
Datasources are shared across all features in your project, and added using the Project panel under the Datasources tab. This reduces redundancy as multiple features that need to use the same data can share it, and allows events in one feature (say, a successful resource creation) to modify data used in another feature. Give it a name, provide a URL, any query or header parameters, and hit Fetch. You have now added a datasource to your Project.
The response is not stored by Mason, but used during the build process to determine the structure of the expected response and configure mapping rules for your data and UI. The response structure must be consistent.
Datasources are created in your project, but fetched by your features when they mount. This is because you may not have all the dynamic data relevant to the datasource until a specific feature mounts. In order to tell a feature to fetch a datasource when it mounts, check the box next to the datasource name in the Configure section of the builder under the Fetch Datasources header. Ensure the feature that fetches the datasource has the appropriate url parameters and callbacks, if required.
If you are using tokens or unique identifiers in your datasource, you may mark them as private using the key button in the Builder. Any header or query parameters marked as private will not be supplied to your Datasource at runtime, and must be provided by you using a callback (see below). All parameters not marked as private will be supplied to your features at runtime, and will be visible by anyone with access to your application.
You may inject dynamic header or query parameters, like authorization tokens, at runtime by using the
willFetchData callback. Your function will receive the datasource to be fetched as an argument, which you may modify and return. See below for an example.
import React from 'react';import { Canvas } from '@mason-api/react-sdk';class MyFeed extends React.Component {render() {const { search, token, user } = this.props;return <Canvasid="YOUR_COMPONENT_ID"willFetchData={(datasource, featureId) => {if (datasource.id === 'YOUR_DATASOURCE_ID') {return {...datasource,headers: { 'Authorization': token },queries: { 'search': search },};}return datasource;}}/>;}}
Your function will receive two arguments:
datasource, an object with the following structure
{url: '',headers: {'Content-Type': 'application/json'},queries: {foo: 'bar'},name: 'DATASOURCE_NAME',id: 'DATASOURCE_ID'
and
featureId, the 12-byte unique identifier of your feature (which you can find in the Export instructions in the Builder).
You may modify any part of the datasource including the URL. However, URL modifications are most easily accomplished using the
urlParams property. You must return the datasource, if you have no modifications return the datasource unmodified.
As an alternative to providing callbacks using props, particularly if you are not using React, you may use the
Mason.callback function to register your
willSendData callback. Here is an example:
import Mason from '@mason-api/react-sdk';Mason.callback('willSendData', (datasource, featureId) => {if (datasource.id === 'YOUR_DATASOURCE_ID') {return {...datasource,headers: { 'Authorization': token },queries: { 'search': search },};}return datasource;}, 'YOUR_FEATURE_ID');
The third argument to the
callback function is an optional feature id. Even though datasources are shared across all features in a project, fetch events are triggered by feature's mounting (more on this below). If you want Mason to invoke a callback only when a specific feature is fetching a datasource, you may provide its id as the third argument.
In some cases you may want to use a form submission response to update a datasource and trigger a UI update. To accomplish this, use the Success event menu in the Form tab of the Configure section of the Builder. You may merge or replace a datasource with the response from your form submission. You may also trigger a refetch of the entire datasource.
Replace simply overwrites the entire datasource. When merging, the behavior is as follows:
if the Datasource is an object, the response will be shallowly merged
if the Datasource is an array, and the response is not an array, the response will be pushed onto the end of the array
if the Datasource is an array, and the response is an array, the response's entries will be pushed onto the end of the array | https://docs.trymason.com/development/fetching-data | 2020-11-23T22:22:49 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.trymason.com |
The following server roles and features are installed by the
InstallRolesAndFeatures.ps1 PowerShell script.
Expand Web Server (IIS) > Web Server > Common HTTP Features. The list contains the following items:
- Default Document
- HTTP Errors
- Static Content
Expand Web Server (IIS) > Web Server > Security. The list contains the following items:
- Request Filtering
- URL Authorization
- Windows Authentication
Expand Web Server (IIS) > Web Server > Application Development. The list contains the following items:
- ASP.NET45
- .NET Extensibility 4.5
- Application Initialization
- ISAPI Extensions
- ISAPI Filter
- WebSockets
Note:
Windows Server 2019 comes with ASP.NET47 by default. The .NET Extensibility 4.7 feature is also selected by default.
- Expand Web Server (IIS) > Web Server > Management Tools. The list displays the following items:
- IIS Management Console
After completing the installation, open a browser and go to. If you do not know your computer name, open Command Prompt and type
hostname, or open System and look for Computer Name.
The result of opening the address should be the default page of IIS.
If the page is not displayed as in the image above, you need to ensure that IIS server is running and port 80 is open. By default, IIS listens for connections on port 80, for any IP bound to the server.
That happens even if there are no host headers or bindings set for a specific IP. That can prevent you from running multiple web servers on port 80.
To set IIS to listen on specific IPs, follow the instructions below.
Note:
Minimum Windows Server version required: 2012/IIS 8.
- Open an Elevated Command Prompt and type
netsh.
- Type
http.
- Enter the
show iplistencommand to display the current list of IPs to listen to. If no IPs are displayed, IIS listens to all IPs by default.
- Use the
add iplisten ipaddress=0.0.0.0command to set IIS to listen to a specific IP. Make sure 0.0.0.0 is replaced by the correct IP. Run the command again for any additional addresses.
- Type
exitif you want to exit.
- (Optionally) If you need to delete any IP from this list, use the following command:
delete iplisten ipaddress=0.0.0.0.
- Restart IIS to apply the changes, using the
iisresetcommand.
File Extensions
Access to the following file extensions is required by Orchestrator:
Imporatant!
When using
Orchestratortype Storage Buckets, any file extensions may be used beyond the enumerated list above.
Updated 3 months ago | https://docs.uipath.com/installation-and-upgrade/docs/server-roles-and-features | 2020-11-23T21:58:58 | CC-MAIN-2020-50 | 1606141168074.3 | [array(['https://files.readme.io/5003929-image_38.png', 'image_38.png'],
dtype=object)
array(['https://files.readme.io/5003929-image_38.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/001a2fa-image_39.png', 'image_39.png'],
dtype=object)
array(['https://files.readme.io/001a2fa-image_39.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/d489157-image_40.png', 'image_40.png'],
dtype=object)
array(['https://files.readme.io/d489157-image_40.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/7eb0fa7-image_41.png', 'image_41.png'],
dtype=object)
array(['https://files.readme.io/7eb0fa7-image_41.png',
'Click to close...'], dtype=object) ] | docs.uipath.com |
Verification Planning and Requirements¶
A key activity of any verification effort is to capture a Verification Plan (aka Test Plan or just testplan). This document is not that. The purpose of a verification plan is to identify what features need to be verified; the success criteria of the feature and the coverage metrics for testing the feature. At the time of this writing the verification plan for the CV32E40P is under active development. It is located in the core-v-verif GitHub repository at.
The Verification Strategy (this document) exists to support the Verification Plan. A trivial example illustrates this point: the CV32E40P verification plan requires that all RV32I instructions be generated and their results checked. Obviously, the testbench needs to have these capabilities and its the purpose of the Verification Strategy document to explain how that is done. Further, an AC will be required to implement the testbench code that supports generation of RV32I instructions and checking of results, and this document defines how testbench and testcase development is done for the OpenHW projects.
The subsections below summarize the specific features of the CV32E40* verification environment as identified in the Verification Plan. It will be updated as the verification plan is completed.
Base Instruction Set¶
- Capability to generate all legal RV32I instructions using all operands.
- Ability to check status of GPRs after instruction execution.
- Ability to check side-effects, most notably underflow/overflow after instruction execution. | https://core-v-docs-verif-strat.readthedocs.io/en/latest/planning_requirements.html | 2020-11-23T22:44:17 | CC-MAIN-2020-50 | 1606141168074.3 | [] | core-v-docs-verif-strat.readthedocs.io |
Table of Contents
Product Index. | http://docs.daz3d.com/doku.php/public/read_me/index/32961/start | 2020-08-03T13:42:50 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.daz3d.com |
Table of Contents
Product Index with “cross” or “pentagram” options) and fully rigged, circle style sunglasses.. | http://docs.daz3d.com/doku.php/public/read_me/index/63615/start | 2020-08-03T12:56:38 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.daz3d.com |
5 projects in different stage of development or production. Every project will have 7 minutes to pitch including the trailer and it will be followed by a short Q&A.
SERBIAN DOCS IN PROGRESS 2019
Boogie and Demons / Bugi i demoni
Logline Documentary about Vladimir Milivojević Boogie, a famous street photographer, (), specifically about his work in Belgrade, a series of photographs created by a Wet plate-Collodion process, named “Demons”.
Synopsis Familiar with the most unusual deviations of human nature, Boogie is moving away from the street photography and in his Belgrade studio a work on a series of portraits in the technique of the collodion process, called Demons. This procedure is exclusively related to Belgrade. Unlike other cities where dark content is found in everyday life of people from margins in Belgrade, this content is found among acquaintances and friends, and due to long exposures characteristic of the Collodion wet process. With this exotic technique of the long overdue photographic process, Boogie manages to record something deeply, demonically and in the nature of the “ordinary” people portrayed. Boogie speaks of this process as an alchemical process that he says is able to record something “on the outside”.
Director’s Biography Ivan Šijak is professor at Faculty of Dramatic Arts in Belgrade. At Belgrade Rectorate of Arts leading professor, course on Digital image. Selected filmography: Director/ Cinematographer/VFX Supervisor: Blue Gypsy Emir Kusturica, Rasta Internacional. Promete moi Emir Kusturica, Rasta Internacional. 12 Nikita Mihalkov, 3T. Burnt by the Sun 2,3 Nikita Mihalkov, 3T. Director, Opera The Man Who Mistook his Wife for a Hat, BITEF 2001, music: Michael Nyman, New Moment.
Director Ivan Šijak
Producer Milan Kilibarda
Co-producer Jasmina Petković
Expected duration 80’
Shooting format Digital
Premiere end of 2019
Language English
Budget 107 000 EUR
Financing in place 47 000 EUR
Production country Serbia
Production company Mikijeva radionica
Partners attached Film Center Serbia
Contact Milan Kilibarda/Producer, + 381 65 322 3222, [email protected]
Christina / Kristina
Logline A transgender sex worker lives with a cat and collects antiques. Her life becomes stressful after the arrival of an inspector who is chasing her and a stranger she falls in love with, but who disappears.
Synopsis After many years of serious battle, Christina`s sex has finally been adjusted to her gender identity. Today, she lives with her cat in a luxury home and invests her money into antiques and paintings. This film arises from the need for radical reassessment of the term love, similar to Plato’s reassessment in Symposium. Christina is an ideal protagonist because she “sells love” and buys antiques which she loves. She adores her family which consists of her transgender friends. She longs for true love from a man and searches for God’s unconditional love. And just when she thinks that she will finally find true love, her lover suddenly disappears. After an exciting journey, she finds him in a rural area, surrounded by family. Seeing him, she decides that it is best for her to back down, so she could protect him from herself.
Director’s Biography Nikola Spasić is a film director, editor and producer based in Novi Sad. In January 2017, his debut documentary film Why Dragan Gathered his Band premiered at the MiradasDoc Festival in Spain and has been shown at more than 40 festivals around the world. At one of the biggest Serbian festivals, Cinema City, it won the best film award. So far, the film has been shown on various TV stations including Al Jazeera Balkans, RTRS, HRT, RTS, etc.
Director Nikola Spasić
Producer Nikola Spasić, Milanka Gvoić
Co-producer David Evans
Expected duration 52’; 80’
Shooting format Digital
Expected delivery February 2021
Language Serbian
Budget 309.500 €
Financing in place 15.500 €
Production country Serbia, UK
Production company Rezon, Shoot from the Hip
Partners attached Serbian Office for Human and Minority Rights
Contact Nikola Spasić, Director/Producer, +381 62 199 17 05, [email protected]
Do Not Come Home / Nemoj da se vraćaš
Logline University-educated truck drivers from eastern Europe roam the United States questioning their life choices. Was it worth leaving everything behind, just for the money?
Synopsis The characters of our film are neither particularly old nor quite young. They all have university degrees, but they do not perform the jobs they are qualified to perform. Despite loving their homeland, they don’t live there anymore. They are driving trucks across the USA. Driving through the wild and picturesque scenery across America, our characters communicate with their loved ones in the homeland. The truck cabin is the place where all their moments of happiness and sorrow are concentrated. Do not come home is a multilayered psychological, philosophical, political and social story interwined with intimacy and personal struggles of those who left and those who stayed. “Do not come home will take audience on an emotional journey colored by stunning sceneries of North America flavored by a dose of Balkan sense of humor.
Directors’s Biographies
Miloš Ljubomirović finished Master studies from Faculty of Dramatic Arts (Film and TV production program). Won 7 national and international awards for his short films, 4 of them with his master’s thesis that he both produced and directed – short film Shadows. Faculty of Dramatic Arts awarded that film with Dejan Kosanović Award for best film in Master studies. IDFAcademy and Sarajevo Talent Campus alumni.
Danilo Lazović (1985) is a producer, director, media and culture theorist. He has participated and initiated a variety of projects in various social and media fields. Graduated from the Academy of Arts and has a BA in – Production in Culture and Media, and Masters degree from the Faculty of Dramatic Arts.
Directors Miloš Ljubomirović, Danilo Lazović
Producer Miloš Ljubomirović, Danilo Lazović, Ivica Vidanović
Co-producer Jure Pavlović, Dagmar Sedláčková
Expected duration 90’, TV Hour
Shooting format Digital, 4K
Expected Delivery November 2020
Language Serbian, English
Budget € 312,975
Financing in place € 91,374
Production country Serbia, Croatia, Czech Republic
Production company Cinnamon Films, Serbia
Co-production companies Sekvenca, Croatia, MasterFilm, Czech Republic
Partners attached Film Center Serbia
Contact Miloš Ljubomirović, Director/Producer, +381 64 6150 953, [email protected]
Dream Collector // Dreams of Vladan Radovanović / Sakupljač snova // Snovi Vladana Radovanovića
Logline This is a documentary fairy tale about the two creators – Dreamer and Dream Transcriber – inhabiting the same physical membrane. We immerse into their stunning collection of dreams assembled throughout almost 70 years.
Synopsis Vladan Radovanović is an elderly man living in a small, yet scenic apartment filled with books, instruments and various art objects, together with his wife and a parrot. He sleeps, and after waking up, he often writes down the content of his previous dream. Later on he draws it, too, trying to capture it as accurately as possible. The apartment becomes the portal to Vladan’s dream world; the present melts into one with his memories, thoughts and creations. Gradually, we discover a unique, versatile artist: composer, painter, writer, theorist, art-syntetist, pioneer in electronic music and in several other fields of contemporary art. We follow Radovanović’s key life episodes, recollected from the “night diaries” that he has been keeping from 1953 onwards. At the age of 86 he still dreams, and the need to create is as strong as ever.
Director’s Biography Sonja Đekić was born in 1980 in Belgrade, where she works and lives today. She holds a MA degree in Film and TV directing from the Faculty of Drama Arts. Sonja has been passionately working on her documentary projects in many capacities (Joe Goes to Serbia, 2008, KOSMA, 2013, Speleonaut/Under The Stone Sky, 2018), while involved in several film festivals (Martovski, Grafest, Magnificent 7). Sonja recently founded the production company KEVA.
Director Sonja Đekić
Producer Sonja Đekić
Expected duration 70’
Shooting format 4K
Premiere winter 2020
Language Serbian with English subtitles
Budget 203,000 €
Financing in place 21,000 €
Production country Serbia
Production company KEVA
Contact Sonja Đekić, Director/Producer, +381 63 108 0605, [email protected]
The Box / Kutija
Logline Political ready-made comedy with real consequences exploring the basic nature of politics by looking into one of the strangest periods in Serbian.
Synopsis The Box is a documentary film about one of the strangest times in modern Serbian history. After the 45 years of the communist, single party governance in Serbia, in July 1990 political parties gained legal status. Due to the decades-old single-party system, opposition naïvely and comically pioneers its pluralistic beginnings. By using imprecise and subjective memories of protagonists and a bizarre and humorous prism of the 1990. election media campaign landscape, public gatherings, so as interviews with the presidential candidates and party members, specially prepared and conducted for the film, The Box aims to engage the viewer to assess the current political momentum , the ongoing process of democratization and to investigate the basic nature of politics, and question the basic notions of what actually is “political”.
Director Luka Papić
Producer Srđa Vučo
Expected duration 85 ’
Shooting format HD
Expected Delivery Early 2020
Language Serbian
Budget 113.940,00 EUR
Financing in place 22.000,00 EUR (19%)
Production country Serbia
Production company Cinnamon Films
Contact Luka Papić / Director, +381 64 400 7054, [email protected] | https://beldocs.rs/serbian-docs-in-progress/ | 2020-08-03T11:38:31 | CC-MAIN-2020-34 | 1596439735810.18 | [] | beldocs.rs |
Run Functional Tests task.
This task is deprecated in Azure Pipelines and TFS 2018 and later. Use version 2.x or higher of the Visual Studio Test task together with jobs to run unit and functional tests on the universal agent.
For more details, see Testing with unified agents and jobs.
TFS 2017 and earlier
Use this task to.
YAML snippet
# Run functional tests # Deprecated: This task and it’s companion task (Visual Studio Test Agent Deployment) are deprecated. Use the 'Visual Studio Test' task instead. The VSTest task can run unit as well as functional tests. Run tests on one or more agents using the multi-agent job setting. Use the 'Visual Studio Test Platform' task to run tests without needing Visual Studio on the agent. VSTest task also brings new capabilities such as automatically rerunning failed tests. - task: RunVisualStudioTestsusingTestAgent@1 inputs: testMachineGroup: dropLocation: #testSelection: 'testAssembly' # Options: testAssembly, testPlan #testPlan: # Required when testSelection == TestPlan #testSuite: # Required when testSelection == TestPlan #testConfiguration: # Required when testSelection == TestPlan #sourcefilters: '**\*test*.dll' # Required when testSelection == TestAssembly #testFilterCriteria: # Optional #runSettingsFile: # Optional #overrideRunParams: # Optional #codeCoverageEnabled: false # Optional #customSlicingEnabled: false # Optional #testRunTitle: # Optional #platform: # Optional #configuration: # Optional #testConfigurations: # Optional #autMachineGroup: # Optional
Arguments
The task supports a maximum of 32 machines/agents. Azure Pipelines
Build agents
- Hosted and on-premises agents.
- The build agent must be able to communicate with all test machines. If the test machines are on-premises behind a firewall, the hosted build agents Build-Deploy-Test (BDT) tasks are supported in both build and release pipelines.
Machine group configuration
- Only Windows machines are supported when using BDT tasks inside a Machine Group. Using Linux, mac
- Run continuous tests with your builds
- Testing in Continuous Integration and Continuous Deployment Workflows
Related tasks
- Deploy Azure Resource Group
- Azure File Copy
- Windows Machine File Copy
- PowerShell on Target Machines
- Visual Studio Test Agent Deployment
Open source
This task is open source on GitHub. Feedback and contributions are welcome. Customize Code Coverage Analysis.
::: moniker-end
Help and support
- See our troubleshooting page
- Get advice on Stack Overflow, and get support via our Support page | https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/test/run-functional-tests?view=azure-devops | 2020-08-03T12:59:52 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.microsoft.com |
Failure Modes in Machine Learning
November 2019
Introduction & Background.
Intentional failures wherein the failure is caused by an active adversary attempting to subvert the system to attain her goals – either to misclassify the result, infer private training data, or to steal the underlying algorithm.
Unintentional failures wherein the failure is because an ML system produces a formally correct but completely unsafe outcome.
We would like to point out that there are other taxonomies and frameworks that individually highlight intentional failure modes[1],[2] and unintentional failure modes[3],[4]. Our classification brings the two separate failure modes together in one place and addresses the following needs:
The need to equip software developers, security incident responders, lawyers, and policy makers with a common vernacular to talk about this problem. After developing the initial version of the taxonomy last year, we worked with security and ML teams across Microsoft, 23 external partners, standards organization, and governments to understand how stakeholders would use our framework. Based on this usability study and stakeholder feedback, we iterated on the framework.
Results: When presented with an ML failure mode, we frequently observed that software developers and lawyers mentally mapped the ML failure modes to traditional software attacks like data exfiltration. So, throughout the paper, we attempt to highlight how machine learning failure modes are meaningfully different from traditional software failures from a technology and policy perspective.
The need for a common platform for engineers to build on top of and to integrate into their existing software development and security practices. Broadly, we wanted the taxonomy to be more than an educational tool – we want it to effectuate tangible engineering outcomes.
Results: Using this taxonomy as a lens, Microsoft modified its Security Development Lifecycle process for its entire organization. Specifically, data scientists and security engineers at Microsoft now share the common language of this taxonomy, allowing them to more effectively threat model their ML systems before deploying to production; Security Incident Responders also have a bug bar to triage these net-new threats specific to ML, the standard process for vulnerabilities triage and response used by the Microsoft Security Response Center and all Microsoft product teams.
The need for a common vocabulary to describe these attacks amongst policymakers and lawyers. We believe that this for describing different ML failure modes and analysis of how their harms might be regulated is a meaningful first step towards informed policy.
Results: This taxonomy is written for a wide interdisciplinary audience – so, policymakers who are looking at the issues from a general ML/AI perspective, as well as specific domains such as misinformation/healthcare should find the failure mode catalogue useful. We also highlight any applicable legal interventions to address the failure modes.
See also Microsoft's Threat Modeling AI/ML Systems and Dependencies and SDL Bug Bar Pivots for Machine Learning Vulnerabilities.
How to use this document
At the outset, we acknowledge that this is a living document which will evolve over time with the threat landscape. We also do not prescribe technological mitigations to these failure modes here, as defenses are scenario-specific and tie in with the threat model and system architecture under consideration. Options presented for threat mitigation are based on current research with the expectation that those defenses will evolve over time as well..
Document Structure
In both the Intentional Failure Modes and Unintentional Failure Modes sections, we provide a brief definition of the attack, and an illustrative example from literature.
In the Intentional Failure Modes section, we provide the additional fields:
What does the attack attempt to compromise in the ML system – Confidentiality, Integrity or Availability? We define Confidentiality as assuring that the components of the ML system (data, algorithm, model) are accessible only by authorized parties; Integrity is defined as assuring that the ML system can be modified only by authorized parties; Availability is defined as an assurance that the ML system is accessible to authorized parties. Together, Confidentiality, Integrity and Availability is called the CIA triad. For each intentional failure mode, we attempt to identify which of the CIA triad is compromised.
How much knowledge is required to mount this attack – blackbox or whitebox? In Blackbox style attacks., the attacker does NOT have direct access to the training data, no knowledge of the ML algorithm used and no access to the source code of the model. The attacker only queries the model and observes the response. In a whitebox style attack the attacker has knowledge of either ML algorithm or access to the model source code.
Commentary on if the attacker is violating traditional technological notion of access/authorization.
Intentionally-Motivated Failures Summary
Unintended Failures Summary
Details on Intentionally-Motivated Failures
Details on Unintended Failures
Acknowledgements
We would like to thank Andrew Marshall, Magnus Nystrom, John Walton, John Lambert, Sharon Xia, Andi Comissoneru, Emre Kiciman, Jugal Parikh, Sharon Gillet, members of Microsoft’s AI and Ethics in Engineering and Research (AETHER) committee’s Security workstream, Amar Ashar, Samuel Klein, Jonathan Zittrain, members of AI Safety Security Working Group at Berkman Klein for providing helpful feedback. We would also like to thank reviewers from 23 external partners, standards organization, and government organizations for shaping the taxonomy.
Bibliography
[1] Li, Guofu, et al. "Security Matters: A Survey on Adversarial Machine Learning." arXiv preprint arXiv:1810.07339 (2018).
[2] Chakraborty, Anirban, et al. "Adversarial attacks and defences: A survey." arXiv preprint arXiv:1810.00069 (2018).
[3] Ortega, Pedro, and Vishal Maini. "Building safe artificial intelligence: specification, robustness, and assurance." DeepMind Safety Research Blog (2018).
[4] Amodei, Dario, et al. "Concrete problems in AI safety." arXiv preprint arXiv:1606.06565 (2016).
[5] Shankar Siva Kumar, Ram, et al. "Law and Adversarial Machine Learning." arXiv preprint arXiv:1810.10731 (2018).
[6] Calo, Ryan, et al. "Is Tricking a Robot Hacking?." University of Washington School of Law Research Paper 2018-05 (2018).
[7] Paschali, Magdalini, et al. "Generalizability vs. Robustness: Adversarial Examples for Medical Imaging." arXiv preprint arXiv:1804.00504 (2018).
[8] Ebrahimi, Javid, Daniel Lowd, and Dejing Dou. "On Adversarial Examples for Character-Level Neural Machine Translation." arXiv preprint arXiv:1806.09030 (2018)
[9] Carlini, Nicholas, and David Wagner. "Audio adversarial examples: Targeted attacks on speech-to-text." arXiv preprint arXiv:1801.01944 (2018).
[10] Jagielski, Matthew, et al. "Manipulating machine learning: Poisoning attacks and countermeasures for regression learning." arXiv preprint arXiv:1804.00308 (2018)
[11] []
[12] Fredrikson M, Jha S, Ristenpart T. 2015. Model inversion attacks that exploit confidence information and basic countermeasures
[13] Shokri R, Stronati M, Song C, Shmatikov V. 2017. Membership inference attacks against machine learning models. In Proc. of the 2017 IEEE Symp. on Security and Privacy (SP), San Jose, CA, 22–24 May 2017, pp. 3–18. New York, NY: IEEE.
[14] Tramèr, Florian, et al. "Stealing Machine Learning Models via Prediction APIs." USENIX Security Symposium. 2016.
[15] Elsayed, Gamaleldin F., Ian Goodfellow, and Jascha Sohl-Dickstein. "Adversarial Reprogramming of Neural Networks." arXiv preprint arXiv:1806.11146 (2018).
[16] Athalye, Anish, and Ilya Sutskever. "Synthesizing robust adversarial examples." arXiv preprint arXiv:1707.07397(2017)
[17] Sharif, Mahmood, et al. "Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition." arXiv preprint arXiv:1801.00349 (2017).
[19] Xiao, Qixue, et al. "Security Risks in Deep Learning Implementations." arXiv preprint arXiv:1711.11008 (2017).
[20] Gu, Tianyu, Brendan Dolan-Gavitt, and Siddharth Garg. "Badnets: Identifying vulnerabilities in the machine learning model supply chain." arXiv preprint arXiv:1708.06733 (2017)
[21] []
[22] []
[23] Amodei, Dario, et al. "Concrete problems in AI safety." arXiv preprint arXiv:1606.06565 (2016).
[24] Leike, Jan, et al. "AI safety gridworlds." arXiv preprint arXiv:1711.09883 (2017).
[25] Gilmer, Justin, et al. "Motivating the rules of the game for adversarial example research." arXiv preprint arXiv:1807.06732 (2018).
[26] Hendrycks, Dan, and Thomas Dietterich. "Benchmarking neural network robustness to common corruptions and perturbations." arXiv preprint arXiv:1903.12261 (2019). | https://docs.microsoft.com/en-us/security/engineering/failure-modes-in-machine-learning | 2020-08-03T13:24:17 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.microsoft.com |
Linino One ;).
Linino One has on-board debug probe and IS READY for debugging. You don’t need to use/buy external debug probe. | https://docs.platformio.org/en/stable/boards/atmelavr/one.html | 2020-08-03T11:37:58 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.platformio.org |
Technique editor
Introduction
First, what is a Technique?
A technique is a description in code form of what the agent has to do on the node. This code is actually composed of a series of Generic method calls. These different Generic method calls are conditional.
What is a Generic method?
A generic method is a description of an elementary state independent of the operating system (ex: a package is installed, a file contains such line, etc…). Generic methods are independent of the operating system (It has to work on any operating system). Generic methods calls are conditioned by condition expressions, which are boolean expression combining basic conditions with classic boolean operators (ex : operating system is Debian, such generic method produced a modification, did not produce any modification, produced an error, etc…)
Technique Editor
Utility (Configuration policy → Techniques), this tool has an easy-to-use interface, which doesn’t require any programming skills but nevertheless allows to create complex Techniques.
Interface.
You can add parameters to a technique to make it reusable. Go to Parameters and add a name and a description.
You can now use it in generic method instead of static value.
You can also add resources to a technique. Go to Resources and Manage resources.
can be found in the reference documentation)
_11<<
The Generic method details are divided into 3 blocks :
Conditions
Conditions allow user to restrict the execution of the method.
Parameters
Parameters are in mono or multi line text format. They can contains variables which will be extended at the time of the execution.
Result conditions
One result condition
Those conditions can be used in another Generic methods conditions. ie, you can execute a command if a previous one failed or was repaired.
Create your first Technique
Now we are going to see how to create a simple technique to configure a ntp server, step by step.
General information
Let’s start from the beginning. Click on the "New Technique" button and start filling in the General information fields (only name is required).
In our case:
Name: Configure NTP
Description: Install, configure and ensure the ntpd is running. Uses a template file to configuration.
Add and configure generic methods
Now, we have to find and add the generic methods which correspond to the actions we want to execute. In our case, we want to add the following methods:
Package present (You can find it in the Package category)
This method only take one parameter, the name of the package to install. So here, fill in the package_name field with the value ntp.
File content conditions defined following the execution of Package install method is Repaired (package_install_ntp_repaired). So here, fill in the Other conditions field in the Conditions panel with the value package_install_ntp_repaired.
Service enabled at boot (You can find it in the Service category)
This method only take one parameter, the name of the service we want to check. Again, here, fill in the service_name field with the value ntp.
You can also use parameters et resources to replace "File content" method by "File from local source with check" :
← Configuration concepts Variables → | https://docs.rudder.io/reference/6.1/usage/technique_editor.html | 2020-08-03T12:48:29 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['_images/technique_editor/1-rudder-technique-editor.png',
'1 rudder technique editor'], dtype=object)
array(['_images/technique_editor/1-rudder-technique-editor-open.png',
'1 rudder technique editor open'], dtype=object)
array(['_images/technique_editor/2-list-techniques.png',
'2 list techniques'], dtype=object)
array(['_images/technique_editor/3-ntp-configuration.png',
'3 ntp configuration'], dtype=object)
array(['_images/technique_editor/technique-editor-parameters.png',
'technique editor parameters'], dtype=object)
array(['_images/technique_editor/technique-parameters-ntp.png',
'technique parameters ntp'], dtype=object)
array(['_images/technique_editor/technique-resources.png',
'technique resources'], dtype=object)
array(['_images/technique_editor/technique-upload-resource.png',
'technique upload resource'], dtype=object)
array(['_images/technique_editor/technique-uploaded-file.png',
'technique uploaded file'], dtype=object)
array(['_images/technique_editor/technique-resource-added.png',
'technique resource added'], dtype=object)
array(['_images/technique_editor/4-list-generics-method.png',
'4 list generics method'], dtype=object)
array(['_images/technique_editor/5-configure-generic-method.png',
'5 configure generic method'], dtype=object)
array(['_images/technique_editor/technique-resource-usage.png',
'technique resource usage'], dtype=object) ] | docs.rudder.io |
Logging
By default, the Release server writes information, such as: warnings, errors, and log messages to your terminal output and to
XL_RELEASE_SERVER_HOME/log/xl-release.log.
In addition, Release writes access logging to
XL_RELEASE_SERVER_HOME/log/access.log.
Changing the logging behavior in Release
You can customize the logging behavior in Release. Example: To write log output to a file or to log output from a specific source.
Release uses Logback as logging technology. The Logback configuration is stored in
XL_RELEASE_SERVER_HOME/conf/logback.xml.
For more information about the
logback.xml file, see Logback documentation. INFO level --> <logger name="com.xebialabs" level="info"/> <!-- Set logging of class HttpClient to DEBUG level --> <logger name="HttpClient" level="debug"/> <!-- Set the logging of all other classes to INFO --> <root level="info"> <!-- Write logging to STDOUT and FILE appenders --> <appender-ref <appender-ref </root> </configuration>
Automatically reload the configuration file upon modification
Logback can be configured to scan for changes in its configuration file and reconfigure itself accordingly.
To enable this feature:
- Go to and open
XL_RELEASE_SERVER_HOME/conf/logback.xmlin a text editor.
- Set the
scanattribute of the
<configuration>element to
true, and optionally, set the
scanPeriodattribute to a period of time.
Note: By default, the configuration file will be scanned every 60 seconds.
Example:
<configuration scan="true" scanPeriod="30 seconds" > ... </configuration>
For more information, see Logback - auto scan.
The audit log
Important: The audit log is not available between versions 7.5.0 - 8.5.0 of Release.
Release keeps an audit log of each human-initiated event on the server, which complements the auditing provided by the release activity logs (which track activity for each release at a more domain-specific level of granularity).
Some of the events that are logged in the audit trail are:
- The system is started or stopped
- An application is imported
- A CI is created, updated, moved, or deleted
- A security role is created, updated, or deleted
- A task_RELEASE_SERVER_HOME/log/audit.log and is rolled over daily.
Enable audit logging
You can enable low-level audit logging by changing the log level of the
audit logger in
XL_RELEASE_SERVER_HOME/conf/logback.xml:
<logger name="audit" level="off" additivity="false">
<appender-ref
</logger>
By default, the log stream is stored in
XL_RELEASE_SERVER_HOME/log/audit.log. You can change this location, the file rolling policy, and so on by changing the configuration of the
AUDIT appender. You can also pipe the log stream to additional sinks (such as syslog) by configuring additional appenders. Refer to the Logback documentation for details.
This is an example of the audit stream in Release pre-7.5.0 with the level of the audit logger set to
info:
2014-11-22 11:24:18.764 [audit.system] system - Started 2014-11-22 11:25:18.125 [audit.repository] admin - Created CIs [Configuration/Custom/Configuration1099418] 2014-11-22 11:25:18.274 [audit.repository] admin - CI [Configuration/Custom/Configuration1099418]: <jenkins.Server <title>My Jenkins</title> <url></url> <username>foo</username> <password>{b64}C7JZetqurQo2B8x2V8qUhg==</password> </jenkins.Server>
This is an example in Release 8.6.0 and later:
2019-03-07 14:23:28.412 [audit.repository] admin - Created CI git.Repository[Configuration/Custom/Configuration0f924b19069545c9a6d14d9bfccdc5ac] 2019-03-07 14:23:28.415 [audit.repository] admin - CI [Configuration/Custom/Configuration0f924b19069545c9a6d14d9bfccdc5ac]: <git.Repository <title>repo</title> <url>repoUrl</url> <username>tamerlan</username> <password>********</password> <domain>placeholder</domain> </git.Repository>
The access log
You can use the access log to view all HTTP requests received by Release. Access logging provides the following details: URL, the time taken to process, username, the IP address where the request came from, and the response status code. You can use this logging data to analyze the usage of Release and to troubleshoot.
The access log is written to
XL_RELEASE_SERVER_HOME/log/access.log. | https://docs.xebialabs.com/v.9.7/release/concept/logging-in-xl-release/ | 2020-08-03T12:48:46 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.xebialabs.com |
DataKeeper attempts to utilize all of the available network bandwidth. If DataKeeper is sharing the available bandwidth with other applications, you may wish to limit the amount of bandwidth DataKeeper is allowed to use. DataKeeper includes a feature called Bandwidth Throttle that will do this. The feature is enabled via a registry setting.
Feedback
Thanks for your feedback.
Post your comment on this topic. | http://docs.us.sios.com/dkse/8.6.3/en/topic/bandwidth-throttle | 2020-08-03T12:53:36 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.us.sios.com |
two kinds of supported transformations:
- coalesce the files so that each file is at least a certain size and there will be a maximum of certain number of files.
- convert CSV files to Parquet files
In Alluxio version 2.3 a maximum of 100 files..file.count.max, 100).option(hive.file.size.min, 2147483648) | https://docs.alluxio.io/os/user/stable/en/core-services/Transformation.html | 2020-08-03T11:44:03 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.alluxio.io |
Overview
Microsoft recommends that you use one of two basic approaches when you upgrade from SharePoint 2007 to SharePoint 2010:
- In-place upgrade. An in-place upgrade is used to upgrade all Microsoft SharePoint sites on the same hardware. See How to Migrate a Bamboo Web Part from SharePoint 2007 to SharePoint 2010 using the In-Place Upgrade Method for details.
- Database attach upgrade. A database attach upgrade enables you to move your content to a new farm or new hardware. See How to Migrate a Bamboo Web Part from SharePoint 2007 to SharePoint 2010 using the Database Attach Upgrade Method for details.
You can also combine these two types of upgrade in hybrid approaches that reduce downtime during an upgrade. For more information about these approaches, see Determine upgrade approach (SharePoint Server 2010) from Microsoft TechNet.
General Migration Restrictions for Bamboo Products
- Regardless of the upgrade method you choose, you MUST change existing SharePoint sites to use the new user SharePoint 2010 experience. If you don’t, some features of your Bamboo products will not work as expected.
- You need to make sure that you are running the latest available Bamboo product release in your SP2007 environment before migrating. Review information specific to the product(s) you are migrating to learn the minimum product release to migrate. See Bamboo product releases that are supported for migration from SharePoint 2007 to SharePoint 2010 for details.
- Bamboo Solutions does not support the migration of Bamboo products deployed on WSSv2/SharePoint 2003 environments to SharePoint 2010 farms.
- Bamboo Solutions does not support the downgrade of SharePoint editions when migrating from SharePoint 2007 to SharePoint 2010. For example, Bamboo products are not supported when migrating from MOSS 2007 to SharePoint Foundation 2010.
Migration Restrictions for Specific Bamboo Products
Before following the instructions for the method you choose, review information specific to the product(s) you are migrating. The In-Place Upgrade method is sometimes easier, but it takes longer. Some Bamboo products do not support the In-Place upgrade method and we recommend a hybrid In-Place Upgrade method as an alternative. | https://docs.bamboosolutions.com/document/before_you_migrate_from_sharepoint_2007_to_sharepoint_2010_read_this/ | 2020-08-03T12:55:15 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.bamboosolutions.com |
The lab has been privileged to work with the following researchers all around the world. Their affiliation might not be up to date.
Rami Albatal (HeyStacks, Ireland)
Avishek Anand ( L3S Research Center, Germany)
Avi Arampatzis (University of Thrace, Greece)
Ioannis Arapakis (Eurecat, Spain)
Jaime Arguello (University of North Carolina at Chapel Hill, USA)
Leif Azzopardi (Univeristy of Strathclyde, UK)
Micheline Beaulieu (University of Sheffield, UK)
Roi Blanco (Amazon, Spain)
Ivan Cantador (Autonomous University of Madrid, Spain)
Lawrence Cavedon (RMIT University, Australia)
Long Chen (University of Glasgow, UK)
Paul Clough (University of Sheffield, UK)
Nicola Ferro (University of Padua, Italy)
Gaihua Fu (University of Leeds, UK)
Fredric C. Gey (University of California, Berkeley, USA)
Cathal Gurrin (Dublin City University, Ireland)
Jacek Gwizdka (University of Texas, USA)
Matthias Hagen (Bauhaus-Universität Weimar, Germany)
Martin Halvey (University of Strathclyde, UK)
Jannica Heinström (Oslo Metropolitan University, Norway)
Frank Hopfgartner (University of Sheffield, UK)
Adam Jatowt (Kyoto University, Japan)
Chris Jones (Cardiff University, UK)
Joemon Jose (University of Glasgow, UK)
Noriko Kando (National Institution of Informatics, Japan)
Makoto P Kato (University of Tsukuba, Japan)
Tsuneaki Kato (The University of Tokyo, Japan)
Hao-Ren Ke (Taiwan Normal University, China)
Julia Kiseleva (Microsoft, USA)
Kazuaki Kishida (Keio University, Japan)
Mounia Lalmas (Yahoo!, UK)
Ray R. Larson (University of California, Berkeley, USA)
Luis A Leiva (Aalto University, Finland)
Dirk Lewandowski (Hamburg University of Applied Sciences, Germany)
Pablo Bermejo Lopez (University of Castilla-La Mancha, Spain)
Mitsunori Matsushita (Kansai University, Japan)
Tetsuya Maeshiro (University of Tsukuba, Japan)
Andres Masegosa (The Norwegian University of Science and Technology, Norway)
Mamiko Matsubayashi (University of Tsukuba, Japan)
Yashar Moshfeghi (University of Glasgow, UK)
Shin-ichi Nakayama (University of Tsukuba, Japan)
Heather L O'Brien (University of British Columbia, Canada)
Antonio Penta (Pompeu Fabra University, Spain)
Vivien Petras (Humboldt University of Berlin, Germany)
Ross Purves (University of Zurich, Switzerland)
Filip Radlinski (Google, UK)
Fuji Ren (University of Tokushima, Japan)
Reede Ren (University of Surry, UK)
Mark Sanderson (RMIT University, Australia)
Tetsuya Sakai (Waseda University, Japan)
Nicu Sebe (University of Trento, Italy)
Chirag Shah (University of Washington, USA)
Milad Shokouhi (Microsoft, USA)
Tetsuya Shirai (University of Tsukuba, Japan)
Vivek K Singh (Rutgers University, USA)
Damiano Spina (RMIT University, Australia)
Benno Stein (Bauhaus-Universität Weimar, Germany)
Paul Thomas (Microsoft Research, Australia)
Johanne R Trippas (University of Melbourne, Australia)
Jana Urban (Google, Switzerland)
Marc van Kreveld (Utrecht University, The Netherlands)
C. J. "Keith" van Rijsbergen (University of Glasgow, UK)
Wim Vanderbauwhede (University of Glasgow, UK)
Robert Villa (University of Edinburgh, UK)
Shuhei Yamamoto (NTT Service Evolution Laboratories, Japan)
Takehiro Yamamoto (Hyogo Prefecture University, Japan)
Xiao Yang (European Bioinformatics Institute, UK)
Masatoshi Yoshikawa (Kyoto University, Japan)
Fajie Yuan (Tencent, China) | https://docs.joholab.com/lab/v/en/collaboration | 2020-08-03T12:13:41 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.joholab.com |
Wizards (and we're not talking Harry Potter)
I | https://docs.microsoft.com/en-us/archive/blogs/infopath/wizards-and-were-not-talking-harry-potter | 2020-08-03T12:59:28 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.microsoft.com |
[−][src]Crate loom
Loom is a tool for testing concurrent programs.
Background
Testing concurrent programs is challenging. The Rust memory model is relaxed and permits a large number of possible behaviors. Loom provides a way to deterministically explore the various possible execution permutations.
Consider a simple example:
use std::sync::Arc; use std::sync::atomic::AtomicUsize; use std::sync::atomic::Ordering::SeqCst; use std::thread; #[test] fn test_concurrent_logic() { let v1 = Arc::new(AtomicUsize::new(0)); let v2 = v1.clone(); thread::spawn(move || { v1.store(1, SeqCst); }); assert_eq!(0, v2.load(SeqCst)); }
This program is obviously incorrect, yet the test can easily pass.
The problem is compounded when Rust's relaxed memory model is considered.
Historically, the strategy for testing concurrent code has been to run tests in loops and hope that an execution fails. Doing this is not reliable, and, in the event an iteration should fail, debugging the cause is exceedingly difficult.
Solution
Loom fixes the problem by controlling the scheduling of each thread. Loom also simulates the Rust memory model such that it attempts all possible valid behaviors. For example, an atomic load may return an old value instead of the newest.
The above example can be rewritten as:
use loom::sync::atomic::AtomicUsize; use loom::thread; use std::sync::Arc; use std::sync::atomic::Ordering::SeqCst; #[test] fn test_concurrent_logic() { loom::model(|| { let v1 = Arc::new(AtomicUsize::new(0)); let v2 = v1.clone(); thread::spawn(move || { v1.store(1, SeqCst); }); assert_eq!(0, v2.load(SeqCst)); }); }
Loom will run the closure many times, each time with a different thread scheduling The test is guaranteed to fail.
Writing tests
Test cases using loom must be fully determinstic. All sources of non-determism must be via loom types. This allows loom to validate the test case and control the scheduling.
Tests must use the loom synchronization types, such as
Atomic*,
Mutex,
RwLock,
Condvar,
thread::spawn, etc. When writing a concurrent program,
the
std types should be used when running the program and the
loom types
when running the test.
One way to do this is via cfg flags.
It is important to not include other sources of non-determism in tests, such as random number generators or system calls.
Yielding
Some concurrent algorithms assume a fair scheduler. For example, a spin lock assumes that, at some point, another thread will make enough progress for the lock to become available.
This presents a challenge for loom as the scheduler is not fair. In such
cases, loops must include calls to
yield_now. This tells loom that another
thread needs to be scheduled in order for the current one to make progress.
Dealing with combinatorial explosion
The number of possible threads scheduling has factorial growth as the program complexity increases. Loom deals with this by reducing the state space. Equivalent executions are elimited. For example, if two threads read from the same atomic variable, loom does not attempt another execution given that the order in which two threads read from the same atomic cannot impact the execution.
Additional reading
For more usage details, see the README | https://docs.rs/loom/0.3.5/loom/ | 2020-08-03T11:37:29 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.rs |
output
Values: json (default), jsontop, text
Description:
Specifies transcript delivery format. The following outputs are supported:
Note
Earlier releases included an outer list (enclosing square brackets) in the JSON output, which has since been removed. The structure of the inner JSON dictionary has not changed, and is now returned directly without the outer list. To produce JSON output in the list format that was previously used by the API, refer to the
output parameters below. | https://docs.vocitec.com/en/output-58460.html | 2020-08-03T12:25:09 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.vocitec.com |
Start Release
To start the Release server, open a command prompt or terminal, point to the
XL_RELEASE_SERVER_HOME/bin directory, and execute the appropriate command:
Start Release in the background
Important: To run Release in the background, it must be configured to start without user interaction. The server should not require a password for the encryption key that is used to protect passwords in the repository. Alternatively, you can store the password in the
XL_RELEASE_SERVER_HOME/conf/xl-release-server.conf file, by adding following:
repository.keystore.password=MY_PASSWORD. Release will encrypt the password when you start the server.
To start the Release server as a background process:
- Open a command prompt or terminal and point to the
XL_RELEASE_SERVER_HOME/bindirectory.
- Execute the appropriate command:
Important: If you close the command prompt in Microsoft Windows, Release will stop.
Server options
Specify the options you want to use when starting the Release server in the
XL_RELEASE_SERVER_OPTS environment variable.
The following server options are available:
To view the available server options when starting the Release server, add the
-h flag to the start command. | https://docs.xebialabs.com/v.9.7/release/how-to/start-xl-release/ | 2020-08-03T12:21:23 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.xebialabs.com |
MegaPi Pro
Overview
MegaPi Pro is an ATmega2560-based micro control board. It is fully compatible with Arduino programming. It provides powerful programming functions and a maximum output power of 120 W. With four port jacks, one 4-way DC motor expansion interface, one RJ25 expansion board interface, and one smart servo interface, MegaPi Pro is highly expansible. Its strong expansion ability enables it to meet the requirements of education, competition, entertainment, etc. MegaPi Pro can be easily installed on Raspberry Pi, and connected through serial ports. With the corresponding programs, Raspberry Pi can be used to control electronic modules, such as motors and sensors.
Technical specifications
- Microcontroller: ATMEGA2560-16AU
- Input voltage: DC 6–12 V
- Operating voltage: DC 5 V
- Serial port: 3
- I²C interface: 1
- SPI interface: 1
- Analog input port: 16
- DC Current per I/O Pin: 20 mA
- Flash memory: 256 KB
- SRAM: 8 KB
- EEPROM: 4 KB
- Clock speed: 16 MHz
- Dimensions: 87 mm x 63 mm (width × height)
Features
- Four motor driver interfaces for adding encoder motor driver and stepper motor driver, and thus drving DC motors, encoder motors, and stepper motors
- One wireless communication interface for adding modules, such as Bluetooth module or 2.4G module
- One smart servo interface that enables the expansion of four DC motors
- One RJ25 expansion board interface that enables the expansion of eight RJ25 interfaces
- Three M4 installation holes that are compatible with Raspberry Pi
- Overcurrent protection
- Fully compatible with Arduino
- Uses RJ25 interfaces for communication
- Supports Arduino programming, equipped with the professional Makeblock library functions to facilitate programming
- Supports graphical programming on mBlock and can be used by users of all ages
Interface function
- Red pin: firmware burning
- Red socket: power output or motor output
- Yellow pin, black pin, black socket: input/output
- White pin: smart management system interface
- Green interface: motor interface
- Yellow interface: four-way motor drive power interface
- White interface: smart servo interface
| http://docs.makeblock.com/diy-platform/en/electronic-modules/main-control-boards/megapi-pro.html | 2020-08-03T11:26:36 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['../../../en/electronic-modules/main-control-boards/images/megapi-pro-1.png',
None], dtype=object)
array(['../../../en/electronic-modules/main-control-boards/images/megapi-pro-2.png',
None], dtype=object) ] | docs.makeblock.com |
)
Alternatives
This section enumerates other Emacs packages that provide a Clojure programming environment for Emacs.. | https://docs.cider.mx/cider/0.24/additional_packages.html | 2020-08-03T12:59:28 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.cider.mx |
Set up two-step verification
If your Administrator has enabled two-step verification, you can add an extra layer of protection beyond your username and password. Complete the following steps to enable the feature.
Go to User menu > Your account > Two-step verification.
Use the toggle to enable two-step verification.
Use the radio buttons to select your preferred notification method. You can access verification codes through your Contrast-associated email address or the Google Authenticator mobile application, which is available on the following devices:
Android
iPhone
Blackberry
If you run into issues using either method, use the backup codes provided.
Verification codes
If you choose to receive your verification codes by email, Contrast sends you a verification code to enter on the configuration screen.
If you select Google Authenticator, Contrast provides a QR code with further instructions. You can scan the QR code, enter the code manually or use the provided dropdown to select the device type. Use the Google Authenticator application to obtain a verification code and validate your device.
Before completing two-step verification setup, you can download a set of backup codes in the form of a .txt file, which allows you to login if you encounter an error or get locked out of your account. You must download and save these codes in a secure location.
Reconfigure notification methods
If you want to change the way you receive verification codes, you can reconfigure notification settings in the Two-Step Verification tab. Once you change your selection, Contrast automatically issues a new set of backup codes. It is not necessary to save your changes.
To learn more about Administrator settings, see Two-step verification. For some helpful tips on verification codes, go to the troubleshooting article. | https://docs.contrastsecurity.com/en/user-two-step-verification.html | 2020-08-03T12:51:53 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.contrastsecurity.com |
The forma.lms project is led and mantained by the forma.association
Scope
The forma.association was founded on January 2017, with the purpose of:
- PROTECTING the product and the brand, being a guarantee of the project continuity for the community of adopters
- DIRECTING the software development, the corrective and evolutive maintenance of forma.lms, and all the activities aimed at the product growth and improvement
- PROMOTING forma.lms through marketing activities, the participation or organization of exhibitions, webinars, or similar events.
- COORDINATING and animating:
Standard Fees
Association memberships are valid for 1 year, from January 1 to December 31
Special Discounts
Special Discounts will be offered during special initiatives (i.e. crowdfunding) or for outstanding contributions, at discretion of the association board | https://docs.formalms.org/association.html | 2020-08-03T11:50:12 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['/jfiles/images/layout/association_400px.png', 'association 400px'],
dtype=object) ] | docs.formalms.org |
Service-Oriented Modeling for Connected Systems – Part 1
Due my intense travel I completely forgot to blog that the first part of Arvindra and my paper on “Service-Oriented Modeling” got published on the Architecture Journal…
In this paper we introduce a three part model that helps you to map business capabilities to service oriented implementation artifacts by using a so called service model.
The more I work with this model the more I realize how important this separation of concerns really is. Especially defining the conceptional service model allows you to decouple contracts from technology restrictions. If you’re interested in that topic and plan to attend TechEd Israel or TechEd US, my session “Architecting for a Service-Oriented World” might be of interest for you. | https://docs.microsoft.com/en-us/archive/blogs/beatsch/service-oriented-modeling-for-connected-systems-part-1 | 2020-08-03T11:19:56 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.microsoft.com |
Access cloud data in a notebook.
Doing interesting work in a Jupyter notebook requires data. Data, indeed, is the lifeblood of notebooks.
You can certainly import data files into a project, even using commands like
curl from within a notebook to download a file directly. It's likely, however, that you need to work with much more extensive data that's available from non-file sources such as REST APIs, relational databases, and cloud storage such as Azure tables.
This article briefly outlines these different options. Because data access is best seen in action, you can find runnable code in the Azure Notebooks Samples - Access your data.
REST APIs
Generally speaking, the vast amount of data available from the Internet is accessed not through files, but through REST APIs. Fortunately, because a notebook cell can contain whatever code you like, you can use code to send requests and receive JSON data. You can then convert that JSON into whatever format you want to use, such as a pandas dataframe.
To access data using a REST API, use the same code in a notebook's code cells that you use in any other application. The general structure using the requests library is as follows:
import pandas import requests # New York City taxi data for 2014 data_url = '' # General data request; include other API keys and credentials as needed in the data argument response = requests.get(data_url, data={"limit": "20"}) if response.status_code == 200: dataframe_rest2 = pandas.DataFrame.from_records(response.json()) print(dataframe_rest2)
Azure SQL Database and SQL Managed Instance
You can access databases in SQL Database or SQL Managed Instance with the assistance of the pyodbc or pymssql libraries.
Use Python to query an Azure SQL database gives you instructions on creating a database in SQL Database containing AdventureWorks data, and shows how to query that data. The same code is shown in the sample notebook for this article.
Azure Storage
Azure Storage provides several different types of non-relational storage, depending on the type of data you have and how you need to access it:
- Table Storage: provides low-cost, high-volume storage for tabular data, such as collected sensor logs, diagnostic logs, and so on.
- Blob storage: provides file-like storage for any type of data.
The sample notebook demonstrates working with both tables and blobs, including how to use a shared access signature to allow read-only access to blobs.
Azure Cosmos DB
Azure Cosmos DB provides a fully indexed NoSQL store for JSON documents). The following articles provide a number of different ways to work with Cosmos DB from Python:
- Build a SQL API app with Python
- Build a Flask app with the Azure Cosmos DB's API for MongoDB
- Create a graph database using Python and the Gremlin API
- Build a Cassandra app with Python and Azure Cosmos DB
- Build a Table API app with Python and Azure Cosmos DB
When working with Cosmos DB, you can use the azure-cosmosdb-table library.
Other Azure databases
Azure provides a number of other database types that you can use. The articles below provide guidance for accessing those databases from Python:
- Azure Database for PostgreSQL: Use Python to connect and query data
- Quickstart: Use Azure Redis Cache with Python
- Azure Database for MySQL: Use Python to connect and query data
- Azure Data Factory | https://docs.microsoft.com/en-us/azure/notebooks/access-data-resources-jupyter-notebooks | 2020-08-03T12:57:38 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.microsoft.com |
Phalcon is compiled into a C extension loaded on your web server. Because of that, bugs lead to segmentation faults, causing Phalcon to crash some of your web server processes.
For debugging these segmentation faults a stacktrace is required. Creating a stack trace requires a special build of php and some steps need to be done to generate a trace that allows the Phalcon team to debug this behavior.
Please follow this guide to understand how to generate the backtrace. | https://docs.phalcon.io/4.0/pt-br/generating-backtrace | 2020-08-03T12:17:59 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.phalcon.io |
[−][src]Module sequoia_openpgp::
packet:: key
Key-related functionality.
Data Types
The main data type is the
Key enum. This enum abstracts away
the differences between the key formats (the deprecated version
3, the current version 4, and the proposed version 5
formats). Nevertheless, some functionality remains format
specific. For instance, the
Key enum doesn't provide a
mechanism to generate keys. This functionality depends on the
format.
This version of Sequoia only supports version 4 keys (
Key4).
However, future versions may include limited support for version 3
keys to allow working with archived messages, and we intend to add
support for version 5 keys once the new version of the
specification has been finalized.
OpenPGP specifies four different types of keys: public keys,
secret keys, public subkeys, and secret subkeys. These are
all represented by the
Key enum and the
Key4 struct using
marker types. We use marker types rather than an enum, to better
exploit the type checking. For instance, type-specific methods
like
Key::secret are only exposed for those types that
actually support them. See the documentation for
Key for an
explanation of how the markers work.
The
SecretKeyMaterial data type allows working with secret key
material directly. This enum has two variants:
Unencrypted,
and
Encrypted. It is not normally necessary to use this data
structure directly. The primary functionality that is of interest
to most users is decrypting secret key material. This is usually
more conveniently done using
Key::decrypt_secret.
Key Creation
Use
Key4::generate_rsa or
Key4::generate_ecc to create a
new key.
Existing key material can be turned into an OpenPGP key using
Key4::import_public_cv25519,
Key4::import_public_ed25519,
Key4::import_public_rsa,
Key4::import_secret_cv25519,
Key4::import_secret_ed25519, and
Key4::import_secret_rsa.
Whether you create a new key or import existing key material, you still need to create a binding signature, and, for signing keys, a back signature for the key to be usable.
In-Memory Protection of Secret Key Material
Whether the secret key material is protected on disk or not,
Sequoia encrypts unencrypted secret key material (
Unencrypted)
while it is memory. This helps protect against heartbleed-style
attacks where a buffer over-read allows an attacker to read from
the process's address space. This protection is less important
for Rust programs, which are memory safe. However, it is
essential when Sequoia is used via its FFI.
See
crypto::mem::Encrypted for details. | https://docs.sequoia-pgp.org/sequoia_openpgp/packet/key/index.html | 2020-08-03T11:42:24 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.sequoia-pgp.org |
Remote completion plugin
This topic describes how to install and configure the Release Remote completion plugin. For more information, see Using the Remote completion plugin.
Install the Remote completion plugin
To install the Remote completion plugin:
- In the top navigation bar, click Plugins.
- Click Browse.
- Locate the Release Remote completion plugin, and click Install.
- Restart Release to enable the newly installed plugin.
Server configuration
SMTP server
Release sends remote completion requests to users of the system by email. For more details on how to set up an SMTP server, see Configure SMTP server.
Important: The SMTP server page is only available to users with the Admin global permission.
To configure the email server that is used to send these requests:
- In the top navigation bar, click Settings.
- Click SMTP server.
- Fill in the required fields.
- Click Save.
IMAP server
Release receives remote completion emails sent by users that want to complete or fail a remote completion task.
To configure the email server that is used to receive the remote completion emails:
Important: IMAP server settings are only available to users with the Admin global permission.
Important: Release supports the use of one 1 IMAP server only.
Important: Set up a new email account specifically for receiving remote completion emails. All emails are deleted from the inbox after they are processed by Release, including unrecognized and existing emails.
- In the top navigation bar, click Settings.
- Click Shared configuration.
- Locate IMAP server, and click
.
Fill in the required fields. The following is a list of fields and descriptions:
- IMAP server host: the internet address of the mail server.
- IMAP server port: port where the server is listening on.
- Use TLS: used to secure the connection.
- IMAP from address: the email address of the IMAP server account; requests to remotely complete or fail a task are received from this email account.
- IMAP server login ID.
- IMAP server login password.
- Enable whitelisting: when enable whitelisting is checked, only emails to and from whitelisted domains are processed for remote completion.
- Domain whitelist: used for adding whitelisted domains.
- Secret for generating email signatures: generate an email signature that verifies the integrity of a received remote completion email. Notice that changing the secret will invalidate all previously send completion request emails.
- Click Save.
Settings in
xl-release.conf
Advanced configuration settings can be specified inside the
XL_RELEASE_SERVER_HOME/conf/xl-release.conf file. The advanced configuration is used by the email fetcher which processes incoming remote completion emails.
xl { remote-completion { sync-interval = 30 seconds startup-delay = 30 seconds } }
sync-interval: specifies the interval time for the email fetcher. The default value is 30 seconds.
startup-delay: specifies the initial startup delay of the mail fetcher. The default value is 30 seconds.
Mailbox auditing
Mailbox auditing can be enabled to log mailbox access by mailbox owners, delegates, and administrators. Contact your mailbox provider to set up mailbox auditing.
Troubleshooting
Release server
The Release server provides a mechanism for logging the application. By default, only the basic remote completion process is logged.
To enable detailed logging, you can add the following line into the
XL_RELEASE_SERVER_HOME/conf/logback.xml file:
<logger name="com.xebialabs.xlrelease.plugins.remotecompletion" level="debug" />
Use the log level trace for more detailed logging.
JavaMail debugging
To turn on session debugging, add the following system property to the
$JAVACMD inside the shell script that runs the Release server, located in
bin\run.sh:
-Dmail.debug=true
This property enables printing of debugging information to the console, including a protocol trace.
Security recommendations
The Release remote completion feature uses emails sent by users to complete or fail any task. These are the risks associated with this feature:
Spamming and flooding attacks
Release processes each incoming email for the configured mailbox. To avoid receiving thousands of emails that can flood your mailbox, you can enable whitelisting. Only emails sent to and received from whitelisted domains are processed for remote completion. Use content filters, enable DNS-based blacklists (DNSBL), enable Spam URI Real-time Block Lists (SURBL), and maintain the local blacklists of IP addresses of spam senders. Configure the email relay parameter on the email server to prevent open relay.
Data leakage
Release sends and receives email from a task owner to take action on any task. To prevent any data leakage during this process, you must encrypt IMAP and SMTP protocols with SSL/TLS and set up SMTP authentication to control user access.
DoS and DDoS attack
To avoid DoS and DDoS attacks, limit the number of connection and authentication errors with your SMTP server.
The majority of the abusive email messages carry fake sender addresses. Activate Sender Policy Framework (SPF) to prevent spoofed sources. The SPF check ensures that the sending Message Transfer Agent (MTA) is allowed to send emails on behalf of the senders domain name. You must also activate Reverse DNS to block fake senders. After the Reverse DNS Lookup is activated, your SMTP verifies that the senders IP address matches both the host and domain names that were submitted by the SMTP client in the EHLO/HELO command.
Release notes
Release Remote Completion plugin 9.5.0
Bug Fixes
- [XLINT-808] - Allow email recipient configuration with case-sensitive email addresses
- [XLINT-818] - Fix Release crashing on remote completion task
- [XLINT-959] - Fix whitelist with case-sensitive email addresses | https://docs.xebialabs.com/v.9.7/release/how-to/configure-the-remote-completion-plugin/ | 2020-08-03T12:50:37 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.xebialabs.com |
The following configuration options are available for PM Central Resource Reports.
Option 1: Generate resource reports using the System Account:
By default, all resource reports are security trimmed so users with access to the Resource Center will only see resource information pertaining to projects they have permissions to access.
PM Central provides the option of displaying reports generated for the System Account, providing a comprehensive, portfolio-wide, reports to all users with access to the Report Center.
The Report Settings feature was added in PM Central 4.3
Option 2. Configure the reports to “Run Now”
This option removes the wait time associated with the default report generation method by referencing content from List Rollup rather than the Report Information list.
Run Now is a configuration option associated with the following reports on the Portfolio and Department sites:
- All Allocations
- By Resource
- By Project
- By Project Department
- Allocation by Manager
- Resource Availability
- By Department
- Risk Chart (accessed from the Risks tab)
Important: This option can result in page time out errors. | https://docs.bamboosolutions.com/document/configuring_resource_reports/ | 2020-08-03T12:43:06 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.bamboosolutions.com |
-
-
-
-
-
-
-
event age for events
You can set the event age option to specify the time interval (in seconds). Citrix ADM monitors the appliances until the set duration and generates an event only if the event age exceeds the set duration.
Note:
The minimum value for the event age is 60 seconds. If you keep the Event Age field blank, the event rule is applied immediately after the event is occurred.
For example, consider that you want to manage various ADC appliances and get notified by email when any of your virtual servers goes down for 60 seconds or longer. You can create an event rule with the necessary filters and set the rule’s event age to 60 seconds. Then, whenever a virtual server remains down for 60 or more seconds, you will receive an email notification with details such as entity name, status change, and time.
To set event age in Citrix ADM:
In Citrix ADM, navigate to Networks > Events > Rules, and click Add.
On the Create Rule page, set the rule parameters.
Specify the event age in seconds.
Ensure to set all the co-related traps in the Category section and also set the respective severity in the Severity section when you set event age. In the above example, select the entityup, entitydown, and entityofs traps.
Set event age for. | https://docs.citrix.com/en-us/citrix-application-delivery-management-software/12-1/networks/events/how-to-set-event-age.html | 2020-08-03T12:20:53 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.citrix.com |
.
Applying license from file
The code below will explain how to apply a product license.
// initialize License License lic = new License(); // apply license lic.setLicense("D:\\GroupDocs.Metadata.lic");
Applying license from stream
The following example shows how to load a license from a stream.
try (InputStream stream = new FileInputStream("D:\\GroupDocs.Metadata.lic")) { License license = new License(); license.setLicense(stream); }
Applying Metered license
Here are the simple steps to use the
Metered class.
- Create an instance of
Meteredclass.
-class (Since version 19.5).
- It will return the credit that you have consumed so far.
Following is the sample code demonstrating how to use
Metered class.
// initialize Metered API Metered metered = new Metered(); // set-up credentials metered.setMeteredKey("publicKey", "privateKey"); | https://docs.groupdocs.com/metadata/java/evaluation-limitations-and-licensing/ | 2020-08-03T12:06:11 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.groupdocs.com |
Interfaces
To maximize component reuse, it's vital components be designed for reuse. Reusability partly depends on portability, which the Legato framework provides, but it also depends largely on interoperability. Reusable components must be designed for interoperability with as many other things as possible, including things that don't exist - yet!
Interface design is the most important factor to make something interoperable. Standardized data types and interaction methods must be used, and the simpler the interfaces, the better.
Legato embraces the philosophy that developers should always be allowed to choose the implementation language best suited to the task at hand, regardless of the components used. Some developers are better at programming in certain languages, and some languages are better suited to solving certain problems. That's why Legato provides developers with an easy way to interface components with each other, even if they have been written in different languages.
A common example of a programming-language-independent interface is a networking protocol. But networking protocols come with pitfalls and overhead, things like endian issues, race conditions, protocol stack framing overheads and poor processing time. Networking protocols also tend to require a lot of hand coding specific to the protocol implementation.
Legato supports networking, if that's what's needed, but it also has tools to implement much lighter-weight, language-independent communication between components that exist on the same host device or even running within the same process.
Inter-process Communication
Function Call APIs
Virtually all programmers are familiar with function calls. While Legato allows normal libraries to use their specific programming-language function call interfaces, Legato also supports language-independent function call interfaces..
See API Files for more information. | https://docs.legato.io/18_09/conceptsInterfaces.html | 2020-08-03T11:34:42 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.legato.io |
Free Windows Azure Webinars for ISVs
If you are an ISV or a partner you know that you had better be thinking about how your product of service will continue to exist and be successful in the new cloud paradigm.
The Windows Azure Team Blog just listed this free three part webinar series for ISVs that covers the benefits, implications of - and best practices for - adopting Windows Azure.
They include:
- The Business Case for Azure (Part 1) - January 12, 2011
- Understanding Implications of the Cloud (Part 2) - January 19, 2011
- Easing (Leaping) Into the Cloud (Part 3) - January 29, 2011
See this blog post for more information and how to register.
Bill Zack | https://docs.microsoft.com/en-us/archive/blogs/ignitionshowcase/free-windows-azure-webinars-for-isvs | 2020-08-03T12:52:21 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.microsoft.com |
Privileged, and who receives them.
Sender email address and subject line
Emails sent from Privileged Identity Management for both Azure AD and Azure resource roles have the following sender email address:
- Display name: Microsoft Azure
These emails include a PIM prefix in the subject line. Here's an example:
- PIM: Alain Charon was permanently assigned the Backup Reader role
Notifications for Azure AD roles
Privileged Identity Management sends emails when the following events occur for Azure AD roles:
- When a privileged role activation is pending approval
- When a privileged role activation request is completed
- When Azure AD Privileged Identity Management is enabled
Who receives these emails for Azure AD roles depends on your role, the event, and the notifications setting:
* If the Notifications setting is set to Enable.
The following shows an example email that is sent when a user activates an Azure AD role for the fictional Contoso organization.
Weekly Privileged Identity Management digest email for Azure AD roles
A weekly Privileged Identity Management summary email for Azure AD roles is sent to Privileged Role Administrators, Security Administrators, and Global Administrators that have enabled Privileged Identity Management. This weekly email provides a snapshot of Privileged Identity Management activities for the week as well as privileged role assignments. It is only available for Azure AD organizations on the public cloud. Here's an example email:
The email includes four tiles:
The Overview of your top roles section lists the top five roles in your organization based on total number of permanent and eligible administrators for each role. The Take action link opens the PIM wizard where you can convert permanent administrators to eligible administrators in batches.
When users activates their role and the role setting requires approval, approvers will receive three emails for each approval:
- Request to approve or deny the user's activation request (sent by the request approval engine)
- The user's request is approved (sent by the request approval engine)
- The user's role is activated (sent by Privileged Identity Management)
The first two emails sent by the request approval engine can be delayed. Currently, 90% of emails take three to ten minutes, but for 1% customers it can be much longer, up to fifteen minutes.
If an approval request is approved in the Azure portal before the first email is sent, the first email will no longer be triggered and other approvers won't be notified by email of the approval request. It might appear as if the they didn't get an email but it's the expected behavior.
PIM emails for Azure resource roles
Privileged Identity Management sends emails to Owners and User Access Administrators when the following events occur for Azure resource roles:
- When a role assignment is pending approval
- When a role is assigned
- When a role is soon to expire
- When a role is eligible to extend
- When a role is being renewed by an end user
- When a role activation request is completed
Privileged Identity Management sends emails to end users when the following events occur for Azure resource roles:
- When a role is assigned to the user
- When a user's role is expired
- When a user's role is extended
- When a user's role activation request is completed
The following shows an example email that is sent when a user is assigned an Azure resource role for the fictional Contoso organization.
| https://docs.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-email-notifications | 2020-08-03T13:32:36 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['media/pim-email-notifications/email-directory-new.png',
'New Privileged Identity Management email for Azure AD roles'],
dtype=object)
array(['media/pim-email-notifications/email-directory-weekly.png',
'Weekly Privileged Identity Management digest email for Azure AD roles'],
dtype=object)
array(['media/pim-email-notifications/email-resources-new.png',
'New Privileged Identity Management email for Azure resource roles'],
dtype=object) ] | docs.microsoft.com |
What is a content delivery network on Azure?
A content delivery network (CDN) is a distributed network of servers that can efficiently deliver web content to users. CDNs store cached content on edge servers in point-of-presence (POP) locations that are close to end users, to minimize latency.
Azure Content Delivery Network (CDN) offers developers a global solution for rapidly delivering high-bandwidth content to users by caching their content at strategically placed physical nodes across the world. Azure CDN can also accelerate dynamic content, which cannot be cached, by leveraging various network optimizations using CDN POPs. For example, route optimization to bypass Border Gateway Protocol (BGP)..
For a list of current CDN node locations, see Azure CDN POP locations.
How it works
A user (Alice) requests a file (also called an asset) by using a URL with a special domain name, such as <endpoint name>.. | https://docs.microsoft.com/en-us/azure/cdn/cdn-overview?WT.mc_id=partlycloudy-blog-masoucou | 2020-08-03T13:04:19 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['media/cdn-overview/cdn-overview.png', 'CDN Overview'],
dtype=object) ] | docs.microsoft.com |
This manual describes the Open Systems Pharmacology Suite. It includes a technical description of each software element with examples and references for further reading. The aim of the manual is to assist users in effectively developing PBPK models.
The handbook is divided into the following parts:
"Mechanistic Modeling of Pharmacokinetics and Dynamics" provides a brief general introduction to the science of computational systems biology with a strong focus on mechanistic modeling of pharmacokinetics and –dynamics.
Go to: Mechanistic Modeling of Pharmacokinetics and Dynamics
"Open Systems Pharmacology Suite" provides a brief overview of our software platform, its scope, and puts it into context with the science.
Go to: Open Systems Pharmacology Suite
A technical description of the different software elements is presented starting with PK-Sim® focusing on physiologically-based pharmacokinetics in "Working with PK-Sim®".
Go to: Working with PK-Sim®
MoBi® focusing on model customization and extension as well as on pharmacodynamics in "Working with MoBi®".
Go to: Working with MoBi®
Tools shared between PK-Sim® and MoBi® and some workflow examples are presented in "Shared Tools and Example Workflows".
Go to: Shared Tools and Example Workflows
The interfaces to the common computing environment R is described in "R Toolbox". | https://docs.open-systems-pharmacology.org/ | 2020-08-03T11:35:04 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.open-systems-pharmacology.org |
Overview:
Some of the things you need to know before you begin installing out Pocket Core CLI and configure a Pocket Node or Pocket Validator Node, you must meet the following prerequisites in order to continue:
- Have basic knowledge of Linux/Mac OS
- Basic Web architecture
- Static IP address or domain name
- SSL cert (self-signed not recommended)
- Basic network principals
- A process manager:
- e.g: Systemd
- Knowledge on implementing a reverse proxy using(but not limited to) one of the following:
- Apache
- Envoy
- Ngnix
- Basic knowledge with File descriptors
Hardware Requirements:
The base node hardware requirements are:
CPU: 2 CPU’s (or vCPU’s)
Memory: 4 GB RAM*
Disk: Blockchain is expected to grow 154 GB a year given our 15 minutes(4 MB blocks)
Note
The RAM requirement could vary, depending on network load and relays processed, we will be releasing more details on this process at a later date.
Next Step:
Learn more about each node in our node breakdown and our node reference guide to understand how the nodes work and interact on the protocol.
Updated 8 days ago | https://docs.pokt.network/docs/before-you-dive-in | 2020-08-03T11:25:51 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.pokt.network |
Plugin format
Rudder has a specific package format for plugins.
You can manage Rudder packages with the rudder-pkg command. This is the documentation of how they are created.
File description
A Rudder package file ends with the
.rpkg extension.
A Rudder package file is an archive file and can be managed with the 'ar' command.
The archive contains:
A metadata file in JSON format named medatata
A tarball file in txz format name scripts.txz that contains package setup utility scripts
One or more tarball files in txz format that contain the package files
The metadata file is a JSON file and is named 'metadata':
{ # the only currently supported type in "plugin" (mandatory) "type": "plugin", # the package name must consist of ascii characters without whitespace (mandatory) "name": "myplugin", # the package version has the form "rudder_major-version_major.version_minor" for a plugin (mandatory) "version": "4.1-1.0", # these are is purely informative (optional) "build-date": "2017-02-22T13:58:23Z", "build-commit": "34aea1077f34e5abdaf88eb3455352aa4559ba8b", # the list of jar files to enable if this is a webapp plugin (optional) "jar-files": [ "test.jar" ], # the list of packages or other plugins that this package depends on (optional) # this is currently only informative "depends": { # dependency on a specific binary that must be in the PATH "binary": [ "zip" ] # dependencies on dpkg based systems "dpkg": [ "apache2" ], "rpm": [ ], # dependency specific to debian-8 "debian-8": [ ], "sles-11": [ ], # rudder dependency, ie this is a Rudder format package "rudder": [ "new-plugin" ] }, # the plugin content (mandatory) "content": { # this will put the content of the extracted files.txz into /opt/rudder/share "files.txz": "/opt/rudder/share", "var_rudder.txz": "/var/rudder" } }
To see a package metadata file use:
ar p package.rpkg medatada
The scripts.txz is a tarball that can contain zero or more executable files named:
preinstthat will be run before installing the package files
postinstthat will be run after installing the package files
prermthat will be run before removing the package files
postrmthat will be run after removing the package files
preinst and
postinst take one parameter that can be 'install' or 'upgrade'. The value 'upgrade' is used when a previous version of the package is already installed.
To create the
scripts.txz file use:
tar cvfJ scripts.txz preinst postinst prerm postrm
To create a Rudder package file use the ar command:
ar r mypackage-4.1-3.0.rpkg medatada scripts.txz files.txz
Note that
ar r inserts or replaces files so you can create your package with incremental inserts.
To extract files, use
ar x instead.
← Variables Security considerations → | https://docs.rudder.io/reference/6.1/reference/plugin_format.html | 2020-08-03T12:47:41 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.rudder.io |
. following image shows the results of the search.
-.
Your dashboard should look like the following image.:
- At the top of the dashboard click Edit.
- In the VIP Client Purchases: 8.0.0, 8.0.1, 8.0.2, 8.0.3, 8.0.4, 8.0.5
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/Splunk/8.0.4/SearchTutorial/Addreportstodashboard | 2020-08-03T12:59:54 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
DC Motor-25 6V
Description
DC motors are the most commonly used motors in Makeblock Platform. With Makeblock DC Motor-25 Brackets, they may be easy to connect to Makeblock structural components.
Specification
Size Chart (mm)
Demo
It can be connected with Makeblock Me Orion or Makeblock Me Dual Motor Driver V1 by adding the plug 3.96-2P on the stripped-end to control the motors.
| http://docs.makeblock.com/diy-platform/en/electronic-modules/motors/dc-motor-25-6v.html | 2020-08-03T12:25:28 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['images/dc-motor-25-6v_微信截图_20160128160459.png',
'微信截图_20160128160459'], dtype=object)
array(['images/dc-motor-25-6v_微信截图_20160128160649.png',
'微信截图_20160128160649'], dtype=object)
array(['images/dc-motor-25-6v_微信截图_20160128160612.png',
'微信截图_20160128160612'], dtype=object)
array(['images/dc-motor-25-6v_微信截图_20160128160724.png',
'微信截图_20160128160724'], dtype=object)
array(['images/dc-motor-25-6v_微信截图_20160128160812.png',
'微信截图_20160128160812'], dtype=object) ] | docs.makeblock.com |
DKHealthCheck.exe, found in the
Note: DKHEALTHCHECK output is captured by DKSupport automatically and does not need to be run separately if you are already running DKSupport.
You can run this tool by right clicking the DataKeeper Notification Icon and then clicking on ‘Launch Health Check’ or by following the below procedure.
Open a command prompt
- Type cd %extmirrbase%
- You will now be placed in the DataKeeper directory or c:\Program Files (x86) \SIOS\DataKeeper
- From the aforementioned directory type cd DKTools
- From within the DKTools directory, execute the following command DKHealthCheck.exe
The results of the tool can be copied and pasted from the command prompt and emailed to [email protected].
Alternatively, you may direct the output to a file, by running this command inside of the DKTools directory.
- DKHealthCheck.exe > HealthCheck.txt
This file can then be attached and sent as part of an email.
Note: This command may take some time to execute.
Post your comment on this topic. | http://docs.us.sios.com/dkse/8.6.3/en/topic/dkhealthcheck | 2020-08-03T12:30:25 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.us.sios.com |
We introduced an experimental feature called Keen Flows. All steps of the
Keen Flow launch simultaneously as the flow starts and continue running until
the flow’s status changes to
sleeping.
We introduced the UI’s option for defining the CRON expression to schedule Flow executions. This functionality is available under the Settings tab on the Designer page.
We made the Node.js SDK for proper RabbitMQ’s disconnection. In case, one of the RabbitMQ’s instances fails or reports errors, the Node.js process terminates immediately and then restarts by the Platform’s orchestrator. Thus the process can reconnect to the already running RabbitMQ’s instance.
The
workspace_id and
workspace_role were added as optional attributes to the
POST /v2/contracts/:id/invites endpoint. In case the
workspace_id has already
been provided, then the
workspace_role will be required.
Previously it was possible to delete the Credential for any Component corrupting the integration Flows. Now you can’t delete any of the Credentials, while it is used in at least one Integration Flow.
As a owner of the Contract you can now retrieve any details of the Workspace you
are member of using the
/v2/workspaces/:id API endpoint request.
Updated the error messages on the password recovery page.
Expression tooltip in the mapper UI is now flashing when hovered with the mouse. | https://docs.elastic.io/releases/2018Q4.html | 2020-08-03T11:53:22 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.elastic.io |
The list below contains information relating to the most common Active Directory attributes. Not all attributes are appropriate for use with SecureAuth.
More Information related to syntax, ranges, Global catalog replication, etc for these and other AD Attributes can be found at here
Friendly Name: This is the name shown in Active Directory Users and Computers.
Attribute Name: This is the Active Directory attribute name.
Example: This column shows example usage or notes.
General Tab
Address Tab
Group Tab
Account Tab
Telephones Tab
Organization Tab
Exchange Tab
Exchange Attributes Tab | https://docs.secureauth.com/display/KBA/Active+Directory+Attributes+List | 2020-08-03T12:19:33 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.secureauth.com |
LagoInitFile Specification¶
Note: this is work under progress, if you’d like to contribute to the documentation, please feel free to open a PR. In the meanwhile, we recommend looking at LagoInitFile examples available at:
Each environment in Lago is created from an init file, the recommended format
is YAML, although at the moment of writing JSON is still supported. By default,
Lago will look for a file named
LagoInitFile in the directory it was
triggered. However you can pick a different file by running:
$ lago init <FILENAME>
Sections¶
The init file is composed out of two major sections: domains, and nets.
Each virtual machine you wish to create needs to be under the
domains
section.
nets will define the network topology, and when you add a
nic to a domain, it must be defined in the
nets section.
Example:
domains: vm-el73: memory: 2048 service_provider: systemd nics: - net: lago disks: - template_name: el7.3-base type: template name: root dev: vda format: qcow2 artifacts: - /var/log nets: lago: type: nat dhcp: start: 100 end: 254 management: true dns_domain_name: lago.local
domains¶
<name>: The name of the virtual machine.
- memory(int)
- The virtual machine memory in GBs.
- vcpu(int)
- Number of virtual CPUs.
- service_provider(string)
- This will instruct which service provider to use when enabling services in the VM by calling
lago.plugins.vm.VMPlugin.service(), Possible values: systemd, sysvinit.
- cpu_model(string)
-
CPU Family to emulate for the virtual machine. The list of supported types depends on your hardware and the libvirtd version you use, to list them you can run locally:$ virsh cpu-models x86_64
- cpu_custom(dict)
- This allows more fine-grained control of the CPU type, see CPU section for details.
- nics(list)
-
Network interfaces. Each network interface must be defined in the global nets section. By default each nic will be assigned an IP according to the network definition. However, you may also use static IPs here, by writing:nics: - net: net-01 ip: 192.168.220.2
The same network can be declared multiple times for each domain.
- disks(list)
-
- type
-
Disk type, possible values:
- template
- A Lago template, this would normally the bootable device.
- file
- A local disk image. Lago will thinly provision it during init stage, this device will not be bootable. But can obviously be used for additional storage.
- template_name(string)
- Applies only to disks of type
template. This should be one of the available Lago templates, see Templates section for the list.
- size(string)
- Disk size to thinly provision in GB. This is only supported in
filedisks.
- format(string)
- TO-DO: no docs yet..
- device(string)
- Linux device: vda, sdb, etc. Using a device named “sd*” will use virtio-scsi.
- build(list)
-
This section should describe how to build/configure VMs. The build/configure action will happen during
init.
- virt-customize(dict)
-
Instructions to pass to virt-customize, where the key is the name of the option and the value is the arguments for that option.
This operation is only supported on disks which contains OS.
A special instruction is
ssh-inject: ''Which will ensure Lago’s generated SSH keys will be injected into the VM. This is useful when you don’t want to run the bootstrap stage.
For example:- template_name: el7.3-base build: - virt-customize: ssh-inject: '' touch: [/root/file1, /root/file2]
See build section for details.
- artifacts(list)
- Paths on the VM that Lago should collect when using lago collect from the CLI, or
collect_artifacts()from the SDK.
- groups(list)
- Groups this VM belongs to. This is most usefull when deploying the VM with Ansible.
- bootstrap(bool)
- Whether to run bootstrap stage on the VM’s template disk, defaults to True.
- ssh-user(string)
- SSH user to use and configure, defaults to root
- vm-provider(string)
- VM Provider plugin to use, defaults to local-libvirt.
- vm-type(string)
- VM Plugin to use. A custom VM Plugin can be passed here, note that it needs to be available in your Python Entry points. See lago-ost-plugin for an example.
- metadata(dict)
- TO-DO: no docs yet.. | https://lago.readthedocs.io/en/0.42/LagoInitFile.html | 2020-08-03T12:55:28 | CC-MAIN-2020-34 | 1596439735810.18 | [] | lago.readthedocs.io |
Paste Deployment¶
Contents
- Paste Deployment
- Introduction
- Status
- Installation
- From the User Perspective
- Basic Usage
- config: URIs
- egg: URIs
- Defining Factories
- Outstanding Issues
Introduction¶

Paste Deployment is a system for finding and configuring WSGI applications and servers. For WSGI application consumers it provides a single, simple function (loadapp) for loading a WSGI application from a configuration file or a Python Egg. For WSGI application providers it only asks for a single, simple entry point to your application, so that application users don't need to be exposed to the implementation details of your application.
The result is something a system administrator can install and manage without knowing any Python, or the details of the WSGI application or its container.
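For Python code that needs to consume a configuration directly (for example in a test fixture), the consumer API is essentially one function call. A minimal sketch, where the file name and directory are only placeholders:

from paste.deploy import loadapp

# Load the WSGI application defined by the "main" section of the config file.
# A relative "config:" path is resolved against relative_to.
wsgi_app = loadapp('config:myconfig.ini', relative_to='/path/to/config/dir')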
Paste Deployment currently does not require other parts of Paste, and is distributed as a separate package.
To see updates that have been made to Paste Deploy see the news file.
Paste Deploy is released under the MIT license.
Status¶
Paste Deploy has passed version 1.0 and is an actively maintained project. As of 1.0, we'll make a strong effort to maintain backward compatibility (this actually started happening long before 1.0, but now it is explicit). This will include deprecation warnings when necessary. Major changes will take place under new functions or with new entry points.
Note that the most key aspect of Paste Deploy is the entry points it
defines (such as
paste.app_factory). Paste Deploy is not the only
consumer of these entry points, and many extensions can best take
place by utilizing the entry points instead of using Paste Deploy
directly. The entry points will not change; if changes are necessary,
new entry points will be defined.
Installation¶
First make sure you have either setuptools or its modern replacement distribute installed. For Python 3.x you need distribute as setuptools does not work on it.
Then you can install Paste Deployment using pip by running:
$ sudo pip install PasteDeploy
If you want to track development, do:
$ hg clone $ cd pastedeploy $ sudo python setup.py develop
This will install the package globally, but will load the files in the
PasteDeploy==dev.
For downloads and other information see the Cheese Shop PasteDeploy page.
A complimentary package is Paste Script. To install
that, use
pip install PasteScript (or
pip install
PasteScript==dev).
From the User Perspective¶
In the following sections, the Python API for using Paste Deploy is given. This isn’t what users will be using (but it is useful for Python developers and useful for setting up tests fixtures).
The primary interaction with Paste Deploy is through its configuration files. The primary thing you want to do with a configuration file is serve it. To learn about serving configuration files, see the ``paster serve` command <>`_.
The Config File¶
A config file has different sections. The only sections Paste Deploy
cares about have prefixes, like
app:main or
filter:errors –
the part after the
: is the “name” of the section, and the part
before gives the “type”. Other sections are ignored.
The format is a simple INI format:
name = value. You can
extend the value by indenting subsequent lines.
# is a comment.
Typically you have one or two sections, named “main”: an application
section (
[app:main]) and a server section (
[server:main]).
[composite:...] signifies something that dispatches to multiple
applications (example below).
Here’s a typical configuration file that also shows off mounting multiple applications using paste.urlmap:
[composite:main] use = egg:Paste#urlmap / = home /blog = blog /wiki = wiki /cms = config:cms.ini [app:home] use = egg:Paste#static document_root = %(here)s/htdocs [filter-app:blog] use = egg:Authentication#auth next = blogapp roles = admin htpasswd = /home/me/users.htpasswd [app:blogapp] use = egg:BlogApp database = sqlite:/home/me/blog.db [app:wiki] use = call:mywiki.main:application database = sqlite:/home/me/wiki.db
I’ll explain each section in detail now:
[composite:main] use = egg:Paste#urlmap / = home /blog = blog /cms = config:cms.ini
That this is a
composite section means it dispatches the request
to other applications.
use = egg:Paste#urlmap means to use the
composite application named
urlmap from the
Paste package.
urlmap is a particularly common composite application – it uses a
path prefix to map your request to another application. These are
the applications like “home”, “blog”, “wiki” and “config:cms.ini”. The last
one just refers to another file
cms.ini in the same directory.
Next up:
[app:home] use = egg:Paste#static document_root = %(here)s/htdocs
egg:Paste#static is another simple application, in this case it
just serves up non-dynamic files. It takes one bit of configuration:
document_root. You can use variable substitution, which will pull
variables from the section
[DEFAULT] (case sensitive!) with
markers like
%(var_name)s. The special variable
%(here)s is
the directory containing the configuration file; you should use that
in lieu of relative filenames (which depend on the current directory,
which can change depending how the server is run).
Then:
[filter-app:blog] use = egg:Authentication#auth next = blogapp roles = admin htpasswd = /home/me/users.htpasswd [app:blogapp] use = egg:BlogApp database = sqlite:/home/me/blog.db
The
[filter-app:blog] section means that you want an application
with a filter applied. The application being filtered is indicated
with
next (which refers to the next section). The
egg:Authentication#auth filter doesn’t actually exist, but one
could imagine it logs people in and checks permissions.
That last section is just a reference to an application that you
probably installed with
pip install BlogApp, and one bit of
configuration you passed to it (
database).
Lastly:
[app:wiki] use = call:mywiki.main:application database = sqlite:/home/me/wiki.db
This section is similar to the previous one, with one important difference.
Instead of an entry point in an egg, it refers directly to the
application
variable in the
mywiki.main module. The reference consist of two parts,
separated by a colon. The left part is the full name of the module and the
right part is the path to the variable, as a Python expression relative to the
containing module.
So, that’s most of the features you’ll use.
Basic Usage¶
The basic way you’ll use Paste Deployment is to load WSGI applications. Many Python frameworks now support WSGI, so applications written for these frameworks should be usable.
The primary function is
paste.deploy.loadapp. This loads an
application given a URI. You can use it like:
from paste.deploy import loadapp wsgi_app = loadapp('config:/path/to/config.ini')
There’s two URI formats currently supported:
config: and
egg:.
config: URIs¶
URIs that being with
config: refer to configuration files. These
filenames can be relative if you pass the
relative_to keyword
argument to
loadapp().
Note
Filenames are never considered relative to the current working
directory, as that is a unpredictable location. Generally when
a URI has a context it will be seen as relative to that context;
for example, if you have a
config: URI inside another
configuration file, the path is considered relative to the
directory that contains that configuration file.
Config Format¶
Configuration files are in the INI format. This is a simple format that looks like:
[section_name] key = value another key = a long value that extends over multiple lines
All values are strings (no quoting is necessary). The keys and section names are case-sensitive, and may contain punctuation and spaces (though both keys and values are stripped of leading and trailing whitespace). Lines can be continued with leading whitespace.
Lines beginning with
# (preferred) or
; are considered
Applications¶
You can define multiple applications in a single file; each application goes in its own section. Even if you have just one application, you must put it in a section.
Each section name defining an application should be prefixed with
app:. The “main” section (when just defining one application)
would go in
[app:main] or just
[app].
There’s two ways to indicate the Python code for the application. The first is to refer to another URI or name:
[app:myapp] use = config:another_config_file.ini#app_name # or any URI: [app:myotherapp] use = egg:MyApp # or a callable from a module: [app:mythirdapp] use = call:my.project:myapplication # or even another section: [app:mylastapp] use = myotherapp
It would seem at first that this was pointless; just a way to point to another location. However, in addition to loading the application from that location, you can also add or change the configuration.
The other way to define an application is to point exactly to some Python code:
[app:myapp] paste.app_factory = myapp.modulename:app_factory
You must give an explicit protocol (in this case
paste.app_factory), and the value is something to import. In
this case the module
myapp.modulename is loaded, and the
app_factory object retrieved from it.
See Defining Factories for more about the protocols.
Configuration¶
Configuration is done through keys besides
use (or the protocol
names). Any other keys found in the section will be passed as keyword
arguments to the factory. This might look like:
[app:blog] use = egg:MyBlog database = mysql://localhost/blogdb blogname = This Is My Blog!
You can override these in other sections, like:
[app:otherblog] use = blog blogname = The other face of my blog
This way some settings could be defined in a generic configuration
file (if you have
use = config:other_config_file) or you can
publish multiple (more specialized) applications just by adding a
section.
Global Configuration¶
Often many applications share the same configuration. While you can do that a bit by using other config sections and overriding values, often you want that done for a bunch of disparate configuration values. And typically applications can’t take “extra” configuration parameters; with global configuration you do something equivalent to “if this application wants to know the admin email, this is it”.
Applications are passed the global configuration separately, so they must specifically pull values out of it; typically the global configuration serves as the basis for defaults when no local configuration is passed in.
Global configuration to apply to every application defined in a file
should go in a special section named
[DEFAULT]. You can override
global configuration locally like:
[DEFAULT] admin_email = [email protected] [app:main] use = ... set admin_email = [email protected]
That is, by using
set in front of the key.
Composite Applications¶
“Composite” applications are things that act like applications, but are made up of other applications. One example would be a URL mapper, where you mount applications at different URL paths. This might look like:
[composite:main] use = egg:Paste#urlmap / = mainapp /files = staticapp [app:mainapp] use = egg:MyApp [app:staticapp] use = egg:Paste#static document_root = /path/to/docroot
The composite application “main” is just like any other application
from the outside (you load it with
loadapp for instance), but it
has access to other applications defined in the configuration file.
Other Objects¶
In addition to sections with
app:, you can define filters and
servers in a configuration file, with
server: and
filter:
prefixes. You load these with
loadserver and
loadfilter. The
configuration works just the same; you just get back different kinds
of objects.
Filter Composition¶
There are several ways to apply filters to applications. It mostly depends on how many filters, and in what order you want to apply them.
The first way is to use the
filter-with setting, like:
[app:main] use = egg:MyEgg filter-with = printdebug [filter:printdebug] use = egg:Paste#printdebug # and you could have another filter-with here, and so on...
Also, two special section types exist to apply filters to your
applications:
[filter-app:...] and
[pipeline:...]. Both of
these sections define applications, and so can be used wherever an
application is needed.
filter-app defines a filter (just like you would in a
[filter:...] section), and then a special key
next which
points to the application to apply the filter to.
pipeline: is used when you need apply a number of filters. It
takes one configuration key
pipeline (plus any global
configuration overrides you want).
pipeline is a list of filters
ended by an application, like:
[pipeline:main] pipeline = filter1 egg:FilterEgg#filter2 filter3 app [filter:filter1] ...
Getting Configuration¶
If you want to get the configuration without creating the application,
you can use the
appconfig(uri) function, which is just like the
loadapp() function except it returns the configuration that would
be used, as a dictionary. Both global and local configuration is
combined into a single dictionary, but you can look at just one or the
other with the attributes
.local_conf and
.global_conf.
egg: URIs¶
Python Eggs are a distribution and installation format produced by setuptools and distribute that adds metadata to a normal Python package (among other things).
You don’t need to understand a whole lot about Eggs to use them. If
you have a distutils
setup.py script, just change:
from distutils.core import setup
to:
from setuptools import setup
Now when you install the package it will be installed as an egg.
The first important part about an Egg is that it has a
specification. This is formed from the name of your distribution
(the
name keyword argument to
setup()), and you can specify a
specific version. So you can have an egg named
MyApp, or
MyApp==0.1 to specify a specific version.
The second is entry points. These are references to Python objects in your packages that are named and have a specific protocol. “Protocol” here is just a way of saying that we will call them with certain arguments, and expect a specific return value. We’ll talk more about the protocols later.
The important part here is how we define entry points. You’ll add an
argument to
setup() like:
setup( name='MyApp', ... entry_points={ 'paste.app_factory': [ 'main=myapp.mymodule:app_factory', 'ob2=myapp.mymodule:ob_factory'], }, )
This defines two applications named
main and
ob2. You can
then refer to these by
egg:MyApp#main (or just
egg:MyApp,
since
main is the default) and
egg:MyApp#ob2.
The values are instructions for importing the objects.
main is
located in the
myapp.mymodule module, in an object named
app_factory.
There’s no way to add configuration to objects imported as Eggs.
Defining Factories¶
This lets you point to factories (that obey the specific protocols we mentioned). But that’s not much use unless you can create factories for your applications.
There’s a few protocols:
paste.app_factory,
paste.composite_factory,
paste.filter_factory, and lastly
paste.server_factory. Each of these expects a callable (like a
function, method, or class).
paste.app_factory¶
The application is the most common. You define one like:
def app_factory(global_config, **local_conf): return wsgi_app
The
global_config is a dictionary, and local configuration is
passed as keyword arguments. The function returns a WSGI application.
paste.composite_factory¶
Composites are just slightly more complex:
def composite_factory(loader, global_config, **local_conf): return wsgi_app
The
loader argument is an object that has a couple interesting
methods.
get_app(name_or_uri, global_conf=None) return a WSGI
application with the given name.
get_filter and
get_server
work the same way.
A more interesting example might be a composite factory that does something. For instance, consider a “pipeline” application:
def pipeline_factory(loader, global_config, pipeline): # space-separated list of filter and app names: pipeline = pipeline.split() filters = [loader.get_filter(n) for n in pipeline[:-1]] app = loader.get_app(pipeline[-1]) filters.reverse() # apply in reverse order! for filter in filters: app = filter(app) return app
Then we use it like:
[composite:main] use = <pipeline_factory_uri> pipeline = egg:Paste#printdebug session myapp [filter:session] use = egg:Paste#session store = memory [app:myapp] use = egg:MyApp
paste.filter_factory¶
Filter factories are just like app factories (same signature), except they return filters. Filters are callables that take a WSGI application as the only argument, and return a “filtered” version of that application.
Here’s an example of a filter that checks that the
REMOTE_USER CGI
variable is set, creating a really simple authentication filter:
def auth_filter_factory(global_conf, req_usernames): # space-separated list of usernames: req_usernames = req_usernames.split() def filter(app): return AuthFilter(app, req_usernames) return filter class AuthFilter(object): def __init__(self, app, req_usernames): self.app = app self.req_usernames = req_usernames def __call__(self, environ, start_response): if environ.get('REMOTE_USER') in self.req_usernames: return self.app(environ, start_response) start_response( '403 Forbidden', [('Content-type', 'text/html')]) return ['You are forbidden to view this resource']
paste.filter_app_factory¶
This is very similar to
paste.filter_factory, except that it also
takes a
wsgi_app argument, and returns a WSGI application. So if
you changed the above example to:
class AuthFilter(object): def __init__(self, app, global_conf, req_usernames): ....
Then
AuthFilter would serve as a filter_app_factory
(
req_usernames is a required local configuration key in this
case).
paste.server_factory¶
This takes the same signature as applications and filters, but returns a server.
A server is a callable that takes a single argument, a WSGI application. It then serves the application.
An example might look like:
def server_factory(global_conf, host, port): port = int(port) def serve(app): s = Server(app, host=host, port=port) s.serve_forever() return serve
The implementation of
Server is left to the user.
paste.server_runner¶
Like
paste.server_factory, except
wsgi_app is passed as the
first argument, and the server should run immediately.
Outstanding Issues¶
Should there be a “default” protocol for each type of object? Since there’s currently only one protocol, it seems like it makes sense (in the future there could be multiple). Except that
paste.app_factoryand
paste.composite_factoryoverlap considerably.
ConfigParser’s INI parsing is kind of annoying. I’d like it both more constrained and less constrained. Some parts are sloppy (like the way it interprets
[DEFAULT]).
config:URLs should be potentially relative to other locations, e.g.,
config:$docroot/.... Maybe using variables from
global_conf?
Should other variables have access to
global_conf?
Should objects be Python-syntax, instead of always strings? Lots of code isn’t usable with Python strings without a thin wrapper to translate objects into their proper types.
Some short-form for a filter/app, where the filter refers to the “next app”. Maybe like:
[app-filter:app_name] use = egg:... next = next_app [app:next_app] ... | https://pastedeploy.readthedocs.io/en/stable/ | 2020-08-03T12:00:41 | CC-MAIN-2020-34 | 1596439735810.18 | [] | pastedeploy.readthedocs.io |
If you must run CHKDSK on a volume that is being mirrored by SIOS DataKeeper, it is recommended that you first pause the mirror. After running CHKDSK, continue the mirror. A partial resync occurs (updating those writes generated by the CHKDSK) and mirroring will continue.
Failure to first pause the mirror may result in the mirror automatically entering the Paused state and performing a Resync while CHKDSK is in operation. While this will not cause any obvious problems, it will slow the CHKDSK down and result in unnecessary state changes in SIOS DataKeeper.
SIOS DataKeeper automatically ensures that volumes participating in a mirror, as either source or target, are not automatically checked at system startup. This ensures that the data on the mirrored volumes remains consistent.).
Post your comment on this topic. | http://docs.us.sios.com/dkse/8.6.3/en/topic/chkdsk-considerations | 2020-08-03T13:02:22 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.us.sios.com |
TOPICS×
Features of AEM Forms workspace not available in Flex workspace
AEM Forms workspace innovates beyond Flex-based workspace, to offer features, and capabilities that help improve business integration and user productivity.
Following is a quick overview of these capabilities. For more details, see the related articles listed at the end of this article.
Support for a summary pane for tasks
When you open a task, before the form opens, a pane allows you to show information about the task, using an external URL. Using Task Summary Pane additional and relevant information for a task can be displayed to add more value for the end user of AEM Forms workspace. See Display Summary Page for the implementation details.
Support for Manager View
This capability allows managers to access or act on tasks of their reports. Managers can also drill down, in the organization hierarchy, to tasks of their indirect reports. See Managing tasks in an organizational hierarchy using Manager View for more details.
Support for user avatars
Images, or avatars, for logged in user can now be displayed in the upper-right corner of the AEM Forms workspace. Also, in the Manager View, user avatars can be displayed to show the images of the managers and their reports. See Displaying the user avatar for more details.
Support for integrating third-party applications
The capability to integrate with third-party applications can be used to bring your workflows entirely to AEM Forms workspace. For example, you can render Correspondence Management letter templates as tasks within the AEM Forms workspace window itself. Thus, you can complete the task without leaving AEM Forms workspace. See Integrating Correspondence Management in AEM Forms workspace for detailed instructions.
Support for custom task rendering based on end user's device
AEM Forms workspace provides support for HTML rendition of XDP forms. This support, when used in a render process that routes to different renditions of XDP based on the device or user-agent, allows users to view an XDP form as HTML on the mobile devices and as PDF on a desktop. This helps in providing seamless coverage of Process Management to users who work in varied environments on different devices. | https://docs.adobe.com/content/help/en/experience-manager-64/forms/use-aem-forms-workspace/features-html-workspace-available-flex.html | 2020-08-03T13:12:27 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.adobe.com |
dse.yaml configuration file
The DataStax Enterprise configuration file for security, DSE Search, DataStax Graph, and DSE Analytics.
logback.xml: The location of the logback.xml file depends on the type of installation.
dse.yaml: The location of the dse.yaml file depends on the type of installation.
cassandra.yaml: The location of the cassandra.yaml file depends on the type of installation. The cassandra.yaml file is the primary configuration file for the DataStax Enterprise database.
Syntax
node_health_options:
    refresh_rate_ms: 60000
    uptime_ramp_up_period_seconds: 10800
    dropped_mutation_window_minutes: 30
Security and authentication
Authentication options
DSE Authenticator supports multiple schemes for authentication at the same time in a DataStax Enterprise cluster. Additional authenticator configuration is required in cassandra.yaml.
# authentication_options:
#     enabled: false
#     default_scheme: internal
#     other_schemes:
#     scheme_permissions: false
#     allow_digest_with_kerberos: true
#     plain_text_without_ssl: warn
#     transitional_mode: disabled
- authentication_options
- Configures DseAuthenticator to authenticate users when the authenticator option in cassandra.yaml is set to
com.datastax.bdp.cassandra.auth.DseAuthenticator. Authenticators other than DseAuthenticator are not supported.
- enabled
- Enables user authentication.
- true - The DseAuthenticator authenticates users.
- false - The DseAuthenticator does not authenticate users and allows all connections.
Default: false
- default_scheme
- The first scheme to validate a user against when the driver does not request a specific scheme.
- internal - Plain text authentication using the internal password authentication.
- ldap - Plain text authentication using pass-through LDAP authentication.
- kerberos - GSSAPI authentication using the Kerberos authenticator.
Default: internal
- other_schemes
- List of schemes that are checked if validation against the first scheme fails and no scheme was specified by the driver.
- ldap - Plain text authentication using pass-through LDAP authentication.
- kerberos - GSSAPI authentication using the Kerberos authenticator.
Default: none
- scheme_permissions
- Determines if roles need to have permission granted to them to use specific authentication schemes. These permissions can be granted only when the DseAuthorizer is used.
- true - Use multiple schemes for authentication. To be assigned, every role requires permissions to a scheme.
- false - Do not use multiple schemes for authentication. Prevents unintentional role assignment that might occur if user or group names overlap in the authentication service.
Default: false
- allow_digest_with_kerberos
- Controls whether DIGEST-MD5 authentication is allowed with Kerberos. Kerberos uses DIGEST-MD5 to pass credentials between nodes and jobs. The DIGEST-MD5 mechanism is not associated directly with an authentication scheme.
- true - Allow DIGEST-MD5 authentication with Kerberos. In analytics clusters, set to true to use Hadoop internode authentication with Hadoop and Spark jobs.
- false - Do not allow DIGEST-MD5 authentication with Kerberos.
Default: true
- plain_text_without_ssl
- Controls how the DseAuthenticator responds to plain text authentication requests over unencrypted client connections.
- block - Block the request with an authentication error.
- warn - Log a warning but allow the request.
- allow - Allow the request without any warning.
Default: warn
- transitional_mode
- Sets transitional mode for temporary use during authentication setup in an established environment. Transitional mode allows access to the database using the anonymous role, which has all permissions except AUTHORIZE.
Important: Credentials are required for all connections after authentication is enabled; use a blank username and password to login with anonymous role in transitional mode.
- disabled - Disable transitional mode. All connections must provide valid credentials and map to a login-enabled role.
- permissive - Only super users are authenticated and logged in. All other authentication attempts are logged in as the anonymous user.
- normal - Allow all connections that provide credentials. Maps all authenticated users to their role, and maps all other connections to
anonymous.
- strict - Allow only authenticated connections that map to a login-enabled role OR connections that provide a blank username and password as
anonymous.
Default: disabled
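For reference, a minimal sketch of an authentication_options block that enables DSE Unified Authentication with internal credentials as the default scheme and LDAP as a fallback; the values shown are illustrative, not recommendations:
authentication_options:
    enabled: true
    default_scheme: internal
    other_schemes:
        - ldap
    scheme_permissions: false
    plain_text_without_ssl: warn
    transitional_mode: disabled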
Role management options
# role_management_options:
#     mode: internal
- role_management_options
- Configures the DSE Role Manager. To enable the role manager, set:
  - authorization_options enabled to true
  - role_manager in cassandra.yaml to com.datastax.bdp.cassandra.auth.DseRoleManager
  Tip: See Setting up logins and users. When scheme_permissions is enabled, all roles must have permission to execute on the authentication scheme. See Binding a role to an authentication scheme.
- mode
- Manages granting and revoking of roles.
- internal - Manage granting and revoking of roles internally using the GRANT ROLE and REVOKE ROLE CQL statements. See Managing database access. Internal role management allows nesting roles for permission management.
- ldap - Manage granting and revoking of roles using an external LDAP server configured using the ldap_options. To configure an LDAP scheme, complete the steps in Defining an LDAP scheme. Nesting roles for permission management is disabled.
Default: internal
- stats
- Set to true to enable logging of DSE role creation and modification events in the dse_security.role_stats system table. All nodes must have the stats option enabled, and must be restarted for the functionality to take effect.
- To query role events:
SELECT * FROM dse_security.role_stats;

 role  | created                         | password_changed
-------+---------------------------------+---------------------------------
 user1 | 2020-04-13 00:44:09.221000+0000 |                            null
 user2 | 2020-04-12 23:49:21.457000+0000 | 2020-04-12 23:49:21.457000+0000

(2 rows)
Default: commented out (false)
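For example, a sketch of a role_management_options block that assigns roles from LDAP group names and records role events; it assumes an LDAP scheme is already configured:
role_management_options:
    mode: ldap
    stats: true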
Authorization options
# authorization_options:
#     enabled: false
#     transitional_mode: disabled
#     allow_row_level_security: false
- authorization_options
- Configures the DSE Authorizer to authorize users when the authorization option in cassandra.yaml is set to com.datastax.bdp.cassandra.auth.DseAuthorizer.
- enabled
- Enables the DSE Authorizer for role-based access control (RBAC).
- true - Enable the DSE Authorizer for RBAC.
- false - Do not use the DSE Authorizer.
Default: false
- transitional_mode
- Allows the DSE Authorizer to operate in a temporary mode during authorization setup in a cluster.
- disabled - Transitional mode is disabled.
- normal - Permissions can be passed to resources, but are not enforced.
- strict - Permissions can be passed to resources, and are enforced on authenticated users. Permissions are not enforced against anonymous users.
Default: disabled
- allow_row_level_security
- Enables row-level access control (RLAC) permissions. Use the same setting on all nodes. See Setting up Row Level Access Control (RLAC).
- true - Use row-level security.
- false - Do not use row-level security.
Default: false
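As an illustration, an authorization_options block that enforces permissions and allows row-level access control might look like the following sketch:
authorization_options:
    enabled: true
    transitional_mode: disabled
    allow_row_level_security: true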
Kerberos options
kerberos_options:
    keytab: resources/dse/conf/dse.keytab
    service_principal: dse/_HOST@REALM
    http_principal: HTTP/_HOST@REALM
    qop: auth
- kerberos_options
- Configures security for a DataStax Enterprise cluster using Kerberos.
- keytab
- The filepath of dse.keytab.
Default: resources/dse/conf/dse.keytab
- service_principal
- The service_principal that the DataStax Enterprise process runs under must use the form dse_user/_HOST@REALM, where:
- dse_user is the username of the user that starts the DataStax Enterprise process.
- _HOST is converted to a reverse DNS lookup of the broadcast address.
- REALM is the name of your Kerberos realm. In the Kerberos principal, REALM must be uppercase.
Default: dse/_HOST@REALM
- http_principal
- Used by the Tomcat application container to run DSE Search. The Tomcat web server uses the GSSAPI mechanism (SPNEGO) to negotiate the GSSAPI security mechanism (Kerberos). REALM is the name of your Kerberos realm. In the Kerberos principal, REALM must be uppercase.
Default: HTTP/_HOST@REALM
- qop
- A comma-delimited list of Quality of Protection (QOP) values that clients and servers can use for each connection. The client can have multiple QOP values, while the server can have only a single QOP value.
- auth - Authentication only.
- auth-int - Authentication plus integrity protection for all transmitted data.
- auth-conf - Authentication plus integrity protection and encryption of all transmitted data.
Encryption using auth-conf is separate and independent of whether encryption is done using SSL. If both auth-conf and SSL are enabled, the transmitted data is encrypted twice. DataStax recommends choosing only one method and using it for encryption and authentication.
Default: auth
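For example, a hypothetical cluster in the EXAMPLE.COM realm that runs DSE under the dse user could use a kerberos_options block like this sketch:
kerberos_options:
    keytab: resources/dse/conf/dse.keytab
    service_principal: dse/_HOST@EXAMPLE.COM
    http_principal: HTTP/_HOST@EXAMPLE.COM
    qop: auth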
LDAP options
Multiple LDAP servers are supported. The ldap_options.server_port parameter is used by default, so there is no change in configuration for existing users who have LDAP configured.
- A connection pool is created for each server separately. Once a connection is attempted, the best pool is chosen using a heuristic. DSE uses a circuit breaker to temporarily disable servers that frequently fail to connect, and tries to choose the pool that has the greatest number of idle connections.
- Failover parameters are configured through system properties.
- Starting in DSE 6.8.2, a method was added to the LDAP MBean to reset LDAP connectors, that is, close all connection pools and recreate them.
# ldap_options:
#     server_host:
#     server_port: 389
#     hostname_verification: false
#     search_dn:
#     search_password:
#     use_ssl: false
#     use_tls: false
#     truststore_path:
#     truststore_password:
#     truststore_type: jks
#     user_search_base:
#     user_search_filter: (uid={0})
#     user_memberof_attribute: memberof
#     group_search_type: directory_search
#     group_search_base:
#     group_search_filter: (uniquemember={0})
#     group_name_attribute: cn
#     credentials_validity_in_ms: 0
#     search_validity_in_seconds: 0
#     connection_pool:
#         max_active: 8
#         max_idle: 8
ldap_options:
    server_host: win2012ad_server.mycompany.lan
    server_port: 389
    search_dn: cn=lookup_user,cn=users,dc=win2012domain,dc=mycompany,dc=lan
    search_password: lookup_user_password
    use_ssl: false
    use_tls: false
    truststore_path:
    truststore_password:
    truststore_type: jks
    #group_search_type: directory_search
    group_search_type: memberof_search
    #group_search_base:
    #group_search_filter:
    group_name_attribute: cn
    user_search_base: cn=users,dc=win2012domain,dc=mycompany,dc=lan
    user_search_filter: (sAMAccountName={0})
    user_memberof_attribute: memberOf
    connection_pool:
        max_active: 8
        max_idle: 8
- ldap_options
- Configures LDAP security when the authenticator option in cassandra.yaml is set to
com.datastax.bdp.cassandra.auth.DseAuthenticator.
- server_host
- A comma-separated list of LDAP server hosts. Important: Do not use LDAP on the same host (localhost) in production environments. Using LDAP on the same host (localhost) is appropriate only in single node test or development environments.
Default: none
- server_port
- The port on which the LDAP server listens.
- 389 - The default port for unencrypted connections.
- 636 - Used for encrypted connections. Default SSL or TLS port for LDAP.
Default: 389
- hostname_verification
- Enable hostname verification. The following conditions must be met:
- Either use_ssl or use_tls must be set to true.
- A valid truststore with the correct path specified in truststore_path must exist. The truststore must have a certificate entry, trustedCertEntry, including a SAN DNSName entry that matches the hostname of the LDAP server.
Default: false
- search_dn
- Distinguished name (DN) of an account with read access to the user_search_base and group_search_base. For example:
  - OpenLDAP: uid=lookup,ou=users,dc=springsource,dc=com
  - Microsoft Active Directory (AD): cn=lookup, cn=users, dc=springsource, dc=com
  When not set, the LDAP server uses an anonymous bind for search.
  Warning: Do not create/use an LDAP account or group called cassandra. The DSE database comes with a default cassandra login role that has access to all database objects and uses the consistency level QUORUM.
  Default: commented out
- search_password
- The password of the search_dn account.
Default: commented out
- use_ssl
- Enables an SSL-encrypted connection to the LDAP server.Tip: See Defining an LDAP scheme.
- true - Use an SSL-encrypted connection.
- false - Do not enable SSL connections to the LDAP server.
Default: false
- use_tls
- Enables TLS connections to the LDAP server.
- true - Enable TLS connections to the LDAP server.
- false - Do not enable TLS connections to the LDAP server
Default: false
- truststore_path
- The filepath to the SSL certificates truststore.
Default: commented out
- truststore_password
- The password to access the truststore.
Default: commented out
- truststore_type
- Valid types are JKS, JCEKS, or PKCS12.
Default: jks
- user_search_base
- Distinguished name (DN) of the object to start the recursive search for user entries for authentication and role management memberof searches.
- For your LDAP domain, set the ou and dc elements. Typically set to ou=users,dc=domain,dc=top_level_domain. For example, ou=users,dc=example,dc=com.
- For your Active Directory, set the dc element for a different search base. Typically set to CN=search,CN=Users,DC=ActDir_domname,DC=internal. For example, CN=search,CN=Users,DC=example-sales,DC=internal.
Default: none
- user_search_filter
- Identifies the user that the search filter uses for looking up usernames.
- uid={0} - When using LDAP.
- samAccountName={0} - When using AD (Microsoft Active Directory). For example,
(sAMAccountName={0}).
Default: uid={0}
- user_memberof_attribute
- Contains a list of group names. Role manager assigns DSE roles that exactly match any group name in the list. Required when managing roles using group_search_type: memberof_search with LDAP (role_manager.mode: ldap). The directory server must have memberof support, which is a default user attribute in Microsoft Active Directory (AD).
Default: memberof
- group_search_type
- Defines how group membership is determined for a user. Required when managing roles with LDAP (role_manager.mode: ldap).
- directory_search - Filters the results with a subtree search of group_search_base to find groups that contain the username in the attribute defined in the group_search_filter.
- memberof_search - Recursively searches for user entries using the user_search_base and user_search_filter. Gets groups from the user attribute defined in user_memberof_attribute. The directory server must have memberof support.
Default: directory_search
- group_search_base
- The unique distinguished name (DN) of the group record from which to start the group membership search.
Default: commented out
- group_search_filter
- Set to any valid LDAP filter.
Default: uniquemember={0}
- group_name_attribute
- The attribute in the group record that contains the LDAP group name. Role names are case-sensitive and must match exactly on DSE for assignment. Unmatched groups are ignored.
Default: cn
- credentials_validity_in_ms
- A credentials cache improves performance by reducing the number of requests that are sent to the internal or LDAP server. See Defining an LDAP scheme.
  - 0 - Disable credentials cache.
  - duration period - The duration period in milliseconds of the credentials cache.
  Note: Starting in DSE 6.8.2, the upper limit for ldap_options.credentials_validity_in_ms increased to 864,000,000 ms, which is 10 days.
  Default: 0
- search_validity_in_seconds
- Configures a search cache to improve performance by reducing the number of requests that are sent to the internal or LDAP server.
  - 0 - Disables search credentials cache.
  - positive number - The duration period in seconds for the search cache.
  Note: Starting in DSE 6.8.2, the upper limit for ldap_options.search_validity_in_seconds increased to 864,000 seconds, which is 10 days.
  Default: 0
- connection_pool
- Configures the connection pool for making LDAP requests.
- max_active
- The maximum number of active connections to the LDAP server.
Default: 8
- max_idle
- The maximum number of idle connections in the pool awaiting requests.
Default: 8
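To complement the Active Directory example above, the following sketch shows an OpenLDAP-style configuration that uses directory_search group lookup; the host name, DNs, and password are placeholders:
ldap_options:
    server_host: ldap1.example.com
    server_port: 389
    search_dn: uid=lookup,ou=users,dc=example,dc=com
    search_password: lookup_password
    user_search_base: ou=users,dc=example,dc=com
    user_search_filter: (uid={0})
    group_search_type: directory_search
    group_search_base: ou=groups,dc=example,dc=com
    group_search_filter: (uniquemember={0})
    group_name_attribute: cn
    connection_pool:
        max_active: 8
        max_idle: 8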
Encrypt sensitive system resources
Options to encrypt sensitive system resources using a local encryption key or a remote KMIP key.
system_info_encryption:
    enabled: false
    cipher_algorithm: AES
    secret_key_strength: 128
    chunk_length_kb: 64
    key_provider: KmipKeyProviderFactory
    kmip_host: kmip_host_name
- system_info_encryption
- Sets the encryption settings for system resources that might contain sensitive information, including the system.batchlog and system.paxos tables, hint files, and the database commit log.
- enabled
- Enables encryption of system resources. See Encrypting system resources.
  - true - Enable encryption of system resources.
  - false - Do not enable encryption of system resources.
  Note: The system_trace keyspace is not encrypted by enabling the system_information_encryption section. In environments that also have tracing enabled, manually configure encryption with compression on the system_trace keyspace. See Transparent data encryption.
  Default: false
- cipher_algorithm
- The name of the JCE cipher algorithm used to encrypt system resources. Default: AES
- secret_key_strength
- Length of key to use for the system resources. See Table 1. Note: DSE uses a matching local key or requests the key type from the KMIP server. For KMIP, if an existing key does not match, the KMIP server automatically generates a new key.
  Default: 128
- chunk_length_kb
- Optional. Size of SSTable chunks when data from the system.batchlog or system.paxos are written to disk. Note: To encrypt existing data, run nodetool upgradesstables -a system batchlog paxos on all nodes in the cluster.
  Default: 64
- key_provider
- KMIP key provider to enable encrypting sensitive system data with a KMIP key. Comment out if using a local encryption key.
Default: KmipKeyProviderFactory
- kmip_host
- The KMIP key server host. Set to the kmip_group_name that defines the KMIP host in kmip_hosts section. DSE requests a key from the KMIP host and uses the key generated by the KMIP provider.
Default: kmip_host_name
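As a point of comparison, a sketch of system resource encryption that uses a local encryption key instead of KMIP simply omits the key_provider and kmip_host settings:
system_info_encryption:
    enabled: true
    cipher_algorithm: AES
    secret_key_strength: 128
    chunk_length_kb: 64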
Encrypted configuration properties
system_key_directory: /etc/dse/conf
config_encryption_active: false
config_encryption_key_name: (key_filename | KMIP_key_URL)
- system_key_directory
- Path to the directory where local encryption key files are stored, also called system keys. Distributes the system keys to all nodes in the cluster. Ensure the DSE account is the folder owner and has read/write/execute (700) permissions. See Setting up local encryption keys. Note: This directory is not used for KMIP keys.
Default: /etc/dse/conf
- config_encryption_active
- Enables encryption on sensitive data stored in tables and in configuration files.
- true - Enable encryption of configuration property values using the specified config_encryption_key_name. When set to true, the configuration values must be encrypted or commented out. See Encrypting configuration file properties. Restriction: Lifecycle Manager (LCM) is not compatible when config_encryption_active is true in DSE and OpsCenter. For LCM limitations, see Encrypted DSE configuration values.
- false - Do not enable encryption of configuration property values.
Default: false
- config_encryption_key_name
- The local encryption key filename or KMIP key URL to use for configuration file property value decryption. Note: Use dsetool encryptconfigvalue to generate encrypted values for the configuration file properties.
  Default: system_key
  Note: The default name is not configurable.
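For example, a sketch that activates configuration property encryption with the default key name and key directory:
system_key_directory: /etc/dse/conf
config_encryption_active: true
config_encryption_key_name: system_key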
KMIP encryption options
kmip_hosts:
    your_kmip_groupname:
        key_cache_millis: 300000
        timeout: 1000
- kmip_hosts
- Configures connections for key servers that support the KMIP protocol.
- kmip_groupname
- A user-defined name for a group of options to configure a KMIP server or servers, key settings, and certificates. For each KMIP key server or group of KMIP key servers, you must configure options for a kmip_groupname section. Using separate key server configuration settings allows use of different key servers to encrypt table data and eliminates the need to enter key server configuration information in Data Definition Language (DDL) statements and other configurations. DDL statements are database schema change commands like CREATE TABLE. Multiple KMIP hosts are supported.
- Default: commented out
- hosts
- A comma-separated list of KMIP hosts (host[:port]) using the FQDN (Fully Qualified Domain Name). Add KMIP hosts in the intended failover sequence because DSE queries the host in the listed order.
For example, if the host list contains kmip1.yourdomain.com, kmip2.yourdomain.com, DSE tries kmip1.yourdomain.com and then kmip2.yourdomain.com.
- keystore_path
- The path to a Java keystore created from the KMIP agent PEM files.
Default: /etc/dse/conf/KMIP_keystore.jks
- keystore_type
- Valid types are JKS, JCEKS, PKCS11, and PKCS12. For file-based keystores, use PKCS12.
Default: JKS
- keystore_password
- Password used to protect the private key of the key pair.
Default: none
- truststore_path
- The path to a Java truststore that was created using the KMIP root certificate.
Default: /etc/dse/conf/KMIP_truststore
- truststore_password
- Password required to access the keystore.
Default: none
- key_cache_millis
- Milliseconds to locally cache the encryption keys that are read from the KMIP hosts. The longer the encryption keys are cached, the fewer requests to the KMIP key server are made and the longer it takes for changes, like revocation, to propagate to the DSE node. DataStax Enterprise uses concurrent encryption, so multiple threads fetch the secret key from the KMIP key server at the same time. DataStax recommends using the default value.
Default: 300000
- timeout
- Socket timeout in milliseconds.
Default: 1000
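Putting these options together, a sketch of a named KMIP group with two key servers might look like the following; host names and passwords are placeholders:
kmip_hosts:
    my_kmip_group:
        hosts: kmip1.example.com, kmip2.example.com
        keystore_path: /etc/dse/conf/KMIP_keystore.jks
        keystore_type: jks
        keystore_password: keystore_password
        truststore_path: /etc/dse/conf/KMIP_truststore
        truststore_password: truststore_password
        key_cache_millis: 300000
        timeout: 1000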
DSE Search index encryption
# solr_encryption_options:
#     decryption_cache_offheap_allocation: true
#     decryption_cache_size_in_mb: 256
- solr_encryption_options
- Tunes encryption of search indexes.
- decryption_cache_offheap_allocation
- Allocates shared DSE Search decryption cache off JVM heap.
- true - Allocate shared DSE Search decryption cache off JVM heap.
- false - Do not allocate shared DSE Search decryption cache off JVM heap.
Default: true
- decryption_cache_size_in_mb
- The maximum size of the shared DSE Search decryption cache in megabytes (MB).
Default: 256
DSE In-Memory options
To use DSE In-Memory, specify how much system memory to use for all in-memory tables by fraction or size.
# max_memory_to_lock_fraction: 0.20
# max_memory_to_lock_mb: 10240
- max_memory_to_lock_fraction
- A fraction of the system memory. For example, 0.20 allows use of up to 20% of system memory. This setting is ignored if max_memory_to_lock_mb is set to a non-zero value.
Default: 0.20
- max_memory_to_lock_mb
- Maximum amount of memory in megabytes (MB) for DSE In-Memory tables.
- not set - Use the fraction specified with
max_memory_to_lock_fraction.
- number greater than 0 - Maximum amount of memory in megabytes (MB).
Default: 10240
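For example, to cap DSE In-Memory tables by fraction rather than absolute size, set only the fraction and leave the size commented out, as in this sketch:
max_memory_to_lock_fraction: 0.25
# max_memory_to_lock_mb: 10240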
Node health options
node_health_options:
    refresh_rate_ms: 60000
    uptime_ramp_up_period_seconds: 10800
    dropped_mutation_window_minutes: 30
- node_health_options
- Node health options are always enabled. Node health is a score-based representation of how healthy a node is to handle search queries. See Collecting node health and indexing status scores.
- refresh_rate_ms
- How frequently statistics update.
  Default: 60000 (1 minute)
- uptime_ramp_up_period_seconds
- The period, in seconds, over which a restarted node's uptime ramps its health score back up. If repair is run after a node restart, increase the uptime period to the expected repair time.
  Default: 10800 (3 hours)
- dropped_mutation_window_minutes
- The historic time window over which the rate of dropped mutations affects the node health score.
Default: 30
Health-based routing
enable_health_based_routing: true
- enable_health_based_routing
- Enables node health as a consideration for replication selection for distributed DSE Search queries. Health-based routing enables a trade-off between index consistency and query throughput.
- true - Consider node health when multiple candidates exist for a particular token range.
- false - Ignore node health for replication selection.
  Default: true
Lease metrics
- lease_metrics_options
- Configures reporting of log entries related to lease holders.
  - enabled
  - Enables log entries related to lease holders to help monitor performance of the lease subsystem.
    - true - Enable log entries related to lease holders.
    - false - Do not enable log entries.
  Default: false
- ttl_seconds
- Time interval in milliseconds to persist the log of lease holder changes.
Default: 604800
DSE Search
Scheduler settings for DSE Search indexes
To ensure that records with time-to-live (TTL) are purged from search indexes when they expire, the search indexes are periodically checked for expired documents.
ttl_index_rebuild_options:
    fixed_rate_period: 300
    initial_delay: 20
    max_docs_per_batch: 4096
    thread_pool_size: 1
- ttl_index_rebuild_options
- Configures the schedulers in charge of querying for expired records, removing expired records, and the execution of the checks.
- fix_rate_period
- Time interval in seconds between successive checks for expired documents.
  Default: 300
- max_docs_per_batch
- The maximum number of expired documents deleted from the index during each check. To avoid memory pressure, their unique keys are retrieved and then deletes are issued in batches.
  Default: 4096
- thread_pool_size
- The maximum number of search indexes (cores) that can execute TTL cleanup concurrently. Manages system resource consumption and prevents many search cores from executing simultaneous TTL deletes.
Default: 1
Reindexing of bootstrapped data
async_bootstrap_reindex: false
- async_bootstrap_reindex
- For DSE Search, configure whether to asynchronously reindex bootstrapped data.
- true - The node joins the ring immediately after bootstrap and reindexing occurs asynchronously. Do not wait for post-bootstrap reindexing so that the node is not marked down. The dsetool ring command can be used to check the status of the reindexing.
- false - The node joins the ring after reindexing the bootstrapped data.
Default: false
CQL Solr paging
cql_solr_query_paging: off
- cql_solr_query_paging
- driver - Respects driver paging settings. Uses Solr pagination (cursors) only when the driver uses pagination. Enabled automatically for DSE SearchAnalytics workloads.
- off - Paging is off. Ignore driver paging settings for CQL.
Default: off
Solr CQL query option
cql_solr_query_row_timeout: 10000
- cql_solr_query_row_timeout
- The maximum time in milliseconds to wait for all rows to be read from the database during CQL Solr queries.
Default: 10000 (10 seconds)
DSE Search resource upload limit
solr_resource_upload_limit_mb: 10
- solr_resource_upload_limit_mb
- Configures the maximum size, in megabytes (MB), of search index resource files (such as the search index config or schema) that can be uploaded. A value of 0 disables resource uploading.
  Default: 10
Shard transport
shard_transport_options:
    netty_client_request_timeout: 60000
- shard_transport_options
- Fault tolerance option for internode communication.
  - netty_client_request_timeout
  - The timeout, in milliseconds, for shard requests during distributed search query processing.
  Default: 60000 (1 minute)
DSE Search indexing
# back_pressure_threshold_per_core: 1024
# flush_max_time_per_core: 5
# load_max_time_per_core: 5
# enable_index_disk_failure_policy: false
# solr_data_dir: /MyDir
# solr_field_cache_enabled: false
# ram_buffer_heap_space_in_mb: 1024
# ram_buffer_offheap_space_in_mb: 1024
- back_pressure_threshold_per_core
- The maximum number of queued partitions during search index rebuilding and reindexing. This maximum number safeguards against excessive heap use by the indexing queue. If set lower than the number of threads per core (TPC), not all TPC threads can be actively indexing.
Default: 1024
- flush_max_time_per_core
- The maximum time, in minutes, to wait for the flushing of asynchronous index updates that occurs at DSE Search commit time or at flush time.CAUTION: Expert knowledge is required to change this value.Always set the wait time high enough to ensure flushing completes successfully to fully sync DSE Search indexes with the database data. If the wait time is exceeded, index updates are only partially committed and the commit log is not truncated which can undermine data durability.Note: When a timeout occurs, this node is typically overloaded and cannot flush in a timely manner. Live indexing increases the time to flush asynchronous index updates.
Default: 5
- load_max_time_per_core
- The maximum time, in minutes, to wait for each DSE Search index to load on startup or create/reload operations. This advanced option should be changed only if exceptions happen during search index loading.
Default: 5
- enable_index_disk_failure_policy
- Whether to apply the configured disk failure policy if IOExceptions occur during index update operations.
- true - Apply the configured Cassandra disk failure policy to index write failures
- false - Do not apply the disk failure policy
Default: false
- solr_data_dir
- The directory to store index data. See Managing the location of DSE Search data. By default, each DSE Search index is saved in solr_data_dir/keyspace_name.table_name or as specified by the
dse.solr.data.dirsystem property.
Default: A solr.data directory in the cassandra data directory, like /var/lib/cassandra/solr.data
- solr_field_cache_enabled
- The Apache Lucene® field cache is deprecated. Instead, for fields that are sorted, faceted, or grouped by, set
docValues="true"on the field in the search index schema. Then reload the search index and reindex.
Default: false
- ram_buffer_heap_space_in_mb
- Global Lucene RAM buffer usage threshold for heap to force segment flush. Setting too low can cause a state of constant flushing during periods of ongoing write activity. For near-real-time (NRT) indexing, forced segment flushes also de-schedule pending auto-soft commits to avoid potentially flushing too many small segments.
Default: 1024
- ram_buffer_offheap_space_in_mb
- Global Lucene RAM buffer usage threshold for offheap to force segment flush. Setting too low can cause a state of constant flushing during periods of ongoing write activity. For NRT, forced segment flushes also de-schedule pending auto-soft commits to avoid potentially flushing too many small segments. When not set, the default is 1024.
Default: 1024
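For reference, a sketch of these indexing settings uncommented, with an illustrative custom index data directory:
back_pressure_threshold_per_core: 1024
flush_max_time_per_core: 5
load_max_time_per_core: 5
enable_index_disk_failure_policy: false
solr_data_dir: /data/solr.data
ram_buffer_heap_space_in_mb: 1024
ram_buffer_offheap_space_in_mb: 1024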
Performance Service
The Performance Service settings are grouped into: Global Performance Service, Performance Service, DSE Search Performance Service, and Spark Performance Service.
Global Performance Service
Collection tasks are dropped when the number of pending tasks exceeds performance_max_threads + performance_queue_capacity. When a task is dropped, collected statistics might not be current.
# performance_core_threads: 4
# performance_max_threads: 32
# performance_queue_capacity: 32000
- performance_core_threads
- Number of background threads used by the performance service under normal conditions.
Default: 4
- performance_max_threads
- Maximum number of background threads used by the performance service.
Default: 32
- performance_queue_capacity
- Allowed number of queued tasks in the backlog when the number of
performance_max_threadsare busy.
Default: 32000
Performance Service
Configures the collection of performance metrics on transactional nodes. Performance metrics are stored in the dse_perf keyspace and can be queried using any CQL-based utility, such as cqlsh or any application using a CQL driver. To temporarily make changes for diagnostics and testing, use the dsetool perf subcommands.
graph_events:
    ttl_seconds: 600
- graph_events
- Graph event information.
- ttl_seconds
- Number of seconds a record survives before it is expired.
Default: 600
# cql_slow_log_options:
#     enabled: true
#     threshold: 200.0
#     minimum_samples: 100
#     ttl_seconds: 259200
#     skip_writing_to_db: true
#     num_slowest_queries: 5
- cql_slow_log_options
- Configures reporting of CQL queries that take longer than a specified period of time (the slow query log).
- enabled
- true - Enables log entries for slow queries.
- false - Does not enable log entries.
Default: true
- threshold
- The threshold in milliseconds or as a percentile.
- A value greater than 1 is expressed in time and will log queries that take longer than the specified number of milliseconds. For example, 200.0 sets the threshold at 0.2 seconds.
- A value of 0 to 1 is expressed as a percentile and will log queries that exceed this percentile. For example, .95 collects information on 5% of the slowest queries.
Default: 200.0
- minimum_samples
- The initial number of queries before activating the percentile filter.
Default: commented out (100)
- ttl_seconds
- Number of seconds a slow log record survives before it is expired.
Default: 259200
- skip_writing_to_db
- Keeps slow queries only in-memory and does not write data to database.
- true - Keep slow queries only in-memory. Skip writing to database.
- false - Write slow query information in the node_slow_log table. The threshold must be >= 2000 ms to prevent a high load on the database.
  Default: commented out (true)
- num_slowest_queries
- The number of slow queries to keep in-memory.
Default: commented out (5)
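For example, a sketch that logs slow CQL queries to the database with a conservative 2000 ms threshold, as suggested by the note above:
cql_slow_log_options:
    enabled: true
    threshold: 2000.0
    ttl_seconds: 259200
    skip_writing_to_db: false
    num_slowest_queries: 5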
cql_system_info_options:
    enabled: false
    refresh_rate_ms: 10000
- cql_system_info_options
- Configures collection of system-wide performance information about a cluster.
- enabled
- Enables collection of system-wide performance information about a cluster.
- true - Collect metrics.
- false - Do not collect metrics.
Default: false
- refresh_rate_ms
- The length of the sampling period in milliseconds; the frequency to update the performance statistics.
Default: 10000 (10 seconds)
resource_level_latency_tracking_options:
    enabled: false
    refresh_rate_ms: 10000
- resource_level_latency_tracking_options
- Configures collection of object I/O performance statistics.Tip: See Collecting system level diagnostics.
- enabled
- Enables collection of object input output performance statistics.
- true - Collect metrics.
- false - Do not collect metrics.
Default:
false
- refresh_rate_ms
- The length of the sampling period in milliseconds; the frequency to update the performance statistics.
Default: 10000 (10 seconds)
db_summary_stats_options:
    enabled: false
    refresh_rate_ms: 10000
- db_summary_stats_options
- Configures collection of summary statistics at the database level.Tip: See Collecting database summary diagnostics.
- enabled
- Enables collection of database summary performance information.
- true - Collect metrics.
- false - Do not collect metrics.
Default:
false
- refresh_rate_ms
- The length of the sampling period in milliseconds; the frequency to update the performance statistics.
Default: 10000 (10 seconds)
cluster_summary_stats_options:
    enabled: false
    refresh_rate_ms: 10000
- cluster_summary_stats_options
- Configures collection of statistics at a cluster-wide level.Tip: See Collecting cluster summary diagnostics.
- enabled
- Enables collection of statistics at a cluster-wide level.
- true - Collect metrics.
- false - Do not collect metrics.
Default:
false
- refresh_rate_ms
- The length of the sampling period in milliseconds; the frequency to update the performance statistics.
Default: 10000 (10 seconds)
- spark_cluster_info_options
- Configures collection of data associated with Spark cluster and Spark applications.
spark_cluster_info_options:
    enabled: false
    refresh_rate_ms: 10000
- enabled
- Enables collection of Spark performance statistics.
- true - Collect metrics.
- false - Do not collect metrics.
Default:
false
- refresh_rate_ms
- The length of the sampling period in milliseconds; the frequency to update the performance statistics.
Default: 10000 (10 seconds)
histogram_data_options:
    enabled: false
    refresh_rate_ms: 10000
    retention_count: 3
- histogram_data_options
- Histogram data for the dropped mutation metrics are stored in the dropped_messages table in the dse_perf keyspace.Tip: See Collecting histogram diagnostics.
- enabled
- true - Collect metrics.
- false - Do not collect metrics.
Default:
false
- refresh_rate_ms
- The length of the sampling period in milliseconds; the frequency to update the performance statistics.
Default: 10000 (10 seconds)
- retention_count
- Default: 3
user_level_latency_tracking_options: enabled: false refresh_rate_ms: 10000 top_stats_limit: 100 quantiles: false
- user_level_latency_tracking_options
- User-resource latency tracking settings.Tip: See Collecting user activity diagnostics.
- enabled
- true - Collect metrics.
- false - Do not collect metrics.
Default:
false
- refresh_rate_ms
- The length of the sampling period in milliseconds; the frequency to update the performance statistics.
Default: 10000 (10 seconds)
- top_stats_limit
- The maximum number of individual metrics.
Default: 100
- quantiles
Default: false
DSE Search Performance Service
solr_slow_sub_query_log_options: enabled: false ttl_seconds: 604800 async_writers: 1 threshold_ms: 3000
- solr_slow_sub_query_log_options
- See Collecting slow search queries.
- enabled
- true - Collect metrics.
- false - Do not collect metrics.
Default:
false
- ttl_seconds
- The number of seconds a record survives before it is expired.
Default:
604800(about 10 minutes)
- async_writers
- The number of server threads dedicated to writing in the log. More than one server thread might degrade performance.
Default:
1
- threshold_ms
Default:
3000
solr_update_handler_metrics_options: enabled: false ttl_seconds: 604800 refresh_rate_ms: 60000
- solr_update_handler_metrics_options
- Options to collect search index direct update handler statistics over time.Tip: See Collecting handler statistics.
solr_request_handler_metrics_options: enabled: false ttl_seconds: 604800 refresh_rate_ms: 60000
- solr_request_handler_metrics_options
- Options to collect search index request handler statistics over time.Tip: See Collecting handler statistics.
solr_index_stats_options: enabled: false ttl_seconds: 604800 refresh_rate_ms: 60000
- solr_index_stats_options
- Options to record search index statistics over time.Tip: See Collecting index statistics.
solr_cache_stats_options: enabled: false ttl_seconds: 604800 refresh_rate_ms: 60000
- solr_cache_stats_options
- See Collecting cache statistics.
solr_latency_snapshot_options: enabled: false ttl_seconds: 604800 refresh_rate_ms: 60000
- solr_latency_snapshot_options
- See Collecting Apache Solr performance statistics.
Spark Performance Service
spark_application_info_options: enabled: false refresh_rate_ms: 10000 driver: sink: false connectorSource: false jvmSource: false stateSource: false executor: sink: false connectorSource: false jvmSource: false
- spark_application_info_options
- Collection of Spark application metrics.
- enabled
- true - Collect metrics.
- false - Do not collect metrics.
Default: false
- refresh_rate_ms
- The length of the sampling period in milliseconds; the frequency to update the performance statistics.
Default: 10000 (10 seconds)
- driver
- Collection that configures collection of metrics at the Spark Driver.
- connectorSource
- Enables collecting Spark Cassandra Connector metrics at the Spark Driver.
- true - Collect metrics.
- false - Do not collect metrics.
Default: false
- jvmSource
- Enables collection of JVM heap and garbage collection (GC) metrics from the Spark Driver.
- true - Collect metrics.
- false - Do not collect metrics.
Default: false
- stateSource
- Enables collection of application state metrics at the Spark Driver.
- true - Collect metrics.
- false - Do not collect metrics.
Default: false
- executor
- Configures collection of metrics at Spark executors.
- sink
- Enables collecting metrics collected at Spark executors.
- true - Collect metrics.
- false - Do not collect metrics.
Default: false
- connectorSource
- Enables collection of Spark Cassandra Connector metrics at Spark executors.
- true - Collect metrics.
- false - Do not collect metrics.
Default: false
- jvmSource
- Enables collection of JVM heap and GC metrics at Spark executors.
- true - Collect metrics.
- false - Do not collect metrics.
Default: false
DSE Analytics
Spark resource options
spark_shared_secret_bit_length: 256 spark_security_enabled: false spark_security_encryption_enabled: false spark_daemon_readiness_assertion_interval: 1000 resource_manager_options: worker_options: cores_total: 0.7 memory_total: 0.6 workpools: - name: alwayson_sql cores: 0.25 memory: 0.25
- The length of a shared secret used to authenticate Spark components and encrypt the connections between them. This value is not the strength of the cipher for encrypting connections.
Default: 256
- spark_security_enabled
When DSE authentication is enabled with authentication_options, Spark security is enabled regardless of this setting.
Default: false
- spark_security_encryption_enabled
- When DSE authentication is enabled with authentication_options, Spark security encryption is enabled regardless of this setting.Tip: Configure encryption between the Spark processes and DSE with client-to-node encryption in cassandra.yaml.
Default: false
- spark_daemon_readiness_assertion_interval
- Time interval in milliseconds between subsequent retries by the Spark plugin for Spark Master and Worker readiness to start.
Default: 1000
- resource_manager_options
- Controls the physical resources used by Spark applications on this node. Optionally add named workpools with specific dedicated resources. See Core management.
- worker_options
- Configures the amount of system resources that are made available to the Spark Worker.
- cores_total
- The number of total system cores available to Spark.Note: The
SPARK_WORKER_TOTAL_CORESenvironment variables takes precedence over this setting.
The lowest value that you can assign to Spark Worker cores is 1 core. If the results are lower, no exception is thrown and the values are automatically limited.Note: Setting
cores_totalor a workpool's
coresto 1.0 is a decimal value, meaning 100% of the available cores will be reserved. Setting
cores_totalor
coresto 1 (no decimal point) is an explicit value, and one core will be reserved.
- Default: 0.7
- memory_total
- The amount of total system memory available to Spark.
Note: The
- absolute value - Use standard suffixes like M for megabyte and G for gigabyte. For example, 12G.
- decimal value - Maximum fraction of system memory to give all executors for all applications running on a particular node. For example, 0.8.When the value is expressed as a decimal, the available resources are calculated in the following way:
The lowest values that you can assign to Spark Worker memory is 64 MB. If the results are lower, no exception is thrown and the values are automatically limited.
Spark Worker memory = memory_total x (total system memory - memory assigned to DataStax Enterprise)
SPARK_WORKER_TOTAL_MEMORYenvironment variables takes precedence over this setting.
Default: 0.6
- workpools
- A collection of named workpools that can use a portion of the total resources defined under
worker_options.
A default workpool namedThe total amount of resources defined in the
defaultis used if no workpools are defined in this section. If workpools are defined, the resources allocated to the workpools are taken from the total amount, with the remaining resources available to the
defaultworkpool.
workpoolssection must not exceed the resources available to Spark in
worker_options.
- name
- The name of the workpool. A workpool named
alwayson_sqlis created by default for AlwaysOn SQL. By default, the
alwayson_sqlworkpool is configured to use 25% of the resources available to Spark.
Default: alwayson_sql
- cores
- The number of system cores to use in this workpool expressed as an absolute value or a decimal value. This option follows the same rules as
cores_total.
- memory
- The amount of memory to use in this workpool expressed as either an absolute value or a decimal value. This option follows the same rules as
memory_total.
Spark encryption options
spark_ui_options: encryption: inherit encryption_options: enabled: false keystore: resources/dse/conf/.ui-keystore keystore_password: cassandra require_client_auth: false truststore: .truststore truststore_password: cassandra # Advanced settings # protocol: TLS # algorithm: SunX509 # keystore_type: JKS # truststore_type: JKS # cipher_suites: ]
- spark_ui_options
- Configures encryption for Spark Master and Spark Worker UIs. These options apply only to Spark daemon UIs, and do not apply to user applications even when the user applications are run in cluster mode.Tip: To set permissions on roles to allow Spark applications to be started, stopped, managed, and viewed, see Using authorization with Spark
- encryption
- The source for SSL settings.
- inherit - Inherit the SSL settings from the client_encryption_options in cassandra.yaml.
- custom - Use the following encryption_options in dse.yaml.
- encryption_options
- When
encryption: custom, configures encryption for HTTPS of Spark Master and Worker UI.
- enabled
- Enables Spark encryption for Spark client-to-Spark cluster and Spark internode communication.
Default: false
- keystore
- The keystore for Spark encryption keys.
The relative filepath is the base Spark configuration directory that is defined by the
SPARK_CONF_DIRenvironment variable. The default Spark configuration directory is resources/spark/conf.
Default: resources/dse/conf/.ui-keystore
- keystore_password
- The password to access the keystore.
Default: cassandra
- require_client_auth
- Enables custom truststore for client authentication.
- true - Require custom truststore for client authentication.
- false - Do not require custom truststore.
Default: false
- truststore
- The filepath to the truststore for Spark encryption keys if
require_client_auth: true.
The relative filepath is the base Spark configuration directory that is defined by theDefault: resources/dse/conf/.ui-truststore
SPARK_CONF_DIRenvironment variable. The default Spark configuration directory is resources/spark/conf.
- truststore_password
- The password to access the truststore.
Default: cassandra
- protocol
- The Transport Layer Security (TLS) authentication protocol. The TLS protocol must be supported by JVM and Spark. TLS 1.2 is the most common JVM default.
Default: JVM default
- algorithm
- The key manager algorithm.
Default: SunX509
- keystore_type
- Valid types are JKS, JCEKS, PKCS11, and PKCS12. For file-based keystores, use PKCS12.
Default: JKS
- truststore_type
- Valid types are JKS, JCEKS, and PKCS12.
Default: commented out (
JKS)
- cipher_suites
- A comma-separated list of cipher suites for Spark encryption. Enclose the list in square brackets.
-
Starting Spark drivers and executors
spark_process_runner: runner_type: default run_as_runner_options: user_slots: - slot1 - slot2
- spark_process_runner:
- Configures how Spark driver and executor processes are created and managed. See Running Spark processes as separate users.
- runner_type
- default - Use the default runner type.
- run_as - Spark applications run as a different OS user than the DSE service user.
- run_as_runner_options
- When
runner_type: run_as, Spark applications run as a different OS user than the DSE service user.
- user_slots
- The list slot users to separate Spark processes users from the DSE service user.
Default: slot1, slot2
AlwaysOn SQL
Properties to enable and configure AlwaysOn SQL on analytics nodes.
# AlwaysOn SQL options # alwayson_sql_options: # enabled: false # thrift_port: 10000 # web_ui_port: 9077 # reserve_port_wait_time_ms: 100 # alwayson_sql_status_check_wait_time_ms: 500 # workpool: alwayson_sql # log_dsefs_dir: /spark/log/alwayson_sql # auth_user: alwayson_sql # runner_max_errors: 10 # heartbeat_update_interval_seconds: 30
- alwayson_sql_options
- Configures the AlwaysOn SQL server.
- enabled
- Enables AlwaysOn SQL for this node.
- true - Enable AlwaysOn SQL for this node. The node must be an analytics node. Set workpools in Spark resource_manager_options.
- false - Do not enable AlwaysOn SQL for this node.
Default: false
- thrift_port
- The Thrift port on which AlwaysOn SQL listens.
Default: 10000
- web_ui_port
- The port on which the AlwaysOn SQL web UI is available.
Default: 9077
- reserve_port_wait_time_ms
- The wait time in milliseconds to reserve the
thrift_portif it is not available.
Default: 100
- alwayson_sql_status_check_wait_time_ms
- The time in milliseconds to wait for a health check status of the AlwaysOn SQL server.
Default: 500
- workpool
- The named workpool used by AlwaysOn SQL.
Default: alwayson_sql
- log_dsefs_dir
- Location in DSEFS of the AlwaysOn SQL log files.
Default: /spark/log/alwayson_sql
- auth_user
- The role to use for internal communication by AlwaysOn SQL if authentication is enabled. Custom roles must be created with
login=true.
Default: alwayson_sql
- runner_max_errors
- The maximum number of errors that can occur during AlwaysOn SQL service runner thread runs before stopping the service. A service stop requires a manual restart.
Default: 10
- heartbeat_update_interval_seconds
- The time interval to update heartbeat of AlwaysOn SQL. If heartbeat is not updated for more than three times the interval, AlwaysOn SQL automatically restarts.
Default: 30
DSE File System (DSEFS)
# dsefs_options: # enabled: # keyspace_name: dsefs # work_dir: /var/lib/dsefs # public_port: 5598 # private_port: 5599 # data_directories: # - dir: /var/lib/dsefs/data # storage_weight: 1.0 # min_free_space: 268435456
- dsefs_options
- Configures DSEFS. See Configuring DSEFS.
- enabled
- Enables DSEFS.
- true - Enables DSEFS on this node, regardless of the workload.
- false - Disables DSEFS on this node, regardless of the workload.
- blank or commented out (#) - DSEFS starts only if the node is configured to run analytics workloads.
Default:
-.
Default: /var/lib/dsefs
- public_port
- The public port on which DSEFS listens for clients.Note: DataStax recommends that all nodes in the cluster have the same value. Firewalls must open this port to trusted clients. The service on this port is bound to the native_transport_address.
Default: 5598
- private_port
- The private port for DSEFS internode communication.CAUTION: that are different from the devices that are used for DataStax Enterprise. Using multiple directories on JBOD improves performance and capacity.
Default: /var/lib/dsefs/data
- storage_weight
- Weighting factor for this location. Determines TB), gigabyte (10 GB), and megabyte (5000 MB).
Default: 268435456
# service_startup_timeout_ms:
- service_startup_timeout_ms
- Wait time in milliseconds before the DSEFS server times out while waiting for services to bootstrap.
Default: 60000
- service_close_timeout_ms
- Wait time in milliseconds before the DSEFS server times out while waiting for services to close.
Default: 60000
- server_close_timeout_ms
- Wait time in milliseconds that the DSEFS server waits during shutdown before closing all pending connections.
Default: 2147483647
- compression_frame_max_size
- The maximum accepted size of a compression frame defined during file upload.
Default: 1048576
- query_cache_size
- Maximum number of elements in a single DSEFS Server query cache.
Default: 2048
- query_cache_expire_after_ms
- The time to retain the DSEFS Server query cache element in cache. The cache element expires when this time is exceeded.
Default: 2000
- gossip options
- Configures DSEFS gossip rounds.
- round_delay_ms
- The delay in milliseconds between gossip rounds.
Default: 2000
- startup_delay_ms
- The delay in milliseconds between registering the location and reading back all other locations from the database.
Default: 5000
- shutdown_delay_ms
- The delay time in milliseconds between announcing shutdown and shutting down the node.
Default: 30000
- rest_options
- Configures DSEFS rest times.
- request_timeout_ms
- The time in milliseconds that the client waits for a response that corresponds to a given request.
Default: 330000
- connection_open_timeout_ms
-
- idle_connection_timeout_ms
- The time in milliseconds for RestClient to wait before closing an idle connection. If RestClient does not close connection after timeout, the connection is closed after 2 x this wait time.
- time - Wait time to close idle connection.
- 0 - Disable closing idle connections.
Default: 60000
- internode_idle_connection_timeout_ms
- Wait time in milliseconds before closing idle internode connection. The internode connections are primarily used to exchange data during replication. Do not set lower than the default value for heavily utilized DSEFS clusters.
Default: 0
- core_max_concurrent_connections_per_host
- Maximum number of connections to a given host per single CPU core. DSEFS keeps a connection pool for each CPU core.
Default: 8
- transaction_options
- Configures DSEFS transaction times.
- transaction_timeout_ms
- Transaction run time in milliseconds before the transaction is considered for timeout and rollback.
Default: 3000
- conflict_retry_delay_ms
- Wait time in milliseconds before retrying a transaction that was ended due to a conflict.
Default: 200
- conflict_retry_count
- The number of times to retry a transaction before giving up.
Default: 40
- execution_retry_delay_ms
- Wait time in milliseconds before retrying a failed transaction payload execution.
Default: 1000
- execution_retry_count
- The number of payload execution retries before signaling the error to the application.
Default: 3
- block_allocator_options
- Controls how much additional data can be placed on the local coordinator before the local node overflows to the other nodes. The trade-off is between data locality of writes and balancing the cluster. A local node is preferred for a new block allocation, if:
used_size_on_the_local_node < average_used_size_per_node x overflow_factor + overflow_margin
- overflow_margin_mb
- margin_size - Overflow margin size in megabytes.
- 0 - Disable block allocation overflow
Default: 1024
- overflow_factor
- factor - Overflow factor on an exponential scale.
- 1.0 - Disable block allocation overflow
Default: 1.05
DSE Metrics Collector
# insights_options: # data_dir: /var/lib/cassandra/insights_data # log_dir: /var/log/cassandra/
Uncomment these options only to change the default directories.
- insights_options
- Options for DSE Metrics Collector.
-/
Audit logging for database activities
audit_logging_options: enabled: false logger: SLF4JAuditWriter # included_categories: # excluded_categories: # # included_keyspaces: # excluded_keyspaces: # # included_roles: # excluded_roles:
- audit_logging_options
- Configures database activity logging.
- enabled
- Enables database activity auditing.
- true - Enable database activity auditing.
- false - Disable database activity auditing.
Default: false
- logger
- The logger to use for recording events:
Tip: Configure logging level, sensitive data masking, and log file name/location in the logback.xml file.
- SLF4JAuditWriter - Capture events in a log file.
- CassandraAuditWriter - Capture events in the
dse_audit.audit_logtable.
Default:
SLF4JAuditWriter
- included_categories
- Comma-separated list of event categories that are captured.. When specifying included categories leave excluded_categories blank or commented out.
Default: none (include all categories)
- excluded_categories
- Comma-separated list of categories to ignore, where the categories are:.
Default: exclude no categories
- included_keyspaces
- Comma-separated list of keyspaces for which events are logged. You can also use a regular expression to filter on keyspace name.Warning: DSE supports using either
included_keyspacesor
excluded_keyspacesbut not both.
Default: include all keyspaces
- excluded_keyspaces
- Comma-separated list of keyspaces to exclude. You can also use a regular expression to filter on keyspace name.
Default: exclude no keyspaces
- included_roles
- Comma-separated list of the roles for which events are logged.Warning: DSE supports using either
included_rolesor
excluded_rolesbut not both.
Default: include all roles
- excluded_roles
- The roles for which events are not logged. Specify a comma separated list role names.
Default: exclude no roles
Cassandra audit writer options
retention_time: 0 cassandra_audit_writer_options: mode: sync batch_size: 50 flush_time: 250 queue_size: 30000 write_consistency: QUORUM # dropped_event_log: /var/log/cassandra/dropped_audit_events.log # day_partition_millis: 3600000
- retention_time
- The number of hours to retain audit events by supporting loggers for the CassandraAuditWriter.
- hours - The number of hours to retain audit events.
- 0 - Retain events forever.
Default: 0
- cassandra_audit_writer_options
- Audit writer options.
- mode
- The mode the writer runs in.
-.Important: While async substantially improves performance under load, if there is a failure between when a query is executed, and its audit event is written to the table, the audit table might be missing entries for queries that were executed.
Default: sync
-
- queue_size
- The size of the queue feeding the asynchronous audit log writer threads.
- Number of events - When there are more events being produced than the writers can write out, the queue fills up, and newer queries are blocked until there is space on the queue.
- 0 - The queue size is unbounded, which can lead to resource exhaustion under heavy query load.
Default: 30000
- write_consistency
- The consistency level that is used to write audit events.
Default: QUORUM
- dropped_event_log
- The directory to store the log file that reports dropped events.
Default: /var/log/cassandra/dropped_audit_events.log
- day_partition_millis
- The time interval in milliseconds between changing nodes to spread audit log information across multiple nodes. For example, to change the target node every 12 hours, specify 43200000 milliseconds.
Default: 3600000 (1 hour)
DSE Tiered Storage
# tiered_storage_options: # strategy1: # tiers: # - paths: # - /mnt1 # - /mnt2 # - paths: [ /mnt3, /mnt4 ] # - paths: [ /mnt5, /mnt6 ] # # local_options: # k1: v1 # k2: v2 # # 'another strategy': # tiers: [ paths: [ /mnt1 ] ]
- tiered_storage_options
- Configures the smart movement of data across different types of storage media so that data is matched to the most suitable drive type, according to the required performance and cost characteristics.
- strategy1
- The first disk configuration strategy. Create a strategy2, strategy3, and so on. In this example, strategy1 is the configurable name of the tiered storage configuration strategy.
- tiers
- The unnamed tiers in this section configure a storage tier with the paths and filepaths that define the priority order.
- local_options
- Local configuration options overwrite the tiered storage settings for the table schema in the local dse.yaml file. See Testing DSE Tiered Storage configurations.
- - paths
- The section of filepaths that define the data directories for this tier of the disk configuration. List the fastest storage media first. These paths are used to store only data that is configured to use tiered storage and are independent of any settings in the cassandra.yaml file.
- - /filepath
- The filepaths that define the data directories for this tier of the disk configuration.
DSE Advanced Replication
# advanced_replication_options: # enabled: false # conf_driver_password_encryption_enabled: false # advanced_replication_directory: /var/lib/cassandra/advrep # security_base_path: /base/path/to/advrep/security/files/
- advanced_replication_options
- Configure DSE Advanced Replication.
- enabled
- Enables an edge node to collect data in the replication log.
Default: false
- conf_driver_password_encryption_enabled
- Enables encryption of driver passwords. See Encrypting configuration file properties.
Default: false
- advanced_replication_directory
- The directory for storing advanced replication CDC logs. The replication_logs directory will be created in the specified directory.
Default: /var/lib/cassandra/advrep
- security_base_path
- The base path to prepend to paths in the Advanced Replication configuration locations, including locations to SSL keystore, SSL truststore, and so on.
Default: /base/path/to/advrep/security/files/
Internode messaging
internode_messaging_options: port: 8609 # frame_length_in_mb: 256 # server_acceptor_threads: 8 # server_worker_threads: 16 # client_max_connections: 100 # client_worker_threads: 16 # handshake_timeout_seconds: 10 # client_request_timeout_seconds: 60
- internode_messaging_options
- Configures the internal messaging service used by several components of DataStax Enterprise. All internode messaging requests use this service.
- port
- The mandatory port for the internode messaging service.
Default: 8609
- frame_length_in_mb
- Maximum message frame length.
Default: 256
- server_acceptor_threads
- The number of server acceptor threads.
Default: The number of available processors
- server_worker_threads
- The number of server worker threads.
Default: The default is the number of available processors x 8
- client_max_connections
- The maximum number of client connections.
Default: 100
- client_worker_threads
- The number of client worker threads.
Default: The default is the number of available processors x 8
- handshake_timeout_seconds
- Timeout for communication handshake process.
Default: 10
- client_request_timeout_seconds
- Timeout for non-query search requests like core creation and distributed deletes.
Default: 60
DSE Multi-Instance
- server_id
- Unique generated ID of the physical server in DSE Multi-Instance /etc/dse-nodeId/dse.yaml files. You can change server_id when the MAC address is not unique, such as a virtualized server where the host’s physical MAC is cloned.
Default: the media access control address (MAC address) of the physical server
DataStax Graph (DSG)
- configDseYaml.html#configDseYaml__graphDSG system-level
- configDseYaml.html#configDseYaml__gremlin_serverDSG Gremlin Server options
DSG Gremlin Server
# gremlin_server: # port: 8182 # threadPoolWorker: 2 # gremlinPool: 0 # scriptEngines: # gremlin-groovy: # config: # sandbox_enabled: false # sandbox_rules: # whitelist_packages: # - package.name # whitelist_types: # - fully.qualified.type.name # whitelist_supers: # - fully.qualified.class.name # blacklist_packages: # - package.name # blacklist_supers: # - fully.qualified.class.name
- gremlin_server
- The top-level configurations in Gremlin Server.
- port
- The available communications port for Gremlin Server.
Default: 8182
- threadPoolWorker
- The number of worker threads that handle non-blocking read and write (requests and responses) on the Gremlin Server channel, including routing requests to the right server operations, handling scheduled jobs on the server, and writing serialized responses back to the client.
Default: 2
- gremlinPool
- This pool represents the workers available to handle blocking operations in Gremlin Server.
- 0 - the value of the JVM property cassandra.available_processors, if that property is set
- positive number - The number of Gremlin threads available to execute actual scripts in a ScriptEngine.
Default: the value of Runtime.getRuntime().availableProcessors()
- scriptEngines
- Configures gremlin server scripts.
- gremlin-groovy
- Configures for gremlin-groovy scripts.
- sandbox_enabled
- Configures gremlim groovy sandbox.
- true - Enable the gremlim groovy sandbox.
- false - Disable the gremlin groovy sandbox entirely.
Default: true
- sandbox_rules
- Configures sandbox rules.
- whitelist_packages
- List of packages, one package per line, to whitelist.
- -package.name
- The fully qualified package name.
- whitelist_types
- List of types, one type per line, to whitelist.
- -fully.qualified.type.name
- The fully qualified type name.
- whitelist_supers
- List of super classes, one class per line, to whitelist.
- -fully.qualified.class.name
- The fully qualified class name.
- blacklist_packages
- List of packages, one package per line, to blacklist.
- -package.name
- The fully qualified package name.
- blacklist_supers
- List of super classes, one class per line, to blacklist. Retain the hyphen before the fully qualified class name.
- -fully.qualified.class.name
- The fully qualified class name.
DSG system-level
# graph: # analytic_evaluation_timeout_in_minutes: 10080 # realtime_evaluation_timeout_in_seconds: 30 # schema_agreement_timeout_in_ms: 10000 # system_evaluation_timeout_in_seconds: 180 # adjacency_cache_size_in_mb: 128 # index_cache_size_in_mb: 128 # max_query_params: 16
- graph
- System-level configuration options and options that are shared between graph instances. Add an option if it is not present in the provided dse.yaml file.
Option names and values expressed in ISO 8601 format used in earlier DSE 5.0 releases are still valid. The ISO 8601 format is deprecated.
- analytic_evaluation_timeout_in_minutes
- Maximum time to wait for an OLAP analytic (Spark) traversal to evaluate.
Default: 10080 (168 hours)
- realtime_evaluation_timeout_in_seconds
- Maximum time to wait for an OLTP real-time traversal to evaluate.
Default: 30
- schema_agreement_timeout_in_ms
- Maximum time to wait for the database to agree on schema versions before timing out.
Default: 10000
- system_evaluation_timeout_in_seconds
- Maximum time to wait for a graph system-based request to execute, like creating a new graph.
Default: 180 (3 minutes)
- adjacency_cache_size_in_mb
- The amount of ram to allocate to each graph's adjacency (edge and property) cache.
Default: 128
- index_cache_size_in_mb
- The amount of ram to allocate to the index cache.
Default: 128
- max_query_params
- The maximum number of parameters that can be passed on a graph query request for TinkerPop drivers and drivers using the: 16 | https://docs.datastax.com/en/dse/6.8/dse-admin/datastax_enterprise/config/configDseYaml.html | 2020-08-03T12:59:47 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.datastax.com |
Joystick CHOP
Summary[edit]
The Joystick CHOP outputs values for all 6 possible axes on any game controller (joysticks, game controllers, driving wheels, etc.), as well as up to 32 button, 2 sliders and 4 POV Hats.
It handles game controllers connected to the gameport or USB ports, including the 3D Connexion mouse. You can have several devices attached, and any number of Joystick CHOPs in a project per device.
Before you use the game controller on your computer, calibrate them using Start -> Settings -> Control Panel -> Gaming Options -> Properties.
The main two outputs, the X-axis and Y-axis are output through channels called xaxis and yaxis. The other four axes are output through channels with similar names.
The range of the values for each channel is 0 to 1. For any axis, a value 0.5 is considered "centered". A value of 0 is given if the axis doesn't exist.
For any button, a value of 0 means the button is up or doesn't exist. A value of 1 means the button is pressed.
POV Hats behave like an X and Y axis. A POV axis only has 3 values though, 0, 0.5 and 1.
Contents
Parameters - Control Page
Joystick Source
source - This menu will list all the game controllers currently attached to the computer. The selected game controller is the one the CHOP reads data from. If the synth is saved with one joystick name, and the synth is moved to a machine with another joystick type, the CHOP will adopt the first game controller it find to replace the missing device.
Axis Range
axisrange - ⊞ -
- [-1, 1]
negoneone-
- [0, 1]
zeroone-
X Axis
xaxis - The name of the channel that records the X-axis position of the game controller.
Y Axis
yaxis - The name of the channel that records the Y-axis position of the game controller.
Invert Y Axis
yaxisinvert -
Z Axis
zaxis - The name of the channel that records the Z-axis position of the game controller.
X Rotation
xrot - The names of the channels that record the X-rotation axis position of the game controller.
Y Rotation
yrot - The names of the channels that record the Y-rotation axis position of the game controller.
Invert Y Rotation
yrotinvert -
Z Rotation
zrot - The names of the channels that record the Z-rotation axis position of the game controller.
Slider 1
slider0 - The name of the channel that records the position of the first slider on the game controller.
Slider 2
slider1 - The name of the channel that records the position of the second slider on the game controller.
Button Array
buttonarray - The names of the channels for the buttons on the game controller. This CHOP can handle up to 32 buttons.
POV Hat Array
povarrray - The names of the channels for the POV Hats. This CHOP can handle up to 4 POV Hats. The channels a POV hat is split up into are POVHatName_X and POVHatName_Y.
POV Hat State Array
povstatearray -
Connected
connected -
Axis Dead Zone
axisdeadzone - This value defines how much of the area in the center of the joystick is considered 'dead zone'. When a joystick axis is in this dead zone it is considered to be centered. This value applies to all normal axes and rotation axes. This value is a percentage that defaults to 7%.
Parameters - Channel Page
Sample Rate
rate -
Extend Left
left - ⊞ -
- Hold
hold-
- Slope
slope-
- Cycle
cycle-
- Mirror
mirror-
- Default Value
default-
Extend Right
right - ⊞ -
- Hold
hold-
- Slope
slope-
- Cycle
cycle-
- Mirror
mirror-
- Default Value
default-
Default Value
def custom interactive control panel built within TouchDesigner. Panels are created using Panel Components whose look is created entirely with TOPs.'. | https://docs.derivative.ca/Joystick_CHOP | 2020-08-03T12:37:31 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.derivative.ca |
Use the following options for your WHMCS:
- Module User Type : If you are an admin or reseller (Currently only Admin supported, Non of beta versions will have this function)
- Default Node : The node the virtual server will be built on if no overide is given
- Master Server : The master server this package is assigned to
- Default Plan : Then plan on the master the package is assigned to
- Virtualisation Type : Choose either OpenVZ, Xen-PV, Xen-HVM or KVM
- Default Operating System : The operating system that is used if not defined in configurable options.
- Username Prefix : This is the unique prefix that defines the username for each client. Set this the same in all plans. i.e: vmuser
- IP Addresses : The amount of ip addresses for this package
- Node Group : The node group this product should be built on (This will override the default node)
Click Save Changes. Select the Custom Fields tab. | https://docs.solusvm.com/display/DOCS/Options+Explained | 2020-08-03T11:34:58 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.solusvm.com |
Objects when the player gets or loses focus.
OnApplicationFocus is called when the application loses or
gains focus. Alt-tabbing or Cmd-tabbing can take focus away from the Unity
application to another desktop application. This causes the GameObjects to receive
an OnApplicationFocus call with the argument set to false. When the user
switches back to the Unity application, the GameObjects receive an OnApplicationFocus
call with the argument set to true.
OnApplicationFocus; } } | https://docs.unity3d.com/2020.2/Documentation/ScriptReference/MonoBehaviour.OnApplicationFocus.html | 2020-08-03T11:41:00 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.unity3d.com |
Use the Python SDK to extend the Release plugin for Deploy
You can extend the functionality of the official Release plugin for Deploy by using xldeploy-py, the Python SDK for Deploy.
The SDK allows interaction with Deploy over the REST services and supports the following:
- Deployfile
- Deployment
- Metadata
- Package
- Repository
- Tasks
Each of the above services has specific methods that correspond to the REST services offered by Deploy.
Note: Not all methods and services available in the REST API are currently supported by the SDK. The SDK is customizable depending on the requirements of the clients using it.
Custom tasks
In Release, you can create custom tasks. A custom task contains an XML section that becomes part of the
synthetic.xml in the
ext folder of the Release server, and a Python file stored to a location adjacent to
synthetic.xml.
For more information, see How to Create Custom Task Types.
To extend the Release plugin for Deploy, you can create custom tasks that can retrieve items from Deploy or perform actions in Deploy using the SDK. These custom tasks can extend the
xldeploy.XldTask to reuse properties required for any task related to Deploy.
Example of defining a custom task
Create a custom task to check if a CI exists on the Deploy Server. The new type to be included in
synthetic.xml contains one new
scriptLocation parameter representing the location of the python script within the
ext directory.
The other parameters are inherited from
xldeploy.XldTask.
Modify the
synthetic.xml
<type type="xld.CheckCIExist" extends="xldeploy.XldTask" label="XL-Deploy: Check CI exists" description="Custom Task to check if a CI exists"> <property name="scriptLocation" default="CheckExists.py" hidden="true"/> <property name="ci_path" category="input" label="CI Path" required="true"/> </type>
The
CheckExists.py Python script referred to in the sample XML above can perform the required actions using the Python SDK. The official plugin already contains methods to create the
xldeploy-py client. You must pass the
server property of the task as an argument to method
get_api_client(). The client returned can be used to call methods on any of the services mentioned above.
The
CheckExists.py python script
from xlrxldeploy import * client = get_api_client(task.getPythonScript().getProperty("server")) path="some/path/to/a/ci" print (client.repository.exists(path)) | https://docs.xebialabs.com/v.9.7/release/how-to/extend-official-xlr-xld-plugin-using-python-sdk/ | 2020-08-03T12:22:31 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.xebialabs.com |
Cisco ISE appliance
The Cisco ISE platform is a comprehensive, next-generation, contextually-based access control solution. Cisco ISE offers authenticated network access, profiling, posture, guest management, and security group access services along with monitoring, reporting, and troubleshooting capabilities on a single physical or virtual appliance.
More informations on
Starting ISE will start an installation of ISE onto a blank 200GB Drive. This will take time. The intial username is setup. This appliance requires KVM. You may try it on a system without KVM, but it will run really slow, if at all.
RAM: 4096 MB
You need KVM enable on your machine or in the GNS3 VM.
Documentation for using the appliance is available on | http://docs.gns3.com/appliances/cisco-ise.html | 2017-12-11T04:07:30 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.gns3.com |
Manual repair: Anti-entropy repair
Describe how manual repair works.
Anti-entropy node repairs are important for every Cassandra cluster. Frequent data deletions and downed nodes are common causes of data inconsistency. Use anti-entropy repair for routine maintenance and when a cluster needs fixing by running the nodetool repair command.
How does anti-entropy repair work?
- Build a Merkle tree for each replica
- Compare the Merkle trees to discover differences
Merkle trees are binary hash trees whose leaves are hashes of the individual key values. The leaf of a Cassandra Merkle tree is the hash of a row value. Each Parent node higher in the tree is a hash of its respective children. Because higher nodes in the Merkle tree represent data further down the tree, Casandra can check each branch independently without requiring the coordinator node to download the entire data set. For anti-entropy repair Cassandra uses a compact tree version with a depth of 15 (2^15 = 32K leaf nodes). For example, a node containing a million partitions with one damaged partition, about 30 partitions are streamed, which is the number that fall into each of the leaves of the tree. Cassandra works with smaller Merkle trees because they require less storage memory and can be transferred more quickly to other nodes during the comparison process.
After the initiating node receives the Merkle trees from the participating peer nodes, the initiating node compares every tree to every other tree. If the initiating node detects a difference, it directs the differing nodes to exchange data for the conflicting range(s). The new data is written to SSTables. The comparison begins with the top node of the Merkle tree. If Cassandra detects no difference between corresponding tree nodes, the process goes on to compares the left leaves (child nodes), then the right leaves. A difference between corresponding leaves indicates inconsistencies between the data in each replica for the data range that corresponds to that leaf. Cassandra replaces all data that corresponds to the leaves below the differing leaf with the newest version of the data.
Merkle tree building is quite resource intensive, stressing disk I/O and using memory. Some of the options discussed here help lessen the impact on the cluster performance. For details, see Repair in Cassandra.
You can run the
nodetool repair command on a specified node or on all
nodes. The node that initiates the repair becomes the coordinator node for the operation.
The coordinator node finds peer nodes with matching ranges of data and performs a major, or
validation, compaction on each peer node. The validation compaction builds a Merkle tree and
returns the tree to the initiating node. The initiating mode processes the Merkle trees as
described.
Full vs Incremental repair
The process described above represents what occurs for a full repair of a node's data:
Cassandra compares all SSTables for that node and makes necessary repairs. Cassandra 2.1 and
later support incremental repair. An incremental repair persists data that has already been
repaired, and only builds Merkle trees for unrepaired SSTables. This more efficient process
depends on new metadata that marks the rows in an SSTable as
repaired or
unrepaired.
If you run incremental repairs frequently, the repair process works with much smaller
Merkle trees. The incremental repair process works with Merkle trees as described above.
Once the process had reconciled the data and built new SSTables, the initiating node issues
an anti-compaction command. Anti-compaction is the process of segregating repaired and
unrepaired ranges into separate SSTables, unless the SSTable fits entirely within the
repaired range. If it does, the process just updates the SSTable's
repairedAt field.
- Size-tiered compaction splits repaired and unrepaired data into separate pools for separate compactions. A major compaction generates two SSTables, one for each pool of data.
- Leveled compaction performs size-tiered compaction on unrepaired data. After repair completes, Casandra moves data from the set of unrepaired SSTables to L0.
Full repair is the default in Cassandra 2.1 and earlier.
Parallel vs Sequential
Sequential repair takes action on one node after another. Parallel repair repairs all nodes with the same replica data at the same time.
Sequential repair takes a snapshot of each replica. Snapshots are hardlinks to existing SSTables. They are immutable and require almost no disk space. The snapshots are live until the repair is completed and then Cassandra removes them. The coordinator node compares the Merkle trees for one replica after the other, and makes required repairs from the snapshots. For example, for a table in a keyspace with a Replication factor RF=3 and replicas A, B and C, the repair command takes a snapshot of each replica immediately and then repairs each replica from the snapshots sequentially (using snapshot A to repair replica B, then snapshot A to repair replica C, then snapshot B to repair replica C).
Parallel repair works on nodes A, B, and C all at once. During parallel repair, the dynamic snitch processes queries for this table using a replica in the snapshot that is not undergoing repair.
Snapshots are hardlinks to existing SSTables. Snapshots are immutable and require almost no disk space. Repair requires intensive disk I/O because validation compaction occurs during Merkle tree construction. For any given replica set, only one replica at a time performs the validation compaction.
Partitioner range (
-pr)
nodetool repairon one node at a time, Cassandra may repair the same range of data several times (depending on the replication factor used in the keyspace). Using the partitioner range option,
nodetool repaironly repairs a specified range of data once, rather than repeating the repair operation needlessly. This decreases the strain on network resources, although
nodetool repairstill builds Merkle trees for each replica.
nodetool repair -pron EVERY node in the cluster to repair all data. Otherwise, some ranges of data will not be repaired.
Local (
-local, --in-local-dc) vs datacenter (
dc,
--in-dc) vs Cluster-wide
Consider carefully before using
nodetool repair across datacenters,
instead of within a local datacenter. When you run repair on a node using
-local or
--in-local-dc, the command runs only on nodes
within the same datacenter as the node that runs it. Otherwise, the command runs repair
processes on all nodes that contain replicas, even those in different datacenters. If run
over multiple datacenters,
nodetool repair increases network traffic
between datacenters tremendously, and can cause cluster issues. If the local option is too
limited, consider using the
-dc or
--in-dc options,
limiting repairs to a specific datacenter. This does not repair replicas on nodes in other
datacenters, but it can decrease network traffic while repairing more nodes than the local
options.
The
nodetool repair -pr option is good for repairs across multiple
datacenters, as the number of replicas in multiple datacenters can increase substantially.
For example, if you start
nodetool repair over two datacenters, DC1 and
DC2, each with a replication factor of 3,
repairmust build Merkle tables
for 6 nodes. The number of Merkle Tree increases linearly for additional datacenters.
-localrepairs:
- The
repairtool does not support the use of
-localwith the
-proption unless the datacenter's nodes have all the data for all ranges.
- Also, the tool does not support the use of
-localwith
-inc(incremental repair).
-dcparor
--dc-parallelto repair datacenters in parallel.
Endpoint range vs Subrange repair (
-st, --start-token, -et
--end-token)
A repair operation runs on all partition ranges on a node, or an endpoint range, unless you
use the
-st and
-et (or
-start-token and
-end-token ) options to run subrange repairs. When you specify a start
token and end token,
nodetool repair works between these tokens, repairing
only those partition ranges.
Subrange repair is not a good strategy because it requires generated token ranges. However, if you know which partition has an error, you can target that partition range precisely for repair. This approach can relieve the problem known as overstreaming, which ties up resources by sending repairs to a range over and over.
You can use subrange repair with Java to reduce overstreaming further. Send a Java
describe_splits call to ask for a split containing 32k partitions can
be iterated throughout the entire range incrementally or in parallel. Once the tokens are
generated for the split, you can pass them to
nodetool repair -st <start_token>
-et <end_token>. Add the
-local option to limit the repair to
the local datacenter. This reduces cross datacenter transfer. | https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsRepairNodesManualRepair.html | 2017-12-11T04:02:28 | CC-MAIN-2017-51 | 1512948512121.15 | [array(['../images/ops_inc_repair.png', None], dtype=object)] | docs.datastax.com |
Set-SPCentral
Administration
Syntax
Set-SPCentralAdministration -Port <Int32> [-AssignmentCollection <SPAssignmentCollection>] [-Confirm] [-WhatIf] [-SecureSocketsLayer] [<CommonParameters>]
Description
The
Set-SPCentralAdministration cmdlet sets the port for the Central Administration site.
For permissions and the most current information about Windows PowerShell for SharePoint Products, see the online documentation at ().
Examples
------------------EXAMPLE------------------
C:\PS>Set-SPCentralAdministration -Port 8282
This example sets the port for the Central Administration web application on the local farm to 8282.
Required Parameters
Specifies the administration port for the Central Administration site.
The type must be a valid port number; for example, 8080.
{{Fill SecureSocketsLayer Description}}
Displays a message that describes the effect of the command instead of executing the command.
For more information, type the following command:
get-help about_commonparameters | https://docs.microsoft.com/en-us/powershell/module/sharepoint-server/Set-SPCentralAdministration?view=sharepoint-ps | 2017-12-11T04:33:47 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.microsoft.com |
Bug Check 0x9C: MACHINE_CHECK_EXCEPTION
The MACHINE_CHECK_EXCEPTION bug check has a value of 0x0000009C. This bug check indicates that a fatal machine check exception has occurred.
Important This topic is for programmers. If you are a customer who has received a blue screen error code while using your computer, see Troubleshoot blue screen errors.
MACHINE_CHECK_EXCEPTION Parameters
The four parameters that are listed in the message have different meanings, depending on the processor type.
If the processor is based on an older x86-based architecture and has the Machine Check Exception (MCE) feature but not the Machine Check Architecture (MCA) feature (for example, the Intel Pentium processor), the parameters have the following meaning.
If the processor is based on a newer x86-based architecture and has the MCA feature and the MCE feature (for example, any Intel Processor of family 6 or higher, such as Pentium Pro, Pentium IV, or Xeon), or if the processor is an x64-based processor, the parameters have the following meaning.
On an Itanium-based processor, the parameters have the following meaning.
Note Parameter 1 indicates the type of violation.
Remarks
In Windows Vista and later operating systems, this bug check occurs only in the following circumstances.
- WHEA is not fully initialized.
- All processors that rendezvous have no errors in their registers.
For other circumstances, this bug check has been replaced with bug Check 0x124: WHEA_UNCORRECTABLE_ERROR in Windows Vista and later operating systems.
For more information about Machine Check Architecture (MCA), see the Intel or AMD Web sites. | https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/bug-check-0x9c--machine-check-exception | 2017-12-11T04:10:37 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.microsoft.com |
BurnEngine™ is a framework from ThemeBurn.com that adds advanced layout and cms features to OpeCart themes. It has unified control panel, which delivers consistent experience across different themes and stores. BurnEngine stands as a layer between OpenCart and the theme – it manages a huge amount of options to customize your shop, while keeping backward compatibility with third party OpenCart modules and extensions.
Why relying on BurnEngine™
BurnEngine helps you control every aspect of the theme – appearance, performance, data migration etc. Most importantly – it offers a single codebase. The main benefit from the single codebase is that all of the themes build on BurnEngine receive bug fixes and feature additions at once. Every BurnEngine compatible theme gets regular updates, regardless of its initial release date. This mitigates the negative effect of low (or even lacking) support for old themes that is typical for the theme market.
BurnEngine does not limit itself to cms settings only. It brings system-wide features like performance optimizations, SEO tools, ecommerce settings and many more. It has also internal plugin system, which guarantees that every installed BurnEngine extension is 100% compatible with all of the themes. The question `Is extension X compatible with my theme?` becomes thus obsolete.
BurnEngine is not a trendy, little tested technology that just came out of the wild. It lies on years of experience and testing from ThemeBurn team. It was incorporated under-the-hood in ThemeBurn’s themes much before its release as BurnEngine™ brand and powers many stores that operate for a long time. ThemeBurn’s long-term engagement guarantees your investment when relying on a BurnEngine powered theme.
By using a BurnEngine compatible theme you can count on continuous support and new features, because BurnEngine evolves separately from the theme. The BurnEngine release cycle is not targeted at specific theme. Every new BurnEngine version adds fixes and options to all existing themes that are compatible with it. You won’t be forced to buy new themes in order to get improved functionality.
How to obtain BurnEngine™
BurnEngine is not installed separately from your theme – it is included in your theme package. You don’t have to take additional actions to activate it. You can have unlimited themes that rely on BurnEngine – they are all managed from a single control panel. If you have bought a themeburn’s theme (Pavilion, Kiddos etc.) you already have BurnEngine – just proceed to installation. | http://docs.themeburn.com/burnengine/concepts/ | 2017-12-11T04:00:43 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.themeburn.com |
VS Code¶
If you use VS Code, you can also install the Imandra IDE Extension plugin to help with development, providing things like completion and syntax highlighting as well as full asynchronous proof checking and semantic editing, meaning you can see proved theorems, counterexamples and instances within the IDE as you type.
In order to install the standard extension, first follow the instructions above for installing Imandra. Then in VSCode itself VSCode go to the extensions view by following the instructions about the Extension Marketplace and searching for
Imandra IDE.
The extension will open automatically when a file with extension
.iml or
.ire is opened. The extension will look for the correct version of
ocamlmerlin on the opam switch associated with the folder in which the opened
.iml or
.ire file resides (defaulting to the current global switch). We recommend that the current global switch is that produced by the recommended installation of Imandra, as that contains all the artifacts to facilitate correct Imandra type inference, asynchronous proof checking and other language server features. Below are example images of the type information the Imandra IDE provides in VSCode.
With Simple Installation¶
If you have used the Simple installation instructions for Imandra then the VSCode extension should work automatically.
With Manual Installation¶
If you have used the Manual installation instructions for Imandra then it is necessary to modify some of the settings in VSCode by hand.
Pressing CMD+
, takes you to the settings section of VSCode. It is necessary to alter the following settings:
imandra_merlinand enter here the result of typing
which imandra-merlinin a terminal where you installed Imandra. So for example if you had installed Imandra in
~/imandrayou would add for this setting:
~/imandra/imandra-merlin
imandra-vscode-serverand enter here the result of typing
which imandra-vscode-serverthen
-serverthen
which imandra_network_clientin a terminal where you installed Imandra. So for example if you had installed Imandra in
~/imandrayou would add for this setting:
~/imandra/imandra-vscode-server -server ~/imandra/imandra_network_client
As is shown in the following screen shot.
| https://docs.imandra.ai/imandra-docs/notebooks/installation-vscode/ | 2019-12-05T20:26:12 | CC-MAIN-2019-51 | 1575540482038.36 | [array(['https://storage.googleapis.com/imandra-assets/images/docs/ImandraVSCodeManualOpam.png',
'Example settings screen'], dtype=object) ] | docs.imandra.ai |
Identity models and authentication in Microsoft Teams
Microsoft Teams support all the identity models that are available with Office 365. Supported identity models include:
Cloud Identity: In this model, a user is created and managed in Office 365 and stored in Azure Active Directory, and the password is verified by.
Configurations
Depending on your organization’s decisions of which identity model to implement and use, the implementation requirements may vary. Refer to the requirements table below to ensure that your deployment meets these prerequisites. If you have already deployed Office 365 and have already implemented the identity and authentication method, you may skip these steps.
Refer to Choosing a sign-in model for Office 365 and Understanding Office 365 identity and Azure Active Directory guides for additional details.
Multi-Factor Authentication
Office 365 plans support Multi-Factor Authentication (MFA) that increases the security of user logins to Office 365 services. With MFA for Office 365, users are required to acknowledge a phone call, text message, or an app notification on their smartphone after correctly entering their password. Only after this second authentication factor has been satisfied, can a user sign in.
Multi Factor authentication is supported with any Office 365 plan that includes Microsoft Teams. The Office 365 subscription plans that include Microsoft Teams are discussed later in the Licensing section below.
Once the users are enrolled for MFA, the next time a user signs in, they will see a message that asks them to set up their second authentication factor. Supported authentication methods are:
Feedback | https://docs.microsoft.com/en-us/MicrosoftTeams/identify-models-authentication?redirectSourcePath=%252fen-us%252farticle%252fModern-authentication-and-Microsoft-Teams-desktop-clients-71467704-92DB-4253-A086-5F63C4A886CA | 2019-12-05T19:54:52 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.microsoft.com |
For information about the Revenue Recognition feature that generates data in this export and how to configure this feature, visit Revenue Recognition
Revenue Recognition Schedules
The Revenue Recognition Schedules export provides a list of all schedules from each charge or credit and the corresponding revenue amortization information.
This export will only appear in the admin console for for sites that have enabled the feature.
Invoice Status Filter
All
Every invoice generated within your Recurly site, regardless of status.
Open
All invoices that have not received a payment attempt. This status has a value of 'pending' in the export and does not include invoice with a 'past_due' status. This status only exists for manual invoices. If you see this status for an automatically collected invoice, it may mean that the payment attempt had a transaction communication error. This filter will also include all credit invoices which have not been fully allocated to charge invoices.
Closed
All successfully collected invoices. This status has a value of 'paid' in the export and does not include the 'failed' status. It is important to note that the associated credit card, PayPal, or Amazon transaction with a paid invoice may not be settled with the gateway. Allow at least 1 to 3 business days for transaction settlement with the gateway. This will also include all credit invoices which have been fully allocated to charge invoices.
Past Due
All automatic invoices that attempted collection, but payment failed, or manual invoices that have reached their due date. Payment for automatic invoices will be retried automatically through the dunning process.
Failed
All invoices that have either experienced 20 declines, or have been through the dunning process without successful payment. These invoices will not be retried. It is important to note that failed invoices will clear the owed balance from the account and will not attempt any future collections, but the charge line items will still exist in the account history.
Time Range Filter
Created
The revenue recognition schedules export uses the invoice_created_at date column for the time range filter. In other words, all schedules from invoices created in the selected time range will be included in the results.
Export Contents
revenue_schedule_id
224934254870915
the unique identifier of the revenue schedule
invoice_number
192880
The invoice number from the Recurly UI.
invoice_date
2017-06-01 00:00:00 UTC
Creation date of the invoice.
invoice_state
paid, processing, open, failed
the current state of the invoice
account_code
11734
Account code being charged for this invoice.
accounting_code
monthly_fee
Internal accounting code for a specific invoice line item. This value will only populate if you define an accounting code for the line item. Accounting codes can be defined for all line items. Plan set-up fees and plan free trial charges will inherit the plan's accounting code.
line_item_id
3a6ae2555a2e2a76c077604bf5b90457
Unique internal identifier for the adjustment. Also called line_item_uuid or adjustment_uuid in other exports.
line_item_start_date
2017-06-01 12:00:00 UTC
Bill cycle start date for a specific invoice line item. Equivalent to line_item_start_date in the deprecated Invoices export.
line_item_end_date
2017-07-01 12:00:00 UTC
Bill cycle end date for a specific invoice line item. This date will not exist for custom charges (one-time).
origin
plan, add_on, credit, charge
the original source for a line item. A credit created from an original charge will have the value of the charge's origin. (plan = subscription fee, plan_trial = trial period 0 amount charge, setup_fee = subscription setup fee, add_on = subscription add-on fee, debit = custom charge through the UI or Adjustments API, one_time = custom charge through the Transactions API, credit = custom credit
days
31
number of days between line_item_start_date and line_item_end_date
amount_per_day
4.88
the total amount divided by the number of days
total_amount
1000
total amount of the charge to be recognized for this schedule, in other words "subtotal after discounts)
currency
USD
currency of the line item
schedule_type
at_range_start, evenly
the way in which revenue is recognized for that schedule, as defined in the plan settings for revenue recognition
revenue_recognition_date
2017-06-05 00:00:00 UTC
the date revenue is recognized. only populated if schedule type is at_range_start or at_rate_end
deferred_revenue_balance
evenly
reflects the total remaining deferred revenue to be recognized as of the date the export is requested
For example, if you specify an export date range of Dec 1- 31, the deferred revenue balance will calculate all remaining revenue on the schedule yet to be recognized as of December 31st.
arrears
0
the portion of the line item that applies to the past, i.e. before time range "start date"
month_1
20
the total amount recognized in a specific month. Note that the first month will be the current month
month_2
20
the total amount recognized in a specific month. The second column will be the month after the current month
month_3, etc.
20
the total amount recognized in a specific month. There will be a column for each month after the current month, creating 12 columns of months in total
future_revenue
90
revenue to be recognized in the future past 12 months from the current month
invoice_origin
purchase, refund
The event that created the invoice. Invoices issued before the Credit Invoices feature was enabled on your site will have either purchase or refund as the value. Once Credit Invoices is enabled, you can see new origins like renewal, immediate_change, and write_off. | https://docs.recurly.com/docs/revenue-recognition-export | 2019-12-05T20:55:39 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.recurly.com |
STV_WLM_QUERY_STATE
Records the current state of queries being tracked by WLM.
STV_WLM_QUERY_STATE is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see Visibility of Data in System Tables and Views.
Table Columns
Sample Query
The following query displays all currently executing queries in service classes greater than 4. For a list of service class IDs, see WLM Service Class IDs.
select xid, query, trim(state), queue_time, exec_time from stv_wlm_query_state where service_class > 4;
This query returns the following sample output:
xid | query | btrim | queue_time | exec_time -------+-------+---------+------------+----------- 100813 | 25942 | Running | 0 | 1369029 100074 | 25775 | Running | 0 | 2221589242 | https://docs.aws.amazon.com/redshift/latest/dg/r_STV_WLM_QUERY_STATE.html | 2019-12-05T19:16:07 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.aws.amazon.com |
This page provides information on the V-Ray IES Light.
Page Contents
Overview
The V-Ray IES Light is a V-Ray specific light source plugin that can be used to create physically accurate area lights.
UI Paths:
V-Ray Lights Toolbar > IES Light
Extensions > V-Ray > V-Ray Lights > IES Light
Main
Enabled () – Turns the VRayLight on and off.
Color – Specifies the color of the light.
Intensity – Specifies the strength of the light..
The Diameter parameter is only available when the Circle and Sphere shapes are selected.. | https://docs.chaosgroup.com/pages/viewpage.action?pageId=30835180 | 2019-12-05T19:19:10 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.chaosgroup.com |
We have provided two distinct methods of installing Codecov Enterprise. We highly suggest using Docker, which is the easiest and quickest deployment option.
There are two main methods when deploying with Docker Compose, with varying degrees of configuration, availability, and scale. It is recommended to read the Codecov Enterprise Deployment Strategies documentation.
However, if you're just seeking a trial/proof of concept deployment of Codecov Enterprise, see Deploying with Docker Compose.
Full deployment scripts using terraform can be found for AWS, GCP, and Azure (coming soon) here:
Supported pathways currently in progress, if you would like a custom deployment / orchestration, please reach out to us directly at [email protected]
Linux / bare metal deprecated
As of January 22nd, 2019, we have deprecated support for Linux / bare metal.
In line with industry best practices, we recommend placing your enterprise install of Codecov behind your company's firewall, or otherwise perform other access controls such that it is only accessible by trusted staff and employees. Other Best Practices | https://docs.codecov.io/docs/install-guide | 2019-12-05T20:34:57 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.codecov.io |
February 2013
Volume 28 Number 02
Patterns in Practice - Data Design for Adding Functionality to a Class
By Peter Vogel | February 2013
In my last column, I described a common business problem: A SalesOrder with multiple OrderLines, with each OrderLine specifying a Product being bought by a Customer and the SalesOptions the customer has chosen to apply to that OrderLine/Product combination (e.g. giftwrapping, expediting). Those SalesOptions affect how that OrderLine/Product combination will be processed, including calculating the price for that purchase. In that column, I looked at a couple of patterns that might support a solution that was maintainable, extendable, testable and understandable. In the end, I decided that calculating the price should be handled by the decorator pattern because it would allow the SalesOptions to interact as required by the organization’s business rules (e.g. expediting increases the price of the Product by a set percentage after all other discounts are applied and ignoring any costs associated with giftwrapping). To deal with the rest of the processing required by the SalesOptions, I decided my best choice was to implement some version of the Roles pattern.
This column will look at the data design required to support implementing the solution.
Basic Tables
I’m old fashioned enough to begin to develop any solution by designing the database tables that I’ll need in my relational database (of course, if this organization had enough sales orders they might need a big data solution—but that isn’t the case here). Obviously, I’ll need a table listing valid SalesOptions. The primary key for this SalesOptions table is the SalesOptionId and the table has at least one other column: the SalesOption description, which is displayed in the user interface when the user is selecting or reviewing the SalesOptions for an OrderLine. Any data about the SalesOption that doesn’t vary from one Product to another will also go in this table.
Because not all SalesOptions can be applied to all Products, I also need a table of valid SalesOptions and Product combinations (the company sells both “things” and services, for instance, and you can’t gift wrap a service). This ValidSalesOptionsForProducts table would have a primary key made up of two columns: the ProductId and SalesOptionId. The table might have some additional columns to hold data related to the relationship between a particular Product and SalesOption.
If there’s no data associated with a Product/SalesOption combination, however, there’s another data design possible. If most SalesOptions apply to most Products, it would be more efficient to create an InvalidSalesOptionsForProduct table that lists just the Products and SalesOptions that can’t be combined. If the organization has thousands of products, a table of exceptions would be more efficient than a table of allowed combinations.
And before I get tons of comments about using natural keys/real data as the primary keys of my table: You’re perfectly welcome to give the ValidSalesOptionsForProduct table a meaningless primary key (a GUID or Identity key of some kind) and apply a unique index to the combination of SalesOptionId and ProductId. SQL Server’s hashing mechanisms for building an index will almost certainly give your better performance whenever you need to use that meaningless key as the foreign key of some other table. But, for this discussion, I don’t need that meaningless primary key so I will ignore that option. That also applies to the next table that I’ll discuss.
For any particular OrderLine, I’ll also need a table to record the SalesOptions that have been applied to it. That SalesOptionForOrderLine table will have a primary key, or unique index, consisting of the OrderId, the OrderLineId, and the SalesOptionId.
Supporting SalesOptions
Finally, each SalesOption will have data associated with its application to a particular Product/OrderLine. For instance, if the user selects the expediting SalesOption, the user needs to select the level of expediting (NextDay or Urgent); for the giftwrapping SalesOption the user will need to specify the type of giftwrapping.
There are at least two different data designs that would support recording this information. One design is to simply add the required columns to the SalesOptionForOrderLine table: a GiftwrapId column for the giftwrapping option and the ExpediteLevel for the expediting option. For any particular SalesOption most of those columns would be Null; in other words, for the giftwrapping SalesOption the GiftwrapId column will have a valid value but the ExpediteLevel column will hold a Null value.
In the other design, each SalesOption would have its own SalesOptionData table (including ExpediteSalesOptionData and GiftwrapSalesOptionData). Each of these tables would have as its primary key the SalesOrderId and OrderLineId and the columns required by the SalesOption (e.g. the ExpediteSalesOptionData table would have an ExpediteLevel column, the GiftwrapSalesOptionData table would have GiftwrapId column).
There’s no universally right answer here—the right answer will depend on the way the business works. For instance, having a separate SalesOptionsData table for each SalesOption requires me to use one of two data access plans when processing the OrderLines in a SalesOrder.
- For each OrderLine, make a separate trip to the database to retrieve the row from the relevant SalesOptionData table (more trips = worse performance).
- Fetch all of the rows from all of the relevant SalesOptionsData table by joining the OrderLine to all of the SalesOptionsData table with outer joins (more outer joins = worse performance). This solution also requires me to rewrite this query every time a new SalesOption is added to the application.
If the number of OrderLines being processed at any one time is small (such as one SalesOrder’s worth of OrderLines which is, typically, less than six OrderLines), I could live with the performance hit that comes with either data access plan. I could also live with either data access plan if any particular part of the organization only needs to process a small number of specific SalesOptions (if the shipping department only needs to retrieve the information for the Expedite SalesOption). However, if the number of OrderLines being processed at any one time is large ( if I process many SalesOrders at a time) or if there’s a part of the organization that needs to handle all of the SalesOptions applied to an OrderLine then the performance impact could be crippling.
Looking at the business, I can see that any part of the organization will typically only be interested in a few specific SalesOptions applied to a Product/OrderLine combination. The one exception is the order taking application—however, it only works with one SalesOrder’s worth of OrderLines at a time. However, there are several places in the organization where many Orders are processed at once. The shipping department, for instance, prepares a day’s worth of shipping at a time to support combining shipments and reducing costs. That second fact drives me to adding nullable columns to the SalesOptionForOrderLine table.
I admit to having another reason for adding nullable columns to the SalesOptionForOrderLine table. I also know that the amount of data associated with a SalesOption is typically small. If I used individual tables I’d end up with tables that have only one or two columns (other than their primary key columns). I have a visceral objection to that though I’m not sure that I could justify it.
Putting all of this together, it means that, on the data side, adding a new SalesOption consists of adding:
- A row to the SalesOptions table
- Multiple rows to the ValidSalesOptionsForProduct table
- Additional columns to the SalesOptionForOrderLine table
Next Steps
And, of course, adding a new SalesOption requires creating the appropriate role object to hold the code for processing the SalesOption. So that’s next month’s column—the object model.
One of the things that you may have noticed is that I’ve frequently referred to the way that the organization works both in the design phase and in the implementation phase (for me, the implementation phase includes deciding on the details of the design). That’s another one of the assumptions of this column: conditions alter cases. Your deep understanding of how your organization works is critical not only in selecting the right pattern for your application but also in deciding what makes sense when it comes to implementing that pattern. That’s one of the reasons that I appreciate patterns so much: they’re a support rather than a straightjacket and allow me to tailor the solution to my clients’ needs.
Sidebar: Which Class Should Manage SalesOptions?
One of the comments made by a reader on the original column suggested that the OrderLine should be responsible for managing the SalesOptions rather than the Products. That’s a good question. Certainly, the data design that ties OrderLines to SalesOptions suggests that it’s a reusable choice. But it really leads to more interesting question for this column: What would be the basis for deciding where to put control of the SalesOption role objects? The question is made a little harder to decide because, in this organization, an OrderLine always has a Product assigned to it; disentangling the two business entities is hard to do.
The reason that I felt that the Product should manage the SalesOptions was because of the necessity of validating SalesOptions against the ValidSalesOptionsForProduct table—I assumed would be handled by code in the Product class and that the rest of the SalesOptions code would go in the Product also. However, I’m not sure that’s a compelling argument; The code in an OrderLine class could validate the Product associated with the OrderLine as easily as the Product class could because an OrderLine always knows what Product is associated with it.
One way to make the decision would be to look at the way that the business is run. If, after assigning a SalesOption to a Product/Orderline combination, is it possible to move that Product to another OrderLine in the SalesOrder? Or to change the Product assigned to the OrderLine? If either of those changes is possible, what happens to the SalesOptions? Do the SalesOptions follow the Product to another OrderLine or stay with the OrderLine? If you replace the Product on an OrderLine with a new Product, would it always retain the SalesOptions assigned to the original Product? The SalesOptions stay with the OrderLine, it suggests that the OrderLine is responsible for the SalesOptions.
Another way to answer the question is to look at how the classes will be used elsewhere in the business’ processes. If I know that, somewhere in the organization, Products and SalesOptions needed to be processed even when the Products aren’t associated with an OrderLine then I will have a compelling reason for keeping the responsibility of processing the SalesOption with the Product.
I do have one scenario where Products and SalesOptions are processed independently of an OrderLine: Some SalesOptions have different processing, depending on which Product the SalesOption is assigned to and regardless of the state of the OrderLine involved. For instance, expediting a “thing” is different from expediting a “service”; expediting a thing means shipping it earlier, while expediting a service means the team delivering the service goes to the customer’s site earlier. As a result, when an application asks for the Expediting role object for a Product, different Products will provide a different Role object.
There are other solutions to this problem, of course. For instance, should delivering a thing and a service earlier both be called “expediting”? Having different Strategy objects (one for things and one for services) might also resolve the problem. However, rather than make the OrderLine responsible for determining which Expediting role object to use with a Product or create a complex Expediting role object that can handle all Products, I’ll turn the responsibility for returning the right role object over to the Product.
Peter Vogel is the principal system architect in PH&V Information Services, specializing in SharePoint and service-oriented architecture (SOA) development, with expertise in user interface design. In addition, Peter is the author of four books on programming and wrote Learning Tree International’s courses on SOA design ASP.NET development taught in North America, Europe, Africa and Asia. | https://docs.microsoft.com/en-us/archive/msdn-magazine/2013/february/patterns-in-practice-data-design-for-adding-functionality-to-a-class | 2019-12-05T20:56:54 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.microsoft.com |
Configure and manage Azure Active Directory authentication with SQL
This article shows you how to create and populate Azure AD, and then use Azure AD with Azure SQL Database, managed instance, and SQL Data Warehouse. For an overview, see Azure Active Directory Authentication.
Note
This article applies to Azure SQL server, and to both SQL Database and SQL Data Warehouse databases that are created on the Azure SQL server. For simplicity, SQL Database is used when referring to both SQL Database and SQL Data Warehouse.
Important
Connecting to SQL Server running on an Azure VM is not supported using an Azure Active Directory account. Use a domain Active Directory account instead.
Create and populate an Azure AD
Create an Azure AD How Azure subscriptions are associated with Azure AD..
Create an Azure AD administrator for Azure SQL server
Each Azure SQL server (which hosts a SQL Database or SQL Data Warehouse). For more information about the server administrator accounts, see Managing Databases and Logins in Azure SQL Database. Azure SQL server administrator account), cannot create Azure AD-based users, because they do not have permission to validate proposed database users with the Azure AD.
Provision an Azure Active Directory administrator for your managed instance
Important
Only follow these steps if you are provisioning a managed instance. This operation can only be executed by Global/Company administrator or a Privileged Role Administrator in Azure AD. Following steps describe the process of granting permissions for users with different privileges in directory.
Note
For Azure AD admins for MI created prior to GA, but continue operating post GA, there is no functional change to the existing behavior. For more information, see the New Azure AD admin functionality for MI section for more details.
Your managed instance needs permissions to read Azure AD to successfully accomplish tasks such as authentication of users through security group membership or creation of new users. For this to work, you need to grant permissions to managed instance to read Azure AD. There are two ways to do it: from Portal and PowerShell. The following steps both methods.
In the Azure portal, in the upper-right corner, select your connection to drop down a list of possible Active Directories.
Choose the correct Active Directory as the default Azure AD.
This step links the subscription associated with Active Directory with Managed Instance making sure that the same subscription is used for both Azure AD and the Managed Instance.
Navigate to Managed Instance and select one that you want to use for Azure AD integration.
Select the banner on top of the Active Directory admin page and grant permission to the current user. If you're logged in as Global/Company administrator in Azure AD, you can do it from the Azure portal or using PowerShell with the script below.
# Gives Azure Active Directory read permission to a Service Principal representing the'." }
After the operation is successfully completed, the following notification will show up in the top-right corner:
Now you can choose your Azure AD admin for your managed instance. For that, on the Active Directory admin page, select Set admin command.
In the AAD Server.
At the top of the Active Directory admin page, select Save.
The process of changing the administrator may take several minutes. Then the new administrator appears in the Active Directory admin box.
After provisioning an Azure AD admin for your managed instance, you can begin to create Azure AD server principals (logins) with the CREATE LOGIN syntax. For more information, see managed instance overview.
Tip
To later remove an Admin, at the top of the Active Directory admin page, select Remove admin, and then select Save.
New Azure AD admin functionality for MI
The table below summarizes the functionality for the public preview Azure AD login admin for MI, versus a new functionality delivered with GA for Azure AD logins.
As a best practice for existing Azure AD admins for MI created before GA, and still operating post GA, reset the Azure AD admin using the Azure portal “Remove admin” and “Set admin” option for the same Azure AD user or group.
Known issues with the Azure AD login GA for MI
If an Azure AD login exists in the master database for MI, created using the T-SQL command
CREATE LOGIN [myaadaccount] FROM EXTERNAL PROVIDER, it can't be set up as an Azure AD admin for MI. You'll experience an error setting the login as an Azure AD admin using the Azure portal, PowerShell, or CLI commands to create the Azure AD login.
- The login must be dropped in the master database using the command
DROP LOGIN [myaadaccount], before the account can be created as an Azure AD admin.
- Set up the Azure AD admin account in the Azure portal after the
DROP LOGINsucceeds.
- If you can't set up the Azure AD admin account, check in the master database of the managed instance for the login. Use the following command:
SELECT * FROM sys.server_principals
- Setting up an Azure AD admin for MI will automatically create a login in the master database for this account. Removing the Azure AD admin will automatically drop the login from the master database.
Individual Azure AD guest users are not supported as Azure AD admins for MI. Guest users must be part of an Azure AD group to be set up as Azure AD admin. Currently, the Azure portal blade doesn't gray out guest users for another Azure AD, allowing users to continue with the admin setup. Saving guest users as an Azure AD admin will cause the setup to fail.
- If you wish to make a guest user an Azure AD admin for MI, include the guest user in an Azure AD group, and set this group as an Azure AD admin.
Cmdlets used to provision and manage Azure AD admin for SQL managed instance:
The following command gets information about an Azure AD administrator for a managed instance named ManagedInstanceName01 associated with the resource group ResourceGroup01.
Remove-AzSqlInstanceActiveDirectoryAdministrator -ResourceGroupName "ResourceGroup01" -InstanceName "ManagedInstanceName01" -Confirm -PassThru
Provision an Azure Active Directory administrator for your Azure SQL Database server
Important
Only follow these steps if you are provisioning an Azure SQL Database server or Data Warehouse.
The following two procedures show you how to provision an Azure Active Directory administrator for your Azure SQL server in the Azure portal and by using PowerShell.
Azure portal. (The Azure SQL server can be hosting either Azure SQL Database or Azure SQL Data Warehouse.)
In the left banner select All services, and in the filter type in SQL server. Select Sql Servers.
Note
On this page, before you select SQL servers, you can select the star next to the name to favorite the category and add SQL servers to the left navigation bar. SQL Data Warehouse.) SQL Server authentication user. If present, the Azure AD admin setup will fail; rolling back its creation and indicating that such an admin (name) already exists. Since such a SQL Server authentication user is not part of the Azure AD, any effort to connect to the server using Azure AD authentication fails.
To later remove an Admin, at the top of the Active Directory admin page, select Remove admin, and then select Save.
PowerShell for Azure SQL Database and Azure SQL Data Warehouse Azure SQL Database and Azure SQL Data Warehouse: Azure SQL Azure SQL Database or Azure SQL Data Warehouse using Azure AD identities, you must install the following software:
- .NET Framework 4.6 or later from.
- Azure Active Directory Authentication Library for SQL Server (ADALSQL.DLL) is available in multiple languages (both x86 and amd64) from the download center at Microsoft Active Directory Authentication Library for Microsoft SQL Server.
You can meet these requirements by:
- Installing either SQL Server 2016 Management Studio or SQL Server Data Tools for Visual Studio 2015 meets the .NET Framework 4.6 requirement.
- SSMS installs the x86 version of ADALSQL.DLL.
- SSDT installs the amd64 version of ADALSQL.DLL.
- The latest Visual Studio from Visual Studio Downloads meets the .NET Framework 4.6 requirement, but does not install the required amd64 version of ADALSQL.DLL.
Create contained database users in your database mapped to Azure AD identities
Important
Managed instance now supports Azure AD server principals (logins), which enables you to create logins from Azure AD users, groups, or applications. Azure AD server principals (logins) provides the ability to authenticate to your managed instance without requiring database users to be created as a contained database user. For more information, see managed instance Overview. For syntax on creating Azure AD server principals (logins), see CREATE LOGIN.. For more information about contained database users, see Contained Database Users- Making Your Database Portable.
Note
Database users (with the exception of administrators) cannot be created using the Azure portal. RBAC roles are not propagated to SQL Server, SQL Database, or SQL Data Warehouse. Azure RBAC roles are used for managing Azure Resources, and do not apply to database permissions. For example, the SQL Server Contributor role does not grant access to connect to the SQL Database or SQL Data Warehouse. The access permission must be granted directly in the database using Transact-SQL statements.
Warning
Special characters like colon
: or ampersand
& when included as user names in the T-SQL CREATE LOGIN and CREATE USER statements are not supported.. AAD AAD tenant: they prevent the user from accessing the external provider. Updating the CA policies to allow access to the application '00000002-0000-0000-c000-000000000000' (the application ID of the AAD Azure SQL.
Azure AD users are marked in the database metadata with type E (EXTERNAL_USER) and for groups with type X (EXTERNAL_GROUPS). For more information, see sys.database_principals.
Connect to the user database or data warehouse by” option is only supported for Universal with MFA connection options, otherwise it is greyed out.)
Active Directory password authentication
Use this method when connecting with an Azure AD principal name using the Azure AD managed domain. You can also use it for federated accounts without access to the domain, for example when working remotely..
Start Management Studio or Data Tools and in the Connect to Server (or Connect to Database Engine) dialog box, in the Authentication box, select Active Directory - Password.
In the User name box, type your Azure Active Directory user name in the format [email protected]. User names must be an account from the Azure Active Directory or an account from a domain federate with the Azure Active Directory.
In the Password box, type your user password for the Azure Active Directory account or federated domain account.
Select the Options button, and on the Connection Properties page, in the Connect to database box, type the name of the user database you want to connect to. (See the graphic in the previous option.) integrated authentication and an Azure AD identity, connect to Azure SQL Database or Azure SQL Data Warehouse by obtaining a token from Azure Active Directory (AAD). It enables sophisticated scenarios including
Next steps
-.
Feedback | https://docs.microsoft.com/en-us/azure/sql-database/sql-database-aad-authentication-configure?branch=pr-en-us-16983 | 2019-12-05T19:41:28 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.microsoft.com |
chainer.functions.roi_max_align_2d

chainer.functions.roi_max_align_2d(x, rois, roi_indices, outsize, spatial_scale, sampling_ratio=None)
Spatial Region of Interest (ROI) max align function.
This function acts similarly to roi_max_pooling_2d(), but it computes the maximum of the input spatial patch for each channel within the region of interest, using bilinear interpolation.
- Parameters
x (Variable) – Input variable. The shape is expected to be 4 dimensional: (n: batch, c: channel, h: height, w: width).
rois (Variable) – Input roi variable. The shape is expected to be (n: data size, 4), and each datum is set as (y_min, x_min, y_max, x_max).

roi_indices (Variable) – Input roi index variable, giving the batch index each roi is taken from. The shape is expected to be (n: data size, ).

outsize ((int, int) or int) – Expected output size after pooling, as (height, width). outsize=o and outsize=(o, o) are equivalent.

spatial_scale (float) – Scale by which the roi coordinates are resized to the input feature map's resolution.
sampling_ratio ((int, int) or int) – Sampling step for the alignment. It must be an integer over 1 or None, and the value is decided automatically when None is passed. Using different ratios for the height and width axes is also supported by passing a tuple of ints as (sampling_ratio_h, sampling_ratio_w). sampling_ratio=s and sampling_ratio=(s, s) are equivalent.
- Returns
Output variable.
- Return type
Variable
See the original paper proposing ROIAlign: Mask R-CNN. | https://docs.chainer.org/en/stable/reference/generated/chainer.functions.roi_max_align_2d.html | 2019-12-05T19:50:41 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.chainer.org |
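Example (a minimal sketch, not part of the original reference): the snippet below runs the function on small dummy NumPy inputs. The array values, outsize, spatial_scale, and sampling_ratio are arbitrary illustrative choices, and the ROIs are assumed to be (y_min, x_min, y_max, x_max) boxes given at the same scale as the input feature map.

```python
import numpy as np
import chainer.functions as F

# Dummy feature map: batch of 1, 3 channels, 8x8 spatial size.
x = np.arange(1 * 3 * 8 * 8, dtype=np.float32).reshape(1, 3, 8, 8)

# Two regions of interest as (y_min, x_min, y_max, x_max) boxes.
rois = np.array([[0.0, 0.0, 7.0, 7.0],
                 [2.0, 2.0, 6.0, 6.0]], dtype=np.float32)

# Both ROIs are cut from batch element 0.
roi_indices = np.array([0, 0], dtype=np.int32)

# Align every ROI to a fixed 4x4 grid; the ROIs are already in the
# feature map's coordinate system, so spatial_scale is 1.0.
y = F.roi_max_align_2d(x, rois, roi_indices, outsize=(4, 4),
                       spatial_scale=1.0, sampling_ratio=2)

print(y.shape)  # (2, 3, 4, 4): (number of ROIs, channels) + outsize
```

If the ROIs were given in the coordinates of the original image while x is a downscaled feature map (as in Mask R-CNN), spatial_scale would instead be set to the downscaling factor, e.g. 1/16.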