content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Obtain Mendix Object action.
If this is the top level of the mapping, you can check Decide this at the place where the mapping gets used. If this is checked the option if no object was found can be set whenever you use the mapping, for instance in an import mapping action or a call REST service action.
1.3.
2 Mapping Attributes in Import Mappings
Each selected XML or JSON element needs to be mapped to an attribute in the domain entity. If you don’t want to map certain elements, simply uncheck them in the Select elements… dialog box.
2.1 Mapping Attribute Properties
3 Mapping Parameter. | https://docs.mendix.com/refguide8/import-mappings | 2021-01-16T06:52:38 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['attachments/import-mappings/16843942.png', None], dtype=object)
array(['attachments/import-mappings/16843943.png', None], dtype=object)
array(['attachments/import-mappings/no-object-found.png', None],
dtype=object)
array(['attachments/import-mappings/16843943.png', None], dtype=object)
array(['attachments/import-mappings/16843944.png', None], dtype=object)] | docs.mendix.com |
The Ambari Infra Solr instance is used to index data for Atlas, Ranger, and Log Search. The version of Solr used by Ambari Infra in Ambari 2.6 is Solr 5. The version of Solr used by the Ambari Infra in Ambari 2.7 is Solr 7. When moving from Solr 5 to Solr 7 indexed data needs to be backed up from Solr 5, migrated, and restored into Solr 7 as there are on disk format changes, and collection-specific schema changes. The Ambari Infra Solr components must also be upgraded. Fortunately scripts are available to do both, and are explained below.
This process will be broken up into four steps:
- Generate Migration Config
The migration utility requires some basic information about your cluster and this step will generate a configuration file that captures that information.
- Back up Ambari Infra Solr Data
This process will backup all indexed data either to a node-local disk, shared disk (NFS mount), or HDFS filesystem.
- Remove existing collections & Upgrade Binaries
This step will remove the Solr 5 collections, upgrade Ambari Infra to Solr 7, and create the new collections with the upgraded schema required by HDP 3.1 services. This step will also upgrade LogSearch binaries if they are installed.
- Migrate & Restore
This step will migrate the backed up data to the new format required by Solr 7 and restore the data into the new collections. This step will be completed after the HDP 3.1 Upgrade has been completed in the Post-upgrade Steps section of the upgrade guide
Generate Migration Config
The utility used in this process is included in the ambari-infra-solr-client package. This package must be upgraded before the utility can be run. To do this:
SSH into a host that has a Infra Solr Instance installed on it. You can locate this host by going to the Ambari Web UI and clicking Hosts. Click on the Filter icon and type
Infra Solr Instance: Allto find each host that has an Infra Solr Instance installed on it.
Upgrade the ambari-infra-solr-client package.
yum clean all
yum upgrade ambari-infra-solr-client -y
You can now proceed to configuring and running the migration tool from the same host.
Run the following commands as root, or with a user that has sudo access:
Export the variable that will hold the full path and filename of the configuration file.
export CONFIG_INI_LOCATION=ambari_solr_migration.ini
Run the
migrationConfigGenerator.pyscript, located in the
/usr/lib/ambari-infra-solr-client/directory, with the following parameters:
- --ini-file $CONFIG_INI_LOCATION
This is the previously exported environmental variable that holds the path and filename of the configuration file that will be generated.
- --host ambari.hortonworks.local
This should be the hostname of the Ambari Server.
- --port 8080
This is the port of the Ambari Server. If the Ambari Server is configured to use HTTPS, please use the HTTPS port and add the
-sparameter to configure HTTPS as the communication protocol.
- --cluster cl1
This is the name of the cluster that is being managed by Ambari. To find the name of your cluster, look in the upper right and corner of the Ambari Web UI, just to the left of the background operations and alerts.
- --username admin
This is the name of a user that is an “Ambari Admin” .
- --password admin
This is the password of the aforementioned user.
- --backup-base-path=/my/path
This is the location where the backed up data will be stored. Data will be backed up to this local directory path on each host that is running an Infra Solr instance in the cluster. So, if you have 3 Infra Solr server instances and you use --backup-base-path=/home/solr/backup, this directory will be created on all 3 hosts and the data for that host will be backed up to this path.
If you are using a shared file system that is mounted on each Infra Solr instance in the cluster, please use the
--shared-driveparameter instead of
--backup-base-path. The value of this parameter should be the path to the mounted drive that will be used for the backup. When this option is chosen, a directory will be created in this path for each Ambari Infra Solr instance with the backed up data. For example, if you had an NFS mount /export/solr on each host, you would use
--shared-drive=/exports/solr. Only use this option if this path exists and is shared amongst all hosts that are running the Ambari Infra Solr.
- --java-home /usr/jdk64/jdk1.8.0_112
This should point to a valid Java 1.8 JDK that is available at the same path on each host in the cluster that is running an Ambari Infra Solr instance.
- If the Ranger Audit collection is being stored in HDFS, please add the following parameter, --ranger-hdfs-base-path
The value of this parameter should be set to the path in HDFS where the Solr collection for the Ranger Audit data has been configured to store its data.
Example:
--ranger-hdfs-base-path=/user/infra-solr
Example Invocations:
If using HTTPS for the Ambari Server:
/usr/bin/python /usr/lib/ambari-infra-solr-client/migrationConfigGenerator.py \ --ini-file $CONFIG_INI_LOCATION \ --host c7401.ambari.apache.org \ --port 8443 -s \ --cluster cl1 \ --username admin \ --password admin \ --backup-base-path=/home/solr/backup \ --java-home /usr/jdk64/jdk1.8.0_112
If using HTTP for the Ambari Server:
/usr/bin/python /usr/lib/ambari-infra-solr-client/migrationConfigGenerator.py \ --ini-file $CONFIG_INI_LOCATION \ --host c7401.ambari.apache.org \ --port 8080 \ --cluster cl1 \ --username admin \ --password admin \ --backup-base-path=/home/solr/backup \ --java-home /usr/jdk64/jdk1.8.0_112
Ensure the script generates cleanly and there are no yellow warning texts visible. If so, review the yellow warnings.
Back up Ambari Infra Solr Data
Once the configuration file has been generated, it’s recommended to review the ini file created by the process. There is a configuration section for each collection that was detected. If, for whatever reason, you do not want to backup a specific collection you can set enabled = false and the collection will not be backed up. Ensure that enabled = true is set for all of the collections you do wish to back up. Only the Atlas, and Ranger collections will be backed up. Log Search will not be backed up.
To execute the backup, run the following command from the same host on which you generated the configuration file:
# /usr/lib/ambari-infra-solr-client/ambariSolrMigration.sh \ --ini-file $CONFIG_INI_LOCATION \ --mode backup | tee backup, and the output contains information regarding the number of documents and size of each backed up collection.
Remove Existing Collections & Upgrade Binaries
Once the data base been backed up, the old collections need to be deleted, and the Ambari Infra Solr, and Log Search (if installed) components need to be upgraded. To do all of that, run the following script:
# /usr/lib/ambari-infra-solr-client/ambariSolrMigration.sh \ --ini-file $CONFIG_INI_LOCATION \ --mode delete | tee delete.
Next Steps | https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-upgrade-major/content/backup_and_upgrade_ambari_infra_data.html | 2021-01-16T06:44:11 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.cloudera.com |
Bulk Actions in Schema Mapper
Select multiple Event Types in the Schema Mapper page to apply a bulk action to them. Using bulk actions, you can:
Map and un-map Event Types
Include and skip Event Types
Reset the schema for the selected Event Types
The Bulk Actions toolbar is displayed as soon as you select an Event Type in the Schema Mapper.
All the bulk actions that are applicable to one or more Event Types that you have selected are enabled. You can also see the summary count of the selected Event Types.
If you select more Event Types than which are visible on the current page, all of the bulk actions are enabled. However, the action is performed only on the eligible Event Types. Eligibility is decided based on the current mapping status of the Event Type. Hevo informs you if the bulk action did not get applied to any Event Type. For example, if you select five Event Types of which one is in Skipped status, and apply the SKIP bulk action, then Hevo displays the message, “1 of the 5 selected Event Types were not skippable at the moment.”
The following table lists the valid mapping statuses for applying different bulk actions: | https://docs.hevodata.com/pipelines/schema-mapper/bulk-actions-schema-mapper/ | 2021-01-16T06:24:47 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['https://res.cloudinary.com/hevo/image/upload/v1594904308/hevo-docs/SchemaMapper2476/BulkActions-Schema.png',
'Bulk Actions toolbar in Schema Mapper'], dtype=object) ] | docs.hevodata.com |
Amlogic NN Api is a set of NPU Api officially launched by amlogic. This document will introduce how to compile and use khadas to demonstrate based on this set of Api.
API Docs
For detailed information about the API, please refer to the document
docs/zh-cn/DDK_6.4.3_SDK_V1.6 API Description.pdf
Compile
Get demo source code
The source code of the
aml_npu_nnsdk_app repository is open on the official gitlab of Khadas
Source code structure introduction
There are currently 3 demos in the source code repository:
- body_pose: Detect 18-point posture of the human body, only support image recognition
- image_classify: Object recognition classification, only supports image recognition
- person_detect: Human body detection, supports image recognition and camera recognition
There are compilation scripts, makefiles and source codes in each directory. Take person_detect as an example.
- build-cv3.sh : Compiled script
- makefile-cv3.linux : Compilesd makefile
- person_detect_640x384_camera.cpp: Source code for image recognition
- person_detect_640x384_picture.cpp: Source code for camera recognition
Compilation method
Please refer to get SDK #Get-SDK
Here also take person_detect as an example,
Compilation will generate the generated file in
cv3_output,
Among them,
person_detect_640x384_camera and
person_detect_640x384_picture are the generated executable files
How to Run
Here also take person_detect as an example,
- Obtain the nb file, the nb file corresponding to
person_detectis:
Copy the executable file compiled on the PC to the board
Run
Identify the picture
Recognition camera
Note :
Just a simple template repository, please refer to the documentation for detailed API introduction. | https://docs.khadas.com/vim3/HowToUseAMLNNApi.html | 2021-01-16T05:15:22 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.khadas.com |
Plugins hard drive.
- Add your plugin of interest to the plugins folder on your machine, and restart FlowJo:
- Plugin actions can be accessed and initiated from within FlowJo under the Plugins Menu. (Workspace Tab –> Populations Band –> Plugins menu)
- If you do not see individual plugins listed within the Plugins menu following installation, you may need to tell FlowJo the location of the plugins folder by specifying the folder’s file path in the Diagnostics section of FlowJo’s Preferences (FlowJo / Preferences / Diagnostics).
- Restart FlowJo. Your plugins should now appear listed in the Plugins menu.
- As new plugins become available, they will be posted for download from the FlowJo Exchange website, which can be accessed from within FlowJo under the Plugins Menu.
Installing the R statistical computing environment
Please note that the Spade, CellOntology and FlowMeans plugins utilize the R statistical computing environment to produce results. For these plugins to work, R must first be installed and setup with the appropriate R packages.
To run a Plugin that calls to the R statistical computing environment:
- Install R on you computer. Download R from the Comprehensive R Archive Network (CRAN) website. Follow the links and installation directions for your operating system.
- If installing R on a Mac: Install R in your Applications folder. Set the R path by typing “/usr/local/bin/R” (no quotes) within the box. Though the R application is located in the Applications folder, the actual executable file is located within the above mentioned path. If you install R in a folder other than Applications, you must specify the full path.
- If installing R on a PC: R can be installed in any folder, but you must tell FlowJo the location of your R installation by selecting it in the Diagnostics panel of your FlowJo preferences.
- Restart FlowJo. Plugins that utilize R will now know where to look for and open the R environment.
- Open R and install the required R packages for a given plugin.
- R packages are installed by typing a specific string of commands from within the R console window. The appropriate files are then downloaded from Bioconductor.org or other repository (ex. GitHub), and installed within your local R environment.
- Briefly: To install FlowSOM, BiocManager, flowCore and pheatmap packages, open R and enter the following set of commands:
if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager")
BiocManager::install(c("flowCore", "FlowSOM"))
install.packages("pheatmap")
- Once the necessary packages are installed from within R, you will no longer need to open R, and the Plugin can be run from the Plugins menu within FlowJo.
- Please see the documentation on specific plugins (links below) for details on the R package installation requirements.
For more information on installing and running specific Plugins:
Questions about plugins or FlowJo? Send us an email at [email protected] | https://docs.flowjo.com/flowjo/plugins-2/installing-plugins/ | 2021-01-16T04:54:41 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.flowjo.com |
Configure LDAP channel binding Configure LDAP channel binding in Enterprise A2019 On-Premises for enhanced security in network communications between an Active Directory and its clients. This method provides a more secure LDAP authentication over SSL and TLS. Enable channel binding in the um.properties file when required. Procedure Go to the Enterprise Control Room installation path. From the list of files in the config folder, open the um.properties file with an XML editor. Add the um.ldap.channel.binding.enabled property in the um.properties file. For example, um.ldap.channel.binding.enabled=false To enable channel binding, change the value to true. The default value is false and channel binding is disabled. Channel binding is enabled if it is enabled on the server side.For information about enabling channel binding on the server side, see LDAP enforce channel binding registry entry. Save the file. | https://docs.automationanywhere.com/bundle/enterprise-v2019/page/enterprise-cloud/topics/control-room/getting-started/cloud-configure-ldap-channel-binding.html | 2021-01-16T05:40:36 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.automationanywhere.com |
What was decided upon? (e.g. what has been updated or changed?) Anything with INTERNET in Library in Symphony was included in the P2E form
Why was this decided? (e.g. explain why this decision was reached. It may help to explain the way a procedure used to be handled pre-Alma) To make sure that items with that designation migrate as electronic and not print
Who decided this? (e.g. what unit/group) E-Resources
When was this decided? Pre-implementation
Additional information or notes. | https://docs.library.vanderbilt.edu/2018/10/11/p2e-form-changes/ | 2021-01-16T06:50:54 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.library.vanderbilt.edu |
1 Introduction
This document outlines performance issues and Mendix best practices for optimizing an app performance.
2 Calculated Attributes Best Practices
When an object has calculated attributes, each time this object is changed or retrieved from the storage, its calculated attributes are computed by calling a microflow. If the logic behind calculated attributes retrieves other objects or executes Integration activities, it will result in an extra load (and delay) while the outcome of the logic is not used. Creating calculated attributes always affects performance, so you should evaluate whether it is necessary to use them. For more information on attributes, see Attributes.
In most cases, the logic behind a calculated attribute is always executed when the object is used. It is executed whenever there is no retrieval schema for a Retrieve activity (which is the case with data grids). The logic behind calculated attributes is executed in the following elements:
- Retrieve and change object activities in microflows
- In UI widgets (e.g. data views, custom widgets)
- When an object is passed from the UI as a parameter to a microflow (e.g. a button triggering a microflow).
There are two different performance issues with calculated attributes that you can easily fix:
2.1 Avoid Using Calculated Attributes on a Page
Retrieve activities trigger the logic of calculated attributes, which can lead to database actions and microflow calls being executed (objects retrieving each other through calculated attributes).
If data widgets (list view, data view, or data grid) on a page are using calculated attributes, this may affect the time to load and display the page.
2.1.1 Steps to Fix
To fix the issue, do the following:
- In the domain model, change the attribute to be stored instead of calculated.
- Wherever the attribute is about to be committed to the database, calculate the value using the relevant microflow.
You will also need to migrate any existing data, since when the attribute is changed to be stored, the database will only contain the default value for that data type.
2.2 Remove Unused Calculated Attributes
As Retrieve activities trigger the logic of calculated attributes, it could lead to an execution chain of database actions and microflow calls (objects retrieving each other through calculated attributes).
If calculated attributes are not used, they can safely be removed to avoid redundant microflow calls.
2.2.1 Steps to Fix
To fix the issue, delete the unused calculated attribute.
3 Add an Index to Attributes in Sort Bars
Sort bars are used to sort items in data widgets. Sort bars can be used in three different types of data widgets:
- Data grid
- Template grid
- Reference set selector
Each sort item in the sort bar is sequentially utilized to order the data in the widget. Adding an index on the attributes used in sort items can make the sorting process faster, subsequently improve the performance of the page.
There can be four operations performed on an entity: create, update, delete, and select. Entities, for which the number of create, update, and delete operations is much greater than the number of select operations can be called write-intensive because most operations mutate data in a database rather than select from it.
Entities, for which the number of select operations is much greater than the number of create, update, and delete operations can be called read-intensive because most operations select data from the database. It is important to perform this optimization only on attributes belonging to entities which are predominantly read-intensive.
As totally different best practices apply for read-intensive and write-intensive entities, it would be valuable to differentiate entities by the type of operations that are performed on the entities.
3.1 Steps to Fix
To fix the issue, add an index on attributes which are used as sort items in sort bars on pages.
4 Avoid Committing Objects Inside a Loop with Create Object, Change Object, or Commit Activities
In a microflow, Mendix objects can be persisted to the database with three activities: the Create object activity, Change object activity, and Commit activity. For objects that are created or changed in a loop, it is not the best practice to commit them immediately in the loop, as this comes with an unnecessary performance overhead. Instead, it is recommended to perform a batch commit of several created/changed objects with the Commit activity outside of the loop to reduce database, application, and network overhead. For more information on Create object, Change object, and Commit activities, see Create Object, Change Object, and Commit Object(s).
Committing lists of objects has the following benefits compared to individual commits:
- The prepared statement of creating or modifying records in the database is explicitly reused by the JDBC driver and has the following benefits:
- The execution plan is cached
- The driver cares for a minimum of network overhead
- For each database action that changes data the following actions are taken and added overhead:
- A savepoint is created before the action and released afterwards
- Auto-committed objects are retrieved from the database
- Auto-committed objects are stored to the database (if relevant)
4.1 Steps to Fix for Create or Change Object Activities
To fix the issue for Create or Change object activities inside the loop, do the following:
- Change the Commit option of a Create/Change object activity from No and make sure created/changed objects are available in a list.
- Commit the list after the loop when the iteration has finished or when number of objects in the list reaches 1000 to avoid excessive memory usage.
4.2 Steps to Fix for the Commit Activity
To fix the issue for the Commit activity, commit the list after the loop when the iteration has finished or when number of objects in the list reaches 1000 to avoid excessive memory usage.
5 Convert Eligible Microflows to Nanoflows
Nanoflows are executed directly on an end-user’s device or browser. This makes them ideal for offline usage. In contrast, microflows run in the Runtime server, thus involve usage of network traffic. Converting an eligible microflow to a nanoflow helps avoiding communication over networks and significantly boosts app performance. For more information on when to use on nanoflows and when to use them, see Nanoflows.
You can identify convertible microflows using the following criteria:
- Microflows that have one or more of the following categories:
- Microflow has logic meant for offline applications.
- Microflow has logic for online applications but does not involve any database related actions like a committing Create object, Commit, Retrieve, and Rollback activities.
- Microflow has at-most one database related action. (Not the best practice)
- Microflows that contain nanoflow-compatible activities. For information on activities supported by nanoflows, see Activities.
- Microflow expressions do not contain the following variables:
$latestSoapFault,
$latestHttpResponse,
$currentSession,
$currentUser,
$currentDeviceType. These variables are not supported by nanoflows.
- As nanoflows are executed in the context of the current user, ensure that the microflow has only operations for which the current user is authorized. Otherwise the converted nanoflow will fail.
5.1 Steps to Fix
To fix the issue, do the following:
- Create a new nanoflow by right-clicking the module and selecting Add nanoflow.
- Replicate the same logic from the microflow. The new nanoflow must look almost identical to the microflow.
- Check usages of the microflow by right-clicking the microflow and selecting Find usages. Replace all usages with the newly created nanoflow.
- Delete the unused microflow. You can do this by selecting the microflow and pressing Delete or by right-clicking it and selecting Delete.
6 Add Index to Attributes that Are Used in XPath Expressions
XPath expressions can take a long time to run depending on how many records the underlying entities contain. For read-intensive entities, it makes sense to add an index on the attributes used in XPath expressions. This can significantly boost performance of object retrieval from the database. XPath expressions can also be optimized by ordering them in such a way that the first class excludes as many items as possible. This can be achieved by using indexed attributes earlier in the expression. This will make the rest of the data set to join/filter as small as possible and reduce the load on the database.
Note that XPath expressions can be used in three different places:
- Access rules and entities
- Source/Filter for lists and grids on pages
- Retrieve actions in both microflows and Java actions
6.1 Steps to Fix
To fix the issue, do the following:
- Check if the underlying entity contains a substantial amount of records before adding an index (at least 10000 records).
- Add an index per each attribute used in the XPath expression only for scenarios where read-intensive operations are predominantly performed on the underlying entities.
This optimization may not be very beneficial for data types like Boolean and enumerations due to a limited number of possible values of these types. It is not recommended to add indexes for such types.
7 Avoid Caching Non-Persistable Entities
A non-persistable object is an object that is considered temporary and only exists in the memory. It is an instance of a non-persistable entity. For more information on persistable and non-persistable entities, see Persistablity. As these objects exist only in memory, caching them is not useful. On the one hand, it is redundant to create associations of non-persistable entities with System.Session or System.User persistable entities. On the other hand, it is important to cache objects which do not change very often but are used frequently in logic. This will help avoid the overhead of database communication. Persistent entities can be connected to the System.Session of the user and be used as a cache of outcomes. For more information on objects and caching, see Objects & Caching.
You can use the following guidelines to decide whether caching is needed:
- Data does not change very often
- Data is read very often
- The volume of data is limited (usually less than 10 000 records)
- The impact of using stale data is accepted
7.1 Steps to Fix
To fix the issue, do the following:
- For an entity that does not change very often, make it persistable if its objects are used frequently for your logic.
- If the above condition is not met, remove the association of the non persisted entity with System.User or System.Session. | https://docs.mendix.com/refguide/performance-best-practices | 2021-01-16T06:45:28 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.mendix.com |
As always, if you need more information or are still confused please reach out to us at [email protected]
It is the new product we have built out for all our teachers. We are transitioning from classrooms to teams for education as it has a lot more features. Please check out our blog post here to learn more.
It could be due to a few issues:
Check if your browser is blocked javascript
Make sure to disable any ad block software
If none of these apply, please post your issue here along with the repl/s in question as well as browser information and operating system. You can always check and see if others have already raised a similar issue and just upvote and comment on that post.
At times the display can be buggy. The code may not run or if it does you get weird colors instead of your expected output. If this is the case please make sure your browser is up to date as this has been known to cause issues for some of our community members.
Here are a few simple steps you can do to retrieve your repl/s:
You could invite your friend temporarily, allow them to fork, and then remove them from the repl via the share menu.
You could download the repl as a .zip (this can be done by adding .zip to the end of your repl's URL), and send that copy of the code to your friend. They could then upload it to repl.it as their own repl.
Unforutnately the folders in the myrepls portion are more so metaphors and do not do much else. When you click on any repl it will always be which means all your repls need to be unique.
As of July 2020 we have always repls in the works. In the mean time you can use Keeper Upper as a work around.
How does this work?
Things to be wary of once your repl is on Keeper Upper it cannot be removed.
You just need to fork the repl in question and delete the original.
There are two ways that this can be done:
The first method would be a little more manual and would require you to share every repl from your old account to your new one. Once you share all the repls you would then fork each one on your new account.
The second way would be to change the email address on the old account which should solve your problem (you also would not need to create a new account this way). This method will not be useful if you have already created an account with the new email. | https://docs.repl.it/misc/General-FAQ | 2021-01-16T05:29:01 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.repl.it |
Configuration Manager for Jira
Configuration Manager for Jira allows Jira Administrators to automate the transfer of Jira configuration data and issues between different Jira Server instances. With this integration, a new capability has been added to ProForma. Now, ProForma data will be included in the backup snapshots generated by Configuration Manager and restored on import. See Configuration Manager for Jira's documentation for a full explanation.
Requirements
Starting with version 7.0.3, ProForma for Jira Server supports integration with version 6.3.4+ of Configuration Manager for Jira Management. | https://docs.thinktilt.com/proforma/Configuration-Manager-for-Jira.1445363812.html | 2021-01-16T06:10:38 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.thinktilt.com |
§ 33-201 Website and reporting.
1. Any recommendation made to the agency, and any relevant context for the recommendation;
2. Whether any such recommendation was accepted or rejected by the agency to which it was made; and
3. For each recommendation accepted by an agency, whether such agency implemented the recommendation.. | https://nyclaws.readthedocs.io/d01/c36/index.html | 2021-01-16T05:18:05 | CC-MAIN-2021-04 | 1610703500028.5 | [] | nyclaws.readthedocs.io |
Intended audience: administrators, users
The Tango controlled access system¶
User rights definition¶
Within the Tango controlled system, you give rights to a user. User is the name of the user used to log-in the computer where the application trying to access a device is running. Two kind of users are defined:
- Users with defined rights
- Users without any rights defined in the controlled system. These users will have the rights associated with the pseudo-user called All Users
The controlled system manages two kind of rights:
- Write access meaning that all type of requests are allowed on the device
- Read access meaning that only read-like access are allowed (write_attribute, write_read_attribute and set_attribute_config network calls are forbidden). Executing a command is also forbidden except for commands defined as Allowed commands. Getting a device state or status using the command_inout call is always allowed. The definition of the allowed commands is done at the device class level. Therefore, all devices belonging to the same class will have the allowed commands set.
The rights given to a user is the check result splitted in two levels:
- At the host level: You define from which hosts the user may have write access to the control system by specifying the host name. If the request comes from a host which is not defined, the right will be Read access. If nothing is defined at this level for the user, the rights of the All Users user will be used. It is also possible to specify the host by its IP address. You can define a host family using wide-card in the IP address (eg. 160.103.11.* meaning any host with IP address starting with 160.103.11). Only IP V4 is supported.
- At the device level: You define on which device(s) request are allowed using device name. Device family can be used using widecard in device name like domin/family/*
Therefore, the controlled system is doing the following checks when a client try to access a device:
- Get the user name
- Get the host IP address
- If rights defined at host level for this specific user and this IP address, gives user temporary write acccess to the control system
- If nothing is specified for this specific user on this host, gives to the user a temporary access right equal to the host access rights of the All User user.
- If the temporary right given to the user is write access to the control system
- If something defined at device level for this specific user
- If there is a right defined for the device to be accessed (or for the device family), give user the defined right
- Else
- If rights defined for the All Users user for this device, give this right to the user
- Else, give user the Read Access for this device
- Else
- If there is a right defined for the device to be accessed (or for the device family) for the All User user, give user this right
- Else, give user the Read Access right for this device
- Else, access right will be Read Access
Then, when the client tries to access the device, the following algorithm is used:
- If right is Read Access
- If the call is a write type call, refuse the call
- If the call is a command execution
- If the command is one of the command defined in the Allowed commands for the device class, send the call
- Else, refuse the call
All these checks are done during the DeviceProxy instance constructor except those related to the device class allowed commands which are checked during the command_inout call.
To simplify the rights management, give the All Users user host access right to all hosts (.*.*.*) and read access to all devices (/*/*). With such a set-up for this user, each new user without any rights defined in the controlled access will have only Read Access to all devices on the control system but from any hosts. Then, on request, gives Write Access to specific user on specific host (or family) and on specific device (or family).
The rights managements are done using the Tango Astor [ASTOR] tool which has some graphical windows allowing to grant/revoke user rights and to define device class allowed commands set. The following window dump shows this Astor window.
In this example, the user taurel has Write Access to the device sr/d-ct/1 and to all devices belonging to the domain fe but only from the host pcantares He has read access to all other devices but always only from the host pcantares. The user verdier has write access to the device sys/dev/01 from any host on the network 160.103.5 and Read Access to all the remaining devices from the same network. All the other users has only Read Access but from any host.
Running a Tango control system with the controlled access¶
All the users rights are stored in two tables of the Tango database. A dedicated device server called TangoAccessControl access these tables without using the classical Tango database server. This TangoAccessControl device server must be configured with only one device. The property Services belonging to the free object CtrlSystem is used to run a Tango control system with its controlled access. This property is an array of string with each string describing the service(s) running in the control system. For controlled access, the service name is AccessControl. The service instance name has to be defined as tango. The device name associated with this service must be the name of the TangoAccessControl server device. For instance, if the TangoAccessControl device server device is named sys/access_control/1, one element of the Services property of the CtrlSystem object has to be set to
AccessControl/tango:sys/access_control/1
If the service is defined but without a valid device name corresponding to the TangoAccessControl device server, all users from any host will have write access (simulating a Tango control system without controlled access). Note that this device server connects to the MySQL database and therefore may need the MySQL connection related environment variables MYSQL_USER and MYSQL_PASSWORD described in [sub:Db-Env-Variables]
Even if a controlled access system is running, it is possible to by-pass it if, in the environment of the client application, the environment variable SUPER_TANGO is defined to true. If for one reason or another, the controlled access server is defined but not accessible, the device right checked at that time will be Read Access. | https://tango-controls.readthedocs.io/en/latest/administration/services/access-control.html | 2021-01-16T06:31:08 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['../../_images/control.png', 'image21'], dtype=object)] | tango-controls.readthedocs.io |
Dandelion++ in Grin: Privacy-Preserving Transaction Aggregation and Propagation
Read this document in other languages: Korean[out of date].
Introduction
The Dandelion++ protocol for broadcasting transactions, proposed by Fanti et al. (Sigmetrics 2018)1, intends to defend against deanonymization attacks during transaction propagation. In Grin, it also provides an opportunity to aggregate transactions before they are broadcast to the entire network. This document describes the protocol and the simplified version of it that is implemented in Grin.
In the following section, past research on the protocol is summarized. This is then followed by describing details of the Grin implementation; the objectives behind its inclusion, how the current implementation differs from the original paper, what some of the known limitations are, and outlining some areas of improvement for future work.
Research
The original version of Dandelion was introduced by Fanti et al. and presented at ACM Sigmetrics 20172. On June 2017, a BIP3 was proposed introducing a more practical and robust variant of Dandelion called Dandelion++, which was formalized into a paper in 20181.
The protocols are outlined at a high level here. For a more in-depth presentation with extensive literature references, please refer to the original papers.
Motivation
Dandelion was conceived as a way to mitigate large scale deanonymization attacks on the network layer of Bitcoin, made possible by the diffusion method for propagating transactions on the network. By deploying "super-nodes" that connect to a large number of honest nodes on the network, adversaries can listen to the transactions relayed by the honest nodes as they get diffused symmetrically on the network using epidemic flooding or diffusion. By observing the spreading dynamic of a transaction, it has been proven possible to link it (and therefore also the sender's Bitcoin address) to the originating IP address with a high degree of accuracy, and as a result de-anonymize users.
Dandelion
In the original paper 2, a dandelion spreading protocol is introduced. Dandelion spreading propagation consists of two phases: first the anonymity phase, or the “stem” phase, and second the spreading phase, or the “fluff” phase, as illustrated in Figure 1.
Figure 1. Dandelion phase illustration.
┌-> F ... ┌-> D --┤ | └-> G ... A --[stem]--> B --[stem]--> C --[fluff]--┤ | ┌-> H ... └-> E --┤ └-> I ...
In the initial stem-phase, each node relays the transaction to a single randomly selected peer, constructing a line graph. Users then forward transactions along the same path on the graph. After a random number of hops along the stem, the transaction enters the fluff-phase, which behaves like ordinary diffusion. This means that even when an attacker can identify the originator of the fluff phase, it becomes more difficult to identify the source of the stem (and thus the original broadcaster of the transaction).
Each individual node pseudorandomly selects if he is a stem or a fluff node at regular intervals, called epoch periods. Epochs are asynchronous, with each individual node keeping its own internal clock and starting a new epoch once a certain threshold has been reached. Thus, the constructed line graph is periodically re-generated randomly, at the expiry of each epoch, limiting an adversary's possibility to build knowledge of the graph.
The 'Dandelion' name is derived from how the protocol resembles the spreading of the seeds of a dandelion.
Dandelion++
In the Dandelion++ paper1, the authors build on the original concept further by defending against stronger adversaries that are allowed to disobey protocol.
The original paper makes three ideal assumptions:
- All nodes obey protocol.
- Each node generates exactly one transaction.
- All nodes on the network run Dandelion.
An adversary can violate these rules, and by doing so, break some of the anonymity properties.
The modified Dandelion++ protocol makes small changes to many of the Dandelion choices, resulting in an exponentially more complex information space. This in turn makes it harder for an adversary to de-anonymize the network.
- The paper describes five types of attacks, and proposes specific updates to the original Dandelion protocol to mitigate against these, presented in Table A (here in summarized form).
Table A. Summary of Dandelion++ changes
Dandelion++ Algorithm
As with the original Dandelion protocol, epochs are asynchronous, each node keeping track of its own epoch, which the suggested duration being in the order of 10 minutes.
Anonymity Graph
Rather than a line graph as per the original paper (which is 2-regular), a quasi-4-regular graph is constructed by a node at the beginning of each epoch: the node chooses (up to) two of its outbound peers uniformly at random as its dandelion++ relays. As a node enters into a new epoch, new dandelion++ relays are chosen.
Figure 2. representation of a 4-regular graph.
in1 out1 \ / \ / NodeX / \ / \ in2 out2
NodeXhas four connections to other nodes, input nodes
in1and
in2, and output nodes
out1and
out2.
4-regular vs 2-regular graphs
The choice between using 4-regular or 2-regular (line) graphs is not obvious. The authors note that it is difficult to construct an exact 4-regular graph within a fully-distributed network in practice. They outline a method to construct an approximate 4-regular graph in the paper.
They also write:
... We recommend making the design decision between 4-regular graphs and line graphs based on the priorities of the system builders. If linkability of transactions is a first-order concern, then line graphs may be a better choice. Otherwise, we find that 4-regular graphs can give constant- order privacy benefits against adversaries with knowledge of the graph.
Transaction forwarding (own)
At the beginning of each epoch,
NodeX picks one of
out1 and
out2 to use as a route to broadcast its own transactions through as a stem-phase transaction. The same route is used throughout the duration epoch, and
NodeX always forwards (stems) its own transaction.
Transaction forwarding (relay)
At the start of each epoch,
NodeX makes a choice to be either in fluff-mode or in stem-mode. This choice is made in pseudorandom fashion, with the paper suggesting it being computed from a hash of the node's own identity and epoch number. The probability of choosing to be in fluff-mode (or as the paper calls it, the path length parameter
q) is recommended to be q ≤ 0.2.
Once the choice has been made whether to stem or to fluff, it applies to all relayed transactions passing through it during the epoch.
in fluff-mode,
NodeXis will broadcast any received transactions to the network using diffusion.
in stem-mode, at the beginning of each epoch
NodeXwill map
in1to either
out1or
out2pseudorandomly, and similarly map
in2to either
out1or
out2in the same fashion. Based on this mapping, it will then forward all txs from
in1along the chosen route, and similarly forward all transactions from
in2along that route. The mapping persists throughout the duration of the epoch.
Fail-safe mechanism
For each stem-phase transaction that was sent or relayed,
NodeX tracks whether it is seen again as a fluff-phase transaction within some random amount of time. If not, the node fluffs the transaction itself.
- This expiration timer is set by each stem-node upon receiving a transaction to forward, and is chosen randomly. Nodes are initialized with a timeout parameter Tbase. As per equation (7) in the paper, when a stem-node
vreceives a transaction, it sets an expiration time Tout(v):
Tout(v) ~ current_time + exp(1/Tbase)
If the transaction is not received again by relay
v before the expiry of Tout(v), then it broadcasts the message using diffusion. This approach means that the if the transaction gets does not enter fluff-phase in time, the first stem-node to broadcast is approximately uniformly selected among all stem-nodes who have seen the transaction, rather than the originating node who created it.
The paper also proceeds to specify the size of the initiating time out parameter Tbase as part of
Proposition 3 in the paper:
Proposition3. For a timeout parameter
Tbase ≥ (−k(k−1)δhop) / 2 log(1−ε ),
where
k,
ε are parameters and δhop is
the time between each hop (e.g., network and/or internal node latency), transactions travel for
k hops without any peer initiating diffusion with a probability of at least
1 − ε.
Dandelion in Grin
Objectives
The choice to include Dandelion in Grin has two main motives behind it:
- Act as a countermeasure against mass de-anonymization attacks. Similar to Bitcoin, the Grin P2P network would be vulnerable to attackers deploying malicious "super-nodes" connecting to most peers on the network and monitoring transactions as they become diffused by their honest peers. This would allow a motivated actor to infer with a high degree of probability from which peer (IP address) transactions originate from, having negative privacy consequences.
- Aggregate transactions before they are being broadcast to the entire network. This is a benefit to blockchains that enable non-interactive CoinJoins on the protocol level, such as Mimblewimble. Despite its good privacy features, some input and output linking is still possible in Mimblewimble and Grin.4 If you know which input spends to which output, it is possible to construct a transaction graph and follow a chain of transaction outputs (TXOs) as they are being spent. Aggregating transactions make this more difficult to carry out, as it becomes less clear which input spends to which output (Figure 3). In order for this to be effective, there needs to be a large anonymity set, i.e. many transactions to aggregate with one another. Dandelion enables this aggregation to occur before transactions are fluffed and diffused to the entire network. This adds obfuscation to the transaction graph, as a malicious observer who is not participating in the stemming or fluffing would not only need to figure out from where a transaction originated, but also which outputs and inputs out of a larger group should be attributed to the originating transaction.
Figure 3. (switch between tabs)
Current implementation
Grin implements a simplified version of the Dandelion++ protocol. It's been improved several times, most recently in version 1.1.05.
Dandelion configuration options in grin-server.toml (default)
#dandelion epoch duration epoch_secs = 600 #fluff and broadcast after embargo expires if tx not seen on network embargo_secs = 180 #dandelion aggregation period in secs aggregation_secs = 30 #dandelion stem probability (stem 90% of the time, fluff 10% of the time) stem_probability = 90 #always stem our (pushed via api) txs regardless of stem/fluff epoch (as per Dandelion++ paper) always_stem_our_txs = true
DandelionEpochtracks a node's current epoch. This is configurable via
epoch_secswith default epoch set to last for 10 minutes. Epochs are set and tracked by nodes individually.
- At the beginning of an epoch, the node chooses a single connected peer at random to use as their outbound relay.
- At the beginning of an epoch, the node makes a decision whether to be in stem mode or in fluff mode. This decision lasts for the duration of the epoch. By default, this is a random choice, with the probability to be in stem mode set to 90%, which implies a fluff mode probability,
qof 10%. The probability is configurable via
DANDELION_STEM_PROBABILITY. The number of expected stem hops a transaction does before arriving to a fluff node is
1/q = 1/0.1 = 10.
Any transactions received from inbound peers or transactions originated from the node itself are first added to the node's
stempool, which is a list of stem transactions, that each node keeps track of individually. Transactions are removed from the stempool if:
- The node fluffs the transaction itself.
- The node sees the transaction in question propagated through regular diffusion, i.e. from a different peer having "fluffed" it.
- The node receives a block containing this transaction, meaning that the transaction was propagated and included in a block.
For each transaction added to the stempool, the node sets an embargo timer. This is set by default to 180 seconds, and is configurable via
embargo_secs.
- A
dandelion_monitorruns every 10 seconds and handles tasks.
- If the node is in stem mode, then:
- After being added to the stempool, received stem transactions are forwarded onto the their relay node as a stem transaction.
- As peers connect at random, it is possible they create a circular loop of connected stem mode nodes (i.e.
A -> B -> C -> A). Therefore, if a node receives a stem transaction from an inbound node that already exists in its own stempool, it will fluff it, broadcasting it using regular diffusion.
dandelion_monitorchecks for transactions in the node's stempool with an expired embargo timer, and broadcast those individually.
- If the node is in fluff mode, then:
- Transactions received from inbound nodes are kept in the stempool.
dandelion_monitorchecks in the stempool whether any transactions are older than 30 seconds (configurable as
DANDELION_AGGREGATION_SECS). If so, these are aggregated and then fluffed. Otherwise no action is taken, allowing for more stem transactions to aggregate in the stempool in time for the next triggering of
dandelion_monitor.
- At the expiry of an epoch, all stem transactions remaining in the stem pool are aggregated and fluffed.
Nodes stem their own transactions
Regardless of whether the node is in fluff or stem mode, any transactions generated from the node itself are forwarded onwards to their relay node as a stem transaction.6
Known limitations
2-regular graphs are used rather than 4-regular graphs as proposed by the paper. It's not clear what impact this has, the paper suggests a trade-off between general linkability of transactions and protection against adversaries who know the entire network graph.
Additionally, unlike the Dandelion++ paper, the embargo timer is by default identical across all nodes. This means that during a black-hole attack where a malicious node withholds transactions, the node most likely to have its embargo timer expire and fluff the transaction will be the originating node, therefore exposing itself.
Future work
- Randomized embargo timer according to the recommendations of the paper to make it more random which node fluffs an expired transaction.
- Evaluation of whether 4-regular graphs are preferred over 2-regular line graphs.
- Simulation of the current implementation to understand performance. | https://docs.grin.mw/wiki/miscellaneous/dandelion/ | 2021-01-16T06:19:14 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.grin.mw |
A placeholder is a special piece of text which is placed in a document, template or chunk and when the content is output to the browser the placeholder is replaced with some value. The value could be the content of the document or a dynamically generated value such as the current system time.
An example place holder is:
[[+placeholder]]
The placeholder will be replaced with the value associated with it and the output stored in the page cache.
If the output of the placeholder should not be cached then you need to use the form:
[[!+placeholder]]
* can be used in place of + if the placeholder should be parsed before the rest of the template, usually used for including content [[*content]].
The following table details the standard set of placeholders available to all documents:
The following table details the extra placeholders available on error pages. | https://docs.clearfusioncms.com/guides/placeholders/ | 2021-01-16T05:34:04 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.clearfusioncms.com |
A cloud deployment or a virtual data center has a variety of applications across multiple tenants. These.
A logical switch is mapped to a unique VXLAN, which encapsulates the virtual machine traffic and carries it over the physical IP network.
L2 Bridges.
You must have the Super Administrator or Enterprise Administrator role permissions to manage logical switches. | https://docs.vmware.com/en/VMware-NSX-Data-Center-for-vSphere/6.4/com.vmware.nsx.admin.doc/GUID-DF57C441-CE9A-4138-9639-1658DBE65D48.html | 2021-01-16T06:06:08 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['images/GUID-93F9ADD7-C30B-4C52-AEC1-023CBD82389A-high.png',
'switch'], dtype=object) ] | docs.vmware.com |
Transforms
position from screen.
using UnityEngine;
public class Example : MonoBehaviour { // When attached to a GUITexture object, this will // move the texture around with the mouse.
void Update() { transform.position = Camera.main.ScreenToViewportPoint(Input.mousePosition); } } | https://docs.unity3d.com/kr/2019.1/ScriptReference/Camera.ScreenToViewportPoint.html | 2021-01-16T06:39:21 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.unity3d.com |
Disable usage statistics An administrator can disable the Usage statistics option by changing the settings in the Enterprise Control Room. Prerequisites This task is performed by the Enterprise Control Room administrator. You must have the necessary rights and permissions to complete this task. Ensure you are logged in to the Enterprise Control Room as the administrator. Procedure Navigate to Administration > Settings > General > Advanced settings. Click Edit. Select the Disabled option. Click Save changes. | https://docs.automationanywhere.com/bundle/enterprise-v2019/page/enterprise-cloud/topics/control-room/administration/settings/cloud-disable-usage-statistics.html | 2021-01-16T06:16:00 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.automationanywhere.com |
- )
Connecting Tableau to Hevo
Read through this section to connect Tableau to your managed BigQuery data warehouse.
Prerequisites
Readpermissions on the service account.
- Tableau connection settings downloaded from Hevo.
Steps
Access your Tableau application.
In the left navigation pane, under Connect, select To a Server.
Select Google BigQuery.
Click ALLOW to authorize Tableau to access your Google BigQuery data.
Select the Billing Project and Project as mentioned in the credentials provided by Hevo.
* The values provided in the image are indicative.
Select the Dataset mentioned in the credentials provided by Hevo. You can see the tables from your BigQuery warehouse in the Table list.
Was this page helpful?
Thank you for helping improve Hevo's documentation. If you need help or have any questions, please consider contacting support. | https://docs.hevodata.com/destinations/data-warehouses/managed-google-bigquery-dw/connecting-bi-tool-to-mdw/tableau/ | 2021-01-16T05:07:58 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.hevodata.com |
What's in the Release Notes
These release notes cover the following topics.
What's New
VMware Remote Console 10.0.2 includes the following changes.
- Support for Windows Server 2016 and macOS 10.13
- The Windows installer changed from .msi to .exe format.
- The VMware Remote Console project now uploads its corresponding Open Source Disclosure Package (ODP) as part the release process.
- The zlib compression library is updated from 1.2.8 to 1.2.11.
- The libcurl data transfer library is updated from 7.51.0 to 7.56.1.
- OpenSSL is updated from 1.0.2k to 1.0.2m.
- There are additional fixes as described in Fixed Issues.
Compatibility and Installation
You can install this release on the following host operating systems.
64-bit and 32-bit Windows
- Windows Server 2016
- Windows Server 2012 R2
- Windows Server 2012
- Windows Server 2008 R2 SP1
- Windows Server 2008 R2
- Windows 10
- Windows 8.1
- Windows 8
- Windows 7
Mac
- macOS 10.13
- macOS 10.12
- Apple OS X 10.11
- Apple OS X 10.10
Linux
- VMware Remote Console generally runs on the same Linux offerings as VMware Workstation versions that are released around the same time.
For more information, see the VMware Compatibility Guide.
Documentation
See the following guides for information about installing and using VMware Remote Console.
VMware Remote Console for vSphere
VMware Remote Console for vRealize Automation
Fixed Issues
The following issues are fixed in this release.
Windows
- Corrects a USB driver from VMware Remote Console 10.0.0 and 10.0.1 that had not been signed by Microsoft.
- Allows the Windows USB arbitrator service to start after VMware Remote Console is installed.
The issue only occurred on 64-bit Windows 8 without a certain runtime environment. The runtime environment is not part of Windows 8 by default but might or might not be present depending on other applications.
Mac
- Corrects issues that prevented the Mac installer from completing and required you to launch the application again to finish.
Known Issues
The following issues are known to affect this release.
General Issues
- Cannot connect to virtual machines hosted on ESXi 5.1
ESXi 5.1 does not support TLS versions greater than 1.0. The following error occurs.
Unable to connect to the MKS: Could not connect to pipe \\. \pipe\vmware-authdpipe within retry period.
Workaround: Configure VMware Remote Console to use TLS 1.0.
- Open the VMware Remote Console configuration file in a text editor.
Windows
C:\ProgramData\VMware\VMware Remote Console\config.ini
Linux
/etc/vmware/config
Mac
/Library/Preferences/VMware Remote Console/config
- Add or edit the TLS protocols entry. Include TLS 1.0.
tls.protocols=tls1.0,tls1.1,tls1.2
- Save and close the configuration file.
- NIC disconnects from vSphere Distributed Switch (vDS) portgroup
You try to edit virtual machine network settings by connecting through VMware Remote Console, and your receive an Invalid device network adapter error.
Workaround: Follow the guidelines in Knowledge Base Article 2151096.
- VMware Remote Console URL message
Starting VMware Remote Console from a Windows or Linux terminal session causes the following message to appear:
This application must be launched from a vmrc URL
The message appears when you omit the URL in the command. By design, you start VMware Remote Console from a client such as vSphere or vRealize Automation, or with a vmrc:// URL.
For help with the VMware Remote Console command line, enter:
Windows
vmrc.exe -?
Linux
vmrc --help
Mac
not available
Windows
- Keyboard hook timeout message
When connecting to a virtual machine, VMware Remote Console might display the following message:
The keyboard hook timeout value is not set to the value recommended by VMware Remote Console
By design, you click OK to update the timeout value, then log out of Windows to ensure that the update takes effect.
- HCmon driver error
Installing VMware Remote Console on a system where other VMware applications have been installed might result in the following error:
Failed to install the HCmon driver
Workaround: Go to Task Manager, Services tab, and stop the VMUSBArbService. Then, proceed with installation.
Alternatively, launch the .exe installer from a command prompt window that you opened with Administrator privileges.
Mac
- VMware Remote Console does not launch on macOS 10.13
The Mac displays a System Extension Blocked message when you attempt to launch VMware Remote Console on macOS 10.13.
Workaround: As a user with administrator privileges, go to System Preferences > Security & Privacy. Under the General tab, near the bottom, you see a message about VMware software being blocked. Click Allow. For more information, see Knowledge Base Article 2151770.
- Device options
There are device-related options that are not available when running VMware Remote Console on a Mac. For example, you cannot add new devices, or display sound card settings. Unavailable Mac options are noted where applicable in the documentation.
Linux
- Updates do not download and install
You click a vmrc:// link to launch VMware Remote Console from the browser. When you try to download and install VMware Remote Console updates, you receive the following error during download.
An error occurred while reading/writing files. Try again later and if the problem persists, contact your system administrator
Workaround: Do one of the following.
- Launch VMware Remote Console from the command line, with a minimum syntactically viable URL. For example:
sudo vmrc vmrc://x/?moid=x
- Download the update from and apply it manually.
After using one of the workarounds, the update installs, but VMware Remote Console displays the following error:
Install of VMware Remote Console failed. Contact VMware Support or your system administrator.
Close and relaunch VMware Remote Console. Go to Help > About VMware Remote Console to verify that the update installed.
- VMware Remote Console on Debian 8.7.1 does not open remote virtual machines
You successfully install VMware Remote Console on Debian 8.7.1, but it does not launch. When run from the console, the following message appears:
Loop on signal 11.
In addition, the log file under /tmp/vmware-$USER/ contains a panic and backtrace.
Workaround: Use VMware Remote Console 9.0.
- VMware Remote Console on Ubuntu 17.04 does not display certificates for viewing
You use VMware Remote Console on Ubuntu 17.04 to connect to an ESXi host that has an invalid or untrusted security certificate, and a certificate warning appears. When you click to view the certificate for inspection, an empty dialog appears, and you can only close the dialog.
Workaround: None
- Wayland protocol is not supported
VMware Remote Console requires Xorg and does not install or run under Wayland sessions. Installation errors occur on newer operating systems that use Wayland, such as Fedora 25.
Workaround: None
- Help link does not resolve
You click the Help option from within VMware Remote Console and receive a file not found error.
Workaround: Open a browser directly to
- Virtual machines automatically power on
VMware Remote Console automatically powers on virtual machines when connecting to virtual machines that are powered off.
Workaround: None
- VMware Remote Console automatically closes
VMware Remote Console automatically closes when the remote virtual machine shuts down.
Workaround: None
- VMware Workstation or VMware Workstation Player
VMware Remote Console cannot simultaneously be installed on the same machine as VMware Workstation or VMware Workstation Player.
Workaround: None | https://docs.vmware.com/en/VMware-Remote-Console/10.0/rn/vmware-remote-console-1002-release-notes.html | 2021-01-16T05:41:52 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.vmware.com |
Template Hooks
Overview
Introduction
Business Directory Plugin offers plugin and theme developers the ability to hook into some of our templates using WordPress actions.
Using these hooks extra HTML or output can be displayed on certain specific pages.
How to use the template hooks?
Since our template hooks are just regular WordPress actions, you need to familiarize yourself with the WordPress Plugin API and with the
add_action() function in particular.
Once you’ve decided which hook to use (see list below) you have to edit your theme’s
functions.php file or create a new plugin that takes advantage of said hook. See example below.
Example 1
The following PHP code would display “Thank you for your purchase!” after the checkout receipt is printed by BD.
<?php function my_thankyou_function() { echo '<h4>Thank you for your purchase!</h4>'; } add_action( 'wpbdp_after_render_receipt', 'my_thankyou_function' ); ?>
Example 2
The following PHP code displays a message on top of every category page, with a special message (“Everybody likes pizza”) for just the category with slug “pizza”.
<?php function pizza_places_message( $category ) { if ( 'pizza' == $category->slug ) { echo "<p>Everybody likes pizza.</p>"; } else { echo "<p>This is category {$category->name}.</p>"; } } add_action( 'wpbdp_before_category_page', 'pizza_places_message' ); ?> | http://docs.businessdirectoryplugin.com/api/template-hooks.html | 2018-09-18T15:17:12 | CC-MAIN-2018-39 | 1537267155561.35 | [] | docs.businessdirectoryplugin.com |
<subcommand> <arguments>
To see all available subcommands, run:
girder-cli --help
For help with a specific subcommand, run:
girder-cli <subcommand> - <subcommand> ...)
Further Examples and Function Level Documentation¶
- class
girder_client.
GirderClient(host=None, port=None, apiRoot=None, scheme=None, apiUrl=None, cacheSettings=None)[source]¶
A class for interacting with the Girder RESTful API. Some simple examples of how to use this class follow:
client = GirderClient(apiUrl='')FolderUploadCallbackItemUploadCallback
addMetadataToFolder(folderId, metadata)[source]¶
Takes)[source]¶
Creates and returns a folder.
createItem(parentFolderId, name, description='', reuseExisting=False)[source]¶
Creates and returns an item.
createUser(login, email, firstName, lastName, password, admin=None)[source]¶
Creates and returns a user.
delete(path, parameters=None)[source]¶
Convenience method to call
sendRestRequest()with the ‘DELETE’ HTTP method.
downloadFile(fileId, path, created=None)[source]¶
Download a file to the given local path or file-like object.
downloadFolderRecursive(folderId, dest, sync=False).
downloadResource(resourceId, dest, resourceType='folder', sync=False)[source]¶
Download a collection, user, or folder recursively from Girder into a local directory.
get(path, parameters=None)[source]¶
Convenience method to call
sendRestRequest()with the ‘GET’ HTTP method.
getResource(path, id=None, property=None)[source]¶
Returns a resource based on
idor None if no resource is found; if
propertyis passed, returns that property value from the found resource.)[source]¶
Returns a folder in Girder with the given name under the given parent. If none exists yet, it will create it and return it.
loadOrCreateItem(name, parentFolderId, reuseExisting=True)[source]¶
Create an item with the given name in the given parent folder.
patch(path, parameters=None, data=None, json=None)[source]¶
Convenience method to call
sendRestRequest()with the ‘PATCH’ HTTP method.
post(path, parameters=None, files=None, data=None, json=None)[source]¶
Convenience method to call
sendRestRequest()with the ‘POST’ HTTP method.
put(path, parameters=None, data=None, json=None)[source]¶
Convenience method to call
sendRestRequest()with the ‘PUT’ HTTP method.
resourceLookup(path, test=False)[source]¶
Look up and retrieve resource in the data hierarchy by path.
sendRestRequest(method, path, parameters=None, data=None, files=None, json.
setResourceTimestamp(id, type, created=None, updated=None)[source]¶
Set the created or updated timestamps for a resource.
transformFilename(name)[source]¶
Sanitize a resource name from Girder into a name that is safe to use as a filesystem path.
upload(filePattern, parentId, parentType='folder', leafFoldersAsItems=False, reuseExisting=False, blacklist=None, dryRun=False). | https://girder.readthedocs.io/en/v2.1.0/python-client.html | 2018-09-18T15:44:55 | CC-MAIN-2018-39 | 1537267155561.35 | [] | girder.readthedocs.io |
horizontal group and get its rect back.
This is an extension to GUILayout.BeginHorizontal. It can be used for making compound controls.
Horizontal Compound group.
// Create a Horizontal Compound Button
using UnityEngine; using UnityEditor;
public class BeginEndHorizontalExample : EditorWindow { [MenuItem("Examples/Begin-End Horizontal usage")] static void Init() { BeginEndHorizontalExample window = (BeginEndHorizontalExample)GetWindow(typeof(BeginEndHorizontalExample)); window.Show(); }
void OnGUI() { Rect r = EditorGUILayout.BeginHorizontal("Button"); if (GUI.Button(r, GUIContent.none)) Debug.Log("Go here"); GUILayout.Label("I'm inside the button"); GUILayout.Label("So am I"); EditorGUILayout.EndHorizontal(); } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/ScriptReference/EditorGUILayout.BeginHorizontal.html | 2018-09-18T15:31:13 | CC-MAIN-2018-39 | 1537267155561.35 | [] | docs.unity3d.com |
Welcome to the operating manual for the VP75 Additive Manufacturing System of the Kühling&Kühling GmbH (in the following referred to as Kühling&Kühling).
The VP75 Additive Manufacturing System (in the following referred to as VP75 or machine) is a fully automatic stand-alone device for Fused-Filament-Fabrication (FFF) in a lab or commercial environment.
Any information needed for installing and commissioning, operating, troubleshooting, maintenance and repair of the machine are described in the following paragraphs and the accompanying sites.
This user's manual must be read thoroughly as it is meant to provide the operator with all information needed to operate the VP75 safely and reasonably. Please always provide access to the website for any user of the Additive Manufacturing System in case of questions or problems.
In this online version of the VP75 operating manual you will find special phrasing and symbols to help you navigate in order to understand your product better.
If you prefer having a printed version at hand, use the “Export to PDF”
button on the right hand side of the screen.
To navigate through this manual, use the chapter list on the Start Page, the Table of Contents in each chapter and the links provided in the text.
If there are additional information of related interest in a chapter, these are provided as at the end of a chapter, headed “Further information”.
To return to the top of a page, use the
on the right hand side of the screen.
Content of special interest is represented by its formatting. Safety information are placed at the beginning of the respective chapter and/or just before the relevant step of action.
INFO
This box gives you additional information about a topic, tipps and tricks or reminders.
NOTICE!
This box states possible machine damage and information about
CAUTION!
This box states possible personal damage of minor effects (e.g. cuts, bruises, light burns) and information about
WARNING!
This box states possible personal damage of major and possibly lasting effects (e.g. crushing injuries, deep cuts, caustic burns, intoxication) and information about
DANGER!
This box states mortal peril (e.g. electric shock) and information about
On the type plate at the rear side of the VP75 you find all information to precisely identify your Additive Manufacturing System.The valid hardware revision is also displayed on the [Setup] menu of the operating screen.
Please provide the serial number and the hardware revision when contacting us for technical support.
The VP75 Additive Manufacturing System has been designed and built for manufacturing three-dimensional workpieces of nearly random geometries from common 1.75 mm thermoplastic filament strands.
The machine features an advanced hot-end geometry with an extrusion temperature up to 500°C. In combination with the heated build chamber (up to 80°C) and the heated built plate (up to 150°C), this makes the VP75 suitable for processing a variety of materials. It is therefor capable of performing regular additive manufacturing tasks with standard thermoplastics as well as special jobs with technical plastics.
Contact Kühling&Kühling for more detailed information on available and applicable materials.
The VP75 is intended for industrial and commercial use. It is not valid for the operation in an explosive atmosphere.
Observing this manual and adhering to the stated information is part of the proper operation.
Improper operation of the VP75 can lead to hazardous situations.
It is forbidden to operate the machine under conditions and for purposes other than stated in this manual.
Operating the VP75 structural alteration of the VP75.
Parts of the VP75 are subject to wear and must be replaced in the specified intervals or on demand. Regularly check these components as stated in the maintenance manual and replace them if required.
Using the VP75 with worn parts is forbidden and frees Kühling&Kühling from liability.
Wear parts must meet the technical specifications defined by Kühling&Kühling. Kühling&Kühling original parts are subject to rigid requirements and meet these standards.
Contact [email protected] for complete list of available wear and spare parts.
Further information
Kühling&Kühling GmbH
Christiansprieß 30
24159 Kiel
Deutschland
Phone.: +49 (0) 431 98 35 24 73
VAT Reg.No.: DE305873054
Local Court: Kiel
Commercial Register No.: 17 535
WEEE-Reg.-No.: DE 11304600
Managing Directors
Jonas Kühling
Simon Kühling
Karsten Wenige | http://docs.kuehlingkuehling.de/vp75/general | 2019-02-16T02:14:54 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.kuehlingkuehling.de |
Release notes for Azure HDInsight
This article provides information about the most recent Azure HDInsight release updates. For information on earlier releases, see HDInsight Release Notes Archive.
Important
Linux is the only operating system used on HDInsight version 3.4 or greater. For more information, see HDInsight versioning article.
Summary
Azure HDInsight is one of the most popular services among enterprise customers for open-source Apache Hadoop and Apache Spark analytics on Azure. With the plus 50 percent price cut on HDInsight, customers moving to the cloud are reaping more savings than ever.
New features
The new updates and capabilities fall in to the following categories:
Update Hadoop and other open-source projects – In addition to 1000+ bug fixes across 20+ open-source projects, this update contains a new version of Spark (2.3) and Kafka (1.0).
a. New features in Apache Spark 2.3
b. New features in Apache Kafka 1.0
Update R Server 9.1 to Machine Learning Services 9.3 –.
Support for Azure Data Lake Storage Gen2 – HDInsight will support the Preview release of Azure Data Lake Storage Gen2. In the available regions, customers will be able to choose an ADLS Gen2 account as the Primary or Secondary store for their HDInsight clusters.
HDInsight Enterprise Security Package Updates (Preview) – (Preview) Virtual Network Service Endpoints support for Azure Blob Storage, ADLS Gen1, Cosmos DB, and Azure DB.
Component versions
The official Apache versions of all HDInsight 3.6 components are listed below. All components listed here are official Apache releases of the most recent stable versions available.
Apache Hadoop 2.7.3
Apache HBase 1.1.2
Apache Hive 1.2.1
Apache Hive 2.1.0
Apache Kafka 1.0.0
Apache Mahout 0.9.0+
Apache Oozie 4.2.0
Apache Phoenix 4.7.0
Apache Pig 0.16.0
Apache Ranger 0.7.0
Apache Slider 0.92.0
Apache Spark 2.2.0/2.3.0
Apache Sqoop 1.4.6
Apache Storm 1.1.0
Apache TEZ 0.7.0
Apache Zeppelin 0.7.
Apache patch information
Hadoop
This release provides Hadoop Common 2.7.3 and the following Apache patches:
HADOOP-13190: Mention LoadBalancingKMSClientProvider in KMS HA documentation.
HADOOP-13227: AsyncCallHandler should use an event driven architecture to handle async calls.
HADOOP-14104: Client should always ask namenode for kms provider path.
HADOOP-14799: Update nimbus-jose-jwt to 4.41.1.
HADOOP-14814: Fix incompatible API change on FsServerDefaults to HADOOP-14104.
HADOOP-14903: Add json-smart explicitly to pom.xml.
HADOOP-15042: Azure PageBlobInputStream.skip() can return negative value when numberOfPagesRemaining is 0.
HADOOP-15255: Upper/Lower case conversion support for group names in LdapGroupsMapping.
HADOOP-15265: exclude json-smart explicitly from hadoop-auth pom.xml.
HDFS-7922: ShortCircuitCache#close is not releasing ScheduledThreadPoolExecutors.
HDFS-8496: Calling stopWriter() with FSDatasetImpl lock held may block other threads (cmccabe).
HDFS-10267: Extra "synchronized" on FsDatasetImpl#recoverAppend and FsDatasetImpl#recoverClose.
HDFS-10489: Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones.
HDFS-11384: Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike.
HDFS-11689: New exception thrown by DFSClient%isHDFSEncryptionEnabled broke hacky hive code.
HDFS-11711: DN should not delete the block On "Too many open files" Exception.
HDFS-12347: TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently.
HDFS-12781: After Datanode down, In Namenode UI Datanode tab is throwing warning message.
HDFS-13054: Handling PathIsNotEmptyDirectoryException in DFSClient delete call.
HDFS-13120: Snapshot diff could be corrupted after concat.
YARN-3742: YARN RM will shut down if ZKClient creation times out.
YARN-6061: Add an UncaughtExceptionHandler for critical threads in RM.
YARN-7558: yarn logs command fails to get logs for running containers if UI authentication is enabled.
YARN-7697: Fetching logs for finished application fails even though log aggregation is complete.
HDP 2.6.4 provided Hadoop Common 2.7.3 and the following Apache patches:
HADOOP-13700: Remove unthrown IOException from TrashPolicy#initialize and #getInstance signatures.
HADOOP-13709: Ability to clean up subprocesses spawned by Shell when the process exits.
HADOOP-14059: typo in s3a rename(self, subdir) error message.
HADOOP-14542: Add IOUtils.cleanupWithLogger that accepts slf4j logger API.
HDFS-9887: WebHdfs socket timeouts should be configurable.
HDFS-9914: Fix configurable WebhDFS connect/read timeout.
MAPREDUCE-6698: Increase timeout on TestUnnecessaryBlockingOnHist oryFileInfo.testTwoThreadsQueryingDifferentJobOfSameUser.
YARN-4550: Some tests in TestContainerLanch fail on non-english locale environment.
YARN-4717: TestResourceLocalizationService.testPublicResourceInitializesLocalDir fails Intermittently due to IllegalArgumentException from cleanup.
YARN-5042: Mount /sys/fs/cgroup into Docker containers as readonly mount.
YARN-5318: Fix intermittent test failure of TestRMAdminService#te stRefreshNodesResourceWithFileSystemBasedConfigurationProvider.
YARN-5641: Localizer leaves behind tarballs after container is complete.
YARN-6004: Refactor TestResourceLocalizationService#testDownloadingResourcesOnContainer so that it is less than 150 lines.
YARN-6078: Containers stuck in Localizing state.
YARN-6805: NPE in LinuxContainerExecutor due to null PrivilegedOperationException exit code.
HBase
This release provides HBase 1.1.2 and the following Apache patches.
HBASE-13376: Improvements to Stochastic load balancer.
HBASE-13716: Stop using Hadoop's FSConstants.
HBASE-13848: Access InfoServer SSL passwords through Credential Provider API.
HBASE-13947: Use MasterServices instead of Server in AssignmentManager.
HBASE-14135: HBase Backup/Restore Phase 3: Merge backup images.
HBASE-14473: Compute region locality in parallel.
HBASE-14517: Show regionserver's version in master status page.
HBASE-14606: TestSecureLoadIncrementalHFiles tests timed out in trunk build on apache.
HBASE-15210: Undo aggressive load balancer logging at tens of lines per millisecond.
HBASE-15515: Improve LocalityBasedCandidateGenerator in Balancer.
HBASE-15615: Wrong sleep time when RegionServerCallable need retry.
HBASE-16135: PeerClusterZnode under rs of removed peer may never be deleted.
HBASE-16570: Compute region locality in parallel at startup.
HBASE-16810: HBase Balancer throws ArrayIndexOutOfBoundsException when regionservers are in /hbase/draining znode and unloaded.
HBASE-16852: TestDefaultCompactSelection failed on branch-1.3.
HBASE-17387: Reduce the overhead of exception report in RegionActionResult for multi().
HBASE-17850: Backup system repair utility.
HBASE-17931: Assign system tables to servers with highest version.
HBASE-18083: Make large/small file clean thread number configurable in HFileCleaner.
HBASE-18084: Improve CleanerChore to clean from directory which consumes more disk space.
HBASE-18164: Much faster locality cost function and candidate generator.
HBASE-18212: In Standalone mode with local filesystem HBase logs Warning message: Failed to invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream.
HBASE-18808: Ineffective config check in BackupLogCleaner#getDeletableFiles().
HBASE-19052: FixedFileTrailer should recognize CellComparatorImpl class in branch-1.x.
HBASE-19065: HRegion#bulkLoadHFiles() should wait for concurrent Region#flush() to finish.
HBASE-19285: Add per-table latency histograms.
HBASE-19393: HTTP 413 FULL head while accessing HBase UI using SSL.
HBASE-19395: [branch-1] TestEndToEndSplitTransaction.testMasterOpsWhileSplitting fails with NPE.
HBASE-19421: branch-1 does not compile against Hadoop 3.0.0.
HBASE-19934: HBaseSnapshotException when read replicas is enabled and online snapshot is taken after region splitting.
HBASE-20008: [backport] NullPointerException when restoring a snapshot after splitting a region.
Hive
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following patches:
Hive 1.2.1 Apache patches:
HIVE-10697: ObjectInspectorConvertors#UnionConvertor does a faulty conversion.
HIVE-11266: count(*) wrong result based on table statistics for external tables.
HIVE-12245: Support column comments for an HBase backed table.
HIVE-12315: Fix Vectorized double divide by zero.
HIVE-12360: Bad seek in uncompressed ORC with predicate pushdown.
HIVE-12378: Exception on HBaseSerDe.serialize binary field.
HIVE-12785: View with union type and UDF to the struct is broken.
HIVE-14013: Describe table doesn't show unicode properly.
HIVE-14205: Hive doesn't support union type with AVRO file format.
HIVE-14421: FS.deleteOnExit holds references to _tmp_space.db files.232: Support stats computation for columns in QuotedIdentifier.
HIVE-16828: With CBO enabled, Query on partitioned views throws IndexOutOfBoundException.
HIVE-17013: Delete request with a subquery based on select over a view.
HIVE-17063: insert overwrite partition onto a external table fail when drop partition first.
HIVE-17259: Hive JDBC does not recognize UNIONTYPE columns.
HIVE-17419: ANALYZE TABLE...COMPUTE STATISTICS FOR COLUMNS command shows computed stats for masked tables.
HIVE-17530: ClassCastException when converting uniontype.
HIVE-17621: Hive-site settings are ignored during HCatInputFormat split-calculation.
HIVE-17636: Add multiple_agg.q test for blobstores.29: ArrayIndexOutOfBoundsException - HBASE-backed tables with Avro schema in Hive2.
HIVE-17845: insert fails if target table columns are not lowercase.
HIVE-17900: analyze stats on columns triggered by Compactor generates malformed SQL with > 1 partition column.
HIVE-18026: Hive webhcat principal configuration optimization.
HIVE-18031: Support replication for Alter Database operation.
HIVE-18090: acid heartbeat fails when metastore is connected via hadoop credential.
HIVE-18189: Hive query returning wrong results when set hive.groupby.orderby.position.alias to true.
HIVE-18258: Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is broken.
HIVE-18293: Hive is failing to compact tables contained within a folder that is not owned by identity running HiveMetaStore.
HIVE-18327: Remove the unnecessary HiveConf dependency for MiniHiveKdc. (Prabhu Joseph via Thejas Nair).
HIVE-18390: IndexOutOfBoundsException when query a partitioned view in ColumnPruner.
HIVE-18429: Compaction should handle a case when it produces no output.
HIVE-18447: JDBC: Provide a way for JDBC users to pass cookie info via connection string.
HIVE-18460: Compactor doesn't pass Table properties to the Orc writer.
HIVE-18467: support whole warehouse dump / load + create/drop database events (Anishek Agarwal, reviewed by Sankar Hariappan).
HIVE-18551: Vectorization: VectorMapOperator tries to write too many vector columns for Hybrid Grace.
HIVE-18587: insert DML event may attempt to calculate a checksum on directories.
HIVE-18613: Extend JsonSerDe to support BINARY type.
HIVE-18626: Repl load "with" clause does not pass config to tasks.
HIVE-18660: PCR doesn't distinguish between partition and virtual columns.
HIVE-18754: REPL STATUS should support 'with' clause.07: Create utility to fix acid key index issue from HIVE-18817.
Hive 2.1.0 Apache Patches:
HIVE-14013: Describe table doesn't show unicode properly.
HIVE-14205: Hive doesn't support union type with AVRO file format.757: Remove calls to deprecated AbstractRelNode.getRows.
HIVE-16828: With CBO enabled, Query on partitioned views throws IndexOutOfBoundException.
HIVE-17063: insert overwrite partition onto a external table fail when drop partition first.
HIVE-17259: Hive JDBC does not recognize UNIONTYPE columns.
HIVE-17530: ClassCastException when converting uniontype.
HIVE-17600: Make OrcFile's enforceBufferSize user-settable.
HIVE-17601: improve error handling in LlapServiceDriver.
HIVE-17613: remove object pools for short, same-thread allocations.
HIVE-17617: Rollup of an empty resultset should contain the grouping of the empty grouping set.
HIVE-17621: Hive-site settings are ignored during HCatInputFormat split-calculation.
HIVE-17629: CachedStore: Have a whitelist/blacklist config to allow selective caching of tables/partitions and allow read while prewarming.
HIVE-17636: Add multiple_agg.q test for blobstores.
HIVE-17702: incorrect isRepeating handling in decimal reader in ORC.45: insert fails if target table columns are not lowercase.
HIVE-17900: analyze stats on columns triggered by Compactor generates malformed SQL with > 1 partition column.
HIVE-18006: Optimize memory footprint of HLLDenseRegister.
HIVE-18026: Hive webhcat principal configuration optimization.
HIVE-18031: Support replication for Alter Database operation.
HIVE-18090: acid heartbeat fails when metastore is connected via hadoop credential.
HIVE-18189: Order by position does not work when cbo is disabled.
HIVE-18258: Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is broken.
HIVE-18269: LLAP: Fast llap io with slow processing pipeline can lead to OOM.
HIVE-18293: Hive is failing to compact tables contained within a folder that is not owned by identity running HiveMetaStore.
HIVE-18318: LLAP record reader should check interrupt even when not blocking.
HIVE-18326: LLAP Tez scheduler - only preempt tasks if there's a dependency between them.
HIVE-18327: Remove the unnecessary HiveConf dependency for MiniHiveKdc.
HIVE-18331: Add relogin when TGT expire and some logging/lambda..
HIVE-18384: ConcurrentModificationException in log4j2.x library.
HIVE-18390: IndexOutOfBoundsException when query a partitioned view in ColumnPruner.
HIVE-18447: JDBC: Provide a way for JDBC users to pass cookie info via connection string.
HIVE-18460: Compactor doesn't pass Table properties to the Orc writer.
HIVE-18462: (Explain formatted for queries with map join has columnExprMap with unformatted column name).
HIVE-18467: support whole warehouse dump / load + create/drop database events.
HIVE-18488: LLAP ORC readers are missing some null checks.
HIVE-18490: Query with EXISTS and NOT EXISTS with non-equi predicate can produce wrong result.
HIVE-18506: LlapBaseInputFormat - negative array index.
HIVE-18517: Vectorization: Fix VectorMapOperator to accept VRBs and check vectorized flag correctly to support LLAP Caching).
HIVE-18523: Fix summary row in case there are no inputs.
HIVE-18528: Aggregate stats in ObjectStore get wrong result.
HIVE-18530: Replication should skip MM table (for now).
HIVE-18548: Fix log4j import.
HIVE-18551: Vectorization: VectorMapOperator tries to write too many vector columns for Hybrid Grace.
HIVE-18577: SemanticAnalyzer.validate has some pointless metastore calls.
HIVE-18587: insert DML event may attempt to calculate a checksum on directories.
HIVE-18597: LLAP: Always package the log4j2 API jar for org.apache.log4j.
HIVE-18613: Extend JsonSerDe to support BINARY type.
HIVE-18626: Repl load "with" clause does not pass config to tasks.
HIVE-18643: don't check for archived partitions for ACID ops.
HIVE-18660: PCR doesn't distinguish between partition and virtual columns.15: Remove unused feature in HPL/SQL.44: Groupping sets position is set incorrectly during DPP.
Kafka
This release provides Kafka 1.0.0 and the following Apache patches.
KAFKA-4827: Kafka connect: error with special characters in connector name.
KAFKA-6118: Transient failure in kafka.api.SaslScramSslEndToEndAuthorizationTest.testTwoConsumersWithDifferentSaslCredentials.
KAFKA-6156: JmxReporter can't handle windows style directory paths.
KAFKA-6164: ClientQuotaManager threads prevent shutdown when encountering an error loading logs.
KAFKA-6167: Timestamp on streams directory contains a colon, which is an illegal character.
KAFKA-6179: RecordQueue.clear() does not clear MinTimestampTracker's maintained list.
KAFKA-6185: Selector memory leak with high likelihood of OOM in case of down conversion.
KAFKA-6190: GlobalKTable never finishes restoring when consuming transactional messages.
KAFKA-6210: IllegalArgumentException if 1.0.0 is used for inter.broker.protocol.version or log.message.format.version.
KAFKA-6214: Using standby replicas with an in memory state store causes Streams to crash.
KAFKA-6215: KafkaStreamsTest fails in trunk.
KAFKA-6238: Issues with protocol version when applying a rolling upgrade to 1.0.0.
KAFKA-6260: AbstractCoordinator not clearly handles NULL Exception.
KAFKA-6261: Request logging throws exception if acks=0.
KAFKA-6274: Improve KTable Source state store auto-generated names.
Mahout and 2 or 2.
Oozie
This release provides Oozie 4.2.0 with the following Apache patches.
OOZIE-2571: Add spark.scala.binary.version Maven property so that Scala 2.11 can be used.
OOZIE-2606: Set spark.yarn.jars to fix Spark 2.0 with Oozie.
OOZIE-2658: --driver-class-path can overwrite the classpath in SparkMain.
OOZIE-2787: Oozie distributes application jar twice making the spark job fail.
OOZIE-2792: Hive2 action is not parsing Spark application ID from log file properly when Hive is on Spark.
OOZIE-2799: Setting log location for spark sql on hive.
OOZIE-2802: Spark action failure on Spark 2.1.0 due to duplicate sharelibs.
OOZIE-2923: Improve Spark options parsing.
OOZIE-3109: SCA: Cross-Site Scripting: Reflected.
OOZIE-3139: Oozie validates workflow incorrectly.
OOZIE-3167: Upgrade tomcat version on Oozie 4.3 branch.
Phoenix
This release provides Phoenix 4.7.0 and the following Apache patches:
PHOENIX-1751: Perform aggregations, sorting, etc., in the preScannerNext instead of postScannerOpen.
PHOENIX-2714: Correct byte estimate in BaseResultIterators and expose as interface.
PHOENIX-2724: Query with large number of guideposts is slower compared to no stats.
PHOENIX-2855: Workaround Increment TimeRange not being serialized for HBase 1.2.
PHOENIX-3023: Slow performance when limit queries are executed in parallel by default.
PHOENIX-3040: Don't use guideposts for executing queries serially.
PHOENIX-3112: Partial row scan not handled correctly.
PHOENIX-3240: ClassCastException from Pig loader.
PHOENIX-3452: NULLS FIRST/NULL LAST should not impact whether GROUP BY is order preserving.
PHOENIX-3469: Incorrect sort order for DESC primary key for NULLS LAST/NULLS FIRST.
PHOENIX-3789: Execute cross region index maintenance calls in postBatchMutateIndispensably.
PHOENIX-3865: IS NULL does not return correct results when first column family not filtered against.
PHOENIX-4290: Full table scan performed for DELETE with table having immutable indexes.
PHOENIX-4373: Local index variable length key can have trailing nulls while upserting.
PHOENIX-4466: java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data.
PHOENIX-4489: HBase Connection leak in Phoenix MR Jobs.
PHOENIX-4525: Integer overflow in GroupBy execution.
PHOENIX-4560: ORDER BY with GROUP BY doesn't work if there is WHERE on pk column.
PHOENIX-4586: UPSERT SELECT doesn't take in account comparison operators for subqueries.
PHOENIX-4588: Clone expression also if its children have Determinism.PER_INVOCATION.
Pig
This release provides Pig 0.16.0 with the following Apache patches.
Ranger
This release provides Ranger 0.7.0 and the following Apache patches:
RANGER-1805: Code improvement to follow best practices in js.
RANGER-1960: Take snapshot's table name into consideration for deletion.
RANGER-1982: Error Improvement for Analytics Metric of Ranger Admin and Ranger KMS.
RANGER-1984: Hbase audit log records may not show all tags associated with accessed column.
RANGER-1988: Fix insecure randomness.
RANGER-1990: Add One-way SSL MySQL support in Ranger Admin.
RANGER-2006: Fix problems detected by static code analysis in ranger usersync for ldap sync source.
RANGER-2008: Policy evaluation is failing for multiline policy conditions.
Slider
This release provides Slider 0.92.0 with no additional Apache patches.
Spark
This release provides Spark 2.3.0 and the following Apache patches:
SPARK-13587: Support virtualenv in pyspark.
SPARK-19964: Avoid reading from remote repos in SparkSubmitSuite.
SPARK-22882: ML test for structured streaming: ml.classification.
SPARK-22915: Streaming tests for spark.ml.feature, from N to Z.
SPARK-23020: Fix another race in the in-process launcher test.
SPARK-23040: Returns interruptible iterator for shuffle reader.
SPARK-23173: Avoid creating corrupt parquet files when loading data from JSON.
SPARK-23264: Fix scala.MatchError in literals.sql.out.
SPARK-23288: Fix output metrics with parquet sink.
SPARK-23329: Fix documentation of trigonometric functions.
SPARK-23406: Enable stream-stream self-joins for branch-2.3.
SPARK-23434: Spark should not warn `metadata directory` for a HDFS file path.
SPARK-23436: Infer partition as Date only if it can be casted to Date.
SPARK-23457: Register task completion listeners first in ParquetFileFormat.
SPARK-23462: improve missing field error message in `StructType`.
SPARK-23490: Check storage.locationUri with existing table in CreateTable.
SPARK-23524: Big local shuffle blocks should not be checked for corruption.
SPARK-23525: Support ALTER TABLE CHANGE COLUMN COMMENT for external hive table.
SPARK-23553: Tests should not assume the default value of `spark.sql.sources.default`.
SPARK-23569: Allow pandas_udf to work with python3 style type-annotated functions.
SPARK-23570: Add Spark 2.3.0 in HiveExternalCatalogVersionsSuite.
SPARK-23598: Make methods in BufferedRowIterator public to avoid runtime error for a large query.
SPARK-23599: Add a UUID generator from Pseudo-Random Numbers.
SPARK-23599: Use RandomUUIDGenerator in Uuid expression.
SPARK-23601: Remove .md5 files from release.
SPARK-23608: Add synchronization in SHS between attachSparkUI and detachSparkUI functions to avoid concurrent modification issue to Jetty Handlers.
SPARK-23614: Fix incorrect reuse exchange when caching is used.
SPARK-23623: Avoid concurrent use of cached consumers in CachedKafkaConsumer (branch-2.3).
SPARK-23624: Revise doc of method pushFilters in Datasource V2.
SPARK-23628: calculateParamLength should not return 1 + num of expressions.
SPARK-23630: Allow user's hadoop conf customizations to take effect.
SPARK-23635: Spark executor env variable is overwritten by same name AM env variable.
SPARK-23637: Yarn might allocate more resource if a same executor is killed multiple times.
SPARK-23639: Obtain token before init metastore client in SparkSQL CLI.
SPARK-23642: AccumulatorV2 subclass isZero scaladoc fix.
SPARK-23644: Use absolute path for REST call in SHS.
SPARK-23645: Add docs RE `pandas_udf` with keyword args.
SPARK-23649: Skipping chars disallowed in UTF-8.
SPARK-23658: InProcessAppHandle uses the wrong class in getLogger.
SPARK-23660: Fix exception in yarn cluster mode when application ended fast.
SPARK-23670: Fix memory leak on SparkPlanGraphWrapper.
SPARK-23671: Fix condition to enable the SHS thread pool.
SPARK-23691: Use sql_conf util in PySpark tests where possible.
SPARK-23695: Fix the error message for Kinesis streaming tests.
SPARK-23706: spark.conf.get(value, default=None) should produce None in PySpark.
SPARK-23728: Fix ML tests with expected exceptions running streaming tests.
SPARK-23729: Respect URI fragment when resolving globs.
SPARK-23759: Unable to bind Spark UI to specific host name / IP.
SPARK-23760: CodegenContext.withSubExprEliminationExprs should save/restore CSE state correctly.
SPARK-23769: Remove comments that unnecessarily disable Scalastyle check.
SPARK-23788: Fix race in StreamingQuerySuite.
SPARK-23802: PropagateEmptyRelation can leave query plan in unresolved state.
SPARK-23806: Broadcast.unpersist can cause fatal exception when used with dynamic allocation.
SPARK-23808: Set default Spark session in test-only spark sessions.
SPARK-23809: Active SparkSession should be set by getOrCreate.
SPARK-23816: Killed tasks should ignore FetchFailures.
SPARK-23822: Improve error message for Parquet schema mismatches.
SPARK-23823: Keep origin in transformExpression.
SPARK-23827: StreamingJoinExec should ensure that input data is partitioned into specific number of partitions.
SPARK-23838: Running SQL query is displayed as "completed" in SQL tab.
SPARK-23881: Fix flaky test JobCancellationSuite."interruptible iterator of shuffle reader".
Sqoop
This release provides Sqoop 1.4.6 with no additional Apache patches.
Storm
This release provides Storm 1.1.1 and the following Apache patches:
STORM-2652: Exception thrown in JmsSpout open method.
STORM-2841: testNoAcksIfFlushFails UT fails with NullPointerException.
STORM-2854: Expose IEventLogger to make event logging pluggable.
STORM-2870: FileBasedEventLogger leaks non-daemon ExecutorService which prevents process to be finished.
STORM-2960: Better to stress importance of setting up proper OS account for Storm processes.
Tez
This release provides Tez 0.7.0 and the following Apache patches:
Zeppelin
This release provides Zeppelin 0.7.3 with no additionalApache patches.
ZEPPELIN-3072: Zeppelin UI becomes slow/unresponsive if there are too many notebooks.
ZEPPELIN-3129: Zeppelin UI doesn't sign out in IE.
ZEPPELIN-903: Replace CXF with Jersey2.
ZooKeeper
This release provides ZooKeeper 3.4.6 and the following Apache patches:
ZOOKEEPER-1256: ClientPortBindTest is failing on Mac OS X.
ZOOKEEPER-1901: [JDK8] Sort children for comparison in AsyncOps tests.
ZOOKEEPER-2423: Upgrade Netty version due to security vulnerability (CVE-2014-3488).
ZOOKEEPER-2693: DOS attack on wchp/wchc four letter words (4lw).
ZOOKEEPER-2726: Patch for introduces potential race condition.
Fixed Common Vulnerabilities and Exposures
This section covers all Common Vulnerabilities and Exposures (CVE) that are addressed in this release.
CVE-2017-7676
CVE-2017-7677
CVE-2017-9799
CVE-2016-4970
CVE-2016-8746
CVE-2016-8751
Fixed issues for support
Fixed issues represent selected issues that were previously logged via Hortonworks Support, but are now addressed in the current release. These issues may have been reported in previous versions within the Known Issues section; meaning they were reported by customers or identified by Hortonworks Quality Engineering team.
Incorrect Results
Other
Performance
Potential Data Loss
Query Failure
Security
Stability
Supportability
Upgrade
Usability
Behavioral changes
Known issues
HDInsight integration with ADLS Gen 2 There are two issues on HDInsight ESP clusters using Azure Data Lake Storage Gen 2 with user directories and permissions:
Home directories for users are not getting created on Head Node 1. As a workaround, create the directories manually and change ownership to the respective user’s UPN.
Permissions on /hdp directory is currently not set to 751. This needs to be set to
chmod 751 /hdp chmod –R 755 /hdp/apps
Spark 2.3
[SPARK-23523][SQL] Incorrect result caused by the rule OptimizeMetadataOnlyQuery
[SPARK-23406] Bugs in stream-stream self-joins
Spark sample notebooks are not available when Azure Data Lake Storage (Gen2) is default storage of the cluster.
Enterprise Security Package
- Spark Thrift Server does not accept connections from ODBC clients. Workaround steps:
- Wait for about 15 minutes after cluster creation.
- Check ranger UI for existence of hivesampletable_policy.
- Restart Spark service. STS connection should work now.
Workaround for Ranger service check failure
RANGER-1607: Workaround for Ranger service check failure while upgrading to HDP 2.6.2 from previous HDP versions.
Note
Only when Ranger is SSL enabled. in the ranger-admin your environment is configured for Ranger-KMS, add the property ranger.tomcat.ciphers in theranger-kms
Note
The noted values are working examples and may not be indicative of your environment. Ensure that the way you set these properties matches how your environment is configured.
RangerUI: Escape of policy condition text entered in the policy form
Component Affected: Ranger
Description of Problem
If a user wants to create policy with custom policy conditions and the expression or text contains special characters, then policy enforcement will not work. Special characters are converted into ASCII before saving the policy into the database.
Special Characters: & < > " ` '
For example, the condition tags.attributes['type']='abc' would get converted to the following once the policy is saved.
tags.attds['dsds']='cssdfs'
You can see the policy condition with these characters by opening the policy in edit mode.
Workaround
Option #1: Create/Update policy via Ranger Rest API
REST URL: http://<host>:6080/service/plugins/policies
Creating policy with policy condition:
The following example will create policy with tags as `tags-test` and assign it to `public` group with policy condition astags.attr['type']=='abc' by selecting all hive component permissions like select, update, create, drop, alter, index, lock, all.
Example:
curl -H "Content-Type: application/json" -X POST -u admin:admin -d '{"policyType":"0","name":"P100","isEnabled":true,"isAuditEnabled":true,"description":"","resources":{"tag":{"values":["tags-test"],"isRecursive":"","isExcludes":false}},"policyItems":[{"groups":["public"],"conditions":[{"type":"accessed-after-expiry","values":[]},{"type":"tag-expression","values":["tags.attr['type']=='abc'"]}],}]}],"denyPolicyItems":[],"allowExceptions":[],"denyExceptions":[],"service":"tagdev"}'
Update existing policy with policy condition:
The following example will update policy with tags as `tags-test` and assign it to `public` group with policy condition astags.attr['type']=='abc' by selecting all hive component permissions like select, update, create, drop, alter, index, lock, all.
REST URL: http://<host-name>:6080/service/plugins/policies/<policy-id>
Example:
curl -H "Content-Type: application/json" -X PUT -u admin:admin -d '{"id":18,"guid":"ea78a5ed-07a5-447a-978d-e636b0490a54","isEnabled":true,"createdBy":"Admin","updatedBy":"Admin","createTime":1490802077000,"updateTime":1490802077000,"version":1,"service":"tagdev","name":"P0101","policyType":0,"description":"","resourceSignature":"e5fdb911a25aa7f77af5a9546938d9ed","isAuditEnabled":true,"resources":{"tag":{"values":["tags"],"isExcludes":false,"isRecursive":false}},"policyItems":[{}],"users":[],"groups":["public"],"conditions":[{"type":"ip-range","values":["tags.attributes['type']=abc"]}],"delegateAdmin":false}],"denyPolicyItems":[],"allowExceptions":[],"denyExceptions":[],"dataMaskPolicyItems":[],"rowFilterPolicyItems":[]}'
Option #2: Apply Javascript changes
Steps to update JS file:
Find out PermissionList.js file under /usr/hdp/current/ranger-admin
Find out definition of renderPolicyCondtion function (line no:404).
Remove following line from that function i.e under display function(line no:434)
val = _.escape(val);//Line No:460
After removing the above line, the Ranger UI will allow you to create policies with policy condition that can contain special characters and policy evaluation will be successful for the same policy.
HDInsight Integration with ADLS Gen 2: User directories and permissions issue with ESP clusters
- Home directories for users are not getting created on Head Node 1. Workaround is to create these manually and change ownership to the respective user’s UPN.
- Permissions on /hdp is currently not set to 751. This needs to be set to a. chmod 751 /hdp b. chmod –R 755 /hdp/apps
Deprecation
OMS Portal: We have removed the link from HDInsight resource page that was pointing to OMS portal. Log Analytics initially used its own portal called the OMS portal to manage its configuration and analyze collected data. All functionality from this portal has been moved to the Azure portal where it will continue to be developed. HDInsight has deprecated the support for OMS portal. Customers will use HDInsight Log Analytics integration in Azure portal.
Spark 2.3
Upgrading
All of these features are available in HDInsight 3.6. To get the latest version of Spark, Kafka and R Server (Machine Learning Services), please choose the Spark, Kafka, ML Services version when you create a HDInsight 3.6 cluster. To get support for ADLS, you can choose the ADLS storage type as an option. Existing clusters will not be upgraded to these versions automatically.
All new clusters created after June 2018 will automatically get the 1000+ bug fixes across all the open-source projects. Please follow this guide for best practices around upgrading to a newer HDInsight version.
Feedback
We'd love to hear your thoughts. Choose the type you'd like to provide:
Our feedback system is built on GitHub Issues. Read more on our blog. | https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-release-notes | 2019-02-16T01:23:40 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.microsoft.com |
Laravel
Laravel is supported via a native package, sentry-laravel.
Laravel 5.x
Install the
sentry/sentry-laravel package:
$ composer require sentry/sentry-laravel:1.0.0-beta3 php-http/curl-client guzzlehttp/psr7
If you’re on Laravel 5.4 or earlier, you’ll need to add the following to your
config/app.php (for Laravel 5.5+ these will be auto-discovered by Laravel):
"
Add your DSN to
.env:
SENTRY_LARAVEL_DSN=___PUBLIC_DSN___
User Feeback
To see how to show user feedback dialog see: User Feedback
Laravel 4.x
Install the
sentry/sentry-laravel package:
Laravel 4.x is supported until version 0.8.x.
$ composer require "sentry/sentry-laravel:0.8.*"
Add the Sentry service provider and facade in
config/app.php:
'providers' => array( // ... 'Sentry\SentryLaravel\SentryLaravelServiceProvider', ) 'aliases' => array( // ... 'Sentry' => 'Sentry\SentryLaravel\SentryFacade', )
Create the Sentry configuration file (
config/sentry.php):
$ php artisan config:publish sentry/sentry-laravel
Add your DSN to
config/sentry.php:
<?php return array( 'dsn' => '___PUBLIC_DSN___', // ... );
If you wish to wire up Sentry anywhere outside of the standard error handlers, or
if you need to configure additional settings, you can access the Sentry instance
through
$app:
$app['sentry']->setRelease(Git::sha());
Lumen 5.x
Install the
sentry/sentry-laravel package:
$ composer require sentry/sentry-laravel:1.0.0-beta3 php-http/curl-client guzzlehttp/psr7
Register Sentry in
bootstrap/app.php:
$app->register('Sentry\Laravel\LumenServiceProvider'); # Sentry must be registered before routes are included require __DIR__ . '/../app/Http/routes.php';
Add Sentry reporting to
app/Exceptions/Handler.php:
public function report(Exception $e) { if (app()->bound('sentry') && $this->shouldReport($e)) { app('sentry')->captureException($e); } parent::report($e); }
Create the Sentry configuration file (
config/sentry.php):
<?php return array( 'dsn' => '___PUBLIC_DSN___', // capture release as git sha // 'release' => trim(exec('git log --pretty="%h" -n1 HEAD')), );
Testing with Artisan
You can test your configuration using the provided
artisan command:
$ php artisan sentry:test [sentry] Client DSN discovered! [sentry] Generating test event [sentry] Sending test event [sentry] Event sent: e6442bd7806444fc8b2710abce3599ac
Laravel specific options
breadcrumbs.sql_bindings
Capture bindings on SQL queries.
Defaults to
true.
'breadcrumbs.sql_bindings' => false,
Using Laravel 5.6 log channels
Note
If you’re using log channels to log your exceptions and are also logging exceptions to Sentry in your exception handler (as you would have configured above) exceptions might end up twice in Sentry.
To configure Sentry as a log channel, add the following config to the
channels section in
config/logging.php:
'channels' => [ // ... 'sentry' => [ 'driver' => 'sentry', ], ],
After you configured the Sentry log channel, you can configure your app to both log to a log file and to Sentry by modifying the log stack:
'channels' => [ 'stack' => [ 'driver' => 'stack', // Add the Sentry log channel to the stack 'channels' => ['single', 'sentry'], ], //... ],
Optionally, you can set the logging level and if events should bubble on the driver:
'channels' => [ // ... 'sentry' => [ 'driver' => 'sentry', 'level' => null, // The minimum monolog logging level at which this handler will be triggered // For example: `\Monolog\Logger::ERROR` 'bubble' => true, // Whether the messages that are handled can bubble up the stack or not ], ],
Naming you log channels
If you have multiple log channels you would like to filter on inside the Sentry interface, you can add the
name attribute to the log channel.
It will show up in Sentry as the
logger tag, which is filterable.
For example:
'channels' => [ 'my_stacked_channel' => [ 'driver' => 'stack', 'channels' => ['single', 'sentry'], 'name' => 'my-channel' ], //... ],
You’re now able to log errors to your channel:
\Log::channel('my_stacked_channel')->error('My error');
And Sentry’s
logger tag now has the channel’s
name. You can filter on the “my-channel” value.
Resolve name conflicts with packages also called Sentry
To resolve this, you’ll need to create your own service provider extending ours so we can prevent naming conflicts.
<?php namespace App\Support; class SentryLaravelServiceProvider extends \Sentry\Laravel\ServiceProvider { public static $abstract = 'sentry-laravel'; }
You can then add this service provider to the
config/app.php.
'providers' => array( // ... App\Support\SentryLaravelServiceProvider::class, )
Optionally, if you want to use the facade, you also need to extend/create a new facade.
<?php namespace App\Support; class SentryLaravelFacade extends \Sentry\Laravel\Facade { protected static function getFacadeAccessor() { return 'sentry-laravel'; } }
And add that facade to your
config/app.php.
'aliases' => array( // ... 'SentryLaravel' => App\Support\SentryLaravelFacade::class, )
After you’ve added your own service provider, running
php artisan vendor:publish --provider="App\Support\SentryLaravelServiceProvider" publishes the Sentry config file to your chosen name (in the example above
config/sentry-laravel.php) preventing conflicts with a
config/sentry.php config file that might be used by the other package.
If you followed the regular installation instructions above (you should), make sure you replace
app('sentry') with
app('sentry-laravel').
The namespace
\App\Support can be anything you want in the examples above.
Note
If you’re on Laravel 5.5+ the Sentry package is probably auto-discovered by Laravel. To solve this, add or append to the
extra section in your
composer.json file and run composer update/install afterward.
"extra": { "laravel": { "dont-discover": ["sentry/sentry-laravel"] } } | https://docs.sentry.io/platforms/php/laravel/ | 2019-02-16T01:56:30 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.sentry.io |
Income Tax Myths
"Section 861 shows that the domestic income of
U.S. citizens is not taxable."
Some people argue that section 861 of the tax code
shows that U.S. citizens are taxed only on foreign-source income. This
argument is mistaken.
Section 861 of the tax code (plus some of the following
sections and the attendant regulations) determines whether income is
considered to be from a source within the United States or from a source
outside the United States. But the rest of the tax code (especially
sections 1, 61,
and 63) shows that U.S. citizens are taxed on their income from
all sources, whether from within or outside the United States.
So anyone can use section 861 to determine whether their income is from
foreign or domestic sources, but for most U.S. citizens, the source
simply doesn’t matter, because U.S. citizens are taxed on their
income from all sources.
It’s as though section 861 said that some of
your income shall be considered red income and some blue income, but
then the rest of the code said that you are taxed on all your income
regardless of color. You could use section 861 to determine the color
of your income, but it wouldn’t matter.
So why does section 861 exist, if it doesn’t
matter? The answer is that section 861 doesn’t matter to most
U.S. citizens, but it matters a lot to some people. Without going into
all the situations where section 861 matters, here are the two most
important:
1. Foreigners (unless admitted to the U.S. for permanent
residence) are not subject to U.S. income tax on their foreign-source
income, but only on their U.S.-source income. This is only natural:
the U.S. could hardly expect to tax a billion Chinese people on the
income they earn in China! But if a foreigner earns income from a U.S.
source (for example, by maintaining a bank account in the U.S., which
earns interest), then the U.S. government can tax that income. So section
861 is very important to foreigners, because it tells them what part
of their income is subject to U.S. income tax.
This is made clear by section 2(d) of the tax code,
26 U.S.C. § 2(d), which provides that "In the case of a nonresident
alien individual, the taxes imposed by sections 1 and 55 shall apply
only as provided by section 871 or 877," and by section 871, 26
U.S.C. § 871, which provides, "there is hereby imposed for
each taxable year a tax of 30 percent of the amount received from
sources within the United States by a nonresident alien individual
. . . "
Notice the difference from section 61. Section 61 defines
gross income as "all income from whatever source derived,"
but section 871 provides that foreigners are taxed only on "the
amount received from sources within the United States."
The contrast between the two sections makes the rule
particularly clear. Foreigners need to know whether their income is
from sources within or without the United States, because they are taxed
only on their U.S.-source income, but U.S. citizens are taxed on all
income from whatever source derived, so for the basic purpose
of determining gross income it doesn't matter whether a U.S. citizen's
income is from within or without the United States.
2. U.S. citizens who have income from foreign sources
(e.g., from working abroad, receiving dividends from foreign companies,
or interest on foreign bank accounts) may have paid tax on that income
to a foreign government. Double taxation is usually considered bad,
so the U.S. tax code allows a credit for taxes paid to foreign governments.
But determining the credit requires apportionment of income and deductions
into foreign and domestic sources. So section 861 is important to U.S.
citizens who have foreign source income and who want to claim the foreign
tax credit. (This is provided in tax code sections 27 and 901.)
But to repeat, for a typical U.S. citizen, section
861 is irrelevant. Anyone can go ahead and use it to characterize their
income as from U.S. sources or foreign sources, but, because U.S. citizens
are taxed on all income from all sources, the characterization simply
won’t matter for most U.S. citizens.
That's really all one needs to know, but readers who
have seen this
video or other material by one Larken Rose, a big proponent of the
861 argument, may want more detail. For more detail, please click here:
Readers might also be interested to know that Mr. Rose
served a substantial
jail term following his conviction on tax charges. In seeking trustworthy
tax law information, no sensible person would turn to a convicted tax
criminal who has no training in law.
(Also, Rose's conviction was affirmed by the U.S. Court
of Appeals for the Third Circuit in August 2008, and it appears from
the court's opinion
that Rose did not even make the 861 argument to the court.
If he really believed the argument, one would have expected him to make
it in his own case.)
Like other tax protestor arguments, the 861 argument
has been to court many times and has a batting average of zero. Every
court to consider the argument has rejected it. A representative sampling
(from among dozens of cases that have considered the argument):
"Bell's main rationale for avoiding the
income tax is known as the 'U.S. Sources argument' or the 'Section
861 argument.' This method has been universally discredited."
United States v. Bell, 414 F.3d 474 (3d Cir. 2005).
"Carmichael asserts that, under I.R.C. §
861, only the domestic income of those engaged in certain activities
relating to foreign commerce are taxable . . . . This argument
has been uniformly rejected by courts that have considered it,
. . . and we reject it as well." Carmichael v. United
States, 128 Fed. Appx. 109 (Fed. Cir. 2005).
"Rayner insists that he owed no tax in 1998
because all his income that year . . . derived from sources within
the United States and therefore (so he says) is not taxable income
under 26 U.S.C. § 861 and the regulations construing that
statute. This absurd argument is patently frivolous." Rayner
v. Commissioner, 70 Fed. Appx. 739 (5th Cir. 2003).
"While the plaintiffs have attached a 12-page
supplement to their 1040X, explaining that they did not have to
pay income taxes as wage earners in the United States pursuant
to 26 U.S.C. § 861, this argument is simply frivolous and
has been uniformly rejected by other courts." Deyo v.
I.R.S., 2004 WL 2051217 (D. Conn. 2004).
"Loofbourow . . . [argues] that his compensation
does not constitute gross income because it is not an item of
income listed in 26 C.F.R. § 1.861- 8(f). Loofbourrow's argument,
however, is misplaced and takes the regulations out of context."
Loofbourrow v. Commissioner, 208 F. Supp. 2d 698 (S.D.
Tex. 2002).
Somewhat amusingly, you can read a scathing critique
of the 861 argument on the web
page of Irwin Schiff, who is a leading tax protestor himself. | http://docs.law.gwu.edu/facweb/jsiegel/Personal/taxes/861.htm | 2009-07-04T08:55:03 | crawl-002 | crawl-002-021 | [] | docs.law.gwu.edu |
What Is Load Balancing?.
Assigning a Load Factor
Load refers to a number assigned to a service request based on the amount of time required to execute that service. Loads are assigned to services so that the BEA BEA an MSSQ (Multiple Server, Single Queue).
Load Balancing | http://e-docs.bea.com/tuxedo/tux80/atmi/intatm24.htm | 2009-07-04T13:23:59 | crawl-002 | crawl-002-021 | [] | e-docs.bea.com |
Before your customer can purchase a product, you must find out which vendors are offering the product for sale.
In ECS, an offer is the data associated with a vendor offering a product for sale in a specific condition (new, used, refurbished or collectible).
Offer listings are the current price/availability for various shipping methods.
If there are no offer listings for an item, that item cannot be purchased. If the offer listing's availability does not begin with the words "Usually ships" (US) or equivalent (UK, DE, JP), then the product cannot be purchased.
Unlike the ASIN, which is a permanent identifier, offers and offer listings are more transient data specified by a vendor with available stock at a certain price. For this reason, offers should not be stored locally for long periods of time.
Offer and Offer Listing Details
You can get a summary of availability for different product conditions and available quantities for all vendors (including Amazon). The Amazon database divides product conditions into four categories:
New
Used
Refurbished
Collectible
You can use the OfferSummary response group with an ItemLookup, ItemSearch, SimilarityLookup or ListLookup operation to get an overview of whether there are offers available for each product condition and the lowest price in each category, as shown in the table below.
If you are only interested in New inventory, then Items/Item/OfferSummary/TotalNew will indicate how many vendors are offering this product for sale in new condition.
Children of the OfferSummary element
There are two ways to select a specific item for purchase:
For Amazon products, offer listings are identified by the ASIN
For other vendors, offer listings are identified by the OfferListingId
You may do an ItemLookup, ItemSearch, SimilarityLookup or ListLookup operation with the ResponseGroup parameter set to OfferFull or Offers in order to see all of the available offers and offer listings by vendors willing to sell a given product.
The table below shows the data returned in the Offers/Offer and Offers/Offer/OfferListing elements. Note that you must refer to the Offers/Offer/OfferListing/Condition element to determine whether the product is New, Used, Refurbished or Collectible.
The data in Offers and OfferListing elements | http://docs.amazonwebservices.com/AWSEcommerceService/2005-03-23/PgDatamodelOffers.html | 2009-07-04T13:23:38 | crawl-002 | crawl-002-021 | [] | docs.amazonwebservices.com |
Up to this point, you have seen how you can use operation input parameters and response groups to filter out unwanted responses. The SearchBins response group provides a different means of refining results. It enables you to filter results based on values returned in a response.
The SearchBins response group categorizes the items returned by
ItemSearch into groups, called bins. The grouping is based
on some criteria, depending on the search index. For example, a set of bins can be based on a set of price ranges for an item. In the case of women’s shoes, for example, SearchBins might return a bin that contains ASINs for shoes that cost between $0 and $50, a second bin for shoes that cost $50 to $100, and a third bin for shoes that cost more than $100.
The advantage of using search bins is that the response group divides the items into bins without you having to return or parse item attributes. You can then submit a second
ItemSearch request and return only the items in one bin.
You cannot create bins nor can you specify the criteria used to divide the items into groups. The SearchBins response group does that automatically.
Some search indices support more than one kind of bin. For example, apparel items can be divided in to bins according to price range and brand. In this case, the response would return multiple sets of bins, called SearchBinSets, in which the items would be divided according to different criteria.
The criteria used to divide the returned items into bins is called the NarrowBy value.
Topics
NarrowBy Values And Search Indices
The following request uses the SearchBins response group to return search bins.
Service=AWSECommerceService& AWSAccessKeyId=
[Access Key ID]& Operation=
ItemSearch& SearchIndex=Baby& Keywords=pants& Availability=Available& MerchantId=All& Condition=All& ResponseGroup=SearchBins
The following xml is a snippet from the response.
<SearchBinSets> <SearchBinSet NarrowBy="PriceRange"> <Bin> <BinName>$0$24</BinName> <BinItemCount>1645</BinItemCount> <BinParameter> <Name>MinimumPrice</Name> <Value>0</Value> </BinParameter> <BinParameter> <Name>MaximumPrice</Name> <Value>2499</Value> </BinParameter> </Bin> <Bin> <BinName>$25$49</BinName> <BinItemCount>647</BinItemCount> <BinParameter> <Name>MinimumPrice</Name> <Value>2500</Value> </BinParameter> <BinParameter> <Name>MaximumPrice</Name> <Value>4999</Value> </BinParameter> </Bin> <Bin> <BinName>$50$99</BinName> <BinItemCount>173</BinItemCount> <BinParameter> <Name>MinimumPrice</Name> <Value>5000</Value> </BinParameter> <BinParameter> <Name>MaximumPrice</Name> <Value>9999</Value> </BinParameter> </Bin>
This response snippet shows the first three bins in the response. The NarrowBy value shows that the items were divided up based on price range. The BinName element names the bin. The names are descriptive of the price ranges that each bin represents. For example, the BinName, $50$99, contains items that cost between $50 and $99.99, which you can see by the values returned for MinimumPrice and MaximumPrice in that bin.The BinItemCount element shows how many items are in each bin, for example, there are 173 items in the last bin
The BinParameter/Value elements show the values used to create the bins. In this example, the parameters are the minimum and maximum prices of the items in that bin. For example, in the last bin, the minimum price of an item in that bin is $50.00 and the maximum value is $99.99.
The BinParameter/Name value, such as MaximumPrice, is an
ItemSearch parameter name. This means that you can use the <Value> as the value for the parameter named by <Name> in a subsequent
ItemSearch request. In this example, MinimumPrice is the
ItemSearch parameter and, in the last bin, the value is 5000. By submitting a second request using
ItemSearch's parameters,
MinimumPrice and
MaximumPrice , you could return the item attributes for only the items in that bin.
As you can see from this example, the SearchBins response group enables you to narrow your search without you having to parse through item attributes.
One value of using search bins is that you can divide items into groups according to criteria without having to parse item attributes. Based on the search bins returned, you can then submit a second request using the
ItemSearch parameter value that helps target your results, but how?
The names of bins and the parameters that describe the bins vary according to the bin. The following sample shows a bin based on price:
< <Bin> <BinName>$0$24</BinName> <BinItemCount>1645</BinItemCount> <BinParameter> <Name>MinimumPrice</Name> <Value>9</Value> </BinParameter> <BinParameter> <Name>MaximumPrice</Name> <Value>2499</Value> </BinParameter> </Bin>
The response shows the minimum and maximum price for items in the bin, $0$24, and the number of items in it, 1645.
Other NarrowBy values name bins differently. The following example shows a response snippet when NarrowBy is “Merchant.” In this case, the BinParameter name is merchant ID.
>
As you can see from these examples, BinParameter names are the same as
ItemSearch input parameter names. This correspondence means that you can create a second
ItemSearch request using the search bin results as values for
ItemSearch parameter values. For example, MinimumPrice and MaximumPrice are returned in search bins based on PriceRange. You could take the values of the search bin and put them directly into
ItemSearch parameters. Using the PriceRange example above, you could write the following
ItemSearch request to retrieve items only in the first search bin:? Service=AWSECommerceService& AWSAccessKeyId=
[AWS Access Key ID]& Operation=
ItemSearch& SearchIndex=Baby& Keywords=pants& Availability=Available& Condition=All& MinimumPrice=0& MaximumPrice=2499& ResponseGroup=SearchBins
ItemSearch divides the results of this request into another set of search bins because the SearchBins response group was used again. This means that the price range of the first search bin in the first response is split into multiple search bins in the response to the second request. The second response enables you to present more granularity in price ranges. For example, from the first response, you could return all items that cost between $0 and $24.99. In the second response, you are able to provide a much smaller price interval, for example, $10 to $14.99.
The process of using search bin results for
ItemSearch parameter values can be iterative. You can, for example, submit a third request using the SearchBins response group to divide one search bin into more search bins. This process can be repeated until the level of granularity you desire is reached. At that point, you can send a last request using other response groups of your choosing.
Alternatively, you could refine the search results in a different way. Some search indices return more than one set of search bins. In those cases, you can use the values from more than one set of search bins in an
ItemSearch request. Using the above example, if the response also included a search bin based on BrandName, which is the NarrowBy value, you could use brand and price range values in an
ItemSearch request:
Brand=Levi’s& MinimumPrice=0& MaximumPrice=2499&
The response would then only include shirts by Levi’s that cost under $25. You could continue to drill down by adding additional parameters to the request.
Here are some tips to help you create accurate ItemSearch requests:
The default value of the Condition parameter is "New."
If you do not get satisfactory results and you have not specified a Condition, set the parameter to "All." This value returns all Conditions. If you change the value to something besides the default, New, you must also set the MerchantId parameter to "All." If you do not, you will get the same results. The reason is that the default value of MerchantId is Amazon. Because Amazon only sells new items, the response can only contain new items, which was the case when Condition was New, the default value. Setting MerchantId to "All" enables the response to contain merchants that sell items in all conditions.
The default value of the MerchantId parameter is "Amazon."
If you want to find items sold by other merchants or items that are not in "New" condition (Amazon only sells new items), either specify the merchant using MerchantId, or, to search all merchants, set the parameter to "All."
The Keywords parameter searches for word matches in an item's title and description.
If you know a word is part of the title of an item, use the Title parameter because, in this case, it often returns fewer but more accurate results than the Keywords parameter.
Use the TextStream parameter to search using a block of text.
For more information, see
ItemSearch in the API
Reference Guide.
To use boolean values, such as AND, NOT, or OR, in an
ItemSearch request, use
the Power parameter.
You can create relatively sophisticated search criteria using this parameter. For more information, see ItemSearch in the API Reference Guide. | http://docs.amazonwebservices.com/AWSECommerceService/2007-10-29/DG/UsingSearchBinstoFindItems.html | 2009-07-04T13:24:06 | crawl-002 | crawl-002-021 | [] | docs.amazonwebservices.com |
The WebLogic Server EJB Container and Supported Services
The following sections describe the WebLogic Server EJB container, plus various aspects of EJB behavior in terms of the features and services that the container provides. See WebLogic Server Container-Managed Persistence Service, for more information on container-managed persistence (CMP). Life Cycle.
Entity Bean Lifecycle and Caching and Pooling
WebLogic Server provides these features to improve performance and throughput for entity EJBs:
The sections that follow describe the lifecycle of an entity bean instance, and how the container populates and manages the free pool and the cache. For an illustration, see Figure 4-2.
Initializing Entity EJB Instances (Free Pool)
If you specify a non-zero value for initial-beans-in-free-pool, value of the max-beans-in-free-pool element in weblogic-ejb-jar.xml.
READY and ACTIVE Entity EJB Instances (Cache). Current Beans in Cache field in the monitoring tab displays the count of active and ready beans.
An ACTIVE instance is currently enlisted in a transaction. After completing the transaction, the instance becomes READY, and remains in cache until space is needed for other:
Figure 4-1 Entity EJB Caching Behavior by Concurrency Type
Removing Beans EJB Lifecycle Transitions
Figure 4-2 illustrates the EJB free pool and cache, and the transitions that occur throughout an entity bean instance's lifecycle.
Figure 4-2 Entity Bean Lifecycle-3 by the value of the max-beans-in-free-pool deployment element, available memory, or the number of execute threads. 4-4 Stateful Session EJB Life Cycle
Stateful Session EJB Creation.
Stateful Session EJB Passivation.
Controlling Passivation
The rules that govern the passivation of stateful session beans vary, based on the value of the beans cache-type element, which can be::
Preventing Removal of Idle EJBs
Setting idle-timeout-seconds to 0 stops WebLogic Server from removing EJBs that are idle for a period of time. However, EJBs may still be passivated if cache resources become scarce.
Managing EJB Cache Size
For a discussion of managing cache size to optimize performance in a production environment see "Setting EJB Pool Size" in WebLogic Server Performance and Tuning.
Specifying the Persistent Store Directory for Passivated Beans\700\bea\user_domains\mydomain\myserver\pstore\
The path to the persistence store is:
RootDirectory\ServerName\persistent-store-dir
where:
D:\releases\700\bea\user_domains\mydomain
RootDirectory can be specified at server startup with the -Dweblogic.RootDirectory property.
The persistent store directory contains a subdirectory, named with a hash code, for each passivated bean. For example, the subdirectory for a passivated bean in the example above might be:
D:\releases\700\bea\user_domains\mydomain\myserver\pstore\14t89gex0m2fr
Concurrent Access to Stateful Session Beans
In accordance with the EJB 2.0. is-modified-method-name to Limit Calls to ejbStore() (EJB 1.1 Only)..
EJB Concurrency Strategy
The concurrency strategy specifies how the EJB container should manage concurrent access to an entity bean. Although the Database option is the default concurrency strategy for WebLogic Server, you may want to specify other options for your entity bean depending on the type of concurrency access the bean requires. WebLogic Server provides the following concurrency strategy options:
Concurrency Strategy for Read-Write EJBs
You can use the Exclusive, Database, and Optimistic concurrency strategies for read-write EJBs. WebLogic Server loads EJB data into the cache at the beginning of each transaction, or as described in Using cache-between-transactions to Limit Calls to ejbLoad(). WebLogic Server calls ejbStore() at the successful commit of a transaction, or as described under Using is-modified-method-name to Limit Calls to ejbStore() (EJB 1.1 Only).
Specifying the Concurrency Strategy the concurrency strategy for an EJB. In the following sample XML, the code specifies the default locking mechanism, Database.
Figure 4-5 Sample XML specifying the concurrency strategy
<entity-descriptor>
<entity-cache>
...
<concurrency-strategy>Database</concurrency-strategy>
</entity-cache>
...
</entity-descriptor>
If you do not specify a concurrency-strategy, WebLogic Server performs database locking for entity EJB instances.
A description of each concurrency strategy is covered in the following sections.
Exclusive Concurrency Strategy
The Exclusive concurrency strategy was the default in WebLogic Server 5.1 and 4.5.1. This locking method Concurrency Strategy
The Database concurrency strategy is the default option for WebLogic Server and the recommended mechanism for EJB 1.1 and EJB 2.0 beans..
When using the Database concurrency strategy instead of Optimistic with the cache-between-transactions element set to "True," you will receive a warning message from the compiler indicating that cache-between-transactions should be disabled. If this condition exists, WebLogic Server automatically disables cache-between-transactions.
Optimistic Concurrency Strategy
The Optimistic concurrency strategy does not hold any locks in the EJB container or the database while the transaction is in process. When you specify this option, the EJB container ensures that the data being updated by a transaction has not changed. It performs a "smart update" by checking the fields before it commits the transaction.
Note: The EJB container does not check Blob/Clob fields for optimistic concurrency. The work-around is to use version or timestamp checking.
Limitations of Optimistic Concurrency
If you use optimistic concurrency, BEA recommends that the include-updates element in weblogic-cmp-jar.xml be set to false. Using optimistic concurrency with include-updates set to true is inefficient—it is equivalent to using pessimistic concurrency. If you need to set include-updates true, use the database concurrency strategy.
Using optimistic concurrency with include-updates set to true is not supported for databases that hold locks during transactions (non-Oracle databases) This is because: optimistic transactions read using a local transaction to avoid holding locks until the end of the transaction. However, optimistic transactions write using the current JTA transaction so that the updates can be rolled back, if necessary. In general, updates made by the JTA transaction are not visible to the read transactions until the JTA transaction commits.
Check Data for Validity with Optimistic Concurrency concurrency. The work-around is to use version or timestamp checking.
Configuring Optimistic Checking
Configure validity checking for a bean with Optimistic concurrency using the verify-columns element in the table-name stanza for the bean in weblogic-cmp-jar.xml.
The verify-columns element specifies how columns in a table are checked for validity when you use the optimistic concurrency strategy.
A version column must be created with an initial value of 0, and must increment by 1 whenever the row is modified.
The EJB container manages the version or timestamp column, updating its value as appropriate upon completion of the transaction.
Note: The version or timestamp column is not updated if the transaction did not modify and regular CMP or CMR fields—if the only data changed during the transaction was the value of the version or timestamp column (as a result of transaction initiation) the column used for optimistic checking will not be updated at the end of the transaction.
The optimistic-column element identifies a database column that contains a version or timestamp value used to implement optimistic concurrency. This element is case maintaining, though not all databases are case sensitive. The value of this element is ignored unless verify-columns is set to Version or Timestamp.
If the EJB is mapped to multiple tables, optimistic checking is only performed on the tables that are updated during the transaction.
By default, caching between transactions is not enabled for optimistic beans. You must explicitly enable it. See Using cache-between-transactions to Limit Calls to ejbLoad().. In addition, notifications for updates of optimistic data are broadcast to other cluster members to help avoid optimistic conflicts and keep cached data fresh.
Optimistic Checking and Oracle Databases:
Calendar cal = Calendar.getInstance();
cal.set(Calendar.MILLISECOND, 0); // clears millisecond
Date myDate = cal.getTime();
ReadOnly Concurrency Strategy
WebLogic Server provides support for concurrent access to read-only entity beans. This concurrency strategy activates an instance of a read-only entity bean for each transaction so that requests may be processed in parallel.
Prior to WebLogic Server 7.0 read-only entity beans used the exclusive locking concurrency strategy. This strategy places an exclusive lock on cached entity bean instances when the bean is associated with a transaction. Other requests for the entity bean instance are block until the transaction completes.
To avoid reading from the database, WebLogic Server copies the state for an EJB 2.0 CMP bean from the existing instance in the cache. For this release, the default concurrency strategy for read-only entity beans is the ReadOnly option.
You can specify read-only entity bean caching at the application-level or the component-level.
To enable read-only entity bean caching:
Read-Only Entity Beans and ReadOnly Concurrency
Previous versions of read-only entity beans will work in this version of WebLogic Server. As in previous versions, you can set the read-timeout-seconds element set in weblogic-ejb-jar.xml. If an EJB's concurrency strategy is ReadOnly and read-timeout-seconds is set, when a read-only bean is invoked, WebLogic Server checks whether the cached data is older than the read-timeout-seconds setting. If it is, the bean's ejbLoad is called. Otherwise, the cached data is used.
Restrictions for ReadOnly Concurrency Strategy
Entity EJBs using the read-only concurrency strategy must observe the following restrictions:
Read-Only Multicast Invalidation
Read-only multicast invalidation is an efficient means of invalidating cached data.
Invalidate a read-only entity bean by calling the following invalidate() method on either the CachingHome or CachingLocalHome interface:
Figure 4-6 Sample code showing codes shows how to cast the home to CachingHome and then call the method:
Figure 4:
%SAMPLES_HOME%/server/config:
If you are running EJB 2.0, you can approximate the read-mostly pattern using a single bean that uses optimistic concurrency. An optimistic bean acts like a read-only beans Optimistic Concurrency Strategy..
Combined Caching with Entity Beans
Combined caching allows multiple entity beans that are part of the same J2EE application to share a single runtime cache. Previously, you had to configure a separate cache for each entity bean that was part of an application. This caused some usability and performance problems in that it took more time to configure caches for each entity bean and more memory to run the application. This feature will help solve those problems.
To configure an application level cache:
<weblogic-application>
<ejb>
<entity-cache>
<entity-cache-name>large_account</entity-cache-name>
<max-cache-size>
<megabytes>1</megabytes>
</max-cache-size>
</entity-cache>
</ejb>
</weblogic_application>
Use the entity-cache element to define a named application level cache that will be used to cache entity bean instances at runtime. There are no restrictions on the number of different entity beans that may reference an individual cache.
The sub elements of entity-cache have the same basic meaning as they do in the weblogic-ejb-jar.xml deployment descriptor file.
Use the entity-descriptor element to configure an entity bean to use an application level cache.
For instructions on specifying deployment descriptors, see Specifying and Editing the EJB Deployment Descriptors.
The weblogic-application.xml deployment descriptor is documented in full in the "Application.xml Deployment Descriptor Elements" section of Developing WebLogic Server Applications.
Caching Between Transactions
Use caching between transactions or long tern caching to enable the EJB container to cache an entity bean's persistent data between transactions. Whether you can set caching between transactions for an entity bean depends on its concurrency strategy, as summarized in the following three tables:
Table 4-1 Permitted cache-between-transactions values, by concurrency strategy, for BMP beans
Table 4-2 Permitted cache-between-transactions values, by concurrency strategy, for CMP 2.0 beans
Table 4-3 Permitted cache-between-transactions values, by concurrency strategy, for CMP 1.1 beans
Caching Between Transactions with Exclusive Concurrency
When you enable long term caching for an entity bean with an Exclusive concurrency strategy the EJB container must have exclusive update access to the underlying data. This means that another application outside of the EJB container must not be updating the data. If you deploy an EJB with an Exclusive concurrency strategy in a cluster, long term caching is disabled automatically because any node in the cluster may update the data. This would make long term caching impossible.
In previous versions of WebLogic Server, this feature was controlled by the db-is-shared element of weblogic-ejb-jar.xml.
Note: Exclusive concurrency is a single-server feature. Do not attempt to use it with clustered servers.
Caching Between Transactions with ReadOnly Concurrency
When you disable long term caching for an entity bean with a ReadOnly concurrency strategy it ignores the value of the cache-between-transactions setting because the EJB container always performs long term caching of read-only data.
Caching Between Transactions with Optimistic Concurrency. See Optimistic Concurrency Strategy for instructions on setting optimistic checking.
In addition, notifications for updates of optimistic data are broadcast to other cluster members to help avoid optimistic conflicts.
Enabling Caching Between Transactions
To enable caching between transactions:
Using cache-between-transactions transaction ever accesses a particular EJB concurrently, such as when you use exclusive concurrency for a single server; not a cluster, transaction accessing a particular EJB, WebLogic Server provides the cache-between-transactions deployment parameter. By default, cache-between-transactions is set to "false" for each EJB in the bean's weblogic-ejb-jar.xml file, which ensures that ejbLoad() is called at the start of each transaction. Where only a single WebLogic Server transaction ever accesses an EJB's underlying data concurrently, you can set d to "true" in the bean's weblogic-ejb-jar.xml file. When you deploy an EJB with cache-between-transactions set to "true," the single instance of WebLogic Server calls ejbLoad() for the bean only when:
Restrictions for cache-between-transactions
The following restrictions apply to cache-between-transactions:
In a single-server deployment, enable cache-between-transactions only with Exclusive, Optimistic and Read-Only concurrency strategies. You cannot use cache-between transactions with a Database concurrency strategy.
In a clustered deployment, enable cache-between-transactions only with Optimistic and Read-Only concurrency strategies. You cannot use cache-between-transactions with an Exclusive or Database Concurrency strategy.
EJBs in WebLogic Server Clusters
This section describes clustering support for EJBs.
Clustered Homes and EJBObjects-8 Single server behavior
Note: Failover of EJBs work only between a remote client and the EJB.
Clustered EJB Home Objects.
When an EJB bean is deployed to a cluster, its home is bound into the cluster-wide naming service..
The clustered home stub provides load balancing by distributing EJB lookup requests to available servers. It can also provide failover support for lookup requests, because it routes those requests to available servers when other servers have failed. concurrency strategy selected at deployment time. For more information, see Clustering Support for Different Types of EJBs.
Clustering Support for Different Types of EJBs
These sections describe the clustering support for session and entity EJBs.
Stateless Session EJBs in a Cluster-9 Stateless session EJBs in a clustered server environment
Stateful Session EJBs in a Cluster-10Bs, entity EJBs can utilize cluster-aware home stubs once you set home-is-clusterable to "true."
The behavior of the EJBObject stub depends on the concurrency-strategydeployment element in weblogic-ejb-jar.xml. concurrency-strategy can be set to Read-Write or Read-Only. The default value is Read-Write.
Fore details, see:
Read-Only Entity EJBs in a Cluster
When a home finds or creates a read-only entity bean, it returns a replica-aware an EJBObject stub. This stub load balances on every call but does not automatically fail over in the event of a recoverable call failure. Read-only beans are also cached on every server to avoid database reads.
Read-Write Entity EJBs in a Cluster
When a home finds or creates a read-write entity bean, it obtains an instance on the local server and returns an EJBObject stub pinned to that server. Load balancing and failover occur only at the home level. Because it is possible for multiple instances of the entity bean to exist in the cluster, each instance must read from the database before each transaction and write on each commit.
read-write entity EJBs in a cluster behave similarly to entity EJBs in a non-clustered system, in that:
Figure 4-11 shows read-write entity EJBs in a WebLogic Server clustered environment. The three arrows on Home Stub point to all three servers and show multiple client access.
Figure 4-11
The method for setting the transaction isolation level differs according to whether your application uses bean-managed or container-managed transaction demarcation. The following sections examine each of these scenarios.
Setting Bean-Managed Transaction Isolation Levels
You set the isolation level for bean-managed transactions in the EJB's java code. When the application runs, the transaction is explicitly started. Allowable isolation levels are defined in transaction-isolation.
Note: The Oracle-only isolation level values—TRANSACTION_READ_COMMITTED_FOR_UPDATE and TRANSACTION_READ_COMMITTED_FOR_UPDATE_NO_WAIT cannot be set for a bean-managed transaction.
See Figure 4-12 for a code sample.
Figure 4-12 Sample Code Setting Transaction Isolation Level isolation-level sub-element-level to TransactionSerializable setting for an EJB, you may receive exceptions or rollbacks in the EJB client if contention occurs between clients for the same rows. To prevent these problems, make sure that the code in your client application catches and examines the SQL exceptions, and that you take the appropriate action to resolve the exceptions, such as restarting the transaction.
WebLogic Server provides special isolation-level settings designed to prevent this problem with Oracle databases, as described in Special Note for Oracle Databases.
For other database vendors, refer to your database documentation for more details about isolation level support.
Special Note for Oracle Databases
Even with an isolation-level setting of TransactionSerializable, Oracle does not detect serialization problems until commit time. The error message returned is:
java.sql.SQLException: ORA-08177: can't serialize access for this transaction
WebLogic Server provides special isolation-level settings to prevent this. For more information, see isolation-level.-13:
The allowed values for the delay-database-insert-until element are:
Figure 4-14 Sample xml specifying delay-database-insert-until
<delay-database-insert-until>ejbPostCreate</delay-database-insert-until> --> delay database exception, you must set the foreign key constraint on the child table in the database.
Bulk Insert
Bulk insert support increases the performance of container-managed persistence (CMP) bean creation by enabling the EJB container to perform multiple database inserts for CMP beans in one SQL statement. This feature allows the container to avoid making multiple database inserts.
The EJB container performs bulk database inserts when you specify the commit option for the delay-database-insert-until element in the weblogic-cmp-rdbms-jar.xml file.
When using bulk insert, you must set the boundary for the transaction as bulk insert only applies to the inserts between transaction begin and transaction commit.
Note: Bulk insert only works with drivers that support the addBatch() and executeBatch() methods. For example, the Oracle thin driver supports these methods but the WebLogic Oracle JDBC driver does not.
The two limitations on using bulk insert are:
The total number of entries you create in a single bulk insert cannot exceed the max-beans-in-cache setting, which is specified in the weblogic-ejb-jar.xml file. See max-beans-in-cache for more information on this element.
If you set the dbms-column-type element in the weblogic-cmp-rdbms-jar.xml file to either OracleBlob or OracleClob, bulk insert automatically turns off because you will not save much time if a Blob or Clob column exist in the database table. In this case, WebLogic Server performs one insert per bean, which is the default behavior. more information on configuring transactional and non-transactional data sources, see Configure a JDBC Data Source."/> | http://e-docs.bea.com/wls/docs70//////ejb/EJB_environment.html | 2009-07-04T13:25:49 | crawl-002 | crawl-002-021 | [] | e-docs.bea.com |
Interoperability Solutions Guide
Web Services for Remote Portlets (WSRP) is an increasingly popular mechanism for generating markup fragments on a remote system for display in a local portal application. This section describes how AquaLogic Service Bus can be used to provide Service Level Agreement monitoring in applications that use WSRP.
The topics discussed in this section include:
The AquaLogic Service Bus Console, which is described in the AquaLogic Service Bus Console Online Help, is used to configure AquaLogic Service Bus. For more information about creating WSRP-enabled portals using WebLogic Portal, see Using WSRP with WebLogic Portal.
WSRP involves two integral components:
This section describes the basic WSRP architecture and then shows how this architecture can be enhanced by adding AquaLogic Service Bus.
The following figure shows the basic WSRP SOAP request and response flow between a producer application and a consumer application.
Figure 6-1 Basic Request/Response Flow Between Producer and Consumer Applications
Because a WSRP producer implements SOAP Web Services, an enterprise service bus (such as the AquaLogic Service Bus) can be used as an intermediary between the producer and consumer to provide Service Level Agreement monitoring, as shown in the following figure.
Figure 6-2 Enhanced WSRP Request / Response Flow Via AquaLogic Service Bus
In this architecture, the WSRP SOAP request / response flow occurs in the following sequence:
The remainder of this section provides instructions for configuring the AquaLogic Service Bus to proxy service requests for WSRP services. It describes services that a producer provides, along with other attributes of WSRP that must be used to properly configure AquaLogic Service Bus. It provides different possible strategies that can be used to monitor producers with increasing degrees of detail. Finally, it discusses load balancing and failover with WSRP.
This topic describes the following WSRP design concepts:
The following table describes the kinds of services offered by producers.
Each producer implements a minimum of two services (Service Description and Markup). A simple producer offers just these two services. A complex producer, however, provides two additional services (Registration and Management). WebLogic Portal producers also implement an extension service (Markup Extension) that replaces the standard Markup service.
These services are described using a standard WSDL format. The producer supplies a single URL for retrieving its WSDL, which describes all of the services that are available from that producer. The endpoints for each service indicate whether the consumer should use transport-level security (HTTPS) or not to communicate with the producer.
WSRP uses SOAP over HTTP for all messages sent between producers and consumers. In addition to using standard message formats in the SOAP Body, WSRP requires that certain transport headers be set in the request message—at a minimum, consumers must set the
SOAPAction header, cookie headers, and the usual HTTP headers (such as
Content-Type). Producers will return a session cookie, plus any application-specific cookies, in the HTTP transport header of the response message. The consumer must return the session cookie in subsequent request messages.
Configuring AquaLogic Service Bus for WSRP involves the following tasks:
This topic describes the following tasks:
As a common practice, consumers contact a producer directly to obtain its WSDL. However, if AquaLogic Service Bus is used to proxy the service, then all access to the producer occurs via AquaLogic Service Bus. Therefore, a proxy service must be implemented for consumers that calls the producer's real URL to obtain its WSDL, and then transforms the results by:
The developer who creates a producer can specify whether the producer requires SSL or not (
"secure=true"). In addition, the AquaLogic Service Bus administrator can change the security requirement to the consumer via AquaLogic Service Bus configuration. For example, suppose a producer does not require SSL. The AquaLogic Service Bus administrator can require consumers to use SSL by:
When configured in this way, AquaLogic Service Bus automatically bridges the secure messages from the consumer to the non-secure messages used by the producer.
After the consumer has retrieved a copy of the WSDL, it uses the definitions in the WSDL to formulate service requests that it then sends to the producer via AquaLogic Service Bus. The WSRP request / response process involves the following steps:
WSRP Web services expose portlets and those can rely on HTTP cookies and sessions. Therefore, WLSB must be configured to propagate HTTP transport headers (such as
SOAPAction and cookies). However, by default, AquaLogic Service Bus does not pass transport headers from the proxy service to the business service, because it cannot assume that the proxy service uses the same transport as the business service. Therefore, the message flow must be configured to copy the request headers from the inbound request to the outbound request. Similarly, the response headers from the business service must be copied back to the proxy service's response to the consumer.
Although it is possible to copy all transport headers between the proxy service and the business service, it is necessary to be more selective to avoid errors. The Set-Cookie and Cookie headers must be copied. Because AquaLogic Service Bus is the entity that assembles the final message to send, it must own some of the headers information (such as
Content-Length). For example, if the message flow were to copy the
Content-Length header from the proxy service to the business service, it might result in an error because the length of the message could change during processing.
When monitoring WSRP applications, an AquaLogic Service Bus administrator must decide about the degree of granularity that is required.
The decision about which monitoring level to implement has an impact on the complexity of the AquaLogic Service Bus configuration. It determines the type and number of proxies or business services that must be created for each producer. In addition, the AquaLogic Service Bus administrator can choose to monitor both the proxy service and the producer service—the granularity of monitoring does not need to be the same for each side.
Producer-level monitoring tracks the total number of requests sent to a producer, without regard to the specific service being requested. As such, producer-level monitoring is the simplest to configure within AquaLogic Service Bus. Because the service types are not significant, it is not necessary to create the proxy service or business service based on a WSDL. Instead, the service type is configured as
"Any SOAP Service". Each producer requires only a single proxy service and a single business service. For an example implementation, see Producer-Level Monitoring Example.
To configure producer-level monitoring, complete the following tasks:
The suitability of producer-level monitoring depends on the specific requirements of a given implementation. In producer-level monitoring, the elapsed time for all services and operations for the producer are averaged together, regardless of the differences among them. However, a producer's services and operations can have vastly different characteristics, and it might not be meaningful to consider aggregated measurements. For example, the Markup service is the workhorse of WSRP—it requires substantially more time to execute than the Registration service. However, producer-level monitoring does not distinguish between the two. Nonetheless, producer-level monitoring can be useful to gauge the extent to which a producer is being utilized, or to help when there is a severe performance problem at the producer. Because the Markup service typically gets used more often (almost 99%) in a production system, it might still be useful to monitor Service Level Agreement (SLAs) at the producer level.
Operation-level monitoring tracks operations for services individually. Monitoring proxy services via operation-level monitoring is very easy to set up. Configuring operation-level monitoring for business services, however, requires more work. Fortunately, the message flow for WSRP services introduces very little overhead, and the mapping between proxy services and producers, and between business services and producers, is simple to configure. Therefore, to satisfy SLA requirements, it is often sufficient to monitor only the proxy services at the operation level. For an example implementation, see Operation-Level Monitoring Example.
To configure operation-level monitoring for WSRP proxy services, create a proxy service for each of the services implemented by the producer.
These proxy services should be based on the standard WSRP WSDLs using SOAP bindings. Only a single business service for the producer should be created, and it should be configured to use
"Any SOAP Service" instead of being based on a WSDL. The message flow between the proxies and the business service should not modify the SOAP body in any way. However, just as for all WSRP message flows, it must pass the request headers via HTTP from the client request to the actual producer. Similarly, the response HTTP headers returned by the producer must be copied back to the client in the message flow.
If operation-level monitoring is required for producer business services, then individual business services must be created for each of the Web services described in the producer's WSDL, and the business services must be defined using the WSDL. There is a one-to-one mapping between the proxy services and the business services—a simple, unconditional routing node is sufficient in the message flow.
For the operations to be counted correctly, AquaLogic Service Bus must be told which operation to use. Normally, the administrator would do this by selecting one of the operations from a drop-down menu when the business service is selected for the Route action. However, the operation specified by the client message is not the same for all messages, so a single, hard-coded value will not work here. The administrator must ensure that the business service uses the same operation as the proxy service. While this could be achieved by specifying a Routing Table action that selects the case using the
$operation variable, it is a very tedious approach because the WSRP standard defines 14 operations across all WSRP services, and each would require a Route action with transformations to propagate the transport headers.
Fortunately, there is a more effective alternative. When routing to the business service, rather than selecting the operation from the drop-down menu, an administrator should use another transformation in the request actions to insert the value of
$inbound/ctx:service/ctx:operation into
$outbound/ctx:service. With this transformation, the operation for the business service is dynamically set to the same value as was specified for the proxy service, and AquaLogic Service Bus will correctly count and monitor all operations of the service.
AquaLogic Service Bus allows business services to define multiple endpoints that all provide the same Web service. When multiple endpoints are defined, AquaLogic Service Bus can automatically load balance requests across endpoints, and it can automatically fail over requests when an endpoint is inaccessible. However, WSRP imposes some limitations on the use of these features.
Portlets are a means of surfacing a user interface to some application. Therefore, portlets typically have session data associated with them. To preserve session data, requests to the portlet must be directed to the same server (or cluster) that serviced the original request. This requirement makes load balancing via AquaLogic Service Bus inappropriate. Multiple endpoints in a business service will usually target different servers or cluster. Because there is no communication among servers that are in separate clusters, there is no way to preserve the session. Therefore, if multiple endpoints are defined for a WSRP business service, then the load balancing algorithm must be set to
"none".
Multiple endpoints can be used to provide redundancy in certain circumstances in the event that one of the endpoints is unavailable. The WSRP service is still available via a secondary endpoint. However, any session data that existed at the time the first endpoint failed will not be available on other endpoints.
This failover configuration is an option only for simple producers (see WSRP WSDLs), not for complex produces. Complex producers require that their consumers first register with the producer before sending service requests. The producer returns a registration handle that the consumer must include with each request to that producer. In the case where a business service defines multiple endpoints, each endpoint requires its own registration handle.
AquaLogic Service Bus is, however, stateless across requests—it does not maintain a mapping of the correct handle to send to a particular endpoint. In fact, it would only send the registration request to a single endpoint, so the consumer would be registered with only that one producer. If that one producer crashed, then AquaLogic Service Bus would route a service request to another endpoint defined for that business service, but the consumer would never have registered with that new producer, and the request would fail with an
"InvalidRegistration" fault.
The management of registration handles therefore requires an application outside of AquaLogic Service Bus to maintain this state data. Error handling could be challenging to implement. Therefore, the registration requirement precludes defining multiple endpoints for complex producers. Because simple producers do not require or support the Registration service, a failover configuration that defines multiple endpoints in the business service is possible, although session data is lost on failover.
This section describes a WSRP interoperability example. It contains the following topics:
The WSRP interoperability example assumes the following components and configuration:
platform:7001
alsb:7001
For an AquaLogic Service Bus configuration that supports the configuration defined in this example, see the AquaLogic Service Bus/WSRP code sample, available from the AquaLogic Service Bus code samples page on BEA dev2dev:
This example includes separate configurations for two producers. Although the actual producer is the same for both examples, from the consumer's point of view, the producers are different.
The structure of the sample is divided into three projects—one containing common resources, and two containing resources for two example producers.
The basic configuration example (in the
producerExample folder) is the easiest configuration to implement. This configuration supports the monitoring of a producer in the aggregate (see Producer-Level Monitoring), but it does not consider the constituent services or operations.
Implementing this producer-level monitoring configuration involves:
The rest of this section describes the tasks required to implement this producer-level monitoring configuration.
To configure producer-level monitoring, the first step is to create the resources needed to retrieve the producer's WSDL and return it to the consumer. Because the WSDL contains the endpoints of the producer's services, it is necessary to transform them to hide the IP address and port of the actual server. Instead, the addresses must refer to the AquaLogic Service Bus server, and the URIs must match the URIs that the proxy service defines for this producer.
Create a business service to obtain the WSDL from the producer. This resource is specific to the producer, so it must be created in the
producerExample project. The following table describes the properties of the business service.
All endpoint addresses in the producer's WSDL must be transformed to reflect the AquaLogic Service Bus server address and the proxy service URI values. Because each producer WSDL can have four or more ports defined, it is convenient to create an XQuery expression to simplify the construction of the endpoint locations. The XQuery expression accepts the following three string variables as input and concatenates them together to form a SOAP address element:
The following table shows the query definition in the
wsrp project.
A subsequent configuration task (see Step 1.4: Create a Common Proxy Service) requires only, which is all that is required for this example.
Create a proxy service used by consumers to get WSDLs from producers. This proxy service is appropriate for any producer configuration modeled on this basic sample. The example described in this section is only a suggestion—a different approach might better suit the specific requirements of a given implementation. Because this proxy service is not specific to a single producer, it should be created in the
wsrp project folder.
The approach used in this step requires the administrator to assign each producer a name that is included in part of the URL to retrieve the WSDL. The message flow for the proxy service will extract the name from the URL, use it to locate the business service specific to that producer, obtain the WSDL, and then transform the WSDL to rewrite the endpoints to AquaLogic Service Bus. The proxy service endpoint URI is configured as
/producerWSDL, and the URL that consumers use to obtain a WSDL is:
producerName
where
producerName is the name assigned to the producer by the administrator. In this example, the producer name is
producerExample.
The following table describes how the proxy service is configured: of the transformations are performed. Before executing the Replace Actions to transform the WSDL, assign the base URL of the AquaLogic Service Bus server to a context variable to avoid specifying it on every transformation:
Assign "" to variable nonSecureBaseURL
Because a producer can implement four ports, the proxy service must transform each port. If the producer does not implement a particular port, the XQuery transformation simply does nothing. Because a single endpoint will be defined to handle all WSRP traffic for this producer, the Replace Action uses the
addr XQuery resource created earlier (see Step 1.2: Create an XQuery Expression to Construct URLs) to transform the endpoint to the value:
The four Replace Actions are defined as shown in the following code listing. The value of
name is replaced with the binding names from the table.
Replace ./wsdl:definitions/wsdl:service/wsdl:port[@binding="name"]/soap:address[starts-with(attribute::location,"http:")]in variable body with xqTransform()
urn:WSRP_v1_Markup_Binding_SOAP
urn:WSRP_v1_ServiceDescription_Binding_SOAP
urn:WSRP_v1_PortletManagement_Binding_SOAP
urn:WSRP_v1_Registration_Binding_SOAP
For the first Replace Action, the following User Namespace definitions must be added:
Note: Producers created by BEA tools implement an extension service (
urn:WLP_WSRP_v1_Markup_Ext_Binding_SOAP). This port is not used in this example. It is harmless to leave its endpoint unmodified.
The route node of this message flow consists of a routing table that selects the case based on
$producerName. For each known producer (this example uses only one producer named
producerExample), add cases so that each case routes to the correct business service to retrieve the WSDL if the name matches. This example uses the following directive:
= "producerExample" Route to wsdlSvc
To handle cases in which an unknown producer name is given, add a Default Case that routes to the no-op service (defined in Step 1.3: Create a No-Op Proxy Service):
Default Route to nullSvc
In this example, return an HTTP 404 status code by adding these response actions to the default case:
Insert <http:http-response-code>404</http:http-response-code> as last child of ./ctx:transport/ctx:response in variable inbound
After the resources needed to retrieve the producer's WSDL have been created, create the configuration resources to handle normal WSRP service requests via AquaLogic Service Bus. The easiest configuration involves creating a single proxy service and a single business service, and then linking them via a message flow that propagates the transport headers that WSRP requires.
The minimal business service required for WSRP is not based on a WSDL—instead, it is created to accept any SOAP message. This approach simplifies configuration and allows a single business process to handle all port types used by WSRP. The trade-off with this approach is that it limits monitoring capabilities. Configure the business service with the following settings:
The most convenient way to define the proxy service is to create it from the existing business service defined in the previous step. This creates a proxy service with the correct type (
"Any SOAP Service", the same type configured in the business service) and also constructs the basic message flow that unconditionally routes messages to the proper business service. The message flow must be edited in a subsequent step. Configure the proxy service using the following settings.
WSRP relies on data conveyed in the transport headers to function properly. In particular, producers will return to consumers any session cookies in the response headers that they expect consumers to supply in subsequent requests. Similarly, producers expect consumers to provide the requested operation in the
SOAPAction request header.
By default, AquaLogic Service Bus does not copy transport headers from the inbound request to the outbound request, or from the outbound response to the inbound response. The message flow must therefore propagate the required headers both in and out of the business service. Because these transformations are required for every WSRP service, it is convenient to define two common XQuery resources—one for request headers and one for response headers—that extract the correct headers.
For request headers, use the following query.
The
rqstHeaders query extracts all transport headers (except
Content-Length) from the
$in variable. AquaLogic Service Bus can sometimes reformat the message body so that its length no longer exactly matches the request message. Copying the length from the original request can result in transport errors if the body was modified (such as reformatted).
To copy the inbound request headers to the outbound business service, add the following Replace request action to the message flow:
Replace ./ctx:transport/ctx:request/tp:headers in variable outbound with xqTransform()
Variable Mapping (wsrp/rqstHeaders):
Similar to the request side, the response side defines a common XQuery resource to extract all but the
Content-Length header from the response returned from the producer.
For response headers, use the following query.
The following Replace response action in the route node propagates the required headers:
Replace ./ctx:transport/ctx:response/tp:headers in variable inbound with xqTransform()
Variable Mapping (wsrp/rspncHeaders):
After completing the simple configuration for the producer-level example, activate the changes made in the session. To test the configuration:
to get the WSDL for the sample.
When configuring AquaLogic Service Bus, monitoring of any of the components has not yet been explicitly enabled. The procedure to enable monitoring is no different for WSRP services than it is for any other Web service in AquaLogic Service Bus. For each component of interest, select (check) the Enable Monitoring box on the Manage Monitoring page and set the aggregation interval. Consider setting up any alert rules, if applicable. After these changes have been activated, the configuration can be monitored from the dashboard facility of the AquaLogic Service Bus console.
The full monitoring configuration example (in the
operationExample folder) involves configuring AquaLogic Service Bus to monitor all services and operations of a producer (see Operation-Level Monitoring). All of the concepts described for the producer-level monitoring example (see Producer-Level Monitoring Example) still apply to this example; to simplify configuration tasks, certain elements of that configuration will be copied. The operation-level monitoring example uses the same producer application as the producer-level monitoring example.
The fundamental difference between this example and the producer-level monitoring example is that the operation-level monitoring configuration uses both business services and proxy services that are based on the WSDLs defined by the WSRP standard. This example defines the additional resources to describe the WSRP services and extend the message flows to support monitoring at the operation level.
The rest of this section describes the tasks required to implement this operation-level monitoring configuration.
Import all of the WSRP WSDL definition files, along with the XML schema files on which the definitions depend. All of the files are available as part of the sample code associated with this example, but the standard resource locations are described in the following table.
Producers generated by BEA Portal extend the standard WSDLs by defining an additional port that allows messages to be sent using MIME attachments. Describing this extension is beyond the scope of this example, but it is still necessary to.
In the producer-level monitoring example, a single business service was created to process all messages for the producer, an approach that worked because the business service was not associated with a WSDL.
This operation-level monitoring example uses the WSDL bindings for each port type implemented by the producer. Because a business service can be associated with only one WSDL port or binding, a separate business service resource must be created for each. A simple producer implements only the required Markup and Service Description interfaces, while a complex producer also implements the Management and Registration interfaces. The services are created identically (except for the service name and types), as shown in the following table.
For each service, minimally set the attributes listed in the following table.
Proxy services in this operation-level monitoring example are very similar to the proxy services created for the producer-level monitoring example, with some important differences:
To create the proxy services:
As in the earlier example, create the proxy service using the existing
operationExample/base business service as the model. This will automatically base the proxy service on the same WSDL binding as the business service, and it will create a message flow with an unconditional route action to the business service. For the Endpoint URI, anything may be used, such as the producer name with the port type abbreviation appended to it (for example,
/operationExampleBase).
Normally, in a Route Action that routes to a WSDL-based service, an operation to invoke (by selecting the correct operation from the drop-down menu) is specified. However, each WSRP port implements several operations, and so the configuration requires a routing table with a case for each operation. Each case requires the same transformations to propagate the transport headers.
Creating all of the transformations in this way might prove to be quite tedious. Fortunately, there is a more convenient approach. Instead of using the drop-down menu, use another transformation to copy the operation from the proxy service to the business service. Configure this transformation by adding an Insert Action to the Request Actions of the message flow:
Insert $inbound/ctx:service/ctx:operation as last child of ./ctx:service in variable outbound
The proxy services for the other business services can be created by repeating these steps, although a shortcut can be used to avoid recreating all of the transformations manually. For example, to create the proxy service for the Service Description service:
operationExample/baseproxy service just created as the model. Following this example, use
/operationExampleDescfor the Endpoint URI.
WSRPServiceDescriptionServiceport.
Just as in the producer-level monitoring example, create a service that will retrieve the WSDL from the producer and transform it to hide the actual producer endpoints. The resources created are very similar to those created in the producer-level monitoring sample, but in this example the proxies for each producer have a different URI. The rest of this section describes how to create the resources to retrieve the producer WSDL.
The business service used to obtain the producer WSDL for this example is identical to the resource used in the producer-level monitoring example (see Step 1.1: Create a Business Service).
Creating a proxy service (named
wsrp/getWSDL) using
wsrp/producerWSDL (see Step 1.4: Create a Common Proxy Service) as the model. (see Step 1.2: Create an XQuery Expression to Construct URLs) accepts an extension argument to construct the URI location. Simply change that argument to the proper value, as shown in the following table.
Finally, edit the Routing Table in the route node to make the cases correspond to the producers known to the system.
After completing the configuration, test it to verify that it works correctly. The testing steps for testing the configuration are similar to the producer-level monitoring example (see Step 3: Test the Configuration)—the only difference is with the URL used to retrieve the WSDL from the. | http://e-docs.bea.com/alsb/docs21/interop/wsrp_interop.html | 2009-07-04T13:27:30 | crawl-002 | crawl-002-021 | [] | e-docs.bea.com |
External Competitions
What are External Competitions?
External competitions are hosted by law schools and institutions around the country throughout the academic school year. GW sponsors teams, usually of 4, to compete in external competitions. For more information please contact Todd Betor, VP of Externals at [email protected].
When are the competitions?
External competitions are scheduled throughout the academic school year. Below is the list of external mock trial competitions currently scheduled for the 2008-2009 school year.
Who can participate?
Only members of the Mock Trial Board are eligible to compete in GW-sponsored external competitions. There are exceptions for non-Board members if the Executive Board decides to open a particular competition to the the general student body at GW.
Can I get credit for doing this competition?
Those participating may earn 1 credit hour for the competition that they compete in. However, credit is limited to 1 credit hour per semester and capped at 3 credit hours during your entire tenure at GW. The three credit hour limit also includes credits received through internal mock trial competitions as well as other GW Skills Boards competitions.
How can I sign up?
Board members wishing to compete must fill out an application and return it to the VP of External Competitions. Once placed on a team, participants must fill out an intent to compete form and register for the competition via GWeb or the Office of Student Affairs.
Fall 2008 External Competitions
National Trial Advocacy Competition (NTAC)
This is the first external competition scheduled for the academic year. The competition will be held in Lansing, Michigan on October 23-25.
Quinnipiac Criminal Justice Mock Trial Competition
This competition is going to be held on October 31 - November 1 in New Hartford, CT. GW will be represented by the members of the year-long mock trial team.
For more information please contact Todd Betor, [email protected]. | http://docs.law.gwu.edu/stdg/mocktrial/externalComp.html | 2009-07-04T14:58:07 | crawl-002 | crawl-002-021 | [] | docs.law.gwu.edu |
The History Dialog displays the list of the documents you have opened in previous sessions. It is more complete than the list you get with the “Open Recent” command.
Denne dialogen opnar ei liste over dokument, dvs. bilete, du har opna tidlegare. Denne loggen er fyldigare enn den du får frå “Sist opna filer” i verktøyskrinet.
You can access to this dialog in different ways:
From the toolbox-menu and the image Menu bar:→ →
From the image Menu-bar:→
The scroll bar allows you to browse all images you have opened before.
The Open the selected entry button allows you to open the image you have selected. With "Shift" key pressed, it raises an image hidden behind others. With "Ctrl" key pressed, it opens the Open Image dialog.
The Remove the selected entry button allows you to remove an image from the History dialog. The image is removed from the recently open images list also. But the image itself is not deleted.
The Recreate Preview button updates preview in case of change. With "Shift" key pressed, it acts on all previews. With "Ctrl" key pressed, previews that can't be found out are deleted. | http://docs.gimp.org/2.2/en/gimp-document-dialog.html | 2009-07-04T14:59:05 | crawl-002 | crawl-002-021 | [] | docs.gimp.org |
Release Notes
Upgrading
Samples and Tutorials
System Administration
Administration Console Online Help
Security
Programming
Deployment
Developer Tools
Reference
Web Services
What's New in the Next Service Pack?
Platform Contents
WebLogic Server
WebLogic Workshop
WebLogic Integration
WebLogic Portal
Deployment
Application Deployment
WebLogic Server Application Deployment
Overview of WebLogic Server application deployment, and links to component-specific deployment documentation.
Porting and Deploying Smart Ticket with WebLogic Builder
Tutorial demonstrating rapid deployment of Sun's Blueprint wireless application on WebLogic Server.
Application Assembly and Packaging
Packaging WebLogic Server Applications
Assembling and Configuring Web Applications
Packaging EJBs for the WebLogic Server Container
Packaging Resource Adapters
Assembling WebLogic Web Services
Deployment Tools
WebLogic Server Administration Console
weblogic.management.deploy API
WebLogic Builder
Visual environment for generating and editing deployment descriptors and
for deploying applications to WebLogic Server.
EJBGen
Tool that uses Javadoc comments to generate
EJB deployment descriptor and the EJB Home, Remote, and Local interfaces. | http://e-docs.bea.com/wls/docs70/deployment.html | 2009-07-04T14:59:44 | crawl-002 | crawl-002-021 | [] | e-docs.bea.com |
How Do I: Connect a Database Control to a Database Such as SQL Server or Oracle?
WebLogic Server manages the databases you can use and allows you to access configured databases through JDBC data sources. If you wish to use a new database, there must be a JDBC data source set up that allows you to access that database.
Each WebLogic Workshop Database control determines which data source it will use from its connection property (@jws:connection tag).
To Change the Data Source Used by a Database Control
In Design View, click the Database control your service will use to access the database.
In the Properties pane, expand the connection property.
Click the data-source-jndi-name attribute, then select the value corresponding to the data source you want to use.
To Configure a New Connection Pool and Data Source
To access an Oracle, SQL Server, or Informix database, you first need to install the appropriate JDBC driver. For evaluation versions and installation information, see:
Installing WebLogic jDriver for Microsoft SQL Server
Installing WebLogic jDriver for Oracle
Installing WebLogic jDriver for Informix
Once you have installed the driver, you must configure a new connection pool and data source in WebLogic Server, using the WebLogic Server Console application. If you are running WebLogic Workshop and WebLogic Console on the same computer, you can access the console as follows. Otherwise, check with your system administrator to configure WebLogic Server.
Launch the console by navigating to.
Provide your user name and password; by default, both are set to installadministrator.
Next, create a new connection pool:
Navigate to Connection Pools in the JDBC section.
Click Configure a new JDBC Connection Pool.
On the Configuration tab, click General.
Fill in the fields on the General tab as shown in the following table:
Apply your changes.
On the Targets tab, choose the target server on which you want to deploy this connection pool.
Apply your changes and restart WebLogic Server.
Once you set up a connection pool, you must set up a data source based on that connection pool. The name you provide for the data source is the name that you will use to set the data-source-jndi-name attribute of the connection tag for the Database control.
WebLogic Workshop provides default transaction semantics for web service operations. The default transaction semantics include wrapping database operations that occur within a web service operation in a transaction. The default transaction semantics require use of a TxDataSource (transacted data source), instead of a DataSource. If you use a Tx Data Source, you may not attempt to control transaction semantics directly (e.g. via calls to java.sql.Connection.setAutoCommit). The sample data source cgSampleDataSource is a Tx Data Source.
To learn more about WebLogic Workshop's default transaction semantics, see Transactions in WebLogic Workshop.
Depending on whether you want default transaction semantics or not, navigate to Data Sources or Tx Data Sources in the WebLogic Server Console.
Choose Configure a new JDBC data source.
Specify a friendly name in the Name field for this data source.
Specify the JNDI name by which you will refer to this data source.
Specify the name of the pool you created previously in the Pool Name field.
Click the Targets tab and choose the target server on which you wish to deploy this data source.
Restart WebLogic Server.
For more information on configuring connection pools and data sources, see Configuring WebLogic JDBC Features.
Database Control: Using a Database from Your Web Service
How Do I: Use a Database from a Web Service?
How Do I: WebLogic Workshop-Enable an Existing WebLogic Server Domain? | http://e-docs.bea.com/workshop/docs70/help/guide/howdoi/howConnectaDatabaseControltoaDifferentDatabaseSQLServerOracle.html | 2009-07-04T15:00:49 | crawl-002 | crawl-002-021 | [] | e-docs.bea.com |
Datadog bills for hosts running the Agent and all GCE instances picked up by the Google Cloud integration. You are not billed twice if you are running the Agent on a GCE instance picked up by the Google Cloud integration.
Other Google Cloud resources (CloudSQL, Google App Engine, Pub/Sub, etc.) are not part of monthly billing.
Use the Google Cloud integration tile to control your metric collection. Go to the Configuration tab and select a project or add a new one. Each project is controlled under Optionally Limit Metrics Collection to hosts with tag. Limit metrics by host tag:
When adding limits to existing Google Cloud projects within the integration tile, the previously discovered instances could stay in the Infrastructure List up to 24 hours. During the transition period, GCE instances display a status of
???. This does not count towards your billing.
Hosts with a running Agent still display and are included in billing. Using the limit option is only applicable to GCE instances without a running Agent.
For technical questions, contact Datadog support.
For billing questions, contact your Customer Success Manager. | https://docs.datadoghq.com/account_management/billing/google_cloud/ | 2019-06-15T22:57:32 | CC-MAIN-2019-26 | 1560627997501.61 | [] | docs.datadoghq.com |
This page describes the requirements for Mender when integrated with the Debian family target OS images such as Debian, Ubuntu and Raspbian.
For these devices,
mender-convert is used to perform Mender integration.
mender-convert will take care of the full integration, which includes:
mender-convert builds GRUB as second stage bootloader for Debian on Beaglebone. It is built from the official repository. Mender does not require any patches for GRUB and should be built with EFI platform support.
mender-convert provides for building and installing patched U-Boot for Raspbian. Currently only a patched U-Boot is supported by Mender on Raspberry Pi 3.
The procedure to be followed to integrate Mender is the same as for to create a Mender Artifact for Debian family OSes. Refer to Debian family Artifact creation for step by step instructions. | https://docs.mender.io/2.0/devices/debian-family | 2019-06-15T23:13:42 | CC-MAIN-2019-26 | 1560627997501.61 | [] | docs.mender.io |
Contents Now Platform Capabilities Previous Topic Next Topic Installed with Connect Support Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Installed with Connect Support Several types of components are installed with Connect Support. Note: Connect Support also utilizes many of the components installed with Connect. Tables installed with Connect SupportProperties installed with Connect SupportBusiness rules installed with Connect SupportRelated ReferenceInstalled with Connect On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/use/collaboration/reference/r_InstalledWithConnectSupport.html | 2019-06-15T23:04:10 | CC-MAIN-2019-26 | 1560627997501.61 | [] | docs.servicenow.com |
The Kubernetes-native platform (v2).
The Package manager for Kubernetes.
The Kubernetes-native Service Broker.
Deis Workflow's builder component relies on a registry for storing application docker images.
Deis Workflow ships with a registry component by default, which provides an in-cluster Docker registry backed by the platform-configured object storage. Operators might want to use an off-cluster registry for performance or security reasons.
Every component that relies on a registry uses two inputs for configuration:
DEIS_REGISTRY_LOCATION
registry-secret
The Helm chart for Deis Workflow can be easily configured to connect Workflow components to off-cluster registry. Deis Workflow supports external registries which provide either short-lived tokens that are valid only for a specified amount of time or long-lived tokens (basic username/password) which are valid forever for authenticating to them. For those registries which provide short lived tokens for authentication, Deis Workflow will generate and refresh them such that the deployed apps will only have access to the short-lived tokens and not to the actual credentials for the registries.
When using a private registry the docker images are no longer pulled by Deis Workflow Controller but rather are managed by Kubernetes. This will increase security and overall speed, however the
port information can no longer be discovered. Instead the
port information can be set via
deis config:set PORT=<port> prior to deploying the application.
Deis Workflow currently supports:
helm inspect values deis/workflow > values.yaml
registry_locationparameter to reference the registry location you are using:
off-cluster,
ecr,
gcr
You are now ready to
helm install deis/workflow --namespace deis -f values.yaml using your desired registry.
Here we show how the relevant parts of the fetched
values.yaml file might look like after configuring for a particular off-cluster registry:
global: ... registry_location: "ecr" ... registry-token-refresher: # Time in minutes after which the token should be refreshed. # Leave it empty to use the default provider time. token_refresh_time: "" ... ecr: # Your AWS access key. Leave it empty if you want to use IAM credentials. accesskey: "ACCESS_KEY" # Your AWS secret key. Leave it empty if you want to use IAM credentials. secretkey: "SECRET_KEY" # Any S3 region region: "us-west-2" registryid: "" hostname: "" ...
Note:
registryid and
hostname should not be set. See this issue for more info.
global: ... registry_location: "gcr" ... registry-token-refresher: # Time in minutes after which the token should be refreshed. # Leave it empty to use the default provider time. token_refresh_time: "" ... gcr: key_json: <base64-encoded JSON data> hostname: ""
Note:
hostname should be left empty.
After following the docs and creating a registry, e.g.
myregistry, with its corresponding login server of
myregistry.azurecr.io, the following values should be supplied:
global: ... registry_location: "off-cluster" ... registry-token-refresher: ... off_cluster_registry: hostname: "myregistry.azurecr.io" organization: "myorg" username: "myusername" password: "mypassword" ...
Note: The mandatory organization field (here
myorg) will be created as an ACR repository if it does not already exist.
global: ... registry_location: "off-cluster" ... registry-token-refresher: ... off_cluster_registry: hostname: "quay.io" organization: "myorg" username: "myusername" password: "mypassword" ... | https://docs.teamhephy.com/installing-workflow/configuring-registry/ | 2019-06-15T23:41:52 | CC-MAIN-2019-26 | 1560627997501.61 | [] | docs.teamhephy.com |
Update Guide for Theme and Plugins
- Delete KarPartZ KarPartZ theme folder to your themes folder instead old KarPartZ theme folder.
- After updating the theme, Install the new version of KarPartZ Core plugin from Appearance > Install Plugins
- Done
Its Important that after updating theme or plugins, Clear / Delete your browser Cache. | http://docs.droitthemes.com/docs/karfixr-wordpress-theme/tips-guide/updating-your-theme/ | 2019-06-15T22:57:01 | CC-MAIN-2019-26 | 1560627997501.61 | [] | docs.droitthemes.com |
Printer Safety Check¶
The Printer Safety Check plugin comes bundled with OctoPrint starting with version 1.3.7.
It tries to identify printers or rather printer firmwares with known safety issues, such as disabled thermal runaway protection, and displays a warning box to logged in users on such identification.
Fig. 20 An example of a warning generated by the Printer Safety Check
Please refer to the entry on the “unsafe firmware” warning in OctoPrint’s FAQ for a list of currently identified printers.
If you know of further printers/printer firmwares that need to be added here, please get in touch on the forum and provide their response to an M115.
Note
Feel free to disable the plugin in OctoPrint’s Plugin Manager if you feel like it is unnecessary. Be advised though that even if your printer might be running totally fine with a known unsafe configuration, that might change unexpectedly and with catastrophic results.
Events¶
- warning
A printer safety warning was triggered.
Payload:
check_type: type of check that was triggered (e.g.
m115,
receivedor
cap)
check_name: name of check that was triggered (e.g.
aneta8)
warning_type: type of warning that was triggered (e.g.
firmware-unsafe) | http://docs.octoprint.org/en/latest/bundledplugins/printer_safety_check.html | 2019-06-15T23:04:21 | CC-MAIN-2019-26 | 1560627997501.61 | [array(['../_images/bundledplugins-printer_safety_check-example.png',
'Printer Safety Check warning example'], dtype=object) ] | docs.octoprint.org |
References¶
- Underscore.js: A similar library for JavaScript
- Enumerable: A similar library for Ruby
- Clojure: A functional language whose standard library has several counterparts in
toolz
- itertools: The Python standard library for iterator tools
- functools: The Python standard library for function tools
- Functional Programming HOWTO: The description of functional programming features from the official Python docs.
Contemporary Projects¶
These projects also provide iterator and functional utilities within Python. Their functionality overlaps substantially with that of PyToolz. | https://toolz.readthedocs.io/en/latest/references.html | 2017-09-19T16:58:41 | CC-MAIN-2017-39 | 1505818685912.14 | [] | toolz.readthedocs.io |
by a morphology (see Creating a neuron morphology) and a set of equations for
transmembrane currents (see Creating a spatially extended neuron).
Creating a neuron morphology¶
Schematic morphologies¶
Morphologies can be created combining geometrical objects:
soma = Soma(diameter=30*um) cylinder = Cylinder(diameter=1*um, length=100*um, n=10)
The first statement creates a single iso-potential compartment (i.e. with no axial resistance within the compartment), with its area calculated as the area of a sphere with the given diameter. The second one specifies a cylinder consisting of 10 compartments with identical diameter and the given total length.
For more precise control over the geometry, you can specify the length and diameter of each individual compartment,
including the diameter at the start of the section (i.e. for
n compartments:
n length and
n+1 diameter
values) in a
Section object:
section = Section(diameter=[6, 5, 4, 3, 2, 1]*um, length=[10, 10, 10, 5, 5]*um, n=5)
The individual compartments are modeled as truncated cones, changing the diameter linearly between the given diameters
over the length of the compartment. Note that the
diameter argument specifies the values at the nodes between the
compartments, but accessing the
diameter attribute of a
Morphology object will return the diameter at the center
of the compartment (see the note below).
The following table summarizes the different options to create schematic morphologies (the black compartment before the start of the section represents the parent compartment with diameter 15 μm, not specified in the code below):
Note
For a
Section, the
diameter argument specifies the diameter between the compartments
(and at the beginning/end of the first/last compartment). the corresponding values can therefore be later retrieved
from the
Morphology via the
start_diameter and
end_diameter attributes. The
diameter attribute of a
Morphology does correspond to the diameter at the midpoint of the compartment. For a
Cylinder,
start_diameter,
diameter, and
end_diameter are of course all identical.
The tree structure of a morphology sections)
The names given to sections are completely up to the user. However, names that consist of a single digit (
1 to
9) or the letters
L (for left) and
R (for right) allow for a special short syntax: they can be joined
together directly, without the needs for dots (or dictionary syntax) and therefore allow to quickly navigate through
the morphology tree (e.g.
morpho.LRLLR is equivalent to
morpho.L.R.L.L.R). This short syntax can also be used to
create trees:
morpho = Soma(diameter=30*um))
The above instructions create a dendritic tree with two main sections, three sections attached to the first section and
two to the second. This can be verified with the
Morphology.topology() method:
>>> morpho.topology() ( ) [root] `---| .L `---| .L.1 `---| .L.2 `---| .L.3 `---| .R `---| .R.L `---| .R.R
Note that an expression such as
morpho.L will always refer to the entire subtree. However, accessing the attributes
(e.g.
diameter) will only return the values for the given section.
Note
To avoid ambiguities, do not use names for sections that can be interpreted in the abbreviated way detailed above.
For example, do not name a child section
L1 (which will be interpreted as the first child of the child
L)
The number of compartments in a section can be accessed with
morpho.n (or
morpho.L.n, etc.), the number of
total sections and compartments in a subtree can be accessed with
morpho.total_sections and
morpho.total_compartments respectively.
Adding coordinates¶
For plotting purposes, it can be useful to add coordinates to a
Morphology that was created using the “schematic”
approach described above. This can be done by calling the
generate_coordinates method on a morphology,
which will return an identical morphology but with additional 2D or 3D coordinates. By default, this method creates a
morphology according to a deterministic algorithm in 2D:
new_morpho = morpho.generate_coordinates()
To get more “realistic” morphologies, this function can also be used to create morphologies in 3D where the orientation of each section differs from the orientation of the parent section by a random amount:
new_morpho = morpho.generate_coordinates(section_randomness=25)
This algorithm will base the orientation of each section on the orientation of the parent section and then randomly
perturb this orientation. More precisely, the algorithm first chooses a random vector orthogonal to the orientation
of the parent section. Then, the section will be rotated around this orthogonal vector by a random angle, drawn from an
exponential distribution with the \(\beta\) parameter (in degrees) given by
section_randomness. This
\(\beta\) parameter specifies both the mean and the standard deviation of the rotation angle. Note that no maximum
rotation angle is enforced, values for
section_randomness should therefore be reasonably small (e.g. using a
section_randomness of
45 would already lead to a probability of ~14% that the section will be rotated by more
than 90 degrees, therefore making the section go “backwards”).
In addition, also the orientation of each compartment within a section can be randomly varied:
new_morpho = morpho.generate_coordinates(section_randomness=25, compartment_randomness=15)
The algorithm is the same as the one presented above, but applied individually to each compartment within a section (still based on the orientation on the parent section, not on the orientation of the previous compartment).
Complex morphologies¶
Morphologies can also be created from information about the compartment coordinates in 3D space. Such morphologies can
be loaded from a
.swc file (a standard format for neuronal morphologies; for a large database of morphologies in
this format see):
morpho = Morphology.from_file('corticalcell.swc')
To manually create a morphology from a list of points in a similar format to SWC files, see
Morphology.from_points.
Morphologies that are created in such a way will use standard names for the sections that allow for the short syntax
shown in the previous sections: if a section has one or two child sections, then they will be called
L and
R,
otherwise they will be numbered starting at
1.
Morphologies with coordinates can also be created section by section, following the same syntax as for “schematic” morphologies:
soma = Soma(diameter=30*um, x=50*um, y=20*um) cylinder = Cylinder(10, x=[0, 100]*um, diameter=1*um) section = Section(5, x=[0, 10, 20, 30, 40, 50]*um, y=[0, 10, 20, 30, 40, 50]*um, z=[0, 10, 10, 10, 10, 10]*um, diameter=[6, 5, 4, 3, 2, 1])*um
Note that the
x,
y,
z attributes of
Morphology and
SpatialNeuron will return the coordinates at the
midpoint of each compartment (as for all other attributes that vary over the length of a compartment, e.g.
diameter
or
distance), but during construction the coordinates refer to the start and end of the section (
Cylinder),
respectively to the coordinates of the nodes between the compartments (
Section).
A few additional remarks:
- In the majority of simulations, coordinates are not used in the neuronal equations, therefore the coordinates are purely for visualization purposes and do not affect the simulation results in any way.
- Coordinate specification cannot be combined with length specification – lengths are automatically calculated from the coordinates.
- The coordinate specification can also be 1- or 2-dimensional (as in the first two examples above), the unspecified coordinate will use 0 μm.
- All coordinates are interpreted relative to the parent compartment, i.e. the point (0 μm, 0 μm, 0 μm) refers to the end point of the previous compartment. Most of the time, the first element of the coordinate specification is therefore 0 μm, to continue a section where the previous one ended. However, it can be convenient to use a value different from 0 μm for sections connecting to the
Somato make them (visually) connect to a point on the sphere surface instead of the center of the sphere.: the
SpatialNeuron inherits all the geometrical variables of the
compartments (
length,
diameter,
area,
volume), as well as the
distance variable that gives the
distance to the soma. For morphologies that use coordinates, the
x,
y and
z variables are provided as well.
Additionally, a state variable
Cm is created. It is initialized with the value given at construction, but it can be
modified on a compartment per compartment basis (which is useful to model myelinated axons). The membrane potential is
stored in state variable
v.
Note that for all variable values that vary across a compartment (e.g.
distance,
x,
y,
z,
v), the
value that is reported is the value at the midpoint of the compartment. and time constants can obtained for any point of the neuron with the
space_constant respectively
time_constant attributes:
l = neuron.space_constant[0] tau = neuron.time_constant[0]
The calculation is based on the local total conductance (not just the leak conductance), therefore, it can potentially vary during a simulation (e.g. decrease during an action potential). The reported value is only correct for compartments with a cylindrical geometry, though, it does not give reasonable values for compartments with strongly varying diameter. section.
That is, if the axon had branches, then the above statement would change
gNa on the main section
and all the sections in the subtree. To access the main section section, on_pre='gs += w') S.connect(i=0, j=50) S.connect(i=1, j=100)
This creates two synapses, on compartments 50 and 100. One can specify the compartment number with its spatial position by indexing the morphology:
S.connect(i=0, j=morpho[25*um]) S.connect(i=1,)''', on') | http://brian2.readthedocs.io/en/2.0rc3/user/multicompartmental.html | 2017-09-19T17:14:54 | CC-MAIN-2017-39 | 1505818685912.14 | [array(['../_images/morphology_deterministic_coords.png',
'../_images/morphology_deterministic_coords.png'], dtype=object)] | brian2.readthedocs.io |
.
To locate a unit on the map, click on its name in the work list. The map will be centered on this unit. At that, current map zoom will remain the same.
Only units checked in the first column of the work list are shown on the map. To display all units from the work list, mark a check box in the left top corner of the list. Remove this checkbox to remove unit icons from the map.
Note, that in order units to be displayed on the map you should check if the corresponding layer icon in the main menu is active.
Units are seen on the map if they get into view according to the current map position. You can move and zoom the map according to your needs.
However, if the option Show unit icons at map borders is selected in User Settings, in case a unit gets out of view, its icon is displayed by map border. Click on the icon to move to the unit on the map.
It is possible to watch a unit constantly. For this, enable the option Watch unit on map against a necessary unit in the corresponding column (
) of the Monitoring panel. Units marked in this column are always seen on the map. If such a unit gets out of view, the map automatically centers at this unit each time when a new message comes.
To track stationary units, make use of a specially designed app — Sensolator. | http://docs.wialon.com/en/local/1504/user/monitor/monitor | 2017-09-19T17:11:40 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.wialon.com |
Plain and keyword target_link_libraries signatures cannot be mixed.
CMake 2.8.12 introduced the target_link_libraries signature using the PUBLIC, PRIVATE, and INTERFACE keywords to generalize the LINK_PUBLIC and LINK_PRIVATE keywords introduced in CMake 2.8.7. Use of signatures with any of these keywords sets the link interface of a target explicitly, even if empty. This produces confusing behavior when used in combination with the historical behavior of the plain target_link_libraries signature. For example, consider the code:
target_link_libraries(mylib A) target_link_libraries(mylib PRIVATE B)
After the first line the link interface has not been set explicitly so CMake would use the link implementation, A, as the link interface. However, the second line sets the link interface to empty. In order to avoid this subtle behavior CMake now prefers to disallow mixing the plain and keyword signatures of target_link_libraries for a single target.
The OLD behavior for this policy is to allow keyword and plain target_link_libraries signatures to be mixed. The NEW behavior for this policy is to not to allow mixing of the keyword and plain signatures.
This policy was introduced in CMake version 2.8.12.. | http://docs.w3cub.com/cmake~3.9/policy/cmp0023/ | 2017-09-19T16:59:19 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.w3cub.com |
With the Change List activity you can change a list that is stored in a variable.
See Microflow Element Common Properties for properties that all activities share (e.g. caption). This page only describes the properties specific to the action.
Input Properties
List
Defines the list variable that is changed.
Action Properties
Type
Defines the type of change that is performed to the list.
Default value: Add
Value
Value defines the value that is used to change the list. The value is entered using a microflow expression. The microflow expression should result in an object or list of the same entity as the input list. | https://docs.mendix.com/refguide6/change-list | 2017-09-19T17:09:22 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.mendix.com |
Site CMS has the following essential components:
Pages and Page templates
Widgets and Widget templates
Widget templates define the functionality and layout of widgets - how they behave when displaying content or other features. You can easily modify widget templates with the built-in editor.
Content items..
Sitefinity CMS has a number of synchronization options with which you can create complex synchronization scenarios. You can synchronize data between Sitefinity CMS servers, SharePoint sites, cloud storage, SalesForce, and Marketo.
Back To Top | https://docs.sitefinity.com/sitefinity-overview | 2017-09-19T17:12:45 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.sitefinity.com |
style. -:
J3.4:Editing a template with. | https://docs.joomla.org/index.php?title=J3.x:Modifying_a_Joomla!_Template&diff=105164&oldid=105146 | 2015-03-27T02:09:56 | CC-MAIN-2015-14 | 1427131293580.17 | [] | docs.joomla.org |
Message-ID: <1340540004.903.1427421735489.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_902_1871317817.1427421735488" ------=_Part_902_1871317817.1427421735488 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
by Toby Miller=20
Syntactic macros allow new language constructs to be created. See = Macros for a list of existing ones in b= oo.=20
We could, for example, mimic VisualBasic's with statement.=20
Given the following code:=20
fooInstanceWithReallyLongName =3D Foo() fooInstanceWithReallyLongName.f1 =3D 100 fooInstanceWithReallyLongName.f2 =3D "abc" fooInstanceWithReallyLongName.DoSomething()=20
If we define a 'with' macro we could rewrite it like this:=20
with fooInstanceWithReallyLongName: _f1 =3D 100 _f2 =3D "abc" _DoSomething()=20
In boo, macros are CLI objects that implement the Boo.Lang.Compiler.IAst= Macro interface. It is interesting to note that there is nothing magic<= /em> about these objects. They must simply implement the interface the comp= iler expects. This implies that boo macros can be written in any CLI langua= ge!=20
When an unknown syntactic structure is encountered at compile time, like=
the
with statement above, the compiler will look for the corr=
ect IAstMacro class, create an instance, and ask that instance to expand th=
e macro. The compiler identifies the class to use via a simple naming conve=
ntion. The class name must start with the name of the macro and end with 'M=
acro'. Additionally the Pascal case naming convention must be used. So in this case the name must be 'WithMac=
ro'.
Here's the code to implement our macro:=20
import Boo.Lang.Compiler import Boo.Lang.Compiler.Ast import Boo.Lang.Compiler.Ast.Visitors class WithMacro(AbstractAstMacro): private class NameExpander(DepthFirstTransformer): _inst as ReferenceExpression def constructor(inst as ReferenceExpression): _inst =3D inst override def OnReferenceExpression(node as ReferenceExpression): // if the name of the reference begins with '_' // then convert the reference to a member reference // of the provided instance if node.Name.StartsWith('_'): // create the new member reference and set it up mre =3D MemberReferenceExpression(node.LexicalInfo) mre.Name =3D node.Name[1:] mre.Target =3D _inst.CloneNode() // replace the original reference in the AST // with the new member-reference ReplaceCurrentNode(mre) override def Expand(macro as MacroStatement) as Statement: assert 1 =3D=3D macro.Arguments.Count assert macro.Arguments[0] isa ReferenceExpression inst =3D macro.Arguments[0] as ReferenceExpression // convert all _<ref> to inst.<ref> block =3D macro.Block ne =3D NameExpander(inst) ne.Visit(block) return block=20
Some explanation is in order. The parsing stage of the compiler pipeline= parses a source stream into an abstract syntax tree (AST). A subtree, corr= esponding to the macro, will be passed to the Expand() method. Expand() is = responsible for building an AST that will replace the provided subtree.= =20
The subtree corresponding to a macro statement is embodied by the MacroS= tatement parameter.=20
A MacroStatement has a collection of arguments and a block.=20
In this case we expect a single argument: a reference to an object. We t= hen traverse the block looking for references whose name begins with the '_= ' character. Whenever we encounter one, we replace it with a reference to a= member.=20
There are two classes related specifically to AST traversal: DepthFirstV= isitor and DepthFirstTransformer. Both classes walk an AST invoking appropr= iate methods for each type of element in the tree. In this case, we subclas= sed DepthFirstTransformer as a convenient way to find and replace Reference= Expression nodes in the macro's block.=20
You can find this plus other macro examples in the examples/macros directory.=20
A custom macro syntax is also planned.=20
Some other examples of macros already implemented in boo:=20
print "hello!" assert x =3D=3D true debug "print debug message" using file=3DFile.OpenText(fname): //disposes of file when done print(file.ReadLine()) o1 =3D object() lock o1: //similar to "synchronized" in java pass=20 | http://docs.codehaus.org/exportword?pageId=8555 | 2015-03-27T02:02:15 | CC-MAIN-2015-14 | 1427131293580.17 | [] | docs.codehaus.org |
changes.mady.by.user G. Ann Campbell
Saved on Dec 17, 2013
Saved on Dec 18, 2013
...
Check = Coding Rule.
A good coding practice. Not complying to coding rules leads to quality flaws and creation of issues in SonarQube. type of measurement. Metrics can have varying values, or measures, over time. Examples: number of lines of code, complexity, etc.
A metric may be either:
The value of the metric for a given component is called measure.
A set of coding rules.
Each snapshot is based on a single quality profile.
A set of measures and issues on a given component at a given time.
A snapshot is generated for each analysis.. | http://docs.codehaus.org/pages/diffpages.action?pageId=236782032&originalId=236224781 | 2015-03-27T02:16:47 | CC-MAIN-2015-14 | 1427131293580.17 | [] | docs.codehaus.org |
startx. The
startxcommand is a front-end to the
xinitcommand, which launches the X server (
Xorg) and connects X client applications to it. Because the user is already logged into the system at runlevel 3,
startxdoes not launch a display manager or authenticate users. Refer to Section 21.5.2, “Runlevel 5” for more information about display managers.
startxcommand is executed, it searches for the
.xinitrcfile in the user's home directory to define the desktop environment and possibly other X client applications to run. If no
.xinitrcfile is present, it uses the system default
/etc/X11/xinit/xinitrcfile instead.
xinitrcscript then searches for user-defined files and default system files, including
.Xresources,
.Xmodmap, and
.Xkbmapin the user's home directory, and
Xresources,
Xmodmap, and
Xkbmapin the
/etc/X11/directory. The
Xmodmapand
Xkbmapfiles, if they exist, are used by the
xmodmaputility to configure the keyboard. The
Xresourcesfile is read to assign specific preference values to applications.
xinitrcscript executes all scripts located in the
/etc/X11/xinit/xinitrc.d/directory. One important script in this directory is
xinput.sh, which configures settings such as the default language.
xinitrcscript attempts to execute
.Xclientsin the user's home directory and turns to
/etc/X11/xinit/Xclientsif it cannot be found. The purpose of the
Xclientsfile is to start the desktop environment or, possibly, just a basic window manager. The
.Xclientsscript in the user's home directory starts the user-specified desktop environment in the
.Xclients-defaultfile. If
.Xclientsdoes not exist in the user's home directory, the standard
/etc/X11/xinit/Xclientsscript attempts to start another desktop environment, trying GNOME first and then KDE followed by
twm. | http://docs.fedoraproject.org/en-US/Fedora/13/html/Deployment_Guide/s1-x-runlevels.html | 2015-03-27T01:57:11 | CC-MAIN-2015-14 | 1427131293580.17 | [] | docs.fedoraproject.org |
Implementation Details¶
Up to version 0.3 kazoo used the Python bindings to the Zookeeper C library. Unfortunately those bindings are fairly buggy and required a fair share of weird workarounds to interface with the native OS thread used in those bindings.
Starting with version 0.4 kazoo implements the entire Zookeeper wire protocol itself in pure Python. Doing so removed the need for the workarounds and made it much easier to implement the features missing in the C bindings.
Handlers¶
Both the Kazoo handlers run 3 separate queues to help alleviate deadlock issues and ensure consistent execution order regardless of environment. The SequentialGeventHandler runs a separate greenlet for each queue that processes the callbacks queued in order. The SequentialThreadingHandler runs a separate thread for each queue that processes the callbacks queued in order (thus the naming scheme which notes they are sequential in anticipation that there could be handlers shipped in the future which don’t make this guarantee).
Callbacks are queued by type, the 3 types being:
- Session events (State changes, registered listener functions)
- Watch events (Watch callbacks, DataWatch, and ChildrenWatch functions)
- Completion callbacks (Functions chained to IAsyncResult objects)
This ensures that calls can be made to Zookeeper from any callback except for a state listener without worrying that critical session events will be blocked.
Warning
Its important to remember that if you write code that blocks in one of these functions then no queued functions of that type will be executed until the code stops blocking. If your code might block, it should run itself in a separate greenlet/thread so that the other callbacks can run. | http://kazoo.readthedocs.org/en/latest/implementation.html | 2015-03-27T01:54:13 | CC-MAIN-2015-14 | 1427131293580.17 | [] | kazoo.readthedocs.org |
I can't pair with a Bluetooth enabled device
If you are having trouble pairing with a Bluetooth enabled device, there are a few things that you can check.
- or 1234 until you change it. If you don't know what the passkey is, try 0000 or 1234.
- If your BlackBerry device doesn't detect the Bluetooth enabled device that you want to pair with, try making your BlackBerry device discoverable for a short period of time. On the home screen, swipe down from the top of the screen. Tap
Settings > Network Connections >, Serial Port, or Personal Area Network profiles.. | http://docs.blackberry.com/en/smartphone_users/deliverables/48934/rok1384785327205.jsp | 2015-03-27T02:04:59 | CC-MAIN-2015-14 | 1427131293580.17 | [] | docs.blackberry.com |
Change your payment method
You can choose to pay for apps on BlackBerry World using a PayPal account, a credit card, or by including purchases on the bill that you receive from your wireless service provider. Depending on your wireless service provider, country, or organization, this feature might not be supported.
- On the BlackBerry World home screen, click My Account > Payment Options.
- Enter your BlackBerry ID.
- Select a payment option.
- Click Next.
- Complete the instructions on the screen.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/50659/1177703.jsp | 2015-03-27T02:01:23 | CC-MAIN-2015-14 | 1427131293580.17 | [] | docs.blackberry.com |
Fundamentals Guide
Local Navigation
Using code samples and the API reference
The API reference for the BlackBerry® Administration API describes the interfaces, classes, methods, and data types available in the BlackBerry Administration API, provides UML diagrams that illustrate the inheritance model used by all elements in the APIs, and provides code samples in Java® and Microsoft® Visual C#® that show how to use the APIs.
Some elements listed in the API reference for the BlackBerry Administration API might not be available if the BlackBerry® Enterprise Server component that manages the elements is not installed in your organization's BlackBerry Enterprise Server environment. The client proxies that you generate in your development environment contain only the elements that are available in your organization's BlackBerry Enterprise Server environment.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/nl-nl/developers/deliverables/25821/Using_the_API_reference_and_code_samples_1430072_11.jsp | 2015-03-27T02:08:34 | CC-MAIN-2015-14 | 1427131293580.17 | [] | docs.blackberry.com |
: creating complex feature datastores with mapping files.
The second part would continue as the gt-app-schema module. The first part would split off and become a new module 'gt-complex'. More specifically, this 'gt-complex' module will. | http://docs.codehaus.org/pages/viewpage.action?pageId=230395015 | 2015-03-27T02:10:30 | CC-MAIN-2015-14 | 1427131293580.17 | [] | docs.codehaus.org |
See: Description
Contains an implementation of Spring's transaction SPI for local Hibernate transactions. This package is intentionally rather minimal, with no template classes or the like, in order to follow native Hibernate recommendations as closely as possible.
This package supports Hibernate 4.x only.
See the
org.springframework.orm.hibernate3 package for Hibernate 3.x support. | http://docs.spring.io/spring-framework/docs/3.2.0.RELEASE/javadoc-api/org/springframework/orm/hibernate4/package-summary.html | 2015-03-27T02:02:05 | CC-MAIN-2015-14 | 1427131293580.17 | [] | docs.spring.io |
java.lang.Object
org.springframework.remoting.support.RemoteAccessororg.springframework.remoting.support.RemoteAccessor
public abstract class RemoteAccessor
Abstract base class for classes that access a remote service. Provides a "serviceInterface" bean property.
protected final Log logger
public RemoteAccessor()
public void setServiceInterface(Class serviceInterface)
Typically required to be able to create a suitable service proxy, but can also be optional if the lookup returns a typed proxy.
public Class getServiceInterface() | http://docs.spring.io/spring/docs/2.0.x/api/org/springframework/remoting/support/RemoteAccessor.html | 2015-03-27T02:03:11 | CC-MAIN-2015-14 | 1427131293580.17 | [] | docs.spring.io |
Anatomy of a Standard Party Detail Page
A Party Detail page is the layout used to display information about a Customer and other external entities your organization deals with.
While every Party Type can be configured to display specific information, the party's layout page usually remains similar to other party types. For new installations of ServiceJourney, there is a standard Party Detail Page.
Standard Party Detail Page
A standard Party Detail page looks like the screenshot below:
Header Title
This is the topmost section of the Case detail page.
Header Toolbar
Right under the header title is the header toolbar that contains the major actions to do on the Case. By default it contains:
Summary Section
This section contains basic information about the Party.
Central Tabs
The central tabs contain detailed information about the Party.
Right tabs
By default, this section shows:
| https://docs.eccentex.com/doc1/anatomy-of-a-standard-party-detail-page | 2022-06-25T10:40:18 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['../doc1/2053406886/Party%20Detail%202.png?inst-v=8f326cbe-d759-410f-b89d-9e6c8bf0a399',
None], dtype=object)
array(['../doc1/2053406886/Party-Header.png?inst-v=8f326cbe-d759-410f-b89d-9e6c8bf0a399',
None], dtype=object)
array(['../doc1/2053406886/Party-Toolbar.png?inst-v=8f326cbe-d759-410f-b89d-9e6c8bf0a399',
None], dtype=object)
array(['../doc1/2053406886/Party-Summary.png?inst-v=8f326cbe-d759-410f-b89d-9e6c8bf0a399',
None], dtype=object)
array(['../doc1/2053406886/Party-Tans.png?inst-v=8f326cbe-d759-410f-b89d-9e6c8bf0a399',
None], dtype=object)
array(['../doc1/2053406886/Party-Righttabsr.png?inst-v=8f326cbe-d759-410f-b89d-9e6c8bf0a399',
None], dtype=object) ] | docs.eccentex.com |
Hi!! I have a column that have lines that look like these:
LINE1: {value=vneobbnlri, id=123}, {value=ajfheofbks, id = 456}, {value=malualves, id = 678}....
LINE 2: {value=fhegnbegiervnrte uigel, id=123}, {value=ihefbgiuergbi, id = 456}, {value=malualve123, id = 678}
I want a query that my output would be the string that comes BEFORE the id 678. In the first line malualves and in the second malualves123 I´ve used substring and charindex, but i dont know how to separate only that string. | https://docs.microsoft.com/en-us/answers/questions/321522/how-can-a-split-a-string-in-sql.html | 2022-06-25T12:39:14 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.microsoft.com |
Framework
Content Element. Data Context Changed Event
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Occurs when this element's data context
Event Type
Remarks
For an explanation of data contexts and data binding, see Data Binding Overview.
Important
When an element's DataContext changes, all data-bound properties on this element are potentially affected. This applies to any elements that are descendant elements of the current element, which inherit the data context, and also the current element itself. All such bindings re-interpret the new DataContext to reflect the new value in bindings. There is no guarantee made about the order of these changes relative to the raising of the DataContextChanged event. The changes can occur before the event, after the event, or in any mixture. | https://docs.microsoft.com/en-us/dotnet/api/system.windows.frameworkcontentelement.datacontextchanged?view=windowsdesktop-6.0&viewFallbackFrom=netcore-3.1 | 2022-06-25T12:43:35 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.microsoft.com |
- Factory Reset: select this to restore the configuration of the transmitter to factory default settings.
- Clear Analog Output Settings: this option removes the current analog output settings from Indigo's memory. When you clear the analog output settings, Indigo automatically adapts the analog output settings of the next probe you connect. | https://docs.vaisala.com/r/M211877EN-F/en-US/GUID-5EA695F5-CB35-4034-8DEA-3753E48E1074/GUID-0934DF29-4F8D-4EE8-BC2E-06374E877320 | 2022-06-25T12:02:10 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.vaisala.com |
Metadata¶
Samples are normally annotated with the use of
descriptor and
descriptor_schema. However in some cases the fields defined in
DescriptorSchema do not suffice and it comes handy to upload sample
annotations in a table where each row holds information about some
sample in collection. In general, there can be multiple rows referring
to the same sample in the collection (for example one sample received
two or more distinct treatments). In such cases one can upload this
tables with the process Metadata table. However, quite often there is
exactly one-on-one mapping between rows in such table and samples in
collection. In such case, please use the “unique” flavour of the above
process, Metadata table (one-to-one).
Metadata in ReSDK is just a special kind of
Data resource that
simplifies retrieval of the above mentioned tables. In addition to all
of the functionality of
Data, it also has two additional attributes:
df and
unique:
# The "df" attribute is pandas.DataFrame of the output named "table" # The index of df are sample ID's m.df # Attribute "unique" is signalling if this is metadata is unique or not m.unique
Note
Behind the scenes,
df is not an attribute but rather a property.
So it has getter and setter methods (
get_df and
set_df).
This comes handy if the default parsing logic does not suffice. In
such cases you can provide your own parser and keyword arguments for
it. Example:
import pandas m.get_df(parser=pandas.read_csv, sep="\t", skiprows=[1, 2, 3])
In the most common case, Metadata objects exist somewhere on Resolwe server and user just fetches them:
# Get one metadata by slug m = res.metadata.get("my-slug") # Filter metadata by some conditions, e.g. get all metadata # from a given collection: ms = res.metadata.filter(collection=<my-collection>):
Sometimes, these objects need to be updated, and one can easily do that.
However,
df and
unique are upload protected - they can be set
during object creation but cannot be set afterwards:
m.unique = False # Will fail on already existing object m.df = <new-df> # Will fail on already existing object
Sometimes one wishes to create a new Metadata. This can be achieved in the same manner as for other ReSDK resources:
m = res.metadata.create(df=<my-df>, collection=<my-collection>) # Creating metadata without specifying df / collection will fail m = res.metdata.create() # Fail m = res.metdata.create(collection=<my-collection>) # Fail m = res.metdata.create(df=<my-df>) # Fail
Alternatively, one can also build this object gradually from scratch and
than call
save():
m = Metadata(resolwe=<resolwe>) m.collection = <my-collection> my_df = m.set_index(<my-df>) m.df = my_df m.save()
where
m.set_index(<my-df>) is a helper function that finds
Sample name/slug/ID
column or index name, maps it to
Sample ID and sets it as index.
This function is recommended to use because the validation step is trying to
match
m.df index with
m.collection sample ID’s.
Deleting Metadata works the same as for any other resource. Be careful, this cannot be undone and you need to have sufficient permissions:
m.delete() | https://resdk.readthedocs.io/en/latest/metadata.html | 2022-06-25T11:01:06 | CC-MAIN-2022-27 | 1656103034930.3 | [] | resdk.readthedocs.io |
When creating or updating a customer, you can add custom fields to the customer record, which you will be able to use this fields in the email templates.
Custom fields types
Custom fields should cannot be a nested object.
At the moment, Remarkety supports the following types:
- String
- Boolean
- Number
- Date-time (date-time field name should have a suffix of "_at")
Custom fields names
You can name your fields however you like, but we recommend to keep it clear and simple.
API Examples
The following example will add a boolean field (is_vip) and a date-time field (wedding_at):
curl --request PUT \ --url{store_id}/customers/hash \ --header 'accept: application/json' \ --header 'content-type: application/json' \ --data '{"customer": {"is_vip": true, "wedding_at": "2017-02-23T19:00:00+03:00"}}'
Using custom fields in tempaltes
Custom fields are available in email templates exactly like other customer fields.
You can use it like this: {$shopper.custom_field_name}
For example, to show the "wedding_at" custom field from the API Example, formatted, you can do:
{$shopper.wedding_at|dateformat:"m/d/Y"}
To conditionally check a custom field, you can use:
{if $shopper.is_vip}VIP Club{/if} | https://api-docs.remarkety.com/reference/custom-fields | 2022-06-25T11:39:50 | CC-MAIN-2022-27 | 1656103034930.3 | [] | api-docs.remarkety.com |
- Requirements
- Setup components
- Configure the external load balancer
- Configure PostgreSQL
- Configure Redis
- Configure Gitaly
- Configure GitLab Rails
- Configure Prometheus
- Configure the object storage
- Configure Advanced Search
- Configure NFS (optional)
- Cloud Native Hybrid reference architecture with Helm Charts (alternative)
Reference architecture: up to 2,000 users
This page describes GitLab reference architecture for up to 2,000 users. For a full list of reference architectures, see Available reference architectures.
- Supported users (approximate): 2,000
- High Availability: No. For a highly-available environment, you can follow a modified 3K reference architecture.
- Estimated Costs: See cost table
- Cloud Native Hybrid: Yes
- Validation and test results: The Quality Engineering team does regular smoke and performance tests to ensure the reference architectures remain compliant
-.
Requirements
Before starting, you should take note of the following requirements / guidance for this reference architecture.
Supported CPUs
This reference architecture was built and tested on Google Cloud Platform (GCP) using the Intel Xeon E5 v3 (Haswell) CPU platform. On different hardware you may find that adjustments, either lower or higher, are required for your CPU or node counts. For more information, see our Sysbench-based CPU benchmarks.
Supported infrastructure
As a general guidance, GitLab should run on most infrastructure such as reputable Cloud Providers (AWS, GCP, Azure) and their services, or self managed (ESXi) that meet both the specs detailed above, as well as any requirements in this section. However, this does not constitute a guarantee for every potential permutation.
Be aware of the following specific call outs:
- Azure Database for PostgreSQL is not recommended due to known performance issues or missing features.
- Azure Blob Storage is recommended to be configured with Premium accounts to ensure consistent performance.
Setup.
Configure
Ensure the external load balancer only routes to working services with built in monitoring endpoints. The readiness checks all require additional configuration on the nodes being checked, otherwise, the external load balancer will not be able to connect.
Ports
In this section, you’ll be guided through configuring an external PostgreSQL database to be used with GitLab.
Provide your own PostgreSQL instance
If you’re hosting GitLab on a cloud provider, you can optionally use a managed service for PostgreSQL.
A reputable provider or solution should be used for this. Google Cloud SQL and Amazon RDS are known to work, however Azure Database for PostgreSQL is not recommended due to performance issues.
In this section, you’ll be guided through configuring an external Redis instance to be used with GitLab.
Provide roles(["redis_master_role"])
Gitaly server node requirements are dependent on data size, specifically the number of projects and those projects’ sizes. gitlab_kas[, or its certificate authority, must be installed on all Gitaly nodes (including the Gitaly node using the certificate) and on all client nodes that communicate with it following the procedure described in GitLab custom certificate configuration.
It’s possible to configure Gitaly servers with both an unencrypted listening
address (
listen_addr) and an encrypted listening address (
tls_listen_addr)
at the same time. This allows you to do a gradual transition from unencrypted to
encrypted traffic, if necessary.
To configure Gitaly with TLS:']['enabled'] = true
GitLab supports using an object storage service for holding several types of data, and is recommended over NFS. In general, object storage services are better for larger environments, as object storage is typically much more performant, reliable, and scalable.
GitLab has been tested on a number of object storage providers:
- Amazon S3
- Google Cloud Storage
- Digital Ocean Spaces
- Oracle Cloud Infrastructure
- OpenStack Swift
- Azure Blob storage
- On-premises hardware and appliances from various storage vendors.
- MinIO. We have a guide to deploying this within our Helm Chart documentation. also recommended to switch to Incremental logging, which uses Redis instead of disk space for temporary caching of job logs. This is required when no NFS node has been deployed. Advanced Search
You can leverage Elasticsearch and enable Advanced Search for faster, more advanced code search across your entire GitLab instance.
Elasticsearch cluster design and requirements are dependent on your specific data. For recommended best practices about how to set up your Elasticsearch cluster alongside your instance, read how to choose the optimal cluster configuration.
Configure NFS (optional)
For improved performance, object storage, along with Gitaly, are recommended over using NFS whenever possible.
See how to configure NFS.
Read:.
Cluster are known to work.
- Should be run on reputable third-party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work.
Resource usage settings
The following formulas help when calculating how many pods may be deployed within resource constraints. The 2k reference architecture example values file documents how to apply the calculated configuration to the Helm Chart.
Webs. | https://docs.gitlab.com/14.10/ee/administration/reference_architectures/2k_users.html | 2022-06-25T11:28:17 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.gitlab.com |
Overview
There is no universally accepted definition for what an interpretable model is, or what information is adequate as an interpretation of a model. This guide focuses on the commonly used notion of feature importance, where an importance score for each input feature is used to interpret how it affects model outputs. This method provides insight but also requires caution. Feature importance scores can be misleading and should be analyzed carefully, including validation with subject matter experts if possible. Specifically, we advise you not to trust feature importance scores without verification, because misinterpretations can lead to poor business decisions.
In the following illustration, the measured features of an iris are passed into a model that predicts the species of the plant, and associated feature importances (SHAP attributions) for this prediction are displayed. In this case, the petal length, petal width, and sepal length all contribute positively to the classification of Iris virginica, but sepal width has a negative contribution. (This information is based on the iris dataset from [4].)
Feature importance scores can be global, indicating that the score is valid for the model across all inputs, or local, indicating that the score applies to a single model output. Local feature importance scores are often scaled and summed to produce the model output value, and thus termed attributions. Simple models are considered more interpretable, because the effects of the input features on model output are more easily understood. For example, in a linear regression model, the magnitudes of the coefficients provide a global feature importance score, and for a given prediction, a local feature attribution is the product of its coefficient and the feature value. In the absence of a direct local feature importance score for a prediction, you can compute an importance score from a set of baseline input features to understand how a feature contributes relative to the baseline. | https://docs.aws.amazon.com/prescriptive-guidance/latest/ml-model-interpretability/overview.html | 2022-06-25T12:17:38 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['images/iris-example.png',
'Predicting an iris by using measured features and SHAP attributions'],
dtype=object) ] | docs.aws.amazon.com |
Create advanced audience combinations
Using the Optimizely app, you can easily use "and", "or" operators (i.e., 'any' or 'all') to create an audience combination composed of other audiences.
If you want to use more complex nested logical operators with "and, or, not", you can do so in JSON in Code Mode. Each audience is a rule like
User likes salads, and an audience combination is a Boolean combination of these rules, like
User likes pizza NOT (User likes salads AND User likes soup)
If an experiment or rollout using the "Match all audiences" or "Custom" audience type is evaluated with a 1.x or 2.x SDK, targeting won't pass and conversions and impressions will be lost.
Get the audience identifiers
Each individual condition is a JSON object with an
audience_id. You can add these identifiers directly in Code Mode or find them by selecting Match any audience or Match all audiences, selecting the audience, and switching to Code Mode.
Update Code Mode JSON
Select Code Mode to access the code editor and define your JSON audience combination.
The Evaluated Audiences field provides a summary of the defined conditions, which allows you to verify the audience combination's definition and accuracy.
Define the conditions
Conditions are joined together in lists:
- The first element in each list is an operator,
"and",
"or", or
"not", and the rest of the conditions are combined using that operator.
- You can replace any individual condition with another list, which allows for a nested structure of
"and"and
"or"conditions.
- A
"not"list should only have one condition or list, which will be negated. A
"not"with a list of other conditions like
["not", ["and", {...}, {...}]]can negate the entire result of the child condition list.
The example below shows how you could define audience combination conditions. You can also create a feature with an audience combination in Optimizely and look at the Code Mode view.
// "User who loves salads" // or "User who loves sandwiches" [ "or", { "audience_id": 1038980040 }, { "audience_id": 1033280055 } ] // "User who loves salads" // or "User who loves sandwiches" // or doesn't "Like both salads & sadwiches" [ "or", { "audience_id": 1038980040 }, { "audience_id": 1033280055 }, [ "not", { "audience_id": 1120870079 } ] ] // Is not "User who loves salads" // AND is not "User who loves sandwiches" [ "not", [ "and", { "audience_id": 1038980040 }, { "audience_id": 1033280055 } ] ]
Optional: Use the REST API to save the audience combination
As an optional final step, you can
HTTP PUT the audience combination as a serialized string (for example, by using
JSON.stringify(data) in JavaScript) via the
audience_conditions key and
/features REST API endpoint.
We'll return it in the same format, so to traverse it you'll need to parse it as an object (for example, by using
JSON.parse(string) in JavaScript).
For more information about the REST API endpoints, see:
Note
You can use the same
audience_conditionsformat for the
/featuresREST API endpoints.
For the REST API, the only difference is that you must use a value of "everyone" for
audience_conditionsto target the rollout accordingly (i.e., everyone should be allowed).
Updated about 2 years ago | https://docs.developers.optimizely.com/full-stack/docs/configure-a-custom-audience-combination | 2022-06-25T10:58:28 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['https://files.readme.io/ef1e921-audience-id-codemode.gif',
'audience-id-codemode.gif'], dtype=object)
array(['https://files.readme.io/ef1e921-audience-id-codemode.gif',
'Click to close...'], dtype=object)
array(['https://files.readme.io/834deef-evaluated-audiences.png',
'evaluated-audiences.png'], dtype=object)
array(['https://files.readme.io/834deef-evaluated-audiences.png',
'Click to close...'], dtype=object) ] | docs.developers.optimizely.com |
- ,
create a ~"failure::flaky-test" issue.
If the test cannot be fixed in a timely fashion, there is an impact on the
productivity of all the developers, so it should be quarantined by
assigning the
:quarantine metadata with the issue URL, and add the
~"quarantined test" label to the issue.
it 'succeeds', quarantine: '' do expect(response).to have_gitlab_http_status(:ok) end
This means it is skipped unless run with
--tag quarantine:
bin/rspec --tag quarantine
Once a test is in quarantine, there are 3 choices:
- Fix the test (that is, get rid of its flakiness).
- Move the test to a lower level of testing.
- Remove the test entirely (for example, because there’s already a lower-level test, or it’s duplicating another same-level test, or it’s testing too much etc.).
scripts/rspec_bisect_flaky,
which would give us the minimal test combination to reproduce the failure:
- First obtain the list of specs that ran before the flaky test. You can search for the list under
Knapsack node specs:in the CI job output log.
Save the list of specs as a file, and run:
cat knapsack_specs.txt | xargs scripts/rspec_bisect_flaky
If there is an order-dependency issue, the script above will print the minimal reproduction.
- Avoid asserting against flash notice banners
Capybara viewport size related issues
- Transient failure of spec/features/issues/filtered_search/filter_issues_spec.rb:
Capybara JS driver related issues
- Don’t wait for AJAX when no AJAX request is fired:
- Bis:
Capybara expectation times out
- Test imports a project (via Sidekiq) that is growing over time, leading to timeouts when the import takes longer than 60 seconds
Hanging specs
If a spec hangs, it might be caused by a bug in Rails:
-
- | https://docs.gitlab.com/14.10/ee/development/testing_guide/flaky_tests.html | 2022-06-25T11:19:38 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.gitlab.com |
Improvements
- Improved the dSYM archive uploader so it will now dispatch asynchronously to the build, and will retry the upload up to 3 times.
- Custom metrics will now accept spaces in the name.
- Improved Cocoapods support. The NewRelicAgent podspec no longer includes version strings in framework paths. This eases pod management and reduces diff clutter when dealing with version control software.
Fixes
- Corrected a bug in crash reporting where certain internal symbols were globally visible when the -ObjC linker flag is present.
- Corrected a bug in crash reporting where Cortex-A9 chips were reported as ARM-Unknown in crash reports.
- Validating the length of activities, only allowing those with a start time within the current session. | https://docs.newrelic.com/docs/release-notes/mobile-release-notes/ios-release-notes/ios-agent-536/ | 2022-06-25T11:16:04 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.newrelic.com |
Message-ID: <1212234139.20911.1394177882255.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_20910_1228483371.1394177882255" ------=_Part_20910_1228483371.1394177882255 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Coverage plugin to decode Matlab files which adhere to version 5 of the = format specification.=20
=20
IP Check: review.apt added, all headers are in place
Quality Assurance: test coverage is poor, we need to work =
on that and find suitable test data
Stability: No pla=
nned API changes
Supported: Documents available. Modu=
le an= d grid to world transformation.=20
Notice that the Matlab data format version 5 support many different type= s of datatypes, like Int32 and Double. For more information, please check t= he file format specification here .=20
We plan to do what follows as time and funding permits:=20
The Image I/O-Ext project leverages existing native APIs=
to extend the set of formats that the JAI ImageIO API can =
read.
Actual version of Image I/O-Ext is 1.0.5, but this plugin is le= veragin on version 1.1 unstable. | http://docs.codehaus.org/exportword?pageId=137200232 | 2014-03-07T07:38:02 | CC-MAIN-2014-10 | 1393999636668 | [] | docs.codehaus.org |
If you take testing seriously as I do, you most likely know test code coverage tools. And on Flex Universe the de facto standard and unique alternative is Flex Cover. But let's be honest here, the tool does the job, but isn't simple. Requiring a special compiler made me dislike the idea of running code coverage on Flex projects. And mainly that was why test code coverage never got into Flexmojos. The available tool was too complex to use, making it so complicated to automate that I never gave it any serious thought.
Since day ZERO, Flexmojos has as master goal to KISS (Keep It Simple
Stupid Smartguy). With that in mind, Flexmojos is proud to announce a state of art test code coverage support.
Flexmojos test code coverage support is really, really easy to use. Just need to set coverage to true.
How does it work
Flexmojos is using Apparat. Apparat is a framework to optimize ABC, SWC and SWF files.
Better than that, Apparat allows people to manipulate actionscript bytecode. So this was the way to go. Instrument flex test code w/o a custom compiler. So I start to talk to Joa Ebert, Apparat author. He made changes on apparat side to make possible to instrument the Flexmojos test runner SWF and I made the changes to Flexmojos to collect the code usage flag by Apparat.
Once the code usage information is collected Flexmojos uses Cobertura to handle that information and to genarate reports. This allows Flexmojos to generate html, xml and/or summaryXml.
This is only available on Flexmojos 4. So if you never thought about Maven 3... that may be the time.
Note: Only reports line coverage, not branch coverage.
Sample project
To show how simple it is to enable test code coverage on Flexmojos 4.x, we will edit Flexunit sample project to enable code coverage. The project can be found here. Once you get the sources, edit pom.xml. There are just 2 simple changes:
- change the %flexmojos.version property to 4.0-SNAPSHOT;
- add <coverage>true</coverage> to flexmojos configuration section;
The pom with all required changes is available here.
Running $mvn clean install will produce the following report:
The original report can be found here
Configuration Options
test-compile goal options:
test-run goal options
Jun 05, 2010
Olivier Chicha says:This is exactly what I was looking for, it's working great. Two little things :...
This is exactly what I was looking for, it's working great.
Two little things :
- I think there is a bug on interfaces, the package declaration of interfaces is marked as a non tested line
- I could only access your pom by replacing "https" by "http" :
Jun 06, 2010
Marvin Herman Froeder says:Https things fixed. No clue on package
Https things fixed.
No clue on package | https://docs.sonatype.org/display/FLEXMOJOS/Test+coverage+support | 2014-03-07T07:34:23 | CC-MAIN-2014-10 | 1393999636668 | [array(['/download/attachments/6062091/coverage.png?version=1&modificationDate=1271936116452',
None], dtype=object) ] | docs.sonatype.org |
What's new in REAL Studio 2011 R3?
From Xojo Documentation
Here's what's new:
- Better Localization Support - You can now retrieve at runtime the localized version of a string constant for any language.
- Better HTML5 GeoLocation Support - The WebDeviceLocation class now supports additional HTML5 geolocation features: Accuracy, Altitude, AltitudeAccuracy, Heading, Speed and TimeStamp.
- Easier CGI Deployment - When building a web app, the CGI has better file permissions to make it more likely they will be correct when the CGI is installed on the web server.
- Improved JSON Support - The JSON class has a new Lookup function similar to the one in the Dictionary class.
- OS X 10.7 Lion Support - Several improvements have been made for Lion including support for high-quality voices that are now used when using the Speak method.
- Faster, Better Web App Session Launching - Web apps have been optimized for faster session startup. Also App.LaunchMessageDelay has been added to allow the developer to determine when, if ever, the session launch page should be displayed.
We added 11 new features and fixed 73 bugs. | http://docs.xojo.com/index.php?title=What's_new_in_REAL_Studio_2011_R3%253F&oldid=35187 | 2014-03-07T07:34:41 | CC-MAIN-2014-10 | 1393999636668 | [] | docs.xojo.com |
View tree
Close tree
|
Preferences
|
|
Feedback
|
Legislature home
|
Table of contents
Search
Up]
Down
Down
/2011/related/rules/senate
true
legislativerules
/2011/related/rules/senate/4/46/3/e
legislativerules/2011/sr46(3)(e)
legislativerules/2011/sr46(3)(e)
section
true
PDF view
View toggle
Cross references for section
View sections affected
References to this
Reference lines
Clear highlighting
Permanent link here
Permanent link with tree | http://docs.legis.wisconsin.gov/2011/related/rules/senate/4/46/3/e | 2014-03-07T07:35:07 | CC-MAIN-2014-10 | 1393999636668 | [] | docs.legis.wisconsin.gov |
Install tools from the BlackBerry Resource Kit
Previous versions of the tools have been released for use with the BlackBerry Enterprise Server and BlackBerry smartphones. You cannot use the setup application for the BlackBerry Resource Kit for the BlackBerry Device Service to upgrade previous versions of the tools that are designed for use with the BlackBerry Enterprise Server.
Check the system requirements for the tools that you want to install.
- On the computer where you plan to install the tools, create a folder for the installation files.
- In a browser, visit, and navigate to the Downloads area for the BlackBerry Resource Kit for the BlackBerry Device Service.
- Download the installation package that matches the version of the BlackBerry Device Service that is installed in your organization's environment.
- Extract the contents of the installation package to the folder that you created.
- Double-click the brk<version>.msi file.
- Click Next.
- Accept the end-user license agreement. Click Next.
- In the drop-down list for each tool, perform one of the following actions:
- To install the tool, click Will be installed on local hard drive.
- To prevent the setup application from installing the tool, click Entire feature will be unavailable.
- Click Next.
- If you chose to install the BlackBerry Enterprise Service 10 User Administration Tool, you are prompted for the DNS name of the BlackBerry Administration Service. Type the DNS name of the BlackBerry Administration Service that you use to manage the BlackBerry Device Service (for example, server1.test.company.com).
- If necessary, click Verify DNS Name. After the DNS name is verified, click Next.
- Verify the file path where the tools will be installed. If you want to change the file path, click Change and select a new location.
- Click Next.
- If you do not want the tools to use UAC-compliant file paths for configuration files, input files, output files, and log files, clear the Use Windows UAC compliant paths check box.
- Click Next.
- Click Install.
- When the installation process completes, click Finish.
- Complete the postinstallation tasks for the tools that you installed. For more information, see Post-installation tasks.
- If you want to change the information that you specified for the BlackBerry Enterprise Service 10 User Administration Tool, you can edit the BESUserAdminClient.exe.config file located at the file path that you specified in step 12. If your computer's operating system uses UAC and does not permit you to modify the configuration file at this location, open the virtualized copy of the BESUserAdminClient.exe.config file in the application data folder for the current user (for example, <drive>\Users\<user_name>\AppData\Local\VirtualStore\Program Files (x86)\Research In Motion\BlackBerry Resource Kit for BlackBerry Device Service).
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/admin/deliverables/49280/1747039.jsp | 2014-03-07T07:38:31 | CC-MAIN-2014-10 | 1393999636668 | [] | docs.blackberry.com |
The hardware is classified by:
Schematics and similar files for many of these can be found in the Blackfin Hardware Project.
We also have a checklist of things that should be checked before you design your Blackfin based design.
Purchase information can be found in the buy stuff page.
Complete Table of Contents/Topics | http://docs.blackfin.uclinux.org/doku.php?id=hw | 2014-03-07T07:33:43 | CC-MAIN-2014-10 | 1393999636668 | [] | docs.blackfin.uclinux.org |
Assembly
Record of Committee Proceedings
Committee on Colleges and Universities
Assembly Bill 498
Relating to: student loans, the individual income tax subtract modification for tuition and student fees, creating an authority, to be known as the Wisconsin Student Loan Refinancing Authority, granting rule-making authority, and making an appropriation.
By Representatives Mason, Kolste, Johnson, Wachs, Smith, Pasch, Wright, Goyke, Kahl, Doyle, Zamarripa, Billings, Bewley, Ringhand, Danou, Barnes, Ohnstad, Kessler, C. Taylor, Young, Riemer, Genrich, Hesselbein, Jorgensen, Richards, Barca, Shankland, Hintz, Clark, Milroy, Sargent, Bernard Schaber, Berceau, Pope, Hulsey, Vruwink, Zepnick, Hebl and Sinicki; cosponsored by Senators Hansen, Lassa, Miller, Harris, Lehman, L. Taylor, Wirch, Risser, T. Cullen, C. Larson, Erpenbach, Vinehout, Carpenter, Jauch and Shilling.
November 08, 2013 Referred to Committee on Colleges and Universities
February 10, 2014 Public Hearing Held
Present: (10) Representative Nass; Representatives Murphy, Weatherston, Stroebel, Schraa, Bewley, Billings, Hesselbein, Wachs and Berceau.
Absent: (0) None.
Excused: (3) Representatives Knudson, Ballweg and Krug.
Appearances For
· None.
Appearances Against
· None.
Appearances for Information Only
· None.
Registrations For
· None.
Registrations Against
· None.
Registrations for Information Only
· None.
April 08, 2014 Failed to pass pursuant to Senate Joint Resolution 1
______________________________
Mike Mikalsen
Committee Clerk | http://docs.legis.wisconsin.gov/2013/related/records/assembly/colleges_and_universities/1083432 | 2020-08-03T13:28:36 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.legis.wisconsin.gov |
Tea lovers in New Jersey and New York now have a new reason to hit the beverage aisle of their local “Inserra” Shop-Rite stores. A variety of Doc’s Tea has made its way onto their shelves, including the following flavors; Mango, Elderberry Blueberry, Orange Ginger and Island Coconut. Select stores are also placing Little Doc’s, Grapple Tea, on store shelves too!
Doc’s Tea is excited to be available to Shop-Rite customers and is currently doing samplings in these new locations. You can follow our social media pages to keep up to date with demo locations and dates. Find the closest Shop-Rite to you to carry Doc’s Tea under retail locations on our website. | http://docstea.com/2019/03/11/docs-tea-hitting-the-shelves-in-the-shop-rite-grocery-chain/ | 2020-08-03T12:26:18 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docstea.com |
CIDER improvements. | https://docs.cider.mx/cider/0.24/about/compatibility.html | 2020-08-03T12:27:59 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.cider.mx |
Monitor attacks
The Monitor page allows you to view and triage attacks that are currently happening, and look back to see attacks that occurred within a specific timeframe. The dashboard gives you the full picture of the attackers that attempted to exploit your applications, the type of attack events detected and which applications were involved.
The Monitor overview divides attack event data into three categories: Attackers, Attack Events and Target Applications. Use the dropdown menus at the top of the page to customize your view by time span and environment. Use the search field to find attacks by the attacker's IP information or source name, affected applications, or specific Assess or Protect rules. You can also check the box to Show probed if you want to include information for attack events that resulted in a Probed status.
The Active Attacks badge at the top of the page communicates the current attack status of your organization. This keeps you apprised of any changes that may occur as you delve into details of other attacks.
View attackers
In the Attackers column, you can see a list of attackers and the number of associated attack events reported within your selected timeframe. Click on the total number of attackers at the top of the column to see the data in the Attacks grid.
If an attacker is identified by a source name, hover over the name to see a list of the IP addresses labeled with this name. If an attacker is unknown (not identified by a source name), the silhouette icon to the left of their IP address includes a question mark. If an attacker successfully exploited an application, it's shown in red. Click on an attacker to go to the relevant Attack Details page.
Note
If the data reported for an attack event matches more than one source name, Contrast applies the name that you updated most recently.
View attack events
The Attack Events column displays a list of the types of attacks detected, along with the total number of attack events per type. The bar below each attack type shows a breakdown of the attack events by result, such as Exploited (red) or Blocked (green).
View target applications
In the Target Applications column, Contrast shows each application that has been targeted by an attack. The bar below each application shows a breakdown of the attack events by result, such as Exploited (red) or Blocked (green). Click on the total number of applications to see the data in the Attacks grid. Click on an application to see a filtered view in the Attack Events grid.
View attack details
Click on an attacker's IP address or source name to see details about an attack. This takes you to the Attack Details page with a summary of information including the attack type, its duration, and affected applications and servers. Contrast shows the total number of events that make up the attack. Click on one to see more details about the individual event.
From this page, you can add the attacker to an IP blacklist, export all the individual events that comprise the attack, or suppress an attack (and its events) altogether. You can also take actions on events as you triage, such as creating virtual patches, configuring Protect rules or adding exclusions. | https://docs.contrastsecurity.com/en/monitor-attacks.html | 2020-08-03T11:23:58 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.contrastsecurity.com |
Configure the search head with the dashboard
You can edit the configuration of an existing search head node enable a search head initially, see Enable the search head. URI field.
- To change the security.
It is extremely unlikely that you will want to change the node type for nodes in an active cluster. Consider the consequences carefully before doing so.! | https://docs.splunk.com/Documentation/Splunk/6.5.9/Indexer/Configuresearchheadwithdashboard | 2020-08-03T13:23:32 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
User Pool
Resource OverviewResource Overview
A User Pool is a user directory stored in Amazon Cognito.
Implementing a User Pool in your stack gives you the ability to securely sign-up and authenticate users who intend to use your serverless applications. User Pools can be managed with the AWS SDK and accessed by Functions and Edge Functions to create, update, or delete the user profiles stored inside.
The events located in each User Pool resource allow you to offer a custom sign-up/sign-in process for your users and to better serve your application's needs. These events are covered in the User Pool Components & Implementation section below.
Event SubscriptionEvent Subscription
Event subscription wires (solid line) visualize and configure event subscription integrations between two resources.
The following resources can be subscribed to a User Pool:
- User Pool resource:
- Function
- User Pool will be prefixed with this value.
The identifier you provide must only contain alphanumeric characters (A-Za-z0-9) and be unique within the stack.
Default Logical ID Example:
UserPool.
Allow Public Sign-Ups
Allows non-administrative users to sign up to this User Pool.
Auto-Verify Emails
Enabling this will automatically send email verifications when a user is signed-up to this User Pool.
User Pool Components & ImplementationUser Pool Components & Implementation
Configuring User Pool ClientsConfiguring User Pool Clients
User Pool Client resources (app client) can be configured to generate authentication tokens used to authorize a user for an application. When a User Pool Client resource is connected (using an event subscription wire) to a User Pool and the stack is deployed, a Client ID will be generated for an application to use to access the User Pool.
When you connect a Function resource to the User Pool Client with a service discovery wire (dashed wire), the Function will populate the User Pool Client ID to reference.
This value can be used directly in the Function's handler code since they'll be automatically configured as environment variables.
The screenshot above is an example of a Function resource connected to both a User Pool and a User Pool Client. The Function's environment variables are populated with the User Pool Client's identifier as well as the User Pool's identifier, which the Function requires in order to authorize users within the User Pool.
User Pool EventsUser Pool Events
The following User Pool Events can be attached to Function resources with an event subscription wire to invoke them when specific events occur.
Pre Sign-up
Occurs just before a new user is added to the User Pool, providing you with the ability to perform custom validation to accept or deny a sign-up request.
Post Confirmation
Occurs after a new user is confirmed and added to the User Pool. This event contains the request with all the current attributes of the new user for you to perform custom messaging or logic to.
Pre Authentication
Occurs when a user attempts to sign in, providing you with the ability to perform custom validation to accept or deny a authentication request.
Occurs after a user has successfully signed in, providing you with the ability to add custom logic after the user has been authenticated.
Custom Message
Occurs before the User Pool sends email or phone verification messages, or multi-factor authentication (MFA) codes, providing you with the ability to customize those verification messages.
IAM PermissionsIAM Permissions
When connected by a service discovery wire (dashed wire), a Function or Edge Function will add the following IAM policy to its role and gain permission to access this resource.
- Statement: - Effect: Allow Action: - cognito-idp:Admin* - cognito-idp:DescribeIdentityProvider - cognito-idp:DescribeResourceServer - cognito-idp:DescribeUserPool - cognito-idp:DescribeUserPoolClient - cognito-idp:DescribeUserPoolDomain - cognito-idp:GetGroup - cognito-idp:ListGroups - cognito-idp:ListUserPoolClients - cognito-idp:ListUsers - cognito-idp:ListUsersInGroup - cognito-idp:UpdateGroup Resource: !GetAtt UserPool.Arn
Environment VariablesEnvironment Variables
When connected by a service discovery wire (dashed wire), a Function or Edge Function will automatically populate and reference the following environment variables in order to interact with this resource.
USER_POOL_ID
The unique identifier for the User Pool in Amazon Cognito
Example:
us-east-1_Iqc12345
USER_POOL
The Amazon Resource Name of the Cognito User Pool
Example:
arn:aws:cognito-idp:us-east-1:123412341234:userpool/us-east-1_Iqc12345
AWS SDK Code ExampleAWS SDK Code Example
Language-specific examples of AWS SDK calls using the environment variables discussed above.
Create a new user in a User PoolCreate a new user in a User Pool
// Load AWS SDK and create a new Cognito object
const AWS = require("aws-sdk");
const cognito = new AWS.CognitoIdentityServiceProvider();
const clientId = process.env.USER_POOL_CLIENT_ID; // supplied by Function service-discovery wire
exports.handler = async message => {
// Construct parameters for the signUp call
const params = {
ClientId: clientId,
Password: 'Password$123',
Username: 'NewUser_99',
UserAttributes: [
{
Name: 'email',
Value: '[email protected]'
}
]
};
await cognito.signUp(params).promise();
console.log('User ' + params.Username + ', created');
}
import boto3
import os
# Create an Cognito Identity Provider client
cognito = boto3.client('cognito-idp')
user_pool_client_id = os.environ['USER_POOL_CLIENT_ID'] # Supplied by Function service-discovery wire
def handler(message, context):
# Add a new user to your User Pool
response = cognito.sign_up(
ClientId=user_pool_client_id,
Username='NewUser_99',
Password='Password$123',
UserAttributes=[
{
'Name':'email',
'Value':'[email protected]'
}
]
)
return response
Related AWS DocumentationRelated AWS Documentation
AWS Documentation: AWS::Cognito::UserPool
AWS SDK Documentation: Node.js | Python | Java | .NET | https://docs.stackery.io/docs/3.12.2/api/nodes/UserPool/ | 2020-08-03T11:50:28 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['/docs/assets/resources/user-pool-client.png', 'screenshot'],
dtype=object)
array(['/docs/assets/resources/service-discovery/func-user-pool.png',
'screenshot'], dtype=object) ] | docs.stackery.io |
“Little Doc’s, inspired by the little people in the Doc’s Tea family. An alternative to the typical juice box; we worked long and hard designing the perfect, kid friendly, good for you tea. Creating healthy habits for healthier kids!”
The newest addition to the Doc’s Tea line up, Little Doc’s Grapple Tea, is now available on. In addition to finding Little Doc’s in individual units at a retail level, we are excited to offer case packs online through our e-commerce partner, Amazon.
Little Doc’s is perfect for the “Littles” in your life! Naturally Caffeine Free and 0 Grams of Sugar make this tea the perfect addition to a lunch box, an after school snack or an alternative to water with a meal.
Direct to your door and free shipping for Amazon prime members, shopping healthy for your family has never been more easy! | http://docstea.com/2019/05/18/little-docs-grapple-tea-now-on-amazon/ | 2020-08-03T12:54:58 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docstea.com |
TOPICS×
Alter the Appearance (HBS)
Now.
To make use of the extension, the instance of the comment system in a website to be affected (/content) must set its resourceType to be the custom comment system.
Modify the HBS Scripts
Using CRXDE Lite :
- Comment out the tag which includes the avatar for a comment post (~ line 21):
<!-- <<img class="scf-comment-avatar {{#if topLevel}}withTopLevel{{/if}}" src="{{author.avatarUrl}}"></img> -->
- Comment out the tag which includes the avatar for the next comment entry (~ line 44):
<!-- <img class="scf-composer-avatar" src="{{loggedInUser.avatarUrl}}"></img> -->
- Select Save All
Replicate Custom App
After the application has been modified, it is necessary to re-replicate the custom component.
One way to do so is
- From the main menu
- Select Tools > Operations > Replication
- Select Activate Tree
- Set Start Path : to /apps/custom
- Uncheck Only Modified
- Select Activate button
View Modified Comment on Published Sample Page
Continuing the experience on the publish instance, still signed in as the same user, it is now possible to refresh the page in the publish environment to view the modification to remove the avatar:
| https://docs.adobe.com/content/help/en/experience-manager-64/communities/develop/extend-alter-appearance.html | 2020-08-03T12:44:37 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-81.png',
None], dtype=object) ] | docs.adobe.com |
Tool
Strip Item. On Parent Enabled Changed(EventArgs) Method
Definition
Raises the EnabledChanged event when the Enabled property value of the item's container changes.
protected public: virtual void OnParentEnabledChanged(EventArgs ^ e);
protected internal virtual void OnParentEnabledChanged (EventArgs e);
abstract member OnParentEnabledChanged : EventArgs -> unit override this.OnParentEnabledChanged : EventArgs -> unit
Protected Friend Overridable Sub OnParentEnabledChanged (e As EventArgs)
Parameters
Remarks
Raising an event invokes the event handler through a delegate. For more information, see Handling and Raising Events.
The OnParentEnabledChanged method also allows derived classes to handle the event without attaching a delegate. This is the preferred technique for handling the event in a derived class.
Notes to Inheritors
When overriding OnParentEnabledChanged(EventArgs) in a derived class, be sure to call the base class's OnParentEnabledChanged(EventArgs) method so that registered delegates receive the event. | https://docs.microsoft.com/en-us/dotnet/api/system.windows.forms.toolstripitem.onparentenabledchanged?view=netframework-4.8 | 2020-08-03T13:32:54 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.microsoft.com |
Installation guide¶
MONAI’s core functionality is written in Python 3 (>= 3.6) and only requires Numpy and Pytorch.
The package is currently distributed via Github as the primary source code repository, and the Python package index (PyPI). The pre-built Docker images are made available on DockerHub.
This page provides steps to:
To install optional features such as handling the NIfTI files using Nibabel, or building workflows using Pytorch Ignite, please follow the instructions:
From PyPI¶
To install the current milestone release:
pip install monai
From GitHub¶
(If you have installed the
PyPI release version using
pip install monai, please run
pip uninstall monai before using the commands from this section. Because
pip by
default prefers the milestone release.)
The milestone versions are currently planned and released every few months. As the codebase is under active development, you may want to install MONAI from GitHub for the latest features:
Option 1 (as a part of your system-wide module):¶
pip install git+
this command will download and install the current master branch of MONAI from GitHub.
This documentation website by default shows the information for the latest version.
Option 2 (editable installation):¶
To install an editable version of MONAI, it is recommended to clone the codebase directly:
git clone
This command will create a
MONAI/ folder in your current directory.
You can install it by running:
cd MONAI/ python setup.py develop # to uninstall the package please run: python setup.py develop --uninstall
or simply adding the root directory of the cloned source code (e.g.,
/workspace/Documents/MONAI) to your
$PYTHONPATH
and the codebase is ready to use (without the additional features of MONAI C++/CUDA extensions).
Validating the install¶
You can verify the installation by:
python -c 'import monai; monai.config.print_config()'
If the installation is successful, this command will print out the MONAI version information, and this confirms the core modules of MONAI are ready-to-use.
MONAI version string¶
The MONAI version string shows the current status of your local installation. For example:
MONAI version: 0.1.0+144.g52c763d.dirty
0.1.0indicates that your installation is based on the
0.1.0milestone release.
+144indicates that your installation is 144 git commits ahead of the milestone release.
g52c763dindicates that your installation corresponds to the git commit hash
52c763d.
dirtyindicates that you have modified the codebase locally, and the codebase is inconsistent with
52c763d.
From DockerHub¶
Make sure you have installed the NVIDIA driver and Docker 19.03+ for your Linux distribution. Note that you do not need to install the CUDA toolkit on the host, but the driver needs to be installed. Please find out more information on nvidia-docker.
Assuming that you have the Nvidia driver and Docker 19.03+ installed, running the following command will download and start a container with the latest version of MONAI. The latest master branch of MONAI from GitHub is included in the image.
docker run --gpus all --rm -ti --ipc=host projectmonai/monai:latest
You can also run a milestone release docker image by specifying the image tag, for example:
docker run --gpus all --rm -ti --ipc=host projectmonai/monai:0.1.0
Installing the recommended dependencies¶
By default, the installation steps will only download and install the minimal requirements of MONAI. Optional dependencies can be installed using the extras syntax to support additional features.
For example, to install MONAI with Nibabel and Scikit-image support:
git clone cd MONAI/ pip install -e '.[nibabel,skimage]'
Alternatively, to install all optional dependencies:
git clone cd MONAI/ pip install -e '.[all]'
To install all optional dependencies for MONAI development:
git clone cd MONAI/ pip install -r requirements-dev.txt
Since MONAI v0.2.0, the extras syntax such as
pip install 'monai[nibabel]' is available via PyPI.
The options are
[nibabel, skimage, pillow, tensorboard, gdown, ignite]
which correspond to
nibabel,
scikit-image,
pillow,
tensorboard,
gdown, and
pytorch-ignite respectively.
pip install 'monai[all]'installs all the optional dependencies. | https://docs.monai.io/en/latest/installation.html | 2020-08-03T11:23:34 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.monai.io |
Hashicorp Vault plugin
The HashiCorp Vault plugin is an XL Release plugin to retrieve secrets from a Vault Server for use in your tasks and automations. These secrets include static and dynamic username and password fields from the Secrets Engine of your choice.
Requirements
- XL Release: version 9.6+
Installation
This documentation assumes gradle version 6.0.1. See
gradle/wrappter/gradle-wrapper.properties for the actual version.
Import the jar file into your
%XLRELEASE_INSTALLATION%/plugins/xlr-official folder,
or from the XL Release web UI as a new plugin. Adding the plugin requires a server restart.
Authentication
Vault permits several types of authentication as outlined in the Hashicorp Vault Authentication documentation.
Note: This plugin implements a subset of the authentication options, namely token. Other authentication options can be added as demanded.
Define the server configuration of URL plus token.
Release notes
Release Hashicorp Vault Plugin 9.7.0
Features
- Initial release | https://docs.xebialabs.com/v.9.7/release/how-to/install-hashicorp-vault-plugin/ | 2020-08-03T12:47:45 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.xebialabs.com |
MapR-FS
- Compiling Alluxio with MapR Version
- Configuring Alluxio for MapR-FS
- Configuring Alluxio to use MapR-FS as Under Storage
- Running Alluxio Locally.
Configuring Alluxio to use MapR-FS as Under Storage
There are various ways to configure Alluxio to use MapR-FS as under storage. If you want to
mount MapR-FS to the root of Alluxio, add the following to
conf/alluxio-site.properties:
alluxio.master.mount.table.root.ufs
Start up Alluxio locally to see that everything works.
$ ./bin/alluxio format $ ./bin/alluxio-start.sh local
This should start one Alluxio master and one Alluxio worker locally. You can see the master UI at.
Run a simple example program:
$ ./bin/alluxio runTests
Visit MapR-FS web UI to verify the files and directories created by
Alluxio exist. For this test, you should see files named like:
/default_tests_files/BASIC_CACHE_CACHE_THROUGH
Stop Alluxio by running:
$ ./bin/alluxio-stop.sh local | https://docs.alluxio.io/os/user/2.0/en/ufs/MapR-FS.html | 2020-08-03T12:07:19 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.alluxio.io |
Ray SOP
Summary[edit]
The Ray SOP is used to project one surface onto another. Rays are projected from each point of the input geometry in the direction of its normal. This can be used to drape clothes over surfaces, shrink-wrap one object with another, and other similar effects.
Parameters - Page
Group
group - If there are input groups, specifying a group name in this field will cause this SOP to act only upon the group specified. Accepts patterns, as described in Pattern Matching.
Method
method - ⊞ - Select the method of projection for the Ray SOP.
- Minimum Distance
minimum- Points are placed on the closest point on the collision geometry. This method does not use point normals. Use it to shrinkwrap or project one primitve onto another.
- Project Rays
project- Points are projected along their normals until intersecting with collision geometry.
Transform Points
dotrans - If selected, it will transform the input points as defined below. Leave this off when only interested in updating the source point attributes.
Intersect Farthest Surface
lookfar - If selected, this option allows the user to choose between intersecting with the closest intersecting object or the furthest. See example, below.
Point Intersection Normal
normal - ⊞ - If selected, updates each point in the source geometry with the normal at the collision surface it intersects with. If the point doesn't intersect at the collision surface, a normal of (0,0,0) is used.
- Source Normal
source-
- Collision Normal
collision-
- Reflected Ray
reflect-
Bounces
bounces -
Save Bounce Geometry
bouncegeo -
Point Intersection Distance
putdist - If selected, updates each point intersected with the distance to the collision surface. If the point doesn't intersect at the collision surface a distance of 0 is used. This value is placed in the
$DIST point attribute, accessible from the Point SOP.
Scale
scale - A value of zero will leave the input point unaffected. A value of one will land the point on the intersection surface. Negative values and values > 1 are also valid.
Lift
lift - This value further offsets the surface input by offsetting it in the direction of its normal.
Sample
sample - This value determines the number of rays sent per point. If greater than one, the remaining rays are perturbed randomly, and averaged.
Jitter Scale
jitter - Controls the perturbation of the extra sample rays.
Seed
seed - Allows a different random sequence at higher sampling rates.
Create Point Group
newgrp - If selected, it will create a point group containing all the points which were intersected successfully.
Ray Hit Group
hitgrp - Specifies the name of the above point group.
Example
- Place a Grid SOP and translate it in TZ by 2.5. Turn it's template flag on.
- Append a Point SOP to the Grid and enable the Create Point Normals option.
- Place a NURBS Sphere with a Radius of 2,2,2 and translate it in Z by -2.5 .
- Display point normals by enabling the option in the Viewport > Display Options.
- Append a Ray SOP to the Point SOP and connect the Sphere to the right input. Make it the display SOP.
- Toggle the Intersect Farthest Surface button on and off.
The Ray SOP will move the points of the Grid in the direction of the point normals. The first surface of the Collision Source (right input) will be where those points of the grid will rest. You can make those points rest on the other side of the sphere by enabling the Intersect Farthest Surface option. This means that the points should continue to project to the farthest surface of the collision source.
Operator Inputs
- Input 0 -
- Input 1 -
TouchDesigner Build:
An Operator Family that reads, creates and modifies 3D polygons, curves, NURBS surfaces, spheres, meatballs and other 3D surface data. | https://docs.derivative.ca/Ray_SOP | 2020-08-03T12:18:09 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.derivative.ca |
Masonite Routing is an extremely simple but powerful routing system that at a minimum takes a url and a controller. Masonite will take this route and match it against the requested route and execute the controller on a match.
All routes are created inside
routes/web.py and are contained in a
ROUTES constant. All routes consist of some form of HTTP route classes (like
Get() or
routes/web.pyGet('/url/here', '[email protected]')
Most of your routes will consist of a structure like this. All URI’s should have a preceding
/. Routes that should only be executed on Post requests (like a form submission) will look very similar:
routes/web.pyPost('/url/here', '[email protected]')
Notice the controller here is a string. This is a great way to specify controllers as you do not have to import anything into your
web.py file. All imports will be done in the backend. More on controllers later.
If you wish to not use string controllers and wish to instead import your controller then you can do so by specifying the controller as well as well as only passing a reference to the method. This will look like:
routes/web.py...from app.http.controllers.DashboardController import DashboardControllerROUTES = [Get('/url/here', DashboardController.show)]
It’s important here to recognize that we didn't initialize the controller or the method, we did not actually call the method. This is so Masonite can pass parameters into the constructor and method when it executes the route, typically through auto resolving dependency injection.
There are a few methods you can use to enhance your routes. Masonite typically uses a setters approach to building instead of a parameter approach so to add functionality, we can simply attach more methods.
There are several HTTP verbs you can use for routes:
routes/web.pyfrom masonite.routes import Get, Post, Put, Patch, Delete, Match, Options, Trace, ConnectGet(..)Post(..)Put(..)Patch(..)Delete(..)Match(..)Options(..)Trace(..)Connect(..)
Some routes may be very similar. We may have a group of routes under the same domain, uses the same middleware or even start with the same prefixes. In these instances we should group our routes together so they are more DRY and maintainable.
We can add route groups like so:
from masonite.routes import RouteGroup, GetROUTES = [RouteGroup([Get('/url1', ...),Get('/url2', ...),Get('/url3', ...),]),]
This alone is great to group routes together that are similar but in addition to this we can add specific attributes to the entire group like adding middleware:
ROUTES = [RouteGroup([Get('/url1', ...),Get('/url2', ...),Get('/url3', ...),], middleware=('auth', 'jwt')),]
In the case you are only using one middleware:
ROUTES = [RouteGroup([Get('/url1', ...),Get('/url2', ...),Get('/url3', ...),], middleware=('auth',)),]
The , at the end of 'auth' ensures that it's treated as a tuple and not as an array of strings.
In this instance we are adding these 2 middleware to all of the routes inside the group. We have access to a couple of different methods. Feel free to use some or all of these options:
ROUTES = [RouteGroup([Get('/url1', ...).name('create'),Get('/url2', ...).name('update'),Get('/url3', ...).name('delete'),],middleware=('auth', 'jwt'),domain='subdomain',prefix='/dashboard',namespace='auth.',name='post.',add_methods=['OPTIONS']),]
The
prefix parameter will prefix that URL to all routes in the group as well as the
name parameter. The code above will create routes like
/dashboard/url1 with the name of
post.create. As well as adding the domain and middleware to the routes.
All of the options in a route group are named parameters so if you think adding a groups attribute at the end is weird you can specify them in the beginning and add the
routes parameter:
RouteGroup(middleware=('auth', 'jwt'), name='post.', routes = [Get('/url1', ...).name('create'),Get('/url2', ...).name('update'),Get('/url3', ...).name('delete'),]),
Even more awesome is the ability to nest route groups:
ROUTES = [RouteGroup([Get('/url1', ...).name('create'),Get('/url2', ...).name('update'),Get('/url3', ...).name('delete'),RouteGroup([Get('/url4', ...).name('read'),Get('/url5', ...).name('put'),], prefix='/users', name='user.'),], prefix='/dashboard', name='post.', middleware=('auth', 'jwt')),]
This will go to each layer and generate a route list essentially from the inside out. For a real world example we refactor routes from this:
ROUTES = [Get().domain('www').route('/', '[email protected]').name('welcome'),Post().domain('www').route('/invite', '[email protected]').name('invite'),Get().domain('www').route('/dashboard/apps', '[email protected]').name('app.show').middleware('auth'),Get().domain('www').route('/dashboard/apps/create', '[email protected]').name('app.create').middleware('auth'),Post().domain('www').route('/dashboard/apps/create', '[email protected]').name('app.store'),Post().domain('www').route('/dashboard/apps/delete', '[email protected]').name('app.delete'),Get().domain('www').route('/dashboard/plans', '[email protected]').name('plans').middleware('auth'),Post().domain('www').route('/dashboard/plans/subscribe', '[email protected]').name('subscribe'),Post().domain('www').route('/dashboard/plans/cancel', '[email protected]').name('cancel'),Post().domain('www').route('/dashboard/plans/resume', '[email protected]').name('resume'),Post().domain('*').route('/invite', '[email protected]').name('invite.subdomain'),Get().domain('*').route('/', '[email protected]').name('welcome'),]ROUTES = ROUTES + [Get().domain('www').route('/login', '[email protected]').name('login'),Get().domain('www').route('/logout', '[email protected]'),Post().domain('www').route('/login', '[email protected]'),Get().domain('www').route('/register', '[email protected]'),Post().domain('www').route('/register', '[email protected]'),Get().domain('www').route('/home', '[email protected]').name('home'),]
into this:
ROUTES = [RouteGroup([# Dashboard RoutesRouteGroup([# App RoutesRouteGroup([Get('', '[email protected]').name('show'),Get('/create', '[email protected]').name('create'),Post('/create', '[email protected]').name('store'),Post('/delete', '[email protected]').name('delete'),], prefix='/apps', name='app.'),Get('/plans', '[email protected]').name('plans'),Post('/plans/subscribe', '[email protected]').name('subscribe'),Post('/plans/cancel', '[email protected]').name('cancel'),Post('/plans/resume', '[email protected]').name('resume'),], prefix="/dashboard", middleware=('auth',)),# Login and Register RoutesGet('/login', '[email protected]').name('login'),Get('/logout', '[email protected]'),Post('/login', '[email protected]'),Get('/register', '[email protected]'),Post('/register', '[email protected]'),Get('/home', '[email protected]').name('home'),# Base RoutesGet('/', '[email protected]').name('welcome'),Post('/invite', '[email protected]').name('invite'),], domain='www'),# Subdomain invitation routesPost().domain('*').route('/invite', '[email protected]').name('invite.subdomain'),Get().domain('*').route('/', '[email protected]').name('welcome'),]
This will likely be the most common way to build routes for your application.
You can also use View routes which is just a method on the normal route class:
ROUTES = [Get().view('/template', 'some/template', {'key': 'value'})]
You can use this view method with any route class.
You can also redirect right from the routes list using a
Redirect route class:
from masonite.routes import RedirectROUTES = [Redirect('/old/route', '/new/route', status=302, methods=['GET', 'POST'])]
You do not have to specify the last 2 parameters. The default is a
302 response on
GET methods.
You may have noticed above that we have a
Match route class. This can match several incoming request methods. This is useful for matching a route with both
PUT and
PATCH.
Match(['PUT', 'PATCH']).route(...)
The request methods are not case sensitive. They will be converted to uppercase on the backend. So
['Put', 'Patch'] will work just fine
We can name our routes so we can utilize these names later when or if we choose to redirect to them. We can specify a route name like so:
routes/web.pyGet('/dashboard', '[email protected]').name('dashboard')
It is good convention to name your routes since route URI's can change but the name should always stay the same.
Middleware is a great way to execute classes, tasks or actions either before or after requests. We can specify middleware specific to a route after we have registered it in our
config/middleware.py file but we can go more in detail in the middleware documentation. To add route middleware we can use the middleware method like so:
routes/web.pyGet('/dashboard', '[email protected]').middleware('auth', 'anothermiddleware')
This middleware will execute either before or after the route is executed depending on the middleware.
Read more about how to use and create middleware in the Middleware documentation.
All controllers are located in
app/http/controllers but sometimes you may wish to put your controllers in different modules deeper inside the controllers directory. For example, you may wish to put all your product controllers in
app/http/controllers/products or all of your dashboard controllers in
app/http/controllers/users. In order to access these controllers in your routes we can simply specify the controller using our usual dot notation:
routes/web.pyGet('/dashboard', '[email protected]')
Controllers are defaulted to the
app/http/controllers directory but you may wish to completely change the directory for a certain route. We can use a forward slash in the beginning of the controller namespace:
Get('/dashboard', '/[email protected]')
This can enable us to use controllers in third party packages.
You can also import the class directly and reference the method you want to use:
from app.controllers.SomeController import SomeControllerGet('/dashboard', SomeController.show)
Very often you’ll need to specify parameters in your route in order to retrieve information from your URI. These parameters could be an
id for the use in retrieving a certain model. Specifying route parameters in Masonite is very easy and simply looks like:
routes/web.pyGet('/dashboard/@id', '[email protected]')
That’s it. This will create a dictionary inside the
Request object which can be found inside our controllers.
In order to retrieve our parameters from the request we can use the
param method on the
Request object like so:
app/http/controller/YourController.pydef show(self, request: Request):request.param('id')
Sometimes you want to optionally match routes and route parameters. For example you may want to match
/dashboard/user and
/dashboard/user/settings to the same controller method. In this event you can use optional parameters which are simply replacing the
@ with a
?:
routes/web.pyGet('/dashboard/user/?option', '[email protected]')
You can also set default values if the route is not hit:
routes/web.pyGet('/dashboard/user/?option', '[email protected]').default({'option': 'settings'})
Sometimes you will want to make sure that the route parameter is of a certain type. For example you may want to match a URI like
/dashboard/1 but not
/dashboard/joseph. In order to do this we simply need to pass a type to our parameter. If we do not specify a type then our parameter will default to matching all alphanumeric and underscore characters.
routes/web.pyGet('/dashboard/@id:int', '[email protected]')
This will match all integers but not strings. So for example it will match
/dashboard/10283 and not
/dashboard/joseph
If we want to match all strings but not integers we can pass:
routes/web.pyGet('/dashboard/@id:string', '[email protected]')
This will match
/dashboard/joseph and not
/dashboard/128372. Currently only the integer and string types are supported.
These are called "Route Compilers" because they compile the route differently depending on what is specified. If you specify
:int or
:integer it will compile to a different regex than if you specified
:string.
We can add route compilers to our project by specifying them in a Service Provider.
Make sure you add them in a Service Provider where
wsgi is
False. We can add them on the Route class from the container using the
compile method. A completed example might look something like this:
app/http/providers/RouteCompileProvider.pyfrom masonite.provider import ServiceProviderfrom masonite.routes import Routeclass RouteCompilerProvider(ServiceProvider):wsgi = False...def boot(self, route: Route):route.compile('year', r'([0-9]{4})')
We just need to call the
compile() method on the
Route class and make sure we specify a regex string by preceding an
r to the beginning of the string.
Your regex should be encapsulated in a group. If you are not familiar with regex, this basically just means that your regex pattern should be inside parenthesis like the example above.
You may wish to only render routes if they are on a specific subdomain. For example you may want
example.com/dashboard to route to a different controller than
joseph.example.com/dashboard.
Out of the box this feature will not work and is turned off by default. We will need to add a call on the Request class in order to activate subdomains. We can do this in the boot method of one of our Service Providers that has wsgi=False:
app/providers/UserModelProvider.pywsgi = False...def boot(self, request: Request):request.activate_subdomains()
To use subdomains we can use the
.domain() method on our routes like so:
routes/web.pyGet().domain('joseph').route('/dashboard', '[email protected]')
This route will match to
joseph.example.com/dashboard but not to
example.com/dashboard or
test.example.com/dashboard.
It may be much more common to match any subdomain. For this we can pass in an asterisk instead.
routes/web.pyGet().domain('*').route('/dashboard', '[email protected]')
This will match all subdomains such as
test.example.com/dashboard,
joseph.example.com/dashboard but not
example.com/dashboard.
If a match is found, it will also add a
subdomain parameter to the Request class. We can retrieve the current subdomain like so:
app/http/controllers/YourController.pydef show(self, request: Request):print(request.param('subdomain')) | https://docs.masoniteproject.com/the-basics/routing | 2020-08-03T11:44:26 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.masoniteproject.com |
Desktop duplication
Windows 8 introduces a new Microsoft DirectX Graphics Infrastructure (DXGI)-based API to make it easier for independent software vendors (ISVs) to support desktop collaboration and remote desktop access scenarios.
Such applications are widely used in enterprise and educational scenarios. These applications share a common requirement: access to the contents of a desktop together with the ability to transport the contents to a remote location. The Windows 8 Desktop duplication APIs provide access to the desktop contents.
Currently, no Windows API allows an application to seamlessly implement this scenario. Therefore, applications use mirror drivers, screen scrapping, and other proprietary methods to access the contents of the desktop. However, these methods have the following set of limitations:
- It can be challenging to optimize the performance.
- These solutions might not support newer graphics-rendering APIs because the APIs are released after the product ships.
- Windows does not always provide rich metadata to assist with the optimization.
- Not all solutions are compatible with the desktop composition in Windows Vista and later versions of Windows.
Windows 8 introduces a DXGI-based API called Desktop Duplication API. This API provides access to the contents of the desktop by using bitmaps and associated metadata for optimizations. This API works with the Aero theme enabled, and is not dependent on the graphics API that applications use. If a user can view the application on the local console, then the content can be viewed remotely as well. This means that even full screen DirectX applications can be duplicated. Note that the API provides protection against accessing protected video content.
The API enables an application to request Windows to provide access to the contents of the desktop along monitor boundaries. The application can duplicate one or more of the active displays. When an application requests duplication, the following occurs:
- Windows renders the desktop and provides a copy to the application.
- Each rendered frame is placed in GPU memory.
- Each rendered frame comes with the following metadata:
- Dirty region
- Screen-to-screen moves
- Mouse cursor information
- Application is provided access to frame and metadata.
- Application is responsible for processing each frame:
- Application can choose to optimize based on dirty region.
- Application can choose to use hardware acceleration to process move and mouse data.
- Application can choose to use hardware acceleration for compression before streaming out.
For detailed documentation and samples, see Desktop Duplication API. | https://docs.microsoft.com/en-us/windows-hardware/drivers/display/desktop-duplication-api | 2020-08-03T12:35:28 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.microsoft.com |
A step-by-step guide to help you get started using Docker containers with your Node.js apps.
Prerequisites
This guide assumes that you've already completed the steps to set up your Node.js development environment with WSL 2, including:
- Install Windows 10 Insider Preview build 18932 or later.
- Enable the WSL 2 feature on Windows.
- Install a Linux distribution (Ubuntu 18.04 for our examples). You can check this with:
wsl lsb_release -a.
- Ensure your Ubuntu 18.04 distribution is running in WSL 2 mode. (WSL can run distributions in both v1 or v2 mode.) You can check this by opening PowerShell and entering:
wsl -l -v.
- Using PowerShell, set Ubuntu 18.04 as your default distribution, with:
wsl -s ubuntu 18.04..
Install Docker Desktop WSL 2 Tech Preview
Previously, WSL 1 could not run the Docker daemon directly, but that has changed with WSL 2 and led to significant improvements in speed and performance with Docker Desktop for WSL 2.
To install and run Docker Desktop WSL 2 Tech Preview:
Download the Docker Desktop WSL 2 Tech Preview Installer. (You can reference the installer docs if needed).
Open the Docker installer that you just downloaded. The installation wizard will ask if you want to "Use Windows containers instead of Linux containers" - leave this unchecked as we will be using the Linux subsystem. Docker will be installed in a managed directory in your default WSL 2 distribution and will include the Docker daemon, CLI, and Compose CLI.
If you don't yet have a Docker ID, you will need to set one up by visiting:. Your ID must be all lowercase alphanumeric characters.
Once installed, start Docker Desktop by selecting the shortcut icon on your desktop or finding it in your Windows Start menu. The Docker icon will appear in the hidden icons menu of your taskbar. Right-click the icon to display the Docker commands menu and select "WSL 2 Tech Preview".
Once the tech preview windows opens, select Start to begin running the Docker daemon (background process) in WSL 2. When the WSL 2 docker daemon starts, a docker CLI context is automatically created for it.
To confirm that Docker has been installed and display the version number, open a command line (WSL or PowerShell) and enter:
docker --version
Test that your installation works correctly by running a simple built-in Docker image:
docker run hello-world
Here are a few Docker commands you should
- List your Docker system statistics and resources (CPU & memory) available to you in the WSL 2 context, with:
docker info
- Display where docker is currently running, with:
docker context ls
You can see that there are two contexts that Docker is running in --
default (the classic Docker daemon) and
wsl (our recommendation using the tech preview). (Also, the
ls command is short for
list and can be used interchangeably).
Tip
Try building an example Docker image with this tutorial on Docker Hub. Docker Hub also contains many thousands of open-source images that might match the type of app you want to containerize. You can download images, such as this Gatsby.js framework container or this Nuxt.js framework container, and extend it with your own application code. You can search the registry using Docker from your command line or the Docker Hub website.
Install the Docker extension on VS Code
The Docker extension makes it easy to build, manage and deploy containerized applications from Visual Studio Code.
Open the Extensions window (Ctrl+Shift+X) in VS Code and search for Docker.
Select the Microsoft Docker extension and install. You will need to reload VS Code after installing to enable the extension.
By installing the Docker extension on VS Code, you will now be able to bring up a list of
Dockerfile commands used in the next section with the shortcut:
Ctrl+Space
Learn more about working with Docker in VS Code.
Create a container image with DockerFile
A container image stores your application code, libraries, configuration files, environment variables, and runtime. Using an image ensures that the environment in your container is standardized and contains only what is necessary to build and run your application.
A DockerFile contains the instructions needed to build the new container image. In other words, this file builds the container image that defines your app’s environment so it can be reproduced anywhere.
Let's build a container image using the Next.js app set up in the web frameworks guide.
Open your Next.js app in VS Code (ensuring that the Remote-WSL extension is running as indicated in the bottom-left green tab). Open the WSL terminal integrated in VS Code (View > Terminal) and make sure that the terminal path is pointed to your Next.js project directory (ie.
~/NextProjects/my-next-app$).
Create a new file called
Dockerfilein the root of your Next.js project and add the following:
# Specifies where to get the base image (Node v12 in our case) and creates a new container for it FROM node:12 # Set working directory. Paths will be relative this WORKDIR. WORKDIR /usr/src/app # Install dependencies COPY package*.json ./ RUN npm install # Copy source files from host computer to the container COPY . . # Build the app RUN npm run build # Specify port app runs on EXPOSE 3000 # Run the app CMD [ "npm", "start" ]
To build the docker image, run the following command from the root of your project (but replacing
<your_docker_username>with the username you created on Docker Hub):
docker build -t <your_docker_username>/my-nextjs-app .
Note
Docker must be running with the WSL Tech Preview for this command to work. For a reminder of how to start Docker see step #4 of the install section. The
-t flag specifies the name of the image to be created, "my-nextjs-app:v1" in our case. We recommend that you always use a version # on your tag names when creating an image. Be sure to include the period at the end of the command, which specifies the current working directory should be used to find and copy the build files for your Next.js app.
To run this new docker image of your Next.js app in a container, enter the command:
docker run -d -p 3333:3000 <your_docker_username>/my-nextjs-app:v1
The
-pflag binds port '3000' (the port that the app is running on inside the container) to local port '3333' on your machine, so you can now point your web browser to and see your server-side rendered Next.js application running as a Docker container image.
Tip
We built our container image using
FROM node:12 which references the Node.js version 12 default image stored on Docker Hub. This default Node.js image is based on a Debian/Ubuntu Linux system, there are many different Node.js images to choose from, however, and you may want to consider using something more lightweight or tailored to your needs. Learn more in the Node.js Image Registry on Docker Hub.
Upload your container image to a repository
A container repository stores your container image in the cloud. Often a container repository will actually contain a collection of related images, such as different versions, that are all available for easy setup and rapid deployment. Typically, you can access images on container repositories via secure HTTPs endpoints, allowing you to pull, push or manage images through any system, hardware or VM instance.
A container registry, on the other hand, stores a collection of repositories as well as indexes, access control rules, and API paths. These can be hosted publicly or privately. Docker Hub is an open-source Docker registry and the default used when running
docker push and
docker pull commands. It is free for public repos and requires a fee for private repos.
To upload your new container image to a repo hosted on Docker Hub:
Log in to Docker Hub. You will be prompted to enter the username and password you used to create your Docker Hub account during the installation step. To log in to Docker in your terminal, enter:
docker login
To get a list of the docker container images that you've created on your machine, enter:
docker image ls --all
Push your container image up to Docker Hub, creating a new repo for it there, using this command:
docker push <your_docker_username>/my-nextjs-app:v1
You can now view your repository on Docker Hub, enter a description, and link your GitHub account (if you want to), by visiting:
You can also view a list of your active Docker containers with:
docker container ls(or
docker ps)
You should see that your "my-nextjs-app:v1" container is active on port 3333 ->3000/tcp. You can also see your "CONTAINER ID" listed here. To stop running your container, enter the command:
docker stop <container ID>
Typically, once a container is stopped, it should also be removed. Removing a container cleans up any resources it leaves behind. Once you remove a container, any changes you made within its image filesystem are permanently lost. You will need to build a new image to represent changes. TO remove your container, use the command:
docker rm <container ID>
Learn more about building a containerized web application with Docker.
Deploy to Azure Container Registry
Azure Container Registry (ACR) enables you to store, manage, and keep your container images safe in private, authenticated, repositories. Compatible with standard Docker commands, ACR can handle critical tasks for you like container health monitoring and maintenance, pairing with Kubernetes to create scalable orchestration systems. Build on demand, or fully automate builds with triggers such as source code commits and base image updates. ACR also leverages the vast Azure cloud network to manage network latency, global deployments, and create a seamless native experience for anyone using Azure App Service (for web hosting, mobile back ends, REST APIs), or other Azure cloud services.
Important
You need your own Azure subscription to deploy a container to Azure and you may receive a charge. If you don't already have an Azure subscription, create a free account before you begin.
For help creating an Azure Container Registry and deploy your app container image, see the exercise: Deploy a Docker image to an Azure Container Instance.
Additional resources
- Node.js on Azure
- Quickstart: Create a Node.js web app in Azure
- Online course: Administer containers in Azure
- Using VS Code: Working with Docker
- Docker Docs: Docker Desktop WSL 2 Tech Preview | https://docs.microsoft.com/en-us/windows/nodejs/containers?WT.mc_id=windows-c9-niner | 2020-08-03T13:44:27 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['../images/docker-context.png',
'Docker display context in Powershell'], dtype=object)] | docs.microsoft.com |
This document describes and tells how to deploy, configure, and manage the OpenCloud Diameter Resource Adaptor package.
About the Diameter Resource Adaptors package
The.
Topics
This document includes the following topics:
Other documentation for the Diameter Resource Adaptors can be found on the Diameter Resource Adaptors product page. | https://docs.rhino.metaswitch.com/ocdoc/books/diameter/3.1.3/diameter-resource-adaptors-guide/index.html | 2020-08-03T11:27:05 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.rhino.metaswitch.com |
System requirements
This topic describes Deploy server and client requirements as well as minimal requirements for workers, database, and middleware servers.
Server requirements
To install the Deploy server, your system must meet the following requirements:
- Operating system: Any commercially supported version of 32-bit or 64-bit Microsoft Windows (under Mainstream Support), or a commercially supported Linux/Unix-based operating system.
Oracle, IBM, Apple Java Development Kit (JDK), and OpenJDK: JDK 1.8.0_161 or later, or JDK 11.
Important: Deploy is not supported on non-LTS Java Development Kits (JDKs). See the Java SE support roadmap for details on which Java versions are LTS and non-LTS.
- CPU: Modern multi-core CPU with x64 architecture.
- RAM: At least 2 GB of RAM must be available for Deploy. See Memory requirements for more details.
- Hard disk space: See the Hard disk space requirements for more details.
Depending on the environment, the following may also be required:
- Database: The default Deploy setting is to use the internal database that stores the data on the file system. This is intended for temporary use and is not recommended for production use. For production use, it is strongly recommended to use an industrial-grade external database server such as PostgreSQL, MySQL, Oracle, Microsoft SQL Server or DB2. For more information, see Configure the Deploy SQL repository.
- LDAP: To enable group-based security, an LDAP x.509 compliant registry is needed. For more information, see Connect Deploy to LDAP or Active Directory.
Networking requirements
Before installing Deploy, ensure that the network connection to the Deploy host name is functional. You should be able to successfully execute
ping xl_deploy_hostname.
By default, the Deploy server uses port
4516. If, during installation, you choose to enable secure communication (SSL) between the server and the Deploy GUI, the server uses port
4517.
External worker setup requires
4516 or
4517 ports to be open to clients and worker machines and, additionally, port
8180 for master-worker communication on all masters and workers.
To enable secure communication and/or to change the port number during installation, choose the manual setup option when installing Deploy.
Memory requirements
Depending on your configuration, memory requirements are:
- 3 GB for a single machine internal worker setup
- 2 GB for master instance and 2 GB per worker on one machine with local worker setup
- 2 GB for master machine and 2 GB for each worker machine in external worker setup
Hard disk space requirements
The default installation of XLD requires approximately 250 MB of disk space (excluding Java and other tools), but it is recommended to have at least 100 GB of disk space on each master or worker node. Faster hard drives are preferred.
The main hard disk space usage comes from the artifacts and log files in the Deploy system. The required file system size of the artifacts will vary from installation to installation, but depends mainly on the:
- Size and storage mechanism used for artifacts
- Number of packages in the system
- Number of deployments performed (specifically, the amount of logging information stored)
Deploy supports three ways of storing artifacts:
- In an external file storage such as Maven or some HTTP(S) location, as described here. This is the preferred way.
- Directly in the database
- On a file system that is shared between all of the masters and workers. In this case, there must be enough space on the shared file system to hold all artifacts for all versions of all applications.
Deploy always requires that the disk space for the server be persistent. This is important for several reasons:
- Configuration files such as
deployit.conf,
xl-deploy.conf, and
deployit-defaults.propertiesare updated by the running system
- Log files are also updated by the running system (unless configured otherwise)
Estimating required disk space
To estimate the total required disk space:
- Install and configure Deploy for your environment as described in this document. Make sure you correctly set up the database- or file-based repository.
- Estimate the number of packages to be imported, either the total number or the number per unit of time (
NumPackages).
- Estimate the number of deployments to be performed, either the total number or the number per unit of time (
NumDeployments).
- Record the amount of disk space used by Deploy (
InitialSize).
- Import a small number of packages using the GUI or CLI.
- Record the amount of disk space used by Deploy (
SizeAfterImport).
- Perform a small number of deployments.
- Record the amount of disk space used by Deploy (
SizeAfterDeployments).
The needed amount of disk space in total is equal to:
Space Needed = ((SizeAfterImport - InitialSize) * NumPackages) + ((SizeAfterDeployments - SizeAfterImport) * NumDeployments)
If
NumPackages and
NumDeployments are expressed per time unit (the number of packages to be imported per month), the end result represents the space needed per time unit as well.
Minimal reference configuration
For both internal and external workers, having an external database instance is preferred.
Internal worker configuration
- Modern multi-core CPU with x64 architecture
- RAM: 3 GB
- Hard drive: 100 GB
- Faster network connection between database and master/worker instances is preferred
External worker configuration
Master:
- Modern multi-core CPU with x64 architecture
- RAM: 2 GB
- Hard drive: 100 GB
- Faster network connection between database and master/worker instances is preferred
Worker:
- Modern multi-core CPU with x64 architecture
- RAM: 2 GB
- Hard drive: 100 GB
- Faster network connection between database and master/worker instances is preferred
Database server configuration
Minimal requirements:
- Modern multi-core CPU with x64 architecture
- RAM: 2 GB
- Hard drive: 100 GB
- Faster network connection between database and master/worker instances is preferred
For additional information check database vendor requirements pages:
- MySQL
- PostgreSQL
- Oracle
- MSSql
- DB2
Client requirements
GUI clients
To use the Deploy GUI, your system must meet the following requirements:
Web browser:
- Firefox: the latest 2 versions
- Chrome: the latest 2 versions
- Internet Explorer 11 or later
Note: Internet Explorer Compatibility View is not supported.
CLI clients
To use the Deploy CLI, your system must meet the following requirements:
- Operating system: Microsoft Windows or Unix-based operating system running Java.
- Java Runtime Environment: The same Java Development Kit (JDK) version as your version of Deploy.
Middleware server requirements
Unix middleware server requirements
Unix-based middleware servers that Deploy interacts with must meet the following requirements:
- Credentials: Deploy must be able to log in to the target systems using a user name and password combination to perform at least the following Unix commands:
cp,
ls,
mv,
rm,
mkdir, and
rmdir. If the login user cannot perform these actions, Deploy can also use a
sudouser that can execute these commands.
SSH access: The target systems must be accessible by SSH from the Deploy server. They should run an SSH2 server. It is also possible to handle key-based authorization. Notes:
- It is possible that the SSH daemon on AIX hangs with certain types of SSH traffic.
- For security, the SSH account that is used to access a host should have limited rights.
- A variety of Linux distributions have made SSH require a TTY by default. This setting is incompatible with Deploy and is controlled by the
Defaults requirettysetting in the
sudoersfile.
Windows middleware server requirements
Microsoft Windows-based middleware servers that Deploy interacts with must meet the following requirements:
- File system access: The target file system must be accessible via CIFS from the Deploy server.
- Host access: The target host must be accessible from the Deploy server via WinRM or Windows Telnet server running in stream mode.
- Directory shares: The account used to access a target system must have access to the host’s administrative shares such as
C$.
Ports:
- For CIFS connectivity, port 445 on the target system must be accessible from the Deploy server.
- For Telnet connectivity, port 23 must be accessible from the Deploy server.
- For WinRM connectivity, port 5985 (HTTP) or port 5986 (HTTPS) must be accessible from the Deploy server.
Extending middleware support
You can connect Deploy to middleware servers that do not support SSH, Telnet, or WinRM. This requires you to use the Overthere remote execution framework to create a custom access method that connects to the server. | https://docs.xebialabs.com/v.9.7/deploy/concept/requirements-for-installing-xl-deploy/ | 2020-08-03T12:34:20 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.xebialabs.com |
POST /column/batch
Create a batch of ColumnModel that can be used as columns of a TableEntity. Unlike other objects in Synapse ColumnModels are immutable and reusable and do not have an "owner" or "creator". This method is idempotent, so if the same ColumnModel is passed multiple time a new ColumnModel will not be created. Instead the existing ColumnModel will be returned. This also means if two users create identical ColumnModels for their tables they will both receive the same ColumnModel. This call will either create all column models or create none
Resource URL | https://rest-docs.synapse.org/rest/POST/column/batch.html | 2020-08-03T13:03:16 | CC-MAIN-2020-34 | 1596439735810.18 | [] | rest-docs.synapse.org |
Xamarin.Forms RadioButton
The Xamarin.Forms
RadioButton is a type of button that allows users to select one option from a set. Each option is represented by one radio button, and you can only select one radio button in a group. The
RadioButton class inherits from the
Button class.
The following screenshots show
RadioButton objects in their cleared and selected states, on iOS and Android:
Important
RadioButton is currently experimental and can only be used by setting the
RadioButton_Experimental flag. For more information, see Experimental Flags.
The
RadioButton control defines the following properties:
IsChecked, of type
bool, which defines whether the
RadioButtonis selected. This property uses a
TwoWaybinding, and has a default value of
false.
GroupName, of type
string, which defines the name that specifies which
RadioButtoncontrols are mutually exclusive. This property has a default value of
null.
These properties are backed by
BindableProperty objects, which means that they can be targets of data bindings, and styled.
The
RadioButton control defines a
CheckedChanged event that's fired when the
IsChecked property changes, either through user or programmatic manipulation. The
CheckedChangedEventArgs object that accompanies the
CheckedChanged event has a single property named
Value, of type
bool. When the event is fired, the value of the
Value property is set to the new value of the
IsChecked property.
In addition, the
RadioButton class inherits the following typically-used properties from the
Button class:
Command, of type
ICommand, which is executed when the
RadioButtonis selected.
CommandParameter, of type
object, which is the parameter that's passed to the
Command.
FontAttributes, of type
FontAttributes, which determines text style.
FontFamily, of type
string, which defines the font family.
FontSize, of type
double, which defines the font size.
Text, of type
string, which defines the text to be displayed.
TextColor, of type
Color, which defines the color of the displayed text.
For more information about the
Button control, see Xamarin.Forms Button.
Create RadioButtons
The following example shows how to instantiate
RadioButton objects in XAML:
<StackLayout> <Label Text="What's your favorite animal?" /> <RadioButton Text="Cat" /> <RadioButton Text="Dog" /> <RadioButton Text="Elephant" /> <RadioButton Text="Monkey" IsChecked="true" /> </StackLayout>
In this example,
RadioButton objects are implicitly grouped inside the same parent container. This XAML results in the appearance shown in the following screenshots:
Alternatively,
RadioButton objects can be created in code:
StackLayout stackLayout = new StackLayout { Children = { new Label { Text = "What's your favorite animal?" }, new RadioButton { Text = "Cat" }, new RadioButton { Text = "Dog" }, new RadioButton { Text = "Elephant" }, new RadioButton { Text = "Monkey", IsChecked = true } } };
Group RadioButtons
Radio buttons work in groups, and there are two approaches to grouping radio buttons:
- By putting them inside the same parent container. This is known as implicit grouping.
- By setting the
GroupNameproperty on each radio button to the same value. This is known as explicit grouping.
The following XAML example shows explicitly grouping
RadioButton objects by setting their
GroupName properties:
<Label Text="What's your favorite color?" /> <RadioButton Text="Red" TextColor="Red" GroupName="colors" /> <RadioButton Text="Green" TextColor="Green" GroupName="colors" /> <RadioButton Text="Blue" TextColor="Blue" GroupName="colors" /> <RadioButton Text="Other" GroupName="colors" />
In this example, each
RadioButton is mutually exclusive because it shares the same
GroupName value. This XAML results in the appearance shown in the following screenshots:
Respond to a RadioButton state change
A radio button has two states: selected or cleared. When a radio button is selected, its
IsChecked property is
true. When a radio button is cleared, its
IsChecked property is
false. A radio button can be cleared by clicking another radio button in the same group, but it cannot be cleared by clicking it again. However, you can clear a radio button programmatically by setting its
IsChecked property to
false.
When the
IsChecked property changes, either through user or programmatic manipulation, the
CheckedChanged event fires. An event handler for this event can be registered to respond to the change:
<RadioButton Text="Red" TextColor="Red" GroupName="colors" CheckedChanged="OnColorsRadioButtonCheckedChanged" />
The code-behind contains the handler for the
CheckedChanged event:
void OnColorsRadioButtonCheckedChanged(object sender, CheckedChangedEventArgs e) { // Perform required operation }
The
sender argument is the
RadioButton responsible for this event. You can use this to access the
RadioButton object, or to distinguish between multiple
RadioButton objects sharing the same
CheckedChanged event handler.
Alternatively, an event handler for the
CheckedChanged event can be registered in code:
RadioButton radioButton = new RadioButton { ... }; radioButton.CheckedChanged += (sender, e) => { // Perform required operation };
Note
An alternative approach for responding to a
RadioButton state change is to define an
ICommand and assign it to the
RadioButton.Command property. For more information, see Button: Using the command interface.
RadioButton visual states
RadioButton has an
IsChecked
VisualState that can be used to initiate a visual change when a
RadioButton is selected.
The following XAML example shows how to define a visual state for the
IsChecked state:
<ContentPage ...> <ContentPage.Resources> <Style TargetType="RadioButton"> <Setter Property="VisualStateManager.VisualStateGroups"> <VisualStateGroupList> <VisualStateGroup x: <VisualState x: <VisualState.Setters> <Setter Property="TextColor" Value="Red" /> <Setter Property="Opacity" Value="0.5" /> </VisualState.Setters> </VisualState> <VisualState x: <VisualState.Setters> <Setter Property="TextColor" Value="Green" /> <Setter Property="Opacity" Value="1" /> </VisualState.Setters> </VisualState> </VisualStateGroup> </VisualStateGroupList> </Setter> </Style> </ContentPage.Resources> <StackLayout> <Label Text="What's your favorite mode of transport?" /> <RadioButton Text="Car" CheckedChanged="OnRadioButtonCheckedChanged" /> <RadioButton Text="Bike" CheckedChanged="OnRadioButtonCheckedChanged" /> <RadioButton Text="Train" CheckedChanged="OnRadioButtonCheckedChanged" /> <RadioButton Text="Walking" CheckedChanged="OnRadioButtonCheckedChanged" /> </StackLayout> </ContentPage>
In this example, the implicit
Style targets
RadioButton objects. The
IsChecked
VisualState specifies that when a
RadioButton is selected, its
TextColor property will be set to green with an
Opacity value of 1. The
Normal
VisualState specifies that when a
RadioButton is in a cleared state, its
TextColor property will be set to red with an
Opacity value of 0.5. Therefore, the overall effect is that when a
RadioButton is cleared it's red and partially transparent, and is green without transparency when it's selected:
For more information about visual states, see Xamarin.Forms Visual State Manager.
Disable a RadioButton
Sometimes an application enters a state where a
RadioButton being checked is not a valid operation. In such cases, the
RadioButton can be disabled by setting its
IsEnabled property to
false. | https://docs.microsoft.com/en-us/xamarin/xamarin-forms/user-interface/radiobutton | 2020-08-03T12:54:58 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['radiobutton-images/radiobutton-states.png',
'RadioButtons on iOS and Android Screenshot of RadioButtons in selected and cleared states, on iOS and Android'],
dtype=object)
array(['radiobutton-images/radiobuttons.png',
'Implicitly grouped RadioButtons on iOS and Android Screenshot of implicitly grouped RadioButtons, on iOS and Android'],
dtype=object)
array(['radiobutton-images/grouped-radiobuttons.png',
'Explicitly grouped RadioButtons on iOS and Android Screenshot of explicitly grouped RadioButtons, on iOS and Android'],
dtype=object)
array(['radiobutton-images/ischecked-visualstate.png',
'RadioButton visual states on iOS and Android Screenshot of RadioButton appearance set by visual state, on iOS and Android'],
dtype=object) ] | docs.microsoft.com |
- Your Account
- Reference
- Getting Around
- Concepts
- How To
- API
- Security
You can upload existing PDF or image files to quickly create web forms. The system will let you convert PDF files, Postscript files, and image files including PNG, JPG and GIF. Please note that this feature is essentially a "beta" feature, meaning that it may not work for all PDF files. Please keep this in mind when using it, and feel free to contact us if you find any issues or problems.
If your PDF file already has fields in it, the conversion process will re-create them for you on the web form. If the PDF does not contain fields, or you are using an image, the upload will create the base template with all the pages, and a background image that matches the PDF or image. You will then need to manually add the fields on top. This is easy when you use the "Clone" keyboard shortcut, as many time you only need to create one field, and then clone it and move a little to get it in place.
Follow these steps to upload and convert an existing PDF file to a Doculicious web form: | http://docs.doculicious.com/concepts/convert-pdf-to-web-form | 2013-05-18T13:51:05 | CC-MAIN-2013-20 | 1368696382398 | [] | docs.doculicious.com |
Pizza Bugs and Fun March 16, 2013
Revision as of 14:31, 1 March 2013
We:
- [Will add in the coming weeks]
North America
Joomla!Day Boston Microsoft New England Research & Development Center One Memorial Drive Suite 100 Cambridge, MA 02142 10:00 AM-12:15 PM EST and 1:30 PM-4:00 PM EST
South America
Asia/Pacific March 16, 2013/Contributors List | http://docs.joomla.org/index.php?title=Pizza_Bugs_and_Fun_March_16,_2013&diff=prev&oldid=82227 | 2013-05-18T13:54:42 | CC-MAIN-2013-20 | 1368696382398 | [] | docs.joomla.org |
java.lang.Object
org.jboss.aop.advice.AspectDefinitionorg.jboss.aop.advice.AspectDefinition
public class AspectDefinition
Definition of an aspect or interceptor.
This class is used by JBoss AOP to manage all configured informations regarding aspects and interceptors, and can be used to define new aspects and interceptors dynamically.
AspectManager.addAspectDefinition(AspectDefinition)
protected String name
domain.
protected Scope scope
protected AspectFactory factory
protected boolean deployed
domain.
public Map<Advisor,Boolean> advisors
public AspectDefinition(String name, Scope scope, AspectFactory factory)
name- the name of the aspect. This name is used by the domain to identify the aspect, so it must be unique in the AOP
domain.
scope- the aspect scope, indicates how many aspects instances must be created during execution. Defaults to PER_VM if
null.
factory- factory responsible for creating the aspect instances
AspectFactory,
GenericAspectFactory
public AspectDefinition()
public void undeploy()
public boolean isDeployed()
trueif this aspect definition is deployed in its
domain.An aspect definition is considered to be deployed if it is active in the domain, and can intercept joinpoints. It is not deployed when it is inactive and won't intercept any joinpoints.
trueif this aspect definition is active in its domain
public void setName(String name)
domain.
name- the new name of this aspect definition.
public void setScope(Scope scope)
scope- the new scope of this aspect definition.
public void setFactory(AspectFactory factory)
factory- the new factory of this aspect definition
public AspectFactory getFactory()
public String getName()
domain.
domain
public void registerAdvisor(Advisor advisor)
advisoras being a client of this definition. This means that
advisoruses an instance of the defined aspect for interception of one or more joinpoints.
For internal use only
advisor- an advisor responsible for managing joinpoints and their interception execution
public void unregisterAdvisor(Advisor advisor)
advisoras being a client of this definition. This means that
advisorno more uses an instance of the defined aspect for interception.
For internal use only
advisor- responsible for managing a set of joinpoints and their interception execution
public Scope getScope()
public int hashCode()
hashCodein class
Object
public boolean equals(Object obj)
objfor equality. Returns
trueif and only if
objis an aspect definition with the same
nameas this one.
equalsin class
Object
obj- the obj for comparison.
trueif
objis an aspect definition with the same
nameas this one. | http://docs.jboss.org/jbossaop/docs/2.0.0.GA/docs/aspect-framework/apidocs/org/jboss/aop/advice/AspectDefinition.html | 2013-05-18T13:50:10 | CC-MAIN-2013-20 | 1368696382398 | [] | docs.jboss.org |
Python.
All Python releases are Open Source (see for the Open Source Definition). Historically, most, but not all, Python releases have also been GPL-compatible; the table below summarizes the various releases.
Note: GPL-compatible doesn't mean that we're distributing Python under the GPL. All Python licenses, unlike the GPL, let you distribute a modified version without making your changes open source. The GPL-compatible licenses make it possible to combine Python with other software that is released under the GPL; the others don't.
Thanks to the many outside volunteers who have worked under Guido's direction to make these releases possible.
See About this document... for information on suggesting changes.See About this document... for information on suggesting changes. | http://docs.python.org/release/2.5.2/ext/node51.html | 2013-05-18T13:10:49 | CC-MAIN-2013-20 | 1368696382398 | [] | docs.python.org |
(May 12, 2012)
(April 16, 2012).
(August 27, 2011)
(July 31, 2011)
(October 11, 2010) | http://python-wordpress-xmlrpc.readthedocs.org/en/latest/dev/changelog.html | 2013-05-18T13:31:59 | CC-MAIN-2013-20 | 1368696382398 | [] | python-wordpress-xmlrpc.readthedocs.org |