content: string (0 to 557k characters)
url: string (16 to 1.78k characters)
timestamp: timestamp[ms]
dump: string (9 to 15 characters)
segment: string (13 to 17 characters)
image_urls: string (2 to 55.5k characters)
netloc: string (7 to 77 characters)
Setting HTTP request/response header limits (Edge for Private Cloud v. 4.16.05) The syntax is slightly different for the Message Processor because its properties are commented out by default. If that properties file does not exist, create it. You must restart the Message Processor after changing these properties: > /<inst_root>/apigee/apigee-service/bin/apigee-service edge-message-processor restart
http://ja.docs.apigee.com/private-cloud/v4.16.05/setting-http-requestresponse-header-limits
2017-09-19T20:30:15
CC-MAIN-2017-39
1505818686034.31
[]
ja.docs.apigee.com
Tabbed Forms
SilverStripe's FormScaffolder can automatically generate Form instances for certain database models. In the CMS and other scaffolded interfaces, it will output TabSet and Tab objects and use jQuery Tabs to split parts of the data model. All interfaces within the CMS, such as ModelAdmin and LeftAndMain, use tabbed interfaces by default. When dealing with tabbed forms, modifying the fields in the form works slightly differently. Each Tab is given a name, and normally they all exist under the Root TabSet. TabSet instances can contain child Tab and further TabSet instances; however, the CMS UI will only display up to two levels of tabs in the interface. If you want to group data further than that, try ToggleField.
Adding a field to a tab:
$fields->addFieldToTab('Root.Main', new TextField(..));
Removing a field from a tab:
$fields->removeFieldFromTab('Root.Main', 'Content');
Adding a field to a new tab:
$fields->addFieldToTab('Root.MyNewTab', new TextField(..));
Moving a field from one tab to another:
$content = $fields->dataFieldByName('Content');
$fields->removeFieldFromTab('Root.Main', 'Content');
$fields->addFieldToTab('Root.MyContent', $content);
Adding several fields to a tab at once:
$fields->addFieldsToTab('Root.Content', array(
    TextField::create('Name'),
    TextField::create('Email')
));
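In a typical SilverStripe 3 project these calls live inside a getCMSFields() override on a DataObject subclass. The following is a minimal sketch of that pattern; the class and field names are invented for illustration and do not come from the SilverStripe documentation:

<?php
class StaffMember extends DataObject {

    private static $db = array(
        'Name' => 'Varchar',
        'Biography' => 'HTMLText'
    );

    public function getCMSFields() {
        $fields = parent::getCMSFields();
        // The scaffolder puts both fields on Root.Main; move Biography to its own tab.
        $fields->removeFieldFromTab('Root.Main', 'Biography');
        $fields->addFieldToTab('Root.Bio', new HTMLEditorField('Biography'));
        return $fields;
    }
}

Because the tab is addressed by its path ('Root.Bio'), addFieldToTab() creates the Tab on demand if it does not exist yet.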
https://docs.silverstripe.org/en/3/developer_guides/forms/tabbed_forms/
2020-02-16T21:54:31
CC-MAIN-2020-10
1581875141430.58
[]
docs.silverstripe.org
The. For more information on Alfresco S3 connector, see Installing and configuring Alfresco S3 connector [9]. The diagram shows the architecture of CCS. The major classes and interfaces that form the Caching Content Store are: The CachingContentStore class is highly configurable and many of its components could be exchanged for other implementations. For example, the lookup table could easily be replaced with a different implementation of SimpleCache than that supplied. The cached content cleaner (CachedContentCleaner) periodically traverses the directory structure containing the cached content files and deletes the content files that are not in use by the cache. Files are considered not in use by the cache if they have no entry in the lookup table managed by ContentCacheImpl. The content cache cleaner is not a part of the architecture but is a helper object for ContentCacheImpl and allows it to operate more efficiently. The following properties are used in the sample context file, caching-content-store-context.xml.sample and can be set in the alfresco-global.properties file. Their default values are provided in the repository.properties file. <bean id="fileContentStore" class="org.alfresco.repo.content.caching.CachingContentStore" init- <property name="backingStore" ref="backingStore"/> <property name="cache" ref="contentCache"/> <property name="cacheOnInbound" value="${system.content.caching.cacheOnInbound}"/> <property name="quota" ref="standardQuotaManager"/> </bean> In this case, the fileContentStore bean is overridden. The ContentService bean uses fileContentStore bean, so CCS is used automatically. You can also specify a different name and an overridden contentService bean. The main collaborators of backingStore, cache and quota refer to the beans for Backing Store, Content Cache and Quota Manager as shown in the diagram in the CachingContentStore overview [10] topic. Each CachingContentStore class should have its own dedicated instances of these collaborators and they should not be shared across other CachingContentStore beans, should you have any defined. <bean id="tenantRoutingContentStore" class="org.alfresco.module.org_alfresco_module_cloud.repo.content.s3store.TenantRoutingS3ContentStore" parent="baseTenantRoutingContentStore"> <property name="defaultRootDir" value="${dir.contentstore}" /> <property name="s3AccessKey" value="${s3.accessKey}" /> <property name="s3SecretKey" value="${s3.secretKey}" /> <property name="s3BucketName" value="${s3.bucketName}" /> <property name="s3BucketLocation" value="${s3.bucketLocation}" /> <property name="s3FlatRoot" value="${s3.flatRoot}" /> <property name="globalProperties"> <ref bean="global-properties" /> </property> </bean> <bean id="contentCache" class="org.alfresco.repo.content.caching.ContentCacheImpl"> <property name="memoryStore" ref="cachingContentStoreCache"/> <property name="cacheRoot" value="${dir.cachedcontent}"/> </bean> The ContentCacheImpl uses a fast lookup table for determining whether an item is currently cached by the CCS, for controlling the maximum number of items in the cache and their Time To Live (TTL). The lookup table is specified here by the memoryStore property. The ContentCacheImpl also uses a directory on the local filesystem for storing binary content data (the actual content being cached). This directory is specified by the cacheRoot property. 
The following code illustrates the bean referencing the specified memoryStore reference: <bean id="cachingContentStoreCache" factory- <constructor-arg </bean> <bean id="standardQuotaManager" class="org.alfresco.repo.content.caching.quota.StandardQuotaStrategy" init- <property name="maxUsageMB" value="${system.content.caching.maxUsageMB}"/> <property name="maxFileSizeMB" value="${system.content.caching.maxFileSizeMB}"/> <property name="cache" ref="contentCache"/> <property name="cleaner" ref="cachedContentCleaner"/> </bean> bean <property name="jobClass"> <value>org.alfresco.repo.content.caching.cleanup.CachedContentCleanupJob</value> </property> <property name="jobDataAsMap"> <map> <entry key="cachedContentCleaner"> <ref bean="cachedContentCleaner" /> </entry> </map> </property> </bean> <bean id="cachedContentCleaner" class="org.alfresco.repo.content.caching.cleanup.CachedContentCleaner" init- <property name="minFileAgeMillis" value="${system.content.caching.minFileAgeMillis}"/> <property name="maxDeleteWatchCount" value="${system.content.caching.maxDeleteWatchCount}"/> <property name="cache" ref="contentCache"/> <property name="usageTracker" ref="standardQuotaManager"/> </bean> <bean id="cachingContentStoreCleanerTrigger" class="org.alfresco.util.CronTriggerBean"> <property name="jobDetail"> <ref bean="cachingContentStoreCleanerJobDetail" /> </property> <property name="scheduler"> <ref bean="schedulerFactory" /> </property> <property name="cronExpression"> <value>${system.content.caching.contentCleanup.cronExpression}</value> </property> </bean> Note that both the cleaner and the quota manager limit the usage of disk space but they do not perform the same function. In addition to removing the orphaned content, the cleaner's job is to remove files that are out of use from the cache due to parameters, such as TTL, which sets the maximum time an item should be used by the CCS. The quota manager exists to set specific requirements in terms of allowed disk space. A number of property placeholders are used in the specified definitions. You can replace them directly in your configuration with the required values, or you can use the placeholders as they are and set the values in the repository.properties file. An advantage of using the property placeholders is that the sample file can be used with very few changes and the appropriate properties can be modified to get the CCS running with little effort. The Aggregating content store contains a primary store and a set of secondary stores. The order in which the stores appear in the list of participating stores is important. The first store in the list is known as the primary store. Content can be read from any of the stores, as if it were a single store. When the replicator goes to fetch content, the stores are searched from first to last. The stores should therefore, be arranged in order of speed. For example, if you have a fast (and expensive) local disk, you can use this as your primary store for best performance. The old infrequently used files may be stored on lower cost, slower storage. When replication is disabled, content is written to the primary store only. The other stores are used to retrieve content and the primary store is not updated with the content. Example configuration for tiered storage The following configuration defines an additional tiered storage solution. The default content store is not changed. An additional set of secondary stores is defined (tier1, tier2 and tier3). 
As content ages (old infrequently used files), it can be moved to lower tiers. If the tiered storage is slow, a Caching content store can be placed in front. The aggregating-store-context.xml file enables Aggregating content store. The content of this file is shown below. Place the aggregating-store-context.xml file in your <TOMCAT_HOME>/shared/classes/alfresco/extension folder. <?xml version='1.0' encoding='UTF-8'?> <!DOCTYPE beans PUBLIC '-//SPRING//DTD BEAN//EN' ''> <!-- This file enables an aggregating content store. It should be placed in shared/classes/alfresco/extension --> <beans> <bean id="defaultContentStore" class="org.alfresco.repo.content.filestore.FileContentStore"> <constructor-arg> <value>${dir.contentstore}</value> </constructor-arg> <!-- Uncomment the property below to add content filesize limit. <property name="contentLimitProvider" ref="defaultContentLimitProvider"/> --> </bean> <bean id="tier1ContentStore" class="org.alfresco.repo.content.filestore.FileContentStore"> <constructor-arg> <value>${dir.contentstore1}</value> </constructor-arg> <!-- Uncomment the property below to add content filesize limit. <property name="contentLimitProvider" ref="defaultContentLimitProvider"/> --> </bean> <bean id="tier2ContentStore" class="org.alfresco.repo.content.filestore.FileContentStore"> <constructor-arg> <value>${dir.contentstore2}</value> </constructor-arg> <!-- Uncomment the property below to add content filesize limit. <property name="contentLimitProvider" ref="defaultContentLimitProvider"/> --> </bean> <bean id="tier3ContentStore" class="org.alfresco.repo.content.filestore.FileContentStore"> <constructor-arg> <value>${dir.contentstore3}</value> </constructor-arg> <!-- Uncomment the property below to add content filesize limit. <property name="contentLimitProvider" ref="defaultContentLimitProvider"/> --> </bean> <!-- this is the aggregating content store - the name fileContentStore overrides the alfresco default store --> <bean id="fileContentStore" class="org.alfresco.repo.content.replication.AggregatingContentStore" > <property name="primaryStore" ref="defaultContentStore" /> <property name="secondaryStores"> <list> <ref bean="tier1ContentStore" /> <ref bean="tier2ContentStore" /> <ref bean="tier3ContentStore" /> </list> </property> </bean> </beans> The Encrypted Content Store provides encryption at rest capability. This is done by scrambling plain text into cipher text (encryption) and then back again (decryption) with the help of symmetric and asymmetric keys. When a document is written to the Encrypted Content Store, the Encrypted Content Store uses symmetric encryption to encrypt the document before it is written to the wrapped content store. A new symmetric key is generated each time a document is written to the content store. This means that every document in the system is encrypted with a different symmetric key. Further more, asymmetric encryption (such as RSA) is used to encrypt/decrypt those symmetric encryption/decryption keys. The asymmetric encryption uses a master key which is selected from a set of configured master keys. The Encrypted content store encrypts content with a master key that is randomly selected from the pool of master keys. No control is provided for using a specific master key for a specific piece of content, as that would allow attackers to target specific master keys when attempting to access or tamper with content. The following diagram shows the application of Encrypted Content Store over your default Alfresco content store. 
For example, use the following command to generate the master key: keytool -genkey -alias key1 -keyalg RSA -keystore <master keystore path> -keysize 2048 The Encrypted Content Store is configured using the properties in the alfresco-global.properties file and can be administered using JMX. filecontentstore.subsystem.name=encryptedContentStore cryptodoc.jce.keystore.path=<path_to_the_keystore> cryptodoc.jce.keystore.password=<master_password_for_the_keystore> cryptodoc.jce.key.aliases=<alias_for_the_key> cryptodoc.jce.key.passwords=<password_for_the_key_itself> cryptodoc.jce.keygen.defaultSymmetricKeySize=128For detailed information on these properties, see Encrypted Content Store properties [20]. You can configure the Encrypted Content Store using the JMX client, such as JConsole on the JMX MBeans > Alfresco > Configuration > ContentStore > managed > encrypted > Attributes tab. The keystore path, password, aliases and their password are the common properties you can overwrite to configure Encrypted Content Store using the alfresco-global.properties file. The JMX interface exposes these properties and allows the user to change them for a running system. For more information, see Encryption-related JMX operations [17]. The JMX client, JConsole, allows the user to see the set of current master keys and the total number of symmetric keys encrypted by each master key. It also enables the users to revoke a master key and to add a new master key alias. The available managed beans are displayed in JConsole. The Attribute values window is displayed. The available managed beans are displayed in JConsole. The Operation invocation window is displayed. The relevant master key will not be used for encryption. This will reencrypt the symmetric keys of this master key with a new master key. CAS Links: [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] [19] [20] [21] [22]
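As a small, hedged extension of the keytool example above (the alias and keystore path are placeholders, not values taken from the Alfresco documentation), further master keys can be generated into the same keystore and the keystore contents verified with keytool -list:

keytool -genkey -alias key2 -keyalg RSA -keystore <master keystore path> -keysize 2048
keytool -list -keystore <master keystore path>

The plural property names shown above (cryptodoc.jce.key.aliases and cryptodoc.jce.key.passwords) suggest that each additional alias then needs a corresponding password entry in the configuration; check the Encrypted Content Store properties reference for the exact list format.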
https://docs.alfresco.com/print/book/export/html/2097866
2020-02-16T21:56:50
CC-MAIN-2020-10
1581875141430.58
[]
docs.alfresco.com
A volatile desktop is a guest that can be deleted and replaced with a fresh one, and its user can keep working as if nothing happened. This is implemented in the following way:
A: You must make TCP ports 22, 443, 7777, 9443, 5800 and 5900 and up reachable from your other hosts in the platform. Of these, only TCP 443 must be reachable from the outside to communicate with the flexVDI clients.
If you decide to use the standard location, it is a good design choice to mount an additional SSD disk with enough space in /flexvdi/volatile, so that differential images have dedicated fast storage.
A: The flexVDI configuration now defaults to not rescanning periodically for new disk devices. After the disk is installed in the host, execute the rescan command, and the flexVDI Host will update its device list without disturbing end users or virtual machines. The information will be automatically propagated to the flexVDI Manager and flexVDI Dashboard within a few seconds.
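As an illustration only, assuming the host's firewall is managed with firewalld (an assumption; this page does not say which firewall is in use), opening the listed ports from the platform side might look like this:

# Assumption: firewalld-managed host; adjust zones and ranges to your environment.
firewall-cmd --permanent --add-port=22/tcp --add-port=443/tcp --add-port=7777/tcp --add-port=9443/tcp
# 5800, plus 5900 "and up" for the display protocol; the upper bound here is illustrative.
firewall-cmd --permanent --add-port=5800/tcp --add-port=5900-5999/tcp
firewall-cmd --reload

Remember that only TCP 443 needs to be reachable from outside the platform.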
https://docs.flexvdi.com/plugins/viewsource/viewpagesrc.action?pageId=4883243
2020-02-16T22:12:47
CC-MAIN-2020-10
1581875141430.58
[]
docs.flexvdi.com
Annotations for Confluence is an app that lets you find out the author of any piece of text. We are happy to help you. If you have any suggestions, you can always add your idea, vote for an existing idea, or just email us! Note that you can use Annotations only with text content.
https://docs.stiltsoft.com/display/public/AFC/Annotations+for+Confluence?key=AFC
2020-02-16T23:30:34
CC-MAIN-2020-10
1581875141430.58
[]
docs.stiltsoft.com
mitmproxy¶ Note - We strongly encourage you to use Inline Scripts rather than mitmproxy. - Inline Scripts are equally powerful and provide an easier syntax. - Most examples are written as inline scripts. - Multiple inline scripts can be used together. - Inline Scripts can either be executed headless with mitmdump or within the mitmproxy UI. All of mitmproxy’s basic functionality is exposed through the mitmproxy library. The example below shows a simple implementation of the “sticky cookie” functionality included in the interactive mitmproxy program. Traffic is monitored for Cookie and Set-Cookie headers, and requests are rewritten to include a previously seen cookie if they don’t already have one. In effect, this lets you log in to a site using your browser, and then make subsequent requests using a tool like curl, which will then seem to be part of the authenticated session.
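To make that concrete, here is a minimal inline-script sketch of the sticky-cookie idea for this (0.17) API; it is a simplified illustration rather than the example shipped with mitmproxy, and it keeps only one cookie per host, ignoring paths and expiry:

# sticky.py -- run with: mitmdump -s sticky.py
_cookies = {}  # host -> cookie value remembered from a Set-Cookie header

def response(context, flow):
    # Remember a cookie the server sets for this host.
    set_cookie = flow.response.headers.get("set-cookie")
    if set_cookie:
        _cookies[flow.request.host] = set_cookie.split(";")[0]

def request(context, flow):
    # Re-attach the remembered cookie if the client did not send one.
    saved = _cookies.get(flow.request.host)
    if saved and "cookie" not in flow.request.headers:
        flow.request.headers["cookie"] = saved

After logging in through the proxied browser, a plain curl request sent through the same proxy will carry the remembered cookie.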
https://mitmproxy.readthedocs.io/en/v0.17/scripting/mitmproxy.html
2020-02-16T23:02:58
CC-MAIN-2020-10
1581875141430.58
[]
mitmproxy.readthedocs.io
General FAQ
Is the source code unencrypted? Yes, it is unencrypted, which allows you to make any changes you want.
How many users can crea8socialpro serve at a time? We have worked to optimize crea8socialpro and keep it lightweight. Primarily, your server capability is what matters; you may need to scale or upgrade your server to withstand large amounts of traffic. On most shared-server plans, crea8socialpro should be able to support tens of thousands of users, while a single dedicated server can usually support upwards of one hundred thousand; however, we make no guarantee of performance due to the many variables involved.
Can I upgrade to another version? Yes, you can upgrade to any version at any time as long as your upgrade license is still active. Which versions you can upgrade your copy of crea8socialpro to depends on the plan you purchased.
Can I translate the script to another language? Yes, crea8socialpro comes with four languages, including English.
http://docs.crea8social.com/docs/frequently-ask-questions/
2020-02-16T22:28:56
CC-MAIN-2020-10
1581875141430.58
[]
docs.crea8social.com
This option allows you to send an invitation code to your friends so that they can register on your website. Watch the video below to learn how to do this and how to generate an invitation code, or follow the steps below:
- Go to "Invitation"
- Click on "Invitation" on the left side
- Click on "Show Invitation Code"
- Click on "Regenerate"
- Copy the code and give it to your friend so they can sign up on your website.
Thanks for reading
http://docs.crea8social.com/docs/user-management/how-to-generate-invitation-code-for-singup/
2020-02-16T21:16:17
CC-MAIN-2020-10
1581875141430.58
[array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-48.png', None], dtype=object) array(['https://i.gyazo.com/f0f63e4691085c4091d364067dc47638.png', None], dtype=object) array(['https://i.gyazo.com/54f18b63f2937c9c525bbf421ac14493.png', None], dtype=object) array(['https://i.gyazo.com/58887d295f32ad434dfd8fb22dd09703.png', None], dtype=object) ]
docs.crea8social.com
Secret revealed In a previous post (Extra, Extra – Read all about it!) I mentioned an upcoming, highly requested feature. Well, it's here. Last night we released the Visual Studio code name "Orcas" March 2007 CTP. In this release you will find a number of VS, language, and platform features. And the System.AddIn library has a new feature: we extended add-in activation beyond AppDomains to support activating add-ins out of process! I will post more on the Add-In team blog about the feature (as well as a little refactoring of the code we did), but here is a little code snippet, using the calculator sample from our MSDN articles, showing how simple it is to activate an add-in out of process and some additional control you may attain as a host using process isolation.
…
AddInToken calcToken = ChooseCalculator(tokens);
//Activate the selected AddInToken in a new AppDomain sandboxed in the internet zone
//Calculator calculator = calcToken.Activate<Calculator>(AddInSecurityLevel.Internet);
AddInProcess addInProcess = new AddInProcess();
Process HostProcess = Process.GetCurrentProcess();
System.Diagnostics.Trace.WriteLine("Calc Host PID: " + HostProcess.Id.ToString());
// The Process ID is -1 (i.e., Unassigned. Not yet created until Activate)
System.Diagnostics.Trace.WriteLine("addInProcess PID: " + addInProcess.ProcessId.ToString());
Calculator calculator = calcToken.Activate<Calculator>(addInProcess, AddInSecurityLevel.FullTrust);
// Get the AddInProcess created
Process tmpAddinProcess = Process.GetProcessById(addInProcess.ProcessId);
// Constrain the process working set
tmpAddinProcess.MaxWorkingSet = (IntPtr)((long)tmpAddinProcess.MaxWorkingSet - (long)tmpAddinProcess.MinWorkingSet / 2);
…
Where can I get the bits? Use the following links for step-by-step instructions on installing and using the VPC images or installable bits.
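As a hedged follow-up sketch (my illustration, not part of the original snippet), the host can also start the external process explicitly before activation and shut it down once it is finished with the add-in:

AddInProcess addInProcess = new AddInProcess();
addInProcess.Start();    // launches the external add-in process up front
Calculator calculator = calcToken.Activate<Calculator>(addInProcess, AddInSecurityLevel.FullTrust);
// ... use the add-in ...
addInProcess.Shutdown(); // tears the external process down when the host is done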
https://docs.microsoft.com/en-us/archive/blogs/jackg/secret-revealed
2020-02-16T22:11:23
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
- To start the LifeKeeper GUI application, run: /opt/LifeKeeper/bin/lkGUIapp &
- To connect to the LifeKeeper GUI applet from a web browser, go..
http://docs.us.sios.com/spslinux/9.4.0/ja/topic/configure-the-cluster
2020-02-16T22:40:49
CC-MAIN-2020-10
1581875141430.58
[array(['https://manula.r.sizr.io/large/user/1870/img/configure-the-cluster-step-4.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/configure-the-cluster-step-5.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/configure-the-cluster-step-6.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/configure-the-cluster-step-7.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/configure-the-cluster-step-8.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/configure-the-cluster-step-9.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/configure-the-cluster-step-9-1.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/configure-the-cluster-step-10.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/verify-step-1.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/verify-step-2.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/create-and-extend-step-1.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/create-and-extend-step-2.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/createresourcemyql.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/youripresource.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/mysql-cluster-from-the-gui.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/mysql-cluster-create.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/mysql-cluster-data.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/mysql-cluster-extend.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/mysql-cluster-extend2.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/mysql-cluster-from-the-gui2.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/should-look-as-follows.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/should-look-as-follows2.png', None], dtype=object) ]
docs.us.sios.com
All content with label 2lcache+async+development+events+hibernate_search+infinispan+installation+out_of_memory+repeatable_read+setup+test+transactionmanager+write_behind+xsd. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, replication, dist, release, query, deadlock, jbossas, lock_striping, nexus, guide, schema, listener, cache, amazon, s3, grid, jcache, api, ehcache, maven, documentation, wcm,, integration, cluster, websocket, transaction, interactive, xaresource, build, gatein, searchable, demo, scala, client, non-blocking, jpa, filesystem, tx, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, hotrod, snapshot, webdav, docs, consistent_hash, batching, store, jta, faq, as5, jsr-107, jgroups, lucene, locking, rest, hot_rod more » ( - 2lcache, - async, - development, - events, - hibernate_search, - infinispan, - installation, - out_of_memory, - repeatable_read, - setup, - test, - transactionmanager, - write_behind, - xsd ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/2lcache+async+development+events+hibernate_search+infinispan+installation+out_of_memory+repeatable_read+setup+test+transactionmanager+write_behind+xsd
2020-02-16T21:32:14
CC-MAIN-2020-10
1581875141430.58
[]
docs.jboss.org
All content with label client_server+ec2+ehcache+gridfs+hotrod+infinispan+installation+jboss_cache+jta+meeting+searchable+tx+write_behind. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, replication, recovery, transactionmanager, dist, release, partitioning, query, deadlock, archetype, jbossas, lock_striping, nexus, guide, schema, listener, cache, amazon, s3, memcached, grid, test, api, xsd, maven, documentation, wcm, s, hibernate, getting, aws, interface, custom_interceptor, setup, clustering, eviction, concurrency, out_of_memory, examples, import, index, events, configuration, hash_function, batch, buddy_replication, loader, xa, write_through, cloud, remoting, mvcc, tutorial, notification, murmurhash2, xml, read_committed, jbosscache3x, distribution, started, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, br, development, websocket, async, transaction, interactive, xaresource, build, gatein, demo, cache_server, scala, client, migration, non-blocking, filesystem, jpa, gui_demo, eventing, testng, murmurhash, infinispan_user_guide, standalone, webdav, repeatable_read, docs, batching, consistent_hash, store, faq, 2lcache, as5, jsr-107, jgroups, lucene, locking, hot_rod more » ( - client_server, - ec2, - ehcache, - gridfs, - hotrod, - infinispan, - installation, - jboss_cache, - jta, - meeting, - searchable, - tx, - write_behind ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/client_server+ec2+ehcache+gridfs+hotrod+infinispan+installation+jboss_cache+jta+meeting+searchable+tx+write_behind
2020-02-16T22:31:07
CC-MAIN-2020-10
1581875141430.58
[]
docs.jboss.org
All content with label cloud+deadlock+ec2+gridfs+infinispan+installation+json+json_encryption+jsr-107+podcast+repeatable_read+rest+searchable+setup+standalone. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, rest_security, intro,, out_of_memory, examples, jboss_cache, import, index, events, configuration, hash_function, batch, buddy_replication, loader, write_through, mvcc, notification, tutorial, presentation, read_committed, jbosscache3x, distribution, jose, started, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, development, transaction, async, interactive, xaresource, build, gatein, demo, scala, mod_cluster, client, migration, non-blocking, jpa, filesystem, tx, gui_demo, eventing, client_server, testng, infinispan_user_guide, hotrod, webdav, snapshot, docs, consistent_hash, store, jta, faq, as5, 2lcache, jgroups, lucene, locking, json_signature, hot_rod more » ( - cloud, - deadlock, - ec2, - gridfs, - infinispan, - installation, - json, - json_encryption, - jsr-107, - podcast, - repeatable_read, - rest, - searchable, - setup, - standalone ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/cloud+deadlock+ec2+gridfs+infinispan+installation+json+json_encryption+jsr-107+podcast+repeatable_read+rest+searchable+setup+standalone
2020-02-16T22:08:34
CC-MAIN-2020-10
1581875141430.58
[]
docs.jboss.org
Mango RTMs! Now what? :-) Finally, if you're building a Mango app or game right now, I'd love to talk to you! Give me a shout via the comments area of this post, use our contact page, or send me a note on Twitter! -Paul
https://docs.microsoft.com/en-us/archive/blogs/cdnmobiledevs/mango-rtms-now-what
2020-02-16T23:21:37
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here. Class: Seahorse::Client::HandlerList - Includes: - Enumerable - Defined in: - aws-sdk-core/lib/seahorse/client/handler_list.rb Instance Method Summary collapse - #add(handler_class, options = {}) ⇒ Class<Handler> Registers a handler. - #copy_from(source_list, &block) ⇒ void Copies handlers from the source_listonto the current handler list. - #each(&block) ⇒ Object Yields the handlers in stack order, which is reverse priority. - #entries ⇒ Array<HandlerListEntry> - #for(operation) ⇒ HandlerList Returns a handler list for the given operation. - #remove(handler_class) ⇒ Object - #to_stack ⇒ Handler Constructs the handlers recursively, building a handler stack. Instance Method Details #add(handler_class, options = {}) ⇒ Class<Handler> There can be only one :send handler. Adding an additional send handler replaces the previous. Registers a handler. Handlers are used to build a handler stack. Handlers default to the :build step with default priority of 50. The step and priority determine where in the stack a handler will be. Handler Stack Ordering A handler stack is built from the inside-out. The stack is seeded with the send handler. Handlers are constructed recursively in reverse step and priority order so that the highest priority handler is on the outside. By constructing the stack from the inside-out, this ensures that the validate handlers will be called first and the sign handlers will be called just before the final and only send handler is called. Steps Handlers are ordered first by step. These steps represent the life-cycle of a request. Valid steps are: :initialize :validate :build :sign :send Many handlers can be added to the same step, except for :send. There can be only one :send handler. Adding an additional :send handler replaces the previous one. Priorities Handlers within a single step are executed in priority order. The higher the priority, the earlier in the stack the handler will be called. - Handler priority is an integer between 0 and 99, inclusively. - Handler priority defaults to 50. - When multiple handlers are added to the same step with the same priority, the last one added will have the highest priority and the first one added will have the lowest priority. #copy_from(source_list, &block) ⇒ void This method returns an undefined value. Copies handlers from the source_list onto the current handler list. If a block is given, only the entries that return a true value from the block will be copied. #each(&block) ⇒ Object Yields the handlers in stack order, which is reverse priority. #entries ⇒ Array<HandlerListEntry> #for(operation) ⇒ HandlerList Returns a handler list for the given operation. The returned will have the operation specific handlers merged with the common handlers.
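To make #add concrete, here is a hedged sketch of registering a custom handler directly on a client's handler list; the TimingHandler class and its behavior are invented for illustration (production code usually registers handlers through plugins):

require 'aws-sdk-core'

# A pass-through handler that times each operation before forwarding it.
class TimingHandler < Seahorse::Client::Handler
  def call(context)
    started = Time.now
    resp = @handler.call(context) # invoke the rest of the stack
    puts "#{context.operation_name} took #{Time.now - started}s"
    resp
  end
end

s3 = Aws::S3::Client.new(region: 'us-east-1')
# Register at the :sign step with a higher-than-default priority (default is 50).
s3.handlers.add(TimingHandler, step: :sign, priority: 60)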
https://docs.aws.amazon.com/sdkforruby/api/Seahorse/Client/HandlerList.html
2020-02-16T22:00:11
CC-MAIN-2020-10
1581875141430.58
[]
docs.aws.amazon.com
See Also: IUIScrollViewAccessibilityDelegate Members This interface contains the required methods (if any) from the protocol defined by UIKit.UIScrollViewAccessibilityDelegate. If you create objects that implement this interface, the implementation methods will automatically be exported to Objective-C with the matching signature from the method defined in the UIKit.UIScrollViewAccessibilityDelegate protocol. Optional methods (if any) are provided by the UIKit.UIScrollViewAccessibilityDelegate_Extensions class as extension methods to the interface, allowing you to invoke any optional methods on the protocol.
http://docs.go-mono.com/monodoc.ashx?link=T%3AUIKit.IUIScrollViewAccessibilityDelegate
2021-11-27T14:30:57
CC-MAIN-2021-49
1637964358189.36
[]
docs.go-mono.com
IButtonProperties.Glyphs Property
Namespace: DevExpress.XtraEditors.ButtonPanel
Assembly: DevExpress.Utils.v21.2.dll
Declaration (C#):
[DXCategory("Behavior")]
object Glyphs { get; set; }
Declaration (VB):
<DXCategory("Behavior")>
Property Glyphs As Object
Remarks: When Glyphs is set, the IButtonProperties.Image and IButtonProperties.ImageIndex properties are ignored. See the Header Buttons topic for more info.
See Also
https://docs.devexpress.com/WindowsForms/DevExpress.XtraEditors.ButtonPanel.IButtonProperties.Glyphs
2021-11-27T15:44:37
CC-MAIN-2021-49
1637964358189.36
[]
docs.devexpress.com
Date: Tue, 17 Aug 1999 00:40:24 -0700 From: "David Schwartz" <[email protected]> To: <[email protected]> Subject: panic: timeout handle full Message-ID: <[email protected]> Next in thread | Raw E-Mail | Index | Archive | Help I just had one of my servers crash with this panic from kern_timeout.c. The relevant code seems to be: /* Fill in the next free callout structure. */ new = SLIST_FIRST(&callfree); if (new == NULL) /* XXX Attempt to malloc first */ panic("timeout table full"); SLIST_REMOVE_HEAD(&callfree, c_links.sle); I'm just curious, what might consume too many timeout structures? And can I cause it to default to more of them? DS To Unsubscribe: send mail to [email protected] with "unsubscribe freebsd-stable" in the body of the message Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=72153+0+/usr/local/www/mailindex/archive/1999/freebsd-stable/19990822.freebsd-stable
2021-11-27T15:06:53
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
ConstantBox From Xojo Documentation Displays a class constant box {{ConstantBox | name = constant's name | type = type of the constant | value = default value of constant | owner = the constant's owner class | platform = all/mac/win/linux | newinversion = version where this class first appeared | modifiedinversion = version where this class has been modified | replacementreason = obsolete/deprecated | replacement = the replacement }} OR for Global Constants
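For illustration only, a filled-in invocation of the class-constant form might look like the following; every value here is made up, and only the parameter names come from the template above:

{{ConstantBox
| name = kMaxRetries
| type = Integer
| value = 3
| owner = MyNetworkClass
| platform = all
| newinversion = 2019r1
}}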
http://docs.xojo.com/Template:ConstantBox
2021-11-27T15:18:41
CC-MAIN-2021-49
1637964358189.36
[]
docs.xojo.com
SAP User Accounts Evidence FlexNet Manager for SAP Applications collects evidence of SAP user accounts that show particularly high usage or that were used to concurrently log on to your organization's SAP systems. These user accounts are identified based on activity data collected by the SAP Inventory Agent, and the Work Time and Multiple Logons activity checks that were executed in the SAP Admin module. The Work Time activity check identifies user accounts that show long periods of activity that could indicate that a user is indirectly accessing SAP data. For example, if a user account executes a long-running job, this job might output SAP data into files which could then be distributed to non-SAP users or a non-SAP system. The Multiple Logons activity check identifies user accounts that have been used to concurrently log on to SAP systems. A user account that was used by several people or non-SAP systems could indicate indirect access. User accounts that are identified by the Work Time or Multiple Logons activity checks are listed on the SAP User Accounts tab on the Indirect Access page (accessible under Optimization). Based on the information presented on the tab, you can contact the administrator of the relevant SAP system who can then take appropriate measures to correct the situation, if necessary. You can link the user accounts that are indirectly accessing SAP data to a non-SAP system. This enables you to closely monitor these user accounts. For information about non-SAP systems, see Managing Evidence Using Non-SAP Systems. FlexNet Manager Suite (On-Premises) 2020 R2
https://docs.flexera.com/FlexNetManagerSuite2020R2/EN/WebHelp/concepts/SAP-UserAccountsEvid.html
2021-11-27T15:24:53
CC-MAIN-2021-49
1637964358189.36
[]
docs.flexera.com
“Transaction Profile by Maximum Object Types Used” Rule Use this rule to identify users who access a limited number of specified objects of each specific type (transaction, report, and job). The criteria for a suggestion are met only if the counted numbers for all object types are below or equal to the specified maximum value. If one of the values is exceeded, no license type suggestions are made. The main difference from the “Transaction Profile by Maximum Objects Used” Rule is that, for the “Transaction Profile by Maximum Object Types Used” rule, you define a threshold for each individual object type. To be able to use this rule, you must have created a transaction profile beforehand. For information about transaction profiles, see SAP Transaction Profiles. Usage Scenario: You could use the Transaction Profile by Maximum Object Types Used rule to distinguish between a Professional and a Limited Professional user, based on the breadth of SAP operations being used.. - Transaction Profile—Select the name of the transaction profile that should be matched against the reported user consumption data. The transaction profile defines the scope of the objects that are considered by this rule. - Maximum transaction used—Enter the maximum number of different transactions that were run by the user. - Maximum report used—Enter the maximum number of different reports that were run by the user. - Maximum job used—Enter the maximum number of different jobs that were run by the user. -
https://docs.flexera.com/FlexNetManagerSuite2020R2/EN/WebHelp/concepts/SAPXactnProfByMaxObjTypesRule.html
2021-11-27T15:33:02
CC-MAIN-2021-49
1637964358189.36
[]
docs.flexera.com
Go to hazelcast.org/documentation..
https://docs.hazelcast.org/docs/3.5/manual/html/queue-persistence.html
2021-11-27T13:57:00
CC-MAIN-2021-49
1637964358189.36
[]
docs.hazelcast.org
Windows Containers - How to Containerize an ASP.NET Web API Application in Windows using Docker This post is about: - You have or want to build an ASP.NET Web API app you want to run in a Windows Container - You want to automate the building of an image to run that Windows-based ASP.NET Web API Click image for full size Figure 1: How this post can help you Prerequisite Post This post can give you some background that I will assume you know for this post: Background for APIs Building APIs Hardly a day goes by where you don't learn of some business opening up there offerings as a web service application programming interface (API). this is a hot topic among developers these days, regardless of whether you're a startup or large enterprise. These APIs are typically offered as a series of interdependent web services. Exposing an API through a Windows Container So in this post I would like to get into the significance of the modern API and then get into a technical discussion about how you might build that out. In addition, I would like to address the use of a Docker container for hosting and running our API in the cloud. The Programmable Web The programmable web acts as a directory to all of the various APIs that are available. Click image for full size Figure 2: The Programmable Web Build and test locally - deploy to cloud and container Windows Container Workflow These will be the general steps we follow in this post. Click image for full size Figure 3: Workflow for building and running Windows containers Why containers are important Perhaps the most important reason why containerization is interesting, is the fact that it can increase deployment velocity. The main reason this happens with the help of containerization is that all of an application's dependencies are bundled up with the application itself. Delivering the application with all its dependencies Oftentimes dependencies are installed in a virtual machine. That means when applications are deployed to that virtual machine, there might be an impedance mismatch between the dependency in the virtual machine and the container. Bundling up the dependency along with the container minimizes this risk. Our web application will be deployed with all its dependencies in a container In our example we will bundle up a specific version of IIS and a specific version of the ASP.net framework. Once we roll this out as a container, we have a very high degree of confidence that our web application will run as expected, since we will be developing and debugging with the same version of IIS and ASP.net. We will use the ASP.NET MVC Web API to build out our HTTP-based service. We will use Visual Studio to do so. Starting with the new project in Visual Studio We will use the ASP.net MVC web API to build out our restful API service. Click image for full size Figure 4: Creating a new project with Visual Studio Our API will leverage ASP.NET MVC Web API as you see below. Somewhat tangential, we will choose to store our diagnostic information up in the cloud, but that is orthogonal to this post. Click image for full size Figure 5: Choosing ASP.NET Web Application We will need to select a template below, which defines the type of application we wish to create. - Web API - Do NOT host in the cloud - No Authentication Click image for full size Figure 6: Choosing Web API, No Authentication, Do NOT Host in the cloud We will ignore the notion of authentication for the purposes of this post. 
Click image for full size Figure 7: No authentication Our Visual Studio Solution Explorer should look like the following. You may choose a different solution name but you'll need to keep this in mind with later parts of the code, particularly with the Dockerfile. Click image for full size Figure 8: Visual Studio Solution Explorer The ValuesController.cs file is contains the code that gets executed automatically when HTTP requests come in. Notice that in the code snippet below that the various HTTP verbs (GET, POST, PUT, DELETE), map to code or functions. When we issue " web server/api/values", for example, you will see that because the get method below in return an array of strings, { "Run this ", "from a container" }. public class ValuesController : ApiController { public IEnumerable<string> Get() { return new string[] { "Run this ", "from a container" }; } // GET api/values/5 public string Get(int id) { return "value"; } // POST api/values public void Post([FromBody]string value) { } // PUT api/values/5 public void Put(int id, [FromBody]string value) { } // DELETE api/values/5 public void Delete(int id) { } } Click image for full size Figure 9: Opening thne ValuesController.cs file Code to modify Modify the strings you see in the red box to match. These will be this to stream they get returned back to the browser based on the http request. Click image for full size Figure 10: modifying the Get() method Running the solution locally Get the F5 key or go to the debug menu and select Start Debugging. In this case we are running locally on my laptop and that is why you see . when I ran the project within Visual Studio it automatically routed me to localhost with the corresponding port. Click image for full size Figure 11: The view from inside the browser Provisioning a Windows Server Docker host, Building the image, and running image as container in cloud Click image for full size Figure 12: Remaining Work There are a few things that we need to do before we are finished. The first thing is we will need a host for our container application. The host and the main purpose of this post will be to demonstrate how we will host a Windows application. So we will need to provision a Windows server 2016 container-capable Docker host. From there we will begin the work of building an image that contains our web application that we just finished building. To do this there will be a few artifacts that are required, such as a Dockerfile, some Powershell, and a docker build command. Let's get started and go to the Azure portal and provision a Windows 2016 virtual machine that is capable of running docker containers. It is currently in Technical Preview 6. Provision Winows 2016 Docker Host at Portal Navigate to the Azure portal. this is where we will provision a Windows server virtual machine that is capable of hosting our Docker containers. Click image for full size Figure 13: Provisioning Windows server 2016 Be sure to select the version that can support containers. Click image for full size Figure 14: Containerized Windows server Entering the basic information about your Windows virtual machine. Click image for full size Figure 15: Naming your VM Selecting a virtual machine with two cores and seven GBs of RAM. Click image for full size Figure 16: Choosing the hardware footprint It's important to remember to modify the virtual network configuration. Two addresses only are supported for Docker functionality for Windows server 2016 technical preview 5. 
The two supported subnets include: - 192.x.x.x - 10.x.x.x.x Click image for full size Figure 17: Specifying network configuration Notice that in this case I selected the 10.1.x.x network. Click image for full size Figure 18: Entering the address space for the subnet Dockerfile and Docker Build Once we create this virtual machine in Azure, we will connect to it. From there we will create a "Dockerfile," which is nothing more than a text file that contains instructions on how we wish to build our image. The instructions inside of the Docker will begin by downloading a base image from Docker Hub, which is a central repository for both Linux and Windows-based images. The Dockerfile will then continue by the deployment process for our MVC App. The process will install some compilation tools. It will also compile our MVC app yet again and then copy it over to the Web server directory of the image (c:\inetpub\wwwroot). After the build process is done, we will have an image that contains all the necessary binaries to run our MVC Web API app. Additional Guidance I borrowed some of the guidelines from Anthony Chu: Fixing some bugs I also ran into some bugs that could be easily fixed. Once your container is running, it may not be reachable by client browsers outside of Azure. To fix this problem we will use the following Powershell command,"Get-NetNatStaticMapping." Get-NetNatStaticMapping | ? ExternalPort -eq 80 | Remove-NetNatStaticMapping So now we will begin the process of provisioning a container enabled version of Windows server 2016. Begin by going to the Azure portal and clicking on the + . From there, type in Windows 2016 into the search text box. Click image for full size Figure 19: Provisioning a Windows virtual machine in Azure You will then see the ability to choose Windows server 2016 with containers tech preview 5 Click image for full size Figure 20: Searching for the appropriate image It is now time to copy our source code to our Windows server 2016 virtual machine running in Azure. so go to your local directory for your laptop on which you are developing your ASP.net MVC Web API application. From there we will remotely connect to the virtual machine running an Azure. the next goal will be to copy over our MVC application, along with all of its source code, to this running Windows Server virtual machine an Azure. Click image for full size Figure 21: Remotely connecting to the Windows 2116 server Click image for full size Figure 22: Copying our entire project from the local laptop used to develop the MVC web app Now that we have the project in the clipboard, the next step is to go back to our Windows server running in the cloud and paste into a folder we create. We will call that folder docker for simplicity's sake. When working with docker and containerization, most of your work is achieved at the command line. In the Linux world, we typically work in a bash environment, while in the Windows will will just simply use either a command prompt or Powershell. So let's navigate into Powershell. Click image for full size Figure 23: Start the Powershell Command Line We will create a docker directory in which we will place our work. Click image for full size Figure 24: Command line to make a directory Let's be clear that you are pasting into the Windows server 2016 virtual machine running in Azure. Click image for full size Figure 25: Paste in your application and supporting code The code below provides some interesting ways for us to deploy our MVC Application. 
- The base image will be a Windows Image with IIS pre-installed. starting with this base image saves the time of us installing Internet Information Server. - We install the Chocolatey tools, which lets you install Windows programs from the command line very easily - Because we will compile our MVC application prior to deploying into the image, the next section requires us to install the build tooling - A bill directory is created and files are copied into it so that the build process can take place in its own directory - Nuget packages are installed, a build takes place in the files are copied to c:\inetpub\wwwroot You will need to pay particular attention to the application name below, AzureCourseAPI.sln, and the related dependencies. You will obviously need to modify this for the name of your project. # TP5 for technology preview (will not be needed when we go GA) # FROM microsoft/iis FROM microsoft/iis:TP5 # Install Chocolatey (tools to automate commandline compiling) ENV chocolateyUseWindowsCompression false RUN @powershell -NoProfile -ExecutionPolicy unrestricted -Command "(iex ((new-object net.webclient).DownloadString(''))) >$null 2>&1" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin # Install build tools RUN powershell add-windowsfeature web-asp-net45 \ && choco install microsoft-build-tools -y --allow-empty-checksums -version 14.0.23107.10 \ && choco install dotnet4.6-targetpack --allow-empty-checksums -y \ && choco install nuget.commandline --allow-empty-checksums -y \ && nuget install MSBuild.Microsoft.VisualStudio.Web.targets -Version 14.0.0.3 \ && nuget install WebConfigTransformRunner -Version 1.0.0.1 RUN powershell remove-item C:\inetpub\wwwroot\iisstart.* # Copy files (temporary work folder) RUN md c:\build WORKDIR c:/build COPY . c:/build # Restore packages, build, copy RUN nuget restore \ && "c:\Program Files (x86)\MSBuild\14.0\Bin\MSBuild.exe" /p:Platform="Any CPU" /p:VisualStudioVersion=12.0 /p:VSToolsPath=c:\MSBuild.Microsoft.VisualStudio.Web.targets.14.0.0.3\tools\VSToolsPath AzureCourseAPI.sln \ && xcopy c:\build\AzureCourseAPI\* c:\inetpub\wwwroot /s # NOT NEEDED ANYMORE –> ENTRYPOINT powershell .\InitializeContainer Dockerfile InitializeContainer gets executed at the and. The web.config file needs to be transformed once our app gets deployed. If (Test-Path Env:\ASPNET_ENVIRONMENT) { \WebConfigTransformRunner.1.0.0.1\Tools\WebConfigTransformRunner.exe \inetpub\wwwroot\Web.config "\inetpub\wwwroot\Web.$env:ASPNET_ENVIRONMENT.config" \inetpub\wwwroot\Web.config } # prevent container from exiting powershell InitializeContainer Docker Build At this point we are ready to begin the building of our image. docker build -t docker-demo . Docker build The syntax for the docker build commamd. Click image for full size Figure 26: The docker build command The next step is to build the image using the doctor build command as seen below. Click image for full size Figure 27: The docker build command continued... The docker run takes the name of our image and runs it as a container. Docker Run docker run -d -p 80:80 docker-demo Docker run Getting ready to test our running container Is a few more things to do before we can test our container properly. The first thing we need to do is open up port 80 on the Windows server virtual machine running in Azure. By default everything is locked down. Click image for full size Figure 28: Public IP address from the portal Network security groups are the mechanism by which we can open and close ports. 
A network security group can contain one or more rules. We are adding a rule to open up port 80 below. Click image for full size Figure 29: Opening up Port 80 We are now ready to navigate to the public IP address, as indicated in the figure, Public IP address from the portal. the default homepage is displayed. Click image for full size Figure 30: Home Page for Web Site The real goal of this exercise is to make an API called to a restful endpoint that will return some JSON data. Notice that in the browser we can see the appropriate JSON data being returned. Click image for full size Figure 31: JSON Data from API Call Conclusion This post demonstrated the implementation of an ASP.net MVC Web Api application running in their Windows container. Interestingly, there is support for this type of an application in a Linux-based container, but that is reserved for a future post. In addition, there will be a forthcoming Windows Nano implementation, which will be a much lighter version than what we saw here in this post. Hopefully, this post provided some value as some of this was difficult to discover and write about. I welcome your comments below. Troubleshooting Guidance (orthogonal to this post) below are some command to help you better troubleshoot issues that might arise. docker inspect docker-demo This command can tell you about your running container. [ { "Id": "sha256:27cdd74ae5d66bb59306c62cdd63cd629da4c7fd77d7a9efbf240d0b4882ead7", "RepoTags": [ "docker-demo:latest" ], "RepoDigests": [], "Parent": "sha256:50fcbe5e3653b3ea65d4136957b4d06905ddcb37bf46c4440490f885b99c38dd", "Comment": "", "Created": "2016-10-04T03:36:32.8079573Z", "Container": "98190701562a0a70b100e470f8244d203afaa68cb4ccb64c42ba5bee10817934", "ContainerConfig": { "Hostname": "2ac70997c0f2", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "chocolateyUseWindowsCompression=false" ], "Cmd": [ "cmd", "/S", "/C", "#(nop) ", "ENTRYPOINT [\"cmd\" \"/S\" \"/C\" \"powershell .\\\\InitializeContainer\"]" ], ": {} }, "DockerVersion": "1.12.1", "Author": "", "Config": { "Hostname": "2ac70997c0f2", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "chocolateyUseWindowsCompression=false" ], "Cmd": null, ": {} }, "Architecture": "amd64", "Os": "windows", "Size": 8650981554, "VirtualSize": 8650981554, "GraphDriver": { "Name": "windowsfilter", "Data": { "dir": "C:\\ProgramData\\docker\\windowsfilter\\498f5114b4972b7a19e00c3e7ac1303ad28addd774d6c7b949e9955e2147950e" } }, "RootFS": { "Type": "layers", "Layers": [ "sha256:72f30322e86c1f82bdbdcfaded0eed9554188374b2f7f8aae300279f1f4ca2cb", "sha256:23adcc284270a324a01bb062ac9a6f423f6de9a363fcf54a32e3f82e9d022fc4", "sha256:fbb9343bb3906680e5f668b4c816d04d1befc7e56a284b76bc77c050dfb04f1f", "sha256:ad000fd14864d0700d9b0768366e124dc4c661a652f0697f194cdb5285a5272c", "sha256:8b6bfce4717823dfde8bde9624f8192c83445a554adaec07adf80dc6401890ba", "sha256:8ff4edf470318e6d6bce0246afc6b4cb6826982cd7ef3625ee928a24be048ad8", "sha256:1852364f9fd5c7f143cd52d6103e3eec5ed9a0e909ff0fc979b8250d42cf56bd", "sha256:08325b3804786236045a8979b3575fd8dcd501ff9ca22d9c8fc82699d2c045ad", "sha256:7a7f406dcbae5fffbbcd31d90e86be62618e4657fdf9ef6d1af75e86f29fcd19", "sha256:d2d8dc7b30514f85991925669c6f829e909c5634204f2eaa543dbc5ceb811d29", "sha256:da0607f92811e97e941311b3395bb1b9146d91597ab2f21b2e34e503ad57e73f", 
"sha256:0937ca7b5cbb9ec4a34394c4342f7700d97372ea85cec6006555f96eada4d8c3" ] } } ] netstat -ab | findstr ":80" Displays information about network connections for the Transmission Control Protocol (both incoming and outgoing), routing tables, and a number of network interface (network interface controller or software-defined network interface) and network protocol statistics. Click image for full size Figure 32: snap32.png
https://docs.microsoft.com/en-us/archive/blogs/allthingscontainer/windows-containers-how-to-containerize-a-asp-net-web-api-application-in-windows-using-docker?wt.mc_id=DX_875106
2021-11-27T13:55:28
CC-MAIN-2021-49
1637964358189.36
[]
docs.microsoft.com
Data modeling¶ A data model is a model that organizes data and specifies how they are related to one another. This topic describes the Nebula Graph data model and provides suggestions for data modeling with Nebula Graph. Data structures¶ Nebula Graph data model uses six data structures to store data. They are graph spaces, vertices, edges, tags, edge types and properties. - Graph spaces: Graph spaces are used to isolate data from different teams or programs. Data stored in different graph spaces are securely isolated. Storage replications, privileges, and partitions can be assigned. - Vertices: Vertices are used to store entities. - In Nebula Graph, vertices are identified with vertex identifiers (i.e. VID). The VIDmust be unique in the same graph space. VID should be int64, or fixed_string(N). - A vertex must have at least one tag or multiple tags. - Edges: Edges are used to connect vertices. An edge is a connection or behavior between two vertices. - There can be multiple edges between two vertices. - Edges are directed. ->identifies the directions of edges. Edges can be traversed in either direction. - An edge is identified uniquely with a source vertex, an edge type, a rank value, and a destination vertex. Edges have no EID. - An edge must have one and only one edge type. - The rank value is an immutable user-assigned 64-bit signed integer. It identifies the edges with the same edge type between two vertices. Edges are sorted by their rank values. The edge with the greatest rank value is listed first. The default rank value is zero. - Tags: Tags are used to categorize vertices. Vertices that have the same tag share the same definition of properties. - Edge types: Edge types are used to categorize edges. Edges that have the same edge type share the same definition of properties. - Properties: Properties are key-value pairs. Both vertices and edges are containers for properties. Note Tag and Edge type are similar to the vertex table and edge table in the relational databases. Directed property graph¶ Nebula Graph stores data in directed property graphs. A directed property graph has a set of vertices connected by directed edges. Both vertices and edges can have properties. A directed property graph is represented as: G = < V, E, PV, PE > - V is a set of vertices. - E is a set of directed edges. - PV is the property of vertices. - PE is the property of edges. The following table is an example of the structure of the basketball player dataset. We have two types of vertices, that is player and team, and two types of edges, that is serve and follow. Note Nebula Graph supports only directed edges. Compatibility Nebula Graph 2.6.1 allows dangling edges. Therefore, when adding or deleting, you need to ensure the corresponding source vertex and destination vertex of an edge exist. For details, see INSERT VERTEX, DELETE VERTEX, INSERT EDGE, and DELETE EDGE. The MERGE statement in openCypher is not supported.
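To make the model concrete, here is a minimal nGQL sketch for the basketball player example described above; the space name, property lists, and values are illustrative rather than the exact schema of the official dataset:

# Create a graph space and switch to it (wait a few seconds after creation before USE).
CREATE SPACE basketballplayer (vid_type = FIXED_STRING(32));
USE basketballplayer;

# Tags categorize vertices; edge types categorize edges.
CREATE TAG player(name string, age int);
CREATE TAG team(name string);
CREATE EDGE serve(start_year int, end_year int);
CREATE EDGE follow(degree int);

# Vertices are identified by their VIDs; an edge is identified by source, edge type, rank, and destination.
INSERT VERTEX player(name, age) VALUES "player100":("Tim Duncan", 42);
INSERT VERTEX player(name, age) VALUES "player101":("Tony Parker", 36);
INSERT VERTEX team(name) VALUES "team200":("Spurs");
INSERT EDGE serve(start_year, end_year) VALUES "player100"->"team200":(1997, 2016);
INSERT EDGE follow(degree) VALUES "player100"->"player101":(95);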
https://docs.nebula-graph.io/2.6.1/1.introduction/2.data-model/
2021-11-27T15:16:26
CC-MAIN-2021-49
1637964358189.36
[]
docs.nebula-graph.io
Home > Journals > RR > Vol. 3 (2007) > Iss. 1 Abstract Humans are not inclined to serve others unless it benefits them. We are a product of evolution and a part of the animal kingdom. Do people really need to help others? As with any philosophical question, it possesses no real answer and leads to a long line of questions that follow. We can only keep asking ourselves these questions while we continue to learn. Recommended Citation Tran, Emily (2008) "Do people really want to help others? A look at the human nature of service," Reason and Respect: Vol. 3 : Iss. 1 , Article 7. Available at:
https://docs.rwu.edu/rr/vol3/iss1/7/
2021-11-27T15:10:42
CC-MAIN-2021-49
1637964358189.36
[]
docs.rwu.edu
Columns - Trifacta Wrangler Pro
https://docs.trifacta.com/pages/viewpage.action?pageId=172764525
2021-11-27T15:37:09
CC-MAIN-2021-49
1637964358189.36
[]
docs.trifacta.com
Bug Bounty
We are proud to announce that the WardenSwap Platform now has a bug bounty program that covers all Smart Contracts that interact with or hold user funds. If there is any bug in our system, we encourage users and researchers to submit a report to us and receive a suitable bounty as an incentive.
Introduction
The bug bounty program from the Warden Swap Platform currently contains two separate scopes, which share the same rules with a few exceptions as noted below. The scopes are: 1. Smart contracts for Multi-Chain Best Rate Swap 2. Smart contracts for Farm & Liquidity Providing The program may be expanded in the future to include more asset types such as frontends and apps.
Risk rating methodology
We generally base our rewards on an OWASP Risk Rating Methodology score, factoring in both impact and likelihood. One exception to this is described in the Smart Contracts section.
Report policy
A bug report may qualify for a reward only when: It makes the Warden team aware of the bug for the first time. The reporter allows the Warden team a reasonable amount of time to fix the vulnerability before disclosing it to other parties or to the public. The reporter has not used the bug to receive any reward or monetary gain outside of the bug bounty rewards described in this document, or allowed anyone else to profit outside the bug bounty program. A bug is reported without any conditions, demands, or threats. The investigation method and vulnerability report must adhere to the guidelines in this document. It is ultimately at our sole discretion whether a report meets the reward requirements. The reporter makes a good faith effort to avoid privacy violations, destruction of data, and interruption or degradation of our service. Only interact with accounts you own or with the explicit permission of the account holder.
A detailed report increases the likelihood of a reward payout and may also increase the reward amount. Please include as much information about the vulnerability as possible, including: The conditions on which reproducing the bug is contingent. The steps needed to reproduce the bug or, better yet, a proof-of-concept. If the amount of detail is not sufficient to reproduce the bug, no reward will be paid. The potential implications of the vulnerability being abused.
Multiples or duplicates
Submit one vulnerability per report, unless you need to chain vulnerabilities to provide impact. When duplicates occur, we only award the first report that was received (provided that it can be fully reproduced). Multiple vulnerabilities caused by one underlying issue will be awarded one bounty. Reward amounts mentioned in this document are the minimum bounties we will pay per bug based on severity. We aim to be fair; all reward amounts are at our discretion. Let us know as soon as possible upon discovery of a potential security issue, and we'll make every effort to quickly resolve the issue.
Ineligible methods
Vulnerabilities contingent on any of the following activities do not qualify for a reward in the bug bounty program:
- Social engineering
- DDoS attacks
- Spamming
- Any physical attacks against Warden property, data centers or employees
- Automated tools
- Compromising or misusing third party systems or services
Ineligible bugs
- Vulnerabilities already known to the public or to the Warden team, including previous findings from another participant in the bug bounty program.
- Vulnerabilities in outdated software from Warden, or which affect only outdated third party software.
- Bugs that are not reproducible.
- Bugs disclosed to other parties without consent from the Warden team.
- Issues which we cannot reasonably be expected to be able to do anything about.
- Cookies missing security flags (for non-sensitive cookies).
- Additional missing security controls often considered “best practice”, such as: Content Security Policy (CSP) HTTP header, HTTP Public Key Pinning (HPKP), Subresource Integrity, Referrer Policy.
- The following vulnerabilities in a vendor we integrate with: Cross-site Scripting (XSS), Cross-Site Request Forgery (CSRF), Cross Frame Scripting, Content Spoofing.
- Vulnerabilities only affecting users of outdated or un-patched browsers and platforms.
- Weak TLS and SSL ciphers (that we are already aware of).
Time to response
Please allow 5 business days for our reply. We may follow up with additional questions regarding how to reproduce the bug, and to qualify for a reward the investigator must respond to these in a timely manner.
Smart Contracts Scope
At this time, rewards will be paid out for vulnerabilities discovered in our core smart contracts for the WardenSwap Platform as listed below. Exploits may be grouped as follows: 1. Function-level (exploitable through a single entry-point) 2. Contract-level (combining multiple entry-points) 3. System-level (combining multiple contracts)
The bug bounty severity levels and rewards (smart contracts only) are as follows:
- Critical: up to $100,000 + NFT*
- High: up to $10,000 + NFT*
- Medium: up to $5,000 + NFT*
- Low: NFT*
*NFT souvenir: if the NFT system has not been released yet, we will send the NFT reward later once it is ready. We accept only smart contract vulnerabilities or bugs. All bounties will be paid in WAD tokens (at the USD rate at the time of payment).
Conclusion
Our vision is to create the Best Rate Engine for all mankind; together we can make the future and the world better!
Audit Report by CertiK
https://docs.wardenswap.com/bug-bounty
2021-11-27T14:05:23
CC-MAIN-2021-49
1637964358189.36
[]
docs.wardenswap.com
You're viewing Apigee Edge documentation. View Apigee X documentation. On Monday, August 7, 2017, we began releasing a new version of Apigee Edge for Public Cloud. Bugs fixed This release fixes several issues that improve management API availability. In addition, the following bug was fixed in this release. This list is primarily for users checking to see if their support tickets have been fixed. It's not designed to provide detailed information for all users.
https://docs.apigee.com/release/notes/170731-apigee-edge-public-cloud-release-notes-api-management-and-runtime?authuser=0&hl=ja
2021-11-27T14:37:18
CC-MAIN-2021-49
1637964358189.36
[]
docs.apigee.com
Functional Classification NOTE: If your sample has a high host background, please contact [email protected] before uploading your samples to CosmosID-HUB Microbiome. Our support team will assist you in de-hosting the raw sequencing reads and then uploading the data to the platform. High host content specimens include skin swabs, oral swabs, and similar sample types. Functional profiling from whole genome shotgun microbiome or metatranscriptomic sequencing provides crucial insights into the genomic potential of the underlying molecular, biochemical and metabolic activities of microbial communities. Understanding the functional potential of a microbial community also allows testing of hypotheses that link or associate specific molecular or biochemical activities to environmental and health-associated phenotypes. To help scientists explore and investigate these hypotheses, we are pleased to introduce the functional workflow in CosmosID-HUB Microbiome, which leverages the MetaCyc Pathways database and the GO Terms database to characterize the functional potential of the microbiome community. The single-sample view of the functional workflow provides a tabular view of both MetaCyc Pathways and GO Terms, along with a stacked bar chart and a donut chart, to aid in visual inspection of the functional capabilities of the microbiome population. Clicking on a Pathway ID or GO Terms ID will take you to that specific feature's description in the MetaCyc or GO Terms database. Technical Appendix
https://docs.cosmosid.com/docs/functional-analysis
2021-11-27T15:29:11
CC-MAIN-2021-49
1637964358189.36
[array(['https://p-AeFvB6.t2.n0.cdn.getcloudapp.com/items/Blux0e58/f0170723-1573-4a90-bdf2-3fd140d299bb.jpg?v=f8ad3045f62914ffba625ac02f5e12b6', None], dtype=object) ]
docs.cosmosid.com
Alerts monitoring The following tables can help you monitor different aspects of the existing alerts in the web application. This may be useful in case you want to have a general overview of the alerts in the system, check their parameters or spot potential errors. All domains siem.logtrust.alert.info In this table, you can find detailed information about all alerts triggered in the current domain. You can see below the most relevant columns included in this table along with a brief explanation. siem.logtrust.alert.error In this table, you can find detailed information about all the alert errors that occurred in the current domain, understanding an error as an event in which the conditions have been met but the alert has not been triggered. It is very similar to the siem.logtrust.alert.info table, except that this table focuses on the errors and excludes the alerts that were triggered. You can see below the most relevant columns included in this table along with a brief explanation. Self domain siem.logtrust.pilot.alerts This table collects all the alerts that meet the conditions to be triggered before they are examined for post-filters or anti-flooding. They are stored in the siem.logtrust.alert.info table if they are triggered after that, and in the siem.logtrust.alert.error table if they are not. siem.logtrust.backend.info This table collects the alerts affected by post-filters before being sent. siem.logtrust.alertengine.out This table collects the alerts affected by the anti-flooding policies before being sent. siem.logtrust.alertengine.alerts This table collects the triggered alerts, focusing on their delivery methods, which are shown under the AlertReceiver column.
https://docs.devo.com/confluence/ndt/v7.2.0/searching-data/monitoring-tables/alerts-monitoring
2021-11-27T14:39:59
CC-MAIN-2021-49
1637964358189.36
[]
docs.devo.com
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=1533376+0+/usr/local/www/mailindex/archive/2020/freebsd-questions/20200510.freebsd-questions
2021-11-27T14:15:21
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
In Grid Manager, you can use the IPv6 Net Map (network map) and List panels to manage your IPv6 network infrastructure. After you select a network container from the IPAM tab, Grid Manager displays it in the Net Map panel, by default. The Net Map panel provides a graphical view of your networks and has a number of features that simplify network management. The List panel displays the networks in table format. You can always switch your view of a network container between the Net Map and List panels. Grid Manager keeps track of which panel you last used. When you select a network container, Grid Manager displays it in the Net Map or List panel, depending on which one you last used. For information about each panel, see IPv4 Network Map and IPAM Home . You can use Grid Manager to manage IPv6 networks and their AAAA, PTR and host resource records. You can configure IPv6 netw orks and track IP address usage in those networks. You can also split and join IPv6 networks, when necessary. IPv6 Network Map After you select an IPv6 network container from the IPAM tab, Grid Manager displays it in the IPv6 Net Map (network map) panel, by default. Just like the IPv4 Net Map, the IPv6 Net Map provides a high-level view of the network address space. You can use Net Map to design and plan your network infrastructure, and to configure and manage individual networks. The Net Map panel presents a complete view of the network space, including the different types of networks that are in it and its unused address space. IP addresses that belong to a network are blocked off. Each color-coded block represents a network container, a leaf network, or a block of networks that are too small to be displayed individually in the map. For example, in a /64 or /96 network, networks smaller than /76 or /108 respectively and that are beside each other are represented as a multiple network block. In addition, the fill pattern of the blocks indicates their utilization. Therefore, you can quickly evaluate how many networks are in a network container, their relative sizes, utilization, and how much space you have left. As you mouse over areas of the map, it displays IP information about the area. Net Map also has a zoom feature that allows you to enlarge or reduce your view of a particular area. Figure 13.9 displays the network map of a 1111::/16 network, which is a network container that has network containers and leaf networks. Figure 13.9 IPv6 Network Map Displaying Network Information As shown in Figure 13.9, as you mouse over the map, Net Map displays IP information about the area. When you mouse over an unused area, Net Map displays the following information: - The start and end IP address - The largest possible network - The number of /64 networks that can fit in that space When you mouse over a network container, Net Map displays the following information: - Network address and netmask - The first and last IP address of the network - The number of networks in that block - IPAM utilization When you mouse over a network, Net Map displays the following information: - Network address and netmask - The first and last IP address of the network When you mouse over a block of multiple networks, Net Map displays the following information: - The start and end IP address of that block of networks - The number of networks in that block Zooming In and Out Use the zoom function to enlarge and reduce your view of a selected area. You can zoom in on any area in your network. 
You can zoom in on an area until it displays 128 addresses per row, for a total of 1024 addresses for the map. When you reach the last possible zoom level, the Zoom In icon in the Net Map task bar and the menu item are disabled. After you zoom in on an area, you can click the Zoom Controller icon to track where you zoomed in. The Zoom Controller lists all the areas that you zoomed in and updates its list dynamically. You can click an item on the list to view that area again. Click the Zoom Controller again to close it. To select an area and zoom in: - Right-click and select Zoom In, or click the Zoom In icon in the Net Map task bar. The pointer changes to the zoom in selector. - Select a starting point and drag to the end point. The starting point can be anywhere in the map. It does not have to be at the beginning of a network. Net Map displays a magnified view of the selected area after you release the mouse button. As you mouse over the zoomed in area, Net Map displays IP information about it. - You can do the following: - Select an area and zoom in again. - Add a network. If you zoom in on an area and click Add without selecting an open area first, Net Map selects the area where it can create the biggest possible network in that magnified area. - Select a network and perform any of the following operations: - Edit its properties. - Open it to display its IP List. - Delete it immediately, or schedule its deletion. - Right-click and select Zoom Out, or click the Zoom Out icon in the Net Map task bar. Each time you click Zoom Out, Net Map zooms out one level and the Zoom Controller is updated accordingly. Net Map Tasks From Net Map, you can create IPv6 networks, and evaluate and manage your network resources according to the needs of your organization. You can do the following: - Zoom in on specific areas, as described in Zooming In and Out. - Use the Go to function to find a network in the current zoom level of Net Map. - Add a network, as described in Adding a Network from Net Map. - Select a network and view IP address list, as described in Viewing IPv6 Data. - Select a network and edit its properties, as described in Modifying IPv4 and IPv6 Network Containers and Networks. - Split a network, as described in Splitting IPv6 Networks into Subnets. - Join networks, as described in Joining IPv6 Networks. - Delete one or multiple networks, as described in Discovering Networks (Under Network Insight only). - Switch to the List view of the network. For information, see IPv6 Network List. - When you select one or more networks in Net Map and then switch to the List view, the list displays the page with the first selected network. - If you select one or more networks in the List view and then switch to the Net Map view, the first network is also selected in Net Map. Although, if you select a network in the List view that is part of a Multiple Networks block in Net Map, it is not selected when you switch to the Net Map view. Adding a Network from Net Map When you create networks from Net Map, you can view the address space to which you are adding a network, so you can determine how much space is available and which IP addresses are not in use. When you mouse over an open area, Net Map displays useful information, such as the largest possible network that fits in that area. In addition, you can create networks without having to calculate anything. When you add a network, Net Map displays a netmask slider so you can determine the appropriate netmask for the size of the network that you need. 
As you move the slider, it displays network information, including the total number of addresses. After you select the netmask, you can even move the new network around the open area to select another valid start address. To add a network from the Net Map panel: - Do one of the following: - Click the Add icon. Net Map displays the netmask slider and outlines the open area that can accommodate the largest network. - Select an open area, and then click the Add icon. Net Map displays the netmask slider and outlines the largest network that you can create in the open area that you selected. - Move the slider to the desired netmask. You can move the slider to the netmask of the largest network that can be created in the open area. You can also move the slider to the smallest network that can be placed in the current zoom level of Net Map. As you move the slider, Net Map displays the netmask. The outline in the network map also adjusts as you move the slider. When you mouse over the outline, it displays the start and end address of the network. - After you set the slider to the desired netmask, you can drag the new network block around the open area to select a new valid starting address. You cannot move the block to a starting address that is invalid. - Click Launch Wizard to create the network. The Add Network wizard displays the selected network address and netmask. - You can add comments, automatically create reverse mapping zones, and edit the extensible attributes. (For information, see Adding IPv6 Networks.) - Save the configuration and click Restart if it appears at the top of the screen. Grid Manager updates Net Map with the newly created network. Viewing Network Details From Net Map, you can focus on a specific network or area and view additional information about it. If you have a network hierarchy of networks within network containers, you can drill down to individual leaf networks and view their IP address usage. - Select a network or area. - Click the Open icon. - If you selected a network container, Grid Manager displays it in the Net Map panel. You can drill down further by selecting a network or open area and clicking the Open icon again. - If you selected a block of multiple networks, Grid Manager displays the individual networks in the Net Map panel. You can then select a network or open area for viewing. - If you selected a leaf network, Grid Manager displays it in the Network List panel. - If you selected an open area, Grid Manager displays an enlarged view of that area in the Net Map panel. This is useful when you are creating small networks in an open area. IPv6 Network Li st The Network list panel is an alternative view of an IPv6 network hierarchy. For a given network, the panel shows all the networks of a selected network view in table format. A network list displays only the first-level subnets. It does not show further descendant or child subnets. You can open a subnet to view its child subnets. Subnets that contain child subnets are displayed as network containers. If the number of subnets in a network exceeds the maximum page size of the table, the network list displays the subnets on multiple pages. You can use the page navigation buttons at the bottom of the table to navigate through the pages of subnets. The IPAM home panel displays the following: - Network: The network address. - Comment: Information you entered about the network. 
- IPAM Utilization: For a network, this is the percentage based on the IP addresses in use divided by the total addresses in the network. You can use this information to verify if there is a sufficient number of available addresses in a network. The IPAM utilization is calculated approximately every 15 minutes. - Site: The site to which the IP address belongs. This is a predefined extensible attribute. - Active Users: The number of active users on the selected network. You can select the following columns for display: - Disabled: Indicates whether the network is disabled. - Leaf Network: Indicates whether or not the network is a leaf network. - Other available extensible attributes You can sort the list of subnets in ascending or descending order by columns. For information about customizing tables in Grid Manager, see Customizing Tables. You can also modify some of the data in the table. Double click a row of data, and either edit the data in the field or select an item from a drop-down list. Note that some fields are read-only. For more information about this feature, see Modifying Data in Tables. Tip: If you select a network from the list and switch to the Net Map panel, the network is also selected in the network map. Filtering the Network List You can filter the network list, so it displays only the networks you need. You can filter the list based on certain parameters, such as network addresses, comments and extensible attributes. When you expand the list of available fields you can use for the filter, note that the extensible attributes are those with a gray background. Splitting IPv6 Networks into Subnets You can create smaller subnets simultaneously within a network by splitting it. You do not have to configure each subnet individually. You can create smaller subnets with larger netmasks. A larger netmask defines a larger number of network addresses and a smaller number of IP addresses. Note that you cannot split a network that is part of a shared network. To split an IPv6 network: - From the Data Management tab, select the IPAM tab -> network check box, and then click Split from the Toolbar. - In the Split Network editor, do the following: - Address: Displays the network address. You cannot modify this field. - Net mask: Specify the appropriate netmask for each subnet. - IPv6 Prefix Collector Network: If you split a network with prefix delegations that are not tied to specific addresses, specify the network in which all prefix delegations are assigned. If you leave this field blank, the server assigns all prefix delegations that are not tied to specific addresses to the first network. - Immediately create: Select one of the following: - Only networks with ranges and fixed addresses: Adds only the networks that have DHCP ranges and fixed addresses. - All possible networks: Adds all networks that are within the selected netmasks. You can select this option only when you increase the CIDR by 8 bits. - Automatically create reverse-mapping zone: Select this check box to have the appliance automatically create reverse-mapping zones for the subnets. This function is enabled if the netmask of the network is a multiple of four, such as 4, 12 or 16. Joining IPv6 Networks Joining multiple networks into a larger network is the opposite of splitting a network. You can select a network and expand it into a larger network with a smaller netmask. A smaller netmask defines fewer networks while accommodating a larger number of IP addresses. 
Joining or expanding a network allows you to consolidate all of the adjacent networks into the expanded network. Adjacent networks are all networks that fall under the netmask of the newly-expanded network. To join or expand a network: - From the Data Management tab, select the IPAM tab -> network check box, and then click Join from the Toolbar. - In the Join Network editor, do the following: - Address: Displays the network address. You cannot modify this field. - Netmask: Enter the netmask of the expanded network. - Automatically create reverse-mapping zone: Select this check box to configure the expanded network to support reverse-mapping zones . The appliance automatically creates reverse-mapping zones only if the netmask is between /4 through /128, in increments of 4 (that is, /4, /8, /12, and so on until /128). - Click OK. This page has no comments.
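The splitting and joining described above are ultimately netmask arithmetic. The following sketch uses Python's standard ipaddress module (not the Infoblox API) to show how a /60 network splits into sixteen /64 subnets and how those subnets join back into the /60; the example prefix is hypothetical.

import ipaddress

# Hypothetical IPv6 network; Infoblox itself is driven through Grid Manager,
# this only shows the underlying netmask arithmetic.
net = ipaddress.ip_network("2001:db8:0:10::/60")

# "Splitting" a /60 into /64 subnets: a larger netmask means more, smaller networks.
subnets = list(net.subnets(new_prefix=64))
print(len(subnets))          # 16 subnets
print(subnets[0])            # 2001:db8:0:10::/64

# "Joining" back to a larger network: a smaller netmask covers more addresses.
joined = subnets[0].supernet(new_prefix=60)
print(joined)                # 2001:db8:0:10::/60
print(all(s.subnet_of(joined) for s in subnets))   # True: all 16 adjacent subnets fall under it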
https://docs.infoblox.com/display/NAG8/Managing+IPv6+Networks
2021-11-27T15:37:43
CC-MAIN-2021-49
1637964358189.36
[]
docs.infoblox.com
About Splunk IT Service Intelligence Splunk IT Service Intelligence (ITSI) is a scalable IT monitoring and analytics solution that provides actionable insight into the performance and behavior of your IT operations. ITSI is built on the Splunk operational intelligence platform and uses the search and correlation capabilities of the platform to enable you to collect, monitor, and report on data from IT devices, systems, and applications. As issues are identified, administrators can quickly investigate and resolve them. Use IT Service Intelligence to do the following: - Monitor the health of your services with the Service Analyzer - Triage and investigate issues using Episode Review - Create glass tables to visualize your IT and business services and their relationships - Troubleshoot issues using deep dives See also - IT Service Intelligence concepts and features in Administer Splunk IT Service Intelligence - Install Splunk IT Service Intelligence on a single instance in Install and Upgrade Splunk IT Service Intelligence Access Splunk IT Service Intelligence - Open a web browser and navigate to Splunk Web. - Log in with your username and password. - From the Apps list, select IT Service Intelligence.
https://docs.splunk.com/Documentation/ITSI/4.3.0/User/Overview
2021-11-27T15:52:12
CC-MAIN-2021-49
1637964358189.36
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Add custom correlation searches The Splunk App for PCI Compliance includes correlation searches that are used to identify threats to systems within the PCI cardholder data environment. These correlation searches have been mapped to the relevant sections of PCI DSS. You can create custom correlation searches from within the app and map them to the relevant PCI DSS sections for use with the app.
Create a custom correlation search
Create a custom correlation search using the Content Management page. For this example, create a correlation search for Splunk_DA-ESS_PCICompliance. - Go to Configure > Content Management.
Correlation searches are saved in a configuration file
The Splunk App for PCI Compliance saves the search to the correlationsearches.conf file in the local directory of the app defined in the application context for the search. In the steps above, the correlationsearches.conf file is placed in the /Applications/splunk/etc/apps/Splunk_DA-ESS_PCICompliance/local directory. The contents of correlationsearches.conf look like this:
[PCI - 1.3.3 - Unauthorized or Insecure Communication Permitted - Rule]
rule_name = Unauthorized or Insecure Communication Permitted
security_domain = network
severity = high
Map the PCI DSS controls
After you create a correlation search, map the correlation search to the relevant PCI DSS controls. This step requires file system access on the server. Splunk Cloud customers must work with Support to map the correlation search to the relevant PCI DSS controls. Perform these steps in the same directory as the correlationsearches.conf file.
- Copy the stanza name from the correlationsearches.conf file and paste it into the governance.conf file.
[PCI - 1.3.3 - Unauthorized or Insecure Communication Permitted – Rule]
- Add a compliance control mapping by adding a governance and control line under the correlation search stanza.
[PCI - 1.3.3 - Unauthorized or Insecure Communication Permitted – Rule]
compliance.0.governance = pci
compliance.0.control = 1.3.3
- (Optional) Add additional compliance control mappings in pairs. The first line indicates the compliance or governance standard. The second line indicates the control mapping for the standard.
[PCI - 1.3.3 - Unauthorized or Insecure Communication Permitted – Rule]
compliance.0.governance = pci
compliance.0.control = 1.3.3
compliance.1.governance = pci
compliance.1.control = 1.3.2
- Save the file. The results take effect the next time the correlation search matches and creates a notable event.
See Create new correlation searches in this manual for additional information.
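If you maintain many custom searches, the mapping step above can be scripted. The sketch below is not part of the Splunk App for PCI Compliance; it simply generates the same governance.conf stanza text shown above from a list of governance/control pairs and appends it to a local file. The output path is an assumption, and as noted above, Splunk Cloud customers must go through Support instead.

# Sketch of generating the governance.conf stanza shown above.
def governance_stanza(stanza_name, mappings):
    lines = [f"[{stanza_name}]"]
    for i, (governance, control) in enumerate(mappings):
        lines.append(f"compliance.{i}.governance = {governance}")
        lines.append(f"compliance.{i}.control = {control}")
    return "\n".join(lines) + "\n"

stanza = governance_stanza(
    "PCI - 1.3.3 - Unauthorized or Insecure Communication Permitted - Rule",
    [("pci", "1.3.3"), ("pci", "1.3.2")],
)
print(stanza)

# Append to a local governance.conf (path is an assumption; adjust for your install).
with open("governance.conf", "a", encoding="utf-8") as f:
    f.write(stanza)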
https://docs.splunk.com/Documentation/PCI/3.2.1/Install/Addacustomsearch
2021-11-27T14:59:52
CC-MAIN-2021-49
1637964358189.36
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
public class ProviderSignInAttempt extends Object implements Serializable
Callers can invoke addConnection(String, ConnectionFactoryLocator, UsersConnectionRepository) post-signup to establish a connection between a new user account and the provider account. For the latter case, existing users should sign in using their local application credentials and formally connect to the provider they also wish to authenticate with.
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public static final String SESSION_ATTRIBUTE
public ProviderSignInAttempt(Connection<?> connection)
public Connection<?> getConnection(ConnectionFactoryLocator connectionFactoryLocator)
connectionFactoryLocator - A ConnectionFactoryLocator used to look up the connection
https://docs.spring.io/spring-social/docs/1.1.x/apidocs/org/springframework/social/connect/web/ProviderSignInAttempt.html
2021-11-27T14:49:14
CC-MAIN-2021-49
1637964358189.36
[]
docs.spring.io
UAB Research Computing Day 2011
UAB 2011 Research Computing Day
- Date - 8:30 - 4pm, September 15, 2011, Agenda
- Date - September 16, 2011: 2011 HPC Boot Camp and Galaxy Workshop
- Place - Auditorium, Hill University Center (HUC)
- Membership
https://docs.uabgrid.uab.edu/w/index.php?title=UAB_Research_Computing_Day_2011&diff=prev&oldid=3003&printable=yes
2021-11-27T15:50:14
CC-MAIN-2021-49
1637964358189.36
[]
docs.uabgrid.uab.edu
Custom pipelines can use this method in order to perform any required pre-batch tasks for the given Game Object. It must return the texture unit the Game Object was assigned.
Parameters:
- The Game Object being rendered or added to the batch.
- Optional frame to use. Can override that of the Game Object.
Returns: The texture unit the Game Object has been assigned.
https://newdocs.phaser.io/docs/3.55.2/focus/Phaser.Renderer.WebGL.Pipelines.UtilityPipeline-setGameObject
2021-11-27T15:27:00
CC-MAIN-2021-49
1637964358189.36
[]
newdocs.phaser.io
Unix Tutorial #7: Scripting¶ Note Topics covered: wildcards, scripting Commands covered: awk Combining Commands¶ So far you have learned how to use for-loops and conditional statements to both automate and make decisions about when to run blocks of code. You’ll soon find, however, that large and complex blocks of code are tedious to write out by hand every time you want to run them. It is also difficult to debug a long string of code that you wrote in the Terminal. Instead, we can put everything into a script, or file that contains code. This allows you to make your code compact and easy to move between directories if you need to. It also makes debugging much easier. Downloading a Text Editor for Coding¶ Before we begin scripting, you should download an appropriate code editor. Windows users can download Notepad++, and Mac users should download TextWrangler from the Apple Store. It is important to use one of these rather than the default text editor; otherwise, you may run into problems with the carriage returns, which is demonstrated in a video here. Writing your First Script¶ Once you’ve downloaded TextWrangler, open it and write this code on the first line, also known as a shebang: #!/bin/bash. It signifies that the following code should be interpreted with the bash shell and follow bash syntax. Example of the shebang in a file edited in TextWrangler. The shebang is always written on the first line of the file starting with a pound sign and exclamation mark, followed by an absolute path to the shell that is used to interpret the code. Next, write one of the for-loops you saw previously, such as this: for i in 1 2 3; do echo $i; done It is good coding practice to indent the body of a for-loop or conditional statement, usually with a tab or a few spaces. This allows the eye to quickly see the structure of the code and guess where certain commands are located. It is also helpful to include comments with the pound sign: Anything written after the pound sign will not be interpreted by the shell, but is useful for the reader to know what the command is doing. For example, before the loop we could write a comment about how the following code will print the numbers 1 through 3. Some coders prefer to put a space between each major section of code; this is a stylistic choice that is up to you. Now click on File -> Save As and call it printNums.sh, with the .sh extension signifying that the file is a shell script. Save it to the Desktop. In a Terminal, navigate to the Desktop and then type bash printNums.sh to run it. You can also run the command by typing ./printNums.sh. This will run all of the code in the script, just as if you had typed it out by hand. This is a simple example, but you can see how you can add as many lines of code as you want. Running Larger Scripts¶ Let’s see how we can run a larger script containing many lines of code. Go to this link and click on make_FSL_Timings.sh. Click on the Raw button to see the raw text. You can either right click anywhere on the page and save this as a script, or you can copy and paste the code into TextWrangler. Save it as make_FSL_Timings.sh, and move it to the Flanker directory. Let’s take a look at what this code does. Notice that we have a shebang indicating that the script is written in Bash syntax; we also have comments after each pound sign marking the major sections of the code. 
The first block of code is a conditional statement that checks whether a file called subjList.txt exists; if it doesn’t, then list each subject directory and redirect that list of subjects to a file called subjList.txt. Wildcards¶ This brings up an important concept: Wildcards. There are two types of wildcards you will often use. The first is an asterisk, which looks for one or more characters. For example, navigate to the Flanker directory and type mkdir sub-100. If you type ls -d sub-* It will return every directory that starts with sub-, whether it is sub-01 or sub-100. The asterisk wildcard doesn’t discriminate whether the directory is six characters long or six hundred; it will match and return all of them, as long as they start with sub-. The other type of wildcard is the question mark, which matches a single occurrence of any character. If you type ls -d sub-??, it will only return directories with two integers after the dash - in other words, it will return sub-01 through sub-26, but not sub-100. Text Manipulation with Awk¶ The body of the for-loop contains something else that is new, a command called awk. Awk is a text processing command that prints columns from a text file. Here are the basics about how it works: If you go into a subjects’ func directory and type cat sub-08_task-flanker_run-1_events.tsv, it will return all of the text in that file. For our fMRI analysis, we want the columns that specify the onset time and duration, as well as the number 1 as a placeholder in the last column. You can redirect the output of this command into the input for the awk command by using a vertical pipe. Then, you can use conditional statements in awk to print the onset times for specific experimental conditions, and redirect that output into a corresponding text file. This is discussed in more detail in the book chapter in the link below. Now navigate back to the directory containing all the subjects, remove the sub-100 directory and run the script. It will take a few moments, and then create timing files for all of your subjects. You can inspect them using the cat command, and they should all look something like this: Scripts and wildcards give you more flexibility with your code, and can save you countless hours of labor - just imagine typing out each of the commands in our script for each subject. Later on we will use these scripts to automate the analysis of an entire dataset - but to do that, we will need to learn about one more command for manipulating text - the sed command. Video¶ This video will walk you through how to write a script using TextWrangler, and how to execute the script in the Terminal.
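The tutorial works in Bash and awk; for comparison, the same wildcard matching and column extraction can be sketched in Python. The directory and file names below come from the tutorial, while the trial_type column and the incongruent condition name are assumptions used only for illustration.

import csv
import fnmatch
import glob

# The tutorial's wildcards, expressed with Python's glob/fnmatch:
#   sub-*   -> any number of trailing characters (sub-01 ... sub-100)
#   sub-??  -> exactly two characters after the dash (sub-01 ... sub-26 only)
all_subjects = sorted(glob.glob("sub-*"))
two_digit_only = [d for d in all_subjects if fnmatch.fnmatch(d, "sub-??")]
print(all_subjects, two_digit_only)

# The awk step pulls the onset and duration columns out of the events file and
# writes "onset duration 1" lines for one condition; the same idea with csv:
condition = "incongruent"   # condition and column names are assumptions for illustration
with open("sub-08/func/sub-08_task-flanker_run-1_events.tsv", newline="") as tsv, \
     open(condition + ".txt", "w") as out:
    for row in csv.DictReader(tsv, delimiter="\t"):
        if row.get("trial_type") == condition:
            out.write(f"{row['onset']}\t{row['duration']}\t1\n")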
https://andysbrainbook.readthedocs.io/en/latest/unix/Unix_07_Scripting.html
2021-11-27T14:58:58
CC-MAIN-2021-49
1637964358189.36
[array(['../_images/TextWrangler_Shebang.png', '../_images/TextWrangler_Shebang.png'], dtype=object) array(['../_images/Wildcards_Demo.gif', '../_images/Wildcards_Demo.gif'], dtype=object) array(['../_images/OnsetFile_Output.png', '../_images/OnsetFile_Output.png'], dtype=object)]
andysbrainbook.readthedocs.io
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Get-ECSService -Cluster <String> -Include <String[]> -Service <String[]> -Select <String> -PassThru <SwitchParameter>
If TAGS is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response.
Get-ECSService -Service my-hhtp-service
This example shows how to retrieve details of a specific service from your default cluster.
Get-ECSService -Cluster myCluster -Service my-hhtp-service
This example shows how to retrieve details of a specific service running in the named cluster.
AWS Tools for PowerShell: 2.x.y.z
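If you are scripting in Python rather than PowerShell, the same ECS DescribeServices operation is available through boto3. The sketch below reuses the cluster and service names from the examples above and assumes your AWS credentials and region are already configured.

import boto3

ecs = boto3.client("ecs")   # credentials/region come from your AWS configuration

# Equivalent of: Get-ECSService -Cluster myCluster -Service my-hhtp-service -Include TAGS
response = ecs.describe_services(
    cluster="myCluster",
    services=["my-hhtp-service"],
    include=["TAGS"],        # omit this to leave tags out of the response
)

for svc in response["services"]:
    print(svc["serviceName"], svc["status"], svc.get("tags", []))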
https://docs.aws.amazon.com/powershell/latest/reference/items/Get-ECSService.html
2021-11-27T15:58:31
CC-MAIN-2021-49
1637964358189.36
[]
docs.aws.amazon.com
Use a custom finder. Create custom folders to streamline access to relevant data tables. They can also serve to limit the data tables certain roles can access. Select a custom finder by opening the finder dropdown list. Here's more information about working with custom finders:
https://docs.devo.com/confluence/ndt/v7.5.0/searching-data/accessing-data-tables/run-a-search-using-a-finder/use-a-custom-finder
2021-11-27T15:11:53
CC-MAIN-2021-49
1637964358189.36
[]
docs.devo.com
Hello, I have a scenario for using Azure Key vault. 1) I have stored a refresh token in Key Vault. Retrieved the token from key vault in ADF using the web activity. Call the service provider endpoint to generate the Access Token based on refresh token. 2) I want to store above generated Access token from ADF to Key Vault dynamically. How Can i do that? I went through many articles but did not find any solution on storing the information generated in ADF to Key Vault. Any help is much appreciated. Thanks
https://docs.microsoft.com/en-us/answers/questions/550751/storing-secret-token-information-from-adf-to-key-v.html
2021-11-27T15:56:10
CC-MAIN-2021-49
1637964358189.36
[]
docs.microsoft.com
Execution Model¶ Mint provides a mesh-aware Execution Model, based on the RAJA programming model abstraction layer. The execution model supports on-node fine-grain parallelism for mesh traversals, thereby enabling the implementation of computational kernels that are born parallel and portable across different processor architectures. Note To utilize NVIDIA GPUs, using the RAJA CUDA backend, Axom needs to be compiled with CUDA support and linked to a CUDA-enabled RAJA library. Consult the Axom Quick Start Guide for more information. The execution model consists of a set of templated functions that accept two arguments: - A pointer to a mesh object corresponding to one of the supported Mesh Types. - The kernel that defines the operations on the supplied mesh, which is usually specified by a C++11 Lambda Expression. Note Instead of a C++11 Lambda Expression, a C++ functor may also be used to encapsulate a kernel. However, in our experience, using C++11 functors usually requires more boilerplate code, which reduces readability and may potentially have a negative impact on performance. The Execution Model provides Node Traversal Functions, Cell Traversal Functions and Face Traversal Functions to iterate and operate on the constituent Nodes, Cells and Faces of the mesh respectively. The general form of these functions is shown in Fig. 21. Fig. 21 General form of the constituent templated functions of the Execution Model As shown in Fig. 21, the key elements of the functions that comprise the Execution Model are: - The Iteration Space: Indicated by the function suffix, used to specify the mesh entities to traverse and operate upon, e.g. the Nodes, Cells or Faces of the mesh. - The Execution Policy: Specified as the first, required, template argument to the constituent functions of the Execution Model. The Execution Policy specifies where and how the kernel is executed. - The Execution Signature: Specified by a second, optional, template argument to the constituent functions of the Execution Model. The Execution Signature specifies the type of arguments supplied to a given kernel. - The Kernel: Supplied as an argument to the constituent functions of the Execution Model. It defines the body of operations performed on the supplied mesh. See the Tutorial for code snippets that illustrate how to use the Node Traversal Functions, Cell Traversal Functions and Face Traversal Functions of the Execution Model. Execution Policy¶ The Execution Policy is specified as the first template argument and is required by all of the constituent functions of the Execution Model. Axom defines a set of high-level execution spaces, summarized in the table below. Internally, the implementation uses the axom::execution_space traits object to map each execution space to corresponding RAJA execution policies and bind the default memory space for a given execution space. For example, the default memory space for the axom::CUDA_EXEC execution space is unified memory, which can be accessed from both the host (CPU) and device (GPU). Execution Signature¶ The Execution Signature is specified as the second, optional template argument to the constituent functions of the Execution Model. The Execution Signature indicates the list of arguments that are supplied to the user-specified kernel. Note If not specified, the default Execution Signature is set to mint::xargs::index, which indicates that the supplied kernel takes a single argument that corresponds to the index of the corresponding iteration space, i.e., the loop index.
The list of currently available Execution Signature options is based on commonly employed access patterns found in various mesh processing and numerical kernels. However, the Execution Model is designed such that it can be extended to accommodate additional access patterns. mint::xargs::index - Default Execution Signature for all functions of the Execution Model - Indicates that the supplied kernel takes a single argument that corresponds to the index of the iteration space, i.e. the loop index. mint::xargs::ij/ mint::xargs::ijk - Applicable only with a Structured Mesh. - Used with Node Traversal Functions (mint::for_all_nodes()) and Cell Traversal Functions (mint::for_all_cells()). - Indicates that the supplied kernel takes the corresponding (i,j) or (i,j,k) indices, in 2D or 3D respectively, as additional arguments. mint::xargs::x/ mint::xargs::xy/ mint::xargs::xyz - Used with Node Traversal Functions (mint::for_all_nodes()). - Indicates that the supplied kernel takes the corresponding nodal coordinates, x in 1D, (x,y) in 2D and (x,y,z) in 3D, in addition to the corresponding node index, nodeIdx. mint::xargs::nodeids - Used with Cell Traversal Functions (mint::for_all_cells()) and Face Traversal Functions (mint::for_all_faces()). - Indicates that the specified kernel is supplied the constituent node IDs as an array argument to the kernel. mint::xargs::coords - Used with Cell Traversal Functions (mint::for_all_cells()) and Face Traversal Functions (mint::for_all_faces()). - Indicates that the specified kernel is supplied the constituent node IDs and corresponding coordinates as arguments to the kernel. mint::xargs::faceids - Used with the Cell Traversal Functions (mint::for_all_cells()). - Indicates that the specified kernel is supplied an array consisting of the constituent cell face IDs as an additional argument. mint::xargs::cellids - Used with the Face Traversal Functions (mint::for_all_faces()). - Indicates that the specified kernel is supplied the IDs of the two Cells abutting the given Face. By convention, for external boundary Faces, which are bound to a single cell, the second cell is set to -1. Warning Calling a traversal function with an unsupported Execution Signature will result in a compile time error.
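Mint itself is C++ and the snippet below is not Axom code; it is only a schematic Python sketch of the pattern described above, showing how a traversal function can hand different argument lists (the loop index, the logical (i,j) indices, or coordinates) to a user-supplied kernel depending on the requested signature. The toy mesh layout is an assumption.

# Schematic illustration only; the real implementation dispatches at compile time
# via template arguments and RAJA execution policies.
def for_all_nodes(mesh, kernel, xargs="index"):
    nx, ny = mesh["dims"]                      # a toy 2D structured mesh
    for j in range(ny):
        for i in range(nx):
            node_idx = j * nx + i
            if xargs == "index":               # analogous to mint::xargs::index
                kernel(node_idx)
            elif xargs == "ij":                # analogous to mint::xargs::ij
                kernel(node_idx, i, j)
            elif xargs == "xy":                # analogous to mint::xargs::xy
                x = mesh["origin"][0] + i * mesh["h"]
                y = mesh["origin"][1] + j * mesh["h"]
                kernel(node_idx, x, y)

mesh = {"dims": (3, 2), "origin": (0.0, 0.0), "h": 0.5}
for_all_nodes(mesh, lambda idx, i, j: print(f"node {idx} at logical ({i},{j})"), xargs="ij")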
https://axom.readthedocs.io/en/latest/axom/mint/docs/sphinx/sections/execution_model.html
2021-11-27T15:15:30
CC-MAIN-2021-49
1637964358189.36
[array(['../../../../../_images/execmodel.png', 'Execution Model'], dtype=object) ]
axom.readthedocs.io
TeamForge 19.3 Product Release TeamForge 19.3 is out in the market to make Agile Lifecycle Management much easier. TeamForge 19.3 brings you many more salient features and fixes, which include: Pre-Submit Webhooks for Tracker Artifacts Ad Hoc Query Support for Baseline and TeamForge Webhooks-based Event Broker Databases Documents Widget in My Workspace page Trackers - Attachment Reminder for Tracker Artifacts - Handling Simultaneous Updates to the Same Artifact - Support for Tracker Artifacts with Parent Artifacts in the Backlog Items Swimlane of Task Board Documents Beta version of the redesigned Documents list page is available now. The following features are added to the redesigned Documents list page (beta). Inclusion of Document Folders Monitor and Unmonitor Document Folders and Documents More Actions on Document Folders and Documents - Add a New Subfolder to a Document Folder - Rename Document Folders and Documents - Move/Copy Document Folders and Documents - Download Document Folders and Documents - Users Monitoring Document Folders and Documents - Set Documents as Favorites On-scroll Display of Document Folders and Documents Configure Default Document Columns - Save an Applied Column Configuration - Delete a Column Configuration Search Document Folders and Documents Recent Document Files Reports - Selection of Multiple Planning Folders in Tracker Reports File Releases - Audit/Change Log for File Releases TeamForge Webhooks-based Event Broker TeamForge Webhooks-based Event Broker is upgraded to verstion v4. The following features are introduced as part of this upgrade. - TOPIC Event Type - SYNC Event Type - QUEUE Event Type - Scripts and Filters TeamForge CLI Support for all methods of TeamForge related REST API calls—GET, PUT, PATCH, POST, DELETE, OPTIONS and HEAD. validjsoncommand has been introduced to validate JSON content. printjsoncommand has been introduced for pretty printing JSON content. GitAgile™—Enterprise Version Control - Download Folders from a Git Repository - Support for Relative Paths to Files, Folders, and Images in Markdown Files - Ignore Whitespaces in Code Diff View - Configurable Checkout Command for Git Repositories - Support for Unified Diff View of Images in Code Browser - Inclusion of LFS Data in Downloaded Zip Archives of Git Repositories and Repository Tags For more information, see TeamForge 19.3 Release Notes.
https://docs.collab.net/teamforge193.html
2021-11-27T15:32:21
CC-MAIN-2021-49
1637964358189.36
[]
docs.collab.net
BaseView.DocumentClosed Event Fires after a document has been closed. Namespace: DevExpress.XtraBars.Docking2010.Views Assembly: DevExpress.XtraBars.v21.2.dll Declaration public event DocumentEventHandler DocumentClosed Public Event DocumentClosed As DocumentEventHandler Event Data The DocumentClosed event's data class is DocumentEventArgs. The following properties provide information specific to this event: Remarks When a document is closed, it is destroyed. A document can be closed by calling the IBaseViewController.Close method, accessible via the BaseView.Controller object. An end-user can close a document by clicking the close (‘x’) button. To prevent a document from being closed or perform additional actions when a document is being closed, handle the BaseView.DocumentClosing event.
https://docs.devexpress.com/WindowsForms/DevExpress.XtraBars.Docking2010.Views.BaseView.DocumentClosed
2021-11-27T15:17:33
CC-MAIN-2021-49
1637964358189.36
[]
docs.devexpress.com
custom view A page presenting data in an arrangement, or view, that is not available in the standard release of FlexNet Manager Suite. This custom view is developed by experts, most often a consultant from Flexera or a partner company. Tip: It is always possible to configure the standard data views in ways that you prefer, and to save those settings so that the data is always available in your preferred form. This kind of preferred view is available for each operator individually. For example, you can: - Choose different columns to display in a list - Set the sort order, including complex sorts on multiple columns - Group data by any column - Apply your preferred filters to hide some records and highlight others. FlexNet Manager Suite (On-Premises) 2020 R2
https://docs.flexera.com/FlexNetManagerSuite2020R2/EN/WebHelp/glossaries/customView.html
2021-11-27T13:57:24
CC-MAIN-2021-49
1637964358189.36
[]
docs.flexera.com
Date: Sat, 23 Jul 2005 09:23:48 -0400 From: Hornet <[email protected]> To: [email protected] Subject: Re: Restrict Tunneling thru SSH Message-ID: <list.freebsd.questions#[email protected]> References: <list.freebsd.questions#[email protected]> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help On 7/22/05, Trevor Sullivan <[email protected]> wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: RIPEMD160 > > Hornet wrote: > > > On 7/21/05, Trevor Sullivan <[email protected]> wrote: > > > >> Hello list, I am curious as to whether or not it is possible to > >> restrict certain users from tunneling traffic through SSH. I > >> would like to be able to tunnel my own traffic, but provide user > >> logins that are restricted from accessing the rest of my inside > >> network. Is it possible to restrict this by user? Thanks > >> > >> Trevor > > > > I'm pretty sure it is an all or nothing config option in sshd.conf > > in the global sense. But you can make specific options for specific > > hosts. > > > So could I possibly restrict SSH tunneling by IP (host)? I guess my > concern is that if I create a user account, it will be able to tunnel > to other machines on my network w/o restriction. Is the way to do this > maybe a DMZ or separate VLAN? > > Trevor Yes, should be able to do this via your sshd config. I would recommend using webmin for this. I have not done this before, but it looks do able. Are your user going to be using ssh, or is this just a SMB box? If it is just a SMB box, then I would just set the shell account to "nologin" since that is separate from the SMB account. Also I guess you could set a up firewall and restrict the ports that can talk on the LAN. -Erik- _______________________________________________ [email protected] mailing list To unsubscribe, send any mail to "[email protected]" Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=2357686+0+/usr/local/www/mailindex/archive/2005/freebsd-questions/20050724.freebsd-questions
2021-11-27T15:32:45
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
Obtain and Set Cell Values in Code You can use the GridControl‘s methods to obtain and modify cell values. Refer to Obtaining Row Handles and Accessing and Identifying Columns for information on how to obtain row handles and identify columns. Obtain Cell Values Set Cell Values Note You can use methods above even if end users are not allowed to edit data. Once the cell’s value has been changed, the GridViewBase.CellValueChanged event is raised.
https://docs.devexpress.com/WPF/6150/controls-and-libraries/data-grid/data-editing-and-validation/modify-cell-values/obtain-and-set-cell-values-in-code
2021-11-27T15:42:08
CC-MAIN-2021-49
1637964358189.36
[]
docs.devexpress.com
Special update: SSDT Preview Update with SQL Server 2016 RC2 support
https://docs.microsoft.com/en-us/archive/blogs/ssdt/ssdt-preview-update-rc2
2021-11-27T14:35:38
CC-MAIN-2021-49
1637964358189.36
[]
docs.microsoft.com
ReceivedRepresenting The ReceivedRepresenting element identifies the principal in a delegate access scenario. <ReceivedRepresenting> <Mailbox/> </ReceivedRepresenting>.
https://docs.microsoft.com/en-us/exchange/client-developer/web-service-reference/receivedrepresenting
2021-11-27T14:43:17
CC-MAIN-2021-49
1637964358189.36
[]
docs.microsoft.com
Stories¶ Stories are a great way to add dynamic content around your site. Stories can be used for: - Homepage rotators/slideshows - Sponsor rotators - Featured content A story consists of the following basic fields: - Title - Description - Image - Link (internal or external) Stories can be set to activate and expire based on a start and end date. Stories may also be set to never expire.
https://tendenci.readthedocs.io/en/stable/topic-guides/stories.html
2021-11-27T13:46:04
CC-MAIN-2021-49
1637964358189.36
[]
tendenci.readthedocs.io
Available processors¶ This section presents a detailed description of all processors that are currently supported by the Auditory front-end framework. Each processor can be controlled by a set of parameters, which will be explained, and all default settings will be listed. Finally, a demonstration will be given, showing the functionality of each processor. The corresponding Matlab files are contained in the Auditory front-end folder /test and can be used to reproduce the individual plots. A full list of available processors can be displayed by using the command requestList. An overview of the commands for instantiating processors is given in Computation of an auditory representation. - Pre-processing (preProc.m) - Auditory filter bank - Inner hair-cell (ihcProc.m) - Adaptation (adaptationProc.m) - Auto-correlation (autocorrelationProc.m) - Rate-map (ratemapProc.m) - Spectral features (spectralFeaturesProc.m) - Onset strength (onsetProc.m) - Offset strength (offsetProc.m) - Binary onset and offset maps (transientMapProc.m) - Pitch (pitchProc.m) - Medial Olivo-Cochlear (MOC) feedback (mocProc.m) - Amplitude modulation spectrogram (modulationProc.m) - Spectro-temporal modulation spectrogram - Cross-correlation (crosscorrelationProc.m) - Interaural time differences (itdProc.m) - Interaural level differences (ildProc.m) - Interaural coherence (icProc.m) - Precedence effect (precedenceProc.m)
Pre-processing (preProc.m)¶ Prior to computing any of the supported auditory representations, the input signal stored in the data object can be pre-processed with one of the following elements: - DC bias removal - Pre-emphasis - RMS normalisation using an automatic gain control - Level scaling to a pre-defined SPL reference - Middle ear filtering The order of processing is fixed. However, individual stages can be activated or deactivated, depending on the requirement of the user. The output is a time domain signal representation that is used as input to the next processors. Moreover, a list of adjustable parameters is given in Table 4. The influence of each individual pre-processing stage, except for the level scaling, is illustrated in Fig. 7.
Auditory filter bank¶ Gammatone: a list of parameters is given in Table 5. The gammatone filter bank is illustrated in Fig. 8. Fig. 8 Time domain signal (left panel) and the corresponding output of the gammatone processor consisting of 16 auditory filters spaced between 80 Hz and 8000 Hz (right panel). DRNL filter bank: currently the implementation follows the model defined as CASP by [Jepsen2008], in terms of the detailed structure and operation, which is specified by the default argument 'CASP' for fb_model (see Fig. 9). Fig. 10 shows the BM stage output at 1 kHz characteristic frequency using the DRNL processor (on the right hand side), compared to that using the gammatone filter bank (left hand side), based on the right ear input signal shown in panel 1 of Fig. 7. Fig. 10 The gammatone processor output (left panel) compared to the output of the DRNL processor (right panel), based on the right ear signal shown in panel 1 of Fig. 7.
Inner hair-cell (ihcProc.m)¶ A list of parameters is given in Table 6. A particular model can be selected by using the parameter ihc_method. The effect of the IHC processor is demonstrated in Fig. 11. Fig. 11 Illustration of the envelope extraction processor. BM output (left panel) and the corresponding IHC model output using ihc_method = ’dau’ (right panel).
Adaptation (adaptationProc.m)¶ Table 7 and Table 8 list the parameters and their default values.
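Going back to the pre-processing stages listed at the start of this section, the following numpy sketch shows three of them (DC bias removal, pre-emphasis, and a simple RMS normalisation) on a toy signal. It is an illustration only; the coefficient values are common textbook choices rather than the defaults from Table 4, and the framework's actual AGC and middle-ear filtering are more elaborate.

import numpy as np

def pre_process(x, pre_emphasis_coeff=0.97, target_rms=1.0):
    # Sketch of three of the stages listed above; values are illustrative defaults.
    x = np.asarray(x, dtype=float)
    x = x - np.mean(x)                                        # DC bias removal
    x = np.append(x[0], x[1:] - pre_emphasis_coeff * x[:-1])  # pre-emphasis
    rms = np.sqrt(np.mean(x ** 2))                            # simple RMS normalisation
    if rms > 0:
        x = x * (target_rms / rms)
    return x

fs = 16000
t = np.arange(fs) / fs
sig = 0.1 + 0.5 * np.sin(2 * np.pi * 440 * t)                 # toy signal with a DC offset
out = pre_process(sig)
print(round(float(np.mean(out)), 6), round(float(np.sqrt(np.mean(out ** 2))), 3))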
12, where the output of the IHC model from the same input as used in the example of Inner hair-cell (ihcProc.m) (the right panel of Fig. 11) is compared to the adaptation output by running the script DEMO_Adaptation.m. Fig. 12 Illustration of the adaptation processor. IHC output (left panel) as the input to the adaptation processor and the corresponding output using adpt_model=’adt_dau’ (right panel). Auto-correlation ( autocorrelationProc.m)¶ Auto-correlation is an important computational concept that has been extensively studied in the context of predicting human pitch perception [Licklider1951], [Meddis1991]. To measure the amount of periodicity that is present in individual frequency channels, the ACF is computed in the FFT domain for short time frames based on the IHC representation. The unbiased ACF scaling is used to account for the fact that fewer terms contribute to the ACF at longer time lags. The resulting ACF is normalised by the ACF at lag zero to ensure values between minus one and one. The window size ac_wSizeSec determines how well low-frequency pitch signals can be reliably estimated and common choices are within the range of 10 milliseconds – 30 milliseconds. For the purpose of pitch estimation, it has been suggested to modify the signal prior to correlation analysis in order to reduce the influence of the formant structure on the resulting ACF [Rabiner1977]. This pre-processing can be activated by the flag ac_bCenterClip and the following nonlinear operations can be selected for ac_ccMethod: centre clip and compress ’clc’, centre clip ’cc’, and combined centre and peak clip ’sgn’. The percentage of centre clipping is controlled by the flag ac_ccAlpha, which sets the clipping level to a fixed percentage of the frame-based maximum signal level. A generalised ACF has been suggested by [Tolonen2000], where the exponent ac\_K can be used to control the amount of compression that is applied to the ACF. The conventional ACF function is computed using a value of ac\_K=2, whereas the function is compressed when a smaller value than 2 is used. The choice of this parameter is a trade-off between sharpening the peaks in the resulting ACF function and amplifying the noise floor. A value of ac\_K = 2/3 has been suggested as a good compromise [Tolonen2000]. A list of all ACF-related parameters is given in Table 9. Note that these parameters will influence the pitch processor, which is described in Pitch (pitchProc.m). A demonstration of the ACF processor is shown in Fig. 13, which has been produced by the scrip DEMO_ACF.m. It shows the IHC output in response to a 20 ms speech signal for 16 frequency channels (left panel). The corresponding ACF is presented in the upper right panel, whereas the SACF is shown in the bottom right panel. Prominent peaks in the SACF indicate lag periods which correspond to integer multiples of the fundamental frequency of the analysed speech signal. This relationship is exploited by the pitch processor, which is described in Pitch (pitchProc.m). Fig. 13 IHC representation of a speech signal shown for one time frame of 20 ms duration (left panel) and the corresponding ACF (right panel). The SACF summarises the ACF across all frequency channels (bottom right panel). Rate 10. The rate-map is demonstrated by the script DEMO_Ratemap and the corresponding plots are presented in Fig. 14. The IHC representation of a speech signal is shown in the left panel, using a bank of 64 gammatone filters spaced between 80 and 8000 Hz. 
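As a side illustration of the auto-correlation computation described above (frame-based, FFT-domain, unbiased scaling, normalisation to lag zero, and the generalised exponent corresponding to ac_K), the following NumPy sketch reproduces the idea. It is an illustration only, not part of the Auditory front-end, which is a Matlab framework; all names here are hypothetical.

import numpy as np

def frame_acf(frame, k=2.0, unbiased=True):
    # Illustrative sketch (not the Auditory front-end API): normalised
    # auto-correlation of one time frame from one frequency channel.
    frame = np.asarray(frame, dtype=float)
    n = frame.size
    nfft = int(2 ** np.ceil(np.log2(2 * n - 1)))   # zero-pad for a linear ACF
    mag = np.abs(np.fft.rfft(frame, nfft))
    # Generalised ACF: k = 2 gives the conventional ACF, smaller k compresses it.
    acf = np.fft.irfft(mag ** k, nfft)[:n]
    if unbiased:
        acf = acf / (n - np.arange(n))             # fewer terms at long lags
    return acf / acf[0] if acf[0] != 0 else acf    # normalise to lag zero

# Example: a 20 ms frame of a 500 Hz tone at 16 kHz (period = 32 samples).
fs = 16000
t = np.arange(int(0.02 * fs)) / fs
acf = frame_acf(np.sin(2 * np.pi * 500 * t))
print(round(acf[32], 2))   # 1.0: maximal periodicity at the 32-sample lag

Choosing k below 2 compresses the result, which is exactly the trade-off between sharper peaks and an amplified noise floor discussed above.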
The corresponding rate-map representation scaled in dB is presented in the right panel. Fig. 14 IHC representation of s speech signal using 64 auditory filters (left panel) and the corresponding rate-map representation (right panel). Spectral 11. The extraction of spectral features is demonstrated by the script Demo_SpectralFeatures.m, which produces the plots shown in Fig. 15. The complete set of 14 spectral features is computed for the speech signal shown in the top left panel. Whenever the unit of the spectral feature was given in frequency, the feature is shown in black in combination with the corresponding rate-map representation. Fig. 15 Speech signal and 14 spectral features that were extracted based on the rate-map representation. Onset 10, 12. The resulting onset strength expressed in decibel, which is a function of time frame and frequency channel, is shown in Fig. 16. The two figures can be replicated by running the script DEMO_OnsetStrength.m. When considering speech as an input signal, it can be seen that onsets appear simultaneously across a broad frequency range and typically mark the beginning of an auditory event. Fig. 16 Rate-map representation (left panel) of speech and the corresponding onset strength in decibel (right panel). Offset strength ( offsetProc.m)¶ Similarly to onsets, the strength of offsets can be estimated by measuring the frame-based decrease in logarithmically-scaled energy. As discussed in the previous section, the selected rate-map parameters as listed in Table 10 will influence the offset processor. Similar to the onset strength, the offset strength can be constrained to a maximum value of ons_maxOffsetdB = 30. A list of all parameters is presented in Table 12. The offset strength is demonstrated by the script DEMO_OffsetStrength.m and the corresponding figures are depicted in Fig. 17. It can be seen that the overall magnitude of the offset strength is lower compared to the onset strength. Moreover, the detected offsets are less synchronised across frequency. Fig. 17 Rate-map representation (left panel) of speech and the corresponding offset strength in decibel (right panel). Binary onset and offset maps ( transientMapProc.m)¶ The information about sudden intensity changes, as represented by onsets or offsets, can be combined in order to organise and group the acoustic input according to individual auditory events. The required processing is similar for both onsets and offsets, and is summarised by the term transient detection. To apply this transient detection based on the onset strength or offset strength, the user should use the request name ’onset_map’ or ’offset_map’, respectively. Based on the transient strength which is derived from the corresponding onset strength and offset strength processor (described in Onset strength (onsetProc.m) and Offset strength (offsetProc.m), a binary decision about transient activity is formed, where only the most salient information is retained. To achieve this, temporal and across-frequency constraints are imposed for the transient information. Motivated by the observation that two sounds are perceived as separated auditory events when the difference in terms of their onset time is in the range of 20 ms – 40 ms [Turgeon2002], transients are fused if they appear within a pre-defined time context. If two transients appear within this time context, only the stronger one will be considered. This time context can be adjusted by trm_fuseWithinSec. 
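The onset and offset strength described in the two preceding subsections boil down to the positive and negative frame-to-frame differences of the dB-scaled rate-map, limited to a maximum value. A minimal NumPy sketch follows; it is an illustration only, not the Matlab implementation, and the 30 dB limit mirrors the default maximum mentioned above.

import numpy as np

def onset_offset_strength(ratemap_db, max_db=30.0):
    # 'ratemap_db' is a [channels x frames] rate-map in dB. Onset strength is
    # the frame-to-frame increase in level, offset strength the decrease;
    # both are clipped to 'max_db'.
    diff = np.diff(ratemap_db, axis=1)        # level change per frame
    onset = np.clip(diff, 0.0, max_db)        # keep only increases
    offset = np.clip(-diff, 0.0, max_db)      # keep only decreases
    return onset, offset

# Example with a toy two-channel rate-map.
rm = np.array([[0.0, 10.0, 40.0, 42.0, 5.0],
               [5.0,  5.0, 35.0, 30.0, 2.0]])
on, off = onset_offset_strength(rm)
print(on[0])    # [10. 30.  2.  0.]  -> strong onset at the third frame
print(off[0])   # [ 0.  0.  0. 30.]  -> strong offset at the last frame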
Moreover, the minimum across-frequency context can be controlled by the parameters trm_minSpread. To allow for this selection, individual transients which are connected across multiple TF units are extracted using Matlab’s image labelling tool bwlabel . The binary transient map will only retain those transients which consists of at least trm_minSpread connected TF units. The salience of the cue can be specified by the detection thresholds trm_minStrengthdB. Whereas this thresholds control the required relative change, a global threshold excludes transient activity if the corresponding rate-map level is below a pre-defined threshold, as determined by trm_minValuedB. A summary of all parameters is given in Table 14. To illustrate the benefit of selecting onset and offset information, a rate-map representation is shown in Fig. 18 (left panel), where the corresponding onsets and offsets detected by the transientMapProc, through two individual requests ’onset_map’ and ’offset_map’, and without applying any temporal or across-frequency constraints are overlaid (respectively in black and white). It can be seen that the onset and offset information is quite noisy. When only retaining the most salient onsets and offsets by applying temporal and across-frequency constraints (right panel), the remaining onsets and offsets can be used as temporal markers, which clearly mark the beginning and the end of individual auditory events. Fig. 18 Detected onsets and offsets indicated by the black and white vertical bars. The left panels shows all onset and offset events, whereas the right panel applies temporal and across-frequency constraints in order to retain the most salient onset and offset events. Pitch ( pitchProc.m)¶ Following [Slaney1990], [Meddis2001], [Meddis1997], the sub-band periodicity analysis obtained by the ACF can be integrated across frequency by giving equal weight to each frequency channel. The resulting SACF reflects the strength of periodicity as a function of the lag period for a given time frame, as illustrated in Fig. 13. Based on the SACF representation, the most salient peak within the plausible pitch frequency range p_pitchRangeHz is detected for each frame in order to obtain an estimation of the fundamental frequency. In addition to the peak position, the corresponding amplitude of the SACF is used to reflect the confidence of the underlying pitch estimation. More specifically, if the SACF magnitude drops below a pre-defined percentage p_confThresPerc of its global maximum, the corresponding pitch estimate is considered unreliable and set to zero. The estimated pitch contour is smoothed across time frames by a median filter of order p_orderMedFilt, which aims at reducing the amount of octave errors. A list of all parameters is presented in Table 15. In the context of pitch estimation, it will be useful to experiment with the settings related to the non-linear pre-processing of the ACF, as described in Auto-correlation (autocorrelationProc.m). The task of pitch estimation is demonstrated by the script DEMO_Pitch and the corresponding SACF plots are presented in Fig. 19. The pitch is estimated for an anechoic speech signal (top left panel). The corresponding is presented in the top right panel, where each black cross represents the most salient lag period per time frame. The plausible pitch range is indicated by the two white dashed lines. 
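The peak picking, confidence check and median smoothing described above for the pitch processor can be sketched as follows. This is an illustration only; the parameters correspond loosely to p_pitchRangeHz, p_confThresPerc and p_orderMedFilt, and the function is not part of the Auditory front-end.

import numpy as np
from scipy.signal import medfilt

def pitch_from_sacf(sacf, fs, f_range=(80.0, 400.0), conf_thres=0.7, med_order=3):
    # 'sacf' is a [frames x lags] summary auto-correlation.
    lag_min = int(fs / f_range[1])                   # shortest plausible period
    lag_max = int(fs / f_range[0])                   # longest plausible period
    region = sacf[:, lag_min:lag_max]
    best_lag = region.argmax(axis=1) + lag_min       # most salient peak per frame
    best_val = region.max(axis=1)
    pitch = fs / best_lag.astype(float)
    pitch[best_val < conf_thres * sacf.max()] = 0.0  # unreliable frames -> unvoiced
    return medfilt(pitch, med_order)                 # reduce octave errors

Frames whose SACF maximum falls below the confidence threshold are returned as zero, matching the processor's convention of marking unvoiced frames.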
The confidence measure of each individual pitch estimates is shown in the bottom left panel, which is used to set the estimated pitch to zero if the magnitude of the SACF is below the threshold. The final pitch contour is post-processed with a median filter and shown in the bottom right panel. Unvoiced frames, where no pitch frequency was detected, are indicated by NaN‘s. Fig. 19 Time domain signal (top left panel) and the corresponding SACF (top right panel). The confidence measure based on the SACF magnitude is used to select reliable pitch estimates (bottom left panel). The final pitch estimate is post-processed by a median filter (bottom right panel). Medial 16. 20. 20. Amplitude modulation spectrogram ( modulationProc.m)¶ The detection of envelope fluctuations is a very fundamental ability of the human auditory system which plays a major role in speech perception. Consequently, computational models have tried to exploit speech- and noise specific characteristics of amplitude modulations by extracting so-called amplitude modulation spectrogram (AMS)features with linearly-scaled modulation filters [Kollmeier1994], [Tchorz2003], [Kim2009], [May2013a], [May2014a], [May2014b]. The use of linearly-scaled modulation filters is, however, not consistent with psychoacoustic data on modulation detection and masking in humans [Bacon1989], [Houtgast1989], [Dau1997a], [Dau1997b], [Ewert2000]. As demonstrated by [Ewert2000], the processing of envelope fluctuations can be described effectively by a second-order band-pass filter bank with logarithmically-spaced centre frequencies. Moreover, it has been shown that an AMS feature representation based on an auditory-inspired modulation filter bank with logarithmically-scaled modulation filters substantially improved the performance of computational speech segregation in the presence of stationary and fluctuating interferers [May2014c]. In addition, such a processing based on auditory-inspired modulation filters has recently also been successful in speech intelligibility prediction studies [Joergensen2011], [Joergensen2013]. To investigate the contribution of both AMS feature representations, the amplitude modulation processor can be used to extract linearly- and logarithmically-scaled AMS features. Therefore, each frequency channel of the IHC representation is analysed by a bank of modulation filters. The type of modulation filters can be controlled by setting the parameter ams_fbType to either ’lin’ or ’log’. To illustrate the difference between linear linearly-scaled and logarithmically-scaled modulation filters, the corresponding filter bank responses are shown in Fig. 21. The linear modulation filter bank is implemented in the frequency domain, whereas the logarithmically-scaled filter bank is realised by a band of second-order IIR Butterworth filters with a constant-Q factor of 1. The modulation filter with the lowest centre frequency is always implemented as a low-pass filter, as illustrated in the right panel of Fig. 21. Fig. 21 Transfer functions of 15 linearly-scaled (left panel) and 9 logarithmically-scaled (right panel) modulation filters. Similarly to the gammatone processor described in Gammatone (gammatoneProc.m), there are different ways to control the centre frequencies of the individual modulation filters, which depend on the type of modulation filters ams_fbType = 'lin' - Specify ams_lowFreqHz, ams_highFreqHzand ams_nFilter. The requested number of filters ams_nFilterwill be linearly-spaced between ams_lowFreqHzand ams_highFreqHz. 
If ams_nFilteris omitted, the number of filters will be set to 15 by default. ams_fbType = 'log' - Directly define a vector of centre frequencies, e.g. ams_cfHz = [4 8 16 ...]. In this case, the parameters ams_lowFreqHz, ams_highFreqHz, and ams_nFilterare ignored. - Specify ams_lowFreqHzand ams_highFreqHz. Starting at ams_lowFreqHz, the centre frequencies will be logarithmically-spaced at integer powers of two, e.g. 2^2, 2^3, 2^4 ... until the higher frequency limit ams_highFreqHzis reached. - Specify ams_lowFreqHz, ams_highFreqHzand ams_nFilter. The requested number of filters ams_nFilterwill be spaced logarithmically as power of two between ams_lowFreqHzand ams_highFreqHz. The temporal resolution at which the AMS features are computed is specified by the window size ams_wSizeSec and the step size ams_hSizeSec. The window size is an important parameter, because it determines how many periods of the lowest modulation frequencies can be resolved within one individual time frame. Moreover, the window shape can be adjusted by ams_wname. Finally, the IHC representation can be downsampled prior to modulation analysis by selecting a downsampling ratio ams_dsRatio larger than 1. A full list of AMS feature parameters is shown in Table 17. The functionality of the AMS feature processor is demonstrated by the script DEMO_AMS and the corresponding four plots are presented in Fig. 22. The time domain speech signal (top left panel) is transformed into a IHC representation (top right panel) using 23 frequency channels spaced between 80 and 8000 Hz. The linear and the logarithmic AMS feature representations are shown in the bottom panels. The response of the modulation filters are stacked on top of each other for each IHC frequency channel, such that the AMS feature representations can be read like spectrograms. It can be seen that the linear AMS feature representation is more noisy in comparison to the logarithmically-scaled AMS features. Moreover, the logarithmically-scaled modulation pattern shows a much higher correlation with the activity reflected in the IHC representation. Fig. 22 Speech signal (top left panel) and the corresponding IHC representation (top right panel) using 23 frequency channels spaced between 80 and 8000 Hz. Linear AMS features (bottom left panel) and logarithmic AMS features (bottom right panel). The response of the modulation filters are stacked on top of each other for each IHC frequency channel, and each frequency channel is visually separated by a horizontal black line. The individual frequency channels, ranging from 1 to 23, are labels at the left hand side. Spect. 23.. Fig. 23 Real part of 41 spectro-temporal Gabor filters. The Gabor feature processor is demonstrated by the script DEMO_GaborFeatures.m, which produces the two plots shown in Fig. 24.. Fig. 24 Rate-map representation of a speech signal (left panel) and the corresponding output of the Gabor feature processor (right panel). Cross 18.. 25.. Fig. 25 Left and right ear signals shown for one time frame of 20 ms duration (left panel) and the corresponding CCF (right panel). The SCCF summarises the CCF across all auditory channels (bottom right panel). Inter 18). The ITD representation is computed by using the request entry ’itd’. The ITD processor is demonstrated by the script DEMO_ITD.m, which produces two plots as shown in Fig. 26.. Fig. 26 Binaural speech signal (left panel) and the estimated ITD in ms shown as a function of time frames and frequency channels. Inter 19. 
The ILD processor is demonstrated by the script DEMO_ILD.m and the resulting plots are presented in Fig. 27.. Fig. 27 Binaural speech signal (left panel) and the estimated ILD in dB shown as a function of time frames and frequency channels. Inter. 28.. 28). Precedence effect ( precedenceProc.m)¶ The precedence effect describes the ability of humans to fuse and localize the sound based on the first-arriving parts, in the presence of its successive version with a time delay below an echo-generating threshold [Wallach1949]. The effect of the later-arriving sound is suppressed by the first part in the localization process. The precedence effect processor in Auditory front-end models this, with the strategy based on the work of [Braasch2013]. The processor detects and removes the lag from a binaural input signal with a delayed repetition, by means of an autocorrelation mechanism and deconvolution. Then it derives the ITD and ILD based on these lag-removed signals. The input to the precedence effect processor is a binaural time-frequency signal chunk from the gammatone filterbank. Then for each chunk a pair of ITD and ILD values is calculated as the output, by integrating the ITDs and ILDs across the frequency channels according to the weighted-image model [Stern1988], and through amplitude-weighted summation. Since these ITD/ILD calculation methods of the precedence effect processor are different from what are used for the Auditory front-end ITD and ILD processors, the Auditory front-end ITD and ILD processors are not connected to the precedence effect processor. Instead the steps for the correlation analyses and the ITD/ILD calculation are coded inside the processor as its own specific techniques. Table 20 lists the parameters needed to operate the precedence effect processor. Fig. 29 shows the output from a demonstration script DEMO_precedence.m. The input signal is a 800-Hz wide bandpass noise of 400 ms length, centered at 500 Hz, mixed with a reflection that has a 2 ms delay, and made binaural with an ITD of 0.4 ms and a 0-dB ILD. During the processing, windowed chunks are used as the input, with the length of 20 ms. It can be seen that after some initial confusion, the processor estimates the intended ITD and ILD values as more chunks are analyzed. Fig. 29 Left panel: band-pass input noise signal, 400 ms long (only the first 50 ms is shown), 800 Hz wide, centered at 500 Hz, mixed with a reflection of a 2-ms delay, and made binaural with an of 0.4 ms ITD and ILD of 0 dB. Right panel: estimated ITD and ILD shown as a function of time frames.
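To summarise the binaural cues covered in this section, the following sketch estimates an ITD and an ILD for a single time frame of a single frequency channel. It is an illustration only, not the Matlab implementation; the 1.1 ms lag limit is an assumed plausible range.

import numpy as np

def itd_ild(left, right, fs, max_lag_ms=1.1):
    # ITD: lag of the cross-correlation maximum within +/- max_lag_ms.
    # ILD: left/right energy ratio in dB. A positive ITD here means the
    # right-ear signal lags the left-ear signal.
    max_lag = int(round(max_lag_ms * 1e-3 * fs))
    lags = np.arange(-max_lag, max_lag + 1)
    xcorr = np.correlate(right, left, mode='full')
    mid = len(left) - 1                               # index of lag zero
    itd = lags[np.argmax(xcorr[mid - max_lag: mid + max_lag + 1])] / fs
    eps = np.finfo(float).eps
    ild = 10.0 * np.log10((np.sum(left ** 2) + eps) / (np.sum(right ** 2) + eps))
    return itd, ild

# Example: the right channel is a 0.5 ms delayed, 6 dB quieter copy of the left.
fs = 16000
t = np.arange(320) / fs
left = np.sin(2 * np.pi * 500 * t)
right = 0.5 * np.roll(left, int(0.0005 * fs))
print(itd_ild(left, right, fs))       # roughly (0.0005, 6.02)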
http://docs.twoears.eu/en/1.3/afe/processors/
2019-05-19T08:45:58
CC-MAIN-2019-22
1558232254731.5
[array(['../../_images/Pre_Proc.png', '../../_images/Pre_Proc.png'], dtype=object) array(['../../_images/Gammatone.png', '../../_images/Gammatone.png'], dtype=object) array(['../../_images/DRNL_Diagram.png', '../../_images/DRNL_Diagram.png'], dtype=object) array(['../../_images/DRNLs.png', '../../_images/DRNLs.png'], dtype=object) array(['../../_images/IHC.png', '../../_images/IHC.png'], dtype=object) array(['../../_images/IHCadapt.png', '../../_images/IHCadapt.png'], dtype=object) array(['../../_images/ACF.png', '../../_images/ACF.png'], dtype=object) array(['../../_images/Ratemap.png', '../../_images/Ratemap.png'], dtype=object) array(['../../_images/SpecFeatures.png', '../../_images/SpecFeatures.png'], dtype=object) array(['../../_images/OnsetStrength.png', '../../_images/OnsetStrength.png'], dtype=object) array(['../../_images/OffsetStrength.png', '../../_images/OffsetStrength.png'], dtype=object) array(['../../_images/OnOffset.png', '../../_images/OnOffset.png'], dtype=object) array(['../../_images/Pitch.png', '../../_images/Pitch.png'], dtype=object) array(['../../_images/MOC.png', '../../_images/MOC.png'], dtype=object) array(['../../_images/ModFB.png', '../../_images/ModFB.png'], dtype=object) array(['../../_images/AMS.png', '../../_images/AMS.png'], dtype=object) array(['../../_images/Gabor_2D.png', '../../_images/Gabor_2D.png'], dtype=object) array(['../../_images/Gabor.png', '../../_images/Gabor.png'], dtype=object) array(['../../_images/CCF.png', '../../_images/CCF.png'], dtype=object) array(['../../_images/ITD.png', '../../_images/ITD.png'], dtype=object) array(['../../_images/ILD.png', '../../_images/ILD.png'], dtype=object) array(['../../_images/IC.png', '../../_images/IC.png'], dtype=object) array(['../../_images/Precedence.png', '../../_images/Precedence.png'], dtype=object) ]
docs.twoears.eu
The Content Library Content is viewed, edited, created and organised in the Content Library that is opened when you choose the Production view tab in the Dynamic Content app. On this page we'll provide an overview of the features available from the Content Library and provide links to pages where you can find more information. On this page The Content Library window is shown in the example below. In the left hand pane a list of the repositories available to you is displayed. In the example there are two repositories, one labelled "content" and one labelled "slots". If you're a content producer, you'll usually be working in the content repositories and may only have read only access to the ones containing slots. In the image below, the "Content" repository is selected. If you're working on several projects, you'll probably have access to more than one content repository, each containing its own content and allowing you to create particular types of content. The Content Library shows the contents of the selected repository and if you have content organised into folders, will show the contents of the selected folder. Content library cards Each piece of content in the Content Library is represented by a library card. This card displays a preview of the content it contains, provides access to a contextual menu with items for managing the content, and also includes various icons showing the content’s publish status. If card previews are not displayed when you open the Content Library, you might need to check your settings. See the pre-requisites section at the end of this page for more details. The card menu To display the contextual menu for a content item, hover over or select the item's card and click to open the menu represented by the ellipsis ("…"). The menu is shown below. From this menu you can: View the content to edit it. See Viewing and editing content. You can also double-click anywhere in a card to open the content item for editing. Rename the content. This displays a dialog allowing you to give the content a new name. For more content saving options see Saving content Copy the content. Creates a copy of the content item and adds it to the Content Library with the name you've chosen. For more details see Copying content Get the content id. Each content item has its own unique id that is used by developers to retrieve this item from Dynamic Content. Developers can find more information in Slot examples and Consuming content pages in the integration section. Publish the content. Publish this content and any linked content items. For more information about publishing content see Publishing content. Archive the content. Move this content item to the archive folder in the current repository. For more information about archiving content see Archiving and unarchiving content. Publish status icons If a content item has been published, then its card will show an icon reflecting its published status. A green tick indicates that the latest version of the item has been published, as shown in the image below. A green cloud icon indicates that an earlier version of the content has been published and the content has been changed since its publication. Content can either be published directly from the Content Library or added to a slot in an edition and scheduled for publication. If you are making use of the planning and scheduling features, then in order for content to be published, it must first be saved to an edition. 
This makes the current version of the content available to planners to schedule it for publishing. Once the slot containing this content item is published, the content item's publish status will change to a green tick. However, if the content is subsequently edited, it will create a new version, so the published version will no longer be the current one and the status icon will change to the green cloud. You can find out more about content versioning and revision history on the Revision history page. Making content available to be scheduled for publishing is covered in Saving content to an edition. Switching between grid and list view The Content Library can either viewed as a grid, where each item of content is represented by its card showing a preview of its content, or as a list. The functionality provided by both views is the same so you can choose the one that best fits in with your workflow. To switch from grid view to list view, click the list icon, as highlighted below. The list view is now shown, showing information about each content item's content type, creation and modification date. You can change the sort order in both the grid and list view, as shown in sorting. To open an item for editing, double-lick anywhere within that item's entry in the list or choose "View" from the contextual menu that will appear on the right hand side of the item when its hovered over or selected. Sorting content You can change the way that content items are sorted by choosing an option from the sort menu. Currently content can be sorted in ascending and descending order by modification and creation dates, content name and content type. By default content is sorted by modification date, with the most recently updated items shown first. In the example below, we've chosen to sort content items in ascending order alphabetically. The Content Library is refreshed, now showing the content items sorted in ascending alphabetical order by name. Filtering. Searching You can search for content containing some specified text, searching in the content title, within the fields of the content itself, or the content id. The search is not case sensitive. In the example below, we're searching for "summer", so we enter the text in the search box. The content items that match the search result are returned, ordered by relevance. Search matches on the content name will take precedence over matches in the item's content. Content can also be organised into folders. You can find more details of folders on the Organising content page. Card preview pre-requisites If you see an error message rather than a card preview for items in the Content Library, you may need to check your setup and contact your administrator: - You must have a virtual staging environment specified in your settings in order to show previews for any content. - The current user's IP address must be in the whitelist of approved IP addresses in order for the card preview to be displayed Video: The Content Library The following video shows how content is organised in the Content Library, demonstrates how to rename and copy items and walks through searching for and organising content in folders.
https://docs.amplience.net/production/contentlibrary.html
2019-05-19T09:24:04
CC-MAIN-2019-22
1558232254731.5
[]
docs.amplience.net
EventInformation Describes an EC2 Fleet or Spot Fleet event. Contents - eventDescription The description of the event. Type: String Required: No - eventSubType The event. The following are the error events: [...] The following are the instanceChange events: launched - A request was fulfilled and a new instance was launched. terminated - An instance was terminated by the user. The following are the information events: [...] Type: String Required: No - instanceId The ID of the instance. This information is available only for instanceChange events. Type: String Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
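As a hedged illustration, the history records that contain this EventInformation structure can be retrieved with the AWS SDK for Python (boto3); the Spot Fleet request ID and region below are placeholders.

import datetime
import boto3

# Sketch: read the event history of a Spot Fleet request and print the
# EventInformation fields described above. EC2 Fleet has an analogous
# describe_fleet_history call.
ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_spot_fleet_request_history(
    SpotFleetRequestId="sfr-0123456789abcdef0",    # placeholder ID
    StartTime=datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=1),
)
for record in resp.get("HistoryRecords", []):
    info = record.get("EventInformation", {})
    print(record["Timestamp"], record["EventType"],
          info.get("EventSubType"), info.get("InstanceId"),
          info.get("EventDescription"))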
https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EventInformation.html
2019-05-19T09:04:11
CC-MAIN-2019-22
1558232254731.5
[]
docs.aws.amazon.com
The tutorial package will deploy the application files and data and set up a new SQL database. If your deployment is using tables in an existing database, you can skip Step 1. To complete this step, you must have SQL Server Management Studio installed. 1. Open SQL Server Management Studio and connect to the database server instance. You'll need to use SQL Server Management Studio 17 (64-bit) for the latest version of Visual LANSA. Follow the steps shown below. 2. Expand the Databases group so you can see which database names have already been used. 3. Right-click Databases and select New Database from the context menu. 4. Enter a suitable name for the new database. 5. Click OK to create the new database using the default configuration. Note: The Visual LANSA install configures the SQL Server database service to use Integrated Windows Authentication.
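If you prefer to script this step instead of using the SSMS dialogs, the following sketch creates the database over ODBC. It assumes the Microsoft ODBC Driver 17 for SQL Server is installed and uses Integrated Windows Authentication, matching the note above; the server and database names are placeholders.

import pyodbc

# CREATE DATABASE cannot run inside a transaction, so open the connection
# with autocommit enabled.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;"
    "Trusted_Connection=yes;",          # Integrated Windows Authentication
    autocommit=True,
)
cur = conn.cursor()

# List existing databases first, mirroring step 2.
cur.execute("SELECT name FROM sys.databases")
existing = {row.name for row in cur.fetchall()}
print(sorted(existing))

new_db = "LANSA_TUTORIAL"               # placeholder name
if new_db not in existing:
    cur.execute(f"CREATE DATABASE [{new_db}]")   # default configuration
conn.close()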
https://docs.lansa.com/14/en/lansa022/content/lansa/vldtoolt_0645.htm
2019-05-19T08:26:15
CC-MAIN-2019-22
1558232254731.5
[]
docs.lansa.com
Wireless WAN¶ A wireless card in a firewall running pfSense can be used as the primary WAN interface or an additional WAN in a multi-WAN deployment. Interface assignment¶ If the wireless interface has not yet been assigned, there are two possible choices: Add it as an additional OPT interface or reassign it as WAN. Before starting, create the wireless instance as described in Creating and Managing Wireless Instances if it does not already exist. When working as a WAN, it must use Infrastructure mode (BSS). To add the interface as a new OPT interface: - Browse to Interfaces > (assign) - Select the wireless interface from the Available network ports drop-down below the other interfaces - Click Add to add the interface as an OPT interface To reassign the wireless interface as WAN: - Browse to Interfaces > (assign) - Select the wireless interface as WAN - Click Save Figure Wireless WAN Interface Assignment shows an Atheros card assigned as WAN. Wireless WAN Interface Assignment Configuring the wireless network¶ Most wireless WANs need only a handful of options set, but specifics vary depending on the Access Point (AP) to which this client interface will connect. - Browse to the Interfaces menu for the wireless WAN interface, for example Interfaces > WAN - Select the type of configuration (DHCP, Static IP, etc.) - Scroll down to Common Wireless Configuration - Set the Standard to match the AP, for example 802.11g - Select the appropriate Channel to match the AP - Scroll down to Network-specific Wireless Configuration - Set the Mode to Infrastructure (BSS) mode - Enter the SSID for the AP - Configure encryption such as WPA2 (Wi-Fi Protected Access) if in use by the AP - Review the remaining settings if necessary and select any other appropriate options to match the AP - Click Save - Click Apply Changes Checking wireless status¶ Browse to Status > Interfaces to see the status of the wireless interface. If the interface has successfully associated with the AP it will be indicated on the status page. A status of associated means the interface has connected to the AP successfully, as shown in Figure Associated Wireless WAN Interface Associated Wireless WAN Interface If the interface status shows No carrier, it was unable to associate. Figure No carrier on wireless WAN shows an example of this, where the antenna was disconnected so it could not connect to a wireless network that was some distance away. No carrier on wireless WAN Showing available wireless networks and signal strength¶ The wireless access points visible by the firewall may be viewed by navigating to Status > Wireless as shown in Figure Wireless Status. A wireless interface must be configured before this menu item will appear. Wireless Status
https://docs.netgate.com/pfsense/en/latest/book/wireless/wireless-wan.html
2019-05-19T08:31:37
CC-MAIN-2019-22
1558232254731.5
[array(['../_images/wifi-wan-interface-assignments.png', '../_images/wifi-wan-interface-assignments.png'], dtype=object) array(['../_images/wifi-wan-ath0-associated.png', '../_images/wifi-wan-ath0-associated.png'], dtype=object) array(['../_images/wifi-wan-ath0-no-carrier.png', '../_images/wifi-wan-ath0-no-carrier.png'], dtype=object) array(['../_images/wifi-wan-status-wireless.png', '../_images/wifi-wan-status-wireless.png'], dtype=object)]
docs.netgate.com
Welcome to the WSO2 Carbon 4.4.3 documentation! WSO2 Carbon is the award-winning, light-weight, service-oriented platform for all WSO2 products. It is 100% open source and is delivered under Apache License 2.0. Consisting of a collection of OSGi bundles, WSO2 Carbon hosts components for integration, security, clustering, governance, statistics, [...] Space Operations, and then click one of the export options. To print the documentation, export to PDF (generates only one PDF at a time) and then use the PDF print options.
https://docs.wso2.com/exportword?pageId=48285855
2019-05-19T08:24:29
CC-MAIN-2019-22
1558232254731.5
[]
docs.wso2.com
c2cgeoportal supports WMS Time layers. When the time is enabled for a layer group, a slider is added to this group in the layer tree which enables changing the layer time. Most of the configuration is automatically extracted from the mapfile. But there is also some configuration to do in the administration interface. In the mapfile, a WMS Time layer is configured with the help of the layer metadata: - wms_timeextent - wms_timeitem - wms_timedefault c2cgeoportal uses the wms_timeextent to configure the slider. Two different formats are supported to define the time: - min/max/interval - value1,value2,value3,... The format min/max/interval allows specifying a time range by giving a start date, an end date and an interval between the time stops. The format value1,value2,value3,... allows specifying a time range by listing discrete values. The dates (min, max and valueN) could be specified using any of the following formats: - YYYY - YYYY-MM - YYYY-MM-DD - YYYY-MM-DDTHH:MM:SSTZ (TZ is an optional timezone, i.e. “Z” or “+0100”) The format used for the dates in the mapfile determines both the resolution used for the slider and the format of the time parameter used for the GetMap requests. For example when a layer has monthly data, the YYYY-MM should be used in the mapfile to make sure that only months and years are displayed in the slider tip and passed to the GetMap request. The interval (interval) has to be defined regarding international standard ISO 8601 and its duration/time intervals definition (see ISO 8601 Durations / Time intervals). Some examples for the interval definition: - An interval of one year: P1Y - An interval of six months: P6M Most of the configuration is done in the mapfile as described in the above section. However the slider time mode must be configured via the admin interface. The slider time mode is one of: - single - range - disabled The single mode is used to display a slider with a single thumb. The WMS Time layer will display data that are relative to the time selected by the thumb. The range mode is used to display a slider with two thumbs. In such a case, the layer will display data that are relative to the range defined by the two thumbs. The disabled mode allows hidding the slider. No time parameter will be sent to the GetMap request in such a case. The previous section describes the time configuration for a single layer. However there could be multiple WMS Time layers in a group. In such a case, you need to be aware of some limitations that apply to the configuration of the WMS Time layers of the same group. Some of those limitations apply to the mapfile: - The WMS Time layers of a same group must all be configured with either a list of discrete values or an interval. It is not possible to mix the 2 different types of definition within the same group, - If the WMS Time layers of a group use the min/max/interval, they must all use the same interval. There is also a limitation that applies to the admin interface: all the WMS Time layers of a group should be configured to use the same time mode (single or range) except for layers with time mode disabled that can be mixed within others. If you need to get the default WMS-time values, you must make sure to use OWSLib 0.8.3 and Python 2.7. To use the OWSLib 0.8.3, add the following lines in the buildout.cfg file: [versions] OWSLib = 0.8.3 PIL = 1.1.7 cov-core = 1.7 pytest-cov = 1.6 python-dateutil = 2.1 coverage = 3.7 py = 1.4.19 pytest = 2.5.1
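As an illustration of the two wms_timeextent formats described earlier on this page, the following sketch (not c2cgeoportal code) expands an extent definition into its time stops. It only handles the YYYY and YYYY-MM resolutions and the P1Y / P6M interval examples given above.

def parse_time_extent(extent):
    # Sketch: expand a wms_timeextent value into its time stops.
    if "/" not in extent:                           # "value1,value2,value3,..."
        return [v.strip() for v in extent.split(",")]
    start, end, interval = extent.split("/")        # "min/max/interval"
    step = {"P1Y": 12, "P6M": 6}[interval]          # interval in months
    monthly = len(start) >= 7                       # YYYY-MM vs. plain YYYY
    if monthly:
        y, m, ey, em = int(start[:4]), int(start[5:7]), int(end[:4]), int(end[5:7])
    else:
        y, m, ey, em = int(start), 1, int(end), 1
    stops = []
    while (y, m) <= (ey, em):
        stops.append(f"{y:04d}-{m:02d}" if monthly else f"{y:04d}")
        total = y * 12 + (m - 1) + step
        y, m = total // 12, total % 12 + 1
    return stops

print(parse_time_extent("2002,2005,2008"))          # ['2002', '2005', '2008']
print(parse_time_extent("2000-01/2001-01/P6M"))     # ['2000-01', '2000-07', '2001-01']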
http://docs.camptocamp.net/c2cgeoportal/1.4/integrator/wmstime.html
2019-05-19T09:34:30
CC-MAIN-2019-22
1558232254731.5
[]
docs.camptocamp.net
Searchkick Searchkick is an alternative to the official Elasticsearch Rails client. It provides some out of the box support for features that the official client does not (autocomplete, spelling suggestions, stemming, etc). Note This documentation covers the basics of using the Searchkick client for Elasticsearch and is not meant to be exhaustive. For complete documentation, please see the project’s GitHub page. Users who are using the official Elasticsearch Rails client should read the documentation here. Getting started Searchkick looks for an environment variable called ELASTICSEARCH_URL to determine how to connect to your Elasticsearch cluster. If this variable is not found, Searchkick will default to. The first step is to make sure that this environment variable is present so Searchkick can communicate with Elasticsearch. Heroku users: When Bonsai is added to your application, it automatically creates an environment variable called BONSAI_URL and populates it with your cluster URL. You can initialize the ELASTICSEARCH_URL variable by running the following in a terminal: heroku config:set ELASTICSEARCH_URL=`heroku config:get BONSAI_URL` Direct users Users who sign up for Bonsai through our website will need to create the environment variable manually. For Linux/BSD/OSX, this will probably look something like the following: export ELASTICSEARCH_URL="<URL copy/pasted from your Bonsai dashboard>" Setting up Rails Add the searchkick gem to your Gemfile: gem 'searchkick' Make sure to run bundle install after modifying your Gemfile. Next, add Searchkick to the models you want to search, like so: class Product < ActiveRecord::Base searchkick end You should now be able to create an index on your cluster and populate it with your data by running Product.reindex within a Rails console. Once your data is indexed, you can search it from a console as well: products = Product.search "apples" products.each do |product| puts product.name end Next steps Once you have confirmed that your application is able to communicate with your Bonsai cluster, you can begin the work of configuring your other models and search options. Please consult the Searchkick documentation for more details: If you have any issues, please don’t hesitate to open a support ticket by emailing [email protected].
https://docs.bonsai.io/article/99-searchkick
2019-05-19T08:48:57
CC-MAIN-2019-22
1558232254731.5
[]
docs.bonsai.io
Email notifications can be sent from Jamf Pro to Jamf Pro users when events such as the following occur: Smart device group membership changes. Smart user group membership changes. Tomcat is started or stopped. The database is backed up successfully. A database backup fails. Jamf Pro fails to add a file to the cloud distribution point. An instance of the Jamf Pro web app in a clustered environment fails. The Volume Purchasing (formerly Jamf Pro. Note: An email notification is sent if the Infrastructure Manager fails to check in with Jamf Pro after three attempts. Only one notification is sent for this event. Each Jamf Pro user can choose which email notifications they want to receive. Requirements Enabling Email Notifications Log in to Jamf Pro. At the top of the page, click the account settings icon and then click Notifications. Note: The Notifications option will not be displayed if your Jamf Pro user account is associated with an LDAP group. Select the checkbox for each event that you want to receive email notifications for. Click Save. Related Information For related information, see the following section in this guide: Integrating with Apple's Volume Purchasing Find out how to configure email notifications for VPP accounts.
https://docs.jamf.com/10.7.0/jamf-pro/administrator-guide/Email_Notifications.html
2019-05-19T08:41:39
CC-MAIN-2019-22
1558232254731.5
[]
docs.jamf.com
All content with label as5+gridfs+infinispan+installation+loader+lock_striping+store+tx. Related Labels: publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, query, deadlock, archetype, jbossas, nexus, guide, schema, cache, amazon, s3, grid, jcache, test, api, xsd, maven, documentation, wcm, write_behind, 缓存, ec2, s, hibernate, getting, aws, interface, custom_interceptor, setup, clustering, eviction, concurrency, out_of_memory, examples, jboss_cache, import, index, events, hash_function, configuration, batch, buddy_replication, write_through, cloud, mvcc, tutorial, notification, xml, jbosscache3x, read_committed, distribution, started, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, websocket, transaction, async, interactive, xaresource, build, gatein, searchable, demo, scala, client, migration, jpa, filesystem, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, snapshot, webdav, hotrod, repeatable_read, docs, batching, consistent_hash, jta, faq, 2lcache, jsr-107, lucene, jgroups, locking, rest, hot_rod more » ( - as5, - gridfs, - infinispan, - installation, - loader, - lock_striping, - store, - tx ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/as5+gridfs+infinispan+installation+loader+lock_striping+store+tx
2019-05-19T09:03:25
CC-MAIN-2019-22
1558232254731.5
[]
docs.jboss.org
All content with label client+deadlock+distribution+gridfs+hibernate+infinispan+query. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, rehash, replication, transactionmanager, dist, release, partitioning, archetype, jbossas, lock_striping, nexus, guide, schema, listener, state_transfer, cache, amazon, grid, memcached, jcache, test, api, xsd, ehcache, maven, documentation, write_behind, 缓存, ec2, aws, interface, custom_interceptor, clustering, setup, eviction, concurrency, jboss_cache, import, index, events, configuration, hash_function, batch, buddy_replication, loader, colocation, write_through, cloud, remoting, tutorial, notification, murmurhash2, read_committed, xml, cachestore, data_grid, resteasy, hibernate_search, cluster, br, websocket, transaction, async, interactive, xaresource, build, hinting, searchable, demo, scala, installation, command-line, non-blocking, migration, rebalance, jpa, filesystem, tx, gui_demo, eventing, shell, client_server, testng, murmurhash, infinispan_user_guide, standalone, repeatable_read, snapshot, webdav, hotrod, docs, consistent_hash, batching, store, jta, faq, as5, 2lcache, jsr-107, jgroups, locking, rest, hot_rod more » ( - client, - deadlock, - distribution, - gridfs, - hibernate, - infinispan, - query ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/client+deadlock+distribution+gridfs+hibernate+infinispan+query
2019-05-19T09:56:37
CC-MAIN-2019-22
1558232254731.5
[]
docs.jboss.org
File Manager¶ The File Manager tab in a Captive Portal zone is used to upload files that can then be utilized inside a captive portal page, such as style sheets, image files, PHP or JavaScript files. The total size limit for all files in a zone is 1 MB. File Name Conventions¶ When a file is uploaded using the File Manager, the file name will automatically be prefixed with captiveportal-. For example, if logo.png is uploaded it will become captiveportal-logo.png. If a file already has that prefix in its name, the name is not changed. These files will be made available in the root directory of the captive portal server for this zone. The files may be referenced directly from the portal page HTML code using relative paths. Example: An image with the name captiveportal-logo.jpg was uploaded using the file manager, It can then be included in the portal page as follows: <img src="captiveportal-logo.jpg" /> PHP scripts may be uploaded as well, but they may need to be passed extra parameters to work as desired, for example: <a href="/captiveportal-aup.php?zone=$PORTAL_ZONE$&redirurl=$PORTAL_REDIRURL$"> Acceptable usage policy </a> Managing Files¶ To upload files: - Navigate to Services > Captive Portal - Edit the zone where the files will be uploaded - Click the File Manager tab - Click - Click Browse - Locate and select the file to upload - Click Upload The file will be transferred to the firewall and stored in the configuration. To delete files: - Navigate to Services > Captive Portal - Edit the zone where the file to delete is located - Click the File Manager tab - Click next to the file to remove - Click OK to confirm the delete action The file will be removed from the portal configuration and will no longer be available for use in portal pages.
https://docs.netgate.com/pfsense/en/latest/book/captiveportal/file-manager.html
2019-05-19T09:10:48
CC-MAIN-2019-22
1558232254731.5
[]
docs.netgate.com
. Note You are assumed to have finished Part 1 before moving on to this part of the tutorial. The finished project from Part) == true: global variables: states: A dictionary for holding our animation states. (Further explanation below) animation_speeds: A dictionary for holding all of the speeds we want to play our animations at. of the states we can transition to. For example, if we are in currently in state Idle_unarmed, we can only transition to Knife_equip, Pistol_equip, Rifle_equip, and Idle_unarmed. If we try to transition to a state that is not included in our possible transitions states, then we get a warning message and the animation does not change. We can also automatically transition from some states into others, as will be explained further below in animation_ended Note For the sake of keeping this tutorial simple we are not using a ‘proper’. Tip Notice that all of the firing animations are faster than their normal speed. Remember this for later! current_state will hold the name of the animation state we are currently in. Finally, callback_function will be a FuncRef passed in by our player for spawning bullets at the proper frame of animation. A FuncRef allows us to pass in a function as an argument, effectively allowing us to call a function from another script, which is how we will use it later. Now lets sets the animation to the that of the passed in animation state if we can transition to it. In other words, if the animation state we are currently in has the passed in animation state name in states, then we will change to that animation. To start we check if the passed in animation is the same as the animation state we are currently in. If they are the same, then we write a warning to the console and return true. Next we see if AnimationPlayer has the passed in animation using has_animation. If it does not, we return false. Then we check if current_state is set or not. If current_state is not currently set, we set current_state to the passed in animation and tell AnimationPlayer to start playing the animation with a blend time of -1 and at the speed set in animation_speeds and then we return true. If we have a state in current_state, then we get all of the possible states we can transition to. If the animation name is in the list of possible transitions, we set current_state to the passed in animation, tell AnimationPlayer to play the animation with a blend time of -1 at the speed set in animation_speeds and return true. Now lets look at animation_ended. animation_ended is the function that will be called by AnimationPlayer when it’s done playing a animation. For certain animation states, we may need to transition into another state when its finished. To handle this, we check for every possible animation state. If we need to, ideally would be part of the data in states, but in an effort to make the tutorial easier to understand, we’ll hard code each state transition in animation_ended. Finally we function at the bottom of the list, click the plus icon on the bottom bar of animation window, right next to the loop button and the up arrow. This will bring up a window with three choices. We’re wanting to add a function callback track, so click the option that reads “Add Call Func. Note The timeline is the window where all forwards click in setup. This method we create/spawn a bullet object in the direction our gun is facing, and then it sends itself forward. There are several advantages to this method. The first being we do not have to store the bullets in our player. 
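Before moving on to the bullet scene, here is a compact, language-agnostic sketch (written in Python, not GDScript) of the animation state machine described above. The transition table is abbreviated and partly assumed; the real states dictionary lists every animation and its allowed targets.

# Hypothetical, trimmed-down transition table. The Idle_unarmed entry matches
# the transitions listed in the text; the pistol entries are illustrative.
states = {
    "Idle_unarmed": ["Knife_equip", "Pistol_equip", "Rifle_equip", "Idle_unarmed"],
    "Pistol_equip": ["Pistol_idle"],
    "Pistol_idle": ["Pistol_fire", "Pistol_unequip", "Pistol_idle"],
    "Pistol_fire": ["Pistol_idle"],
    "Pistol_unequip": ["Idle_unarmed"],
}
current_state = "Idle_unarmed"            # assume we start unarmed

def set_animation(name):
    # Return True if the requested state was entered (or is already active).
    global current_state
    if name == current_state:
        return True                       # already there; the real code warns
    if name in states.get(current_state, []):
        current_state = name              # the real code also starts the clip
        return True                       # on the AnimationPlayer here
    return False

print(set_animation("Pistol_fire"))       # False: not reachable from Idle_unarmed
print(set_animation("Pistol_equip"))      # True: equipping is allowed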
We can simply create the bullet and then move on, and the bullet itself a object also makes the bullet take time to reach its target, it doesn’t instantly hit whatever its pointed at. This feels more realistic because nothing in real life moves instantly from one point to another. One of the huge disadvantages object, fragment however many clients are connected to the server. While we are not implementing any form of networking (as that would be it’s change ‘travels’ through space. Note for this method is it’s light on performance. Sending a couple hundred rays through space is is. Lets get the bullet object setup. This is what our pistol will create when the “Pistol_fire” animation callback function is called. Open up Bullet_Scene.tscn. The scene contains Spatial node called bullet, with a MeshInstance and an Area with a CollisionShape childed a Area and not a RigidBody? The mean reason we’re not using a RigidBody is because we do not want the bullet to interact with other RigidBody nodes. By using an Area we are assuring that none of the other RigidBody nodes, including other bullets, will be effected. Another reason is simply because it is easier to detect collisions with, self.global_transform.origin) hit_something = true queue_free() Lets go through the script: First we define a few global variables: BULLET_SPEED: The speed the bullet travels at. BULLET_DAMAGE: The damage the bullet will cause to whatever it collides with. KILL_TIMER: How long the bullet can last without hitting anything. timer: A float for tracking how long we’ve assure that no bullets will travel forever and consume resources. Tip As in Part 1, we have a couple all uppercase global variables. The reason behind this is the same as the reason given in Part in at the scene in local mode, you will find that the bullet faces the positive local Z axis. Next we translate the entire bullet by that forward direction, multiplying in our speed and delta time. After that we add delta time to our timer and check if the timer has as long or longer than our KILL_TIME constant. If it has, we use queue_free to free ourselves. In collided we check if we’ve hit something yet or not. Remember that collided is only called when a body has entered the Area node. If we have not already collided with something, we then proceed to check if the body we’ve collided with has a function/method called bullet_hit. If it does, we call it and pass in our damage and our position. Note in collided, the passed in body can be a StaticBody, RigidBody, or KinematicBody We set hit_something to true because regardless of whether or not the body the bullet collided with has the bullet_hit function/method, it has hit something and so we need to not hit anything else. Then we free the bullet using queue_free. Tip if the bullets will for sure collide with as it goes along. Note There is a invisible mesh instance for debugging purposes. The mesh is a small sphere that visually shows where. Note If you are wondering where the positions of the points came from, a Area node. We are using a Area for the knife because we only care for all. Note You can also look at the HUD nodes if you want. There is nothing fancy there and other than using a single Label, we will not be touching any of those nodes. Check Design interfaces with the Control nodes global. All of the weapons we’ll make will have all. If we could write all we’re assuming we’ll fill in Player.gd.. 
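The bullet behaviour walked through above (fly forward at a fixed speed, die after KILL_TIMER seconds, damage at most one body) can be summarised in a short Python sketch. This is not Godot API code; the constant values are illustrative.

BULLET_SPEED = 70.0
BULLET_DAMAGE = 15      # illustrative value; the tutorial defines its own
KILL_TIMER = 4.0

class Bullet:
    def __init__(self, position, forward):
        self.position = list(position)   # world position (x, y, z)
        self.forward = forward           # unit vector the bullet travels along
        self.timer = 0.0
        self.hit_something = False
        self.dead = False

    def physics_process(self, delta):
        # Move along the forward direction, then count down the lifetime.
        for i in range(3):
            self.position[i] += self.forward[i] * BULLET_SPEED * delta
        self.timer += delta
        if self.timer >= KILL_TIMER:
            self.dead = True             # queue_free() in the real script

    def collided(self, body):
        # Only the first body hit takes damage.
        if self.hit_something:
            return
        if hasattr(body, "bullet_hit"):  # duck-typed, like has_method()
            body.bullet_hit(BULLET_DAMAGE, self.position)
        self.hit_something = True
        self.dead = True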
Tip By instancing the scene, we are creating a new node holding all of the node(s) in the scene we instanced, effectively cloning that scene. Then we add clone to the first child node of the root of the scene we are currently in. By doing this we’re making it at a child of the root node of the currently loaded scene. In other words, we are adding. Warning As mentioned later below in the section on adding sounds, this method makes more or less is if the animation manager is in the pistol’s idle animation. If we are in the pistol’s idle animation, we set is_weapon_enabled to true and return true because we have successfully been equipped. Because we know our pistol’s equip animation automatically transitions to the pistol’s idle animation, if we are in the pistol’s idle animation we most have finished playing the equip animation. Note We know these animations will transition because we wrote to the code to make them transition in Animation_Manager.gd Next we check to see if we are in the Idle_unarmed animation state. Because all unequipping animations go to this state, and because any weapon can be equipped from this state, we change animations to Pistol_equip if we are in Idle_unarmed. if we are in our idle animation. Then check to make sure we are not in the Pistol_unequip animation. If we are not in the Pistol_unequip animation, we want to play pistol_unequip. Note You may be wondering why we are checking to see if we are the pistol’s idle animation, and then making sure we if we are in Idle_unarmed, which is the animation state we will transition into from Pistol_unequip. If we are, then we set is_weapon_enabled to false since we are no longer using this weapon, and return true because we have successfully unequipped the pistol. If we are not in Idle_unarmed, we return false because we have not yet successfully unequipped the pistol. Creating the other two weapons¶ Now that we all.get_collision_point()) they have a function/method called bullet_hit. If they do, we call it and pass in the amount of damage this bullet does ( DAMAGE), and the point where the raycast collided with the.origin) be able to stab ourselves. If the body is the player, we use continue so we jump to looking at the next body in bodies. If we have not jumped to the next body, we then check to see if the body has the bullet_hit function/method. If it does, we call it, passing in the amount of damage a single knife swipe does ( DAMAGE) and the position of the Area. Note While we could attempt to calculate a rough location for where the knife hit, we do not bother because using the area’s position works well enough and the extra time needed to calculate a rough position for each body is not worth the effort. Making the weapons work¶ Lets start making the weapons work in Player.gd. First lets start by adding some global Lets reserves. our animation_manager variable. Then we set the callback function to a FuncRef that will call the player’s fire_bullet function. Right now we haven’t written our fire_bullet function, but we’ll get there soon. Next we get all of the weapon nodes and assign them to weapons. This will allow us to access the weapon nodes only with their name ( KNIFE, PISTOL, or RIFLE). We then get Gun_Aim_Point’s global position so we can rotate our weapons to aim at it. Then we go through each weapon in weapons. We first get the weapon node. If the weapon node is not null, we then set it’s player_node variable to ourself. 
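All three weapon scripts expose the same small interface to the player: fire_weapon, equip_weapon, unequip_weapon and an is_weapon_enabled flag. The following Python sketch (not GDScript) shows that contract and the equip/unequip flow described above for the pistol; it assumes an animation-manager object exposing current_state and set_animation(), and the real scripts also aim the weapon, spawn bullets and so on.

class Weapon:
    IDLE_ANIM_NAME = ""
    FIRE_ANIM_NAME = ""

    def __init__(self, animation_manager):
        self.anim = animation_manager
        self.is_weapon_enabled = False
        self.player_node = None

    def fire_weapon(self):
        raise NotImplementedError    # bullet spawn, raycast or area damage

class Pistol(Weapon):
    IDLE_ANIM_NAME = "Pistol_idle"
    FIRE_ANIM_NAME = "Pistol_fire"

    def equip_weapon(self):
        # Equipping is finished once the equip animation has landed in idle.
        if self.anim.current_state == self.IDLE_ANIM_NAME:
            self.is_weapon_enabled = True
            return True
        if self.anim.current_state == "Idle_unarmed":
            self.anim.set_animation("Pistol_equip")
        return False

    def unequip_weapon(self):
        # Start the unequip animation from idle; succeed once unarmed.
        if self.anim.current_state == self.IDLE_ANIM_NAME:
            self.anim.set_animation("Pistol_unequip")
        if self.anim.current_state == "Idle_unarmed":
            self.is_weapon_enabled = False
            return True
        return False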
Then we have it look at gun_aim_point_pos, and then rotate it by 180 degrees on the Y axis. Note. Lets add a new function call to _physics_process so we can change weapons. Here’s the new code: func _physics_process(delta): process_input(delta) process_movement(delta) process_changing_weapons(delta) Now we will call process_changing_weapons. Now lets add all) # ---------------------------------- Lets. Note are, we add/subtract 1 from weapon_change_number. Because we may have shifted weapon_change_number outside of the number of weapons we have, we clamp it so it cannot exceed the maximum number of weapons we have and has to be 0 or more. Then we check to make sure we are not already changing weapons. If we are not, we then check to see if the weapon we want to change to is a new weapon and not the one we are currently using. If the weapon we’re wanting to change to is a new weapon, we then set changing_weapon_name to the weapon at weapon_change_number and set changing_weapon to true. For firing the weapon we first check to see if the fire action is pressed. Then we check to make sure we are not changing weapons. Next we get the weapon node for the current weapon. If the current weapon node does not equal null, and we are in it’s IDLE_ANIM_NAME state, we set our animation to the current weapon’s FIRE_ANIM_NAME. Lets add process_changing_weapons next. Add the following code:_equiped = false var weapon_to_equip = weapons[changing_weapon_name] if weapon_to_equip == null: weapon_equiped = true else: if weapon_to_equip.is_weapon_enabled == false: weapon_equiped = weapon_to_equip.equip_weapon() else: weapon_equiped = true if weapon_equiped == true: changing_weapon = false current_weapon_name = changing_weapon_name changing_weapon_name = "" Lets go over what’s happening here: The first thing we do is make sure we’ve recived have need to check to see if the weapon is enabled or not. If the weapon is enabled, we call it’s unequip_weapon function so it will start the unequip animation. If the weapon is not enabled, we set weapon_unequippped to true, because we we want to change to. If we have successfully unequipped the current weapon ( weapon_unequipped == true), we need to equip the new weapon. First we define a new variable ( weapon_equipped) for tracking whether we have successfully equipped the new weapon or not. Then we get the weapon we want to change to. If the weapon we want to change to is not null, we then check to see whether or not it’s enabled. If it is not enabled, we call it’s equip_weapon function so it starts to equip the weapon. If the weapon is enabled, we set weapon_equipped to true. If the weapon we want to change to is null, we simply set weapon_equipped to true because we do not have any node/script for UNARMED, nor do we have any animations. Finally, we check to see if we have successfully equipped the new weapon. If we have, we set changing_weapon to false because we are no longer changing weapons. We also set current_weapon_name to changing_weapon_name, since the current weapon has changed, and then we set changing_weapon_name to a empty string. Now, we need to add one more function to the player, and then the player is ready to start the weapons! 
We need to add fire_bullet, which will be called by the AnimationPlayer at those points we set earlier in the AnimationPlayer's function track:

func fire_bullet():
    if changing_weapon == true:
        return

    weapons[current_weapon_name].fire_weapon()

Let's go over what this function is doing: First we check whether or not we are changing weapons. If we are changing weapons, we do not want to shoot, so we return. If we are not, we call fire_weapon on the current weapon. Next we need a script for the test objects our bullets will hit. Create a new script called RigidBody_hit_test.gd and add the following code:

extends RigidBody

func _ready():
    pass

func bullet_hit(damage, bullet_hit_pos):
    var direction_vect = global_transform.origin - bullet_hit_pos
    direction_vect = direction_vect.normalized()

    apply_impulse(bullet_hit_pos, direction_vect * damage)

Let's go over how bullet_hit works: First we get the direction from the bullet pointing towards our global Transform. We do this by subtracting the bullet's hit position from the RigidBody's position. This results in a Vector3 that tells us the direction the bullet collided with the RigidBody from. We then normalize it so we do not get crazy results from collisions on the extremes of the collision shape attached to the RigidBody. Without normalizing, shots farther away from the center of the RigidBody would cause a more noticeable reaction than those closer to the center. Finally, we apply an impulse at the passed-in bullet collision position, with the force being the directional vector multiplied by the damage the bullet is supposed to cause. This makes the RigidBody seem to move in response to the bullet colliding into it. Now we need to attach this script to all of the RigidBody nodes we want to affect. Open up Testing_Area.tscn and select all of the cubes parented to the Cubes node. Tip If you select the top cube, and then hold down Shift and select the last cube, Godot will select all of the cubes in between! Once you have all of the cubes selected, scroll down in the inspector until you get to the “scripts” section. Click the drop-down and select “Load”. Open your newly created RigidBody_hit_test.gd script. Final notes¶ That was a lot of code! But now with all that done you can go give your weapons a test! You should now be able to fire as many bullets as you want at the cubes, and they will move in response to the bullets colliding with them. In Part 3, we will add ammo to the weapons, as well as some sounds! Warning If you ever get lost, be sure to read over the code again! You can download the finished project for this part here: Godot_FPS_Part_2.zip
https://docs.godotengine.org/en/3.0/tutorials/3d/fps_tutorial/part_two.html
2019-05-19T08:37:10
CC-MAIN-2019-22
1558232254731.5
[array(['../../../_images/PartTwoFinished.png', '../../../_images/PartTwoFinished.png'], dtype=object) array(['../../../_images/AnimationPlayerAddTrack.png', '../../../_images/AnimationPlayerAddTrack.png'], dtype=object) array(['../../../_images/AnimationPlayerCallFuncTrack.png', '../../../_images/AnimationPlayerCallFuncTrack.png'], dtype=object) array(['../../../_images/AnimationPlayerAddPoint.png', '../../../_images/AnimationPlayerAddPoint.png'], dtype=object) array(['../../../_images/AnimationPlayerEditPoints.png', '../../../_images/AnimationPlayerEditPoints.png'], dtype=object) array(['../../../_images/PartTwoFinished.png', '../../../_images/PartTwoFinished.png'], dtype=object)]
docs.godotengine.org
Tutorial: Create a Gold Spotlight¶ Consider getting familiar with the following concepts before starting this tutorial: Create a Wall to Shine the Light On¶ Your gold spotlight needs a surface to shine on. Create this surface or wall in the Create app:
- In Interface, pull up your HUD or Tablet and go to Create.
- Click the ‘Cube’ icon to create a cube entity.
- Go to the ‘Properties’ tab and make the following changes:
- Change the color of the cube to teal (Red = ‘0’, Green = ‘128’, Blue = ‘128’).
- Change the dimensions of the cube to make it bigger and look more like a wall. We’ve used the following local dimensions:
- X = ‘0.1300’
- Y = ‘2.4000’
- Z = ‘3.2000’
You’ve made your wall! Create the Gold Spotlight¶ Next, click the ‘Light’ icon in the Create app to create a light entity, then go to the ‘Properties’ tab and modify the light entity to make a gold spotlight:
- Change the color of the light to gold (R = ‘255’, G = ‘215’, B = ‘0’).
- You can make the light entity brighter by changing its intensity. Change the ‘Intensity’ to ‘100’. You’ll see that the light now covers a larger area and is much brighter.
- You can modify the light entity’s ‘Fall-off Radius’ so that it dims gradually towards the edges. The ‘Fall-off Radius’ defines the shape of the light curve of a light. A larger radius will simulate a larger light, which will “fall off”, or dim, more gradually. It is the distance from the light at which the intensity is reduced by ‘25%’. Change this value to ‘0.5’.
- Select the ‘Spotlight’ checkbox to convert the light entity to a spotlight.
- Change the ‘Spotlight Cut-off’ to ‘50’. This property determines the radius of the spotlight. A higher cut-off value corresponds with a larger spotlight radius. You should see the beam tighten and get smaller.
- Change the ‘Spotlight Exponent’ to ‘5’. This property affects the softness of the beam. You should see the edge of the beam soften.
- Rotate the spotlight so that it faces down the wall by changing the ‘Local Rotation’s’ X value to ‘-90.0000’. A spotlight positioned like this can be used for a soft lighting effect over paintings or wall hangings in your world.
Congratulations! You’ve created a soft gold spotlight! You can experiment with different spotlight exponents, cut-off values, and intensity combinations for varied effects. See Also
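If you prefer to create the same spotlight from a script instead of the Create app, a rough sketch using the Entities scripting API might look like the following. Treat it as a sketch only: the position and dimensions are placeholder assumptions, and the property names simply mirror the fields set above, so check the Entities API reference for your build before relying on them.

// Sketch: spawn a gold spotlight a couple of meters in front of the avatar (position is an assumption).
var goldSpotlight = Entities.addEntity({
    type: "Light",
    position: Vec3.sum(MyAvatar.position, { x: 0, y: 1, z: -2 }),
    dimensions: { x: 5, y: 5, z: 5 },          // volume the light can affect (placeholder)
    color: { red: 255, green: 215, blue: 0 },  // gold, as in the steps above
    intensity: 100,
    falloffRadius: 0.5,
    isSpotlight: true,
    cutoff: 50,                                // spotlight cut-off
    exponent: 5                                // spotlight exponent (beam softness)
});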
https://docs.highfidelity.com/en/rc81/create/entities/create-spotlight.html
2019-05-19T09:25:28
CC-MAIN-2019-22
1558232254731.5
[]
docs.highfidelity.com
Any change to c2cgeoportal and CGXP requires a GitHub pull request. To give everyone a chance to review changes, pull requests should stay open for at least 24 hours. Any of the main developers of c2cgeoportal projects can take responsibility for merging commits into the main (master) branch. Pull requests with significant impacts can and should be reviewed by more than one person. To create a Git branch from the master branch use: $ git checkout -b <branch_name> master You can then add commits to your branch, and push commits to a remote branch using: $ git push -u origin <branch_name> The -u option adds an upstream (tracking) reference for the branch. This is optional, but convenient. Once the branch has an upstream reference you can push commits by just using git push. The “origin” remote can either represent the main repository (the one in the “camptocamp” organization) or your own fork. Creating branches in the main repository can ease collaboration between developers, but isn’t required. To update a branch from the master you first need to update your local master branch: $ git checkout master $ git fetch origin $ git merge origin/master Note You’ll use “upstream” instead of “origin” if “origin” references your own fork. You can now update your branch from master: $ git checkout <branch_name> $ git rebase master $ git push origin <branch_name> (after a rebase you may need to force-push, for example with git push -f origin <branch_name>). Making a pull request is done via the GitHub web interface. Open your branch in the browser (e.g.) and press the Pull Request button. Once a pull request is merged, it is good practise to add a comment in the pull request, for others to get notifications.
http://docs.camptocamp.net/c2cgeoportal/1.4/developer/development_procedure.html
2019-05-19T09:34:10
CC-MAIN-2019-22
1558232254731.5
[]
docs.camptocamp.net
Two Martine Center Residents Celebrate the Same Milestone Birthday...Only a Two Year Age Difference Across New York Patch - 8/3/17 White Plains Mayor Roach & NYS Senator Stewart-Cousins Present Proclamations, Joining Families and Friends at the Celebration Walking For The Ones Who Can't Providence Patch - 10/23/17 How One Bannister Center Employee and Her Daughter Knows the Importance of Walking For Breast Cancer Awareness Mini Therapy Horses Bring Out the Best in Residents at Hope Center Washington Heights Patch - 6/12/17 Special Needs Residents Are All Smiles on this Very Different Kind of Visit Northern Riverview Nine-Year Resident Turns 104 Years Old With A Smile Nyack Patch - 6/8/17 Originally From Manhattan, Birthday Girl Aida Moscato Preaches "Respect Your Elders"
http://www.publicity4docs.com/published-articles---press-releases.html
2019-05-19T08:22:26
CC-MAIN-2019-22
1558232254731.5
[]
www.publicity4docs.com
GetAccountLimit Gets the specified limit for the current account, for example, the maximum number of health checks that you can create using the account. For the default limit, see Limits in the Amazon Route 53 Developer Guide. To request a higher limit, open a case. Note You can also view account limits in AWS Trusted Advisor. Sign in to the AWS Management Console and open the Trusted Advisor console at. Then choose Service limits in the navigation pane. Request Syntax GET /2013-04-01/accountlimit/ TypeHTTP/1.1 URI Request Parameters The request requires the following URI parameters. - Type The limit that you want to get. Valid values include the following: MAX_HEALTH_CHECKS_BY_OWNER: The maximum number of health checks that you can create using the current account. MAX_HOSTED_ZONES_BY_OWNER: The maximum number of hosted zones that you can create using the current account. MAX_REUSABLE_DELEGATION_SETS_BY_OWNER: The maximum number of reusable delegation sets that you can create using the current account. MAX_TRAFFIC_POLICIES_BY_OWNER: The maximum number of traffic policies that you can create using the current account. MAX_TRAFFIC_POLICY_INSTANCES_BY_OWNER: The maximum number of traffic policy instances that you can create using the current account. (Traffic policy instances are referred to as traffic flow policy records in the Amazon Route 53 console.) Valid Values: MAX_HEALTH_CHECKS_BY_OWNER | MAX_HOSTED_ZONES_BY_OWNER | MAX_TRAFFIC_POLICY_INSTANCES_BY_OWNER | MAX_REUSABLE_DELEGATION_SETS_BY_OWNER | MAX_TRAFFIC_POLICIES_BY_OWNER Request Body The request does not have a request body. Response Syntax HTTP/1.1 200 <?xml version="1.0" encoding="UTF-8"?> <GetAccountLimitResponse> <Count>long</Count> <Limit> <Type>string</Type> <Value>long</Value> </Limit> </GetAccountLimitResponse> Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in XML format by the service. - GetAccountLimitResponse Root level tag for the GetAccountLimitResponse parameters. Required: Yes - Count The current number of entities that you have created of the specified type. For example, if you specified MAX_HEALTH_CHECKS_BY_OWNERfor the value of Typein the request, the value of Countis the current number of health checks that you have created using the current account. Type: Long Valid Range: Minimum value of 0. - Limit The current setting for the specified limit. For example, if you specified MAX_HEALTH_CHECKS_BY_OWNERfor the value of Typein the request, the value of Limitis the maximum number of health checks that you can create using the current account. Type: AccountLimit object Errors For information about the errors that are common to all actions, see Common Errors. - InvalidInput The input is not valid. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
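For example, with the AWS SDK for Python (Boto3), a request for the hosted zone limit might look like the following sketch (it assumes AWS credentials are already configured; the limit type is one of the valid values listed above):

# Sketch: read the current hosted-zone limit and usage for the account.
import boto3

route53 = boto3.client("route53")
response = route53.get_account_limit(Type="MAX_HOSTED_ZONES_BY_OWNER")

print("Limit:", response["Limit"]["Value"])   # maximum allowed for the account
print("In use:", response["Count"])           # how many currently exist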
https://docs.aws.amazon.com/Route53/latest/APIReference/API_GetAccountLimit.html
2019-05-19T09:06:09
CC-MAIN-2019-22
1558232254731.5
[]
docs.aws.amazon.com
DescribeAccountAuditConfiguration Gets information about the Device Defender audit settings for this account. Settings include how audit notifications are sent and which audit checks are enabled or disabled. Request Syntax GET /audit/configuration HTTP/1.1 URI Request Parameters The request does not use any URI parameters. Request Body The request does not have a request body. Response Syntax HTTP/1.1 200 Content-type: application/json { "auditCheckConfigurations": { "string" : { "enabled": boolean } }, "auditNotificationTargetConfigurations": { "string" : { "enabled": boolean, "roleArn": "string", "targetArn": "string" } }, "roleArn": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - auditCheckConfigurations Which audit checks are enabled and disabled for this account. Type: String to AuditCheckConfiguration object map - auditNotificationTargetConfigurations Information about the targets to which audit notifications are sent for this account. Type: String to AuditNotificationTarget object map Valid Keys: SNS - roleArn The ARN of the role that grants permission to AWS IoT to access information about your devices, policies, certificates and other items as necessary when performing an audit. On the first call to UpdateAccountAuditConfigurationthis parameter is required. Type: String Length Constraints: Minimum length of 20. Maximum length of 2048. Errors - InternalFailureException An unexpected error has occurred. HTTP Status Code: 500 - ThrottlingException The rate exceeds the limit. HTTP Status Code: 429 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
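For example, with the AWS SDK for Python (Boto3), the same information can be fetched with a short sketch like this (it assumes credentials are configured and the calling identity is allowed to read the audit configuration):

# Sketch: print which Device Defender audit checks are currently enabled.
import boto3

iot = boto3.client("iot")
config = iot.describe_account_audit_configuration()

print("Audit role:", config.get("roleArn"))
for check_name, check in config.get("auditCheckConfigurations", {}).items():
    print(check_name, "enabled" if check.get("enabled") else "disabled")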
https://docs.aws.amazon.com/iot/latest/apireference/API_DescribeAccountAuditConfiguration.html
2019-05-19T08:53:05
CC-MAIN-2019-22
1558232254731.5
[]
docs.aws.amazon.com
The EmpowerID SSO framework allows you to configure Yammer as an identity provider for the EmpowerID Web application. EmpowerID integrates with Yammer using the OAuth protocol to allow your users to log in to EmpowerID using their Yammer account. This topic describes how to configure an IdP connection for Yammer and is divided into the following activities: For specific directions on registering EmpowerID as an application in Yammer, see the information provided by Yammer. When registering EmpowerID in Yammer, use the following URL as the Callback or Return URL, replacing "FQDN_OF_YOUR_EMPOWERID_SERVER" with the FQDN of the EmpowerID Web server in your environment. Next, add a login tile for Yammer to the desired IdP Domains. This allows your users to authenticate to EmpowerID with their Yammer credentials. If you have not set up an IdP Domain for your environment, you can do so by following the directions in the drop-down below. Now that the IdP connection is configured, you can test it by following the procedure outlined below.
https://docs.empowerid.com/2017/admin/managingappsandsso/authenticationoptions/idpconnections/yammer
2019-05-19T08:38:10
CC-MAIN-2019-22
1558232254731.5
[]
docs.empowerid.com
Word Fields:. Automate Word Documents with Minimal Code By Cindy Meister I think there is general agreement: The less code required to update information in a Word document, the better. Some 80-plus fields are built into Word that provide information about the file and the user; store, display, and manipulate reference information; and link the document to other applications - all without a bit of code. This series of articles will explore some interesting solutions that can be accomplished with fields, field combinations, and a small amount of code to improve user friendliness. Most of the field examples can be applied to all versions of Word, but the code samples are VBA and therefore applicable only to Word 97 or Word 2000. A complete listing of available fields can be found in the Insert | Field dialog box (see FIGURE 1). A short description of the field's purpose is displayed there, but more in-depth information can be referenced by first clicking the ? at the top right, then the field name, in order to call up Help for that field. The Word Help files for fields contain detailed information and examples, plus links to related topics (unlike the Help for most other subjects). FIGURE 1: The main Insert | Field dialog box. .gif) The results of many fields can be altered or enhanced by using switches. The user can select from a list of relevant switches by clicking the Options button on the Insert | Field dialog box to open the dialog box shown in FIGURE 2. FIGURE 3 lists important switches common to many fields. FIGURE 2: There are general, numeric, and field-specific switches that can be added to many fields to modify or enhance their behavior. .gif) FIGURE 3: Important, common switches. Insert | Field is a handy tool for the average user, and for getting an overview of available fields. It's less useful, however, for accessing the full potential of fields in combination. Sometimes, you'll need to work with the field codes in the document directly. FIGURE 4 lists the keyboard shortcuts and basic VBA syntax of the most important commands. FIGURE 4: Commands for editing fields directly in a document. Now let's look at some of the things Word fields allow you to do. Referencing Information A common task is to refer to information found in the same or another document. Using bookmarks and REF fields, existing content can be displayed in other areas of a document, rather than duplicating the text. If the content of the bookmark is edited, those changes will automatically be displayed once the REF fields are updated. REF fields are also used by Word when generating cross-references. Less well-known are the PAGEREF and STYLEREF fields. The PAGEREF field returns the page number of the bookmark. When Word generates a Table of Contents (TOC), for example, it sets bookmarks and uses hidden PAGEREF fields to display the page numbers in the TOC. It's also the basis of a work-around for the unreliable behavior of the two fields that display the total number of pages in a document and in a section (NUMPAGES and SECTIONPAGES, respectively). Simply insert a bookmark at the end of the document or section, then put a PAGEREF field referencing that bookmark where the total number of pages should be displayed: { PAGE } of { PAGEREF BookmarkName \* CharFormat} Note that the \* CharFormat switch will force the field to display with the same formatting as applied to the first character in the field, rather than that of the bookmarked text. 
This is important, since the font attributes of the header or footer often differ from that of the text. The STYLEREF field enables you to pick up the text or paragraph numbering from document content formatted with the specified style. It's most frequently used in document headers or footers to display the chapter or heading to which the current page belongs, or for dictionary-style headers that list the first and last entry in the style found on the page. An interesting, less usual, application is to display version or title information from the title page throughout a long, multi-section document. Unlike a REF field, which depends on the presence of bookmarks (which are prone to accidental deletion) or DocProperty fields (that may be overlooked when updating a document), using a unique style to mark the information meant for a STYLEREF field to pick it up is transparent and easy for the user. To ensure that the information is included in a document, the developer must only check for the presence of the style on the title page, and that it contains a minimum number of characters. The procedure in FIGURE 5 is a possible approach. The style name and range in which the style should be are passed to the function. The range is checked for the presence of the style using the Find property. If it's found, the number of characters in the range (which now comprises what was found) is returned. Otherwise it is zero. Error trapping is turned off for the function, because a VBA error is returned if the style isn't found in the document's collection of styles. Function ExistsStyle(rng As Word.Range, _ sStyleName As String) As Long On Error Resume Next With rng.Find .ClearFormatting .ClearAllFuzzyOptions .Forward = True .Wrap = wdFindStop .Text = "" .Style = sStyleName If .Execute = True Then ExistsStyle = Len(rng.Text) Else ExistsStyle = 0 End If End With End Function FIGURE 5: Ascertain whether any text in a range is formatted with a specific style. Long Document Management By now it's common knowledge that Word's Master Document feature is buggy. Therefore, creating one long document is usually preferable to using the feature. If, however, a set of documents should be centrally maintained and combined as required, there are alternate approaches. RD fields, for instance, can be used to pick up information from other documents when generating tables of content and indexes. Content from other Word documents is often best accessed using the Insert | File command (see FIGURE 6). If the Link option is activated, Word generates an INCLUDETEXT field; specifying a bookmark range in addition brings in only that part of the file contained in the bookmark. Normally, information is only displayed in a field. It can't be edited on a permanent basis, and will be lost when the field is updated. INCLUDETEXT is an exception; editing changes can be sent back to the source document by pressing [Ctrl][Shift][F7]. FIGURE 6: The Insert | Fielddialog box in Word 2000. Click the Range button to specify a bookmark name; click the arrow to the right of the Insert button to select the Insert as****Link option. .jpg) By combining INCLUDETEXT with IF fields, text can be displayed according to set criteria - often a requirement for mail merge. 
The following field set references a bookmark containing a dunning notice, based on the reminder number: { INCLUDETEXT "C:\\My Documents\\DunningText.doc" { IF {MERGEFIELD ReminderNum } = 1 "Notice1" "{IF { MERGEFIELD ReminderNum } = 2 "Notice2" "Notice3" }" } } Note that Word's IF field doesn't use argument separators; each True/False pair is marked by a pair of quotation marks separated by a space. INCLUDETEXT can also be used to generate consecutive page numbering across multiple documents. In the first document of a pair: - Insert a PAGE field at the end of the document. - Select it and assign it a bookmark (PageNum, for example). - Format it as hidden or white so it won't be visible. In the second document of the pair: - Create an expression field that adds the current page number in this document to the page number in the first document, referenced using an INCLUDETEXT field. - Complete the INCLUDETEXT field with a \* CharFormat and a \* ! switch to override the hidden formatting, and prevent the PAGE field being referenced to update to the page number in the current document. For example: { = { PAGE } + { INCLUDETEXT "C:\\My Documents\\Chap1.doc" PageNum \* CharFormat \* ! } } Linked Excel Tables and Charts Reports often include tables or charts that were created with Excel. The question of the best/easiest way to insert this information in Word often comes up. On the whole, copying the Excel data and using Paste Special to insert it in a Word document gives the best results. It allows you to control the selection exactly. You can choose to insert a table as an object, or in Word table format, and you can specify whether to set up a link. Of course, this approach is less satisfactory if the reporting tool should be fully automated. But an examination of the Paste Special result indicates an alternate solution. Paste Special with a link inserts a LINK field, such as: { LINK Excel.Sheet.8 "C:\\My Documents\\Monthly Reports\\JAN2000.XLS" NorthRegionSales!R1C1:R11C6 \a \r \* MERGEFORMAT } Or, for a chart: { LINK Excel.Chart.8 "C:\\My Documents\\Monthly Reports\\JAN2000.XLS" "All Regions Sales Chart" \a \p } All that's needed to generate this field in VBA are the various elements in the field: the application class, the file path, and the workbook range. The optional \a switch instructs Word to automatically update the link. The switches \p and \r determine how the table information should be displayed - as a picture (OLE object), or in RichText format (as a Word table), respectively. Include the \* MergeFormat switch when formatting applied directly in Word should override Excel formatting when the field is updated. (Please note this is broken in Word 2000.) A very simple example is demonstrated in FIGURE 7. All the field elements are determined, then combined into sFieldText. For the Add method of the Fields object, the Range (end of the document) is required. The Type of field is passed as a built-in Word constant; the Text will automatically follow the field name. PreserveFormatting instructs VBA whether to include the \* MergeFormat switch. Sub InsertLinkFieldForExcelData() Dim rng As Word.Range Dim sWBPath As String Dim sSheet As String Dim sRange As String Dim sAppClass As String Dim sFieldText As String ' Field will be inserted at the end of current document. Set rng = ActiveDocument.Range rng.Collapse Direction:=wdCollapseEnd ' Info required for creating LINK field. 
sWBPath = _ "C:\\My Documents\\Monthly Reports\\JAN2000.XLS" sSheet = "NorthRegionSales" sRange = "SalesData" If Instr(sSheet, "Chart") = 0 Then sAppClass = "Excel.Sheet.8" sFieldText = sAppClass & " " & """" & sWBPath & _ """" & " " & """" & sSheet & "!" & _ sRange & """" & " \r" Else sAppClass = "Excel.Chart.8" sFieldText = sAppClass & " " & """" & sWBPath & _ """" & " " & """" & sSheet & """" & " \p" End If ' Insert the LINK field. ActiveDocument.Fields.Add Range:=rng, _ Type:=wdFieldLink, Text:=sFieldText, _ PreserveFormatting:= True ' Break the link to the Excel source. rng.MoveEnd wdCharacter, 1 rng.Fields.Unlink End Sub FIGURE 7: Include the \* MergeFormat switch when formatting applied directly in Word should override Excel formatting when the field is updated. Sometimes the Excel data should not remain linked to its source. In this case, the field can simply be unlinked after it's been inserted, resulting in an embedded object, or - if a spreadsheet has been inserted in RichText format - a Word table. Take Control of Numbering There are definite problems maintaining auto-numbering when converting between Word 97/2000 and earlier versions, as well as with numbering in shared or exchanged Word 97/2000 documents. SEQ fields provide an alternative that's stable and reliable across all versions of WinWord. SEQ is short for "sequence" and enables you to have many independent lists in a document. Word itself uses SEQ fields to generate figure numbering sequences in captions. The basic SEQ field is quite simple. It consists of the field name plus the sequence identifier, e.g. { SEQ HdLevel1 }. The field-specific switches, shown in FIGURE 8, provide versatility and flexibility when generating sequences. FIGURE 8: SEQ field switches (from Word's Help files). For example, a three-level, outline-numbering sequence can be created using the combinations shown in FIGURE 9. If each outline-level field combination is saved as an individual AutoText entry, the numbering can quickly be inserted by the user from keyboard assignments, or a toolbar with buttons for each outline level, or an AutoText list. FIGURE 9: Outline numbering using SEQ fields. One drawback of using SEQ fields is certainly the fact that the numbering will not update automatically when the order of numbered items is changed, or a new item is inserted into an existing list. Indeed, exactly what causes fields to update (and when) can be very confusing (Knowledge Base article Q89953 provides some explanation). Activating Tools | Options | Print | Update Fields should ensure that everything will be correct when the document is printed, but may not be a satisfactory solution for the user working with the document. Programmatically, all fields in a document can be updated using a routine such as this: Sub UpdateAllDocFields() Dimsry ForEach sry In ActiveDocument.StoryRanges sry.Fields.Update NextStr End Sub If the document is large, however, or contains many fields - especially complex field sets or fields that force repagination - this process can be rather slow. It may therefore be advantageous to update only certain types of fields. 
The following procedure demonstrates how to update only SEQ fields of a HdLevel sequence: Sub UpdateOnlyHdLevelSEQFields() Dim fld As Word.Field For Each fld In ActiveDocument.Fields If fld.Type = wdFieldSequence Then If InStr(fld.Code, "HdLevel") <> 0 Then fld.Update End If End If Next fld End Sub SEQ fields can also be used to generate types of numbering sequences that Word's autonumbering cannot (for example, incrementing numbering by a factor other than one, or numbering a list in reverse order). FIGURE 10 shows the field combinations for generating a sequence from 5 to 1. Set a starting number that's one higher than the number you want to count down from; then subtract the value of the SEQ sequence from that number. { Set StartNum 6 } { = { StartNum } - { SEQ RevNum1 } \# 0 } Paragraph 5 { = { StartNum } - { SEQ RevNum1 } \# 0 } Paragraph 4 { = { StartNum } - { SEQ RevNum1 } \# 0 } Paragraph 3 { = { StartNum } - { SEQ RevNum1 } \# 0 } Paragraph 2 { = { StartNum } - { SEQ RevNum1 } \# 0 } Paragraph 1 Of course, you can make it much easier for the user to create such sequences with a bit of VBA code, such as that shown in Listing One. The user selects the paragraphs to be numbered in reverse order, then starts the macro. The starting (high) number is the number of paragraphs selected plus one. Rather than using a SET field to set the starting number, this procedure places it in a document property, which is less likely to be deleted accidentally. It also allows for multiple reversed numbering lists in the same document, by incrementing a number at the end of the document property and sequence names. The field set is generated once, then stored as an AutoText entry for creating the remainder of the list, for two reasons: - Inserting an AutoText entry is generally faster than recreating the field set. - The code works with the Range object to insert the AutoText entry, rather than the Selection object used when creating the fields set, making it more accurate. You will notice that the procedure to generate the field set, NewRevNumFieldSet, began as a recorded macro and uses the Selection object. Microsoft neglected to provide a VBA method to create nested fields. It isn't possible to include the field bracket characters as part of a string; they must be inserted separately. There are a number of approaches to the problem of creating nested fields, none of them particularly satisfactory from a developer standpoint. One is to record a macro, edit it judiciously, and test it very thoroughly. An alternate method based on the Range will be presented in the next installment of this series. Conclusion Hopefully, this article has animated you to explore how Word fields can render your documents more powerful. Part II of this series will continue to look at teaching Word how to overcome its limitations with the help of complex field sets. Cindy Meister has her own consulting business, INTER-Solutions, based in Switzerland. Prior to MSWord support and as Sysop in the CompuServe MSWord forum have given her in-depth knowledge of Microsoft Office and Word. For general questions on Word and links to other useful sites, visit her Web site at. You can reach her at mailto:[email protected]. Begin Listing One - Create numbering sequences with VBA ' Number the selected paragraphs in reverse order. 
Sub NumberInReverse() Dim iStartNum As Long Dim sPropName As String Dim rngSelection As Word.Range Dim rngWork As Word.Range Dim sATName As String Dim iCounter As Long Application.ScreenUpdating = False ActiveDocument.Windows(1).View.ShowFieldCodes = False ' Get the number of selected paragraphs. iStartNum = Selection.Paragraphs.Count ' Define a DocProperty to hold the high number + 1. sPropName = CreateStartNumDocProp("StartNum", iStartNum) ' Save the current selection in a range ' because the selection will be changing. Set rngSelection = Selection.Range ' Create duplicate to work with. Set rngWork = rngSelection.Duplicate rngWork.Collapse wdCollapseStart ' Create and insert first number in sequence. sATName = CreateReverseOrderCounter(sPropName) ' Insert remaining numbers at start of ' each paragraph in selection. For iCounter = 2 To iStartNum ' Set working range to next paragraph. Set rngWork = rngSelection.Paragraphs(iCounter).Range rngWork.Collapse wdCollapseStart ActiveDocument.AttachedTemplate. _ AutoTextEntries(sATName).Insert _ Where:=rngWork, RichText:= True Next ' Update only fields in selection ' (reverse-order numbering). rngSelection.Fields.Update End Sub ' Returns DocProperty name. Function CreateStartNumDocProp(ByVal sPropName, _ ByVal PropValue As Long) As String Dim prop As Office.DocumentProperty ' If no DocProperties, or none with StartNum in name ' DocProperty name is StartNum1. If ActiveDocument.CustomDocumentProperties.Count = 0 Then sPropName = "StartNum1" Else ' Otherwise, increment StartNum by 1. For Each prop In ActiveDocument. _ CustomDocumentProperties If Left(prop.Name, 8) = "StartNum" Then sPropName = _ "StartNum" & CStr(Val(Mid(prop.Name, 9)) + 1) Else sPropName = "StartNum1" End If Next prop End If ' Create DocProperty and set value to ' number of selected paragraphs plus 1. ActiveDocument.CustomDocumentProperties.Add _ Name:=sPropName, Type:=msoPropertyTypeString, _ Value:= CStr(PropValue + 1), LinkToContent:= False CreateStartNumDocProp = sPropName End Function ' Returns name of AutoText entry that creates ' the reverse order list. Function CreateReverseOrderCounter( _ ByVal sPropName As String) As String Dim sRevNum As String Dim sATName As String sRevNum = Mid(sPropName, 9) sATName = "SEQAT" & sRevNum ' Create the field set and insert ' first one at beginning of selection Selection.Collapse wdCollapseStart Call NewRevNumFieldSet(sPropName, "RevNum" & sRevNum) ' Create an AutoText entry for rest of list ' because inserting it will be faster and more accurate. Selection.MoveRight wdCharacter, 1, Extend:= True ActiveDocument.AttachedTemplate.AutoTextEntries.Add _ Name:=sATName, Range:=Selection.Range CreateReverseOrderCounter = sATName End Function ' Recorded macro that generates the set of nested fields ' for a sequence that counts in reverse: ' { = {DocProperty StartNum} - {SEQ RevNum} } Sub NewRevNumFieldSet(sProp, sRevNum) ' Expression field Selection.Fields.Add Range:=Selection.Range, _ Type:=wdFieldEmpty, Text:="= ", _ PreserveFormatting:= False ' Show field codes to position IP. ActiveWindow.View.ShowFieldCodes = True Selection.MoveRight Unit:=wdCharacter, Count:=4 Selection.Fields.Add Range:=Selection.Range, _ Type:=wdFieldEmpty, Text:="DOCPROPERTY " & sProp, _ PreserveFormatting:= False Selection.TypeText Text:=" - " Selection.Fields.Add Range:=Selection.Range, _ Type:=wdFieldEmpty, Text:="SEQ " & sRevNum, _ PreserveFormatting:= False ActiveWindow.View.ShowFieldCodes = False Selection.Fields.Update End Sub
https://docs.microsoft.com/en-us/previous-versions/office/developer/office2000/aa163918(v=office.10)
2019-05-19T08:34:40
CC-MAIN-2019-22
1558232254731.5
[array(['images%5caa163918.vba200004cm_f_image002(en-us,office.10', None], dtype=object) array(['images%5caa163918.vba200004cm_f_image004(en-us,office.10', None], dtype=object) array(['images%5caa163918.vba200004cm_f_image006(en-us,office.10', None], dtype=object) ]
docs.microsoft.com
monitor start monitor start—Begin monitoring a file on the local device. When a file is monitored, any logging information is displayed on the console as it is added to the file. Command Syntax monitor start filename Options - Filename To Monitor - filename Name of the file to monitor. Output Fields The output fields are self-explanatory. Example Output Start and stop monitoring a file, and view the files that are being monitored: vEdge# monitor start /var/log/vsyslog vEdge# show jobs JOB COMMAND 1 monitor start /var/log/vsyslog vEdge# log:local7.notice: Dec 16 14:55:26 vsmart SYSMGR[219]: %Viptela-vsmart-SYSMGR-5-NTCE-200025: System clock set to Wed Dec 16 14:55:26 2015 (timezone 'America/Los_Angeles') log:local7.notice: Dec 16 14:55:27 vsmart SYSMGR[219]: %Viptela-vsmart-SYSMGR-5-NTCE-200025: System clock set to Wed Dec 16 14:55:27 2015 (timezone 'America/Los_Angeles') vEdge# monitor stop /var/log/vsyslog vEdge# Release Information Command introduced in Viptela Software Release 15.4. Additional Information job stop monitor stop show jobs
https://sdwan-docs.cisco.com/Product_Documentation/Command_Reference/Operational_Commands/monitor_start
2019-05-19T08:25:08
CC-MAIN-2019-22
1558232254731.5
[]
sdwan-docs.cisco.com
If you are a moderator for a place, you'll have access to a list of requests for moderation. In places where content moderation is enabled, the author of a piece of content will see a note explaining that their content needs to be approved by the moderator before it will be published: After the author submits the post, a moderation request will be sent to the moderator's Moderation queue for approval or rejection. (The community administrator sets up moderators for places. There can be more than one moderator in a place. For more about that, see Setting Up Content Moderation). Note that the author can still access the submitted content by going to , where they can continue to make changes. The moderator will see only one item to moderate, no matter how many changes the author makes. If there is more than one item awaiting moderation, you'll see a list in your Moderation queue. For example, you might see content that's been reported as abusive, as well as content that's been submitted for moderation, depending on the moderation features set up by your community administrator. The following example shows a variety of items awaiting moderation: Notice that at the top of the page, you can filter the content listed. Filtering can be helpful when you've got a very long list. To moderate, simply click on an item and make a decision to:
https://docs.jivesoftware.com/jive/6.0/community_admin/topic/com.jivesoftware.help.sbs.online_6.0/admin/ReviewingQueuedModerationRequests.html
2015-02-27T04:00:00
CC-MAIN-2015-11
1424936460472.17
[]
docs.jivesoftware.com
A plugin is one of the three core types of Joomla extensions (components, modules, and plugins). A plugin is installed in the same way as other extensions (templates, components, and modules) are installed. To find out how to install extensions, please read Installing an extension. For a full list of plugins and their available parameters, see the Help Screen documentation by going into the edit view of a plugin and clicking on the "Help" button. Alternatively, you can browse the core plugins:
https://docs.joomla.org/index.php?title=Administration_of_a_Plugin_in_Joomla&diff=prev&oldid=103841
2015-02-27T05:15:54
CC-MAIN-2015-11
1424936460472.17
[]
docs.joomla.org
Example of merging the 1.5 release back to the trunk: svn merge /Applications/MAMP/htdocs/joomla/trunk The arguments take the form svn merge A B C, where the differences between A and B are computed and applied to the working copy C. You may want to use the --dry-run option first, just to test the command. Merging may take some time, so it is a good idea to use the command line to perform it. If you are using Eclipse, you will not be able to run this in the background. See for an explanation of the concepts behind branching and merging.
https://docs.joomla.org/index.php?title=Development_FAQ&oldid=3397
2015-02-27T04:54:18
CC-MAIN-2015-11
1424936460472.17
[]
docs.joomla.org
The Joomla Framework is an important part of the Joomla architecture. It's based on modern object-oriented design patterns that make the Joomla core highly maintainable and easily extendable (Learn more about the Joomla Framework). Third party developers benefit from the rich, and easily accessible functionality that the Joomla Framework provides. On this page we'd like to provide you a reference of all classes and respective methods. The links will take you to further information about each class including, where possible, examples of use. If you would like to help us improve this resource, please read API Reference Project.
https://docs.joomla.org/index.php?title=Chunk:Framework&oldid=71508
2015-02-27T04:04:44
CC-MAIN-2015-11
1424936460472.17
[]
docs.joomla.org
To create a new User Profile Menu Item: To edit an existing Edit User Profile Menu Item, click its Title in Menu Manager: Menu Items. Shows a table where the user can edit their profile when the page is navigated to. This Layout has no unique Parameters. See Menu Item Manager: New Menu Item for help on fields common to all Menu Item types, including Details, Link Type Options, Page Display Options, Metadata Options, and Module Assignments for this Menu Item. At the top right you will see the toolbar: The functions are:
https://docs.joomla.org/index.php?title=Help25:Menus_Menu_Item_User_Profile_Edit&diff=80947&oldid=69825
2015-02-27T05:25:05
CC-MAIN-2015-11
1424936460472.17
[]
docs.joomla.org
To 'add' a new Custom HTML module or 'edit' an existing Custom HTML module, navigate to the Module Manager. To add one, click the 'New' button and then click on Custom HTML in the modal popup window. To edit an existing Custom HTML module, in the Module Manager click on the Custom HTML module's Title, or click the Custom HTML module's check box and then click the Edit button in the Toolbar. This module allows you to create your own HTML module using a WYSIWYG editor.
https://docs.joomla.org/index.php?title=Help32:Extensions_Module_Manager_Custom_HTML&diff=prev&oldid=106323
2015-02-27T04:45:20
CC-MAIN-2015-11
1424936460472.17
[]
docs.joomla.org
Method GtkWidgetget_frame_clock Description [src] Obtains the frame clock for a widget. The frame clock is a global “ticker” that can be used to drive animations and repaints. The most common reason to get the frame clock is to call gdk_frame_clock_get_frame_time(), in order to get a time to use for animating. For example you might record the start of the animation with an initial value from gdk_frame_clock_get_frame_time(), and then update the animation by calling gdk_frame_clock_get_frame_time() again during each repaint. gdk_frame_clock_request_phase() will result in a new frame on the clock, but won’t necessarily repaint any widgets. To repaint a widget, you have to use gtk_widget_queue_draw() which invalidates the widget (thus scheduling it to receive a draw on the next frame). gtk_widget_queue_draw() will also end up requesting a frame on the appropriate frame clock. A widget’s frame clock will not change while the widget is mapped. Reparenting a widget (which implies a temporary unmap) can change the widget’s frame clock. Unrealized widgets do not have a frame clock.
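As a rough illustration (this sketch is not part of the reference entry), the frame clock is often consumed through a tick callback, which hands you the widget's frame clock each frame; gtk_widget_get_frame_clock() could be called directly instead once the widget is realized. The animation state and the update step below are placeholders.

/* Sketch: drive a simple animation from the frame clock's time. */
static gboolean
tick_cb (GtkWidget *widget, GdkFrameClock *frame_clock, gpointer user_data)
{
  gint64 *start_time = user_data;
  gint64 now = gdk_frame_clock_get_frame_time (frame_clock);

  if (*start_time == 0)
    *start_time = now;

  /* elapsed seconds since the animation started */
  double elapsed = (now - *start_time) / (double) G_USEC_PER_SEC;

  /* ... update animation state from `elapsed` here ... */

  gtk_widget_queue_draw (widget);   /* invalidate so the next frame repaints */
  return G_SOURCE_CONTINUE;
}

/* elsewhere, after creating the widget:                              */
/*   static gint64 start_time = 0;                                    */
/*   gtk_widget_add_tick_callback (widget, tick_cb, &start_time, NULL); */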
https://docs.gtk.org/gtk4/method.Widget.get_frame_clock.html
2021-07-23T22:17:57
CC-MAIN-2021-31
1627046150067.51
[]
docs.gtk.org
Add a custom role You can add custom roles if you are an Enterprise Super Admin. Custom roles are based on the predefined roles. You can restrict the access for a custom role to a specific product. You can also create a role that allows an administrator to have full access to one product and read-only access to a second product. The shared settings are: - Tamper protection - Allowed applications - Website management - Proxy configuration - Blocked item - Bandwidth usage (Encryption access required) - HTTPS updating - DLP rules - Manage content control list - Reject network connections - XDR threat analysis center To create a custom role, do as follows: - Enterprise. - Choose the product and access type you want the role to have in Sophos Central Admin.For example, you create a custom role called Endpoint Help Desk. This custom role uses Read-only as its Base role and Endpoint Protection as its selected product with an access type of Help Desk. This custom role allows any administrators assigned to this role to access Endpoint Protection in Sophos Central Admin with Help Desk permissions. They have the same permissions in Sophos Central Enterprise as an administrator with the Enterprise Read-only role. - Choose more than one product, if required. You can choose different access types for different products. For example you can create a custom role that has Help Desk access permissions for Endpoint Protection and Read-only access for Mobile. You can set the permissions for all other products to None. This means that the custom role only has access in Sophos Central Admin to Endpoint Protection with Help Desk permissions and Mobile with Read-only permissions. - Choose the additional access and management options for the custom role in Sophos Central Admin. For example, this allows an Enterprise Super Admin to add these permissions to a Read-only or Help Desk role. You can also use these options to reduce the permissions for an Admin role. For example, you could prevent the custom role from managing policies.Note These additional options only apply to the selected products for the custom role. - Enable access to logs & reports. - Enable policy management (add, edit, and delete). - Enable policy assignment to users, device, etc.. (turn policies on and off; and add users, user groups, devices, and device groups to existing policies). The additional options are the same for all products and access types for the custom role. - Select Save. You can now assign this role to administrators.
https://docs.sophos.com/central/Enterprise/help/en-us/central/Enterprise/tasks/AddCustomRole.html
2021-07-23T21:34:48
CC-MAIN-2021-31
1627046150067.51
[]
docs.sophos.com
High Definition Render Pipeline Glossary General terms atmospheric scattering: Atmospheric scattering is the phenomena that occurs when particles suspended in the atmosphere diffuse (or scatter) a portion of the light, passing through them, in all directions. bokeh: The effect that occurs when a camera renders an out-of-focus point of light. channel packing: A channel-packed Texture is a Texture which has a separate grayscale image in each of its color channels. Exponential Variance Shadow Map: A type of shadow map that uses a statistical representation of the Scene's depth distribution and allows for the filtering of data stored in it. face: A face refers to one side of a piece of geometry. The front face is the side of the geometry with the normal. face culling: Face culling is an optimization that makes the renderer not draw faces of geometry that the camera can not see. f-number: The ratio of the focal length to the diameter of the camera lens. Nyquist rate: The minimum rate at which you can sample a real-world signal without introducing errors. This is equal to double the highest frequency of the real-world signal. physically-based rendering (PBR): PBR is an approach to rendering that emulates accurate lighting of real-world materials. ray marching: An iterative ray intersection test where your ray marches back and forth until it finds the intersection or, in a more general case, solves the problem you define for it. texture atlas: A texture atlas is a large texture containing several smaller textures packed together. HDRP uses texture atlases for shadow maps and decals. Normal mapping tangent space normal map: A type of normal map in the UV space of the GameObject. You can use it on any Mesh, including deforming characters. object space normal map: This contains the same details as the tangent space normal map, but also includes orientation data. You can only use this type of normal map on a static Mesh that does not deform. This normal map type is less resource-intensive to process, because Unity does not need to make any transform calculations. bent normal map: HDRP uses the bent normal to prevent light leaking through the surface of a Mesh. In HDRP, bent normal maps can be in tangent space or object space. Aliasing and anti-aliasing terms aliasing: Describes a distortion between a real-world signal and a digital reconstruction of a sample of a signal and the original signal itself. fast approximate anti-aliasing (FXAA): An anti-aliasing technique that smooths edges on a per-pixel level. It is not as resource intensive as other techniques. spatial aliasing: Refers to aliasing in digital samples of visual signals. temporal anti-aliasing (TAA): An anti-aliasing technique that uses frames from a history buffer to smooth edges more effectively than fast approximate anti-aliasing. It is substantially better at smoothing edges in motion but requires motion vectors to do so. Lighting terms illuminance: A measure of the amount of light (luminous flux) falling onto a given area. Differs from luminance because illuminance is a specific measurement of light whereas luminance describes visual perceptions of light. luminous flux: A measure of the total amount of visible light a light source emits. luminous intensity: A measure of visible light as perceived by human eyes. It describes the brightness of a beam of light in a specific direction. 
The human eye has different sensitivities to light of different wavelengths, so luminous intensity weights the contribution of each wavelength by the standard luminosity function. luminosity function: A function that describes a curve representing the human eye's relative sensitivity to light of different wavelengths. This curve maps weight values between 0 and 1 on the vertical axis to different wavelengths on the horizontal axis. For example, the standard luminosity function peaks, with a weight of 1, at a wavelength of 555 nanometers and decreases symmetrically with distance from this value. punctual lights: A light is considered to be punctual if it emits light from a single point. HDRP's Spot and Point Lights are punctual.
https://docs.unity3d.com/Packages/[email protected]/manual/Glossary.html
2021-07-23T23:56:34
CC-MAIN-2021-31
1627046150067.51
[array(['Images/GlossaryLighting3.png', None], dtype=object) array(['Images/GlossaryLighting1.png', 'Luminous flux'], dtype=object) array(['Images/GlossaryLighting2.png', 'Luminous intensity'], dtype=object)]
docs.unity3d.com
oauthsub¶ Simple oauth2 subrequest handler for reverse proxy configurations Purpose¶ The goal of oauthsub is to enable simple and secure Single Sign On by deferring authentication to an oauth2 provider (like google, github, microsoft, etc). oauthsub does not provide facilities for access control. The program is very simple and if you wanted to implement authentication and access control, feel free to use it as a starting point. It was created, however, to provide authentication for existing services that already do their own access control. Details¶ oauthsub implements client authentication subrequest handling for reverse proxies, and provides oauth2 redirect endpoints for doing the whole oauth2 dance. It can provide authentication services for: - NGINX (via http_auth_request) - Apache (via mod_perl and Authen::Simple::HTTP, backup link) - HA-Proxy (via a lua extension, backup link) The design is basically this: - For each request, the reverse proxy makes a subrequest to oauthsubwith the original requested URI oauthsubuses a session cookie to keep track of authenticated users. If the user’s session has a valid authentication token, it returns HTTP status 200. Otherwise it returns HTTP status 401. - If the user is not authenticated, the reverse proxy redirects them to the oauthsublogin page, where they can start the dance with an oauth2provider. You can choose to enable multiple providers if you’d like. - The oauth2provider bounces the user back to the oauthsubcallback page where the authentication dance is completed and the users credentials are stored. oauthsubsets a session cookie and redirects the user back to the original URL they were trying to access. - This time when they access the URL the subrequest handler will return status 200. Oauthsub will also pass the authenticated username back to the reverse-proxy through a response header. This can be forwarded to the proxied service as a Remote User Token for access control. Application Specifics¶ oauthsub is a flask application with the following routes: - /auth/login: start of oauth dance - /auth/callback: oauth redirect handler - /auth/logout: clears user session - /auth/query_auth: subrequest handler - /auth/forbidden: optional redirect target for 401’s The /auth/ route prefix can be changed via configuration. oauthsub uses the flask session interface. You can configure the session backend however you like (see configuration options). If you share the session key between oauthsub and another flask application behind the same nginx instance then you can access the oauthsub session variables directly (including the oauth token object).
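As a concrete illustration of the design above, the NGINX wiring via http_auth_request might look roughly like the sketch below. Treat everything in it as an assumption to adapt: the upstream ports, the "next" query parameter on the login redirect, and the X-Remote-User response header name all depend on how oauthsub and your proxied service are actually configured.

# Sketch of an NGINX server block protecting an internal app with oauthsub
location /auth/ {
    # oauthsub itself: login, callback, logout, query_auth
    proxy_pass;
    proxy_set_header X-Original-URI $request_uri;
}

location / {
    auth_request /auth/query_auth;        # subrequest for every client request
    error_page 401 = @login_redirect;     # 401 from oauthsub -> send the user to login

    # forward the authenticated username to the proxied service;
    # the header name here is an assumption -- match it to your oauthsub config
    auth_request_set $auth_user $upstream_http_x_remote_user;
    proxy_set_header X-Remote-User $auth_user;

    proxy_pass;      # the service being protected
}

location @login_redirect {
    # the "next" parameter is an assumed convention for returning to the original URL
    return 302 /auth/login?next=$request_uri;
}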
https://oauthsub.readthedocs.io/en/latest/
2021-07-23T22:38:08
CC-MAIN-2021-31
1627046150067.51
[]
oauthsub.readthedocs.io
Media Reference Stack¶ The Media Reference Stack (MeRS) is a highly optimized software stack for Intel® Architecture Processors (the CPU) and Intel® Processor Graphics (the GPU) to enable media prioritized workloads, such as transcoding and analytics. This guide explains how to use the pre-built MeRS container image, build your own MeRS container image, and use the reference stack. Overview¶ Developers face challenges due to the complexity of software integration for media tasks that require investing time and engineering effort. For example: - Finding the balance between quality and performance. - Understanding available standard-compliant encoders. - Optimizing across the hardware-software stack for efficiency. MeRS abstracts away the complexity of integrating multiple software components and specifically tunes them for Intel platforms. MeRS enables media and visual cloud developers to deliver experiences using a simple containerized solution. Releases¶ Refer to the System Stacks for Linux* OS repository for information and download links for the different versions and offerings of the stack. MeRS V0.2.0 release announcement including media processing on GPU and analytics on CPU. MeRS V0.1.0 including media processing and analytics CPU. MeRS Release notes on Github* for the latest release of Deep Learning Reference Stack Prerequisites¶ MeRS can run on any host system that supports Docker*. This guide uses Clear Linux* OS as the host system. To install Clear Linux OS on a host system, see how to install Clear Linux* OS from the live desktop. To install Docker* on a Clear Linux OS host system, see the instructions for installing Docker*. Important For optimal media analytics performance, a processor with Vector Neural Network Instructions (VNNI) should be used. VNNI is an extension of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) and is available starting with the 2nd generation of Intel® Xeon® Scalable processors, providing AI inference acceleration. Stack features¶ The MeRS provides a pre-built Docker image available on DockerHub, which includes instructions on building the image from source. MeRS is open-sourced to make sure developers have easy access to the source code and are able to customize it. MeRS is built using the latest clearlinux/os-core Docker image and aims to support the latest Clear Linux OS version. MeRS provides the following libraries and drivers: Components of the MeRS include: Clear Linux OS as a base for performance and security. OpenVINO™ toolkit for inference. FFmpeg* with plugins for: GStreamer* with plugins for: - Note The MeRS is validated on 11th generation Intel Processor Graphics and newer. Older generations should work but are not tested against. Note The pre-built MeRS container image configures FFmpeg without certain elements (specific encoder, decoder, muxer, etc.) that you may require. If you require changes to FFmpeg we suggest starting at Build the MeRS container image from source. Note The Media Reference Stack is a collective work, and each piece of software within the work has its own license. Please see the MeRS Terms of Use for more details about licensing and usage of the Media Reference Stack. Get the pre-built MeRS container image¶ Pre-built MeRS Docker images are available on DockerHub* at To use the MeRS: Pull the image directly from Docker Hub. docker pull sysstacks/mers-clearlinux Note The MeRS docker image is large in size and will take some time to download depending on your Internet connection. 
If you are on a network with outbound proxies, be sure to configure Docker to allow access. See the Docker service proxy and Docker client proxy documentation for more details. Once you have downloaded the image, run it using the following command: docker run -it sysstacks/mers-clearlinux This will launch the image and drop you into a bash shell inside the container. GStreamer and FFmpeg programs are installed in the container image and accessible in the default $PATH. Use these programs as you would outside of MeRS. Paths to media files and video devices, such as cameras, can be shared from the host to the container with the --volume switch using Docker volumes. Build the MeRS container image from source¶ If you choose to build your own MeRS container image, you can optionally add customizations as needed. The Dockerfile for the MeRS is available on GitHub and can be used as a reference when creating your own container image. The MeRS image is part of the dockerfiles repository inside the Clear Linux OS organization on GitHub. Clone the stacks repository. git clone Navigate to the stacks/mers/clearlinux directory which contains the Dockerfile for the MeRS. cd ./stacks/mers/clearlinux Use the docker build command with the Dockerfile to build the MeRS container image. docker build --no-cache -t sysstacks/mers-clearlinux . Use the MeRS container image¶ This section shows examples of how the MeRS container image can be used to process media files. The models and video source can be substituted based on your use case. Some publicly licensed sample videos are available in the sample-videos repository for testing. Media Transcoding¶ The examples below show transcoding using the GPU or CPU for processing. On the host system, set up a workspace for data and models: mkdir ~/ffmpeg mkdir ~/ffmpeg/input mkdir ~/ffmpeg/output Copy a video file to ~/ffmpeg/input. cp </path/to/video> ~/ffmpeg/input Run the sysstacks/mers-clearlinux Docker image, allowing shared access to the workspace on the host: docker run -it \ --volume ~/ffmpeg:/home/mers-user:ro \ --device=/dev/dri \ --env QSV_DEVICE=/dev/dri/renderD128 \ sysstacks/mers-clearlinux:latest Note The --device parameter and the QSV_DEVICE environment variable allow shared access to the GPU on the host system. The values needed may be different depending on the host’s graphics configuration. After running the docker run command, you enter a bash shell inside the container. From the container shell, you can run FFmpeg and GStreamer commands against the videos in /home/mers-user/input as you would normally outside of MeRS. Some sample commands are provided for reference. For more information on using the FFmpeg commands, refer to the FFmpeg documentation. For more information on using the GStreamer commands, refer to the GStreamer documentation. Example: Transcoding using GPU¶ The examples below show transcoding using the GPU for processing. Using FFmpeg to transcode raw content to H.264 (via the h264_vaapi hardware encoder) in an mp4 container: ffmpeg -y -vaapi_device /dev/dri/renderD128 -f rawvideo -video_size 320x240 -r 30 -i </home/mers-user/input/test.yuv> -vf 'format=nv12, hwupload' -c:v h264_vaapi -y </home/mers-user/output/test.mp4> Using GStreamer to transcode H264 to H265: gst-launch-1.0 filesrc location=</home/mers-user/input/test.264> ! h264parse ! vaapih264dec ! vaapih265enc rate-control=cbr bitrate=5000 ! video/x-h265,profile=main ! h265parse ! filesink location=</home/mers-user/output/test.265> MeRS builds FFmpeg with HWAccel enabled which supports VAAPI.
Refer to the FFmpeg wiki on VAAPI and the GStreamer with Media-SDK wiki for more usage examples and compatibility information. Example: Transcoding using CPU¶ The example below shows transcoding of raw yuv420 content to SVT-HEVC and mp4, using the CPU for processing. ffmpeg -f rawvideo -vcodec rawvideo -s 320x240 -r 30 -pix_fmt yuv420p -i </home/mers-user/input/test.yuv> -c:v libsvt_hevc -y </home/mers-user/output/test.mp4> Additional generic examples of FFmpeg commands can be found in the OpenVisualCloud repository and used for reference with MeRS. Media Analytics¶ This example shows how to perform analytics and inference with GStreamer using the CPU for processing. The steps here are referenced from the gst-video-analytics Getting Started Guide, simply substituting the sysstacks/mers-clearlinux image for the gst-video-analytics docker image. The example below shows how to use the MeRS container image to perform object detection and attributes recognition on a video with GStreamer, using pre-trained models and sample video files. On the host system, set up a workspace for data and models: mkdir ~/gva mkdir ~/gva/data mkdir ~/gva/data/models mkdir ~/gva/data/models/intel mkdir ~/gva/data/models/common mkdir ~/gva/data/video Clone the opencv/gst-video-analytics repository into the workspace: git clone ~/gva/gst-video-analytics cd ~/gva/gst-video-analytics git submodule init git submodule update Clone the Open Model Zoo repository into the workspace: git clone ~/gva/open_model_zoo Use the Model Downloader tool of Open Model Zoo to download ready-to-use pre-trained models in IR format. Note If you are on a network with outbound proxies, you will need to set environment variables for the proxy server. Refer to the documentation on Proxy Configuration for detailed steps. On Clear Linux OS systems you will need the python-extras bundle. Use sudo swupd bundle-add python-extras for the downloader script to work. cd ~/gva/open_model_zoo/tools/downloader python3 downloader.py --list ~/gva/gst-video-analytics/samples/model_downloader_configs/intel_models_for_samples.LST -o ~/gva/data/models/intel Copy a video file in h264 or mp4 format to ~/gva/data/video. Any video with cars, pedestrians, human bodies, and/or human faces can be used. git clone ~/gva/data/video This example simply clones all the video files from the sample-videos repository. From a desktop terminal, allow local access to the X host display. xhost local:root export DATA_PATH=~/gva/data export GVA_PATH=~/gva/gst-video-analytics export MODELS_PATH=~/gva/data/models export INTEL_MODELS_PATH=~/gva/data/models/intel export VIDEO_EXAMPLES_PATH=~/gva/data/video Run the sysstacks/mers-clearlinux docker image, allowing shared access to the X server and workspace on the host: docker run -it --runtime=runc --net=host \ -v ~/.Xauthority:/root/.Xauthority \ -v /tmp/.X11-unix:/tmp/.X11-unix \ -e DISPLAY=$DISPLAY \ -e HTTP_PROXY=$HTTP_PROXY \ -e HTTPS_PROXY=$HTTPS_PROXY \ -e http_proxy=$http_proxy \ -e https_proxy=$https_proxy \ -v $GVA_PATH:/home/mers-user/gst-video-analytics \ -v $INTEL_MODELS_PATH:/home/mers-user/intel_models \ -v $MODELS_PATH:/home/mers-user/models \ -v $VIDEO_EXAMPLES_PATH:/home/mers-user/video-examples \ -e MODELS_PATH=/home/mers-user/intel_models:/home/mers-user/models \ -e VIDEO_EXAMPLES_DIR=/home/mers-user/video-examples \ sysstacks/mers-clearlinux:latest Note In the docker run command above: --runtime=runc specifies the container runtime to be runc for this container.
It is needed for correct interaction with X server. --net=host provides host network access to the container. It is needed for correct interaction with X server. Files ~/.Xauthorityand /tmp/.X11-unixmapped to the container are needed to ensure smooth authentication with X server. -v instances are needed to map host system directories inside the Docker container. -e instances set the Docker container environment variables. Some examples need these variables set correctly in order to operate correctly. Proxy variables are needed if host is behind a firewall. After running the docker run command, it will drop you into a bash shell inside the container. From the container shell, run a sample analytics program in ~/gva/gst-video-analytics/samplesagainst your video source. Below are sample analytics that can be run against the sample videos. Choose one to run: Samples with face detection and classification: ./gst-video-analytics/samples/shell/face_detection_and_classification.sh $VIDEO_EXAMPLES_DIR/face-demographics-walking-and-pause.mp4 ./gst-video-analytics/samples/shell/face_detection_and_classification.sh $VIDEO_EXAMPLES_DIR/face-demographics-walking.mp4 ./gst-video-analytics/samples/shell/face_detection_and_classification.sh $VIDEO_EXAMPLES_DIR/head-pose-face-detection-female-and-male.mp4 ./gst-video-analytics/samples/shell/face_detection_and_classification.sh $VIDEO_EXAMPLES_DIR/head-pose-face-detection-male.mp4 ./gst-video-analytics/samples/shell/face_detection_and_classification.sh $VIDEO_EXAMPLES_DIR/head-pose-face-detection-female.mp4 When running, a video with object detection and attributes recognition (bounding boxes around faces with recognized attributes) should be played. Sample with vehicle detection: ./gst-video-analytics/samples/shell/vehicle_detection_2sources_cpu.sh $VIDEO_EXAMPLES_DIR/car-detection.mp4 When running, a video with object detection and attributes recognition (bounding boxes around vehicles with recognized attributes) should be played. Sample with FPS measurement: ./gst-video-analytics/samples/shell/console_measure_fps_cpu.sh $VIDEO_EXAMPLES_DIR/bolt-detection.mp4 Add AOM support¶ The current version of MeRS does not include the Alliance for Open Media Video Codec (AOM). AOM can be built from source on an individual basis. To add AOM support to the MeRS image: The following programs are needed to add AOM support to MeRS: docker, git, patch. On Clear Linux OS these can be installed with the commands below. For other operating systems, install the appropriate packages. sudo swupd bundle-add containers-basic dev-utils Clone the Intel Stacks repository from GitHub. git clone Navigate to the directory for the MeRS image. cd stacks/mers/clearlinux/ Apply the patch to the Dockerfile. patch -p1 < aom-patches/stacks-mers-v2-include-aom.diff Use the docker build command to build a local copy of the MeRS container image tagged as aom. docker build --no-cache -t sysstacks/mers-clearlinux:aom . Once the build has completed successfully, the local image can be used following the same steps in this tutorial by substituting the image name with sysstacks/mers-clearlinux:aom. Intel, Xeon, OpenVINO, and the Intel logo are trademarks of Intel Corporation or its subsidiaries. OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.
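For a quick non-interactive check of a MeRS setup, the same CPU transcode can also be run as a one-shot command instead of entering the container shell. This is only an illustrative sketch: it reuses the ~/ffmpeg workspace layout from the transcoding section, the input and output file names are placeholders, and the workspace is mounted without the :ro flag so that the output directory is writable.
docker run --rm \
    --volume ~/ffmpeg:/home/mers-user \
    sysstacks/mers-clearlinux:latest \
    ffmpeg -f rawvideo -vcodec rawvideo -s 320x240 -r 30 -pix_fmt yuv420p \
        -i /home/mers-user/input/test.yuv -c:v libsvt_hevc -y /home/mers-user/output/test.mp4
Because --rm is used, the container is removed as soon as the transcode finishes; the resulting file appears in ~/ffmpeg/output on the host.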
https://docs.01.org/clearlinux/latest/guides/stacks/mers.html
2021-07-23T21:44:42
CC-MAIN-2021-31
1627046150067.51
[]
docs.01.org
- Requirements - Configuration - Installation command line options - Chart configuration examples - Using the Community Edition of this chart - Global Settings - Deployments settings - Ingress Settings - Resources - External Services - Chart Settings - Configuring the networkpolicy Using the GitLab Webservice Chart The webservice sub-chart provides the GitLab Rails webserver with two Webservice workers per pod. The webservice chart is configured as follows: Global Settings, Deployments settings, Ingress Settings, External Services, and Chart Settings. Installation command line options The table below contains all the possible chart configurations that can be supplied to the helm install command using the --set flags. Chart configuration examples annotations: annotations allows you to add annotations to the Webservice pods. For example: annotations: kubernetes.io/example-annotation: annotation-value strategy: deployment.strategy allows you to change the deployment update strategy. It defines how the pods will be recreated when the deployment is updated. When not provided, the cluster default is used. For example, if you don’t want to create extra pods when the rolling update starts and change max unavailable pods to 50%: deployment: strategy: rollingUpdate: maxSurge: 0 maxUnavailable: 50% You can also change the type of update strategy to Recreate, but be careful as it will kill all pods before scheduling new ones, and the web UI will be unavailable until the new pods are started. In this case, you don’t need to define rollingUpdate, only type: deployment: strategy: type: Recreate For more details, see the Kubernetes documentation. Deployments settings This chart has the ability to create multiple Deployment objects and their related resources. This feature allows requests to the GitLab application to be distributed between multiple sets of Pods using path based routing. The keys of this Map ( default in this example) are the “name” for each deployment. default will have a Deployment, Service, HorizontalPodAutoscaler, PodDisruptionBudget, and optional Ingress created with the name RELEASE-webservice-default. Any property not provided will inherit from the gitlab-webservice chart defaults. deployments: default: ingress: path: # Does not inherit or default. Leave blank to disable Ingress. pathType: Prefix provider: nginx annotations: # inherits `ingress.annotations` proxyConnectTimeout: # inherits `ingress.proxyConnectTimeout` proxyReadTimeout: # inherits `ingress.proxyReadTimeout` proxyBodySize: # inherits `ingress.proxyBodySize` deployment: annotations: # map labels: # map # inherits `deployment` pod: labels: # additional labels to .podLabels annotations: # map # inherit from .Values.annotations service: labels: # additional labels to .serviceLabels annotations: # additional annotations to .service.annotations # inherits `service.annotations` hpa: minReplicas: # defaults to .minReplicas maxReplicas: # defaults to .maxReplicas metrics: # optional replacement of HPA metrics definition # inherits `hpa` pdb: maxUnavailable: # inherits `maxUnavailable` resources: # `resources` for `webservice` container # inherits `resources` workhorse: # map # inherits `workhorse` extraEnv: # # inherits `extraEnv` puma: # map # inherits `puma` workerProcesses: # inherits `workerProcesses` shutdown: # inherits `shutdown` nodeSelector: # map # inherits `nodeSelector` tolerations: # array # inherits `tolerations` Deployments Ingress Each deployments entry will inherit from the chart-wide Ingress settings. Any value presented here will override those provided there. Outside of path, all settings are identical to those in the chart-wide Ingress settings.
webservice: deployments: default: ingress: path: / api: ingress: path: /api The path property is directly populated into the Ingress’s path property, and allows one to control URI paths which are directed to each service. In the example above, default acts as the catch-all path, and api receives all traffic under /api. You can disable a given Deployment from having an associated Ingress resource created by setting path to empty. See below, where internal-api will never receive external traffic. webservice: deployments: default: ingress: path: / api: ingress: path: /api internal-api: ingress: path: Ingress Settings annotations annotations is used to set annotations on the Webservice Ingress. We set one annotation by default: nginx.ingress.kubernetes.io/service-upstream: "true". This helps balance traffic to the Webservice pods more evenly by telling NGINX to directly contact the Service itself as the upstream. For more information, see the NGINX docs. To override this, set: gitlab: webservice: ingress: annotations: nginx.ingress.kubernetes.io/service-upstream: "false" proxyBodySize proxyBodySize is used to set the NGINX proxy maximum body size. This is commonly required to allow a larger Docker image than the default. It is equivalent to the nginx['client_max_body_size'] configuration in an Omnibus installation. As an alternative option, you can set the body size with either of the following two parameters: gitlab.webservice.ingress.annotations."nginx\.ingress\.kubernetes\.io/proxy-body-size" global.ingress.annotations."nginx\.ingress\.kubernetes\.io/proxy-body-size" Resources Memory requests/limits Each pod spawns an amount of workers equal to workerProcesses, each of which uses some baseline amount of memory. We recommend: - A minimum of 1.25GB per worker ( requests.memory) - A maximum of 1.5GB per worker ( limits.memory) Note that required resources are dependent on the workload generated by users and may change in the future based on changes or upgrades in the GitLab application. Default: workerProcesses: 2 resources: requests: memory: 2.5G # = 2 * 1.25G # limits: # memory: 3G # = 2 * 1.5G With 4 workers configured: workerProcesses: 4 resources: requests: memory: 5G # = 4 * 1.25G # limits: # memory: 6G # = 4 * 1.5G External Services Redis The Redis documentation has been consolidated in the globals page. Please consult this page for the latest Redis configuration options. PostgreSQL The PostgreSQL documentation has been consolidated in the globals page. Please consult this page for the latest PostgreSQL configuration options. GitLab Shell GitLab Shell uses an Auth Token in its communication with Webservice. Share the token with GitLab Shell and Webservice using a shared Secret. shell: authToken: secret: gitlab-shell-secret key: secret port: WebServer options The current version of the chart supports the Puma web server. Puma unique options: Configuring the networkpolicy This section controls the NetworkPolicy. This configuration is optional and is used to limit Egress and Ingress of the Pods to specific endpoints. Example Network Policy The webservice service requires Ingress connections for only the Prometheus exporter if enabled and traffic coming from the NGINX Ingress, and normally requires Egress connections to various places.
This example adds the following network policy: - All Ingress requests from the network on TCP 10.0.0.0/8 port 8080 are allowed for metrics exporting and NGINX Ingress - All Egress requests to the network on TCP 10.0.0.0/8 port 8075 are allowed for Gitaly - Other Egress requests to the local network on 10.0.0.0/8 are restricted - Egress requests outside of the 10.0.0.0/8 are allowed Note that the example provided is only an example and may not be complete. Note that the Webservice requires outbound connectivity to the public internet for images on external object storage. networkpolicy: enabled: true ingress: enabled: true rules: - from: - ipBlock: cidr: 10.0.0.0/8 ports: - port: 8080 protocol: TCP egress: enabled: true rules: - to: - ipBlock: cidr: 10.0.0.0/8 ports: - port: 8075 protocol: TCP - to: - ipBlock: cidr: 0.0.0.0/0 except: - 10.0.0.0/8 LoadBalancer Service If the service.type is set to LoadBalancer, you can optionally specify service.loadBalancerIP to create the LoadBalancer with a user-specified IP (if your cloud provider supports it). When the service.type is set to LoadBalancer you must also set service.loadBalancerSourceRanges to restrict the CIDR ranges that can access the LoadBalancer (if your cloud provider supports it). This is currently required due to an issue where metric ports are exposed. Additional information about the LoadBalancer service type can be found in the Kubernetes documentation. service: type: LoadBalancer loadBalancerIP: 1.2.3.4 loadBalancerSourceRanges: - 10.0.0.0/8
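To make the proxyBodySize discussion under Ingress Settings concrete, the sketch below shows how a values file might raise the body-size limit when deploying through the parent gitlab chart. This is an assumption-laden illustration: the 512m value is arbitrary, and the layout follows the parameter paths quoted in that section rather than a verified values file.
gitlab:
  webservice:
    ingress:
      # chart-level setting, inherited by each deployments entry
      proxyBodySize: 512m
      # or, equivalently, via the raw NGINX annotation named earlier:
      # annotations:
      #   nginx.ingress.kubernetes.io/proxy-body-size: 512m
The same annotation can also be passed on the command line with --set, escaping the dots in the annotation key exactly as shown in the proxyBodySize section above.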
https://docs.gitlab.com/charts/charts/gitlab/webservice/index.html
2021-07-23T23:21:04
CC-MAIN-2021-31
1627046150067.51
[]
docs.gitlab.com
New tool: "Microsoft Kerberos Configuration Manager for SQL Server" is ready to resolve your Kerberos/Connectivity issues. It can perform the following functions: - Gather information on OS and Microsoft SQL Server instances installed on a server. - Report on all SPN and delegation configurations on the server. - Identify potential problems in SPNs and delegations. - Fix potential SPN problems. Supported Operating System Windows 7, Windows 8, Windows Server 2008 R2 SP1, Windows Server 2012 The following are required on the machine where the Kerberos Configuration Manager for SQL Server is launched: - .Net framework 4.0 or higher To Install: - Download the 32bit or 64bit version of the Kerberos Configuration Manager installer that matches your computer’s OS architecture. - Click Open to start the installation immediately or click Save to save the installation .msi file to disk and install it later. - Accept the license term of this tool. - Click Next to complete the installation. To Launch the Tool: - After the installation is complete successfully, double click the KerberosConfigMgr.exe to launch the application. To Generate SPN List from Command Line: - Go to command line. - Switch to the folder where KerberosConfigMgr.exe is. - Type KerberosConfigMgr.exe -q -l - For more command line option, type KerberosConfigMgr.exe -h To Save a Server’s Kerberos Configuration Information: - Connect to the target windows server. - Click on Save button on the toolbar - Specify the location where you want the file to be saved at. It can be on a local drive or network share. - The file will be saved as .XML format. To View a Server’s Kerberos Configuration Information from Saved File: - Click on the Load button on the toolbar. - Open the XML file generated by Kerberos Configuration Manager. To Generate a Script to Fix SPN from Command Line: - Click on the Generate button for the SPN entry. - The generated script can be used by a user who has privilege to fix the SPN on the server. To See the Log Files for this Tool: - By default, one log file is generated in the user’s application data folder. To Get Help: Option 1: Hover the mouse cursor over the command for tooltip. Option 2: Run KerberosConfigMgr.exe –h from command line Option 3: Click the Help button in the toolbar.
https://docs.microsoft.com/en-us/archive/blogs/farukcelik/new-tool-microsoft-kerberos-configuration-manager-for-sql-server-is-ready-to-resolve-your-kerberosconnectivity-issues
2021-07-23T23:42:11
CC-MAIN-2021-31
1627046150067.51
[]
docs.microsoft.com
Example Games iOSiOS - Sprity Bird: An open source iOS Flappy Bird clone implemented in SpriteKit - Cookie Crunch Adventure: a popular Swift example project - Unity iOS example: a simple iOS integration with a unity application - 2048: an open source iOS Native 2048 game UnityUnity - 3D Cave Runner: a simple Android + Unity infinite-runner - BlockBreaker: includes turn-based tournaments and the skillz_difficulty parameter - Pinball: an open source Cross Platform Unity game
https://docs.skillz.com/docs/v22.0.18/example-games/
2021-07-23T22:17:25
CC-MAIN-2021-31
1627046150067.51
[]
docs.skillz.com
ak.mixin_class¶ Defined in awkward.behaviors.mixins on line 10. - ak.mixin_class(registry)¶ - Parameters registry (dict) – The destination behavior mapping registry. Typically, this would be the global registry ak.behavior, but one may wish to register methods in an alternative way. This decorator can be used to register a behavior mixin class. Any inherited behaviors will automatically be made available to the decorated class. See the “Mixin decorators” section of ak.behavior for further details.
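A minimal usage sketch (not taken from the reference entry above, and simplified): register a mixin class in a behavior dictionary, then build an array whose records carry the matching name so the method becomes available on the array.
import awkward as ak

behavior = {}  # a custom registry; the global ak.behavior could be used instead

@ak.mixin_class(behavior)
class Point:
    def mag2(self):
        # record fields are available as attributes
        return self.x ** 2 + self.y ** 2

points = ak.Array(
    [{"x": 1.0, "y": 2.0}, {"x": 3.0, "y": 4.0}],
    with_name="Point",      # must match the registered mixin class name
    behavior=behavior,
)
print(points.mag2())        # the method is applied across the whole array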
https://awkward-array.readthedocs.io/en/latest/_auto/ak.mixin_class.html
2021-07-23T23:25:20
CC-MAIN-2021-31
1627046150067.51
[]
awkward-array.readthedocs.io
Example #About Misakey Misakey is a single account infrastructure for data portability. It provides the API to simplify and secure the access to client data for you, for your clients and your partners. Check out our website misakey.com to get the full picture. #Rose's journey with Misakey Rose is a regular Internet user. Rose is also a serious trail runner. She needs new pairs of shoes twice a year. A big race is coming next month. Rose has recently bought new shoes but the pair does not suit her so well. She has to find new ones. As Rose browses through her favorite trail shoes shop, my-trail-shoes.com, she finds a keen new model that looks perfect. Misakey Checkout is activated on the my-trail-shoes.com website. After adding to her cart the keen new shoes she wants, Rose proceeds to buy the shoes. As Rose has a Misakey account, she clicks on the "Checkout with Misakey" button to select the shipping address and the payment information to confirm her purchase. Rose is good to go, set up for her next race. #Vault The my-trail-shoes.com website protects its clients' personal data with Misakey Vault. As Rose confirms her purchase, she receives a confirmation email. The email contains a magic link to her invoice stored in her my-trail-shoes.com Vault on the Misakey infrastructure. Just like her purchase experience is both practical and safe, Rose wants her personal data to be protected from her email provider. Rose also likes to get simple access to her purchases and invoices history in her my-trail-shoes.com Vault; emails are a mess. #Connect As her new keen trail shoes are on their way, Rose wants to sell the current pair she barely used. occazshop.com is Rose's favorite resale online shop. The occazshop.com website has activated many Misakey features. As Rose browses to occazshop.com, she signs in with a simple click on the "Connect with Misakey" button. She instantly confirms her ID and continues to occazshop.com. #Identity For KYC purposes, occazshop.com requires a confirmation of Rose's personal information: Full Name, Email address and Phone number. occazshop.com displays a request for information banner as Rose browses the website. Rose chooses to confirm her information with Misakey Identity as it is the fastest and most reliable solution. She reviews the information shared with occazshop.com. As it is still up to date, she confirms and continues back to the occazshop.com website. Rose is thrilled; the experience was smooth, fast and form-free. #Sync Rose wants to sell her shoes at a good price as she bought them only a few days ago. As Rose starts creating a new offer to sell her shoes on occazshop.com, the website gives her the opportunity to import product information and the official invoice from the purchase on my-trail-shoes.com. Rose clicks on the sync button, consents to sync invoices from my-trail-shoes.com to occazshop.com and continues back to creating the offer to sell her shoes. Rose can select the right invoice on occazshop.com to make her selling offer more valuable to the buyers. She can peacefully put a fair price on her sale. Rose is happy because she knows people will have a better perception of how new and unused her trail shoes are. #Going further Should you like to activate Misakey features for your website or application, detailed information can be found in the Misakey documentation. Feel free to start a conversation.
https://backend.docs.misakey.dev/demo/
2021-07-23T21:05:55
CC-MAIN-2021-31
1627046150067.51
[array(['/assets/images/checkout-cd9c140478f3213bceb8e534b24929a9.png', 'Misakey checkout illustration'], dtype=object) array(['/assets/images/vault-b86a043200b6ac42aa268fc41dad6e4d.png', 'Misakey vault illustration'], dtype=object) array(['/assets/images/connect-fa22a5ffaf9714067e6318d54001010f.png', 'Misakey connect illustration'], dtype=object) array(['/assets/images/KYC-6425b40b2cc894bae2eea81cc8e14257.png', 'Misakey Identity illustration'], dtype=object) array(['/assets/images/sync-28d9ec988b789d478570bb1d3b0bffdf.png', 'Misakey Sync illustration'], dtype=object) ]
backend.docs.misakey.dev
FSMO Role (AD_RMT_FSMO_ROLE_CONNECTIVITY) Each instance monitors the connectivity status of one of the FSMO role holders for the domain controller. Note The FSMO Role monitor type in BMC ProactiveNet Performance Management is referred as the AD_RMT_FSMO_ROLE_CONNECTIVITY application class in BMC PATROL. Domain controllers must be able to locate and establish connection with the FSMO role holders. There will be one instance named after each FSMO role: - Schema Master - Domain naming master - Relative ID master - PDC emulator - Infrastructure master To access the AD_RMT_FSMO_ROLE_CONNECTIVITY application class, double click the FSMO Connectivity icon. Default properties Additional details when using the TrueSight console Each tab provides options for a specific configuration. Depending on what you want to monitor, you must specify values in the respective tabs. The following table lists the tabs and the configurations that you can do in them: Add and configure monitor types to a monitoring solution You can add and configure monitor types for the compatible PATROL monitoring solutions that are located in the Deployable Package Repository. For a list of monitoring solutions and monitor types that you can configure, see configuration, click the action menu for that configuration and select Delete. After you save the policy, the deleted monitor type configuration is removed from the selected PATROL Agents. After you save or update a policy, the monitor type configurations are pushed to the selected PATROL Agents. Configure filters to include or exclude data and events After you configure the monitor types in the PATROL Agents, they send the collected data and the generated events to the Infrastructure Management server. You can configure filters to. Based on parameter usage, ensure that you store data for and configure the following parameters in the Infrastructure Management server database: - KPI parameters - Parameters required in performance reporting - Parameters requiring “duration” thresholds. For example, you do not want an event unless the parameter has breached a threshold for 15 minutes. (PATROL Agent does not support this capability.) - Parameters requiring “time of day” type thresholds. (This is accomplished using baselines.) - Parameters for which predictive event generation and abnormality detection are required. This generally applies to all KPIs, which can be extended. Note: A PATROL Agent applies a filtering policy during the next scheduled discovery or collection of a PATROL object (monitor type or attribute). That is, filtering is applied when the first collection occurs on the PATROL object after filtering rules are applied. Due to this behavior, there might be a delay in the deployment of the filtering rules on the PATROL object. -. - To modify the properties of a polling interval: Click the action menu associated with that polling interval, and select Edit. - To delete a polling interval: Click the action menu associated with that polling interval and select Delete. Define thresholds on PATROL Agents Configure range-based thresholds for attributes of a monitor type on the PATROL Agents. You can specify whether the thresholds apply to a monitor type or to an instance of a monitor type. Best Practice for KPIs and performance parameters in the Infrastructure Management server before you define the server and agent thresholds. Server thresholds override global thresholds. - Use server thresholds for instance-level thresholds. 
To configure the server thresholds: - Click the Server Threshold tab. - Click Add Server Threshold. Specify values for the following properties: - Click OK. - To modify parameters of a threshold, click the action menu for that threshold and select Edit. - To remove a threshold, click the action menu for that threshold and select Delete. After you save the policy, the deleted threshold configurations are removed from the selected PATROL Agents. After you save or update a policy, the new threshold configurations are pushed to the selected PATROL Agents. Configure PATROL Agents You can configure the properties of a PATROL Agent and specify the action that the Agent must perform when the policy is applied. For more information, see Specifying objects in an authorization profile . Click the Agent tab. Specify the following properties: Configure actions performed by Infrastructure Management server Specify actions to be performed on the Infrastructure Management server when the policy is applied. The actions apply to all devices that are associated with the PATROL Agent and all the monitor instances that the Agent monitors. Best Practice - Do not use the automated group creation functionality excessively. Plan the groups that you need and configure accordingly. - Use the copy baseline feature only when you know the existing baseline is appropriate for a new Agent or device. For example, if you are adding an additional server to an Apache web server farm behind a load balancer where the new server has exactly the same configuration as the other servers in the farm (OS version, machine sizing and type, Apache version, Apache configuration) and the new Apache web server processes exactly the same types of transactions for the same application. If you are not certain, do not use the copy baseline feature. - Click the Server tab. - Specify values for the following properties: Define and manage configuration variables You can define individual configuration variables or import them from a ruleset file (.cfg). The PATROL Agent configuration is saved in a set of configuration variables that are stored in the Agent's configuration database. You can control the PATROL Agent configuration by changing the values of these configuration variables. Also, you can define a configuration variable, and the definitions are set on PATROL Agent when the policy is applied. Note To view the configuration variables that are available in the previous PATROL Agent versions, use the Query Agent functionality. If you are modifying the default Agent configuration, you must restart the PATROL Agent to reflect the changes. Best Practice - Avoid creating a policy with both monitoring configuration and a configuration variable. You can create separate policies for monitoring configuration and configuration variables. - To keep the PATROL Agent in sync with the policy configuration, change an existing configuration variable's operation to DELVAR, instead of deleting it. After a configuration variable is deleted from the policy, you cannot perform any actions on it. To import existing configuration variables - In the Configuration Variable page, click the common action menu in the table and select Import. - Browse for and select the configuration file (.cfg) to be imported. - Click Open. The variables from the file are added to the table. Note The import operation supports only REPLACE, DELETE, and DELVAR operators. If the .cfg file contains the MERGE or APPEND operators, the file cannot be imported. 
You must delete these operators before importing the file. To add new configuration variables - Click Add Configuration Variable. In the Add Configuration Variable dialog box, specify values for the following properties and click OK: To add another configuration variable, repeat the earlier steps. Notes: For the defaultAccountconfiguration variable, specify the value in the userName/password format. Note that the password can be a plain text or a PATROL Agent-encrypted string. Examples: patrol/patAdm1n patrol/FA4E70ECEAE09E75A744B52D2593C19F For the SecureStoreconfiguration variable, specify the value in the context/data format. Note that the context and data can be a plain text or a PATROL Agent-encrypted string. Examples: MY_KM1;MY_KM2;MY_KM3/mysecretdata “EDC10278901F8CB04CF927C82828595B62D25EC355D0AF38589CE4235A246F8C63F24575073E4ECD” where “EDC10278901F8CB04CF927C82828595B62D25EC355D0AF38589CE4235A246F8C63F24575073E4ECD”is the encrypted form of "MY_KM1;MY_KM2;MY_KM3/mysecretdata" - To modify any value in a variable, click the action menu for the variable and select Edit. In the Edit Configuration Variable dialog box, modify the properties and click OK. - To remove a variable, click the action menu for the variable and select Delete. InfoBox The AD_RMT_FSMO_ROLE_CONNECTIVITY InfoBox contains only the standard InfoBox fields. Attributes (parameters) The following attributes are available for this monitor type:
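For reference, a ruleset file of the kind imported above is a plain text file. The sketch below is illustrative only: the PATROL_CONFIG header and the /AgentSetup/defaultAccount path follow common PATROL conventions rather than anything stated on this page, and the value is the encrypted userName/password example given earlier.
PATROL_CONFIG
"/AgentSetup/defaultAccount" = { REPLACE = "patrol/FA4E70ECEAE09E75A744B52D2593C19F" }
Only the REPLACE, DELETE, and DELVAR operators are accepted by the import; files containing MERGE or APPEND must be edited before they can be imported.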
https://docs.bmc.com/docs/PATROL4Windows/51/fsmo-role-ad_rmt_fsmo_role_connectivity-758870484.html
2021-07-23T22:42:48
CC-MAIN-2021-31
1627046150067.51
[]
docs.bmc.com
Most basic eCommerce stores use a "cart" to collect items/services and a "checkout" to collect payment and customer information. At the heart of Drupal Commerce is the ubiquitous Order. This Order is first created when a user adds a product to a cart. The Order is actually the "cart" when it has a status of "Shopping Cart." Since Drupal Commerce is an open eCommerce framework, these statuses can be added with additional modules (like shipping or customer profiles). Below are some topics related to the add to cart form.
https://docs.drupalcommerce.org/commerce1/user-guide/shopping-cart
2021-07-23T21:30:40
CC-MAIN-2021-31
1627046150067.51
[]
docs.drupalcommerce.org
To install this theme you must have a working version of WordPress already installed. If you need help installing WordPress, follow the instructions in WordPress Codex. Please Note: To use a theme you need a WordPress installation from WordPress.org running on your own web server. - How to install WordPress – - First Steps with WordPress – - FAQ New To WordPress –
https://docs.eagle-themes.com/kb/general/wordpress-information/
2021-07-23T21:40:45
CC-MAIN-2021-31
1627046150067.51
[]
docs.eagle-themes.com
API Reference Introduction The Application Programming Interface (API) document describes how to process and run various payment-related methods programmatically, including Card (credit / debit) and Automated Clearing House (ACH) transactions. The Web Service is currently released to Production and is available for use by approved entities. The web services may be modified in future releases. The API utilizes REpresentational State Transfer (REST) and JavaScript Object Notation (JSON) to communicate. More about the JSON data format can be found at. These technologies are supported by every major programming language and environment. Intended Audience The intended audience for this document is technical managers and software development professionals. Web Services Summary There are various features of the API and some methods are not available unless the Client is signed up for certain features of the system. For example, a client not signed up for ACH service will receive a 403 response with a message that notifies the user they are not signed up for the service. API Version Numbering Please pay attention to the version number of the API you are utilizing. The version number is broken up into three positions: compatibility.features.fixes. For example, in v1.2.3 the first position (1 in this example) represents the compatibility level. Any application utilizing version 1 of the API will always work with any other revision of version 1. Should the need ever arise to change the scope of the API, a version 2 will be released which will not be backwards compatible. The latest compatibility version of the API will be available in the event a version 2 is released. If a new compatibility version of the API is released, the URL will reflect the change to ensure integrations written utilizing any version will continue to function. The second position (2 in this example) refers to the addition of new fields/features. These features provide additional functionality and are backwards compatible. The third position (3 in this example) indicates bug fixes, when nothing structural has changed within the API. HTTPS (TLS1.2) All requests MUST utilize HTTPS/TLS1.2 to communicate with the API. Non-TLS transactions are not supported and any non-TLS transactions will not receive a response. Overview Many APIs are similar in the way that they handle requests and responses. Ours is not much different from the rest. Most of the same concepts apply, only the parameters may vary slightly. This section describes the basics of our API, including specific details that you will need in order to properly integrate with us. Request Types Our API is a simple REST-based API. It follows most of the standards for submitting data and handling requests. Each REST request and response has a specific format for the payload. The available request types for the API endpoints and their intended purpose are as follows: - POST - Creates new records - PUT - Updates existing records - GET - Queries records - DELETE - Deletes existing records Response Codes Each of the above request types will return an HTTP response code. Any HTTP response in the 2xx or 3xx range is considered a successful response. Any response in the 4xx or 5xx range is considered a failure response. The following matrix shows the possible return codes for each request type. * - Special case used for contactsso endpoint
In order to determine if a transaction (which is a specific endpoint) is actually approved or declined, you will need to look at the payload body and evaluate the status_id field. More about this can be found by visiting the transactions endpoint documentation. Endpoints Below you will find a list of all our Endpoints. You can click on any of the specific Endpoints to view information about allowed request types, expected format, and more.
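To make the split between HTTP response codes and transaction status concrete, a client call might look like the sketch below. Everything specific in it is a placeholder: the host, endpoint path, headers, and request fields are assumptions for illustration and must be taken from the individual endpoint documentation; only the 2xx/3xx versus 4xx/5xx rule and the status_id check come from this page.
import requests

BASE_URL = "https://api.example.com"   # placeholder host
HEADERS = {
    "Content-Type": "application/json",
    # authentication headers go here, per the endpoint documentation
}

payload = {"amount": "10.00"}          # illustrative field only

resp = requests.post(f"{BASE_URL}/transactions", json=payload, headers=HEADERS, timeout=30)

if resp.ok:
    # A 2xx/3xx code only means the REST request itself succeeded ...
    body = resp.json()
    # ... approval or decline is determined by the payload's status_id field.
    print("transaction status_id:", body.get("status_id"))
else:
    # 4xx/5xx is a failure response, e.g. 403 when not signed up for a service.
    print("request failed:", resp.status_code, resp.text)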
https://docs.fortispay.com/developers/api
2021-07-23T21:34:29
CC-MAIN-2021-31
1627046150067.51
[]
docs.fortispay.com
When a certain number of incorrect login attempts occur, the Survey Solutions login form will start presenting a CAPTCHA challenge: The CAPTCHA shown above appears if the server administrator has configured Survey Solutions to utilize Google’s CAPTCHA service. If this service is not used, Survey Solutions will rely on its built-in CAPTCHA, which appears like the following: If the user continues to enter incorrect combinations of login/password even with the CAPTCHA, an autolock will eventually engage.
https://docs.mysurvey.solutions/headquarters/accounts/captcha/
2021-07-23T23:12:46
CC-MAIN-2021-31
1627046150067.51
[array(['images/captcha_tractors.png', None], dtype=object) array(['images/captcha_default.png', None], dtype=object)]
docs.mysurvey.solutions
Reputation requirements Review the requirements before you install and use Reputation. Tanium dependencies Make sure that your environment meets the following requirements. Tanium™ Module Server Reputation is installed and runs as a service on the Module Server host computer. The impact on the Module Server is minimal and depends on usage. The Reputation service is automatically disabled when the disk usage of the Module Server exceeds the value of the Maximum Disk Capacity setting. The default value is 85%. For more information on how to configure the Reputation service settings, see Configure Reputation service settings. Endpoints Reputation does not deploy packages to endpoints. For Tanium Client operating system support, see Tanium Client Management User Guide: Client version and host system requirements. Third-party software With Reputation, you can integrate with several different kinds of third-party software. If no specific version is listed, there are no version requirements for that software. - Palo Alto Networks WildFire - Recorded Future - ReversingLabs A1000 - ReversingLabs TitaniumCloud - VirusTotal Host and network security requirements Specific ports and processes are needed to run Reputation. Ports For Tanium as a Service ports, see Tanium as a Service Deployment Guide: Host and network security requirements. The following ports are required for Reputation communication. No additional process exclusions are required. Internet URLs If security software is deployed in the environment to monitor and block unknown URLs, your security administrator might need to allow the following URLs. - recordedfuture.com - reversinglabs.com - virustotal.com - wildfire.paloaltonetworks.com User role requirements The following tables list the role permissions required to use Reputation. For more information about role permissions and associated content sets, see Tanium Core Platform User Guide: Managing RBAC. For more information and descriptions of content sets and permissions, see Tanium Core Platform User Guide: Users and user groups. Last updated: 7/19/2021 8:15 PM
https://docs.tanium.com/reputation/reputation/requirements.html
2021-07-23T22:33:34
CC-MAIN-2021-31
1627046150067.51
[]
docs.tanium.com
UTM - Unified Threat Management UTM simplifies the administration of your network, increases the performance of your resources and raises the security level of your data, which guarantees high performance and advanced technologies against various malicious techniques and digital threats, in addition to having the best cost-benefit ratio. Blockbit Client reference manual (PDF) How to Import and Export UTM 1.5 to UTM 2.0 How to Import and Export UTM 1.5 to UTM 2.0 (PDF) How to: IPSEC VPN Redundancy with BGP Dynamic Routing (PDF) Installation Guide (PDF) AWS Installation Guide (PDF) Oracle Installation Guide (PDF) How to upgrade Kernel (PDF) How to install UTM on Oracle Cloud (PDF) SSO Configuration Tutorial for UTM 1.5.5 (PDF) BLOCKBIT UTM Site-to-Site with AZURE (PDF) GSM - Global Security Management Network administrators will find it simpler to group devices and users into templates to assess traffic, deploy the same configuration to any security control on the platform, firewall, IPS, Secure Web Gateway, Advanced Threat Protection, VPN, SD-WAN among others. Appliance Manuals and Datasheets Explore the technical specifications of each Blockbit Appliance model and check out our Datasheets. Appliance Manual BB 50-C-02 Appliance Manual BB 100-C-02 Appliance Manual BB 500-C Appliance Manual BB 500-D Appliance Manual BB 1000-C-01 Appliance Manual BB 1000-D Appliance Manual BB 10000-D Appliance Manual BB 15000-E Legacy Consult the documents of Blockbit products that have already reached their End Of Life. SMX - Secure eMail eXchange VCM - Vulnerability and Compliance Management - Blockbit UTM - How to: Import and Export UTM 1.5 to UTM 2.0 (Resource Center - EN/US) — Thank you for choosing Blockbit. In this document we will discuss how to perform the export and import process from UTM 1.5 to UTM 2.0. After reading and applying the steps in this Tutorial you will be able to export your Blockbit UTM data to the most updated version with ease and security. - Interfaces - 3G / 4G / LTE connection (Resource Center - EN/US) — In view of the importance of keeping the network always available and operational efficiency, some Blockbit appliances have built-in 3G / 4G / LTE module, providing connectivity to the mobile network infrastructure and allowing compatible appliances to have access. This solution is particularly useful in operations established in regions with unstable network performance, the feature aims to be used as a cost-effective alternative or also serving as a contingency network to guarantee availability in case of any unforeseen or bottleneck in the network. - Provisioning - Actions menu - Create Device (Resource Center - EN/US) — To perform Zero Touch provisioning, the device must be properly licensed, the license is always linked to a company's e-mail and to a UUID, this step is essential because the approval and confirmation of the provisioning is sent by e-mail, in addition because all provisioning is tied to the UUID of an appliance. - UTM - Services - SD-WAN (Resource Center - EN/US) — Blockbit UTM contemplates multiple internet links, being able to segment and prioritize traffic through network interfaces according to the data obtained by monitoring various performance indicators, allowing traffic to be routed through the interfaces configured through the best path available, this benefit is obtained through the SD-WAN.
https://docs.blockbit.com/display/RCE/Resource+Center
2021-07-23T22:57:35
CC-MAIN-2021-31
1627046150067.51
[]
docs.blockbit.com
You may be interested in using a project group to bring your projects together. Add your projects to your project group as the initial step.
https://docs.collab.net/teamforge200/addprojectstoaprojectgroup.html
2021-07-23T22:02:53
CC-MAIN-2021-31
1627046150067.51
[]
docs.collab.net
Upgrade Paths in Modularity On the package level, upgrades of a modular system work the same way as on a traditional system — using NEVRA comparison to determine which packages are the newest. There is, however, one additional step right before the NEVRA comparison that needs to happen on a modular system — limiting which packages are going to be part of the comparison based on what modules are enabled — and that is the key difference. There are up to three classes of RPM packages available to a modular system: Standalone packages (also referred to as "bare RPMs" or "ursine RPMs") — packages that are not part of a module. In Fedora, these come from the Everything repository. Modular packages — packages that are part of a module. In Fedora, these come from the Modular repository. Hotfixes — standalone packages created on-demand by users or vendors, meant to fix a critical issue before an official upgrade comes from the distribution. These need to be provided in a separate repository with a hotfix flag set. Fedora doesn’t provide such packages. To determine the limited set of packages for the NEVRA comparison, the following algorithm is used: Take all standalone packages. Add modular packages that are part of an enabled module, potentially replacing any standalone packages with the same name. To do this, look at the target modulemd rather than the current one. This step ensures that modular packages always have a higher priority than standalone packages. In other words, standalone packages can never upgrade modular packages. Add all hotfix packages. These are just added, not replacing anything. That means hotfixes can potentially upgrade both traditional and modular packages. The next step is to take this set of packages, and run a NEVRA comparison to determine the highest version of each package. The highest versions are then installed as a part of the upgrade process.
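The selection step described above can be sketched in Python. This is an illustration only: the real logic lives in the package manager and uses proper RPM EVR comparison, for which the evr_key helper below is a stand-in, and the dictionary fields are assumed for the sketch.
def candidate_packages(standalone, modular, hotfixes, enabled_modules):
    """Build the package set that enters the NEVRA comparison on a modular system."""
    # 1. Start with all standalone ("ursine") packages, keyed by name.
    candidates = {pkg["name"]: pkg for pkg in standalone}

    # 2. Modular packages from enabled modules replace standalone packages of the
    #    same name, so a standalone package can never upgrade a modular one.
    for pkg in modular:
        if pkg["module"] in enabled_modules:
            candidates[pkg["name"]] = pkg

    # 3. Hotfixes are simply added, not replacing anything, so they can
    #    upgrade both traditional and modular packages.
    pool = list(candidates.values()) + list(hotfixes)

    # 4. NEVRA comparison: the highest version of each package name wins.
    best = {}
    for pkg in pool:
        name = pkg["name"]
        if name not in best or evr_key(pkg) > evr_key(best[name]):
            best[name] = pkg
    return best


def evr_key(pkg):
    # Stand-in only: real code must use rpm's epoch/version/release comparison rules.
    return (pkg["epoch"], pkg["version"], pkg["release"])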
https://docs.fedoraproject.org/fur/modularity/architecture/consuming/upgrade-paths/
2021-07-23T23:21:22
CC-MAIN-2021-31
1627046150067.51
[]
docs.fedoraproject.org
Telos Amend is a Decentralized Document Amendment service for the Telos Blockchain Network. It allows any Telos holder to create text documents or links and allow a group of decentralized voters to propose amendments. The above fees are adjustable by a Block Producer multisig or a referendum by the Telos token holders. To make a deposit to cover service fees, simply transfer TLOS to the amend.decide account. Telos Amend will catch the transfer and automatically create a deposit balance, if one doesn't already exist. Further deposits and withdrawals will automatically flow from this deposit balance. To make a withdrawal from Telos Amend, call the withdraw action on the amend.decide contract.
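As a sketch of that flow from the command line with cleos (the account name, amounts, and especially the parameter names of the withdraw action are assumptions for illustration and should be checked against the deployed amend.decide ABI):
# Fund the fee deposit: a plain TLOS transfer to amend.decide creates or tops up the balance.
cleos transfer youraccount amend.decide "10.0000 TLOS" "deposit"

# Reclaim unused funds by calling the withdraw action on the amend.decide contract.
cleos push action amend.decide withdraw '{"account_name": "youraccount", "quantity": "5.0000 TLOS"}' -p youraccount@active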
https://docs.telos.net/developers/telos_contracts/telos-amend
2021-07-23T21:55:51
CC-MAIN-2021-31
1627046150067.51
[]
docs.telos.net
1) * The Default (allowed/denied) values are based on the assumption that you use the default permissions file included with the menu, and you’ve granted yourself no special permissions or added yourself to any of the admin/moderator groups. If you DON’T use the default permissions file, then every option will be DENIED by default. 2) ** These options are only allowed by default for the “Moderators” / “Admins” groups in the provided permissions file with this resource. 3) *** When spawning a car using the Spawn By Name button, it will always check to see if you have permission for that specific vehicle’s class. eg: If you don’t have permission to spawn cars from the Super class, trying to spawn an adder using the Spawn By Name button won’t work. 4) **** Only admins are allowed to use this by default. Adding/Removing/Customizing any weapon is automatically ALLOWED when you give the player permissions to access this menu. For a list of individual weapon permissions check this link. The Save Personal Settings option in the Misc Settings Menu is always allowed, so there’s no permission line for that. Also the menu itself does not have a permisison to access it, for the same reason why saving preferences is always allowed. The About vMenu submenu is always available for everyone, and can not be disabled with the use of permissions. If you don’t feel like showing credits to everyone –which seems very selfish to me– then you’ll have to edit the code and disable it yourself, which also means I won’t be giving you any support whatsoever. Consider supporting me on Patreon!
https://docs.vespura.com/vmenu/permissions-ref/permissions/
2021-07-23T22:16:16
CC-MAIN-2021-31
1627046150067.51
[]
docs.vespura.com
Some vocabularies used (or which could be used) for stock record attributes and coded value maps in Evergreen are published on the web using SKOS. The record attributes system can now associate Linked Data URIs with specific attribute values. In particular, seed data supplying URIs for the RDA Content Type, Media Type, and Carrier Type has been added. This is an experimental, "under-the-hood" feature that will be built upon in subsequent releases.
http://docs.evergreen-ils.org/reorg/dev/opac/_skos_support.html
2018-07-15T23:23:04
CC-MAIN-2018-30
1531676589022.38
[]
docs.evergreen-ils.org