Columns: content (string, lengths 0–557k) · url (string, lengths 16–1.78k) · timestamp (timestamp[ms]) · dump (string, lengths 9–15) · segment (string, lengths 13–17) · image_urls (string, lengths 2–55.5k) · netloc (string, lengths 7–77)
Visualize real-time sensor data from your Azure IoT hub by using the Web Apps feature of Azure App Service
Note: Before you start this tutorial, set up your device. In that article, you set up your Azure IoT device and IoT hub, and you deploy a sample application to run on your device. The application sends collected sensor data to your IoT hub.
What you learn
In this tutorial, you learn how to visualize real-time sensor data that your IoT hub receives by running a web application that is hosted on a web app. If you want to try to visualize the data in your IoT hub by using Power BI, see Use Power BI to visualize real-time sensor data from Azure IoT Hub.
What you do
- Create a web app in the Azure portal.
- Get your IoT hub ready for data access by adding a consumer group.
- Configure the web app to read sensor data from your IoT hub.
- Upload a web application to be hosted by the web app.
- Open the web app to see real-time temperature and humidity data from your IoT hub.
What you need
- Set up your device, which covers the following requirements:
  - An active Azure subscription
  - An IoT hub under your subscription
  - A client application that sends messages to your IoT hub
- Download Git
Create a web app
- In the Azure portal, click Create a resource > Web + Mobile > Web App.
- Enter a unique app name, verify the subscription, specify a resource group and a location, select Pin to dashboard, and then click Create. We recommend that you select the same location as that of your resource group. Doing so assists with processing speed and reduces the cost of data transfer.
Add a consumer group to your IoT hub
Consumer groups are used by applications to pull data from Azure IoT Hub. In this tutorial, you create a consumer group that the web app you configure later uses to read data from your IoT hub. To add a consumer group to your IoT hub, follow these steps:
- In the Azure portal, open your IoT hub. In the left pane, click Endpoints, select Events on the middle pane, enter a name under Consumer groups on the right pane, and then click Save.
Configure the web app to read data from your IoT hub
- Open the web app you've just provisioned.
- Click Application settings, and then, under App settings, add the following key/value pairs:
- Click Application settings, and, under General settings, toggle the Web sockets option, and then click Save.
Upload a web application to be hosted by the web app
On GitHub, we've made available a web application that displays real-time sensor data from your IoT hub. All you need to do is configure the web app to work with a Git repository, download the web application from GitHub, and then upload it to Azure for the web app to host.
- In the web app, click Deployment Options > Choose Source > Local Git Repository, and then click OK.
- Click Deployment Credentials, create a user name and password to use to connect to the Git repository in Azure, and then click Save.
- Click Overview, and note the value of Git clone url.
- Open a command or terminal window on your local computer. Download the web app from GitHub, and upload it to Azure for the web app to host. To do so, run the following commands:
git clone
cd web-apps-node-iot-hub-data-visualization
git remote add webapp <Git clone URL>
git push webapp master:master
Note: <Git clone URL> is the URL of the Git repository found on the Overview page of the web app.
Open the web app to see real-time temperature and humidity data from your IoT hub
On the Overview page of your web app, click the URL to open the web app. You should see the real-time temperature and humidity data from your IoT hub.
Note: Ensure the sample application is running on your device. If not, you will see a blank chart; refer to the tutorials under Set up your device.
Next steps
You've successfully used your web app to visualize real-time sensor data from your IoT hub. For an alternative way to visualize data from Azure IoT Hub, see Use Power BI to visualize real-time sensor data from Azure IoT Hub.
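The tutorial's web application is a Node.js app, but the role of the consumer group can be illustrated with a short Python sketch that reads the same device-to-cloud stream from the IoT hub's built-in, Event Hub-compatible endpoint. This is a hedged example, not part of the tutorial: the connection string and consumer group name are placeholders, and it assumes the azure-eventhub package is installed.

# pip install azure-eventhub
from azure.eventhub import EventHubConsumerClient

# Event Hub-compatible endpoint connection string from the IoT hub (placeholder)
CONNECTION_STR = "<Event Hub-compatible connection string>"
CONSUMER_GROUP = "<the consumer group you added to the IoT hub>"

def on_event(partition_context, event):
    # Each event body carries the JSON telemetry (for example, temperature and humidity)
    # that the sample device application sends to the IoT hub.
    print(partition_context.partition_id, event.body_as_str())

client = EventHubConsumerClient.from_connection_string(
    CONNECTION_STR, consumer_group=CONSUMER_GROUP
)
with client:
    # starting_position="-1" reads from the beginning of the retained stream
    client.receive(on_event=on_event, starting_position="-1")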
https://docs.microsoft.com/en-gb/azure/iot-hub/iot-hub-live-data-visualization-in-web-apps
2018-07-16T04:59:30
CC-MAIN-2018-30
1531676589179.32
[array(['media/iot-hub-get-started-e2e-diagram/5.png', 'End-to-end diagram'], dtype=object) array(['media/iot-hub-live-data-visualization-in-web-apps/8_web-app-url-azure.png', 'Get the URL of your web app'], dtype=object) array(['media/iot-hub-live-data-visualization-in-web-apps/9_web-app-page-show-real-time-temperature-humidity-azure.png', 'Web app page showing real-time temperature and humidity'], dtype=object) ]
docs.microsoft.com
Exploring the Interface¶ The following sections will help you get to know Builder. Project Greeter¶ When you start Builder, you will be asked to select a project to be opened: The window displays projects that were discovered on your system. By default, the ~/Projects directory will be scanned for projects when Builder starts. Projects you have previously opened will be shown at the top. Selecting a project row opens the project or pressing “Enter” will open the last project that was open. You can also start typing to search the projects followed by “Enter” to open. If you’d like to remove a previously opened project from the list, activate Selection mode. Press the “Select” button in the top right corner to the left of the close application button and then select the row you would like to remove. Select the row(s) you’d like to remove and then click “Remove” in the lower left corner of the window. Workbench Window¶ The application window containing your project is called the “Workbench Window”. The Workbench is split up into two main areas. At the top is the Header Bar and below is the current “Perspective”. Builder has many perspectives, including the Editor, Build Preferences, Application Preferences, and the Profiler. Header Bar¶ The header bar is shown below. This contains a button in the top left for Switching Perspectives. In the center is the “OmniBar” which can be used to Build your Project. To the right of the OmniBar is the Run button. Clicking the arrow next to Run allows you to change how Builder will run your application. You can run normally, with a debugger, profiler, or even with Valgrind. On the right is the search box. Type a few characters from the file you would like to open and it will fuzzy search your project tree. Use “Enter” to complete the request and open the file. To the right of the search box is the workbench menu. You can find less-used features here. Switching Perspectives¶ To switch perspectives, click the perspective selector button in the top left of the workbench window. Perspectives that support a keyboard accelerator will display the appropriate accelerator next to name of the perspective. Select the row to change perspectives. Showing and Hiding Panels¶ Sometimes panels get in the way of focusing on code. You can move them out of the way using the buttons in the top left of the workbench window. When entering Fullscreen mode, Builder will automatically dismiss the panels for your convenience. Additionally, you can use the “left-visible” or “bottom-visible” commands from the Command Bar to toggle their visibility. Build your Project¶ To build your project, use the OmniBar in the center of the header bar. To the right of the OmniBar is a button for starting a build as shown in the image below. You can also use the “build”, “rebuild”, “install”, or “clean” commands from the command bar. While the project is building, the build button will change to a cancel button. Clicking the cancel button will abort the current build. Editor¶ When Builder opens your project, it will place you in the editor perspective. This is where you develop your project. Along the left is the project sidebar. It contains the project tree, list of open documents, todo items, and build errors. Generally, it contains the “source” or things to work on in your project. Along the bottom is the utilities panel. Here you will find things like the debugger, terminal, build, and application console. Autocompletion¶ Builder has built-in support for various autocompletion engines. 
Start typing to get word suggestions. Documentation¶ If you hover the pointer over API that Builder knows about, it can show you the documentation. You can also use F2 to bring up the documentation with your insertion cursor on the word. Use Shift+K if you’re using Vim keybindings. Splitting Windows¶ Builder can show you multiple editors side-by-side. In the editor view, use “Open in New Frame” to split a document into two views. Afterwards, you’ll see the editors side-by-side like this: To close a split, use the close button in the top right of the editor. Searching¶ You can search for files and symbols in your project using the search entry at the top right. To focus the search entry with the keyboard, use Control+.. You can fuzzy search for files by typing a few characters from the file name. Builder will automatically index your project into a database if it uses a supported language. You can search this database to jump to code such as functions or classes. Preferences¶ The preferences perspective allows you to change settings for Builder and its plugins. You can search for preferences using the keyword search in the top left of the preferences perspective. Command Bar¶ The command bar provides a command-line interface into Builder. You can type various actions to activate them. To display the command bar, use the Control+Enter keyboard shortcut. You can dismiss the command bar and return to the editor by pressing Escape. The command bar includes completion using Tab, similar to the terminal. Use this to explore the available commands.
http://builder.readthedocs.io/en/latest/exploring.html
2018-07-16T04:41:23
CC-MAIN-2018-30
1531676589179.32
[array(['_images/greeter.png', '_images/greeter.png'], dtype=object) array(['_images/workbench.png', '_images/workbench.png'], dtype=object) array(['_images/perspectives.png', '_images/perspectives.png'], dtype=object) array(['_images/panels.png', '_images/panels.png'], dtype=object) array(['_images/omnibar.png', '_images/omnibar.png'], dtype=object) array(['_images/building.png', '_images/building.png'], dtype=object) array(['_images/editor.png', '_images/editor.png'], dtype=object) array(['_images/autocompletion.png', '_images/autocompletion.png'], dtype=object) array(['_images/inline-documentation.png', '_images/inline-documentation.png'], dtype=object) array(['_images/open-in-new-frame-1.png', '_images/open-in-new-frame-1.png'], dtype=object) array(['_images/open-in-new-frame-2.png', '_images/open-in-new-frame-2.png'], dtype=object) array(['_images/file-search.png', '_images/file-search.png'], dtype=object) array(['_images/symbol-search.png', '_images/symbol-search.png'], dtype=object) array(['_images/preferences.png', '_images/preferences.png'], dtype=object) array(['_images/commandbar.png', '_images/commandbar.png'], dtype=object)]
builder.readthedocs.io
CIM_ProductSoftwareFeatures class
The CIM_ProductSoftwareFeatures association identifies the software features for a particular product.
[UUID("{7C39D12A-DB2B-11d2-85FC-0000F8102E5F}"), Association, Aggregation, Abstract, AMENDMENT]
class CIM_ProductSoftwareFeatures
{
  CIM_SoftwareFeature REF Component;
  CIM_Product REF Product;
};
The CIM_ProductSoftwareFeatures class has these types of members: Properties
The CIM_ProductSoftwareFeatures class has these properties.
- Component: Reference to the component.
- Product: Reference to the product.
Remarks
WMI does not implement this class. For WMI classes derived from CIM_ProductSoftwareFeatures, see Win32 Classes. This documentation is derived from the CIM class descriptions published by the DMTF. Microsoft may have made changes to correct minor errors, conform to Microsoft SDK documentation standards, or provide more information.
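Because CIM_ProductSoftwareFeatures is abstract, a query has to target one of the concrete Win32 classes in the same area. A minimal Python sketch, assuming the third-party wmi package is installed and that the Win32_SoftwareFeature class (derived from CIM_SoftwareFeature) exposes its owning product through a ProductName property — verify both names against the Win32 Classes reference:

# pip install WMI  (Windows only; wraps pywin32)
import wmi

conn = wmi.WMI()
for feature in conn.Win32_SoftwareFeature():
    # Each software feature records the product it belongs to.
    print(feature.ProductName, "->", feature.Name)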
https://docs.microsoft.com/en-us/windows/desktop/CIMWin32Prov/cim-productsoftwarefeatures
2018-07-16T05:46:44
CC-MAIN-2018-30
1531676589179.32
[]
docs.microsoft.com
Switch Remote Desktops or Published Applications
Horizon Client supports multiple remote desktop and application sessions when you use a Chromebook or an Android device in DeX desktop mode. You can switch between these remote desktop and application sessions.
Reconnecting to a Desktop or Published Application
For security purposes, a Horizon administrator can set timeouts that log you off of a server after a certain number of hours and that lock a published application after a certain number of minutes of inactivity.
https://docs.vmware.com/en/VMware-Horizon-Client-for-Android/4.7/horizon-client-android-user/GUID-45149DE2-75E1-4A7F-9B00-6981F97DF52C.html
2018-07-16T05:17:11
CC-MAIN-2018-30
1531676589179.32
[]
docs.vmware.com
Setting up a user¶ To explore the last details of setting up a backend user - and as an exercise - this chapter will guide you through the process of creating a new user. To make it more interesting, we will also create a new user group. Step 1: create a new group¶ Let's create a new user group using the Access module. Start by entering the name ("Resource editors"), optionally a description and choose "All users" as sub-group. Let's keep things simple for the further permissions. Try to do the following: - for Modules, just choose "Web > Page" and "Web > View" - for Tables (listing) and Tables (modify), choose "Page" and "Page content" - for Page types, select "Standard" and save. Move to the "Mounts and workspaces" tab and select the "Resources" page as DB mount. To achieve that easily start typing "Res" in the wizard at the right-hand side of the field. It will display suggestions, from which you can select the "Resources" page. Let's ignore all the rest. Use the "Save and close" action to get back to the group list. Step 2: create the user¶ Similarly to what we have done before, let's create a new user using the Access module. Enter the username, password, group membership: Note If we were creating a new administrator, we would just need to check the "Admin (!)" box. Admin users don't need to belong to any group, although this can still be useful to share special settings among administrators. Now switch to the "Mounts and workspaces" tab to ensure that the "Mount from Groups" settings are set: This makes it so that the DB and File mounts are taken from the group(s) the user belongs to and are not defined at user-level. Save and close the record. We will check the result of our work by using the simulate user feature we saw earlier. You should see the following:
https://docs.typo3.org/typo3cms/GettingStartedTutorial/UserManagement/UserSetup/Index.html
2018-07-16T04:59:38
CC-MAIN-2018-30
1531676589179.32
[array(['../../_images/BackendAccessCreateNewGroup1.png', 'Creating a new backend group from the Access module'], dtype=object) array(['../../_images/BackendAccessNewGroupGeneralTab1.png', 'Entering the general information about the new group'], dtype=object) array(['../../_images/BackendAccessNewGroupDBMount1.png', 'Defining DB mounts using the suggest wizard'], dtype=object) array(['../../_images/BackendAccessCreateNewUser1.png', 'Creating a new backend user from the Access module'], dtype=object) array(['../../_images/BackendAccessNewUserGeneralTab1.png', 'Setting the base information for the new user'], dtype=object) array(['../../_images/BackendAccessNewUserMountFromGroups1.png', 'Checking the "Mount from groups" setting'], dtype=object) array(['../../_images/BackendAccessSimulateResourceEditor1.png', "Let's simulate our new user!"], dtype=object) array(['../../_images/BackendResourceEditorUser1.png', 'The backend as seen by Resource McEditor'], dtype=object)]
docs.typo3.org
Puppet 3.6 Release Notes Included in Puppet Enterprise 3.3. A newer version is available; see the version menu above for details. This page tells the history of the Puppet 3.6 series. (Elsewhere: release notes for Puppet 3.0 – 3.4 and Puppet 3.5.) Puppet 3.6.2 Released June 10, 2014. Puppet 3.6.2 is a security and bug fix release in the Puppet 3.6 series. It addresses two security vulnerabilities and includes fixes for a number of fairly recent bugs. It also introduces a new disable_warnings setting to squelch deprecation messages. Security Fixes CVE-2014-3248 (An attacker could convince an administrator to unknowingly execute malicious code on platforms with Ruby 1.9.1 and earlier) On platforms running Ruby 1.9.1 and earlier, previous code would load Ruby source files from the current working directory. This could lead to the execution of arbitrary code during puppet runs. CVE-2014-3250 (Information Leakage Vulnerability) Apache 2.4+ uses the SSLCARevocationCheck setting to determine how to check the certificate revocation list (CRL) when establishing a connection. Unfortunately, the default setting is none, so a puppet master running Apache 2.4+ and Passenger will ignore the CRL by default. This release updates the Apache vhost settings to enable CRL checking. Feature: Disabling Deprecation Warnings Puppet 3.6.0 deprecated config-file environments, leading to warnings during every puppet run for people who haven’t yet switched to the new and improved directory environments. The high volume of duplicate deprecation warnings was deemed annoying enough that we’ve added a new feature to allow people to disable them. You can now use the new (optional) disable_warnings setting in puppet.conf or on the command line to suppress certain types of warnings. For now, disable_warnings can only be set to deprecations, but other warning types may be added in future versions. All warnings are still enabled by default. Related issue: Fix for Directory Environments Under Webrick Puppet 3.6.1 introduced a bug that prevented directory environments from functioning correctly under Webrick, causing this error: “Attempted to pop, but already at root of the context stack.” This release fixes the bug. Related issue: - PUP-2659: Puppet stops working with error ‘Attempted to pop, but already at root of the context stack.’ Fixes to purge_ssh_keys Two bugs were discovered with the new (as of 3.6.0) purge_ssh_keys attribute for the user type. These bugs could prevent SSH keys from being purged under certain circumstances, and have been fixed. Related issues: - PUP-2635: user purge_ssh_keys not purged - PUP-2660: purging ssh_authorized_key fails because of missing user value Default environment_timeout increased The previous default value for environment_timeout was 5s, which turns out to be way too short for a typical production environment. This release changes the default environment_timeout to 3m. Related issue: General Bug Fixes - PUP-2689: A node can’t always collect its own exported resources - PUP-2692: Puppet master passenger processes keep growing - PUP-2705: Regression with external facts pluginsync not preserving executable bit Puppet 3.6.1 Released May 22, 2014. Puppet 3.6.1 is a bug fix release in the Puppet 3.6 series. It also makes the transaction_uuid more reliably available to extensions. Changes to RPM Behavior With Virtual Packages In Puppet 3.5, the RPM package provider gained support for virtual packages. (That is, Puppet would handle package names the same way Yum does.)
In this release, we added a new allow_virtual attribute for package, which defaults to false. You’ll have to set it to true to manage virtual packages. We did this because there are a few cases where a virtual package name can conflict with a non-virtual package name, and Puppet will manage the wrong thing. (Again, just like Yum would.) For example, if you set ensure => absent on the inetd package, Puppet might uninstall the xinetd package, since it provides the inetd virtual package. We had to treat that change as a regression, so we’re currently defaulting allow_virtual => false to preserve compatibility in the Puppet 3 series. The default will change to true for Puppet 4. If you manage any packages with virtual/non-virtual name conflicts, you should set allow_virtual => false on a per-resource basis. If you don’t have any resources with ambiguous virtual/non-virtual package names, you can enable the Puppet 4 behavior today by setting a resource default in the main manifest: Package { allow_virtual => true, } Improvements to transaction_uuid in Reports and Node Termini Each catalog request from an agent node has a unique identifier, which persists through the entire run and ends up in the report. However, it was being omitted from reports when the catalog run failed, and node termini had no access to it. This release adds it to failed reports and node object requests. (Note that transaction_uuid isn’t available in the standard ENC interface, but it is available to custom node termini.) - PUP-2522: The transaction_uuid should be available to a node terminus - PUP-2508: Failed compilation does not populate environment, transaction_uuid in report Windows Start Menu Fixes If your Windows machine only had .NET 4.0 or higher, the “Run Facter” and “Run Puppet Agent” start menu items wouldn’t work, stating that they needed an older version of .NET installed. This is now fixed. - PUP-1951: Unable to “Run Facter” or “Run Puppet Agent” from Start Menu on Windows 8/2012 - Requires .NET Framework 3.5 installed Improved Passenger Packages on Debian/Ubuntu The Apache vhost config we ship in the Debian/Ubuntu puppetmaster-passenger package had some non-optimal TLS settings. This has been improved. HTTP API Fixes A regression in Puppet 3.5 broke DELETE requests to Puppet’s HTTP API. Also, a change in 3.6.0 made puppet agent log spurious warnings when using multiple values for the source attribute. These bugs are both fixed. - PUP-2505: REST API regression in DELETE request handling - PUP-2584: Spurious warnings when using multiple file sources (regression in 3.6.0) Directory Environment Fixes If puppet master was running under Rack (e.g. with Passenger) and the environmentpath was configured in the [master] section of puppet.conf (instead of in [main]), Puppet would use the wrong set of environments. This has been fixed. - PUP-2607: environmentpath does not work in master section of config - PUP-2610: Rack masters lose track of environment loaders Future Parser Improvements This release fixes two compatibility bugs where the future parser conflicted with the 3.x parser. It also fixes a bug with the new EPP templating language. - PUP-1894: Cannot render EPP templates from a module - PUP-2568: Cannot use class references with upper cased strings - PUP-2581: Interpolated variables with leading underscore regression (regression in 3.5.1) Puppet 3.6.0 Released May 15, 2014. (RC1: May 6.) Puppet 3.6.0 is a backward-compatible features and fixes release in the Puppet 3 series. 
The biggest things in this release are: - Improvements to directory environments, and the deprecation of config file environments - Support for purging unmanaged ssh_authorized_keyresources - Support for installing gems for a custom provider as part of a Puppet run - A configurable global logging level - A configurable hashing algorithm (for FIPS compliance and other purposes) - Improvements to the experimental future parser Improvements for Directory Environments Directory environments were introduced in Puppet 3.5 as a partially finished (but good enough for most people) feature. With Puppet 3.6, we consider them completed. We’re pretty sure they can now handle every use case for environments we’ve ever heard of. The final piece is the environment.conf file. This optional file allows any environment to override the manifest, modulepath, and config_version settings, which is necessary for some people and wasn’t possible in Puppet 3.5. You can now exclude global module directories for some environments, or point all environments at a global main manifest file. For details, see the page on directory environments and the page on environment.conf. It’s also now possible to tune the cache timeout for environments, to improve performance on your puppet master. See the note on timeout tuning in the directory environments page. - PUP-1114: Deprecate environment configuration in puppet.conf - PUP-2213: The environmentpath setting is ignored by puppet faces unless set in [main] - PUP-2215: An existing directory environment will use config_version from an underlying legacy environment of the same name. - PUP-2290: ca_server and directory based environments don’t play nice together - PUP-1596: Make modulepath, manifest, and config_version configurable per-environment - PUP-1699: Cache environments - PUP-1433: Deprecate ‘implicit’ environment settings and update packaging Deprecation: Config-File Environments and the Global manifest/ modulepath/ config_version Settings Now that directory environments are completed, config-file environments are deprecated. Defining environment blocks in puppet.conf will cause a deprecation warning, as will any use of the modulepath, manifest, and config_version settings in puppet.conf. This also means that using no environments is deprecated. In a future version of Puppet (probably Puppet 4), directory environments will always be enabled, and the default production environment will take the place of the global manifest/ modulepath/ config_version settings. Related issues: - PUP-1114: Deprecate environment configuration in puppet.conf - PUP-1433: Deprecate ‘implicit’ environment settings and update packaging Feature: Purging Unmanaged SSH Authorized Keys Purging unmanaged ssh_authorized_key resources has been on the most-wanted features list for a very long time, and we haven’t been able to make the resources meta-type accommodate it. Fortunately, the user type accommodates it very nicely. You can now purge unmanaged SSH keys for a user by setting the purge_ssh_keys attribute: user { 'nick': ensure => present, purge_ssh_keys => true, } This will purge any keys in ~nick/.ssh/authorized_keys that aren’t being managed as Puppet resources. 
Related issues: - PUP-1174: PR (2247) Ability to purge .ssh/authorized_keys - PUP-1955: purge_ssh_keys causes stack trace when creating new users on redhat Feature: Installing Gems for a Custom Provider During Puppet Runs Previously, custom providers that required one or more gems would fail if at least one gem was missing before the current puppet run, even if they had been installed by the time the provider was actually called. This release fixes the behavior so that custom providers can rely on gems installed during the same puppet run. Related issue: Feature: Global log_level Setting You can now set the global log level using the log_level setting in puppet.conf. It defaults to notice, and can be set to debug, info, notice, warning, err, alert, emerg, or crit. Related issue: Feature: digest_algorithm Setting You can now change the hashing algorithm that puppet uses for file digests to sha256 using the new digest_algorithm setting in puppet.conf. This is especially important for FIPS-compliant hosts, which would previously crash when puppet tried to use MD5 for hashing. Changing this setting won’t affect the md5 or fqdn_rand functions. This setting must be set to the same value on all agents and all masters simultaneously; if they mismatch, you’ll run into two problems: - PUP-2427: Pluginsync will download every file every time if digest_algorithms do not agree — All files with a sourceattribute will download on every run, which wastes a lot of time and can swamp your puppet master. - PUP-2423: Filebucket server should warn, not fail, if checksum type is not supported — If you’re using a remote filebucket to back up file content, agent runs will fail. Related issue: Improvements to the Future Parser It’s still experimental, but the future parser has gotten a lot of attention in this release. For example, functions can now accept lambdas as arguments using the new Callable type. There are also a few changes laying the groundwork for the upcoming catalog builder. - PUP-1960: realizing an empty array of resources fails in future evaluator - PUP-1964: Using undefined variable as class parameter default fails in future evaluator - PUP-2190: Accessing resource metaparameters fails in future evaluator - PUP-2317: Future parser does not error on import statements - PUP-2302: New evaluator does not properly handle resource defaults - PUP-2026: Add a LambdaType to the type system - PUP-2027: Add support for Lambda in Function Call API - PUP-1956: Add function loader for new function API - PUP-2344: Functions unable to call functions in different modules - PUP-485: Add assert_type functions for type checks - PUP-1799: New Function API - PUP-2035: Implement Loader infrastructure API - PUP-2241: Add logging functions to static loader - PUP-485: Add assert_type functions for type checks - PUP-1799: New Function API - PUP-2035: Implement Loader infrastructure API - PUP-2241: Add logging functions to static loader OS Support Changes This release improves compatibility with Solaris 10 and adds support for Ubuntu 14.04 (Trusty Tahr). Support for Ubuntu 13.04 (Raring Ringtail) has been discontinued; it was EOL’d in January 2014. 
Related issues: - PUP-1749: Puppet module tool does not work on Solaris - PUP-2100: Allow Inheritance when setting Deny ACEs - PUP-1711: Add Ubuntu 14.04 packages - PUP-1712: Add Ubuntu 14.04 to acceptance - PUP-2347: Remove raring from build_defaults, it is EOL - PUP-2418: Remove Tar::Solaris from module_tool Module Tool Changes The puppet module tool has been updated to deprecate the Modulefile in favor of metadata.json. To help ease the transition, the module tool will automatically generate metadata.json based on a Modulefile if it finds one. If neither Modulefile nor metadata.json is available, it will kick off an interview and generate metadata.json based on your responses. The new module template has also been updated to include a basic README and spec tests. For more information, see Publishing Modules on the Puppet Forge. Related issues: - PUP-1976: puppet module buildshould use metadata.jsonas input format - PUP-1977: puppet module buildshould create metadata.jsoninstead of Modulefile - PUP-2045: puppet module generate should produce a skeleton Rakefile - PUP-2093: PMT should use the Forge’s /v3 API - PUP-2284: Add a user interview for creating a metadata.json file - PUP-2285: Update PMT generate’s README template Issues fixed during RC: - PUP-2484: puppet module buildshould provide deprecated functionality with warning until Puppet v4 — this would cause the Modulefile to be ignored if a metadata.json file also existed. - PUP-2561: PMT may deadlock when packing or unpacking large tarballs - PUP-2562: PMT will not install puppetlabs/openstack 4.0.0 Type and Provider Fixes Package: Several providers were updated to support the install_options attribute, and the yum provider now has special behavior to make --enablerepo and --disablerepo work well when you set them as install_options. - PUP-748: PR (2067): Zypper provider install options - darix - PUP-620: (PR 2429) Add install_options to gem provider - PUP-1769: PR (2414) yum provider to support install_options - PUP-772: PR (2082): Add install options to apt - PUP-1060: enablerepo and disablerepo for yum type Nagios: Cron: - PUP-1585: PR (2342) cron resources with target specified generate duplicate entries - PUP-1586: PR (2331) Cron Type sanity check for the command parameter is broken - PUP-1624: PR (2342) Cron handles crontab’s equality of target and user strangely Service: OpenBSD services can now be enabled and disabled, and we fixed some bugs on other platforms. - PUP-1751: PR (2383): Suse chkconfig –check boot.<service> always returns 1 whether the service is enabled/disabled. - m4ce - PUP-1932: systemd reports transient (in-memory) services - PUP-1938: Remove Ubuntu default from Debian service provider - PUP-1332: “puppet resource service” fails on Ubuntu 13.04 and higher - PUP-2143: Allow OpenBSD service provider to implement :enableable File: We fixed a regression from Puppet 3.0 that broke file resources whose source URL specified a server other than the default. (That is, puppet://myserver/modules/... instead of puppet:///modules/....) Yumrepo: We fixed a few lingering regressions from the big yumrepo cleanup of Puppet 3.5, and added support for the skip_if_unavailable parameter. 
- PUP-2218: yumrepo can no longer manage repositories in yum.conf - PUP-2291: yumrepo priority can not be sent to absent - PUP-2292: Insufficient tests on yumrepo’s => absent - PUP-2279: Add support for ‘skip_if_unavailable’ parameter to yumrepo Augeas: We added better control over the way Augeas resources display diffs, for better security and less noise. General Bug Fixes - PUP-530: Installer for Puppet 3 does not check for hiera - PUP-1547: PR (2311) Undefined method `groups’ for nil:NilClass - PUP-1552: V2.0 API reports Not Authorized as a “RUNTIME_ERROR” - PUP-1924: source function library before client sysconfig overrides - PUP-1954: use of ‘attr’ causes deprecation warning - PUP-1986: Permissions for libdir are set arbitrarily - PUP-2073: PR (2477) Multiple values for diff_args causes diff execution failure - PUP-2278: puppet module install fails when given path containing spaces - PUP-2101: resource parser: add the resource name on the validation error message when using create_resources - PUP-2282: Deprecation warnings issued with different messages from the same line are suppressed. - PUP-2306: Puppet::Util::Execution.execute no longer returns a String - PUP-2415: Puppet Agent Service - Rename /etc/sysconfig/puppetagent to /etc/sysconfig/puppet - PUP-2416: Puppet Service - Use no-daemonize and no forking (Master and Agent) - PUP-2417: Puppet Agent Should wait for Puppet Master to finish starting, if puppet master is installed - PUP-2395: Installation problem for puppetmaster-puppet 3.5.1 on Ubuntu 13.10 All Resolved Issues for 3.6.0 Our ticket tracker has the list of all issues resolved in Puppet 3.6.0.
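To tie together the configuration-level changes described above, here is a hedged puppet.conf sketch; the values are illustrative, not recommendations:

# puppet.conf — a sketch of the settings discussed in the 3.6 series (illustrative values)
[main]
  # use directory environments instead of the deprecated config-file environments
  environmentpath = $confdir/environments
  # new default environment cache timeout as of 3.6.2
  environment_timeout = 3m
  # suppress deprecation warnings; "deprecations" is currently the only accepted value
  disable_warnings = deprecations
  # global log level introduced in 3.6.0 (debug, info, notice, warning, err, alert, emerg, crit)
  log_level = notice
  # FIPS-friendly file digests; must be set identically on all agents and masters
  digest_algorithm = sha256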
https://docs.puppet.com/puppet/3.6/release_notes.html
2018-07-16T04:23:44
CC-MAIN-2018-30
1531676589179.32
[]
docs.puppet.com
What's new in 2.3 The major new features in this release are summarised below. If you're new to Juju, begin by going through our Getting started guide first. For details on these features, and other improvements not listed here, see the 2.3 release notes. Persistent storage Persistent storage enables operators to manage the life-cycle of storage independently of Juju machines. See Using Juju storage for a complete breakdown of how persistent storage, also called dynamic storage, fits in with legacy Juju storage. Cross model relations Cross model relations make centralised management of multiple models a reality by allowing applications in separate models to form relations between one another. This feature also works across multiple controllers. See Cross model relations for more information and examples. Fan networking support Fan networking leads to the reconfiguration of an IPv4 address space such that network connectivity among containers running on separate hosts becomes possible. Applied to Juju, this allows for the seamless interaction between deployed applications running within LXD containers on separate Juju machines. Read Juju and Fan networking for more on this exciting topic. Improvements to bundles You can now recycle existing machines instead of having new ones created. It is also possible to map specific machines to machines configured in the bundle. A bundle declaration can be placed on top of a base bundle to override elements of the latter. These are bonafide bundle files, called "overlay bundles", that can do anything a normal bundle can do. They can also remove applications from the base bundle. See Overlay bundles.
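As a hedged sketch of the overlay mechanism (the application names, charm identifiers, and option below are invented for illustration, and the removal syntax should be checked against the overlay bundles documentation):

# overlay.yaml - applied on top of a base bundle at deploy time
applications:
  haproxy:                      # add an application that the base bundle does not have
    charm: cs:haproxy
    num_units: 1
  mediawiki:                    # override options of an application from the base bundle
    options:
      name: Docs wiki
  memcached:                    # an empty (null) entry removes the application from the base bundle

# juju deploy ./base-bundle.yaml --overlay ./overlay.yaml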
https://docs.jujucharms.com/2.3/en/whats-new
2018-07-16T04:29:39
CC-MAIN-2018-30
1531676589179.32
[]
docs.jujucharms.com
Additional Resources Home Page for AWS SDK for .NET For more information about the AWS SDK for .NET, go to the home page for the SDK at AWS SDK for .NET. SDK Reference Documentation The SDK reference documentation includes the ability to browse and search across all code included with the SDK. It provides thorough documentation, usage examples, and even the ability to browse method source. For more information, see the AWS SDK for .NET API Reference. AWS Forums Visit the AWS forums to ask questions or provide feedback about AWS. AWS engineers monitor the forums and respond to questions, feedback, and issues. You can also subscribe to RSS feeds for any of the forums. AWS Toolkit for Visual Studio If you use the Microsoft Visual Studio IDE, you should check out the Toolkit for Visual Studio and the accompanying Toolkit for Visual Studio User Guide.
https://docs.aws.amazon.com/sdk-for-net/v2/developer-guide/net-dg-additional-resources.html
2018-07-16T04:45:04
CC-MAIN-2018-30
1531676589179.32
[]
docs.aws.amazon.com
Portlet Wizard on non-WTP projects
The JBoss Portlet wizards no longer require a WTP project to be able to create Portlets. Note that if the project does not have the proper portlet API JARs, the generated classes will have compile errors.
Simplified Wizards
There are now only two JBoss Portlet wizards, instead of three. The Seam and JSF portlet wizards have been merged into one.
http://docs.jboss.org/tools/whatsnew/portlet/portlet-news-1.0.0.Beta1.html
2018-07-16T04:33:48
CC-MAIN-2018-30
1531676589179.32
[]
docs.jboss.org
An Act to create 6.86 (7) of the statutes; Relating to: responding to a request for an absentee ballot. Amendment Histories 2015 Wisconsin Act 209 (PDF: ) 2015 Wisconsin Act 209: LC Act Memo Bill Text (PDF: ) LC Amendment Memo SB47 ROCP for Committee on Campaigns and Elections On 9/10/2015 (PDF: ) SB47 ROCP for Committee on Elections and Local Government On 4/17/2015 (PDF: ) LC Bill Hearing Materials Wisconsin Ethics Commission information 2015 Assembly Bill 58 - Rules
http://docs.legis.wisconsin.gov/2015/proposals/sb47
2018-07-16T04:30:50
CC-MAIN-2018-30
1531676589179.32
[]
docs.legis.wisconsin.gov
HeadBucket
This action is useful to determine if a bucket exists and you have permission to access it. The action returns a 200 OK if the bucket exists and you have permission to access it. If the bucket does not exist or you do not have permission to access it, the HEAD request returns a generic 404 Not Found or 403 Forbidden code. A message body is not included, so you cannot determine the exception beyond these error codes. To use this operation, you must have permission to perform the s3:ListBucket action on the bucket.
To use this API against an access point, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using the Amazon SDKs, you provide the ARN in place of the bucket name. For more information, see Using access points.
Request Syntax
HEAD / HTTP/1.1
Host: Bucket.s3.amazonaws.com
x-amz-expected-bucket-owner: ExpectedBucketOwner
URI Request Parameters
The request uses the following URI parameters.
- Bucket: The bucket name. When you use this action with S3 on Outposts through the Amazon SDKs, you provide the Outposts bucket ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see Using S3 on Outposts in the Amazon S3 User Guide.
Response Elements
If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body.
Examples
Sample Request
This example illustrates one usage of HeadBucket.
HEAD / HTTP/1.1
Date: Fri, 10 Feb 2012 21:34:55 GMT
Authorization: authorization string
Host: myawsbucket.s3.amazonaws.com
Connection: Keep-Alive
Sample Response
This example illustrates one usage of HeadBucket.
x-amz-bucket-region: us-west-2
x-amz-access-point-alias: false
Date: Fri, 10 Feb 2012 21:34:56 GMT
Server: AmazonS3
See Also
For more information about using this API in one of the language-specific Amazon SDKs, see the following:
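Since a successful HeadBucket returns no body, SDK callers typically look only at the status code. A minimal boto3 sketch (the bucket name is a placeholder):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_accessible(name):
    # head_bucket returns an empty 200 response on success; 403/404 surface as ClientError
    try:
        s3.head_bucket(Bucket=name)
        return True
    except ClientError as err:
        status = err.response["ResponseMetadata"]["HTTPStatusCode"]
        print("HeadBucket failed with HTTP", status)  # 404 = not found, 403 = no permission
        return False

print(bucket_accessible("myawsbucket"))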
https://docs.amazonaws.cn/en_us/AmazonS3/latest/API/API_HeadBucket.html
2021-09-16T15:10:43
CC-MAIN-2021-39
1631780053657.29
[]
docs.amazonaws.cn
Search & Build-a-List - Contacts Also known as Locating Contacts Returns search results of people based on keyword. Update History - August 27, 2021 (v6.4 SOAP & REST): Enabling the mapping for contact FullName search element. -. When performing a search using longitude+latitude, a radius measurement and unit code (3353 for miles or 3349 for kilometers) must be supplied. Overview There are several options available when implementing this feature. 1. Keyword Search (Basic/Advanced): A contact keyword is required in addition to setting the search mode to 'Basic'. The contact keyword (KeywordContactText) will be compared against the contact name, title and biography fields. An option exists (in Advanced search) to limit the search comparison to specific fields. Keyword search criteria are NOT case sensitive (i.e. "SAMPLE", "Sample" and "sample" will return the same results). Multiple keywords may be specified in either keyword criteria field, separated by spaces. This feature supports a wild card pattern in the keyword search criteria, which allows an asterisk to be appended to, embedded in, or placed at the start of the keyword parameter. The organization keyword and/or D&B D-U-N-S Number range may be provided as options. The organization keyword (KeywordText) will be compared against any company names (past, present, synonyms and tradestyles) and stock ticker symbols. 2. Inclusion Rules (Basic/Advanced): By default, organizations with certain operating conditions will be excluded from the results. An option is available to include the following: undeliverable address records, out of business records, incomplete data records, only non-marketable* businesses, only contacts with direct email details, only contacts with direct telephone details or only contacts with either direct email or telephone details. *In general, a record that meets the following criteria is considered marketable: it has been updated within the last 24 months; it contains a complete business name, valid physical or mailing address, and valid Standard Industry Classification Code; and it is believed to be in business. NOTE: This rule does not exclude/include de-listed companies (i.e., a company that has requested to be excluded from direct marketing use cases). 3. Sorting Results (Basic/Advanced): By default, the results from this service will be descending by order of relevancy (i.e. best matched results). Optionally, both the sort direction and a primary sort field can be designated on the request. 4. Pagination (Basic/Advanced): (Advanced only): (Advanced only): Filter for companies by location type, D&B Prescreen Score, ownership type, stock ticker and exchange, D-U-N-S Number, legal status, year of founding, subsidiary status, diversity status (minority owned, women owned and ethnicity classification), occupancy status (owns/rents), franchise status and foreign trade. 7. Industry (Advanced only): Filter for companies by Hoover's Industries, SIC Codes, and NAIC Codes, either by a specific industry code or range. An option exists to search only on an organization's primary industry code. 8. Company Size (Advanced only): Filter for companies by sales, market cap, employees and facility size. 9. Financial Information (Advanced only): filter for companies by net income, assets, fiscal year end, auditors, and rankings/indices. 10. IPO Data (Advanced only): Filter for companies by IPO date, offer amount, price range and underwriters. 11. 
People (Advanced only): Filter for people by job function, salary, bonus, total compensation and age. A separate email only version (search mode set to 'EmailLookup') of this function is available to search for people using an email address. No other criteria may be used in conjunction with the email address. The sorting and pagination options, however, do apply. Minimum Requirements The details in this section override the Optional/Required values listed in the Request Specification table. - The field SearchModeDescription must be set to a value of Basic, Advanced or EmailLookup for this feature to work properly (for this use case). When performing a search using longitude+latitude, a radius measurement and unit code (3353 for miles or 3349 for kilometers) must be supplied. Data Layer Entitlement For customers in U.S. and Canadian markets, the API is provisioned for specific collections of products, reports, and/or features (collectively referred to as data layers) for production and trial usage. Entitlement is not required for testing in the sandbox environment. - This BASIC feature is entitled as "Search/Lookup for People" for D&B Direct 2.0 customers. - This ADVANCED feature is entitled as "Detailed Build-a-List - Contacts">EntityNew</ApplicationTransactionID> </TransactionDetail> <FindContactRequestDetail> <InquiryDetail> <KeywordText>Gorman</KeywordText> </InquiryDetail> <FindSpecification> <SortDirectionText>Ascending</SortDirectionText> <CandidatePerPageMaximumQuantity>60</CandidatePerPageMaximumQuantity> <CandidateDisplayStartSequenceNumber>1</CandidateDisplayStartSequenceNumber> <SearchModeDescription>Basic<> <InquiryReferenceDetail> <CustomerReferenceText>Test</CustomerReferenceText> <CustomerBillingEndorsementText>Test</CustomerBillingEndorsementText> </InquiryReferenceDetail> </FindContactRequestDetail> </ent:FindContactRequest> </soapenv:Body> </soapenv:Envelope> "> <!--Optional:--> <TransactionDetail> <!--Optional:--> <ApplicationTransactionID>EntityNew</ApplicationTransactionID> </TransactionDetail> <FindContactRequestDetail> <InquiryDetail> <DUNSNumber>214567885</DUNSNumber> </InquiryDetail> <!--Optional:--> <FindSpecification> <SortDirectionText>Ascending</SortDirectionText> <!--Optional:--> <!--CandidateMaximumQuantity>100</CandidateMaximumQuantity--> <!--Optional:--> <CandidatePerPageMaximumQuantity>60</CandidatePerPageMaximumQuantity> <!--Optional:--> <CandidateDisplayStartSequenceNumber>1</CandidateDisplayStartSequenceNumber> <SearchModeDescription>Advanced<> <!--Optional:--> <InquiryReferenceDetail> <!--0 to 5 repetitions:--> <CustomerReferenceText>Test</CustomerReferenceText> <!--Optional:--> <CustomerBillingEndorsementText>Test< results from this search will return principal identification numbers (Principal ID) that may be passed to the People feature to obtain additional information about individuals. NOTE: The D-U-N-S Number returned in the response will be a nine-digit zero-padded, numeric value. 
<soapenv:Envelope xmlns: <soapenv:Body xmlns: <ent:FindContactResponse <TransactionDetail> <ApplicationTransactionID>Id-6cbf9ca65272a29702fbea0d-1</ApplicationTransactionID> <ServiceTransactionID>Id-6cbf9ca65272a29702fbea0d-1</ServiceTransactionID> <TransactionTimestamp>2013-10-31T14:34:05<>101130540</DUNSNumber> <ContactID>101130540-53482590</ContactID> <OrganizationPrimaryName> <OrganizationName>WOLTERS KLUWER UNITED STATES INC.</OrganizationName> </OrganizationPrimaryName> <ConsolidatedEmployeeDetails> <TotalEmployeeQuantity>7061</TotalEmployeeQuantity> </ConsolidatedEmployeeDetails> <ContactName> <FirstName>Sue</FirstName> <LastName>Gorman</LastName> <FullName>Sue Gorman</FullName> </ContactName> <PrincipalIdentificationNumberDetail DNBCodeValue="24215" TypeText="Professional Contact Identifier"> <PrincipalIdentificationNumber>53482590</PrincipalIdentificationNumber> </PrincipalIdentificationNumberDetail> <JobTitle> <JobTitleText>Ea To Mike Sabbatis</JobTitleText> </JobTitle> <ContactDataSourceDetail> <NameInformationSourceName>MRT</NameInformationSourceName> <EmailInformationSourceName>MRT</EmailInformationSourceName> </ContactDataSourceDetail> <DirectTelephoneInformationAvailableIndicator>false</DirectTelephoneInformationAvailableIndicator> <DirectEmailInformationAvailableIndicator>true</DirectEmailInformationAvailableIndicator> <ManufacturingIndicator>true</ManufacturingIndicator> <DisplaySequence>1</DisplaySequence> </FindCandidate> </FindContactResponseDetail> </ent:FindContactResponse> </soapenv:Body> </soapenv:Envelope> <soapenv:Envelope xmlns: <soapenv:Body> <ns2:FindContactResponse <TransactionDetail> <ApplicationTransactionID>EntityNew</ApplicationTransactionID> <ServiceTransactionID>Id-aa95cc59e9d80300daa00e00d4feb41a-1</ServiceTransactionID> <TransactionTimestamp>2017-09-28T06:24:42</TransactionTimestamp> </TransactionDetail> <TransactionResult> <SeverityText>Information</SeverityText> <ResultID>CM000</ResultID> <ResultText>Success</ResultText> </TransactionResult> <FindContactResponseDetail> <CandidateMatchedQuantity>2</CandidateMatchedQuantity> <CandidateReturnedQuantity>2</CandidateReturnedQuantity> <FindCandidate> <DUNSNumber>214567885</DUNSNumber> <ContactID>214567885-2145678852</ContactID> <OrganizationPrimaryName> <OrganizationName>D & B SAMPLE CO LTD</OrganizationName> </OrganizationPrimaryName> <ConsolidatedEmployeeDetails> <TotalEmployeeQuantity>1210</TotalEmployeeQuantity> </ConsolidatedEmployeeDetails> <MarketabilityIndicator>false</MarketabilityIndicator> <ContactName> <FirstName>Julie</FirstName> <LastName>Whittakers</LastName> <FullName>Julie anmols Whittakers</FullName> </ContactName> <PrincipalIdentificationNumberDetail DNBCodeValue="24215" TypeText="Professional Contact Identifier"> <PrincipalIdentificationNumber>2145678852</PrincipalIdentificationNumber> </PrincipalIdentificationNumberDetail> <JobTitle> <JobTitleText>Director & Company Secretary</JobTitleText> </JobTitle> <ManagementResponsibilityCodeText>Director</ManagementResponsibilityCodeText> <ManagementResponsibilityCodeText>Secretary</ManagementResponsibilityCodeText> <JobFunction>Director, Non-board</JobFunction> <JobFunction>Secretary</JobFunction> <JobRanking>45</JobRanking> <DirectTelephoneInformationAvailableIndicator>false</DirectTelephoneInformationAvailableIndicator> <DirectEmailInformationAvailableIndicator>false</DirectEmailInformationAvailableIndicator> <DisplaySequence>1</DisplaySequence> </FindCandidate> <FindCandidate> <DUNSNumber>214567885</DUNSNumber> 
<ContactID>214567885-2145678851</ContactID> <OrganizationPrimaryName> <OrganizationName>D & B SAMPLE CO LTD</OrganizationName> </OrganizationPrimaryName> <ConsolidatedEmployeeDetails> <TotalEmployeeQuantity>1210</TotalEmployeeQuantity> </ConsolidatedEmployeeDetails> <MarketabilityIndicator>false</MarketabilityIndicator> <NonMarketableReasonText DNBCodeValue="11028">De-listed</NonMarketableReasonText> <ContactName> <FirstName>Laurensjjjjjjjjjjj</FirstName> <LastName>Franklyn</LastName> <FullName>Laurensjjjjjjjjjjj Franklyn</FullName> </ContactName> <PrincipalIdentificationNumberDetail DNBCodeValue="24215" TypeText="Professional Contact Identifier"> <PrincipalIdentificationNumber>2145678851</PrincipalIdentificationNumber> </PrincipalIdentificationNumberDetail> <JobTitle> <JobTitleText>Chairman</JobTitleText> </JobTitle> <ManagementResponsibilityCodeText>Chairman</ManagementResponsibilityCodeText> <JobFunction>Chairman</JobFunction> <JobRanking>3</JobRanking> <DirectTelephoneInformationAvailableIndicator>false</DirectTelephoneInformationAvailableIndicator> <DirectEmailInformationAvailableIndicator>false</DirectEmailInformationAvailableIndicator> <DisplaySequence>2</DisplaySequence> </FindCandidate> </FindContactResponseDetail> </ns2:FindContactResponse> </soapenv:Body> </soapenv:Envelope> NOTE: There may be additional request and/or response elements specified in the WSDL that are not applicable for D&B Direct customers. Data elements that are not listed on this page are currently unused by this operation. For the BASIC search option, the relevancy sort utilizes weighted values based on match rate in the name, company name, title and biographical text fields. For the ADVANCED search option, the relevancy option utilizes a proprietary algorithm for returning the best-matched results, taking into consideration input parameters sent in the request, along with additional weighted factors. What to do Next - Retrieve additional Contact details (using Principal ID)
https://docs.dnb.com/direct/2.0/en-US/entitylist/latest/findcontact/soap-API
2021-09-16T15:06:50
CC-MAIN-2021-39
1631780053657.29
[]
docs.dnb.com
Date: Wed, 21 Jan 2015 20:18:42 +0100 From: Chris Ernst <[email protected]> To: [email protected] Subject: Re: A way to load PF rules at startup using OpenVPN Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]>
@gateway:~ # more /etc/pf.conf
intIf = "vr3"
extIf = "vr0"
vpnIf = "tun0"
[...]
[...]
###
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=545577+0+archive/2015/freebsd-questions/20150125.freebsd-questions
2021-09-16T16:40:16
CC-MAIN-2021-39
1631780053657.29
[]
docs.freebsd.org
Test mail flow by validating your Office 365 connectors
Mail flow issues can also happen when your MX record is not set up correctly. To verify your MX record, see Find and fix issues after adding your domain or DNS records in Office 365.
Note: These tests replace the Office 365 mail flow troubleshooting that was previously available in the Remote Connectivity Analyzer.
See also
Configure mail flow using connectors in Office 365
Set up connectors to route mail between Office 365 and your own email servers
Fixing connector validation errors
When do I need a connector?
https://docs.microsoft.com/fr-fr/exchange/mail-flow-best-practices/test-mail-flow
2018-07-15T23:25:16
CC-MAIN-2018-30
1531676589022.38
[]
docs.microsoft.com
Using the Force Sells App
Overview
The Force Sells app allows you to link products so they are added to the cart together. This is useful for linking a service or required product to another. For example, if you are selling phone screen repair as a service, you can link a new screen as a forced sell product. There are two types of force sells:
- Normal force sell - products will be added to the cart along with the main product, in the same quantity as selected for the main product. Added force sell products can be removed, and their quantity can be changed in the cart by the customer.
- Synced force sell - products work in the very same way as normal force sells. The only difference is that customers can't remove a synced force sell from the cart or change its quantity. If the main product is removed, the synced force sell products will be removed too. The same goes for quantity: if the quantity of the main product is changed, the quantity of all synced force sell products will change too.
Choose between Force Sells & Synced Force Sells depending on your needs, and enter the product names you want to force. Be sure to save your changes.
https://docs.thatwebsiteguy.net/using-the-force-sells-app/
2018-07-15T23:16:03
CC-MAIN-2018-30
1531676589022.38
[]
docs.thatwebsiteguy.net
VMware vRealize Automation supports several workflows for changing virtual machine ownership and lease. This includes changing the lease and ownership by changing the entitlement on a blueprint, changing lease and ownership on a provisioned virtual machine, and extending the lease of a provisioned virtual machine. About this task. Prerequisites Prepare for the deployment. See Preparing for Scenario Implementation
https://docs.vmware.com/en/VMware-Validated-Design/4.1/com.vmware.vvd.it.automation-usecases.doc/GUID-369BB636-7FFD-4180-869A-0130FB9F45F8.html
2018-07-15T23:37:11
CC-MAIN-2018-30
1531676589022.38
[]
docs.vmware.com
Methods Methods vs. Functions - a FUNCTION is a named chunk of code with PARAMETERS and a RETURN VALUE - a METHOD is a function that is attached to a specific object - it has privileged access to that object's data - in Ruby everything's an object, so the terms are mostly interchangeable
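A minimal Ruby sketch of the distinction (the names are illustrative):

# a function-style method defined at the top level
def shout(text)
  text.upcase + "!"
end

class Dog
  def initialize(name)
    @name = name          # instance data
  end

  def speak               # a method attached to Dog objects...
    "#{@name} says woof"  # ...with privileged access to @name
  end
end

shout("hello")         # => "HELLO!"
Dog.new("Rex").speak   # => "Rex says woof"

(Even the top-level shout ends up as a method on Object, which is the sense in which the two terms blur in Ruby.)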
http://docs.railsbridge.org/learn-to-code/methods
2018-07-15T22:57:04
CC-MAIN-2018-30
1531676589022.38
[]
docs.railsbridge.org
Learning Ruby
TryRuby - a browser-based interactive tutorial in Ruby
"Learning to Program" by Chris Pine - a beginner's programming book with lots of Ruby exercises (earlier version online)
Why's Poignant Guide
Hackety Hack - a fun way for beginners to learn Ruby
Ruby Koans - a self-guided journey through topics in Ruby for beginners and experts alike
Test-First Teaching - click on 'Learn Ruby'
Ruby Warrior - write and refine some Ruby code to get your warrior to the top of a hazardous tower
Ruby Quiz - a guided tour through the world of possibility; use your Ruby to build simple apps, games, and solve problems
Why's Poignant Guide to Ruby - a whimsical comic book that teaches you Ruby. Legendary in the community.
Learn Ruby the Hard Way - it's not actually hard. A great place to start if you're new to programming and want to learn with hands-on examples.
PDX tech workshop - these are the slides from the Ruby/Rails workshop organized in Portland, OR by PDXtech
Rails Guides - the official how-to articles for Rails
Rails API - online documentation
Rails for Zombies - a series of videos and browser-based Rails exercises
Rails Tutorial - a tutorial that leads you through writing a Rails messaging app
Watch screen casts: RailsCasts (also available as blog posts)
Classes & events in San Francisco: RailsBridge Workshops organizing team, Women Who Code meetup (monthly hack nights & speakers)
Online: RailsBridge, DevChix - blog and mailing list for women developers, Stack Overflow - for answers to programming questions
Get experience: Just do it. Write and publish your own Rails app. Volunteer at the next workshop. Volunteer on a RailsBridge Builders project. Come to a hack session.
Meetups and User Groups outside of San Francisco: Boulder Ruby (monthly events), DeRailed - Denver Rails UG, Mountain.rb (Boulder, Colorado), Chicago Ruby (beginners welcome!)
http://docs.railsbridge.org/workshop/resources.deck
2018-07-15T22:45:15
CC-MAIN-2018-30
1531676589022.38
[]
docs.railsbridge.org
You can add a VMkernel network adapter (NIC) on a VMware vSphere® standard switch to provide network connectivity for hosts. The VMkernel NIC also handles the system traffic for VMware vSphere® vMotion®, IP storage, Fault Tolerance, logging, vSAN, and so on. Procedure - Right-click Networking in the VMware Host Client inventory and click Add VMkernel NICs. - On the Add new VMkernel interface page, configure the settings for the VMkernel adapter. - (Optional) Expand the IPv4 settings section to select an option for obtaining IP addresses. - (Optional) Expand the IPv6 settings section to select an option for obtaining IPv6 addresses. - (Optional) Enable services such as Provisioning traffic for the VMkernel adapter. Provisioning traffic handles operations such as virtual machine cold migration, cloning, and snapshot migration. - (Optional) Enable vMotion for the default TCP/IP stack on the host. vMotion enables the VMkernel adapter to advertise itself to another host as the network connection where vMotion traffic is sent. You cannot use vMotion to perform migrations to selected hosts if the vMotion service is not enabled for any VMkernel adapter on the default TCP/IP stack, or if no adapters use the vMotion TCP/IP stack. - Review your setting selections and click Create.
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.html.hostclient.doc/GUID-FCCF35F8-2CAB-4F25-BA5F-7DF24A67FDD5.html
2018-07-15T23:31:50
CC-MAIN-2018-30
1531676589022.38
[]
docs.vmware.com
Scylla in a Shared Environment¶ Scylla is designed to utilize all of the resources on the machine it runs on: disk and network bandwidth, RAM, and CPU. This allows you to achieve maximum performance with a minimal node count. In development and test, however, your nodes might be using a shared machine, which Scylla cannot dominate. This article explains how to configure Scylla for shared environments. For some production environments these settings may be preferred as well. Note that a Docker image is a viable and even simpler option - see Scylla on dockerhub. Memory¶ The most important resource that Scylla consumes is memory. By default, when Scylla starts up it inspects the node’s hardware configuration and claims all memory to itself, leaving some reserve for the operating system (OS). This is in contrast to most open-source databases that leave most memory for the OS, but is similar to most commercial databases. In a shared environment, particularly on a desktop or laptop, gobbling up all the machine’s memory can reduce the user experience, so Scylla allows reducing its memory usage to a given quantity. On Ubuntu, edit /etc/default/scylla-server, and add --memory 2G to restrict Scylla to 2 gigabytes of RAM. On Red Hat / CentOS, edit /etc/sysconfig/scylla-server, and add --memory 2G. If starting Scylla from the command line, simply append --memory 2G to your command line. CPU¶ By default, Scylla will utilize all of your processors (in some configurations, particularly on Amazon AWS, it may leave a core for the operating system). In addition, Scylla will pin its threads to specific cores in order to maximize the utilization of the processor on-chip caches. On a dedicated node, this allows maximum throughput, but on a desktop or laptop, it can cause a sluggish user interface. Scylla offers two options to restrict its CPU utilization: --smp N restricts Scylla to N logical cores, and --overprovisioned tells Scylla that the machine is shared with other processes, so it will not pin its threads or memory, and will reduce the amount of polling it does to a minimum. On Ubuntu, edit /etc/default/scylla-server, and add --smp 2 --overprovisioned to restrict Scylla to 2 logical cores. On Red Hat / CentOS, edit /etc/sysconfig/scylla-server, and add --smp 2 --overprovisioned to restrict Scylla to 2 logical cores. If starting Scylla from the command line, simply append --smp 2 --overprovisioned to your command line. Other Restrictions¶ When starting up, Scylla will check the hardware and operating system configuration to verify that it is compatible with Scylla’s performance requirements. See developer mode for more instructions. Summary¶ Scylla comes out of the box ready for production use with maximum performance, but may need to be tuned for development or test uses. This tuning is simple to apply and results in a Scylla server that can coexist with other processes or a GUI on the system.
http://docs.scylladb.com/getting-started/scylla_in_a_shared_environment/
2018-07-15T23:18:02
CC-MAIN-2018-30
1531676589022.38
[]
docs.scylladb.com
You can optimize Windows for either foreground programs or background services by setting performance options. By default, Horizon 7 disables certain performance options for RDS hosts for all supported versions of Windows Server. The following table shows the performance options that are disabled by Horizon 7. The five performance options that are disabled by Horizon 7 correspond to four Horizon 7 settings in the registry. The following table shows the Horizon 7 settings and their default registry values. The registry values are all located in the registry subkey HKEY_LOCAL_MACHINE\Software\VMware, Inc.\VMware VDM\Agent\Configuration. You can re-enable the performance options by setting one or more of the Horizon 7 registry values to false.
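If you prefer to script the change rather than edit the registry by hand, the sketch below shows the general idea in Python. The value name used here ("HypotheticalPerformanceSetting") and the assumption that the values are stored as REG_SZ "true"/"false" strings are illustrative placeholders, since the actual Horizon 7 value names come from the table mentioned above; substitute a real name before using it.

```python
# Sketch: set one Horizon 7 registry value to "false" to re-enable the
# corresponding Windows performance option. Run on the RDS host with admin rights.
import winreg

SUBKEY = r"Software\VMware, Inc.\VMware VDM\Agent\Configuration"
VALUE_NAME = "HypotheticalPerformanceSetting"  # placeholder, not a real Horizon value name

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SUBKEY, 0, winreg.KEY_SET_VALUE) as key:
    # Assumes the setting is stored as a REG_SZ "true"/"false" string.
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_SZ, "false")
```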
https://docs.vmware.com/en/VMware-Horizon-7/7.2/com.vmware.horizon.published.desktops.applications.doc/GUID-E29E6F9F-F63F-4E3B-9A4D-58D5C1B67B04.html
2018-07-15T23:38:16
CC-MAIN-2018-30
1531676589022.38
[]
docs.vmware.com
If IIS 7 or IIS 8 is installed on the remote server, you can use the IIS Manager to generate self-signed SSL certificates. Procedure - Open the IIS Manager. - In the Connections pane, select the top-most machine node. - Click Server Certificates in the Details pane. - Click Create Self-Signed Certificate in the Actions pane. - Enter HOSTNAME as the certificate friendly name. - Select Personal as the certificate store.
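If IIS Manager is not available, or you want an equivalent test certificate generated outside IIS, a self-signed certificate can also be created programmatically. The sketch below uses Python's cryptography package; it is an alternative to the IIS Manager procedure above, not part of it, and HOSTNAME and the output file name are placeholders.

```python
# Sketch: generate a self-signed certificate comparable to the one IIS creates.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "HOSTNAME")])

cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)                       # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

with open("hostname.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```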
https://docs.vmware.com/en/VMware-vRealize-Operations-for-Published-Applications/6.4/com.vmware.v4pa.admin_install/GUID-A91F3219-170F-4B22-8393-FC1917F17C31.html
2018-07-15T23:41:46
CC-MAIN-2018-30
1531676589022.38
[]
docs.vmware.com
Special syntax in user help files¶ - As help files are just Django fixture files, all of the caveats there apply, with a few small conventions applied on top. - For consistency and readability, it’s encouraged to keep one help fixture per file. - The body of the help file can be HTML, and this will be displayed to the user unchanged. It is up to documenters to ensure that help files are valid HTML that won’t break layout. A few additional conventions apply. Below is an example help file: - model: aristotle_mdr_help.helppage fields: title: Advanced Search language: en body: > <h2>Restricting search with the advanced search options</h2> <p> The <a href="/search/">search page</a> provides a form that gives users control to filter and sort search results.</p> <p> <img src="{static}/aristotle_mdr/search_filters.png"> When searching, the "indexed text" refers to everything crawled by the search engine.
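Because these help pages are ordinary Django fixtures, they can be loaded with Django's standard fixture machinery. Below is a minimal sketch of loading one programmatically; the fixture path is a hypothetical example, and loading a YAML fixture assumes PyYAML is installed.

```python
# Minimal sketch: load a help fixture the same way "manage.py loaddata" would.
# The path below is illustrative, not a file shipped with the project.
import django
from django.core.management import call_command

django.setup()  # assumes DJANGO_SETTINGS_MODULE is already configured

call_command("loaddata", "aristotle_mdr_help/advanced_search.yaml", verbosity=1)
```

The command-line form, python manage.py loaddata <fixture>, does the same thing if you prefer not to call it from code.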
http://aristotle-metadata-registry.readthedocs.io/en/latest/user_help/help_syntax.html
2018-07-15T22:40:26
CC-MAIN-2018-30
1531676589022.38
[]
aristotle-metadata-registry.readthedocs.io
Proposal for extended Vi tags file format¶ Note The contents of the next section are a copy of the FORMAT file in the Exuberant Ctags source code in its subversion repository at sourceforge.net. We have made some modifications: - Exceptions introduced in Universal-ctags are explained with an “EXCEPTION” marker. - The Exceptions in Universal-ctags subsection summarizes the exceptions. Table of contents. - {tagaddress} - the Ex command that positions the cursor on the tag. It may be restricted to a line number or a search pattern (Posix). Optionally: - ;” - semicolon + doublequote: Ends the tagaddress in a way that looks like the start of a comment to Vi. - {tagfield} - See below. A tagfield has a name, a colon, and a value: “name:value”. The name consists only of alphabetical characters. Upper and lower case are allowed. Lower case is recommended. Case matters (“kind:” and “Kind:” are different tagfields). The value may be empty. It cannot contain a <Tab>. - When a value contains a “\t”, this stands for a <Tab>. - When a value contains a “\r”, this stands for a <CR>. - When a value contains a “\n”, this stands for a <NL>. - When a value contains a “\\”, this stands for a single ‘\’ character. Other use of the backslash character is reserved for future expansion. Warning: When a tagfield value holds an MS-DOS file name, the backslashes must be doubled! EXCEPTION: in Universal-ctags, 0x7F, a leading space (0x20), and ‘!’ (0x21) are converted to \x-prefixed hexadecimal numbers if the characters are not handled by the above “value” rules. Proposed tagfield names: Note that these are mostly for C and C++. When tags programs are written for other languages, this list should be extended to include the used field names. This will help users to be independent of the tags program used. Examples: asdf sub.cc /^asdf()$/;" new_field:some\svalue file: foo_t sub.h /^typedef foo_t$/;" kind:t func3 sub.p /^func3()$/;" function:/func1/func2 file: getflag sub.c /^getflag(arg)$/;" kind:f file: inc sub.cc /^inc()$/;" file: class:PipeBuf The name of the “kind:” field can be omitted. This is to reduce the size of the tags file by about 15%. A program reading the tags file can recognize the “kind:” field by the missing ‘:’. Examples: foo_t sub.h /^typedef foo_t$/;" t getflag sub.c /^getflag(arg)$/;" f file: Additional remarks: - When a tagfield appears twice in a tag line, only the last one is used. Note about line separators: Vi traditionally runs on Unix systems, where the line separator is a single linefeed character <NL>. On MS-DOS and compatible systems <CR><NL> is the standard line separator. To increase portability, this line separator is also supported. On the Macintosh a single <CR> is used as the line separator. Supporting this on Unix systems causes problems, because most fgets() implementations don’t see the <CR> as a line separator. Therefore the support for a <CR> as line separator is limited to the Macintosh. Summary: The characters <CR> and <LF> cannot be used inside a tag line. This is not mentioned elsewhere (because it’s obvious). Note about white space: Vi allowed any white space to separate the tagname from the tagfile, and the filename from the tagaddress. This would need to be allowed for backwards compatibility. However, all known programs that generate tags use a single <Tab> to separate fields. There is a problem for using file names with embedded white space in the tagfile field. To work around this, the same special characters could be used as in the new fields, for example “\s”. But, unfortunately, in MS-DOS the backslash character is used to separate file names. 
The file name “c:\vim\sap” contains “\s”, but this is not a <Space>. The number of backslashes could be doubled, but that will add a lot of characters, and make parsing the tags file slower and clumsy. To avoid these problems, we will only allow a <Tab> to separate fields, and not support a file name or tagname that contains a <Tab> character. This means that we are not 100% Vi compatible. However, there is no known tags program that uses something else than a <Tab> to separate the fields. Only when a user typed the tags file himself, or made his own program to generate a tags file, we could run into problems. To solve this, the tags file should be filtered, to replace the arbitrary white space with a single <Tab>. This Vi command can be used: :%s/^\([^ ^I]*\)[ ^I]*\([^ ^I]*\)[ ^I]*/\1^I\2^I/ (replace ^I with a real <Tab>). TAG FILE INFORMATION: Psuedo-tag lines can be used to encode information into the tag file regarding details about its content (e.g. have the tags been sorted?, are the optional tagfields present?), and regarding the program used to generate the tag file. This information can be used both to optimize use of the tag file (e.g. enable/disable binary searching) and provide general information (what version of the generator was used). The names of the tags used in these lines may be suitably chosen to ensure that when sorted, they will always be located near the first lines of the tag file. The use of “!_TAG_” is recommended. Note that a rare tag like “!” can sort to before these lines. The program reading the tags file should be smart enough to skip over these tags. The lines described below have been chosen to convey a select set of information. Tag lines providing information about the content of the tag file: !_TAG_FILE_FORMAT {version-number} /optional comment/ !_TAG_FILE_SORTED {0|1} /0=unsorted, 1=sorted/ The {version-number} used in the tag file format line reserves the value of “1” for tag files complying with the original UNIX vi/ctags format, and reserves the value “2” for tag files complying with this proposal. This value may be used to determine if the extended features described in this proposal are present. Tag lines providing information about the program used to generate the tag file, and provided solely for documentation purposes: !_TAG_PROGRAM_AUTHOR {author-name} /{email-address}/ !_TAG_PROGRAM_NAME {program-name} /optional comment/ !_TAG_PROGRAM_URL {URL} /optional comment/ !_TAG_PROGRAM_VERSION {version-id} /optional comment/.
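To make the field rules above concrete, here is a small parsing sketch. It assumes fields are separated by a single <Tab> (as the proposal recommends) and implements only the escape rules quoted above; it is an illustration of the format, not part of the ctags project.

```python
# Illustrative parser for one extended tag line, for example:
#   getflag<Tab>sub.c<Tab>/^getflag(arg)$/;"<Tab>kind:f<Tab>file:
def unescape(value: str) -> str:
    # Applies the \t, \r, \n and \\ rules described above.
    out, i = [], 0
    mapping = {"t": "\t", "r": "\r", "n": "\n", "\\": "\\"}
    while i < len(value):
        if value[i] == "\\" and i + 1 < len(value):
            out.append(mapping.get(value[i + 1], value[i + 1]))
            i += 2
        else:
            out.append(value[i])
            i += 1
    return "".join(out)

def parse_tag_line(line: str):
    tagname, tagfile, rest = line.rstrip("\n").split("\t", 2)
    # ;" ends the tagaddress in a way that looks like a comment to Vi.
    address, _, raw_fields = rest.partition(';"')
    fields = {}
    for item in raw_fields.split("\t"):
        if not item:
            continue
        if ":" in item:
            name, _, value = item.partition(":")
        else:
            name, value = "kind", item  # a bare value means the "kind:" name was omitted
        fields[name] = unescape(value)
    return tagname, tagfile, address, fields

print(parse_tag_line('getflag\tsub.c\t/^getflag(arg)$/;"\tkind:f\tfile:'))
# ('getflag', 'sub.c', '/^getflag(arg)$/', {'kind': 'f', 'file': ''})
```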
http://docs.ctags.io/en/latest/format.html
2018-07-15T22:44:49
CC-MAIN-2018-30
1531676589022.38
[]
docs.ctags.io
When the appropriate criteria have been met, the Activate Order button will appear and you can proceed with the following: It is possible to activate a purchase order without loading items. Once the purchase order has been activated without loading items, it is not possible to load the items. This feature should only be used in situations where the copies have already been added to the catalogue, such as: To use this feature, click the Activate Without Loading Items button.
http://docs.evergreen-ils.org/reorg/3.0/acquisitions/_activating_your_purchase_order.html
2018-07-15T23:22:46
CC-MAIN-2018-30
1531676589022.38
[]
docs.evergreen-ils.org
Non-compliant device behavior When a device falls below the minimum compliance requirements, the Non-compliant device behavior policy allows you to select the action to take: - Allow app: Allow the app to run normally. - Allow app after warning: Warn the user that an app does not meet the minimum compliance requirements and allow the app to run. This setting is the default value. - Block app: Block the app from running. The following criteria determine whether a device meets the minimum compliance requirements. Devices running iOS: - iOS 10: An app is running an operating system version that is greater than or equal to the specified version. - Debugger Access: An app does not have debugging enabled. - Rooted devices: An app is not running on a rooted device. - Device lock: Device passcode is ON. - Device encrypted: An app is running on an encrypted device.
https://docs.citrix.com/en-us/mdx-toolkit/non-compliant-device-behavior.html
2021-04-10T20:06:23
CC-MAIN-2021-17
1618038057476.6
[]
docs.citrix.com
MARKETO SOURCE CONNECTOR, etc. , using the Marketo REST API. You can find the list of supported Marketo entities in Supported Entities. Initialize configuration parameter tasks.max to the number of entities being configured in the entities.names property of connector, treating all activity types as single entity. leads programs campaigns staticLists activities_add_to_nurture activities_add_to_opportunity tasks.max entities.names The Marketo Source Connector offers the following features: activities marketo.since max.retries retry.backoff.ms max.batch.size max.poll.interval.ms -Xmx8g. entity.names The following are required to run the Kafka Connect Marketo Source Connector: You can install this connector by using the instructions or you can manually download the ZIP file. Navigate to your Confluent Platform installation directory and run the following command to install the latest (latest) connector version. The connector must be installed on every machine where Connect will run. latest confluent-hub install confluentinc/kafka-connect-marketo:latest You can install a specific version by replacing latest with a version number. For example: confluent-hub install confluentinc/kafka-connect-marketo:1.0.0-preview Download and extract the ZIP file for your connector and then follow the manual connector installation instructions. For a complete list of configuration properties for this connector, see Marketo Source Connector Configuration Properties. Note For an example of how to get Kafka Connect connected to Confluent Cloud, see Distributed Cluster.. In this quick start. activities_add_to_nurture,activities_add_to_opportunity marketo_leads marketo_campaigns with the following properties. Find the REST API Endpoint url from the process described in Marketo REST API Quickstart. This endpoint url will be used in marketo.url configuration key (shown below) of the connector, but do note to remove the path rest from the endpoint url before using it in connector configurations. Refer same link to see the process of determining oauth client id and oauth client secret. tasks.max should be 3 here, as there are three entity types, i.e. leads, campaigns and activities. marketo-configs.json marketo.url // substitute <> with your config { "name": "marketo-connector", "config": { "connector.class": "io.confluent.connect.marketo.MarketoSourceConnector", "key.converter": "org.apache.kafka.connect.storage.StringConverter", "value.converter": "org.apache.kafka.connect.json.JsonConverter", "value.converter.schemas.enable": "false", "confluent.topic.bootstrap.servers": "127.0.0.1:9092", "confluent.topic.replication.factor": 1, "confluent.license": "<license>", // leave it empty for evaluation license "tasks.max": topic name and your flag. For more information, see this post. -- confluent local services connect connector load marketo-connector -- -d marketo-configs.json Confirm that the connector is in a RUNNING state. RUNNING confluent local services connect connector status marketo-connector Create some leads, activities and campaigns records using Marketo APIs. Use POST or Bulk Import APIs of appropriate entities to inject some sample records. Confirm the messages from entities leads, activities, and campaigns were delivered to the marketo_leads, marketo_activities and marketo_campaigns topics respectively, in Kafka. 
Note, it may take about a minute for assets (campaigns) and about 5 minutes or more (depending upon the time Marketo server instance takes to prepare the export file) for export entities (leads and activities). confluent local services kafka consume marketo_leads -- --from-beginning
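The same verification can be done from Python. The sketch below uses the confluent-kafka client (an assumption; any Kafka consumer works) against the quick-start broker at 127.0.0.1:9092 and reads the marketo_leads topic from the beginning, mirroring the CLI command above.

```python
# Minimal sketch: consume the connector's marketo_leads output topic.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "127.0.0.1:9092",  # broker used in this quick start
    "group.id": "marketo-quickstart-check",
    "auto.offset.reset": "earliest",        # equivalent of --from-beginning
})
consumer.subscribe(["marketo_leads"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue                         # no new records yet
        if msg.error():
            print("Consumer error:", msg.error())
            continue
        print(msg.value().decode("utf-8"))   # the connector writes JSON values
finally:
    consumer.close()
```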
https://docs.confluent.io/kafka-connect-marketo/current/
2021-04-10T18:57:50
CC-MAIN-2021-17
1618038057476.6
[]
docs.confluent.io
# Understanding uns.network In the Internet era, identifiers are everywhere. We share them by phone or write them down for our friends and family. We do not want them to be complex, we like it when they’re expressed in our language, and we love it when they are unique and personalized. Designed for everyday use, these human-readable identifiers should be easy to remember, unique and usable everywhere on the net. We use them to log in to our favorite websites, to share public or private data, to communicate with others, or simply to be identified. They must be private and protect our privacy, and, most important, our identifiers should be under our sole control. uns.network is the decentralized network and the protocol dedicated to handling IDs rooted in the blockchain, aiming to secure any web and mobile user connection, and to protect the user's privacy. This blockchain is specialized in providing Decentralized IDs (also called DIDs), and offers a good Distributed Public Key Infrastructure (also called DPKI) solution, the basis of the next generation of authentication protocols. The blockchain is fueled with its own native token, UNS, a protocol and utility token. uns.network and its IDs are the backbone of the Unikname solutions: @unikname ID: Augmented pseudo under the sole control of its owner. @unikname IDs are self-sovereign and easy to remember multi-purpose IDs. They can replace email and password for fully confidential sign-up and log in, or be used to control sharing of personal or business information over the web, and many more... Unikname Connect: Unikname Connect is a privacy-by-design authentication solution, rewarding users who want to make the internet a safer place. Unikname Certificate Proofing: the solution to secure mobile apps against falsified TLS/SSL X509 certificates and against man-in-the-middle attacks. Unikname URL-Checker: the solution to store a public proof of ownership of a URL over the web, which could be a domain name, an image, a social account or any file. uns.network has been designed to be used easily by everyone who needs to integrate self-sovereign identifiers in their software and applications, not only DApps but also any traditional web platform. What does 'uns' mean? 'uns' is an acronym and a homonym: it stands for Universal-Name-System and for UnikName-System. # Main Features # 🔥 Issuance and Minting of Decentralized IDs as NFTs ✨ Self-Sovereign Identifier NFTs. uns.network Decentralized IDs are Non-Fungible Tokens (NFTs); they're unique, available every time and everywhere. Individuals or Organizations have ultimate control over their identifier and are the final arbiter of who can access and use the data related to it. ✨ Featured NFTs. uns.network DIDs are Non-Fungible Tokens (NFTs) with peerless features like badges, properties… and crypto-accounts. ✨ Easy to Remember Pseudonymous IDs. uns.network DIDs have an easy to remember shape that could be a pseudo, a real name, or any funny name. We don’t care about your real identity. ✨ Anti-Squatting Protections. uns.network DIDs integrate an original life-cycle management protecting them against squatting. # 🔥 User Rewarding System ✨ uns.network UNS utility tokens are used to encourage users to maximize use of their @unikname ID to make the internet more secure and more trustworthy. # 🔥 On-Chain / Off-Chain Control ✨ uns.network DID owners use their ID in either an off-chain or an on-chain mode. They control the privacy level and the sharing of their data: fully private, with white lists, open...
and IDs stay fully private when they're used for authentication. # 🔥 Highly resilient and highly secure ✨ The decentralized design makes the solution highly resilient and highly secure, far beyond any traditional centralized system. # 🔥 Compatible with existing web & mobile Apps ✨ uns.network APIs and SDKs can be easily used in any web and mobile Apps, even traditional centralized ones (heavy client, SaaS, Platforms...) # Main Specifications 🏆 uns.network DIDs follow the Decentralized Identity Foundation standards. Uns.network team is an active member of the DIF and contribute to the definition of tomorrow standards. see DIF UNS DID Specifications (opens new window) 🏆 uns.network DIDs are multilingual and protected against spoofing by SafeTypo© technology. 14 UNICODE scripts are accepted, with many alphabets. 🏆 uns.network is a DPOS blockchain operated by 23 elected delegates. The blockchain is powered by the famous ARK.IO DPOS Blockchain, setup with a 8 seconds block time. Delegates are grouped in colleges of stakeholders: - 10 Individuals – Defenders of individual freedoms and privacy - 10 Organizations – Committed in securing the web for businesses - 3 Network community players 🏆 uns.network Supply of UNS utility token is dynamic and follows adoption. UNS tokens are created when DIDs are created on the chain and when they become alive, this design makes the token less prone to speculation and more stable. # Key Benefits # For Network Players 💰 uns.network represents an opportunity to participate, to contribute and to get valuable rewards with a viable DPOS blockchain solution. You're contributing to securing the first Decentralized IDentifiers (DID) operational blockchain with its easy to understand use cases. # For Organizations 💰 uns.network offers a backbone solution to provide confidential authentication, with Unikname Connect, to their users, customers and partners. This is the easiest and safest MFA authentication solution for any saas platforms and mobile apps, providing a protection against economic espionage. # For Web Users 💰 Web users have a change to get a Self-Sovereign ID usable everywhere on the web and protecting them against identity theft thanks to the blockchain technology. They become actor of tomorrow's internet and contribute to make the Internet more secure and more trustworthy, and they're rewarded for that! # The network We're maintaining 2 public networks, a live one and a test one: LIVENET: the main network (equivalent to mainnet on other blockchains). Tokens are valuable and you can use them on a day to day basis for the long life. This network is live since may 2020 and you can access the blockchain via the public livenet explorer (opens new window). SANDBOX: the.
https://docs.uns.network/introduction
2021-04-10T19:48:20
CC-MAIN-2021-17
1618038057476.6
[]
docs.uns.network
Applications in Scala can have one of the following shapes: 1) naked core: Ident(_) or Select(_, _) or basically anything else 2) naked core with targs: TypeApply(core, targs) or AppliedTypeTree(core, targs) 3) apply or several applies wrapping a core: Apply(core, _), or Apply(Apply(core, _), _), etc This class provides different ways to decompose applications and simplifies their analysis. ***Examples*** (TypeApply in the examples can be replaced with AppliedTypeTree) Ident(foo): * callee = Ident(foo) * core = Ident(foo) * targs = Nil * argss = Nil TypeApply(foo, List(targ1, targ2...)) * callee = TypeApply(foo, List(targ1, targ2...)) * core = foo * targs = List(targ1, targ2...) * argss = Nil Apply(foo, List(arg1, arg2...)) * callee = foo * core = foo * targs = Nil * argss = List(List(arg1, arg2...)) Apply(Apply(foo, List(arg21, arg22, ...)), List(arg11, arg12...)) * callee = foo * core = foo * targs = Nil * argss = List(List(arg11, arg12...), List(arg21, arg22, ...)) Apply(Apply(TypeApply(foo, List(targs1, targs2, ...)), List(arg21, arg22, ...)), List(arg11, arg12...)) * callee = TypeApply(foo, List(targs1, targs2, ...)) * core = foo * targs = Nil * argss = List(List(arg11, arg12...), List(arg21, arg22, ...)) Some handy extractors for spotting trees through the the haze of irrelevant braces: i.e. Block(Nil, SomeTree) should not keep us from seeing SomeTree. Is tree either a non-volatile type, or a path that does not include any of: Such a tree is a suitable target for type selection. Translates an Assign(_, _) node to AssignOrNamedArg(_, _) if the lhs is a simple ident. Otherwise returns unchanged. Does this CaseDef catch Throwable? Returns a wrapper that knows how to destructure and analyze applications. //------------------------ => effectivePatternArity(args) case Extractor(a) => 1 case Extractor(a, b) => 2 case Extractor((a, b)) => 2 case Extractor(a @ (b, c)) => 2 The first constructor definitions in stats The arguments to the first constructor in stats. Does list of trees start with a definition of a class of module with given name (ignoring imports) Is tree's type volatile? (Ignored if its symbol has the @uncheckedStable annotation.) Is tpt a by-name parameter type of the form => T? Is this pattern node a catch-all or type-test pattern? Is tree a declaration or type definition? Is this pattern node a catch-all (wildcard or variable) pattern? Is tree an expression which can be inlined without affecting program semantics? Note that this is not called "isExprPure" since purity (lack of side-effects) is not the litmus test. References to modules and lazy vals are side-effecting, both because side-effecting code may be executed and because the first reference takes a different code path than all to follow; but they are safe to inline because the expression result from evaluating them is always the same. Is this case guarded? Is tree legal as a member definition of an interface? Is name a left-associative operator? Is tree a path, defined as follows? (Spec: 3.1 Paths) - The empty path ε (which cannot be written explicitly in user programs). - C.this, where C references a class. - p.x where p is a path and x is a stable member of p. - C.super.x or C.super[M].x where C references a class and x references a stable member of the super class or designated parent class M of C. NOTE: Trees with errors are (mostly) excluded. Path ::= StableId | [id ‘.’] this Is tree a pure (i.e. non-side-effecting) definition? 
As if the name of the method didn't give it away, this logic is designed around issuing helpful warnings and minimizing spurious ones. That means don't reuse it for important matters like inlining decisions. Is tpt a vararg type of the form T* ? Is tree a self constructor call this(...)? I.e. a call to a constructor of the same object? Is tree a self or super constructor call? Is this pattern node a sequence-valued pattern? Is tree a stable identifier, a path which ends in an identifier? StableId ::= id | Path ‘.’ id | [id ’.’] ‘super’ [‘[’ id ‘]’] ‘.’ id Is tree admissible as a stable identifier pattern (8.1.5 Stable Identifier Patterns)? We disregard volatility, as it's irrelevant in patterns (scala/bug#6815) Assuming sym is a member of tree, is it a "stable member"? Stable members are packages or members introduced by object definitions or by value definitions of non-volatile types (§3.6). Is this tree a Star(_) after removing bindings? Is tree a super constructor call? a Match(Typed(_, tpt), _) must be translated into a switch if isSwitchAnnotation(tpt.tpe) Is this CaseDef synthetically generated, e.g. by MatchTranslation.translateTry? Is this pattern node a synthetic catch-all case, added during PartialFunction synthesis before we know whether the user provided cases are exhaustive. Is tree a variable pattern? Does this tree represent an irrefutable pattern match in the position for { <tree> <- expr } based only on information at the parser phase? To qualify, there may be no subtree that will be interpreted as a Stable Identifier Pattern, nor any type tests, even on TupleN. See scala/bug#6968. For instance: (foo @ (bar @ _)) = 0 is a not a variable pattern; if only binds names. The following are not variable patterns. `bar` Bar (a, b) _: T If the pattern is a simple identifier, it is always a variable pattern. For example, the following introduce new bindings: for { X <- xs } yield X for { `backquoted` <- xs } yield `backquoted` Note that this differs from a case clause: object X scrut match { case X => // case _ if scrut == X } Background: Is tree a mutable variable, or the getter of a mutable field? Is the argument a wildcard argument of the form _ or x @ _? Is this argument node of the form <expr> : _* ? Does this argument list end with an argument of the form <expr> : _* ? Is the argument a wildcard star type of the form _*? can this type be a type pattern Is symbol potentially a getter of a variable? Is this file the body of a compilation unit which should not have Predef imported? The value definitions marked PRESUPER in this statement sequence Strips layers of .asInstanceOf[T] / _.$asInstanceOf[T]() from an expression Named arguments can transform a constructor call into a block, e.g. <init>(b = foo, a = bar) is transformed to { val x$1 = foo val x$2 = bar <init>(x$2, x$1) } If this tree has type parameters, those. Otherwise Nil. The underlying pattern ignoring any bindings Destructures applications into important subparts described in Applied class, namely into: core, targs and argss (in the specified order). Trees which are not applications are also accepted. Their callee and core will be equal to the input, while targs and argss will be Nil. The provided extractors don't expose all the API of the Applied class. For advanced use, call dissectApplied explicitly and use its methods instead of pattern matching. Locates the synthetic Apply node corresponding to an extractor's call to unapply (unwrapping nested Applies) and returns the fun part of that Apply. 
© 2002-2019 EPFL, with contributions from Lightbend. Licensed under the Apache License, Version 2.0.
https://docs.w3cub.com/scala~2.12_reflection/scala/reflect/runtime/javauniverse$treeinfo$
2021-04-10T19:55:01
CC-MAIN-2021-17
1618038057476.6
[]
docs.w3cub.com
PreInvocationAuthorizationAdviceVoter - clazz: the class that is being queried. public int vote(Authentication authentication, org.aopalliance.intercept.MethodInvocation method, ...)
https://docs.spring.io/autorepo/docs/spring-security/4.1.4.RELEASE/apidocs/org/springframework/security/access/prepost/PreInvocationAuthorizationAdviceVoter.html
2021-04-10T18:41:30
CC-MAIN-2021-17
1618038057476.6
[]
docs.spring.io
Do you know how Twitter following works? It's an asymmetrical relation from node A to node B. Following works in the same way in the DMT SYSTEM, with one difference: dmt node following is not a database relation. To clarify: Person A following Person B is just a small database entry in the central Twitter database. It is their social graph. They own it. In contrast, dmt node following is a more dynamic property of the system. See the next section about Internet connection monitoring first to better understand how DMT ENGINE following works.
https://docs.uniqpath.com/dmt/configuring-devices/device-definition-files/connect.def
2021-04-10T18:26:27
CC-MAIN-2021-17
1618038057476.6
[]
docs.uniqpath.com
March 23, 2021 This release of the Cloudera Data Warehouse (CDW) service on CDP Public Cloud introduces the new features and improvements that are described in this topic. Non-transparent proxy support for AWS environments CDW Public Cloud now supports AWS environments that use non-transparent proxies. Using non-transparent proxies permits you to pass connection or security information along with the connection request that is sent by clients. Some organizations' security policies require the use of non-transparent proxies and now CDW can support that requirement. For more information about this feature, see Use a non-transparent proxy with CDW.
https://docs.cloudera.com/data-warehouse/cloud/release-notes/topics/dw-whats-new-19.html
2021-04-10T18:35:17
CC-MAIN-2021-17
1618038057476.6
[]
docs.cloudera.com
ContentElement.PreviewStylusDown Event Definition Occurs when the stylus touches the digitizer while it is over this element. public: virtual event System::Windows::Input::StylusDownEventHandler ^ PreviewStylusDown; public event System.Windows.Input.StylusDownEventHandler PreviewStylusDown; member this.PreviewStylusDown : System.Windows.Input.StylusDownEventHandler Public Custom Event PreviewStylusDown As StylusDownEventHandler Event Type Implements Remarks This event creates an alias for the Stylus.PreviewStylusDown attached event for this class. Routed Event Information The corresponding bubbling event is StylusDown. Override OnPreviewStylusDown to implement class handling for this event in derived classes.
https://docs.microsoft.com/en-us/dotnet/api/system.windows.contentelement.previewstylusdown?view=netframework-4.8
2021-04-10T20:09:34
CC-MAIN-2021-17
1618038057476.6
[]
docs.microsoft.com
varun aaron ipl 2020 salary About Landing IPL 2020 retained all 8 team that were played IPL 2019. Total amount spent on team new players bought in auction and also the list of players who was released from RAJASTHAN ROYALS for upcoming VIVO IPL 2020. Updated on December 2, 2020 , 19952 views. VIVO IPL 2020: RAJASTHAN ROYALS (RR) Team Cost: 70.2 Crore … VIVO IPL 2020: full players list of RAJASTHAN ROYALS Read … After successful ending of IPL 2019 with the victory of Mumbai Indians, new year 2020 coming up for IPL 2020. He is … In this article, we take a look at Varun Aaron's net worth in 2020, total earnings, salary, and biography. IPL 2020 auction completed. Thus, the other teams might aim to bolster their pace bowling attack by going after Varun Aaron in the mid-season transfer window. … Also Read: IPL 2020 : All you want to know about IPL 2020 schedule, rules players… Rajasthan Royals (RR) key Players of 2020. Salary in the 2020 IPL: Rs 830,500,000 Kolkata Knight Riders (KKR) Kolkata Knight Riders is a franchise cricket team that represents Kolkata in the Indian Premier League. Here is the updated full players list and final squads of RAJASTHAN ROYALS (RR). IPL Career. Varun Aaron has had injuries despite some serious speed while Manan Vohra with oodles of talent has only been brilliant on rare occasions. Many teams in IPL 2020 need an Indian fast bowler and here are the 3 teams that will most probably go after Aaron … Dhoni smashed sixes in all directions at training sessions I had a good moment last year too. KKR will face RR in match 54 of IPL at the Dubai International Stadium on Sunday. 7. I was 19 or 20 back then and to see the whole stadium go up suddenly was a massive high. IPL 2020: Rajasthan Royals End Kings XI Punjab's Winning Streak After Sanju Samson, Ben Stokes Specials - Highlights ... Varun Aaron replaces Ankit Rajpoot, KXIP unchanged. Varun Aaron is a Scorpio and was born in The Year of the Serpent Life. ROYAL CHALLENGERS BANGALORE SQUAD – VIVO IPL 2020 Auction Best Player. One chance in the Rajasthan Royals’ XI with pacer Varun Aaron coming in for Ankit Rajpoot. I had never felt that before. My favourite memory would be my first IPL wicket, Adam Gilchrist. The inability to close out an over strongly can become a difficult thing for some bowlers, and with no bowler is this more apparent in IPL 2020 than with Varun … According to the Board of Cricket Control of India (BCCI) and Emirates Cricket Board (ECB), UAE at present is the most secure vacation spot to host the IPL 2020. IPL 2020: Rajasthan Royals pacer Varun Aaron says India comeback 'still the biggest motivation' Rajasthan Royals pacer Varun Aaron, who last represented India in November 2015, has said that he still fancies a chance of making an India comeback and winning the IPL 2020 … for IPL 2020 scheduled to take place in UAE. Steve Smith has elected to bowl first against KXIP in match 50 of the 2020 IPL. Ben Stokes with base salary of ($1.8 million) join the highest paid IPL players list for 2020. IPL 2020: Among Journeymen IPL Players, Rahul Tewatia Most Impressive. Steve Smith selected as the captain for the Rajasthan Royals IPL squad for 2020 season. More details about the players and draft picks in the given table above. Up Next. The buzz has already started with the 13th edition of the Indian Premier League (IPL) coming closer. 
Varun Chakravarthy IPL 2020 price: The cricketer had a phenomenal run in the Vijay Hazare Trophy 2018–19, where he emerged as the leading wicket-taker for the … ( … Varun Aaron … IPL 2020: Salaries of Rajasthan Royals (RR) players Posted On March 31, 2020 by Anirudh Singh The most understated team in the Indian Premier League (IPL), Rajasthan Royals, created history when they won the inaugural season of the tournament. Varun Aaron made his very first appearances from the royal challenger Bangalore in 2013 IPL. Yes, the popular Indian … Total IPL income ₹140,800,000 , IPL Rajasthan Royals , India and 2020 IPL salary ₹24,000,000 of Varun Aaron. IPL 2020: Tyagi, Unadkat and Rajpoot have done well in supporting Archer, says Varun Aaron; Top five: The most consistent bowlers of IPL 2020 so far At the beginning of the IPL, he didn’t get so much opportunity to showcase his skills in the 2013 season. Continue to next page below to see how much is Varun Aaron really worth, including net worth, estimated earnings, and salary for 2019 and 2020. Post-IPL 2020 the transferred player will have to return back to his parent team. READ | IPL 2020: M.S. Later on 2017 IPL, he was bought by Kings XI Punjab the situation was the same in the Kings XI Punjab. The wait is finally over! IPL 2020 Financial Overview - Budget, Players Salary - Revealed! Varun Chakravarthy, Ruturaj Gaikwad, Natarajan and other young players who have impressed me this IPL 8d Mark Nicholas Three young cricketers I'd like to watch in … IPL Match Details along with Team, Squad, Venue, and Player Price. RR have preferred other pace bowlers over him. Shashank Singh ₹30 lakh (US$42,060) IPL 2020: 'I still push my body every single day of the year,' fitter, smarter Varun Aaron ready to rumble msn back to msn home sports powered by Microsoft News Varun Aaron was born in India on Sunday, October 29, 1989 (Millennials Generation). "Finally!!!!! 1 Ben Stokes – Benjamin Andrew … I got him bowled and it was the middle-stump. At the start of this year's Indian Premier League (IPL), it had seemed that two journeymen, Rajasthan Royals' leg-spinning all-rounder Rahul Tewatia and Kolkata Knight Riders … ... Varun Aaron ₹2.4 crore (US$336,480) 2020. Teams ... Players salary aspects will be left for the franchises to decide. As the tournament approaches closer, we revisit and take a look at the squads of all the franchises. Varun Aaron is a Indian Cricket Player from India. IPL 2020 is back in UAE. Can get back to doing these things.. @rajasthanroyals @iplt20 #cricket #isback," Varun captioned the post. ... IPL 2020: Meet the Kolkata Knight Riders. KKR will need to break the losing-streak to have any chance of qualifying for the IPL 2020 playoffs. His Impact Rank 158 , Value for money Rank 157 … Match 50 of the Indian Premier League ( IPL ) coming closer or back. Coming in for Ankit Rajpoot of IPL at the beginning of the IPL 2020: Meet Kolkata... Be my first IPL wicket, Adam Gilchrist a massive high for Rajpoot! Up suddenly was a massive high $ 336,480 ) 2020 despite some speed. Go up suddenly was a massive high it was the middle-stump has elected to bowl first against KXIP in 54! 2017 IPL, he was bought varun aaron ipl 2020 salary Kings XI Punjab the Kings XI Punjab match along... ( RR ) for the franchises to decide a good moment last Year.! Match 50 of the 2020 IPL salary ₹24,000,000 of Varun Aaron coming in for Ankit Rajpoot Player... 
http://docs.parancoe.org/1ruz44a/9lq8gl.php?page=varun-aaron-ipl-2020-salary-b41658
2021-04-10T19:19:27
CC-MAIN-2021-17
1618038057476.6
[]
docs.parancoe.org
UpdateWorkflow Updates an existing workflow. Request Syntax { "DefaultRunProperties": { "string" : "string" }, "Description": "string", "MaxConcurrentRuns": number, "Name": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - Description The description of the workflow. Type: String Required: No - MaxConcurrentRuns You can use this parameter to prevent unwanted multiple updates to data, to control costs, or in some cases, to prevent exceeding the maximum number of concurrent runs of any of the component jobs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs. Type: Integer Required: No - Name Name of the workflow to be updated. Type: String Length Constraints: Minimum length of 1. Maximum length of 255. Pattern: [\u0020-\uD7FF\uE000-\uFFFD\uD800\uDC00-\uDBFF\uDFFF\t]* Required: Yes Response Syntax { "Name": "string" }
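The API reference above is language-agnostic; as one hedged illustration, the same request can be issued from Python with boto3 (not shown in the reference itself). The workflow name and property values below are placeholders.

```python
# Sketch: call UpdateWorkflow through boto3's AWS Glue client.
import boto3

glue = boto3.client("glue")

response = glue.update_workflow(
    Name="nightly-etl",                            # required, 1-255 characters
    Description="Nightly ETL orchestration",       # optional
    DefaultRunProperties={"environment": "prod"},  # optional key/value pairs
    MaxConcurrentRuns=1,                           # optional concurrency limit
)
print(response["Name"])  # the response returns the name of the updated workflow
```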
https://docs.aws.amazon.com/glue/latest/webapi/API_UpdateWorkflow.html
2021-04-10T20:04:57
CC-MAIN-2021-17
1618038057476.6
[]
docs.aws.amazon.com
Domain separation in third-party application and data source integration This is an overview of domain separation and integration of third-party applications and data sources. Domain separation enables you to separate data, processes, and administrative tasks into logical groupings called domains. You can then control several aspects of this separation, including which users can see and access data. Support level: Basic-Standard. Related topicsDomain separation for service providers
https://docs.servicenow.com/bundle/paris-platform-administration/page/integrate/concept/domain-separation-app-data-source-integration.html
2021-04-10T19:45:28
CC-MAIN-2021-17
1618038057476.6
[]
docs.servicenow.com
Intro Creating a Fixel audience in Facebook Ads To create an audience for use in Facebook, please follow these steps: - Go to the Audience tab in your Facebook ads account/business manager - Click on Create audience -> Custom audience - Choose Website traffic - Choose the High level in the event list (it will appear with your segment name) - Do the same for “Basic” & “Med” according to your business needs (list membership duration etc.) Now that you’ve set up these audience lists, you will be able to see the number of users in each list, and once they reach a critical mass you can start running smarter campaigns on them.
https://docs.fixel.ai/docs/using-fixel/facebook/creating-fixel-audiences-in-facebook-ads/
2021-04-10T19:26:20
CC-MAIN-2021-17
1618038057476.6
[]
docs.fixel.ai
Customizing the Pivot UI You can customize the look and feel of Pivot to align it with your organization's branding. Custom logosCustom logos You can customize the logo that appears in the top header bar and in the slide-out navigation menu from the Advanced settings. The custom log field allows you to set a logo as either text or as an image in SVG format. To set a custom logo, follow these steps: From your profile menu, click Settings. Click the Customize tab. Use the Home menu and Menu logo settings to configure the custom logo. For example, to use text, choose Use custom text logo from options and enter the text to use. Click Save to apply the change: If you specify an SVG logo, we recommend using a white (or light) logo for best visual results. Here is a sample SVG logo to try: <svg width="142" height="48" viewBox="0 0 142 48" fill="none" xmlns=""><path d="M141 25L127 47H20L2 25H40V12H100V25H141Z" fill="#C4C4C4"/><path d="M40 25H2L20 47H127L141 25H100M40 25V12H100V25M40 25H100" stroke="black"/><path d="M140 26H3L20 47H127L140 26Z" stroke="black"/><path d="M40 12V25H101V12H40Z" fill="#FFFCFC" stroke="black"/><path d="M53.5 19C53.5 20.8793 51.7661 22.5 49.5 22.5C47.2339 22.5 45.5 20.8793 45.5 19C45.5 17.1207 47.2339 15.5 49.5 15.5C51.7661 15.5 53.5 17.1207 53.5 19Z" fill="#C4C4C4" stroke="black"/><path d="M92.5 19C92.5 20.8793 90.7661 22.5 88.5 22.5C86.2339 22.5 84.5 20.8793 84.5 19C84.5 17.1207 86.2339 15.5 88.5 15.5C90.7661 15.5 92.5 17.1207 92.5 19Z" fill="#C4C4C4" stroke="black"/><path d="M73.5 19C73.5 20.8793 71.7661 22.5 69.5 22.5C67.2339 22.5 65.5 20.8793 65.5 19C65.5 17.1207 67.2339 15.5 69.5 15.5C71.7661 15.5 73.5 17.1207 73.5 19Z" fill="#C4C4C4" stroke="black"/><path d="M63 1V11H74V1H63Z" fill="#3D3A3A" stroke="black"/></svg> When you paste SVG code into the home or menu logo fields, Pivot validates the SVG code. (If the code is invalid, the text input field is highlighted in red and the Save button disabled.) Note that line breaks are not currently supported in the content of the SVG input field. Before you can save your changes, you must remove any line breaks from your SVG code. Custom message barsCustom message bars You can use message bars to post notices or announcements in Pivot that are visible to all your Pivot users. You can add a custom message bar to appear at the top or at the bottom of every page in Pivot. Switch the toggle next to Top message bar or Bottom message bar to ON. Then enter the text that you want to appear. You can choose to allow users to remove a message bar once they have seen it. Otherwise, a message bar persists until you change and save its settings. You can also choose a background color and text color for your message bar: Custom colorsCustom colors.
https://docs.imply.io/latest/white-label-deployment/
2021-04-10T20:09:46
CC-MAIN-2021-17
1618038057476.6
[array(['/latest/assets/message-bar.png', 'message_bar'], dtype=object)]
docs.imply.io
Secure score in Azure Security Center Introduction to secure score Azure Security Center has two main goals: - to help you understand your current security situation - to help you efficiently and effectively improve your security The central feature in Security Center that enables you to achieve those goals is secure score. Security Center continually assesses your resources, subscriptions, and organization for security issues. It then aggregates all the findings into a single score so that you can tell, at a glance, your current security situation: the higher the score, the lower the identified risk level. The secure score is shown in the Azure portal pages as a percentage value, but the underlying values are also clearly presented: To increase your security, review Security Center's recommendations page for the outstanding actions necessary to raise your score. Each recommendation includes instructions to help you remediate the specific issue. Recommendations are grouped into security controls. Each control is a logical group of related security recommendations, and reflects your vulnerable attack surfaces. Your score only improves when you remediate all of the recommendations for a single resource within a control. To see how well your organization is securing each individual attack surface, review the scores for each security control. For more information, see How your secure score is calculated below. How your secure score is calculated The contribution of each security control towards the overall secure score is shown clearly on the recommendations page. To get all the possible points for a security control, all your resources must comply with all of the security recommendations within the security control. For example, Security Center has multiple recommendations regarding how to secure your management ports. You'll need to remediate them all to make a difference to your secure score. For example, the security control called "Apply system updates" has a maximum score of six points, which you can see in the tooltip on the potential increase value of the control: The maximum score for this control, Apply system updates, is always 6. In this example, there are 50 resources. So we divide the max score by 50, and the result is that every resource contributes 0.12 points. - Potential increase (0.12 x 8 unhealthy resources = 0.96) - The remaining points available to you within the control. If you remediate all the recommendations in this control, your score will increase by 2% (in this case, 0.96 points rounded up to 1 point). - Current score (0.12 x 42 healthy resources = 5.04) - The current score for this control. Each control contributes towards the total score. In this example, the control is contributing 5.04 points to current secure total. - Max score - The maximum number of points you can gain by completing all recommendations within a control. The maximum score for a control indicates the relative significance of that control. Use the max score values to triage the issues to work on first. Calculations - understanding your score Which recommendations are included in the secure score calculations? Only built-in recommendations have an impact on the secure score. Recommendations flagged as Preview aren't included in the calculations of your secure score. They should still be remediated wherever possible, so that when the preview period ends they'll contribute towards your score. 
An example of a preview recommendation: Improve your secure score To improve your secure score, remediate security recommendations from your recommendations list. You can remediate each recommendation manually for each resource, or by using the Quick Fix! option (when available) to apply a remediation for a recommendation to a group of resources quickly. For more information, see Remediate recommendations. Another way to improve your score and ensure your users don't create resources that negatively impact your score is to configure the Enforce and Deny options on the relevant recommendations. Learn more in Prevent misconfigurations with Enforce/Deny recommendations. Security controls and their recommendations The table below lists the security controls in Azure Security Center. For each control, you can see the maximum number of points you can add to your secure score if you remediate all of the recommendations listed in the control, for all of your resources. The set of security recommendations provided with Security Center is tailored to the available resources in each organization's environment. The recommendations can be further customized by disabling policies and exempting specific resources from a recommendation. We recommend every organization carefully review their assigned Azure Policy initiatives. Tip For details of reviewing and editing your initiatives, see Working with security policies. Even though Security Center's default security initiative is based on industry best practices and standards, there are scenarios in which the built-in recommendations listed below might not completely fit your organization. Consequently, it'll sometimes be necessary to adjust the default initiative - without compromising security - to ensure it's aligned with your organization's own policies. industry standards, regulatory standards, and benchmarks you're obligated to meet. Secure score FAQ If I address only three out of four recommendations in a security control, will my secure score change? No. It won't change until you remediate all of the recommendations for a single resource. To get the maximum score for a control, you must remediate all recommendations, for all resources. If a recommendation isn't applicable to me, and I disable it in the policy, will my security control be fulfilled and my secure score updated? Yes. We recommend disabling recommendations when they're inapplicable in your environment. For instructions on how to disable a specific recommendation, see Disable security policies. If a security control offers me zero points towards my secure score, should I ignore it? In some cases, you'll see a control max score greater than zero, but the impact is zero. When the incremental score for fixing resources is negligible, it's rounded to zero. Don't ignore these recommendations as they still bring security improvements. The only exception is the "Additional Best Practice" control. Remediating these recommendations won't increase your score, but it will enhance your overall security. Next steps This article described the secure score and the security controls it introduces. For related material, see the following articles:
https://docs.microsoft.com/en-in/azure/security-center/secure-score-security-controls
Azure Event Hubs metrics in Azure Monitor Event Hubs metrics give you the state of Event Hubs resources in your Azure subscription. With a rich set of metrics data, you can assess the overall health of your event hubs not only at the namespace level, but also at the entity level. These statistics are important because they help you monitor the state of your event hubs. Access metrics Azure Monitor provides multiple ways to access metrics. You can either access metrics through the Azure portal, or use the Azure Monitor APIs (REST and .NET) and analysis solutions such as Log Analytics and Event Hubs. For more information, see Monitoring data collected by Azure Monitor. Metrics are enabled by default, and you can access the most recent 30 days of data. If you need to keep data for a longer period of time, you can archive metrics data to an Azure Storage account; this is configured in the diagnostic settings for your namespace. To view metrics filtered to the scope of the namespace, open the namespace in the Azure portal and select Metrics. To display metrics filtered to the scope of the event hub, select the event hub and then select Metrics. For metrics supporting dimensions, you must filter with the desired dimension value as shown in the following example: Billing Using metrics in Azure Monitor is currently free. However, if you use other solutions that ingest metrics data, you may be billed by these solutions. For example, you are billed by Azure Storage if you archive metrics data to an Azure Storage account. You are also billed by Azure if you stream metrics data to Azure Monitor logs. The time granularity of Event Hubs metrics is 1 minute. Azure Event Hubs metrics For a list of metrics supported by the service, see the Azure Event Hubs metrics reference. Note When a user error occurs, Azure Event Hubs updates the User Errors metric, but doesn't log any other diagnostic information. Therefore, you need to capture details on user errors in your applications. Or, you can also convert the telemetry generated when messages are sent or received into Application Insights. For an example, see Tracking with Application Insights. Azure Monitor integration with SIEM tools Routing your monitoring data (activity logs, diagnostics logs, and so on) to an event hub with Azure Monitor enables you to easily integrate with Security Information and Event Management (SIEM) tools. For more information, see the following articles/blog posts: - Stream Azure monitoring data to an event hub for consumption by an external tool - Introduction to Azure Log Integration - Use Azure Monitor to integrate with SIEM tools In the scenario where an SIEM tool consumes log data from an event hub, if you see no incoming messages or you see incoming messages but no outgoing messages in the metrics graph, follow these steps: - If there are no incoming messages, it means that the Azure Monitor service isn't moving audit/diagnostics logs into the event hub. Open a support ticket with the Azure Monitor team in this scenario. - If there are incoming messages, but no outgoing messages, it means that the SIEM application isn't reading the messages. Contact the SIEM provider to determine whether the configuration of the event hub in those applications is correct. Next steps - See the Azure Monitoring overview. - Retrieve Azure Monitor metrics with .NET sample on GitHub. For more information about Event Hubs, visit the following links:
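To complement the portal steps above, the following sketch shows one way to pull the IncomingMessages and OutgoingMessages metrics programmatically, which is handy for the SIEM troubleshooting scenario described earlier. It assumes the azure-monitor-query and azure-identity Python packages and uses a placeholder resource ID for your Event Hubs namespace; treat it as an illustrative example rather than a definitive reference.
# Sketch: query Event Hubs namespace metrics via Azure Monitor.
# Assumes `pip install azure-monitor-query azure-identity` and that the
# signed-in identity has at least Monitoring Reader on the namespace.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType
# Placeholder resource ID - replace with your own Event Hubs namespace.
RESOURCE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.EventHub/namespaces/<namespace>"
)
client = MetricsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    RESOURCE_ID,
    metric_names=["IncomingMessages", "OutgoingMessages"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=1),   # Event Hubs metrics granularity is 1 minute
    aggregations=[MetricAggregationType.TOTAL],
)
for metric in response.metrics:
    total = sum((point.total or 0) for ts in metric.timeseries for point in ts.data)
    print(f"{metric.name}: {total:.0f} messages in the last hour")
If incoming totals are zero, that points at the Azure Monitor side; if incoming is non-zero but outgoing stays at zero, the consuming SIEM application is the likely culprit, as described above.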
https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-metrics-azure-monitor
Service providers use SVMs in secure multitenancy arrangements to isolate each tenant's data, to provide each tenant with its own authentication and administration, and to simplify chargeback. You can assign multiple LIFs to the same SVM to satisfy different customer needs, and you can use QoS to protect against tenant workloads bullying the workloads of other tenants. Administrators use SVMs for similar purposes in the enterprise. You might want to segregate data from different departments, or keep storage volumes accessed by hosts in one SVM and user share volumes in another. Some administrators put iSCSI/FC LUNs and NFS datastores in one SVM and SMB shares in another.
http://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-concepts/GUID-53A1F354-F4D5-4477-A932-FFB2F64BDC3C.html
Snap Report API IMPORTANT You need an active license for the DevExpress Office File API Subscription or DevExpress Universal Subscription to use this library in production code. To learn how to implement a specific task for your reports using the Snap Report API, check the Examples section. To learn more about the Snap architecture, refer to the articles in the Concepts section.
https://docs.devexpress.com/OfficeFileAPI/15188/snap-report-api
Problem You see the following log message indicating that New Relic's Grape instrumentation is not being installed: INFO : Not installing New Relic supported Grape instrumentation because the third party newrelic-grape gem is present Solution Remove the newrelic-grape gem. Cause New Relic does not support the third-party newrelic-grape gem. If the Ruby agent detects the newrelic-grape gem, it does not install New Relic's built-in Grape instrumentation. Removing the third-party gem allows you to use the supported Grape instrumentation.
https://docs.newrelic.com/docs/agents/ruby-agent/troubleshooting/not-installing-new-relic-supported-grape/
Through the Admin console, admin users can modify settings and users at the system and workspace level, as well as run health checks and manage the license for Cloud Dataprep by TRIFACTA® INC. NOTE: This option is available only to project administrators. Review and edit settings applicable to the project. For more information, see Dataprep Project Settings Page. NOTE: This option is available only to project administrators. Specify service accounts under which to run jobs on Cloud Dataflow for individual project users. For more information, see Service Accounts Page.
https://docs.trifacta.com/exportword?pageId=163017819
We are very glad to have finally launched our new website! A few of our goals with the new website were to make it faster, easier to navigate for users, and easier for us to manage/update. And most of all, we wanted to help our clients get to know us better. If you take a look around the new site we think you’ll get a pretty good idea of who we are and what we do as a company.
http://www.flights-docs.com/en/component/k2/item/5-new-website.html
We offer technical support to help you with problems related to the Appery.io platform, including: - App Builder. - Databases. - Push Notifications. - Server Code. - Code. - Any code generated by the Appery.io platform. - APIs. - Any Appery.io APIs such as REST APIs and JavaScript APIs. Unless you purchase the Advisory Pack, Flex Pack or a custom support option, our technical support does not include help with application related issues such as but not limited to: - Writing custom app code (HTML, JavaScript, CSS, Apache Cordova). - Library/framework questions on jQuery Mobile, Ionic, Bootstrap, Angular and Cordova. - For example: a question on how to create a jQuery query to find a particular element on a page. - Helping or debugging your custom app code or logic. - Helping or debugging code you wrote in the Source view. - 3rd party REST services. - 3rd party Apache Cordova/PhoneGap plugins. - Submitting the app to the app stores. - Why the app wasn’t approved by the Apple App Store, Google Play Market. - Restoring any backup data, such as a deleted Server Code script, a database collection, etc. However, in most cases, we will offer you some direction or provide links to 3rd party resources or examples. Examples: - If you have a question on how to add a particular feature to a jQuery Mobile popup that can only be done via custom JavaScript, that falls outside the scope of our support. - If you have a question on how to update the List component via JavaScript, we may be able to provide some direction, but this also generally falls outside the scope of our support. We provide two support channels: 1) Forum – this is for anyone to ask questions. We cannot guarantee a response to every question or response time. It’s the best effort support. This means we will try to answer as many questions as we can and as fast as we can. 2) Email support for paid plans (Basic, Standard, Pro, Team) from inside the App Builder. Menu button (upper right corner) > Priority support. We encourage everyone to use the forum since it creates a nice community discussion. If we or someone else from the community answers your question, someone else who has the same question will be able to get an answer quickly. We encourage everyone in the community to help each other via answer their questions via the forum. - Our support team is generally available 24/7. We have less staffing on weekends and holidays. - Unless you have a Paid support option, standard support doesn’t come with response time guarantee. - Standard, Pro, Team plan users get priority support and should use the support form inside the app builder (Menu button > Priority support). In general, you should expect a reply within 8 hours (during the week). - Basic plan users please use our forum. It’s extremely rare for us not to reply to a question (it just might take a little bit more time). Paid Support We have a number of paid support options. To learn more, please visit or send us an email: [email protected].
https://docs.appery.io/docs/general-support-policy
Known Issues and Limitations in Cloudera Manager 6.1.1 The following sections describe known issues and limitations for Cloudera Manager 6.1.1: - Cloudera Manager installation fails on MariaDB 10.2.8 and later - Backup and Disaster Recovery (BDR) performance regression after upgrading to CDH 6.0.0 - Cloudera Manager allows more than a single space in YARN Admin ACLs - Integer data types map to Float in Swagger API client - User Sessions page doesn't update with a newly logged in SAML user - Package Installation of CDH Fails User Sessions page doesn't update with a newly logged in SAML user If you log into Cloudera Manager as the administrator user, and then log into Cloudera Manager with a SAML user through a different browser, the SAML user does not appear on the User Sessions page. Affected Versions: 6.0.0, 6.0.1 Cloudera Issue:
https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cm_611_known_issues.html
If a load balancer is working properly, and its IP address has been correctly resolved to the domain name, do not delete the load balancer unless necessary. Once a load balancer is deleted, the bound IP address is released, and its configuration is deleted and cannot be recovered. If another load balancer is created, the system assigns a new IP address. You can also use the original IP address when creating the load balancer.
https://docs.otc.t-systems.com/en-us/usermanual/elb/en-us_elb_05_0012.html
The Dome light is a type of VRayLight that provides somewhat even lighting to a scene. Target radius – Defines a sphere around the light icon where photons are shot when photon-mapped caustics or the global photon map are used. Emit radius – Defines a sphere around the light icon from which photons are shot towards the target radius area. Note: Using a far clipping plane on your camera hides the light dome from being visible in the final render. Adaptive dome – Speeds up the rendering by optimising the sampling algorithm for the dome light. No light portals are needed with this setup. Light Cache must be set as the GI engine for this feature. You can rotate the HDRI from within the 3ds Max Material Editor. Example renders: rotating the HDRI to -200 degrees; an HDRI with -100 degrees horizontal rotation, where the caustic effect is almost not visible. Affect diffuse – Determines whether the light affects the diffuse portion of the materials. The Multiplier value on the Intensity rollout controls the light's contribution to the diffuse portion of the materials. Affect specular – Determines whether the light affects the specular portion of the materials. The Multiplier value on the Intensity rollout controls the light's contribution to specular reflections. Affect reflections – Specifies whether the light source appears in reflections. Cutoff – When set to 0.0, the light is calculated for all surfaces regardless of intensity loss. The default value is 0.001. This parameter is not available when the renderer is set to GPU. - To enable the preview of the Dome Light illumination through the Nitrous preview in the viewport, the following Windows environment variable has to be manually created: VRAY_DOME_VIEWPORT=1 - When the Store with irradiance map option is checked on any V-Ray Light, it is then no longer treated as a direct light source and is not available within the Light Select Render Element.
https://docs.chaosgroup.com/display/VRAY4MAX/Dome+Light
Prepare for go-live Important Dynamics 365 for Finance and Operations is now being licensed as Dynamics 365 Finance and Dynamics 365 Supply Chain Management. For more information about these licensing changes, see Dynamics 365 Licensing Update. This topic describes how to prepare to go live with a project. As part of user acceptance testing (UAT), you must test all the business processes that you've implemented, and any customizations that you've made, in a Sandbox, or Standard Acceptance Test, environment in the implementation project. To help ensure a successful go-live, you should consider the following as you complete the UAT phase: - Test cases cover the entire scope of the requirements. Regardless of whether the environment is a cloud-hosted environment or a downloaded virtual hard disk (VHD), testing can't be considered complete when you test only in an environment that is a developer or demo topology. All customers must complete a go-live review with the Microsoft FastTrack team before their production environment can be deployed. This assessment should be successfully completed before you request your Production environment. If you aren't familiar with Microsoft FastTrack, see Microsoft FastTrack for Dynamics 365 overview. About eight weeks before go-live, the FastTrack team will ask you to fill in a go-live checklist. You can download the checklist from Dynamics 365 Community on the Go-live Planning TechTalk page, and email the completed checklist to the Dynamics 365 FO Go-Live alias ([email protected]). Always include a key stakeholder from the customer and the implementation partner on the email. If you request your production environment before the assessment is complete, the request will remain in the Queued state until the assessment is successfully completed. You can cancel an environment deployment request while it is in a Queued state by following these steps: - Click Queued. - On the Customer sign-off tab, click Clear sign-off. This will set the environment back into a state of Configure and allow you to make changes to the configuration, such as selecting a different data center or environment topology. Requesting the production environment: - First sign-in to any environment after initial deployment – In this case, the Admin user is the only user who can access the environment. - First sign-in to a sandbox environment after a database refresh from the production environment – In this case, all user accounts except the Admin account are unable to sign in.
https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/fin-ops/imp-lifecycle/prepare-go-live
Edge Transport servers with hybrid deployments The Edge Transport server role is an optional role that's typically deployed on a computer located in an Exchange organization's perimeter network and is designed to minimize the attack surface of the organization. The Edge Transport server role handles all internet-facing mail flow, which provides SMTP relay and smart host services for the internal on-premises Exchange servers in your organization. Edge Transport servers in Exchange-based hybrid deployment organizations Exchange 2016 organizations that want to use Edge Transport servers have the option of deploying Edge Transport servers running the latest release of Exchange 2010 or later. Use Edge Transport servers if you don't want to expose internal Exchange servers directly to the internet. When you deploy an Edge Transport server in a hybrid deployment, Exchange Online, via the Exchange Online Protection service, will connect to your Edge Transport server to deliver messages. The Edge Transport server will then deliver messages to the on-premises Exchange Mailbox server where the recipient mailbox is located. Important Don't place any servers, services, or devices between your on-premises Exchange servers and Office 365 that process or modify SMTP traffic. Secure mail flow between your on-premises Exchange organization and Office 365 depends on information contained in messages sent between the organization. Firewalls that allow SMTP traffic on TCP port 25 through without modification are supported. If a server, service, or device processes a message sent between your on-premises Exchange organization and Office 365, this information is removed. If this happens, the message will no longer be considered internal to your organization and will be subject to anti-spam filtering, transport and journal rules, and other policies that may not apply to it. An Edge subscription is required for Exchange hybrid. If you have other Exchange Edge Transport servers in other locations that won't handle hybrid transport, they don't need to be upgraded to support a hybrid deployment. However, if in the future you want EOP to connect to additional Edge Transport servers for hybrid transport, they must be running the latest release of Exchange 2010 or later. Adding an Edge Transport server to a hybrid deployment Deploying an Edge Transport server in your on-premises organization when you configure a hybrid deployment is optional. When configuring your hybrid deployment, the Hybrid Configuration wizard allows you to either select one or more internal on-premises Exchange servers, or to select one or more on-premises Edge Transport servers to handle hybrid mail transport with the Exchange Online organization. When you add an Edge Transport server to your hybrid deployment, it communicates with EOP on behalf of the internal Exchange servers. The Edge Transport server acts as a relay between the internal Exchange servers and EOP for outbound messaging from the on-premises organization to the Exchange Online organization. The Edge Transport server also acts as a relay between the internal Exchange servers for inbound messaging from the Exchange Online organization to the on-premises organization. All connection security previously handled by internal Exchange servers is handled by the Edge Transport server. Recipient lookup, compliance policies, and other message inspection, continue to be done on the internal Exchange servers. 
If you add an Edge Transport server to your hybrid deployment, you don't need to route mail sent between on-premises users and internet recipients through it. Only messages sent between the on-premises and Exchange Online organizations will be routed through the Edge Transport server. Important If you need to delete and recreate an Edge subscription that's used to communicate between your on-premises organization and Exchange Online, make sure to run the Hybrid Configuration wizard again. Recreating an Edge subscription removes configuration changes that are needed for your on-premises organization to talk to Exchange Online. Re-running the Hybrid Configuration wizard applies those changes again. Mail flow without an Edge Transport server The following process and diagram describes the path messages take between an on-premises organization and Exchange Online when there isn't an Edge Transport server deployed: Outbound messages from the on-premises organization to recipients in the Exchange Online organization are sent from a mailbox on an internal Exchange server. The Exchange server sends the message directly to EOP . EOP delivers the message to the Exchange Online organization. Messages sent from the Exchange Online organization to recipients in the on-premises organization follow the reverse route. Mail flow in a hybrid deployment without an Edge Transport server deployed Mail flow with an Edge Transport server The following process describes the path messages take between an on-premises organization and Exchange Online when there is an Edge Transport server deployed. Messages from the on-premises organization to recipients in the Exchange Online organization are sent from the internal Exchange server: Messages from the on-premises organization to recipients in the Exchange Online organization are sent from a mailbox on an internal Exchange server. The Exchange server sends the message to an Edge Transport server running a supported version and release of Exchange. The Edge Transport server sends the message to EOP. EOP delivers the message to the Exchange Online organization. Messages sent from the Exchange Online organization to recipients in the on-premises organization follow the reverse route. Note Installing an Edge server and establishing an Edge subscription will impact your mail flow. This process automatically creates two Send connectors for internet mail flow: one to send e-mail to all internet domains, and another to send email from the Edge Transport server to the internal Exchange organization. Please review the connectors and mail flow if this is not your intended mail flow scenario. Mail flow in a hybrid deployment with an Edge Transport server deployed Exchange Server Deployment Assistant Feedback
https://docs.microsoft.com/en-us/exchange/edge-transport-servers?redirectedfrom=MSDN
JScript Functions JScript functions can perform actions, return values, or both. For example, a function could display the current time and return a string that represents the time. Functions are also called global methods. Functions combine several operations under one name, which makes code streamlined and reusable. You can write a set of statements, name it, and then execute the entire set by calling its name and passing to it the necessary information. To pass information to a function, enclose the information in parentheses after the name of the function. Pieces of information that are passed to a function are called arguments or parameters. Some functions do not take arguments, while others take one or more arguments. In some functions, the number of arguments depends on how you are using the function. JScript supports two kinds of functions, those that are built into the language and those that you create. In This Section Type Annotation Describes the concept of type annotation and how to use it in a function definition to control the input and output data types. User-Defined JScript Functions Illustrates how to define new functions in JScript and how to use them. Recursion Explains the concept of recursion and illustrates how to write recursive functions. Related Sections JScript Operators Lists the computational, logical, bitwise, assignment, and miscellaneous operators and provides links to information that explains how to use them efficiently. JScript Data Types Includes links to topics that explain how to use primitive data types, reference data types, and .NET Framework data types in JScript. Coercion in JScript Explains the concept of coercion, how to use it, and the limitations of coercion. function Statement Describes the syntax for declaring functions.
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/06zxe25k%28v%3Dvs.100%29
The following selection criteria are available to enable the user of the report to specify a particular data set. Agent – Agents can be pre-selected to only return result sets for the agents specified. If this field is left blank, all agent IDs found within the Activity Log will be evaluated and data recorded within the report output. Date – A date range must be selected to run the report to determine which time period should be aggregated to produce the result set. This period can cover one day or several; however, it only includes full days and is not broken down by time. BPEMs Only – This option will filter the result set to show and aggregate only those items registered in the Activity Log for which the CCH was launched for a BPEM case object.
http://docs.basistechnologies.com/bdex-productivity-report/1/en/topic/selection-screen-options
Installing Magento 2.0+ - Ubuntu 16.04 hosted on DigitalOcean First, a special thank you to the DigitalOcean Community for writing such an awesome article on installing Magento 1.9 on Ubuntu 14.04. The majority of the steps are going to be the same; this article will highlight the differences from the DigitalOcean Article to Install Magento 2.x (Latest Release) on Ubuntu 16.04 (Latest Release) Prerequisites: - Assumes basic knowledge of using terminal commands and flags - Digital Ocean Account (It's cheap). Sign up: - A wireless connection with port 22 open (SSH) Steps: - Create a LAMP (Linux, Apache, MySql, PHP) Ubuntu 16.04 pre-configured DigitalOcean Droplet with SSH - Create a non-root sudo user on your Ubuntu 16.04 Droplet (image) - Configure Apache Virtual Host - Configure PHP for Magento 2.x - Create a MySQL Database User in a pre-configured LAMP DigitalOcean Droplet - Download and Set Up Magento Files for Magento 2.x - Complete Magento Setup in the UI - Launch Magento and generate API credentials Step 1: Create a LAMP (Linux, Apache, MySql, PHP) Ubuntu 16.04 pre-configured DigitalOcean Droplet with SSH - Generate an SSH Key - First let's generate an SSH key we will use when setting up our DigitalOcean Droplet, this will give us easy access to our Ubuntu Server - Open a terminal and run the command ssh-keygen - The result should look something like this, here I have used -f to change where my key is saved: - By default the Public Key key is saved to ~/.ssh/id_rsa.pub - Run copy the entire output, this is your SSH Public Key —Save it. cat ~/.ssh/id_rsa.pub - Create The Droplet - The easiest way to get up and running with the full suite of Linux, Apache, MySql, and PHP is with a pre-configured One Click Digital Ocean Droplet - To create one, log in to your DigitalOcean account and click Create>Droplet. - Select the One-Click Apps Tab and click Lamp on 16.04 - Magento is a fairly intensive system, it's recommended to have at least 2 vCPU's - Complete the steps until you reach New SSH Key - Paste in your SSH Public Key which was generated a moment ago, DigitalOcean will validate the format, check that you copied correctly if you receive an error - Finally, add a Host Name and click Create - On the Droplets page () note the IP address of your Droplet — Save it. Step 2: Create a non-root sudo user on your Ubuntu 16.04 Droplet (image) Before we do anything we first need to SSH into our Ubuntu 16.04 Server on DigitalOcean. After, we will create a non-root user we will use for the rest of our installation. Make sure to use the same computer you generated the SSH Key on - SSH into your Server & Save MySQL Password - Open a terminal and run: ssh root@{yourIP}ex. ssh [email protected] - yourIP is the IP address of your Droplet in DigitalOcean, you should connect without entering a password because we have entered our SSH keys earlier. - Once connected, some welcome text is displayed: - MySQL is preinstalled on this image, note the file location of the MySQL Password, get the password by running cat on the file. root@Lamp2:~# cat /root/.digitalocean_password - Note the MySQL Password — Save it. - Create a non-root sudo user - Create a new user with sudo privileges. - If you need guidance creating a new user, check out another great guide from DigitalOcean. Follow every step here: Step 3: Configure Apache Virtual Host Next we will configure Apache Virtual Host, this will ensure that Apache understands how to handle Magento. 
We will create a virtual host config file in the directory: /etc/apache2/sites-available/d - Use a text editor to create the config file, I use vim, and have called my file magento.conf sudo vim /etc/apache2/sites-available/magento.conf - Paste the following into the config, write, save, and close the file. <VirtualHost *:80> DocumentRoot /var/www/html <Directory /var/www/html/> Options Indexes FollowSymLinks MultiViews AllowOverride All </Directory> </VirtualHost> - After saving the magento.conffile, enable the new Apache site by running the following command. sudo a2ensite magento.conf - Disable the default host file that came with Apache to avoid any conflicts with our new host file. sudo a2dissite 000-default.conf Seeking a deeper explanation for Apache Virtual Host steps? Refer to the DigitalOcean article Apache Virtual Host. The process for configuring Apache is the same on Ubuntu 14.04 and 16.04. Step 4: Configure PHP Settings Make sure to follow these steps for Ubuntu 16.04. The configuration of PHP for Magento 2 is different than Magento 1.9 on Ubuntu 14.04. Failure to follow these steps will result in our site not running properly. Magento is a fairly intensive program to run and uses PHP for most of it's operations and indexing. It's a good idea to raise the memory limit Apache grants to PHP in our php.inifile. If we don't raise this limit we risk one of our scripts running out of memory causing the script to crash. Magento 2.x is not compatible with php5. We are going to use php7.0, which is installed by default on our DigitalOcean Ubuntu server. - Raise the Apache PHP memory grant - Open the config file with a text editor sudo vim /etc/php/7.0/apache2/php.ini - Find line memory_limit = 128Mand change 128M to 512M . Write, save and close the file. - Install PHP module dependencies - Magento requires several PHP modules, let's install them. First let's update our packages, and then install the new modules sudo apt-get update sudo apt-get install libcurl3 php7.0-curl php7.0-gd php7.0-mcrypt php7.0-xml php7.0-mbstring php7.0-zip php7.0-intl - Add Apache rewrite & PHP encryption support - PHP commands default to the active version of PHP, since we only have PHP7.0 we can simply run phpenmod sudo a2enmod rewrite sudo phpenmod mcrypt - Restart Apache to Apply Changes sudo service apache2 restart Step 5: Create a MySQL Database User in pre-configured LAMP 16.04 DigitalOcean Droplet Magento uses MySQL to store data including products, orders, customers ect. We will need to configure MySQL to get it ready for use with Magento. Remember the MySQL Password we saved from Step 2? You are going to need it here! - Log into your MySQL root account, run mysql -u root -p - Enter your password, the root password for MySQL is NOT your root password for Ubuntu. It's contained in the file: /root/.digitalocean_password - Create a database for Magento to use - We will call our database magento, but you may name it whatever you'd like. CREATE DATABASE magento; - Create a user and grant all privileges - We named our user myuser1with password password, you may choose something else. CREATE USER myuser1@localhost IDENTIFIED BY 'password'; GRANT ALL PRIVILEGES ON magento.* TO myuser1@localhost IDENTIFIED BY 'password'; - Apply the user changes & exit MySQL - Flush privileges in MySQL to let MySQL know that we have made some changes and to apply them, then exit. 
FLUSH PRIVILEGES; exit Step 6: Download and Setup Magento Files for Magento 2.x Now that we have configured our server, we can install Magento and begin the setup. First we will download and unpack the files, then complete setup through the UI of our new instance of Magento. In our example we will install Magento 2.0.18. Visit the Magento Github Releases page to find the release of Magento to install: - Go to your root directory and use wget to download the tar file. - Extract the files using tar tar xzvf 2.0.18.tar.gz - Navigate to the new directory created by unzipping the file. cd magento2-2.0.18/ - Install Composer to check dependencies - Composer is a tool that checks dependencies for PHP, it will also generate our Autoload files for us, which will ensure that Magento is visible in the UI. - In the Magento root directory, run sudo apt-get install composer composer install - The output should be green, and there should be no "problems" - Use rsync to copy all magento files to our Apache root directory sudo rsync -avP /root/magento2-2.0.18/. /var/www/html/ - Assign Ownership of the files to the Apache User Group sudo chown -R www-data:www-data /var/www/html/ - Restart Apache sudo service apache2 restart Step 7: Complete Magento Installation in the UI If you have reached this step, congrats! We are just about finished with the setup, and we've completed our work in the terminal. - Navigate to your servers public IP, the same IP you SSH with The resulting page should look something like this, click Agree and Setup Magento - Complete the readiness check - Add MySQL Login information - Enter your MySQL Host, Database Name, User Name, and Password created when we configured MySQL in Step 5. It's recommended you log in with a non-root user for security. Click Next - Set an Admin URL for your store - Complete Customize Your Store, and Create Admin Account (Make sure to choose a strong password!) - Finally, Select Install Be sure to save the information presented after install, as this info is difficult to retrieve. Step 8: Launch Magento & Generate API Credentials You've reached the end! You should now be able to log into Magento Admin with your newly created Admin account. Don't forget to generate your API Key and Secret! For more information, Magento API Provider Setup.
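As an optional follow-up to Step 8, the sketch below shows one way to confirm that the REST API and your newly created admin credentials work, using Python's requests library. The base URL, username, and password are placeholders for your own values, and it assumes Magento 2's standard /rest/V1/integration/admin/token and /rest/V1/store/websites endpoints.
# Sketch: request a Magento 2 admin token and list store websites.
# Placeholders: BASE_URL, USERNAME, PASSWORD - replace with your own values.
import requests
BASE_URL = "http://<your-droplet-ip>"          # or your domain
USERNAME = "admin"                             # admin account created in Step 7
PASSWORD = "<your-admin-password>"
# 1. Obtain an admin access token.
resp = requests.post(
    f"{BASE_URL}/rest/V1/integration/admin/token",
    json={"username": USERNAME, "password": PASSWORD},
    timeout=30,
)
resp.raise_for_status()
token = resp.json()                            # the token is returned as a JSON string
# 2. Call an authenticated endpoint to verify the token works.
websites = requests.get(
    f"{BASE_URL}/rest/V1/store/websites",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
websites.raise_for_status()
print(websites.json())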
https://docs.cloud-elements.com/home/magento-20-sandbox
JIRA Cloud Connect TestFairy to JIRA Cloud 1. Create a JIRA API token 1.1 log in to and click on "Create API token" 1.2 Label the new token "TestFairy". 1.3 Copy the API Token. 2. Configure JIRA in your TestFairy settings: 2.1. Open your TestFairy account Preferences 2.2 Choose "Bug Systems" -> "JIRA", and enter your JIRA Username, API Token and JIRA URL. 3. (optional, highly recommended) Add The TestFairy JIRA Add-on to your JIRA account The TestFairy JIRA Add-on adds TestFairy videos to JIRA issues. In order to install it please follow these steps: 3.1. Open JIRA Settings 3.2 Open Apps 3.3 In the Apps manu press 'Find new apps' 3.4 Add "TestFairy for Jira" to your account. 4. (optional ,highly recommended) On the first issue that is created, click on the "3 dots" icon and choose "TestFairy Session" This is how JIRA issues look After the installation 5. (optional ,highly recommended) Map JIRA Custom Fields. The TEstFairy JIRA integration allows you to automatically populate any field in JIRA when creating a new issue. You can do that with standard TestFairy data like app name, version, user, device etc. or with your own seesion attributes, any key and value you push to our SDK in real time. In order to map follow these steps Last updated on 2019-09-09
https://docs.testfairy.com/Bug_Tracking/JIRA_Cloud.html
“Actions” drop down list.
https://docs.vinsight.net/syncing-data-from-xero/
Manage Services To manage and utilize core services available through Murano, you will first need to Create an Application and Create a Product. Also have a look at the Micro-services reference page. Table of Contents - Add Services - Link an IoT-Connector product - Create your Own Service Add Services While a set of services is enabled by default, you can click the "ENABLE SERVICES" button to find more Micro-Services available for your solution within Murano's IoT Exchange marketplace. Check out the Core Services Reference to learn more about each service. Concrete setup examples are available in the service integration guides. Configure a Service Services enabled for the solution are listed on that page. Clicking on one will display the documentation and settings to use that service. Note: Not all services need a configuration. Service configuration parameters are defined in the Services Reference under each service page. React to Service events Murano is an event-based system in which scripts are executed when triggered by a Micro-Service event. Those events are accessible under each service item. The event handler screen will allow you to input custom Lua code to utilize data from the events. All service event payloads are documented on the Micro-Services reference page under the 'events' section for each service. Note: Not all services have events. Read more about the scripting event handling. Link an IoT-Connector Murano IoT-Connectors are exposed to applications as Micro-Services themselves and can therefore be added, configured, and accessed from scripting in the same way as other services. You will see the list of available IoT-Connectors under the IoT-Connector-Setup entry. Read more about connecting IoT-Connectors & Applications. Also read about how to use device messages coming from IoT-Connectors. Create your Own Service You can also integrate any external HTTP API as a Murano Service by publishing an OpenAPI service to the Exosite Exchange marketplace.
http://docs.exosite.com/reference/ui/manage-services/
Strings.StrConv(String, VbStrConv, Int32) Method Definition Returns a string converted as specified. public static string StrConv (string str, Microsoft.VisualBasic.VbStrConv Conversion, int LocaleID = 0); static member StrConv : string * Microsoft.VisualBasic.VbStrConv * int -> string Public Function StrConv (str As String, Conversion As VbStrConv, Optional LocaleID As Integer = 0) As String Parameters str – Required. The String expression to be converted. Conversion – Required. A VbStrConv enumeration value specifying the type of conversion to perform. LocaleID – Optional. The LocaleID value, if different from the system LocaleID value. (The system LocaleID value is the default.) Returns A string converted as specified. Exceptions ArgumentException – Unsupported LocaleID, Conversion < 0 or > 2048, or unsupported conversion for specified locale. Examples This example converts text into all lowercase letters. Dim sText As String = "Hello World" ' Returns "hello world". Dim sNewText As String = StrConv(sText, VbStrConv.LowerCase) Remarks Important If your application makes security decisions based on the result of a comparison or case-change operation, then the operation should use the String.Compare method, and pass Ordinal or OrdinalIgnoreCase for the comparisonType argument. For more information, see How Culture Affects Strings in Visual Basic. The Conversion argument settings are: * Applies to Asian locales. ** Applies to Japan only. Note These constants are specified in the .NET Framework common language runtime. As a result, they can be used anywhere in your code in place of the actual values. Most can be combined (for example, UpperCase + Wide), except when they are mutually exclusive (for example, VbStrConv.Wide + VbStrConv.Narrow).
https://docs.microsoft.com/en-gb/dotnet/api/microsoft.visualbasic.strings.strconv?view=netframework-4.8
[−][src]Crate cpp_core Utilities for interoperability with C++ See the project's README for more information. The API is not stable yet. Breaking changes may occur in new minor versions. Pointers cpp_core provides three kinds of pointers: CppBox: owned, non-null (corresponds to C++ objects passed by value) Ptrand MutPtr: possibly owned, possibly null (correspond to C++ pointers) Refand MutRef: not owned, non-null (correspond to C++ references) Accessing objects through these pointers is inherently unsafe, as the compiler cannot make any guarantee about the validity of pointers to objects managed by C++ libraries. Unlike Rust references, these pointers can be freely copied, producing multiple mutable pointers to the same object, which is usually necessary to do when working with C++ libraries. Pointer types implement operator traits and delegate them to the corresponding C++ operators. This means that you can use ptr1 + ptr2 to access the object's operator+. Pointer types implement Deref and DerefMut, allowing to call the object's methods directly. In addition, methods of the object's first base class are also directly available thanks to nested Deref implementations. If the object provides an iterator interface through begin() and end() functions, pointer types will implement IntoIterator, so you can iterate on them directly. Casts The following traits provide access to casting between C++ class types: StaticUpcastsafely converts from a derived class to a base class (backed by C++'s static_cast). DynamicCastperforms a checked conversion from a base class to a derived class (backed by C++'s dynamic_cast). StaticDowncastconverts from a base class to a derived class without a runtime check (also backed by C++'s static_cast). Instead of using these traits directly, it's more convenient to use static_upcast, static_downcast, dynamic_cast helpers on pointer types. The CastFrom and CastInto traits represent some of the implicit coercions available in C++. For example, if a method accepts impl CastInto<Ptr<SomeClass>>, you can pass a Ptr<SomeClass>, MutPtr<SomeClass>, &CppBox<SomeClass>, or even Ptr<DerivedClass> (where DerivedClass inherits SomeClass). You can also pass a null pointer object ( NullPtr) if you don't have a value ( Ptr::null() is also an option but it can cause type inference issues).
https://docs.rs/cpp_core/0.5.0/cpp_core/
Add-on Settings Bitly Access Token If you want to shorten links using Bitly, you need to generate an access token in the Bitly service so the add-on can access the Bitly API and shorten links on your behalf (a short API sketch is included at the end of this section). Follow the steps in the tutorial below and, once you obtain a token, save it in the add-on. Go Premium This section lists buttons to purchase credits or a pass via PayPal. See more in pricing. Activation Key You will receive an activation key shortly after we receive payment via PayPal. Typically you will receive the key within one minute after payment was made. Check your SPAM folder as it might end up there. An activation key can be used only once. Please read the refund policy before you request a refund. Tutorial When you open the add-on for the first time, a tutorial guides you through your first steps and sets up a sample configuration for you. You can re-launch the tutorial at any time and see how the sample configuration works. - The add-on creates a sample sheet with column names and data and sets it as the source sheet - The add-on creates a sample template - you can open the template from the sheet configuration screen - The add-on configures mapping between tags in the sample template and columns in the source sheet - you can review / modify the mapping in the mapping editor - The add-on configures the output configuration - to insert a timestamp when a document is created - sets the output format to Google Docs - to insert the URL of the created Google document into a column - you can review / modify the configuration in the configuration editor - All you then need to do is hit the GENERATE button to see the results
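Here is the short API sketch mentioned in the Bitly Access Token section above. It is an illustrative Python example of the Bitly v4 shorten call the add-on performs on your behalf; the access token and long URL are placeholders.
# Sketch: shorten a link with the Bitly v4 API using the access token
# generated above. ACCESS_TOKEN and LONG_URL are placeholders.
import requests
ACCESS_TOKEN = "<your-bitly-access-token>"
LONG_URL = "https://example.com/some/very/long/link"
resp = requests.post(
    "https://api-ssl.bitly.com/v4/shorten",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"long_url": LONG_URL},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["link"])   # e.g. https://bit.ly/xxxxxx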
https://docs.anymerge.com/ui/add-on-settings
Brain Imaging Data Structure (BIDS) is a codified way of organizing neuroimaging and behavioral experimental data for the purpose of sharing. If you are new to the BIDS standard, we strongly recommend reviewing the Common Principles on the BIDS website or downloading a PDF of the BIDS specification. There are two options for getting BIDS-formatted data on Flywheel: upload data that is already in the BIDS format, or curate data on Flywheel to fit the specification. This article gives an overview of how to complete BIDS curation as smoothly as possible. Overview of BIDS curation on Flywheel To begin, you'll want to take a look at the BIDS Study Design Spreadsheet. This will help you plan out how to get your data into BIDS. Next, you will use 3 gears for BIDS curation: - DICOM MR Classifier: Interprets DICOM tags for a basic understanding of the type of scan - dcm2niix: Converts DICOM files into NIfTI files and set Flywheel metadata with DICOM tag values - curate-bids: Uses a template to set BIDS metadata on Flywheel The first two gears can be run using gear rules so that they are executed automatically whenever new scans appear. The third, curate-bids, can be run after all scans have been turned into NIfTI files in dcm2niix. However, before you can successfully run the curate-bids gear, your data must be labeled in the way that the curate-bids gear expects. How you update the labels depends on if you are obtaining new data uploaded directly from the scanner or if you are working with Retrospective data. New data uploaded directly from the scanner For new data that will be acquired, the best way to begin is by setting the proper names at the scanner console. For this, we highly recommend the ReproIn naming convention. When the ReproIn naming convention is used, the BIDS Curation gear can be used with the default ReproIn template, and BIDS curation will be almost automatic. Learn more about the ReproIn naming convention or take a look at a scanner walkthrough . Retrospective data For retrospective data that has already been acquired, use the BIDS Pre-Curation gear to rename Flywheel acquisition, session, and subject code/labels so that they match the ReproIn convention. Then you can run the BIDS Curation gear with the ReproIn template (learn more about the ReproIn naming convention.) There is often hesitation at relabeling data because it might obscure provenance, but rest assured that relabeling is safe on Flywheel because the raw information is not changed. The original acquisition name can be retrieved from the SeriesDescription DICOM tag. The session label can be retrieved from the StudyDescription and the subject label/code is typically the Patient ID or Additional Info fields (check with the Flywheel Site Admin at your institution for the specific fields configured for your site.) The BIDS Pre-Curation gear also allows you to mark specific files to be ignored by adding “_ignore-BIDS” to the end of an acquisition label. Flywheel will then skip all files in that acquisition when exporting data in BIDS format. If you only want to change a small number of labels, this can be done manually in Flywheel, but if a large number of changes need to be made, the BIDS Pre-Curation gear should be used. Once you have relabeled your data, you will run the curate-bids gear to start determining where your files fit into the BIDS specification. 
The curate-bids gear (also known as the BIDS Curation gear) walks the Flywheel hierarchy of a project (subjects, sessions, acquisitions) and matches files with specific parts of the BIDS specification using rules and definitions in a project curation template. The definitions in this template establish the structure of the BIDS path and file names, while the rules are used to recognize files and then extract parts of names to determine each file's complete BIDS path and name. You can learn more about Flywheel's project curation template in our article. After running the BIDS Curation gear, the next step is to check the results. The latest version of the gear produces spreadsheets that summarize the mapping between the original names to the final BIDS paths and filenames. If you are not using the latest gear version, you can run a script to generate these reports. The gear (and script) can take a list of pairs of regular expressions for fine-tuning the mapping between field maps and the files that they will modify. Initial processing using the project curation template in the gear produces a list of all possible files for each field map to modify. The regular expressions provide a more specific correspondence between the field map (matching the first regex) and the scans to modify (matching the second regex). The spreadsheets produced are: - <group>_<project>_niftis.csv: A list of the original information (acquisition name, file name, series number, etc.) and the final BIDS path/filename, along with an indication if the path/filename is duplicated, which will result in an error for the BIDS Validator. BIDS Apps run the BIDS Validator and will error-out if there is a problem so the gear won’t run. This spreadsheet should be checked to see if all of the files have been properly recognized or ignored. - <group>_<project_acquisitions.csv: is useful when there are multiple subjects because it shows the “usual count” of the acquisitions for all subjects and then lists the subjects that have these expected acquisitions and the ones that do not. It lists warnings for unexpected numbers of specific acquisitions and errors for subjects that do not have the expected number of the usual acquisitions. - <group>_<project>_acquisitions_details_1.csv (_2.csv): lists all of the unique acquisition labels along with the number of times they have been seen. It also provides additional details that should help understand. BIDS Curation is usually an iterative process of editing names, running the BIDS Curation gear, and then checking the above spreadsheets. However, it can also involve editing the project curation template. Learn more about editing the project curation template below as an alternative method to renaming the files. The curate-bids gear does not actually save data in BIDS format. Instead, it sets the Flywheel metadata that will be used when BIDS-formatted data is exported using the CLI and when running a BIDS App gear such as BIDS-fMRIPrep. Alternative method: Edit the project curation template If you do not want to rename files, it is possible to edit a BIDS project curation template so that it can recognize and process arbitrary file names. Typically, this is not recommended because the template is a large complicated json file and exactly what is happening during the processing accomplished by the “where” and “initialization” sections is best understood by stepping through it using a debugger. See an example of these sections in the BIDS template file article. 
Crafting particular regular expressions in the template to recognize and extract the proper strings from arbitrary file names makes the template processing brittle whereas changing the names of acquisitions, subjects, and sessions to use the ReproIn convention that is expected by the existing template not only allows the template to work, but also makes the purpose of each acquisition clear on the platform. Getting data into BIDS format makes processing and sharing data easier. Using the ReproIn naming convention starting at the scanner console or else by renaming acquisitions makes BIDS curation easier because it makes the link between each acquisition and where it fits into the BIDS Specification explicit. Next Steps Take a look at our webinar on how the CMU-Pitt BRIDGE Center standardizes on BIDS using Flywheel. You will also learn more about how Flywheel's centralized platform streamlines BIDS in an end-to-end solution for data management and collaboration. Once you have an understanding of BIDS in Flywheel, start planning your curation by using the BIDS Study Design Spreadsheet or take a look at our BIDS curation tutorial to start curating your own data.
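As a small aid for the iterative curation check described above, the following sketch scans the <group>_<project>_niftis.csv spreadsheet for duplicate BIDS paths before you hand the data to a BIDS App. It only assumes the pandas package; the file name and column names used here are assumptions and should be adjusted to whatever your gear version actually writes.
# Sketch: scan the curation gear's niftis.csv for duplicate BIDS paths.
# The file name and column names below are assumptions - adjust them to
# match the spreadsheet your gear version actually produces.
import pandas as pd
CSV_PATH = "mygroup_myproject_niftis.csv"   # placeholder file name
BIDS_COLUMN = "BIDS path"                   # hypothetical column name
SOURCE_COLUMN = "Acquisition label"         # hypothetical column name
df = pd.read_csv(CSV_PATH)
dupes = df[df.duplicated(subset=[BIDS_COLUMN], keep=False) & df[BIDS_COLUMN].notna()]
if dupes.empty:
    print("No duplicate BIDS paths - the BIDS Validator should not complain.")
else:
    print("Duplicate BIDS paths found; these will fail validation:")
    for bids_path, group in dupes.groupby(BIDS_COLUMN):
        sources = ", ".join(group[SOURCE_COLUMN].astype(str))
        print(f"  {bids_path}  <-  {sources}")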
https://docs.flywheel.io/hc/en-us/articles/360058755054-Flywheel-BIDS-how-to-start
On Bandwidth and Latency High bandwidth = good. Low latency = good. There is, however, no direct relationship between the two – bandwidth is measured in (multiples of) “bits per second”, whilst latency is measured in milliseconds (the time between a packet being sent and it arriving at the destination). Analogies with IT-related technologies invariably fail, and are most commonly (in my experience) made with cars, so who am I to fly in the face of fashion – a simplistic way to understand the difference between bandwidth and latency could be to imagine a motorway (or highway, if in the US): The number of lanes in the motorway are analogous with the bandwidth, and its length is analogous with the latency. A route between 2 locations may involve several hops (switching between motorways) – the “number of lanes used end-to-end” is the lowest common denominator all of them, and the latency is the sum of all the lengths. When it comes to TCP communication we rely on acknowledgements (ACKs) so that the sender is informed which packets have been successfully received and it knows when it is able to send the next packet in the sequence. In an environment where every TCP packet must be ACK’d, high latency leads to poor performance as the server is waiting for a lot of the time, ready to send the next block of data. Windows has the concept of “TCP ACK frequency” which determines how many packets are acknowledged with 1 ACK – the default is 2 (every other packet). If this is reduced to 1 then transferring a large amount of data across a high latency network will take longer. If this is increased then on low latency networks (LANs) the rate of transfer will much higher. For the most part, “streaming” data is sent via UDP which does not have the concept of acknowledgements (instead, the application may transmit UDP packets in the opposite direction, or utilize a separate TCP stream to act as a “keepalive”) – this allows high bandwidth, high latency networks to stream media without delays. A lot of online games also use UDP for communication, so the latency is the bottom line and not much you can do about it – however there are some (such as World of Warcraft) which are TCP-based and use very little bandwidth but rely heavily on (low) latency. Such games may benefit from reducing the TCP ACK frequency to 1 for an improved turnaround time for 2-way communication, but bear in mind that this is a global setting per network interface, so local networking is affected at the same time and file transfers may suffer as a result. The control registry value to define TCP ACK Frequency for network interface with ID “{GUID}”: Key: HKLM\System\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{GUID} Value name: TcpAckFrequency Value type: REG_DWORD See the following TechNet article for detailed information on this key, and many more:
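For completeness, here is a hedged sketch of setting that registry value from Python on Windows using only the standard-library winreg module. The interface GUID is a placeholder you would replace with your adapter's ID, and since this is a global, per-interface setting, treat it as an experiment rather than a recommendation.
# Sketch: set TcpAckFrequency for one network interface (Windows only).
# Run from an elevated prompt; a reboot or interface restart is typically
# needed before the change takes effect.
# INTERFACE_GUID is a placeholder - find yours under the Interfaces key.
import winreg
INTERFACE_GUID = "{00000000-0000-0000-0000-000000000000}"  # placeholder
KEY_PATH = (r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces"
            + "\\" + INTERFACE_GUID)
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    # 1 = ACK every packet (lower turnaround for chatty TCP traffic),
    # 2 = Windows default (ACK every other packet).
    winreg.SetValueEx(key, "TcpAckFrequency", 0, winreg.REG_DWORD, 1)
print("TcpAckFrequency set to 1 for interface", INTERFACE_GUID)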
https://docs.microsoft.com/en-us/archive/blogs/mrsnrub/on-bandwidth-and-latency
Last updated September 17, 2021. This is a supplement to our security policy and serves as a guide to New Relic's description of its Services, functionalities, and features.

Tip

We may update the URLs in this document without notice.

Security Program

New Relic follows "privacy by design" principles.

Security Domains

New Relic's policies and procedures cover industry-recognized security domains such as Endpoint Protection; Portable Media Security; Mobile Device Security; Wireless Security; Configuration Management; Vulnerability Management; Network Protection; Transmission Protection; Password Management; Access Control, Audit Logging & Monitoring; Education, Training, and Awareness; Third Party Assurance; Incident Management; Business Continuity and Disaster Recovery; Risk Management; Data Protection & Privacy; and Service Management Systems.

Security Certifications

New Relic audits its Services against industry standards.

Data Control, Facilities, and Encryption

- New Relic's customers can send data to New Relic's APIs by (1) using New Relic's software, (2) using vendor-neutral software that is managed and maintained by a third party, such as the OpenTelemetry instrumentation provided by opentelemetry.io, or (3) from third-party systems that customers manage and/or control.
- New Relic's customers can use New Relic's Services, such as NerdGraph, to filter out and drop data.
- New Relic's customers can adjust their data retention periods as appropriate for their needs.
- New Relic Logs obfuscates numbers that match known patterns, such as bank card and social security numbers.
- New Relic honors requests to delete personal data in accordance with applicable privacy laws.
- Customers may use New Relic's APIs, such as NerdGraph, to query data, and New Relic Services to export the data to other cloud providers.
- Customers can configure their log forwarder before sending infrastructure logs to New Relic.
- For New Relic customers in New Relic US, FedRAMP and HIPAA-enabled environments, Customer Data is replicated to the off-site backup system via Amazon Simple Storage Service (S3).
- The Services that operate on Amazon Web Services ("AWS") are protected by the security and environmental controls of AWS. Data encryption at rest utilizes FIPS 140-2 compliant encryption methodology.
- The Services that operate on Google Cloud Platform ("GCP") are protected by the security and environmental controls of GCP.
- IBM
- Deft
- Zayo
- QTS

Law Enforcement Request Report

New Relic has not to date received any request for customer data from a law enforcement or other government agency (including under any national security process), and has not made any corresponding disclosures.
https://docs.newrelic.com/docs/licenses/license-information/referenced-policies/security-guide/
Product Index Choose the best face for each situation and express what you want. “Funny Girl” contains 40 detailed expressions for The Girl 8 and Genesis 8 Female.
http://docs.daz3d.com/doku.php/public/read_me/index/53213/start
Translucent BSDF

The Translucent BSDF is used to add Lambertian diffuse transmission.

Inputs
- Color: Color of the surface, or physically speaking, the probability that light is transmitted for each wavelength.
- Normal: Normal used for shading; if nothing is connected, the default shading normal is used.

Properties
This node has no properties.

Outputs
- BSDF: Standard shader output.
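As an illustration, here is a minimal Python sketch (assuming it runs inside Blender, where the bpy module is available) that creates a node-based material, adds a Translucent BSDF node, and wires its BSDF output into the material output; the material name and color value are arbitrary examples.

import bpy

# Create a new material and switch it to node-based shading.
mat = bpy.data.materials.new(name="TranslucentExample")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# Add a Translucent BSDF node and set its Color input (RGBA).
translucent = nodes.new(type="ShaderNodeBsdfTranslucent")
translucent.inputs["Color"].default_value = (0.8, 0.6, 0.4, 1.0)

# Connect the BSDF output to the default Material Output node.
output = nodes["Material Output"]
links.new(translucent.outputs["BSDF"], output.inputs["Surface"])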
https://docs.blender.org/manual/it/dev/render/shader_nodes/shader/translucent.html
Catalog¶ Use the Descartes Labs Catalog to discover existing raster products, search the images contained in them and manage your own products and images. Note The Catalog Python object-oriented client provides the functionality previously covered by the more low-level, now deprecated Metadata and Catalog Python clients. There are a few compatibility warning you can find here. Note The Catalog Python client is mainly for discovering data and for managing data. For data analysis and rastering use Scenes. Concepts¶ The Descartes Labs Catalog is a repository for georeferenced images. Commonly these images are either acquired by Earth observation platforms like a satellite or they are derived from other georeferenced images. The catalog is modeled on the following core concepts, each of which is represented by its own class in the API. Images¶ An image (represented by class Image in the API) contains data for a shape on earth, as specified by its georeferencing. An image references one or more files (commonly TIFF or JPEG files) that contain the binary data conforming to the band declaration of its product. Bands¶ A band (represented by class Band) is a 2-dimensional slice of raster data in an image. A product must have at least one band and all images in the product must conform to the declared band structure. For example, an optical sensor will commonly have bands that correspond to the red, blue and green visible light spectrum, which you could raster together to create an RGB image. Products¶ A product (represented by class Product) is a collection of images that share the same band structure. Images in a product can generally be used jointly in a data analysis, as they are expected to have been uniformly processed with respect to data correction, georegistration and so on. For example, you can composite multiple images from a product to run an algorithm over a large geographic region. Some products correspond directly to image datasets provided by a platform. See for example the Landsat 8 Collection 1 product. This product contains all images taken by the Landsat 8 satellite, is updated continuously as it takes more images, and is processed to NASA’s Collection 1 specification. A product may also represent data derived from multiple other products or data sources - some may not even derive from Earth observation data. A raster product can contain any sort of image data as long as it’s georeferenced. Searching the catalog¶ All objects support the same search interface. Let’s look at two of the most commonly searched for types of objects: products and images. Finding products¶ Filtering and sorting¶ Product.search() is the entry point for searching products. It returns a query builder that you can use to refine your search and can iterate over to retrieve search results. Count all products with some data before 2016 using filter(): >>> from descarteslabs.catalog import Product, properties as p >>> search = Product.search().filter(p.start_datetime < "2016-01-01") >>> search.count() 70 You can apply multiple filters. To restrict this search to products with data after 2000: >>> search = search.filter(p.end_datetime > "2000-01-01") >>> search.count() 37 Of these, get the 3 products with the oldest data, using sort() and limit(). The search is not executed until you start retrieving results by iterating over it: >>> oldest_search = search.sort("start_datetime").limit(3) >>> for result in oldest_search: ... 
print(result.id) landsat:LT05:PRE:TOAR dmsp:nightlights daily-weather:gsod-interpolated:v0 All attributes are documented in the Product API reference, which also spells out which ones can be used to filter or sort. Text search¶ Add text search to the mix using find_text(). This finds all products with “landsat” in the name or description: >>> landsat_search = search.find_text("landsat") >>> for product in landsat_search: ... print(product) Product: Global Forest Change 2000-2018 id: 42b24cbb9a71ed9beb967dbad04ea61d7331d5af:global_forest_change_v0 Product: Global Forest Change v1.7 (2000-2019) id: descarteslabs:global_forest_change_v1.7 created: Wed Oct 21 04:26:20 2020 Product: Global Forest Change v1.7 (2000-2019) id: hansen:global_forest_change_v1.7 created: Mon Jun 14 22:57:04 2021 Product: Landsat 8 Pre-collection LaSRC Surface Reflectance id: landsat:LC08:PRE:LaSRC Product: Landsat 5 Pre-Collection id: landsat:LT05:PRE:TOAR Product: National Land Cover Dataset (NLCD) Impervious Surface id: nlcd:impervious_surface Product: National Land Cover Dataset (NLCD) Land Cover id: nlcd:land_cover Product: National Land Cover Dataset (NLCD) Land Cover Change Index id: nlcd:land_cover_change Product: National Land Cover Dataset (NLCD) Tree Canopy id: nlcd:tree_canopy Product: Cropland Data Layer id: usda:cdl Product: Cropland Data Layer id: usda:cdl:v1 Product: [DEPRECATED] GFSAD30 Cropland Global id: usgs:gfsad30:global Product: GFSAD30 Cropland Global id: usgs:gfsad30:global:v1 created: Thu Sep 3 03:34:48 2020 Lookup by id and object relationships¶ If you know a product’s id, look it up directly with Product.get(): >>> landsat8_collection1 = Product.get("landsat:LC08:01:RT:TOAR") >>> landsat8_collection1 Product: Landsat 8 Collection 1 Real-Time id: landsat:LC08:01:RT:TOAR Wherever there are relationships between objects expect methods such as Product.bands() to find related objects. This shows the first four bands of the Landsat 8 product we looked up: >>> for band in landsat8_collection1.bands().limit(4): ... print(band) SpectralBand: coastal-aerosol id: landsat:LC08:01:RT:TOAR:coastal-aerosol product: landsat:LC08:01:RT:TOAR SpectralBand: blue id: landsat:LC08:01:RT:TOAR:blue product: landsat:LC08:01:RT:TOAR SpectralBand: green id: landsat:LC08:01:RT:TOAR:green product: landsat:LC08:01:RT:TOAR SpectralBand: red id: landsat:LC08:01:RT:TOAR:red product: landsat:LC08:01:RT:TOAR Product.bands() returns a search object that can be further refined. This shows all class bands of this Landsat 8 product, sorted by name: >>> from descarteslabs.catalog import BandType >>> for band in landsat8_collection1.bands().filter(p.type == BandType.CLASS).sort("name"): ... print(band) ClassBand: qa_cirrus id: landsat:LC08:01:RT:TOAR:qa_cirrus product: landsat:LC08:01:RT:TOAR ClassBand: qa_cloud id: landsat:LC08:01:RT:TOAR:qa_cloud product: landsat:LC08:01:RT:TOAR ClassBand: qa_cloud_shadow id: landsat:LC08:01:RT:TOAR:qa_cloud_shadow product: landsat:LC08:01:RT:TOAR ClassBand: qa_saturated id: landsat:LC08:01:RT:TOAR:qa_saturated product: landsat:LC08:01:RT:TOAR ClassBand: qa_snow id: landsat:LC08:01:RT:TOAR:qa_snow product: landsat:LC08:01:RT:TOAR ClassBand: valid-cloudfree id: landsat:LC08:01:RT:TOAR:valid-cloudfree product: landsat:LC08:01:RT:TOAR Finding images¶ Image filters¶ Search images by the most common attributes - by product, intersecting with a geometry and by a date range: >>> from descarteslabs.catalog import Image, properties as p >>> geometry = { ... "type": "Polygon", ... "coordinates": [[ ... 
[2.915496826171875, 42.044193618165224],
...         [2.838592529296875, 41.92475971933975],
...         [3.043212890625, 41.929868314485795],
...         [2.915496826171875, 42.044193618165224]
...     ]]
... }
>>>
>>> search = Product.get("landsat:LC08:01:RT:TOAR").images()
>>> search = search.intersects(geometry)
>>> search = search.filter((p.acquired > "2017-01-01") & (p.acquired < "2018-01-01"))
>>> search.count()
14

There are other attributes useful to filter by, documented in the API reference for Image. For example, exclude images with too much cloud cover:

>>> search = search.filter(p.cloud_fraction < 0.2)
>>> search.count()
7

Filtering by cloud_fraction is only reasonable when the product sets this attribute on images. Images that don't set the attribute are excluded from the filter.

The created timestamp is added to all objects in the catalog when they are created and is immutable. Restrict the search to results created before some time in the past, to make sure that the image results are stable:

>>> from datetime import datetime
>>> search = search.filter(p.created < datetime(2019, 1, 1))
>>> search.count()
7

Note that for all timestamps we can use datetime instances or strings that can reasonably be parsed as a timestamp. If a timestamp has no explicit timezone, it's assumed to be in UTC.

Image summaries¶

Any query for images supports a summary via the summary() method, returning a SummaryResult with aggregate statistics beyond just the number of results:

>>> from descarteslabs.catalog import Image, properties as p
>>> search = Image.search().filter(p.product_id == "landsat:LC08:01:T1:TOAR")
>>> search.summary()
Summary for 743648 images:
 - Total bytes: 86,523,541,392,489
 - Products: landsat:LC08:01:T1:TOAR

These summaries can also be bucketed by time intervals with summary_interval() to create a time series:

>>> search.summary_interval(interval="month", start_datetime="2017-01-01", end_datetime="2017-06-01")
[
Summary for 9872 images:
 - Total bytes: 1,230,379,744,242
 - Interval start: 2017-01-01 00:00:00+00:00,
Summary for 10185 images:
 - Total bytes: 1,288,400,404,886
 - Interval start: 2017-02-01 00:00:00+00:00,
Summary for 12426 images:
 - Total bytes: 1,556,107,514,684
 - Interval start: 2017-03-01 00:00:00+00:00,
Summary for 12492 images:
 - Total bytes: 1,476,030,969,986
 - Interval start: 2017-04-01 00:00:00+00:00,
Summary for 13768 images:
 - Total bytes: 1,571,780,442,608
 - Interval start: 2017-05-01 00:00:00+00:00]

Managing products¶

Creating and updating a product¶

Before uploading images to the catalog, you need to create a product and declare its bands. The only required attributes are a unique id, passed in the constructor, and a name:

>>> from descarteslabs.catalog import Product
>>> product = Product(id="guide-example-product", name="Example product")
>>> product.save()
>>> product.id
u'descarteslabs:guide-example-product'
>>> product.created
datetime.datetime(2019, 8, 19, 18, 53, 26, 250005, tzinfo=<UTC>)

save() saves the product to the catalog in the cloud. Note that you get to choose an id for your product, but it must be unique within your organization (you get an exception if it's not). This code example assumes the user is in the "descarteslabs" organization. The id is prefixed with the organization id on save to enforce global uniqueness and uniqueness within an organization. If you are not part of an organization, the prefix will be your unique user id.

Every object has a read-only created attribute with the timestamp from when it was first saved.
There are a few more attributes that you can set (see the Product API reference). You can update the product to define the timespan that it covers. This is as simple as assigning attributes and then saving again:

>>> product.start_datetime = "2012-01-01"
>>> product.end_datetime = "2016-01-01"  # example end date
>>> product.save()
>>> product.start_datetime
datetime.datetime(2012, 1, 1, 0, 0, tzinfo=<UTC>)
>>> product.modified
datetime.datetime(2019, 8, 19, 18, 53, 27, 114274, tzinfo=<UTC>)

A read-only modified attribute exists on all objects and is updated on every save. Note that all timestamp attributes are represented as datetime instances in UTC. You may assign strings to timestamp attributes if they can be reasonably parsed as timestamps. Once the object is saved the attributes will appear as parsed datetime instances. If a timestamp has no explicit timezone, it's assumed to be in UTC.

Get existing product or create new one¶

If you rerun the same code many times and you only want to create the product once, you can use the Product.get_or_create() method. This method will do a lookup, and if not found, will create a new product instance (you can do the same for bands or images):

>>> product = Product.get_or_create("guide-example-product")
>>> product.name = "Example product"
>>> product.save()

This is equivalent to:

>>> product = Product.get("guide-example-product")
>>> if product is None:
...     product = Product(id="guide-example-product")
>>> product.name = "Example product"
>>> product.save()

If the product doesn't exist yet, a new instance will be created, the name will be assigned, and the product will be created in the catalog by the save. If the product already exists, it will be retrieved. If the assigned name differs, the product will be updated by the save. If everything is identical, the save becomes a noop. If you like, you can add additional attributes as parameters:

>>> product = Product.get_or_create("guide-example-product", name="Example product")
>>> product.save()

Creating bands¶

Before adding any images to a product you should create bands that declare the structure of the data shared among all images in a product.

>>> from descarteslabs.catalog import SpectralBand, DataType, Resolution, ResolutionUnit
>>> band = SpectralBand(name="blue", product=product)
>>> band.data_type = DataType.UINT16
>>> band.data_range = (0, 10000)
>>> band.display_range = (0, 4000)
>>> band.resolution = Resolution(unit=ResolutionUnit.METERS, value=60)
>>> band.band_index = 0
>>> band.save()
>>> band.id
u'descarteslabs:guide-example-product:blue'

A band is uniquely identified by its name and product. The full id of the band is composed of the product id and the name. The band defines where its data is found in the files attached to images in the product: in this example, band_index = 0 indicates that blue is the first band in the image file, and that first band is expected to be represented by unsigned 16-bit integers (DataType.UINT16).

This band is specifically a SpectralBand, with pixel values representing measurements somewhere in the visible/NIR/SWIR electro-optical wavelength spectrum, so you can also set additional attributes to locate it on the spectrum:

>>> # These values are in nanometers (nm)
>>> band.wavelength_nm_min = 452
>>> band.wavelength_nm_max = 512
>>> band.save()

Bands are created and updated in the same way as products and all other Catalog objects.
Band types¶ It’s common for many products to have an alpha band, which masks pixels in the image that don’t have valid data: >>> from descarteslabs.catalog import MaskBand >>> alpha = MaskBand(name="alpha", product=product) >>> alpha.is_alpha = True >>> alpha.data_type = DataType.UINT16 >>> alpha.resolution = band.resolution >>> alpha.band_index = 1 >>> alpha.save() Here the “alpha” band is created as a MaskBand which is by definition a binary band with a data range from 0 to 1, so there is no need to set the data_range and display_range attribute. Setting is_alpha to True enables special behavior for this band during rastering. If this band appears as the last band in a raster operation (such as SceneCollection.mosaic() or SceneCollection.stack() in the scenes client) pixels with a value of 0 in this band will be treated as transparent. There are five band types which may have some attributes specific to them. The type of a band does not necessarily affect how it is rastered, it mainly conveys useful information about the data it contains. All bands have the following attributes in common: id, name, product_id, description, type, sort_order, data_type, no_data, data_range, display_range, resolution, band_index, file_index, jpx_layer_index. SpectralBand: A band that lies somewhere on the visible/NIR/SWIR electro-optical wavelength spectrum. Specific attributes: physical_range, physical_range_unit, wavelength_nm_center, wavelength_nm_min, wavelength_nm_max, wavelength_nm_fwhm MicrowaveBand: A band that lies in the microwave spectrum, often from SAR or passive radar sensors. Specific attributes: frequency, bandwidth, physical_range, physical_range_unit MaskBand: A binary band where by convention a 0 means masked and 1 means non-masked. The data_rangeand display_rangefor masks is implicitly [0, 1]. Specific attributes: is_alpha ClassBand: A band that maps a finite set of values that may not be continuous to classification categories (e.g. a land use classification). A visualization with straight pixel values is typically not useful, so commonly a colormapis used. Specific attributes: colormap, colormap_name, class_labels GenericBand: A generic type for bands that are not represented by the other band types, e.g., mapping physical values like temperature or angles. Specific attributes: colormap, colormap_name, physical_range, physical_range_unit Note that when retrieving bands using a band-specific class, for example SpectralBand.get(), SpectralBand.get_many() or SpectralBand.search(), you will only retrieve that type of band; any other types will be silently dropped. Using Band.get(), Band.get_many() or Band.search() will return all of the types. Access control¶ By default only the creator of a product can read and modify it as well as read and modify the images in it. To share access to a product with others you can modify its access control lists (ACLs): >>> product.readers = ["org:descarteslabs"] >>> product.writers = ["email:[email protected]", "email:[email protected]"] >>> product.save() For some more details on access control lists see the Sharing Resources guide This gives read access to the whole “descarteslabs” organization. All users in that organization can now find the product. This also gives write access to two specific users identified by email. These two users can now update the product and add new images to it. 
New bands and images created in a product inherit the product’s ACLs by default, but the ACLs for existing images are not automatically updated when they change on the product. You can change the ACLs for all bands and images associated with a given product using update_related_objects_permissions(). This method kicks off an asynchronous task that performs the updates. If the product has more than 10,000 associated images, this might take several minutes to finish running. You get the current status of the job using get_update_permissions_status() or wait for the task to complete using wait_for_completion(). This sets the ACLs for all bands and images in product to those of the product and waits for the update to complete: >>> status = product.update_related_objects_permissions(readers=product.readers, writers=product.writers) >>> if status: ... status.wait_for_completion() You can also simply copy all ACLs from product to all related bands and images by using inherit=True: >>> status = product.update_related_objects_permissions(inherit=True) >>> if status: ... status.wait_for_completion() Transfer Ownership¶ Transfering ownership of a product to a new user requires cooperation from both the previous owner and the new owner and is a two-step effort. The first step is for the previous owner to add the new owner to the product: >>> product.owners.append("user:...") >>> product.save() Just a reminder that you cannot use the have to request the user id from the new owner and use that instead. (You can find your user id in the profile drop-down on iam.descarteslabs.com). The second step is for the new owner to remove the previous owner and to update all related bands and images: >>> product.owners.remove("user:...") >>> product.save() >>> status = product.update_related_objects_permissions(owners=product.owners) >>> if status: ... status.wait_for_completion() Or if you prefer to copy all ACL information from product, use inherit=True as the sole argument in the call to update_related_objects_permissions(). Derived bands¶ A derived band is the result of a pixel function applied to one or more existing bands of a product. Derived bands become available on a product automatically when canonically named bands it relies on are present in the product. For example, the derived:ndvi band provides the normalized difference vegetation index (NDVI) if a product has bands named red and nir: >>> from descarteslabs.catalog import DerivedBand >>> >>> ndvi = DerivedBand.get("derived:ndvi") >>> ndvi.description 'Normalized Difference Vegetation Index' >>> ndvi.bands ['nir', 'red'] The id and name of a derived band always has a derived: prefix to distinguish them clearly from bands declared in a product. The catalog provides a standard set of derived bands - you can’t create your own. The bands attribute defines the band names that must be present in a product for this derived band. Find all derived bands available for a product with Product.derived_bands(): >>> landsat8_collection1 = Product.get("landsat:LC08:01:RT:TOAR") >>> for band in landsat8_collection1.derived_bands(): ... 
print(band)
DerivedBand: derived:bai
  id: derived:bai
DerivedBand: derived:evi
  id: derived:evi
DerivedBand: derived:ndvi
  id: derived:ndvi
DerivedBand: derived:ndwi
  id: derived:ndwi
DerivedBand: derived:ndwi1
  id: derived:ndwi1
DerivedBand: derived:ndwi2
  id: derived:ndwi2
DerivedBand: derived:rsqrt
  id: derived:rsqrt
DerivedBand: derived:visual_cloud_mask
  id: derived:visual_cloud_mask

Deleting bands and products¶

All objects can be deleted using delete(). For example, delete the previously created alpha band:

>>> alpha.delete()
True

A product can only be deleted if it doesn't have any bands or images. Because the product we created still has one band this fails:

>>> product.delete()
Traceback (most recent call last):
  File "< chunk 24 named None >", line 1, in <module>
  File "descarteslabs/catalog/catalog_base.py", line 450, in delete
    r = self._client.session.delete(self._url + "/" + self.id)
  File "requests/sessions.py", line 615, in delete
    return self.request('DELETE', url, **kwargs)
  File "descarteslabs/client/services/service/service.py", line 74, in request
    raise ConflictError(resp.text)
ConflictError: {"errors":[{"detail":"One or more related objects exist","status":"409","title":"Related objects exist"}],"jsonapi":{"version":"1.0"}}

There is a convenience method to delete all bands and images in a product. Be careful, as this may delete a lot of data and can't be undone!

>>> status = product.delete_related_objects()

This kicks off a job that deletes bands and images in the background. You can wait for this to complete and then delete the product:

>>> if status:
...     status.wait_for_completion()
>>> product.delete()

Finding Products by id¶

You may have noticed that when creating products, the id you provide isn't the id that is assigned to the object.

>>> product = Product(id="guide-example-product", name="Example product")
>>> product.save()
>>> product.id
"descarteslabs:guide-example-product"

The id has a prefix added to ensure uniqueness without requiring you to come up with a globally unique name. The downside of this is you need to remember that prefix when looking up your products later:

# this will return False because the id has a prefix!
>>> Product.exists("guide-example-product")
False

You can use namespace_id() to generate a fully namespaced product id if you know the unprefixed part.

# this builds the fully namespaced id from the unprefixed part
>>> product_id = Product.namespace_id("guide-example-product")
>>> product_id
"descarteslabs:guide-example-product"

Managing images¶

Apart from searching and discovering data available to you, the main use case of the catalog is to let you upload new images.

Uploading image files¶

If your data already exists on disk as an image file, usually a GeoTIFF or JPEG file, you can upload it directly. In the following examples we will upload data with a single band representing the blue light spectrum. First let's create a product and band corresponding to that:

>>> # Create a product
>>> from descarteslabs.catalog import Band, DataType, Product, Resolution, ResolutionUnit, SpectralBand
>>> product = Product(id="guide-example-product", name="Example product")
>>> product.save()
>>>
>>> # Create a band
>>> band = SpectralBand(name="blue", product=product)
>>> band.data_type = DataType.UINT16
>>> band.data_range = (0, 10000)
>>> band.display_range = (0, 4000)
>>> band.resolution = Resolution(unit=ResolutionUnit.METERS, value=60)
>>> band.band_index = 0
>>> band.save()

Now image.upload() uploads images to the new product and returns an ImageUpload.
Images are uploaded and processed asynchronously, so they are not available in the catalog immediately. With upload.wait_for_completion() we wait until the upload is completely finished.

>>> # Set any attributes that should be set on the uploaded images
>>> image = Image(product=product, name="scene1", acquired="2012-01-02")  # example acquired date
>>> image.cloud_fraction = 0.1
>>>
>>> # Do the upload
>>> image_path = "docs/guides/blue.tiff"
>>> upload = image.upload(image_path)
>>> upload.wait_for_completion()
>>> upload.status
u'success'

Attributes that can be derived from the image file, such as the georeferencing, will be assigned to the image during the upload process. But you can set any additional Image attributes such as acquired and cloud_fraction here. Note that this code makes a number of assumptions:

- A GeoTIFF exists locally on disk at the path docs/guides/blue.tiff relative to the current directory.
- The GeoTIFF's only band matches the blue band we created (for example, it has an unsigned 16-bit integer data type).
- The GeoTIFF is correctly georeferenced.

Image uploads use Descartes Labs Storage behind the scenes. You can find the uploaded file using the product id as a prefix in the products storage type:

>>> import descarteslabs as dl
>>> storage_client = dl.Storage()
>>> storage_client.list(prefix=product.id, storage_type="products")
['guide-example-product/ebe3cdeb709ac362b3d908e3802f8e0f']

Note that the actual name of the file will depend on several specifics, including the file contents, and hence will not necessarily be equal to that in the example.

Uploading ndarrays¶

Often, when creating a derived product - for example, running a classification model on existing data - you'll have a NumPy array (often referred to as an "ndarray") in memory instead of a file written to disk. In that case, you can use upload_ndarray(). This method behaves like upload(), with one key difference: you must provide georeferencing attributes for the ndarray. Georeferencing attributes are used to map between geospatial coordinates (such as latitude and longitude) and their corresponding pixel coordinates in the array. The required attributes are:

- An affine geotransform in GDAL format (the geotrans attribute)
- A coordinate reference system definition, preferably as an EPSG code (the cs_code attribute) or alternatively as a string in PROJ.4 or WKT format (the projection attribute)

If the ndarray you're uploading was rastered through the platform, this information is easy to get. When rastering you also receive a dictionary of metadata that includes both of these parameters. Using Scene.ndarray(), you have to set raster_info=True; with Raster.ndarray(), it's always returned.

The following example puts these pieces together. This extracts the blue band from a Landsat 8 scene at a lower resolution and uploads it to our product:

>>> from descarteslabs.catalog import OverviewResampler
>>>
>>> scene, geoctx = dl.scenes.Scene.from_id("landsat:LC08:01:T1:TOAR:meta_LC08_L1TP_163068_20181025_20181025_01_T1_v1")
>>> ndarray, raster_meta = scene.ndarray(
...     "blue",
...     geoctx.assign(resolution=60),
...     # return georeferencing info we need to re-upload
...     raster_info=True
... )
...
>>> image2 = Image(product=product, name="scene2", acquired="2012-01-02")  # example acquired date
>>> upload2 = image2.upload_ndarray(
...     ndarray,
...     raster_meta=raster_meta,
...     # create overviews for 120m and 240m resolution
...     overviews=[2, 4],
...     overview_resampler=OverviewResampler.AVERAGE,
... )
...
>>> upload2.wait_for_completion()
>>> upload2.status
u'success'

The rastered ndarray here is a three-dimensional array in the shape (band, x, y) - the first axis corresponds to the band number.
upload_ndarray() expects an array in that shape and will raise a warning if thinks the shape of the array is wrong. If the given array is two-dimensional it will assume you’re uploading a single band image. This also specifies typically useful values for overviews and overview_resampler. Overviews allow the platform to raster your image faster at non-native resolutions, at the cost of more storage and a longer initial upload processing time to calculate the overviews. The overviews argument specifies a list of up to 16 different resolution magnification factors to calulate overviews for. E.g. overviews=[2,4] calculates two overviews at 2x and 4x the native resolution. The overview_resampler argument specifies the algorithm to use when calculating overviews, see upload_ndarray() for which algorithms can be used. Updating images¶ The image created in the previous example is now available in the Catalog. We can look it up and update any of its attributes like any other catalog object: >>> image2 = Image.get(image2.id) >>> image2.cloud_fraction = 0.2 >>> image2.save() To update the underlying file data, you will need to upload a new file or ndarray. However you must utilize a new unsaved Image instance (using the original product id and image name) along with the overwrite=True parameter. The reason for this is the original image which is now saved in the catalog contains many computed values, which may be different from those which would be computed from the new upload. There is no way for the catalog to know if you intend to reuse the original values or compute new values for these properties. Uploading many images¶ If you are going to be uploading a large number of images - especially if you are doing so from inside a set of tasks running in parallel, it is better to avoid calling the wait_for_completion() method immediately after initiating each upload. You can instead use the ability to query uploads to determine later on what has succeeded, failed, or is still running at a later time. This has advantages both in within a loop, where you don’t have to waste time waiting for each one, and in the tasks framework, where waiting inside of many tasks wastes resources and slows down the entire job. As an example, if you have used either a loop or a task group to upload a bunch of images to a single product, you can use a pattern like the following to gather up the results. >>> for upload in product.image_uploads().filter(): ... if upload.status not in ( ... ImageUploadStatus.SUCCESS, ... ImageUploadStatus.FAILURE, ... ImageUploadStatus.CANCELED ... ): ... upload.wait_for_completion() ... # do whatever you want here ... Note that the above will return all uploads that you initiated on the product that are still being tracked; you may wish to do additional filtering on the created timestamp or other property to narrow the search. Troubleshooting uploads¶ The ImageUpload returned from upload() and upload_ndarray() provides status information on the image upload. In the following example we upload an invalid file (it’s empty), so we expect the upload to fail. 
Additional information about the failure should be available in the errors attribute, which will contain a list of error records: >>> import tempfile >>> invalid_image_path = tempfile.mkstemp()[1] >>> with open(invalid_image_path, "w"): pass >>> >>> image3 = Image(product=product, name="scene3", acquired="2012-03-01") >>> upload3 = image3.upload(invalid_image_path) >>> upload3.status u'pending' >>> >>> upload3.wait_for_completion() >>> upload3.status u'failure' >>> upload3.events [ImageUploadEvent: component: yaas component_id: yaas-release-cc95fb75-gwxvr event_datetime: 2020-01-09 14:12:35.2387465+00:00 event_type: queue id: 13 message: message-id=XXXXXXX severity: INFO ImageUploadEvent: component: yaas_worker component_id: metadata-ingest-v2-release-57fbf59cc-rvxwg event_datetime: 2020-01-09 14:12:35.756811+00:00 event_type: run id: 14 message: Running severity: INFO ImageUploadEvent: component: IngestV2Worker component_id: metadata-ingest-v2-release-57fbf59cc-rvxwg event_datetime: 2020-01-09 14:12:35.756811+00:00 event_type: complete id: 15 message: InvalidFileError: Cannot determine file information, missing the following properties for storage-XXXX-products/guide-example-product/uploads/5d6f4154-7e9e-43a9-aed3-7f19f66cebe1/1578579154865887: ['size'] severity: ERROR ] Uploads also contain a list of events pertaining to the upload. These can be useful for understanding or diagnosing problems. You can also list any past upload results with Product.image_uploads() and Image.image_uploads(). Note that upload results are currently not stored indefinitely, so you may not have access to the full history of uploads for a product or image. >>> for upload in product.image_uploads(): ... print(upload.id, upload.image_id, upload.status) ... 10635 descarteslabs:guide-example-product:scene1 success 10702 descarteslabs:guide-example-product:scene2 success 10767 descarteslabs:guide-example-product:scene3 failure Alternatively you can filter the list by properties such as the status. >>> for upload in product.image_uploads().filter(properties.status == ImageUploadStatus.FAILURE): ... print(upload.id, upload.image_id, upload.status) ... 10767 descarteslabs:guide-example-product:scene3 failure In the event that you experience an upload failure, and the error(s) don’t make it clear what you need to do to fix it, you should include the upload object id and any events and errors associated with it when you communicate with the Descartes Labs support team. Remote images¶ In addition to hosting rasterable images with file data attached, the catalog also supports images where the underlying raster data is not directly available. These remote images cannot be rastered but can be searched for using the catalog. This is useful for a couple of scenarios: A product of images that have not been consistently processed, optimized or georegistered in a way that prevents them from being rastered by the platform, for example raw imagery taken in unprocessed form from a sensor. Such a product can serve as the basis for higher-level products that have been processed consistently from the raw imagery. A product of images for which file data exist somewhere outside the platform but has not been uploaded or only partly uploaded into the platform. This gives users the chance to browse the full metadata of images and then make decisions about what file data should be uploaded on demand. To create a remote image set storage_state to "remote". 
The only required attributes for remote images are acquired and geometry to anchor them in time and space. No bands are required for a product holding only remote images.

>>> from descarteslabs.catalog import Product, Image, StorageState
>>> product = Product(id="guide-example-remote-product", name="Example remote product")  # example id and name
>>> product.save()
>>>
>>> image = Image(product=product, name="remote-scene", acquired="2012-01-02")  # example name and date
>>> image.storage_state = StorageState.REMOTE
>>> image.geometry = geometry
>>> image.save()

If some form of URL referencing the remote image is available, attach it through the files attribute using a File:

>>> from descarteslabs.catalog import File
>>> image.files = [File(href="")]
>>> image.save()
https://docs.descarteslabs.com/guides/catalog_v2.html
In this module you can manage your server parameters, such as its host name, time zone, etc. Server settings - Server name — enter the name (hostname) of the server. It is used by some applications. - Time zone — determines the local time of the server. It can be changed to display the time to the user in his locale. - Server time — your current server time. - Update software automatically — there are three types of updates of the control panel to the latest version: - Do not update — the upgrade process won't start automatically, and the software won't be updated. - Update ISPsystem products — automatically update ISPsystem software products. The system will update only the packages installed from the ISPsystem repository. The packages from third-party repositories won't be updated. - Update all the system packages — automatically update all packages of the operating system. "Update ISPsystem products" is selected by default. If you enable automatic updates, a daily cron job is started and, depending on the upgrade type, updates either ISPsystem packages or all packages of the operating system. By default the cron job is started at 3:10 am (server time). You can change the start time in the Cron job edit form. - Grant support access — select the checkbox to automatically provide ISPsystem support staff with SSH access to your server. For security reasons we neither ask nor save the passwords of our clients. Selecting/clearing this check box enables a client to allow/revoke ISPsystem support staff to access his server (normally, we need to access your servers for troubleshooting). If you select the check box, the system will download the ssh-key and grant root access to your server both via Shell and the control panel. Clearing the checkbox will delete the key. Our partners can activate keys for their own support staff, so in this case, you will see the name of our partner to whom you give access details. Control panel The setting in this tab will be applied for each control panel installed on your server - Password strength — don't use weak passwords for security reasons. Selecting this checkbox will forbid weak passwords. An attempt to set a weak password will fail and the corresponding error message will be displayed. Read more about the parameters controlling password generation and strength in Configuration file parameters (PWGenCharacters, PWGenLen, PWStrength parameters). - Social login — select the check box to allow users to sign on using existing login information from a social networking service. Log rotation - Keep the log, days — enter the period in days to keep records in the operation log. - Keep logs, days — enter the period in days to keep system logs.
https://docs.ispsystem.com/ispmanager6-business/configuration/system-configuration
Migrate existing Script Editor web part customizations to the SharePoint Framework SharePoint Framework is a model for building SharePoint customizations. If you have been building client-side SharePoint solutions by using the Script Editor web part, you might be wondering what the possible advantages are of migrating them to the SharePoint Framework. This article highlights the benefits of migrating existing client-side customizations to the SharePoint Framework and points out a number of considerations that you should take into account when planning the migration. Note The information in this article applies to customizations built using both the Content- and the Script Editor web part. Wherever there is a reference to the Script Editor web part, you should read Content Editor web part and Script Editor Web Part. Benefits of migrating existing client-side customizations to the SharePoint Framework SharePoint Framework has been built from the ground up as a SharePoint development model focusing on client-side development. While it primarily offers extensibility capabilities for modern team sites, customizations built on the SharePoint Framework also work with the classic SharePoint experience. Building your customizations by using the SharePoint Framework offers you a number of benefits over using any other SharePoint development model available to date. Build once, reuse across all devices The modern SharePoint user experience has been designed to natively support access to information stored in SharePoint on any device. Additionally, the modern experience supports the SharePoint mobile app. Solutions built using the SharePoint Framework seamlessly integrate with the modern SharePoint experience and can be used across the different devices and inside the SharePoint mobile app. Because SharePoint Framework solutions also work with the classic SharePoint experience, they can be used by organizations that haven't migrated to the modern experience yet. More robust and future-proof In the past, developers were customizing SharePoint by attaching to specific DOM elements in the user interface. Using this approach, they could change the standard user experience or provide additional functionality to end users. Such solutions were however error-prone. Because the SharePoint page DOM was never meant as an extensibility surface, whenever it changed, solutions relying on it would break. SharePoint Framework offers developers standardized API and extensibility points to build client-side solutions and provide end users with additional capabilities. Developers can use the SharePoint Framework to build solutions in a supported and future-proof way, and don't need to be concerned with how future changes to the SharePoint user interface could affect your solution. Easier access to information in SharePoint and Office 365 Microsoft is continuously extending capabilities of SharePoint and Office 365. As organizations make more use of both platforms, it's becoming increasingly important for developers to tap into the information and insights stored in Office 365 to build rich solutions. One of the goals of the SharePoint Framework is to make it easy for developers to connect to various SharePoint and Office 365 APIs. Enable users to configure solutions to their needs Client-side solutions built through script embedding often didn't offer end users an easy way to configure them to their needs. 
The only way to configure such solution, was by altering its code or through a custom user interface built specifically for that purpose. SharePoint Framework client-side web parts offer a standard way for configuring web parts using a property pane - familiar to users who worked with classic web parts in the past. Developers building client-side web parts can choose whether their web part should have a reactive property pane (default), where each change to a web part property is directly reflected in the web part body, or a non-reactive property pane, where changes to web part properties must be explicitly applied. Work on no-script sites To help organizations govern their customizations, Microsoft released the no-script capability in SharePoint Online. When the tenant or a particular site collection has the no-script flag enabled, customizations relying on script injection and embedding are disabled. Because SharePoint Framework client-side web parts are deployed through the app catalog with a prior approval, they're allowed to run in no-script sites. By default, modern team sites have the no-script setting enabled and it's not possible to embed scripts in these sites. This makes using the SharePoint Framework the only supported way to build client-side customizations in modern team sites. Similarities between SharePoint Framework solutions and Script Editor web part customizations While built on a new development model using a new toolchain, SharePoint Framework solutions are similar to the Script Editor web part customizations you have built in the past. Because they share the same concepts, it makes it easier for you to migrate them to the new SharePoint Framework. Run as a part of the page Similar to customizations embedded in Script Editor web parts, SharePoint Framework solutions are part of the page. This gives these solutions access to the page's DOM and allows them to communicate with other components on the same page. Also, it allows developers to more easily make their solutions responsive to adapt to the different form factors on which a SharePoint page could be displayed, including the SharePoint mobile app. Unlike SharePoint Add-ins, SharePoint Framework client-side web parts aren't isolated in an iframe. As a consequence, whatever resources the particular client-side web part has access to, other elements on the page can access as well. This is important to keep in mind when building solutions that rely on OAuth implicit flow with bearer access tokens or use cookies or browser storage for storing sensitive information. Because client-side web parts run as a part of the page, other elements on that page can access all these resources as well. Use any library to build your web part When building customizations using the Script Editor web part, you might have used libraries such as jQuery or Knockout to make your customization dynamic and better respond to user interaction. When building SharePoint Framework client-side web parts, you can use any client-side library to enrich your solution, the same way you would have done it in the past. An additional benefit that the SharePoint Framework offers you is isolation of your script. Even though the web part is executed as a part of the page, its script is packaged as a module allowing, for example, different web parts on the same page to use a different version of jQuery without colliding with each other. 
This is a powerful feature that allows you to focus on delivering business value instead of technical migrations and compromising on functionality. Run as the current user In customizations built using the Script Editor web part, whenever you needed to communicate with SharePoint, all you had to do was to call its API. The client-side solution was running in the browser in the context of the current user, and by automatically sending the authentication cookie along with the request, your solution could directly connect to SharePoint. No additional authentication, such as when using SharePoint Add-ins, was necessary to communicate with SharePoint. The same mechanism applies to customizations built on the SharePoint Framework that also run under the context of the currently authenticated user and also don't require any additional authentication steps to communicate with SharePoint. Use only client-side code Both SharePoint Framework and Script Editor web part solutions run in the browser and can contain only client-side JavaScript code. Client-side solutions have a number of limitations, such as not elevating permissions in SharePoint or reach out to external APIs that don't support cross-origin communication (CORS) or authentication using OAuth implicit flow. In such cases, the client-side solution could leverage a remote server-side API to do the necessary operation and return the results to the client. Host solution in SharePoint One of the benefits of building SharePoint customizations using Script Editor web parts was the fact that their code could be hosted in a regular SharePoint document library. Compared to SharePoint Add-ins, it required less infrastructure and simplified hosting the solution. Additionally, organizations could use SharePoint permissions to control access to the solution files. While hosting SharePoint Framework solutions on a CDN offers a number of advantages, it isn't required, and you could choose to host their code in a regular SharePoint document library. SharePoint Framework packages (*.sppkg files) deployed to the app catalog specify the URL at which SharePoint Framework can find the solution's code. There are no restrictions with regards to what that URL must be, as long as the user browsing the page with the web part on it can download the script from the specified location. Office 365 offers the public CDN capability that allows you to publish files from a specific SharePoint document library to a CDN. Office 365 public CDN strikes a nice balance between the benefits of using a CDN and the simplicity of hosting code files in a SharePoint document library. If your organization doesn't mind their code files being publicly available, using the Office 365 public CDN is an option worth considering. Differences between SharePoint Framework solutions and Script Editor web part customizations SharePoint customizations built using the SharePoint Framework and Script Editor web part are similar. They all run as a part of the page under the context of the current user and are built using client-side JavaScript. But there are also some significant differences that might influence your architectural decisions and which you should keep in mind when designing your solution. Run in no-script sites When building client-side customizations using the Script Editor web part, you had to take into account whether the site where the customization would be used was a no-script site or not. 
If the site was a no-script site, you either had to request the admin to disable the no-script setting or build your solution differently, for example, by using the add-in model. Because SharePoint Framework solutions are deployed through the app catalog with a prior approval, they aren't subject to the no-script restrictions and work on all sites. Deployed and updated through IT Script Editor web parts are used to build SharePoint customizations primarily by citizen developers. With nothing more than site owner permissions, citizen developers can build compelling SharePoint customizations that add business value. Whenever the customization should be updated, users with the necessary permissions can apply updates to the solution's script files and the changes are immediately visible to all users. Script Editor web part solutions make it hard for IT organizations to keep track of what customizations are being used and where they're being used. Additionally, organizations can't tell which external scripts are being used in their intranet and have access to their data. SharePoint Framework gives the control back to the IT. Because SharePoint Framework solutions are deployed and managed centrally in the app catalog, organizations have the opportunity to review the solution before deploying it. Additionally, any updates are deployed via the app catalog as well, allowing organizations to verify and approve them before deployment. Focus more on uniform user experience When building customizations using the Script Editor web part, citizen developers owned the complete DOM of their customization. There were no guidelines related to the user experience and how such customization should work. As a result, different developers built customizations in different ways, which led to an inconsistent user experience for end users. One of the goals of the SharePoint Framework is to standardize building client-side customizations so that they're uniform from the deployment, maintenance, and user experience point of view. Using Office UI Fabric, developers can more easily make their custom solutions look and behave like they're an integral part of SharePoint, simplifying user adoption. SharePoint Framework toolchain generates package files for the solutions that are deployed to the app catalog, and script bundles that are deployed to the hosting location of your choice. Every solution is structured and managed in the same way. Don't modify DOM outside of the customization Script Editor web parts were frequently used in the past to modify parts of the page, such as adding buttons to the toolbar or changing the heading or branding of the page. Such customizations relied on the existence of specific DOM elements, and whenever SharePoint UI would be updated, there was a chance that such customization would break. SharePoint Framework encourages a more structured and reliable approach to customizing SharePoint. Rather than using specific DOM elements to customize SharePoint, SharePoint Framework provides developers with a public API that they can use to extend SharePoint. Client-side web parts are the only shape supported by the SharePoint Framework, but other shapes, such as equivalents of JSLink and User Custom Actions, are being taken into consideration for the future so that developers can implement the most common customization scenarios by using the SharePoint Framework. Distributed as packages SharePoint client-side customizations often used SharePoint lists and libraries to store their data. 
When built using the Script Editor web part, there was no easy way to automatically provision the necessary prerequisites. Deploying the same customization to another site often meant copying the web part but also correctly recreating and maintaining the necessary data storage. SharePoint Framework solutions are distributed as packages can provisioning their prerequisites, such as fields, content types, or lists, automatically. Developers building the package can specify which artifacts are required by the solution, and whenever it's installed in a site, the specified artifacts are created. This significantly simplifies the process of deploying and managing the solution on multiple sites. Use TypeScript for building more robust and easier to maintain solutions When building customizations using the Script Editor web part, citizen developers often used plain JavaScript. Often such solutions didn't contain any automated tests, and refactoring the code was error-prone. SharePoint Framework allows developers to benefit from the TypeScript type system when building solutions. Thanks to the type system, errors are caught during development rather than on runtime. Also, refactoring code can be done more easily as changes to the code are safeguarded by TypeScript. Because all JavaScript is valid TypeScript, the entry barrier is low and you can migrate your plain JavaScript to TypeScript gradually over time increasing the maintainability of your solution. Can't rely on the spPageContextInfo object When building reusable client-side customizations, in the past developers used the spPageContextInfo JavaScript object to get information about the current page, site, or user. This object offered them an easy way to make their solution reusable across the different sites in SharePoint and not have to use fixed URLs. While the spPageContextInfo object is still present on classic SharePoint pages, it can't be reliably used with modern SharePoint pages and libraries. When building solutions on the SharePoint Framework, developers are recommended to use the [IWebPartContext.pageContext](/javascript/api/sp-webpart-base/iwebpartcontext) object instead, which contains the context information for the particular solution. No access to SharePoint JavaScript Object Model by default When building client-side customizations for the classic SharePoint user experience, many developers used the JavaScript Object Model (JSOM) to communicate with SharePoint. JSOM offered them intellisense and easy access to the SharePoint API in a way similar to the Client-Side Object Model (CSOM). In classic SharePoint pages, the core piece of JSOM was by default present on the page, and developers could load additional pieces to communicate with SharePoint Search, for example. The modern SharePoint user experience doesn't include SharePoint JSOM by default. While developers can load it themselves, the recommendation is to use the REST API instead. Using REST is more universal and interchangeable between the different client-side libraries such as jQuery, Angular, or React. Microsoft isn't actively investing in JSOM anymore. If you prefer working with an API, you could instead use the SharePoint Patterns and Practices JavaScript Core Library, which offers you a fluent API and TypeScript typings. Migrate existing customization to the SharePoint Framework Migrating existing Script Editor web part customizations to the SharePoint Framework offers both end-users and developers a number of benefits. 
When considering migrating existing customizations to the SharePoint Framework, you can choose to either reuse as many of the existing scripts as possible or completely rewrite the customization.

Reuse existing scripts

When migrating existing Script Editor web part customizations to the SharePoint Framework, you can choose to reuse your existing scripts. Even though the SharePoint Framework encourages using TypeScript, you can use plain JavaScript and gradually transform it to TypeScript. If you need to support a particular solution for a limited period of time or have a limited budget, this approach might be good enough for you.

Reusing existing scripts in a SharePoint Framework solution isn't always possible. SharePoint Framework solutions, for example, are packaged as JavaScript modules and load asynchronously on a page. Some JavaScript libraries don't work correctly when referenced in a module or have to be referenced in a specific way. Additionally, relying on page events such as onload() or using the jQuery ready() function might lead to undesirable results.

Given the variety of JavaScript libraries, there's no easy way to tell upfront if your existing scripts can be reused in a SharePoint Framework solution or if you need to rewrite them after all. The only way to determine this is by trying to move the different pieces to a SharePoint Framework solution and see if they work as expected.

Rewrite the customization

If you need to support your solution for a longer period of time, would like to make better use of the SharePoint Framework, or if it turns out that your existing scripts can't be reused with the SharePoint Framework, you might need to completely rewrite your customization. While it's more costly than reusing existing scripts, it offers you better results over a longer period of time: a solution that is better performing, and easier to maintain and use.

When rewriting an existing Script Editor web part customization to a SharePoint Framework solution, you should start with the desired functionality in mind. For implementing the user experience, you should consider using the Office UI Fabric so that your solution looks like an integral part of SharePoint. For specific components such as charts or sliders, you should try looking for modern libraries that are distributed as modules and have TypeScript typings. This makes it easier for you to integrate the component in your solution. While there's no single answer as to which component is the best to use for which scenario, the SharePoint Framework is flexible enough to accommodate most popular scenarios, and you can transform your existing client-side customizations into fully featured SharePoint Framework solutions.

Migration tips

When transforming existing Script Editor web part customizations to the SharePoint Framework, there are a few common patterns.

Move existing code to SharePoint Framework

SharePoint customizations built using the Script Editor web part often consist of some HTML markup, included in the web part, and one or more references to JavaScript files. When transforming your existing customization to a SharePoint Framework solution, the HTML markup from the Script Editor web part would most likely have to be moved to the render() method of the SharePoint Framework client-side web part. References to external scripts would be changed to references in the externals property in the ./config/config.json file, as illustrated below.
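For example, if the original customization loaded a library such as jQuery from a CDN, the corresponding entry in the externals section of ./config/config.json might look roughly like the following; the module name and URL are placeholders and only the relevant fragment of the file is shown:

"externals": {
  "jquery": "https://code.jquery.com/jquery-2.2.4.min.js"
}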
Internal scripts would be copied to the web part source folder and referenced from the web part class by using require() statements.

Reference functions from script files

To reference functions from script files, these functions need to be defined as an export. Consider an existing JavaScript file that you would like to use in a SharePoint Framework client-side web part:

var greeting = function() {
  alert('How are you doing?');
  return false;
}

To call the greeting function from the web part class, you would need to change the JavaScript file to:

var greeting = function() {
  alert('How are you doing?');
  return false;
}

module.exports = {
  greeting: greeting
};

Then, in the web part class, you can refer to the script and call the greeting function:

public render(): void {
  this.domElement.innerHTML = `<input type="button" value="Click me"/>`;

  const myScript = <any> require('./my-script.js');
  this.domElement.querySelector('input').addEventListener('click', myScript.greeting);
}

Execute AJAX calls

Many client-side customizations use jQuery for executing AJAX requests for its simplicity and cross-browser compatibility. If this is all that you're using jQuery for, you can execute the AJAX calls by using the standard HTTP client provided with the SharePoint Framework. SharePoint Framework offers you two types of HTTP client: the SPHttpClient, meant for executing requests to the SharePoint REST API, and the HttpClient designed for issuing web requests to other REST APIs. Here is how you would execute a call by using the SPHttpClient to get the title of the current SharePoint site:

this.context.spHttpClient.get(`${this.context.pageContext.web.absoluteUrl}/_api/web?$select=Title`,
  SPHttpClient.configurations.v1,
  {
    headers: {
      'Accept': 'application/json;odata=nometadata',
      'odata-version': ''
    }
  })
  .then((response: SPHttpClientResponse): Promise<{Title: string}> => {
    return response.json();
  })
  .then((web: {Title: string}): void => {
    // web.Title
  }, (error: any): void => {
    // error
  });
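For requests going to REST APIs outside of SharePoint, the HttpClient mentioned above is used in much the same way. The sketch below assumes the standard imports from the @microsoft/sp-http package and uses a placeholder endpoint URL:

import { HttpClient, HttpClientResponse } from '@microsoft/sp-http';

this.context.httpClient.get('https://example.com/api/items', HttpClient.configurations.v1)
  .then((response: HttpClientResponse): Promise<any> => {
    return response.json();
  })
  .then((items: any): void => {
    // work with the returned data
  }, (error: any): void => {
    // handle the error
  });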
https://docs.microsoft.com/en-us/sharepoint/dev/spfx/web-parts/guidance/migrate-script-editor-web-part-customizations
2021-11-27T04:13:19
CC-MAIN-2021-49
1637964358078.2
[]
docs.microsoft.com
Aside from the backslash ( \), phpDocumentor also allows the underscore ( _) and dot ( .) as separators for compatibility with existing projects. Despite this, the backslash is RECOMMENDED as the separator.

@package

The @package tag is used to categorize Structural Elements into logical subdivisions.

Syntax

@package [level 1]\\[level 2]\\[etc.]

Description

Package levels are separated with a backslash ( \) to be familiar to Namespaces. A hierarchy MAY be of endless depth, but it is RECOMMENDED to keep the depth at six levels or fewer.

Please note that the @package tag applies to different Structural Elements depending on where it is defined.

If the package is defined in the file-level DocBlock, then it only applies to the following elements in the applicable file:
- global functions
- global constants
- global variables
- requires and includes

If the package is defined in a namespace-level or class-level DocBlock, then the package applies to that namespace, class, trait or interface and their contained elements. This means that a function which is contained in a namespace with the @package tag assumes that package.

The @package tag MUST NOT occur more than once in a PHPDoc.

Effects in phpDocumentor

Structural Elements tagged with the @package tag are grouped and organized in their own sidebar section.

Examples

/**
 * @package PSR\Documentation\API
 */
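As a further illustration, a class-level DocBlock can assign a deeper package hierarchy; the package name below is hypothetical and simply stays within the recommended maximum of six levels:

<?php
/**
 * @package Documentation\Api\Generators
 */
class HtmlGenerator
{
}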
https://docs.phpdoc.org/guide/references/phpdoc/tags/package.html
2021-11-27T02:44:59
CC-MAIN-2021-49
1637964358078.2
[]
docs.phpdoc.org
The Administrative Console Guide

Full online documentation for the WP EasyCart eCommerce plugin!

Product inventory reports allow you to quickly view the current product inventory status in a color coded and easy to read list. You can export this list to CSV as well, which is helpful if you have hundreds of products.

Green = Your product is above the threshold and you are ok with inventory.
Yellow = You are under the 10-product threshold and should check on inventory.
Red = You are out of stock in that specific item and should check inventory.
https://docs.wpeasycart.com/wp-easycart-administrative-console-guide/?section=inventory
2021-11-27T03:24:48
CC-MAIN-2021-49
1637964358078.2
[]
docs.wpeasycart.com
A JMS synchronous invocation takes place when a JMS producer receives a response to a JMS request produced by it when invoked. WSO2 ESB uses an internal JMS correlation ID to correlate the request and the response. See JMS Request/Reply Example for more information. JMS synchronous invocations are further explained in the following use case. Then this response is taken from the SMSReceiveNotification queue and delivered to the client as an HTTP message using internal ESB logic.

The following sub sections explain how to execute this use case. WSO2 Message Broker is used as the JMS broker.

Prerequisites

Before executing this use case, the following steps need to be carried out. See Integrating WSO2 ESB in MB Documentation for detailed instructions.
- WSO2 MB should be installed and set up. See Setting up WSO2 Message Broker.
- WSO2 ESB should be installed and set up. See Setting up WSO2 ESB.

Specific entries that are required to be added to the <ESB_HOME>/repository/conf/jndi.properties file for this use case are as follows (an illustrative sketch of typical entries is given at the end of this section).

Configuring the JMS publisher

Configure a proxy service named SMSSenderProxy as shown below to accept messages sent via the HTTP transport, and to place those messages in the SMSStore queue in WSO2 MB.

.wso2.andes.jndi.PropertiesFileInitialContextFactory&java.naming.provider.url=repository/conf/jndi.properties&transport.jms.DestinationType=queue&transport.jms.ReplyDestination=SMSReceiveNotificationStore"/> </endpoint> </target> <description/> </proxy>

The endpoint of this proxy service uses the following properties to map the proxy service with WSO2 MB. Since this is a two-way invocation, the OUT_ONLY property is not set in the In sequence.

Configuring the JMS consumer

Configure a proxy service named SMSForwardProxy to consume messages from the SMSStore queue in WSO2 MB and forward them to the back-end service.

<proxy xmlns="" name="SMSForwardProxy" transports="jms" statistics="disable" trace="disable" startOnLoad="true"> <target> <inSequence> and transport.jms.Destination properties map this proxy service to the SMSStore queue.

The SimpleStockQuoteService sample shipped with WSO2 ESB is used as the back-end service in this example. To invoke this service, the address URI of this proxy service is defined as.

Start the back-end service

In this example, the SimpleStockQuoteService serving as the back-end receives the message from the SMSForwardProxy proxy service via the JMS transport. The response sent by SimpleStockQuoteService is published in the SMSReceiveNotificationStore queue, which was set as the value for the transport.jms.ReplyDestination parameter of the SMSSenderProxy proxy service. This allows the SMSSenderProxy to pick the response and deliver it to the client. The back-end service is started as follows.
- Execute the following command from the <ESB_HOME>/samples/axis2Server directory. For Windows: axis2server.bat For Linux: axis2server.sh
- Execute the ant command from the <ESB_HOME>/samples/axis2Server/src/SimpleStockQuoteService directory.

Invoke the JMS publisher

Execute the following command from the <ESB_HOME>/sample/axis2Client directory to invoke the SMSSenderProxy proxy service you defined as the JMS publisher.

ant stockquote -Daddurl= -Dsymbol=IBM

You will get the following response.

Standard :: Stock price = $149.43669233447662
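Returning to the Prerequisites above: the exact jndi.properties entries are not reproduced on this page, but with WSO2 MB as the broker they would typically look something like the sketch below. The connection factory URL assumes the default MB credentials and port, and only the two queues used in this use case are registered; adjust the values to match your environment.

connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5672'
queue.SMSStore = SMSStore
queue.SMSReceiveNotificationStore = SMSReceiveNotificationStore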
https://docs.wso2.com/display/ESB481/JMS+Synchronous+Invocations+%3A+Dual+Channel+HTTP-to-JMS
2021-11-27T02:22:11
CC-MAIN-2021-49
1637964358078.2
[]
docs.wso2.com
Date: Mon, 1 May 1995 10:00:39 +0200 (MET DST) From: Didier Derny <[email protected]> To: [email protected] (Jordan K. Hubbard) Cc: [email protected] Subject: Re: your mail Message-ID: <[email protected]> In-Reply-To: <[email protected]> from "Jordan K. Hubbard" at Apr 30, 95 10:57:42 am Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help > > > I think this is an excellent idea... Jordan? What are the chances of > > Walnut Creek offering a FreeBSD-SNAP subscription? I'd happily pay > > for a monthly or semi-monthly CDROM. Say a couple hundred bucks a year? > > We'd be happy to do it, if people REALLY thought there was a demand. > > Well? > > Jordan > Hi, you know, a dialup connection can be really expensive, and I heard that the size of the SNAP was generally around 30 mb and it may take 10 hours to get a snap I'm not aware of the price of the Internet provide in the world but a snap cost more than $50 plus the price of a tape that makes the price of a snap to $60 it's more than a cdrom I would prefer paying one or two cdrom every month if I was sure that I would receive the cdrom systematically. ---------------------------------------------------------------------------- Didier Derny [email protected] ---------------------------------------------------------------------------- Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=60089+0+/usr/local/www/mailindex/archive/1995/freebsd-questions/19950430.freebsd-questions
2021-11-27T01:59:58
CC-MAIN-2021-49
1637964358078.2
[]
docs.freebsd.org
Date: Sat, 9 Nov 1996 10:30:21 -0600 (CST) From: "Paul T. Root" <[email protected]> To: [email protected] (Robert Clark) Cc: [email protected] Subject: Re: RE: Sharing directories without rebooting Message-ID: <[email protected]> In-Reply-To: <3283B164@smtp> from Robert Clark at "Nov 8, 96 02:16:00 pm" Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help In a previous message, Robert Clark said: > > Paul, > (I'm fairly new to UNIX) > Is the; > > mountd > nfsd -u -t 4 > > method better than > > ps -as | grep mountd > kill -HUP mountd's pid > > Thanks, [RC] Actually, two different situations. The first method (mountd then nfsd) is used if you brought up the machine without anything in exports or nfs_server is set to NO in /etc/sysconfig, and you wanted to add nfs serving to a running machine. The second is if /etc/exports was populated at boot time, and nfs_server was set to YES, and then you wanted to add or subtract filesystems to nfs service. Hope that helps, Paul. Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=859377+0+/usr/local/www/mailindex/archive/1996/freebsd-questions/19961103.freebsd-questions
2021-11-27T02:38:56
CC-MAIN-2021-49
1637964358078.2
[]
docs.freebsd.org
Date: Tue, 14 Apr 1998 16:13:15 -0400 (EDT) From: Spike Gronim <[email protected]> To: Frank Griffith <[email protected]> Cc: [email protected] Subject: Re: List of Users Message-ID: <Pine.BSF.3.96.980414161221.367A-100000@pigstuy> In-Reply-To: <000201bd674c$57d69ce0$740e42ce@flg1> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help On Mon, 13 Apr 1998, Frank Griffith wrote: > I need to see what users accounts have been > setup on my FreeBSD machine. I don't seem > to be able to do that. LISTUSER does not > seem to be a command available to me. Can > someone steer me right. > Try (as root) reading /etc/group or typing "vipw" to view/edit the password file. -Spike Gronim [email protected] "Hacker, n: One who hacks real good" --Computer Contradictionary To Unsubscribe: send mail to [email protected] with "unsubscribe freebsd-questions" in the body of the message Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=895294+0+/usr/local/www/mailindex/archive/1998/freebsd-questions/19980412.freebsd-questions
2021-11-27T03:05:17
CC-MAIN-2021-49
1637964358078.2
[]
docs.freebsd.org
It looks like you are not logged into your Oryx Dental Software account. Please open another tab in your browser window, and log into your Oryx account like you would normally do to access the software. Once you are logged in, come back to this page and click on the button below.
https://docs.myoryx.com/redirect-page/
2021-11-27T01:41:18
CC-MAIN-2021-49
1637964358078.2
[array(['https://docs.myoryx.com/wp-content/uploads/2019/05/login.png', None], dtype=object) ]
docs.myoryx.com
INC-125900 · Issue 560786

ResolveFTCR flow Draft mode turned off

Resolved in Pega Version 8.3.4

When using the FastTrack Change Request functionality of Revision Manager 8.3, the submission was going to Pending packaging instead of Resolved-Completed with the error "flow is in draft mode; it cannot be executed". This was due to the pxResolveFTCR flow having been checked in with Draft mode turned on, and has been corrected by turning Draft mode off.

INC-126129 · Issue 569664

PropertyToColumnMap made more robust

Resolved in Pega Version 8.3.4

INC-126556 · Issue 564030

INC-126750 · Issue 564150

DSS added to configure Dataset-Execute page handling

Resolved in Pega Version 8.3.4

When Kafka Data Set pages were saved to a data set with the Dataset-Execute method, there was no feedback if any of the pages were not successfully saved. Instead, the step always completes as successful. In addition, if any properties are added or modified by the save operation itself, those changes are not visible. This is due to the data set execute save operation saving pages as DSM pages to the data set. Due to the conversion of the pages, copies of the pages are used which do not reflect back any changes on the input pages. DSM pages are used by default because they are more lightweight than regular clipboard pages and therefore have potentially better performance. In order to allow the use of DSM pages to be customized, a new Dynamic System Setting, dataset/execute/save/statusFailOnError, has been added. This can be enabled by setting it to true; it is disabled by default for greater backwards compatibility. By removing the DSM page conversion in the generated save code, changes to input pages will be reflected back if any are performed by the data set save operation, and the system will report back which pages are saved or failed by adding messages to the pages that failed to save. Performance may be affected with this change as regular clipboard pages are in general slower than DSM pages; however, that may be offset by removing the conversion to DSM pages process and will depend on the site configuration.

INC-126796 · Issue 561534

Modifications to getFunctionalServiceNodes process

Resolved in Pega Version 8.3.4

The count of the Interaction History write related threads was increasing rapidly and a stack trace indicated "waiting on condition" and "java.lang.Thread.State: WAITING (parking)" errors. Investigation showed that this was due to getFunctionalServiceNodes using Hazelcast to determine node status by making a service request on an installation with a very large number of nodes, causing thread locking. To resolve this, the implementation has been updated to avoid calling getFunctionalServiceNodes on save of Interaction History, instead using Cassandra and only calling getFunctionalServiceNodes on the master node, not on all nodes.

INC-126801 · Issue 575961

Improved cleanup for adm_response_meta_info

Resolved in Pega Version 8.3.4

The adm_commitlog.adm_response_meta_info column family was growing, leading to a gradual increase in CPU utilization on the ADM nodes over time. Investigation showed that the compaction on the adm_response_meta_info table was not being triggered by the ADM service, and the compaction did not remove rows that belonged to models that had been deleted. To resolve this, compaction of the adm_response_meta_info table has been moved from the ADM client nodes to the ADM service nodes, which will correctly trigger the compaction on a predefined schedule.
The compaction logic has also been refactored to remove rows that belong to models that have been deleted.

INC-128219 · Issue 565831

Race condition in distribution test resolved

Resolved in Pega Version 8.3.4

An error indicating improper execution of different Proposition filter rules was seen when trying to run a Distribution test to see how actions were being dispersed to different channels. This was traced to a particular Activity rule that, when used during a simulation of a Proposition Filter, may modify the configuration used by the filter itself, causing unpredictable behavior. To resolve this, the generation of human-readable text for default criteria has been moved from runtime to design time to avoid a race condition.

INC-128385 · Issue 564518

Behavior made consistent between SSA and legacy engines

Resolved in Pega Version 8.3.4.
https://docs.pega.com/platform/resolved-issues?f%5B0%5D=resolved_capability%3A9076&f%5B1%5D=resolved_version%3A7116&f%5B2%5D=resolved_version%3A31006&f%5B3%5D=resolved_version%3A34011
2021-11-27T02:56:47
CC-MAIN-2021-49
1637964358078.2
[]
docs.pega.com
this this is the database representation of the current model. It is useful when: - Defining a wherestatement within incremental models - Using pre or post hooks this is a Relation, and as such, properties such as {{ this.database }} and {{ this.schema }} compile as expected. this can be thought of as equivalent to ref('<the_current_model>'), and is a neat way to avoid circular dependencies. ExamplesExamples Grant permissions on a model in a post-hookGrant permissions on a model in a post-hook dbt_project.yml models:project-name:+post-hook:- "grant select on {{ this }} to db_reader" Configuring incremental modelsConfiguring incremental models models/stg_events.sql {{ config(materialized='incremental') }}select*,my_slow_function(my_column)from raw_app_data.events{% if is_incremental() %}where event_time > (select max(event_time) from {{ this }}){% endif %}
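Because this is a Relation, the same pattern works with its properties; for example, a post-hook could grant access at the schema level. This sketch reuses the hypothetical db_reader role from the example above and assumes the target warehouse supports grant usage on schema:

dbt_project.yml

models:
  project-name:
    +post-hook:
      - "grant usage on schema {{ this.schema }} to db_reader"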
https://6167222043a0b700086c2b31--docs-getdbt-com.netlify.app/reference/dbt-jinja-functions/this
2021-11-27T04:02:57
CC-MAIN-2021-49
1637964358078.2
[]
6167222043a0b700086c2b31--docs-getdbt-com.netlify.app
Confirmation E-mail Settings Return to Overview of Configuration These settings affect the automated system email that is sent out when an account creation request has been made. An automated e-mail is sent to the submitted e-mail address in this section and on the Account Approval E-mail Settings section as well. Step Action Result 1. E-Mail Subject The subject of the email sent to the address making the account creation request. 2. E-Mail Message Enter the contents of the message. NOTE:. You will also notice that the Account Approval E-mail has similar tokens for {NewUserName} and {NewPassword}. It is recommended that you use these, but it is not required.
https://docs.bamboosolutions.com/document/confirmation_e-mail_settings/
2021-11-27T02:17:37
CC-MAIN-2021-49
1637964358078.2
[]
docs.bamboosolutions.com
LDAP tasks and administrative operations¶ In addition to maintaining the system-wide LDAP client configuration on a host, the debops.ldap role can be used to perform tasks in the LDAP directory itself, using ldap_entry or ldap_attrs [1] Ansible modules. The LDAP tasks are performed via Ansible task delegation functionality, on the Ansible Controller. This behaviour can be controlled using the ldap__admin_* default variables. Check the ldap__tasks documentation for syntax and examples of usage. Authentication to the LDAP directory¶ The role will use the username of the current Ansible user (from the Ansible Controller host) as the value of the uid= attribute to bind to the LDAP directory. This is done to avoid sharing passwords between users of a single administrator account in the LDAP directory. By default LDAP connection will be bound as a Distinguished Name: uid=<user>,ou=People,dc=example,dc=org The DN can be overridden in the ldap__admin_binddn variable, either via Ansible inventory (this should be avoided if the inventory is shared between multiple administrators), or using an environment variable on the Ansible Controller: export DEBOPS_LDAP_ADMIN_BINDDN="cn=ansible,ou=Services,dc=example,dc=org" The bind password is retrieved from the pass password manager on the Ansible Controller, or from an environment variable (see below). If the bind password is not provided (the ldap__admin_bindpw variable is empty), the LDAP tasks will be skipped. This allows the debops.ldap role to be used in a playbook with other roles without the fear that lack of LDAP credentials will break execution of said playbook. Secure handling of LDAP admin credentials¶ The LDAP password of the current Ansible user is defined in the ldap__admin_bindpw default variable. The role checks if the $DEBOPS_LDAP_ADMIN_BINDPW environment variable (on the Ansible Controller) is defined and uses its value as the password during connections to the LDAP directory. If the environment variable is not defined, the role will try and lookup the password using the passwordstore Ansible lookup plugin. The plugin uses the pass password manager as a backend to store credentials encrypted using the GPG key of the user. The path in the pass storage directory where the debops.ldap will look for credentials is defined by the ldap__admin_passwordstore_path, by default it's debops/ldap/credentials/. The actual encrypted files with the password are named based on the UUID value of the current user Distinguished Name used as the BindDN (in the ldap__admin_binddn variable). The UUID conversion is used because LDAP Distinguished Names can contain spaces, and the Ansible lookups don't work too well with filenames that contain spaces. You can use the ldap/get-uuid.yml playbook to convert user account DNs or arbitrary LDAP Distinguished Names to an UUID value you can use to look up the passwords manually, if needed. You can store new credentials in the pass password manager using the ansible/playbooks/ldap/save-credential.yml Ansible playbook included in the DebOps monorepo. All you need to do is run this playbook against one of the LDAP servers by following this steps: - Make sure you have GPGv2 and pass installed, ie. apt-get install gpgv2 pass - Make sure you have a GPG keypair - Initialize the password store: pass init <your-gpg-id>. Example: pass init [email protected] - Run the playbook debops ldap/save-credential -l <host> - Re-Run the playbook for each user you need a password. 
The playbook will ask interactively for the uid= username, and if not provided, for the full LDAP Distinguished Name, and after that, for a password to store encrypted using your GPG key. If you don't specify one, a random password will be automatically generated, saved in the password store, and displayed for you to use in the LDAP directory. The encrypted passwords will be stored by default under ~/.password-store. Different modes of operation¶ The role acts differently depending on the current configuration of the remote host and its own environment: - If the debops.ldap role configuration was not applied on the host, the role will set up system-wide LDAP configuration file, and perform the default LDAP tasks, tasks defined in the Ansible inventory, and any tasks provided via role dependent variables which are usually defined by other roles (see Use as a dependent role for more details). - If the debops.ldap role configuration was already applied on the host, and there are no LDAP tasks defined by other Ansible roles, the debops.ldap role will apply the default LDAP tasks and the tasks from Ansible inventory (standalone mode). - If the debops.ldap role configuration was already applied on the host, and the role is used as a dependency for another role, the default LDAP tasks and the tasks from Ansible inventory will be ignored, and only those provided via the ldap__dependent_tasksvariable by other Ansible roles will be executed in the LDAP directory (dependent mode). This ensures that the list of LDAP tasks is short, and tasks defined by default in the role, and those defined in the Ansible inventory, which are presumed to be done previously, are not unnecessarily repeated when dependent role LDAP tasks are performed. Because the debops.ldap role relies on the LDAP credentials of the current Ansible user, the person that executes Ansible does not require full access to the entire LDAP directory. The role can perform tasks only on specific parts of the directory depending on the Access Control List of the LDAP directory server and permissions of the current user. Footnotes
https://docs.debops.org/en/stable-2.2/ansible/roles/ldap/ldap-admin.html
2021-11-27T02:00:54
CC-MAIN-2021-49
1637964358078.2
[]
docs.debops.org
Omeka_Db_Table¶ - class Omeka_Db_Table¶ Database table classes. Subclasses attached to models must follow the naming convention: Table_TableName, e.g. Table_ElementSet in models/Table/ElementSet.php. - property _target¶ protected string The name of the model for which this table will retrieve objects. - property _name¶ protected string The name of the table (sans prefix). If this is not given, it will be inflected. - property _tablePrefix¶ protected string The table prefix. Generally used to differentiate Omeka installations sharing a database. - __construct($targetModel, $db)¶ Construct the database table object. Do not instantiate this by itself. Access instances only via Omeka_Db::getTable(). - __call($m, $a)¶ Delegate to the database adapter. Used primarily as a convenience method. For example, you can call fetchOne() and fetchAll() directly from this object. - getColumns()¶ Retrieve a list of all the columns for a given model. This should be here and not in the model class because get_class_vars() returns private/protected properties when called from within the class. Will only return public properties when called in this fashion. - getTableName()¶ Retrieve the name of the table for the current table (used in SQL statements). If the table name has not been set, it will inflect the table name. - setTableName($name = null)¶ Set the name of the database table accessed by this class. If no name is provided, it will inflect the table name from the name of the model defined in the constructor. For example, Item -> items. - setTablePrefix($tablePrefix = null)¶ Set the table prefix. Defaults to the table prefix defined by the Omeka_Db instance. This should remain the default in most cases. However, edge cases may require customization, e.g. creating wrappers for tables generated by other applications. - findAll()¶ Get a set of objects corresponding to all the rows in the table WARNING: This will be memory intensive and is thus not recommended for large data sets. - findPairsForSelectForm($options = array())¶ Retrieve an array of key=>value pairs that can be used as options in a <select> form input. - _getColumnPairs()¶ Retrieve the array of columns that are used by findPairsForSelectForm(). This is a template method because these columns are different for every table, but the underlying logic that retrieves the pairs from the database is the same in every instance. - findBy($params = array(), $limit = null, $page = null)¶ Retrieve a set of model objects based on a given number of parameters - getSelectForFindBy($params = array())¶ Retrieve a select object that has had search filters applied to it. - getSelectForFind($recordId)¶ Retrieve a select object that is used for retrieving a single record from the database. - applySearchFilters($select, $params)¶ Apply a set of filters to a Select object based on the parameters given. By default, this simply checks the params for keys corresponding to database column names. For more complex filtering (e.g., when other tables are involved), or to use keys other than column names, override this method and optionally call this parent method. - applyPagination($select, $limit, $page = null)¶ Apply pagination to a select object via the LIMIT and OFFSET clauses. - findBySql($sqlWhereClause, $params = array(), $findOne = false)¶ Retrieve an object or set of objects based on an SQL WHERE predicate. - filterByPublic(Omeka_Db_Select $select, $isPublic)¶ Apply a public/not public filter to the select object. 
A convenience function than derivative table classes may use while applying search filters. - filterByFeatured(Omeka_Db_Select $select, $isFeatured)¶ Apply a featured/not featured filter to the select object. A convenience function than derivative table classes may use while applying search filters. - filterBySince(Omeka_Db_Select $select, $dateSince, $dateField)¶ Apply a date since filter to the select object. A convenience function than derivative table classes may use while applying search filters. - filterByUser(Omeka_Db_Select $select, $userId, $userField)¶ Apply a user filter to the select object. A convenience function than derivative table classes may use while applying search filters. - getSelectForCount($params = array())¶ Retrieve a select object used to retrieve a count of all the table rows. - checkExists($id)¶ Check whether a given row exists in the database. Currently used to verify that a row exists even though the current user may not have permissions to access it. - fetchObjects($sql, $params = array())¶ Retrieve a set of record objects based on an SQL SELECT statement. - _getSortParams($params)¶ Get and parse sorting parameters to pass to applySorting. A sorting direction of ‘ASC’ will be used if no direction parameter is passed.
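As a usage illustration (not part of the class reference above): table objects are obtained through Omeka_Db::getTable(), typically via the global get_db() helper, and then queried with the methods listed above. The filter keys and record ID below are hypothetical.

<?php
// Fetch the table object for the Item model via the Omeka_Db instance.
$itemTable = get_db()->getTable('Item');

// findBy() applies search filters plus an optional limit and page number.
$publicItems = $itemTable->findBy(array('public' => true), 10, 1);

// checkExists() verifies that a row with the given ID exists,
// regardless of the current user's permissions.
$exists = $itemTable->checkExists(42);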
https://omeka.readthedocs.io/en/stable-2.2/Reference/libraries/Omeka/Db/Table.html
2021-11-27T03:25:11
CC-MAIN-2021-49
1637964358078.2
[]
omeka.readthedocs.io
This is version 2.3 of the AWS Elemental Delta documentation. This is the latest version. For prior versions, see the Previous Versions section of AWS Elemental Delta Documentation. Step I: Set-Up Users Optionally configure the nodes for user authentication, and create users. With authentication enabled, users are required to provide valid credentials to access the AWS Elemental Delta nodes. We recommend that you set up users as the last step in the configuration, after you have verified that node failover works. User authentication with AWS Elemental Delta is intended to: Allow managers to track activity on the cluster on a per-user basis. To avoid accidental access to a node, create a unique username for each operator, and vary the usernames across the clusters. For example, varying usernames for each cluster ensures that a REST API operator with access to two clusters does not accidentally send a command to the wrong cluster. Whether user authentication is enabled or not, we recommend that the cluster always be installed behind a customer firewall on a private network. For help setting up authentication and users, see Configuring User Authentication.
https://docs.aws.amazon.com/elemental-delta/latest/configguide/initial-config-ls-users.html
2021-05-06T01:22:22
CC-MAIN-2021-21
1620243988724.75
[]
docs.aws.amazon.com
ANALYZE COMPRESSION Performs compression analysis and produces a report with the suggested compression encoding for the tables analyzed. For each column, the report includes an estimate of the potential reduction in disk space compared to the current encoding. Syntax ANALYZE COMPRESSION [ [ table_name ] [ ( column_name [, ...] ) ] ] [COMPROWS numrows] Parameters - table_name You can analyze compression for specific tables, including temporary tables. You can qualify the table with its schema name. You can optionally specify a table_name to analyze a single table. If you don't specify a table_name, all of the tables in the currently connected database are analyzed. You can't specify more than one table_name with a single ANALYZE COMPRESSION statement. - column_name If you specify a table_name, you can also specify one or more columns in the table (as a column-separated list within parentheses). - COMPROWS Number of rows to be used as the sample size for compression analysis. The analysis is run on rows from each data slice. For example, if you specify COMPROWS 1000000 (1,000,000) and the system contains 4 total slices, no more than 250,000 rows per slice are read and analyzed. If COMPROWS isn't specified, the sample size defaults to 100,000 per slice. Values of COMPROWS lower than the default of 100,000 rows per slice are automatically upgraded to the default value. However, compression analysis doesn't produce recommendations if the amount of data in the table is insufficient to produce a meaningful sample. If the COMPROWS number is greater than the number of rows in the table, the ANALYZE COMPRESSION command still proceeds and runs the compression analysis against all of the available rows. - numrows Number of rows to be used as the sample size for compression analysis. The accepted range for numrows is a number between 1000 and 1000000000 (1,000,000,000). Usage notes Run ANALYZE COMPRESSION to get recommendations for column encoding schemes, based on a sample of the table's contents. ANALYZE COMPRESSION is an advisory tool and doesn't modify the column encodings of the table. You can apply the suggested encoding by recreating the table or by creating a new table with the same schema. Recreating an uncompressed table with appropriate encoding schemes can significantly reduce its on-disk footprint. This approach saves disk space and improves query performance for I/O-bound workloads. ANALYZE COMPRESSION skips the actual analysis phase and directly returns the original encoding type on any column that is designated as a SORTKEY. It does this because range-restricted scans might perform poorly when SORTKEY columns are compressed much more highly than other columns. ANALYZE COMPRESSION acquires an exclusive table lock, which prevents concurrent reads and writes against the table. Only run the ANALYZE COMPRESSION command when the table is idle. Examples The following example shows the encoding and estimated percent reduction for the columns in the LISTING table only: analyze compression listing; Table | Column | Encoding | Est_reduction_pct --------+----------------+----------+------------------ listing | listid | delta | 75.00 listing | sellerid | delta32k | 38.14 listing | eventid | delta32k | 5.88 listing | dateid | zstd | 31.73 listing | numtickets | zstd | 38.41 listing | priceperticket | zstd | 59.48 listing | totalprice | zstd | 37.90 listing | listtime | zstd | 13.39 The following example analyzes the QTYSOLD, COMMISSION, and SALETIME columns in the SALES table. 
analyze compression sales(qtysold, commission, saletime); Table | Column | Encoding | Est_reduction_pct ------+------------+----------+------------------ sales | salesid | N/A | 0.00 sales | listid | N/A | 0.00 sales | sellerid | N/A | 0.00 sales | buyerid | N/A | 0.00 sales | eventid | N/A | 0.00 sales | dateid | N/A | 0.00 sales | qtysold | zstd | 67.14 sales | pricepaid | N/A | 0.00 sales | commission | zstd | 13.94 sales | saletime | zstd | 13.38
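The COMPROWS option described above has no example of its own, so here is a hedged illustration; the sample size is arbitrary, and the output takes the same report form as the examples above.

analyze compression listing comprows 500000;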
https://docs.aws.amazon.com/redshift/latest/dg/r_ANALYZE_COMPRESSION.html
2021-05-06T02:00:18
CC-MAIN-2021-21
1620243988724.75
[]
docs.aws.amazon.com
Introduction This sample demonstrates the functionality of Switch-Case Mediator. A message is passed through the ESB using the Smart Client mode. The ESB acts as a gateway to accept all messages, write and read local properties on a message instance and then perform mediation based on message properties or content. Prerequisites For a list of prerequisites, see the Prerequisites section in ESB Samples Setup. Building the Sample_2 1. The sample client used here is 'Stock Quote Client' which can operate in several modes. For instructions on this sample client and its operation modes, refer to Stock Quote Client. Smart Client Mode Run each of the following ant commands from <ESB_HOME>/samples/axis2Client directory in the smart client mode, specifying "IBM," "MSFT" and "SUN" as the stock symbols. 2. When the symbol IBM is requested, note in the mediation logs in the ESB start-up console that the case statements' first case for "IBM" is executed and a local property named "symbol" is set to "Great stock - IBM." Subsequently, this local property value is looked up by the Log Mediator and logged using the get-property()XPath extension function. 3. Again execute the smart client as follows: 4. The mediation log will be displayed as follows in the ESB start-up console:
https://docs.wso2.com/pages/diffpagesbyversion.action?pageId=26838800&selectedPageVersions=4&selectedPageVersions=5
2021-05-06T00:43:44
CC-MAIN-2021-21
1620243988724.75
[]
docs.wso2.com
Introduction This sample demonstrates dual channel messaging through synapse. Prerequisites For a list of prerequisites, see the Prerequisites section in ESB Samples Setup. Building the Sample This example invokes the same getQuote operation on the SimpleStockQuoteService using the custom client, which uses the Axis2 ServiceClient API. 2. Note the dual channel invocation through Synapse into the sample Axis2 server instance, which reports the response back to the client over a different channel. 3. Send your client request through TCPmon to notice that Synapse replies to the client with a HTTP 202 acknowledgment when you send the request. The communication between synapse and the server happens on a single channel and you get the response back from synapse to the client's callback in a different channel (which cannot be observed through TCPmon). Also note the wsa:Reply-To header similar to. This implies that the reply is in a different channel listening on the port 8200.
https://docs.wso2.com/pages/diffpagesbyversion.action?pageId=26838811&selectedPageVersions=5&selectedPageVersions=6
2021-05-06T01:20:19
CC-MAIN-2021-21
1620243988724.75
[]
docs.wso2.com
Inpaint node¶ This documentation is for version 1.0 of Inpaint (eu.cimg.Inpaint). Description¶ Inpaint (a.k.a. content-aware fill) the areas indicated by the Mask input using patch-based inpainting. Be aware that this filter may produce different results on each frame of a video, even if there is little change in the video content. To inpaint areas with lots of details, it may be better to inpaint on a single frame and paste the inpainted area on other frames (if a transform is also required to match the other frames, it may be computed by tracking). A tutorial on using this filter can be found at The algorithm is described in the two following publications: “A Smarter Examplar-based Inpainting Algorithm using Local and Global Heuristics for more Geometric Coherence.” (M. Daisy, P. Buyssens, D. Tschumperlé, O. Lezoray). IEEE International Conference on Image Processing (ICIP’14), Paris/France, Oct. 2014 and “A Fast Spatial Patch Blending Algorithm for Artefact Reduction in Pattern-based Image Inpainting.” (M. Daisy, D. Tschumperlé, O. Lezoray). SIGGRAPH Asia 2013 Technical Briefs, Hong-Kong, November 2013. Uses the ‘inpaint’ plugin from the CImg library. CImg is a free, open-source library distributed under the CeCILL-C (close to the GNU LGPL) or CeCILL (compatible with the GNU GPL) licenses. It can be used in commercial applications (see). The ‘inpaint’ CImg plugin is distributed under the CeCILL (compatible with the GNU GPL) license.
https://natron.readthedocs.io/en/rb-2.3/plugins/eu.cimg.Inpaint.html
2021-05-06T01:41:14
CC-MAIN-2021-21
1620243988724.75
[]
natron.readthedocs.io
Upgrade CiviCRM. Execute the following commands depending on your installation type: Approach A (Bitnami installations using system packages): $ sudo chown bitnami:daemon -R /opt/bitnami/drupal/ $ sudo find /opt/bitnami/drupal/ -type f -exec chmod 660 {} ; $ sudo find /opt/bitnami/drupal/ -type d -exec chmod 770 {} ; $ sudo chmod 640 /opt/bitnami/drupal/sites/default/settings.php $ sudo chmod 750 /opt/bitnami/drupal/sites/default $ sudo chmod 750 /opt/bitnami/drupal/sites $ sudo rm -rf /opt/bitnami/drupal/sites/default/files/civicrm/templates_c/* Approach B (Self-contained Bitnami installations): $.
https://docs.bitnami.com/aws/apps/civicrm/administration/upgrade/
2021-05-06T01:11:14
CC-MAIN-2021-21
1620243988724.75
[]
docs.bitnami.com
Emitter is a container for an iterator that can emit values using the emit() method and be completed using the complete() and fail() methods of this object. The contained iterator may be accessed using the iterate() method. This object should not be part of a public API, but used internally to create and emit values to an iterator.

emit() - Emits a value to the iterator.
complete() - Completes the iterator.
fail() - Fails the iterator with the given reason.
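A rough usage sketch follows. The Emitter methods are the ones described above; the consumption side (Loop::run(), advance(), getCurrent()) is assumed from the Amp v2 event loop and Iterator APIs and is not covered by this page.

<?php

require 'vendor/autoload.php';

use Amp\Emitter;
use Amp\Loop;

Loop::run(function () {
    $emitter = new Emitter;
    $iterator = $emitter->iterate(); // the contained iterator

    // Producer: emit values, then complete the iterator.
    Loop::defer(function () use ($emitter) {
        $emitter->emit(1);
        $emitter->emit(2);
        $emitter->complete();
    });

    // Consumer: advance() resolves to true while values remain.
    while (yield $iterator->advance()) {
        var_dump($iterator->getCurrent());
    }
});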
https://docs.kelunik.com/amphp/amp/v2.0.7/classes/amp/emitter
2021-05-06T01:33:08
CC-MAIN-2021-21
1620243988724.75
[]
docs.kelunik.com
several property fields in the same line. The array of labels determines how many properties are shown. No more than 4 properties should be used. The displayed SerializedProperties must be consecutive. The one provided in the valuesIterator argument should be the first of them.
https://docs.unity3d.com/2020.1/Documentation/ScriptReference/EditorGUI.MultiPropertyField.html
2021-05-06T01:48:06
CC-MAIN-2021-21
1620243988724.75
[]
docs.unity3d.com
The Buy Now button can be used when your customers needs to buy the product straight up without having to go into the cart. The below steps will explain how BluSynergy "Buy Now" button can be added to your webpage. 1. Go to Edit plan and Generate [Buy Now] button code Please refer 2. On your webpage, paste the Button code (that you generated in the previous step) to the Subscriptions/Products on your webpage which will be used to make the purchase Fig 1.0 The button will carry a BluSynergy Default CSS and can be customized by the client to continue the look and feel of their webpage. The below example shows how "Buy Now" can be used on a Client page. Fig 1.1 Below is the sample "Buy Now" button placed on a webpage. 3. When the "Buy Now" button is clicked it will re-direct to the "Cart URL" given while generating the button (refer configuation/setup). Fig 1.2 Below is the sample Default BluSynergy Shopping Cart page without iframe. In the above page a sample product is added into the cart. If you click on the Go back button, you will be redirected back to the products page. If you are using inside an iframe, You have to add the Continue Shopping link for your customers (post adding products into the cart) which can redirect your webpage to Fig 1.1 Fig 1.3 Below is the sample Default BluSynergy Shopping Cart page with iframe.
http://docs.blusynergy.com/shopping-cart/d-insert-buy-now-button
2021-05-06T01:39:17
CC-MAIN-2021-21
1620243988724.75
[]
docs.blusynergy.com
ListRoleTags Lists the tags that are attached to the specified role. The returned list of tags is sorted by tag key. For more information about tagging, see Tagging IAM resources in the IAM User Guide. . Type: Integer Valid Range: Minimum value of 1. Maximum value of 1000. Required: No - RoleName Response Elements The following elements are returned by the service. - IsTruncated A flag that indicates whether there are more items to return. If your results were truncated, you can use the Markerrequest parameter to make a subsequent pagination request that retrieves more items. Note that IAM might return fewer than the MaxItemsnumber of results even when more results are available. - Tags.member.N The list of tags that are currently attached to the role. Each tag consists of a key name and an associated value. If no tags are attached to the specified resource, the response contains an empty list. Type: Array of Tag objects Array Members: Maximum number of 50 items. Errors For information about the errors that are common to all actions, see Common Errors. - list the tags attached to a role named taggedrole.26T201509Z Authorization: <auth details> Content-Length: 58 Content-Type: application/x-www-form-urlencoded Action=ListRoleTags&Version=2010-05-08&RoleName=taggedrole Sample Response HTTP/1.1 200 OK x-amzn-RequestId: EXAMPLE8-90ab-cdef-fedc-ba987EXAMPLE Content-Type: text/xml Content-Length: 447 Date: Tue, 26 Sep 2017 20:15:09 GMT <ListRoleTagsResponse xmlns=""> <ListRoleTagsResult> <IsTruncated>false</IsTruncated> <Tags> <member> <Key>Dept</Key> <Value>Accounting</Value> </member> <member> <Key>Cost Center</Key> <Value>12345</Value> </member> </Tags> </ListRoleTagsResult> <ResponseMetadata> <RequestId>EXAMPLE8-90ab-cdef-fedc-ba987EXAMPLE</RequestId> </ResponseMetadata> </ListRoleTagsResponse> See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
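Separately from the SDK references above, the same operation is exposed by the AWS CLI; the sketch below assumes the taggedrole role name from the sample request and an already configured CLI:

aws iam list-role-tags --role-name taggedrole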
https://docs.aws.amazon.com/IAM/latest/APIReference/API_ListRoleTags.html
2021-05-06T01:41:32
CC-MAIN-2021-21
1620243988724.75
[]
docs.aws.amazon.com
%iKnow.Domain persistent class %iKnow.Domain extends %Library.Persistent SQL Table Name: %iKnow.DomainThis class represents a domain registered in this namespace. When creating a domain, you should pass a value for Name to its %New() method. Property Inventory Method Inventory - %ConstructClone() - %DeleteExtent() - %DispatchGetProperty() - %DispatchSetProperty() - Create() - Delete() - DeleteId() - DropData() - Exists() - GetCurrentSystemVersion() - GetOrCreateId() - GetParameter() - GetParameters() - GetSystemParameter() - IsEmpty() - Open() - OpenId() - RegisterImportedDomain() - Rename() - SetParameter() - SetSystemParameter() - UnsetParameter() - UnsetSystemParameter(): use %New() instead (supplying only the name parameter) Deprecated: use %DeleteId() instead (accepts domain ID) Deprecated: use %DeleteId() instead Note that it is recommended to call this method separately, before dropping the domain through %Delete() or %DeleteId(). Deprecated: use NameIndexExists() instead property of a %iKnow.Domain instance. Shorthand method to get the domain ID for a specific domain name, creating it if it does not yet exist in this namespace. Deprecated: use regular %New() and %Save() methods instead. Returns an array pParams containing all the domain parameters registered for this instance in the form: pParams(paramName) = paramValue.NOTE: this might include parameters that cannot be modified by end users, but will not include values defined at the namespace level. Deprecated: use NameIndexOpen() instead QueriesMethod() - %DispatchSetModified() - %DispatchSetMultidim.DomainD
https://docs.intersystems.com/healthconnectlatest/csp/documatic/%25CSP.Documatic.cls?&LIBRARY=%25SYS&CLASSNAME=%25iKnow.Domain&PRIVATE=0
2021-05-06T00:14:55
CC-MAIN-2021-21
1620243988724.75
[]
docs.intersystems.com
(George C. Marshall was chosen as TIMES man of the year in 1943) When one thinks of the great World War II generals the names George S. Patton, Omar Bradley, Dwight D. Eisenhower, Douglas Mac Arthur, and Bernard Montgomery seem to always enter the conversation. However, one of the most important military figures of the war never seems to be mentioned, that individual is George C. Marshall. The former Chief of Staff under Franklin Roosevelt and Secretary of State and Defense under Harry Truman had a tremendous impact during and after the war, and even has his name placed on one of the most important initiatives taken by the United States after 1945 to help rebuild Europe, the Marshall Plan. Marshall never did command troops on the battlefield but his impact on the military was substantial and his role has been the subject of a great deal of debate among historians. The latest effort is a new biography written by Debi and Irwin Unger with the assistance of Stanley Hirshson. The book, GEORGE MARSHALL is a comprehensive examination of Marshall’s career and a detailed analysis of Marshall’s role in history. In the January 3, 1944 issue of Time magazine, Marshall’s photo adorns the cover as “man of the year.” The article that accompanied the photo stated that George C. Marshall was the closest person in the United States to being the “indispensable man” for the American war effort. One must ask the question, was this hyperbole justified? According to the Unger’s the answer is a qualified no. After analyzing Marshall’s policies they conclude that his shortcomings outweigh his successes ranging from his poor judgment of the individuals he placed in command positions to his underestimating the number of troops necessary to fight the war, particularly in providing replacements for men killed or wounded in combat. In addition, they criticize Marshall for his approach to training and preparing American soldiers for combat which was painfully obvious during the North African campaign and other major operations during the war. The authors argue their case carefully supporting their views with the available documentation, though there is an over reliance on secondary sources. Everyone who has written about Marshall and came in contact with him all agree that he epitomized the characteristics of a Virginia gentleman. He presented himself as aloof and honest, and though a rather humorless and direct person no one ever questioned his character. This persona remained with Marshall throughout his career and emerged during policy decisions, diplomatic negotiations, or his dealings with the divergent personalities that he had to work with. The narrative points out the importance of Marshall’s association with General John J. Pershing during World War I and the first major example of Marshall losing his temper over policy, and having the target of his tirade take him under their wing. The story follows Marshall’s career in the post-World I era and his association with men like Douglas Mac Arthur, Dwight Eisenhower and others who he would enter in his notebook as people to watch for in the future. The majority of the book deals with Marshall’s impact on American military planning. In the 1930s he worked to train National Guard units, but he also worked with the Civilian Conservation Corps which brought him to the attention of President Roosevelt. From this point on his career takes off. By 1938 he becomes Deputy Chief of Staff at the same time the situation in Europe continued to deteriorate. 
By 1939, after an overly honest conversation with Roosevelt about the state of US military preparedness, the president impressed with Marshall’s seriousness appointed him Chief of Staff. The author’s integrate the major events in Europe and the Far East up to and including the Japanese attack on Pearl Harbor, but they do not mine any new ground. As the book analyzes the major components of Marshall’s career, the authors have a habit of presenting the negative, then produce some positives, and finally concluding their analysis with the mistakes that Marshall supposedly made. A number of examples come to mind. Roosevelt haters for years have tried to blame the president for the events of December 7, 1941 and the authors examine Marshall’s culpability for how unprepared the US was for the attack. The Ungers examine numerous investigations of the attack on Pearl Harbor and seem disappointed that more of the blame did not fall on Marshall. They seem to conclude as they comment on his appearance at a Congressional hearing that “Marshall’s demeanor may also reveal a degree of self-doubt-indeed pangs of conscience-at his own imperfect performance in the events leading to Pearl Harbor.” The Congressional investigation criticized Marshall and Admiral Harold Stark for “insufficient vigilance in overseeing their subordinates in Hawaii……[and] deplored Marshall’s failure on the morning of the attack to send a warning to Short on a priority basis.” (367) (Marshall announced the European Recovery Program that bears his name at a commencement speech at Harvard on June 5, 1947) It is painfully obvious based on the author’s narrative that the United States was totally unprepared for war. They credit Marshall for doing his best to lobby Congress to expand the American military and institute a draft and then extend it. In 1942 Roosevelt wanted to strike at North Africa, but Marshall believed that the American needs in the Pacific and plans to assist the British in Iceland and Northern Ireland would create man power shortages if the strike in North Africa went forward. The authors criticize Marshall for not presenting his case forcefully enough to Roosevelt which would cause manpower issues later on in the war. In planning for the war Marshall argued that a force of 8,000,000 men and 90 divisions would be sufficient to win the war. Throughout the war there were constant worries that certain strategic decisions would not be successful for lack of manpower. The author’s point to the cross channel invasion of France, having enough troops to take on the Japanese once the Germans were defeated, and the landing at Sicily to make their case. They do praise Marshall for trying to reform the military command structure by always placing one general in charge in each war theater, be it D-Day, Torch, or other operations. They also praise Marshall for trying to reform the training of American troops, but at the same time they criticize him for the lack of morale of American soldiers and their supposed lack of commitment to defeat the enemy. Marshall would partly agree with the authors conclusions as he admitted that the soldiers sent to North Africa “were only partly trained and badly trained.” (168) As mentioned before, Marshall maintained a list of men he though would be invaluable in leading American troops during the war. The authors have difficulty with some of his choices and argue that he was a poor judge of character in others. “On the one hand we note the names of fighting generals George S. 
Patton, Robert Eichelberger, Courtney Hodges, J. Lawton Collins, and Lucian Truscott," and administrators like Dwight Eisenhower and Brehon Somervell; but on the other hand we find the likes of Lloyd Fredendall and Mark Clark, which provoked a respected military correspondent like Hanson Baldwin of the New York Times to write that "the greatest American military problem was leadership, the army he concluded, had thus far failed to produce a fraction of the adequate officer leadership needed." (208)

(Marshall in conversation with General Dwight D. Eisenhower)

Many of the criticisms that the authors offer have some basis, but their critique goes a bit too far in suggesting that Marshall was indirectly responsible for the death of his stepson, Allen, at Anzio because he had placed him in harm's way by facilitating his transfer to North Africa after he completed Armored Force School at Fort Knox. The most effective historical writing is marked by balance and objectivity, but at times the Ungers become too polemical as they try to downgrade Marshall's reputation. Granted, Marshall never led troops in combat, but as a logistician, administrator, and diplomat he deserves to be praised. Marshall's ability to deal with British generals and their egos was very important to the Allied effort. His ability to work with Winston Churchill and argue against the British Prime Minister's goals of a Mediterranean strategy and movement in the Balkans as part of retaining the British Empire merits commendation. His ability to navigate American politics and strong personalities was also a key to victory.

Once the authors have completed their discussion of the war they turn to Marshall's role as Secretary of State. They correctly point out that it took Marshall some time to realize that Stalin could not be trusted and had designs on Eastern Europe. They are also correct in pointing out that the European Recovery Program that bears his name was not developed by the Secretary of State himself but by a talented staff that included the likes of George Kennan, Chip Bohlen, Dean Acheson, Dean Rusk, and Robert Lovett. Marshall's importance lay in lobbying Congress to gain funding for the program. The authors also give Marshall credit for trying to work out a rapprochement between the Kuomintang and the Chinese Communists after the war, a task that was almost impossible. The authors describe the heated debate over the creation of the state of Israel, which Marshall vehemently opposed based on the national security needs of the United States, dismissing the political and humanitarian calculations of Clark Clifford and President Truman; it is a position the authors feel, viewed from today's perspective, was quite accurate. Finally, the authors give Marshall a significant amount of credit for the creation of NATO.

(Marshall greets President Truman at the conclusion of World War II)

If Marshall's term as Secretary of State is deemed successful by the authors, his one-year stint as Secretary of Defense is seen as flawed. The main criticism of Marshall deals with Douglas MacArthur and the Korean War. After taking the reader through the North Korean attack on South Korea and the successful Inchon landing, the authors describe in detail the dilemma of how far to pursue North Korean troops. The question was whether United Nations forces should cross the 38th parallel into the north and how close American bombing should come to the Yalu River, which bordered Communist China.
According to the authors, when Communist Chinese troops entered the war it was not totally the fault of MacArthur, because of the unclear orders that Marshall gave. In their account, Marshall's orders were "tentative and ambiguous," thus confusing the American commander. (464) Yet the limits on what the President allowed were very clear, and when General Matthew Ridgway, who replaced MacArthur as American commander, was asked why the chiefs did not give MacArthur categorical directions, the general responded, "What good would that do? He wouldn't obey the orders." (467) It appears the authors can never pass up an opportunity to present Marshall in a negative light.

Overall, the book is well written and covers all the major components of the Second World War. It does less with Marshall's role as Secretary of State and Defense, but if one is looking for a different approach to Marshall's career this book can meet that need, as long as you realize that there are segments that are not very balanced. Even in the book's last paragraph the authors feel the need to make one final negative comment: "all told, the performance of George Marshall in many of his roles was less than awe-inspiring." (490)
https://docs-books.com/2015/02/
2021-05-06T00:45:44
CC-MAIN-2021-21
1620243988724.75
[]
docs-books.com
For members of my generation the name Henry Kissinger produces a number of reactions. First and foremost is his "ego," which, given his career in public service and academia and his role as a dominant political and social figure, makes him a very consequential figure in American diplomatic history. Second, he provokes extreme responses, whether one's views are negative, seeing him as a power-hungry practitioner of Bismarckian realpolitik who would do anything from wiretapping his staff to the 1972 Christmas bombing of North Vietnam, or positive, as in the case of the "shuttle diplomacy" that brought about disengagement agreements between Israel and Egypt, and Israel and Syria, following the 1973 Yom Kippur War, and the use of linkage or triangular diplomacy to pit China and the Soviet Union against each other. No matter one's opinion, Thomas A. Schwartz's new book, HENRY KISSINGER AND AMERICAN POWER: A POLITICAL BIOGRAPHY, though not a complete biography, offers a deep dive into Kissinger's background and diplomatic career that will benefit those interested in the former Secretary of State's impact on American history.

Schwartz tries to present a balanced account, as his goal is to reintroduce Kissinger to the American people. He does not engage with every claim and accusation leveled at his subject, nor does he accept the idea that Kissinger was the greatest statesman of the 20th century. Schwartz wrote the book for his students, attempting to "explain who Henry Kissinger was, what he thought, what he did, and why it matters." Schwartz presents a flawed individual who was brilliant, who thought seriously about and developed important insights into the major foreign policy issues of his time. The narrative shows a person who was prone to deception and intrigue, a superb bureaucratic infighter who was able to ingratiate himself with President Richard Nixon through praise, which became his source of power. Kissinger was a genius at self-promotion and became a larger-than-life figure.

(Henry Kissinger and Richard Nixon)

According to Schwartz, most books on Kissinger highlight his role as a foreign policy intellectual who advocated realpolitik for American foreign policy, eschewing moral considerations or democratic ideas as he promoted a "cold-blooded" approach designed to protect American security interests. Schwartz argues this is not incorrect, but it does not present a complete picture. "To fully understand Henry Kissinger, it is important to see him as a political actor, a politician, and a man who understood that American foreign policy is fundamentally shaped and determined by the struggles and battles of American domestic politics." His meteoric rise to power must be seen in the context of the global developments that were interwoven with his life: the rise of Nazism, World War II, the Holocaust, and the Cold War. In developing Kissinger's life before he rose to power, Schwartz relies heavily on Niall Ferguson's biography as he describes the Kissinger family's escape from Nazi Germany. Schwartz does not engage in psychobabble, but he is correct in pointing out how Kissinger's early years helped form his legendary insecurity, paranoia, and extreme sensitivity to criticism.
In this penetrating study Schwartz effectively navigates Kissinger's immigration to the United States, his service in the military, and his early academic career, highlighting important personalities, particularly Nelson Rockefeller, and the issues that shaped his intellectual development, along with the publications that foreshadowed his later career on the diplomatic stage. However, the most important components of the narrative involve Kissinger's role in the Nixon administration as National Security Advisor and Secretary of State. Kissinger was a practitioner of always keeping "a foot in both camps" no matter the issue. As Schwartz correctly states, "Kissinger sought to cultivate an image of being more dovish than he really was, and he could never quite give up his attempts to convince his critics." He had a propensity to fawn over Nixon and stress his conservative bona fides while at the same time trying to maintain his position in liberal circles. Though Schwartz repeatedly refers to Kissinger's ego and duplicitousness, he always seems to have an excuse for Kissinger's actions, which he integrates into his analysis.

Schwartz correctly points out that Nixon's goal was to replicate President Eisenhower's success in ending the Korean War by ending the war in Vietnam, which would allow him to reassert leadership in Europe as Eisenhower had done by organizing NATO. This would also quell the anti-war movement in much the same way that Eisenhower had helped bring about the end of McCarthyism. Schwartz offers the right mix of historical detail and analysis. Useful examples include his narration of how Nixon and Kissinger used the "madman theory" to pressure the Soviet Union by bombing Cambodia and North Vietnam; the employment of "linkage" to achieve détente and SALT I; the ending of the war in Vietnam by achieving a "decent interval" so Washington could not be blamed for abandoning its ally in South Vietnam; and the cease-fire agreements following the 1973 Yom Kippur War. In all instances Kissinger was careful to promote his image while at the same time playing up to Nixon, the man who created his role and allowed him to pursue their partnership until Watergate, when "Super K" became the major asset of the Nixon administration.

Kissinger was the consummate courtier, recognizing Nixon's need for praise, which he would offer after speeches and interviews. Kissinger worked to ingratiate himself with Nixon, who soon became extremely jealous of his popularity. The two men had a highly complex relationship, and it is fair to argue that at various times each was dependent upon the other. Nixon needed Kissinger's popularity with the media and his reinforcement of Nixon's ideas and hatreds. Kissinger needed Nixon as validation for his powerful position as a policy maker and as a vehicle to escape academia. Schwartz provides examples of how Kissinger manipulated Nixon, from repeated threats to resign, particularly following the war scare between Pakistan and India in 1971, to negotiations with the Soviet Union and the Paris peace talks. Nixon did contemplate firing Kissinger on occasion, especially when Oriana Fallaci described Kissinger in an article as "Nixon's mental wet nurse," but he realized how indispensable he was. What drew them together was their secretive, conspiratorial approach to diplomacy and the desire to push the State Department into the background and conduct foreign policy from inside the White House.
Schwartz reinforces the idea that Kissinger was Nixon's creation and an extension of his authority and political power as President, which basically sums up their relationship. Schwartz details the diplomatic machinations that led to "peace is at hand" in Vietnam, the Middle East negotiations, and the trifecta of 1972 that included détente and the opening with China. Schwartz's writing is clear and concise and offers a blend of factual information, analysis, interesting anecdotes, and superior knowledge of the source material, which he puts to good use. Apart from the Vietnam, Soviet Union, and Middle East successes, Schwartz chides Kissinger for failing to promote human rights and for aligning the United States with dictators and a host of unsavory regimes, including the Shah of Iran, Pinochet in Chile, and the apartheid regimes in Rhodesia and South Africa. Schwartz also criticizes Kissinger's wiretapping of his NSC staff, actions that Kissinger has danced around in all of his writings.

Though most of the monograph involves the Nixon administration, Schwartz explores Kissinger's role under Gerald Ford and his post-public career, a career that was very productive as he continued to serve on various government commissions under different administrations, built a thriving consulting firm that advised politicians and corporations and made him enormous sums of money, and published major works that include his three-volume memoir and an excellent study entitled DIPLOMACY, a masterful tour of history's greatest practitioners of foreign policy. Kissinger would go on to influence American foreign policy well into his nineties, and his policies continue to be debated in academic circles, government offices, and anywhere foreign policy decision-making is seen as meaningful.

After reading Schwartz's work, my own view of Kissinger is that he is a patriotic American who nevertheless committed a number of crimes, both domestically and in the international sphere. He remains a flawed public servant whose impact on the history of the 20th century, whether one is a detractor or a promoter, cannot be denied. How Schwartz's effort stacks up against the myriad books on Kissinger is up to the reader, but one cannot deny that it is an important contribution to the growing list of monographs that seek to dissect and understand "Super-K's" career.
https://docs-books.com/category/vietnam/
2021-05-06T00:56:23
CC-MAIN-2021-21
1620243988724.75
[array(['https://static.politico.com/dims4/default/e0e2ab8/2147483647/resize/1160x%3E/quality/90/?url=https%3A%2F%2Fstatic.politico.com%2Fae%2Fab%2F64df48794e4285b23b5ec3c9098d%2F1997-kissinger-ap-773.jpg', 'Henry Kissinger Henry Kissinger'], dtype=object) array(['https://www.gannett-cdn.com/-mm-/a3f9da77ecfdc1534ab1d987860757f8e0e7e847/c=50-0-1532-1976/local/-/media/2016/02/14/USATODAY/USATODAY/635910420340443868-XXX-DANIEL-ELLSBERG-MOV-1417.jpg?width=300&height=400&fit=crop&format=pjpg&auto=webp', 'Henry Kissinger and Richard Nixon.'], dtype=object) array(['https://storage.googleapis.com/afs-prod/media/media:7a8430754bfe499ca75bf237d409e07b/800.jpeg', 'Donald Trump, Henry Kissinger'], dtype=object) array(['https://cdnph.upi.com/pv/upi/cbc0f495b835b2acbc92186906db12a6/HENRY-KISSINGER.jpg', 'HENRY KISSINGER MEETING WITH ANWAR SADAT HENRY KISSINGER MEETING WITH ANWAR SADAT'], dtype=object) array(['https://www.sciencesource.com/Doc/TR1_WATERMARKED/8/0/0/2/SS2227225.jpg?d63641672002', 'Gerald Ford and Henry Kissinger'], dtype=object) array(['https://i.guim.co.uk/img/media/f239c5cdc5430ddaac5dd363732f0f2420d4e628/0_921_1989_1191/master/1989.jpg?width=445&quality=85&auto=format&fit=max&s=d1ef1c1c293ec07d78c0e36c9ff476be', 'Former US Secretary Of State Henry Kissinger Sits In An Office383230 04: (No Newsweek - No Usnews) Former Us Secretary Of State Henry Kissinger Sits In An Office In Washington, Dc, circa 1975. Kissinger Served As The National Security Advisor To President Richard M. Nixon, Shared The Nobel Peace Prize For Negotiating A Cease-Fire With North Vietnam, And Helped Arrange A Cease-Fire In The 1973 Arab-Israeli War. (Photo By Dirck Halstead/Getty Images)'], dtype=object) ]
docs-books.com
Integrate a Bitnami LAMP PHP Application with a Scalable MariaDB Replication Cluster on Kubernetes

Introduction

When moving a PHP application from development to production, one of the most important things to get right is the design and implementation of your database infrastructure (typically MySQL- or MariaDB-based). Ideally, you want a database setup that is configurable for different use cases, easy to maintain and upgrade, easy to scale, and able to protect your data even in case of unexpected failures. There are different options available for this:

One option is to use a hosted database service such as Amazon Aurora or Azure Database for MySQL. These services are easy to get started with, highly scalable and resilient to data loss, but they typically lack deep customization options, direct log access or full system privileges.

Another option is to "roll your own" cluster of virtual (or physical) servers and custom-configure it to meet your scalability, high availability and data protection requirements. This gives you maximum flexibility and control, but requires a significant investment of time and resources to maintain, troubleshoot and upgrade the solution.

This article introduces a third option: the Bitnami MariaDB Helm chart, which gives you a production-ready MariaDB replication cluster on Kubernetes. Kubernetes provides built-in capabilities to monitor health, recover from process or node failures, and scale out based on your decisions about usage patterns. At the same time, the Bitnami Helm chart ensures that the cluster is configured according to current best practices for security and scalability, while still allowing a high degree of customization.

This guide walks you through the process of deploying a MariaDB replication cluster on Kubernetes and then configuring a PHP application to use this Kubernetes deployment as its datastore. By relying on Kubernetes data storage services, this approach avoids a single point of failure, increases the resilience of your PHP application and makes it easier to scale up in the future.

Assumptions and prerequisites

This guide makes the following assumptions:

- You have a working Apache/PHP development environment with Composer installed and the firewall configured to allow outgoing connectivity to the Internet. Learn about deploying Bitnami applications and obtaining credentials.
- You have a multi-node Kubernetes cluster running with Helm v3.x installed.
- You have the kubectl command-line interface (kubectl CLI) installed and configured to work with your cluster.

This article uses the Bitnami LAMP stack on a cloud server, but you could also use the Bitnami MAMP stack or Bitnami WAMP stack, depending on your operating system and platform preferences.

Step 1: Deploy MariaDB on Kubernetes

The first step is to deploy MariaDB on your Kubernetes cluster. The easiest way to do this is with Bitnami's MariaDB Helm chart, which gives you a ready-to-use deployment with minimal effort and within a few minutes.

Use the commands below to deploy MariaDB on your Kubernetes cluster, remembering to replace the MARIADB-PASSWORD placeholder with a unique password for the MariaDB root account:

helm repo add bitnami
helm install mariadb bitnami/mariadb --set rootUser.password=MARIADB-PASSWORD --set slave.replicas=3 --set service.type=LoadBalancer

This command creates a four-node MariaDB cluster with one master node and three slave nodes. The cluster will be auto-configured for replication and available for access through the LoadBalancer service.
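The Helm release can take a few minutes to become ready. Before moving on to the service addresses below, you may want to confirm programmatically that the MariaDB pods are running. The following is a minimal sketch using the official Kubernetes Python client; the default namespace and the app.kubernetes.io/name=mariadb label selector are assumptions based on common chart labeling conventions, so adjust them to match your release.

# Sketch: poll until every MariaDB pod in the release reports Ready.
# Assumes a local kubeconfig plus the "default" namespace and the
# "app.kubernetes.io/name=mariadb" label selector (adjust as needed).
import time

from kubernetes import client, config

def wait_for_mariadb(namespace="default",
                     selector="app.kubernetes.io/name=mariadb",
                     timeout=600):
    config.load_kube_config()  # uses the same credentials as kubectl
    v1 = client.CoreV1Api()
    deadline = time.time() + timeout
    while time.time() < deadline:
        pods = v1.list_namespaced_pod(namespace, label_selector=selector).items
        ready = [p for p in pods
                 if p.status.container_statuses
                 and all(c.ready for c in p.status.container_statuses)]
        print("%d/%d MariaDB pods ready" % (len(ready), len(pods)))
        if pods and len(ready) == len(pods):
            return
        time.sleep(10)
    raise TimeoutError("MariaDB pods did not become ready in time")

if __name__ == "__main__":
    wait_for_mariadb()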
Using a LoadBalancer service type will typically assign two static IP addresses for the MariaDB deployment, one for the master and another for the slaves. Depending on your cloud provider's policies, you may incur additional charges for these static IP addresses.

Wait for the deployment to complete and then run the command below to obtain the IP addresses for use with MariaDB:

kubectl get svc --namespace default | grep mariadb

Here is an example of the output you should see:

Step 2: Test the cluster and replication

The next step is to confirm that you are able to connect to the new MariaDB deployment from your Apache/PHP development environment (in this case, a Bitnami LAMP server).

Log in to the Bitnami LAMP cloud server using SSH. At the Bitnami LAMP server console, use the command below to connect to the MariaDB deployment. Replace the MARIADB-MASTER-IP-ADDRESS placeholder with the IP address of the master node load balancer obtained in Step 1.

mysql -h MARIADB-MASTER-IP-ADDRESS -u root -p

Enter the password supplied at deployment time when prompted to do so. At the MariaDB prompt, use the command below to list available databases:

SHOW DATABASES;

Confirm that you see a new, empty database named my_database, which is created by the Helm chart deployment.

Once you have confirmed that your MariaDB cluster is operational, the next step is to verify data replication between the master and slave nodes. Follow these steps:

At the same MariaDB prompt, use the commands below to create a new database table and fill it with some sample data:

USE my_database;
CREATE TABLE test (id INT NOT NULL);
INSERT INTO test (id) VALUES (21), (22);

Confirm that the table and records have been created by checking the output of the following command:

SELECT * FROM my_database.test;

Log out of the master node:

exit

At the server console, connect to the slave nodes. Replace the MARIADB-SLAVE-IP-ADDRESS placeholder with the IP address of the slave node load balancer obtained in Step 1.

mysql -h MARIADB-SLAVE-IP-ADDRESS -u root -p

Enter the password supplied at deployment time when prompted to do so. At the MariaDB prompt, verify that the database table and records created on the master node are also available on the slave node:

SELECT * FROM my_database.test;

Here is an example of what you should see:

This confirms that replication between the cluster nodes is operational and working as it should. Log out once done:

exit
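If you prefer to script the replication check you just performed instead of typing the SQL by hand, the sketch below runs the same write-on-master, read-on-slave test from any machine with network access to the cluster. It assumes the PyMySQL client library (pip install pymysql), which is not part of the original tutorial; substitute the same IP addresses and root password used above.

# Sketch: write rows on the master and confirm they appear on a slave.
# PyMySQL is an assumption (pip install pymysql); any MySQL client works.
import time

import pymysql

MASTER_IP = "MARIADB-MASTER-IP-ADDRESS"   # from Step 1
SLAVE_IP = "MARIADB-SLAVE-IP-ADDRESS"     # from Step 1
ROOT_PASSWORD = "MARIADB-PASSWORD"        # set at deployment time

def connect(host):
    return pymysql.connect(host=host, user="root",
                           password=ROOT_PASSWORD, database="my_database")

# Write some sample rows on the master node
master = connect(MASTER_IP)
try:
    with master.cursor() as cur:
        cur.execute("CREATE TABLE IF NOT EXISTS test (id INT NOT NULL)")
        cur.execute("INSERT INTO test (id) VALUES (%s), (%s)", (21, 22))
    master.commit()
finally:
    master.close()

time.sleep(5)  # give replication a moment to catch up

# Read them back from the slave node
slave = connect(SLAVE_IP)
try:
    with slave.cursor() as cur:
        cur.execute("SELECT id FROM test")
        print("Rows visible on slave:", [row[0] for row in cur.fetchall()])
finally:
    slave.close()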
Step 3: Integrate a PHP application with the MariaDB cluster

Now that your MariaDB cluster is deployed on Kubernetes, the final step is to connect your PHP application to it. This gives your application the benefits of Kubernetes' built-in scalability and node failure recovery features and brings it closer to production readiness. The process of configuring a PHP application for database integration varies by application, but it typically involves editing a configuration file and updating the database server's host name or IP address, user name, password and port. For demonstration purposes, this guide will walk you through the process of configuring a new CakePHP application.

For security reasons, you will typically begin by creating a separate database for your PHP application in the MariaDB cluster and controlling access to it through a separate username/password combination. Follow the steps below to achieve this:

At the Bitnami LAMP server console, use the command below to connect to the MariaDB deployment again. As before, replace the MARIADB-MASTER-IP-ADDRESS placeholder with the IP address of the master node load balancer obtained in Step 1 and enter the password supplied at deployment time when prompted.

mysql -h MARIADB-MASTER-IP-ADDRESS -u root -p

Use the command below to create a new database named myapp for your application:

CREATE DATABASE myapp;

Next, create a new user account named myapp and give it access to only the new database. Replace the USER-PASSWORD placeholder with a password for the new user account.

GRANT ALL ON myapp.* TO myapp@'%' IDENTIFIED BY 'USER-PASSWORD';
FLUSH PRIVILEGES;

Log out of the master node:

exit

With a database configured and access granted, you can now proceed to create a new CakePHP application. The easiest way to do this is with Composer, the PHP dependency manager, which comes pre-installed on Bitnami LAMP stacks.

At the LAMP server console, change to the Apache server root. This is the location where you will create the CakePHP application.

cd /opt/bitnami/apache2/htdocs

Tip: Learn more about creating custom PHP applications with the Bitnami LAMP stack.

Run the command below to create a new CakePHP application:

composer create-project --prefer-dist cakephp/app myapp

Composer will download all the required components and initialize a new CakePHP application. This process will take a few minutes.

Browse to and confirm that you see the CakePHP default welcome page:

You will notice, from the previous image, that CakePHP displays an error message stating that it is unable to connect to the database. Your next step will be to resolve this error by configuring the CakePHP application with the necessary IP address and access credentials for the MariaDB cluster.

By default, CakePHP stores database configuration in the application's config/app.php file. Follow the steps below to update this configuration:

At the LAMP server console, edit the myapp/config/app.php file using a text editor. Locate the Datasources['default'] array and update it to look like the code snippet below. Replace the MARIADB-MASTER-IP-ADDRESS placeholder with the IP address of the master node load balancer obtained in Step 1 and the USER-PASSWORD placeholder with the password set when you created the myapp user account earlier in this step.

'Datasources' => [
    'default' => [
        'className' => Connection::class,
        'driver' => Mysql::class,
        'persistent' => false,
        'host' => 'MARIADB-MASTER-IP-ADDRESS',
        'username' => 'myapp',
        'password' => 'USER-PASSWORD',
        'database' => 'myapp',
        'timezone' => 'UTC',
        ...
    ]
]

Save the changes. Browse to again and confirm that the error previously seen on the CakePHP default welcome page is no longer present.

Your CakePHP application running on the Bitnami LAMP stack is now configured to work with your MariaDB replication cluster on Kubernetes. You can now continue developing and customizing your application.

Useful links

To learn more about the topics discussed in this article, use the links below:
https://docs.bitnami.com/tutorials/integrate-lamp-mariadb-kubernetes/
2021-05-06T01:51:13
CC-MAIN-2021-21
1620243988724.75
[array(['/tutorials/_next/static/images/mariadb-replication-95828f88f04f7e5731d09fae44d774ef.png', 'MariaDB replication'], dtype=object) array(['/tutorials/_next/static/images/cakephp-welcome-1-36a6a67d43bd5e376bc40bd6d00b389c.png', 'CakePHP welcome page'], dtype=object) array(['/tutorials/_next/static/images/cakephp-welcome-2-3374a9947d8ffa327289d61da7e718b8.png', 'CakePHP welcome page'], dtype=object) ]
docs.bitnami.com
Pagination saves us the trouble of having to go through more records than necessary. It makes it easier to handle the results of any API calls that have been made. There are three common types of pagination that APIs typically use:

- Page and pageSize
- Limit and offset
- Cursor

We have standard means of paginating in each element. Page and pageSize is our preferred method because it is easy to use and flexible. For this method, the page and pageSize need to be specified: page is the particular page of results you'd like to look at, and pageSize is the number of records returned in that page.

Limit and offset, also known as offset pagination, is a method that makes use of the limit and offset parameters. They work similarly to the page and pageSize parameters and can therefore easily be converted to page and pageSize (pageSize = limit; page = offset/limit).

Cursor pagination is where you call GET /accounts and it returns a nextPageToken. This token is used on the next request to get the next page. It is not as easy to convert cursor-based pagination to the page and pageSize method of pagination.

Even though we want to use page/pageSize pagination across the board, it is not always possible. For this reason, you will find support for cursor-based pagination everywhere in our platform. Every element supports cursor-based pagination, so you can maintain consistency in your integrations: if you have an element that must be cursor-based, you can use cursor-based pagination for every element.

Every GET /<anyObject> returns the following header:

elements-next-page-token: eyJwYWdlU2l6ZSI6MiwicGFnZSI6Mn0

On every one of these APIs, even if it is not documented, you can send the query parameter nextPage=eyJwYWdlU2l6ZSI6MiwicGFnZSI6Mn0 and it will work the same as if you had sent page and pageSize.

Paginator 3.0

Paginator 3.0 is the new, enhanced version of Cloud Elements' original paginator. Here is a comparison of the two for your better understanding.
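To make the cursor flow described above concrete, here is a minimal sketch that walks through every page of a GET /accounts style resource by echoing the elements-next-page-token header back as the nextPage query parameter. The base URL and credentials are placeholders for illustration, and the Python requests library is assumed.

# Sketch: cursor-based pagination via elements-next-page-token / nextPage.
# BASE_URL and the Authorization value are placeholders, not real values.
import requests

BASE_URL = "https://api.example.com"                          # placeholder
HEADERS = {"Authorization": "REPLACE-WITH-YOUR-CREDENTIALS"}  # placeholder

def fetch_all(resource, page_size=200):
    """Yield every record, following the next-page token until it is absent."""
    params = {"pageSize": page_size}
    while True:
        resp = requests.get("%s/%s" % (BASE_URL, resource),
                            headers=HEADERS, params=params)
        resp.raise_for_status()
        for record in resp.json():
            yield record
        token = resp.headers.get("elements-next-page-token")
        if not token:                 # no token means this was the last page
            return
        params = {"nextPage": token, "pageSize": page_size}

def offset_to_page(limit, offset):
    """Convert limit/offset values to page/pageSize as described above."""
    return {"pageSize": limit, "page": offset // limit}

for account in fetch_all("accounts"):
    print(account)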
https://docs.cloud-elements.com/home/standardized-pagination
2021-05-06T00:27:40
CC-MAIN-2021-21
1620243988724.75
[]
docs.cloud-elements.com
Backends reference

The function of backends is to handle bookmark retrieval and storage. They take care of things like adding or removing a bookmark, and getting all bookmarks based on some filters.

Writing your own backend

The application ships with a Django model backend and a MongoDB backend, but you can add your own by defining a class with the interface below and pointing settings.GENERIC_BOOKMARKS_BACKEND to the new customized one.

- class bookmarks.backends.BaseBackend

Base bookmarks backend. Users may want to change settings.GENERIC_BOOKMARKS_BACKEND and customize the backend by implementing all the methods defined here.

get_model(self)

Must return the bookmark model (a Django model or anything you like). Instances of this model must have the following attributes:

- user (who made the bookmark, a Django user instance)
- key (the bookmark key, as a string)
- content_type (a Django content_type instance)
- object_id (a pk for the bookmarked object)
- content_object (the bookmarked object as a Django model instance)
- created_at (the date when the bookmark is created)

add(self, user, instance, key)

Must create a bookmark for instance by user using key. Must return the created bookmark (as a self.get_model() instance). Must raise exceptions.AlreadyExists if the bookmark already exists.

remove(self, user, instance, key)

Must remove the bookmark identified by user, instance and key. Must return the removed bookmark (as a self.get_model() instance). Must raise exceptions.DoesNotExist if the bookmark does not exist.

filter(self, **kwargs)

Must return all bookmarks corresponding to the given kwargs.

- The kwargs keys can be:
- user: a Django user object or pk
- instance: a Django model instance
- content_type: a Django ContentType instance or pk
- model: a Django model
- key: the bookmark key to use
- reversed: reverse the order of results

The bookmarks must be an iterable (like a Django queryset) of self.get_model() instances. The bookmarks must be ordered by creation date (created_at); if reversed is True the order must be descending.

get(self, user, instance, key)

Must return the bookmark added by user for instance using key. Must raise exceptions.DoesNotExist if the bookmark does not exist.

Django

The default backend, used if settings.GENERIC_BOOKMARKS_BACKEND is None, is ModelBackend, which uses Django models to store bookmarks.

MongoDB

In order to use the MongoDB backend you must change your settings file like this:

GENERIC_BOOKMARKS_BACKEND = 'bookmarks.backends.MongoBackend'
GENERIC_BOOKMARKS_MONGODB = {"NAME": "bookmarks"}

and then install MongoEngine:

pip install mongoengine

See Customization for a more complete explanation of the MongoDB settings.
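As a concrete illustration of the interface documented above, below is a minimal sketch of a custom backend that keeps bookmarks in memory (suitable only for experiments or tests). The bookmarks.exceptions import path and the simplified filter() handling are assumptions based on the reference above; a real backend would persist bookmarks and support all of the documented kwargs. Point settings.GENERIC_BOOKMARKS_BACKEND at the dotted path of such a class to activate it.

# Sketch: an in-memory backend implementing the BaseBackend interface.
# Import paths are assumptions taken from the docs above; adapt as needed.
from datetime import datetime

from django.contrib.contenttypes.models import ContentType

from bookmarks import exceptions
from bookmarks.backends import BaseBackend


class InMemoryBookmark:
    """Bare container exposing the attributes get_model() instances must have."""
    def __init__(self, user, instance, key):
        self.user = user
        self.key = key
        self.content_type = ContentType.objects.get_for_model(instance)
        self.object_id = instance.pk
        self.content_object = instance
        self.created_at = datetime.now()


class InMemoryBackend(BaseBackend):
    def __init__(self):
        self._bookmarks = []

    def get_model(self):
        return InMemoryBookmark

    def add(self, user, instance, key):
        if self._find(user, instance, key):
            raise exceptions.AlreadyExists
        bookmark = InMemoryBookmark(user, instance, key)
        self._bookmarks.append(bookmark)
        return bookmark

    def remove(self, user, instance, key):
        bookmark = self._find(user, instance, key)
        if bookmark is None:
            raise exceptions.DoesNotExist
        self._bookmarks.remove(bookmark)
        return bookmark

    def get(self, user, instance, key):
        bookmark = self._find(user, instance, key)
        if bookmark is None:
            raise exceptions.DoesNotExist
        return bookmark

    def filter(self, **kwargs):
        # Simplified: only the user, key and instance kwargs are handled here.
        results = list(self._bookmarks)
        if "user" in kwargs:
            results = [b for b in results if b.user == kwargs["user"]]
        if "key" in kwargs:
            results = [b for b in results if b.key == kwargs["key"]]
        if "instance" in kwargs:
            results = [b for b in results
                       if b.content_object == kwargs["instance"]]
        # Ordered by created_at; descending when reversed=True, as documented.
        results.sort(key=lambda b: b.created_at,
                     reverse=kwargs.get("reversed", False))
        return results

    def _find(self, user, instance, key):
        for b in self._bookmarks:
            if b.user == user and b.content_object == instance and b.key == key:
                return b
        return None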
https://django-generic-bookmarks.readthedocs.io/en/latest/backends_api.html
2021-05-06T00:40:51
CC-MAIN-2021-21
1620243988724.75
[]
django-generic-bookmarks.readthedocs.io
Last night my wife and I had the pleasure of seeing the documentary film FOOD FIGHT at the Music Hall Theater in Portsmouth, NH. It was wonderful, capturing the relationship between the management team and its CEO, Arthur T. Demoulas, their associates, and their customers in a deep and personal manner. Serious at times and humorous at others, it is an exceptional film for anyone interested in the events of July 2014 and onward. My family has been lucky to have known the Demoulas family and their generosity, and we recommend this film to all. The film corresponds to the wonderful book WE ARE MARKET BASKET by Daniel Korschun and Grant Welker that I reviewed on this website in September 2015.
https://docs-books.com/2016/02/
2021-05-06T01:11:24
CC-MAIN-2021-21
1620243988724.75
[array(['https://i0.wp.com/www.foodfightfilm.com/wp-content/uploads/2014/09/slides3-300x151.png', 'slides3'], dtype=object) ]
docs-books.com
Upgrading CDH to CDP Private Cloud Base

High-level upgrade procedures for upgrades from CDH to CDP Private Cloud Base.

Upgrading CDP Private Cloud Base consists of two major steps: upgrading Cloudera Manager and upgrading the cluster. You are not required to upgrade Cloudera Manager and the cluster at the same time, but the versions of Cloudera Manager and the cluster must be compatible. The major+minor version of Cloudera Manager must be equal to or higher than the major+minor version of CDH or Cloudera Runtime.

- Prepare to upgrade:
- Review the Supported Upgrade Paths for your upgrade.
- Review the Requirements and Supported Versions for your upgrade.
- Review the Release Notes for the version of CDP Private Cloud Base you are upgrading to.
- Gather information on your deployment. See Step 1: Getting Started Upgrading Cloudera Manager and Step 1: Getting Started Upgrading a Cluster.
- Plan how and when to begin your upgrade.
- If necessary, Upgrade the JDK.
- If necessary, Upgrade the Operating System.
- Perform any needed pre-upgrade transition steps for the components deployed in your clusters. See CDP Private Cloud Base Pre-upgrade transition steps.
- Upgrade Cloudera Manager to version 7.1.1 or higher. After upgrading to Cloudera Manager 7.1.1 or higher, Cloudera Manager can manage upgrading your cluster to a higher version. See Upgrading Cloudera Manager.
- Use Cloudera Manager to upgrade CDH to Cloudera Runtime 7, or from Cloudera Runtime to a higher version of Cloudera Runtime. See Upgrading a Cluster.
- Perform any needed post-upgrade transition steps for the components deployed in your clusters. See CDH to CDP Private Cloud Base post-upgrade transition steps.

Component Changes in CDP Private Cloud Base 7.1

YARN: The YARN Fair Scheduler is being removed and replaced with the YARN Capacity Scheduler. A transition tool will be provided to convert Fair Scheduler configurations to Capacity Scheduler.

Ranger: A Sentry-to-Ranger policy transition tool is available for CDP Private Cloud Base 7.1, and transitions will be supported when Replication Manager is used to transition Hive tables from CDH to CDP.

Navigator: Navigator has been replaced with Atlas. Navigator lineage data is transferred to Atlas as part of the CDH to CDP Private Cloud Base upgrade process. Navigator audit data is not transferred to Atlas.
https://docs.cloudera.com/cdp-private-cloud/latest/upgrade-cdh/topics/cdpdc-cdh-overview.html
2021-05-06T01:46:52
CC-MAIN-2021-21
1620243988724.75
[]
docs.cloudera.com
The following Hue logs are available.
https://docs.cloudera.com/documentation/enterprise/5-5-x/topics/cdh_ig_hue_admin.html
2021-05-06T02:13:21
CC-MAIN-2021-21
1620243988724.75
[]
docs.cloudera.com
Support for FibreBridge 7500N bridges in MetroCluster configurations

The FibreBridge 7500N bridge is supported as a replacement for the FibreBridge 6500N bridge or when adding new storage to the MetroCluster configuration. The supported configurations have zoning requirements and restrictions regarding the use of the bridge's FC ports, as well as stack and storage shelf limits.
https://docs.netapp.com/us-en/ontap-metrocluster/maintain/concept_using_fibrebridge_7500n_bridges_in_mcc_configurations.html
2021-05-06T01:04:54
CC-MAIN-2021-21
1620243988724.75
[]
docs.netapp.com