content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Replication with Sentry Enabled If the cluster has Sentry enabled and you are using Replication Manager Replication Manager jobs since Sentry ACLs will be bypassed for this user.For example, create a user named bdr-only-user. - Configure HDFS on the source cluster: - In the Cloudera Manager Admin Console, select Clusters > <HDFS service>. -on the realm elephantrequires the following value: bdr-only-user, bdr-only-user@ElephantRealm Description: This field is optional. Restart the NameNode. - Restart the NameNode. - Repeat step 2 on the destination cluster. - When you create a replication schedule, specify the user you created in step 1 in the Run As Username and Run on Peer as Username (if available) fields.
https://docs.cloudera.com/cdp/latest/data-migration/topics/rm-dc-replication-with-sentry-enabled.html
2020-09-18T11:49:47
CC-MAIN-2020-40
1600400187390.18
[]
docs.cloudera.com
Remark not intended for use in production scenarios. This article will now explain how to create a retargeting audience through Google Ads Customer Match. To learn how to achieve this through Google Analytics instead, click here. Retargeting through Google Ads Customer Match Requirements for using Customer Match Customer Match is not available for all Google Ads advertisers. To use Customer Match for Google Ads, your Google Adsaccount must have: - A good history of policy compliance with Google Ads - A good payment history in Google Ads - At least 90 days history in Google Ads. - More than USD 50,000 total lifetime spend in Google Ads If Customer Match is available for your Google Ads account, you also need to have edit rights for your Google Ads account to implement this integration. Read more about the requirements in the Google Ads docs article here: Step 1: Integrate your Google Ads account with your Exponea project First, go to integrations and connect your Exponea project to your Google Ads account, as shown below. Step 2: Create a scenario to define your audience Now you need to create a scenario which will define the desired group of customers for retargeting and will add them to your Google Ads audience. Create a flow as indicated below: Double-click the retargeting node to open a modal window, select "Google Ads" and then "Customer match". Click on "Customer matching" to enable matching customers by Email, Phone, Mobile advertiser ID, and Google Ads ID. User ID Matching (BETA) Google User ID matching requires its own custom setup. You need to track this ID to Google beforehand. Setup special tracking Google matching pixels that you can find in the tag manager pre-set or track it within your existing tagging infrastructure. You need to manually create a new User ID-based customer list audience using Google Ads user interface Choose the newly created User ID audience in the Retargeting node configuration Once you upload the audience to Google Ads, the following process takes place - Google Ads will check the format of the uploaded data - Google Ads will match your data to your customers on Google's networks. - You can add this list to your targeting now, but matching can take up to 24 hours to finish - When matching is complete, your ads can start showing to your new audiences. Lists must have at least 1000 matched users for them to serve. <!-- use this tag to send Exponea external ID to Google Ads and then use this User ID in the retargeting scenarios --> <script> gtag('event', 'page_view', { 'send_to': '[[GoogleAdsConversionID]]', // 'user_id': '{{ customer_ids.cookie if customer_ids.cookie is string else customer_ids.cookie | last }}' // Exponea cookie, feel free to customize }); </script> Removing customers In case the customer asks for the removal of all his data, it is now possible to pick if you want to add or remove a customer to/from a Google Ads audience. Retargeting nodes track events automatically. This enables simple evaluations of retargeting scenarios that contribute to the single customer view of a customer. Consent policy Consent policy settings is part of the retargeting node, which simplifies the design of the retargeting scenarios and ensures that only people with proper consent will be pushed to Google Ads audiences. Now click on the "Audience" drop-down, click + CREATE NEW. Give your audience a name and save it, then click save again to return to the scenario editor. Save the whole scenario now. You can click on TEST to see how many customers will flow to the retargeting node. You can now start the scenario, which will add the defined customers to your Google Ads Audience. Audience with the same name as you defined it should now appear in Audiences your Google Ads account. Doubleclick Integration in Google Marketing Platform Google allows you to share Google Ads audiences with Display & Video 360 (former Doubleclick Bid Manager) and Search Ads 360 (former Doubleclick Search). Updated 17 days ago
https://docs.exponea.com/docs/google-ads-customer-match
2020-09-18T10:30:06
CC-MAIN-2020-40
1600400187390.18
[array(['https://files.readme.io/953af34-Google_Ads_Integration.gif', 'Google Ads Integration.gif'], dtype=object) array(['https://files.readme.io/953af34-Google_Ads_Integration.gif', 'Click to close...'], dtype=object) array(['https://files.readme.io/817957c-AdWords_Retargeting_Scenario.png', 'AdWords Retargeting Scenario.png'], dtype=object) array(['https://files.readme.io/817957c-AdWords_Retargeting_Scenario.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/5b10148-Google_scenarios.png', 'Google scenarios.png'], dtype=object) array(['https://files.readme.io/5b10148-Google_scenarios.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/1a84c21-Remove_google.png', 'Remove google.png'], dtype=object) array(['https://files.readme.io/1a84c21-Remove_google.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/25a02e3-Consent_google.png', 'Consent google.png'], dtype=object) array(['https://files.readme.io/25a02e3-Consent_google.png', 'Click to close...'], dtype=object) ]
docs.exponea.com
Enter the "Management" module and select the sub-category "Staff Management". In the central part we will see all the teachers and administrators that exist at the moment in the kindergarten and information about them. At the top right is a green "Add New User" button. Pressing it will open a page where you will start to fill in data such as the group or groups the teacher will be assigned to, email, name, surname, telephone and address. First of all, we must make sure that under "Role Name", "Educator" is selected. Pressing „Save” will automatically send the teacher an e-mail containing an invitation to join Kinderpedia and access the kindergarten.
https://docs.kinderpedia.co/en/articles/2930008-cum-adaug-un-educator
2020-09-18T09:38:35
CC-MAIN-2020-40
1600400187390.18
[array(['https://downloads.intercomcdn.com/i/o/199020276/75c04c02367f2be367abc59b/www.kinderpedia.co_mykp_kg-admins+%282%29.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/199021345/47f0605ff8615d96e574bd4d/www.kinderpedia.co_mykp_kg-admins_add+%281%29.png', None], dtype=object) ]
docs.kinderpedia.co
ODE¶ This submodule contains tools used to perform inference on ordinary differential equations. - class pymc3.ode. DifferentialEquation(func, times, *, n_states, n_theta, t0=0)¶ Specify an ordinary differential equation\[\dfrac{dy}{dt} = f(y,t,p) \quad y(t_0) = y_0\] - Parameters - funccallable Function specifying the differential equation. Must take arguments y (n_states,), t (scalar), p (n_theta,) - timesarray Array of times at which to evaluate the solution of the differential equation. - n_statesint Dimension of the differential equation. For scalar differential equations, n_states=1. For vector valued differential equations, n_states = number of differential equations in the system. - n_thetaint Number of parameters in the differential equation. - t0float Time corresponding to the initial condition Examples def odefunc(y, t, p): #Logistic differential equation return p[0] * y[0] * (1 - y[0]) times = np.arange(0.5, 5, 0.5) ode_model = DifferentialEquation(func=odefunc, times=times, n_states=1, n_theta=1, t0=0) perform(self, node, inputs_storage, output_storage)¶ Required: Calculate the function on the inputs and put the variables in the output storage. Return None. - Parameters - nodeApply instance Contains the symbolic inputs and outputs. - inputslist Sequence of inputs (immutable). - output_storagelist List of mutable 1-element lists (do not change the length of these lists) - Raises - MethodNotDefined The subclass does not override this method. Notes The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a Numpy ndarray, with the right number of dimensions, and the correct dtype. Its shape and stride pattern, can be arbitrary. It not is guaranteed that it was produced by a previous call to impl. It could be allocated by another Op impl is free to reuse it as it sees fit, or to discard it and allocate new memory.
https://docs.pymc.io/api/ode.html
2020-09-18T09:39:37
CC-MAIN-2020-40
1600400187390.18
[]
docs.pymc.io
NetCURL 6.1 is built as individual modules, so it is possible to communicate directly with each of them separately. If you are sure that curl for example is always available, regardless of what platform you put your code, the CurlWrapper can be used instead of NetWrapper. The unittests that was built initially, was built in a way so each module could be tested separately. It was in the second step the MODULE_CURL was written, but with the new name NetWrapper - which is the module that is self selecting "best available driver". Sections Configurables and requirements In netcurl 6.0 all of the data communication was basically running through MODULE_CURL and then, that data was passed over to a proper communications driver. Basically, curl was the only reliable driver, but with a failsafe selection to components that was installable via composer. - When curl was not found and drivers from Wordpress or Guzzle became available, the module could fail over to those drivers. At that next lever, both Wordpress and Guzzle handled the php-internal streamdriver. - If the webrequest was based on SOAP, netcurl automatically chose to run through SoapClient, if the conditions allowed it (SoapClient and some XML-drivers had to be installed). In netcurl 6.1, NetWrapper (MODULE_CURL), almost do the same as above but it is configured to handle more by itself. There is also a register-function (undocumented) that allows developers to do their own module. However, from 6.1, netcurl tries to cover the most important parts itself. - Curl is the primary selected driver. - When curl is not available it fails over to SimpleStreamWrapper. This wrapper requires that allow_url_fopen is true, in php.ini - When a Soapcall is requested, we still use an internal SoapWrapper. However, it is as of 6.1.0 called SoapClientWrapper instead of SimpleSoap - When a network request is being made and the content type indicates RSS, netcurl is trying to utilize the RssWrapper. If you set composer to install Laminas RSS-parser, it tries to utilize Laminas before SimpleXML-drivers. Own drivers To configure anything, this meant you have to run all setups through that module. As 6.1 is converted to fit PSR4-requirements each module is practically independent. There are external components however, that can (and is) injectable in the modules, depending on how they are written. It will be possible to write own drivers, that takes advantage of the same modules. Configuration The drivers should initially not require any preconfiguration like for other modules where you have to write your communication drivers from scratch. This is the biggest thing with netcurl, as it tries to pick any available driver from the system, parse requested data and give it back as formatted as possible with defaults from WrapperConfig - see below. Preconfigured modules There are a few modules that you'd want to be aware of. Here's a list of them and how to use them. Wrappers and drivers Basic Drivers TorneLIB\Module\Config\WrapperConfig The major configuration module. Almost every feature and defaults that is used within (at least) the curl extension are configured here. WrapperConfig is very much the heart of the module as much configurations are processed through this class. Both streamoptions, curloptions and user-agent setups. TorneLIB\Module\Network\NetWrapper Replacement for MODULE_CURL. If you don't want to make the calls yourself, you'd use this. The request methods (doGet, doPost, doPut, doDelete, etc) have been replaced with request(). TorneLIB\Module\Network\Wrappers\CurlWrapper The main curlwrapper. If nothing else is defined and curl is installed, this is the driver that will be used for "almost everything". TorneLIB\Module\Network\Wrappers\SimpleStreamWrapper The failover wrapper, uses the binary safe file_get_contents (and not the fopen/fread/etc). If curl is unavailable, this is where your requests will land. TorneLIB\Module\Network\Wrappers\SoapClientWrapper TorneLIB\Helpers\GenericParser Prior location was TorneLIB\Module\Config\GenericParser. This is a generic parser used for parsing data in the wrappers that is commonly used by more than one wrapper. For example, header extractions is being made here (when you for example need to get information about the http-head response). It also takes care of content-type based parsing from the body, or whatever needs to be properly parsed. In netcurl 6.0, most of the data transmitted in the interfaces was running guessing games which may have caused problems in the way that the IO-parser genereated its own error handler to catch XML errors. This is completely removed. The others TorneLIB\Module\Config\WrapperConstants Internal constants requested by netcurl, if necessary. In 6.1.0, there are no preset constants. TorneLIB\Module\Config\WrapperDriver This driver can be instantiated but should normally not do this. This part of netcurls keeps track of available communication drivers in the system, and is also used for third parties in case they really need to register their own wrapper. Registrations always goes this way and is basically what NetWrapper (the real replacement for MODULE_CURL) uses to select what to use when calling for it. TorneLIB\Module\Config\WrapperCurlOpt Protective layer for CURLOPT constants, in case they are not installed via curl. Instead of netcurl screaming out warnings of missing constants, we use those. TorneLIB\Module\Config\WrapperSSL SSL is a quite big part of the communication drivers, so the SSL configurator has been separated from the standard WrapperConfig. This is however not very used by the curlwrapper. It rather aims at the stream based communications like the built in fetchers from PHP, and SoapClient. TorneLIB\Module\Config\WrapperDriver Basically the wrapper container used globally in netcurl, to figure out which communication drivers that is available. This one is also used to configure your own drivers if you miss any.
https://docs.tornevall.net/display/TORNEVALL/Getting+started%3A+Vital+modules
2020-09-18T11:03:24
CC-MAIN-2020-40
1600400187390.18
[]
docs.tornevall.net
Goodman Spectrograph Reference Lamp Library¶ This is a visual library of all the usable lamps of the Goodman High Throughput Spectrograph. Note The plots in this library are automatically generated therefore in some cases if two lines are too close together the labels will appear stacked The table below is presented as a quick reference. It is not generated automatically. To see a full set of plots of all lamps please visit this GitHub Repository Plots
https://goodman.readthedocs.io/projects/lamps/en/latest/
2020-09-18T09:59:39
CC-MAIN-2020-40
1600400187390.18
[]
goodman.readthedocs.io
Until now WiFi pwnage wasn’t possible on most Android phones due to lack of support in the WiFi chipset. This is surprising, due to the fact that most android devices have a bcm43xx WiFi chipset. This talk will present our research on the bcm43xx chipsets and the custom tools we’ve developed, enabling the use of mobile phones as a platform for common WiFi pwnage tools. Unlike PCs which use SoftMac, embedded devices use FullMac meaning that the WiFi chip translates the 802.11 packets into ethernet packets. Crucial information is lost during the process, making WiFi pwnage impossible. Since this translation is done by the WiFi chipset, the only possible solution is to patch its firmware. One of the challenges was the fact that we only had part of the firmware and were missing the chip’s ROM. To overcome this, we exploited the firmware loading mechanism and extracted the ROM segment of the chip (the protected memory region). To optimize worktime we decided to implement a live debugging engine using Wireshark as a frontend client, producing custom output from any given function (e.g. stacktrace, return values and buffers). Using our debugging engine and a lot of reverse engineering we managed to enable both monitor mode as well as packet injection on any mobile device based on the Broadcomchipset (Galaxy S1/2/3, Nexus S, and many others). We will also demonstrate how to use the debugging engine to perform additional analysis and add additional features. Turning our phones into mobile pwning stations..
http://www.secdocs.org/docs/wardriving-from-your-pocket-video/
2020-09-18T11:34:10
CC-MAIN-2020-40
1600400187390.18
[]
www.secdocs.org
Components Associations Options From Joomla! Documentation Description Default permissions used for all content in the Multilingual Associations Component. How to Access To access this screen: - Navigate to Components → Multilingual Associations screen, then - Click the Options button at the top right. This opens the Multilingual Associations: Options screen. Screenshot Details Permissions Default permissions used for all content in this component. To change the permissions, do the following. - Select the Group by clicking its title. - Find the desired Action. Possible Actions are: - Configure ACL & Options. Allows users in the group to edit the options and permissions of the Multilingual Associations Component. - Configure Options Only. Allows users in the group to edit the options except the permissions of the Multilingual Associations Component. - Access Administrative Interface. Allows users in the group to access the administration interface of the Multilingual Associations Component. - left of the Multilingual Associations Options window you will see the toolbar. The functions are: - Save. Saves the Multilingual Associations options and stays in the current screen. - Save & Close. Saves the Multilingual Associations options and closes the current screen. - Cancel. Closes the current screen and returns to the previous screen without saving any modifications you may have made. - Help. Opens this help screen.
https://docs.joomla.org/Help39:Components_Associations_Options
2020-09-18T10:04:16
CC-MAIN-2020-40
1600400187390.18
[]
docs.joomla.org
Installation issues With Beta2 we incorporated all your feedback, resulting in much improved install experience. Remaining issues are Development & runtime issues App runtime issues Android show error message "libobjc.so : text relocation" An error message is shown. After you click ok, you can resume app without problems. Is this a bug that occurs with some Samsung devices. Bug was fixed This issue has been fixed. Please download BETA6b. Updated 2 years ago
https://docs.scade.io/docs/currently-known-issues
2020-09-18T10:38:15
CC-MAIN-2020-40
1600400187390.18
[array(['https://files.readme.io/888b089-androidbug1.png', 'androidbug1.png'], dtype=object) array(['https://files.readme.io/888b089-androidbug1.png', 'Click to close...'], dtype=object) ]
docs.scade.io
Adding/editing a questionnaire From Carleton Moodle Docs: - If a questionnaire has already been created (in another course on the same Moodle site) with the "public" setting, then you may use that "public" questionnaire in your own course(s). The number of settings available to such questionnaires is limited and you cannot edit its questions nor view the responses. Example .- If a public questionnaire has been created in course A, it can be "used" in courses B, C, ... All the responses from courses A, B, C, ... are collected in the public questionnaire created in course A (the original course where it was created) and are viewable there by the person (admin or teacher) who originally created it. What next? Once you have clicked Save: Configure the questionnaire type Editing Questionnaire questions
https://docs.moodle.carleton.edu/index.php?title=Adding/editing_a_questionnaire&oldid=17543&printable=yes
2020-09-18T11:05:53
CC-MAIN-2020-40
1600400187390.18
[]
docs.moodle.carleton.edu
. You can use source-to-Image (S2I) to combine source code and base images. Builder images make use of S2I to enable your development and operations teams to collaborate on a reproducible build environment. When developers commit code with Git for an application using build images, OpenShift Container Platform can perform the following functions: Trigger, either via CI process. You can use OpenShift Container Registry to manage access to final images. Both S2I and native build images are automatically pushed to the OpenShift Container Registry. In addition to the included Jenkins for CI, you can also integrate your own build / CI environment with OpenShift Container Platform using RESTful APIs, as well as use any API-compliant image registry. OpenShift Container Platform Developer Guide OpenShift Container Platform Architecture: Source-to-Image (S2I) Build OpenShift Container Platform Using Images: Other Images → Jenkins have to supply a custom .npmrc file for the build that contains a URL, user name, and password. For security reasons, you do not want to expose your credentials in the application image. Using this example scenario, you can add an input secret to a new BuildConfig: Create the secret, if it does not exist: $ oc secrets new secret-npmrc .npmrc=~/.npmrc This creates a new secret named secret-npmrc, which contains the base64 encoded content of the ~/.npmrc file. Add the secret to the source section in the existing BuildConfig: source: git: uri: secrets: - secret: name: secret-npmrc To include the secret in a new BuildConfig, run the following command: $ oc new-build \ openshift/nodejs-010-centos7~ \ --build-secret secret-npmrc OpenShift Container Platform Developer Guide: Input Secrets You can design your container image management and build process to use container layers so that you can separate control. For example, an operations team manages base images, while architects manage middleware, runtimes, databases, and other solutions. Developers can then focus on application layers and just write code. Because new vulnerabilities are identified daily, you need to proactively check container content over time. To do this, you should integrate automated security testing into your build or CI process. For example: SAST / DAST – Static and Dynamic security testing tools. Scanners for real-time checking against known vulnerabilities. Tools like these catalog the open source packages in your container, notify you of any known vulnerabilities, and update you when new vulnerabilities are discovered in previously scanned packages. Your CI process should include policies that flag builds with issues discovered by security scans so that your team can take appropriate action to address those issues. You should sign your custom built containers to ensure that nothing is tampered with between build and deployment. Red Hat Enterprise Linux Atomic Host Managing Containers: Signing Container Images
https://docs.openshift.com/container-platform/3.7/security/build_process.html
2020-09-18T11:50:36
CC-MAIN-2020-40
1600400187390.18
[]
docs.openshift.com
Prior name: Tornevall Networks RaspTunnel. What is it? Tornevall Networks RaspTunnel was an experimental project where micro servers in a DMZ-controlled environment is hosting the primary IPv6 connectivity. Main purpose is ipv6 GRE-tunneling with the help from your primary router, which should be configured to route traffic to a DMZ-address, where this micro server, known as Raspberry pi, are the tunnel gateway.. What is the goal?. How does it work?. IPv6 Networks DHCP delegation is normally installed clientside. It is also recommended to run dhcp on a /64-prefix. On prefixes higher (65 and up), it may not work properly. Hurricane electric assigns prefixes by automation. The tunnel types there are based on the SIT-protocol. By requesting directly from them, there won't be any slowdowns in the routing. Visual view
https://docs.tornevall.net/pages/viewpage.action?pageId=8618001&preview=%2F8618001%2F62423145%2Ftornevall-openvpn
2020-09-18T11:51:23
CC-MAIN-2020-40
1600400187390.18
[array(['/download/attachments/8618001/tornevall-openvpn.png?version=3&modificationDate=1597146452000&api=v2', 'tornevall-openvpn'], dtype=object) ]
docs.tornevall.net
Any Pivotal Platform (formerly known as Cloud Foundry) deployment can send metrics and events to Datadog. The data helps you track the health and availability of all nodes in the deployment, monitor the jobs they run, collect metrics from the Loggregator Firehose, and more. Use this page to learn how to monitor your application on Pivotal Platform and your Pivotal Platform cluster. There are three main components for the Pivotal Platform integration with Datadog. First, use the buildpack to collect custom metrics from your applications. Then, use the BOSH Release to collect metrics from the platform. Finally, use the Loggregator Firehose Nozzle to collect all of the other metrics from your infrastructure. For Pivotal Platform, you have the option to install the Datadog integration tiles with Ops Manager: Use the Datadog Pivotal Platform Buildpack to monitor your Pivotal Platform application. This is a supply buildpack for Pivotal Platform that installs a Datadog DogStatsD binary and Datadog Agent in the container your app is running on. Our buildpack uses the Pivotal Platform multi-buildpack feature that was introduced in version 1.12. For older versions, Pivotal Platform provides a back-port of this feature in the form of a buildpack. You must install and configure this backport in order to use Datadog’s buildpack: Upload the multi-buildpack back-port. Download the latest multi-build pack release and upload it to your Pivotal Platform environment. cf create-buildpack multi-buildpack ./multi-buildpack-v-x.y.z.zip 99 --enable Add a multi-buildpack manifest to your application. As detailed on the multi-buildpack back-port repo, create a multi-buildpack.yml file at the root of your application and configure it for your environment. Add a link to the Datadog Pivotal Platform Buildpack and to your regular buildpack: buildpacks: - "" - "" # Replace this with your regular buildpack The URLs for the Datadog Buildpack are: Do not use the latest version here (replace x.y.z by the specific version you want to use). Important: Your regular buildpack should be the last in the manifest to act as a final buildpack. To learn more refer to Pivotal Platform documentation about buildpacks. Push your application with the multi-buildpack Ensure that the multi-buildpack is the buildpack selected by Pivotal Platform for your application: cf push <YOUR_APP> -b multi-buildpack Upload the Datadog Pivotal Platform Buildpack. Download the latest Datadog build pack release and upload it to your Pivotal Platform environment. cf create-buildpack datadog-cloudfoundry-buildpack ./datadog-cloudfoundry-buildpack-latest.zip Push your application with the Datadog buildpack and your buildpacks. The process to push your application with multiple buildpacks is described in the Pivotal Platform documentation. cf push <YOUR_APP> --no-start -b binary_buildpack cf v3-push <YOUR_APP> -b datadog-cloudfoundry-buildpack -b <YOUR-BUILDPACK-1> -b <YOUR-FINAL-BUILDPACK> Important: If you were using a single buildpack before, it should be the last one loaded so it acts as a final buildpack. To learn more refer to Pivotal Platform documentation about buildpacks. If you are a meta-buildpack user, Datadog’s buildpack can be used as a decorator out of the box. Note: The meta-buildpack has been deprecated by pivotal in favor of the multi-buildpack and Datadog might drop support for it in a future release. Set an API Key in your environment to enable the buildpack: # set the environment variable cf set-env <YOUR_APP> DD_API_KEY <DD_API_KEY> # restage the application to make it pick up the new environment variable and use the buildpack cf restage <YOUR_APP> The Datadog Trace Agent (APM) is enabled by default. Learn more about setup for your specific language in APM Setup. To start collecting logs from your application in Pivotal Platform, the Agent contained in the buildpack needs to be activated and log collection enabled. cf set-env <YOUR_APP_NAME> RUN_AGENT true cf set-env <YOUR_APP_NAME> DD_LOGS_ENABLED true # Disable the Agent core checks to disable system metrics collection cf set-env <YOUR_APP_NAME> DD_ENABLE_CHECKS false # Redirect Container Stdout/Stderr to a local port so the Agent collects the logs cf set-env <YOUR_APP_NAME> STD_LOG_COLLECTION_PORT <PORT> # Configure the Agent to collect logs from the wanted port and set the value for source and service cf set-env <YOUR_APP_NAME> LOGS_CONFIG '[{"type":"tcp","port":"<PORT>","source":"<SOURCE>","service":"<SERVICE>"}]' # restage the application to make it pick up the new environment variable and use the buildpack cf restage <YOUR_APP_NAME> The following parameters can be used to configure log collection: Example: A Java application named app01 is running in Pivotal Platform. The following configuration redirects the container stdout/ stderr to the local port 10514. It then configures the Agent to collect logs from that port while setting the proper value for service and source: # Redirect Stdout/Stderr to port 10514 cf set-env app01 STD_LOG_COLLECTION_PORT 10514 # Configure the Agent to listen to port 10514 cf set-env app01 LOGS_CONFIG '[{"type":"tcp","port":"10514","source":"java","service":"app01"}]' For Agent v6.12+, when using a proxy configuration with the Buildpack, a verification is made to check if the connection can be established. Log collection is started depending on the result of this test. If the connection fails to be established and the log collection is not started, an event like the one below is sent to your Datadog event stream. Set up a monitor to track these events and be notified when a misconfigured Buildpack is deployed: To build this buildpack, edit the relevant files and run the ./build script. To upload it, run ./upload. See the DogStatsD documentation for more information. There is a list of DogStatsD libraries compatible with a wide range of applications. There are two points of integration with Datadog, each of which achieves a different goal: You must have a working Cloud Foundry deployment and access to the BOSH Director that manages it. You also need BOSH CLI to deploy each integration. You may use either major version of the CLI-v1 or v2. Datadog provides tarballs of the Datadog Agent packaged as a BOSH release. Upload the latest release to your BOSH Director and then install it on every node in your deployment as an addon (the same way a Director deploys the BOSH Agent to all nodes). # BOSH CLI v1 bosh upload release # BOSH CLI v2 bosh upload-release -e <BOSH_ENV> If you’d like to create your own release, see the Datadog Agent BOSH Release repository. Add the following to your BOSH Director’s runtime configuration file (e.g. runtime.yml): --- releases: - name: datadog-agent version: <VERSION_YOU_UPLOADED> # specify the real version (x.y.z not 'latest') addons: - name: datadog jobs: - name: dd-agent release: datadog-agent properties: dd: use_dogstatsd: true dogstatsd_port: 18125 # Many CF deployments have a StatsD already on port 8125 api_key: <DD_API_KEY> tags: ["<KEY:VALUE>"] # any tags you wish generate_processes: true # to enable the process check To see which datadog-agent release version you uploaded earlier, run bosh releases. Check if you have a previously configured runtime-config by running: # BOSH CLI v1 `bosh runtime-config` # BOSH CLI v2 bosh -e <BOSH_ENV> runtime-config In Bosh v2, if the runtime.yml file is empty, you should see the response: No runtime config. For each extra Agent check you want to enable across your deployment, add its configuration under the properties.dd.integrations key, for example: properties: dd: integrations: directory: init_config: {} instances: directory: "." #process: # init_config: {} #... The configuration under each check name should look the same as if you were configuring the check in its own file in the Agent’s conf.d directory. Everything you configure in runtime.yml applies to every node. You cannot configure a check for a subset of nodes in your deployment. To customize configuration for the default checks-system, network, disk, and ntp-see the full list of configuration options for the Datadog Agent BOSH release. # BOSH CLI v1 bosh update runtime-config runtime.yml # BOSH CLI v2 bosh update-runtime-config -e <BOSH_ENV> runtime.yml # BOSH CLI v1 bosh deployment <YOUR_DEPLOYMENT_MANIFEST>.yml bosh -n deploy --recreate # BOSH CLI v2 bosh -n -d <YOUR_DEPLOYMENT> -e <BOSH_ENV> deploy --recreate <YOUR_DEPLOYMENT_MANIFEST>.yml Since runtime configuration applies globally, BOSH redeploys every node in your deployment. If you have more than one deployment, redeploy all deployments to install the Datadog Agent everywhere. To check if the Agent installs were successful, filter by cloudfoundry on the Host map page in Datadog. The Agent BOSH release tags each host with a generic cloudfoundry tag. Optionally group hosts by any tag, such as bosh_job, as in the following screenshot: Click on any host to zoom in, then click system within its hexagon to make sure Datadog is receiving metrics for it: Datadog provides a BOSH release of the Datadog Firehose Nozzle. After uploading the release to your Director, add the Nozzle to an existing deployment, or create a new deployment that only includes the Nozzle. The instructions below assume you’re adding it to an existing Pivotal Platform deployment that has a working Loggregator Firehose. # BOSH CLI v1 bosh upload release # BOSH CLI v2 bosh upload-release -e <BOSH_ENV> If you’d like to create your own release, see the Datadog Firehose Nozzle release repository. In the manifest that contains your UAA configuration, add a new client for the Datadog Nozzle so the job(s) can access the Firehose: uaa: clients: datadog-firehose-nozzle: access-token-validity: 1209600 authorities: doppler.firehose,cloud_controller.admin_read_only authorized-grant-types: client_credentials override: true scope: doppler.firehose,cloud_controller.admin_read_only secret: <YOUR_SECRET> Redeploy the deployment to add the user. Configure one or more Nozzle jobs in your main Pivotal Platform deployment manifest (e.g. cf-manifest.yml): jobs: #- instances: 4 # name: some_other_job # ... - instances: 1 # add more instances if one job cannot keep up with the Firehose name: datadog_nozzle_z1 networks: - name: cf1 # some network you've configured elsewhere in the manifest resource_pool: small_z1 # some resource_pool you've configured elsewhere in the manifest templates: - name: datadog-firehose-nozzle release: datadog-firehose-nozzle properties: datadog: api_key: <YOUR_DATADOG_API_KEY> api_url: flush_duration_seconds: 15 # seconds between flushes to Datadog. Default is 15. loggregator: # do NOT append '/firehose' or even a trailing slash to the URL; 'ws://<host>:<port>' works traffic_controller_url: <LOGGREGATOR_URL> # e.g. ws://traffic-controller.your-cf-domain.com:8081 nozzle: deployment: <DEPLOYMENT_NAME> # tags each firehose metric with 'deployment:<DEPLOYMENT_NAME>' subscription_id: datadog-nozzle # can be anything (firehose streams data evenly to all jobs using the same subscription_id) # disable_access_control: true # for development only # insecure_ssl_skip_verify: true # for development only; enable if your UAA does not use a verifiable cert uaa: client: datadog-firehose-nozzle # client name you just configured client_secret: <SECRET_YOU_JUST_CONFIGURED> url: <UAA_URL> # e.g. To see all available configuration options, check the Datadog Firehose Nozzle repository. In the same manifest, add the Datadog Nozzle release name and version: releases: # - name: "<SOME_OTHER_RELEASE>" # version: <x.y.z> # ... - name: datadog-firehose-nozzle version: "<VERSION_YOU_UPLOADED>" # specify the real version (x.y.z not 'latest') To see which datadog-firehose-nozzle release version you uploaded earlier, run bosh releases. # BOSH CLI v1 bosh deployment cf-manifest.yml bosh -n deploy --recreate # BOSH CLI v2 bosh -n -d cf-manifest -e <BOSH_ENV> deploy --recreate cf-manifest.yml On the Metrics explorer page in Datadog, search for metrics beginning cloudfoundry.nozzle: The following metrics are sent by the Datadog Firehose Nozzle ( cloudfoundry.nozzle). The Datadog Agent release does not send any special metrics of its own, just the usual metrics from any Agent checks you configure in the Director runtime config (and, by default, system, network, disk, and ntp metrics). The Datadog Firehose Nozzle only collects CounterEvents (as metrics, not events), ValueMetrics, and ContainerMetrics; it ignores LogMessages and Errors.
https://docs.datadoghq.com/integrations/pivotal_platform/
2020-01-17T19:24:40
CC-MAIN-2020-05
1579250590107.3
[]
docs.datadoghq.com
Provides dock panels to the End-User Report Designer. Namespace: DevExpress.XtraReports.UserDesigner Assembly: DevExpress.XtraReports.v19.2.Extensions.dll public class XRDesignDockManager : DockManager, IDesignControl, IDesignPanelListener Public Class XRDesignDockManager Inherits DockManager Implements IDesignControl, IDesignPanelListener Combined with the XRDesignMdiController and XRDesignRibbonController (or, XRDesignBarManager) components, the XRDesignDockManager creates a desktop reporting application that carries the bars, or Ribbon interface. To access the DockManager of an End-User Report Designer, use the IDesignForm.DesignDockManager property. To access a specific dock panel by specifying its type, name, or index in the XRDesignDockManager.DesignDockPanels.2 library in your application to be able to access the Report Designer's dock panel settings. using DevExpress.XtraBars.Docking; using DevExpress.XtraReports.UI; using DevExpress.XtraReports.UserDesigner; // ... private void button1_Click(object sender, System.EventArgs e) { // Create a Design Tool with an assigned report instance. ReportDesignTool designTool = new ReportDesignTool(new XtraReport1()); // Access the standard or ribbon-based Designer form and its MDI Controller. // IDesignForm designForm = designTool.DesignForm; IDesignForm designForm = designTool.DesignRibbonForm; // Access and hide the Group and Sort panel. GroupAndSortDockPanel groupSort = (GroupAndSortDockPanel)designForm.DesignDockManager[DesignDockPanelType.GroupAndSort]; groupSort.Visibility = DockVisibility.AutoHide; // Access and hide the Report Explorer. ReportExplorerDockPanel reportExplorer = (ReportExplorerDockPanel)designForm.DesignDockManager[DesignDockPanelType.ReportExplorer]; reportExplorer.Visibility = DockVisibility.AutoHide; // Access and hide the Report Gallery. ReportGalleryDockPanel reportGallery = (ReportGalleryDockPanel)designForm.DesignDockManager[DesignDockPanelType.ReportGallery]; reportGallery.Visibility = DockVisibility.AutoHide; // Access the Property Grid and customize some of its settings. PropertyGridDockPanel propertyGrid = (PropertyGridDockPanel)designForm.DesignDockManager[DesignDockPanelType.PropertyGrid]; propertyGrid.ShowCategories = false; propertyGrid.ShowDescription = false; // Access the Field List and customize some of its settings. FieldListDockPanel fieldList = (FieldListDockPanel)designForm.DesignDockManager[DesignDockPanelType.FieldList]; fieldList.ShowNodeToolTips = false; fieldList.ShowParametersNode = false; // Load a Report Designer in a dialog window. // designTool.ShowDesignerDialog(); designTool.ShowRibbonDesignerDialog(); }
https://docs.devexpress.com/XtraReports/DevExpress.XtraReports.UserDesigner.XRDesignDockManager
2020-01-17T18:21:10
CC-MAIN-2020-05
1579250590107.3
[]
docs.devexpress.com
Thanks Enable Search icon & sticky header Flexible theme provides eight widgets which can be used in Front Page Widgets Area to setup various Section in homepage Note: Please use unique id in "Section ID" field in the widgets and use same ID in the menu for One Page Menu. 1. Flexible: About Us Section 2. Flexible: Services Section 3. Flexible: Portfolio Section 4. Flexible: Testimonials Section 5. Flexible: Call to Action 6. Flexible: Latest News 7. Flexible: Contact Us 8. Flexible: Sponsors
https://docs.mysterythemes.com/flexible/
2020-01-17T18:19:42
CC-MAIN-2020-05
1579250590107.3
[]
docs.mysterythemes.com
Out of Place Country: - Taiwan - Family - Politics - History - Investigative - Author's Point of View Introduction This is a movie about searching and losing. Two threads intertwine and form the whole of the story. One of which is about my husband, a man from a Ji family in Muzha Village, Neimen Township. He could be a descendent of Pingpu tribe, but neither he nor his family remembers their origin. A look at a few of old pictures of Pingpu people sent him off to the search of his origin. However distant our origin is, we spend our lifetime pursuing the disorientated nostalgia, because the homesickness is the ultimate representation of one’s own self-esteem, no matter how vague the image of homeland is in our mind. The other is about the tragedy in which Xiaoling Village, one of the few places in Taiwan comprehensively preserving Pingpu culture, was destroyed overnight during the Typhoon Morakot disaster. In an attempt to restore Pingpu culture through audio and visual media, I set off on a journey of searching. Not willing to let this part of history wiped out in the catastrophe, I tried to record the trauma and sorrow of the locals to produce a documentary. While many still stuck in the post-disaster trauma, we try to extend the time and observe the rising and falling of life and the cycle of the nature, so that we can walk out of sadness and realize the ever-changing nature and wisdom of life. Awards 2012 New Taipei City Documentary Film Festival 2012 South Taiwan Film Festival 2012 New Taipei City Documentary Film Festival 2012 South Taiwan Film Festival 2013 Taiwan International Ethnographic Film Festival 2013 Hong Kong Independent Film Festival 2013 Hong Kong Independent Film Festival Team - Director - Producer - Cinematographer - Editor - Production - Distribution
https://docs.tfi.org.tw/en/film/4467
2020-01-17T20:33:18
CC-MAIN-2020-05
1579250590107.3
[array(['https://docs.tfi.org.tw/sites/default/files/styles/film_banner/public/default_images/img-default-top.jpg?itok=6F8nOs7H', None], dtype=object) array(['https://docs.tfi.org.tw/sites/default/files/styles/film_poster/public/default_images/news-default-poster%402x.jpg?itok=Y4UOkuCn', None], dtype=object) ]
docs.tfi.org.tw
Makes a. using UnityEngine; using System.Collections; [ExecuteInEditMode] public class ExampleClass : MonoBehaviour { public Transform target; void Update() { if (target) transform.LookAt(target); } }
https://docs.unity3d.com/kr/540/ScriptReference/ExecuteInEditMode.html
2020-01-17T20:32:07
CC-MAIN-2020-05
1579250590107.3
[]
docs.unity3d.com
API methods available to smart contract¶ Docker container-based smart contracts can use node REST API. Smart contract developers can use limited list of REST API methods. This list is represented below, these methods are available directly from the container. Addresses methods - GET /addresses/publicKey/{publicKey} GET /addresses/balance/{address} GET /addresses/data/{address} GET /addresses/data/{address}/{key} Crypto methods Privacy methods GET /privacy/{policy-id}/getData/{policy-item-hash} GET /privacy/{policy-id}/getInfo/{policy-item-hash} GET /privacy/{policy-id}/hashes GET /privacy/{policy-id}/recipients Transactions methods Contracts methods A smart contract can use Contracts methods implementing the separated /internal/contracts/ route, which is totally identical to the regular Contracts methods. GET /internal/contracts/{contractId}/{key} GET /internal/contracts/executed-tx-for/{id} GET /internal/contracts/{contractId} - PKI methods Docker contract authorization¶ A smart contract requires an authorization to use the node REST API. There are following steps for the correct REST API methods usage by the smart contract: The following variables should be defined in the Docker contract environment: The Docker contract developer assigns the value of the variable API_TOKENto the request header X-Contract-Api-Token. The node specifies JWT authorization token into the variable API_TOKENfor the contract creation and execution. The contract code should pass the received token in the request header ( X-Contract-Api-Token) each time the node API is accessed.
https://docs.wavesenterprise.com/en/how-to-use/smart-contracts/docker/api-for-contract.html
2020-01-17T18:43:32
CC-MAIN-2020-05
1579250590107.3
[]
docs.wavesenterprise.com
Installation¶ Gjs is a JavaScript engine based on Spidermonkey, which contains bindings to Gtk and other libraries that are used to create graphical user interfaces under GNOME. The method by which Gjs communicates with these libraries is through APIs exposed by the GObject Introspection framework. GNOME Shell is implemented in JavaScript and run by Gjs, and Gjs therefore comes with any GNOME 3 installation. Those who have GNOME 3 installed will not need to install any additional packages. Everyone else can get Gjs through their package managers. On Debian and derived distributions like Ubuntu: apt-get install gjs libgjs-dev On Fedora: dnf install gjs gjs-devel On Arch: pacman -S gjs
https://gjs-tutorial.readthedocs.io/en/latest/install.html
2020-01-17T18:12:28
CC-MAIN-2020-05
1579250590107.3
[]
gjs-tutorial.readthedocs.io
11.13.3 Unstructured Column-Layer Grid Indexable Elements The unstructured column-layer grids have both implicit and explicit topological relationships between the grid elements and a corresponding mixture of implicit and explicit indexing. As shown in Figure 11.13.3-1 , the indexable elements can be organized into three categories: topology, geometry, or additional elements. As with IJK grids, there are a number of “object1 per object2” indexable element kinds for unstructured column-layer grids, which may be used in favor of “object1” indices when the latter are more complicated. For example, for an unstructured column-layer grid, for a column of the model with N sides, there are N+2 “faces per cell”: 0=top, 1=bottom, 2-(N+1) are side faces following the explicit pillars per column ordering. This is a very simple enumeration, versus the “faces” indexing, which depends on the grid faulting and column edge enumeration. Faces per cell indices appear in the grid connections representation and the blocked wellbore representation.
http://docs.energistics.org/RESQML/RESQML_TOPICS/RESQML-000-285-0-C-sv2010.html
2021-05-06T01:28:04
CC-MAIN-2021-21
1620243988724.75
[array(['RESQML_IMAGES/RESQML-000-124-0-sv2010.png', None], dtype=object)]
docs.energistics.org
How It Works Teradata FastLoad processes a series of Teradata FastLoad commands and Teradata SQL statements entered either interactively or in batch mode. Use the Teradata FastLoad commands for session control and data handling of the data transfers. The Teradata SQL statements create, maintain, and drop tables on the Teradata Database. During a load operation, Teradata FastLoad inserts the data from each record of the data source into one row of the table on a Teradata Database. The table on the Teradata Database receiving the data must be empty and have no defined secondary indexes. Note: Teradata FastLoad does not load duplicate rows from the data source to the Teradata Database. (A duplicate row is one in which every field contains the exact same data as the fields of an existing row.) This is true even for MULTISET tables. To load duplicate rows in a MULTISET table, use MultiLoad.
https://docs.teradata.com/r/PE6mc9dhvMF3BuuHeZzsGg/JT97~um0NdGbCrh~nWKN0A
2021-05-06T00:29:27
CC-MAIN-2021-21
1620243988724.75
[]
docs.teradata.com
Release Notes¶ Upcoming Release¶ Warning The features listed below are not released yet, but will be part of the next release! To use the features already you have to install the master branch, e.g. pip install git+. When using iterative LOPF with n.ilopf()for impedance updates of lines, the attributes p_nomand s_nomof lines and links are reset to their original values after final iteration. Bump minimum pandasrequirement to version 1.1.0. When solving n.lopf(pyomo=False), PyPSA now supports setting lower and upper capacity bounds per bus and carrier. These are specified in the columns n.buses['nom_min_{carrier}']and n.buses['nom_max_{carrier}']respectively. For example, if multiple generators of carrier “wind” are at bus “bus1”, the combined capacity is limited to 1000 MW by setting n.buses.loc['bus1', 'nom_max_wind'] = 1000(a minimal capacity is forced by setting n.buses.loc['bus1', 'nom_min_wind']). In the same manner the combined p_nomof components StorageUnitand e_nomof components Storecan be limited. Fix setting marginand boundarieswhen plotting a network with geomap=False. Adjust log file creation for CPLEX version 12.10 and higher. network.snapshotsare now a property, hence assigning values with network.snapshots = values `` is the same as ``network.set_snapshots(values) network.snapshot_weightingsare now subdivided into weightings for the objective function, generators and stores/storage units. Objective weightings determine the multiplier of the marginal costs in the objective function of the LOPF. Generator weightings specify the impact of generators in a GlobalConstraint. Store weightings define the elapsed hours for the charge, discharge, standing loss and spillage of storage units and stores in order to determine the current state of charge. PyPSA still supports setting snapshot_weightingswith a pandas.Series. In this case, the weightings are uniformly applied to all columns of the new snapshot_weightings pandas.DataFrame. The function geo.area_from_lon_lat_polywas deprecated and will be removed in v0.19. PyPSA 0.17.1 (15th July 2020)¶ This release contains bug fixes and extensions to the features for optimization when not using Pyomo. N-1 security-constrained linear optimal power flow is now also supported without pyomo by running network.sclopf(pyomo=False). Added support for the FICO Xpress commercial solver for optimization withhout pyomo, i.e. pyomo=False. There was a bug in the LOPF with pyomo=Falsewhereby if some Links were defined with multiple outputs (i.e. bus2, bus3, etc. were defined), but there remained some Links without multiple outputs (bus2, bus3, etc. set to ""), then the Links without multiple outputs were assigned erroneous non-zero values for p2, p3, etc. in the LOPF with pyomo=False. Now p2, p3, etc. revert to the default value for Links where bus2, bus3, etc. are not defined, just like for the LOPF with pyomo=True. Handle double-asterisk prefix in solution_fnwhen solving n.lopf(pyomo=False)using CBC. When solving n.lopf(pyomo=False, store_basis=True, solver_name="cplex")an error raised by trying to store a non-existing basis is caught. Add compatibility for Pyomo 5.7. This is also the new minimum requirement. Fixed bug when saving dual variables of the line volume limit. Now using dual from the second last iteration in pypsa.linopf, because last iteration returns NaN (no optimisation of line capacities in final iteration). Added tracking of iterations of global constraints in the optimisation. When solving n.lopf(pyomo=False), PyPSA now constrains the dispatch variables for non extendable components with actual constraints, not with standard variable bounds. This allows retrieving shadow prices for all dispatch variables when running n.lopf(pyomo=False, keep_shadowprices=True). Can now cluster lines with different static s_max_puvalues. Time-varying s_max_puare not supported in clustering. Improved handling of optional dependencies for network clustering functionalities ( sklearnand community). Thanks to Pietro Belotti from FICO for adding the Xpress support, to Fabian Neumann (KIT) and Fabian Hofmann (FIAS) for all their hard work on this release, and to all those who fixed bugs and reported issues. PyPSA 0.17.0 (23rd March 2020)¶ This release contains some minor breaking changes to plotting, some new features and bug fixes. For plotting geographical features basemapis not supported anymore. Please use cartopyinstead. Changes in the plotting functions n.plot()and n.iplot()include some breaking changes: A set of new arguments were introduced to separate style parameters of the different branch components: link_colors, link_widths, transformer_colors, transformer_widths, link_cmap, transformer_cmap line_widths, line_colors, and line_cmapnow only apply for lines and can no longer be used for other branch types (links and transformers). Passing a pandas.Series with a pandas.MultiIndex will raise an error. Additionally, the function n.iplot() has new arguments line_text, link_text, transformer_textto configure the text displayed when hovering over a branch component. The function directed_flow()now takes only a pandas.Series with single pandas.Index. The argument bus_colorscalein n.iplot()was renamed to bus_cmap. The default colours changed. If non-standard output fields in the time-dependent network.components_t(e.g. network.links_t.p2when there are multi-links) were exported, then PyPSA will now also import them automatically without requiring the use of the override_component_attrsargument. Deep copies of networks can now be created with a subset of snapshots, e.g. network.copy(snapshots=network.snapshots[:2]). When using the pyomo=Falseformulation of the LOPF ( network.lopf(pyomo=False)): It is now possible to alter the objective function. Terms can be added to the objective via extra_functionalityusing the function pypsa.linopt.write_objective(). When a pure custom objective function needs to be declared, one can set skip_objective=True. In this case, only terms defined through extra_functionalitywill be considered in the objective function. Shadow prices of capacity bounds for non-extendable passive branches are parsed (similar to the pyomo=Truesetting) Fixed pypsa.linopf.define_kirchhoff_constraints()to handle exclusively radial network topologies. CPLEX is now supported as an additional solver option. Enable it by installing the cplex package (e.g. via pip install cplexor conda install -c ibmdecisionoptimization cplex) and setting solver_name='cplex' When plotting, bus_sizesare now consistent when they have a pandas.MultiIndexor a pandas.Index. The default is changed to bus_sizes=0.01because the bus sizes now relate to the axis values. When plotting, bus_alphacan now be used to add an alpha channel which controls the opacity of the bus markers. The argument bus_colorscan a now also be a pandas.Series. The carriercomponent has two new columns ‘color’ and ‘nice_name’. The color column is used by the plotting function if bus_sizesis a pandas.Series with a MultiIndex and bus_colorsis not explicitly defined. The function pypsa.linopf.ilopf()can now track the intermediate branch capacities and objective values for each iteration using the track_iterationskeyword. Fixed unit commitment: when min_up_timeof committable generators exceeds the length of snapshots. when network does not feature any extendable generators. Fixed import from pandapower for transformers not based on standard types. The various Jupyter Notebook examples are now available on the binder platform. This allows new users to interactively run and explore the examples without the need of installing anything on their computers. Minor adjustments for compatibility with pandas v1.0.0. After optimizing, the network has now an additional attribute objective_constantwhich reflects the capital cost of already existing infrastructure in the network referring to p_nomand s_nomvalues. Thanks to Fabian Hofmann (FIAS) and Fabian Neumann (KIT) for all their hard work on this release, and to all those who reported issues. PyPSA 0.16.1 (10th January 2020)¶ This release contains a few minor bux fixes from the introduction of nomopyomo in the previous release, as well as a few minor features. When using the nomopyomoformulation of the LOPF with network.lopf(pyomo=False), PyPSA was not correcting the bus marginal prices by dividing by the network.snapshot_weightings, as is done in the pyomoformulation. This correction is now applied in the nomopyomoformulation to be consistent with the pyomoformulation. (The reason this correction is applied is so that the prices have a clear currency/MWh definition regardless of the snapshot weightings. It also makes them stay roughly the same when snapshots are aggregated: e.g. if hourly simulations are sampled every n-hours, and the snapshot weighting is n.) The status, termination_conditionthat the network.lopfreturns is now consistent between the nomopyomoand pyomoformulations. The possible return values are documented in the LOPF docstring, see also the LOPF documentation. Furthermore in the nomopyomoformulation, the solution is still returned when gurobi finds a suboptimal solution, since this solution is usually close to optimal. In this case the LOPF returns a statusof warningand a termination_conditionof suboptimal. For plotting with network.plot()you can override the bus coordinates by passing it a layouterfunction from networkx. See the docstring for more information. This is particularly useful for networks with no defined coordinates. For plotting with network.iplot()a background from mapbox can now be integrated. Please note that we are still aware of one implementation difference between nomopyomo and pyomo, namely that nomopyomo doesn’t read out shadow prices for non-extendable branches, see the github issue. PyPSA 0.16.0 (20th December 2019)¶ This release contains major new features. It is also the first release to drop support for Python 2.7. Only Python 3.6 and 3.7 are supported going forward. Python 3.8 will be supported as soon as the gurobipy package in conda is updated. A new version of the linear optimal power flow (LOPF) has been introduced that uses a custom optimization framework rather than Pyomo. The new framework, based on nomoypomo, uses barely any memory and is much faster than Pyomo. As a result the total memory usage of PyPSA processing and gurobi is less than a third what it is with Pyomo for large problems with millions of variables that take several gigabytes of memory (see this graphical comparison for a large network optimization). The new framework is not enabled by default. To enable it, use network.lopf(pyomo=False). Almost all features of the regular network.lopfare implemented with the exception of minimum down/up time and start up/shut down costs for unit commitment. If you use the extra_functionalityargument for network.lopfyou will need to update your code for the new syntax. There is documentation for the new syntax as well as a Jupyter notebook of examples. Distributed active power slack is now implemented for the full non-linear power flow. If you pass network.pf()the argument distribute_slack=True, it will distribute the slack power across generators proportional to generator dispatch by default, or according to the distribution scheme provided in the argument slack_weights. If distribute_slack=Falseonly the slack generator takes up the slack. There is further documentation. Unit testing is now performed on all of GNU/Linux, Windows and MacOS. NB: You may need to update your version of the package six. Special thanks for this release to Fabian Hofmann for implementing the nomopyomo framework in PyPSA and Fabian Neumann for providing the customizable distributed slack. PyPSA 0.15.0 (8th November 2019)¶ This release contains new improvements and bug fixes. The unit commitment (UC) has been revamped to take account of constraints at the beginning and end of the simulated snapshotsbetter. This is particularly useful for rolling horizon UC. UC now accounts for up-time and down-time in the periods before the snapshots. The generator attribute initial_statushas been replaced with two attributes up_time_beforeand down_time_beforeto give information about the status before network.snapshots. At the end of the simulated snapshots, minimum up-times and down-times are also enforced. Ramping constraints also look before the simulation at previous results, if there are any. See the unit commitment documentation for full details. The UC example has been updated with a rolling horizon example at the end. Documentation is now available on readthedocs, with information about functions pulled from the docstrings. The dependency on cartopy is now an optional extra. PyPSA now works with pandas 0.25 and above, and networkx above 2.3. A bug was fixed that broke the Security-Constrained Linear Optimal Power Flow (SCLOPF) constraints with extendable lines. Network plotting can now plot arrows to indicate the direction of flow by passing network.plotan flowargument. The objective sense ( minimizeor maximize) can now be set (default remains minimize). The network.snapshot_weightingsis now carried over when the network is clustered. Various other minor fixes. We thank colleagues at TERI for assisting with testing the new unit commitment code, Clara Büttner for finding the SCLOPF bug, and all others who contributed issues and pull requests. PyPSA 0.14.1 (27th May 2019)¶ This minor release contains three small bug fixes: Documentation parses now correctly on PyPI Python 2.7 and 3.6 are automatically tested using Travis PyPSA on Python 2.7 was fixed This will also be the first release to be available directly from conda-forge. PyPSA 0.14.0 (15th May 2019)¶ This release contains a new feature and bug fixes. Network plotting can now use the mapping library cartopy as well as basemap, which was used in previous versions of PyPSA. The basemap developers will be phasing out basemap over the next few years in favour of cartopy (see their end-of-life announcement). PyPSA now defaults to cartopy unless you tell it explicitly to use basemap. Otherwise the plotting interface is the same as in previous versions. Optimisation now works with the newest version of Pyomo 5.6.2 (there was a Pyomo update that affected the opt.py expression for building linear sums). A critical bug in the networkclustering sub-library has been fixed which was preventing the capital_cost parameter of conventional generators being handled correctly when networks are aggregated. Network.consistency_check() now only prints necessary columns when reporting NaN values. Import from pandapower networks has been updated to pandapower 2.0 and to include non-standard lines and transformers. We thank Fons van der Plas and Fabian Hofmann for helping with the cartopy interface, Chloe Syranidis for pointing out the problem with the Pyomo 5.6.2 update, Hailiang Liu for the consistency check update and Christian Brosig for the pandapower updates. PyPSA 0.13.2 (10th January 2019)¶ This minor release contains small new features and fixes. Optimisation now works with Pyomo >= 5.6 (there was a Pyomo update that affected the opt.py LConstraint object). New functional argument can be passed to Network.lopf: extra_postprocessing(network,snapshots,duals), which is called after solving and results are extracted. It can be used to get the values of shadow prices for constraints that are not normally extracted by PyPSA. In the lopf kirchhoff formulation, the cycle constraint is rescaled by a factor 1e5, which improves the numerical stability of the interior point algorithm (since the coefficients in the constraint matrix were very small). Updates and fixes to networkclustering, io, plot. We thank Soner Candas of TUM for reporting the problem with the most recent version of Pyomo and providing the fix. PyPSA 0.13.1 (27th March 2018)¶ This release contains bug fixes for the new features introduced in 0.13.0. Export network to netCDF file bug fixed (components that were all standard except their name were ignored). Import/export network to HDF5 file bug fixed and now works with more than 1000 columns; HDF5 format is no longer deprecated. When networks are copied or sliced, overridden components (introduced in 0.13.0) are also copied. Sundry other small fixes. We thank Tim Kittel for pointing out the first and second bugs. We thank Kostas Syranidis for not only pointing out the third issue with copying overridden components, but also submitting a fix as a pull request. For this release we acknowledge funding to Tom Brown from the RE-INVEST project. PyPSA 0.13.0 (25th January 2018)¶ This release contains new features aimed at coupling power networks to other energy sectors, fixes for library dependencies and some minor internal API changes. If you want to define your own components and override the standard functionality of PyPSA, you can now override the standard components by passing pypsa.Network() the arguments override_componentsand override_component_attrs, see the section on Custom Components. There are examples for defining new components in the git repository in examples/new_components/, including an example of overriding network.lopf()for functionality for combined-heat-and-power (CHP) plants. The Linkcomponent can now be defined with multiple outputs in fixed ratio to the power in the single input by defining new columns bus2, bus3, etc. ( busfollowed by an integer) in network.linksalong with associated columns for the efficiencies efficiency2, efficiency3, etc. The different outputs are then proportional to the input according to the efficiency; see sections Link with multiple outputs or inputs and Controllable branch flows: links and the example of a CHP with a fixed power-heat ratio. Networks can now be exported to and imported from netCDF files with network.export_to_netcdf()and network.import_from_netcdf(). This is faster than using CSV files and the files take up less space. Import and export with HDF5 files, introduced in PyPSA 0.12.0, is now deprecated. The export and import code has been refactored to be more general and abstract. This does not affect the API. The internally-used sets such as pypsa.components.all_componentsand pypsa.components.one_port_componentshave been moved from pypsa.componentsto network, i.e. network.all_componentsand network.one_port_components, since these sets may change from network to network. For linear power flow, PyPSA now pre-calculates the effective per unit reactance x_pu_efffor AC lines to take account of the transformer tap ratio, rather than doing it on the fly; this makes some code faster, particularly the kirchhoff formulation of the LOPF. PyPSA is now compatible with networkx 2.0 and 2.1. PyPSA now requires Pyomo version greater than 5.3. PyPSA now uses the Travis CI continuous integration service to test every commit in the PyPSA GitHub repository. This will allow us to catch library dependency issues faster. We thank Russell Smith of Edison Energy for the pull request for the effective reactance that sped up the LOPF code and Tom Edwards for pointing out the Pyomo version dependency issue. For this release we also acknowledge funding to Tom Brown from the RE-INVEST project. PyPSA 0.12.0 (30th November 2017)¶ This release contains new features and bug fixes. Support for Pyomo’s persistent solver interface, so if you’re making small changes to an optimisation model (e.g. tweaking a parameter), you don’t have to rebuild the model every time. To enable this, network_lopfhas been internally split into build_model, prepare_solverand solveto allow more fine-grained control of the solving steps. Currently the new Pyomo PersistentSolver interface is not in the main Pyomo branch, see the pull request; you can obtain it with pip install git+ Lines and transformers (i.e. passive branches) have a new attribute s_max_puto restrict the flow in the OPF, just like p_max_pufor generators and links. It works by restricting the absolute value of the flow per unit of the nominal rating abs(flow) <= s_max_pu*s_nom. For lines this can represent an n-1 contingency factor or it can be time-varying to represent weather-dependent dynamic line rating. The marginal_costattribute of generators, storage units, stores and links can now be time dependent. When initialising the Network object, i.e. network = pypsa.Network(), the first keyword argument is now import_nameinstead of csv_folder_name. With import_namePyPSA recognises whether it is a CSV folder or an HDF5 file based on the file name ending and deals with it appropriately. Example usage: nw1 = pypsa.Network("my_store.h5")and nw2 = pypsa.Network("/my/folder"). The keyword argument csv_folder_nameis still there but is deprecated. The value network.objectiveis now read from the Pyomo results attribute Upper Boundinstead of Lower Bound. This is because for MILP problems under certain circumstances CPLEX records the Lower boundas the relaxed value. Upper boundis correctly recorded as the integer objective value. Bug fix due to changes in pandas 0.21.0: A bug affecting various places in the code, including causing network.lopfto fail with GLPK, is fixed. This is because in pandas 0.21.0 the sum of an empty Series/DataFrame returns NaN, whereas before it returned zero. This is a subtle bug; we hope we’ve fixed all instances of it, but get in touch if you notice NaNs creeping in where they shouldn’t be. All our tests run fine. Bug fix due to changes in scipy 1.0.0: For the new version of scipy, csgraphhas to be imported explicit. Bug fix: A bug whereby logging level was not always correctly being seen by the OPF results printout is fixed. Bug fix: The storage unit spillage had a bug in the LOPF, whereby it was not respecting network.snapshot_weightingsproperly. We thank René Garcia Rosas, João Gorenstein Dedecca, Marko Kolenc, Matteo De Felice and Florian Kühnlenz for promptly notifying us about issues. PyPSA 0.11.0 (21st October 2017)¶ This release contains new features but no changes to existing APIs. There is a new function network.iplot()which creates an interactive plot in Jupyter notebooks using the plotly library. This reveals bus and branch properties when the mouse hovers over them and allows users to easily zoom in and out on the network. See the SciGRID example for a showcase of this feature and also the (sparse) documentation Plotting Networks. There is a new function network.madd()for adding multiple new components to the network. This is significantly faster than repeatedly calling network.add()and uses the functions network.import_components_from_dataframe()and network.import_series_from_dataframe()internally. Documentation and examples can be found at Adding and removing multiple components. There are new functions network.export_to_hdf5()and network.import_from_hdf5()for exporting and importing networks as single files in the Hierarchical Data Format. In the network.lopf()function the KKT shadow prices of the branch limit constraints are now outputted as series called mu_lowerand mu_upper. We thank Bryn Pickering for introducing us to plotly and helping to hack together the first working prototype using PyPSA. PyPSA 0.10.0 (7th August 2017)¶ This release contains some minor new features and a few minor but important API changes. There is a new component Global Constraints for implementing constraints that effect many components at once (see also the LOPF subsection Global constraints). Currently only constraints related to primary energy (i.e. before conversion with losses by generators) are supported, the canonical example being CO2 emissions for an optimisation period. Other primary-energy-related gas emissions also fall into this framework. Other types of global constraints will be added in future, e.g. “final energy” (for limits on the share of renewable or nuclear electricity after conversion), “generation capacity” (for limits on total capacity expansion of given carriers) and “transmission capacity” (for limits on the total expansion of lines and links). This replaces the ad hoc network.co2_limitattribute. If you were using this, instead of network.co2_limit = my_capdo network.add("GlobalConstraint", "co2_limit", type="primary_energy", carrier_attribute="co2_emissions", sense="<=", constant=my_cap). The shadow prices of the global constraints are automatically saved in network.global_constraints.mu. The LOPF output network.buses_t.marginal_priceis now defined differently if network.snapshot_weightingsare not 1. Previously if the generator at the top of the merit order had marginal_costc and the snapshot weighting was w, the marginal_pricewas cw. Now it is c, which is more standard. See also Nodal power balances. network.pf()now returns a dictionary of pandas DataFrames, each indexed by snapshots and sub-networks. convergedis a table of booleans indicating whether the power flow has converged; errorgives the deviation of the non-linear solution; n_iterthe number of iterations required to achieve the tolerance. network.consistency_check()now includes checking for potentially infeasible values in generator.p_{min,max}_pu. The PyPSA version number is now saved in network.pypsa_version. In future versions of PyPSA this information will be used to upgrade data to the latest version of PyPSA. network.sclopf()has an extra_functionalityargument that behaves like that for network.lopf(). Component attributes which are strings are now better handled on import and in the consistency checking. There is a new generation investment screening curve example showing the long-term equilibrium of generation investment for a given load profile and comparing it to a screening curve analysis. There is a new logging example that demonstrates how to control the level of logging that PyPSA reports back, e.g. error/warning/info/debug messages. Sundry other bug fixes and improvements. All examples have been updated appropriately. Thanks to Nis Martensen for contributing the return values of network.pf() and Konstantinos Syranidis for contributing the improved network.consistency_check(). PyPSA 0.9.0 (29th April 2017)¶ This release mostly contains new features with a few minor API changes. Unit commitment as a MILP problem is now available for generators in the Linear Optimal Power Flow (LOPF). If you set committable == Truefor the generator, an addition binary online/offline status is created. Minimum part loads, minimum up times, minimum down times, start up costs and shut down costs are implemented. See the documentation at Generator unit commitment constraints and the unit commitment example. Note that a generator cannot currently have both unit commitment and capacity expansion optimisation. Generator ramping limits have also been implemented for all generators. See the documentation at Generator ramping constraints and the unit commitment example. Different mathematically-equivalent formulations for the Linear Optimal Power Flow (LOPF) are now documented in Passive branch flow formulations and the arXiv preprint paper Linear Optimal Power Flow Using Cycle Flows. The new formulations can solve up to 20 times faster than the standard angle-based formulation. You can pass the network.lopffunction the solver_ioargument for pyomo. There are some improvements to network clustering and graphing. API change: The attribute network.nowhas been removed since it was unnecessary. Now, if you do not pass a snapshotsargument to network.pf() or network.lpf(), these functions will default to network.snapshotsrather than network.now. API change: When reading in network data from CSV files, PyPSA will parse snapshot dates as proper datetimes rather than text strings. João Gorenstein Dedecca has also implemented a MILP version of the transmission expansion, see, which properly takes account of the impedance with a disjunctive relaxation. This will be pulled into the main PyPSA code base soon. PyPSA 0.8.0 (25th January 2017)¶ This is a major release which contains important new features and changes to the internal API. Standard types are now available for lines and transformers so that you do not have to calculate the electrical parameters yourself. For lines you just need to specify the type and the length, see Line Types. For transformers you just need to specify the type, see Transformer Types. The implementation of PyPSA’s standard types is based on pandapower’s standard types. The old interface of specifying r, x, b and g manually is still available. The transformer model has been substantially overhauled, see Transformer model. The equivalent model now defaults to the more accurate T model rather than the PI model, which you can control by setting the attribute model. Discrete tap steps are implemented for transformers with types. The tap changer can be defined on the primary side or the secondary side. In the PF there was a sign error in the implementation of the transformer phase_shift, which has now been fixed. In the LPF and LOPF angle formulation the phase_shifthas now been implemented consistently. See the new transformer example. There is now a rudimentary import function for pandapower networks, but it doesn’t yet work with all switches and 3-winding transformers. The object interface for components has been completely removed. Objects for each component are no longer stored in e.g. network.lines["obj"]and the descriptor interface for components is gone. You can only access component attributes through the dataframes, e.g. network.lines. Component attributes are now defined in CSV files in pypsa/component_attrs/. You can access these CSVs in the code via the dictionary network.components, e.g. network.components["Line"]["attrs"]will show a pandas DataFrame with all attributes and their types, defaults, units and descriptions. These CSVs are also sourced for the documentation in Components, so the documentation will always be up-to-date. All examples have been updated appropriately. PyPSA 0.7.1 (26th November 2016)¶ This release contains bug fixes, a minor new feature and more warnings. The unix-only library resourceis no longer imported by default, which was causing errors for Windows users. Bugs in the setting and getting of time-varying attributes for the object interface have been fixed. The Linkattribute efficiencycan now be make time-varying so that e.g. heat pump Coefficient of Performance (COP) can change over time due to ambient temperature variations (see the heat pump example). network.snapshotsis now cast to a pandas.Index. There are new warnings, including when you attach components to non-existent buses. Thanks to Marius Vespermann for promptly pointing out the resource bug. PyPSA 0.7.0 (20th November 2016)¶ This is a major release which contains changes to the API, particularly regarding time-varying component attributes. network.generators_tare no longer pandas.Panels but dictionaries of pandas.DataFrames, with variable columns, so that you can be flexible about which components have time-varying attributes; please read Time-varying data carefully. Essentially you can either set a component attribute e.g. p_max_puof Generator, to be static by setting it in the DataFrame network.generators, or you can let it be time-varying by defining a new column labelled by the generator name in the DataFrame network.generators_t["p_max_pu"]as a series, which causes the static value in network.generatorsfor that generator to be ignored. The DataFrame network.generators_t["p_max_pu"]now only includes columns which are specifically defined to be time-varying, thus saving memory. The following component attributes can now be time-varying: Link.p_max_pu, Link.p_min_pu, Store.e_max_puand Store.e_min_pu. This allows the demand-side management scheme of to be implemented in PyPSA. The properties dispatch, p_max_pu_fixedand p_min_pu_fixedof Generatorand StorageUnitare now removed, because the ability to make p_max_puand p_min_pueither static or time-varying removes the need for this distinction. All messages are sent through the standard Python library logging, so you can control the level of messages to be e.g. debug, info, warningor error. All verbose switches and print statements have been removed. There are now more warnings. You can call network.consistency_check()to make sure all your components are well defined; see Troubleshooting. All examples have been updated to accommodate the changes listed below. PyPSA 0.6.2 (4th November 2016)¶ This release fixes a single library dependency issue: pf: A single line has been fixed so that it works with new pandas versions >= 0.19.0. We thank Thorben Meiners for promptly pointing out this issue with the new versions of pandas. PyPSA 0.6.1 (25th August 2016)¶ This release fixes a single critical bug: opf: The latest version of Pyomo (4.4.1) had a bad interaction with pandas when a pandas.Index was used to index variables. To fix this, the indices are now cast to lists; compatibility with less recent versions of Pyomo is also retained. We thank Joao Gorenstein Dedecca for promptly notifying us of this bug. PyPSA 0.6.0 (23rd August 2016)¶ Like the 0.5.0 release, this release contains API changes, which complete the integration of sector coupling. You may have to update your old code. Models for Combined Heat and Power (CHP) units, heat pumps, resistive Power-to-Heat (P2H), Power-to-Gas (P2G), battery electric vehicles (BEVs) and chained hydro reservoirs can now be built (see the sector coupling examples). The refactoring of time-dependent variable handling has been postponed until the 0.7.0 release. In 0.7.0 the object interface to attributes may also be removed; see below. All examples have been updated to accommodate the changes listed below. Sector coupling¶ components, opt: A new Storecomponent has been introduced which stores energy, inheriting the energy carrier from the bus to which it is attached. The component is more fundamental than the StorageUnit, which is equivalent to a Storeand two Linkfor storing and dispatching. The Generatoris equivalent to a Storewith a lossy Link. There is an example which shows the equivalences. components, opt: The Sourcecomponent and the Generatorattribute gen.sourcehave been renamed Carrierand gen.carrier, to be consistent with the bus.carrierattribute. Please update your old code. components, opt: The Linkattributes link.s_nom*have been renamed link.p_nom*to reflect the fact that the link can only dispatch active power. Please update your old code. components, opt: The TransportLinkand Convertercomponents, which were deprecated in 0.5.0, have been now completely removed. Please update your old code to use Linkinstead. Downgrading object interface¶ The intention is to have only the pandas DataFrame interface for accessing component attributes, to make the code simpler. The automatic generation of objects with descriptor access to attributes may be removed altogether. examples: Patterns of for loops through network.components.objhave been removed. components: The methods on Buslike bus.generators()and bus.loads()have been removed. components: network.add()no longer returns the object. Other¶ components, opf: Unlimited upper bounds for e.g. generator.p_nom_maxor line.s_nom_maxwere previous set using np.nan; now they are set using float("inf")which is more logical. You may have to update your old code accordingly. components: A memory leak whereby references to component.networkwere not being correctly deleted has been fixed. PyPSA 0.5.0 (21st July 2016)¶ This is a relatively major release with some API changes, primarily aimed at allowing coupling with other energy carriers (heat, gas, etc.). The specification for a change and refactoring to the handling of time series has also been prepared (see Time-varying data), which will be implemented in the next major release PyPSA 0.6.0 in the late summer of 2016. An example of the coupling between electric and heating sectors can be found in the GitHub repository at pypsa/examples/coupling-with-heating/ and at. components: To allow other energy carriers, the attribute current_typefur buses and sub-neworks (sub-networks inherit the attribute from their buses) has been replaced by carrierwhich can take generic string values (such as “heat” or “gas”). The values “DC” and “AC” have a special meaning and PyPSA will treat lines and transformers within these sub-networks according to the load flow equations. Other carriers can only have single buses in sub-networks connected by passive branches (since they have no load flow). components: A new component for a controllable directed link Linkhas been introduced; TransportLinkand Converterare now deprecated and will be removed soon in an 0.6.x release. Please move your code over now. See Link for more details and a description of how to update your code to work with the new Linkcomponent. All the examples in the GitHub repository in pypsa/examples/have been updated to us the Link. graph: A new sub-module pypsa.graphhas been introduced to replace most of the networkx functionality with scipy.sparse methods, which are more performant the the pure python methods of networkx. The discovery of network connected components is now significantly faster. io: The function network.export_to_csv_folder()has been rewritten to only export non-default values of static and series component attributes. Static and series attributes of all components are not exported if they are default values. The functionality to selectively export series has been removed from the export function, because it was clumsy and hard to use. See Export to folder of CSV files for more details. plot: Plotting networks is now more performant (using matplotlib LineCollections) and allows generic branches to be plotted, not just lines. test: Unit testing for Security-Constrained Linear Optimal Power Flow (SCLOPF) has been introduced. PyPSA 0.4.2 (17th June 2016)¶ This release improved the non-linear power flow performance and included other small refactorings: pf: The non-linear power flow network.pf()now accepts a list of snapshots network.pf(snapshots)and has been refactored to be much more performant. pf: Neither network.pf()nor network.lpf()accept the nowargument anymore - for the power flow on a specific snapshot, either set network.nowor pass the snapshot as an argument. descriptors: The code has been refactored and unified for each simple descriptor. opt: Constraints now accept both an upper and lower bound with ><. opf: Sub-optimal solutions can also be read out of pyomo. PyPSA 0.4.1 (3rd April 2016)¶ This was mostly a bug-fixing and unit-testing release: pf: A bug was fixed in the full non-linear power flow, whereby the reactive power output of PV generators was not being set correctly. io: When importing from PYPOWER ppc, the generators, lines, transformers and shunt impedances are given names like G1, G2, …, L1, T1, S1, to help distinguish them. This change was introduced because the above bug was not caught by the unit-testing because the generators were named after the buses. opf: A Python 3 dict.keys() list/iterator bug was fixed for the spillage. test: Unit-testing for the pf and opf with inflow was improved to catch bugs better. We thank Joao Gorenstein Dedecca for a bug fix. PyPSA 0.4.0 (21st March 2016)¶ Additional features: New module pypsa.contingencyfor contingency analysis and security-constrained LOPF New module pypsa.geofor basic manipulation of geographic data (distances and areas) Re-formulation of LOPF to improve optimisation solving time New objects pypsa.opt.LExpression and pypsa.opt.LConstraint to make the bypassing of pyomo for linear problem construction easier to use Deep copying of networks with network.copy()(i.e. all components, time series and network attributes are copied) Stricter requirements for PyPI (e.g. pandas must be at least version 0.17.1 to get all the new features) Updated SciGRID-based model of Germany Various small bug fixes We thank Steffen Schroedter, Bjoern Laemmerzahl and Joao Gorenstein Dedecca for comments and bug fixes. PyPSA 0.3.3 (29th February 2016)¶ Additional features: network.lpfcan be called on an iterable of snapshotsi.e. network.lpf(snapshots), which is more performant that calling network.lpfon each snapshot separately. Bug fix on import/export of transformers and shunt impedances (which were left out before). Refactoring of some internal code. Better network clustering. PyPSA 0.3.2 (17th February 2016)¶ In this release some minor API changes were made: The Newton-Raphson tolerance network.nr_x_tolwas moved to being an argument of the function network.pf(x_tol=1e-6)instead. This makes more sense and is then available in the docstring of network.pf. Following similar reasoning network.opf_keep_fileswas moved to being an argument of the function network.lopf(keep_files=False). PyPSA 0.3.1 (7th February 2016)¶ In this release some minor API changes were made: Optimised capacities of generators/storage units and branches are now written to p_nom_opt and s_nom_opt respectively, instead of over-writing p_nom and s_nom The p_max/min limits of controllable branches are now p_max/min_pu per unit of s_nom, for consistency with generation and to allow unidirectional HVDCs / transport links for the capacity optimisation. network.remove() and io.import_series_from_dataframe() both take as argument class_name instead of list_name or the object - this is now fully consistent with network.add(“Line”,”my line x”). The booleans network.topology_determined and network.dependent_values_calculated have been totally removed - this was causing unexpected behaviour. Instead, to avoid repeated unnecessary calculations, the expert user can call functions with skip_pre=True. PyPSA 0.3.0 (27th January 2016)¶ In this release the pandas.Panel interface for time-dependent variables was introduced. This replaced the manual attachment of pandas.DataFrames per time-dependent variable as attributes of the main component pandas.DataFrame. Release process¶ Update release_notes.rst Update version in setup.py, doc/conf.py, pypsa/__init__.py git commitand put release notes in commit message git tag v0.x.0 git pushand git push --tags To upload to PyPI, run python setup.py sdist, then twine check dist/pypsa-0.x.0.tar.gzand twine upload dist/pypsa-0.x.0.tar.gz To update to conda-forge, check the pull request generated at the feedstock repository. Making a GitHub release will trigger zenodo to archive the release with its own DOI. Inform the PyPSA mailing list.
https://pypsa.readthedocs.io/en/latest/release_notes.html
2021-05-06T00:30:18
CC-MAIN-2021-21
1620243988724.75
[]
pypsa.readthedocs.io
Note: this API is in beta currently and is subject to change in the future. This endpoint is used to pass a payload of key/values to the datasource. The fields passed within payload are based on your Datasource configuration. {"payload": {"Site Title": "New Site","URL": ""}} If the sync is successful, you will receive a 202 status response that will contain information about the objects that were created or modified. {"objects": [{"obj_type": "Business","method": "update","data": {...The full Business detail will be returned here},"resource_uri": "/api/v2/businesses/12345"},{"obj_type": "Site","method": "create".,"data": {...The full Site detail will be returned here},"resource_uri": "/api/v2/sites/123456"}]} If there are errors with the sync, you will receive a 400 status response. If the error is general (configuration errors, or bad payload data) you will receive a ErrorLog type. {"__all__": [{"type": "ErrorLog","error": "KeyError: u'Site Title'"}]} If the error is related to validation issues with a particular object API, you will receive a APIErrorLog type with more structured information. {"__all__": [{"type": "APIErrorLog","error": {"formatted_domain": ["Invalid domain: notvalid"]}}]}
https://api-docs.devhub.com/advanced/projects
2021-05-05T23:48:20
CC-MAIN-2021-21
1620243988724.75
[]
api-docs.devhub.com
Release notes -- v2.208.78 (since v2.208.45) Highlights EL-1223 Pardot - Issue fix in object metadata - Fix in generating metadata for objects endpoint in Pardot element. EL-878 & EL-900 & EL-981 | Added ability to retry hooks in Element Builder and fixed trace issues - Added POST /hooks-tryitoutendpoint to enable re-trying hooks out through Element Builder. RVCL-744 Common resources accounts and org fields with same path - Throws 404 if transformations are null - Allows account and org to level fields to have same path El 1296 infusionsoftRest enhancements Contacts Resource- modified GET/POST/PATCH/GETbyIdrequest and response models to follow standard model definitions Orders Resource- Added POST/DELETEmethod and modified GETresponse model to follow standard model definitions Products Resource- Added POST/DELETEmethods Transactions Resource- modified GET/GETbyIdrequest and response models to follow standard model definitions RVCL-690: Don't store sensitive headers in FISEVs - Do not store the Authorizationheader in formula step execution values. EL-1522 | Expensify - Fixed swagger validation errors - Updated modes for policies resources. EL-1120 Intacct - Added resource for order-entry-transactions - Introduced resources for /order-entry-transactions in Intacct element. - Enhanced models for Intacct element. RVCL-765: Map full arrays correctly when they are nested - Fix for mapping full arrays using transformations when they are nested inside another object. EL-1075 - Performance issue with file upload for the Evernote element - The Evernote element's file upload was very slow, in the 50s range, for a not so large file, e.g., 700K. - The issue appeared to be related to slow performance of a dependent, third party library. - Upgrading this library to its latest released version fixed the issue. EL-251: Allbound prehook JS made valid for v4 Endpoints - Changed prehook JS to make it valid for v4 Endpoints EL-1009 SAP C4C CRM Added Ping API - Added new API: - GET /ping EL-30 Google Analytics Added measurements API - Added POST /measurements API EL-893 : Intacct - Added new APIs for resource to support DTD v3.0 Now following resources are added to support objects for DTD v3.0 for Intacct. CRUDS /contacts-advanced CRUDS /ledger-accounts-advanced CRUDS /customers-advanced RDS /invoices-advanced RDS /credit-memos-advanced EL-1147 | GlobalMeet : Added support for native filter in reports API - GET /reports/{eventId} API now supports where clause for filtering reports. EL-162 SuccessFactors: Added 'where' parameter to GET Onboardings API - Added whereparameter for GET /getOnboardingsAPI EL-1069 Successfactors requisitions model modified Age and TimeToFIll from Integer to string - Successfactors modified GET /requisitionsresponse model to include "age" and "timeToFill" as "string" type EL_15 SuccessFactors Performance Feedback Added APIs to support performance feedback. Resources added - achievements - activities - continuous-feedback - continuous-feedback-request - goal-templates - goals - permissions - user-accounts/{id}/permissions - goal-achievements - dev-goal-achievements RVCL-598 V2 backwards compatibility - Makes v2 vdrs act like v1 with /accounts/{id}/objects/definitions and /instances/{id}/objects/definition EL-12 sage200 - SAGE 200 element EL-1124 Etsy: Gap Enhancements - Added the following resources :- - GET /users - PATCH /orders/{id} - GET /billcharge/{id} - Changed the models of the existing resources to standard models. EL-1151 syncplicity : Added new resources - Added following resources to Syncplicity element - company - companies - devices - groups - policies - users EL-1326 - SFDC - Support filter by nested date queries like Account.CreatedDate - FIX - SFDC - filter by nested date queries like Account.CreatedDatehave issues RVCL-596 Added the ability to search bulk logs by text - Added searchTextto /usage/bulkAPI searchTextworks the same as in the /usageendpoint
https://docs.cloud-elements.com/home/staging-release-notes-v2-208-78
2021-05-06T01:12:17
CC-MAIN-2021-21
1620243988724.75
[]
docs.cloud-elements.com
UserInputDiscretiser¶ The UserInputDiscretiser() sorts the variable values into contiguous intervals which limits are arbitrarily defined by the user. The user must provide a dictionary of variable:list of limits pair when setting up the discretiser. The UserInputDiscretiser() works only with numerical variables. The discretiser will check that the variables entered by the user are present in the train set and cast as numerical. import numpy as np import pandas as pd from sklearn.datasets import load_boston from feature_engine.discretisers import UserInputDiscretiser boston_dataset = load_boston() data = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names) user_dict = {'LSTAT': [0, 10, 20, 30, np.Inf]} transformer = UserInputDiscretiser( binning_dict=user_dict, return_object=False, return_boundaries=False) X = transformer.fit_transform(data) X['LSTAT'].head() 'LotArea': [-inf, 22694.5, 44089.0, 65483.5, 86878.0, 108272.5, 129667.0, 151061.5, 172456.0, 193850.5, inf], 'GrLivArea': [-inf, 768.2, 1202.4, 1636.6, 2070.8, 2505.0, 2939.2, 3373.4, 3807.6, 4241.799999999999, inf]} 0 0 1 0 2 0 3 0 4 0 Name: LSTAT, dtype: int64 API Reference¶ - class feature_engine.discretisers. UserInputDiscretiser(binning_dict, return_object=False, return_boundaries=False)[source]¶ The UserInputDiscretiser() divides continuous numerical variables into contiguous intervals are arbitrarily entered by the user. The user needs to enter a dictionary with variable names as keys, and a list of the limits of the intervals as values. For example {‘var1’:[0, 10, 100, 1000], ‘var2’:[5, 10, 15, 20]}. The UserInputDiscretiser() works only with numerical variables. The discretiser will check if the dictionary entered by the user contains variables present in the training set, and if these variables are cast as numerical, before doing any transformation. Then it transforms the variables, that is, it sorts the values into the intervals, transform. - Parameters binning_dict (dict) – The dictionary with the variable : interval limits pairs, provided by the user. A valid dictionary looks like this: {‘var1’:[0, 10, 100, 1000], ‘var2’:[5, 10, 15, 20]}. Checks that the user entered variables are in the train set and cast as numerical. - Parameters - binner_dict\_ The dictionary containing the {variable: interval limits} pairs used to sort the values into discrete intervals. - Type dictionary
https://feature-engine.readthedocs.io/en/0.6.x_a/discretisers/UserInputDiscretiser.html
2021-05-06T01:27:44
CC-MAIN-2021-21
1620243988724.75
[]
feature-engine.readthedocs.io
ADAFRUIT_ST7565 (community library) Summary Adafruit ST7565 Library Example Build Testing Device OS Version: This table is generated from an automated build. Success only indicates that the code compiled successfully. Library Read Me This content is provided by the library maintainer and has not been validated or approved. ST7565 ST7565 Library for Particle Devices. Ported from Adafruit's ST7565_LCD library. Extended with Particle Library Information. Browse Library Files
https://docs.particle.io/cards/libraries/a/ADAFRUIT_ST7565/
2021-05-06T00:35:47
CC-MAIN-2021-21
1620243988724.75
[]
docs.particle.io
scipy.sparse.csgraph.reconstruct_path¶ scipy.sparse.csgraph. reconstruct_path(csgraph, predecessors, directed=True)¶ Construct a tree from a graph and a predecessor list. New in version 0.11.0. - Parameters - csgrapharray_like or sparse matrix The N x N matrix representing the directed or undirected graph from which the predecessors are drawn. - predecessorsarray_like, one dimension The length-N array of indices of predecessors for the tree. The index of the parent of node i is given by predecessors[i]. -]. - Returns - cstreecsr matrix The N x N directed compressed-sparse representation of the tree drawn from csgraph which is encoded by the predecessor list. Examples >>> from scipy.sparse import csr_matrix >>> from scipy.sparse.csgraph import reconstruct_path >>> graph = [ ... [0, 1 , 2, 0], ... [0, 0, 0, 1], ... [0, 0, 0, 3], ... [0, 0, 0, 0] ... ] >>> graph = csr_matrix(graph) >>> print(graph) (0, 1) 1 (0, 2) 2 (1, 3) 1 (2, 3) 3 >>> pred = np.array([-9999, 0, 0, 1], dtype=np.int32) >>> cstree = reconstruct_path(csgraph=graph, predecessors=pred, directed=False) >>> cstree.todense() matrix([[ 0., 1., 2., 0.], [ 0., 0., 0., 1.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.]])
https://docs.scipy.org/doc/scipy-1.4.1/reference/generated/scipy.sparse.csgraph.reconstruct_path.html
2021-05-06T01:58:01
CC-MAIN-2021-21
1620243988724.75
[]
docs.scipy.org
How to find monitoringHow to find monitoring This guide documents how to find monitoring within Sourcegraph’s source code. Sourcegraph employees should also refer to the handbook’s monitoring section for Sourcegraph-specific documentation. The developing observability page contains relevant documentation as well. AlertsAlerts Alerts are defined in the monitoring/definitions package - for example, querying for definitions of Warning or Critical will surface all Sourcegraph alerts. MetricsMetrics You can use Sourcegraph itself to search for metrics definitions - for example, by querying for usages of prometheus.HistogramOpts. Sometimes the metrics are hard to find because their name declarations are not literal strings, but are concatenated in code from variables. In these cases you can try a specialized tool called promgrep to find them. go get github.com/sourcegraph/promgrep # in the root `sourcegraph/sourcegraph` source directory promgrep <some_partial_metric_name> # no arguments lists all declared metrics
https://docs.sourcegraph.com/dev/how-to/find_monitoring
2021-05-06T00:18:04
CC-MAIN-2021-21
1620243988724.75
[]
docs.sourcegraph.com
From the backend, you can create a new page with details like multiple images, blocks, variables, and widgets. All of them are design-friendly for search engines that crawl the metadata of the page so that people can find you easily. Enable Page: You can enable/disable the Page as per requirement. Page Title: Enter the relevant Title of the Page. Page Type: Select the type as Is Home Page or Is CMS Page. By selecting Is Home Page, you will get the “Page Builder” blocks to configure the Home page and by selecting Is CMS Page, you will get the “HTML Editor” to edit the content of that Page. Content: – Content Heading: Enter the main heading at the top of the page. – Editor: Enter the text with the WYSIWYG editor, click on Show/Hide Editor button to add blocks, images and widgets. After inserting all the details click on Save and Continue Edit. Search Engine Optimization: Expand the Search Engine Optimization tab and specify URL Key, Meta Title, Meta Keywords, and Meta Description for the page. – URL Key: Fill out a URL Key for the page, added to the base URL for the new online address of the page. Note: Remember to insert lowercase characters and hyphens without spaces. i.e. new-page – Enter the Meta Title, Keywords and Description for the CMS page, these are important for SEO. Pages in Websites: You can choose the Store View for any new page. You can also use the page for All Store Views. Design: In the Layout field, choose one of the options from the dropdown list of the page layout: 1 column, 2 columns with left bar, 2 columns with right bar and 3 columns. Custom Design Update: Scroll down to customize the page design. On the Custom Design section, you can change theme, layout, and style within exact time period like holiday or sales. Click on the Save button to create a CMS page.
http://docs.appjetty.com/document/general-configuration/manage-pages/
2021-05-06T01:30:18
CC-MAIN-2021-21
1620243988724.75
[]
docs.appjetty.com
CommerceXpand: All-in-one Shopify apps About CommerceXpand Shopify-CommerceXpand is a number of Shopify apps in one for Shopify merchants to manage the backend operation related to the product management, abandoned orders, provide the discount with count down of the time, and many others. CommerceXpand is an all-in-one solution for many stand-alone apps. Shopify merchant can manage the website as per own business aspects. Here are the apps of the AppJetty Shopify – CommerceXpand: -> To Reduce the Cart Abandonment – Abandoned Checkouts – Add to Cart Sticky -> To increase the product sales – Back in stock alert – Countdown manager ->Manage Products from Backend – Product Bundles – Bulk Product Editor -> To improve the browsing experience – Image Optimizer – Inactive Tab – Scroll to Top
http://docs.appjetty.com/document/introduction/
2021-05-06T01:06:23
CC-MAIN-2021-21
1620243988724.75
[]
docs.appjetty.com
11.17 How to Transfer Active Cells (Ex: actnum) - Create a local property kind called "active," which is a child of the standard property kind called “discrete.” - Create a discrete property on a representation; the discrete property uses this property kind to identify the active cells (which occur in flow simulation initialization context).
http://docs.energistics.org/RESQML/RESQML_TOPICS/RESQML-000-289-0-C-sv2010.html
2021-05-06T01:14:44
CC-MAIN-2021-21
1620243988724.75
[]
docs.energistics.org
This is a set of style and usage guidelines for Homebrew’s prose documentation aimed at users, contributors, and maintainers (as opposed to executable computer code). It applies to documents like those in docs in the Homebrew/brew repository, announcement emails, and other communications with the Homebrew community. This does not apply to any Ruby or other computer code. You can use it to inform technical documentation extracted from computer code, like embedded man pages, but it’s just a suggestion there. The primary goal of Homebrew’s prose documents is communicating with its community of users and contributors. “Users” includes “contributors” here; wherever you see “users” you can substitute “users and contributors”. Understandability is more important than any particular style guideline. Users take precedence over maintainers, except in specifically maintainer-focused documents. Homebrew’s audience includes users with a wide range of education and experience, and users for whom English is not a native language. We aim to support as many of those users as feasible. We strive for “correct” but not “fancy” usage. Think newspaper article, not academic paper. This is a set of guidelines to be applied using human judgement, not a set of hard and fast rules. It is like The Economist’s Style Guide or Garner’s Modern American Usage. It is less like the Ruby Style Guide. All guidelines here are open to interpretation and discussion. 100% conformance to these guidelines is not a goal. The intent of this document is to help authors make decisions about clarity, style, and consistency. It is not to help settle arguments about who knows English better. Don’t use this document to be a jerk. We prefer: h1headings; sentence case in all other headings fixed width font <...>brackets git remote add <my-user-name><my-user-name>/homebrew-core.git gitand breware styled in fixed width font BLAHto 5”, not “Set $BLAHto 5” homebrew/coreare styled in fixed width font. Repository names may be styled in either fixed width font like “ Homebrew/homebrew-core”, as links like “Homebrew/homebrew-core”, or regular text like “Homebrew/homebrew-core”, based on which looks best for a given use. Refer to these guidelines to make decisions about style and usage in your own writing for Homebrew documents and communication. PRs that fix style and usage throughout a document or multiple documents are okay and encouraged. PRs for just one or two style changes are a bit much. Giving style and usage feedback on a PR or commit that involves documents is okay and encouraged. But keep in mind that these are just guidelines, and for any change, the author may have made a deliberate choice to break these rules in the interest of understandability or aesthetics.
https://docs.brew.sh/Prose-Style-Guidelines
2021-05-05T23:59:20
CC-MAIN-2021-21
1620243988724.75
[]
docs.brew.sh
Shopware Enterprise provides tools to measure and improve the performance of your shop. While our performance whitepaper gives an overview how Shopware can be scaled, our JMeter scripts will help you to examine the limits of your individual shop setup. SwagEssentials will then help you to improve the scalability of your shop.
https://docs.enterprise.shopware.com/performance/
2021-05-05T23:57:56
CC-MAIN-2021-21
1620243988724.75
[]
docs.enterprise.shopware.com
If you do not want to use your Darlic®| Create Free Website site Remember, once deleted your site cannot be restored. Click the checkbox next to the text below: I'm sure I want to permanently disable my site, and I am aware I can never get it back or use docs.darlic.com/ again. Then click the blue button with text "Delete My Site Permanently".
https://docs.darlic.com/tools/how-to-delete-site/
2021-05-05T23:55:09
CC-MAIN-2021-21
1620243988724.75
[array(['https://docs.darlic.com/wp-content/uploads/sites/3/2019/03/delete_site-1024x617.png', None], dtype=object) ]
docs.darlic.com
Subscribe for New SMS Messages A common use case is to subscribe to all incoming SMS messages. This can be used to either sync messages or to take action in incoming messages. RingCentral has a couple of ways to retrieve incoming messages. This tutorial describes retrieving messages one at a time where there is no replicated dataset. If you wish to replicate to your own datastore, please contact RingCentral devsupport. To retrieve incoming SMS messages, there are two steps - Subscribe for events on the message store to receive information on new SMS messages - Retrieve the messages from the message store by querying the store for the time range Before continuing, familiarize yourself with subscriptions Step 1: Subscribe for New SMS events: For this step, create a subscription to the /restapi/v1.0/account/~/extension/~/message-store event filter with the account id and extension id of interest where ~ represents your currently authorized values. When receiving an event, you will receive an array of changes of which, some can have the type attribute set to SMS along with a newCount attribute. When newCount is > 0, there is a new SMS. To subscribe for new message store events, use the following: sub = client.create_subscription sub.subscribe ['/restapi/v1.0/account/~/extension/~/message-store'] Information on subscription is here: Step 2: SMS Retrieval To retrieve the new SMS message given an subscription, use the event's body.lastUpdated time property to retrieve inbound SMS messages matching that time. You can do this with the following steps: - Retrieve the event's body.lastUpdatedproperty - Create a message store API call setting the dateFromand dateToparameters around the event's body.lastUpdatedproperty. You can set the range to be 1 second on either side. - Upon receiving an array of messages in the response, filter the messages on the message lastModifiedTimewhich will be the same as the event's body.lastUpdatedtime. To accomplish the above, you can use the Ruby SDK as follows in your observer object: retriever = RingCentralSdk::REST::MessagesRetriever.new client messages = @retriever.retrieve_for_event event, direction: 'Inbound' messages.each do |message| # do something end An example observer object shows how to combine these: # Create an observer object class MyObserver def initialize(client) @client = client @retriever = RingCentralSdk::REST::MessagesRetriever.new client end def update(message) event = RingCentralSdk::REST::Event.new message messages = @retriever.retrieve_for_event event, direction: 'Inbound' messages.each do |message| # do something end end end For additional reading, see: Example Implementation The above is implemented in an example script using the Ruby SDK. The script retrieves each message and sends it to a Glip chat team using the Glip::Poster module. Other chat modules with the same interfae can also be used. - Script: sms_to_chat.rb - SDK : messages_retriever.rb
https://ringcentral-sdk-ruby.readthedocs.io/en/latest/usage/notifications/Subscribe_for_New_SMS/
2021-05-06T00:56:23
CC-MAIN-2021-21
1620243988724.75
[]
ringcentral-sdk-ruby.readthedocs.io
javax.swing.text.BoxView baselineLayout, baselineRequirements, calculateMajorChanged, layoutMajorAxis, layoutMinorAxis, modelToView, paint, paintChild, preferenceChanged, replace, setAxis, setSize, viewToModel Methods inherited from class javax.swing.text.CompositeView Methods inherited from class java.lang.Object clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait Methods inherited from class javax.swing.text.View append, breakView the setParentmethod. This is reimplemented to not load any children directly (as they are created in the process of formatting). If the layoutPool variable is null, an instance of LogicalView is created to represent the logical view that is used in the process of formatting. -. - Overrides: calculateMinorAxisRequirementsin class BoxView - Parameters: axis- the axis being studied r- the SizeRequirementsobject; if nullone will be created - Returns: - the newly initialized SizeRequirementsobject - See Also: SizeRequirements insertUpdate public void insertUpdate(DocumentEvent changes, Shape a, ViewFactory f)Gives notification that something was inserted into the document in a location that this view is responsible for. - public void removeUpdate(DocumentEvent changes, Shape a, ViewFactory f)Gives notification that something was removed from the document in a location that this view is responsible for. -) setParent public void setParent(View parent)Sets the parent of the view. This is reimplemented to provide the superclass behavior as well as calling the loadChildrenmethod if this view does not already have children. The children should not be loaded in the constructor because the act of setting the parent may cause them to try to search up the hierarchy (to get the hosting Containerfor example). If this view has children (the view is being moved from one place in the view hierarchy to another), the loadChildrenmethod will not be called. - Overrides: setParentin class CompositeView - Parameters: parent- the parent of the view, nullif none
https://docs.huihoo.com/java/javase/9/docs/api/javax/swing/text/FlowView.html
2021-09-16T19:26:49
CC-MAIN-2021-39
1631780053717.37
[]
docs.huihoo.com
ocss7 2.1.0.1 SGC improvements in this release: ANSI SCCP support has been introduced. ITU TCAP protocols can be carried over ANSI SCCP + M3UA networks. M3UA DPC defaults for MSS and MUSS have been changed. The new MSS default is 245, down from 247. The new MUSS default is 252, down from 254. These values are compatible with both ITU and ANSI SCCP. (SSSVN-812) Added cluster-wide parameter sccp-variantwhich can be used to switch between ITU and ANSI SCCP. Added cluster-wide parameter nationalwhich controls the national indicator bit used in SCCP management messages. Added DPC parameter congestion-notification-timeoutwhich, for ANSI SCCP only, controls for how long M3UA congestion notifications will be considered valid. (SSSVN-817) System OID has been updated to 1.3.6.1.4.1.19808.10.2.1.0 (SSSVN-894) A configSaveFailedalarm will now be raised if the SGC could not save its configuration file. Additionally, improved the resilience of the SGC configuration saving process. (SSSVN-918) Added an mtpCongestionalarm, which is raised whenever an M3UA/MTP congestion notification is received by M3UA. (SSSVN-941) The SGC now sets SSN=0 in any outbound SCCP messages that are routeOnGT without a user-specified SSN in accordance with ITU-T and ANSI SCCP specifications. (SSSVN-947) The SGC will now accept malformed M3UA messages that are missing their final parameter padding octets. (SSSVN-787) Fixed SNMP’s misreporting of the following stats, which could appear with a higher value than the actual count when read. This affected the SNMP interface only, CLI display of these stats was always correct. The fixed stats are, for SCCP: segmentationFailureCount, unitdataIndCount, reassembledUnitdataCount, reassemblyFailureCount; and for Health: forceAllocatedReqTasks. (SSSVN-789) Improved the log message emitted when the SGC receives an M3UA message of an invalid size. (SSSVN-788) The CLI will now attempt to find an appropriate Java binary to execute even if JAVA_HOMEis not set. It will check JAVA_HOMEfirst, then SGC_HOME/config/sgcenv, and finally look for a javaexecutable in PATH. (SSSVN-429) Improved generate-report.shscript. (SSSVN-432) SGC bug fixes in this release: Corrected an issue where failure to deliver a reassembled N-UNITDATA indication could ignore the value of returnOnError set in the first segment received. This resulted in no notification of delivery failure being sent to the sender. (SSSVN-901) Corrected a defect in the handling of the M3UA affected point code list that could result in valid point codes not matching entries beyond the first. (SSSVN-884) Corrected a defect in the ITU-T SCCP Status Test procedure that could result in the procedure being incorrectly discontinued. (SSSVN-953) Corrected a defect in the ITU-T SCCP Status Test procedure that would result in the procedure running even if the remote DPC was already marked prohibited. (SSSVN-956) Multiple outbound GTT rules with a replace-gtspecifying route-on=SSNand an SSN no longer trigger excessive SSTs when multiple rules resolve to the same DPC and SSN. (SSSVN-931) The SGC now honours the value of the national indicator bit when the address type is C7/ITU. Note that for C7/ITU addresses CGIN 1.5.4 with OCSS7 does not honour the national indicator for C7/ITU addresses, and will never send a 'true' value to the SGC/network and will never report a 'false' value to the SLEE for C7/ITU addresses. CGIN versions with OCSS7 ANSI support will always honour the national indicator. (SSSVN-910, SSSVN-1056) Corrected a NullPointerExceptionwhen clearing path failure alarms during M3UA shutdown. (SSSVN-890) Corrected an issue that could result in an AccessControlExceptionif finest logging was enabled in the OCSS7 TCAP stack. (SSSVN-889) replace-gtrules now use the correct ranges for validating encoding scheme, nature of address, numbering plan and global title indicator parameters. (SSSVN-937) Fixed an issue that could cause the SGC to restart if more than 1 million transactions were being concurrently handled by a single TCAP stack connection. (SSSVN-868) Corrected a defect where receipt of a malformed SCCP message from a TCAP stack could cause the SGC to restart. (SSSVN-991) TCAP stack (CGIN, SIS, and IN Scenario Pack) improvements in this release: ANSI SCCP support has been introduced. ITU TCAP protocols can be carried over ANSI SCCP + M3UA networks. TCAP stack (CGIN, SIS, and IN Scenario Pack) bug fixes in this release: Fixed an issue where decoding of a TCMessage encoded with indefinite length could fail if the component portion was absent. (SSSVN-921) Fixed a double-free in the TCAP stack which caused transactions to be reallocated whilst still alive, resulting in permanent entries in CGIN’s internal dialog maps. (SSSVN-1035) Fixed defect where TCAP stack could hang if stopped while dialogs were still active. (SSSVN-779) Fixed a deadlock that could occur during TCAP stack deactivation. (SSSVN-780) Fixed a defect that prevented sending TC-ABORT in response in incoming TC-BEGINs while the TCAP stack was deactivating. (SSSVN-781) ocss7 2.0.0.1 Improvements in this release: Upgraded Hazelcast to version 3.7. (SSSVN-350) The SGC is now able to detect when distributed Hazelcast data has been irrevocably lost and will restart the node in order to load configuration from file. (SSSVN-357) Added more statistics: TcapStats, SccpStats, TcapErrorStats, SccpErrorStats. These are accessible via the command line client, JMX and SNMP. Some statistics previously in LocalSsnInfo are now recorded in SccpStats instead. (SSSVN-668) The SGC will now detect and apply log4j.xml changes in real-time. (SSSVN-163) Added a status field to display-info-tcapconninfo to display connection state. (SSSVN-426) An alarm will be raised if the SGC detects that its default map backup-count configuration is considered too small for the size of cluster. (SSSVN-683) SGC version information is now logged in the startup log and also on rollover in the main ss7.log. (SSSVN-737) The SGC’s SNMP system OID is now 1.3.6.1.4.1.19808.10.2. (SSSVN-445) Heartbeat is now enabled by default on all TCAP stack to SGC connections. (SSSVN-486) The SGC now raises an alarm at MINOR if one or more paths in an association go down. The existing alarm for the whole association going down is now raised at MAJOR instead of MINOR. (SSSVN-715) Bug fixes in this release: Memory requirements to support prefix migration have been drastically reduced in both the SGC and the TCAP stack. (SSSVN-732, SSSVN-733) Local SSN prohibited alarm is now raised at MAJOR level. (SSSVN-738) Demoted "Route routeId=x is DOWN - sending message back to SCCP" message from WARN to DEBUG. (SSSVN-143) Fixed statistics problem where messages could be incorrectly recorded against SSN 0. (SSSVN-144) Fixed issue that could generate a spurious "peer FSM exists in state: CLOSED will be replaced" message. (SSSVN-166) Fixed NullPointerExceptions that could be generated if a SNMP node was configured with an invalid host name. (SSSVN-173) Fix NullPointerException seen after disabling a local endpoint. (SSSVN-219) Tab completion for 'display-info-ogtinfo' no longer suggests a non- existent 'ssn' column. (SSSVN-221) Improved the error message displayed when the CLI batch mode cannot parse the provided batch file. (SSSVN-224) TCAP stack connection loss no longer logs stack traces at WARN. (SSSVN-225) Fixed an IllegalArgumentException seen when SNMP alarm IDs wrapped around. (SSSVN-356) Fixed an AssertionError in Transport$ClientInitializer.initChannel() that occurred when an SGC was unable to connect to other cluster members via the comm switch interface. (SSSVN-362) Fixed defect in decoder for SCCP importance field that could result in the wrong value being decoded. (SSSVN-385) The CLI will no longer generate a "java.util.ArrayList cannot be cast to javax.management.Attribute" message when an unknown argument is passed to a display-info-xxx command. (SSSVN-394) The SGC’s default minimum heap size is now equal to its default maximum heap size. The default minimum perm gen size is now the same as the default max perm gen size. (SSSVN-404) STANDALONE hazelcast group name generation is now less prone to collisions when starting multiple standalone SGCs simultaneously. (SSSVN-425) The SGC node manager now binds its listen socket with SO_REUSEADDR. (SSSVN-434) The SGC no longer generates a NullPointerException if asked to send to a null destination SCCP address. (SSSVN-448) The SGC will now attempt to restart if it exits with code 78. There is now a 30s wait between restart attempts. (SSSVN-514) The CLI batch mode should now be significantly faster. (SSSVN-628) Fixed an AssertionError in TcapRouter.globalRouteSelector that could occur if the SCCP reassembly timer raced with an incoming XUDT message. (SSSVN-660) The Comm Switch will no longer be left in a zombie state if it fails to connect to other cluster members on startup. (SSSVN-681) Fixed a NullPointerException in nodeLeft that could be generated when a cluster member left the cluster while another node was starting up. (SSSVN-682) Fixed a defect where the TCAP stack would stop allocating new dialog ID following prefix wraparound. (SSSVN-691) Fixed a prefix leak in the SGC that could result in the SGCs returning "out of prefixes" when a TCAP stack connected. (SSSVN-701) Corrected an issue where a TCAP to SGC connection could end up not being used for any messages at all. (SSSVN-703) Fixed a hard to trigger meshed connection manager deadlock. (SSSVN-704) Under certain prefix migration conditions data structures associated with migrations would not be freed, resulting in an OutOfMemoryError. This has been corrected. (SSSVN-705) Fixed TCAP stack leak that could occur if invoke timeouts expired while there were no connections available to any SGC. (SSSVN-712) Corrected an issue where the TCAP stack could get its internal state confused during prefix migration. (SSSVN-713) Corrected tab completion for display-info-tcapconninfo migratedPrefixes and tcapStackID columns. (SSSVN-734) Installed a sensible default column display order for display-info- tcapconninfo. (SSSVN-735) ocss7 1.1.0.0 Improvements in this release: Added ability to connect to multiple SGCs simultaneously in a mesh style using new ocss7.sgcs TCAP stack property. (SSSVN-388) SGCs now support failover of dialogs between connections to the same TCAP stack when using the new ocss7.sgcs TCAP stack property. (SSSVN-258) sgc-cli.sh script now attempts to auto-detect SGC JMX host and port if those parameters are not set on the command line (-h -p) (SSSVN-292) Bug fixes in this release: DAUD should no longer be slow to happen when many ASes are configured. (SSSVN-8) Fixed an issue where IPv6 addreses in host:port format could not be parsed in the ocss7.urlList TCAP stack configuration property. (SSSVN-133) "We should never have invoke timeout in UNUSED or ALLOCATED state" TCAP stack message has been downgraded from SEVERE to DEBUG. (SSSVN-182) Segmentation configuration parameters MSS and MUSS will no longer permit unacceptable combinations. (SSSVN-209) The CLI now reports a connection error rather than an unknown error when unable to communicate with the SGC. (SSSVN-228) Fixed a ConcurrentModificationException that could be thrown in TcapRouter. This also fixes an issue where the TcapRouter could prevent graceful shutdown from completing. (SSSVN-315) Fixed issue where "releasing begin prefix which has apparently been not assigned - no prefix at all" could be erroneously logged. (SSSVN-316) Fixed very small thread local leak seen when rebuilding outbound translation data. (SSSVN-338) Fixed an issue where SctpManagementMessages that were queued for transmission when a socket was closed were not released properly. (SSSVN-353) sgcd script now creates /var/lock/subsys/sgcd entry. (SSSVN-365) If the SGC fails to start up due to an Exception the stack trace from that Exception will now be logged at WARN (used to be logged at DEBUG). (SSSVN-376) Made some usability fixes to the display-event-history command. (SSSVN-383) SCCP defers registering to handle distributed tasks until after ReqCtx pools have been initialized. This prevents a NullPointerException at SccpManagement.sendSCMGMessage. (SSSVN-457) Fix NullPointerException if an unknown alarm is raised via Hazelcast. (SSSVN-464) Unexpected exceptions thrown while processing SGCTopics will now cause the SGC to attempt a graceful shutdown, similar to uncaught exceptions thrown elsewhere. (SSSVN-513) Fixed NullPointerException in NodeObjectManager.nodeLeft when a node without configuration (create-node) leaves the cluster. (SSSVN-538) Updated documentation to include Hazelcast configuration recommendations for clusters with more than 2 members. (SSSVN-550) Applied a workaround for Hazelcast issue where a lock owned by an SGC could be unlocked underneath it during exit of another cluster member, generating "java.lang.IllegalMonitorStateException: Current thread is not owner of the lock!" messages. (SSSVN-604) Fixed NullPointerException during config save. (SSSVN-626) Fixed case where create-XXX could return before XXX had completed creation, resulting in subsequent enable-XXX or remove-XXX commands failing to notice that XXX was created. (SSSVN-626) Decoder now correctly decodes Reject components that have no argument when received with component length in indefinite length form. (SSSVN-638) Added -XX:+PrintGCDateStamps JVM flag to SGC startup script. (SSSVN-407) Added documentation suggesting appropriate ulimit settings for user processes. (SSSVN-575) Changed some default ports so that they’re no longer in the emphemeral range: SGC’s JMX port, Hazelcast multicast UDP discovery port. (SSSVN-618) ocss7 1.0.1.15 Bug fixes in this release: Fixed issue that could result in the SGC’s routing taking longer than expected to be available. (SSSVN-608) Prevent NullPointerException during alarm unregistration while an SGC cluster split/merge is in progress from restarting the SGC. (SSSVN-553) Failure of the comm switch to bind its listen port will now raise an alarm at critical level and attempt to rebind that port at regular intervals until successful. (SSSVN-531) ocss7 1.0.1.14 Bug fixes in this release: Corrected issue where global title translation tables were only updated on a single node after a local SSN status change. (SSSVN-536) Fixed a ConcurrentModificationException that could occur when multiple threads attempted to modify the same configuration object simultaneously. (SSSVN-511) Fix NullPointerException that could occur if global title rules were created and deleted very quickly. (SSSVN-420) Corrected issue where display-info-remotessninfo was not updated cluster-wide on a local SSN state change. (SSSVN-517) ocss7 1.0.1.12 Bug fixes in this release: Fixed an issue which caused the SGC to leave gracefully disconnected TCAP stack connections in CLOSE_WAIT state indefinitely. (SSSVN-488) Unexpected exceptions thrown while processing SGCTopics will now cause the SGC to attempt a graceful shutdown, similar to uncaught exceptions thrown elsewhere. (SSSVN-513) ocss7 1.0.1.11 Bug fixes in this release: Fixed an error where certain DPCs in SST/SSA/SSP messages would be incorrectly decoded. (SSSVN-314) ocss7 1.0.1.10 Bug fixes in this release: Fixed a race condition that could result in the SGC shutting down due to an unchecked IllegalStateException during task data pool exhaustion. (SSSVN-395) Fixed an issue where segments could arrive out of order if sent while the task pool was exhausted. (SSSVN-397) Fixed NullPointerExceptions that could sometimes be thrown when requesting SNMP counters for OGT and DPC info. (SSSVN-398) Fixed an issue where under certain conditions all worker threads could be hanging around waiting to be allowed to send, resulting in no worker threads available to process incoming messages (such as heartbeats required to maintain connectivity). (SSSVN-400) Fixed an issue where segments received split across multiple stream IDs could result in the SGC exiting due to an uncaught exception. (SSSVN-406) ocss7 1.0.1.9 Bug fixes in this release: Fixed an issue which could prevent the SGC from successfully restarting. (SSSVN-364) Fixed an issue with SCCP decoding when the Data part comes after the Optional Parameters part in the received message. (SSSVN-372) Fixed a NullPointerException caused by insufficient tasks being available in the task pool. (SSSVN-201) Fixed AssertionError: Empty buffer: Cannot decode message when transferring messages internally from one SGC to another. (SSSVN-352) Fixed an issue which could cause the SGC not to restart automatically following a graceful shutdown caused by an uncaught exception. (SSSVN-249) Fixed an issue causing the CLI and MBeans to display stale status information for asinfo, associationinfo, dpcinfo, and pcinfo on failed SGCs. (SSSVN-202) Fixed a race condition that could cause the SGC to attempt to register the same MBean twice. (SSSVN-159) Handled the failure to deliver NOTICE to TCAP stack in an edge case. (SSSVN-165) The uncaught exception handler is now initialized earlier. (SSSVN-248) The example hazelcast.xml.sample file now matches the in-jar default hazelcast configuration. (SSSVN-235) Disabled the hazelcast SystemLogService by default as it has a small memory leak. (SSSVN-192) Fixed a NullPointerException in OgtInfoHelper.compare, which resulted in display-info-ogtinfo, snmpwalk and retrieving snmp stats to fail under certain conditions. (SSSVN-363) Fixed the SGC system OID to be the correct OID. (SSSVN-371) The SGC no longer spams stdout (or the startup log) with state info. (SSSVN-146) Improvements in this release: Outbound global title translation errors are now logged at WARN level. (SSSVN-43) Added generate-report.sh script for easy log gathering. (SSSVN-191) ocss7 1.0.1.8 Bug fixes in this release: The default Hazelcast configuration has been changed so that the SGC now detects cluster member failure more quickly. (SSSVN-295/SSSVN-280) Changed the default values of ocss7.trdpCapacity , ocss7.schNodeListSize, ocss7.taskdpCapacity, ocss7.wgQueuesSize, and ocss7.senderQueueSize; this provides a more coherent default configuration. (SSSVN-274) Fixed an issue which caused the SGC not to raise alarms for certain configuration items (ASes, associations, etc) if it was unable to activate them after starting up. (SSSVN-129) Fixed a issue which would cause the SGC to throw an AssertionError and exit abnormally after receiving a TC-BEGIN which could not be handled because of insufficient resources. (SSSVN-302) Fixed an issue which could cause the SGC to crash under certain M3UA failure conditions. (SSSVN-196) Fixed an issue which could cause DPC configuration changes for segmentation not to be applied when changed. (SSSVN-211) An error is now returned to the CGIN user when attempting to initiate a dialog to a global title longer than the supported maximum. (SSSVN-53) Fixed a SGC crash that could occur when unregistering alarms. (SSSVN-255) display-info-associationinfo no longer intermittently displays out of date information following disabling of an active association. (SSSVN-285/SSSVN-279) Fixed an issue with the CLI which caused it to report unknown command errors when the connection to the SGC was lost. (SSSVN-99) ocss7 1.0.1.3 Improvements in this release: SCCP XUDT support extended to include reassembly of incoming segments, segmentation of outgoing messages, and configuration of segmentation parameters and preferred message type per destination Point Code. (SSSVN-79) Added SGC startup option --seed, which can be used to provide an alternative entropy source to the SGC’s encryption facilities. (SSSVN-116) When deactivating an M3UA connection to a peer the SGC now sends ASP-INACTIVE and ASP_DOWN rather than simply using the SCTP Shutdown procedure. (SSSVN-123) SGC local port configuration validation prevents the use of ports below 1,024. (SSSVN-124) Bug fixes in this release: The SGC will now restart correctly if terminated by an uncaught Exception or Error. (SSSVN-157) Fixed an issue causing SCTP association down alarms to be removed and replaced periodically while the association remained down. (SSSVN-120) Fixed MIB syntax errors which prevented some SNMP clients from accepting the OCSS7 MIB. (SSSVN-1) Fixed timestamps sent by SNMP, which now send the day of the month, but used to send the day of the year. (SSSVN-2) Corrected the spelling of CLI configuration parameter table.format.maxCellContentLength. (SSSVN-51) Input is now validated when changing boolean configuration attributes. (SSSVN-61) Fixed an SCCP handling bug which could cause the SGC to exit if an unrecognised optional parameter was received. (SSSVN-126) Fixed an issue allowing overly long GTs to be placed in inbound-gtt configuration. (SSSVN-152) Fixed in issue allowing overly long GTs to be placed in replace-gt configuration. (SSSVN-54) ocss7 1.0.0.9 Improvements in this release: The SGC’s SNMP MIB description strings have been improved. (SSSVN-58) If both global title and point code are absent on an incoming SCCP message, set the SCCP OPC to that of the L3MTP OPC, in order that we’re able to route responses to this message. (SSSVN-104) ocss7 1.0.0.8 Bug fixes in this release: Support A, D and E sccp address global title digits (defined as spare in Q.713). (SSSVN-42) ocss7 1.0.0.7 Bug fixes in this release: Inbound GTT rules may omit the SSN, allowing the CdPA SSN to be retained. (SSSVN-4) Inbound GTT rules with duplicate NAI/NP/TT/digits can no longer be created. (SSSVN-5) Fixed incorrect outbound GTT for messages with: GT present, PC present, SSN absent. (SSSVN-11) ocss7 1.0.0.6 Initial release.
https://docs.rhino.metaswitch.com/ocdoc/books/ocss7/2.1.0/ocss7-changelog/index.html
2021-09-16T19:00:56
CC-MAIN-2021-39
1631780053717.37
[]
docs.rhino.metaswitch.com
. Why aggregate?¶ the An Aggregation Case Study to build the Opportunity page discussed throughout this topic. Create an Aggregate Model¶ Creating an aggregate model consists of three steps—plus any desired conditions. Create the model [[]]¶ In a Skuid page, click Models. ClickAdd Model and edit the model properties: - Model Id: Give the model a unique name. - Data Source Type: Salesforce or other SQL data source. Note Aggregate models can only be used with the Salesforce and SQL data sources. Data Source: Your specific Salesforce or SQL data source External Object Name: The object that contains the fields to be aggregated. Note This property is called Salesforce Object Name when the Salesforce data source type is selected. Model Behavior: Aggregate Max # of records (Limit): Typically left blank, this field limits the number of groups—determined by a model’s groupings—displayed in components. Aggregations always retrieve all the records for a field, but if you only wish to retrieve the first X number of groups, use this field. When the model behavior changes to Aggregate, the model elements (listed below the model name in the App Elements pane) shift from Fields, Conditions, and Actions to Aggregations, Conditions, Groupings, Actions. Ordering fields in a table [[]]¶ If desired, display display in the order selected. Create an aggregation [[]]¶ What criteria are used to choose what fields to aggregate? Essentially, aggregations ask the user to determine what information they need collected or compiled across a large number of records. Click the aggregate model. Click Aggregations. ClickAdd to the left the field to be used for the aggregation. - If using a SUM, AVG, MAX, or MIN aggregation function (see below), be sure to select a field that returns a numeric value. Warning It’s not possible to aggregate on a field that is also used as the grouping field. Click the new aggregation and indicate the type of aggregation: - displayed when Add field(s)* is clicked from a table. Create a grouping [[]]¶,? Click an aggregate model. ClickGroupings. Select the grouping method. There are two options: Note You can only choose one grouping method for the model, even if there is more than one grouping created. If changing this method after adding additional groupings, it may be necessary to re-add any groupings using the other option. - Simple: Use with aggregations that will product basic totals. - Rollup: This grouping method calculates a SUM total of all rows—which are created by groupings—in the model, and appends that sum as an extra row to the model. Note Since [[]]¶ If groupings provide a way to “slice” up aggregations into meaningful buckets, multiple groupings provide a way to similarly split a group for increased granularity. [[]]¶. Add conditions [[]]¶ Just as in basic models, aggregate models use conditions to limit the queried data coming into the model. Working with the Salesforce HAVING clause [[]]¶. Troubleshooting¶ General issues [[]]¶ Because aggregate models include a number of moving parts, basic troubleshooting starts by checking all the elements needed to make the models work: -). Error Messages [[]]¶ Non-grouped query that uses overall aggregate functions cannot also use LIMIT (Skuid on Salesforce)¶ Normally, if there is no grouping field, having a limit on an aggregate model will cause this failure. However, if there is a grouping field and a limit on an aggregate model, this error may mean there is a permissions issue in Salesforce and the running user does not have access to the field(s) selected for grouping. There are a few ways to resolve this error: - Make sure there is a grouping. - Remove the limit and see if the query works. - Try adding a standard model and table with the fields you are grouping on to see if you can access the fields.
https://docs.skuid.com/latest/v1/en/skuid/models/aggregate-model/index.html
2021-09-16T18:42:05
CC-MAIN-2021-39
1631780053717.37
[]
docs.skuid.com
It is a form tool used for e-signing in lists and outputs. E-Signature tool icon at Toolbox How it looks like on the Canvas Properties Properties of this tool ; Name: The name of the tool will be written here. To edit, go to the Property Panel which is located on the right. This area is saved to the database. If the Display area is chosen as No, it makes E-Signature not seem on the client screen. Otherwise, the opposite happens. Actions Validation: Assign an “Action” as on value tool handler. For more information please click here. Example Used You can edit the text color as white and background color as black from the Color area. Client view on screen
https://docs.xpoda.com/hc/en-us/articles/360011665339-E-Signature
2021-09-16T19:00:51
CC-MAIN-2021-39
1631780053717.37
[array(['/hc/article_attachments/360008546140/b29.jpg', 'b29.jpg'], dtype=object) array(['/hc/article_attachments/360008546160/b30.jpg', 'b30.jpg'], dtype=object) array(['/hc/article_attachments/360008549839/b31.jpg', 'b31.jpg'], dtype=object) array(['/hc/article_attachments/360014707680/mceclip0.png', 'mceclip0.png'], dtype=object) ]
docs.xpoda.com
This document is old and some of the information is out-of-date. Use with caution. Strategy for finding leaks¶{.external .text})). Start finding and fixing leaks by running part of the task under nsTraceRefcnt logging, gradually building up from as little as possible to the complete task, and fixing most of the leaks in the first steps before adding additional steps. (By most of the leaks, I mean the leaks of large numbers of different types of objects or leaks of objects that are known to entrain many non-logged objects such as JS objects. Seeing a leaked GlobalWindowImpl, nsXULPDGlobalObject, nsXBLDocGlobalObject, or nsXPCWrappedJS is a sign that there could be significant numbers of JS objects leaked.) For example, start with bringing up the mail window and closing the window without doing anything. Then go on to selecting a folder, then selecting a message, and then other activities one does while reading mail. Once you’ve done this, and it doesn’t leak much, then try the action under trace-malloc or LSAN or Valgrind to find the leaks of smaller graphs of objects. (When I refer to the size of a graph of objects, I’m referring to the number of objects, not the size in bytes. Leaking many copies of a string could be a very large leak, but the object graphs are small and easy to identify using GC-based leak detection.) What leak tools do we have?¶ Tool Finds Platforms Requires Leak tools for large object graphs Leak Gauge Windows, documents, and docshells only All platforms Any build GC and CC logs JS objects, DOM objects, many other kinds of objects All platforms Any build Leak tools for medium-size object graphs BloatView, refcount tracing and balancing Objects that implement nsISupports or use MOZ_COUNT_{CTOR,DTOR} All tier 1 platforms Debug build (or build opt with --enable-logrefcnt) Leak tools for debugging memory growth that is cleaned up on shutdown Common leak patterns¶ When trying to find a leak of reference-counted objects, there are a number of patterns that could cause the leak: Ownership cycles. The most common source of hard-to-fix leaks is ownership cycles. If you can avoid creating cycles in the first place, please do, since it’s often hard to be sure to break the cycle in every last case. Sometimes these cycles extend through JS objects (discussed further below), and since JS is garbage-collected, every pointer acts like an owning pointer and the potential for fan-out is larger. See bug 106860{.external .text} and bug 84136{.external .text} for examples. (Is this advice still accurate now that we have a cycle collector? --Jesse) Dropping a reference on the floor by: Forgetting to release (because you weren’t using nsCOMPtrwhen you should have been): See bug 99180{.external .text} or bug 93087{.external .text} for an example or bug 28555{.external .text} for a slightly more interesting one. This is also a frequent problem around early returns when not using nsCOMPtr. Double-AddRef: This happens most often when assigning the result of a function that returns an AddRefed pointer (bad!) into an nsCOMPtrwithout using dont_AddRef(). See bug 76091{.external .text} or bug 49648{.external .text} for an example. [Obscure] Double-assignment into the same variable: If you release a member variable and then assign into it by calling another function that does the same thing, you can leak the object assigned into the variable by the inner function. (This can happen equally with or without nsCOMPtr.) See bug 38586{.external .text} and bug 287847{.external .text} for examples. Dropping a non-refcounted object on the floor (especially one that owns references to reference counted objects). See bug 109671{.external .text} for an example. Destructors that should have been virtual: If you expect to override an object’s destructor (which includes giving a derived class of it an nsCOMPtrmember variable) and delete that object through a pointer to the base class using delete, its destructor better be virtual. (But we have many virtual destructors in the codebase that don’t need to be – don’t do that.) Debugging leaks that go through XPConnect¶ Many large object graphs that leak go through XPConnect{.external .text}. This can mean there will be XPConnect wrapper objects showing up as owning the leaked objects, but it doesn’t mean it’s XPConnect’s fault (although that has been known to happen{.external .text}, it’s rare). Debugging leaks that go through XPConnect requires a basic understanding of what XPConnect does. XPConnect allows an XPCOM object to be exposed to JavaScript, and it allows certain JavaScript objects to be exposed to C++ code as normal XPCOM objects. When a C++ object is exposed to JavaScript (the more common of the two), an XPCWrappedNative object is created. This wrapper owns a reference to the native object until the corresponding JavaScript object is garbage-collected. This means that if there are leaked GC roots from which the wrapper is reachable, the wrapper will never release its reference on the native object. While this can be debugged in detail, the quickest way to solve these problems is often to simply debug the leaked JS roots. These roots are printed on shutdown in DEBUG builds, and the name of the root should give the type of object it is associated with. One of the most common ways one could leak a JS root is by leaking an nsXPCWrappedJS object. This is the wrapper object in the reverse direction -- when a JS object is used to implement an XPCOM interface and be used transparently by native code. The nsXPCWrappedJS object creates a GC root that exists as long as the wrapper does. The wrapper itself is just a normal reference-counted object, so a leaked nsXPCWrappedJS can be debugged using the normal refcount-balancer tools. If you really need to debug leaks that involve JS objects closely, you can get detailed printouts of the paths JS uses to mark objects when it is determining the set of live objects by using the functions added in bug 378261{.external .text} and bug 378255{.external .text}. (More documentation of this replacement for GC_MARK_DEBUG, the old way of doing it, would be useful. It may just involve setting the XPC_SHUTDOWN_HEAP_DUMP environment variable to a file name, but I haven’t tested that.) Post-processing of stack traces¶ On Mac and Linux, the stack traces generated by our internal debugging tools don’t have very good symbol information (since they just show the results of dladdr). The stacks can be significantly improved (better symbols, and file name / line number information) by post-processing. Stacks can be piped through the script tools/rb/fix_stacks.py to do this. These scripts are designed to be run on balance trees in addition to raw stacks; since they are rather slow, it is often much faster to generate balance trees (e.g., using make-tree.pl for the refcount balancer or diffbloatdump.pl --use-address for trace-malloc) andthen run the balance trees (which are much smaller) through the post-processing. Getting symbol information for system libraries¶ Windows¶ Setting the environment variable _NT_SYMBOL_PATH to something like symsrv*symsrv.dll*f:\localsymbols* as described in Microsoft’s article{.external .text}. This needs to be done when running, since we do the address to symbol mapping at runtime. Linux¶ Many Linux distros provide packages containing external debugging symbols for system libraries. fix_stacks.py uses this debugging information (although it does not verify that they match the library versions on the system). For example, on Fedora, these are in *-debuginfo RPMs (which are available in yum repositories that are disabled by default, but easily enabled by editing the system configuration). Tips¶ Disabling Arena Allocation¶ With many lower-level leak tools (particularly trace-malloc based ones, like leaksoup) it can be helpful to disable arena allocation of objects that you’re interested in, when possible, so that each object is allocated with a separate call to malloc. Some places you can do this are: layout engine : Define DEBUG_TRACEMALLOC_FRAMEARENA where it is commented out in layout/base/nsPresShell.cpp glib : Set the environment variable G_SLICE=always-malloc Other References¶ - Leak Debugging Screencasts{.external .text} LeakingPages - a list of pages known to leak mdc:Performance{.extiw} - contains documentation for all of our memory profiling and leak detection tools
https://firefox-source-docs.mozilla.org/performance/memory/leak_hunting_strategies_and_tips.html
2021-09-16T19:05:34
CC-MAIN-2021-39
1631780053717.37
[]
firefox-source-docs.mozilla.org
You can call this tool by pressing Ctrl+Alt+2 or by selecting the menu item: Tools->IDE Tools->Opcode Search. To find an opcode, type some words in the input line, e.g. actor car. The tool displays the opcodes with these words. You can also use special search operators. When the tool is open, it checks if there is a selected word in the editor. If the selected word is found, it will be copied into the search field. To copy an opcode into clipboard, select it in the list and press Enter. To add another opcode to the clipboard content press Shift+Enter. To copy the entire results list press F2. Enter copy selected opcode onto clipboard Shift+Enter add selected opcode to the clipboard F1 - show help information F2 - copy all opcodes from the results list into clipboard F3 - sort the list by opcodes F4 - sort the list alphabetically F11 - clear the search field, display all opcodes ESC - close the tool window A single space between words serves as the AND operator: @ player Finds all opcodes with both @ and the word player. The pipe character | is the OR operator: @ | player Finds all opcodes with either @ or player. If you write | as the first character in the search, the tool will connect all the following words with the OR operator. | actor player car Finds opcodes with either actor, player or car. Two dashes before the word exclude opcodes with this word from the result: car --actor Finds opcodes with car but without actor. A single dash can be used before identifiers or $ and @ characters, but not numbers: -10 [email protected] -car ---1 Finds opcodes with the number -10 and without @, car and -1 ^ - shows only conditional opcodes ^word - finds word in the conditional opcodes: ^car == Finds all conditional opcodes with car and ==. -^ - excludes all conditional opcodes from the list: player -^ Finds all non-conditional opcodes with player. -^word - excludes the conditional opcodes with word: player -^actor Finds opcodes with player, with the exception of the conditional opcodes with actor. % - finds opcodes with the words in the given order: % @ = @ Finds opcodes where @ = @ follow each other (possibly with other words in between).
https://docs.sannybuilder.com/editor/opcode-search-tool
2021-09-16T18:49:35
CC-MAIN-2021-39
1631780053717.37
[]
docs.sannybuilder.com
Snakemake Executor Tutorials¶ This set of tutorials are intended to introduce you to executing cloud executors. We start with the original Snakemake tutorial and expand upon it to be run in different cloud environments. For each run, we show you how to: - authenticate with credentials, if required - prepare your workspace - submit a basic job - generate an error and debug The examples presented in these tutorials come from Bioinformatics. However, Snakemake is a general-purpose workflow management system for any discipline. We ensured that no bioinformatics knowledge is needed to understand the tutorial. - Google Life Sciences Tutorial - Auto-scaling Azure Kubernetes cluster without shared filesystem
https://snakemake.readthedocs.io/en/stable/executor_tutorial/tutorial.html
2021-09-16T17:55:25
CC-MAIN-2021-39
1631780053717.37
[]
snakemake.readthedocs.io
Date: Tue, 28 Feb 2012 08:44:55 +0000 From: Matthew Seaman <[email protected]> To: [email protected] Subject: Re: "find" not traversing all directories on a single zfs file system Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]> <[email protected]> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help This is an OpenPGP/MIME signed message (RFC 2440 and 3156) --------------enig7D9A41A9B34CB51B4F6BAE31 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable On 28/02/2012 02:21, Robert Banfield wrote: > I have some additional information that I didnt see before actually > digging into the log file. It is quite interesting. There are 82,206 > subdirectories in one of the folders. Like this: >=20 > /zfs_mount/directoryA/token[1-82206]/various_tileset_files >=20 > When looking at the output of find, here is what I see: >=20 > Lines 1-9996943: The output of find, good as good can be > Lines 9996944-10062479: Subdirectory entries only, it traversed none o= f > them. >=20 > Notice 10062479-9996944+1 =3D 65536 =3D 2^16 >=20 > So, of the 82206 subdirectories, the first 82206-2^16 were traversed, > and the final 2^16 were not. The plot thickens... Now this is very interesting indeed. 80,000 subdirectories is quite a lot.. As is a grand total of more than 10,000,000 files. Hmmm... and you see the find problem just when searching within the structure under directoryA? I think you have found a bug, although whether it is in find(1), the filesystem or elsewhere is not clear. Given that 'ls -R' shows the same problem, the bug could be in fts(3). Still, that's a testable hypothesis. Let me see if I can reproduce the problem. Cheers, Matthew --=20 Dr Matthew J Seaman MA, D.Phil. 7 Priory Courtyard Flat 3 PGP: Ramsgate JID: [email protected] Kent, CT11 9PW --------------enig7D9A41A9B34CB51B4F6BAE31MlA4ACgkQ8Mjk52CukIyTxQCglbQ/KZOOoBa9/XQghEFL0r6b 5ccAnjzQpBeylXIrtmSrk31Dt6p6I8gM =wyQF -----END PGP SIGNATURE----- --------------enig7D9A41A9B34CB51B4F6BAE31-- Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=271450+0+archive/2012/freebsd-questions/20120304.freebsd-questions
2021-09-16T18:42:29
CC-MAIN-2021-39
1631780053717.37
[]
docs.freebsd.org
Real-Time Operational Analytics: Compression Delay option with NCCI and the performance The previous blog showed the scenario and benefits of compression-delay option for NCCI. In this blog, I describe an experiment on a transactional workload representing Order Management to measure the effectiveness of compression delay. Workload It is an order management application. New orders are inserted and they go through multiple updates over the next 45 minutes and then they becomes dormant. We ran this workload in two distinct phases. In phase-1, the concurrent transaction workload is run for a fixed duration creating/processing new orders. At the end of phase-1, we measure how many orders were processed. In phase-2, we run a fixed number of concurrent analytics queries in a loop and measure how long it took to complete them. This experiment was run both with compression delay (a) 0 and (b) 45 minutes. Here are the results Compression Delay = 0 minutes (default) - Total numbers of orders processed = 466 million - Total on-disk storage for NCCI = 13.8 GB - Total time taken to run the fixed set of analytics queries = 03:06:09 Compression Delay = 45 minutes - Total numbers of orders processed = 541 million - Total on-disk storage for NCCI = 9.9 GB - Total time taken to run the fixed set of analytics queries = 02:19:53 What it shows that with compression delay, we speed up transactional workload approximately 15%, reduced storage footprint by 30% and improved the performance of analytic queries by 20%+. All this was done by just setting the compression delay without requiring any changes to the application. Note, the compression delay is just an option on the index. You can change it anytime and it does not require index rebuild. Note, compression delay option is supported on all forms of columnstore indexes. Thanks Sunil
https://docs.microsoft.com/en-us/archive/blogs/sqlserverstorageengine/real-time-operational-analytics-compression-delay-option-with-ncci-and-the-performance
2021-09-16T19:46:10
CC-MAIN-2021-39
1631780053717.37
[]
docs.microsoft.com
This operation handles bar code reading from the camera on mobile devices. Scan Barcode From Camera operation at Property Panel Properties Description: Description related to action is written. Heading: When added as Form Action, it specifies the name in the action list. Linked Field: The field to write the result. are a textbox and a button in the form. This button has a Scan Barcode from Camera operation with When clicked event. The linked field is the area where the data will be written to the textbox when the barcode is scanned. This operation runs with a mobile phone camera.
https://docs.xpoda.com/hc/en-us/articles/360011575300-Scan-Barcode-from-Camera
2021-09-16T18:54:54
CC-MAIN-2021-39
1631780053717.37
[array(['/hc/article_attachments/360015094119/mceclip0.png', 'mceclip0.png'], dtype=object) array(['/hc/article_attachments/360015110020/mceclip1.png', 'mceclip1.png'], dtype=object) array(['/hc/article_attachments/360015094219/mceclip2.png', 'mceclip2.png'], dtype=object) ]
docs.xpoda.com
This is an old revision of the document! In this dialog, you can define the following properties for some link between tables: The “Join conditions” panels consists of two main parts. Firs part (at the top) is the list of conditions, second (at the bottom) contains different control which allows to define new condition and add it into the list. Delete button at the right of condition list deletes a selected condition. Clear button - clears all conditions completely. To add new condition you need to do the following: Discussion
http://docs.korzh.com/easyquery/data-model-editor/edit-link-dialog?rev=1378913700
2021-09-16T19:49:15
CC-MAIN-2021-39
1631780053717.37
[]
docs.korzh.com
To leverage Appian Record functionality, it all starts with a record type object. The record type is the design object that allows you to configure the source of your records, define record views, create a record list, and more. This page explains how to create a new record type object and how to navigate the record type object after its creation. If you already have a record type object, learn how to configure the record type by: For a full, guided experience creating and configuring a record type, see the Records Tutorial. With each Appian release, the record type object is improved with new components, features, and functions. To use these new enhancements, update your existing record type objects created in 20.2 or earlier. Record types are created in Appian Designer. To create a record type: From the New dropdown, select Record Type. Directly after creating your record type, you need to add the record type security. Once you save the record type security, the record type object will open in a new tab by default. As you continue to modify and define your record type you may encounter guidance. Appian design guidance reinforces best practice design patterns that should be implemented in your objects. Guidance is calculated while editing expressions within the record type or when a precedent of the record type record type. Learn more about recommendation dismissal. Warnings cannot be dismissed and should always be addressed to avoid complications when the logic in the object is executed. Record type design guidance is also visible outside of the object on the Health Dashboard. See design guidance for the full list of possible guidance. Appian design guidance is not available for the User record type. After you create the record type object, you will configure the record type. Appian recommends configuring the record type in three phases: (1) defining the source data, (2) creating your record views and actions, and (3) configuring your record list and actions. The first element you'll want to configure is the record data. Your record data is a combination of data from a data source and filters on the data source. To configure the record data, you'll perform the following steps: To define the data source for your record type, you'll use a guided experience to connect to a data source. You can choose one of the following as the source of your record type: When you define the data source for your record type, Appian allows you to choose how the record type queries the data. You can choose to query directly from the data source by simply selecting the source type. Alternatively, you can enable data sync to cache a copy of your source data in Appian. This way, the record type only has to query the synced data instead of the external source, allowing you to make changes in your application faster and leverage sync-enabled features. If you enable data sync, you can configure source filters to limit which rows from your source are synced in Appian. Using source filters, you can ensure you are only working with relevant data, which can improve query performance, and use data from larger data sources in your record type without exceeding the row limit for record types with sync enabled. There are some data structures that are better fit for data sync than others. Before enabling sync, review When to use data sync to ensure your data structure is a good fit. After enabling data sync, set yourself up for faster app development and smarter data by establishing record type relationships. These relationships allow you to easily reference related record data from other record types without building extensive queries or database views. To finish configuring the record data, you'll want to consider creating default filters. Default filters determine which records in the record type are available to end users. Default filters are useful when you need to exclude data from certain users or groups, or if you need to create complex conditions to personalize your record list for each user. Once you've configured your record data, each row of data is represented as a record in Appian. But a record is more than just a row of data. Each record is made up of record views and related actions, which together create a more comprehensive view of your data. After you define the record data, perform the following steps: Record views present information about a single record to your end users. You can have multiple record views to present information about a record in a variety of ways. To define a record view, you'll call an interface object to display the record information. The layout and data that display for each record is determined by the expression used to define the views. Before you define a view in the record type, create a record view interface. As you build the interfaces for each record view, consider the different ways you want to present the record information. By default, each record type will have a Summary view. If your record type has data sync enabled, Appian can generate an interface and use it to configure the Summary view automatically. Learn how to generate a Summary view. Once you configure your record views, you can style the record header. The record header appears at the top of each record view as the background and contains the title, breadcrumbs, and related actions. Now that your record can be analyzed from different perspectives using record views, it's time to add related actions. Related actions are links to process models that the user can start directly from a record using information about that record. Related action process models are the same as any other process model. Before you add related actions, make sure you build a process model that can pass the record data. Learn how to create a process model. If your record type has data sync enabled, Appian can generate common related actions and the necessary process models using basic information you provide. Common related actions include updating a record and deleting a record. Learn how to generate record actions. Once you have your process model and configure related actions, they will appear in the Related Actions view on each record. You can also display related actions on individual record views using related action shortcuts or on an interface using the record action component. Now that you have your record data and your records ready to go, the last element to configure is the record list. The record list allows you to present a list of multiple records to end users so they can search and filter to find the records they need. In the final phase of configuring the record type, perform the following steps: The record list displays a list of records as either a grid-style or feed-style list. The record list itself is a responsive display of all of the records for a given record type. Depending on the style you choose, the way the records appear in the list will vary. You can also enable users to export the record list to Excel. Next, you'll want to add a record list action that lets users add new records to the list. Optionally, you can add more record list actions to support your business needs. Similar to a related action, a record list action is a link to a process model; however, the user can start this link directly from the record list. As you did for a related action, before you add a record list action, make sure you build a process model that can pass the record data. Learn how to create a process model. If your record type has data sync enabled, Appian can generate common record list actions and the necessary process models using basic information you provide. Common record list actions include creating a record. Learn how to generate record actions. Once you have your process model and configure record list actions, they will appear as buttons on the record list. You can also display record list actions on an interface using the record action component. Once your record list is configured, you can create interactive filters so users can determine which records appear on the list. You can use a guided configuration or use an expression to create user filters. All user filters will be available to end users unless otherwise specified. Once you configure the user filters, they will appear above the record list. That's it! Once you finish configuring the list, you can display your records on a site, in read-only grids or charts, and Tempo. Learn more about where to display records. On This Page
https://docs.appian.com/suite/help/21.3/Create_a_Record_Type.html
2021-09-16T19:22:29
CC-MAIN-2021-39
1631780053717.37
[]
docs.appian.com
Packages: instantjchem-VERSION-windows-x32.exe instantjchem-VERSION-windows-x64.exe instantjchem-VERSION-windows.msi After downloading, double-click on instantjchem-VERSION-windows.exe to start the installer. Instant JChem will be wrapped out into the selected directory. {primary} Instant JChem uses its own Java (that it is bundled in the installer). If you want to use another Java, see How to change IJC's Java. {primary} Vista users who updates from 2.3.1 to 2.4: After installation of new IJC, starting of new IJC application may try to run still the old version. That is the side effect of Virtual File System of Vista. Vista protects the writting of the Program Files folder from normal users. Unless you access it in administrator mode , those files that are copied into “Program Files”, will not be stored there. Instead of that, they are saved in your home directory. Despite of it, your application can see these files in the “Program Files” folder. By upgrade, the situation is similar. If installers are run in administrator mode but update modules launched from IJC application (in non-administrator mode), two different users writes the same folder. If you would like to remove this IJC version, installer can removes only those files that are located into the directory that it created. The workaround : Delete the following directory: <HOME>\AppData\Local\VirtualStore\<INSTANTJCHEMPATH> where <HOME> is your home directory and <INSTANTJCHEMPATH> is the relative path of the codebase. For example: C:\Users\myaccount\AppData\Local\VirtualStore\Program Files\ChemAxon\InstantJChem)
https://docs.chemaxon.com/display/lts-gallium/installation-on-windows.md
2021-09-16T17:58:30
CC-MAIN-2021-39
1631780053717.37
[]
docs.chemaxon.com
Build RESTful APIs You. API proxies give you the full power of Apigee's API platform to secure API calls, throttle traffic, mediate messages, control error handling, cache things, build developer portals, document APIs, analyze API traffic data, make money on the use of your APIs, protect against bad bots, and more. Build API proxies The first practical step in using Apigee is building API proxies. Whether you start with a hello world API proxy or dive in with OAuth security, Node.js, caching, conditional routing, and so on, proxies are the foundation of building out your API program to share with internal and external developers. Publish APIs & control access After you build API proxies, you're ready to set up access controls, register developers and apps, generate API keys, and publish your APIs on developer portals. Analyze & Troubleshoot With traffic flowing through your API proxies, it's time to analyze API traffic with charts and reports. Analyzing API traffic is a critical step in fine tuning and troubleshooting your APIs. Monetize APIs As an API provider, Apigee's monetization features let you set up a variety of plans to charge for the use of your APIs (or pay royalties to developers). Developer resources The following samples, videos, and tools help API proxy developers work more efficiently and productively. Edge for Private Cloud Install and manage Edge in your own cloud environment, where you control the system components' configuration, including load balancers, routers, message processors, databases, and identity providers. Apigee Sense Apigee Sense protects your APIs from unwanted request traffic, including attacks from malicious clients. Using Sense analysis, you can identify clients making unwanted requests, then take action to allow, block, or flag those requests.
https://docs.apigee.com/?hl=he
2021-07-23T19:37:42
CC-MAIN-2021-31
1627046150000.59
[]
docs.apigee.com
The check_cpanel_rpms Script Last modified: June 17, 2021 Overview The /usr/local/cpanel/scripts/check_cpanel_rpms script scans every installed RedHat® Package Manager (RPM) file on your server for problems. This script can also reinstall any affected cPanel & WHM RPMs to repair them. To run the /usr/local/cpanel/scripts/check_cpanel_rpms script nightly, use the Maintenance cPanel RPM Check and Maintenance cPanel RPM Digest Check settings in the Software section of WHM’s Tweak Settings interface (WHM >> Home >> Server Configuration >> Tweak Settings). Script functions The /usr/local/cpanel/scripts/check_cpanel_rpms script performs four basic functions each time that it runs: - Discovers missing RPMs. - Tracks RPMs that are out-of-date and need updates. - Checks for any altered RPMs. Altered RPMs meet any of the following conditions: - Unordered sub-list. - Unordered sub-list. - Their mode has changed. - An MD5 checksum does not exist. - They are symlinks, and the file points to the wrong path. - They are missing. - Checks whether to uninstall any cPanel-managed RPMs. - The /usr/local/cpanel/scripts/check_cpanel_rpmsscript runs for a few minutes. If it does not detect any problems, it will not produce any output and exit to the command prompt. - The /usr/local/cpanel/scripts/check_cpanel_rpmsscript does not check for problems with incorrect file permissions. Run the script To run the /usr/local/cpanel/scripts/check_cpanel_rpms script on the command line, use the following format: /usr/local/cpanel/scripts/check_cpanel_rpms [options] Options You can use the following options with the /usr/local/cpanel/scripts/check_cpanel_rpms script: Example For example, to use the --fix option, run the following command: /usr/local/cpanel/scripts/check_cpanel_rpms --fix Checks performed The /usr/local/cpanel/scripts/check_cpanel_rpms script runs the rpm -Vv check on all cPanel-managed RPMs. This checks for changes in the files since their installation. The script does not check configuration and documentation files. If the output indicates that only Mode or mTime have changed, the script will not report that as an altered RPM. The output of the rpm -Vv check lists the following changes:
https://docs.cpanel.net/knowledge-base/rpm-versions/the-check_cpanel_rpms-script/
2021-07-23T18:49:07
CC-MAIN-2021-31
1627046150000.59
[]
docs.cpanel.net
Info Breakpoints¶ info breakpoints [ bp-number… ] Show status of user-settable breakpoints. If no breakpoint numbers are given, the show all breakpoints. Otherwise only those breakpoints listed are shown and the order given. The columns in a line show are as follows: - The “Num” column is the breakpoint number which can be used in condition, delete, disable, enable commands. - The “Disp” column contains one of “keep”, “del”, the disposition of the breakpoint after it gets hit. - The “enb” column indicates whether the breakpoint is enabled. - The “Where” column indicates where the breakpoint is located.
https://zshdb.readthedocs.io/en/latest/commands/info/breakpoints.html
2021-07-23T19:47:22
CC-MAIN-2021-31
1627046150000.59
[]
zshdb.readthedocs.io
Couchbase TLS Transport layer security (TLS) is responsible for securing data over networks. This section documents what security protections are available. Basic TLS Configuration Couchbase Server supports TLS out of the box. Couchbase Server generates a self-signed CA certificate for the whole cluster. Each pod that is added to the cluster gets a certificate valid for its hostname. The cluster CA certificate is available from the Couchbase UI and may be given to any client in order to authenticate that a Couchbase Server connection is to a trusted pod. The cluster CA may also be used to secure cross data center replications (XDCR). Managed TLS Configuration When using basic configuration the end user has no control over how the certificates are generated. When using the public connectivity feature for example — the Couchbase Cluster is accessed from outside of the Kubernetes cluster — the internal DNS names that would be automatically generated by Couchbase Server are no longer valid. The certificate needs to be valid for an alternative public DNS name. Likewise the Prometheus exporter when operating over TLS needs to access the local pod on localhost. It may also be desirable to integrate Couchbase Server into an existing corporate TLS hierarchy. Managed TLS gives the ability for you to use any TLS certificates you like with Couchbase Server. A certificate may be a single certificate or an entire certificate chain. The only basic constraint imposed by the Operator is that the certificate contain at least a wildcard DNS subject alternative name (SAN) that is valid for any pod name the may be generated during the cluster life cycle. The Operator does not act as a certificate authority (CA) that could be used to sign arbitrary certificates. For configuration details please see the TLS configuration how-to. TLS Client Authentication Couchbase Server supports mutual TLS (mTLS). With this mode of operation not only do clients verify they are talking to a trusted entity, but the Couchbase Server instance can also establish trust in the client. Client authentication may be mandatory or optional. When using mTLS then the certificate must also contain identity information in the form of a username that maps to a Couchbase Server user. With optional authentication if the client does not supply a certificate to Couchbase Server on request then it will fall back to basic (username & password) authentication. mTLS is fully supported by the Operator. When enabled the user must supply the Operator with a client certificate valid for the cluster administrator. For configuration details please see the TLS client certificate how-to. TLS Certificate Rotation Certificates go out of date and cause clients to stop working. Private keys corresponding to certificates can be compromised thus allowing communications to be decrypted. With this in mind, the Operator allows certificates to be rotated and replaced with new versions. The Operator supports all possible rotation types: Replacing a server certificate/key Replacing a client certificate/key Replacing the entire PKI For details on certificate rotation read the TLS certificate rotation how-to.
https://docs.couchbase.com/operator/current/concept-tls.html
2021-07-23T18:18:48
CC-MAIN-2021-31
1627046150000.59
[]
docs.couchbase.com
Installing Java Java is a popular programming language that allows you run programs on many platforms, including Fedora. If you want to create Java programs, you need to install a JDK (Java Development Kit). If you want to run a Java program, you can do that on a JVM (Java Virtual:. Additional resources For Java in Fedora, see: - Freenode IRC channel #fedora-java - For more information about Java in general, see: To develop Java applications, consider the following open-source IDEs:
https://docs.fedoraproject.org/fr/quick-docs/installing-java/
2021-07-23T19:31:29
CC-MAIN-2021-31
1627046150000.59
[]
docs.fedoraproject.org
Shuttle docking, New Zealand & XP If there is one thing I love the Internet for its watching stuff happening in space. I regularly tune in to Nasa TV when the shuttle is up to watch the goings on. As I write this the shuttle is preparing to dock with ISS. Its cool being able to watch this. Along with realtime video footage you get shots of some animations of where the shuttle and ISS are at, like the one below. Couple of things caught my eye a few mins ago ... 1) they were over home (New Zealand) & 2) the app showing all this in mission control is running on XP (nice to see it helping to run these missions).
https://docs.microsoft.com/en-us/archive/blogs/cjohnson/shuttle-docking-new-zealand-xp
2021-07-23T20:31:10
CC-MAIN-2021-31
1627046150000.59
[]
docs.microsoft.com
Animation is implemented within multiview and pager components. It sets the way the views will be changed as you switch between views or page through the component. There exist two animation types with their own subtypes: Or, instead, you can choose one of the four animation directions: Multiview uses slide:"together" animation type by default. Animation can be switched off: {view:"multiview", animate:false } Enabling Animation 1 . The simplest means is including the animate property into the multiview or pager constructor and define the necessary type and subtype: webix.ui({ view:"multiview", animate:{ type:"flip", subtype:"vertical" }, cells:[] }); 2 . The values for the animate property can be assigned directly out of the multiview constructor. If some values have already been defined with the help of the above-mentioned method, direct assignment will rewrite them: $$("multi").config.animate.type = "flip"; $$("multi").config.animate.subtype = "vertical"; 3 . By means of the show() method. The changes will be applied once while showing the stated view. Then, the default values (initial or 'rewritten') will be used. $$(id).show({type:"flip", subtype:"horizontal"}) Related sample: Animated Multiview Related sample: Paging Animation Types The moment of a Webix view initialization can be animated as well. It works for views created dynamically in the existing Webix layout. To instantiate the component with animation, you should call the webix.ui.animate() method instead of the standard webix.ui(): webix.ui.animate(obj, parent, config); where: webix.ui.animate({ id:"aboutView", template:"About page...", }, $$("listView")); Related sample: Manual View RecreatingBack to top
https://docs.webix.com/desktop__animation.html
2021-07-23T19:08:46
CC-MAIN-2021-31
1627046150000.59
[]
docs.webix.com
Analyze Update Verify Logs¶ When?¶ When update verify tasks fail it is your responsibility as releaseduty to analyze them and determine whether or not any action needs to be taken for any differences found. How?¶ Update verify tasks that have failed usually have a diff-summary.log in their artifacts. This file shows you all of the differences found for each update tested. In the diffs, source is an older version Firefox that a MAR file from the current release has been applied to, and target is the full installer for the current release. Here’s an example of a very alarming difference: Found diffs for complete update from Files source/bin/xul.dll and target/bin/xul.dll differ In the above log, xul.dll is shown to be different between an applied MAR and a full installer. If we were to ship a release with a difference like this, partial MARs would fail to apply for many users in the next release. Usually a case like this represents an issue in the build system or release automation, and requires a rebuild. If you’re not sure how to proceed, ask for help. If no diff-summary.log is attached to the Task something more serious went wrong. You will need to have a look at live.log to investigate. Known differences¶ There are no known cases where diffs are expected, so all task failures should be checked carefully. See bug 1461490 for the implementation of transforms to resolve expected differences.
http://docs.mozilla-releng.net/en/latest/procedures/misc-operations/analyze-update-verify-logs.html
2021-07-23T19:39:28
CC-MAIN-2021-31
1627046150000.59
[]
docs.mozilla-releng.net
pom_xpath_remove POM_XPATH_REMOVE(7) Java Packages Tools POM_XPATH_REMOVE(7) NAME pom_xpath_remove - remove a node from XML file SYNOPSIS %pom_xpath_remove [OPTIONS] XPath [XML-file-location]... OPTIONS -r Work in recursive mode. That means that given node is also removed. This also works on attributes and text nodes.. EXAMPLES %pom_xpath_remove pom:project/pom:reporting - this call removes reporting section from POM in current working directory. %pom_xpath_remove 'ivy:configure' build.xml - this call disables loading of ivy configuration in build.xml file. Note the use of ivy namespace which was declared in the document as xmlns:ivy="antlib:org.apache.ivy.ant"._replace(7), pom_xpath_set(7). JAVAPACKAGES 01/29/2020 POM_XPATH_REMOVE(7)
https://docs.fedoraproject.org/ms/java-packaging-howto/manpage_pom_xpath_remove/
2021-07-23T20:30:40
CC-MAIN-2021-31
1627046150000.59
[]
docs.fedoraproject.org
Handling changing files¶ Nightly emails always report errors¶ A very common cause for that is files which are either read-locked from TSM completely (more common on windows OSes) or files that change constantly during backups. The output from command line dsmc inc or the contents of dsmsched.log will list the progress and results of the backup. If files are changing the client will retry them later on, several times. In the below example, the actual files that change are log files under /var/log, and one in particular is hard to back up successfully, /var/log/audit/audit.log. Normal File--> 85,113 /var/log/secure Changed Retry # 1 Normal File--> 21,841 /var/log/messages [Sent] Retry # 1 Normal File--> 85,113 /var/log/secure [Sent] Normal File--> 144,000 /var/log/wtmp [Sent] Normal File--> 1,756,058 /var/log/audit/audit.log Changed Retry # 2 Normal File--> 85,113 /var/log/secure [Sent] Retry # 1 Normal File--> 144,000 /var/log/wtmp [Sent] Retry # 1 Normal File--> 1,758,725 /var/log/audit/audit.log Changed Retry # 2 Normal File--> 1,775,152 /var/log/audit/audit.log Changed Retry # 3 Normal File--> 1,784,573 /var/log/audit/audit.log Changed Retry # 4 Normal File--> 1,788,872 /var/log/audit/audit.log Changed Normal File--> 6,291,591 /var/log/audit/audit.log.1 [Sent] Normal File--> 6,291,599 /var/log/audit/audit.log.2 [Sent] Normal File--> 6,291,641 /var/log/audit/audit.log.3 [Sent] ANS1228E Sending of object '/var/log/audit/audit.log' failed. ANS4037E Object '/var/log/audit/audit.log' changed during processing. Object skipped. Normal File--> 6,291,561 /var/log/audit/audit.log.4 [Sent] Normal File--> 47,820,447 /var/log/icinga2/icinga2.log Changed Retry # 1 Normal File--> 47,820,447 /var/log/icinga2/icinga2.log [Sent] ANS1802E Incremental backup of '/var' finished with 1 failure(s) audit.logfile was skipped due to constant changes. In this case we know the answer to why it changes, it is because the access of all files is logged on this particular machine. So everytime dsmc tries to read audit.log, the local auditing system will log, into that very file, that "dsmc tried to read audit.log, and we allowed it". So when dsmc had read the file and sent it to the server, it checks the last-changed-date and size, noticing those have changed in that time. Total number of objects inspected: 58,902 Total number of objects backed up: 69 Total number of objects updated: 11 Total number of objects failed: 1 Total objects deduplicated: 74 Total number of retries: 30 objects failed: 1in there. This will end up on the nightly report as: UORUIJSMAMENW FILE_2000 futu 4 4 4 4 4 HOST1 EXCLUDErules to your dsm.sys/dsm.optfiles for files. Read more on Includes and Excludes In the above example, the offending file is also getting rotated by the operating system, so we are getting good backups of the already-rotated audit files, which can be a hint to add a small script to PRESCHEDCOMMAND in the dsm.opt/sys file to force rotation just before the scheduled backup is running, which means you get all data up-to that point in files which will then not be moving while the backup runs.
https://docs.safespring.com/backup/howto/changing-files/
2021-07-23T18:42:26
CC-MAIN-2021-31
1627046150000.59
[]
docs.safespring.com
Generates the Unicode character corresponding to an inputted Integer value.Generates the Unicode character corresponding to an inputted Integer value. Unic Column reference example: char(MyCharIndex) Output: Returns the Unicode value for the number in the MyCharIndex column. String literal example: char(65) Output: Returns the string: A. Syntax and Arguments: Tip: For additional examples, see Common Tasks. Examples Tip: For additional examples, see Common Tasks. Example - char and unicode functions CHARfunction can be used to convert numeric index values to Unicode characters, and the UNICODEfunction can be used to convert characters back to numeric values. Source: The following column contains some source index values: Transformation: When the above values are imported to the Transformer page, the column is typed as integer, with a single mismatched value ( 33.5). To see the corresponding Unicode characters for these characters, enter the following transformation: To see how these characters map back to the index values, now add the following transformation: Results: Note that the floating point input value was not processed. This page has no comments.
https://docs.trifacta.com/display/DP/CHAR+Function
2021-07-23T19:14:57
CC-MAIN-2021-31
1627046150000.59
[]
docs.trifacta.com
A kanban board for effective organization of team work.. Webix Kanban Board is highly customizable and can be adjusted to your needs with ease. Due to its rich API, the widget allows creating Kanban boards of various structure and complexity, adding, editing, filtering tasks, tuning their appearance, assigning tasks to the team members, etc. Check Kanban Board documentation for more information. var kanban = webix.ui({ view:"kanban", type:"space", cols:[ { header:"Backlog", body:{ view:"kanbanlist", status:"new" } }, { header:"In Progress", body:{ view:"kanbanlist", status:"work" } }, { header:"Done", body:{ view:"kanbanlist", status:"done" } } ], url: "tasks.php" });
https://docs.webix.com/api__refs__ui.kanban.html
2021-07-23T19:16:07
CC-MAIN-2021-31
1627046150000.59
[]
docs.webix.com
You need to include the following files in the head section of your document: <script src="codebase/webix.js" type="text/javascript"></script> <script src="codebase/spreadsheet.js" type="text/javascript"></script> <link rel="stylesheet" type="text/css" href="codebase/webix.css"> <link rel="stylesheet" type="text/css" href="codebase/spreadsheet.css"> To initialize SpreadSheet and load it with data, use the code as in: //webix.ready() function ensures that the code will be executed when the page is loaded webix.ready(function(){ //object constructor webix.ui({ view:"spreadsheet", //loaded data object data: base_data }); }); Related sample: Basic init After downloading Spreadsheet there are 3 ways to run package samples locally. The simplest one is to navigate to the root directory and open the samples folder. Find the desired file and open it with a double-click. The sample will be opened in a new browser tab. Running samples on a local server You can run package samples on a local server (e.g Apache). Set the Spreadsheet folder as the root directory of the local server and launch the server. In general a local server runs at localhost. Running samples on a development server To be able to modify samples and see the corresponding changes you should run them on a development server. Go to the Spreadsheet root directory, install the necessary dependencies and start the server as: Back to topBack to top // navigate to the root directory cd spreadsheet // install dependencies yarn install //or npm install // start dev server yarn server //or npm run server
https://docs.webix.com/spreadsheet__spreadsheet_init.html
2021-07-23T19:28:07
CC-MAIN-2021-31
1627046150000.59
[]
docs.webix.com
Pixel-based tracking allows affiliates to refer customers without the need for them to click an affiliate link. By placing the tracking pixel on a site, the affiliate can automatically register hits from visitors who simply view the page on which the tracking pixel is placed.. To show affiliates the image or iframe tracking pixel they can use, we use a shortcode which can be added to the affiliate area or any other page. The [affiliates_pixel] shortcode renders the HTML code that affiliates will use to place a tracking pixel. Please refer to the [affiliates_pixel] shortcode documentation page for details and examples.
http://docs.itthinx.com/document/affiliates-enterprise/pixel-tracking/
2018-09-18T17:47:18
CC-MAIN-2018-39
1537267155634.45
[]
docs.itthinx.com
Panels¶ The Django Debug Toolbar ships with a series of built-in panels. In addition, several third-party panels are available. Default built-in panels¶ The following panels are enabled by default. Version¶ Path: debug_toolbar.panels.versions.VersionsPanel Shows versions of Python, Django, and installed apps if possible. Headers¶ Path: debug_toolbar.panels.headers.HeadersPanel This panels shows the HTTP request and response headers, as well as a selection of values from the WSGI environment. Note that headers set by middleware placed before the debug toolbar middleware in MIDDLEWARE_CLASSES won’t be visible in the panel. The WSGI server itself may also add response headers such as Date and Server. SQL¶ Path: debug_toolbar.panels.sql.SQLPanel SQL queries including time to execute and links to EXPLAIN each query. Template¶ Path: debug_toolbar.panels.templates.TemplatesPanel Templates and context used, and their template paths. Static files¶ Path: debug_toolbar.panels.staticfiles.StaticFilesPanel Used static files and their locations (via the staticfiles finders). Logging¶ Path: debug_toolbar.panels.logging.LoggingPanel Logging output via Python’s built-in logging module. Redirects¶ Path: debug_toolbar.panels.redirects.RedirectsPanel When this panel is enabled, the debug toolbar will show an intermediate page upon redirect so you can view any debug information prior to redirecting. This page will provide a link to the redirect destination you can follow when ready. Since this behavior is annoying when you aren’t debugging a redirect, this panel is included but inactive by default. You can activate it by default with the DISABLE_PANELS configuration option. Non-default built-in panels¶ The following panels are disabled by default. You must add them to the DEBUG_TOOLBAR_PANELS setting to enable them. Profiling¶ Path: debug_toolbar.panels.profiling.ProfilingPanel Profiling information for the processing of the request. If the debug_toolbar.middleware.DebugToolbarMiddleware is first in MIDDLEWARE_CLASSES then the other middlewares’ process_view methods will not be executed. This is because ProfilingPanel.process_view will return a HttpResponse which causes the other middlewares’ process_view methods to be skipped. Note that the quick setup creates this situation, as it inserts DebugToolbarMiddleware first in MIDDLEWARE_CLASSES. If you run into this issues, then you should either disable the ProfilingPanel or move DebugToolbarMiddleware to the end of MIDDLEWARE_CLASSES. If you do the latter, then the debug toolbar won’t track the execution of other middleware. Third-party panels¶ Note Third-party panels aren’t officially supported! The authors of the Django Debug Toolbar maintain a list of third-party panels, but they can’t vouch for the quality of each of them. Please report bugs to their authors. If you’d like to add a panel to this list, please submit a pull request! Haystack¶ URL: Path: haystack_panel.panel.HaystackDebugPanel See queries made by your Haystack backends. HTML Tidy/Validator¶ URL: Path: debug_toolbar_htmltidy.panels.HTMLTidyDebugPanel HTML Tidy or HTML Validator is a custom panel that validates your HTML and displays warnings and errors. Inspector¶ URL: Path: inspector_panel.panels.inspector.InspectorPanel Retrieves and displays information you specify using the debug statement. Inspector panel also logs to the console by default, but may be instructed not to. Line Profiler¶ URL: Path: debug_toolbar_line_profiler.panel.ProfilingPanel This package provides a profiling panel that incorporates output from line_profiler. Memcache¶ URL: Path: memcache_toolbar.panels.memcache.MemcachePanel or memcache_toolbar.panels.pylibmc.PylibmcPanel This panel tracks memcached usage. It currently supports both the pylibmc and memcache libraries. MongoDB¶ URL: Path: debug_toolbar_mongo.panel.MongoDebugPanel Adds MongoDB debugging information. Neo4j¶ URL: Path: neo4j_panel.Neo4jPanel Trace neo4j rest API calls in your django application, this also works for neo4django and neo4jrestclient, support for py2neo is on its way. Request History¶ URL: Path: ddt_request_history.panels.request_history.RequestHistoryPanel Switch between requests to view their stats. Also adds support for viewing stats for ajax requests. Sites¶ URL: Path: sites_toolbar.panels.SitesDebugPanel Browse Sites registered in django.contrib.sites and switch between them. Useful to debug project when you use django-dynamicsites which sets SITE_ID dynamically. Template Profiler¶ URL: Path: template_profiler_panel.panels.template.TemplateProfilerPanel Shows template render call duration and distribution on the timeline. Lightweight. Compatible with WSGI servers which reuse threads for multiple requests (Werkzeug). Template Timings¶ URL: Path: template_timings_panel.panels.TemplateTimings.TemplateTimings Displays template rendering times for your Django application. User¶ URL: Path: debug_toolbar_user_panel.panels.UserPanel Easily switch between logged in users, see properties of current user. API for third-party panels¶ Third-party panels must subclass Panel, according to the public API described below. Unless noted otherwise, all methods are optional. Panels can ship their own templates, static files and views. They’re no public CSS API at this time. - class debug_toolbar.panels.Panel(*args, **kwargs)¶ Base class for panels. Title shown in the side bar. Defaults to title. Subtitle shown in the side bar. Defaults to the empty string. - has_content¶ True if the panel can be displayed in full screen, False if it’s only shown in the side bar. Defaults to True. - title¶ Title shown in the panel when it’s displayed in full screen. Mandatory, unless the panel sets has_content to False. - template¶ Template used to render content. Mandatory, unless the panel sets has_content to False or overrides attr:content`. - content¶ Content of the panel when it’s displayed in full screen. By default this renders the template defined by template. Statistics stored with record_stats() are available in the template’s context. - enable_instrumentation()¶ Enable instrumentation to gather data for this panel. This usually means monkey-patching (!) or registering signal receivers. Any instrumentation with a non-negligible effect on performance should be installed by this method rather than at import time. Unless the toolbar or this panel is disabled, this method will be called early in DebugToolbarMiddleware.process_request. It should be idempotent. - disable_instrumentation()¶ Disable instrumentation to gather data for this panel. This is the opposite of enable_instrumentation(). Unless the toolbar or this panel is disabled, this method will be called late in DebugToolbarMiddleware.process_response. It should be idempotent. - record_stats(stats)¶ Store data gathered by the panel. stats is a dict. Each call to record_stats updates the statistics dictionary. - process_request(request)¶ Like process_request in Django’s middleware. Write panel logic related to the request there. Save data with record_stats(). - process_view(request, view_func, view_args, view_kwargs)¶ Like process_view in Django’s middleware. Write panel logic related to the view there. Save data with record_stats(). JavaScript API¶ Panel templates should include any JavaScript files they need. There are a few common methods available, as well as the toolbar’s version of jQuery. This is a helper function to fetch values stored in the cookies. This is a helper function to set a value stored in the cookies.
https://django-debug-toolbar.readthedocs.io/en/1.2.2/panels.html
2018-09-18T17:24:18
CC-MAIN-2018-39
1537267155634.45
[]
django-debug-toolbar.readthedocs.io
Using Tags with Animation Graphs In the Animation Editor, you use tags to describe the current state of your character and control the transition between different states. Tags are Boolean flags that are either active (enabled) or inactive (disabled). Some examples of tags are Happy, Holding Sword, and Left Leg Injured. Adding Tags Tags are represented by animation graph parameters. When you define a parameter, you can specify a different value for each entity that uses the same animation graph and parameter. For example, you can specify a different value for the Speed parameter for each entity that uses the animation graph. Similarly, you can assign a different tag to each entity. For example, one entity has the Holding Sword tag active and another entity has the Happy tag active. For more information about parameters, see About Parameters. To create a tag In Lumberyard Editor, choose Tools, Animation Editor. In the Animation Editor, in the Parameters pane, click the + button. In the Create Parameter dialog box, do the following: For Value type, select Tag. For Name, enter a name for your tag. For Description, enter an optional description for your tag. For Default, select the check box to enable the tag. Click Create. Adding Conditions to Tags Use tag conditions to enable the state machine to change the active state. For example, you can choose a specific jump animation based on the active tag. To transition to Awesome Jump, you would enable the Freaky, Awesome, and Happy tags. You can also use tags in combination with wildcard transitions to choose a specific state that other states can access. Wildcard transitions are transitions that can originate from any node. In the preceding example, the arrow to the left of Idle represents the wildcard transition. This means you can transition from any state to the Idle state, as long as the condition for the wildcard transition is met. Tag conditions have two attributes: test function and tags. Test Function Specifies the tag status to pass the condition. You can choose from the following options: All tags active – All tags must be active or the condition blocks the change. One or more tags inactive – At least one tag must be inactive or the condition blocks the change. One or more tags active – At least one tag must be active or the condition blocks the change. No tag active – All tags must be inactive or the condition blocks the change. Specifies the tags that the condition checks for. To add a tag to a condition, select the transition line between your nodes. In the Attributes pane, select the values that you want to use. Note You can only choose from tags that are available in the Parameters pane. For more information, see Adding Tags.
https://docs.aws.amazon.com/lumberyard/latest/userguide/animation-editor-using-tags.html
2018-09-18T17:42:16
CC-MAIN-2018-39
1537267155634.45
[array(['images/anim-graph-tag-conditions-example.png', None], dtype=object) array(['images/anim-graph-tag-conditions-attributes.png', None], dtype=object) array(['images/anim-graph-tag-conditions-values.png', None], dtype=object)]
docs.aws.amazon.com
Applying Silhouette Parallax Occlusion Mapping (SPOM) To apply SPOM, complete the following procedure. To apply Silhouette Parallax Occlusion Mapping In Lumberyard Editor, click Tools, Material Editor. In the left tree, select the desired asset. In the right pane, under Shader Generation Params, select Parallax occlusion mapping with silhouette. Under Shader Params, adjust the values of the following parameters. Height bias – Moves the plane where the displacement is applied. This reduces gaps in meshes, and prevents objects from displacing other objects that are placed above them. Self shadow strength – Changes the strength of self-shadowing. A larger value imparts more shadowing Silhouette POM Displacement – Sets the SPOM depth. A larger value adds more depth. Under Texture Maps, enter the paths to the various textures.
https://docs.aws.amazon.com/lumberyard/latest/userguide/mat-maps-parallax-spom.html
2018-09-18T17:45:37
CC-MAIN-2018-39
1537267155634.45
[]
docs.aws.amazon.com
You can perform the following operations on talks and your comments: Starting from Talk 2.5.0, you can edit and delete comments; resolve, archive and restore archived discussions both in View and Edit modes. Editing Comments - Select the appropriate discussion. - Click Edit below your comment, make changes to it and click Save. Deleting Comments - Select the appropriate discussion. Click Delete below your comment and confirm its removal. When the discussion contains a single comment, and you decide to delete this comment, you will be prompted to resolve the entire discussion. Resolving Discussions - Select the appropriate discussion. - In the top right corner of the discussion, locate the Resolve button and click it. In the prompted form, choose Remove. After resolving, the discussion is removed from the page. All the page watchers will receive a notification that the discussion was resolved. Starting from Talk Add-on 1.7.10, the discussion can be found in the page history if you open the page version where the talk was not resolved. Starting from Talk Add-on 2.2.0, you are able to archive discussion without removing it (see details below on this page) Restoring Resolved Discussions If you have accidentally resolved the discussion or you need to restore the discussion resolved by another user, you should follow these steps: - Open the appropriate page. - Click Tools and select Page History. - In the page history, locate the page version which is prior to the version with the note 'Talk discussion resolved'. - Click the Restore this version link. The page with the talk discussion will be restored so you can continue this discussion. When you revert to the older page version, the page contents that have been added since then will be removed. You will get the page that contains information available before the talk had been resolved. Archiving and Restoring Archived Discussions To archive a discussion: - Click Resolve in the discussion cloud. Click Archive in the appeared window. - Click its icon on the page. - Click Restore in the discussion cloud and confirm restoring. The discussion is again visible on the sidebar by default.
https://docs.stiltsoft.com/display/public/Talk/Managing+Discussions
2018-09-18T18:19:05
CC-MAIN-2018-39
1537267155634.45
[]
docs.stiltsoft.com
GDI Objects GDI objects support only one handle per object. Handles to GDI objects are private to a process. That is, only the process that created the GDI object can use the object handle. There is a theoretical limit of 65,536 GDI handles per session. However, the maximum number of GDI handles that can be opened per session is usually lower, since it is affected by available memory. Windows 2000: There is a limit of 16,384 GDI handles per session. There is also a default per-process limit of GDI handles. To change this limit, set the following registry value: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\GDIProcessHandleQuota This value can be set to a number between 256 and 65,536. Windows 2000: This value can be set to a number between 256 and 16,384. Managing GDI Objects The following table lists the GDI objects, along with each object's creator and destroyer functions. The creator functions either create the object and an object handle or simply return the existing object handle. The destroyer functions remove the object from memory, which invalidates the object handle.
https://docs.microsoft.com/en-us/windows/desktop/SysInfo/gdi-objects
2018-09-18T17:51:44
CC-MAIN-2018-39
1537267155634.45
[]
docs.microsoft.com
Using the Preview Panel You can use the Preview panel in the Particle Editor to view the particle effects. This panel has the following features: Display content in the viewport Display the main menu Choose which emitter hierarchies to display in the viewport Reset the viewport to default settings Toggle time of day visibility to simulate the approximate time of day Toggle the display of the particles wireframe Play back the timeline Play, pause, and step forward Reset emitter playback Loop playback You can access the following Preview panel options in the main menu and context menu.
https://docs.aws.amazon.com/lumberyard/latest/userguide/particle-editor-preview-panel.html
2018-09-18T17:46:14
CC-MAIN-2018-39
1537267155634.45
[array(['images/particle-preview-panel.png', 'Particle panel in the Particle Editor.'], dtype=object)]
docs.aws.amazon.com
The attentive reader discovers that there is many missing sensor types; UV, Luminance, Dew point, Barometic pressure Rainrate, Raintotal, Winddirection, Windaverage and Windgust which is supported by the Tellstick devices. Support have not been implemented on the openhab side yet, contributions are welcome. Switchbased sensors workaround Some 433MHz magnetic & PIR sensors for example magnetic door sensors are detected as a regular switch things instead of a separate type. There is technically no way of distinguish them apart from regulur switch things. For using them as sensors only (not paired to a lamp) please consult the workaround in the channel section. Discovery Devices which is added to Telldus Core and Telldus Live can be discovered by openHAB. When you add this binding it will try to discover the Telldus Core Bridge. If it’s installed correct its devices will show up. If you want to use the Telldus Live its bridge, Telldus Live bridge need to be added manually. Binding Configuration For USB connected tellsticks only, eg. Basic and DUO/”). If you have trouble getting the telldus core library to work you can modify the library path using Thing Configuration Only the bridges require manual configuration. The devices and sensors should not be added by hand, let the discovery/inbox initially configure these. Dimmers & switches There is an option to override the resend count of the commands. Use the option repeat for that. Default resend count is 2. Bridges: 3) Local Rest API is a local API which would work similar to Telldus Live but local. Depending on your Tellstick model different API methods is available: Telldus Core Bridge Bridge tellstick:telldus-core:1 "Tellstick Duo" [resendInterval="200"] Optional: - libraryPath: The path to tellduscore.dll/so, - resendInterval: The interval between each transmission of command, default 100ms. Tell Channels Actuators ([dimmer]/[switch]) support the following channels: Sensors ([sensor]) support the following channels: Switch postUpdate(front_door_proxy, OPEN); end rule "proxy_front_door_off" when Item front_door_sensor changed to OFF then postUpdate(front_door_proxy, CLOSED); end Full Example tellstick.things Bridge tellstick:telldus-core:1 "Tellstick Duo" [resendInterval="200"] Bridge tellstick:telldus-live:2 "Tellstick ZWave" [refresh="10000", publicKey="XXXXXXXX", privateKey="YYYYYY", token= "ZZZZZZZZ", tokenSecret="UUUUUUUUUU"] Devices are preferable discovered automatically. Add them either with karaf: inbox approve <thingId> or in paperUI. The bridges can also be added with PaperUI. tellstick.items List available devices in karaf with things or get the channels in paperUI. Slider living_room_ceiling "Living room ceiling" <light> {channel="tellstick:dimmer:1:3:state"} Switch living_room_table "Living room table" <light> {channel="tellstick:switch:1:3:state"} Number inside_temperature "Inside temperature [%.1f °C]" <temperature> {channel="tellstick:sensor:1:47_temperaturehumidity_fineoffset:temperature"} Number inside_humidity "Inside humidity [%.1f RH]" <humidity> {channel="tellstick:sensor:1:47_temperaturehumidity_fineoffset:humidity"}
https://docs.openhab.org/v2.1/addons/bindings/tellstick/readme.html
2018-09-18T18:14:33
CC-MAIN-2018-39
1537267155634.45
[array(['doc/tellstick_duo.jpg', 'Tellstick Duo with device'], dtype=object)]
docs.openhab.org
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. a Vault in Amazon Glacier and Delete Vault in the Amazon Glacier Developer Guide. Namespace: Amazon.Glacier Assembly: AWSSDK.dll Version: (assembly version) Container for the necessary parameters to execute the DeleteVault service method. .NET Framework: Supported in: 4.5, 4.0, 3.5
https://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MGlacierIGlacierDeleteVaultDeleteVaultRequestNET35.html
2018-04-19T13:58:51
CC-MAIN-2018-17
1524125936969.10
[]
docs.aws.amazon.com
App: The Operating Systems screen lists all of the OS images that have already been imported..
https://docs.citrix.com/en-us/dna/7-14/configure/import-operating-systems.html
2018-04-19T13:57:42
CC-MAIN-2018-17
1524125936969.10
[]
docs.citrix.com
Interacting With Julia" for help. | | | | | | |/ _` | | | | |_| | | | (_| | | Version 0.6.0-dev.2493 (2017-01-31 18:53 UTC) _/ |\__'_|_|_|\__'_| | Commit c99e12c* (0 days old master) |__/ | x86_64-linux-gnu julia> To exit the interactive session, type ^D – the control key together with the d key on a blank line – or type quit() followed by the return or enter key. The REPL greets you with a banner and a julia> prompt. The different prompt modes The Julian mode" In Julia mode, the REPL supports something called prompt pasting. This activates when pasting text that starts with julia> into the REPL. In that case, only expressions starting with julia> are parsed, others are removed. This makes it is possible to paste a chunk of code that has been copied from a REPL session without having to scrub away prompts and outputs. This feature is enabled by default but can be disabled or enabled at will with Base.REPL.enable_promptpaste(::Bool). If it is enabled, you can try it out by pasting the code block above this paragraph straight into the REPL. This feature does not work on the standard Windows command prompt due to its limitation at detecting when a paste occurs. Help mode stringmime Cstring Cwstring RevString read?> AbstractString search: AbstractString AbstractSparseMatrix AbstractSparseVector AbstractSet No documentation found. Summary: abstract AbstractString <: Any Subtypes: Base.Test.GenericString DirectIndexString String Help mode can be exited by pressing backspace at the beginning of the line. Shell mode).x".uliarc.jl: import Base: LineEdit, REPL const mykeys = Dict{Any,Any}( # Up Arrow "\e[A" => (s,o...)->(LineEdit.edit_move_up(s) || LineEdit.history_prev(s, LineEdit.mode(s).hist)), # Down Arrow "\e[B" => (s,o...)->(LineEdit.edit_move_up(s) || LineEdit.history_next(s, LineEdit.mode(s).hist)) ) function customize_keys(repl) repl.interface = REPL.setup_interface(repl; extra_repl_keymap = mykeys) end atreplinit(customize_keys) Users should refer to base stringmime strip julia> Stri[TAB] StridedArray StridedMatrix StridedVecOrMat StridedVector String The tab key can also be used to substitute LaTeX math symbols with their Unicode equivalents, and get a list of LaTeX matches as well:× , see second line after ; where limit and keep are keyword arguments: julia> split("1 1 1", [TAB] split(str::AbstractString) in Base at strings/util.jl:278 split{T<:AbstractString}(str::T, splitter; limit, keep) in Base at strings/util.jl:254> Pkg.a[TAB] add available Fields for output from functions can also be completed: julia> split("","")[1].[TAB] endof offset string The completion of fields for output from functions uses type inference, and it can only suggest fields if the function is type stable. Customizing Colors The colors used by Julia and the REPL can be customized, as well. To change the color of the Julia prompt you can add something like the following to your .juliarculiarculiarc.jl file: ENV["JULIA_ERROR_COLOR"] = :magenta ENV["JULIA_WARN_COLOR"] = :yellow ENV["JULIA_INFO_COLOR"] = :cyan
https://docs.julialang.org/en/stable/manual/interacting-with-julia/
2018-04-19T13:42:38
CC-MAIN-2018-17
1524125936969.10
[]
docs.julialang.org
How to: Configure the cursor threshold Option (SQL Server Management Studio). To configure the cursor threshold option In Object Explorer, right-click a server and select Properties. Click the Advanced node. Under Miscellaneous, change the Cursor Threshold option to the desired value. Use the cursor threshold option to specify the number of rows in the cursor set at which cursor keysets are generated asynchronously. If you set cursor threshold to -1, all keysets are generated synchronously, which benefits small cursor sets. If you set cursor threshold to 0, all cursor keysets are generated asynchronously. See Also Concepts Help and Information Getting SQL Server 2005 Assistance
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms175817(v=sql.90)
2018-04-19T14:24:56
CC-MAIN-2018-17
1524125936969.10
[]
docs.microsoft.com
Usage Examples¶ DefectDojo is designed to make tracking testing engagements simple and intuitive. The Models page will help you understand the terminology we use below, so we recommend taking a look at that first. Create a new Product Type¶ The first step to using DefectDojo is to create a Product Type. Some examples might be “Mobile Apps” or “New York Office.” The idea is to make it easy to divide your Products into logical categories, based on your organizational structure, or just to divide internal and external applications. Select “View Product Types” from the “Products” dropdown in the main menu. Click the “New Product Type” button at the top. Enter a name for your new Product Type. Create a new Test Type¶ Test Types will help you differentiate the scope of your work. For instance, you might have a Performance Test Type, or a specific type of security testing that you regularly perform. Select “Test Types” from the “Engagements” dropdown in the main menu. Click the “New Test Type” button at the top. Enter a name for your new Test Type. Create a new Development Environment¶ Development Environments are for tracking distinct deployments of a particular Product. You might have one called “Local” if you deploy the Product on your own computer for testing, or “Staging” or “Production” for official deployments. Select “Development Environments” from the “Engagements” dropdown in the main menu. Click the “New Development Environment” button at the top. Enter a name for your new Development Environment. Create a new Engagement¶ Engagements are useful for tracking the time spent testing a Product. They are associated with a Product, a Testing Lead, and are comprised of one or more Tests that may have Findings associated with them. Engagements also show up on your calendar. Select “Engagements” from the “Engagements” dropdown in the main menu. Click the “New Engagement” button on the right. Enter the details of your Engagement. Adding Tests to an Engagement¶ From the Engagement creation page, you can add a new Test to the Engagement. You can also add a Test to the Engagement later from that Engagement’s main page. Tests are associated with a particular Test Type, a time, and an Environment. Enter the details of your Test. Adding Findings to a Test¶ Findings are the defects or interesting things that you want to keep track of when testing a Product during a Test/Engagement. Here, you can lay out the details of what went wrong, where you found it, what the impact is, and your proposed steps for mitigation. You can also reference CWEs, or add links to your own references. Templating findings allows you to create a version of a finding that you can then re-use over and over again, on any Engagement. Enter the details of your Finding, or click the “Add Finding from Template” button to use a templated Finding. From the “Add Finding Template” popup, you can select finding templates from the list, or use the search bar. Templates can be used across all Engagements. Define what kind of Finding this is. Is it a false positive? A duplicate? If you want to save this finding as a template, check the “Is template” box. Viewing an Engagement¶ Most of the work of an Engagement can be done from that Engagement’s main page. You can view the Test Strategy or Threat Model, modify the Engagement dates, view Tests and Findings, add Risk Acceptance, complete the security Check List, or close the Engagement. This page lets you do most of the common tasks that are associated with an Engagement. Tracking your Engagements in the calendar¶ The calendar can help you keep track of what Engagements your team is currently working on, or determine the time line for past Engagements. Select “Calendar” in the main menu. Here you can view the current engagements for the month, or go back in time. Tracking metrics for your Products¶. Select “All” or a Product Type from the “Metrics” drop-down in the main menu. Here you can see graphs of various metrics, with the ability to filter your results by time, Product Type, and severity. At the bottom of the Metrics page, you can see granular data about your work, such as a breakdown of the most severe bugs by Product, lists of open, accepted, and closed Findings, and trends for each week, as well as the age of all current open Findings.
http://defectdojo.readthedocs.io/en/latest/start-using.html
2018-04-19T13:11:19
CC-MAIN-2018-17
1524125936969.10
[array(['_images/getting_started_1.png', '_images/getting_started_1.png'], dtype=object) array(['_images/getting_started_2.png', '_images/getting_started_2.png'], dtype=object) array(['_images/getting_started_3.png', '_images/getting_started_3.png'], dtype=object) array(['_images/getting_started_4.png', '_images/getting_started_4.png'], dtype=object) array(['_images/getting_started_5.png', '_images/getting_started_5.png'], dtype=object) array(['_images/getting_started_6.png', '_images/getting_started_6.png'], dtype=object) array(['_images/getting_started_7.png', '_images/getting_started_7.png'], dtype=object) array(['_images/getting_started_8.png', '_images/getting_started_8.png'], dtype=object) array(['_images/getting_started_9.png', '_images/getting_started_9.png'], dtype=object) array(['_images/getting_started_10.png', '_images/getting_started_10.png'], dtype=object) array(['_images/getting_started_11.png', '_images/getting_started_11.png'], dtype=object) array(['_images/getting_started_12.png', '_images/getting_started_12.png'], dtype=object) array(['_images/getting_started_13.png', '_images/getting_started_13.png'], dtype=object) array(['_images/getting_started_14.png', '_images/getting_started_14.png'], dtype=object) array(['_images/getting_started_15.png', '_images/getting_started_15.png'], dtype=object) array(['_images/getting_started_16.png', '_images/getting_started_16.png'], dtype=object) array(['_images/getting_started_17.png', '_images/getting_started_17.png'], dtype=object) array(['_images/getting_started_18.png', '_images/getting_started_18.png'], dtype=object) array(['_images/getting_started_19.png', '_images/getting_started_19.png'], dtype=object) array(['_images/getting_started_20.png', '_images/getting_started_20.png'], dtype=object) array(['_images/getting_started_21.png', '_images/getting_started_21.png'], dtype=object) array(['_images/getting_started_22.png', '_images/getting_started_22.png'], dtype=object) ]
defectdojo.readthedocs.io
You can delete an OS Layer or Layer version, as long as it is not being used by another Layer, or Image Template. Deleting the Layer itself removes all versions, volumes, and resources from the App Layering appliance. You can delete an entire layer or a layer version if it is:
https://docs.citrix.com/zh-cn/citrix-app-layering/4/azure/manage0/manage/delete-os-layer.html
2018-04-19T13:56:42
CC-MAIN-2018-17
1524125936969.10
[]
docs.citrix.com
PipelineComponent.BufferTypeToDataRecordType Method Returns a managed data type based on an Integration Services data type. This API is not CLS-compliant. Namespace: Microsoft.SqlServer.Dts.Pipeline Assembly: Microsoft.SqlServer.PipelineHost (in Microsoft.SqlServer.PipelineHost.dll) Syntax 'Declaration <CLSCompliantAttribute(False)> _ Protected Shared Function BufferTypeToDataRecordType ( _ type As DataType _ ) As Type 'Usage Dim type As DataType Dim returnValue As Type returnValue = PipelineComponent.BufferTypeToDataRecordType(type) [CLSCompliantAttribute(false)] protected static Type BufferTypeToDataRecordType( DataType type ) [CLSCompliantAttribute(false)] protected: static Type^ BufferTypeToDataRecordType( DataType type ) [<CLSCompliantAttribute(false)>] static member BufferTypeToDataRecordType : type:DataType -> Type protected static function BufferTypeToDataRecordType( type : DataType ) : Type Parameters - type Type: Microsoft.SqlServer.Dts.Runtime.Wrapper.DataType A value from the DataType enumeration. Return Value Type: System.Type The managed type that maps to an Integration Services DataType. Remarks This helper function gets the managed type that corresponds to an Integration Services DataType. It is typically used in concert with the ConvertBufferDataTypeToFitManaged method. For more information, see Working with Data Types in the Data Flow. Warning Developers should use the data type mapping methods of the the PipelineComponent class with caution, and may want to code data type mapping methods of their own that are more suited to the unique needs of their custom components. The existing methods do not consider numeric precision or scale, or other properties closely related to the data type itself. Microsoft may modify or remove these methods, or modify the mappings that they perform, in a future version of Integration Services.
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms186938(v=sql.100)
2018-04-19T14:45:57
CC-MAIN-2018-17
1524125936969.10
[]
docs.microsoft.com
New in version 1.2: The attribute function was added in Twig 1.2. The attribute function can be used to access a "dynamic" attribute of a variable: {{ attribute(object, method) }} {{ attribute(object, method, arguments) }} {{ attribute(array, item) }} In addition, the defined test can check for the existence of a dynamic attribute: {{ attribute(object, method) is defined ? 'Method exists' : 'Method does not exist' }} Note The resolution algorithm is the same as the one used for the . notation, except that the item can be any valid expression. © 2009–2017 by the Twig Team Licensed under the three clause BSD license. The Twig logo is © 2010–2017 SensioLabs
http://docs.w3cub.com/twig~1/functions/attribute/
2018-04-19T13:40:12
CC-MAIN-2018-17
1524125936969.10
[]
docs.w3cub.com
- What's new in AppDNA 7.6 - System requirements - Install - Upgrade - Import - Analyze - Report views - Resolve - Manage - Prepare to import - Configure - Licenses - Administer - Migrate - SDK - Troubleshoot - Glossary This section provides information about how to configure a variety of features in AppDNA. Quick links to section topics:
https://docs.citrix.com/es-es/dna/7-6/dna-configure.html
2018-04-19T13:51:53
CC-MAIN-2018-17
1524125936969.10
[]
docs.citrix.com
- What's New? - Fixed Issues - Known Issues - What's New in Previous 11.0 Builds - Fixed Issues in Previous 11.0 Builds The enhancements and changes that were available in NetScaler 11.0 releases prior to Build 65.31. The build number provided below the issue description indicates the build in which this enhancement or change was provided. AAA-TM | AAA-TM/NetScaler Gateway | Admin Partitions | Application Firewall | Cache Redirection | CloudBridge Connector | Cluster | Command Line Interface | DNS | GSLB | HDX Insight | Load Balancing | NetScaler Gateway | NetScaler Insight Center | NetScaler SDX Appliance | Networking | Optimization | Platform | Policies | SSL | System | Telco OAuth/OpenID-Connect Mechanisms for AAA-TM The NetScaler AAA-TM feature now supports OAuth and OpenID-Connect mechanisms for authenticating and authorizing users to applications that are hosted on applications such as Google, Facebook, and Twitter. Note: OAuth on NetScaler is currently qualified only for Google applications. A major advantage is that user's information is not sent to the hosted applications and therefore the risk of identity theft is considerably reduced. In the NetScaler implementation, the application to be accessed is represented by the AAA-TM virtual server. So, to configure OAuth, an action must be configured and associated with a AAA-TM policy which is then associated with a AAA-TM virtual server. The configuration to define a OAuth action is as follows: > add authentication OAuthAction <name> -authorizationEndpoint <URL> -tokenEndpoint <URL> [-idtokenDecryptEndpoint <URL>] -clientID <string> -clientSecret <string> [-defaultAuthenticationGroup <string>] [-Attribute1 <string>] [-Attribute2 <string>] [-Attribute3 <string>] .... Note: - Refer to the man page for information on the parameters. - Attributes (1 to 16) are attributes that can be extracted in OAuth response. Currently, these are not evaluated. They are added for future reference. [From Build 55.23] [#491920] Using Certificates to Log on to a SAML IdP When used as a SAML IdP (identity provider), the NetScaler appliance now allows logon using certificates. [From Build 55.23] [#512125] Logging Errors in NetScaler Log Files The NetScaler appliance now stores AAA authentication logs. - Errors and warnings are logged in the /var/nslog/ns.log file - Information and debug level logs are logged in the /var/log/nsvpn.log file. [From Build 55.23] [#482228, 479557] Including Additional Attributes in SAML IdP Assertion When used as a SAML IdP (identity provider), the NetScaler appliance can now be configured to send 16 additional attributes in addition to the NameId attribute. These attributes must be extracted from the appropriate authentication server. For each of them, you can specify the name, the expression, the format, and a friendly name. These attributes must be specified in the SAML IdP profile as follows: From the CLI: > set authentication samlIdPProfile <name> [-Attribute1 <string> -Attribute1Expr <string> [-Attribute1FriendlyName <string>] [-Attribute1Format ( URI | Basic )]] [-Attribute2 <string> -Attribute2Expr <string> [-Attribute2FriendlyName <string>] [-Attribute2Format ( URI | Basic )]] For example, the following command adds the attribute "MyName": > add authentication samlIdPProfile ns-saml-idp -samlSPCertName nssp -samlIdPCertName nssp -assertionConsumerServiceURL "" -Attribute1 MyName -Attribute1Expr http.req.user.name -Attribute1FriendlyName Username -Attribute1Format URI From the GUI: Navigate to the screen where you configure the SAML IdP profile, and specify the additional attributes as required. [From Build 55.23] [#460680, 504703] Using the SHA256 Algorithm to Sign SAML IdP Assertions When used as a SAML IdP (identity provider), the NetScaler appliance can now be configured to digitally sign assertions by using the SHA256 algorithm. Additionally, you can configure the appliance to accept only digitally signed requests from the SAML SP (service provider). These configurations must be specified in the SAML IdP profile as follows: From the CLI: > set authentication samlIdPProfile <name> [-rejectUnsignedRequests ( ON | OFF )] [-signatureAlg ( RSA-SHA1 | RSA-SHA256 )] [-digestMethod ( SHA1 | SHA256 )] From the GUI: Navigate to the screen where you configure the SAML IdP profile, and specify the corresponding parameters. [From Build 55.23] [#474977] Supporting Encrypted Assertions on SAML SP When used as a SAML SP (service provider), the NetScaler appliance can now decrypt the encrypted tokens that it receives from the a SAML IdP. No configuration is required on the NetScaler. [From Build 55.23] [#291693] Using 401-based Authentication to Log on to a SAML IdP When used as a SAML IdP (identity provider), the NetScaler appliance now allows logon using the following 401-based authentication mechanisms: Negotiate, NTLM, and Certificate. [From Build 55.23] [#496725, 508689] The output of "show ns ip" now also includes the aaadnatIp address. [From Build 55.23] [#472912] Fallback from Certificate to Other Authentication Mechanisms When authentication is configured to be done by using certificates and then followed by LDAP or other authentication mechanisms, the following behavior holds true: - In previous releases: If certificate authentication fails (or was skipped), the other authentication mechanism is not processed. - From this release onwards: Even if certificate authentication is not done, the other authentication mechanism is processed. [From Build 55.23] [#550946] The configuration of a AAA-TM virtual server in the NetScaler GUI is simplified for ease of configuring the required authentication mechanism. [From Build 55.23] [#524386] Using Cookies to Track SAML Sessions In a deployment where a NetScaler appliance is configured as a SAML IdP (identity provider) for multiple SAML SPs (service provider), the appliance allows a user to access multiple SPs without explicitly authenticating every time.The appliance creates a session cookie for the first authentication and every subsequent request uses this cookie for authentication. [From Build 55.23] [#503882] Fallback to NTLM Authentication When the NetScaler appliance is configured for Negotiate authentication and sends a 401 Negotiate response to client, if client is not able to reach domain controller or is not domain joined, then it automatically falls back to NTLM authentication and the client starts NTLM handshake. The NetScaler appliance is able to verify the credentials presented as part of NTLM authentication. This feature allows user logins locally or remotely. [From Build 55.23] [#509829] Support for Redirect Binding for SAML SP When used as a SAML SP (service provider), in addition to POST bindings, the NetScaler appliance now supports redirect bindings. In redirect bindings, SAML assertions are in the URL, as against POST bindings where the assertions are in the POST body. Using the CLI: > add authentication samlAction <name> . . . [-samlBinding ( REDIRECT | POST )] [From Build 55.23] [#493220, 462777, 493224] The NetScaler appliance now supports the SiteMinder SAML SP. [From Build 55.23] [#488077] Encrypting SAML IdP Assertion When used as a SAML IdP (identity provider), the NetScaler appliance can now be configured to encrypt the assertions by using the public key of the SAML SP (service provider). Note: - Make sure the SAML SP certificate is specified. - For enhanced security, it is recommended that you encrypt assertions that contain sensitive information. This configuration must be specified on the SAML IdP profile as follows: On the CLI: > set authentication samlIdPProfile <name> [-encryptAssertion ( ON | OFF )] [-encryptionAlgorithm <encryptionAlgorithm>] On the GUI: Navigate to the screen where you configure the SAML IdP profile and specify the corresponding parameters. [From Build 55.23] [#482185] Multi-Factor (nFactor) Authentication The NetScaler appliance now supports a new approach to configuring multi-factor authentication. With this approach, you can configure any number of authentication factors. You can also customize the login form as required. In NetScaler terminology, this feature is called "nFactor Authentication." For more information, see. [From Build 62.10] [#482250, 451913, 549966] Configuring Validity for SAML Assertions A NetScaler appliance can be configured to provide SAML authentication to an application by playing the role of the SAML Identity Provider (IdP) and/or the SAML Service Provider (SP). If the system time on NetScaler SAML IdP and the peer SAML SP is not in sync, the messages might get invalidated by either party. To avoid such cases, you can now configure the time duration for which the assertions will be valid. This duration, called the "skew time," specifies the number of minutes for which the message should be accepted. The skew time can be configured on the SAML SP and the SAML IdP. - When the NetScaler is used as a SAML IdP, configure the skew time on the SAML IdP profile, to accept incoming requests from SP and to send assertions. --- Using the CLI: > set samlidpProfile <name> -skewTime 30 --- Using the GUI: Navigate to Security > AAA - Application Traffic > Policies > Authentication > Advanced Policies > Policy, and in the required SAML IdP policy, configure the skew time for the SAML IdP profile. - When the NetScaler is used as a SAML SP, configure the skew time on the SAML action. --- Using the CLI: > set samlaction <name> -skewTime 30 --- Using the GUI: Navigate to Security > AAA - Application Traffic > Policies > Authentication > Advanced Policies > Policy, and in the required SAML SP policy, configure the skew time for the SAML action. [From Build 64.34] [#582266] Increased Length of SAML Attributes for Extraction In the SAML Service Provider (SP) module, names of the attributes that can be extracted from an incoming SAML assertion can be up to 127 bytes long. The previous limit was 63 bytes. [From Build 64.34] [#581644] When used as a SAML SP, the NetScaler appliance can now extract multi-valued attributes from a SAML assertion. These attributes are sent is nested XML tags such as: <saml:Attribute FriendlyName="groups" Name="groups" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified?> <saml:AttributeValue xmlns:xsi="" xsi:type="xs:string?> <AttributeValue>grp1</AttributeValue> <AttributeValue>grp2</AttributeValue> <AttributeValue>grp3</AttributeValue> </saml:AttributeValue> </saml:Attribute> [From Build 64.34] [#577853] SAML IdP Validating the SAML SP When used as a SAML Identity Provider (IdP), the NetScaler appliance can be configured to serve assertions only to SAML Service Providers (SP) that are pre-configured on or trusted by the IdP. For this configuration, the SAML IdP must have the service provider ID (or issuer name) of the relevant SAML SPs. Using the CLI: > set samlidpProfile <name> -serviceProviderID <string> Using the GUI: Navigate to Security > AAA - Application Traffic > Policies > Authentication > Advanced Policies > Policy, and in the required SAML IdP policy, configure the SP ID for the SAML IdP profile. [From Build 64.34] [#582265] When used as a SAML IdP, the NetScaler appliance can now send multi-valued attributes in a SAML assertion. [From Build 64.34] [#588125] Support for Redirect Binding for SAML IdP When used as a SAML Identity Provider (IdP), the NetScaler appliance now supports redirect bindings (in addition to POST binding). Using the CLI: > set authentication samlIdPProfile <name> -samlBinding REDIRECT Using the GUI: Navigate to Security > AAA - Application Traffic > Policies > Authentication > Advanced Policies > Policy, and in the required SAML IdP policy, configure the SAML binding as "Redirect" for the SAML IdP profile. [From Build 64.34] [#564947, 590768] Scriptable monitors can now be configured on the admin partitions that are available on a NetScaler appliance. [From Build 55.23] [#535494] Getting NetScaler Trace for Specific Partitions You can now generate the NetScaler trace for a specific admin partition. To do so, you must access that admin partition and run the "nstrace" operation. The trace files for the admin partition will be stored in the /var/partitions/<partitionName>/nstrace/ directory. [From Build 55.23] [#496937, 515294] Setting L2 and L3 parameters in Admin Partitions On a partitioned NetScaler appliance, the scope of updating the L2 and L3 parameters is as follows: - For L2 parameters that are set by using the "set L2Param" command, the following parameters can be updated only from the default partition, and their values are applicable to all the admin partitions: maxBridgeCollision, bdgSetting, garpOnVridIntf, garpReply, proxyArp, resetInterfaceOnHAfailover, and skip_proxying_bsd_traffic. The other L2 parameters can be updated in specific admin partitions, and their values are local to those partitions. - For L3 parameters that are set by using the "set L3Param" command, all parameters can be updated in specific admin partitions, and their values are local to those partitions. Similarly, the values that are updated in the default partition are applicable only to the default partition. [From Build 55.23] [#513564] Getting Web Logs for Specific Partitions/Users Using the NetScaler Web Logging (NSWL) client, the NetScaler can now retrieve the web logs for all the partitions with which the logged in user is associated. To view the partition for each log entry, customize the log format to include the %P option. You can then filter the logs to view the logs for a specific partition. [From Build 55.23] [#534986] Supporting Dynamic Routing in Admin Partitions While dynamic routing (OSPF, RIP, BGP, ISIS, BGP+) is by default enabled on the default partition, in an admin partition, it must be enabled by using the following command: > set L3Param -dynamicRouting ENABLED Note: A maximum of 63 partitions can run dynamic routing (62 admin partitions and 1 default partition). [From Build 55.23] [#514848] Configuring Integrated Caching on a Partitioned NetScaler Integrated caching (IC) can now be configured for admin partitions. After defining the IC memory on the default partition, the superuser NetScaler. [From Build 55.23] [#481444, 484618] Partition Specific Load Balancing Parameters When you update load balancing parameters in an admin partition, the updates now apply to that partition only. You can have different load balancing parameter settings in different partitions. Note: - In previous releases, any updates to these parameters were applied across all partitions, regardless of the partition in which the changes were made. - These parameters are set in the CLI by using the "set lb parameter" command or in the GUI by navigating to Traffic Management > Load Balancing. [From Build 62.10] [#563004] The following load balancing features can now be configured in admin partitions: - DBS autoscale - Stateless connection mirroring - RDP - Radius - Graceful shutdown For the detailed list of NetScaler feature support on admin partitions, see. [From Build 64.34] [#588406] The NetScaler appliance now supports FTP load balancing in admin partitions. [From Build 64.34] [#568811]] [#506157]] [#520048] The application firewall is fully supported in striped, partially striped, or spotted configurations. The two main advantages of striped and partially striped virtual server support in cluster configurations are the following: - Session failover support: Striped and partially striped virtual server configurations support session failover. The advanced application firewall security features, such as Start URL Closure and the Form Field Consistency check, maintain and use sessions during transaction processing. In ordinary high availability configurations, or in spotted cluster configurations, when the node that is processing the application application firewall's ability to handle multiple simultaneous requests, thereby improving the overall performance. Security checks and signature protections can be deployed without the need for any additional cluster-specific application firewall configuration. You just do the usual application firewall configuration on the configuration coordinator (CCO) node for propagation to all the nodes. Cluster details are available at. [From Build 55.23] [#408831, 403780] Geolocation, which identifies the geographic location from which requests originate, can help you configure the application firewall for the optimal level of security. For example, if an excessively large number of requests are received from a specific area, it is easy to determine whether they are being sent by users or a rogue machine. The application firewall offers you the convenience of using the built-in NetScaler database or any other geolocation based database to identify the source of origin of coordinated attacks launched from a country. This information can be quite useful for enforcing the optimal level of security for your application to block malicious requests originating from a specific geographical region. Geolocation logging uses the Common Event Format (CEF). To use Geolocation Logging 1. Enable CEFLogging and GeoLocationLogging. >set appfw settings GeoLocationLogging ON CEFLogging ON 2. Specify the database >add locationfile /var/netscaler/inbuilt_db/Citrix_Netscaler_InBuilt_GeoIP_DB.csv or add locationfile <path to database file> [From Build 55.23] [#483703] The NetScaler application firewall module offers data leak prevention and supports credit card protection. It can examines the credit card numbers in the response and takes the specified action if a match is found. In some scenarios, it might be desirable to exclude a specific set of numbers from the credit card security check inspection. For example, server responses for some internet applications might include a string of digits that is not a credit card number but matches the pattern of a credit card number. These responses can trigger false positives and therefore get blocked by the application firewall's Credit Card security check. The application firewall now offers the ability to learn and deploy relaxations for the credit card numbers. The credit card relaxation rule provides the flexibility to exclude a specific string of numbers from the safe commerce check without compromising credit card security. These numbers are not examined in the responses even if the credit card check is ON. Examples of CLI Commands: 1. Bind the credit card number to profile: bind appfw profile <profile-name> -creditCardNumber <any number/regex> "<url>" 2. Unbind credit card number from profile: unbind appfw profile <profile-name> -creditCardNumber <credit card number> "<url>" 3. Log: Enable Logging of credit card Numbers add appfw profile <profilename> - doSecureCreditCardLogging <ON/OFF> set appfw profile <profilename> - doSecureCreditCardLogging <ON/OFF> 4. Learn: show appfw learningdata <profilename> creditCardNumber rm appfw learningdata <profilename> -creditcardNumber <credit card number> "<url>" export appfw learningdata <profilename> creditCardNumber [From Build 55.23] [#383298] The NetScaler application firewall offers SQL/XSS security check protections to detect and block possible attacks against the applications. You now have much tighter security control when configuring SQL/XSS protections. Instead of deploying relaxation rules that completely bypass the security check inspection for a field, you now have an option to relax a specific subset of violation patterns. You can continue to inspect the relaxed field in the incoming requests to detect and block the rest of the SQL/XSS violation patterns. The commands used in relaxations and learning now have optional parameters for value type and value expression. You can specify whether the value expression is a regular expression or a literal string. Command Line Interface: bind appfw profile <name> -SQLInjection <String> [isNameRegex (REGEX | NOTREGEX)] <formActionURL> [-location <location>] [-valueType (Keyword| SpecialString|Wildchar) [<valueExpression>][-isValueRegex (REGEX | NOTREGEX) ]] unbind appfw profile <name> -SQLInjection <String><formActionURL> [-location <location>][-valueType (Keyword|SpecialString|Wildchar) [<valueExpression>]] bind appfw profile <name> -crossSiteScripting <String> [isN>]] [From Build 55.23] [#450324, 483683] The field format rules specify the inputs that are allowed in the target form fields. You can also limit the minimum and the maximum allowed length for the inputs. The application firewall learning engine monitors the traffic and provides field format recommendations based on the observed values. If the initial field format learned rules are based on a small sample of data, a few non typical values might possibly result in a recommendation that is too lenient for the target field. Updates to the application firewall have now decoupled violations and learning for the field formats. The firewall learns the field formats regardless of the violations. The learning engine monitors and evaluates all the incoming new data points to recommend new rules. This allows fine tuning the configuration to specify optimal input formats with adequate min/max range values.. [From Build 55.23] [#450326, 483677, 513927] Support for default syntax expressions You can now use default syntax expressions in cache redirection policies. The NetScaler appliance provides built-in cache redirection policies based on default syntax expressions, or you can create custom cache redirection policies to handle typical cache requests. In addition to the same types of evaluations done by classic cache redirection policies, the default syntax policies enable you to analyze more data (for example, the body of an HTTP request) and to configure more operations in the policy rule (for example, directing requests to either cache or origin server). [From Build 55.23] [#490297, 495915, 536986, 536992, 537010, 537014, 538269] Support for IPv6 Traffic through IPV4 Tunnels The NetScaler appliance now supports transferring IPv6 traffic through a IPV4 GRE tunnel. This feature can be used for enabling communication between Isolated IPv6 networks without upgrading the IPv4 infrastructure between them. For configuring this feature, you associate. [From Build 55.23] [#497414] Disabling Steering on the Cluster Backplane By default, a NetScaler cluster steers traffic over the cluster backplane, from the flow receiver node to the flow processor node. You can disable steering so that the process becomes local to the flow receiver and thereby ensure that the flow receiver also becomes the flow processor. Such a configuration can come in handy when you have a high latency link. Note: This configuration is applicable only for striped virtual servers. Steering can be disabled at the global NetScaler level or at the individual virtual server level. The global configuration takes precedence over the virtual server setting. - At the global level, steering can be disabled for all striped virtual servers. It is configured at cluster instance level. Traffic meant for any striped virtual server will not be steered on cluster backplane. The command is: > add cluster instance <clId> -processLocal ENABLED - At a virtual server level, you can disable steering for a specific striped virtual server. It is configured on a striped virtual server. Traffic meant for that virtual server will not be steered on cluster backplane. The command is: > add lb vserver <name> <serviceType> -processLocal ENABLED For more information, see. [From Build 55.23] [#539136] Link Redundancy based on Minimum Throughput In a dynamic cluster link aggregation (LA) deployment that has link redundancy enabled, you can configure the cluster to select the partner channel or interface on the basis of its throughput. To do this, configure a threshold throughput on the channel or interface as follows: > set channel CLA/1 -linkRedundancy ON -lrMinThroughput <positive_integer> The throughput of the partner channels is checked against the configured threshold throughput. The partner channel that satisfies the threshold throughput is selected in FIFO manner. If none of the partner channel meets the threshold, or if threshold throughput is not configured, the partner channel with the maximum number of links is selected. [From Build 55.23] [#508993] Nodegroup for Datacenter Redundancy A cluster nodegroup can now be configured to provide datacenter redundancy. In this use case, nodegroups are created by logically grouping the cluster nodes. You must create active and spare nodegroups. When the active nodegroup goes down, the spare nodegroup which has the highest priority (the lower priority number) is made active and it starts serving traffic. For more information, see. [From Build 55.23] [#495019] Routing in a L3 Cluster In a L3 cluster, different nodegroups can have different VLANs and subnets associated with them. This can result in a VLAN getting exposed only in some nodes. Therefore, you can now configure dynamic routing on a VLAN to expose the VLAN to ZebOS even when there are no IP addresses with dynamic routing that are bound to it. The command to configure this is: > add/set vlan <id> -dynamicRouting (ENABLED | DISABLED) Note: - This option is also available for VXLAN and BridgeGroups. - This configuration can also be used for L2 clusters. [From Build 55.23] [#531868] BridgeGroups are now supported in a NetScaler cluster deployment. [From Build 55.23] [#494991] Routing on Striped SNIP addresses You can now run dynamic routing on a striped SNIP address in a NetScaler cluster. The routes advertised by the cluster have the striped SNIP as the next hop. There is just one adjacency with the cluster. Internally, the cluster picks one of the active nodes as the routing leader. When the current routing leader goes down, the routing ownership moves to an active node. Note: - Striped SNIP addresses are useful mainly for cluster LA (link aggregation) deployments. They can also be used for ECMP, but the multipath routing functionality is unavailable. - Striped SNIP addresses can also be used in asymmetrical topologies. - Routing on striped SNIPs and routing on spotted SNIPs can coexist in a cluster. To specify leader node configurations, in the VTYSH shell, use the "owner-node leader" command. [From Build 55.23] [#329439] Reduce Backplane Steering for Spotted and Partially-striped Virtual Serves when Using ECMP With the Equal Cost Multiple Path (ECMP) mechanism, virtual server IP addresses are advertised by all active cluster nodes. This means that traffic can be received by any cluster node, which then steers the traffic to the node that must process the traffic. While there are no hassles in this approach, there can be a lot of redundant steering in case of spotted and partially striped virtual servers. Therefore, from NetScaler 11 onwards, spotted and partially striped virtual server IP addresses are advertised only by the owner nodes. This reduces the redundant steering. You can override this default behavior, by entering the following command in the VTYSH shell: ns(config)# ns spotted-vip-adv all-nodes [From Build 55.23] [#317706] Cluster to Include Nodes from Different Networks (L3 Cluster) You can now create a cluster that includes nodes from different networks. To configure a cluster over L3, you must add the nodes of different networks to different nodegroups. For more information, see. You can transition an existing L2 cluster to an L3 cluster. For instructions, see. [From Build 55.23] [#374289, 317257] Web Interface on NetScaler (WIonNS) Support on a Cluster WIonNS can now be configured on a NetScaler cluster deployment. To use WIonNS on a cluster, you must do the following: 1. Make sure that the Java package and the WI package are installed in the same directory on all the cluster nodes. 2. Create a load balancing virtual server that has persistency configured. 3. Create services with IP addresses as the NSIP address of each of the cluster nodes that you want to serve WI traffic. 4. Bind the services to the load balancing virtual server. Note: If you are using WIonNS over a VPN connection, make sure that the load balancing virtual server is set as WIHOME. [From Build 62.10] [#498295, 489463] FTP Load Balancing Support on a Cluster FTP load balancing is now supported in a NetScaler cluster deployment. [From Build 62.10] [#513612] Cluster versioning When you are upgrading a cluster to NetScaler 11.0 build 64.x from an earlier NetScaler 11.0 build, cluster configuration propagation is disabled. Traditionally, this issue occurred only during an upgrade of a cluster to a different NetScaler version (for example, from 10.5 to 11.0). This exception arises because the cluster version in build 64.x is different from the one in previous NetScaler 11.0 builds. Note: Normally, the cluster version matches the NetScaler version. Configuration propagation remains disabled until all the cluster nodes are upgraded to Build 64.x. [From Build 64.34] [#591877] Reducing the Minimum Value for the Dead Interval You can now set the dead interval for a cluster instance to a minimum value of 1 second. Note: If the dead interval value is less than 3 seconds, set the hello interval parameter to 200 ms. [From Build 64.34] [#573218] The NetScaler administrator can now specify the maximum number of concurrent sessions a user can log on to the CLI. Although logons to the configuration utility do not count against the limit, all logon attempts are denied after the limit is reached. For example, if the maximum number of concurrent sessions is set to 20, a user can log on to the CLI 19 times and can log on to the configuration utility any number of times. Once the user logs on to the CLI for the 20th time, he or she can no longer log on to the CLI or the configuration utility. Any logon attempt then results in a system error message. [From Build 64.34] [#491778] Rewrite and responder support for DNS The rewrite and responder features now support DNS. You can now configure rewrite and responder functionalities to modify DNS requests and responses as you would for HTTP or TCP requests and responses. [From Build 55.23] [#405769] Support for DNS Logging You can now configure a NetScaler appliance to log DNS requests and responses. The logs are in SYSLOG format. You can use these logs to: - Audit the DNS responses to the client - Audit DNS clients - Detect and prevent DNS attacks - Troubleshoot [From Build 55.23] [#419632, 561291] Enable or disable negative caching of DNS records The NetScaler appliance supports caching of negative responses for a domain. You can enable or disable negative caching from the command line, by setting cacheNegativeResponses with the set dns parameter command, or in the configuration utility, in the Configure DNS Parameters dialog box. Note: You can enable or disable negative caching independent of global caching. By default, negative caching is enabled. [From Build 55.23] [#391254] Support for binding a single Virtual Server as a backup for multiple GSLB Virtual servers In a GSLB site deployment, you can now bind a single virtual server as a backup virtual server for multiple GSLB virtual servers in the deployment. [From Build 55.23] [#373061] GSLB Service Selection using Content Switching Description: You can now configure a content switching (CS) policy to customize a GSLB deployment so that you can: * Restrict the selection of a GSLB service to a subset of GSLB services bound to a GSLB virtual server for the given domain. * Apply different Load Balancing methods on the different subsets of GSLB services in the deployment. * Apply spillover policies on a subset of GSLB services, and you can have a backup for a subset of GSLB services. * Configure a subset of GSLB services to serve a specific type of content. * Define a subset GSLB services with different priorities, and define the order in which the services in the subset are applied to a request. For more information, see Configuring GSLB Service Selection Using Content Switching. [From Build 63.16] [#503588] NetScaler GSLB deployments support NAPTR records In GSLB deployments, the NetScaler appliance now supports DNS queries with NAPTR records. You can now configure a NetScaler appliance to receive DNS queries with NAPTR records from clients (for example, Mobile Management Entity (MME))and respond with the list of services configured for a domain. Also, the NetScaler appliance monitors the health of the services and in the response it provides only the list of services that are up. [From Build 64.34] [#468647] Ability to specify GSLB Site IP address as source IP address for an RPC node You can now configure the NetScaler appliance to use GSLB Site IP address as the source IP address for an RPC node. [From Build 64.34] [#531395] HDX Insight now supports displaying of Appflow records from Netscaler cluster. [From Build 62.10] [#525758] Support for Secure LDAP Monitor You can now monitor LDAP services over SSL. To monitor the LDAP services over SSL, use the built-in LDAP monitor or create a user monitor and enable the "secure" option. [From Build 55.23] [#418061, 556530] IPv6 Support for HTTP based User Monitors You can now use IPv6 addresses in the following monitors: - USER - SMTP - NNTP - LDAP - SNMP - POP3 - FTP_EXTENDED - STOREFRONT - APPC - CITRIX_WI_EXTENDED Note: The monitor for MySQL does not support IPv6 addresses. [From Build 55.23] [#510111] In the configuration utility, navigate to Traffic Management > Load Balancing > Change Load Balancing Parameters and select Use Secured Persistence Cookie and Cookie Passphrase and enter a passphrase. [From Build 55.23] [#347108, 323325, 348588] New Trap for Spillover If you have configured spillover on a virtual server and also configured a trap listener on the appliance, an SNMP trap is now sent to the trap listener when the virtual server experiences spillover. The trap message displays the name of the virtual server that experienced the spillover, the spillover method, the spillover threshold, and the current spillover value. If the spillover is policy based, the rule causing it appears in the Spillover Threshold field. If the virtual server is DOWN or disabled, the status message "vserver not up" appears in the trap message. [From Build 55.23] [#486268, 475400] Setting the Maintenance State for your Server with Minimal Interruption You can now set the maintenance state for your server with minimal interruption and without changing any configuration on the NetScaler appliance. In the maintenance state, the server continues to accept persistent client connections while new connections are load balanced among the active servers. On the NetScaler appliance, configure a transition out of service (TROFS)-enabled monitor and bind it to a service representing the server. Specify a trofsCode or trofsString in the monitor. Upon receipt of a matching code or string from the server in response to a monitor probe, the appliance places the service in the TROFS state. During this time, it continues to honor persistent client connections. To avoid disrupting established sessions, you can place a service in the Note: This enhancement is not applicable to GSLB services. From release 11, if you bind only one monitor to a service, and the monitor is a TROFS-enabled monitor,. Important! - You can bind multiple monitors to a service, but only one monitor must be TROFS-enabled. - You can convert a TROFS-enabled monitor to a monitor that is not TROFS-enabled, but not vice versa. [From Build 55.23] [#408103] Automatic Restart of the Internal Dispatcher In earlier releases, if the internal dispatcher failed, the services that used scriptable monitors also went down and the appliance had to be restarted. From release 11, if the internal dispatcher fails, the pitboss process restarts it. As a result, you no longer have to restart the appliance. For information about user monitors, see. [From Build 55.23] [#368128] The following global timeouts has been introduced for TCP sessions on a NetScaler appliance related to RNAT rules, forwarding sessions, or load balancing configuration of type ANY: * Any TCP Client. Global idle timeout, in seconds, for TCP client connections. Client timeout set for an entity overrides the global timeout setting. * Any TCP Server. Global idle timeout, in seconds, for TCP server connections. Server timeout set for an entity overrides the global timeout setting. These timeout can be set either from the NetScaler command line (set ns timeout command) or from the configuration utility (System > Settings > Change Timeout Values page). Note: For applying these timeouts to a virtual server or service of type ANY, set these timeouts before adding the virtual server or the service. [From Build 55.23] [#507701] If you configure cookie persistence and custom cookie on a virtual server, and later change the name or IP address of the virtual server, persistence is not honored. [From Build 55.23] [#524079, 559022] With the following new OID, you can use SNMP to learn the current number of server connections per service. svcCurSrvrConnections, 1.3.6.1.4.1.5951.4.1.2.1.1.59 [From Build 64.34] [#548470] Retaining the Original State of a Service Group Member after Disabling and Enabling a Virtual Server errors. [From Build 64.34] [#493692] With the following new OID, you can use SNMP to learn the effective state of a virtual server. vsvrCurEffState, 1.3.6.1.4.1.5951.4.1.3.1.1.75 [From Build 64.34] [#538499] Support for Unauthenticated Stores In earlier releases, the StoreFront monitor tried to authenticate anonymous stores. As a result, a service could be marked as DOWN and you could not launch XenApp or XenDesktop by using the URL of the load balancing virtual server. The probe order has changed. The monitor now determines the state of the StoreFront store by successively probing the account service, the discovery document, and then the authentication service, and skips authentication for anonymous stores. [From Build 64.34] [#575549] Striped Cluster for NetScaler Gateway in ICA Proxy Mode This feature allows administrators to deploy NetScaler Gateway with XenApp and XenDesktop in a striped style cluster where all nodes in the cluster serve traffic. Administrators can use existing Gateway configurations and scale seamlessly in a cluster deployment without having to restrict the VPN configuration to a single node. Note that this feature is limited to ICA Proxy basic mode virtual servers and does not support SmartAccess. [From Build 55.23] [#490329, 503332]. [From Build 55.23] [#444387] SharePoint 2013 and Outlook Web Access 2013 are supported with clientless VPN access mode. [From Build 55.23] [#494995] NetScaler now uses SPNEGO encapsulation on Kerberos tickets that are sent to backend web applications and servers. [From Build 55.23] [#404899] The Portal Customization options have been expanded to allow end-to-end customization of the VPN user portal. Administrators can apply themes to their VPN portal design or use them. [From Build 55.23] [#489467]. [From Build 55.23] [#406312] speeds up single sign-on for native Receiver users. In the NetScaler configuration utility, the WebFront feature is on the Configuration tab at System --> WebFront. [From Build 55.23] [#497619] NetScaler Gateway now has a full Linux VPN client plug-in. The plug-in is supported on Ubuntu 12.04 and 14.04 distributions. [From Build 55.23] [#495767] The Unified Gateway Wizard for XenDesktop/Xenapp Application creates wrong configurations with the Storefront option. The client launches the Java plug-in instead of Win/Mac/iOS/Android plug-in. [From Build 55.23] [#576275] The WebFront enhancement supports the transparent SSO feature when accessed from the Citrix Receiver. WebFront optimizes packet flow and improves performance for users accessing StoreFront through Gateway using Citrix Receivers. Data transferred over WAN is reduced by 41%. [From Build 55.23] [#497625] NetScaler Gateway now has an Android client plug-in that supports full VPN capabilities. The plug-in supports Android versions 4.1 and later. [From Build 55.23] [#520483] The SmartControl feature allows administrators to apply access policies for various XenApp and XenDesktop attributes through NetScaler Gateway without the need for identical policy duplication on the XenApp or XenDesktop servers. [From Build 55.23] [#525947] Automatic session timeout can be enabled for ICA connections as a VPN parameter. Enabling this parameter forces active ICA connections to time out when a VPN session closes. [From Build 55.23] [#358672, 527884] If a StoreFront application is created using the Unified Gateway Wizard, the configuration of the following session actions need to be updated. If a configured wihome ends with "web", then update the wihome. For example, if wihome is "/citrix/storeweb set vpnsessionAction AC_WB_<UG_IPADDRESS> -wihome ?/citrix/storeweb? set vpnsessionAction AC_OS_<UG_IPADDRESS> -wihome ?/citrix/storeweb? Also, initiate the following commands before to update the "client choices" and "transparent" interception options. set vpnsessionAction AC_WB_<UG_IPADDRESS> -clientchoices ON ?transparentinterception ON set vpnsessionAction AC_OS_<UG_IPADDRESS> -clientchoices OFF ?transparentinterception OFF These steps must be manually performed using the CLI or the NeScaler configuration utility. 1. Using the configuration utility, navigate to "NetScaler Gateway -> Policies -> Session -> Session Profiles and edit the relevant profile. 2. Navigate to "Published Application Tab" and update the "Web Interface Address" field ( this corresponds to the wihome setting mentioned above ). 3. Go to the "Client Experience" tab and then click to the "General" tab and update client choices as mentioned above for the corresponding actions. 4. On the "Client Experience" tab set the "plugin type" field as "Windows/ MAC OS X" for the relevant profiles as mentioned above. [From Build 55.23] [#576101, 576304] Support for Common Gateway Protocol (CGP) over WebSockets NetScaler Gateway virtual servers have improved intelligence for handling CGP traffic destined for the common CGP port, 2598, over WebSockets. This enhancement allows Receiver for HTML5 user sessions through NetScaler Gateway to support Session Reliability. [From Build 55.23] [#519899] NetScaler Gateway now has a full iOS VPN client plug-in. The plug-in is supported on iOS 7 and later releases. [From Build 55.23] [#587571] This enhancement adds support to disable Autoupdate for NetScaler Gateway Endpoint Analysis and VPN plug-in. [From Build 55.23] [#236620] NetScaler with Unified Gateway. [From Build 55.23] [#519875] NetScaler Gateway now supports Windows 10. [From Build 62.10] [#579428] NetScaler Gateway now supports the new UDP-based Framehawk virtual channel. [From Build 62.10] [#587560] For Linux Clients, support for binding Intranet IPv6 to VPN Virtual Server is introduced. IPv6 binding with VPN Global support is also introduced with the same. [From Build 63.16] [#556101] The NetScaler appliance was enhanced so the Portal Theme can be added using the NetScaler Gateway Wizard. [From Build 63.16] [#591427] DTLS-TURN support for Unified Gateway was added. For Unified Gateway, the CS virtual server is the end-point that users connect. The VPN virtual server has to be bound as the target virtual server. This functionality enables support for secure external access using Framehawk display channel with XenApp and XenDesktop. [From Build 64.34] [#593568] The VPN plugin was enhanced to acknowledge the intranet application protocol flag. ICMP blocking can be achieved by configuring separate intranet applications for UDP and TCP. [From Build 64.34] [#589202] NetScaler Gateway provides an RDP enforcement feature. NetScaler administrators can disable RDP capabilities through the NetScaler Gateway configuration. The following are configurable as part of the RDP client profile. - Redirection of ClipBoard - Redirection of Printers - Redirection of Disk Drives [From Build 64.34] [#581578] Support for EPA verbose logging was added. [From Build 64.34] [#590932, 591183] The Dual-Hop enhancement enables next-hop requests to be distributed among several available NetScalers. The Dual-Hop feature expands the capability to load balance across any next-hop server, so that if one next-hop server is unavailable, connections can be re-established using another available server. This enhancement supports the below configurations: - Create a LB Vserver on DMZ NetScaler for the next-hop targets, and allow this LB to be added as a Next-Hop Server. - Specify a next-hop server as an FQDN name so a GSLB solution could be used [From Build 64.34] [#524991] Unified Gateway now supports the same cluster functionality as supported by the NetScaler Gateway. Earlier Unified Gateway did not support a cluster environment. [From Build 64.34] [#593064] The default value for the VPN parameter "transparentInterception" is now set to OFF. You must set it to ON when full tunnel access is needed. For more information, see. [From Build 64.34] [#560267, 564572] You can now configure NetScaler Insight Center to display the reports in your local time or GMT time. [From Build 55.23] [#491073] Exporting Reports You can now save the Web Insight reports or HDX Insight reports in PDF, JPEG, PNG , or CSV format on your local computer. You can also schedule the export of the reports to specified email addresses at various intervals. For more information, see. [From Build 55.23] [#320860] You can now identify the root cause of a terminated ICA session by viewing the session termination reason on the HDX Insight node. Along with the termination reason, it also displays the session TCP metrics such as ICA RTT and WAN Latency. [From Build 55.23] [#488279] You can now configure a DNS server when you set up NetScaler Insight Center. Configuring a DNS server helps resolve the host name of a server into its IP address. For example, while creating an email server, you now have an option to specify the server name of the server rather than the IP address. [From Build 55.23] [#514612] Insight Deployment Management You can now improve the processing power of and increase storage space in your NetScaler Insight Center deployment by adding agents, connectors, and databases. An agent processes HTTP traffic and sends the data to the connectors that distribute this data across databases. You can add multiple agents, connectors, and databases to scale your deployment. In this deployment, you can also the decide the number of resources you have to allocate and determine the elements you need in the database architecture, on the basis of the number of HTTP requests per second, number of ICA sessions, and number of active WAN connections. [From Build 55.23] [#404919] You can configure NetScaler Insight Center to display the geo maps for a particular geographical location or LAN by specifying the private IP range (start and end IP address) for the location. [From Build 55.23] [#502478] The WAN Insight feature of NetScaler Insight Center gives CloudBridge administrators an easy way to monitor the accelerated and unaccelarted WAN traffic that flows through CloudBridge datacenter and CloudBridge branch appliances, and it provides end-to-end visibility that includes client-specific data, application-specific data, and branch- specific data. With the ability to identify and monitor all the applications, clients, and branches on the network, you can effectively deal with the issues that degrade performance. [From Build 55.23] [#430882] After an ICA connection is established between a client and a NetScaler Gateway appliance, errors or old receiver or server versions, can prevent the appliance from exporting the AppFlow records to NetScaler Insight Center. In such cases, the NetScaler Insight Center dashboard now displays the reasons for which the NetScaler appliance does not export the AppFlow records. [From Build 55.23] [#504954] 55.23] [#490147, 482900] Hop Diagram Support The 55.23] [#443824] You can now increase the storage space of NetScaler Insight Center to 512 GB. [From Build 55.23] [#425761, 553254] Multi-Hop support for NetScaler Insight Center enables Insight Center to detect which Citrix appliances a connection passes through (CloudBridge, NetScaler, NetScaler Gateway), and in which order, for improved reporting. [From Build 55.23] [#383172] The NetScaler Insight Center configuration utility now displays the progress of the upgrade process. [From Build 55.23] [#519788, 522021] Appliance Reboot Progress Status NetScaler SDX Appliance now displays the reboot progress. This helps in keeping the user informed about the various stages of the appliance reboot. [From Build 55.23] [#454093] Management service now provides support for SNMP v3 traps in addition to the already existing support for SNMP v2 traps. SNMP v3 provides better administration and security capabilities through better encryption, authentication and data integrity mechanisms. [From Build 55.23] [#431687] Partial Licensing You can now. [From Build 55.23] [#519771] Management service now provides support for XenServer 6.5. [From Build 55.23] [#538641] Initiate Virtual-NMI The Initiate Virtual-NMI generates a core dump of a VPX instance. Initiating a virtual NMI is useful when your NetScaler instance has stopped responding. To generate a virtual NMI, click on Configuration > Diagnostics. Click Initiate NMI under Non-Maskable Interrupt. [From Build 55.23] [#475027] NetScaler SDX supports cluster with three tuple notation. [From Build 55.23] [#470894] Retrieve LDAP Server Attributes When you configure an LDAP server and provide the IP address of the LDAP server, the management service automatically fetches the attributes like Server Logon Name Attribute, Search Filter, Group Attribute, Sub Attribute Name. This helps in reducing the error during filling these details for LDAP configuration. [From Build 55.23] [#491661] In the Management Service, the user interface for licensing the NetScaler SDX appliances is now identical to the user interface for licensing the NetScaler MPX and NetScaler VPX appliances. [From Build 55.23] [#479628, 517234] If you create channels on SDX and use these channels in VPXs and then take a backup of the appliance to restore either the complete appliance or selected instances, then channels are not restored and instances may fail. [From Build 55.23] [#432899, 435206] Syslog Viewer Syslog Viewer helps you in searching through the syslog messages based on various filters. You can narrow your search based on module like API, CLI, CONFIG, EVENT etc. You can further choose the type of message that you want to search through, like, ALERT, CRITICAL etc. Syslog Viewer also provides the option to search through regular expression or based on case sensitive text [From Build 55.23] [#478512] When you use the NetScaler provisioning wizard, the option to upload the XVA file has been added to the wizard. To use the XVA file to create a NetScaler instance, you need to first upload the XVA file. [From Build 55.23] [#476695] Support for SNMP MIB Configuration NetScaler SDX appliance now supports SNMP MIB configuration. You can configure SNMP MIB from Management Service by navigating to Configuration > System > SNMP > Settings > Configure SNMP MIB [From Build 55.23] [#523926] Options to disable and enable TLSv1, TLSv1.1, and TLSv1.2 has been added in the Management Service. To enable or disable TLS, navigate to Configuration > System. In the System Settings group, click on Change SSL Settings link. [From Build 55.23] [#540347] Setup Wizard You can use the Setup Wizard to complete all the first time configurations in a single flow. You can use the wizard to assign various management network IP addresses, configure system settings, change the default admin password, manage and update licenses. You can also use this wizard to modify the network configuration details that you provided for the NetScaler SDX appliance during initial configuration. To access the wizard, navigate to Configuration > System, under Setup Appliance, click Setup Wizard. [From Build 55.23] [#498284] Default time zone The default timezone when management service creates NetScaler instances is the NTP timezone. When this default timezone is modified using the management service, then the update is synchronized across the NetScaler instances [From Build 55.23] [#451866, 492929] Clean Install You can use the clean install feature to downgrade the software version of a NetScaler SDX appliance without losing the IP addresses or passwords. Clean install is different than factory reset in the manner that you can choose the SDX version to which you want to downgrade the appliance. To perform a clean install, navigate to Configuration > System > System Administration. In the System Administration Group, click Appliance Reset and follow the prompts. [From Build 62.10] [#519772] Static Routes Support for Management Service You can now specify an IP address as a static route when provisioning a NetScaler instance. The instance then uses this address, instead of the default route, to connect to the Management Service. [From Build 64.34] [#498445] Support to Encrypt Backup Files The Management Service now provides an option to encrypt the backup files. [From Build 64.34] [#576381] Option to Disable nsrecover Login Account Using the Management Service interface, you can now disable the nsrecover login account. To disable the nsrecover login account, navigate to "Configuration > System > Configure System Settings" and clear the "Enable nsrecover Login" check box. [From Build 64.34] [#576375] Ability to configure SSL Ciphers to Securely Access the Management Service You can select SSL cipher suites from a list of SSL ciphers supported by SDX appliances, and bind any combination of the SSL ciphers to access the Management Service securely through HTTPS [From Build 64.34] [#530232] Redundant Interface Sets A redundant interface set is a set of interfaces in which one interface is active and the others are on standby. If the active interface fails, one of the standby interfaces takes over and becomes active. Following are the main benefits of using redundant interface sets: - The back-up links between the NetScaler appliance and a peer device ensure connection reliability. - Unlike link redundancy using LACP, no configuration is required on the peer device for a redundant interface set. To the peer device, a redundant interface set appears as individual interfaces, not as a set or collection. - In a high availability (HA) configuration, redundant interface sets can minimize the number the HA failovers. A redundant interface set is specified in LR/X notation, where X can range from 1 to 4. For example, LR/1. [From Build 55.23] [#355237, 186503, 249551] Logging HTTP Header Information The NetScaler appliance can now log header information of HTTP requests related to an LSN configuration. The following header information of an HTTP request packet can be logged: - URL that the HTTP request is destined to. - HTTP Method specified in the HTTP request. - HTTP version used in the HTTP request. - IP address of the subscriber that sent the HTTP request. An HTTP header log profile is a collection of HTTP header attributes (for example, URL and HTTP method) that can be enabled or disabled for logging. The HTTP header log profile is then bound to an LSN group. The NetScaler appliance then logs HTTP header attributes, which are enabled in the bound HTTP header log profile for logging, of any HTTP requests related to the LSN group. An HTTP header log profile can be bound to multiple LSN groups but an LSN group can have only one HTTP header log profile. [From Build 55.23] [#496835] Layer 2 PBR Support for Forwarding Sessions In earlier releases, Layer 2 information (for example, destination MAC address, source VLAN, and Interface ID) about packets related to forwarding sessions were ignored during a PBR lookup. In other words, any packet related to a forwarding session was not considered for matching against a PBR having Layer 2 parameters as its condition. Now, layer 2 information about a packet related to a forwarding session is matched against layer 2 parameters in the configured PBRs. This feature is useful in a scenario where packets related to a forwarding session must be processed by another device before being sent to their destination. Following are the benefits of this support: - Instead of defining new PBRs that are based on Layer 3 parameters, you can use existing PBRs based on Layer 2 parameters for sending the packets related to forwarding sessions to the desired next hop device. - In a deployment that includes NetScaler appliances and optimization devices (for example, Citrix ByteMobile and Citrix CloudBridge appliances), PBRs based on Layer 2 parameters can be very handy compared to other, complex configuration for identifying the forwarding session related packets for PBR processing. - Identifying forwarding session related Ingress packets for sending them to the optimization device. - Identifying egress packets, which also matched a forwarding session rule, from the optimization device for sending the packets to the desired next hop device. [From Build 55.23] [#484458] MAC Address Wildcard Mask for Extended ACLs A new wildcard mask parameter for extended ACLs and ACL6s can be used with the source MAC address parameter to define a range of MAC addresses to match against the source MAC address of incoming packets. MAC Address Wildcard Mask for PBRs A new wildcard mask parameter for PBRs and PBR6s can be used with the source MAC address parameter to define a range of MAC addresses to match against the source MAC address of outgoing packets. [From Build 55.23] [#391630] Client Source Port for Server Side Connections?related to INAT and RNAT Rules The NetScaler appliance, for INAT and RNAT rules, now supports using client port as the source port for server side connections. A parameter Use Proxy Port has been added to the INAT and RNAT command set. When?Use Proxy Port is disabled?for an INAT rule or a RNAT rule, the NetScaler appliance retains the source port of the client's request?for the server side connection. When the option is enabled (default), the NetScaler appliance uses a random port as the source port for the server side connection.?? You must disable this?parameter?for proper functioning of certain protocols that require?a specific?source port in the request packet. [From Build 55.23] [#399821] Blocking Traffic on Internal Ports At the command prompt, type: > set l3param -implicitACLAllow [ENABLED|DISABLED] Note: The parameter implicitACLAllow is enabled by default. Example > set l3param -implicitACLAllow DISABLED Done [From Build 55.23] [#529317] GRE a new parameter with the GRE IP tunnel command set. You can set the GRE payload parameter to do one of the following before the packet is sent through the GRE tunnel: - Carry the Ethernet header but drop the VLAN header - Drop the Ethernet header as well as the VLAN header - Carry the Ethernet header as well the VLAN header [From Build 55.23] [#518397] Changing the Priority of a VIP Address Automatically in an Active-Active Deployment To ensure that a backup VIP address takes over as the master VIP before the node of the current master VIP address goes down completely, you can configure a node to change the priority of a VIP address on the basis of the states of the interfaces on that node. For example, the node reduces the priority of a VIP address when the state of an interface changes to DOWN, and increases the priority when the state of the interface changes to UP. This feature is configured on each node. It applies to the specified VIP addresses on the node. To configure this feature on a node, you set the Reduced Priority (trackifNumPriority) parameter, and then associate the interfaces whose state is to be tracked for changing the priority of the VIP address. When any associated interface's state changes to DOWN or UP, the node reduces or increases the priority of the VIP address by the configured Reduced Priority (trackifNumPriority) value. [From Build 55.23] [#512848] Configuring Communication Intervals for an Active-Active Deployment In an active-active deployment, all NetScaler nodes use the Virtual Router Redundancy Protocol (VRRP) to advertise their master VIP addresses and the corresponding priorities in VRRP advertisement packets (hello messages) at regular intervals. VRRP uses the following communication intervals: * Hello Interval?Interval between successive VRRP hello messages that a node sends, for all of its active (master) VIP addresses, to the other nodes of the VRRP deployment. For a VIP address, nodes on which the VIP address is in the inactive state use the hello messages as verification that the master VIP address is still UP. * Dead Interval?Time after which a node of a backup VIP address considers the state of the master VIP address to be DOWN if VRRP hello messages are not received from the node that has the master VIP address. After the dead interval, the backup VIP address takes over and becomes the master VIP address. You can change these intervals to a desired value on each node. They apply to all VIP addresses on that node. [From Build 55.23] [#512843] OSPFv3 Authentication For ensuring the integrity, data origin authentication, and data confidentiality of OSPFv3 packets, OSPFv3 authentication must be configured on OSPFv3 peers. The NetScaler appliance supports OSPFv3 authentication and is partially compliant with RFC 4552. OSPFv3 authentication is based on the two IPSec protocols: Authentication Header (AH) and Encapsulating Security Payload (ESP). The NetScaler supports only the AH protocol for OSPFv3 authentication. OSPFv3 authentication use manually defined IPSec Security Associations (SAs) between the OSPFv3 peers and does not rely on IKE protocol for forming dynamic SAs. Manual SAs define the security parameter Index (SPI) values, algorithms, and keys to be used between the peers. Manual SAs require no negotiation between the peers; therefore, same SA must be defined on both the peers. You can configure OSPFv3 authentication on a VLAN or for an OSPFv3 area. When you configure for a VLAN, the settings are applied to all the interfaces that are member of the VLAN. When you configure OSPFv3 authentication for an OSPF area, the settings are applied to all the VLANs in that area. The settings are in turn applied to all the interfaces that are members of these VLANs. These settings do not apply to member VLANs on which you have configured OSPFv3 authentication directly. [From Build 55.23] [#471703] Jumbo Frames Support for NetScaler VPX Appliances NetScaler VPX appliances now support receiving and transmitting jumbo frames containing up to 9216 bytes of IP data. Jumbo frames can transfer large files more efficiently than is possible with the standard IP MTU size of 1500 bytes. A NetScaler. Jumbo Frames support is available on NetScaler VPX appliances running on the following virtualization platforms: - VMware ESX (Note that NetScaler VPX appliances running on VMware ESX support receiving and transmitting jumbo frames containing up to only 9000 bytes of IP data.) - Linux-KVM For configuring Jumbo Frames on a NetScaler VPX appliance, you must: - Set the MTU of the interface or channel of the VPX appliance to a value in the range 1501-9216. Use the NetScaler command line interface or the configuration utility of the VPX appliance to set the MTU size. - Set the same MTU size on the corresponding physical interfaces of the virtualization host by using its management applications. [From Build 55.23] [#464830, 478103, 485905] The NetScaler appliance supports sending static IPv6 routes through a VXLAN. You can enable the NetScaler appliance to send an IPv6 route through either a VXLAN or a VLAN. A VXLAN parameter is added to the static IPv6 route command set. [From Build 55.23] [#472443] Support of IPv6 Dynamic Routing Protocols on VXLANs The NetScaler appliance supports IPv6 dynamic routing protocols for VXLANs. You can configure various IPv6 Dynamic Routing protocols (for example, OSPFv3, RIPng, BGP) on VXLANs from the VTYSH command line. An option IPv6 Dynamic Routing Protocol has been added to VXLAN command set for enabling or disabling IPv6 dynamic routing protocols on a VXLAN. After enabling IPv6 dynamic routing protocols on a VXLAN, processes related to the IPv6 dynamic routing protocols are required to be started on the VXLAN by using the VTYSH command line. [From Build 55.23] [#472432] Specifying a VLAN in a Static ARP Entry In a static ARP entry, you can specify the VLAN through which the destination device is accessible. This feature is useful when the interface specified in the static ARP entry is part of multiple tagged VLANs and the destination is accessible through one of the VLANs. The NetScaler appliance includes the specified VLAN ID in the outgoing packets matching the static ARP entry. If you don't specify a VLAN ID in an ARP entry, and the specified interface is part of multiple tagged VLANs, the appliance assigns the interface's native VLAN to the ARP entry. For example, say NetScaler interface 1/2 is part of native VLAN 2 and of tagged VLANs 3 and 4, and you add a static ARP entry for network device A, which is part of VLAN 3 and is accessible through interface 1/2. You must specify VLAN 3 in the ARP entry for network device A. The NetScaler appliance then includes tagged VLAN 3 in all the packets destined to network device A, and sends them from interface 1/2. If you don't specify a VLAN ID, the NetScaler appliance assigns native VLAN 2 for the ARP entry. Packets destined to device A are dropped in the network path, because they do not specify tagged VLAN 3, which is the VLAN for device A. [From Build 55.23] [#520355] As-Override Support in Border Gateway Protocol As a part of BGP loop prevention functionality, if a router receives a BGP packet containing the router's Autonomous System Number (ASN) in the Autonomous Systems (AS) path, the router drops the packet. The assumption is that the packet originated from the router and has reached the place from where it originated. If an enterprise has several sites with a same ASN, BGP loop prevention causes the sites with an identical ASN to not get linked by another ASN. Routing updates (BGP packets) are dropped when another site receives them. To solve this issue, BGP AS-Override functionality has been added to the ZebOS BGP routing module of the NetScaler. With AS-Override enabled for a peer device, when the NetScaler appliance receives a BGP packet for forwarding to the peer, and the ASN of the packet matches that of the peer, the appliance replaces the ASN of the BGP packet with its own ASN number before forwarding the packet. [From Build 55.23] [#503566] Keeping a VIP address in the Backup State By default, for configurations with USIP option disabled or with USIP and use proxy port options enabled, the NetScaler appliance communicates to the servers from a random source port (greater than 1024). The NetScaler supports using a source port from a specified port range for communicating to the servers. One of the use case of this feature is for servers that are configured to identify received traffic belonging to a specific set on the basis of source port for logging and monitoring purposes. For example, identifying internal and external traffic for logging purpose. For more information, see. [From Build 64.34] [#420067, 420039] Keeping a VIP address in the Backup State You can force a VIP address to always stay in backup state in a VRRP deployment. This operation is helpful in maintenance or testing of the deployment. When a VIP address is forced to stay in backup state, it does not participate in VRRP state transitions. Also, it cannot become master even if all other nodes go down. To force a VIP address to stay in backup state, you set the priority of the associated VMAC address to zero. To ensure that none of the VIP addresses of a node handle traffic during a maintenance process on the node, set all the priorities to zero. For more information, see. [From Build 64.34] [#553311] Delaying Preemption By default, a backup VIP address preempts the master VIP address immediately after its priority becomes higher than that of the master VIP. When configuring a backup VIP address, you can specify an amount of time by which to delay the preemption. Preemption delay time is a per-node setting for each backup VIP address. The preemption delay setting for a backup VIP does not apply in the following conditions: * The node of the master VIP goes down. In this case, the backup VIP takes over as the master VIP after the dead interval set on the backup VIP's node. * The priority of the master VIP is set to zero. The backup VIP takes over as the master VIP after the dead interval set on the backup VIP's node. For more information, see. [From Build 64.34] [#553246] Support for JPEG-XR image format in Front End Optimization (FEO) The front end optimization feature now supports the conversion of GIF, JPEG, TIFF, and PNG images to JPEG-XR format as part of the image optimization functionality. [From Build 55.23] [#504044] Support for WebP image format in Front End Optimization (FEO) The front end optimization feature now supports the conversion of GIF, JPEG, and PNG images to WEBP format as part of the image optimization functionality. [From Build 55.23] [#509338] Media classification support on the NetScaler appliance You can now monitor and display the statistics of the media traffic going through the NetScaler appliance. [From Build 55.23] [#493103] Support for New Hardware Platforms The MPX 25100T and MPX 25160T platforms are now supported in this release. For more information about these platforms, see. [From Build 55.23] [#486703, 495591, 552218] Policy extensions support on NetScaler appliance The NetScaler appliance now supports policy extensions, which you can use to add customized functions to default syntax policy expressions. An extension function can accept text, double, Boolean or number values as input, perform a computation, and produce a text, double, Boolean or number result. [From Build 55.23] [#248822] Transaction Scope Variables Transaction scope variables are added to variables feature. You can now use transaction scope variables to specify separate instances with values for each transaction processed by the NetScaler appliance. Transaction variables are useful for passing information from one phase of the transaction to another. For example, you can use a transaction variable to pass information about the request onto the response processing. [From Build 55.23] [#444109] Support for Displaying the Hex Code of a CIpher The show ciphersuite command now displays the IETF standard hexadecimal code of the cipher. It is helpful in debugging, because a hex code is unique to a cipher but the cipher name might differ on the NetScaler appliance, OpenSSL, and Wireshark. At the NetScaler command line, type: show ciphersuite In the configuration utility, navigate to Traffic Management > SSL > Cipher Groups. [From Build 55.23] [#491286] Stricter Control on Client Certificate Validation You can configure the SSL virtual server to accept only client certificates that are signed by a CA certificate bound to the virtual server. To do so, enable the ClientAuthUseBoundCAChain setting in the SSL profile bound to the virtual server. For more information, see. [From Build 55.23] [#533241] Support for TLS Protocol Version 1.1 and 1.2 on the backend on the NetScaler MPX, MPX-FIPS, and SDX Appliances The NetScaler MPX appliance now supports TLS protocol versions 1.1 and 1.2 on the backend. MPX-FIPS appliances running firmware version 2.2 also support TLSv1.1/1.2 on the backend. On an SDX appliance, TLSv1.1/1.2 is supported on the backend only if an SSL chip is assigned to the VPX instance. [From Build 55.23] [#494082, 566364] Changes to the Default Cipher Suite If user-defined ciphers or cipher groups are not bound to an SSL virtual server, the DEFAULT cipher group is used for cipher selection at the front end and the ALL cipher group is used for cipher selection at the back end. In this release, the predefined cipher suites, such as DEFAULT and ALL, are modified to give strong ciphers a higher priority. For example, earlier RC4-MD5 was given a higher priority but it is deprioritized in the new list because it is a weak cipher. [From Build 55.23] [#226713, 258311, 384491] Support for TLS Protocol Version 1.1 and 1.2 on the front end on the NetScaler VPX and SDX Appliances The NetScaler VPX appliance now supports TLS protocol versions 1.1 and 1.2 on the front end. On an SDX appliance, TLSv1.1/1.2 are supported on the front end even if an SSL chip is not assigned to the VPX instance. [From Build 55.23] [#424463, 481970] Support for Checking the Subject Alternative Name in addition to the Common Name in a Server Certificate If you configure a common name on an SSL service or service group for server certificate authentication, the subject alternative name (SAN), if specified, is matched in addition to the common name. Therefore, if the common name does not match, the name that you specify is compared to the values in the SAN field in the certificate. If it matches one of those values, the handshake is successful. Note that in the SAN field, only DNS names are matched. [From Build 55.23] [#439161] 2048-bit Default Certificates on the NetScaler Appliance With this release, the default certificate on a NetScaler appliance is 2048-bits. In earlier builds, the default certificate was 512-bits or 1024-bits. After upgrading to release 11.0, you must delete all your old certificate-key pairs starting with "ns-", and then restart the appliance to automatically generate a 2048-bit default certificate. [From Build 55.23, 64.34] [#451441, 405363, 458905, 465280, 540467, 551603, 559154, 547106, 584335, 588128] Support for SNI with a SAN Extension Certificate The NetScaler appliance now supports SNI with a SAN extension certificate. During handshake initiation, the host name provided by the client is first compared to the common name and then to the subject alternative name. If the name matches, the corresponding certificate is presented to the client. [From Build 55.23] [#250573] DH Key Performance Optimization DH key generation is optimized on a VPX appliance by adding a new parameter dhKeyExpSizeLimit. You can set this parameter on an SSL virtual server or on an SSL profile and bind the profile to the SSL virtual server. The key generation is optimized as defined by NIST in. Additionally, the minimum DH count is set to zero. As a result, you can now generate a DH key for each transaction as opposed to a minimum of 500 transactions earlier. This helps to achieve perfect forward secrecy (PFS). [From Build 55.23] [#498162, 512637] Support for TLS_FALLBACK_SCSV signaling cipher suite value The. For more information, see. [From Build 55.23] [#509666, 573528] Support for Additional Ciphers on a DTLS Virtual Server EDH, DHE, ADH, EXP, and ECDHE ciphers are now supported on a DTLS virtual server. [From Build 55.23] [#508440, 483391] Support for Auto-Detection of the Certificate-Key Pair Format The NetScaler software has been enhanced to automatically detect the format of the certificate-key pair. To do so, the format of the certificate and key file should be the same. If you specify the format in the inform parameter, it is ignored by the software. Supported formats are PEM, DER, and PFX. [From Build 55.23] [#209047, 432330, 481660] New SNMP OIDs for SSL transactions per second The following SNMP OIDs have been added to the display the SSL transactions per second: NS-ROOT-MIB::sslTotTransactionsRate.0 = Gauge32: 0 NS-ROOT-MIB::sslTotSSLv2TransactionsRate.0 = Gauge32: 0 NS-ROOT-MIB::sslTotSSLv3TransactionsRate.0 = Gauge32: 0 NS-ROOT-MIB::sslTotTLSv1TransactionsRate.0 = Gauge32: 0 [From Build 55.23] [#449923] Support for ECDHE Ciphers at the Back End The NetScaler appliance now supports the following ECDHE ciphers at the back end: - TLS1-ECDHE-RSA-RC4-SHA - TLS1-ECDHE-RSA-DES-CBC3-SHA - TLS1-ECDHE-RSA-AES128-SHA - TLS1-ECDHE-RSA-AES256-SHA Note: This feature is available only for NetScaler MPX platforms. [From Build 55.23] [#523464] Support for Thales nShield(R) HSM All NetScaler MPX, SDX, and VPX appliances except the MPX 9700/10500/12500/15500 appliances now support the Thales nShield(R) Connect external Hardware Security Module (HSM). With a Thales HSM, the keys are securely stored as application key tokens on a remote file server and can be reconstituted only inside the Thales HSM. Thales HSMs comply with FIPS 140-2 Level 3 specifications. Thales integration with the ADC is supported for TLS versions 1.0, 1.1, and 1.2. For more information about support for Thales nShield(R) HSM, see. [From Build 62.10] [#440351, 477544] 64.34] [#551603, 559154] Using the SSL Chip Utilization Percentage Counter for Capacity Planning on MPX Appliances that use N3 Chips Knowing the percentage utilization of all the SSL chips in an appliance over a period of time helps in capacity planning. The counter increments every 7 seconds and therefore provides real-time data, which can help you predict when an appliance is likely to reach capacity. Note: This feature is available on only the MPX appliances that use N3 chips, which include MPX 11515/11520/11530/11540/11542 and MPX 220140/22060/22080/22100/22120/24100/24150 appliances. Some models of MPX 14020/14030/14040/14060/14080/14100 and MPX 25100/25160/25200, which use N3 chips, also support this feature. [From Build 64.34] [#416807, 197702] New Counters in SSL Statistics Because TLS 1.1 and 1.2 are becoming the primary security protocols, the transaction and session statistics for these protocols are now included in the SSL statistics. [From Build 64.34] [#336395, 559165, 560353] Graceful Cleanup of SSL sessions after change in any SSL entity parameter Some operations - for example, updating a certificate to replace a potentially exposed certificate, using a stronger key (2048-bit instead of 1024-bit), adding/removing a certificate from a certificate chain, or changing any of the SSL parameters - should clean the SSL sessions gracefully instead of abruptly terminating existing sessions. With this enhancement, existing connections continue to use the current settings but all new connections use the new certificate or settings. However, connections that are in the middle of a handshake or sessions that are renegotiating are terminated, and session reuse is not allowed. To clear the sessions immediately after a configuration change, you must disable and reenable each entity. [From Build 64.34] [#529979] Enhanced SSL Profile The SSL infrastructure on the NetScaler appliance is continually updated to address the ever growing requirements for security and performance. Vulnerabilities in SSLv3 and RC4 implementation have emphasized the need to use the latest ciphers and protocols to negotiate the security settings for a network connection. Implementing any changes to the configuration, such as disabling SSLv3, across thousands of SSL end points is a cumbersome process. Therefore, settings that were part of the SSL end points configuration have been moved to the SSL profile, along with the default ciphers. To implement any change in the configuration, including cipher support, you need only modify the profile. If the profile is enabled, the change is immediately reflected in all the end points that the profile is bound to. Important: After the upgrade, if you enable the profile, you cannot reverse the changes. That is, the profile cannot be disabled. [From Build 64.34] [#533640] 55.23] [#480258, 494482, 523853] One Perl script to support both call home and regular uploads The script used to upload collector archives to Citrix servers is now packaged as part of the official NetScaler build (collector_upload.pl). However, using this script directly is not recommended. Instead, use the -upload option in showtechsupport utility to upload the archives. [From Build 55.23] [#525332] Support for milliseconds, microseconds, and nanoseconds in Time Format Definition table You can now configure NetScaler web logging clients to capture transaction times in milliseconds, microseconds, and nanoseconds for logging on the NetScaler appliance. [From Build 55.23] [#505840, 505377] Maintaining minimum number of reuse pool connections in HTTP Profiles You can now specify the minimum number of reuse pool connections to be opened from the NetScaler appliance to a particular server. This setting helps in optimal memory utilization and reduces the number of idle connections to the server. [From Build 55.23] [#397478] User configurable congestion window for TCP profile You can now set the maximum congestion window size for a TCP profile on the NetScaler appliance. [From Build 55.23] [#248711] Call home support for NetScaler VPX models Call home support has been added to NetScaler VPX models 1000 and higher. [From Build 55.23] [#311620] The NetScaler Web Logging (NSWL) client logs a hyphen (-) instead of a user name when %u is specified in the log format. [From Build 55.23] [#238440, 239481, 247372, 422873] Showtechsupport utility enhancement If your NetScaler appliance has Internet connectivity, you can now directly upload the newly generated collector archive to the Citrix technical support server from the appliance. [From Build 55.23] [#480797] Support for FACK on TCP profiles The TCP profiles on a NetScaler appliance now support forward acknowledgement (FACK). FACK avoids TCP congestion by explicitly measuring the total number of data bytes outstanding in the network, and helping the sender (either a NetScaler ADC or a client) control the amount of data injected into the network during retransmission timeouts. [From Build 55.23] [#439130] NTP Version Update 55.23] [#440375, 440591] The NetScaler appliance fails intermittently when trace is started in 'RX' mode. [From Build 55.23] [#576067] The NetScaler introduces a new role called sysadmin. A sysadmin is lower than a superuser is terms of access allowed on the appliance. A sysadmin user can perform all NetScaler operations with the following exceptions: no access to the NetScaler shell, cannot perform user configurations, cannot perform partition configurations, and some other configurations as stated in the sysadmin command policy. [From Build 55.23] [#548516] Support for HTTP/2 on the NetScaler Appliance The NetScaler appliance supports HTTP/2 connections with clients supporting HTTP/2 protocol. [From Build 55.23] [#490096, 505747] The NetScaler appliance generates SNMP clear alarm traps for successful cases of haVersionMismatch, haNoHeartbeats, haBadSecState, haSyncFailure, and haPropFailure error events in an HA configuration. [From Build 55.23] [#368832] Specifying a domain name for a logging server When configuring an auditlog action, you can specify the domain name of a syslog or nslog server instead of its IP address. Then, if the server's IP address changes, you do not have to change it on the NetScaler appliance. [From Build 64.34] [#314438]. [From Build 64.34] [#569974] Support for Configuring a Proxy Server to Install Licenses You no longer have to configure internet connectivity on the NetScaler appliance in order to use a hardware serial number or license activation code to allocate a NetScaler license. Instead, you can use a proxy server. On the NetScaler GUI, navigate to Configuration > System > Licenses > Manage Licenses > Add a New License, select the Connect through Proxy Server check box, and specify the IP address and port of your proxy server. [From Build 64.34] [#541474] Support for MPTCP Version Negotiation A 64.34] [#529883] The tech support bundle that is generated for a NetScaler MPX appliance that has a LOM port, generates a list of LOM sensors and stores this list in the support bundle in the "shell/ipmitool_sensor_list.out" file. [From Build 64.34] [#596315] Subscriber-Aware Service Chaining Service chaining is determining the set of services through which the outbound traffic from a subscriber must pass before going to the Internet. Multiple services, such as antivirus services, parental control services, firewalls, and web filter, are running in a Telco network. Different subscribers have different plans and each plan has specific services associated with it. The decision to direct a subscriber's request to a service is based on the subscriber information. Instead of sending all the traffic to all the services, the NetScaler appliance intelligently routes all requests from a subscriber to a specific set of services on the basis of the policy defined for that subscriber. The appliance receives the subscriber information from the PCRF over a Gx interface. For more information about subscriber-aware service chaining, see. [From Build 62.10] [#561747] Support for RADIUS Accounting Message The NetScaler appliance can now dynamically receive the subscriber information through a RADIUS accounting message. It receives the subscriber IP address and MSISDN and uses this information to retrieve the subscriber rules from the PCRF server. For more information about RADIUS Accounting Message, see. [From Build 62.10] [#526981] Support for Gx Interface The NetScaler appliance can now dynamically receive the subscriber information over a Gx interface. The appliance communicates with the PCRF server over the Gx interface, receives the subscriber information, and uses this information to direct the flow of traffic. The PCRF server can send updates over this interface at any point during the subscriber session. For more information about Gx interface, see. [From Build 62.10] [#402469] Provide Internet Access to IPv4 Subscribers Through the IPv6 Core Network of a Telecom Service Provider (Dual-Stack Lite) Because of the shortage of IPv4 addresses, and the advantages of IPv6 over IPv4, many ISPs have started transitioning to IPv6 infrastructure. But during this transitioning,6 tunneling to send a subscriber's IPv4 packet over a tunnel on the IPv6 access network to the ISP. The IPv6 packet is de capsulated to recover the subscriber's IPv4 packet and is then sent to the Internet after NAT address and port translation other LSN related processing. The response packets traverse through the same path to the subscriber. The NetScaler appliance implements the AFTR component of a DS-Lite deployment and is compliant with RFC 6333. For more information about the DS-Lite feature, see. [From Build 62.10] [#407162] Subscriber-Aware Traffic Steering Traffic steering is directing subscriber traffic from one point to another based on subscriber information. When a subscriber connects to the network, the packet gateway associates an IP address with the subscriber and forwards the data packet to the NetScaler appliance. The appliance communicates with the PCRF server over the Gx interface to get the policy information. Based on the policy information, the appliance performs one of the following actions: - Forwards the data packet to another set of services - Drops the packet - Performs LSN if configured on the appliance For more information about subscriber-aware traffic steering, see. [From Build 62.10] [#402473] Provide Internet Access to a Large Number of Private IPv4 Subscribers of a Telecom Service Provider (Large Scale NAT) The Internet's phenomenal growth has resulted in a shortage of public IPv4 addresses. Large Scale NAT (LSN/CGNAT) provides a solution to this issue, maximizing the use of available public IPv4 addresses by sharing a few public IPv4 addresses among a large pool of Internet users. LSN translates private IPv4 addresses into public IPv4 addresses. It includes network address and port translation methods to aggregate many private IP addresses into fewer public IPv4 addresses. LSN is designed to handle NAT on a large scale. The NetScaler supports LSN and is compliant with RFC 6888, 5382, 5508, and 4787. The NetScaler LSN feature is very useful for Internet Service Providers (ISPs) and carriers providing millions of translations to support a large number of users (subscribers) and at very high throughput. The LSN architecture of an ISP using Citrix products consists of subscribers (Internet users) in private address spaces accessing the Internet through a NetScaler appliance deployed in ISP's core network. The following lists some of the LSN features supported on a NetScaler appliance: * ALGs: Support of application Layer Gateway (ALG) for SIP, PPTP, RTSP, FTP, ICMP, and TFTP protocols. * Deterministic/ Fixed NAT: Support for pre-allocation of block of ports to subscribers for minimizing logging. * Mapping: Support of Endpoint-independent mapping (EIM), Address-dependent mapping ( ADM), and Address-Port dependent mapping. * Filtering: Support of Endpoint-independent filtering (EIF), Address-dependent filtering, and Address-Port-dependent filtering. * Quotas: Configurable limits on number of ports and sessions per subscriber. * Static Mapping: Support of manually defining an LSN mapping. * Hairpin Flow: Support for communication between subscribers or internal hosts using public IP addresses. * LSN Clients: Support for specifying or identifying subscribers for LSN NAT by using IPv4 addresses and extended ACL rules. * Logging: Support for logging LSN session for law enforcement. In addition, the following are also supported for logging: ** Reliable SYSLOG: Support reduces the LSN log volume. For more information about the Large Scale NAT feature, see. [From Build 62.10] [#316909] Provide Visibility into SLA Reports An ISP often purchases international bandwidth from upstream ISPs, who then become layer 2 ISPs. To provide the redundancy required for reliable service to its customers, the purchasing ISP negotiates Service Level Agreements with multiple layer 2 ISPs. The SLAs stipulate a penalty in the event that the layer 2 ISP fails to maintain a specified level of service. NetScaler Insight Center and the NetScaler cache redirection feature can now be used to monitor the traffic flowing through the NetScaler appliances and calculate SLA breaches. The NetScaler cache redirection feature helps save bandwidth over international links. NetScaler Insight Center works with the NetScaler cache redirection feature to calculate, and provide visibility into, the percentage of bandwidth saved and any breaches of the SLA. ISP administrators are alerted whenever there is a breach for response time, hit rate/sec, or bandwidth. For a specific domain, NetScaler calculates the following SLA breaches and forwards the data to NetScaler Insight Center: * SLA Breach. A breach that occurs when a metric (response time, hits, or bandwidth) crosses the defined threshold value. For example, SLA breach is considered if the response time for a specific domain crosses 100 ms. * SLA Breach Duration. Time period in which a SLA breach lasted. For example, SLA Breach Duration is considered 5 mins, if the response time for a domain is greater than 100 ms consistently for 5 mins. * Breached Request Percentage. Percentage of requests whose response time is not within the minimum response time and maximum response time range. For example, if you configure this value as 10%, then among 100 requests, the response time of 10 requests are not within the minimum response time and maximum response time. NetScaler Insight Center then calculates the following SLA breaches: * SLA Breach Frequency- SLA Breach Frequency is the defined as the number of times the SLA breach occurs for the SLA Breach Duration. For example, SLA Breach Frequency is considered 1, if the response time for a domain is greater than 100 ms consistently for 5 mins. All of these metrics are calculated for a SLA group, which contains a list of domains defined by the ISP administrator. [From Build 62.10] [#495288, 501269, 501277, 501278, 501279, 501280] High Availability Support for Dynamic Subscriber Sessions In the absence of a high availability (HA) setup, the subscriber information that is received from the RADIUS client is lost if the appliance fails. With HA support, the subscriber sessions are continually synchronized on the secondary node. In the event of a failover, the subscriber information is still available on the secondary node. [From Build 63.16] [#574838] Subscriber Session Event Logging The NetScaler appliance currently maintains millions of subscriber sessions in its database (subscriber store) but does not log these messages. Telco administrators need reliable log messages to track the control plane messages specific to a subscriber. They also need historical data to analyze subscriber activities. The appliance now supports logging of RADIUS control plane accounting messages and Gx control plane logging messages. Some of the key attributes are MSISDN and time stamp. By using these logs, you can track a user by using the IP address, and the MSISDN if available. [From Build 64.34] [#575621, 575623] IPv6 Prefix based Subscriber Sessions A telco user can be uniquely. [From Build 64.34] [#574135] Logging MSISDN Information for a Large Scale NAT configuration A Mobile Station Integrated Subscriber Directory Number (MSISDN) is a telephone number uniquely identifying a subscriber across multiple mobile networks. The MSISDN is associated with a country code and a national destination code identifying the subscriber's operator. You can configure a NetScaler appliance to include MSISDNs in LSN log entries for subscribers in mobile networks. The presence of MSISDNs in the LSN logs helps the administrator in faster and accurate back tracing of a mobile subscriber who has violated a policy or law, or whose information is required by lawful interception agencies. For more information, see. [From Build 64.34] [#581315, 502083] Deterministic NAT Allocation for DS-Lite Deterministic NAT allocation for DS-Lite LSN deployments is a type of NAT resource allocation in which the NetScaler appliance pre-allocates, from the LSN NAT IP pool and on the basis of the specified port block size, an LSN NAT IP address and a block of ports to each subscriber (subscriber behind B4 device). The appliance sequentially allocates NAT resources to these subscribers. It assigns the first block of ports on the beginning NAT IP address to the beginning subscriber IP address. The next range of ports is assigned to the next subscriber, and so on, until the NAT address does not have enough ports for the next subscriber. At that point, the first port block on the next NAT address is assigned to the subscriber, and so on. The NetScaler appliance logs the allocated NAT IP address and the port block for a subscriber. For a connection, a subscriber can be identified by just its mapped NAT IP address and port block. For this reason, the NetScaler appliance does not log the creation or deletion of an LSN session. For more information, see. [From Build 64.34] [#582325] Port Block Size in a Large Scale NAT Configuration Deterministic NAT and Dynamic NAT with port block allocation significantly reduce the LSN log volume. For these two types of configuration, the NetScaler appliance allocates a NAT IP address and a block of ports to a subscriber. The minimum port block size for deterministic LSN configuration and dynamic LSN configuration with port block has been reduced from 512 ports to 256. This reduction of the minimum port block doubles the maximum number of subscribers for a NAT IP address in an LSN configuration. It also reduces the number of unused ports assigned to subscribers who do not need more than 256 ports at a time. The port block size parameter can be set while adding or modifying an LSN group as part of an LSN configuration. A value of 256 (default) or a multiple of 256 can be set to the port block size parameter. For instructions on configuring Large Scale NAT, see. For sample LSN configurations, see. [From Build 64.34] [#581285] Configuring DS-Lite Static LSN Maps The NetScaler appliance supports manual creation of DS-Lite LSN mappings, which contain the mapping between the following information: * Subscriber's IP address and port, and IPv6 address of B4 device or component *). For more information, see. [From Build 64.34] [#558406] IP. IP prefix NAT is useful in a deployment of NetScaler appliances and optimization devices (for example, Citrix ByteMobile) for identifying traffic from different client networks, which share the same network address, for meeting different optimization needs for traffic from each client network. For more information, see [From Build 64.34] [#590571] Idle Session Management of Subscriber Sessions in a Telco Network Subscriber-session cleanup on appliance. The idle session management feature provides configurable timers to identify idle sessions, and cleans up these sessions on the basis of the specified action. [From Build 64.34] [#574138]
https://docs.citrix.com/zh-tw/netscaler/11/release-notes/main-releases/whats-new-in-previous-11-0-builds.html
2018-04-19T13:52:13
CC-MAIN-2018-17
1524125936969.10
[]
docs.citrix.com
. Leave your feedback on this topic here If you have questions or need support, please visit the Plesk forum or contact your hosting provider. The comments below are for feedback on the documentation only. No timely answers or help will be provided.
https://docs.plesk.com/en-US/onyx/cli-win/using-command-line-utilities/init_confexe-server-initial-configuration-and-administrators-personal-info.25731/
2018-04-19T13:44:31
CC-MAIN-2018-17
1524125936969.10
[]
docs.plesk.com
Security Advisory Microsoft Security Advisory 2847140 Vulnerability in Internet Explorer Could Allow Remote Code Execution Published: May 03, 2013 | Updated: May 14, 2013 Version: 2.0 General Information Executive Summary. 3, 2013): Advisory published. - V1.1 (May 8, 2013): Added link to Microsoft Fix it solution, "CVE-2013-1347 MSHTML Shim Workaround," that prevents exploitation of this issue. - V2.0 (May 14, 2013): Advisory updated to reflect publication of security bulletin. Built at 2014-04-18T13:49:36Z-07:00
https://docs.microsoft.com/en-us/security-updates/SecurityAdvisories/2013/2847140
2018-04-19T14:27:39
CC-MAIN-2018-17
1524125936969.10
[]
docs.microsoft.com
View Plan Guide Properties You can view the properties of plan guides in SQL Server 2017 by using SQL Server Management Studio or Transact-SQL In This Topic Before you begin: To view the properties of plan guides, using: SQL Server Management Studio Before You Begin Security Permissions The visibility of the metadata in catalog views is limited to securables that either a user owns or on which the user has been granted some permission.. Hints Displays the query hints or query plan to be applied to the Transact-SQL statement. When a query plan is specified as a hint, the XML Showplan output for the plan is displayed. Is disabled Displays the status of the plan guide. Possible values are True and False. Name Displays the name of the plan guide. Parameters When the scope type is SQL or TEMPLATE, displays the name and data type of all parameters that are embedded in the Transact-SQL statement. Scope batch Displays the batch text in which the Transact-SQL statement appears. Scope object name When the scope type is OBJECT, displays the name of the Transact-SQL stored procedure, user-defined scalar function, multistatement table-valued function, or DML trigger in which the Transact-SQL statement appears. Scope schema name When the scope type is OBJECT, displays the name of the schema in which the object is contained. Scope type Displays the type of entity in which the Transact-SQL statement appears. This specifies the context for matching the Transact-SQL statement to the plan guide. Possible values are OBJECT, SQL, and TEMPLATE. Statement Displays the Transact-SQL statement against which the plan guide is applied. Click OK. Using Transact-SQL).
https://docs.microsoft.com/en-us/sql/relational-databases/performance/view-plan-guide-properties?view=sql-server-2017
2018-04-19T13:48:39
CC-MAIN-2018-17
1524125936969.10
[array(['../../includes/media/yes.png?view=sql-server-2017', 'yes'], dtype=object) array(['../../includes/media/no.png?view=sql-server-2017', 'no'], dtype=object) array(['../../includes/media/no.png?view=sql-server-2017', 'no'], dtype=object) array(['../../includes/media/no.png?view=sql-server-2017', 'no'], dtype=object) ]
docs.microsoft.com
Resource Type: stage Included in Puppet Enterprise 3.8. A newer version is available; see the version menu above for details. NOTE: This page was generated from the Puppet source code on 2016-01-15 16:31:56 +0100 stage Description A resource type for creating new run stages. Once a stage is available, classes can be assigned to it by declaring them with the resource-like syntax and using the stage metaparameter. Note that new stages are not useful unless you also declare their order in relation to the default main stage. A complete run stage example: stage { 'pre': before => Stage['main'], } class { 'apt-updates': stage => 'pre', } Individual resources cannot be assigned to run stages; you can only set stages for classes. Attributes stage { 'resource title': name => # (namevar) The name of the stage. Use this as the value for # ...plus any applicable metaparameters. } name (Namevar: If omitted, this attribute’s value defaults to the resource’s title.) The name of the stage. Use this as the value for the stage metaparameter when assigning classes to this stage. (↑ Back to stage attributes) NOTE: This page was generated from the Puppet source code on 2016-01-15 16:31:56 +0100
https://docs.puppet.com/puppet/3.8/types/stage.html
2018-04-19T14:00:22
CC-MAIN-2018-17
1524125936969.10
[]
docs.puppet.com
Windows Storage Server Overview Applies To: Windows Storage Server 2012, Windows Storage Server 2012 R2, Windows Server 2012 R2, Windows Server 2012 This topic describes Windows Storage Server including supported roles and features, practical applications, the most significant new and updated functionality in Windows Storage Server 2012 R2 and Windows Storage Server 2012, hardware and software requirements, and application compatibility. Did you mean… File and Storage Services Overview Windows Storage Server 2008 R2 Product description. In addition, iSCSI Target Server enables you to offer customers block-level storage services, and it operates with a wide range of iSCSI initiators. iSCSI Target Server is included with all editions of Windows Server 2012 R2 and Windows Server 2012 including Windows Storage Server editions. Windows Storage Server editions Storage appliances are available with the following editions of Windows Storage Server 2012 R2 and Windows Storage Server 2012: Windows Storage Server 2012 R2 Workgroup Windows Storage Server 2012 R2 Standard Windows Storage Server 2012 Workgroup Windows Storage Server 2012 Standard The following roles and features are supported in each edition of Windows Storage Server. Note When configuring Windows Storage Server for failover clustering, we recommend that you have separate network interfaces for cluster communications and public communications. When configuring Windows Storage Server for failover clustering with iSCSI Software Target, you can take advantage of multiple network interfaces by using Microsoft Multipath I/O (MPIO) or Load Balance and Failover (LBFO) to provide load balancing and path redundancy. Hardware features supported in Windows Storage Server The following hardware features are available in each edition of Storage Server. Practical applications This section discusses the following practical applications for new and enhanced capabilities of Windows Storage Server 2012: Reduce storage costs, and increase cost efficiency for file storage Use iSCSI storage devices to serve remote disks as though they were local Offload printing to your storage server Storage management for complex applications Improved scalability and performance for branch offices New options for storage through SMB 3.0 Reduce storage costs, and increase cost efficiency for file storage Use Storage Spaces to provide cost-effective, highly available and scalable storage using industry-standard disks In Windows Storage Server, you can use Storage Spaces to provide enterprise-class storage using industry standard SAS or SATA disks, either internally or in JBOD enclosures. This provides a high level of performance and availability without the added cost of Fibre Channel components and RAID adapters. For details, see Storage Spaces Overview. Store your Hyper-V virtual machines and Microsoft SQL Server databases on SMB file shares In Windows Storage Server you can leverage the high-performance and high-availability features of SMB 3.0 for application-based file shares for SQL Server and Hyper-V. SMB Direct and SMB Multichannel on a file server hosting VDX drivers for a Hyper-V cluster approach the performance of local storage on Hyper-V guest operating systems. And you get the resiliency of failover and cluster-aware updating to maintain service during planned updates or failures. For details, see Server Message Block Overview. Use DFS Namespaces and DFS Replication to replicate folders across multiple servers and sites Enhancements to DFS Namespaces and DFS Replication in the File and Storage Services role improve performance and efficiency and reduce administrative overhead when replicating folders across multiple servers and sites. For details, see DFS Namespaces and DFS Replication Overview. Use BranchCache with Server Message Block (SMB) to optimize performance over the WAN In hosted branch caching if identical content exists in a file – or across many files on the content server or hosted cache server – BranchCache stores only one instance of the content, reducing storage costs. In addition, client computers at office locations download only one instance of duplicate content, saving additional wide area network (WAN) bandwidth. Windows Server 2012 streamlines deployment of BranchCache and introduces significant improvements in scalability, security, performance, and manageability. For more information, see BranchCache Overview. Provide highly available storage through clustering You can provide continuous availability, with transparent server-side failover, for applications deployed in Network File System (NFS) version 3 or NFS version 2. For continuous availability in heterogeneous environments, you can deploy iSCSI target servers in failover clusters. For more information, see Failover Clustering Overview. Use NFS as backend storage for your VMware environment Enhancements to NFS improve the experience of running your VMware ESX and VMware ESXi virtual machines from file-based storage. You can deploy NFS servers in a failover cluster for continuous availability; improvements in failover clustering make the NFS server fail over much faster than in earlier versions of Windows Server. Starting in Windows Server 2012, both the NFS server and the NFS client run on top of a new, scalable, high-performance RPC-XDR runtime infrastructure. For more information, see Network File System Overview. Store more data in less space Use Data Deduplication to store more data in less space. The goal of data duplication is to save disk space by segmenting files into small, variable-size chunks, identifying duplicate chunks, and maintaining a single copy of each chunk. When integrated with BranchCache, Data Deduplication provides faster download times and reduced bandwidth consumption over a WAN. For more information, see Data Deduplication Overview. Use iSCSI storage devices to serve remote disks as though they were local The iSCSI Target Server role service lets you use Server Manager to quickly create and share iSCSI LUNs over the network. Virtual hard disk (VHDx or VHD) files appear as locally attached hard drives. Application servers running just about any workload can connect to the target using an iSCSI initiator. The following are a few of many practical uses for iSCSI Target Server. Consolidate storage for multiple application servers with diverse storage requirements Applications running just about any workload can connect to the target using the iSCSI initiator. Interoperability with non-Windows applications makes this particularly useful. Set up an iSCSI SAN for a Windows Server-based failover cluster You can use Windows Storage Server as inexpensive SAN storage for a failover cluster using the iSCSI protocol instead of SMB if your applications don’t support SMB fail over. Create inexpensive development and test environments Using iSCSI Target Server, you can create inexpensive development and test environments for complex scenarios such as clustering, live migration, SAN transfer, and Storage Manager. For example, you can set up an iSCSI SAN for a clustered SQL Server instance on a single computer. For more information, see the blog entry Six Uses for the Microsoft iSCSI Software Target and iSCSI Target Server Overview. Offload printing to your storage server Windows Storage Server 2012 R2 and Windows Storage Server 2012 includes the Printer Server role. This is especially useful for branch offices, which can now manage printing on the same server that provides infrastructure services and file and storage services, including iSCSI SAN management. Storage management for complex applications Migrate existing data for complex applications, such as hierarchical storage management (HSM) and medical applications, to new storage. Programmatically control highly sensitive files for LOB applications Take advantage of improvements to File Classification Infrastructure (FCI) features of File Server Resource Manager (FSRM) to programmatically control highly sensitive files for your line-of-business (LOB) applications. With FCI, you can classify files by defining automated rules, and then programmatically perform tasks on those files based on their classification. For example, to access high-business-impact (HBI) data, you might require that a user be a full-time employee, obtain access from a managed device, and log on with a smart card. The Windows Server 2008 R2 operating system introduced the Microsoft Data Classification Toolkit, which reduced administrative overhead by defining a basic set of classification properties related to common compliance requirements. However, the classifications were local to each file server, which meant the administrator had to ensure that the same classification properties were used on all file servers. Starting in Windows Server 2012, the classification properties are managed centrally in Active Directory Domain Services (AD DS), making the classification properties standard on all file servers. For more information, see the blog entry Protect everything: using FCI to protect files of any type with Windows Server 2012. Improved scalability and performance for branch offices Windows Storage Server can provide comprehensive infrastructure services for the branch office. Use DNS and WINS for server address identification. Use your storage server as your primary DHCP server or as a DHCP failover target if the primary DHCP server goes offline. New options for storage through SMB 3.0 The introduction of the Server Message Block (SMB) 3.0 protocol opens new storage options and capabilities, and simplifies storage management in a heterogeneous environment. These include the following capabilities. For more information, see What's New in SMB in Windows Server. Highly available shared data storage for SQL Server databases and Hyper-V workloads Scale-Out File Server, new in Windows Server 2012, lets you store server application data, such as Hyper-V virtual machine files, on file shares, and obtain a similar level of reliability, availability, manageability, and high performance that you would expect from a storage area network. All file shares are online on all nodes simultaneously. This is also known as an active-active cluster. For more information, see Scale-Out File Server for Application Data Overview. Direct access to your Fibre Channel infrastructure Hyper-V virtual Fibre Channel provides direct access to Fibre Channel storage arrays by using Fibre Channel ports in the Hyper-V guest operating system. This enables you to virtualize workloads that require direct access to Fibre Channel storage and to cluster guest operating systems over Fibre Channel. No application downtime for planned maintenance or unexpected failures With SMB Transparent Failover, file shares move transparently between file server cluster nodes with no service interruption on the SMB client. Built-in data encryption for secure wire transfers SMB 3.0 performs transport-level encryption, and setting it up is as simple as selecting a single check box. You can configure a single share or an entire file server for SMB 3.0 encryption. Clients running earlier versions of SMB will not even see the shares. SMB file share backup using the same backup solution used for local storage Volume Shadow Copy Service (VSS) now supports backup of remote file storage. Any third-party backup software that uses VSS can back up files, virtual machines, and databases stored on SMB file shares. New and changed features in Windows Storage Server 2012 R2 Windows Storage Server 2012 R2 makes significant improvements throughout File and Storage Services, including to protocols, data access and replication, continuous availability, scalability, deployment and management. Note This table summarizes some of the most significant new and updated features of Windows Storage Server 2012 R2. For a complete list of new features available through File and Storage Services in Windows Server 2012 R2, see File and Storage Services Overview. New and changed features in Windows Storage Server 2012 Improved processes and added capabilities throughout Storage and File Services in Windows Storage Server 2012 make significant improvements to security, performance, management, and scalability. Note This table summarizes some of the most significant new and updated features of Windows Storage Server 2012. For a complete list of new features available through File and Storage Services in Windows Server 2012, see File and Storage Services Overview and the Storage: Windows Server 2012 white paper. Hardware and software requirements An appliance based on Windows Storage Server must meet basic system and hardware requirements of Windows Server. For Windows Storage Server 2012 R2, see System Requirements and Installation Information for Windows Server 2012 R2. For Windows Storage Server 2012, see Installing Windows Server 2012 and Windows Storage Server Getting Started. For special requirements for individual storage features, see the feature overviews under File and Storage Services Overview. Application compatibility in Windows Storage Server Use the following guidelines to determine your applications’ compatibility with Windows Storage Server. For more information, see the Windows 8 and Windows Server 2012 Compatibility Cookbook. Windows Storage Server and Windows Server are built on the same code base. Applications certified for Windows Server are expected to have the same application compatibility profile on Windows Storage Server. Windows Storage Server and Windows Server have the same application frameworks, services, libraries, and tools to support running the full breadth of Windows-compatible applications. Applications that rely on roles that are removed from Windows Storage Server 2012 - such as Fax Server, Active Directory Domain Services (AD DS), and Remote Desktop Services (RDS) - will not be able to leverage those roles. Note All versions of Windows Storage Server can be added to an Active Directory domain, but the server cannot function as a domain controller. Ultimately, Independent Software Vendors (ISVs) decide which versions and editions of the Windows operating system their products support. Customers should review ISV compatibility and support information before purchasing software for use on a Windows Storage Server appliance. Windows Storage Server customers should review the End User License Agreement (EULA) to see the types of applications that are permitted for installation. After installation, you can find the license agreement in %SystemDrive%\Windows\System32\license.rtf. Or, on the Start page, open Run, and enter winver. You can run antivirus software on Windows Storage Server, just like on Windows Server. See also For additional related information, see the following resources.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-storage-solutions/jj643303(v=ws.11)
2018-04-19T14:21:10
CC-MAIN-2018-17
1524125936969.10
[]
docs.microsoft.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. in a specified hosted zone, send a GET request to the /Route 53 API version/trafficpolicyinstance resource and include the ID of the hosted zone. Amazon Route 53 returns a maximum of 100 items in each response. If you have a lot of traffic policy instances, you can use the MaxItems parameter to list them in groups of up to 100. The response includes four values that help you navigate from one group of MaxItems traffic policy instances to the next: If the value of IsTruncated in the response is true, there are more traffic policy instances associated with the current AWS account. If IsTruncated is false, this response includes the last traffic policy instance that is associated with the current account. The value that you specified for the MaxItems parameter in the request that produced the current response. If IsTruncated is true, these two values in the response represent the first traffic policy instance in the next group of MaxItems traffic policy instances. To list more traffic policy instances, make another call to ListTrafficPolicyInstancesByHostedZone,HostedZone service method. .NET Framework: Supported in: 4.5, 4.0, 3.5
https://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MRoute53Route53ListTrafficPolicyInstancesByHostedZoneListTrafficPolicyInstancesByHostedZoneRequestNET45.html
2018-04-19T13:50:20
CC-MAIN-2018-17
1524125936969.10
[]
docs.aws.amazon.com
JLog/ construct From Joomla! Documentation < API16:JLogRevision as of 22 Constructor [<! removed edit link to red link >] <! removed transcluded page call, red link never existed > Syntax __construct( />
https://docs.joomla.org/index.php?title=API16:JLog/_construct&oldid=99359
2015-08-28T03:07:04
CC-MAIN-2015-35
1440644060173.6
[]
docs.joomla.org
Changes related to "Configuring Xdebug for PHP development/Windows" ← Configuring Xdebug for PHP development/Windows This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. No changes during the given period matching these criteria.
https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&days=3&from=&target=Configuring_Xdebug_for_PHP_development%2FWindows
2015-08-28T03:26:05
CC-MAIN-2015-35
1440644060173.6
[]
docs.joomla.org
How to Share a Folder or File The following steps give you information on how to share a folder or file in the GroupDocs Dashboard: - Click adjacent to the preferred folder or file (for example, New contract.pdf), and then click Share. Result: The Share window pops up. - In this window, enter the email address to which you want to share the selected folder or file, and then click Add. Result: The selected folder or file (for example, New contract.pdf) is shared to the specified email IDs/users.
http://groupdocs.com/docs/display/GrDash/How+to+Share+a+Folder+or+File
2015-08-28T02:10:13
CC-MAIN-2015-35
1440644060173.6
[]
groupdocs.com
Installation and Configuration Guide Local Navigation Postinstallation tasks - Test the BlackBerry Enterprise Server installation - Install the BlackBerry database notification system - Best practice: Running the BlackBerry Enterprise Server - Configure the BlackBerry Administration Service instances in a pool to communicate across network subnets - Log in to the BlackBerry Administration Service for the first time - There is a problem with this website's security certificate - Configuring a computer for monitoring - Configuring communication with distributed components - Restrict database permissions for the BlackBerry Attachment Service - Configuring minimum Microsoft SQL Server permissions for the Windows account - Configure the BlackBerry Mail Store Service to use a local system account - Add database credentials for the local system account - Changing the BlackBerry Configuration Database - Provisioning the BlackBerry Collaboration Service as a trusted application Previous topic: Install a standby BlackBerry Enterprise Server Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/admin/deliverables/25805/Postinstallation_tasks_280312_11.jsp
2015-08-28T02:22:57
CC-MAIN-2015-35
1440644060173.6
[]
docs.blackberry.com
Contents There. There are both procedural and object-oriented interfaces for the FITPACK library. The interp1d class in scipy.interpolate is a convenient method to create a function based on fixed data points which can be evaluated anywhere within the domain defined by the given data using linear interpolation. An instance of this class is created by passing the 1-d vectors comprising the data. The instance of this class defines a __call__ method and can therefore by treated like a function which interpolates between known data values to obtain unknown values (it also has a docstring for help). Behavior at the boundary can be specified at instantiation time. The following example demonstrates, png, pdf) Spline interpolation requires two essential steps: (1) a spline representation of the curve is computed, and (2) the spline is evaluated at the desired points. In order to find the spline representation, there are two different ways to represent a curve and obtain (smoothing) spline coefficients: directly and parametrically. The direct method finds the spline representation of a curve in a two- dimensional plane using the function splrep. The first two arguments are the only ones required, and these provide the and components of the curve. The normal output is a 3-tuple, , containing the knot-points, , the coefficients and the order of the spline. The default spline order is cubic, but this can be changed with the input keyword, k. For curves in -dimensional space the function splprep allows defining the curve parametrically. For this function only 1 input argument is required. This input is a list of -arrays representing the curve in -dimensional space. The length of each array is the number of curve points, and each array provides one component of the -dimensional data point. The parameter variable is given with the keword argument, u, which defaults to an equally-spaced monotonic sequence between and . The default output consists of two objects: a 3-tuple, , containing the spline representation and the parameter variable The keyword argument, s , is used to specify the amount of smoothing to perform during the spline fit. The default value of is where is the number of data-points being fit. Therefore, if no smoothing is desired a value of ( ) with 8 or more knots, the roots of the spline can be estimated ( sproot). These functions are demonstrated in the example that follows. >>> import numpy as np >>> import matplotlib.pyplot as plt >>> from scipy import interpolate Cubic-spline >>> x = np.arange(0,2*np.pi+np.pi/4,2*np.pi/8) >>> y = np.sin(x) >>> tck = interpolate.splrep(x,y,s=0) >>> xnew = np.arange(0,2*np.pi,np.pi/50) >>> ynew = interpolate.splev(xnew,tck,der=0) >>> plt.figure() >>> plt.plot(x,y,'x',xnew,ynew,xnew,np.sin(xnew),x,y,'b') >>> plt.legend(['Linear','Cubic Spline', 'True']) >>> plt.axis([-0.05,6.33,-1.05,1.05]) >>> plt.title('Cubic-spline interpolation') >>> plt.show() (Source code, png, pdf) Derivative of spline >>> yder = interpolate.splev(xnew,tck,der=1) >>> plt.figure() >>> plt.plot(xnew,yder,xnew,np.cos(xnew),'--') >>> plt.legend(['Cubic Spline', 'True']) >>> plt.axis([-0.05,6.33,-1.05,1.05]) >>> plt.title('Derivative estimation from spline') >>> plt.show() Integral of spline >>> def integ(x,tck,constant=-1): >>> x = np.atleast_1d(x) >>> out = np.zeros(x.shape, dtype=x.dtype) >>> for n in xrange(len(out)): >>> out[n] = interpolate.splint(0,x[n],tck) >>> out += constant >>> return out >>> >>> yint = integ(xnew,tck) >>> plt.figure() >>> plt.plot(xnew,yint,xnew,-np.cos(xnew),'--') >>> plt.legend(['Cubic Spline', 'True']) >>> plt.axis([-0.05,6.33,-1.05,1.05]) >>> plt.title('Integral estimation from spline') >>> plt.show() Roots of spline >>> print interpolate.sproot(tck) [ 0. 3.1416] Parametric spline >>> t = np.arange(0,1.1,.1) >>> x = np.sin(2*np.pi*t) >>> y = np.cos(2*np.pi*t) >>> tck,u = interpolate.splprep([x,y],s=0) >>> unew = np.arange(0,1.01,0.01) >>> out = interpolate.splev(unew,tck) >>> plt.figure() >>> plt.plot(x,y,'x',out[0],out[1],np.sin(2*np.pi*unew),np.cos(2*np.pi*unew),x,y,'b') >>> plt.legend(['Linear','Cubic Spline', 'True']) >>> plt.axis([-1.05,1.05,-1.05,1.05]) >>> plt.title('Spline of parametrically-defined curve') >>> plt.show() The spline-fitting capabilities described above are also available via an objected-oriented interface. The one dimensional splines are objects of the UnivariateSpline class, and are created with the methodsivarateSpline is the other subclass of UnivarateSpline. It allows the user to specify the number and location of internal knots as() For (smooth) spline-fitting to a two dimensional surface, the function bisplrep is available. This function takes as required inputs the 1-D arrays x, y, and z which represent points on the surface The default output is a list whose entries represent respectively, the components of the knot positions, the coefficients of the spline, and the order of the spline in each coordinate. It is convenient to hold this list in a single object, tck, so that it can be passed easily to the function bisplev. The keyword, s , can be used to change the amount of smoothing performed on the data while determining the appropriate spline. The default value is where is the number of data points in the x, y, and z vectors. As a result, if no smoothing is desired, then should be passed to bisplrep . To evaluate the two-dimensional spline and it’s partial derivatives (up to the order of the spline), the function bisplev is required. This function takes as the first two arguments two 1-D arrays whose cross-product specifies the domain over which to evaluate the spline. The third argument is the tck list returned from bisplrep. If desired, the fourth and fifth arguments provide the orders of the partial derivative in the and direction respectively. It is important to note that two dimensional interpolation should not be used to find the spline representation of images. The algorithm used is not amenable to large numbers of input points. The signal processing toolbox contains more appropriate algorithms for finding the spline representation of an image. The two dimensional interpolation commands are intended for use when interpolating a two dimensional function as shown in the example that follows. This example uses the numpy.mgrid command in SciPy which is useful for defining a “mesh-grid “in many dimensions. (See also the numpy.ogrid command if the full-mesh is not needed). The number of output arguments and the number of dimensions of each argument is determined by the number of indexing objects passed in numpy.mgrid. >>> import numpy as np >>> from scipy import interpolate >>> import matplotlib.pyplot as plt Define function over sparse 20x20 grid >>> x,y = np.mgrid[-1:1:20j,-1:1:20j] >>> z = (x+y)*np.exp(-6.0*(x*x+y*y)) >>> plt.figure() >>> plt.pcolor(x,y,z) >>> plt.colorbar() >>> plt.title("Sparsely sampled function.") >>> plt.show() (Source code, png, pdf) Interpolate function over new 70x70 grid >>> xnew,ynew = np.mgrid[-1:1:70j,-1:1:70j] >>> tck = interpolate.bisplrep(x,y,z,s=0) >>> znew = interpolate.bisplev(xnew[:,0],ynew[0,:],tck) >>> plt.figure() >>> plt.pcolor(xnew,ynew,znew) >>> plt.colorbar() >>> plt.title("Interpolated function.") >>> plt.show(). Radial basis functions can be used for smoothing/interpolating scattered data in n-dimensions, but should be used with caution for extrapolation outside of the observed data range. This example compares the usage of the Rbf and UnivariateSpline classes from the scipy.interpolate module. >>> import numpy as np >>> from scipy.interpolate import Rbf, InterpolatedUnivariateSpline >>> import matplotlib.pyplot as plt >>> # setup data >>> x = np.linspace(0, 10, 9) >>> y = np.sin(x) >>> xi = np.linspace(0, 10, 101) >>> # use fitpack2 method >>> ius = InterpolatedUnivariateSpline(x, y) >>> yi = ius(xi) >>> plt.subplot(2, 1, 1) >>> plt.plot(x, y, 'bo') >>> plt.plot(xi, yi, 'g') >>> plt.plot(xi, np.sin(xi), 'r') >>> plt.title('Interpolation using univariate spline') >>> # use RBF method >>> rbf = Rbf(x, y) >>> fi = rbf(xi) >>> plt.subplot(2, 1, 2) >>> plt.plot(x, y, 'bo') >>> plt.plot(xi, fi, 'g') >>> plt.plot(xi, np.sin(xi), 'r') >>> plt.title('Interpolation using RBF - multiquadrics') >>> plt.show() (Source code, png, pdf) This example shows how to interpolate scattered 2d data. >>> import numpy as np >>> from scipy.interpolate import Rbf >>> import matplotlib.pyplot as plt >>> from matplotlib import cm >>> # 2-d tests - setup scattered data >>> x = np.random.rand(100)*4.0-2.0 >>> y = np.random.rand(100)*4.0-2.0 >>> z = x*np.exp(-x**2-y**2) >>> ti = np.linspace(-2.0, 2.0, 100) >>> XI, YI = np.meshgrid(ti, ti) >>> # use RBF >>> rbf = Rbf(x, y, z, epsilon=2) >>> ZI = rbf(XI, YI) >>> # plot the result >>> n = plt.normalize(-2., 2.) >>> plt.subplot(1, 1, 1) >>> plt.pcolor(XI, YI, ZI, cmap=cm.jet) >>> plt.scatter(x, y, 100, z, cmap=cm.jet) >>> plt.title('RBF interpolation - multiquadrics') >>> plt.xlim(-2, 2) >>> plt.ylim(-2, 2) >>> plt.colorbar() (Source code, png, pdf)
http://docs.scipy.org/doc/scipy-0.8.x/reference/tutorial/interpolate.html
2015-08-28T02:15:34
CC-MAIN-2015-35
1440644060173.6
[]
docs.scipy.org
Once you're in the patient's profile, select "Edit Profile". Profile In the Profile tab, you can keep track of the patient's information, including their Medicare number and effective dates for future reference. Current Plan You can update the patient's Medicare plan information in the Current Plan tab: Amplicare can detect a patient's current plan, but make sure to click Confirm Plan when you confirm the plan was identified appropriately. Select Choose Plan to manually enter the patient's plan if there wasn't one detected. Learn more about how to confirm a patient's current plan. Subsidy You can update the patient's subsidy information in the Subsidy tab: Similar to current plan detection, Amplicare can detect a patient's subsidy, so from here you can Confirm Subsidy or manually select the appropriate option. Learn more about how to confirm a patient's subsidy. What's Next? Not sure how to confirm a patient's current plan or subsidy? Learn how!
https://docs.amplicare.com/en/articles/440954-editing-and-confirming-patient-information
2020-11-23T21:30:14
CC-MAIN-2020-50
1606141168074.3
[array(['https://downloads.intercomcdn.com/i/o/123134240/66cc491ab57e673b4adc09c7/Screen+Shot+2019-05-22+at+6.26.10+PM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/123133878/224d962743cf92884e9679e9/Screen+Shot+2019-05-22+at+6.15.28+PM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/123134223/d22f34242b4ace15f19496b1/Screen+Shot+2019-05-22+at+6.25.52+PM.png', None], dtype=object) ]
docs.amplicare.com
# Bash Runner Bash is an amazing piece of software and a great skill to have in your toolbelt. Being able to read and write files and glue things together quickly is an important part of software development. Commandeer offers a convenient GUI on top of Bash so you can develop your Bash project and run it. # Choose File All you need to get up and running is to select your Bash file to run. There is also the option to create a new file. Once the file is selected, you're ready to run some Bash! # Run Bash, Run! Clicking run will run your currently selected Bash script. You'll see the terminal output on the right-hand side. # Edit Files Once you have your Bash file selected, you'll see the list of your files on the side. Clicking on the file in the side navigation opens it in the code pane. Feel free to navigate to different files in your project and edit them. Once you edited the file, click the save button to save your edits. # Share Code Sharing your Bash code is easy. Just click the share button from the code panel, choose the channel to send your code to, and click send. You can share your code over email, Slack, or SMS. # Customize Settings Commandeer Bash runner is fully customizable to suit your needs. Simply open the settings panel on the runner page or expand the Bash panel in your Commandeer settings to adjust any settings you like. # Copy Command If you like to run the same file in your terminal, you can copy the command from Commandeer. Just click the Copy to Clipboard button on the terminal command panel. # Conclusion Bash runs everywhere, and it's great for putting together some infrastructure scripts. Commandeer takes it on another level by providing a better way to run and develop your Bash scripts from a graphical interface. It comes preconfigured with a common set of defaults allowing for some additional customizations. All of which makes Commandeer a great tool for working with you Bash scripts. ← What is Bash Fractal →
https://docs.getcommandeer.com/docs/Bash/bash-runner
2020-11-23T21:25:05
CC-MAIN-2020-50
1606141168074.3
[array(['https://images.commandeer.be/_uploads/bash-choose-file.png', 'Choosing a bash file Choosing a bash file'], dtype=object) array(['https://images.commandeer.be/_uploads/bash-run.png', 'Running some Bash in Commandeer Running some Bash in Commandeer'], dtype=object) array(['https://images.commandeer.be/_uploads/bash-code.png', 'Editing a bash file Editing a bash file'], dtype=object) array(['https://images.commandeer.be/_uploads/bash-share.png', 'Sharing some Bash code Sharing some Bash code'], dtype=object) array(['https://images.commandeer.be/_uploads/bash-settings.png', 'Bash settings Bash settings'], dtype=object) ]
docs.getcommandeer.com
After creating your Data Services Server artifacts, you can package them and export them into a Composite Application Archive (CAR) file. - Go to the developer studio, right-click the Project Explorer, choose New -> Project, and then select Composite Application Project. - The composite application is created. Righ-click it and then click Export Project as Deployable Archive. - Specify the location for the CAR file and the artifacts you want to include in it. Overview Content Tools Activity
https://docs.wso2.com/display/DSS351/Packaging+your+Artifacts+into+a+Deployable+Archive
2020-11-23T22:01:26
CC-MAIN-2020-50
1606141168074.3
[]
docs.wso2.com
@Generated(value="OracleSDKGenerator", comments="API Version: 20190101") public final class CreateModelDetails extends Object Parameters needed to create a new model. Models are mathematical representations of the relationships between data. Models are represented by their associated metadata and artifact. Note: Objects should always be created or deserialized using the CreateModelDetails.Builder. This model distinguishes fields that are null because they are unset from fields that are explicitly set to null. This is done in the setter methods of the CreateMode","projectId","displayName","description","freeformTags","definedTags"}) @Deprecated public CreateModelDetails(String compartmentId, String projectId, String displayName, String description, Map<String,String> freeformTags, Map<String,Map<String,Object>> definedTags) public static CreateModelDetails.Builder builder() Create a new builder. public String getCompartmentId() public String getProjectId() public String getDisplayName() A user-friendly display name for the resource. Does not have to be unique, and can be modified. Avoid entering confidential information. Example: My Model public String getDescription() A short blurb describing the model. Set<String> get__explicitlySet__() public boolean equals(Object o) equalsin class Object public int hashCode() hashCodein class Object public String toString() toStringin class Object
https://docs.cloud.oracle.com/en-us/iaas/tools/java/1.17.5/com/oracle/bmc/datascience/model/CreateModelDetails.html
2020-11-23T22:29:17
CC-MAIN-2020-50
1606141168074.3
[]
docs.cloud.oracle.com
Statement and Connection Leak Detection This feature allows you to set specific time-outs so that if SQL statements or JDBC connections haven’t been closed by an application (potentially leading to a memory leak) they can be logged and/or closed. By default these values are set to 0 meaning this detection feature is turned off. Configuring Leak Detection using the admin console Click on the name of the JDBC connections pool Select the Advanced tab Scroll down to Connection Settings Set the Connection Leak Timeout and Statement Leak Timeout value in seconds Configuring Leak Detection using administration commands You also can set the time-out values using the following asadmin commands: asadmin> set resources.jdbc-connection-pool.test-pool.statement-leak-timeout-in-seconds=5 asadmin> set resources.jdbc-connection-pool.test-pool.connection-leak-timeout-in-seconds=5 You can turn on reclaiming of the leaking resources with the following commands: asadmin> set resources.jdbc-connection-pool.DerbyPool.connection-leak-reclaim=true asadmin> setresources.jdbc-connection-pool.DerbyPool.statement-leak-reclaim=true Once these values are set, if connection or statement leaks are detected, you will see messages similar to the example below in the application log: WARNING: A potential connection leak detected for connection pool test-pool. The stack trace of the thread is provided below: ...
https://docs.payara.fish/community/docs/5.201/documentation/user-guides/connection-pools/leak-detection.html
2020-11-23T21:57:00
CC-MAIN-2020-50
1606141168074.3
[array(['../../../_images/connection-pools/connection_pools_5.png', 'Leak Detection setting in Admin console'], dtype=object) ]
docs.payara.fish
To support SSO on an instant-cloned VM in a Horizon 7 Linux desktop environment, configure Samba on the master Linux VM on an Ubuntu system. Use the following steps to use Samba to offline domain join an instant-cloned Linux desktop to Active Directory on an Ubuntu system. Procedure - On your master Linux VM, install the winbind and samba packages, including any other dependent libraries such as smbfs and smbclient. - Install the Samba tdb-tools package using the following command. sudo apt-get install tdb-tools - Install Horizon 7 Agent for Linux. - Edit the /etc/samba/smb.conf configuration file so that it has content similar to the following example. [global] security = ads realm = LAB.EXAMPLE.COM workgroup = LAB idmap uid = 10000-20000 idmap gid = 10000-20000 winbind enum users = yes winbind enum group = yes template homedir = /home/%D/%U template shell = /bin/bash client use spnego = yes client ntlmv2 auth = yes encrypt passwords = yes winbind use default domain = yes restrict anonymous = 2 - Edit the /etc/krb5.conf configuration file so that it has content similar to the following example.. [libdefaults] default_realm = EXAMPLE.COM krb4_config = /etc/krb.conf krb4_realms = /etc/krb.realms kdc_timesync = 1 ccache_type = 4 forwardable = true proxiable = true [realms] YOUR-DOMAIN = { kdc = 10.111.222.33 } [domain_realm] your-domain = EXAMPLE.COM .your-domain = EXAMPLE.COM - Edit the /etc/nsswitch.conf configuration file, as shown in the following example. passwd: files winbind group: files winbind shadow: files winbind gshadow: files - Verify that the host name is correct and that the system date and time are synchronized with your DNS system. - Set the following option in the /etc/vmware/viewagent-custom.conf file to inform the Horizon Agent for Linux that the Linux VM is domain joined using the Samba method. OfflineJoinDomain=samba - Reboot your system and log back in.
https://docs.vmware.com/en/VMware-Horizon-7/7.9/linux-desktops-setup/GUID-986977D4-87CE-459C-BC2A-55C0B6EA09AC.html
2020-11-23T23:03:06
CC-MAIN-2020-50
1606141168074.3
[]
docs.vmware.com
The VMware vRealize Operations for Horizon Administration Guide describes how to monitor VMware Horizon® environments through VMware vRealize® Operations Manager™. Intended Audience This information is intended for users who monitor the performance of objects in Horizon environments in vRealize Operations Manager and administrators who are responsible for maintaining and troubleshooting a vRealize Operations for Horizon deployment. Terminology For definitions of terms as they are used in this document, see the VMware Glossary.
https://docs.vmware.com/en/VMware-vRealize-Operations-for-Horizon/6.7/com.vmware.vrealize.horizon.admin.doc/GUID-AE6FB38D-E1D6-4C5A-94D9-CE9F61D65BE0.html
2020-11-23T23:13:46
CC-MAIN-2020-50
1606141168074.3
[]
docs.vmware.com
Add a vCenter Server to a Data Center before using that vCenter Server to create a private cloud environment. Prerequisites Ensure that you have the vCenter Server fully qualified domain name, user name, and password. Procedure - On the left pane, click Datacenters. - To add a vcenter, on the Datacenters page, click + Add vCenter. - Enter the vCenter Name and vCenter FQDN. - Click Select vCenter Credentials. - You can either search for an existing vCenter credentials or add new credentials using the + sign . - Click the + sign on the right corner to assign a password for the selected vCenter credential. - Enter the Password details and click Add. - Enter the vCenter User Name for the vCenter server.You should have the required vCenter privileges. - Select the vCenter Type. vCenter Type selection is currently used only for classification; the setting has no associated product functionality. -. - Click Validate and Save the changes. - To import vCenter Servers, click Import. - Select the .CSV file and click Import. You can upload only one file at a time for a bulk import of VCs in a selected datacenter. - Click Submit. What to do next Go to the Requests page to see the status of this request. When the status is Completed, you can use this vCenter Server to create environments. For more information on vCenter user privileges, see .
https://docs.vmware.com/en/VMware-vRealize-Suite-Lifecycle-Manager/8.1/com.vmware.vrsuite.lcm.8.1.doc/GUID-F508B2AE-C554-4EBF-964B-2E365D9C3640.html
2020-11-23T21:33:40
CC-MAIN-2020-50
1606141168074.3
[]
docs.vmware.com
Automated Deployment Before you launch the automated deployment, please review the architecture, configuration, and other considerations discussed in this guide. Follow the step-by-step instructions in this section to configure and deploy ClassicLink Mirror into your account. Time to deploy: Approximately five (5) minutes Prerequisites Enable AWS CloudTrail ClassicLink Mirror requires AWS CloudTrail in order to use API calls to generate a CloudWatch event. Therefore, you must turn on AWS CloudTrail before deploying this solution. For detailed instructions, refer to the AWS CloudTrail documentation. Configure a Test Environment It is best practice to test an automated solution before deploying it to production resources. This solution includes an AWS CloudFormation template that creates a simple EC2-Classic stack for testing purposes (see Testing). Alternatively, you can launch some EC2-Classic instances for a test deployment, and then configure and modify their security groups to verify the ClassicLink Mirror functionality. What We'll Cover The procedure for deploying this architecture on AWS consists of the following steps. For detailed instructions, follow the links for each step. Launch the AWS CloudFormation template into your AWS account. Enter values for the required parameter: Stack Name Create the VPC to mirror to, and enable ClassicLink on that VPC. Step 3. Tag Your EC2-Classic Security Groups Apply the custom tag to applicable security groups in EC2 Classic. Step 1. Launch the Stack This automated AWS CloudFormation template deploys ClassicLink Mirror classiclink-mirrorAWS CloudFormation template. You can also download the template as a starting point for your own implementation. The template is launched in the US East (N. Virginia) Region Region by default. To launch the ClassicLink Mirror in a different AWS Region, use the region selector in the console navigation bar. Note This solution is for EC2-Classic customers and uses the AWS Lambda service. You must launch this solution in an AWS Region that supports both AWS Lambda and EC2-Classic: Asia Pacific (Tokyo) Region, US West (N. California) Region, US East (N. Virginia) Region, and US West (Oregon) Region. On the Select Template page, verify that you selected the correct template and choose Next. On the Specify Details page, assign a name to your ClassicLink Mirror stack. (5) minutes. To quickly test the ClassicLink Mirror AWS Lambda function, you can make a relevant API call (see the appendix), and then check the ClassicLink Mirror log files in CloudWatch Logs to confirm the Lambda function was invoked. Note that ClassicLink Mirror will not make changes to your resources at this point because you have not yet tagged any EC2-Classic security groups to be managed. Step 2. Create a VPC You must create the VPC that you will migrate your EC2-Classic resources to. After you create the VPC, there are no ongoing configuration tasks to complete because ClassicLink Mirror will fully manage it, ensuring that it mirrors your EC2-Classic environment throughout the duration of your migration. Open the Amazon VPC console, make sure you are in the correct AWS Region, and in the left pane, choose Your VPCs. Choose Create VPC and configure your network as necessary. (See Amazon VPC documentation for guidance.) Enable ClassicLink on your new VPC. Select the VPC, right-click, and choose Enable ClassicLink. Note the VPC ID ( vpc-xxxxxxxx) to use in the next step of this deployment. Step 3. Tag Your EC2-Classic Security Groups You must assign tags to each EC2-Classic security group that you want ClassicLink Mirror to manage. Use the following format: Tag name: classicmirror:linkToVPC Tag value: <The VPC ID noted in the previous procedure, e.g., vpc-11112222> Within a few minutes, you will see that the AWS Lambda function was invoked and completed the following actions: created a VPC security group analogous to the EC2-Classic security group that you tagged; copied over its rules; and linked (via ClassicLink) any member EC2 instances to that VPC security group.
https://docs.aws.amazon.com/solutions/latest/classiclink-mirror/deployment.html
2018-10-15T15:06:26
CC-MAIN-2018-43
1539583509326.21
[]
docs.aws.amazon.com
MQL4参考 语言基础 运行式和表达式 Some characters and character sequences are of a special importance. These are so-called operation symbols, for example: + - * / % Symbols of arithmetic operations && || Symbols of logical operations = += *= Characters assignment operators Operation symbols are used in expressions and have sense when appropriate operands are given to them. Punctuation marks are emphasized, as well. These are parentheses, braces, comma, colon, and semicolon. Operation symbols, punctuation marks, and spaces are used to separate language elements from each other. This section contains the description of the following topics:
https://docs.mql4.com/cn/basis/operations
2018-10-15T16:19:28
CC-MAIN-2018-43
1539583509326.21
[]
docs.mql4.com