Posts Tagged ‘BOWI-30’


Here are the scenarios in which the “Max binary file size limit exceeded” error (WIS 30271) occurs:

– When a large Web Intelligence report is exported to CSV format, we get the error: “Max binary file size limit exceeded. The document is too large to be processed by the server.” (Error: WIS 30271)

– In Web Intelligence, when attempting to save a report to Excel or Portable Document Format (PDF), the following error message appears:

“Max binary file size limit exceeded. The document is too large to be processed by the server. Contact your database administrator. (Error: WIS 30271)”.

Cause:

This error message appears because BusinessObjects Enterprise only allows documents to be saved up to the limit set for the Web Intelligence Report Server in the Central Management Console (CMC). The default limit is 50 megabytes for binary file types such as PDF and Excel.

Solution –

To resolve this error, you will need to increase the Maximum Binary File Size value.

1. Open the CMC.

2. Browse to Servers.

3. Click on your Web Intelligence Report Server.

4. On the Properties page, increase the Maximum Binary File Size (default value = 50 MB; maximum value = 65535 MB).

5. Apply the setting and allow the Web Intelligence Report Server to restart.

Increasing these values too high could impact Web Intelligence performance when users attempt to save excessively large files.

Recommendation –

Thresholds are built to protect the system. If there is a valid business reason, then by all means go ahead and adjust the parameters as needed to solve the problem. However, I would use this opportunity to look into the report itself and see whether this is the best way to present/publish the information.

Before increasing the Max Binary File Size, ask your user:

– Do you really need such a large Excel/PDF file?

– Can some of the unnecessary columns be dropped?

– Can a filter be applied to the report to reduce the number of rows?

If none of the above is possible, then go ahead and increase the size.

50% discount promotion code for certification exams taken at US and CA Pearson VUE delivery centers before 9/30/2011.

PROMOTION CODE: 10CERTSS

For a list of exams available at Pearson VUE, please refer to their website at http://www.pearsonvue.com/sap.



Free Sample Questions for SAP Business Objects Web Intelligence (BOWI-30) Exam

Which three output context expressions are valid extended syntax expressions? (Choose three.)

A. Where Section In [State]

B. Where ([State] = “CA”)

C. In Section Where ([State] = “CA”)

D. Where ([State] = “CA” And [Year] = “2006”)

E. In Block Where Section = [State]

Answer: BCD
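For context, here is one way a valid output context expression (option B’s form) might appear in a full formula. This is a minimal illustration only, assuming hypothetical [Revenue] and [State] objects:

=Sum([Revenue]) Where ([State] = "CA")

The Where operator restricts the aggregation to the California rows, wherever the cell is placed in the report.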


Free Sample Questions for SAP Business Objects Web Intelligence (BOWI-30) Exam

In which two situations should you use sub-queries? (Choose two.)

A. When the query filter involves values that are known

B. When the query filter involves values that are not known

C. When the query filter for the report involves a value that will never change over time

D. When the query filter for the report involves a value that will change over time

Answer: BD
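To see why, consider a filter whose comparison value is not known in advance and changes as the data changes, e.g. “customers with above-average revenue”. A hedged sketch of the kind of SQL a sub-query produces (the CUSTOMER table and columns are hypothetical):

SELECT CUSTOMER.CUSTOMER_NAME
FROM CUSTOMER
WHERE CUSTOMER.REVENUE > (SELECT AVG(CUSTOMER.REVENUE) FROM CUSTOMER)

The inner query computes the average at run time, so the filter stays correct over time; a hard-coded constant would not.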


Free Sample Questions for SAP Business Objects Web Intelligence (BOWI-30) Exam

Which three options appear in the Scope of Analysis pane in the Query panel? (Choose three.)

A. None

B. Default

C. One Level

D. Custom

Answer: ACD


Free Sample Questions for SAP Business Objects Web Intelligence (BOWI-30) Exam

Which two statements are true of an ambiguous query? (Choose two.)

A. It can be resolved with a loop.

B. It can be resolved with a context.

C. It contains one or more objects that can potentially return two different types of information.

D. It contains all possible combinations of rows from the tables inferred by the objects.

Answer: BC


Number of instances to run

To benefit from maximum scalability, set the number of instances of this service within a machine (or virtual machine) to at least the number of CPU cores (or virtual cores) available to it. If the machine has 4 cores, then you should have at least 4 instances of the Web Intelligence Processing Server running. This is a golden rule not to break!

One ‘Web Intelligence Processing Server per CPU core per machine’ also helps with load balancing. Let’s say you have 2 machines, one with 4 cores and another with 2 cores. You’ll want the 4-core machine to receive twice as much Web Intelligence ‘work’ as the 2-core machine. To do this, you just allocate twice as many Web Intelligence Processing Servers to the 4-core machine as to the 2-core machine. And if you follow the one ‘Web Intelligence Processing Server per CPU core per machine’ rule, that’s exactly what you end up with! Perfect.

Maximum Connections

Sometimes, however, things are not so straightforward and you want a bit more fine-tuning without breaking the golden rule of one Web Intelligence Processing Server per CPU core per machine. For example, what do you do when the ‘power’ of your machines isn’t determined by the number of CPU cores but by something else, like faster CPUs? This is when the Web Intelligence Processing Server property ‘Maximum Connections’ should be used to balance the load. Let’s say you have 2 machines, both with 4 cores, but one runs at 2.0 GHz and the other at 2.5 GHz (25% faster). You want the 2.5 GHz machine to receive 25% more ‘work’ than the 2.0 GHz machine. Simply increase ‘Maximum Connections’ for each Web Intelligence Processing Server on the 2.5 GHz machine by the required ratio, and the load balancing will be weighted towards the 2.5 GHz machine. In this example we would increase the parameter from 50 to 63 (50 + 25% = 62½, rounded up to 63). Don’t be tempted to reduce this figure: if there aren’t enough connections, users will get “server busy” messages, and you don’t want to add unnecessary software limits on your hardware resources. There is a school of thought that you should drop this parameter to ‘protect’ the server. I don’t follow that school; I would prefer to let the server do as much as it can, potentially giving a slower response time when busy, rather than generate an error when it’s not busy at all.

Caching

There are many wonderful ways to configure the cache to suit your environment. Let’s look at the main cache properties on the Web Intelligence Processing Server one by one:

The property “Enable Document Cache” is correctly enabled by default; make sure it stays enabled.

The document cache can deliver a very significant performance improvement as experienced by end users. When enabled, this feature allows a Web Intelligence job to ‘prime the cache’: when a scheduled job runs, the presentation views (Microsoft Excel®, Adobe® Acrobat® and HTML, though the last is actually XML) are also generated as part of the job. The downside is that the job takes longer to complete. The cache is generated after the document has been refreshed, in the chosen formats. When scheduling a Web Intelligence document, it is important that the cache option is selected to benefit from this feature.

The property “Enable Realtime Cache” is correctly enabled by default; make sure it stays enabled.

Realtime cache also brings significant performance improvements when enabled. It allows the cache to be loaded dynamically: when something is not found in the cache, it is calculated and then saved into the cache for later re-use. This applies to scheduled as well as non-scheduled documents.

The “Maximum Document Cache Size” default setting is 1048576 KB, or 1 GB.

This is typically quite small for most organisations. With a cache size of just 1 GB, it will not take long for a document in cache to be pushed out of cache. This means users could experience good performance one minute and slower performance the next. I recommend increasing this size significantly; a cache size of 20 GB or even 50 GB is not uncommon. Confusingly, the ‘Maximum Document Cache Size’ isn’t really a maximum: the product doesn’t check the size of the cache every time it writes a file to it. The parameter is used only as a target value when the clean-up routine runs. Estimate the amount of cache you need by clearing out the cache and seeing how much disk space is consumed once a number of documents have ‘primed’ the cache (see above about ‘priming’ the cache). Carefully monitor the disk space after making such alterations! You don’t want to run out of disk space.
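For example, since the value is entered in KB, a 20 GB cache would be set as 20971520 (20 × 1024 × 1024 KB).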

The “Document Cache Duration” default setting is 4320 minutes, or 3 days.

This means the cache for a given document is valid for 3 days, which is fine if your documents are scheduled at least once every 3 days. But if you’ve got weekly or even monthly documents, you should consider increasing this duration to a week or a month; otherwise you won’t be reusing perfectly valid cache content. You want to avoid as much unnecessary regeneration of cache as possible. If most of your documents are scheduled weekly, then a setting of about 8 days (11520 minutes) would be appropriate. The server is never going to provide a user with cache content that is invalid or out of date.

The “Document Cache Clean-up Interval (minutes)” default setting is 120 minutes, or 2 hours. (The properties page incorrectly says “(seconds)”.)

This is how often the process scans its cache and reduces the size of that cache if it’s too large. It deletes the oldest files first, reducing the total amount of space to ‘Maximum Document Cache Reduction Space’ % of ‘Maximum Document Cache Size’ (70% of 1 GB with the default settings).

So the cache is reduced to a percentage of the maximum. Thus, when you are determining what the Maximum Document Cache Size should be, you need to add on a fair bit, as only 70% of it will be kept after a clean-up.
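For example, if you want roughly 20 GB of cache to survive each clean-up, set the maximum to about 20 ÷ 0.70 ≈ 28.6 GB (roughly 30000000 KB) with the default 70% reduction setting.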

There’s no harm in a short setting (2 hours), but you could probably increase this dramatically if you have plenty of disk space and aren’t worried about the cache growing well over its limit.

Sharing the Web Intelligence cache across multiple machines – not set by default.

There’s not much point in each Web Intelligence Processing Server having its own cache when they can share one. Sharing a cache has huge benefits: it means one Web Intelligence Processing Server can re-use the cache generated by another. And if you’re ‘priming’ the cache, sharing is even more important to gain from the extra effort your jobs are performing.

To share the cache, the Web Intelligence property ‘Output Cache Directory’ should be set to a shared cache folder, typically hosted on a network storage device accessed via a UNC or NFS path. You’ve probably already got a network file share for the File Repository Server (FRS) file store; why not just add another folder to that share for the Web Intelligence document cache and set the ‘Output Cache Directory’ property to it? (If running on Microsoft Windows, you’ll need to make sure the Server Intelligence Agent is running as a user who has network access.) Sharing the cache across machines can provide massive performance improvements, but watch out for network or disk bottlenecks. If you really want to get the most throughput, which might be important if you’re ‘bursting’ lots of documents via a publication, use a separate network share and a separate disk system for each of the Input FRS, the Output FRS, and the Web Intelligence document cache.
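As a purely hypothetical example: if your FRS file store already lives on \\fileserver\BOBJ, you could create \\fileserver\BOBJ\WebiOutputCache and point the ‘Output Cache Directory’ of every Web Intelligence Processing Server at that same UNC path.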

The “Universe Cache Size” default value is 20 universes.

Universes that are pushed out of cache require extra processing, which is unnecessary. Set this to at least the number of universes you have, or even a few more! It costs only a tiny bit of extra disk space, and setting it to a large value won’t affect the amount of RAM consumed.


Report Design: Guidelines & Best Practices

Introduction – Gives the basic guidelines/practices that could be followed in any Report Design.

General

  • Give meaningful names to the report tabs
  • For complex reports, keep an overview report tab explaining the report
  • Use the report properties to give more information about the report

Data Providers

  • Each data provider should be given a name that reflects the usage of the data it is going to fetch.
  • Select objects in such a fashion that the resulting SQL gives a hierarchical order of tables. This helps achieve SQL optimisation.
  • Avoid bringing lots of data into the report, which will unnecessarily slow down report performance.

Report Variables

  • Follow the naming convention of prefixing each report-level variable with “var_”. This helps distinguish report variables from universe objects.
  • Each variable that carries a calculation involving division should test the denominator: IF <Denominator> <> 0 THEN <Object>. This avoids displaying #DIV/0 errors in the report (see the sketch after this list).
  • Avoid deep nested calculations, which will slow down report performance.
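A minimal sketch of such a guarded variable in Web Intelligence formula syntax, assuming hypothetical [Revenue] and [Quantity Sold] objects (a variable named var_AvgPrice with this formula):

=If([Quantity Sold] <> 0 ; [Revenue]/[Quantity Sold] ; 0)

The third argument decides what is shown instead of #DIV/0 (zero here, though a blank may suit some reports better).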

Report Structure

  • Make use of report templates when most of the reports have similar structures. This makes the work go faster and keeps reports consistent.

Report Formats

  • All reports should have the page layout set in a printable manner (Landscape/Portrait; “fit to 1 page wide” and/or “1 page tall” are the available options).
  • All reports should have page numbers in the footer.
  • All reports should have a Last Refreshed timestamp in the header or footer.
  • All of the above can be standardized by using templates.

Report CELL Formats

  • All numeric cells should be given a number format appropriate to the language, e.g. #.##0,00 for German and #,##0.00 for English.
  • Number cells should be right-aligned, while text cells should be left-aligned.
  • Cells showing percentages should carry the % text (either in the column header or in each cell).
  • Indenting should ALWAYS be done using the indenting tool and NOT by typing spaces (” “).

With lots of reports to be made and universes to be designed, and QA analysis processes to be followed in parallel, there are little things to remember that can help the design; in the long term they ease maintenance, improve readability, and help avoid rework for simple mistakes.

This document is a compilation of learnings that can be used as guidelines and best practices for report and universe design.

Universe Design: Guidelines & Best Practices

Introduction – Gives the basic guidelines/practices that could be followed in any Universe Design

Connection

  • When using a repository, always define a SECURED connection to the database.
  • Use the universe properties panel to define the universe use and version (last update).
  • Define a connection name that makes the database easy to identify.
  • Parameters – SQL tab – “Multiple SQL statements for each measure” should be unchecked.
  • Parameters – SQL tab – “Cartesian Products – Prevent” should be checked.

Class

  • Define universe classes/subclasses as per the business logic and naming convention.
  • Involve the business users in defining the class hierarchy and business names for the classes and objects.
  • AVOID auto class generation in Designer.
  • Give a description for the use of each class/subclass.
  • Avoid deep levels of subclasses, as they reduce navigability and usability.

Objects

  • Objects to be used in calculations HAVE to be measure objects.
  • Objects to be used in analysis HAVE to be dimension objects.
  • Give a description for the use of each object.
  • Include an example in the description for objects used in LOVs.
  • Do not set the LOV option for every dimension. Use it only for required objects, especially those to be used in report prompts.

  • Keep the “Automatic Refresh before Use” option checked for LOV objects.
  • If a LOV is editable by the user, provide a meaningful list name under the object properties.
  • All measure objects should use aggregate functions. This ensures that the aggregation happens at the database for the selected dimensions (see the sketch after this list).
  • Avoid duplicate object names (in different classes).
  • Formats for objects of type numeric, currency and date should be defined.
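As a minimal illustration of the point about aggregates, the Select of such a measure object might look like this (the fact table and column are hypothetical):

SUM(SHOP_FACTS.AMOUNT_SOLD)

Because the aggregate sits in the object definition, querying the measure together with a dimension produces a SUM … GROUP BY at the database, instead of fetching detail rows and summing them in the report.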

Predefined Conditions

  • Give a description for the use of each predefined condition.
  • If a condition results in a prompt, make sure the associated dimension object has a LOV.
  • Time-related predefined conditions such as Current Year, Current Month, Previous Year, Last (x) Weeks, etc. can be defined to make it easy to schedule daily/weekly/period-based reports (see the sketch after this list).
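A sketch of the Where clause behind a “Current Year” condition, assuming a hypothetical SHOP_FACTS.INVOICE_DATE column and Oracle-style date functions (adjust for your database):

TO_CHAR(SHOP_FACTS.INVOICE_DATE, 'YYYY') = TO_CHAR(SYSDATE, 'YYYY')

A report filtered with such a condition can be scheduled indefinitely; nobody has to edit a year value each January.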

Tables

  • Alias tables should be named according to their functional use.
  • Arrange the tables in the structure pane as per the business/functional logic. This helps other universe users’ understanding.
  • It is always best to bring in the tables without joins and build the joins manually. It helps the designer understand the intricacies of the model.

Joins & Context

  • AVOID keeping hanging (not joined) tables in the structure.
  • AVOID having joins that are not part of any context.
  • Give proper functional names to the contexts for easy identification.
  • AVOID having 1:1 joins.

Import/Export

  • Make sure of the path for import, which is usually in the Business Objects universe folder.
  • LOCK the universe if the administrator/designer does not want any user to import/export it.
  • DO an “Integrity Check” before exporting the universe.
  • It is good to have a correct folder structure, so that you can have a secured environment.
  • Once exported, never delete any objects from the universe without doing an impact analysis on the object usage.
